Index: head/share/doc/usd/07.mail/mail6.nr
===================================================================
--- head/share/doc/usd/07.mail/mail6.nr (revision 300049)
+++ head/share/doc/usd/07.mail/mail6.nr (revision 300050)
@@ -1,121 +1,121 @@
.\" Copyright (c) 1980, 1993
.\" The Regents of the University of California. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\" may be used to endorse or promote products derived from this software
.\" without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" @(#)mail6.nr 8.1 (Berkeley) 6/8/93
.\"
.bp
.sh 1 "Command line options"
.pp
This section describes command line options for
.i Mail
and what they are used for.
.ip \-N
Suppress the initial printing of headers.
.ip \-d
Turn on debugging information. Not of general interest.
.ip "\-f file\ \ "
Show the messages in
.i file
instead of your system mailbox. If
.i file
is omitted,
.i Mail
reads
.i mbox
in your home directory.
.ip \-i
Ignore tty interrupt signals. Useful on noisy phone lines, which
generate spurious RUBOUT or DELETE characters. It's usually
more effective to change your interrupt character to control\-c,
for which see the
.i stty
shell command.
.ip \-n
Inhibit reading of /etc/mail.rc. Not generally useful, since
/etc/mail.rc is usually empty.
.ip "\-s string"
Used for sending mail.
.i String
is used as the subject of the message being composed. If
.i string
contains blanks, you must surround it with quote marks.
.ip "\-u name"
Read
.i name's
mail instead of your own. Unwitting others often neglect to protect
their mailboxes, but discretion is advised. Essentially,
.b "\-u user"
is a shorthand way of doing
.b "\-f /var/mail/user".
.ip "\-v"
Use the
.b \-v
flag when invoking sendmail. This feature may also be enabled
-by setting the the option "verbose".
+by setting the option "verbose".
.pp
The following command line flags are also recognized, but are
intended for use by programs invoking
.i Mail
and not for people.
.ip "\-T file"
Arrange to print on
.i file
the contents of the
.i article-id
fields of all messages that were either read or deleted.
.b \-T
is for the
.i readnews
program and should NOT be used for reading your mail.
.ip "\-h number"
Pass on hop count information.
.i Mail
will take the number, increment it, and pass it with
.b \-h
to the mail delivery system.
.b \-h
only has effect when sending mail and is used for network mail
forwarding.
.ip "\-r name"
Used for network mail forwarding: interpret
.i name
as the sender of the message. The
.i name
and
.b \-r
are simply sent along to the mail delivery system. Also,
.i Mail
will wait for the message to be sent and return the exit status.
Also restricts formatting of the message.
.pp
Note that
.b \-h
and
.b \-r ,
which are for network mail forwarding, are not used in practice
since mail forwarding is now handled separately. They may
disappear soon.
Index: head/share/man/man9/BUS_GET_CPUS.9
===================================================================
--- head/share/man/man9/BUS_GET_CPUS.9 (revision 300049)
+++ head/share/man/man9/BUS_GET_CPUS.9 (revision 300050)
@@ -1,101 +1,101 @@
.\" -*- nroff -*-
.\"
.\" Copyright (c) 2016 John H. Baldwin <jhb@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd March 1, 2016
.Dt BUS_GET_CPUS 9
.Os
.Sh NAME
.Nm BUS_GET_CPUS ,
.Nm bus_get_cpus
.Nd "request a set of device-specific CPUs"
.Sh SYNOPSIS
.In sys/param.h
.In sys/bus.h
.In sys/cpuset.h
.Ft int
.Fo BUS_GET_CPUS
.Fa "device_t dev" "device_t child" "enum cpu_sets op" "size_t setsize"
.Fa "cpuset_t *cpuset"
.Fc
.Ft int
.Fo bus_get_cpus
.Fa "device_t dev" "enum cpu_sets op" "size_t setsize" "cpuset_t *cpuset"
.Fc
.Sh DESCRIPTION
The
.Fn BUS_GET_CPUS
method queries the parent bus device for a set of device-specific CPUs.
The
.Fa op
argument specifies which set of CPUs to retrieve.
If successful,
the requested set of CPUs are returned in
.Fa cpuset .
The
.Fa setsize
argument specifies the size in bytes of the set passed in
.Fa cpuset .
.Pp
.Fn BUS_GET_CPUS
-supports querying different types of CPU sets via the the
+supports querying different types of CPU sets via the
.Fa op
argument.
Not all set types are supported for every device.
If a set type is not supported,
.Fn BUS_GET_CPUS
fails with
.Er EINVAL .
These set types are supported:
.Bl -tag -width ".Dv LOCAL_CPUS"
.It Dv LOCAL_CPUS
The set of CPUs that are local to the device.
If a device is closer to a specific memory domain in a non-uniform memory
architecture system
.Pq NUMA ,
this will return the set of CPUs in that memory domain.
.It Dv INTR_CPUS
The preferred set of CPUs that this device should use for device interrupts.
This set type must be supported by all bus drivers.
.El
.Pp
The
.Fn bus_get_cpus
function is a simple wrapper around
.Fn BUS_GET_CPUS .
.Sh RETURN VALUES
Zero is returned on success, otherwise an appropriate error is returned.
.Sh SEE ALSO
.Xr cpuset 2 ,
.Xr BUS_BIND_INTR 9 ,
.Xr device 9
.Sh HISTORY
The
.Fn BUS_GET_CPUS
method and
.Fn bus_get_cpus
function first appeared in
.Fx 11.0 .
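A minimal usage sketch (not part of the manual page; the driver name and helper are hypothetical):

#include <sys/param.h>
#include <sys/bus.h>
#include <sys/cpuset.h>

/* Hypothetical helper: fetch the preferred interrupt CPUs for 'dev'. */
static int
mydrv_intr_cpus(device_t dev, cpuset_t *cpus)
{
	/* Returns EINVAL if the parent bus does not support INTR_CPUS. */
	return (bus_get_cpus(dev, INTR_CPUS, sizeof(*cpus), cpus));
}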
Index: head/sys/amd64/vmm/io/vhpet.c
===================================================================
--- head/sys/amd64/vmm/io/vhpet.c (revision 300049)
+++ head/sys/amd64/vmm/io/vhpet.c (revision 300050)
@@ -1,759 +1,759 @@
/*-
* Copyright (c) 2013 Tycho Nightingale <tycho.nightingale@pluribusnetworks.com>
* Copyright (c) 2013 Neel Natu <neel@freebsd.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY NETAPP, INC ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL NETAPP, INC OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/systm.h>
#include <dev/acpica/acpi_hpet.h>
#include <machine/vmm.h>
#include <machine/vmm_dev.h>
#include "vmm_lapic.h"
#include "vatpic.h"
#include "vioapic.h"
#include "vhpet.h"
#include "vmm_ktr.h"
static MALLOC_DEFINE(M_VHPET, "vhpet", "bhyve virtual hpet");
#define HPET_FREQ 10000000 /* 10.0 MHz */
#define FS_PER_S 1000000000000000ul
/* Timer N Configuration and Capabilities Register */
#define HPET_TCAP_RO_MASK (HPET_TCAP_INT_ROUTE | \
HPET_TCAP_FSB_INT_DEL | \
HPET_TCAP_SIZE | \
HPET_TCAP_PER_INT)
/*
* HPET requires at least 3 timers and up to 32 timers per block.
*/
#define VHPET_NUM_TIMERS 8
CTASSERT(VHPET_NUM_TIMERS >= 3 && VHPET_NUM_TIMERS <= 32);
struct vhpet_callout_arg {
struct vhpet *vhpet;
int timer_num;
};
struct vhpet {
struct vm *vm;
struct mtx mtx;
sbintime_t freq_sbt;
uint64_t config; /* Configuration */
uint64_t isr; /* Interrupt Status */
uint32_t countbase; /* HPET counter base value */
sbintime_t countbase_sbt; /* uptime corresponding to base value */
struct {
uint64_t cap_config; /* Configuration */
uint64_t msireg; /* FSB interrupt routing */
uint32_t compval; /* Comparator */
uint32_t comprate;
struct callout callout;
sbintime_t callout_sbt; /* time when counter==compval */
struct vhpet_callout_arg arg;
} timer[VHPET_NUM_TIMERS];
};
#define VHPET_LOCK(vhp) mtx_lock(&((vhp)->mtx))
#define VHPET_UNLOCK(vhp) mtx_unlock(&((vhp)->mtx))
static void vhpet_start_timer(struct vhpet *vhpet, int n, uint32_t counter,
sbintime_t now);
static uint64_t
vhpet_capabilities(void)
{
uint64_t cap = 0;
cap |= 0x8086 << 16; /* vendor id */
cap |= (VHPET_NUM_TIMERS - 1) << 8; /* number of timers */
cap |= 1; /* revision */
cap &= ~HPET_CAP_COUNT_SIZE; /* 32-bit timer */
cap &= 0xffffffff;
cap |= (FS_PER_S / HPET_FREQ) << 32; /* tick period in fs */
return (cap);
}
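/*
 * Worked decode (illustrative, not in the original): with
 * VHPET_NUM_TIMERS == 8 and HPET_FREQ == 10 MHz the function returns
 *
 *   0x8086 << 16 | 7 << 8 | 1            = 0x80860701 (low word), plus
 *   (10^15 / 10^7) << 32 = 10^8 << 32    in the high word,
 *
 * i.e. 0x05f5e10080860701: vendor 0x8086, 8 timers, revision 1, and a
 * tick period of 10^8 femtoseconds (100 ns).
 */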
static __inline bool
vhpet_counter_enabled(struct vhpet *vhpet)
{
return ((vhpet->config & HPET_CNF_ENABLE) ? true : false);
}
static __inline bool
vhpet_timer_msi_enabled(struct vhpet *vhpet, int n)
{
const uint64_t msi_enable = HPET_TCAP_FSB_INT_DEL | HPET_TCNF_FSB_EN;
if ((vhpet->timer[n].cap_config & msi_enable) == msi_enable)
return (true);
else
return (false);
}
static __inline int
vhpet_timer_ioapic_pin(struct vhpet *vhpet, int n)
{
/*
* If the timer is configured to use MSI then treat it as if the
* timer is not connected to the ioapic.
*/
if (vhpet_timer_msi_enabled(vhpet, n))
return (0);
return ((vhpet->timer[n].cap_config & HPET_TCNF_INT_ROUTE) >> 9);
}
static uint32_t
vhpet_counter(struct vhpet *vhpet, sbintime_t *nowptr)
{
uint32_t val;
sbintime_t now, delta;
val = vhpet->countbase;
if (vhpet_counter_enabled(vhpet)) {
now = sbinuptime();
delta = now - vhpet->countbase_sbt;
KASSERT(delta >= 0, ("vhpet_counter: uptime went backwards: "
"%#lx to %#lx", vhpet->countbase_sbt, now));
val += delta / vhpet->freq_sbt;
if (nowptr != NULL)
*nowptr = now;
} else {
/*
* The sbinuptime corresponding to the 'countbase' is
* meaningless when the counter is disabled. Make sure
- * that the the caller doesn't want to use it.
+ * that the caller doesn't want to use it.
*/
KASSERT(nowptr == NULL, ("vhpet_counter: nowptr must be NULL"));
}
return (val);
}
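/*
 * Illustration (added): freq_sbt is the sbintime_t length of one HPET
 * tick, so delta / freq_sbt is the number of ticks elapsed since
 * countbase_sbt. With HPET_FREQ == 10 MHz, one second of uptime
 * (delta == SBT_1S) advances the returned counter by roughly 10^7.
 */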
static void
vhpet_timer_clear_isr(struct vhpet *vhpet, int n)
{
int pin;
if (vhpet->isr & (1 << n)) {
pin = vhpet_timer_ioapic_pin(vhpet, n);
KASSERT(pin != 0, ("vhpet timer %d irq incorrectly routed", n));
vioapic_deassert_irq(vhpet->vm, pin);
vhpet->isr &= ~(1 << n);
}
}
static __inline bool
vhpet_periodic_timer(struct vhpet *vhpet, int n)
{
return ((vhpet->timer[n].cap_config & HPET_TCNF_TYPE) != 0);
}
static __inline bool
vhpet_timer_interrupt_enabled(struct vhpet *vhpet, int n)
{
return ((vhpet->timer[n].cap_config & HPET_TCNF_INT_ENB) != 0);
}
static __inline bool
vhpet_timer_edge_trig(struct vhpet *vhpet, int n)
{
KASSERT(!vhpet_timer_msi_enabled(vhpet, n), ("vhpet_timer_edge_trig: "
"timer %d is using MSI", n));
if ((vhpet->timer[n].cap_config & HPET_TCNF_INT_TYPE) == 0)
return (true);
else
return (false);
}
static void
vhpet_timer_interrupt(struct vhpet *vhpet, int n)
{
int pin;
/* If interrupts are not enabled for this timer then just return. */
if (!vhpet_timer_interrupt_enabled(vhpet, n))
return;
/*
* If a level triggered interrupt is already asserted then just return.
*/
if ((vhpet->isr & (1 << n)) != 0) {
VM_CTR1(vhpet->vm, "hpet t%d intr is already asserted", n);
return;
}
if (vhpet_timer_msi_enabled(vhpet, n)) {
lapic_intr_msi(vhpet->vm, vhpet->timer[n].msireg >> 32,
vhpet->timer[n].msireg & 0xffffffff);
return;
}
pin = vhpet_timer_ioapic_pin(vhpet, n);
if (pin == 0) {
VM_CTR1(vhpet->vm, "hpet t%d intr is not routed to ioapic", n);
return;
}
if (vhpet_timer_edge_trig(vhpet, n)) {
vioapic_pulse_irq(vhpet->vm, pin);
} else {
vhpet->isr |= 1 << n;
vioapic_assert_irq(vhpet->vm, pin);
}
}
static void
vhpet_adjust_compval(struct vhpet *vhpet, int n, uint32_t counter)
{
uint32_t compval, comprate, compnext;
KASSERT(vhpet->timer[n].comprate != 0, ("hpet t%d is not periodic", n));
compval = vhpet->timer[n].compval;
comprate = vhpet->timer[n].comprate;
/*
* Calculate the comparator value to be used for the next periodic
* interrupt.
*
* This function is commonly called from the callout handler.
* In this scenario the 'counter' is ahead of 'compval'. To find
* the next value to program into the accumulator we divide the
* number space between 'compval' and 'counter' into 'comprate'
* sized units. The 'compval' is rounded up such that is "ahead"
* of 'counter'.
*/
compnext = compval + ((counter - compval) / comprate + 1) * comprate;
vhpet->timer[n].compval = compnext;
}
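/*
 * Worked example (illustrative): with compval == 100, comprate == 50 and
 * counter == 260, the quotient (260 - 100) / 50 == 3, so
 * compnext == 100 + (3 + 1) * 50 == 300: the first comprate-aligned
 * offset from the old compval that is strictly ahead of the counter.
 */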
static void
vhpet_handler(void *a)
{
int n;
uint32_t counter;
sbintime_t now;
struct vhpet *vhpet;
struct callout *callout;
struct vhpet_callout_arg *arg;
arg = a;
vhpet = arg->vhpet;
n = arg->timer_num;
callout = &vhpet->timer[n].callout;
VM_CTR1(vhpet->vm, "hpet t%d fired", n);
VHPET_LOCK(vhpet);
if (callout_pending(callout)) /* callout was reset */
goto done;
if (!callout_active(callout)) /* callout was stopped */
goto done;
callout_deactivate(callout);
if (!vhpet_counter_enabled(vhpet))
panic("vhpet(%p) callout with counter disabled", vhpet);
counter = vhpet_counter(vhpet, &now);
vhpet_start_timer(vhpet, n, counter, now);
vhpet_timer_interrupt(vhpet, n);
done:
VHPET_UNLOCK(vhpet);
return;
}
static void
vhpet_stop_timer(struct vhpet *vhpet, int n, sbintime_t now)
{
VM_CTR1(vhpet->vm, "hpet t%d stopped", n);
callout_stop(&vhpet->timer[n].callout);
/*
* If the callout was scheduled to expire in the past but hasn't
* had a chance to execute yet then trigger the timer interrupt
* here. Failing to do so will result in a missed timer interrupt
* in the guest. This is especially bad in one-shot mode because
* the next interrupt has to wait for the counter to wrap around.
*/
if (vhpet->timer[n].callout_sbt < now) {
VM_CTR1(vhpet->vm, "hpet t%d interrupt triggered after "
"stopping timer", n);
vhpet_timer_interrupt(vhpet, n);
}
}
static void
vhpet_start_timer(struct vhpet *vhpet, int n, uint32_t counter, sbintime_t now)
{
sbintime_t delta, precision;
if (vhpet->timer[n].comprate != 0)
vhpet_adjust_compval(vhpet, n, counter);
else {
/*
* In one-shot mode it is the guest's responsibility to make
* sure that the comparator value is not in the "past". The
* hardware doesn't have any belt-and-suspenders to deal with
* this so we don't either.
*/
}
delta = (vhpet->timer[n].compval - counter) * vhpet->freq_sbt;
precision = delta >> tc_precexp;
vhpet->timer[n].callout_sbt = now + delta;
callout_reset_sbt(&vhpet->timer[n].callout, vhpet->timer[n].callout_sbt,
precision, vhpet_handler, &vhpet->timer[n].arg, C_ABSOLUTE);
}
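/*
 * Note (added): compval and counter are uint32_t, so the subtraction in
 * 'delta' wraps modulo 2^32. E.g. counter == 0xfffffff0 and
 * compval == 0x10 yields 0x20 == 32 ticks, correctly modeling the 32-bit
 * main counter rolling over before the comparator fires.
 */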
static void
vhpet_start_counting(struct vhpet *vhpet)
{
int i;
vhpet->countbase_sbt = sbinuptime();
for (i = 0; i < VHPET_NUM_TIMERS; i++) {
/*
* Restart the timers based on the value of the main counter
* when it stopped counting.
*/
vhpet_start_timer(vhpet, i, vhpet->countbase,
vhpet->countbase_sbt);
}
}
static void
vhpet_stop_counting(struct vhpet *vhpet, uint32_t counter, sbintime_t now)
{
int i;
vhpet->countbase = counter;
for (i = 0; i < VHPET_NUM_TIMERS; i++)
vhpet_stop_timer(vhpet, i, now);
}
static __inline void
update_register(uint64_t *regptr, uint64_t data, uint64_t mask)
{
*regptr &= ~mask;
*regptr |= (data & mask);
}
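/*
 * Example (illustrative): with *regptr == 0xffff0000, data == 0xabcd and
 * mask == 0x0000ffff, the masked bits are cleared and replaced, leaving
 * *regptr == 0xffffabcd; bits outside the mask are untouched.
 */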
static void
vhpet_timer_update_config(struct vhpet *vhpet, int n, uint64_t data,
uint64_t mask)
{
bool clear_isr;
int old_pin, new_pin;
uint32_t allowed_irqs;
uint64_t oldval, newval;
if (vhpet_timer_msi_enabled(vhpet, n) ||
vhpet_timer_edge_trig(vhpet, n)) {
if (vhpet->isr & (1 << n))
panic("vhpet timer %d isr should not be asserted", n);
}
old_pin = vhpet_timer_ioapic_pin(vhpet, n);
oldval = vhpet->timer[n].cap_config;
newval = oldval;
update_register(&newval, data, mask);
newval &= ~(HPET_TCAP_RO_MASK | HPET_TCNF_32MODE);
newval |= oldval & HPET_TCAP_RO_MASK;
if (newval == oldval)
return;
vhpet->timer[n].cap_config = newval;
VM_CTR2(vhpet->vm, "hpet t%d cap_config set to 0x%016x", n, newval);
/*
* Validate the interrupt routing in the HPET_TCNF_INT_ROUTE field.
* If it does not match the bits set in HPET_TCAP_INT_ROUTE then set
* it to the default value of 0.
*/
allowed_irqs = vhpet->timer[n].cap_config >> 32;
new_pin = vhpet_timer_ioapic_pin(vhpet, n);
if (new_pin != 0 && (allowed_irqs & (1 << new_pin)) == 0) {
VM_CTR3(vhpet->vm, "hpet t%d configured invalid irq %d, "
"allowed_irqs 0x%08x", n, new_pin, allowed_irqs);
new_pin = 0;
vhpet->timer[n].cap_config &= ~HPET_TCNF_INT_ROUTE;
}
if (!vhpet_periodic_timer(vhpet, n))
vhpet->timer[n].comprate = 0;
/*
* If the timer's ISR bit is set then clear it in the following cases:
* - interrupt is disabled
* - interrupt type is changed from level to edge or fsb.
* - interrupt routing is changed
*
* This is to ensure that this timer's level triggered interrupt does
* not remain asserted forever.
*/
if (vhpet->isr & (1 << n)) {
KASSERT(old_pin != 0, ("timer %d isr asserted to ioapic pin %d",
n, old_pin));
if (!vhpet_timer_interrupt_enabled(vhpet, n))
clear_isr = true;
else if (vhpet_timer_msi_enabled(vhpet, n))
clear_isr = true;
else if (vhpet_timer_edge_trig(vhpet, n))
clear_isr = true;
else if (vhpet_timer_ioapic_pin(vhpet, n) != old_pin)
clear_isr = true;
else
clear_isr = false;
if (clear_isr) {
VM_CTR1(vhpet->vm, "hpet t%d isr cleared due to "
"configuration change", n);
vioapic_deassert_irq(vhpet->vm, old_pin);
vhpet->isr &= ~(1 << n);
}
}
}
int
vhpet_mmio_write(void *vm, int vcpuid, uint64_t gpa, uint64_t val, int size,
void *arg)
{
struct vhpet *vhpet;
uint64_t data, mask, oldval, val64;
uint32_t isr_clear_mask, old_compval, old_comprate, counter;
sbintime_t now, *nowptr;
int i, offset;
vhpet = vm_hpet(vm);
offset = gpa - VHPET_BASE;
VHPET_LOCK(vhpet);
/* Accesses to the HPET should be 4 or 8 bytes wide */
switch (size) {
case 8:
mask = 0xffffffffffffffff;
data = val;
break;
case 4:
mask = 0xffffffff;
data = val;
if ((offset & 0x4) != 0) {
mask <<= 32;
data <<= 32;
}
break;
default:
VM_CTR2(vhpet->vm, "hpet invalid mmio write: "
"offset 0x%08x, size %d", offset, size);
goto done;
}
/* Access to the HPET should be naturally aligned to its width */
if (offset & (size - 1)) {
VM_CTR2(vhpet->vm, "hpet invalid mmio write: "
"offset 0x%08x, size %d", offset, size);
goto done;
}
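/*
 * Example (illustrative): a 4-byte write of 0x12345678 to the high
 * half of a register (offset & 0x4 != 0) arrives here with
 * data == 0x12345678 << 32 and mask == 0xffffffff << 32, so the
 * update_register() calls below touch only bits 63:32.
 */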
if (offset == HPET_CONFIG || offset == HPET_CONFIG + 4) {
/*
* Get the most recent value of the counter before updating
* the 'config' register. If the HPET is going to be disabled
* then we need to update 'countbase' with the value right
* before it is disabled.
*/
nowptr = vhpet_counter_enabled(vhpet) ? &now : NULL;
counter = vhpet_counter(vhpet, nowptr);
oldval = vhpet->config;
update_register(&vhpet->config, data, mask);
/*
* LegacyReplacement Routing is not supported so clear the
* bit explicitly.
*/
vhpet->config &= ~HPET_CNF_LEG_RT;
if ((oldval ^ vhpet->config) & HPET_CNF_ENABLE) {
if (vhpet_counter_enabled(vhpet)) {
vhpet_start_counting(vhpet);
VM_CTR0(vhpet->vm, "hpet enabled");
} else {
vhpet_stop_counting(vhpet, counter, now);
VM_CTR0(vhpet->vm, "hpet disabled");
}
}
goto done;
}
if (offset == HPET_ISR || offset == HPET_ISR + 4) {
isr_clear_mask = vhpet->isr & data;
for (i = 0; i < VHPET_NUM_TIMERS; i++) {
if ((isr_clear_mask & (1 << i)) != 0) {
VM_CTR1(vhpet->vm, "hpet t%d isr cleared", i);
vhpet_timer_clear_isr(vhpet, i);
}
}
goto done;
}
if (offset == HPET_MAIN_COUNTER || offset == HPET_MAIN_COUNTER + 4) {
/* Zero-extend the counter to 64-bits before updating it */
val64 = vhpet_counter(vhpet, NULL);
update_register(&val64, data, mask);
vhpet->countbase = val64;
if (vhpet_counter_enabled(vhpet))
vhpet_start_counting(vhpet);
goto done;
}
for (i = 0; i < VHPET_NUM_TIMERS; i++) {
if (offset == HPET_TIMER_CAP_CNF(i) ||
offset == HPET_TIMER_CAP_CNF(i) + 4) {
vhpet_timer_update_config(vhpet, i, data, mask);
break;
}
if (offset == HPET_TIMER_COMPARATOR(i) ||
offset == HPET_TIMER_COMPARATOR(i) + 4) {
old_compval = vhpet->timer[i].compval;
old_comprate = vhpet->timer[i].comprate;
if (vhpet_periodic_timer(vhpet, i)) {
/*
* In periodic mode writes to the comparator
* change the 'compval' register only if the
* HPET_TCNF_VAL_SET bit is set in the config
* register.
*/
val64 = vhpet->timer[i].comprate;
update_register(&val64, data, mask);
vhpet->timer[i].comprate = val64;
if ((vhpet->timer[i].cap_config &
HPET_TCNF_VAL_SET) != 0) {
vhpet->timer[i].compval = val64;
}
} else {
KASSERT(vhpet->timer[i].comprate == 0,
("vhpet one-shot timer %d has invalid "
"rate %u", i, vhpet->timer[i].comprate));
val64 = vhpet->timer[i].compval;
update_register(&val64, data, mask);
vhpet->timer[i].compval = val64;
}
vhpet->timer[i].cap_config &= ~HPET_TCNF_VAL_SET;
if (vhpet->timer[i].compval != old_compval ||
vhpet->timer[i].comprate != old_comprate) {
if (vhpet_counter_enabled(vhpet)) {
counter = vhpet_counter(vhpet, &now);
vhpet_start_timer(vhpet, i, counter,
now);
}
}
break;
}
if (offset == HPET_TIMER_FSB_VAL(i) ||
offset == HPET_TIMER_FSB_ADDR(i)) {
update_register(&vhpet->timer[i].msireg, data, mask);
break;
}
}
done:
VHPET_UNLOCK(vhpet);
return (0);
}
int
vhpet_mmio_read(void *vm, int vcpuid, uint64_t gpa, uint64_t *rval, int size,
void *arg)
{
int i, offset;
struct vhpet *vhpet;
uint64_t data;
vhpet = vm_hpet(vm);
offset = gpa - VHPET_BASE;
VHPET_LOCK(vhpet);
/* Accesses to the HPET should be 4 or 8 bytes wide */
if (size != 4 && size != 8) {
VM_CTR2(vhpet->vm, "hpet invalid mmio read: "
"offset 0x%08x, size %d", offset, size);
data = 0;
goto done;
}
/* Access to the HPET should be naturally aligned to its width */
if (offset & (size - 1)) {
VM_CTR2(vhpet->vm, "hpet invalid mmio read: "
"offset 0x%08x, size %d", offset, size);
data = 0;
goto done;
}
if (offset == HPET_CAPABILITIES || offset == HPET_CAPABILITIES + 4) {
data = vhpet_capabilities();
goto done;
}
if (offset == HPET_CONFIG || offset == HPET_CONFIG + 4) {
data = vhpet->config;
goto done;
}
if (offset == HPET_ISR || offset == HPET_ISR + 4) {
data = vhpet->isr;
goto done;
}
if (offset == HPET_MAIN_COUNTER || offset == HPET_MAIN_COUNTER + 4) {
data = vhpet_counter(vhpet, NULL);
goto done;
}
for (i = 0; i < VHPET_NUM_TIMERS; i++) {
if (offset == HPET_TIMER_CAP_CNF(i) ||
offset == HPET_TIMER_CAP_CNF(i) + 4) {
data = vhpet->timer[i].cap_config;
break;
}
if (offset == HPET_TIMER_COMPARATOR(i) ||
offset == HPET_TIMER_COMPARATOR(i) + 4) {
data = vhpet->timer[i].compval;
break;
}
if (offset == HPET_TIMER_FSB_VAL(i) ||
offset == HPET_TIMER_FSB_ADDR(i)) {
data = vhpet->timer[i].msireg;
break;
}
}
if (i >= VHPET_NUM_TIMERS)
data = 0;
done:
VHPET_UNLOCK(vhpet);
if (size == 4) {
if (offset & 0x4)
data >>= 32;
}
*rval = data;
return (0);
}
struct vhpet *
vhpet_init(struct vm *vm)
{
int i, pincount;
struct vhpet *vhpet;
uint64_t allowed_irqs;
struct vhpet_callout_arg *arg;
struct bintime bt;
vhpet = malloc(sizeof(struct vhpet), M_VHPET, M_WAITOK | M_ZERO);
vhpet->vm = vm;
mtx_init(&vhpet->mtx, "vhpet lock", NULL, MTX_DEF);
FREQ2BT(HPET_FREQ, &bt);
vhpet->freq_sbt = bttosbt(bt);
pincount = vioapic_pincount(vm);
if (pincount >= 24)
allowed_irqs = 0x00f00000; /* irqs 20, 21, 22 and 23 */
else
allowed_irqs = 0;
/*
* Initialize HPET timer hardware state.
*/
for (i = 0; i < VHPET_NUM_TIMERS; i++) {
vhpet->timer[i].cap_config = allowed_irqs << 32;
vhpet->timer[i].cap_config |= HPET_TCAP_PER_INT;
vhpet->timer[i].cap_config |= HPET_TCAP_FSB_INT_DEL;
vhpet->timer[i].compval = 0xffffffff;
callout_init(&vhpet->timer[i].callout, 1);
arg = &vhpet->timer[i].arg;
arg->vhpet = vhpet;
arg->timer_num = i;
}
return (vhpet);
}
void
vhpet_cleanup(struct vhpet *vhpet)
{
int i;
for (i = 0; i < VHPET_NUM_TIMERS; i++)
callout_drain(&vhpet->timer[i].callout);
free(vhpet, M_VHPET);
}
int
vhpet_getcap(struct vm_hpet_cap *cap)
{
cap->capabilities = vhpet_capabilities();
return (0);
}
Index: head/sys/arm/allwinner/a10_ahci.c
===================================================================
--- head/sys/arm/allwinner/a10_ahci.c (revision 300049)
+++ head/sys/arm/allwinner/a10_ahci.c (revision 300050)
@@ -1,397 +1,397 @@
/*-
* Copyright (c) 2014-2015 M. Warner Losh <imp@freebsd.org>
* Copyright (c) 2015 Luiz Otavio O Souza <loos@freebsd.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* The magic-bit-bang sequence used in this code may be based on a Linux
* platform driver in the Allwinner SDK from Allwinner Technology Co., Ltd.
* www.allwinnertech.com, by Daniel Wang <danielwang@allwinnertech.com>
* though none of the original code was copied.
*/
#include "opt_bus.h"
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <machine/bus.h>
#include <dev/ofw/ofw_bus.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/ahci/ahci.h>
#include <dev/extres/clk/clk.h>
/*
* Allwinner a1x/a2x/a8x SATA attachment. This is just the AHCI register
* set with a few extra implementation-specific registers that need to
* be accounted for. There's only one PHY in the system, and it needs
* to be trained to bring the link up. In addition, there's some DMA
* specific things that need to be done as well. These things are also
* just about completely undocumented, except in ugly code in the Linux
* SDK Allwinner releases.
*/
/* BITx -- Unknown bit that needs to be set/cleared at position x */
/* UFx -- Unknown multi-bit field frobbed during init */
#define AHCI_BISTAFR 0x00A0
#define AHCI_BISTCR 0x00A4
#define AHCI_BISTFCTR 0x00A8
#define AHCI_BISTSR 0x00AC
#define AHCI_BISTDECR 0x00B0
#define AHCI_DIAGNR 0x00B4
#define AHCI_DIAGNR1 0x00B8
#define AHCI_OOBR 0x00BC
#define AHCI_PHYCS0R 0x00C0
/* Bits 0..17 are a mystery */
#define PHYCS0R_BIT18 (1 << 18)
#define PHYCS0R_POWER_ENABLE (1 << 19)
#define PHYCS0R_UF1_MASK (7 << 20) /* Unknown Field 1 */
#define PHYCS0R_UF1_INIT (3 << 20)
#define PHYCS0R_BIT23 (1 << 23)
#define PHYCS0R_UF2_MASK (7 << 24) /* Unknown Field 2 */
#define PHYCS0R_UF2_INIT (5 << 24)
/* Bit 27 mystery */
#define PHYCS0R_POWER_STATUS_MASK (7 << 28)
#define PHYCS0R_PS_GOOD (2 << 28)
/* Bit 31 mystery */
#define AHCI_PHYCS1R 0x00C4
/* Bits 0..5 are a mystery */
#define PHYCS1R_UF1_MASK (3 << 6)
#define PHYCS1R_UF1_INIT (2 << 6)
#define PHYCS1R_UF2_MASK (0x1f << 8)
#define PHYCS1R_UF2_INIT (6 << 8)
/* Bits 13..14 are a mystery */
#define PHYCS1R_BIT15 (1 << 15)
#define PHYCS1R_UF3_MASK (3 << 16)
#define PHYCS1R_UF3_INIT (2 << 16)
/* Bit 18 mystery */
#define PHYCS1R_HIGHZ (1 << 19)
/* Bits 20..27 mystery */
#define PHYCS1R_BIT28 (1 << 28)
/* Bits 29..31 mystery */
#define AHCI_PHYCS2R 0x00C8
/* bits 0..4 mystery */
#define PHYCS2R_UF1_MASK (0x1f << 5)
#define PHYCS2R_UF1_INIT (0x19 << 5)
/* Bits 10..23 mystery */
#define PHYCS2R_CALIBRATE (1 << 24)
/* Bits 25..31 mystery */
#define AHCI_TIMER1MS 0x00E0
#define AHCI_GPARAM1R 0x00E8
#define AHCI_GPARAM2R 0x00EC
#define AHCI_PPARAMR 0x00F0
#define AHCI_TESTR 0x00F4
#define AHCI_VERSIONR 0x00F8
#define AHCI_IDR 0x00FC
#define AHCI_RWCR 0x00FC
#define AHCI_P0DMACR 0x0070
#define AHCI_P0PHYCR 0x0078
#define AHCI_P0PHYSR 0x007C
#define PLL_FREQ 100000000
static void inline
ahci_set(struct resource *m, bus_size_t off, uint32_t set)
{
uint32_t val = ATA_INL(m, off);
val |= set;
ATA_OUTL(m, off, val);
}
static void inline
ahci_clr(struct resource *m, bus_size_t off, uint32_t clr)
{
uint32_t val = ATA_INL(m, off);
val &= ~clr;
ATA_OUTL(m, off, val);
}
static void inline
ahci_mask_set(struct resource *m, bus_size_t off, uint32_t mask, uint32_t set)
{
uint32_t val = ATA_INL(m, off);
val &= mask;
val |= set;
ATA_OUTL(m, off, val);
}
/*
* Should this be phy_reset or phy_init?
*/
#define PHY_RESET_TIMEOUT 1000
static void
ahci_a10_phy_reset(device_t dev)
{
uint32_t to, val;
struct ahci_controller *ctlr = device_get_softc(dev);
/*
- * Here start the the magic -- most of the comments are based
+ * Here starts the magic -- most of the comments are based
* on guesswork, names of routines and printf error
* messages. The code works, but it will do that even if the
* comments are 100% BS.
*/
/*
* Lock out other access while we initialize. Or at least that
* seems to be the case based on Linux SDK #defines. Maybe this
* puts things into reset?
*/
ATA_OUTL(ctlr->r_mem, AHCI_RWCR, 0);
DELAY(100);
/*
* Set bit 19 in PHYCS1R. Guessing this disables driving the PHY
* port for a bit while we reset things.
*/
ahci_set(ctlr->r_mem, AHCI_PHYCS1R, PHYCS1R_HIGHZ);
/*
* Frob PHYCS0R...
*/
ahci_mask_set(ctlr->r_mem, AHCI_PHYCS0R,
~PHYCS0R_UF2_MASK,
PHYCS0R_UF2_INIT | PHYCS0R_BIT23 | PHYCS0R_BIT18);
/*
* Set three fields in PHYCS1R
*/
ahci_mask_set(ctlr->r_mem, AHCI_PHYCS1R,
~(PHYCS1R_UF1_MASK | PHYCS1R_UF2_MASK | PHYCS1R_UF3_MASK),
PHYCS1R_UF1_INIT | PHYCS1R_UF2_INIT | PHYCS1R_UF3_INIT);
/*
* Two more mystery bits in PHYCS1R. -- can these be combined above?
*/
ahci_set(ctlr->r_mem, AHCI_PHYCS1R, PHYCS1R_BIT15 | PHYCS1R_BIT28);
/*
* Now clear that first mystery bit. Perhaps this starts
* driving the PHY again so we can power it up and start
* talking to the SATA drive, if any, below.
*/
ahci_clr(ctlr->r_mem, AHCI_PHYCS1R, PHYCS1R_HIGHZ);
/*
* Frob PHYCS0R again...
*/
ahci_mask_set(ctlr->r_mem, AHCI_PHYCS0R,
~PHYCS0R_UF1_MASK, PHYCS0R_UF1_INIT);
/*
* Frob PHYCS2R, because 25 means something?
*/
ahci_mask_set(ctlr->r_mem, AHCI_PHYCS2R, ~PHYCS2R_UF1_MASK,
PHYCS2R_UF1_INIT);
DELAY(100); /* WAG */
/*
* Turn on the power to the PHY and wait for it to report back
* good?
*/
ahci_set(ctlr->r_mem, AHCI_PHYCS0R, PHYCS0R_POWER_ENABLE);
for (to = PHY_RESET_TIMEOUT; to > 0; to--) {
val = ATA_INL(ctlr->r_mem, AHCI_PHYCS0R);
if ((val & PHYCS0R_POWER_STATUS_MASK) == PHYCS0R_PS_GOOD)
break;
DELAY(10);
}
if (to == 0 && bootverbose)
device_printf(dev, "PHY Power Failed PHYCS0R = %#x\n", val);
/*
* Calibrate the clocks between the device and the host. This appears
* to be an automated process that clears the bit when it is done.
*/
ahci_set(ctlr->r_mem, AHCI_PHYCS2R, PHYCS2R_CALIBRATE);
for (to = PHY_RESET_TIMEOUT; to > 0; to--) {
val = ATA_INL(ctlr->r_mem, AHCI_PHYCS2R);
if ((val & PHYCS2R_CALIBRATE) == 0)
break;
DELAY(10);
}
if (to == 0 && bootverbose)
device_printf(dev, "PHY Cal Failed PHYCS2R %#x\n", val);
/*
* OK, let things settle down a bit.
*/
DELAY(1000);
/*
* Go back into normal mode now that we've calibrated the PHY.
*/
ATA_OUTL(ctlr->r_mem, AHCI_RWCR, 7);
}
static void
ahci_a10_ch_start(struct ahci_channel *ch)
{
uint32_t reg;
/*
* Magical values from Allwinner SDK, setup the DMA before start
* operations on this channel.
*/
reg = ATA_INL(ch->r_mem, AHCI_P0DMACR);
reg &= ~0xff00;
reg |= 0x4400;
ATA_OUTL(ch->r_mem, AHCI_P0DMACR, reg);
}
static int
ahci_a10_ctlr_reset(device_t dev)
{
ahci_a10_phy_reset(dev);
return (ahci_ctlr_reset(dev));
}
static int
ahci_a10_probe(device_t dev)
{
if (!ofw_bus_is_compatible(dev, "allwinner,sun4i-a10-ahci"))
return (ENXIO);
device_set_desc(dev, "Allwinner Integrated AHCI controller");
return (BUS_PROBE_DEFAULT);
}
static int
ahci_a10_attach(device_t dev)
{
int error;
struct ahci_controller *ctlr;
clk_t clk_pll, clk_gate;
ctlr = device_get_softc(dev);
clk_pll = clk_gate = NULL;
ctlr->quirks = AHCI_Q_NOPMP;
ctlr->vendorid = 0;
ctlr->deviceid = 0;
ctlr->subvendorid = 0;
ctlr->subdeviceid = 0;
ctlr->r_rid = 0;
if (!(ctlr->r_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
&ctlr->r_rid, RF_ACTIVE)))
return (ENXIO);
/* Enable clocks */
error = clk_get_by_ofw_index(dev, 0, &clk_pll);
if (error != 0) {
device_printf(dev, "Cannot get PLL clock\n");
goto fail;
}
error = clk_get_by_ofw_index(dev, 1, &clk_gate);
if (error != 0) {
device_printf(dev, "Cannot get gate clock\n");
goto fail;
}
error = clk_set_freq(clk_pll, PLL_FREQ, CLK_SET_ROUND_DOWN);
if (error != 0) {
device_printf(dev, "Cannot set PLL frequency\n");
goto fail;
}
error = clk_enable(clk_pll);
if (error != 0) {
device_printf(dev, "Cannot enable PLL\n");
goto fail;
}
error = clk_enable(clk_gate);
if (error != 0) {
device_printf(dev, "Cannot enable clk gate\n");
goto fail;
}
/* Reset controller */
if ((error = ahci_a10_ctlr_reset(dev)) != 0)
goto fail;
/*
* No MSI registers on this platform.
*/
ctlr->msi = 0;
ctlr->numirqs = 1;
/* Channel start callback(). */
ctlr->ch_start = ahci_a10_ch_start;
/*
* Note: ahci_attach will release ctlr->r_mem on errors automatically
*/
return (ahci_attach(dev));
fail:
if (clk_gate != NULL)
clk_release(clk_gate);
if (clk_pll != NULL)
clk_release(clk_pll);
bus_release_resource(dev, SYS_RES_MEMORY, ctlr->r_rid, ctlr->r_mem);
return (error);
}
static int
ahci_a10_detach(device_t dev)
{
return (ahci_detach(dev));
}
devclass_t ahci_devclass;
static device_method_t ahci_ata_methods[] = {
DEVMETHOD(device_probe, ahci_a10_probe),
DEVMETHOD(device_attach, ahci_a10_attach),
DEVMETHOD(device_detach, ahci_a10_detach),
DEVMETHOD(bus_print_child, ahci_print_child),
DEVMETHOD(bus_alloc_resource, ahci_alloc_resource),
DEVMETHOD(bus_release_resource, ahci_release_resource),
DEVMETHOD(bus_setup_intr, ahci_setup_intr),
DEVMETHOD(bus_teardown_intr, ahci_teardown_intr),
DEVMETHOD(bus_child_location_str, ahci_child_location_str),
DEVMETHOD_END
};
static driver_t ahci_ata_driver = {
"ahci",
ahci_ata_methods,
sizeof(struct ahci_controller)
};
DRIVER_MODULE(ahci, simplebus, ahci_ata_driver, ahci_devclass, 0, 0);
Index: head/sys/arm/freescale/imx/imx_sdhci.c
===================================================================
--- head/sys/arm/freescale/imx/imx_sdhci.c (revision 300049)
+++ head/sys/arm/freescale/imx/imx_sdhci.c (revision 300050)
@@ -1,839 +1,839 @@
/*-
* Copyright (c) 2013 Ian Lepore <ian@freebsd.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* SDHCI driver glue for Freescale i.MX SoC family.
*
* This supports both eSDHC (earlier SoCs) and uSDHC (more recent SoCs).
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/types.h>
#include <sys/bus.h>
#include <sys/callout.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <sys/resource.h>
#include <sys/rman.h>
#include <sys/sysctl.h>
#include <sys/taskqueue.h>
#include <sys/time.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <machine/intr.h>
#include <arm/freescale/imx/imx_ccmvar.h>
#include <dev/ofw/ofw_bus.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/mmc/bridge.h>
#include <dev/mmc/mmcreg.h>
#include <dev/mmc/mmcbrvar.h>
#include <dev/sdhci/sdhci.h>
#include "sdhci_if.h"
struct imx_sdhci_softc {
device_t dev;
struct resource * mem_res;
struct resource * irq_res;
void * intr_cookie;
struct sdhci_slot slot;
struct callout r1bfix_callout;
sbintime_t r1bfix_timeout_at;
uint32_t baseclk_hz;
uint32_t sdclockreg_freq_bits;
uint32_t cmd_and_mode;
uint32_t r1bfix_intmask;
uint8_t r1bfix_type;
uint8_t hwtype;
boolean_t force_card_present;
};
#define R1BFIX_NONE 0 /* No fix needed at next interrupt. */
#define R1BFIX_NODATA 1 /* Synthesize DATA_END for R1B w/o data. */
#define R1BFIX_AC12 2 /* Wait for busy after auto command 12. */
#define HWTYPE_NONE 0 /* Hardware not recognized/supported. */
#define HWTYPE_ESDHC 1 /* imx5x and earlier. */
#define HWTYPE_USDHC 2 /* imx6. */
#define SDHC_WTMK_LVL 0x44 /* Watermark Level register. */
#define USDHC_MIX_CONTROL 0x48 /* Mix(ed) Control register. */
#define SDHC_VEND_SPEC 0xC0 /* Vendor-specific register. */
#define SDHC_VEND_FRC_SDCLK_ON (1 << 8)
#define SDHC_VEND_IPGEN (1 << 11)
#define SDHC_VEND_HCKEN (1 << 12)
#define SDHC_VEND_PEREN (1 << 13)
#define SDHC_PRES_STATE 0x24
#define SDHC_PRES_CIHB (1 << 0)
#define SDHC_PRES_CDIHB (1 << 1)
#define SDHC_PRES_DLA (1 << 2)
#define SDHC_PRES_SDSTB (1 << 3)
#define SDHC_PRES_IPGOFF (1 << 4)
#define SDHC_PRES_HCKOFF (1 << 5)
#define SDHC_PRES_PEROFF (1 << 6)
#define SDHC_PRES_SDOFF (1 << 7)
#define SDHC_PRES_WTA (1 << 8)
#define SDHC_PRES_RTA (1 << 9)
#define SDHC_PRES_BWEN (1 << 10)
#define SDHC_PRES_BREN (1 << 11)
#define SDHC_PRES_RTR (1 << 12)
#define SDHC_PRES_CINST (1 << 16)
#define SDHC_PRES_CDPL (1 << 18)
#define SDHC_PRES_WPSPL (1 << 19)
#define SDHC_PRES_CLSL (1 << 23)
#define SDHC_PRES_DLSL_SHIFT 24
#define SDHC_PRES_DLSL_MASK (0xffU << SDHC_PRES_DLSL_SHIFT)
#define SDHC_PROT_CTRL 0x28
#define SDHC_PROT_LED (1 << 0)
#define SDHC_PROT_WIDTH_1BIT (0 << 1)
#define SDHC_PROT_WIDTH_4BIT (1 << 1)
#define SDHC_PROT_WIDTH_8BIT (2 << 1)
#define SDHC_PROT_WIDTH_MASK (3 << 1)
#define SDHC_PROT_D3CD (1 << 3)
#define SDHC_PROT_EMODE_BIG (0 << 4)
#define SDHC_PROT_EMODE_HALF (1 << 4)
#define SDHC_PROT_EMODE_LITTLE (2 << 4)
#define SDHC_PROT_EMODE_MASK (3 << 4)
#define SDHC_PROT_SDMA (0 << 8)
#define SDHC_PROT_ADMA1 (1 << 8)
#define SDHC_PROT_ADMA2 (2 << 8)
#define SDHC_PROT_ADMA264 (3 << 8)
#define SDHC_PROT_DMA_MASK (3 << 8)
#define SDHC_PROT_CDTL (1 << 6)
#define SDHC_PROT_CDSS (1 << 7)
#define SDHC_INT_STATUS 0x30
#define SDHC_CLK_IPGEN (1 << 0)
#define SDHC_CLK_HCKEN (1 << 1)
#define SDHC_CLK_PEREN (1 << 2)
#define SDHC_CLK_DIVISOR_MASK 0x000000f0
#define SDHC_CLK_DIVISOR_SHIFT 4
#define SDHC_CLK_PRESCALE_MASK 0x0000ff00
#define SDHC_CLK_PRESCALE_SHIFT 8
static struct ofw_compat_data compat_data[] = {
{"fsl,imx6q-usdhc", HWTYPE_USDHC},
{"fsl,imx6sl-usdhc", HWTYPE_USDHC},
{"fsl,imx53-esdhc", HWTYPE_ESDHC},
{"fsl,imx51-esdhc", HWTYPE_ESDHC},
{NULL, HWTYPE_NONE},
};
static void imx_sdhc_set_clock(struct imx_sdhci_softc *sc, int enable);
static void imx_sdhci_r1bfix_func(void *arg);
static inline uint32_t
RD4(struct imx_sdhci_softc *sc, bus_size_t off)
{
return (bus_read_4(sc->mem_res, off));
}
static inline void
WR4(struct imx_sdhci_softc *sc, bus_size_t off, uint32_t val)
{
bus_write_4(sc->mem_res, off, val);
}
static uint8_t
imx_sdhci_read_1(device_t dev, struct sdhci_slot *slot, bus_size_t off)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
uint32_t val32, wrk32;
/*
* Most of the things in the standard host control register are in the
* hardware's wider protocol control register, but some of the bits are
* moved around.
*/
if (off == SDHCI_HOST_CONTROL) {
wrk32 = RD4(sc, SDHC_PROT_CTRL);
val32 = wrk32 & (SDHCI_CTRL_LED | SDHCI_CTRL_CARD_DET |
SDHCI_CTRL_FORCE_CARD);
switch (wrk32 & SDHC_PROT_WIDTH_MASK) {
case SDHC_PROT_WIDTH_1BIT:
/* Value is already 0. */
break;
case SDHC_PROT_WIDTH_4BIT:
val32 |= SDHCI_CTRL_4BITBUS;
break;
case SDHC_PROT_WIDTH_8BIT:
val32 |= SDHCI_CTRL_8BITBUS;
break;
}
switch (wrk32 & SDHC_PROT_DMA_MASK) {
case SDHC_PROT_SDMA:
/* Value is already 0. */
break;
case SDHC_PROT_ADMA1:
/* This value is deprecated, should never appear. */
break;
case SDHC_PROT_ADMA2:
val32 |= SDHCI_CTRL_ADMA2;
break;
case SDHC_PROT_ADMA264:
val32 |= SDHCI_CTRL_ADMA264;
break;
}
return val32;
}
/*
* XXX can't find the bus power on/off knob. For now we have to say the
* power is always on and always set to the same voltage.
*/
if (off == SDHCI_POWER_CONTROL) {
return (SDHCI_POWER_ON | SDHCI_POWER_300);
}
return ((RD4(sc, off & ~3) >> (off & 3) * 8) & 0xff);
}
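/*
 * Illustration (added): the fall-through path above reads the aligned
 * 32-bit word and extracts one byte lane. For off == 0x2b: off & ~3 ==
 * 0x28 selects the word, off & 3 == 3 selects lane 3, and the value is
 * shifted right by 24 bits before masking with 0xff.
 */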
static uint16_t
imx_sdhci_read_2(device_t dev, struct sdhci_slot *slot, bus_size_t off)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
uint32_t val32, wrk32;
if (sc->hwtype == HWTYPE_USDHC) {
/*
* The USDHC hardware has nothing in the version register, but
* it's v3 compatible with all our translation code.
*/
if (off == SDHCI_HOST_VERSION) {
return (SDHCI_SPEC_300 << SDHCI_SPEC_VER_SHIFT);
}
/*
* The USDHC hardware moved the transfer mode bits to the mixed
* control register, fetch them from there.
*/
if (off == SDHCI_TRANSFER_MODE)
return (RD4(sc, USDHC_MIX_CONTROL) & 0x37);
} else if (sc->hwtype == HWTYPE_ESDHC) {
/*
* The ESDHC hardware has the typical 32-bit combined "command
* and mode" register that we have to cache so that command
* isn't written until after mode. On a read, just retrieve the
* cached values last written.
*/
if (off == SDHCI_TRANSFER_MODE) {
return (sc->cmd_and_mode >> 16);
} else if (off == SDHCI_COMMAND_FLAGS) {
return (sc->cmd_and_mode & 0x0000ffff);
}
}
/*
* This hardware only manages one slot. Synthesize a slot interrupt
* status register... if there are any enabled interrupts active they
* must be coming from our one and only slot.
*/
if (off == SDHCI_SLOT_INT_STATUS) {
val32 = RD4(sc, SDHCI_INT_STATUS);
val32 &= RD4(sc, SDHCI_SIGNAL_ENABLE);
return (val32 ? 1 : 0);
}
/*
* The clock enable bit is in the vendor register and the clock-stable
* bit is in the present state register. Transcribe them as if they
* were in the clock control register where they should be.
* XXX Is it important that we distinguish between "internal" and "card"
* clocks? Probably not; transcribe the card clock status to both bits.
*/
if (off == SDHCI_CLOCK_CONTROL) {
val32 = 0;
wrk32 = RD4(sc, SDHC_VEND_SPEC);
if (wrk32 & SDHC_VEND_FRC_SDCLK_ON)
val32 |= SDHCI_CLOCK_INT_EN | SDHCI_CLOCK_CARD_EN;
wrk32 = RD4(sc, SDHC_PRES_STATE);
if (wrk32 & SDHC_PRES_SDSTB)
val32 |= SDHCI_CLOCK_INT_STABLE;
val32 |= sc->sdclockreg_freq_bits;
return (val32);
}
return ((RD4(sc, off & ~3) >> (off & 3) * 8) & 0xffff);
}
static uint32_t
imx_sdhci_read_4(device_t dev, struct sdhci_slot *slot, bus_size_t off)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
uint32_t val32, wrk32;
val32 = RD4(sc, off);
/*
* The hardware leaves the base clock frequency out of the capabilities
* register; fill it in. The timeout clock is the same as the active
* output sdclock; we indicate that with a quirk setting so don't
* populate the timeout frequency bits.
*
* XXX Turn off (for now) features the hardware can do but this driver
* doesn't yet handle (1.8v, suspend/resume, etc).
*/
if (off == SDHCI_CAPABILITIES) {
val32 &= ~SDHCI_CAN_VDD_180;
val32 &= ~SDHCI_CAN_DO_SUSPEND;
val32 |= SDHCI_CAN_DO_8BITBUS;
val32 |= (sc->baseclk_hz / 1000000) << SDHCI_CLOCK_BASE_SHIFT;
return (val32);
}
/*
* The hardware moves bits around in the present state register to make
* room for all 8 data line state bits. To translate, mask out all the
* bits which are not in the same position in both registers (this also
* masks out some freescale-specific bits in locations defined as
* reserved by sdhci), then shift the data line and retune request bits
* down to their standard locations.
*/
if (off == SDHCI_PRESENT_STATE) {
wrk32 = val32;
val32 &= 0x000F0F07;
val32 |= (wrk32 >> 4) & SDHCI_STATE_DAT_MASK;
val32 |= (wrk32 >> 9) & SDHCI_RETUNE_REQUEST;
if (sc->force_card_present)
val32 |= SDHCI_CARD_PRESENT;
return (val32);
}
/*
* imx_sdhci_intr() can synthesize a DATA_END interrupt following a
* command with an R1B response, mix it into the hardware status.
*/
if (off == SDHCI_INT_STATUS) {
return (val32 | sc->r1bfix_intmask);
}
return val32;
}
static void
imx_sdhci_read_multi_4(device_t dev, struct sdhci_slot *slot, bus_size_t off,
uint32_t *data, bus_size_t count)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
bus_read_multi_4(sc->mem_res, off, data, count);
}
static void
imx_sdhci_write_1(device_t dev, struct sdhci_slot *slot, bus_size_t off, uint8_t val)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
uint32_t val32;
/*
* Most of the things in the standard host control register are in the
* hardware's wider protocol control register, but some of the bits are
* moved around.
*/
if (off == SDHCI_HOST_CONTROL) {
val32 = RD4(sc, SDHC_PROT_CTRL);
val32 &= ~(SDHC_PROT_LED | SDHC_PROT_DMA_MASK |
SDHC_PROT_WIDTH_MASK | SDHC_PROT_CDTL | SDHC_PROT_CDSS);
val32 |= (val & SDHCI_CTRL_LED);
if (val & SDHCI_CTRL_8BITBUS)
val32 |= SDHC_PROT_WIDTH_8BIT;
else
val32 |= (val & SDHCI_CTRL_4BITBUS);
val32 |= (val & (SDHCI_CTRL_SDMA | SDHCI_CTRL_ADMA2)) << 4;
val32 |= (val & (SDHCI_CTRL_CARD_DET | SDHCI_CTRL_FORCE_CARD));
WR4(sc, SDHC_PROT_CTRL, val32);
return;
}
/* XXX I can't find the bus power on/off knob; do nothing. */
if (off == SDHCI_POWER_CONTROL) {
return;
}
val32 = RD4(sc, off & ~3);
val32 &= ~(0xff << (off & 3) * 8);
val32 |= (val << (off & 3) * 8);
WR4(sc, off & ~3, val32);
}
static void
imx_sdhci_write_2(device_t dev, struct sdhci_slot *slot, bus_size_t off, uint16_t val)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
uint32_t val32;
/* The USDHC hardware moved the transfer mode bits to mixed control. */
if (sc->hwtype == HWTYPE_USDHC) {
if (off == SDHCI_TRANSFER_MODE) {
val32 = RD4(sc, USDHC_MIX_CONTROL);
val32 &= ~0x3f;
val32 |= val & 0x37;
// XXX acmd23 not supported here (or by sdhci driver)
WR4(sc, USDHC_MIX_CONTROL, val32);
return;
}
}
/*
* The clock control stuff is complex enough to have its own routine
* that can both change speeds and en/disable the clock output. Also,
* save the register bits in SDHCI format so that we can play them back
* in the read2 routine without complex decoding.
*/
if (off == SDHCI_CLOCK_CONTROL) {
sc->sdclockreg_freq_bits = val & 0xffc0;
if (val & SDHCI_CLOCK_CARD_EN) {
imx_sdhc_set_clock(sc, true);
} else {
imx_sdhc_set_clock(sc, false);
}
return;
}
/*
* Figure out whether we need to check the DAT0 line for busy status at
* interrupt time. The controller should be doing this, but for some
* reason it doesn't. There are two cases:
* - R1B response with no data transfer should generate a DATA_END (aka
* TRANSFER_COMPLETE) interrupt after waiting for busy, but if
* there's no data transfer there's no DATA_END interrupt. This is
* documented; they seem to think it's a feature.
* - R1B response after Auto-CMD12 appears to not work, even though
* there's a control bit for it (bit 3) in the vendor register.
* When we're starting a command that needs a manual DAT0 line check at
* interrupt time, we leave ourselves a note in r1bfix_type so that we
* can do the extra work in imx_sdhci_intr().
*/
if (off == SDHCI_COMMAND_FLAGS) {
if (val & SDHCI_CMD_DATA) {
const uint32_t MBAUTOCMD = SDHCI_TRNS_ACMD12 | SDHCI_TRNS_MULTI;
val32 = RD4(sc, USDHC_MIX_CONTROL);
if ((val32 & MBAUTOCMD) == MBAUTOCMD)
sc->r1bfix_type = R1BFIX_AC12;
} else {
if ((val & SDHCI_CMD_RESP_MASK) == SDHCI_CMD_RESP_SHORT_BUSY) {
WR4(sc, SDHCI_INT_ENABLE, slot->intmask | SDHCI_INT_RESPONSE);
WR4(sc, SDHCI_SIGNAL_ENABLE, slot->intmask | SDHCI_INT_RESPONSE);
sc->r1bfix_type = R1BFIX_NODATA;
}
}
}
val32 = RD4(sc, off & ~3);
val32 &= ~(0xffff << (off & 3) * 8);
val32 |= ((val & 0xffff) << (off & 3) * 8);
WR4(sc, off & ~3, val32);
}
static void
imx_sdhci_write_4(device_t dev, struct sdhci_slot *slot, bus_size_t off, uint32_t val)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
/* Clear synthesized interrupts, then pass the value to the hardware. */
if (off == SDHCI_INT_STATUS) {
sc->r1bfix_intmask &= ~val;
}
WR4(sc, off, val);
}
static void
imx_sdhci_write_multi_4(device_t dev, struct sdhci_slot *slot, bus_size_t off,
uint32_t *data, bus_size_t count)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
bus_write_multi_4(sc->mem_res, off, data, count);
}
static void
imx_sdhc_set_clock(struct imx_sdhci_softc *sc, int enable)
{
uint32_t divisor, enable_bits, enable_reg, freq, prescale, val32;
if (sc->hwtype == HWTYPE_ESDHC) {
divisor = (sc->sdclockreg_freq_bits >> SDHCI_DIVIDER_SHIFT) &
SDHCI_DIVIDER_MASK;
enable_reg = SDHCI_CLOCK_CONTROL;
enable_bits = SDHC_CLK_IPGEN | SDHC_CLK_HCKEN |
SDHC_CLK_PEREN;
} else {
divisor = (sc->sdclockreg_freq_bits >> SDHCI_DIVIDER_SHIFT) &
SDHCI_DIVIDER_MASK;
divisor |= ((sc->sdclockreg_freq_bits >>
SDHCI_DIVIDER_HI_SHIFT) &
SDHCI_DIVIDER_HI_MASK) << SDHCI_DIVIDER_MASK_LEN;
enable_reg = SDHCI_CLOCK_CONTROL;
enable_bits = SDHC_VEND_IPGEN | SDHC_VEND_HCKEN |
SDHC_VEND_PEREN;
}
WR4(sc, SDHC_VEND_SPEC,
RD4(sc, SDHC_VEND_SPEC) & ~SDHC_VEND_FRC_SDCLK_ON);
WR4(sc, enable_reg, RD4(sc, enable_reg) & ~enable_bits);
if (!enable)
return;
if (divisor == 0)
freq = sc->baseclk_hz;
else
freq = sc->baseclk_hz / (2 * divisor);
for (prescale = 2; freq < sc->baseclk_hz / (prescale * 16);)
prescale <<= 1;
for (divisor = 1; freq < sc->baseclk_hz / (prescale * divisor);)
++divisor;
#ifdef DEBUG
device_printf(sc->dev,
"desired SD freq: %d, actual: %d; base %d prescale %d divisor %d\n",
freq, sc->baseclk_hz / (prescale * divisor), sc->baseclk_hz,
prescale, divisor);
#endif
prescale >>= 1;
divisor -= 1;
val32 = RD4(sc, SDHCI_CLOCK_CONTROL);
val32 &= ~SDHC_CLK_DIVISOR_MASK;
val32 |= divisor << SDHC_CLK_DIVISOR_SHIFT;
val32 &= ~SDHC_CLK_PRESCALE_MASK;
val32 |= prescale << SDHC_CLK_PRESCALE_SHIFT;
WR4(sc, SDHCI_CLOCK_CONTROL, val32);
WR4(sc, enable_reg, RD4(sc, enable_reg) | enable_bits);
WR4(sc, SDHC_VEND_SPEC,
RD4(sc, SDHC_VEND_SPEC) | SDHC_VEND_FRC_SDCLK_ON);
}
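/*
 * Worked example (illustrative, assuming baseclk_hz == 200 MHz): an
 * SDHCI divider of 4 requests freq == 200 MHz / (2 * 4) == 25 MHz.
 * The prescale loop leaves prescale == 2 (25 MHz is not below
 * 200 MHz / 32), the divisor loop stops at divisor == 4, and the
 * hardware fields are then encoded as prescale >> 1 == 1 and
 * divisor - 1 == 3, giving exactly 200 MHz / (2 * 4) == 25 MHz out.
 */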
static boolean_t
imx_sdhci_r1bfix_is_wait_done(struct imx_sdhci_softc *sc)
{
uint32_t inhibit;
mtx_assert(&sc->slot.mtx, MA_OWNED);
/*
* Check the DAT0 line status using both the DLA (data line active) and
* CDIHB (data inhibit) bits in the present state register. In theory
* just DLA should do the trick, but in practice it takes both. If the
* DAT0 line is still being held and we're not yet beyond the timeout
* point, just schedule another callout to check again later.
*/
inhibit = RD4(sc, SDHC_PRES_STATE) & (SDHC_PRES_DLA | SDHC_PRES_CDIHB);
if (inhibit && getsbinuptime() < sc->r1bfix_timeout_at) {
callout_reset_sbt(&sc->r1bfix_callout, SBT_1MS, 0,
imx_sdhci_r1bfix_func, sc, 0);
return (false);
}
/*
* If we reach this point with the inhibit bits still set, we've got a
* timeout, synthesize a DATA_TIMEOUT interrupt. Otherwise the DAT0
* line has been released, and we synthesize a DATA_END, and if the type
* of fix needed was on a command-without-data we also now add in the
* original INT_RESPONSE that we suppressed earlier.
*/
if (inhibit)
sc->r1bfix_intmask |= SDHCI_INT_DATA_TIMEOUT;
else {
sc->r1bfix_intmask |= SDHCI_INT_DATA_END;
if (sc->r1bfix_type == R1BFIX_NODATA)
sc->r1bfix_intmask |= SDHCI_INT_RESPONSE;
}
sc->r1bfix_type = R1BFIX_NONE;
return (true);
}
static void
imx_sdhci_r1bfix_func(void * arg)
{
struct imx_sdhci_softc *sc = arg;
boolean_t r1bwait_done;
mtx_lock(&sc->slot.mtx);
r1bwait_done = imx_sdhci_r1bfix_is_wait_done(sc);
mtx_unlock(&sc->slot.mtx);
if (r1bwait_done)
sdhci_generic_intr(&sc->slot);
}
static void
imx_sdhci_intr(void *arg)
{
struct imx_sdhci_softc *sc = arg;
uint32_t intmask;
mtx_lock(&sc->slot.mtx);
/*
* Manually check the DAT0 line for R1B response types that the
* controller fails to handle properly. The controller asserts the done
* interrupt while the card is still asserting busy with the DAT0 line.
*
* We check DAT0 immediately because most of the time, especially on a
* read, the card will actually be done by time we get here. If it's
* not, then the wait_done routine will schedule a callout to re-check
* periodically until it is done. In that case we clear the interrupt
* out of the hardware now so that we can present it later when the DAT0
* line is released.
*
- * If we need to wait for the the DAT0 line to be released, we set up a
+ * If we need to wait for the DAT0 line to be released, we set up a
* timeout point 250ms in the future. This number comes from the SD
* spec, which allows a command to take that long. In the real world,
* cards tend to take 10-20ms for a long-running command such as a write
* or erase that spans two pages.
*/
switch (sc->r1bfix_type) {
case R1BFIX_NODATA:
intmask = RD4(sc, SDHC_INT_STATUS) & SDHCI_INT_RESPONSE;
break;
case R1BFIX_AC12:
intmask = RD4(sc, SDHC_INT_STATUS) & SDHCI_INT_DATA_END;
break;
default:
intmask = 0;
break;
}
if (intmask) {
sc->r1bfix_timeout_at = getsbinuptime() + 250 * SBT_1MS;
if (!imx_sdhci_r1bfix_is_wait_done(sc)) {
WR4(sc, SDHC_INT_STATUS, intmask);
bus_barrier(sc->mem_res, SDHC_INT_STATUS, 4,
BUS_SPACE_BARRIER_WRITE);
}
}
mtx_unlock(&sc->slot.mtx);
sdhci_generic_intr(&sc->slot);
}
static int
imx_sdhci_get_ro(device_t bus, device_t child)
{
return (false);
}
static int
imx_sdhci_detach(device_t dev)
{
return (EBUSY);
}
static int
imx_sdhci_attach(device_t dev)
{
struct imx_sdhci_softc *sc = device_get_softc(dev);
int rid, err;
phandle_t node;
sc->dev = dev;
sc->hwtype = ofw_bus_search_compatible(dev, compat_data)->ocd_data;
if (sc->hwtype == HWTYPE_NONE)
panic("Impossible: not compatible in imx_sdhci_attach()");
rid = 0;
sc->mem_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid,
RF_ACTIVE);
if (!sc->mem_res) {
device_printf(dev, "cannot allocate memory window\n");
err = ENXIO;
goto fail;
}
rid = 0;
sc->irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
RF_ACTIVE);
if (!sc->irq_res) {
device_printf(dev, "cannot allocate interrupt\n");
err = ENXIO;
goto fail;
}
if (bus_setup_intr(dev, sc->irq_res, INTR_TYPE_BIO | INTR_MPSAFE,
NULL, imx_sdhci_intr, sc, &sc->intr_cookie)) {
device_printf(dev, "cannot setup interrupt handler\n");
err = ENXIO;
goto fail;
}
sc->slot.quirks |= SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK;
/*
* DMA is not really broken, I just haven't implemented it yet.
*/
sc->slot.quirks |= SDHCI_QUIRK_BROKEN_DMA;
/*
* Set the buffer watermark level to 128 words (512 bytes) for both read
* and write. The hardware has a restriction that when the read or
* write ready status is asserted, that means you can read exactly the
* number of words set in the watermark register before you have to
* re-check the status and potentially wait for more data. The main
* sdhci driver provides no hook for doing status checking on less than
* a full block boundary, so we set the watermark level to be a full
* block. Reads and writes where the block size is less than the
* watermark size will work correctly too, no need to change the
* watermark for different size blocks. However, 128 is the maximum
* allowed for the watermark, so PIO is limited to 512-byte blocks
* (which works fine for SD cards, but may be a problem for SDIO some day).
*
* XXX need named constants for this stuff.
*/
WR4(sc, SDHC_WTMK_LVL, 0x08800880);
sc->baseclk_hz = imx_ccm_sdhci_hz();
/*
* If the slot is flagged with the non-removable property, set our flag
* to always force the SDHCI_CARD_PRESENT bit on.
*
* XXX Workaround for gpio-based card detect...
*
* We don't have gpio support yet. If there's a cd-gpios property just
* force the SDHCI_CARD_PRESENT bit on for now. If there isn't really a
* card there it will fail to probe at the mmc layer and nothing bad
* happens except instantiating an mmcN device for an empty slot.
*/
node = ofw_bus_get_node(dev);
if (OF_hasprop(node, "non-removable"))
sc->force_card_present = true;
else if (OF_hasprop(node, "cd-gpios")) {
/* XXX put real gpio hookup here. */
sc->force_card_present = true;
}
callout_init(&sc->r1bfix_callout, 1);
sdhci_init_slot(dev, &sc->slot, 0);
bus_generic_probe(dev);
bus_generic_attach(dev);
sdhci_start_slot(&sc->slot);
return (0);
fail:
if (sc->intr_cookie)
bus_teardown_intr(dev, sc->irq_res, sc->intr_cookie);
if (sc->irq_res)
bus_release_resource(dev, SYS_RES_IRQ, 0, sc->irq_res);
if (sc->mem_res)
bus_release_resource(dev, SYS_RES_MEMORY, 0, sc->mem_res);
return (err);
}
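/*
 * Editor's sketch of the named constants the "XXX need named constants"
 * comment in imx_sdhci_attach() asks for. The field split is inferred
 * from the 0x08800880 value and the surrounding comment (0x80-word
 * read/write watermarks in bits 0-7 and 16-23, with the 0x08 bytes
 * presumed to be burst-length fields); it is an assumption, not taken
 * from the i.MX reference manual.
 */
#define	SDHC_WTMK_RD_WML_SHIFT	0	/* read watermark, in words */
#define	SDHC_WTMK_RD_BRST_SHIFT	8	/* presumed read burst length */
#define	SDHC_WTMK_WR_WML_SHIFT	16	/* write watermark, in words */
#define	SDHC_WTMK_WR_BRST_SHIFT	24	/* presumed write burst length */
#define	SDHC_WTMK_WML_WORDS	128	/* one full 512-byte block */
/*
 * Equivalent to the literal 0x08800880 used above:
 *
 *	WR4(sc, SDHC_WTMK_LVL,
 *	    (8 << SDHC_WTMK_WR_BRST_SHIFT) |
 *	    (SDHC_WTMK_WML_WORDS << SDHC_WTMK_WR_WML_SHIFT) |
 *	    (8 << SDHC_WTMK_RD_BRST_SHIFT) |
 *	    (SDHC_WTMK_WML_WORDS << SDHC_WTMK_RD_WML_SHIFT));
 */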
static int
imx_sdhci_probe(device_t dev)
{
if (!ofw_bus_status_okay(dev))
return (ENXIO);
switch (ofw_bus_search_compatible(dev, compat_data)->ocd_data) {
case HWTYPE_ESDHC:
device_set_desc(dev, "Freescale eSDHC controller");
return (BUS_PROBE_DEFAULT);
case HWTYPE_USDHC:
device_set_desc(dev, "Freescale uSDHC controller");
return (BUS_PROBE_DEFAULT);
default:
break;
}
return (ENXIO);
}
static device_method_t imx_sdhci_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, imx_sdhci_probe),
DEVMETHOD(device_attach, imx_sdhci_attach),
DEVMETHOD(device_detach, imx_sdhci_detach),
/* Bus interface */
DEVMETHOD(bus_read_ivar, sdhci_generic_read_ivar),
DEVMETHOD(bus_write_ivar, sdhci_generic_write_ivar),
DEVMETHOD(bus_print_child, bus_generic_print_child),
/* MMC bridge interface */
DEVMETHOD(mmcbr_update_ios, sdhci_generic_update_ios),
DEVMETHOD(mmcbr_request, sdhci_generic_request),
DEVMETHOD(mmcbr_get_ro, imx_sdhci_get_ro),
DEVMETHOD(mmcbr_acquire_host, sdhci_generic_acquire_host),
DEVMETHOD(mmcbr_release_host, sdhci_generic_release_host),
/* SDHCI registers accessors */
DEVMETHOD(sdhci_read_1, imx_sdhci_read_1),
DEVMETHOD(sdhci_read_2, imx_sdhci_read_2),
DEVMETHOD(sdhci_read_4, imx_sdhci_read_4),
DEVMETHOD(sdhci_read_multi_4, imx_sdhci_read_multi_4),
DEVMETHOD(sdhci_write_1, imx_sdhci_write_1),
DEVMETHOD(sdhci_write_2, imx_sdhci_write_2),
DEVMETHOD(sdhci_write_4, imx_sdhci_write_4),
DEVMETHOD(sdhci_write_multi_4, imx_sdhci_write_multi_4),
{ 0, 0 }
};
static devclass_t imx_sdhci_devclass;
static driver_t imx_sdhci_driver = {
"sdhci_imx",
imx_sdhci_methods,
sizeof(struct imx_sdhci_softc),
};
DRIVER_MODULE(sdhci_imx, simplebus, imx_sdhci_driver, imx_sdhci_devclass, 0, 0);
MODULE_DEPEND(sdhci_imx, sdhci, 1, 1, 1);
DRIVER_MODULE(mmc, sdhci_imx, mmc_driver, mmc_devclass, NULL, NULL);
MODULE_DEPEND(sdhci_imx, mmc, 1, 1, 1);
Index: head/sys/arm/include/asm.h
===================================================================
--- head/sys/arm/include/asm.h (revision 300049)
+++ head/sys/arm/include/asm.h (revision 300050)
@@ -1,251 +1,251 @@
/* $NetBSD: asm.h,v 1.5 2003/08/07 16:26:53 agc Exp $ */
/*-
* Copyright (c) 1990 The Regents of the University of California.
* All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* William Jolitz.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: @(#)asm.h 5.5 (Berkeley) 5/7/91
*
* $FreeBSD$
*/
#ifndef _MACHINE_ASM_H_
#define _MACHINE_ASM_H_
#include <sys/cdefs.h>
#include <machine/acle-compat.h>
#include <machine/sysreg.h>
#define _C_LABEL(x) x
#define _ASM_LABEL(x) x
#ifndef _ALIGN_TEXT
# define _ALIGN_TEXT .align 2
#endif
#ifndef _STANDALONE
#define STOP_UNWINDING .cantunwind
#define _FNSTART .fnstart
#define _FNEND .fnend
#define _SAVE(...) .save __VA_ARGS__
#else
#define STOP_UNWINDING
#define _FNSTART
#define _FNEND
#define _SAVE(...)
#endif
/*
* gas/arm uses @ as its comment character, so @ cannot be used here.
* It recognises # instead of the @ symbol in .type directives.
*/
#define _ASM_TYPE_FUNCTION #function
#define _ASM_TYPE_OBJECT #object
/* XXX Is this still the right prologue for profiling? */
#ifdef GPROF
#define _PROF_PROLOGUE \
mov ip, lr; \
bl __mcount
#else
#define _PROF_PROLOGUE
#endif
/*
* EENTRY()/EEND() mark "extra" entry/exit points from a function.
- * LEENTRY()/LEEND() are the the same for local symbols.
+ * LEENTRY()/LEEND() are the same for local symbols.
* The unwind info cannot handle the concept of a nested function, or a function
* with multiple .fnstart directives, but some of our assembler code is written
* with multiple labels to allow entry at several points. The EENTRY() macro
* defines such an extra entry point without a new .fnstart, so that it's
* basically just a label that you can jump to. The EEND() macro does nothing
* at all, except document the exit point associated with the same-named entry.
*/
#define GLOBAL(x) .global x
#ifdef __thumb__
#define _FUNC_MODE .code 16; .thumb_func
#else
#define _FUNC_MODE .code 32
#endif
#define _LEENTRY(x) .type x,_ASM_TYPE_FUNCTION; _FUNC_MODE; x:
#define _LEEND(x) /* nothing */
#define _EENTRY(x) GLOBAL(x); _LEENTRY(x)
#define _EEND(x) _LEEND(x)
#define _LENTRY(x) .text; _ALIGN_TEXT; _LEENTRY(x); _FNSTART
#define _LEND(x) .size x, . - x; _FNEND
#define _ENTRY(x) .text; _ALIGN_TEXT; _EENTRY(x); _FNSTART
#define _END(x) _LEND(x)
#define ENTRY(y) _ENTRY(_C_LABEL(y)); _PROF_PROLOGUE
#define EENTRY(y) _EENTRY(_C_LABEL(y));
#define ENTRY_NP(y) _ENTRY(_C_LABEL(y))
#define EENTRY_NP(y) _EENTRY(_C_LABEL(y))
#define END(y) _END(_C_LABEL(y))
#define EEND(y) _EEND(_C_LABEL(y))
#define ASENTRY(y) _ENTRY(_ASM_LABEL(y)); _PROF_PROLOGUE
#define ASLENTRY(y) _LENTRY(_ASM_LABEL(y)); _PROF_PROLOGUE
#define ASEENTRY(y) _EENTRY(_ASM_LABEL(y));
#define ASLEENTRY(y) _LEENTRY(_ASM_LABEL(y));
#define ASENTRY_NP(y) _ENTRY(_ASM_LABEL(y))
#define ASLENTRY_NP(y) _LENTRY(_ASM_LABEL(y))
#define ASEENTRY_NP(y) _EENTRY(_ASM_LABEL(y))
#define ASLEENTRY_NP(y) _LEENTRY(_ASM_LABEL(y))
#define ASEND(y) _END(_ASM_LABEL(y))
#define ASLEND(y) _LEND(_ASM_LABEL(y))
#define ASEEND(y) _EEND(_ASM_LABEL(y))
#define ASLEEND(y) _LEEND(_ASM_LABEL(y))
#define ASMSTR .asciz
#if defined(PIC)
#define PLT_SYM(x) PIC_SYM(x, PLT)
#define GOT_SYM(x) PIC_SYM(x, GOT)
#define GOT_GET(x,got,sym) \
ldr x, sym; \
ldr x, [x, got]
#define GOT_INIT(got,gotsym,pclabel) \
ldr got, gotsym; \
pclabel: add got, pc
#ifdef __thumb__
#define GOT_INITSYM(gotsym,pclabel) \
.align 2; \
gotsym: .word _C_LABEL(_GLOBAL_OFFSET_TABLE_) - (pclabel+4)
#else
#define GOT_INITSYM(gotsym,pclabel) \
.align 2; \
gotsym: .word _C_LABEL(_GLOBAL_OFFSET_TABLE_) - (pclabel+8)
#endif
#ifdef __STDC__
#define PIC_SYM(x,y) x ## ( ## y ## )
#else
#define PIC_SYM(x,y) x/**/(/**/y/**/)
#endif
#else
#define PLT_SYM(x) x
#define GOT_SYM(x) x
#define GOT_GET(x,got,sym) \
ldr x, sym;
#define GOT_INIT(got,gotsym,pclabel)
#define GOT_INITSYM(gotsym,pclabel)
#define PIC_SYM(x,y) x
#endif /* PIC */
#undef __FBSDID
#if !defined(lint) && !defined(STRIP_FBSDID)
#define __FBSDID(s) .ident s
#else
#define __FBSDID(s) /* nothing */
#endif
#define WEAK_ALIAS(alias,sym) \
.weak alias; \
alias = sym
#ifdef __STDC__
#define WARN_REFERENCES(sym,msg) \
.stabs msg ## ,30,0,0,0 ; \
.stabs __STRING(_C_LABEL(sym)) ## ,1,0,0,0
#else
#define WARN_REFERENCES(sym,msg) \
.stabs msg,30,0,0,0 ; \
.stabs __STRING(sym),1,0,0,0
#endif /* __STDC__ */
/* Exactly one of the __ARM_ARCH_*__ macros will be defined by the compiler. */
/* The _ARM_ARCH_* macros are deprecated and will be removed soon. */
/* This should be moved into another header so it can be used in
* both asm and C code. machine/asm.h cannot be included in C code. */
#if defined (__ARM_ARCH_7__) || defined (__ARM_ARCH_7A__)
#define _ARM_ARCH_7
#define _HAVE_ARMv7_INSTRUCTIONS 1
#endif
#if defined (_HAVE_ARMv7_INSTRUCTIONS) || defined (__ARM_ARCH_6__) || \
defined (__ARM_ARCH_6J__) || defined (__ARM_ARCH_6K__) || \
defined (__ARM_ARCH_6Z__) || defined (__ARM_ARCH_6ZK__)
#define _ARM_ARCH_6
#define _HAVE_ARMv6_INSTRUCTIONS 1
#endif
#if defined (_HAVE_ARMv6_INSTRUCTIONS) || defined (__ARM_ARCH_5TE__) || \
defined (__ARM_ARCH_5TEJ__) || defined (__ARM_ARCH_5E__)
#define _ARM_ARCH_5E
#define _HAVE_ARMv5E_INSTRUCTIONS 1
#endif
#if defined (_HAVE_ARMv5E_INSTRUCTIONS) || defined (__ARM_ARCH_5__) || \
defined (__ARM_ARCH_5T__)
#define _ARM_ARCH_5
#define _HAVE_ARMv5_INSTRUCTIONS 1
#endif
#if defined (_HAVE_ARMv5_INSTRUCTIONS) || defined (__ARM_ARCH_4T__)
#define _ARM_ARCH_4T
#define _HAVE_ARMv4T_INSTRUCTIONS 1
#endif
/* FreeBSD requires ARMv4, so this is always set. */
#define _HAVE_ARMv4_INSTRUCTIONS 1
#if defined (_HAVE_ARMv4T_INSTRUCTIONS)
# define RET bx lr
# define RETeq bxeq lr
# define RETne bxne lr
# define RETc(c) bx##c lr
#else
# define RET mov pc, lr
# define RETeq moveq pc, lr
# define RETne movne pc, lr
# define RETc(c) mov##c pc, lr
#endif
#if __ARM_ARCH >= 7
#define ISB isb
#define DSB dsb
#define DMB dmb
#define WFI wfi
#elif __ARM_ARCH == 6
#define ISB mcr CP15_CP15ISB
#define DSB mcr CP15_CP15DSB
#define DMB mcr CP15_CP15DMB
#define WFI mcr CP15_CP15WFI
#else
#define ISB mcr CP15_CP15ISB
#define DSB mcr CP15_CP15DSB /* DSB and DMB are the */
#define DMB mcr CP15_CP15DSB /* same prior to v6. */
/* No form of WFI available on v4, define nothing to get an error on use. */
#endif
#endif /* !_MACHINE_ASM_H_ */
Index: head/sys/cam/cam_periph.c
===================================================================
--- head/sys/cam/cam_periph.c (revision 300049)
+++ head/sys/cam/cam_periph.c (revision 300050)
@@ -1,1925 +1,1925 @@
/*-
* Common functions for CAM "type" (peripheral) drivers.
*
* Copyright (c) 1997, 1998 Justin T. Gibbs.
* Copyright (c) 1997, 1998, 1999, 2000 Kenneth D. Merry.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification, immediately at the beginning of the file.
* 2. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/types.h>
#include <sys/malloc.h>
#include <sys/kernel.h>
#include <sys/bio.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/buf.h>
#include <sys/proc.h>
#include <sys/devicestat.h>
#include <sys/bus.h>
#include <sys/sbuf.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_queue.h>
#include <cam/cam_xpt_periph.h>
#include <cam/cam_periph.h>
#include <cam/cam_debug.h>
#include <cam/cam_sim.h>
#include <cam/scsi/scsi_all.h>
#include <cam/scsi/scsi_message.h>
#include <cam/scsi/scsi_pass.h>
static u_int camperiphnextunit(struct periph_driver *p_drv,
u_int newunit, int wired,
path_id_t pathid, target_id_t target,
lun_id_t lun);
static u_int camperiphunit(struct periph_driver *p_drv,
path_id_t pathid, target_id_t target,
lun_id_t lun);
static void camperiphdone(struct cam_periph *periph,
union ccb *done_ccb);
static void camperiphfree(struct cam_periph *periph);
static int camperiphscsistatuserror(union ccb *ccb,
union ccb **orig_ccb,
cam_flags camflags,
u_int32_t sense_flags,
int *openings,
u_int32_t *relsim_flags,
u_int32_t *timeout,
u_int32_t *action,
const char **action_string);
static int camperiphscsisenseerror(union ccb *ccb,
union ccb **orig_ccb,
cam_flags camflags,
u_int32_t sense_flags,
int *openings,
u_int32_t *relsim_flags,
u_int32_t *timeout,
u_int32_t *action,
const char **action_string);
static void cam_periph_devctl_notify(union ccb *ccb);
static int nperiph_drivers;
static int initialized = 0;
struct periph_driver **periph_drivers;
static MALLOC_DEFINE(M_CAMPERIPH, "CAM periph", "CAM peripheral buffers");
static int periph_selto_delay = 1000;
TUNABLE_INT("kern.cam.periph_selto_delay", &periph_selto_delay);
static int periph_noresrc_delay = 500;
TUNABLE_INT("kern.cam.periph_noresrc_delay", &periph_noresrc_delay);
static int periph_busy_delay = 500;
TUNABLE_INT("kern.cam.periph_busy_delay", &periph_busy_delay);
void
periphdriver_register(void *data)
{
struct periph_driver *drv = (struct periph_driver *)data;
struct periph_driver **newdrivers, **old;
int ndrivers;
again:
ndrivers = nperiph_drivers + 2;
newdrivers = malloc(sizeof(*newdrivers) * ndrivers, M_CAMPERIPH,
M_WAITOK);
xpt_lock_buses();
if (ndrivers != nperiph_drivers + 2) {
/*
* Lost race against itself; go around.
*/
xpt_unlock_buses();
free(newdrivers, M_CAMPERIPH);
goto again;
}
if (periph_drivers)
bcopy(periph_drivers, newdrivers,
sizeof(*newdrivers) * nperiph_drivers);
newdrivers[nperiph_drivers] = drv;
newdrivers[nperiph_drivers + 1] = NULL;
old = periph_drivers;
periph_drivers = newdrivers;
nperiph_drivers++;
xpt_unlock_buses();
if (old)
free(old, M_CAMPERIPH);
/* If the driver is marked as early, or it is late now, initialize it. */
if (((drv->flags & CAM_PERIPH_DRV_EARLY) != 0 && initialized > 0) ||
initialized > 1)
(*drv->init)();
}
void
periphdriver_init(int level)
{
int i, early;
initialized = max(initialized, level);
for (i = 0; periph_drivers[i] != NULL; i++) {
early = (periph_drivers[i]->flags & CAM_PERIPH_DRV_EARLY) ? 1 : 2;
if (early == initialized)
(*periph_drivers[i]->init)();
}
}
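/*
 * Editor's sketch: how a peripheral driver normally reaches
 * periphdriver_register() above. Real drivers declare a struct
 * periph_driver and pass it to the PERIPHDRIVER_DECLARE() macro; the
 * "xx" driver name and xxinit() function here are hypothetical.
 */
static void	xxinit(void);

static struct periph_driver xxdriver = {
	xxinit, "xx",
	TAILQ_HEAD_INITIALIZER(xxdriver.units), /* generation */ 0
};

PERIPHDRIVER_DECLARE(xx, xxdriver);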
cam_status
cam_periph_alloc(periph_ctor_t *periph_ctor,
periph_oninv_t *periph_oninvalidate,
periph_dtor_t *periph_dtor, periph_start_t *periph_start,
char *name, cam_periph_type type, struct cam_path *path,
ac_callback_t *ac_callback, ac_code code, void *arg)
{
struct periph_driver **p_drv;
struct cam_sim *sim;
struct cam_periph *periph;
struct cam_periph *cur_periph;
path_id_t path_id;
target_id_t target_id;
lun_id_t lun_id;
cam_status status;
u_int init_level;
init_level = 0;
/*
* Handle Hot-Plug scenarios. If there is already a peripheral
* of our type assigned to this path, we are likely waiting for
* final close on an old, invalidated, peripheral. If this is
* the case, queue up a deferred call to the peripheral's async
* handler. If it looks like a mistaken re-allocation, complain.
*/
if ((periph = cam_periph_find(path, name)) != NULL) {
if ((periph->flags & CAM_PERIPH_INVALID) != 0
&& (periph->flags & CAM_PERIPH_NEW_DEV_FOUND) == 0) {
periph->flags |= CAM_PERIPH_NEW_DEV_FOUND;
periph->deferred_callback = ac_callback;
periph->deferred_ac = code;
return (CAM_REQ_INPROG);
} else {
printf("cam_periph_alloc: attempt to re-allocate "
"valid device %s%d rejected flags %#x "
"refcount %d\n", periph->periph_name,
periph->unit_number, periph->flags,
periph->refcount);
}
return (CAM_REQ_INVALID);
}
periph = (struct cam_periph *)malloc(sizeof(*periph), M_CAMPERIPH,
M_NOWAIT|M_ZERO);
if (periph == NULL)
return (CAM_RESRC_UNAVAIL);
init_level++;
sim = xpt_path_sim(path);
path_id = xpt_path_path_id(path);
target_id = xpt_path_target_id(path);
lun_id = xpt_path_lun_id(path);
periph->periph_start = periph_start;
periph->periph_dtor = periph_dtor;
periph->periph_oninval = periph_oninvalidate;
periph->type = type;
periph->periph_name = name;
periph->scheduled_priority = CAM_PRIORITY_NONE;
periph->immediate_priority = CAM_PRIORITY_NONE;
periph->refcount = 1; /* Dropped by invalidation. */
periph->sim = sim;
SLIST_INIT(&periph->ccb_list);
status = xpt_create_path(&path, periph, path_id, target_id, lun_id);
if (status != CAM_REQ_CMP)
goto failure;
periph->path = path;
xpt_lock_buses();
for (p_drv = periph_drivers; *p_drv != NULL; p_drv++) {
if (strcmp((*p_drv)->driver_name, name) == 0)
break;
}
if (*p_drv == NULL) {
printf("cam_periph_alloc: invalid periph name '%s'\n", name);
xpt_unlock_buses();
xpt_free_path(periph->path);
free(periph, M_CAMPERIPH);
return (CAM_REQ_INVALID);
}
periph->unit_number = camperiphunit(*p_drv, path_id, target_id, lun_id);
cur_periph = TAILQ_FIRST(&(*p_drv)->units);
while (cur_periph != NULL
&& cur_periph->unit_number < periph->unit_number)
cur_periph = TAILQ_NEXT(cur_periph, unit_links);
if (cur_periph != NULL) {
KASSERT(cur_periph->unit_number != periph->unit_number, ("duplicate units on periph list"));
TAILQ_INSERT_BEFORE(cur_periph, periph, unit_links);
} else {
TAILQ_INSERT_TAIL(&(*p_drv)->units, periph, unit_links);
(*p_drv)->generation++;
}
xpt_unlock_buses();
init_level++;
status = xpt_add_periph(periph);
if (status != CAM_REQ_CMP)
goto failure;
init_level++;
CAM_DEBUG(periph->path, CAM_DEBUG_INFO, ("Periph created\n"));
status = periph_ctor(periph, arg);
if (status == CAM_REQ_CMP)
init_level++;
failure:
switch (init_level) {
case 4:
/* Initialized successfully */
break;
case 3:
CAM_DEBUG(periph->path, CAM_DEBUG_INFO, ("Periph destroyed\n"));
xpt_remove_periph(periph);
/* FALLTHROUGH */
case 2:
xpt_lock_buses();
TAILQ_REMOVE(&(*p_drv)->units, periph, unit_links);
xpt_unlock_buses();
xpt_free_path(periph->path);
/* FALLTHROUGH */
case 1:
free(periph, M_CAMPERIPH);
/* FALLTHROUGH */
case 0:
/* No cleanup to perform. */
break;
default:
panic("%s: Unknown init level", __func__);
}
return(status);
}
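/*
 * Editor's sketch: cam_periph_alloc() is typically called from a
 * driver's async callback when a new device is found. All xx-prefixed
 * names are hypothetical; the constructor, invalidation, destructor,
 * and start routines are the ones cam_periph_alloc() stores above.
 */
static void
xxasync(void *callback_arg, u_int32_t code, struct cam_path *path, void *arg)
{
	cam_status status;

	switch (code) {
	case AC_FOUND_DEVICE:
		status = cam_periph_alloc(xxregister, xxoninvalidate,
		    xxcleanup, xxstart, "xx", CAM_PERIPH_BIO, path,
		    xxasync, AC_FOUND_DEVICE, arg);
		if (status != CAM_REQ_CMP && status != CAM_REQ_INPROG)
			printf("xxasync: unable to attach, status %#x\n",
			    status);
		break;
	default:
		break;
	}
}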
/*
* Find a peripheral structure with the specified path, target, lun,
* and (optionally) name. If the name is NULL, this function will return
* the first peripheral driver that matches the specified path.
*/
struct cam_periph *
cam_periph_find(struct cam_path *path, char *name)
{
struct periph_driver **p_drv;
struct cam_periph *periph;
xpt_lock_buses();
for (p_drv = periph_drivers; *p_drv != NULL; p_drv++) {
if (name != NULL && (strcmp((*p_drv)->driver_name, name) != 0))
continue;
TAILQ_FOREACH(periph, &(*p_drv)->units, unit_links) {
if (xpt_path_comp(periph->path, path) == 0) {
xpt_unlock_buses();
cam_periph_assert(periph, MA_OWNED);
return(periph);
}
}
if (name != NULL) {
xpt_unlock_buses();
return(NULL);
}
}
xpt_unlock_buses();
return(NULL);
}
/*
* Find peripheral driver instances attached to the specified path.
*/
int
cam_periph_list(struct cam_path *path, struct sbuf *sb)
{
struct sbuf local_sb;
struct periph_driver **p_drv;
struct cam_periph *periph;
int count;
int sbuf_alloc_len;
sbuf_alloc_len = 16;
retry:
sbuf_new(&local_sb, NULL, sbuf_alloc_len, SBUF_FIXEDLEN);
count = 0;
xpt_lock_buses();
for (p_drv = periph_drivers; *p_drv != NULL; p_drv++) {
TAILQ_FOREACH(periph, &(*p_drv)->units, unit_links) {
if (xpt_path_comp(periph->path, path) != 0)
continue;
if (sbuf_len(&local_sb) != 0)
sbuf_cat(&local_sb, ",");
sbuf_printf(&local_sb, "%s%d", periph->periph_name,
periph->unit_number);
if (sbuf_error(&local_sb) == ENOMEM) {
sbuf_alloc_len *= 2;
xpt_unlock_buses();
sbuf_delete(&local_sb);
goto retry;
}
count++;
}
}
xpt_unlock_buses();
sbuf_finish(&local_sb);
sbuf_cpy(sb, sbuf_data(&local_sb));
sbuf_delete(&local_sb);
return (count);
}
cam_status
cam_periph_acquire(struct cam_periph *periph)
{
cam_status status;
status = CAM_REQ_CMP_ERR;
if (periph == NULL)
return (status);
xpt_lock_buses();
if ((periph->flags & CAM_PERIPH_INVALID) == 0) {
periph->refcount++;
status = CAM_REQ_CMP;
}
xpt_unlock_buses();
return (status);
}
void
cam_periph_doacquire(struct cam_periph *periph)
{
xpt_lock_buses();
KASSERT(periph->refcount >= 1,
("cam_periph_doacquire() with refcount == %d", periph->refcount));
periph->refcount++;
xpt_unlock_buses();
}
void
cam_periph_release_locked_buses(struct cam_periph *periph)
{
cam_periph_assert(periph, MA_OWNED);
KASSERT(periph->refcount >= 1, ("periph->refcount >= 1"));
if (--periph->refcount == 0)
camperiphfree(periph);
}
void
cam_periph_release_locked(struct cam_periph *periph)
{
if (periph == NULL)
return;
xpt_lock_buses();
cam_periph_release_locked_buses(periph);
xpt_unlock_buses();
}
void
cam_periph_release(struct cam_periph *periph)
{
struct mtx *mtx;
if (periph == NULL)
return;
cam_periph_assert(periph, MA_NOTOWNED);
mtx = cam_periph_mtx(periph);
mtx_lock(mtx);
cam_periph_release_locked(periph);
mtx_unlock(mtx);
}
int
cam_periph_hold(struct cam_periph *periph, int priority)
{
int error;
/*
* Increment the reference count on the peripheral
* while we wait for our lock attempt to succeed
* to ensure the peripheral doesn't disappear out
* from under us while we sleep.
*/
if (cam_periph_acquire(periph) != CAM_REQ_CMP)
return (ENXIO);
cam_periph_assert(periph, MA_OWNED);
while ((periph->flags & CAM_PERIPH_LOCKED) != 0) {
periph->flags |= CAM_PERIPH_LOCK_WANTED;
if ((error = cam_periph_sleep(periph, periph, priority,
"caplck", 0)) != 0) {
cam_periph_release_locked(periph);
return (error);
}
if (periph->flags & CAM_PERIPH_INVALID) {
cam_periph_release_locked(periph);
return (ENXIO);
}
}
periph->flags |= CAM_PERIPH_LOCKED;
return (0);
}
void
cam_periph_unhold(struct cam_periph *periph)
{
cam_periph_assert(periph, MA_OWNED);
periph->flags &= ~CAM_PERIPH_LOCKED;
if ((periph->flags & CAM_PERIPH_LOCK_WANTED) != 0) {
periph->flags &= ~CAM_PERIPH_LOCK_WANTED;
wakeup(periph);
}
cam_periph_release_locked(periph);
}
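/*
 * Editor's sketch: hold/unhold typically bracket a driver's open
 * routine so that only one thread runs the open/close path at a time.
 * The xx names are hypothetical and error paths are simplified.
 */
static int
xxopen(struct disk *dp)
{
	struct cam_periph *periph = (struct cam_periph *)dp->d_drv1;
	int error;

	cam_periph_lock(periph);
	if ((error = cam_periph_hold(periph, PRIBIO | PCATCH)) != 0) {
		cam_periph_unlock(periph);
		return (error);
	}
	/* ... issue the open-time commands here ... */
	cam_periph_unhold(periph);
	cam_periph_unlock(periph);
	return (0);
}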
/*
* Look for the next unit number that is not currently in use for this
* peripheral type starting at "newunit". Also exclude unit numbers that
* are reserved for future "hardwiring" unless we already know that this
* is a potential wired device. Only assume that the device is "wired" the
* first time through the loop since after that we'll be looking at unit
* numbers that did not match a wiring entry.
*/
static u_int
camperiphnextunit(struct periph_driver *p_drv, u_int newunit, int wired,
path_id_t pathid, target_id_t target, lun_id_t lun)
{
struct cam_periph *periph;
char *periph_name;
int i, val, dunit, r;
const char *dname, *strval;
periph_name = p_drv->driver_name;
for (;;newunit++) {
for (periph = TAILQ_FIRST(&p_drv->units);
periph != NULL && periph->unit_number != newunit;
periph = TAILQ_NEXT(periph, unit_links))
;
if (periph != NULL && periph->unit_number == newunit) {
if (wired != 0) {
xpt_print(periph->path, "Duplicate Wired "
"Device entry!\n");
xpt_print(periph->path, "Second device (%s "
"device at scbus%d target %d lun %d) will "
"not be wired\n", periph_name, pathid,
target, lun);
wired = 0;
}
continue;
}
if (wired)
break;
/*
* Don't match entries like "da 4" as a wired down
* device, but do match entries like "da 4 target 5"
* or even "da 4 scbus 1".
*/
i = 0;
dname = periph_name;
for (;;) {
r = resource_find_dev(&i, dname, &dunit, NULL, NULL);
if (r != 0)
break;
/* if no "target" and no specific scbus, skip */
if (resource_int_value(dname, dunit, "target", &val) &&
(resource_string_value(dname, dunit, "at",&strval)||
strcmp(strval, "scbus") == 0))
continue;
if (newunit == dunit)
break;
}
if (r != 0)
break;
}
return (newunit);
}
static u_int
camperiphunit(struct periph_driver *p_drv, path_id_t pathid,
target_id_t target, lun_id_t lun)
{
u_int unit;
int wired, i, val, dunit;
const char *dname, *strval;
char pathbuf[32], *periph_name;
periph_name = p_drv->driver_name;
snprintf(pathbuf, sizeof(pathbuf), "scbus%d", pathid);
unit = 0;
i = 0;
dname = periph_name;
for (wired = 0; resource_find_dev(&i, dname, &dunit, NULL, NULL) == 0;
wired = 0) {
if (resource_string_value(dname, dunit, "at", &strval) == 0) {
if (strcmp(strval, pathbuf) != 0)
continue;
wired++;
}
if (resource_int_value(dname, dunit, "target", &val) == 0) {
if (val != target)
continue;
wired++;
}
if (resource_int_value(dname, dunit, "lun", &val) == 0) {
if (val != lun)
continue;
wired++;
}
if (wired != 0) {
unit = dunit;
break;
}
}
/*
* Either start from 0 looking for the next unit or from
* the unit number given in the resource config. This way,
* if we have wildcard matches, we don't return the same
* unit number twice.
*/
unit = camperiphnextunit(p_drv, unit, wired, pathid, target, lun);
return (unit);
}
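/*
 * Editor's note: the "wired" entries consulted above come from the
 * kernel environment / device.hints, e.g. (illustrative values):
 *
 *	hint.da.4.at="scbus1"
 *	hint.da.4.target="5"
 *	hint.da.4.lun="0"
 *
 * which pins unit da4 to that bus/target/lun, matching the
 * resource_*_value() lookups in the two functions above.
 */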
void
cam_periph_invalidate(struct cam_periph *periph)
{
cam_periph_assert(periph, MA_OWNED);
/*
* We only call this routine the first time a peripheral is
* invalidated.
*/
if ((periph->flags & CAM_PERIPH_INVALID) != 0)
return;
CAM_DEBUG(periph->path, CAM_DEBUG_INFO, ("Periph invalidated\n"));
if ((periph->flags & CAM_PERIPH_ANNOUNCED) && !rebooting)
xpt_denounce_periph(periph);
periph->flags |= CAM_PERIPH_INVALID;
periph->flags &= ~CAM_PERIPH_NEW_DEV_FOUND;
if (periph->periph_oninval != NULL)
periph->periph_oninval(periph);
cam_periph_release_locked(periph);
}
static void
camperiphfree(struct cam_periph *periph)
{
struct periph_driver **p_drv;
cam_periph_assert(periph, MA_OWNED);
KASSERT(periph->periph_allocating == 0, ("%s%d: freed while allocating",
periph->periph_name, periph->unit_number));
for (p_drv = periph_drivers; *p_drv != NULL; p_drv++) {
if (strcmp((*p_drv)->driver_name, periph->periph_name) == 0)
break;
}
if (*p_drv == NULL) {
printf("camperiphfree: attempt to free non-existant periph\n");
return;
}
/*
* We need to set this flag before dropping the topology lock, to
* let anyone who is traversing the list know that this peripheral is
* about to be freed, and that there will be no more reference count
* checks.
*/
periph->flags |= CAM_PERIPH_FREE;
/*
* The peripheral destructor semantics dictate calling with only the
* SIM mutex held. Since it might sleep, it should not be called
* with the topology lock held.
*/
xpt_unlock_buses();
/*
* We need to call the peripheral destructor prior to removing the
* peripheral from the list. Otherwise, we risk running into a
* scenario where the peripheral unit number may get reused
* (because it has been removed from the list), but some resources
* used by the peripheral are still hanging around. In particular,
* the devfs nodes used by some peripherals like the pass(4) driver
* aren't fully cleaned up until the destructor is run. If the
* unit number is reused before the devfs instance is fully gone,
* devfs will panic.
*/
if (periph->periph_dtor != NULL)
periph->periph_dtor(periph);
/*
* The peripheral list is protected by the topology lock.
*/
xpt_lock_buses();
TAILQ_REMOVE(&(*p_drv)->units, periph, unit_links);
(*p_drv)->generation++;
xpt_remove_periph(periph);
xpt_unlock_buses();
if ((periph->flags & CAM_PERIPH_ANNOUNCED) && !rebooting)
xpt_print(periph->path, "Periph destroyed\n");
else
CAM_DEBUG(periph->path, CAM_DEBUG_INFO, ("Periph destroyed\n"));
if (periph->flags & CAM_PERIPH_NEW_DEV_FOUND) {
union ccb ccb;
void *arg;
switch (periph->deferred_ac) {
case AC_FOUND_DEVICE:
ccb.ccb_h.func_code = XPT_GDEV_TYPE;
xpt_setup_ccb(&ccb.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
xpt_action(&ccb);
arg = &ccb;
break;
case AC_PATH_REGISTERED:
ccb.ccb_h.func_code = XPT_PATH_INQ;
xpt_setup_ccb(&ccb.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
xpt_action(&ccb);
arg = &ccb;
break;
default:
arg = NULL;
break;
}
periph->deferred_callback(NULL, periph->deferred_ac,
periph->path, arg);
}
xpt_free_path(periph->path);
free(periph, M_CAMPERIPH);
xpt_lock_buses();
}
/*
* Map user virtual pointers into kernel virtual address space, so we can
* access the memory. This is now a generic function that centralizes most
* of the sanity checks on the data flags, if any.
* This also only works for up to MAXPHYS memory. Since we use
* buffers to map stuff in and out, we're limited to the buffer size.
*/
int
cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo,
u_int maxmap)
{
int numbufs, i, j;
int flags[CAM_PERIPH_MAXMAPS];
u_int8_t **data_ptrs[CAM_PERIPH_MAXMAPS];
u_int32_t lengths[CAM_PERIPH_MAXMAPS];
u_int32_t dirs[CAM_PERIPH_MAXMAPS];
if (maxmap == 0)
maxmap = DFLTPHYS; /* traditional default */
else if (maxmap > MAXPHYS)
maxmap = MAXPHYS; /* for safety */
switch(ccb->ccb_h.func_code) {
case XPT_DEV_MATCH:
if (ccb->cdm.match_buf_len == 0) {
printf("cam_periph_mapmem: invalid match buffer "
"length 0\n");
return(EINVAL);
}
if (ccb->cdm.pattern_buf_len > 0) {
data_ptrs[0] = (u_int8_t **)&ccb->cdm.patterns;
lengths[0] = ccb->cdm.pattern_buf_len;
dirs[0] = CAM_DIR_OUT;
data_ptrs[1] = (u_int8_t **)&ccb->cdm.matches;
lengths[1] = ccb->cdm.match_buf_len;
dirs[1] = CAM_DIR_IN;
numbufs = 2;
} else {
data_ptrs[0] = (u_int8_t **)&ccb->cdm.matches;
lengths[0] = ccb->cdm.match_buf_len;
dirs[0] = CAM_DIR_IN;
numbufs = 1;
}
/*
* This request will not go to the hardware, no reason
* to be so strict. vmapbuf() is able to map up to MAXPHYS.
*/
maxmap = MAXPHYS;
break;
case XPT_SCSI_IO:
case XPT_CONT_TARGET_IO:
if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE)
return(0);
if ((ccb->ccb_h.flags & CAM_DATA_MASK) != CAM_DATA_VADDR)
return (EINVAL);
data_ptrs[0] = &ccb->csio.data_ptr;
lengths[0] = ccb->csio.dxfer_len;
dirs[0] = ccb->ccb_h.flags & CAM_DIR_MASK;
numbufs = 1;
break;
case XPT_ATA_IO:
if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE)
return(0);
if ((ccb->ccb_h.flags & CAM_DATA_MASK) != CAM_DATA_VADDR)
return (EINVAL);
data_ptrs[0] = &ccb->ataio.data_ptr;
lengths[0] = ccb->ataio.dxfer_len;
dirs[0] = ccb->ccb_h.flags & CAM_DIR_MASK;
numbufs = 1;
break;
case XPT_SMP_IO:
data_ptrs[0] = &ccb->smpio.smp_request;
lengths[0] = ccb->smpio.smp_request_len;
dirs[0] = CAM_DIR_OUT;
data_ptrs[1] = &ccb->smpio.smp_response;
lengths[1] = ccb->smpio.smp_response_len;
dirs[1] = CAM_DIR_IN;
numbufs = 2;
break;
case XPT_DEV_ADVINFO:
if (ccb->cdai.bufsiz == 0)
return (0);
data_ptrs[0] = (uint8_t **)&ccb->cdai.buf;
lengths[0] = ccb->cdai.bufsiz;
dirs[0] = CAM_DIR_IN;
numbufs = 1;
/*
* This request will not go to the hardware, no reason
* to be so strict. vmapbuf() is able to map up to MAXPHYS.
*/
maxmap = MAXPHYS;
break;
default:
return(EINVAL);
break; /* NOTREACHED */
}
/*
* Check the transfer length and permissions first, so we don't
* have to unmap any previously mapped buffers.
*/
for (i = 0; i < numbufs; i++) {
flags[i] = 0;
/*
* The userland data pointer passed in may not be page
* aligned. vmapbuf() truncates the address to a page
* boundary, so if the address isn't page aligned, we'll
* need enough space for the given transfer length, plus
* whatever extra space is necessary to make it to the page
* boundary.
*/
if ((lengths[i] +
(((vm_offset_t)(*data_ptrs[i])) & PAGE_MASK)) > maxmap){
printf("cam_periph_mapmem: attempt to map %lu bytes, "
"which is greater than %lu\n",
(long)(lengths[i] +
(((vm_offset_t)(*data_ptrs[i])) & PAGE_MASK)),
(u_long)maxmap);
return(E2BIG);
}
if (dirs[i] & CAM_DIR_OUT) {
flags[i] = BIO_WRITE;
}
if (dirs[i] & CAM_DIR_IN) {
flags[i] = BIO_READ;
}
}
/*
- * This keeps the the kernel stack of current thread from getting
+ * This keeps the kernel stack of current thread from getting
* swapped. In low-memory situations where the kernel stack might
* otherwise get swapped out, this holds it and allows the thread
* to make progress and release the kernel mapped pages sooner.
*
* XXX KDM should I use P_NOSWAP instead?
*/
PHOLD(curproc);
for (i = 0; i < numbufs; i++) {
/*
* Get the buffer.
*/
mapinfo->bp[i] = getpbuf(NULL);
/* put our pointer in the data slot */
mapinfo->bp[i]->b_data = *data_ptrs[i];
/* save the user's data address */
mapinfo->bp[i]->b_caller1 = *data_ptrs[i];
/* set the transfer length, we know it's < MAXPHYS */
mapinfo->bp[i]->b_bufsize = lengths[i];
/* set the direction */
mapinfo->bp[i]->b_iocmd = flags[i];
/*
* Map the buffer into kernel memory.
*
* Note that useracc() alone is not a sufficient test.
* vmapbuf() can still fail due to a smaller file mapped
* into a larger area of VM, or if userland races against
* vmapbuf() after the useracc() check.
*/
if (vmapbuf(mapinfo->bp[i], 1) < 0) {
for (j = 0; j < i; ++j) {
*data_ptrs[j] = mapinfo->bp[j]->b_caller1;
vunmapbuf(mapinfo->bp[j]);
relpbuf(mapinfo->bp[j], NULL);
}
relpbuf(mapinfo->bp[i], NULL);
PRELE(curproc);
return(EACCES);
}
/* set our pointer to the new mapped area */
*data_ptrs[i] = mapinfo->bp[i]->b_data;
mapinfo->num_bufs_used++;
}
/*
* Now that we've gotten this far, change ownership of the buffers
* to the kernel so that we don't run afoul of returning to user
* space with locks (on the buffer) held.
*/
for (i = 0; i < numbufs; i++) {
BUF_KERNPROC(mapinfo->bp[i]);
}
return(0);
}
/*
* Unmap memory segments mapped into kernel virtual address space by
* cam_periph_mapmem().
*/
void
cam_periph_unmapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
{
int numbufs, i;
u_int8_t **data_ptrs[CAM_PERIPH_MAXMAPS];
if (mapinfo->num_bufs_used <= 0) {
/* nothing to free and the process wasn't held. */
return;
}
switch (ccb->ccb_h.func_code) {
case XPT_DEV_MATCH:
numbufs = min(mapinfo->num_bufs_used, 2);
if (numbufs == 1) {
data_ptrs[0] = (u_int8_t **)&ccb->cdm.matches;
} else {
data_ptrs[0] = (u_int8_t **)&ccb->cdm.patterns;
data_ptrs[1] = (u_int8_t **)&ccb->cdm.matches;
}
break;
case XPT_SCSI_IO:
case XPT_CONT_TARGET_IO:
data_ptrs[0] = &ccb->csio.data_ptr;
numbufs = min(mapinfo->num_bufs_used, 1);
break;
case XPT_ATA_IO:
data_ptrs[0] = &ccb->ataio.data_ptr;
numbufs = min(mapinfo->num_bufs_used, 1);
break;
case XPT_SMP_IO:
numbufs = min(mapinfo->num_bufs_used, 2);
data_ptrs[0] = &ccb->smpio.smp_request;
data_ptrs[1] = &ccb->smpio.smp_response;
break;
case XPT_DEV_ADVINFO:
numbufs = min(mapinfo->num_bufs_used, 1);
data_ptrs[0] = (uint8_t **)&ccb->cdai.buf;
break;
default:
/* allow ourselves to be swapped once again */
PRELE(curproc);
return;
break; /* NOTREACHED */
}
for (i = 0; i < numbufs; i++) {
/* Set the user's pointer back to the original value */
*data_ptrs[i] = mapinfo->bp[i]->b_caller1;
/* unmap the buffer */
vunmapbuf(mapinfo->bp[i]);
/* release the buffer */
relpbuf(mapinfo->bp[i], NULL);
}
/* allow ourselves to be swapped once again */
PRELE(curproc);
}
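/*
 * Editor's sketch: mapmem/unmapmem bracket the submission of a
 * user-supplied CCB, pass(4)-style. Error paths and the requirement
 * that the path lock be held across cam_periph_runccb() (see the
 * xpt_path_assert() in that function below) are elided; the xx name
 * is hypothetical.
 */
static int
xx_run_user_ccb(struct cam_periph *periph, union ccb *ccb)
{
	struct cam_periph_map_info mapinfo;
	int error;

	bzero(&mapinfo, sizeof(mapinfo));
	error = cam_periph_mapmem(ccb, &mapinfo, MAXPHYS);
	if (error != 0)
		return (error);
	error = cam_periph_runccb(ccb, NULL, 0, SF_RETRY_UA, NULL);
	cam_periph_unmapmem(ccb, &mapinfo);
	return (error);
}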
void
cam_periph_ccbwait(union ccb *ccb)
{
if ((ccb->ccb_h.pinfo.index != CAM_UNQUEUED_INDEX)
|| ((ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_INPROG))
xpt_path_sleep(ccb->ccb_h.path, &ccb->ccb_h.cbfcnp, PRIBIO,
"cbwait", 0);
}
int
cam_periph_ioctl(struct cam_periph *periph, u_long cmd, caddr_t addr,
int (*error_routine)(union ccb *ccb,
cam_flags camflags,
u_int32_t sense_flags))
{
union ccb *ccb;
int error;
int found;
error = found = 0;
switch(cmd){
case CAMGETPASSTHRU:
ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
xpt_setup_ccb(&ccb->ccb_h,
ccb->ccb_h.path,
CAM_PRIORITY_NORMAL);
ccb->ccb_h.func_code = XPT_GDEVLIST;
/*
* Basically, the point of this is that we go through
* the list of devices until we find a passthrough
* device. In the current version of the CAM code, the
* only way to determine what type of device we're dealing
* with is by its name.
*/
while (found == 0) {
ccb->cgdl.index = 0;
ccb->cgdl.status = CAM_GDEVLIST_MORE_DEVS;
while (ccb->cgdl.status == CAM_GDEVLIST_MORE_DEVS) {
/* we want the next device in the list */
xpt_action(ccb);
if (strncmp(ccb->cgdl.periph_name,
"pass", 4) == 0){
found = 1;
break;
}
}
if ((ccb->cgdl.status == CAM_GDEVLIST_LAST_DEVICE) &&
(found == 0)) {
ccb->cgdl.periph_name[0] = '\0';
ccb->cgdl.unit_number = 0;
break;
}
}
/* copy the result back out */
bcopy(ccb, addr, sizeof(union ccb));
/* and release the ccb */
xpt_release_ccb(ccb);
break;
default:
error = ENOTTY;
break;
}
return(error);
}
static void
cam_periph_done(struct cam_periph *periph, union ccb *done_ccb)
{
/* Caller will release the CCB */
wakeup(&done_ccb->ccb_h.cbfcnp);
}
int
cam_periph_runccb(union ccb *ccb,
int (*error_routine)(union ccb *ccb,
cam_flags camflags,
u_int32_t sense_flags),
cam_flags camflags, u_int32_t sense_flags,
struct devstat *ds)
{
struct bintime *starttime;
struct bintime ltime;
int error;
starttime = NULL;
xpt_path_assert(ccb->ccb_h.path, MA_OWNED);
/*
* If the user has supplied a stats structure, and if we understand
* this particular type of ccb, record the transaction start.
*/
if ((ds != NULL) && (ccb->ccb_h.func_code == XPT_SCSI_IO ||
ccb->ccb_h.func_code == XPT_ATA_IO)) {
starttime = &ltime;
binuptime(starttime);
devstat_start_transaction(ds, starttime);
}
ccb->ccb_h.cbfcnp = cam_periph_done;
xpt_action(ccb);
do {
cam_periph_ccbwait(ccb);
if ((ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP)
error = 0;
else if (error_routine != NULL)
error = (*error_routine)(ccb, camflags, sense_flags);
else
error = 0;
} while (error == ERESTART);
if ((ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
cam_release_devq(ccb->ccb_h.path,
/* relsim_flags */0,
/* openings */0,
/* timeout */0,
/* getcount_only */ FALSE);
ccb->ccb_h.status &= ~CAM_DEV_QFRZN;
}
if (ds != NULL) {
if (ccb->ccb_h.func_code == XPT_SCSI_IO) {
devstat_end_transaction(ds,
ccb->csio.dxfer_len - ccb->csio.resid,
ccb->csio.tag_action & 0x3,
((ccb->ccb_h.flags & CAM_DIR_MASK) ==
CAM_DIR_NONE) ? DEVSTAT_NO_DATA :
(ccb->ccb_h.flags & CAM_DIR_OUT) ?
DEVSTAT_WRITE :
DEVSTAT_READ, NULL, starttime);
} else if (ccb->ccb_h.func_code == XPT_ATA_IO) {
devstat_end_transaction(ds,
ccb->ataio.dxfer_len - ccb->ataio.resid,
0, /* Not used in ATA */
((ccb->ccb_h.flags & CAM_DIR_MASK) ==
CAM_DIR_NONE) ? DEVSTAT_NO_DATA :
(ccb->ccb_h.flags & CAM_DIR_OUT) ?
DEVSTAT_WRITE :
DEVSTAT_READ, NULL, starttime);
}
}
return(error);
}
void
cam_freeze_devq(struct cam_path *path)
{
struct ccb_hdr ccb_h;
CAM_DEBUG(path, CAM_DEBUG_TRACE, ("cam_freeze_devq\n"));
xpt_setup_ccb(&ccb_h, path, /*priority*/1);
ccb_h.func_code = XPT_NOOP;
ccb_h.flags = CAM_DEV_QFREEZE;
xpt_action((union ccb *)&ccb_h);
}
u_int32_t
cam_release_devq(struct cam_path *path, u_int32_t relsim_flags,
u_int32_t openings, u_int32_t arg,
int getcount_only)
{
struct ccb_relsim crs;
CAM_DEBUG(path, CAM_DEBUG_TRACE, ("cam_release_devq(%u, %u, %u, %d)\n",
relsim_flags, openings, arg, getcount_only));
xpt_setup_ccb(&crs.ccb_h, path, CAM_PRIORITY_NORMAL);
crs.ccb_h.func_code = XPT_REL_SIMQ;
crs.ccb_h.flags = getcount_only ? CAM_DEV_QFREEZE : 0;
crs.release_flags = relsim_flags;
crs.openings = openings;
crs.release_timeout = arg;
xpt_action((union ccb *)&crs);
return (crs.qfrozen_cnt);
}
#define saved_ccb_ptr ppriv_ptr0
static void
camperiphdone(struct cam_periph *periph, union ccb *done_ccb)
{
union ccb *saved_ccb;
cam_status status;
struct scsi_start_stop_unit *scsi_cmd;
int error_code, sense_key, asc, ascq;
scsi_cmd = (struct scsi_start_stop_unit *)
&done_ccb->csio.cdb_io.cdb_bytes;
status = done_ccb->ccb_h.status;
if ((status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
if (scsi_extract_sense_ccb(done_ccb,
&error_code, &sense_key, &asc, &ascq)) {
/*
* If the error is "invalid field in CDB",
* and the load/eject flag is set, turn the
* flag off and try again. This is just in
* case the drive in question barfs on the
* load eject flag. The CAM code should set
* the load/eject flag by default for
* removable media.
*/
if ((scsi_cmd->opcode == START_STOP_UNIT) &&
((scsi_cmd->how & SSS_LOEJ) != 0) &&
(asc == 0x24) && (ascq == 0x00)) {
scsi_cmd->how &= ~SSS_LOEJ;
if (status & CAM_DEV_QFRZN) {
cam_release_devq(done_ccb->ccb_h.path,
0, 0, 0, 0);
done_ccb->ccb_h.status &=
~CAM_DEV_QFRZN;
}
xpt_action(done_ccb);
goto out;
}
}
if (cam_periph_error(done_ccb,
0, SF_RETRY_UA | SF_NO_PRINT, NULL) == ERESTART)
goto out;
if (done_ccb->ccb_h.status & CAM_DEV_QFRZN) {
cam_release_devq(done_ccb->ccb_h.path, 0, 0, 0, 0);
done_ccb->ccb_h.status &= ~CAM_DEV_QFRZN;
}
} else {
/*
* If we have successfully taken a device from the not
* ready to ready state, re-scan the device and re-get
* the inquiry information. Many devices (mostly disks)
* don't properly report their inquiry information unless
* they are spun up.
*/
if (scsi_cmd->opcode == START_STOP_UNIT)
xpt_async(AC_INQ_CHANGED, done_ccb->ccb_h.path, NULL);
}
/*
* Perform the final retry with the original CCB so that final
* error processing is performed by the owner of the CCB.
*/
saved_ccb = (union ccb *)done_ccb->ccb_h.saved_ccb_ptr;
bcopy(saved_ccb, done_ccb, sizeof(*done_ccb));
xpt_free_ccb(saved_ccb);
if (done_ccb->ccb_h.cbfcnp != camperiphdone)
periph->flags &= ~CAM_PERIPH_RECOVERY_INPROG;
xpt_action(done_ccb);
out:
/* Drop freeze taken due to CAM_DEV_QFREEZE flag set. */
cam_release_devq(done_ccb->ccb_h.path, 0, 0, 0, 0);
}
/*
* Generic Async Event handler. Peripheral drivers usually
* filter out the events that require personal attention,
* and leave the rest to this function.
*/
void
cam_periph_async(struct cam_periph *periph, u_int32_t code,
struct cam_path *path, void *arg)
{
switch (code) {
case AC_LOST_DEVICE:
cam_periph_invalidate(periph);
break;
default:
break;
}
}
void
cam_periph_bus_settle(struct cam_periph *periph, u_int bus_settle)
{
struct ccb_getdevstats cgds;
xpt_setup_ccb(&cgds.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
cgds.ccb_h.func_code = XPT_GDEV_STATS;
xpt_action((union ccb *)&cgds);
cam_periph_freeze_after_event(periph, &cgds.last_reset, bus_settle);
}
void
cam_periph_freeze_after_event(struct cam_periph *periph,
struct timeval* event_time, u_int duration_ms)
{
struct timeval delta;
struct timeval duration_tv;
if (!timevalisset(event_time))
return;
microtime(&delta);
timevalsub(&delta, event_time);
duration_tv.tv_sec = duration_ms / 1000;
duration_tv.tv_usec = (duration_ms % 1000) * 1000;
if (timevalcmp(&delta, &duration_tv, <)) {
timevalsub(&duration_tv, &delta);
duration_ms = duration_tv.tv_sec * 1000;
duration_ms += duration_tv.tv_usec / 1000;
cam_freeze_devq(periph->path);
cam_release_devq(periph->path,
RELSIM_RELEASE_AFTER_TIMEOUT,
/*reduction*/0,
/*timeout*/duration_ms,
/*getcount_only*/0);
}
}
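/*
 * Editor's note, worked example of the arithmetic above: with a
 * duration_ms of 1000 and an event that happened 800ms ago, delta
 * (800ms) is less than duration_tv (1000ms), so the queue is frozen
 * and released after the remaining 1000 - 800 = 200ms.
 */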
static int
camperiphscsistatuserror(union ccb *ccb, union ccb **orig_ccb,
cam_flags camflags, u_int32_t sense_flags,
int *openings, u_int32_t *relsim_flags,
u_int32_t *timeout, u_int32_t *action, const char **action_string)
{
int error;
switch (ccb->csio.scsi_status) {
case SCSI_STATUS_OK:
case SCSI_STATUS_COND_MET:
case SCSI_STATUS_INTERMED:
case SCSI_STATUS_INTERMED_COND_MET:
error = 0;
break;
case SCSI_STATUS_CMD_TERMINATED:
case SCSI_STATUS_CHECK_COND:
error = camperiphscsisenseerror(ccb, orig_ccb,
camflags,
sense_flags,
openings,
relsim_flags,
timeout,
action,
action_string);
break;
case SCSI_STATUS_QUEUE_FULL:
{
/* no decrement */
struct ccb_getdevstats cgds;
/*
* First off, find out what the current
* transaction counts are.
*/
xpt_setup_ccb(&cgds.ccb_h,
ccb->ccb_h.path,
CAM_PRIORITY_NORMAL);
cgds.ccb_h.func_code = XPT_GDEV_STATS;
xpt_action((union ccb *)&cgds);
/*
* If we were the only transaction active, treat
* the QUEUE FULL as if it were a BUSY condition.
*/
if (cgds.dev_active != 0) {
int total_openings;
/*
* Reduce the number of openings to one
* less than the number it took to get a
* queue full, bounded below by the
* minimum allowed tag count for this
* device.
*/
total_openings = cgds.dev_active + cgds.dev_openings;
*openings = cgds.dev_active;
if (*openings < cgds.mintags)
*openings = cgds.mintags;
if (*openings < total_openings)
*relsim_flags = RELSIM_ADJUST_OPENINGS;
else {
/*
* Some devices report queue full for
* temporary resource shortages. For
* this reason, we allow a minimum
* tag count to be entered via a
* quirk entry to prevent the queue
* count on these devices from falling
* to a pessimistically low value. We
* still wait for the next successful
* completion, however, before queueing
* more transactions to the device.
*/
*relsim_flags = RELSIM_RELEASE_AFTER_CMDCMPLT;
}
*timeout = 0;
error = ERESTART;
*action &= ~SSQ_PRINT_SENSE;
break;
}
/* FALLTHROUGH */
}
case SCSI_STATUS_BUSY:
/*
* Restart the queue after either another
* command completes or a 1 second timeout.
*/
if ((sense_flags & SF_RETRY_BUSY) != 0 ||
(ccb->ccb_h.retry_count--) > 0) {
error = ERESTART;
*relsim_flags = RELSIM_RELEASE_AFTER_TIMEOUT
| RELSIM_RELEASE_AFTER_CMDCMPLT;
*timeout = 1000;
} else {
error = EIO;
}
break;
case SCSI_STATUS_RESERV_CONFLICT:
default:
error = EIO;
break;
}
return (error);
}
static int
camperiphscsisenseerror(union ccb *ccb, union ccb **orig,
cam_flags camflags, u_int32_t sense_flags,
int *openings, u_int32_t *relsim_flags,
u_int32_t *timeout, u_int32_t *action, const char **action_string)
{
struct cam_periph *periph;
union ccb *orig_ccb = ccb;
int error, recoveryccb;
periph = xpt_path_periph(ccb->ccb_h.path);
recoveryccb = (ccb->ccb_h.cbfcnp == camperiphdone);
if ((periph->flags & CAM_PERIPH_RECOVERY_INPROG) && !recoveryccb) {
/*
* If error recovery is already in progress, don't attempt
* to process this error, but requeue it unconditionally
* and attempt to process it once error recovery has
* completed. This failed command is probably related to
* the error that caused the currently active error recovery
* action so our current recovery efforts should also
* address this command. Be aware that the error recovery
* code assumes that only one recovery action is in progress
* on a particular peripheral instance at any given time
* (e.g. only one saved CCB for error recovery) so it is
* imperative that we don't violate this assumption.
*/
error = ERESTART;
*action &= ~SSQ_PRINT_SENSE;
} else {
scsi_sense_action err_action;
struct ccb_getdev cgd;
/*
* Grab the inquiry data for this device.
*/
xpt_setup_ccb(&cgd.ccb_h, ccb->ccb_h.path, CAM_PRIORITY_NORMAL);
cgd.ccb_h.func_code = XPT_GDEV_TYPE;
xpt_action((union ccb *)&cgd);
err_action = scsi_error_action(&ccb->csio, &cgd.inq_data,
sense_flags);
error = err_action & SS_ERRMASK;
/*
* Do not autostart sequential access devices
* to avoid unexpected tape loading.
*/
if ((err_action & SS_MASK) == SS_START &&
SID_TYPE(&cgd.inq_data) == T_SEQUENTIAL) {
*action_string = "Will not autostart a "
"sequential access device";
goto sense_error_done;
}
/*
* Avoid recovery recursion if recovery action is the same.
*/
if ((err_action & SS_MASK) >= SS_START && recoveryccb) {
if (((err_action & SS_MASK) == SS_START &&
ccb->csio.cdb_io.cdb_bytes[0] == START_STOP_UNIT) ||
((err_action & SS_MASK) == SS_TUR &&
(ccb->csio.cdb_io.cdb_bytes[0] == TEST_UNIT_READY))) {
err_action = SS_RETRY|SSQ_DECREMENT_COUNT|EIO;
*relsim_flags = RELSIM_RELEASE_AFTER_TIMEOUT;
*timeout = 500;
}
}
/*
* If the recovery action will consume a retry,
* make sure we actually have retries available.
*/
if ((err_action & SSQ_DECREMENT_COUNT) != 0) {
if (ccb->ccb_h.retry_count > 0 &&
(periph->flags & CAM_PERIPH_INVALID) == 0)
ccb->ccb_h.retry_count--;
else {
*action_string = "Retries exhausted";
goto sense_error_done;
}
}
if ((err_action & SS_MASK) >= SS_START) {
/*
* Do common portions of commands that
* use recovery CCBs.
*/
orig_ccb = xpt_alloc_ccb_nowait();
if (orig_ccb == NULL) {
*action_string = "Can't allocate recovery CCB";
goto sense_error_done;
}
/*
* Clear freeze flag for original request here, as
* this freeze will be dropped as part of ERESTART.
*/
ccb->ccb_h.status &= ~CAM_DEV_QFRZN;
bcopy(ccb, orig_ccb, sizeof(*orig_ccb));
}
switch (err_action & SS_MASK) {
case SS_NOP:
*action_string = "No recovery action needed";
error = 0;
break;
case SS_RETRY:
*action_string = "Retrying command (per sense data)";
error = ERESTART;
break;
case SS_FAIL:
*action_string = "Unretryable error";
break;
case SS_START:
{
int le;
/*
* Send a start unit command to the device, and
* then retry the command.
*/
*action_string = "Attempting to start unit";
periph->flags |= CAM_PERIPH_RECOVERY_INPROG;
/*
* Check for removable media and set
* load/eject flag appropriately.
*/
if (SID_IS_REMOVABLE(&cgd.inq_data))
le = TRUE;
else
le = FALSE;
scsi_start_stop(&ccb->csio,
/*retries*/1,
camperiphdone,
MSG_SIMPLE_Q_TAG,
/*start*/TRUE,
/*load/eject*/le,
/*immediate*/FALSE,
SSD_FULL_SIZE,
/*timeout*/50000);
break;
}
case SS_TUR:
{
/*
* Send a Test Unit Ready to the device.
* If the 'many' flag is set, we send 120
* test unit ready commands, one every half
* second. Otherwise, we just send one TUR.
* We only want to do this if the retry
* count has not been exhausted.
*/
int retries;
if ((err_action & SSQ_MANY) != 0) {
*action_string = "Polling device for readiness";
retries = 120;
} else {
*action_string = "Testing device for readiness";
retries = 1;
}
periph->flags |= CAM_PERIPH_RECOVERY_INPROG;
scsi_test_unit_ready(&ccb->csio,
retries,
camperiphdone,
MSG_SIMPLE_Q_TAG,
SSD_FULL_SIZE,
/*timeout*/5000);
/*
* Accomplish our 500ms delay by deferring
* the release of our device queue appropriately.
*/
*relsim_flags = RELSIM_RELEASE_AFTER_TIMEOUT;
*timeout = 500;
break;
}
default:
panic("Unhandled error action %x", err_action);
}
if ((err_action & SS_MASK) >= SS_START) {
/*
* Drop the priority, so that the recovery
* CCB is the first to execute. Freeze the queue
* after this command is sent so that we can
* restore the old csio and have it queued in
* the proper order before we release normal
* transactions to the device.
*/
ccb->ccb_h.pinfo.priority--;
ccb->ccb_h.flags |= CAM_DEV_QFREEZE;
ccb->ccb_h.saved_ccb_ptr = orig_ccb;
error = ERESTART;
*orig = orig_ccb;
}
sense_error_done:
*action = err_action;
}
return (error);
}
/*
* Generic error handler. Peripheral drivers usually filter
* out the errors that they handle in a unique manner, then
* call this function.
*/
int
cam_periph_error(union ccb *ccb, cam_flags camflags,
u_int32_t sense_flags, union ccb *save_ccb)
{
struct cam_path *newpath;
union ccb *orig_ccb, *scan_ccb;
struct cam_periph *periph;
const char *action_string;
cam_status status;
int frozen, error, openings, devctl_err;
u_int32_t action, relsim_flags, timeout;
action = SSQ_PRINT_SENSE;
periph = xpt_path_periph(ccb->ccb_h.path);
action_string = NULL;
status = ccb->ccb_h.status;
frozen = (status & CAM_DEV_QFRZN) != 0;
status &= CAM_STATUS_MASK;
devctl_err = openings = relsim_flags = timeout = 0;
orig_ccb = ccb;
/* Filter the errors that should be reported via devctl */
switch (ccb->ccb_h.status & CAM_STATUS_MASK) {
case CAM_CMD_TIMEOUT:
case CAM_REQ_ABORTED:
case CAM_REQ_CMP_ERR:
case CAM_REQ_TERMIO:
case CAM_UNREC_HBA_ERROR:
case CAM_DATA_RUN_ERR:
case CAM_SCSI_STATUS_ERROR:
case CAM_ATA_STATUS_ERROR:
case CAM_SMP_STATUS_ERROR:
devctl_err++;
break;
default:
break;
}
switch (status) {
case CAM_REQ_CMP:
error = 0;
action &= ~SSQ_PRINT_SENSE;
break;
case CAM_SCSI_STATUS_ERROR:
error = camperiphscsistatuserror(ccb, &orig_ccb,
camflags, sense_flags, &openings, &relsim_flags,
&timeout, &action, &action_string);
break;
case CAM_AUTOSENSE_FAIL:
error = EIO; /* we have to kill the command */
break;
case CAM_UA_ABORT:
case CAM_UA_TERMIO:
case CAM_MSG_REJECT_REC:
/* XXX Don't know that these are correct */
error = EIO;
break;
case CAM_SEL_TIMEOUT:
if ((camflags & CAM_RETRY_SELTO) != 0) {
if (ccb->ccb_h.retry_count > 0 &&
(periph->flags & CAM_PERIPH_INVALID) == 0) {
ccb->ccb_h.retry_count--;
error = ERESTART;
/*
* Wait a bit to give the device
* time to recover before we try again.
*/
relsim_flags = RELSIM_RELEASE_AFTER_TIMEOUT;
timeout = periph_selto_delay;
break;
}
action_string = "Retries exhausted";
}
/* FALLTHROUGH */
case CAM_DEV_NOT_THERE:
error = ENXIO;
action = SSQ_LOST;
break;
case CAM_REQ_INVALID:
case CAM_PATH_INVALID:
case CAM_NO_HBA:
case CAM_PROVIDE_FAIL:
case CAM_REQ_TOO_BIG:
case CAM_LUN_INVALID:
case CAM_TID_INVALID:
case CAM_FUNC_NOTAVAIL:
error = EINVAL;
break;
case CAM_SCSI_BUS_RESET:
case CAM_BDR_SENT:
/*
* Commands that repeatedly time out and cause these
* kinds of error recovery actions should return
* CAM_CMD_TIMEOUT, which allows us to safely assume
* that this command was an innocent bystander to
* these events and should be unconditionally
* retried.
*/
case CAM_REQUEUE_REQ:
/* Unconditional requeue if device is still there */
if (periph->flags & CAM_PERIPH_INVALID) {
action_string = "Periph was invalidated";
error = EIO;
} else if (sense_flags & SF_NO_RETRY) {
error = EIO;
action_string = "Retry was blocked";
} else {
error = ERESTART;
action &= ~SSQ_PRINT_SENSE;
}
break;
case CAM_RESRC_UNAVAIL:
/* Wait a bit for the resource shortage to abate. */
timeout = periph_noresrc_delay;
/* FALLTHROUGH */
case CAM_BUSY:
if (timeout == 0) {
/* Wait a bit for the busy condition to abate. */
timeout = periph_busy_delay;
}
relsim_flags = RELSIM_RELEASE_AFTER_TIMEOUT;
/* FALLTHROUGH */
case CAM_ATA_STATUS_ERROR:
case CAM_REQ_CMP_ERR:
case CAM_CMD_TIMEOUT:
case CAM_UNEXP_BUSFREE:
case CAM_UNCOR_PARITY:
case CAM_DATA_RUN_ERR:
default:
if (periph->flags & CAM_PERIPH_INVALID) {
error = EIO;
action_string = "Periph was invalidated";
} else if (ccb->ccb_h.retry_count == 0) {
error = EIO;
action_string = "Retries exhausted";
} else if (sense_flags & SF_NO_RETRY) {
error = EIO;
action_string = "Retry was blocked";
} else {
ccb->ccb_h.retry_count--;
error = ERESTART;
}
break;
}
if ((sense_flags & SF_PRINT_ALWAYS) ||
CAM_DEBUGGED(ccb->ccb_h.path, CAM_DEBUG_INFO))
action |= SSQ_PRINT_SENSE;
else if (sense_flags & SF_NO_PRINT)
action &= ~SSQ_PRINT_SENSE;
if ((action & SSQ_PRINT_SENSE) != 0)
cam_error_print(orig_ccb, CAM_ESF_ALL, CAM_EPF_ALL);
if (error != 0 && (action & SSQ_PRINT_SENSE) != 0) {
if (error != ERESTART) {
if (action_string == NULL)
action_string = "Unretryable error";
xpt_print(ccb->ccb_h.path, "Error %d, %s\n",
error, action_string);
} else if (action_string != NULL)
xpt_print(ccb->ccb_h.path, "%s\n", action_string);
else
xpt_print(ccb->ccb_h.path, "Retrying command\n");
}
if (devctl_err)
cam_periph_devctl_notify(orig_ccb);
if ((action & SSQ_LOST) != 0) {
lun_id_t lun_id;
/*
* For a selection timeout, we consider all of the LUNs on
* the target to be gone. If the status is CAM_DEV_NOT_THERE,
* then we only get rid of the device(s) specified by the
* path in the original CCB.
*/
if (status == CAM_SEL_TIMEOUT)
lun_id = CAM_LUN_WILDCARD;
else
lun_id = xpt_path_lun_id(ccb->ccb_h.path);
/* Should we do more if we can't create the path?? */
if (xpt_create_path(&newpath, periph,
xpt_path_path_id(ccb->ccb_h.path),
xpt_path_target_id(ccb->ccb_h.path),
lun_id) == CAM_REQ_CMP) {
/*
* Let peripheral drivers know that this
* device has gone away.
*/
xpt_async(AC_LOST_DEVICE, newpath, NULL);
xpt_free_path(newpath);
}
}
/* Broadcast UNIT ATTENTIONs to all periphs. */
if ((action & SSQ_UA) != 0)
xpt_async(AC_UNIT_ATTENTION, orig_ccb->ccb_h.path, orig_ccb);
/* Rescan target on "Reported LUNs data has changed" */
if ((action & SSQ_RESCAN) != 0) {
if (xpt_create_path(&newpath, NULL,
xpt_path_path_id(ccb->ccb_h.path),
xpt_path_target_id(ccb->ccb_h.path),
CAM_LUN_WILDCARD) == CAM_REQ_CMP) {
scan_ccb = xpt_alloc_ccb_nowait();
if (scan_ccb != NULL) {
scan_ccb->ccb_h.path = newpath;
scan_ccb->ccb_h.func_code = XPT_SCAN_TGT;
scan_ccb->crcn.flags = 0;
xpt_rescan(scan_ccb);
} else {
xpt_print(newpath,
"Can't allocate CCB to rescan target\n");
xpt_free_path(newpath);
}
}
}
/* Attempt a retry */
if (error == ERESTART || error == 0) {
if (frozen != 0)
ccb->ccb_h.status &= ~CAM_DEV_QFRZN;
if (error == ERESTART)
xpt_action(ccb);
if (frozen != 0)
cam_release_devq(ccb->ccb_h.path,
relsim_flags,
openings,
timeout,
/*getcount_only*/0);
}
return (error);
}
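/*
 * Editor's sketch (not part of this change): a peripheral driver's
 * completion routine typically feeds failed CCBs through the recovery
 * logic above and stops processing when ERESTART indicates that the
 * CCB was already resubmitted. The driver name is hypothetical, and
 * the exact cam_periph_error() argument list varies across FreeBSD
 * versions.
 */
#if 0
static void
xxdone(struct cam_periph *periph, union ccb *done_ccb)
{
	if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
		if (cam_periph_error(done_ccb, 0, SF_RETRY_UA) == ERESTART)
			return;	/* xpt_action() already requeued the CCB */
	}
	/* Normal completion handling, then release the CCB. */
	xpt_release_ccb(done_ccb);
}
#endif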
#define CAM_PERIPH_DEVD_MSG_SIZE 256
static void
cam_periph_devctl_notify(union ccb *ccb)
{
struct cam_periph *periph;
struct ccb_getdev *cgd;
struct sbuf sb;
int serr, sk, asc, ascq;
char *sbmsg, *type;
sbmsg = malloc(CAM_PERIPH_DEVD_MSG_SIZE, M_CAMPERIPH, M_NOWAIT);
if (sbmsg == NULL)
return;
sbuf_new(&sb, sbmsg, CAM_PERIPH_DEVD_MSG_SIZE, SBUF_FIXEDLEN);
periph = xpt_path_periph(ccb->ccb_h.path);
sbuf_printf(&sb, "device=%s%d ", periph->periph_name,
periph->unit_number);
sbuf_printf(&sb, "serial=\"");
if ((cgd = (struct ccb_getdev *)xpt_alloc_ccb_nowait()) != NULL) {
xpt_setup_ccb(&cgd->ccb_h, ccb->ccb_h.path,
CAM_PRIORITY_NORMAL);
cgd->ccb_h.func_code = XPT_GDEV_TYPE;
xpt_action((union ccb *)cgd);
if (cgd->ccb_h.status == CAM_REQ_CMP)
sbuf_bcat(&sb, cgd->serial_num, cgd->serial_num_len);
xpt_free_ccb((union ccb *)cgd);
}
sbuf_printf(&sb, "\" ");
sbuf_printf(&sb, "cam_status=\"0x%x\" ", ccb->ccb_h.status);
switch (ccb->ccb_h.status & CAM_STATUS_MASK) {
case CAM_CMD_TIMEOUT:
sbuf_printf(&sb, "timeout=%d ", ccb->ccb_h.timeout);
type = "timeout";
break;
case CAM_SCSI_STATUS_ERROR:
sbuf_printf(&sb, "scsi_status=%d ", ccb->csio.scsi_status);
if (scsi_extract_sense_ccb(ccb, &serr, &sk, &asc, &ascq))
sbuf_printf(&sb, "scsi_sense=\"%02x %02x %02x %02x\" ",
serr, sk, asc, ascq);
type = "error";
break;
case CAM_ATA_STATUS_ERROR:
sbuf_printf(&sb, "RES=\"");
ata_res_sbuf(&ccb->ataio.res, &sb);
sbuf_printf(&sb, "\" ");
type = "error";
break;
default:
type = "error";
break;
}
if (ccb->ccb_h.func_code == XPT_SCSI_IO) {
sbuf_printf(&sb, "CDB=\"");
if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0)
scsi_cdb_sbuf(ccb->csio.cdb_io.cdb_ptr, &sb);
else
scsi_cdb_sbuf(ccb->csio.cdb_io.cdb_bytes, &sb);
sbuf_printf(&sb, "\" ");
} else if (ccb->ccb_h.func_code == XPT_ATA_IO) {
sbuf_printf(&sb, "ACB=\"");
ata_cmd_sbuf(&ccb->ataio.cmd, &sb);
sbuf_printf(&sb, "\" ");
}
if (sbuf_finish(&sb) == 0)
devctl_notify("CAM", "periph", type, sbuf_data(&sb));
sbuf_delete(&sb);
free(sbmsg, M_CAMPERIPH);
}
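/*
 * Editor's note (illustrative, not part of this change): the sbuf
 * assembled above is passed to devctl_notify() under system "CAM" and
 * subsystem "periph", so userland receives an event along the lines of
 *
 *   !system=CAM subsystem=periph type=error device=da0 serial="..."
 *       cam_status="0x4c" scsi_status=2 scsi_sense="70 02 3a 00" ...
 *
 * (field values here are made up), which devd(8) can match with a rule
 * such as:
 *
 *   notify 10 {
 *       match "system"    "CAM";
 *       match "subsystem" "periph";
 *       match "type"      "error";
 *       action "logger CAM periph error on $device";
 *   };
 */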
Index: head/sys/dev/bhnd/bhnd.c
===================================================================
--- head/sys/dev/bhnd/bhnd.c (revision 300049)
+++ head/sys/dev/bhnd/bhnd.c (revision 300050)
@@ -1,667 +1,667 @@
/*-
* Copyright (c) 2015 Landon Fuller <landon@landonf.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* similar to the "NO WARRANTY" disclaimer below ("Disclaimer") and any
* redistribution must be conditioned upon including a substantially
* similar Disclaimer requirement for further binary redistribution.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTIBILITY
* AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY,
* OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
* IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGES.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* Broadcom Home Networking Division (HND) Bus Driver.
*
* The Broadcom HND family of devices consists of both SoCs and host-connected
* networking chipsets containing a common family of Broadcom IP cores,
* including integrated MIPS and/or ARM cores.
*
* HND devices expose a nearly identical interface whether accessed over a
* native SoC interconnect or connected via a host interface such as
* PCIe. As a result, the majority of hardware support code should be re-usable
* across host drivers for HND networking chipsets, as well as FreeBSD support
* for Broadcom MIPS/ARM HND SoCs.
*
* Earlier HND models used the siba(4) on-chip interconnect, while later models
* use bcma(4); the programming model is almost entirely independent
* of the actual underlying interconnect.
*/
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/module.h>
#include <sys/systm.h>
#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
#include "bhnd.h"
#include "bhndvar.h"
MALLOC_DEFINE(M_BHND, "bhnd", "bhnd bus data structures");
/**
* bhnd_generic_probe_nomatch() reporting configuration.
*/
static const struct bhnd_nomatch {
uint16_t vendor; /**< core designer */
uint16_t device; /**< core id */
bool if_verbose; /**< print when bootverbose is set. */
} bhnd_nomatch_table[] = {
{ BHND_MFGID_ARM, BHND_COREID_OOB_ROUTER, true },
{ BHND_MFGID_ARM, BHND_COREID_EROM, true },
{ BHND_MFGID_ARM, BHND_COREID_PL301, true },
{ BHND_MFGID_ARM, BHND_COREID_APB_BRIDGE, true },
{ BHND_MFGID_ARM, BHND_COREID_AXI_UNMAPPED, false },
{ BHND_MFGID_INVALID, BHND_COREID_INVALID, false }
};
static int compare_ascending_probe_order(const void *lhs,
const void *rhs);
static int compare_descending_probe_order(const void *lhs,
const void *rhs);
/**
* Default bhnd(4) bus driver implementation of DEVICE_ATTACH().
*
* This implementation calls device_probe_and_attach() for each of the device's
* children, in bhnd probe order.
*/
int
bhnd_generic_attach(device_t dev)
{
device_t *devs;
int ndevs;
int error;
if (device_is_attached(dev))
return (EBUSY);
if ((error = device_get_children(dev, &devs, &ndevs)))
return (error);
qsort(devs, ndevs, sizeof(*devs), compare_ascending_probe_order);
for (int i = 0; i < ndevs; i++) {
device_t child = devs[i];
device_probe_and_attach(child);
}
free(devs, M_TEMP);
return (0);
}
/**
* Default bhnd(4) bus driver implementation of DEVICE_DETACH().
*
- * This implementation calls device_detach() for each of the the device's
+ * This implementation calls device_detach() for each of the device's
* children, in reverse bhnd probe order, terminating if any call to
* device_detach() fails.
*/
int
bhnd_generic_detach(device_t dev)
{
device_t *devs;
int ndevs;
int error;
if (!device_is_attached(dev))
return (EBUSY);
if ((error = device_get_children(dev, &devs, &ndevs)))
return (error);
/* Detach in the reverse of attach order */
qsort(devs, ndevs, sizeof(*devs), compare_descending_probe_order);
for (int i = 0; i < ndevs; i++) {
device_t child = devs[i];
/* Terminate on first error */
if ((error = device_detach(child)))
goto cleanup;
}
cleanup:
free(devs, M_TEMP);
return (error);
}
/**
* Default bhnd(4) bus driver implementation of DEVICE_SHUTDOWN().
*
* This implementation calls device_shutdown() for each of the device's
* children, in reverse bhnd probe order, terminating if any call to
* device_shutdown() fails.
*/
int
bhnd_generic_shutdown(device_t dev)
{
device_t *devs;
int ndevs;
int error;
if (!device_is_attached(dev))
return (EBUSY);
if ((error = device_get_children(dev, &devs, &ndevs)))
return (error);
/* Shutdown in the reverse of attach order */
qsort(devs, ndevs, sizeof(*devs), compare_descending_probe_order);
for (int i = 0; i < ndevs; i++) {
device_t child = devs[i];
/* Terminate on first error */
if ((error = device_shutdown(child)))
goto cleanup;
}
cleanup:
free(devs, M_TEMP);
return (error);
}
/**
* Default bhnd(4) bus driver implementation of DEVICE_RESUME().
*
* This implementation calls BUS_RESUME_CHILD() for each of the device's
* children in bhnd probe order, terminating if any call to BUS_RESUME_CHILD()
* fails.
*/
int
bhnd_generic_resume(device_t dev)
{
device_t *devs;
int ndevs;
int error;
if (!device_is_attached(dev))
return (EBUSY);
if ((error = device_get_children(dev, &devs, &ndevs)))
return (error);
qsort(devs, ndevs, sizeof(*devs), compare_ascending_probe_order);
for (int i = 0; i < ndevs; i++) {
device_t child = devs[i];
/* Terminate on first error */
if ((error = BUS_RESUME_CHILD(device_get_parent(child), child)))
goto cleanup;
}
cleanup:
free(devs, M_TEMP);
return (error);
}
/**
* Default bhnd(4) bus driver implementation of DEVICE_SUSPEND().
*
* This implementation calls BUS_SUSPEND_CHILD() for each of the device's
* children in reverse bhnd probe order. If any call to BUS_SUSPEND_CHILD()
* fails, the suspend operation is terminated and any devices that were
* suspended are resumed immediately by calling their BUS_RESUME_CHILD()
* methods.
*/
int
bhnd_generic_suspend(device_t dev)
{
device_t *devs;
int ndevs;
int error;
if (!device_is_attached(dev))
return (EBUSY);
if ((error = device_get_children(dev, &devs, &ndevs)))
return (error);
/* Suspend in the reverse of attach order */
qsort(devs, ndevs, sizeof(*devs), compare_descending_probe_order);
for (int i = 0; i < ndevs; i++) {
device_t child = devs[i];
error = BUS_SUSPEND_CHILD(device_get_parent(child), child);
/* On error, resume suspended devices and then terminate */
if (error) {
for (int j = 0; j < i; j++) {
BUS_RESUME_CHILD(device_get_parent(devs[j]),
devs[j]);
}
goto cleanup;
}
}
cleanup:
free(devs, M_TEMP);
return (error);
}
/*
* Ascending comparison of bhnd devices by probe order.
*/
static int
compare_ascending_probe_order(const void *lhs, const void *rhs)
{
device_t ldev, rdev;
int lorder, rorder;
ldev = (*(const device_t *) lhs);
rdev = (*(const device_t *) rhs);
lorder = BHND_BUS_GET_PROBE_ORDER(device_get_parent(ldev), ldev);
rorder = BHND_BUS_GET_PROBE_ORDER(device_get_parent(rdev), rdev);
if (lorder < rorder) {
return (-1);
} else if (lorder > rorder) {
return (1);
} else {
return (0);
}
}
/*
* Descending comparison of bhnd devices by probe order.
*/
static int
compare_descending_probe_order(const void *lhs, const void *rhs)
{
return (compare_ascending_probe_order(rhs, lhs));
}
/**
* Default bhnd(4) bus driver implementation of BHND_BUS_GET_PROBE_ORDER().
*
* This implementation determines probe ordering based on the device's class
* and other properties, including whether the device is serving as a host
* bridge.
*/
int
bhnd_generic_get_probe_order(device_t dev, device_t child)
{
switch (bhnd_get_class(child)) {
case BHND_DEVCLASS_CC:
/* Must be early enough to provide NVRAM access to the
* host bridge */
return (BHND_PROBE_ROOT + BHND_PROBE_ORDER_FIRST);
case BHND_DEVCLASS_CC_B:
/* fall through */
case BHND_DEVCLASS_PMU:
return (BHND_PROBE_BUS + BHND_PROBE_ORDER_EARLY);
case BHND_DEVCLASS_SOC_ROUTER:
return (BHND_PROBE_BUS + BHND_PROBE_ORDER_LATE);
case BHND_DEVCLASS_SOC_BRIDGE:
return (BHND_PROBE_BUS + BHND_PROBE_ORDER_LAST);
case BHND_DEVCLASS_CPU:
return (BHND_PROBE_CPU + BHND_PROBE_ORDER_FIRST);
case BHND_DEVCLASS_RAM:
/* fall through */
case BHND_DEVCLASS_MEMC:
return (BHND_PROBE_CPU + BHND_PROBE_ORDER_EARLY);
case BHND_DEVCLASS_NVRAM:
return (BHND_PROBE_RESOURCE + BHND_PROBE_ORDER_EARLY);
case BHND_DEVCLASS_PCI:
case BHND_DEVCLASS_PCIE:
case BHND_DEVCLASS_PCCARD:
case BHND_DEVCLASS_ENET:
case BHND_DEVCLASS_ENET_MAC:
case BHND_DEVCLASS_ENET_PHY:
case BHND_DEVCLASS_WLAN:
case BHND_DEVCLASS_WLAN_MAC:
case BHND_DEVCLASS_WLAN_PHY:
case BHND_DEVCLASS_EROM:
case BHND_DEVCLASS_OTHER:
case BHND_DEVCLASS_INVALID:
if (bhnd_find_hostb_device(dev) == child)
return (BHND_PROBE_ROOT + BHND_PROBE_ORDER_EARLY);
return (BHND_PROBE_DEFAULT);
default:
return (BHND_PROBE_DEFAULT);
}
}
/**
* Default bhnd(4) bus driver implementation of BHND_BUS_IS_REGION_VALID().
*
* This implementation assumes that port and region numbers are 0-indexed and
* are allocated non-sparsely, using BHND_BUS_GET_PORT_COUNT() and
* BHND_BUS_GET_REGION_COUNT() to determine if @p port and @p region fall
* within the defined range.
*/
static bool
bhnd_generic_is_region_valid(device_t dev, device_t child,
bhnd_port_type type, u_int port, u_int region)
{
if (port >= bhnd_get_port_count(child, type))
return (false);
if (region >= bhnd_get_region_count(child, type, port))
return (false);
return (true);
}
/**
* Default bhnd(4) bus driver implementation of BUS_PRINT_CHILD().
*
* This implementation requests the device's struct resource_list via
* BUS_GET_RESOURCE_LIST.
*/
int
bhnd_generic_print_child(device_t dev, device_t child)
{
struct resource_list *rl;
int retval = 0;
retval += bus_print_child_header(dev, child);
rl = BUS_GET_RESOURCE_LIST(dev, child);
if (rl != NULL) {
retval += resource_list_print_type(rl, "mem", SYS_RES_MEMORY,
"%#jx");
}
retval += printf(" at core %u", bhnd_get_core_index(child));
retval += bus_print_child_domain(dev, child);
retval += bus_print_child_footer(dev, child);
return (retval);
}
/**
* Default bhnd(4) bus driver implementation of BUS_PROBE_NOMATCH().
*
* This implementation requests the device's struct resource_list via
* BUS_GET_RESOURCE_LIST.
*/
void
bhnd_generic_probe_nomatch(device_t dev, device_t child)
{
struct resource_list *rl;
const struct bhnd_nomatch *nm;
bool report;
/* Fetch reporting configuration for this device */
report = true;
for (nm = bhnd_nomatch_table; nm->device != BHND_COREID_INVALID; nm++) {
if (nm->vendor != bhnd_get_vendor(child))
continue;
if (nm->device != bhnd_get_device(child))
continue;
report = false;
if (bootverbose && nm->if_verbose)
report = true;
break;
}
if (!report)
return;
/* Print the non-matched device info */
device_printf(dev, "<%s %s>", bhnd_get_vendor_name(child),
bhnd_get_device_name(child));
rl = BUS_GET_RESOURCE_LIST(dev, child);
if (rl != NULL)
resource_list_print_type(rl, "mem", SYS_RES_MEMORY, "%#jx");
printf(" at core %u (no driver attached)\n",
bhnd_get_core_index(child));
}
/**
* Default implementation of BUS_CHILD_PNPINFO_STR().
*/
static int
bhnd_child_pnpinfo_str(device_t dev, device_t child, char *buf,
size_t buflen)
{
if (device_get_parent(child) != dev) {
return (BUS_CHILD_PNPINFO_STR(device_get_parent(dev), child,
buf, buflen));
}
snprintf(buf, buflen, "vendor=0x%hx device=0x%hx rev=0x%hhx",
bhnd_get_vendor(child), bhnd_get_device(child),
bhnd_get_hwrev(child));
return (0);
}
/**
* Default implementation of BUS_CHILD_LOCATION_STR().
*/
static int
bhnd_child_location_str(device_t dev, device_t child, char *buf,
size_t buflen)
{
bhnd_addr_t addr;
bhnd_size_t size;
if (device_get_parent(child) != dev) {
return (BUS_CHILD_LOCATION_STR(device_get_parent(dev), child,
buf, buflen));
}
if (bhnd_get_region_addr(child, BHND_PORT_DEVICE, 0, 0, &addr, &size)) {
/* No device default port/region */
if (buflen > 0)
*buf = '\0';
return (0);
}
snprintf(buf, buflen, "port0.0=0x%llx", (unsigned long long) addr);
return (0);
}
/**
* Helper function for implementing BUS_SUSPEND_CHILD().
*
* TODO: Power management
*
* If @p child is not a direct child of @p dev, suspension is delegated to
* the @p dev parent.
*/
int
bhnd_generic_suspend_child(device_t dev, device_t child)
{
if (device_get_parent(child) != dev)
BUS_SUSPEND_CHILD(device_get_parent(dev), child);
return bus_generic_suspend_child(dev, child);
}
/**
* Helper function for implementing BUS_RESUME_CHILD().
*
* TODO: Power management
*
* If @p child is not a direct child of @p dev, resumption is delegated to
* the @p dev parent.
*/
int
bhnd_generic_resume_child(device_t dev, device_t child)
{
if (device_get_parent(child) != dev)
BUS_RESUME_CHILD(device_get_parent(dev), child);
return bus_generic_resume_child(dev, child);
}
/*
* Delegate all indirect I/O to the parent device. When inherited by
* non-bridged bus implementations, resources will never be marked as
* indirect, and these methods should never be called.
*/
#define BHND_IO_READ(_type, _name, _method) \
static _type \
bhnd_read_ ## _name (device_t dev, device_t child, \
struct bhnd_resource *r, bus_size_t offset) \
{ \
return (BHND_BUS_READ_ ## _method( \
device_get_parent(dev), child, r, offset)); \
}
#define BHND_IO_WRITE(_type, _name, _method) \
static void \
bhnd_write_ ## _name (device_t dev, device_t child, \
struct bhnd_resource *r, bus_size_t offset, _type value) \
{ \
return (BHND_BUS_WRITE_ ## _method( \
device_get_parent(dev), child, r, offset, \
value)); \
}
#define BHND_IO_MULTI(_type, _rw, _name, _method) \
static void \
bhnd_ ## _rw ## _multi_ ## _name (device_t dev, device_t child, \
struct bhnd_resource *r, bus_size_t offset, _type *datap, \
bus_size_t count) \
{ \
BHND_BUS_ ## _method(device_get_parent(dev), child, r, \
offset, datap, count); \
}
#define BHND_IO_METHODS(_type, _size) \
BHND_IO_READ(_type, _size, _size) \
BHND_IO_WRITE(_type, _size, _size) \
\
BHND_IO_READ(_type, stream_ ## _size, STREAM_ ## _size) \
BHND_IO_WRITE(_type, stream_ ## _size, STREAM_ ## _size) \
\
BHND_IO_MULTI(_type, read, _size, READ_MULTI_ ## _size) \
BHND_IO_MULTI(_type, write, _size, WRITE_MULTI_ ## _size) \
\
BHND_IO_MULTI(_type, read, stream_ ## _size, \
READ_MULTI_STREAM_ ## _size) \
BHND_IO_MULTI(_type, write, stream_ ## _size, \
WRITE_MULTI_STREAM_ ## _size) \
BHND_IO_METHODS(uint8_t, 1);
BHND_IO_METHODS(uint16_t, 2);
BHND_IO_METHODS(uint32_t, 4);
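/*
 * Editor's note (illustrative): each BHND_IO_METHODS() invocation above
 * stamps out a family of thin wrappers. The read half of
 * BHND_IO_METHODS(uint8_t, 1), for example, expands to roughly:
 *
 *   static uint8_t
 *   bhnd_read_1(device_t dev, device_t child,
 *       struct bhnd_resource *r, bus_size_t offset)
 *   {
 *       return (BHND_BUS_READ_1(device_get_parent(dev), child, r,
 *           offset));
 *   }
 *
 * so every indirect I/O request is simply delegated to the parent bus
 * device.
 */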
static void
bhnd_barrier(device_t dev, device_t child, struct bhnd_resource *r,
bus_size_t offset, bus_size_t length, int flags)
{
BHND_BUS_BARRIER(device_get_parent(dev), child, r, offset, length,
flags);
}
static device_method_t bhnd_methods[] = {
/* Device interface */ \
DEVMETHOD(device_attach, bhnd_generic_attach),
DEVMETHOD(device_detach, bhnd_generic_detach),
DEVMETHOD(device_shutdown, bhnd_generic_shutdown),
DEVMETHOD(device_suspend, bhnd_generic_suspend),
DEVMETHOD(device_resume, bhnd_generic_resume),
/* Bus interface */
DEVMETHOD(bus_probe_nomatch, bhnd_generic_probe_nomatch),
DEVMETHOD(bus_print_child, bhnd_generic_print_child),
DEVMETHOD(bus_child_pnpinfo_str, bhnd_child_pnpinfo_str),
DEVMETHOD(bus_child_location_str, bhnd_child_location_str),
DEVMETHOD(bus_suspend_child, bhnd_generic_suspend_child),
DEVMETHOD(bus_resume_child, bhnd_generic_resume_child),
DEVMETHOD(bus_set_resource, bus_generic_rl_set_resource),
DEVMETHOD(bus_get_resource, bus_generic_rl_get_resource),
DEVMETHOD(bus_delete_resource, bus_generic_rl_delete_resource),
DEVMETHOD(bus_alloc_resource, bus_generic_rl_alloc_resource),
DEVMETHOD(bus_adjust_resource, bus_generic_adjust_resource),
DEVMETHOD(bus_release_resource, bus_generic_rl_release_resource),
DEVMETHOD(bus_activate_resource, bus_generic_activate_resource),
DEVMETHOD(bus_deactivate_resource, bus_generic_deactivate_resource),
DEVMETHOD(bus_setup_intr, bus_generic_setup_intr),
DEVMETHOD(bus_teardown_intr, bus_generic_teardown_intr),
DEVMETHOD(bus_config_intr, bus_generic_config_intr),
DEVMETHOD(bus_bind_intr, bus_generic_bind_intr),
DEVMETHOD(bus_describe_intr, bus_generic_describe_intr),
DEVMETHOD(bus_get_dma_tag, bus_generic_get_dma_tag),
/* BHND interface */
DEVMETHOD(bhnd_bus_get_chipid, bhnd_bus_generic_get_chipid),
DEVMETHOD(bhnd_bus_get_probe_order, bhnd_generic_get_probe_order),
DEVMETHOD(bhnd_bus_is_region_valid, bhnd_generic_is_region_valid),
DEVMETHOD(bhnd_bus_is_hw_disabled, bhnd_bus_generic_is_hw_disabled),
DEVMETHOD(bhnd_bus_get_nvram_var, bhnd_bus_generic_get_nvram_var),
DEVMETHOD(bhnd_bus_read_1, bhnd_read_1),
DEVMETHOD(bhnd_bus_read_2, bhnd_read_2),
DEVMETHOD(bhnd_bus_read_4, bhnd_read_4),
DEVMETHOD(bhnd_bus_write_1, bhnd_write_1),
DEVMETHOD(bhnd_bus_write_2, bhnd_write_2),
DEVMETHOD(bhnd_bus_write_4, bhnd_write_4),
DEVMETHOD(bhnd_bus_read_stream_1, bhnd_read_stream_1),
DEVMETHOD(bhnd_bus_read_stream_2, bhnd_read_stream_2),
DEVMETHOD(bhnd_bus_read_stream_4, bhnd_read_stream_4),
DEVMETHOD(bhnd_bus_write_stream_1, bhnd_write_stream_1),
DEVMETHOD(bhnd_bus_write_stream_2, bhnd_write_stream_2),
DEVMETHOD(bhnd_bus_write_stream_4, bhnd_write_stream_4),
DEVMETHOD(bhnd_bus_read_multi_1, bhnd_read_multi_1),
DEVMETHOD(bhnd_bus_read_multi_2, bhnd_read_multi_2),
DEVMETHOD(bhnd_bus_read_multi_4, bhnd_read_multi_4),
DEVMETHOD(bhnd_bus_write_multi_1, bhnd_write_multi_1),
DEVMETHOD(bhnd_bus_write_multi_2, bhnd_write_multi_2),
DEVMETHOD(bhnd_bus_write_multi_4, bhnd_write_multi_4),
DEVMETHOD(bhnd_bus_read_multi_stream_1, bhnd_read_multi_stream_1),
DEVMETHOD(bhnd_bus_read_multi_stream_2, bhnd_read_multi_stream_2),
DEVMETHOD(bhnd_bus_read_multi_stream_4, bhnd_read_multi_stream_4),
DEVMETHOD(bhnd_bus_write_multi_stream_1,bhnd_write_multi_stream_1),
DEVMETHOD(bhnd_bus_write_multi_stream_2,bhnd_write_multi_stream_2),
DEVMETHOD(bhnd_bus_write_multi_stream_4,bhnd_write_multi_stream_4),
DEVMETHOD(bhnd_bus_barrier, bhnd_barrier),
DEVMETHOD_END
};
devclass_t bhnd_devclass; /**< bhnd bus. */
devclass_t bhnd_hostb_devclass; /**< bhnd bus host bridge. */
devclass_t bhnd_nvram_devclass; /**< bhnd NVRAM device */
DEFINE_CLASS_0(bhnd, bhnd_driver, bhnd_methods, sizeof(struct bhnd_softc));
MODULE_VERSION(bhnd, 1);
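/*
 * Editor's sketch (hypothetical, not part of this change): a driver for
 * an individual core attaches to this bus class through the standard
 * newbus module macros; for an imaginary "bhnd_foo" core driver:
 */
#if 0
static devclass_t bhnd_foo_devclass;

DEFINE_CLASS_0(bhnd_foo, bhnd_foo_driver, bhnd_foo_methods,
    sizeof(struct bhnd_foo_softc));
DRIVER_MODULE(bhnd_foo, bhnd, bhnd_foo_driver, bhnd_foo_devclass,
    NULL, NULL);
MODULE_DEPEND(bhnd_foo, bhnd, 1, 1, 1);
#endif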
Index: head/sys/dev/bxe/ecore_hsi.h
===================================================================
--- head/sys/dev/bxe/ecore_hsi.h (revision 300049)
+++ head/sys/dev/bxe/ecore_hsi.h (revision 300050)
@@ -1,13260 +1,13260 @@
/*-
* Copyright (c) 2007-2017 QLogic Corporation. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#ifndef ECORE_HSI_H
#define ECORE_HSI_H
#define FW_ENCODE_32BIT_PATTERN 0x1e1e1e1e
struct license_key {
uint32_t reserved[6];
uint32_t max_iscsi_conn;
#define LICENSE_MAX_ISCSI_TRGT_CONN_MASK 0xFFFF
#define LICENSE_MAX_ISCSI_TRGT_CONN_SHIFT 0
#define LICENSE_MAX_ISCSI_INIT_CONN_MASK 0xFFFF0000
#define LICENSE_MAX_ISCSI_INIT_CONN_SHIFT 16
uint32_t reserved_a;
uint32_t max_fcoe_conn;
#define LICENSE_MAX_FCOE_TRGT_CONN_MASK 0xFFFF
#define LICENSE_MAX_FCOE_TRGT_CONN_SHIFT 0
#define LICENSE_MAX_FCOE_INIT_CONN_MASK 0xFFFF0000
#define LICENSE_MAX_FCOE_INIT_CONN_SHIFT 16
uint32_t reserved_b[4];
};
typedef struct license_key license_key_t;
/****************************************************************************
* Shared HW configuration *
****************************************************************************/
#define PIN_CFG_NA 0x00000000
#define PIN_CFG_GPIO0_P0 0x00000001
#define PIN_CFG_GPIO1_P0 0x00000002
#define PIN_CFG_GPIO2_P0 0x00000003
#define PIN_CFG_GPIO3_P0 0x00000004
#define PIN_CFG_GPIO0_P1 0x00000005
#define PIN_CFG_GPIO1_P1 0x00000006
#define PIN_CFG_GPIO2_P1 0x00000007
#define PIN_CFG_GPIO3_P1 0x00000008
#define PIN_CFG_EPIO0 0x00000009
#define PIN_CFG_EPIO1 0x0000000a
#define PIN_CFG_EPIO2 0x0000000b
#define PIN_CFG_EPIO3 0x0000000c
#define PIN_CFG_EPIO4 0x0000000d
#define PIN_CFG_EPIO5 0x0000000e
#define PIN_CFG_EPIO6 0x0000000f
#define PIN_CFG_EPIO7 0x00000010
#define PIN_CFG_EPIO8 0x00000011
#define PIN_CFG_EPIO9 0x00000012
#define PIN_CFG_EPIO10 0x00000013
#define PIN_CFG_EPIO11 0x00000014
#define PIN_CFG_EPIO12 0x00000015
#define PIN_CFG_EPIO13 0x00000016
#define PIN_CFG_EPIO14 0x00000017
#define PIN_CFG_EPIO15 0x00000018
#define PIN_CFG_EPIO16 0x00000019
#define PIN_CFG_EPIO17 0x0000001a
#define PIN_CFG_EPIO18 0x0000001b
#define PIN_CFG_EPIO19 0x0000001c
#define PIN_CFG_EPIO20 0x0000001d
#define PIN_CFG_EPIO21 0x0000001e
#define PIN_CFG_EPIO22 0x0000001f
#define PIN_CFG_EPIO23 0x00000020
#define PIN_CFG_EPIO24 0x00000021
#define PIN_CFG_EPIO25 0x00000022
#define PIN_CFG_EPIO26 0x00000023
#define PIN_CFG_EPIO27 0x00000024
#define PIN_CFG_EPIO28 0x00000025
#define PIN_CFG_EPIO29 0x00000026
#define PIN_CFG_EPIO30 0x00000027
#define PIN_CFG_EPIO31 0x00000028
/* EPIO definition */
#define EPIO_CFG_NA 0x00000000
#define EPIO_CFG_EPIO0 0x00000001
#define EPIO_CFG_EPIO1 0x00000002
#define EPIO_CFG_EPIO2 0x00000003
#define EPIO_CFG_EPIO3 0x00000004
#define EPIO_CFG_EPIO4 0x00000005
#define EPIO_CFG_EPIO5 0x00000006
#define EPIO_CFG_EPIO6 0x00000007
#define EPIO_CFG_EPIO7 0x00000008
#define EPIO_CFG_EPIO8 0x00000009
#define EPIO_CFG_EPIO9 0x0000000a
#define EPIO_CFG_EPIO10 0x0000000b
#define EPIO_CFG_EPIO11 0x0000000c
#define EPIO_CFG_EPIO12 0x0000000d
#define EPIO_CFG_EPIO13 0x0000000e
#define EPIO_CFG_EPIO14 0x0000000f
#define EPIO_CFG_EPIO15 0x00000010
#define EPIO_CFG_EPIO16 0x00000011
#define EPIO_CFG_EPIO17 0x00000012
#define EPIO_CFG_EPIO18 0x00000013
#define EPIO_CFG_EPIO19 0x00000014
#define EPIO_CFG_EPIO20 0x00000015
#define EPIO_CFG_EPIO21 0x00000016
#define EPIO_CFG_EPIO22 0x00000017
#define EPIO_CFG_EPIO23 0x00000018
#define EPIO_CFG_EPIO24 0x00000019
#define EPIO_CFG_EPIO25 0x0000001a
#define EPIO_CFG_EPIO26 0x0000001b
#define EPIO_CFG_EPIO27 0x0000001c
#define EPIO_CFG_EPIO28 0x0000001d
#define EPIO_CFG_EPIO29 0x0000001e
#define EPIO_CFG_EPIO30 0x0000001f
#define EPIO_CFG_EPIO31 0x00000020
struct mac_addr {
uint32_t upper;
uint32_t lower;
};
struct shared_hw_cfg { /* NVRAM Offset */
/* Up to 16 bytes of NULL-terminated string */
uint8_t part_num[16]; /* 0x104 */
uint32_t config; /* 0x114 */
#define SHARED_HW_CFG_MDIO_VOLTAGE_MASK 0x00000001
#define SHARED_HW_CFG_MDIO_VOLTAGE_SHIFT 0
#define SHARED_HW_CFG_MDIO_VOLTAGE_1_2V 0x00000000
#define SHARED_HW_CFG_MDIO_VOLTAGE_2_5V 0x00000001
#define SHARED_HW_CFG_PORT_SWAP 0x00000004
#define SHARED_HW_CFG_BEACON_WOL_EN 0x00000008
#define SHARED_HW_CFG_PCIE_GEN3_DISABLED 0x00000000
#define SHARED_HW_CFG_PCIE_GEN3_ENABLED 0x00000010
#define SHARED_HW_CFG_MFW_SELECT_MASK 0x00000700
#define SHARED_HW_CFG_MFW_SELECT_SHIFT 8
/* Whichever MFW is found in NVM
(if multiple are found, the priority order is: NC-SI, UMP, IPMI) */
#define SHARED_HW_CFG_MFW_SELECT_DEFAULT 0x00000000
#define SHARED_HW_CFG_MFW_SELECT_NC_SI 0x00000100
#define SHARED_HW_CFG_MFW_SELECT_UMP 0x00000200
#define SHARED_HW_CFG_MFW_SELECT_IPMI 0x00000300
/* Use SPIO4 as an arbiter between: 0-NC_SI, 1-IPMI
(can only be used when an add-in board, not BMC, pulls-down SPIO4) */
#define SHARED_HW_CFG_MFW_SELECT_SPIO4_NC_SI_IPMI 0x00000400
/* Use SPIO4 as an arbiter between: 0-UMP, 1-IPMI
(can only be used when an add-in board, not BMC, pulls-down SPIO4) */
#define SHARED_HW_CFG_MFW_SELECT_SPIO4_UMP_IPMI 0x00000500
/* Use SPIO4 as an arbiter between: 0-NC-SI, 1-UMP
(can only be used when an add-in board, not BMC, pulls-down SPIO4) */
#define SHARED_HW_CFG_MFW_SELECT_SPIO4_NC_SI_UMP 0x00000600
/* Adjust the PCIe G2 Tx amplitude driver for all Tx lanes. For
backwards compatibility, a value of 0 disables this feature.
That means that although 0 is a valid value, it cannot be
configured. */
#define SHARED_HW_CFG_G2_TX_DRIVE_MASK 0x0000F000
#define SHARED_HW_CFG_G2_TX_DRIVE_SHIFT 12
#define SHARED_HW_CFG_LED_MODE_MASK 0x000F0000
#define SHARED_HW_CFG_LED_MODE_SHIFT 16
#define SHARED_HW_CFG_LED_MAC1 0x00000000
#define SHARED_HW_CFG_LED_PHY1 0x00010000
#define SHARED_HW_CFG_LED_PHY2 0x00020000
#define SHARED_HW_CFG_LED_PHY3 0x00030000
#define SHARED_HW_CFG_LED_MAC2 0x00040000
#define SHARED_HW_CFG_LED_PHY4 0x00050000
#define SHARED_HW_CFG_LED_PHY5 0x00060000
#define SHARED_HW_CFG_LED_PHY6 0x00070000
#define SHARED_HW_CFG_LED_MAC3 0x00080000
#define SHARED_HW_CFG_LED_PHY7 0x00090000
#define SHARED_HW_CFG_LED_PHY9 0x000a0000
#define SHARED_HW_CFG_LED_PHY11 0x000b0000
#define SHARED_HW_CFG_LED_MAC4 0x000c0000
#define SHARED_HW_CFG_LED_PHY8 0x000d0000
#define SHARED_HW_CFG_LED_EXTPHY1 0x000e0000
#define SHARED_HW_CFG_LED_EXTPHY2 0x000f0000
#define SHARED_HW_CFG_SRIOV_MASK 0x40000000
#define SHARED_HW_CFG_SRIOV_DISABLED 0x00000000
#define SHARED_HW_CFG_SRIOV_ENABLED 0x40000000
#define SHARED_HW_CFG_ATC_MASK 0x80000000
#define SHARED_HW_CFG_ATC_DISABLED 0x00000000
#define SHARED_HW_CFG_ATC_ENABLED 0x80000000
uint32_t config2; /* 0x118 */
#define SHARED_HW_CFG_PCIE_GEN2_MASK 0x00000100
#define SHARED_HW_CFG_PCIE_GEN2_SHIFT 8
#define SHARED_HW_CFG_PCIE_GEN2_DISABLED 0x00000000
#define SHARED_HW_CFG_PCIE_GEN2_ENABLED 0x00000100
#define SHARED_HW_CFG_SMBUS_TIMING_MASK 0x00001000
#define SHARED_HW_CFG_SMBUS_TIMING_100KHZ 0x00000000
#define SHARED_HW_CFG_SMBUS_TIMING_400KHZ 0x00001000
#define SHARED_HW_CFG_HIDE_PORT1 0x00002000
/* Output low when PERST is asserted */
#define SHARED_HW_CFG_SPIO4_FOLLOW_PERST_MASK 0x00008000
#define SHARED_HW_CFG_SPIO4_FOLLOW_PERST_DISABLED 0x00000000
#define SHARED_HW_CFG_SPIO4_FOLLOW_PERST_ENABLED 0x00008000
#define SHARED_HW_CFG_PCIE_GEN2_PREEMPHASIS_MASK 0x00070000
#define SHARED_HW_CFG_PCIE_GEN2_PREEMPHASIS_SHIFT 16
#define SHARED_HW_CFG_PCIE_GEN2_PREEMPHASIS_HW 0x00000000
#define SHARED_HW_CFG_PCIE_GEN2_PREEMPHASIS_0DB 0x00010000
#define SHARED_HW_CFG_PCIE_GEN2_PREEMPHASIS_3_5DB 0x00020000
#define SHARED_HW_CFG_PCIE_GEN2_PREEMPHASIS_6_0DB 0x00030000
/* The fan failure mechanism is usually related to the PHY type
since the power consumption of the board is determined by the PHY.
Currently, a fan is required for most designs with SFX7101, BCM8727
and BCM8481. If a fan is not required for a board which uses one
of those PHYs, this field should be set to "Disabled". If a fan is
required for a different PHY type, this option should be set to
"Enabled". The fan failure indication is expected on SPIO5 */
#define SHARED_HW_CFG_FAN_FAILURE_MASK 0x00180000
#define SHARED_HW_CFG_FAN_FAILURE_SHIFT 19
#define SHARED_HW_CFG_FAN_FAILURE_PHY_TYPE 0x00000000
#define SHARED_HW_CFG_FAN_FAILURE_DISABLED 0x00080000
#define SHARED_HW_CFG_FAN_FAILURE_ENABLED 0x00100000
/* ASPM Power Management support */
#define SHARED_HW_CFG_ASPM_SUPPORT_MASK 0x00600000
#define SHARED_HW_CFG_ASPM_SUPPORT_SHIFT 21
#define SHARED_HW_CFG_ASPM_SUPPORT_L0S_L1_ENABLED 0x00000000
#define SHARED_HW_CFG_ASPM_SUPPORT_L0S_DISABLED 0x00200000
#define SHARED_HW_CFG_ASPM_SUPPORT_L1_DISABLED 0x00400000
#define SHARED_HW_CFG_ASPM_SUPPORT_L0S_L1_DISABLED 0x00600000
/* The value of PM_TL_IGNORE_REQS (bit0) in PCI register
tl_control_0 (register 0x2800) */
#define SHARED_HW_CFG_PREVENT_L1_ENTRY_MASK 0x00800000
#define SHARED_HW_CFG_PREVENT_L1_ENTRY_DISABLED 0x00000000
#define SHARED_HW_CFG_PREVENT_L1_ENTRY_ENABLED 0x00800000
/* Set the MDC/MDIO access for the first external phy */
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_MASK 0x1C000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_SHIFT 26
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_PHY_TYPE 0x00000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_EMAC0 0x04000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_EMAC1 0x08000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_BOTH 0x0c000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS1_SWAPPED 0x10000000
/* Set the MDC/MDIO access for the second external phy */
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_MASK 0xE0000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_SHIFT 29
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_PHY_TYPE 0x00000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_EMAC0 0x20000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_EMAC1 0x40000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_BOTH 0x60000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_SWAPPED 0x80000000
/* Max number of PF MSIX vectors */
uint32_t config_3; /* 0x11C */
#define SHARED_HW_CFG_PF_MSIX_MAX_NUM_MASK 0x0000007F
#define SHARED_HW_CFG_PF_MSIX_MAX_NUM_SHIFT 0
/* This field extends the mf mode chosen in nvm cfg #73 (as we ran
out of bits) */
#define SHARED_HW_CFG_EXTENDED_MF_MODE_MASK 0x00000F00
#define SHARED_HW_CFG_EXTENDED_MF_MODE_SHIFT 8
#define SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR1_DOT_5 0x00000000
#define SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR2_DOT_0 0x00000100
uint32_t ump_nc_si_config; /* 0x120 */
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_MASK 0x00000003
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_SHIFT 0
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_MAC 0x00000000
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_PHY 0x00000001
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_MII 0x00000000
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_RMII 0x00000002
/* Reserved bits: 226-230 */
/* The output pin template BSC_SEL which selects the I2C for this
port in the I2C Mux */
uint32_t board; /* 0x124 */
#define SHARED_HW_CFG_E3_I2C_MUX0_MASK 0x0000003F
#define SHARED_HW_CFG_E3_I2C_MUX0_SHIFT 0
#define SHARED_HW_CFG_E3_I2C_MUX1_MASK 0x00000FC0
#define SHARED_HW_CFG_E3_I2C_MUX1_SHIFT 6
/* Use the PIN_CFG_XXX defines on top */
#define SHARED_HW_CFG_BOARD_REV_MASK 0x00FF0000
#define SHARED_HW_CFG_BOARD_REV_SHIFT 16
#define SHARED_HW_CFG_BOARD_MAJOR_VER_MASK 0x0F000000
#define SHARED_HW_CFG_BOARD_MAJOR_VER_SHIFT 24
#define SHARED_HW_CFG_BOARD_MINOR_VER_MASK 0xF0000000
#define SHARED_HW_CFG_BOARD_MINOR_VER_SHIFT 28
uint32_t wc_lane_config; /* 0x128 */
#define SHARED_HW_CFG_LANE_SWAP_CFG_MASK 0x0000FFFF
#define SHARED_HW_CFG_LANE_SWAP_CFG_SHIFT 0
#define SHARED_HW_CFG_LANE_SWAP_CFG_32103210 0x00001b1b
#define SHARED_HW_CFG_LANE_SWAP_CFG_32100123 0x00001be4
#define SHARED_HW_CFG_LANE_SWAP_CFG_31200213 0x000027d8
#define SHARED_HW_CFG_LANE_SWAP_CFG_02133120 0x0000d827
#define SHARED_HW_CFG_LANE_SWAP_CFG_01233210 0x0000e41b
#define SHARED_HW_CFG_LANE_SWAP_CFG_01230123 0x0000e4e4
#define SHARED_HW_CFG_LANE_SWAP_CFG_TX_MASK 0x000000FF
#define SHARED_HW_CFG_LANE_SWAP_CFG_TX_SHIFT 0
#define SHARED_HW_CFG_LANE_SWAP_CFG_RX_MASK 0x0000FF00
#define SHARED_HW_CFG_LANE_SWAP_CFG_RX_SHIFT 8
/* TX lane Polarity swap */
#define SHARED_HW_CFG_TX_LANE0_POL_FLIP_ENABLED 0x00010000
#define SHARED_HW_CFG_TX_LANE1_POL_FLIP_ENABLED 0x00020000
#define SHARED_HW_CFG_TX_LANE2_POL_FLIP_ENABLED 0x00040000
#define SHARED_HW_CFG_TX_LANE3_POL_FLIP_ENABLED 0x00080000
/* RX lane Polarity swap */
#define SHARED_HW_CFG_RX_LANE0_POL_FLIP_ENABLED 0x00100000
#define SHARED_HW_CFG_RX_LANE1_POL_FLIP_ENABLED 0x00200000
#define SHARED_HW_CFG_RX_LANE2_POL_FLIP_ENABLED 0x00400000
#define SHARED_HW_CFG_RX_LANE3_POL_FLIP_ENABLED 0x00800000
/* Selects the port layout of the board */
#define SHARED_HW_CFG_E3_PORT_LAYOUT_MASK 0x0F000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_SHIFT 24
#define SHARED_HW_CFG_E3_PORT_LAYOUT_2P_01 0x00000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_2P_10 0x01000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_4P_0123 0x02000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_4P_1032 0x03000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_4P_2301 0x04000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_4P_3210 0x05000000
#define SHARED_HW_CFG_E3_PORT_LAYOUT_2P_01_SIG 0x06000000
};
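/*
 * Editor's sketch (illustrative): every multi-bit field in these shared
 * configuration words follows the same MASK/SHIFT convention, so
 * decoding is always mask-then-shift. The accessor below is
 * hypothetical; only the macro usage matters.
 */
#if 0
uint32_t cfg = sc->devinfo.shared_hw_config;	/* hypothetical accessor */
uint32_t mfw_sel = (cfg & SHARED_HW_CFG_MFW_SELECT_MASK) >>
    SHARED_HW_CFG_MFW_SELECT_SHIFT;

if ((cfg & SHARED_HW_CFG_MFW_SELECT_MASK) ==
    SHARED_HW_CFG_MFW_SELECT_NC_SI) {
	/* NC-SI management firmware was selected in NVRAM. */
}
#endif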
/****************************************************************************
* Port HW configuration *
****************************************************************************/
struct port_hw_cfg { /* port 0: 0x12c port 1: 0x2bc */
uint32_t pci_id;
#define PORT_HW_CFG_PCI_DEVICE_ID_MASK 0x0000FFFF
#define PORT_HW_CFG_PCI_DEVICE_ID_SHIFT 0
#define PORT_HW_CFG_PCI_VENDOR_ID_MASK 0xFFFF0000
#define PORT_HW_CFG_PCI_VENDOR_ID_SHIFT 16
uint32_t pci_sub_id;
#define PORT_HW_CFG_PCI_SUBSYS_VENDOR_ID_MASK 0x0000FFFF
#define PORT_HW_CFG_PCI_SUBSYS_VENDOR_ID_SHIFT 0
#define PORT_HW_CFG_PCI_SUBSYS_DEVICE_ID_MASK 0xFFFF0000
#define PORT_HW_CFG_PCI_SUBSYS_DEVICE_ID_SHIFT 16
uint32_t power_dissipated;
#define PORT_HW_CFG_POWER_DIS_D0_MASK 0x000000FF
#define PORT_HW_CFG_POWER_DIS_D0_SHIFT 0
#define PORT_HW_CFG_POWER_DIS_D1_MASK 0x0000FF00
#define PORT_HW_CFG_POWER_DIS_D1_SHIFT 8
#define PORT_HW_CFG_POWER_DIS_D2_MASK 0x00FF0000
#define PORT_HW_CFG_POWER_DIS_D2_SHIFT 16
#define PORT_HW_CFG_POWER_DIS_D3_MASK 0xFF000000
#define PORT_HW_CFG_POWER_DIS_D3_SHIFT 24
uint32_t power_consumed;
#define PORT_HW_CFG_POWER_CONS_D0_MASK 0x000000FF
#define PORT_HW_CFG_POWER_CONS_D0_SHIFT 0
#define PORT_HW_CFG_POWER_CONS_D1_MASK 0x0000FF00
#define PORT_HW_CFG_POWER_CONS_D1_SHIFT 8
#define PORT_HW_CFG_POWER_CONS_D2_MASK 0x00FF0000
#define PORT_HW_CFG_POWER_CONS_D2_SHIFT 16
#define PORT_HW_CFG_POWER_CONS_D3_MASK 0xFF000000
#define PORT_HW_CFG_POWER_CONS_D3_SHIFT 24
uint32_t mac_upper;
uint32_t mac_lower; /* 0x140 */
#define PORT_HW_CFG_UPPERMAC_MASK 0x0000FFFF
#define PORT_HW_CFG_UPPERMAC_SHIFT 0
uint32_t iscsi_mac_upper; /* Upper 16 bits are always zeroes */
uint32_t iscsi_mac_lower;
uint32_t rdma_mac_upper; /* Upper 16 bits are always zeroes */
uint32_t rdma_mac_lower;
uint32_t serdes_config;
#define PORT_HW_CFG_SERDES_TX_DRV_PRE_EMPHASIS_MASK 0x0000FFFF
#define PORT_HW_CFG_SERDES_TX_DRV_PRE_EMPHASIS_SHIFT 0
#define PORT_HW_CFG_SERDES_RX_DRV_EQUALIZER_MASK 0xFFFF0000
#define PORT_HW_CFG_SERDES_RX_DRV_EQUALIZER_SHIFT 16
/* Default values: 2P-64, 4P-32 */
uint32_t reserved;
uint32_t vf_config; /* 0x15C */
#define PORT_HW_CFG_VF_PCI_DEVICE_ID_MASK 0xFFFF0000
#define PORT_HW_CFG_VF_PCI_DEVICE_ID_SHIFT 16
uint32_t mf_pci_id; /* 0x160 */
#define PORT_HW_CFG_MF_PCI_DEVICE_ID_MASK 0x0000FFFF
#define PORT_HW_CFG_MF_PCI_DEVICE_ID_SHIFT 0
/* Controls the TX laser of the SFP+ module */
uint32_t sfp_ctrl; /* 0x164 */
#define PORT_HW_CFG_TX_LASER_MASK 0x000000FF
#define PORT_HW_CFG_TX_LASER_SHIFT 0
#define PORT_HW_CFG_TX_LASER_MDIO 0x00000000
#define PORT_HW_CFG_TX_LASER_GPIO0 0x00000001
#define PORT_HW_CFG_TX_LASER_GPIO1 0x00000002
#define PORT_HW_CFG_TX_LASER_GPIO2 0x00000003
#define PORT_HW_CFG_TX_LASER_GPIO3 0x00000004
/* Controls the fault module LED of the SFP+ */
#define PORT_HW_CFG_FAULT_MODULE_LED_MASK 0x0000FF00
#define PORT_HW_CFG_FAULT_MODULE_LED_SHIFT 8
#define PORT_HW_CFG_FAULT_MODULE_LED_GPIO0 0x00000000
#define PORT_HW_CFG_FAULT_MODULE_LED_GPIO1 0x00000100
#define PORT_HW_CFG_FAULT_MODULE_LED_GPIO2 0x00000200
#define PORT_HW_CFG_FAULT_MODULE_LED_GPIO3 0x00000300
#define PORT_HW_CFG_FAULT_MODULE_LED_DISABLED 0x00000400
/* The output pin TX_DIS that controls the TX laser of the SFP+
module. Use the PIN_CFG_XXX defines on top */
uint32_t e3_sfp_ctrl; /* 0x168 */
#define PORT_HW_CFG_E3_TX_LASER_MASK 0x000000FF
#define PORT_HW_CFG_E3_TX_LASER_SHIFT 0
/* The output pin for SFPP_TYPE which turns on the Fault module LED */
#define PORT_HW_CFG_E3_FAULT_MDL_LED_MASK 0x0000FF00
#define PORT_HW_CFG_E3_FAULT_MDL_LED_SHIFT 8
/* The input pin MOD_ABS that indicates whether the SFP+ module is
present. Use the PIN_CFG_XXX defines on top */
#define PORT_HW_CFG_E3_MOD_ABS_MASK 0x00FF0000
#define PORT_HW_CFG_E3_MOD_ABS_SHIFT 16
/* The output pin PWRDIS_SFP_X which disables the power of the SFP+
module. Use the PIN_CFG_XXX defines on top */
#define PORT_HW_CFG_E3_PWR_DIS_MASK 0xFF000000
#define PORT_HW_CFG_E3_PWR_DIS_SHIFT 24
/*
* The input pin which signals module transmit fault. Use the
* PIN_CFG_XXX defines on top
*/
uint32_t e3_cmn_pin_cfg; /* 0x16C */
#define PORT_HW_CFG_E3_TX_FAULT_MASK 0x000000FF
#define PORT_HW_CFG_E3_TX_FAULT_SHIFT 0
/* The output pin which resets the PHY. Use the PIN_CFG_XXX defines on
top */
#define PORT_HW_CFG_E3_PHY_RESET_MASK 0x0000FF00
#define PORT_HW_CFG_E3_PHY_RESET_SHIFT 8
/*
* The output pin which powers down the PHY. Use the PIN_CFG_XXX
* defines on top
*/
#define PORT_HW_CFG_E3_PWR_DOWN_MASK 0x00FF0000
#define PORT_HW_CFG_E3_PWR_DOWN_SHIFT 16
/* The output pin values BSC_SEL which select the I2C for this port
in the I2C Mux */
#define PORT_HW_CFG_E3_I2C_MUX0_MASK 0x01000000
#define PORT_HW_CFG_E3_I2C_MUX1_MASK 0x02000000
/*
* The input pin I_FAULT which indicates that over-current has occurred.
* Use the PIN_CFG_XXX defines on top
*/
uint32_t e3_cmn_pin_cfg1; /* 0x170 */
#define PORT_HW_CFG_E3_OVER_CURRENT_MASK 0x000000FF
#define PORT_HW_CFG_E3_OVER_CURRENT_SHIFT 0
/* pause on host ring */
uint32_t generic_features; /* 0x174 */
#define PORT_HW_CFG_PAUSE_ON_HOST_RING_MASK 0x00000001
#define PORT_HW_CFG_PAUSE_ON_HOST_RING_SHIFT 0
#define PORT_HW_CFG_PAUSE_ON_HOST_RING_DISABLED 0x00000000
#define PORT_HW_CFG_PAUSE_ON_HOST_RING_ENABLED 0x00000001
/* SFP+ Tx Equalization: the NIC recommended and tested value is 0xBEB2;
* the LOM recommended and tested value is 0xBEB2. Using a different
* value means using a value not tested by BRCM
*/
uint32_t sfi_tap_values; /* 0x178 */
#define PORT_HW_CFG_TX_EQUALIZATION_MASK 0x0000FFFF
#define PORT_HW_CFG_TX_EQUALIZATION_SHIFT 0
/* SFP+ Tx driver broadcast IDRIVER: NIC recommended and tested
* value is 0x2. LOM recommended and tested value is 0x2. Using a
* different value means using a value not tested by BRCM
*/
#define PORT_HW_CFG_TX_DRV_BROADCAST_MASK 0x000F0000
#define PORT_HW_CFG_TX_DRV_BROADCAST_SHIFT 16
/* Set non-default values for TXFIR in SFP mode. */
#define PORT_HW_CFG_TX_DRV_IFIR_MASK 0x00F00000
#define PORT_HW_CFG_TX_DRV_IFIR_SHIFT 20
/* Set non-default values for IPREDRIVER in SFP mode. */
#define PORT_HW_CFG_TX_DRV_IPREDRIVER_MASK 0x0F000000
#define PORT_HW_CFG_TX_DRV_IPREDRIVER_SHIFT 24
/* Set non-default values for POST2 in SFP mode. */
#define PORT_HW_CFG_TX_DRV_POST2_MASK 0xF0000000
#define PORT_HW_CFG_TX_DRV_POST2_SHIFT 28
uint32_t reserved0[5]; /* 0x17c */
uint32_t aeu_int_mask; /* 0x190 */
uint32_t media_type; /* 0x194 */
#define PORT_HW_CFG_MEDIA_TYPE_PHY0_MASK 0x000000FF
#define PORT_HW_CFG_MEDIA_TYPE_PHY0_SHIFT 0
#define PORT_HW_CFG_MEDIA_TYPE_PHY1_MASK 0x0000FF00
#define PORT_HW_CFG_MEDIA_TYPE_PHY1_SHIFT 8
#define PORT_HW_CFG_MEDIA_TYPE_PHY2_MASK 0x00FF0000
#define PORT_HW_CFG_MEDIA_TYPE_PHY2_SHIFT 16
/* 4 times 16 bits for all 4 lanes. In case an external PHY is present
(not direct mode), those values will not take effect on the 4 XGXS
lanes. For some external PHYs (such as 8706 and 8726) the values
will be used to configure the external PHY; in those cases, not
all 4 values are needed. */
uint16_t xgxs_config_rx[4]; /* 0x198 */
uint16_t xgxs_config_tx[4]; /* 0x1A0 */
/* For storing FCOE mac on shared memory */
uint32_t fcoe_fip_mac_upper;
#define PORT_HW_CFG_FCOE_UPPERMAC_MASK 0x0000ffff
#define PORT_HW_CFG_FCOE_UPPERMAC_SHIFT 0
uint32_t fcoe_fip_mac_lower;
uint32_t fcoe_wwn_port_name_upper;
uint32_t fcoe_wwn_port_name_lower;
uint32_t fcoe_wwn_node_name_upper;
uint32_t fcoe_wwn_node_name_lower;
/* wwpn for npiv enabled */
uint32_t wwpn_for_npiv_config; /* 0x1C0 */
#define PORT_HW_CFG_WWPN_FOR_NPIV_ENABLED_MASK 0x00000001
#define PORT_HW_CFG_WWPN_FOR_NPIV_ENABLED_SHIFT 0
#define PORT_HW_CFG_WWPN_FOR_NPIV_ENABLED_DISABLED 0x00000000
#define PORT_HW_CFG_WWPN_FOR_NPIV_ENABLED_ENABLED 0x00000001
/* wwpn for npiv valid addresses */
uint32_t wwpn_for_npiv_valid_addresses; /* 0x1C4 */
#define PORT_HW_CFG_WWPN_FOR_NPIV_ADDRESS_BITMAP_MASK 0x0000FFFF
#define PORT_HW_CFG_WWPN_FOR_NPIV_ADDRESS_BITMAP_SHIFT 0
struct mac_addr wwpn_for_niv_macs[16];
/* Reserved bits: 2272-2336 For storing FCOE mac on shared memory */
uint32_t Reserved1[14];
uint32_t pf_allocation; /* 0x280 */
/* number of VFs per PF; if 0, SR-IOV is disabled */
#define PORT_HW_CFG_NUMBER_OF_VFS_MASK 0x000000FF
#define PORT_HW_CFG_NUMBER_OF_VFS_SHIFT 0
/* Enable RJ45 magjack pair swapping on 10GBase-T PHY (0=default),
84833 only */
uint32_t xgbt_phy_cfg; /* 0x284 */
#define PORT_HW_CFG_RJ45_PAIR_SWAP_MASK 0x000000FF
#define PORT_HW_CFG_RJ45_PAIR_SWAP_SHIFT 0
uint32_t default_cfg; /* 0x288 */
#define PORT_HW_CFG_GPIO0_CONFIG_MASK 0x00000003
#define PORT_HW_CFG_GPIO0_CONFIG_SHIFT 0
#define PORT_HW_CFG_GPIO0_CONFIG_NA 0x00000000
#define PORT_HW_CFG_GPIO0_CONFIG_LOW 0x00000001
#define PORT_HW_CFG_GPIO0_CONFIG_HIGH 0x00000002
#define PORT_HW_CFG_GPIO0_CONFIG_INPUT 0x00000003
#define PORT_HW_CFG_GPIO1_CONFIG_MASK 0x0000000C
#define PORT_HW_CFG_GPIO1_CONFIG_SHIFT 2
#define PORT_HW_CFG_GPIO1_CONFIG_NA 0x00000000
#define PORT_HW_CFG_GPIO1_CONFIG_LOW 0x00000004
#define PORT_HW_CFG_GPIO1_CONFIG_HIGH 0x00000008
#define PORT_HW_CFG_GPIO1_CONFIG_INPUT 0x0000000c
#define PORT_HW_CFG_GPIO2_CONFIG_MASK 0x00000030
#define PORT_HW_CFG_GPIO2_CONFIG_SHIFT 4
#define PORT_HW_CFG_GPIO2_CONFIG_NA 0x00000000
#define PORT_HW_CFG_GPIO2_CONFIG_LOW 0x00000010
#define PORT_HW_CFG_GPIO2_CONFIG_HIGH 0x00000020
#define PORT_HW_CFG_GPIO2_CONFIG_INPUT 0x00000030
#define PORT_HW_CFG_GPIO3_CONFIG_MASK 0x000000C0
#define PORT_HW_CFG_GPIO3_CONFIG_SHIFT 6
#define PORT_HW_CFG_GPIO3_CONFIG_NA 0x00000000
#define PORT_HW_CFG_GPIO3_CONFIG_LOW 0x00000040
#define PORT_HW_CFG_GPIO3_CONFIG_HIGH 0x00000080
#define PORT_HW_CFG_GPIO3_CONFIG_INPUT 0x000000c0
/* When the KR link is required to be forced (which is not
KR-compliant), this parameter determines the trigger for it.
When GPIO is selected, a low input will force the speed. Currently
the default speed is 1G. In the future, this may be widened to select
the forced speed with another parameter. Note that when force-1G is
enabled, it overrides option 56: the Link Speed option. */
#define PORT_HW_CFG_FORCE_KR_ENABLER_MASK 0x00000F00
#define PORT_HW_CFG_FORCE_KR_ENABLER_SHIFT 8
#define PORT_HW_CFG_FORCE_KR_ENABLER_NOT_FORCED 0x00000000
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO0_P0 0x00000100
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO1_P0 0x00000200
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO2_P0 0x00000300
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO3_P0 0x00000400
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO0_P1 0x00000500
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO1_P1 0x00000600
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO2_P1 0x00000700
#define PORT_HW_CFG_FORCE_KR_ENABLER_GPIO3_P1 0x00000800
#define PORT_HW_CFG_FORCE_KR_ENABLER_FORCED 0x00000900
/* Enable to determine with which GPIO to reset the external phy */
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_MASK 0x000F0000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_SHIFT 16
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_PHY_TYPE 0x00000000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO0_P0 0x00010000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO1_P0 0x00020000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO2_P0 0x00030000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO3_P0 0x00040000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO0_P1 0x00050000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO1_P1 0x00060000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO2_P1 0x00070000
#define PORT_HW_CFG_EXT_PHY_GPIO_RST_GPIO3_P1 0x00080000
/* Enable BAM on KR */
#define PORT_HW_CFG_ENABLE_BAM_ON_KR_MASK 0x00100000
#define PORT_HW_CFG_ENABLE_BAM_ON_KR_SHIFT 20
#define PORT_HW_CFG_ENABLE_BAM_ON_KR_DISABLED 0x00000000
#define PORT_HW_CFG_ENABLE_BAM_ON_KR_ENABLED 0x00100000
/* Enable Common Mode Sense */
#define PORT_HW_CFG_ENABLE_CMS_MASK 0x00200000
#define PORT_HW_CFG_ENABLE_CMS_SHIFT 21
#define PORT_HW_CFG_ENABLE_CMS_DISABLED 0x00000000
#define PORT_HW_CFG_ENABLE_CMS_ENABLED 0x00200000
/* Determine the Serdes electrical interface */
#define PORT_HW_CFG_NET_SERDES_IF_MASK 0x0F000000
#define PORT_HW_CFG_NET_SERDES_IF_SHIFT 24
#define PORT_HW_CFG_NET_SERDES_IF_SGMII 0x00000000
#define PORT_HW_CFG_NET_SERDES_IF_XFI 0x01000000
#define PORT_HW_CFG_NET_SERDES_IF_SFI 0x02000000
#define PORT_HW_CFG_NET_SERDES_IF_KR 0x03000000
#define PORT_HW_CFG_NET_SERDES_IF_DXGXS 0x04000000
#define PORT_HW_CFG_NET_SERDES_IF_KR2 0x05000000
/* SFP+ main TAP and post TAP volumes */
#define PORT_HW_CFG_TAP_LEVELS_MASK 0x70000000
#define PORT_HW_CFG_TAP_LEVELS_SHIFT 28
#define PORT_HW_CFG_TAP_LEVELS_POST_15_MAIN_43 0x00000000
#define PORT_HW_CFG_TAP_LEVELS_POST_14_MAIN_44 0x10000000
#define PORT_HW_CFG_TAP_LEVELS_POST_13_MAIN_45 0x20000000
#define PORT_HW_CFG_TAP_LEVELS_POST_12_MAIN_46 0x30000000
#define PORT_HW_CFG_TAP_LEVELS_POST_11_MAIN_47 0x40000000
#define PORT_HW_CFG_TAP_LEVELS_POST_10_MAIN_48 0x50000000
uint32_t speed_capability_mask2; /* 0x28C */
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_MASK 0x0000FFFF
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_SHIFT 0
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_10M_FULL 0x00000001
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_10M_HALF 0x00000002
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_100M_HALF 0x00000004
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_100M_FULL 0x00000008
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_1G 0x00000010
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_2_5G 0x00000020
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_10G 0x00000040
#define PORT_HW_CFG_SPEED_CAPABILITY2_D3_20G 0x00000080
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_MASK 0xFFFF0000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_SHIFT 16
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_10M_FULL 0x00010000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_10M_HALF 0x00020000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_100M_HALF 0x00040000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_100M_FULL 0x00080000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_1G 0x00100000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_2_5G 0x00200000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_10G 0x00400000
#define PORT_HW_CFG_SPEED_CAPABILITY2_D0_20G 0x00800000
/* In the case where two media types (e.g. copper and fiber) are
present and electrically active at the same time, PHY Selection
will determine which of the two PHYs will be designated as the
Active PHY and used for a connection to the network. */
uint32_t multi_phy_config; /* 0x290 */
#define PORT_HW_CFG_PHY_SELECTION_MASK 0x00000007
#define PORT_HW_CFG_PHY_SELECTION_SHIFT 0
#define PORT_HW_CFG_PHY_SELECTION_HARDWARE_DEFAULT 0x00000000
#define PORT_HW_CFG_PHY_SELECTION_FIRST_PHY 0x00000001
#define PORT_HW_CFG_PHY_SELECTION_SECOND_PHY 0x00000002
#define PORT_HW_CFG_PHY_SELECTION_FIRST_PHY_PRIORITY 0x00000003
#define PORT_HW_CFG_PHY_SELECTION_SECOND_PHY_PRIORITY 0x00000004
/* When enabled, all second phy nvram parameters will be swapped
with the first phy parameters */
#define PORT_HW_CFG_PHY_SWAPPED_MASK 0x00000008
#define PORT_HW_CFG_PHY_SWAPPED_SHIFT 3
#define PORT_HW_CFG_PHY_SWAPPED_DISABLED 0x00000000
#define PORT_HW_CFG_PHY_SWAPPED_ENABLED 0x00000008
/* Address of the second external phy */
uint32_t external_phy_config2; /* 0x294 */
#define PORT_HW_CFG_XGXS_EXT_PHY2_ADDR_MASK 0x000000FF
#define PORT_HW_CFG_XGXS_EXT_PHY2_ADDR_SHIFT 0
/* The second XGXS external PHY type */
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_MASK 0x0000FF00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_SHIFT 8
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_DIRECT 0x00000000
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8071 0x00000100
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8072 0x00000200
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8073 0x00000300
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8705 0x00000400
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8706 0x00000500
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8726 0x00000600
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8481 0x00000700
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_SFX7101 0x00000800
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8727 0x00000900
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8727_NOC 0x00000a00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM84823 0x00000b00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM54640 0x00000c00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM84833 0x00000d00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM54618SE 0x00000e00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM8722 0x00000f00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM54616 0x00001000
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM84834 0x00001100
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_BCM84858 0x00001200
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_FAILURE 0x0000fd00
#define PORT_HW_CFG_XGXS_EXT_PHY2_TYPE_NOT_CONN 0x0000ff00
/* 4 times 16 bits for all 4 lanes. For some external PHYs (such as
8706, 8726 and 8727) not all 4 values are needed. */
uint16_t xgxs_config2_rx[4]; /* 0x296 */
uint16_t xgxs_config2_tx[4]; /* 0x2A0 */
uint32_t lane_config;
#define PORT_HW_CFG_LANE_SWAP_CFG_MASK 0x0000FFFF
#define PORT_HW_CFG_LANE_SWAP_CFG_SHIFT 0
/* AN and forced */
#define PORT_HW_CFG_LANE_SWAP_CFG_01230123 0x00001b1b
/* forced only */
#define PORT_HW_CFG_LANE_SWAP_CFG_01233210 0x00001be4
/* forced only */
#define PORT_HW_CFG_LANE_SWAP_CFG_31203120 0x0000d8d8
/* forced only */
#define PORT_HW_CFG_LANE_SWAP_CFG_32103210 0x0000e4e4
#define PORT_HW_CFG_LANE_SWAP_CFG_TX_MASK 0x000000FF
#define PORT_HW_CFG_LANE_SWAP_CFG_TX_SHIFT 0
#define PORT_HW_CFG_LANE_SWAP_CFG_RX_MASK 0x0000FF00
#define PORT_HW_CFG_LANE_SWAP_CFG_RX_SHIFT 8
#define PORT_HW_CFG_LANE_SWAP_CFG_MASTER_MASK 0x0000C000
#define PORT_HW_CFG_LANE_SWAP_CFG_MASTER_SHIFT 14
/* Indicate whether to swap the external phy polarity */
#define PORT_HW_CFG_SWAP_PHY_POLARITY_MASK 0x00010000
#define PORT_HW_CFG_SWAP_PHY_POLARITY_DISABLED 0x00000000
#define PORT_HW_CFG_SWAP_PHY_POLARITY_ENABLED 0x00010000
uint32_t external_phy_config;
#define PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK 0x000000FF
#define PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT 0
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_MASK 0x0000FF00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SHIFT 8
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT 0x00000000
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8071 0x00000100
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072 0x00000200
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8073 0x00000300
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705 0x00000400
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706 0x00000500
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8726 0x00000600
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8481 0x00000700
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101 0x00000800
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8727 0x00000900
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8727_NOC 0x00000a00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84823 0x00000b00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM54640 0x00000c00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84833 0x00000d00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM54618SE 0x00000e00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8722 0x00000f00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM54616 0x00001000
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84834 0x00001100
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858 0x00001200
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT_WC 0x0000fc00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_FAILURE 0x0000fd00
#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_NOT_CONN 0x0000ff00
#define PORT_HW_CFG_SERDES_EXT_PHY_ADDR_MASK 0x00FF0000
#define PORT_HW_CFG_SERDES_EXT_PHY_ADDR_SHIFT 16
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_MASK 0xFF000000
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_SHIFT 24
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_DIRECT 0x00000000
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_BCM5482 0x01000000
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_DIRECT_SD 0x02000000
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_NOT_CONN 0xff000000
uint32_t speed_capability_mask;
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_MASK 0x0000FFFF
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_SHIFT 0
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_10M_FULL 0x00000001
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_10M_HALF 0x00000002
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_100M_HALF 0x00000004
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_100M_FULL 0x00000008
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_1G 0x00000010
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_2_5G 0x00000020
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_10G 0x00000040
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_20G 0x00000080
#define PORT_HW_CFG_SPEED_CAPABILITY_D3_RESERVED 0x0000f000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_MASK 0xFFFF0000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_SHIFT 16
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL 0x00010000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF 0x00020000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF 0x00040000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL 0x00080000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_1G 0x00100000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_2_5G 0x00200000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_10G 0x00400000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_20G 0x00800000
#define PORT_HW_CFG_SPEED_CAPABILITY_D0_RESERVED 0xf0000000
/* A place to hold the original MAC address as a backup */
uint32_t backup_mac_upper; /* 0x2B4 */
uint32_t backup_mac_lower; /* 0x2B8 */
};
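/*
 * Illustrative sketch (not part of the original HSI): every multi-bit
 * field in these config words pairs a _MASK with a _SHIFT and is read
 * with the usual (value & MASK) >> SHIFT idiom, e.g. the Rx lane swap
 * map of lane_swap_cfg:
 */
static inline uint32_t
example_get_rx_lane_swap(uint32_t lane_swap_cfg)
{
	return ((lane_swap_cfg & PORT_HW_CFG_LANE_SWAP_CFG_RX_MASK) >>
	    PORT_HW_CFG_LANE_SWAP_CFG_RX_SHIFT);
}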
/****************************************************************************
* Shared Feature configuration *
****************************************************************************/
struct shared_feat_cfg { /* NVRAM Offset */
uint32_t config; /* 0x450 */
#define SHARED_FEATURE_BMC_ECHO_MODE_EN 0x00000001
/* Use NVRAM values instead of HW default values */
#define SHARED_FEAT_CFG_OVERRIDE_PREEMPHASIS_CFG_MASK \
0x00000002
#define SHARED_FEAT_CFG_OVERRIDE_PREEMPHASIS_CFG_DISABLED \
0x00000000
#define SHARED_FEAT_CFG_OVERRIDE_PREEMPHASIS_CFG_ENABLED \
0x00000002
#define SHARED_FEAT_CFG_NCSI_ID_METHOD_MASK 0x00000008
#define SHARED_FEAT_CFG_NCSI_ID_METHOD_SPIO 0x00000000
#define SHARED_FEAT_CFG_NCSI_ID_METHOD_NVRAM 0x00000008
#define SHARED_FEAT_CFG_NCSI_ID_MASK 0x00000030
#define SHARED_FEAT_CFG_NCSI_ID_SHIFT 4
/* Override the OTP back to single function mode. When using GPIO,
high means only SF, 0 is according to CLP configuration */
#define SHARED_FEAT_CFG_FORCE_SF_MODE_MASK 0x00000700
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SHIFT 8
#define SHARED_FEAT_CFG_FORCE_SF_MODE_MF_ALLOWED 0x00000000
#define SHARED_FEAT_CFG_FORCE_SF_MODE_FORCED_SF 0x00000100
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SPIO4 0x00000200
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SWITCH_INDEPT 0x00000300
#define SHARED_FEAT_CFG_FORCE_SF_MODE_AFEX_MODE 0x00000400
#define SHARED_FEAT_CFG_FORCE_SF_MODE_BD_MODE 0x00000500
#define SHARED_FEAT_CFG_FORCE_SF_MODE_UFP_MODE 0x00000600
#define SHARED_FEAT_CFG_FORCE_SF_MODE_EXTENDED_MODE 0x00000700
/* Act as if the FCoE license is invalid */
#define SHARED_FEAT_CFG_PREVENT_FCOE 0x00001000
/* Force FLR capability to all ports */
#define SHARED_FEAT_CFG_FORCE_FLR_CAPABILITY 0x00002000
/* Act as if the iSCSI license is invalid */
#define SHARED_FEAT_CFG_PREVENT_ISCSI_MASK 0x00004000
#define SHARED_FEAT_CFG_PREVENT_ISCSI_SHIFT 14
#define SHARED_FEAT_CFG_PREVENT_ISCSI_DISABLED 0x00000000
#define SHARED_FEAT_CFG_PREVENT_ISCSI_ENABLED 0x00004000
/* The interval in seconds between sending LLDP packets. Set to zero
to disable the feature */
#define SHARED_FEAT_CFG_LLDP_XMIT_INTERVAL_MASK 0x00FF0000
#define SHARED_FEAT_CFG_LLDP_XMIT_INTERVAL_SHIFT 16
/* The assigned device type ID for LLDP usage */
#define SHARED_FEAT_CFG_LLDP_DEVICE_TYPE_ID_MASK 0xFF000000
#define SHARED_FEAT_CFG_LLDP_DEVICE_TYPE_ID_SHIFT 24
};
/****************************************************************************
* Port Feature configuration *
****************************************************************************/
struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */
uint32_t config;
#define PORT_FEAT_CFG_BAR1_SIZE_MASK 0x0000000F
#define PORT_FEAT_CFG_BAR1_SIZE_SHIFT 0
#define PORT_FEAT_CFG_BAR1_SIZE_DISABLED 0x00000000
#define PORT_FEAT_CFG_BAR1_SIZE_64K 0x00000001
#define PORT_FEAT_CFG_BAR1_SIZE_128K 0x00000002
#define PORT_FEAT_CFG_BAR1_SIZE_256K 0x00000003
#define PORT_FEAT_CFG_BAR1_SIZE_512K 0x00000004
#define PORT_FEAT_CFG_BAR1_SIZE_1M 0x00000005
#define PORT_FEAT_CFG_BAR1_SIZE_2M 0x00000006
#define PORT_FEAT_CFG_BAR1_SIZE_4M 0x00000007
#define PORT_FEAT_CFG_BAR1_SIZE_8M 0x00000008
#define PORT_FEAT_CFG_BAR1_SIZE_16M 0x00000009
#define PORT_FEAT_CFG_BAR1_SIZE_32M 0x0000000a
#define PORT_FEAT_CFG_BAR1_SIZE_64M 0x0000000b
#define PORT_FEAT_CFG_BAR1_SIZE_128M 0x0000000c
#define PORT_FEAT_CFG_BAR1_SIZE_256M 0x0000000d
#define PORT_FEAT_CFG_BAR1_SIZE_512M 0x0000000e
#define PORT_FEAT_CFG_BAR1_SIZE_1G 0x0000000f
#define PORT_FEAT_CFG_BAR2_SIZE_MASK 0x000000F0
#define PORT_FEAT_CFG_BAR2_SIZE_SHIFT 4
#define PORT_FEAT_CFG_BAR2_SIZE_DISABLED 0x00000000
#define PORT_FEAT_CFG_BAR2_SIZE_64K 0x00000010
#define PORT_FEAT_CFG_BAR2_SIZE_128K 0x00000020
#define PORT_FEAT_CFG_BAR2_SIZE_256K 0x00000030
#define PORT_FEAT_CFG_BAR2_SIZE_512K 0x00000040
#define PORT_FEAT_CFG_BAR2_SIZE_1M 0x00000050
#define PORT_FEAT_CFG_BAR2_SIZE_2M 0x00000060
#define PORT_FEAT_CFG_BAR2_SIZE_4M 0x00000070
#define PORT_FEAT_CFG_BAR2_SIZE_8M 0x00000080
#define PORT_FEAT_CFG_BAR2_SIZE_16M 0x00000090
#define PORT_FEAT_CFG_BAR2_SIZE_32M 0x000000a0
#define PORT_FEAT_CFG_BAR2_SIZE_64M 0x000000b0
#define PORT_FEAT_CFG_BAR2_SIZE_128M 0x000000c0
#define PORT_FEAT_CFG_BAR2_SIZE_256M 0x000000d0
#define PORT_FEAT_CFG_BAR2_SIZE_512M 0x000000e0
#define PORT_FEAT_CFG_BAR2_SIZE_1G 0x000000f0
#define PORT_FEAT_CFG_DCBX_MASK 0x00000100
#define PORT_FEAT_CFG_DCBX_DISABLED 0x00000000
#define PORT_FEAT_CFG_DCBX_ENABLED 0x00000100
#define PORT_FEAT_CFG_AUTOGREEEN_MASK 0x00000200
#define PORT_FEAT_CFG_AUTOGREEEN_SHIFT 9
#define PORT_FEAT_CFG_AUTOGREEEN_DISABLED 0x00000000
#define PORT_FEAT_CFG_AUTOGREEEN_ENABLED 0x00000200
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_MASK 0x00000C00
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_SHIFT 10
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_DEFAULT 0x00000000
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_FCOE 0x00000400
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_ISCSI 0x00000800
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_BOTH 0x00000c00
#define PORT_FEATURE_EN_SIZE_MASK 0x0f000000
#define PORT_FEATURE_EN_SIZE_SHIFT 24
#define PORT_FEATURE_WOL_ENABLED 0x01000000
#define PORT_FEATURE_MBA_ENABLED 0x02000000
#define PORT_FEATURE_MFW_ENABLED 0x04000000
/* Advertise expansion ROM even if MBA is disabled */
#define PORT_FEAT_CFG_FORCE_EXP_ROM_ADV_MASK 0x08000000
#define PORT_FEAT_CFG_FORCE_EXP_ROM_ADV_DISABLED 0x00000000
#define PORT_FEAT_CFG_FORCE_EXP_ROM_ADV_ENABLED 0x08000000
/* Check the optic vendor via i2c against a list of approved modules
in a separate nvram image */
#define PORT_FEAT_CFG_OPT_MDL_ENFRCMNT_MASK 0xE0000000
#define PORT_FEAT_CFG_OPT_MDL_ENFRCMNT_SHIFT 29
#define PORT_FEAT_CFG_OPT_MDL_ENFRCMNT_NO_ENFORCEMENT \
0x00000000
#define PORT_FEAT_CFG_OPT_MDL_ENFRCMNT_DISABLE_TX_LASER \
0x20000000
#define PORT_FEAT_CFG_OPT_MDL_ENFRCMNT_WARNING_MSG 0x40000000
#define PORT_FEAT_CFG_OPT_MDL_ENFRCMNT_POWER_DOWN 0x60000000
uint32_t wol_config;
/* The default is used when the driver is set to "auto" mode */
#define PORT_FEATURE_WOL_ACPI_UPON_MGMT 0x00000010
uint32_t mba_config;
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_MASK 0x00000007
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_SHIFT 0
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_PXE 0x00000000
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_RPL 0x00000001
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_BOOTP 0x00000002
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_ISCSIB 0x00000003
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_FCOE_BOOT 0x00000004
#define PORT_FEATURE_MBA_BOOT_AGENT_TYPE_NONE 0x00000007
#define PORT_FEATURE_MBA_BOOT_RETRY_MASK 0x00000038
#define PORT_FEATURE_MBA_BOOT_RETRY_SHIFT 3
#define PORT_FEATURE_MBA_SETUP_PROMPT_ENABLE 0x00000400
#define PORT_FEATURE_MBA_HOTKEY_MASK 0x00000800
#define PORT_FEATURE_MBA_HOTKEY_CTRL_S 0x00000000
#define PORT_FEATURE_MBA_HOTKEY_CTRL_B 0x00000800
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_MASK 0x000FF000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_SHIFT 12
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_DISABLED 0x00000000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_2K 0x00001000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_4K 0x00002000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_8K 0x00003000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_16K 0x00004000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_32K 0x00005000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_64K 0x00006000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_128K 0x00007000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_256K 0x00008000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_512K 0x00009000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_1M 0x0000a000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_2M 0x0000b000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_4M 0x0000c000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_8M 0x0000d000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_16M 0x0000e000
#define PORT_FEATURE_MBA_EXP_ROM_SIZE_32M 0x0000f000
#define PORT_FEATURE_MBA_MSG_TIMEOUT_MASK 0x00F00000
#define PORT_FEATURE_MBA_MSG_TIMEOUT_SHIFT 20
#define PORT_FEATURE_MBA_BIOS_BOOTSTRAP_MASK 0x03000000
#define PORT_FEATURE_MBA_BIOS_BOOTSTRAP_SHIFT 24
#define PORT_FEATURE_MBA_BIOS_BOOTSTRAP_AUTO 0x00000000
#define PORT_FEATURE_MBA_BIOS_BOOTSTRAP_BBS 0x01000000
#define PORT_FEATURE_MBA_BIOS_BOOTSTRAP_INT18H 0x02000000
#define PORT_FEATURE_MBA_BIOS_BOOTSTRAP_INT19H 0x03000000
#define PORT_FEATURE_MBA_LINK_SPEED_MASK 0x3C000000
#define PORT_FEATURE_MBA_LINK_SPEED_SHIFT 26
#define PORT_FEATURE_MBA_LINK_SPEED_AUTO 0x00000000
#define PORT_FEATURE_MBA_LINK_SPEED_10M_HALF 0x04000000
#define PORT_FEATURE_MBA_LINK_SPEED_10M_FULL 0x08000000
#define PORT_FEATURE_MBA_LINK_SPEED_100M_HALF 0x0c000000
#define PORT_FEATURE_MBA_LINK_SPEED_100M_FULL 0x10000000
#define PORT_FEATURE_MBA_LINK_SPEED_1G 0x14000000
#define PORT_FEATURE_MBA_LINK_SPEED_2_5G 0x18000000
#define PORT_FEATURE_MBA_LINK_SPEED_10G 0x1c000000
#define PORT_FEATURE_MBA_LINK_SPEED_20G 0x20000000
uint32_t Reserved0; /* 0x460 */
uint32_t mba_vlan_cfg;
#define PORT_FEATURE_MBA_VLAN_TAG_MASK 0x0000FFFF
#define PORT_FEATURE_MBA_VLAN_TAG_SHIFT 0
#define PORT_FEATURE_MBA_VLAN_EN 0x00010000
#define PORT_FEATUTE_BOFM_CFGD_EN 0x00020000
#define PORT_FEATURE_BOFM_CFGD_FTGT 0x00040000
#define PORT_FEATURE_BOFM_CFGD_VEN 0x00080000
uint32_t Reserved1;
uint32_t smbus_config;
#define PORT_FEATURE_SMBUS_ADDR_MASK 0x000000fe
#define PORT_FEATURE_SMBUS_ADDR_SHIFT 1
uint32_t vf_config;
#define PORT_FEAT_CFG_VF_BAR2_SIZE_MASK 0x0000000F
#define PORT_FEAT_CFG_VF_BAR2_SIZE_SHIFT 0
#define PORT_FEAT_CFG_VF_BAR2_SIZE_DISABLED 0x00000000
#define PORT_FEAT_CFG_VF_BAR2_SIZE_4K 0x00000001
#define PORT_FEAT_CFG_VF_BAR2_SIZE_8K 0x00000002
#define PORT_FEAT_CFG_VF_BAR2_SIZE_16K 0x00000003
#define PORT_FEAT_CFG_VF_BAR2_SIZE_32K 0x00000004
#define PORT_FEAT_CFG_VF_BAR2_SIZE_64K 0x00000005
#define PORT_FEAT_CFG_VF_BAR2_SIZE_128K 0x00000006
#define PORT_FEAT_CFG_VF_BAR2_SIZE_256K 0x00000007
#define PORT_FEAT_CFG_VF_BAR2_SIZE_512K 0x00000008
#define PORT_FEAT_CFG_VF_BAR2_SIZE_1M 0x00000009
#define PORT_FEAT_CFG_VF_BAR2_SIZE_2M 0x0000000a
#define PORT_FEAT_CFG_VF_BAR2_SIZE_4M 0x0000000b
#define PORT_FEAT_CFG_VF_BAR2_SIZE_8M 0x0000000c
#define PORT_FEAT_CFG_VF_BAR2_SIZE_16M 0x0000000d
#define PORT_FEAT_CFG_VF_BAR2_SIZE_32M 0x0000000e
#define PORT_FEAT_CFG_VF_BAR2_SIZE_64M 0x0000000f
uint32_t link_config; /* Used as HW defaults for the driver */
#define PORT_FEATURE_FLOW_CONTROL_MASK 0x00000700
#define PORT_FEATURE_FLOW_CONTROL_SHIFT 8
#define PORT_FEATURE_FLOW_CONTROL_AUTO 0x00000000
#define PORT_FEATURE_FLOW_CONTROL_TX 0x00000100
#define PORT_FEATURE_FLOW_CONTROL_RX 0x00000200
#define PORT_FEATURE_FLOW_CONTROL_BOTH 0x00000300
#define PORT_FEATURE_FLOW_CONTROL_NONE 0x00000400
#define PORT_FEATURE_FLOW_CONTROL_SAFC_RX 0x00000500
#define PORT_FEATURE_FLOW_CONTROL_SAFC_TX 0x00000600
#define PORT_FEATURE_FLOW_CONTROL_SAFC_BOTH 0x00000700
#define PORT_FEATURE_LINK_SPEED_MASK 0x000F0000
#define PORT_FEATURE_LINK_SPEED_SHIFT 16
#define PORT_FEATURE_LINK_SPEED_AUTO 0x00000000
#define PORT_FEATURE_LINK_SPEED_10M_HALF 0x00010000
#define PORT_FEATURE_LINK_SPEED_10M_FULL 0x00020000
#define PORT_FEATURE_LINK_SPEED_100M_HALF 0x00030000
#define PORT_FEATURE_LINK_SPEED_100M_FULL 0x00040000
#define PORT_FEATURE_LINK_SPEED_1G 0x00050000
#define PORT_FEATURE_LINK_SPEED_2_5G 0x00060000
#define PORT_FEATURE_LINK_SPEED_10G_CX4 0x00070000
#define PORT_FEATURE_LINK_SPEED_20G 0x00080000
#define PORT_FEATURE_CONNECTED_SWITCH_MASK 0x03000000
#define PORT_FEATURE_CONNECTED_SWITCH_SHIFT 24
/* (forced) low speed switch (< 10G) */
#define PORT_FEATURE_CON_SWITCH_1G_SWITCH 0x00000000
/* (forced) high speed switch (>= 10G) */
#define PORT_FEATURE_CON_SWITCH_10G_SWITCH 0x01000000
#define PORT_FEATURE_CON_SWITCH_AUTO_DETECT 0x02000000
#define PORT_FEATURE_CON_SWITCH_ONE_TIME_DETECT 0x03000000
/* The default for MCP link configuration,
uses the same defines as link_config */
uint32_t mfw_wol_link_cfg;
/* The default for the driver of the second external phy,
uses the same defines as link_config */
uint32_t link_config2; /* 0x47C */
/* The default for MCP of the second external phy,
uses the same defines as link_config */
uint32_t mfw_wol_link_cfg2; /* 0x480 */
/* EEE power saving mode */
uint32_t eee_power_mode; /* 0x484 */
#define PORT_FEAT_CFG_EEE_POWER_MODE_MASK 0x000000FF
#define PORT_FEAT_CFG_EEE_POWER_MODE_SHIFT 0
#define PORT_FEAT_CFG_EEE_POWER_MODE_DISABLED 0x00000000
#define PORT_FEAT_CFG_EEE_POWER_MODE_BALANCED 0x00000001
#define PORT_FEAT_CFG_EEE_POWER_MODE_AGGRESSIVE 0x00000002
#define PORT_FEAT_CFG_EEE_POWER_MODE_LOW_LATENCY 0x00000003
uint32_t Reserved2[16]; /* 0x48C */
};
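/*
 * Illustrative sketch (not part of the original HSI): the BAR1 size
 * encoding above is a power-of-two series, 64K << (n - 1) for n in
 * 1..15 (equivalently 32K << n), with 0 meaning disabled:
 */
static inline uint64_t
example_bar1_size_bytes(uint32_t config)
{
	uint32_t n = ((config & PORT_FEAT_CFG_BAR1_SIZE_MASK) >>
	    PORT_FEAT_CFG_BAR1_SIZE_SHIFT);

	return (n == 0 ? 0 : (32768ULL << n));
}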
/****************************************************************************
* Device Information *
****************************************************************************/
struct shm_dev_info { /* size */
uint32_t bc_rev; /* 8 bits each: major, minor, build */ /* 4 */
struct shared_hw_cfg shared_hw_config; /* 40 */
struct port_hw_cfg port_hw_config[PORT_MAX]; /* 400*2=800 */
struct shared_feat_cfg shared_feature_config; /* 4 */
struct port_feat_cfg port_feature_config[PORT_MAX];/* 116*2=232 */
};
struct extended_dev_info_shared_cfg { /* NVRAM OFFSET */
/* Threshold in Celsius to start using the fan */
uint32_t temperature_monitor1; /* 0x4000 */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_THRESH_MASK 0x0000007F
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_THRESH_SHIFT 0
/* Threshold in Celsius to shut down the board */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_THRESH_MASK 0x00007F00
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_THRESH_SHIFT 8
/* EPIO of fan temperature status */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_MASK 0x00FF0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_SHIFT 16
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_NA 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO0 0x00010000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO1 0x00020000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO2 0x00030000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO3 0x00040000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO4 0x00050000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO5 0x00060000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO6 0x00070000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO7 0x00080000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO8 0x00090000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO9 0x000a0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO10 0x000b0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO11 0x000c0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO12 0x000d0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO13 0x000e0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO14 0x000f0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO15 0x00100000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO16 0x00110000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO17 0x00120000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO18 0x00130000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO19 0x00140000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO20 0x00150000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO21 0x00160000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO22 0x00170000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO23 0x00180000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO24 0x00190000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO25 0x001a0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO26 0x001b0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO27 0x001c0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO28 0x001d0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO29 0x001e0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO30 0x001f0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_EPIO_EPIO31 0x00200000
/* EPIO of shut down temperature status */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_MASK 0xFF000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_SHIFT 24
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_NA 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO0 0x01000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO1 0x02000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO2 0x03000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO3 0x04000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO4 0x05000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO5 0x06000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO6 0x07000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO7 0x08000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO8 0x09000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO9 0x0a000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO10 0x0b000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO11 0x0c000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO12 0x0d000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO13 0x0e000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO14 0x0f000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO15 0x10000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO16 0x11000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO17 0x12000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO18 0x13000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO19 0x14000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO20 0x15000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO21 0x16000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO22 0x17000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO23 0x18000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO24 0x19000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO25 0x1a000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO26 0x1b000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO27 0x1c000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO28 0x1d000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO29 0x1e000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO30 0x1f000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_EPIO_EPIO31 0x20000000
/* Temperature monitoring period and sensor interface */
uint32_t temperature_monitor2; /* 0x4004 */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_PERIOD_MASK 0x0000FFFF
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_PERIOD_SHIFT 0
/* Sensor interface - Disabled / BSC / In the future - SMBUS */
#define EXTENDED_DEV_INFO_SHARED_CFG_SENSOR_INTERFACE_MASK 0x00030000
#define EXTENDED_DEV_INFO_SHARED_CFG_SENSOR_INTERFACE_SHIFT 16
#define EXTENDED_DEV_INFO_SHARED_CFG_SENSOR_INTERFACE_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_SENSOR_INTERFACE_BSC 0x00010000
/* On Board Sensor Address */
#define EXTENDED_DEV_INFO_SHARED_CFG_SENSOR_ADDR_MASK 0x03FC0000
#define EXTENDED_DEV_INFO_SHARED_CFG_SENSOR_ADDR_SHIFT 18
/* MFW flavor to be used */
uint32_t mfw_cfg; /* 0x4008 */
#define EXTENDED_DEV_INFO_SHARED_CFG_MFW_FLAVOR_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_MFW_FLAVOR_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_MFW_FLAVOR_NA 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_MFW_FLAVOR_A 0x00000001
/* Should NIC data query remain enabled upon last drv unload */
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_EN_LAST_DRV_MASK 0x00000100
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_EN_LAST_DRV_SHIFT 8
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_EN_LAST_DRV_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_EN_LAST_DRV_ENABLED 0x00000100
/* Prevent OCBB feature */
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_PREVENT_MASK 0x00000200
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_PREVENT_SHIFT 9
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_PREVENT_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_OCBB_PREVENT_ENABLED 0x00000200
/* Enable DCi support */
#define EXTENDED_DEV_INFO_SHARED_CFG_DCI_SUPPORT_MASK 0x00000400
#define EXTENDED_DEV_INFO_SHARED_CFG_DCI_SUPPORT_SHIFT 10
#define EXTENDED_DEV_INFO_SHARED_CFG_DCI_SUPPORT_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_DCI_SUPPORT_ENABLED 0x00000400
/* Reserved bits: 75-76 */
/* Hide DCBX feature in CCM/BACS menus */
#define EXTENDED_DEV_INFO_SHARED_CFG_HIDE_DCBX_FEAT_MASK 0x00010000
#define EXTENDED_DEV_INFO_SHARED_CFG_HIDE_DCBX_FEAT_SHIFT 16
#define EXTENDED_DEV_INFO_SHARED_CFG_HIDE_DCBX_FEAT_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_HIDE_DCBX_FEAT_ENABLED 0x00010000
uint32_t smbus_config; /* 0x400C */
#define EXTENDED_DEV_INFO_SHARED_CFG_SMBUS_ADDR_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_SMBUS_ADDR_SHIFT 0
/* Switching regulator loop gain */
uint32_t board_cfg; /* 0x4010 */
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_MASK 0x0000000F
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_HW_DEFAULT 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_X2 0x00000008
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_X4 0x00000009
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_X8 0x0000000a
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_X16 0x0000000b
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_DIV8 0x0000000c
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_DIV4 0x0000000d
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_DIV2 0x0000000e
#define EXTENDED_DEV_INFO_SHARED_CFG_LOOP_GAIN_X1 0x0000000f
/* whether shadow swim feature is supported */
#define EXTENDED_DEV_INFO_SHARED_CFG_SHADOW_SWIM_MASK 0x00000100
#define EXTENDED_DEV_INFO_SHARED_CFG_SHADOW_SWIM_SHIFT 8
#define EXTENDED_DEV_INFO_SHARED_CFG_SHADOW_SWIM_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_SHADOW_SWIM_ENABLED 0x00000100
/* whether to show/hide SRIOV menu in CCM */
#define EXTENDED_DEV_INFO_SHARED_CFG_SRIOV_SHOW_MENU_MASK 0x00000200
#define EXTENDED_DEV_INFO_SHARED_CFG_SRIOV_SHOW_MENU_SHIFT 9
#define EXTENDED_DEV_INFO_SHARED_CFG_SRIOV_SHOW_MENU 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_SRIOV_HIDE_MENU 0x00000200
/* Override PCIe revision ID. When enabled, the
 revision ID will be set to B1=='0x11' */
#define EXTENDED_DEV_INFO_SHARED_CFG_OVR_REV_ID_MASK 0x00000400
#define EXTENDED_DEV_INFO_SHARED_CFG_OVR_REV_ID_SHIFT 10
#define EXTENDED_DEV_INFO_SHARED_CFG_OVR_REV_ID_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_OVR_REV_ID_ENABLED 0x00000400
/* Bypass slicer offset tuning */
#define EXTENDED_DEV_INFO_SHARED_CFG_BYPASS_SLICER_MASK 0x00000800
#define EXTENDED_DEV_INFO_SHARED_CFG_BYPASS_SLICER_SHIFT 11
#define EXTENDED_DEV_INFO_SHARED_CFG_BYPASS_SLICER_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_BYPASS_SLICER_ENABLED 0x00000800
/* Control Revision ID */
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_MASK 0x00003000
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_SHIFT 12
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_PRESERVE 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_ACTUAL 0x00001000
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_FORCE_B0 0x00002000
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_FORCE_B1 0x00003000
/* Threshold in Celsius for max continuous operation */
uint32_t temperature_report; /* 0x4014 */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_MCOT_MASK 0x0000007F
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_MCOT_SHIFT 0
/* Threshold in Celsius for sensor caution */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SCT_MASK 0x00007F00
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SCT_SHIFT 8
/* wwn node prefix to be used (unless value is 0) */
uint32_t wwn_prefix; /* 0x4018 */
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_NODE_PREFIX0_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_NODE_PREFIX0_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_NODE_PREFIX1_MASK 0x0000FF00
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_NODE_PREFIX1_SHIFT 8
/* wwn port prefix to be used (unless value is 0) */
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_PORT_PREFIX0_MASK 0x00FF0000
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_PORT_PREFIX0_SHIFT 16
/* wwn port prefix to be used (unless value is 0) */
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_PORT_PREFIX1_MASK 0xFF000000
#define EXTENDED_DEV_INFO_SHARED_CFG_WWN_PORT_PREFIX1_SHIFT 24
/* General debug nvm cfg */
uint32_t dbg_cfg_flags; /* 0x401C */
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_MASK 0x000FFFFF
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_ENABLE 0x00000001
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_EN_SIGDET_FILTER 0x00000002
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SET_LP_TX_PRESET7 0x00000004
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SET_TX_ANA_DEFAULT 0x00000008
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SET_PLL_ANA_DEFAULT 0x00000010
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_FORCE_G1PLL_RETUNE 0x00000020
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SET_RX_ANA_DEFAULT 0x00000040
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_FORCE_SERDES_RX_CLK 0x00000080
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_DIS_RX_LP_EIEOS 0x00000100
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_FINALIZE_UCODE 0x00000200
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_HOLDOFF_REQ 0x00000400
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_RX_SIGDET_OVERRIDE 0x00000800
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_GP_PORG_UC_RESET 0x00001000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SUPPRESS_COMPEN_EVT 0x00002000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_ADJ_TXEQ_P0_P1 0x00004000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_G3_PLL_RETUNE 0x00008000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_SET_MAC_PHY_CTL8 0x00010000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_DIS_MAC_G3_FRM_ERR 0x00020000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_INFERRED_EI 0x00040000
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_GEN3_COMPLI_ENA 0x00080000
/* Override Rx signal detect threshold; when enabled, the threshold
 * will be set statically
 */
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_MASK 0x00100000
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_SHIFT 20
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_ENABLED 0x00100000
/* Debug sigdet Rx threshold */
uint32_t dbg_rx_sigdet_threshold; /* 0x4020 */
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_RX_SIGDET_MASK 0x00000007
#define EXTENDED_DEV_INFO_SHARED_CFG_DBG_RX_SIGDET_SHIFT 0
/* Enable IFFE feature */
uint32_t iffe_features; /* 0x4024 */
#define EXTENDED_DEV_INFO_SHARED_CFG_ENABLE_IFFE_MASK 0x00000001
#define EXTENDED_DEV_INFO_SHARED_CFG_ENABLE_IFFE_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_ENABLE_IFFE_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_ENABLE_IFFE_ENABLED 0x00000001
/* Allowable port enablement (bitmask for ports 3-1) */
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_PORT_MASK 0x0000000E
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_PORT_SHIFT 1
/* Allow iSCSI offload override */
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_ISCSI_MASK 0x00000010
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_ISCSI_SHIFT 4
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_ISCSI_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_ISCSI_ENABLED 0x00000010
/* Allow FCoE offload override */
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_FCOE_MASK 0x00000020
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_FCOE_SHIFT 5
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_FCOE_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_FCOE_ENABLED 0x00000020
/* Tie to adaptor */
#define EXTENDED_DEV_INFO_SHARED_CFG_TIE_ADAPTOR_MASK 0x00008000
#define EXTENDED_DEV_INFO_SHARED_CFG_TIE_ADAPTOR_SHIFT 15
#define EXTENDED_DEV_INFO_SHARED_CFG_TIE_ADAPTOR_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_TIE_ADAPTOR_ENABLED 0x00008000
/* Currently enabled port(s) (bitmask for ports 3-1) */
uint32_t current_iffe_mask; /* 0x4028 */
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_CFG_MASK 0x0000000E
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_CFG_SHIFT 1
/* Current iSCSI offload */
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_ISCSI_MASK 0x00000010
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_ISCSI_SHIFT 4
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_ISCSI_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_ISCSI_ENABLED 0x00000010
/* Current FCoE offload */
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_FCOE_MASK 0x00000020
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_FCOE_SHIFT 5
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_FCOE_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_CURRENT_FCOE_ENABLED 0x00000020
/* FW asserts (sets to "0") these signals if either of its MAC
 * or PHY specific threshold values is exceeded.
* Values are standard GPIO/EPIO pins.
*/
uint32_t threshold_pin; /* 0x402C */
#define EXTENDED_DEV_INFO_SHARED_CFG_TCONTROL_PIN_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_TCONTROL_PIN_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_TWARNING_PIN_MASK 0x0000FF00
#define EXTENDED_DEV_INFO_SHARED_CFG_TWARNING_PIN_SHIFT 8
#define EXTENDED_DEV_INFO_SHARED_CFG_TCRITICAL_PIN_MASK 0x00FF0000
#define EXTENDED_DEV_INFO_SHARED_CFG_TCRITICAL_PIN_SHIFT 16
/* MAC die temperature threshold in Celsius. */
uint32_t mac_threshold_val; /* 0x4030 */
#define EXTENDED_DEV_INFO_SHARED_CFG_CONTROL_MAC_THRESH_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_CONTROL_MAC_THRESH_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_WARNING_MAC_THRESH_MASK 0x0000FF00
#define EXTENDED_DEV_INFO_SHARED_CFG_WARNING_MAC_THRESH_SHIFT 8
#define EXTENDED_DEV_INFO_SHARED_CFG_CRITICAL_MAC_THRESH_MASK 0x00FF0000
#define EXTENDED_DEV_INFO_SHARED_CFG_CRITICAL_MAC_THRESH_SHIFT 16
/* PHY die temperature threshold in Celsius. */
uint32_t phy_threshold_val; /* 0x4034 */
#define EXTENDED_DEV_INFO_SHARED_CFG_CONTROL_PHY_THRESH_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_CONTROL_PHY_THRESH_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_WARNING_PHY_THRESH_MASK 0x0000FF00
#define EXTENDED_DEV_INFO_SHARED_CFG_WARNING_PHY_THRESH_SHIFT 8
#define EXTENDED_DEV_INFO_SHARED_CFG_CRITICAL_PHY_THRESH_MASK 0x00FF0000
#define EXTENDED_DEV_INFO_SHARED_CFG_CRITICAL_PHY_THRESH_SHIFT 16
/* External pins to communicate with host.
* Values are standard GPIO/EPIO pins.
*/
uint32_t host_pin; /* 0x4038 */
#define EXTENDED_DEV_INFO_SHARED_CFG_I2C_ISOLATE_MASK 0x000000FF
#define EXTENDED_DEV_INFO_SHARED_CFG_I2C_ISOLATE_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_MEZZ_FAULT_MASK 0x0000FF00
#define EXTENDED_DEV_INFO_SHARED_CFG_MEZZ_FAULT_SHIFT 8
#define EXTENDED_DEV_INFO_SHARED_CFG_MEZZ_VPD_UPDATE_MASK 0x00FF0000
#define EXTENDED_DEV_INFO_SHARED_CFG_MEZZ_VPD_UPDATE_SHIFT 16
#define EXTENDED_DEV_INFO_SHARED_CFG_VPD_CACHE_COMP_MASK 0xFF000000
#define EXTENDED_DEV_INFO_SHARED_CFG_VPD_CACHE_COMP_SHIFT 24
/* Manufacture kit version */
uint32_t manufacture_ver; /* 0x403C */
/* Manufacture timestamp */
uint32_t manufacture_data; /* 0x4040 */
/* Number of ISCSI/FCOE cfg images */
#define EXTENDED_DEV_INFO_SHARED_CFG_NUM_ISCSI_FCOE_CFGS_MASK 0x00040000
#define EXTENDED_DEV_INFO_SHARED_CFG_NUM_ISCSI_FCOE_CFGS_SHIFT 18
#define EXTENDED_DEV_INFO_SHARED_CFG_NUM_ISCSI_FCOE_CFGS_2 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_NUM_ISCSI_FCOE_CFGS_4 0x00040000
/* MCP crash dump trigger */
uint32_t mcp_crash_dump; /* 0x4044 */
#define EXTENDED_DEV_INFO_SHARED_CFG_CRASH_DUMP_MASK 0x7FFFFFFF
#define EXTENDED_DEV_INFO_SHARED_CFG_CRASH_DUMP_SHIFT 0
#define EXTENDED_DEV_INFO_SHARED_CFG_CRASH_DUMP_DISABLED 0x00000000
#define EXTENDED_DEV_INFO_SHARED_CFG_CRASH_DUMP_ENABLED 0x00000001
/* MBI version */
uint32_t mbi_version; /* 0x4048 */
/* MBI date */
uint32_t mbi_date; /* 0x404C */
};
#if !defined(__LITTLE_ENDIAN) && !defined(__BIG_ENDIAN)
#error "Missing either LITTLE_ENDIAN or BIG_ENDIAN definition."
#endif
#define FUNC_0 0
#define FUNC_1 1
#define FUNC_2 2
#define FUNC_3 3
#define FUNC_4 4
#define FUNC_5 5
#define FUNC_6 6
#define FUNC_7 7
#define E1_FUNC_MAX 2
#define E1H_FUNC_MAX 8
#define E2_FUNC_MAX 4 /* per path */
#define VN_0 0
#define VN_1 1
#define VN_2 2
#define VN_3 3
#define E1VN_MAX 1
#define E1HVN_MAX 4
#define E2_VF_MAX 64 /* HC_REG_VF_CONFIGURATION_SIZE */
/* This value (in milliseconds) determines the frequency of the driver
* issuing the PULSE message code. The firmware monitors this periodic
* pulse to determine when to switch to an OS-absent mode. */
#define DRV_PULSE_PERIOD_MS 250
/* This value (in milliseconds) determines how long the driver should
* wait for an acknowledgement from the firmware before timing out. Once
* the firmware has timed out, the driver will assume there is no firmware
* running and there won't be any firmware-driver synchronization during a
* driver reset. */
#define FW_ACK_TIME_OUT_MS 5000
#define FW_ACK_POLL_TIME_MS 1
#define FW_ACK_NUM_OF_POLL (FW_ACK_TIME_OUT_MS/FW_ACK_POLL_TIME_MS)
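/*
 * Illustrative sketch (not part of the original HSI): an ack wait loop
 * built from the constants above. To keep the sketch self-contained,
 * the shmem poll and millisecond delay are caller-supplied callbacks
 * rather than the driver's real primitives.
 */
static inline int
example_wait_for_fw_ack(int (*ack_received)(void *),
    void (*delay_ms)(uint32_t), void *arg)
{
	uint32_t i;

	for (i = 0; i < FW_ACK_NUM_OF_POLL; i++) {
		if (ack_received(arg))
			return (0);
		delay_ms(FW_ACK_POLL_TIME_MS);
	}
	return (-1);	/* no ack within FW_ACK_TIME_OUT_MS */
}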
#define MFW_TRACE_SIGNATURE 0x54524342
/****************************************************************************
* Driver <-> FW Mailbox *
****************************************************************************/
struct drv_port_mb {
uint32_t link_status;
/* Driver should update this field on any link change event */
#define LINK_STATUS_NONE (0<<0)
#define LINK_STATUS_LINK_FLAG_MASK 0x00000001
#define LINK_STATUS_LINK_UP 0x00000001
#define LINK_STATUS_SPEED_AND_DUPLEX_MASK 0x0000001E
#define LINK_STATUS_SPEED_AND_DUPLEX_AN_NOT_COMPLETE (0<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_10THD (1<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_10TFD (2<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_100TXHD (3<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_100T4 (4<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_100TXFD (5<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD (6<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD (7<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_1000XFD (7<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_2500THD (8<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_2500TFD (9<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_2500XFD (9<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_10GTFD (10<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_10GXFD (10<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_20GTFD (11<<1)
#define LINK_STATUS_SPEED_AND_DUPLEX_20GXFD (11<<1)
#define LINK_STATUS_AUTO_NEGOTIATE_FLAG_MASK 0x00000020
#define LINK_STATUS_AUTO_NEGOTIATE_ENABLED 0x00000020
#define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE 0x00000040
#define LINK_STATUS_PARALLEL_DETECTION_FLAG_MASK 0x00000080
#define LINK_STATUS_PARALLEL_DETECTION_USED 0x00000080
#define LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE 0x00000200
#define LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE 0x00000400
#define LINK_STATUS_LINK_PARTNER_100T4_CAPABLE 0x00000800
#define LINK_STATUS_LINK_PARTNER_100TXFD_CAPABLE 0x00001000
#define LINK_STATUS_LINK_PARTNER_100TXHD_CAPABLE 0x00002000
#define LINK_STATUS_LINK_PARTNER_10TFD_CAPABLE 0x00004000
#define LINK_STATUS_LINK_PARTNER_10THD_CAPABLE 0x00008000
#define LINK_STATUS_TX_FLOW_CONTROL_FLAG_MASK 0x00010000
#define LINK_STATUS_TX_FLOW_CONTROL_ENABLED 0x00010000
#define LINK_STATUS_RX_FLOW_CONTROL_FLAG_MASK 0x00020000
#define LINK_STATUS_RX_FLOW_CONTROL_ENABLED 0x00020000
#define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK 0x000C0000
#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE (0<<18)
#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE (1<<18)
#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE (2<<18)
#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE (3<<18)
#define LINK_STATUS_SERDES_LINK 0x00100000
#define LINK_STATUS_LINK_PARTNER_2500XFD_CAPABLE 0x00200000
#define LINK_STATUS_LINK_PARTNER_2500XHD_CAPABLE 0x00400000
#define LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE 0x00800000
#define LINK_STATUS_LINK_PARTNER_20GXFD_CAPABLE 0x10000000
#define LINK_STATUS_PFC_ENABLED 0x20000000
#define LINK_STATUS_PHYSICAL_LINK_FLAG 0x40000000
#define LINK_STATUS_SFP_TX_FAULT 0x80000000
uint32_t port_stx;
uint32_t stat_nig_timer;
/* MCP firmware does not use this field */
uint32_t ext_phy_fw_version;
};
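/*
 * Illustrative sketch (not part of the original HSI): the speed/duplex
 * enumeration above occupies bits 4:1 of link_status; note that the
 * twisted-pair (T) and serdes (X) variants of a speed share the same
 * encoding. A minimal decode:
 */
static inline int
example_link_is_1g_full(uint32_t link_status)
{
	return ((link_status & LINK_STATUS_SPEED_AND_DUPLEX_MASK) ==
	    LINK_STATUS_SPEED_AND_DUPLEX_1000TFD);
}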
struct drv_func_mb {
uint32_t drv_mb_header;
#define DRV_MSG_CODE_MASK 0xffff0000
#define DRV_MSG_CODE_LOAD_REQ 0x10000000
#define DRV_MSG_CODE_LOAD_DONE 0x11000000
#define DRV_MSG_CODE_UNLOAD_REQ_WOL_EN 0x20000000
#define DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS 0x20010000
#define DRV_MSG_CODE_UNLOAD_REQ_WOL_MCP 0x20020000
#define DRV_MSG_CODE_UNLOAD_DONE 0x21000000
#define DRV_MSG_CODE_DCC_OK 0x30000000
#define DRV_MSG_CODE_DCC_FAILURE 0x31000000
#define DRV_MSG_CODE_DIAG_ENTER_REQ 0x50000000
#define DRV_MSG_CODE_DIAG_EXIT_REQ 0x60000000
#define DRV_MSG_CODE_VALIDATE_KEY 0x70000000
#define DRV_MSG_CODE_GET_CURR_KEY 0x80000000
#define DRV_MSG_CODE_GET_UPGRADE_KEY 0x81000000
#define DRV_MSG_CODE_GET_MANUF_KEY 0x82000000
#define DRV_MSG_CODE_LOAD_L2B_PRAM 0x90000000
#define DRV_MSG_CODE_OEM_OK 0x00010000
#define DRV_MSG_CODE_OEM_FAILURE 0x00020000
#define DRV_MSG_CODE_OEM_UPDATE_SVID_OK 0x00030000
#define DRV_MSG_CODE_OEM_UPDATE_SVID_FAILURE 0x00040000
/*
* The optic module verification command requires bootcode
 * v5.0.6 or later; the specific optic module verification command
* requires bootcode v5.2.12 or later
*/
#define DRV_MSG_CODE_VRFY_FIRST_PHY_OPT_MDL 0xa0000000
#define REQ_BC_VER_4_VRFY_FIRST_PHY_OPT_MDL 0x00050006
#define DRV_MSG_CODE_VRFY_SPECIFIC_PHY_OPT_MDL 0xa1000000
#define REQ_BC_VER_4_VRFY_SPECIFIC_PHY_OPT_MDL 0x00050234
#define DRV_MSG_CODE_VRFY_AFEX_SUPPORTED 0xa2000000
#define REQ_BC_VER_4_VRFY_AFEX_SUPPORTED 0x00070002
#define REQ_BC_VER_4_SFP_TX_DISABLE_SUPPORTED 0x00070014
#define REQ_BC_VER_4_MT_SUPPORTED 0x00070201
#define REQ_BC_VER_4_PFC_STATS_SUPPORTED 0x00070201
#define REQ_BC_VER_4_FCOE_FEATURES 0x00070209
#define DRV_MSG_CODE_DCBX_ADMIN_PMF_MSG 0xb0000000
#define DRV_MSG_CODE_DCBX_PMF_DRV_OK 0xb2000000
#define REQ_BC_VER_4_DCBX_ADMIN_MSG_NON_PMF 0x00070401
#define DRV_MSG_CODE_VF_DISABLED_DONE 0xc0000000
#define DRV_MSG_CODE_AFEX_DRIVER_SETMAC 0xd0000000
#define DRV_MSG_CODE_AFEX_LISTGET_ACK 0xd1000000
#define DRV_MSG_CODE_AFEX_LISTSET_ACK 0xd2000000
#define DRV_MSG_CODE_AFEX_STATSGET_ACK 0xd3000000
#define DRV_MSG_CODE_AFEX_VIFSET_ACK 0xd4000000
#define DRV_MSG_CODE_DRV_INFO_ACK 0xd8000000
#define DRV_MSG_CODE_DRV_INFO_NACK 0xd9000000
#define DRV_MSG_CODE_EEE_RESULTS_ACK 0xda000000
#define DRV_MSG_CODE_RMMOD 0xdb000000
#define REQ_BC_VER_4_RMMOD_CMD 0x0007080f
#define DRV_MSG_CODE_SET_MF_BW 0xe0000000
#define REQ_BC_VER_4_SET_MF_BW 0x00060202
#define DRV_MSG_CODE_SET_MF_BW_ACK 0xe1000000
#define DRV_MSG_CODE_LINK_STATUS_CHANGED 0x01000000
#define DRV_MSG_CODE_INITIATE_FLR 0x02000000
#define REQ_BC_VER_4_INITIATE_FLR 0x00070213
#define BIOS_MSG_CODE_LIC_CHALLENGE 0xff010000
#define BIOS_MSG_CODE_LIC_RESPONSE 0xff020000
#define BIOS_MSG_CODE_VIRT_MAC_PRIM 0xff030000
#define BIOS_MSG_CODE_VIRT_MAC_ISCSI 0xff040000
#define DRV_MSG_CODE_IMG_OFFSET_REQ 0xe2000000
#define DRV_MSG_CODE_IMG_SIZE_REQ 0xe3000000
#define DRV_MSG_CODE_UFP_CONFIG_ACK 0xe4000000
#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff
#define DRV_MSG_CODE_CONFIG_CHANGE 0xC1000000
uint32_t drv_mb_param;
#define DRV_MSG_CODE_SET_MF_BW_MIN_MASK 0x00ff0000
#define DRV_MSG_CODE_SET_MF_BW_MAX_MASK 0xff000000
#define DRV_MSG_CODE_UNLOAD_NON_D3_POWER 0x00000001
#define DRV_MSG_CODE_UNLOAD_SKIP_LINK_RESET 0x00000002
#define DRV_MSG_CODE_LOAD_REQ_WITH_LFA 0x0000100a
#define DRV_MSG_CODE_LOAD_REQ_FORCE_LFA 0x00002000
#define DRV_MSG_CODE_USR_BLK_IMAGE_REQ 0x00000001
#define DRV_MSG_CODE_ISCSI_PERS_IMAGE_REQ 0x00000002
#define DRV_MSG_CODE_VPD_IMAGE_REQ 0x00000003
#define DRV_MSG_CODE_CONFIG_CHANGE_MTU_SIZE 0x00000001
#define DRV_MSG_CODE_CONFIG_CHANGE_MAC_ADD 0x00000002
#define DRV_MSG_CODE_CONFIG_CHANGE_WOL_ENA 0x00000003
#define DRV_MSG_CODE_CONFIG_CHANGE_ISCI_BOOT 0x00000004
#define DRV_MSG_CODE_CONFIG_CHANGE_FCOE_BOOT 0x00000005
uint32_t fw_mb_header;
#define FW_MSG_CODE_MASK 0xffff0000
#define FW_MSG_CODE_DRV_LOAD_COMMON 0x10100000
#define FW_MSG_CODE_DRV_LOAD_PORT 0x10110000
#define FW_MSG_CODE_DRV_LOAD_FUNCTION 0x10120000
/* Load common chip is supported from bc 6.0.0 */
#define REQ_BC_VER_4_DRV_LOAD_COMMON_CHIP 0x00060000
#define FW_MSG_CODE_DRV_LOAD_COMMON_CHIP 0x10130000
#define FW_MSG_CODE_DRV_LOAD_REFUSED 0x10200000
#define FW_MSG_CODE_DRV_LOAD_DONE 0x11100000
#define FW_MSG_CODE_DRV_UNLOAD_COMMON 0x20100000
#define FW_MSG_CODE_DRV_UNLOAD_PORT 0x20110000
#define FW_MSG_CODE_DRV_UNLOAD_FUNCTION 0x20120000
#define FW_MSG_CODE_DRV_UNLOAD_DONE 0x21100000
#define FW_MSG_CODE_DCC_DONE 0x30100000
#define FW_MSG_CODE_LLDP_DONE 0x40100000
#define FW_MSG_CODE_DIAG_ENTER_DONE 0x50100000
#define FW_MSG_CODE_DIAG_REFUSE 0x50200000
#define FW_MSG_CODE_DIAG_EXIT_DONE 0x60100000
#define FW_MSG_CODE_VALIDATE_KEY_SUCCESS 0x70100000
#define FW_MSG_CODE_VALIDATE_KEY_FAILURE 0x70200000
#define FW_MSG_CODE_GET_KEY_DONE 0x80100000
#define FW_MSG_CODE_NO_KEY 0x80f00000
#define FW_MSG_CODE_LIC_INFO_NOT_READY 0x80f80000
#define FW_MSG_CODE_L2B_PRAM_LOADED 0x90100000
#define FW_MSG_CODE_L2B_PRAM_T_LOAD_FAILURE 0x90210000
#define FW_MSG_CODE_L2B_PRAM_C_LOAD_FAILURE 0x90220000
#define FW_MSG_CODE_L2B_PRAM_X_LOAD_FAILURE 0x90230000
#define FW_MSG_CODE_L2B_PRAM_U_LOAD_FAILURE 0x90240000
#define FW_MSG_CODE_VRFY_OPT_MDL_SUCCESS 0xa0100000
#define FW_MSG_CODE_VRFY_OPT_MDL_INVLD_IMG 0xa0200000
#define FW_MSG_CODE_VRFY_OPT_MDL_UNAPPROVED 0xa0300000
#define FW_MSG_CODE_VF_DISABLED_DONE 0xb0000000
#define FW_MSG_CODE_HW_SET_INVALID_IMAGE 0xb0100000
#define FW_MSG_CODE_AFEX_DRIVER_SETMAC_DONE 0xd0100000
#define FW_MSG_CODE_AFEX_LISTGET_ACK 0xd1100000
#define FW_MSG_CODE_AFEX_LISTSET_ACK 0xd2100000
#define FW_MSG_CODE_AFEX_STATSGET_ACK 0xd3100000
#define FW_MSG_CODE_AFEX_VIFSET_ACK 0xd4100000
#define FW_MSG_CODE_DRV_INFO_ACK 0xd8100000
#define FW_MSG_CODE_DRV_INFO_NACK 0xd9100000
#define FW_MSG_CODE_EEE_RESULS_ACK 0xda100000
#define FW_MSG_CODE_RMMOD_ACK 0xdb100000
#define FW_MSG_CODE_SET_MF_BW_SENT 0xe0000000
#define FW_MSG_CODE_SET_MF_BW_DONE 0xe1000000
#define FW_MSG_CODE_LINK_CHANGED_ACK 0x01100000
#define FW_MSG_CODE_FLR_ACK 0x02000000
#define FW_MSG_CODE_FLR_NACK 0x02100000
#define FW_MSG_CODE_LIC_CHALLENGE 0xff010000
#define FW_MSG_CODE_LIC_RESPONSE 0xff020000
#define FW_MSG_CODE_VIRT_MAC_PRIM 0xff030000
#define FW_MSG_CODE_VIRT_MAC_ISCSI 0xff040000
#define FW_MSG_CODE_IMG_OFFSET_RESPONSE 0xe2100000
#define FW_MSG_CODE_IMG_SIZE_RESPONSE 0xe3100000
#define FW_MSG_CODE_OEM_ACK 0x00010000
#define DRV_MSG_CODE_OEM_UPDATE_SVID_ACK 0x00020000
#define FW_MSG_CODE_CONFIG_CHANGE_DONE 0xC2000000
#define FW_MSG_SEQ_NUMBER_MASK 0x0000ffff
uint32_t fw_mb_param;
#define FW_PARAM_INVALID_IMG 0xffffffff
uint32_t drv_pulse_mb;
#define DRV_PULSE_SEQ_MASK 0x00007fff
#define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000
/*
* The system time is in the format of
* (year-2001)*12*32 + month*32 + day.
*/
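	/*
	 * Worked example (illustrative): 12 June 2016 is encoded as
	 * (2016-2001)*12*32 + 6*32 + 12 = 5760 + 192 + 12 = 5964.
	 */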
#define DRV_PULSE_ALWAYS_ALIVE 0x00008000
/*
 * Indicates to the firmware not to go into
 * OS-absent mode when it is not getting the driver pulse.
 * This is used for debugging as well as for PXE (MBA).
*/
uint32_t mcp_pulse_mb;
#define MCP_PULSE_SEQ_MASK 0x00007fff
#define MCP_PULSE_ALWAYS_ALIVE 0x00008000
/* Indicates to the driver not to assert due to lack
* of MCP response */
#define MCP_EVENT_MASK 0xffff0000
#define MCP_EVENT_OTHER_DRIVER_RESET_REQ 0x00010000
uint32_t iscsi_boot_signature;
uint32_t iscsi_boot_block_offset;
uint32_t drv_status;
#define DRV_STATUS_PMF 0x00000001
#define DRV_STATUS_VF_DISABLED 0x00000002
#define DRV_STATUS_SET_MF_BW 0x00000004
#define DRV_STATUS_LINK_EVENT 0x00000008
#define DRV_STATUS_OEM_EVENT_MASK 0x00000070
#define DRV_STATUS_OEM_DISABLE_ENABLE_PF 0x00000010
#define DRV_STATUS_OEM_BANDWIDTH_ALLOCATION 0x00000020
#define DRV_STATUS_OEM_FC_NPIV_UPDATE 0x00000040
#define DRV_STATUS_OEM_UPDATE_SVID 0x00000080
#define DRV_STATUS_DCC_EVENT_MASK 0x0000ff00
#define DRV_STATUS_DCC_DISABLE_ENABLE_PF 0x00000100
#define DRV_STATUS_DCC_BANDWIDTH_ALLOCATION 0x00000200
#define DRV_STATUS_DCC_CHANGE_MAC_ADDRESS 0x00000400
#define DRV_STATUS_DCC_RESERVED1 0x00000800
#define DRV_STATUS_DCC_SET_PROTOCOL 0x00001000
#define DRV_STATUS_DCC_SET_PRIORITY 0x00002000
#define DRV_STATUS_DCBX_EVENT_MASK 0x000f0000
#define DRV_STATUS_DCBX_NEGOTIATION_RESULTS 0x00010000
#define DRV_STATUS_AFEX_EVENT_MASK 0x03f00000
#define DRV_STATUS_AFEX_LISTGET_REQ 0x00100000
#define DRV_STATUS_AFEX_LISTSET_REQ 0x00200000
#define DRV_STATUS_AFEX_STATSGET_REQ 0x00400000
#define DRV_STATUS_AFEX_VIFSET_REQ 0x00800000
#define DRV_STATUS_DRV_INFO_REQ 0x04000000
#define DRV_STATUS_EEE_NEGOTIATION_RESULTS 0x08000000
uint32_t virt_mac_upper;
#define VIRT_MAC_SIGN_MASK 0xffff0000
#define VIRT_MAC_SIGNATURE 0x564d0000
uint32_t virt_mac_lower;
};
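/*
 * Illustrative sketch (not part of the original HSI): a command written
 * to drv_mb_header combines a DRV_MSG_CODE_* value in the upper 16 bits
 * with a rolling sequence number in the lower 16 bits; the firmware is
 * expected to echo the sequence number in fw_mb_header.
 */
static inline uint32_t
example_compose_drv_mb(uint32_t msg_code, uint32_t seq)
{
	return ((msg_code & DRV_MSG_CODE_MASK) |
	    (seq & DRV_MSG_SEQ_NUMBER_MASK));
}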
/****************************************************************************
* Management firmware state *
****************************************************************************/
/* Allocate 440 bytes for management firmware */
#define MGMTFW_STATE_WORD_SIZE 110
struct mgmtfw_state {
uint32_t opaque[MGMTFW_STATE_WORD_SIZE];
};
/****************************************************************************
* Multi-Function configuration *
****************************************************************************/
struct shared_mf_cfg {
uint32_t clp_mb;
#define SHARED_MF_CLP_SET_DEFAULT 0x00000000
/* set by CLP */
#define SHARED_MF_CLP_EXIT 0x00000001
/* set by MCP */
#define SHARED_MF_CLP_EXIT_DONE 0x00010000
};
struct port_mf_cfg {
uint32_t dynamic_cfg; /* device control channel */
#define PORT_MF_CFG_E1HOV_TAG_MASK 0x0000ffff
#define PORT_MF_CFG_E1HOV_TAG_SHIFT 0
#define PORT_MF_CFG_E1HOV_TAG_DEFAULT PORT_MF_CFG_E1HOV_TAG_MASK
uint32_t reserved[1];
};
struct func_mf_cfg {
uint32_t config;
/* E/R/I/D */
/* function 0 of each port cannot be hidden */
#define FUNC_MF_CFG_FUNC_HIDE 0x00000001
#define FUNC_MF_CFG_PROTOCOL_MASK 0x00000006
#define FUNC_MF_CFG_PROTOCOL_FCOE 0x00000000
#define FUNC_MF_CFG_PROTOCOL_ETHERNET 0x00000002
#define FUNC_MF_CFG_PROTOCOL_ETHERNET_WITH_RDMA 0x00000004
#define FUNC_MF_CFG_PROTOCOL_ISCSI 0x00000006
#define FUNC_MF_CFG_PROTOCOL_DEFAULT \
FUNC_MF_CFG_PROTOCOL_ETHERNET_WITH_RDMA
#define FUNC_MF_CFG_FUNC_DISABLED 0x00000008
#define FUNC_MF_CFG_FUNC_DELETED 0x00000010
#define FUNC_MF_CFG_FUNC_BOOT_MASK 0x00000060
#define FUNC_MF_CFG_FUNC_BOOT_BIOS_CTRL 0x00000000
#define FUNC_MF_CFG_FUNC_BOOT_VCM_DISABLED 0x00000020
#define FUNC_MF_CFG_FUNC_BOOT_VCM_ENABLED 0x00000040
/* PRI */
/* 0 - low priority, 3 - high priority */
#define FUNC_MF_CFG_TRANSMIT_PRIORITY_MASK 0x00000300
#define FUNC_MF_CFG_TRANSMIT_PRIORITY_SHIFT 8
#define FUNC_MF_CFG_TRANSMIT_PRIORITY_DEFAULT 0x00000000
/* MINBW, MAXBW */
/* value range - 0..100, increments in 100Mbps */
#define FUNC_MF_CFG_MIN_BW_MASK 0x00ff0000
#define FUNC_MF_CFG_MIN_BW_SHIFT 16
#define FUNC_MF_CFG_MIN_BW_DEFAULT 0x00000000
#define FUNC_MF_CFG_MAX_BW_MASK 0xff000000
#define FUNC_MF_CFG_MAX_BW_SHIFT 24
#define FUNC_MF_CFG_MAX_BW_DEFAULT 0x64000000
uint32_t mac_upper; /* MAC */
#define FUNC_MF_CFG_UPPERMAC_MASK 0x0000ffff
#define FUNC_MF_CFG_UPPERMAC_SHIFT 0
#define FUNC_MF_CFG_UPPERMAC_DEFAULT FUNC_MF_CFG_UPPERMAC_MASK
uint32_t mac_lower;
#define FUNC_MF_CFG_LOWERMAC_DEFAULT 0xffffffff
uint32_t e1hov_tag; /* VNI */
#define FUNC_MF_CFG_E1HOV_TAG_MASK 0x0000ffff
#define FUNC_MF_CFG_E1HOV_TAG_SHIFT 0
#define FUNC_MF_CFG_E1HOV_TAG_DEFAULT FUNC_MF_CFG_E1HOV_TAG_MASK
/* afex default VLAN ID - 12 bits */
#define FUNC_MF_CFG_AFEX_VLAN_MASK 0x0fff0000
#define FUNC_MF_CFG_AFEX_VLAN_SHIFT 16
uint32_t afex_config;
#define FUNC_MF_CFG_AFEX_COS_FILTER_MASK 0x000000ff
#define FUNC_MF_CFG_AFEX_COS_FILTER_SHIFT 0
#define FUNC_MF_CFG_AFEX_MBA_ENABLED_MASK 0x0000ff00
#define FUNC_MF_CFG_AFEX_MBA_ENABLED_SHIFT 8
#define FUNC_MF_CFG_AFEX_MBA_ENABLED_VAL 0x00000100
#define FUNC_MF_CFG_AFEX_VLAN_MODE_MASK 0x000f0000
#define FUNC_MF_CFG_AFEX_VLAN_MODE_SHIFT 16
uint32_t pf_allocation;
/* number of VFs in the function; if 0, SR-IOV is disabled */
#define FUNC_MF_CFG_NUMBER_OF_VFS_MASK 0x000000FF
#define FUNC_MF_CFG_NUMBER_OF_VFS_SHIFT 0
};
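/*
 * Illustrative sketch (not part of the original HSI), assuming the
 * common convention that mac_upper holds the two most significant MAC
 * bytes and mac_lower the remaining four, most significant byte first:
 */
static inline void
example_mf_cfg_mac_to_bytes(uint32_t mac_upper, uint32_t mac_lower,
    uint8_t mac[6])
{
	mac[0] = (uint8_t)(mac_upper >> 8);	/* assumed byte order */
	mac[1] = (uint8_t)(mac_upper);
	mac[2] = (uint8_t)(mac_lower >> 24);
	mac[3] = (uint8_t)(mac_lower >> 16);
	mac[4] = (uint8_t)(mac_lower >> 8);
	mac[5] = (uint8_t)(mac_lower);
}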
enum mf_cfg_afex_vlan_mode {
FUNC_MF_CFG_AFEX_VLAN_TRUNK_MODE = 0,
FUNC_MF_CFG_AFEX_VLAN_ACCESS_MODE,
FUNC_MF_CFG_AFEX_VLAN_TRUNK_TAG_NATIVE_MODE
};
/* This structure is not applicable and should not be accessed on 57711 */
struct func_ext_cfg {
uint32_t func_cfg;
#define MACP_FUNC_CFG_FLAGS_MASK 0x0000007F
#define MACP_FUNC_CFG_FLAGS_SHIFT 0
#define MACP_FUNC_CFG_FLAGS_ENABLED 0x00000001
#define MACP_FUNC_CFG_FLAGS_ETHERNET 0x00000002
#define MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD 0x00000004
#define MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD 0x00000008
#define MACP_FUNC_CFG_PAUSE_ON_HOST_RING 0x00000080
uint32_t iscsi_mac_addr_upper;
uint32_t iscsi_mac_addr_lower;
uint32_t fcoe_mac_addr_upper;
uint32_t fcoe_mac_addr_lower;
uint32_t fcoe_wwn_port_name_upper;
uint32_t fcoe_wwn_port_name_lower;
uint32_t fcoe_wwn_node_name_upper;
uint32_t fcoe_wwn_node_name_lower;
uint32_t preserve_data;
#define MF_FUNC_CFG_PRESERVE_L2_MAC (1<<0)
#define MF_FUNC_CFG_PRESERVE_ISCSI_MAC (1<<1)
#define MF_FUNC_CFG_PRESERVE_FCOE_MAC (1<<2)
#define MF_FUNC_CFG_PRESERVE_FCOE_WWN_P (1<<3)
#define MF_FUNC_CFG_PRESERVE_FCOE_WWN_N (1<<4)
#define MF_FUNC_CFG_PRESERVE_TX_BW (1<<5)
};
struct mf_cfg {
struct shared_mf_cfg shared_mf_config; /* 0x4 */
struct port_mf_cfg port_mf_config[NVM_PATH_MAX][PORT_MAX];
/* 0x10*2=0x20 */
/* for all chips, there are 8 mf functions */
struct func_mf_cfg func_mf_config[E1H_FUNC_MAX]; /* 0x18 * 8 = 0xc0 */
/*
* Extended configuration per function - this array does not exist and
* should not be accessed on 57711
*/
struct func_ext_cfg func_ext_config[E1H_FUNC_MAX]; /* 0x28 * 8 = 0x140*/
}; /* 0x224 */
/****************************************************************************
* Shared Memory Region *
****************************************************************************/
struct shmem_region { /* SharedMem Offset (size) */
uint32_t validity_map[PORT_MAX]; /* 0x0 (4*2 = 0x8) */
#define SHR_MEM_FORMAT_REV_MASK 0xff000000
#define SHR_MEM_FORMAT_REV_ID ('A'<<24)
/* validity bits */
#define SHR_MEM_VALIDITY_PCI_CFG 0x00100000
#define SHR_MEM_VALIDITY_MB 0x00200000
#define SHR_MEM_VALIDITY_DEV_INFO 0x00400000
#define SHR_MEM_VALIDITY_RESERVED 0x00000007
/* One licensing bit should be set */
#define SHR_MEM_VALIDITY_LIC_KEY_IN_EFFECT_MASK 0x00000038
#define SHR_MEM_VALIDITY_LIC_MANUF_KEY_IN_EFFECT 0x00000008
#define SHR_MEM_VALIDITY_LIC_UPGRADE_KEY_IN_EFFECT 0x00000010
#define SHR_MEM_VALIDITY_LIC_NO_KEY_IN_EFFECT 0x00000020
/* Active MFW */
#define SHR_MEM_VALIDITY_ACTIVE_MFW_UNKNOWN 0x00000000
#define SHR_MEM_VALIDITY_ACTIVE_MFW_MASK 0x000001c0
#define SHR_MEM_VALIDITY_ACTIVE_MFW_IPMI 0x00000040
#define SHR_MEM_VALIDITY_ACTIVE_MFW_UMP 0x00000080
#define SHR_MEM_VALIDITY_ACTIVE_MFW_NCSI 0x000000c0
#define SHR_MEM_VALIDITY_ACTIVE_MFW_NONE 0x000001c0
struct shm_dev_info dev_info; /* 0x8 (0x438) */
license_key_t drv_lic_key[PORT_MAX]; /* 0x440 (52*2=0x68) */
/* FW information (for internal FW use) */
uint32_t fw_info_fio_offset; /* 0x4a8 (0x4) */
struct mgmtfw_state mgmtfw_state; /* 0x4ac (0x1b8) */
struct drv_port_mb port_mb[PORT_MAX]; /* 0x664 (16*2=0x20) */
#ifdef BMAPI
/* This is a variable length array */
/* the number of functions depends on the chip type */
struct drv_func_mb func_mb[1]; /* 0x684 (44*2/4/8=0x58/0xb0/0x160) */
#else
/* the number of functions depends on the chip type */
struct drv_func_mb func_mb[]; /* 0x684 (44*2/4/8=0x58/0xb0/0x160) */
#endif /* BMAPI */
}; /* 57710 = 0x6dc | 57711 = 0x7E4 | 57712 = 0x734 */
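/*
 * Illustrative sketch (not part of the original HSI): before trusting
 * the shared memory contents a driver would typically require both the
 * DEV_INFO and MB validity bits in validity_map, e.g.:
 */
static inline int
example_shmem_is_valid(uint32_t validity)
{
	uint32_t required = (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB);

	return ((validity & required) == required);
}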
/****************************************************************************
* Shared Memory 2 Region *
****************************************************************************/
/* The fw_flr_ack is actually built in the following way: */
/* 8 bit: PF ack */
/* 64 bit: VF ack */
/* 8 bit: iov_dis_ack */
/* In order to maintain endianness in the mailbox HSI, we want to keep using */
/* uint32_t. The fw must have the VF right after the PF since this is how it */
/* accesses arrays (it always expects the VF to reside after the PF, which */
/* makes the calculation much easier for it). */
/* In order to answer both limitations, and keep the struct small, the code */
/* will abuse the structure defined here to achieve the actual partition */
/* above. */
/****************************************************************************/
struct fw_flr_ack {
uint32_t pf_ack;
uint32_t vf_ack;
uint32_t iov_dis_ack;
};
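/*
 * Illustrative sketch (not part of the original HSI) of the packing
 * described above: treating the struct as a flat bit stream, VF n's ack
 * bit is assumed to follow the 8 PF-ack bits, i.e. to sit at bit (8 + n)
 * of the dword array. The exact bit layout is an assumption here.
 */
static inline void
example_set_vf_flr_ack(struct fw_flr_ack *ack, uint32_t vf)
{
	uint32_t bit = 8 + vf;	/* 8 PF-ack bits precede the VF bits */
	uint32_t *words = (uint32_t *)ack;

	words[bit / 32] |= (1U << (bit % 32));
}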
struct fw_flr_mb {
uint32_t aggint;
uint32_t opgen_addr;
struct fw_flr_ack ack;
};
struct eee_remote_vals {
uint32_t tx_tw;
uint32_t rx_tw;
};
/**** SUPPORT FOR SHMEM ARRAYS ***
 * The SHMEM HSI is aligned on 32 bit boundaries which makes it difficult to
 * define arrays with storage types smaller than unsigned dwords.
 * The macros below add generic support for SHMEM arrays with numeric elements
 * that can span 2, 4, 8 or 16 bits. The array underlying type is a 32 bit
 * dword array with individual bit-field elements accessed using shifts and
 * masks.
 */
/* eb is the bitwidth of a single element */
#define SHMEM_ARRAY_MASK(eb) ((1<<(eb))-1)
#define SHMEM_ARRAY_ENTRY(i, eb) ((i)/(32/(eb)))
/* the bit-position macro allows the user to flip the order of the array's
 * elements on a per byte or word boundary.
 *
 * example: an array with 8 entries, each 4 bits wide. This array will fit
 * into a single dword. The diagrams below show the array order of the
 * nibbles.
 *
 * SHMEM_ARRAY_BITPOS(i, 4, 4) defines the standard ordering:
*
* | | | |
* 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
* | | | |
*
* SHMEM_ARRAY_BITPOS(i, 4, 8) defines a flip ordering per byte:
*
* | | | |
* 1 | 0 | 3 | 2 | 5 | 4 | 7 | 6 |
* | | | |
*
* SHMEM_ARRAY_BITPOS(i, 4, 16) defines a flip ordering per word:
*
* | | | |
* 3 | 2 | 1 | 0 | 7 | 6 | 5 | 4 |
* | | | |
*/
#define SHMEM_ARRAY_BITPOS(i, eb, fb) \
((((32/(fb)) - 1 - ((i)/((fb)/(eb))) % (32/(fb))) * (fb)) + \
(((i)%((fb)/(eb))) * (eb)))
#define SHMEM_ARRAY_GET(a, i, eb, fb) \
((a[SHMEM_ARRAY_ENTRY(i, eb)] >> SHMEM_ARRAY_BITPOS(i, eb, fb)) & \
SHMEM_ARRAY_MASK(eb))
#define SHMEM_ARRAY_SET(a, i, eb, fb, val) \
do { \
a[SHMEM_ARRAY_ENTRY(i, eb)] &= ~(SHMEM_ARRAY_MASK(eb) << \
SHMEM_ARRAY_BITPOS(i, eb, fb)); \
a[SHMEM_ARRAY_ENTRY(i, eb)] |= (((val) & SHMEM_ARRAY_MASK(eb)) << \
SHMEM_ARRAY_BITPOS(i, eb, fb)); \
} while (0)
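/*
 * Illustrative sketch (not part of the original HSI): exercising the
 * accessors above on an array of 4-bit elements with per-byte flip
 * ordering (fb == 8), matching the second diagram. Element 0 lands in
 * bits 27:24 of the first dword and element 1 in bits 31:28.
 */
static inline uint32_t
example_shmem_array_usage(void)
{
	uint32_t a[1] = { 0 };

	SHMEM_ARRAY_SET(a, 0, 4, 8, 0x5);	/* a[0] == 0x05000000 */
	SHMEM_ARRAY_SET(a, 1, 4, 8, 0xa);	/* a[0] == 0xa5000000 */
	return (SHMEM_ARRAY_GET(a, 1, 4, 8));	/* == 0xa */
}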
/****START OF DCBX STRUCTURES DECLARATIONS****/
#define DCBX_MAX_NUM_PRI_PG_ENTRIES 8
#define DCBX_PRI_PG_BITWIDTH 4
#define DCBX_PRI_PG_FBITS 8
#define DCBX_PRI_PG_GET(a, i) \
SHMEM_ARRAY_GET(a, i, DCBX_PRI_PG_BITWIDTH, DCBX_PRI_PG_FBITS)
#define DCBX_PRI_PG_SET(a, i, val) \
SHMEM_ARRAY_SET(a, i, DCBX_PRI_PG_BITWIDTH, DCBX_PRI_PG_FBITS, val)
#define DCBX_MAX_NUM_PG_BW_ENTRIES 8
#define DCBX_BW_PG_BITWIDTH 8
#define DCBX_PG_BW_GET(a, i) \
SHMEM_ARRAY_GET(a, i, DCBX_BW_PG_BITWIDTH, DCBX_BW_PG_BITWIDTH)
#define DCBX_PG_BW_SET(a, i, val) \
SHMEM_ARRAY_SET(a, i, DCBX_BW_PG_BITWIDTH, DCBX_BW_PG_BITWIDTH, val)
#define DCBX_STRICT_PRI_PG 15
#define DCBX_MAX_APP_PROTOCOL 16
#define DCBX_MAX_APP_LOCAL 32
#define FCOE_APP_IDX 0
#define ISCSI_APP_IDX 1
#define PREDEFINED_APP_IDX_MAX 2
/* Big/Little endian have the same representation. */
struct dcbx_ets_feature {
/*
* For Admin MIB - is this feature supported by the
* driver | For Local MIB - should this feature be enabled.
*/
uint32_t enabled;
uint32_t pg_bw_tbl[2];
uint32_t pri_pg_tbl[1];
};
/* Driver structure in LE */
struct dcbx_pfc_feature {
#ifdef __BIG_ENDIAN
uint8_t pri_en_bitmap;
#define DCBX_PFC_PRI_0 0x01
#define DCBX_PFC_PRI_1 0x02
#define DCBX_PFC_PRI_2 0x04
#define DCBX_PFC_PRI_3 0x08
#define DCBX_PFC_PRI_4 0x10
#define DCBX_PFC_PRI_5 0x20
#define DCBX_PFC_PRI_6 0x40
#define DCBX_PFC_PRI_7 0x80
uint8_t pfc_caps;
uint8_t reserved;
uint8_t enabled;
#elif defined(__LITTLE_ENDIAN)
uint8_t enabled;
uint8_t reserved;
uint8_t pfc_caps;
uint8_t pri_en_bitmap;
#define DCBX_PFC_PRI_0 0x01
#define DCBX_PFC_PRI_1 0x02
#define DCBX_PFC_PRI_2 0x04
#define DCBX_PFC_PRI_3 0x08
#define DCBX_PFC_PRI_4 0x10
#define DCBX_PFC_PRI_5 0x20
#define DCBX_PFC_PRI_6 0x40
#define DCBX_PFC_PRI_7 0x80
#endif
};
struct dcbx_app_priority_entry {
#ifdef __BIG_ENDIAN
uint16_t app_id;
uint8_t pri_bitmap;
uint8_t appBitfield;
#define DCBX_APP_ENTRY_VALID 0x01
#define DCBX_APP_ENTRY_SF_MASK 0x30
#define DCBX_APP_ENTRY_SF_SHIFT 4
#define DCBX_APP_SF_ETH_TYPE 0x10
#define DCBX_APP_SF_PORT 0x20
#define DCBX_APP_PRI_0 0x01
#define DCBX_APP_PRI_1 0x02
#define DCBX_APP_PRI_2 0x04
#define DCBX_APP_PRI_3 0x08
#define DCBX_APP_PRI_4 0x10
#define DCBX_APP_PRI_5 0x20
#define DCBX_APP_PRI_6 0x40
#define DCBX_APP_PRI_7 0x80
#elif defined(__LITTLE_ENDIAN)
uint8_t appBitfield;
#define DCBX_APP_ENTRY_VALID 0x01
#define DCBX_APP_ENTRY_SF_MASK 0x30
#define DCBX_APP_ENTRY_SF_SHIFT 4
#define DCBX_APP_SF_ETH_TYPE 0x10
#define DCBX_APP_SF_PORT 0x20
uint8_t pri_bitmap;
uint16_t app_id;
#endif
};
/* FW structure in BE */
struct dcbx_app_priority_feature {
#ifdef __BIG_ENDIAN
uint8_t reserved;
uint8_t default_pri;
uint8_t tc_supported;
uint8_t enabled;
#elif defined(__LITTLE_ENDIAN)
uint8_t enabled;
uint8_t tc_supported;
uint8_t default_pri;
uint8_t reserved;
#endif
struct dcbx_app_priority_entry app_pri_tbl[DCBX_MAX_APP_PROTOCOL];
};
/* FW structure in BE */
struct dcbx_features {
/* PG feature */
struct dcbx_ets_feature ets;
/* PFC feature */
struct dcbx_pfc_feature pfc;
/* APP feature */
struct dcbx_app_priority_feature app;
};
/* LLDP protocol parameters */
/* FW structure in BE */
struct lldp_params {
#ifdef __BIG_ENDIAN
uint8_t msg_fast_tx_interval;
uint8_t msg_tx_hold;
uint8_t msg_tx_interval;
uint8_t admin_status;
#define LLDP_TX_ONLY 0x01
#define LLDP_RX_ONLY 0x02
#define LLDP_TX_RX 0x03
#define LLDP_DISABLED 0x04
uint8_t reserved1;
uint8_t tx_fast;
uint8_t tx_crd_max;
uint8_t tx_crd;
#elif defined(__LITTLE_ENDIAN)
uint8_t admin_status;
#define LLDP_TX_ONLY 0x01
#define LLDP_RX_ONLY 0x02
#define LLDP_TX_RX 0x03
#define LLDP_DISABLED 0x04
uint8_t msg_tx_interval;
uint8_t msg_tx_hold;
uint8_t msg_fast_tx_interval;
uint8_t tx_crd;
uint8_t tx_crd_max;
uint8_t tx_fast;
uint8_t reserved1;
#endif
#define REM_CHASSIS_ID_STAT_LEN 4
#define REM_PORT_ID_STAT_LEN 4
/* Holds remote Chassis ID TLV header, subtype and 9B of payload. */
uint32_t peer_chassis_id[REM_CHASSIS_ID_STAT_LEN];
/* Holds remote Port ID TLV header, subtype and 9B of payload. */
uint32_t peer_port_id[REM_PORT_ID_STAT_LEN];
};
struct lldp_dcbx_stat {
#define LOCAL_CHASSIS_ID_STAT_LEN 2
#define LOCAL_PORT_ID_STAT_LEN 2
/* Holds local Chassis ID 8B payload of constant subtype 4. */
uint32_t local_chassis_id[LOCAL_CHASSIS_ID_STAT_LEN];
/* Holds local Port ID 8B payload of constant subtype 3. */
uint32_t local_port_id[LOCAL_PORT_ID_STAT_LEN];
/* Number of DCBX frames transmitted. */
uint32_t num_tx_dcbx_pkts;
/* Number of DCBX frames received. */
uint32_t num_rx_dcbx_pkts;
};
/* ADMIN MIB - DCBX local machine default configuration. */
struct lldp_admin_mib {
uint32_t ver_cfg_flags;
#define DCBX_ETS_CONFIG_TX_ENABLED 0x00000001
#define DCBX_PFC_CONFIG_TX_ENABLED 0x00000002
#define DCBX_APP_CONFIG_TX_ENABLED 0x00000004
#define DCBX_ETS_RECO_TX_ENABLED 0x00000008
#define DCBX_ETS_RECO_VALID 0x00000010
#define DCBX_ETS_WILLING 0x00000020
#define DCBX_PFC_WILLING 0x00000040
#define DCBX_APP_WILLING 0x00000080
#define DCBX_VERSION_CEE 0x00000100
#define DCBX_VERSION_IEEE 0x00000200
#define DCBX_DCBX_ENABLED 0x00000400
#define DCBX_CEE_VERSION_MASK 0x0000f000
#define DCBX_CEE_VERSION_SHIFT 12
#define DCBX_CEE_MAX_VERSION_MASK 0x000f0000
#define DCBX_CEE_MAX_VERSION_SHIFT 16
struct dcbx_features features;
};
/* REMOTE MIB - remote machine DCBX configuration. */
struct lldp_remote_mib {
uint32_t prefix_seq_num;
uint32_t flags;
#define DCBX_ETS_TLV_RX 0x00000001
#define DCBX_PFC_TLV_RX 0x00000002
#define DCBX_APP_TLV_RX 0x00000004
#define DCBX_ETS_RX_ERROR 0x00000010
#define DCBX_PFC_RX_ERROR 0x00000020
#define DCBX_APP_RX_ERROR 0x00000040
#define DCBX_ETS_REM_WILLING 0x00000100
#define DCBX_PFC_REM_WILLING 0x00000200
#define DCBX_APP_REM_WILLING 0x00000400
#define DCBX_REMOTE_ETS_RECO_VALID 0x00001000
#define DCBX_REMOTE_MIB_VALID 0x00002000
struct dcbx_features features;
uint32_t suffix_seq_num;
};
/* LOCAL MIB - operational DCBX configuration - transmitted on Tx LLDPDU. */
struct lldp_local_mib {
uint32_t prefix_seq_num;
/* Indicates if there is mismatch with negotiation results. */
uint32_t error;
#define DCBX_LOCAL_ETS_ERROR 0x00000001
#define DCBX_LOCAL_PFC_ERROR 0x00000002
#define DCBX_LOCAL_APP_ERROR 0x00000004
#define DCBX_LOCAL_PFC_MISMATCH 0x00000010
#define DCBX_LOCAL_APP_MISMATCH 0x00000020
#define DCBX_REMOTE_MIB_ERROR 0x00000040
#define DCBX_REMOTE_ETS_TLV_NOT_FOUND 0x00000080
#define DCBX_REMOTE_PFC_TLV_NOT_FOUND 0x00000100
#define DCBX_REMOTE_APP_TLV_NOT_FOUND 0x00000200
struct dcbx_features features;
uint32_t suffix_seq_num;
};
struct lldp_local_mib_ext {
uint32_t prefix_seq_num;
/* APP TLV extension - 16 more entries for negotiation results */
struct dcbx_app_priority_entry app_pri_tbl_ext[DCBX_MAX_APP_PROTOCOL];
uint32_t suffix_seq_num;
};
/***END OF DCBX STRUCTURES DECLARATIONS***/
/***********************************************************/
/* Elink section */
/***********************************************************/
#define SHMEM_LINK_CONFIG_SIZE 2
struct shmem_lfa {
uint32_t req_duplex;
#define REQ_DUPLEX_PHY0_MASK 0x0000ffff
#define REQ_DUPLEX_PHY0_SHIFT 0
#define REQ_DUPLEX_PHY1_MASK 0xffff0000
#define REQ_DUPLEX_PHY1_SHIFT 16
uint32_t req_flow_ctrl;
#define REQ_FLOW_CTRL_PHY0_MASK 0x0000ffff
#define REQ_FLOW_CTRL_PHY0_SHIFT 0
#define REQ_FLOW_CTRL_PHY1_MASK 0xffff0000
#define REQ_FLOW_CTRL_PHY1_SHIFT 16
uint32_t req_line_speed; /* Also determines AutoNeg */
#define REQ_LINE_SPD_PHY0_MASK 0x0000ffff
#define REQ_LINE_SPD_PHY0_SHIFT 0
#define REQ_LINE_SPD_PHY1_MASK 0xffff0000
#define REQ_LINE_SPD_PHY1_SHIFT 16
uint32_t speed_cap_mask[SHMEM_LINK_CONFIG_SIZE];
uint32_t additional_config;
#define REQ_FC_AUTO_ADV_MASK 0x0000ffff
#define REQ_FC_AUTO_ADV0_SHIFT 0
#define NO_LFA_DUE_TO_DCC_MASK 0x00010000
uint32_t lfa_sts;
#define LFA_LINK_FLAP_REASON_OFFSET 0
#define LFA_LINK_FLAP_REASON_MASK 0x000000ff
#define LFA_LINK_DOWN 0x1
#define LFA_LOOPBACK_ENABLED 0x2
#define LFA_DUPLEX_MISMATCH 0x3
#define LFA_MFW_IS_TOO_OLD 0x4
#define LFA_LINK_SPEED_MISMATCH 0x5
#define LFA_FLOW_CTRL_MISMATCH 0x6
#define LFA_SPEED_CAP_MISMATCH 0x7
#define LFA_DCC_LFA_DISABLED 0x8
#define LFA_EEE_MISMATCH 0x9
#define LINK_FLAP_AVOIDANCE_COUNT_OFFSET 8
#define LINK_FLAP_AVOIDANCE_COUNT_MASK 0x0000ff00
#define LINK_FLAP_COUNT_OFFSET 16
#define LINK_FLAP_COUNT_MASK 0x00ff0000
#define LFA_FLAGS_MASK 0xff000000
#define SHMEM_LFA_DONT_CLEAR_STAT (1<<24)
};
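/*
 * Illustrative sketch (not part of the original header): unpacking the
 * lfa_sts word into the subfields documented above.
 */
#if 0
static void lfa_sts_decode(uint32_t lfa_sts)
{
	uint32_t reason = (lfa_sts & LFA_LINK_FLAP_REASON_MASK) >>
	    LFA_LINK_FLAP_REASON_OFFSET;
	uint32_t avoided = (lfa_sts & LINK_FLAP_AVOIDANCE_COUNT_MASK) >>
	    LINK_FLAP_AVOIDANCE_COUNT_OFFSET;
	uint32_t flaps = (lfa_sts & LINK_FLAP_COUNT_MASK) >>
	    LINK_FLAP_COUNT_OFFSET;

	(void)reason; (void)avoided; (void)flaps;
}
#endif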
/*
 * Used to support the NCSI "get OS driver version" query.
 * On driver load the version value will be set.
 * On driver unload a value of 0x0 will be set.
 */
struct os_drv_ver {
#define DRV_VER_NOT_LOADED 0
/* personalities order is important */
#define DRV_PERS_ETHERNET 0
#define DRV_PERS_ISCSI 1
#define DRV_PERS_FCOE 2
/* shmem2 struct is constant; can't add more personalities here */
#define MAX_DRV_PERS 3
uint32_t versions[MAX_DRV_PERS];
};
#define OEM_I2C_UUID_STR_ADDR 0x9f
#define OEM_I2C_CARD_SKU_STR_ADDR 0x3c
#define OEM_I2C_CARD_FN_STR_ADDR 0x48
#define OEM_I2C_CARD_NAME_STR_ADDR 0x10e
#define OEM_I2C_UUID_STR_LEN 16
#define OEM_I2C_CARD_SKU_STR_LEN 12
#define OEM_I2C_CARD_FN_STR_LEN 12
#define OEM_I2C_CARD_NAME_STR_LEN 128
#define OEM_I2C_CARD_VERSION_STR_LEN 36
struct oem_i2c_data_t {
uint32_t size;
uint8_t uuid[OEM_I2C_UUID_STR_LEN];
uint8_t card_sku[OEM_I2C_CARD_SKU_STR_LEN];
uint8_t card_name[OEM_I2C_CARD_NAME_STR_LEN];
uint8_t card_ver[OEM_I2C_CARD_VERSION_STR_LEN];
uint8_t card_fn[OEM_I2C_CARD_FN_STR_LEN];
};
enum curr_cfg_method_e {
CURR_CFG_MET_NONE = 0, /* default config */
CURR_CFG_MET_OS = 1,
CURR_CFG_MET_VENDOR_SPEC = 2,/* e.g. Option ROM, NPAR, O/S Cfg Utils */
CURR_CFG_MET_HP_OTHER = 3,
CURR_CFG_MET_VC_CLP = 4, /* C-Class SM-CLP */
CURR_CFG_MET_HP_CNU = 5, /* Converged Network Utility */
CURR_CFG_MET_HP_DCI = 6, /* DCi (BD) changes */
};
#define FC_NPIV_WWPN_SIZE 8
#define FC_NPIV_WWNN_SIZE 8
struct bdn_npiv_settings {
uint8_t npiv_wwpn[FC_NPIV_WWPN_SIZE];
uint8_t npiv_wwnn[FC_NPIV_WWNN_SIZE];
};
struct bdn_fc_npiv_cfg {
/* hdr used internally by the MFW */
uint32_t hdr;
uint32_t num_of_npiv;
};
#define MAX_NUMBER_NPIV 64
struct bdn_fc_npiv_tbl {
struct bdn_fc_npiv_cfg fc_npiv_cfg;
struct bdn_npiv_settings settings[MAX_NUMBER_NPIV];
};
struct mdump_driver_info {
uint32_t epoc;
uint32_t drv_ver;
uint32_t fw_ver;
uint32_t valid_dump;
#define FIRST_DUMP_VALID (1 << 0)
#define SECOND_DUMP_VALID (1 << 1)
uint32_t flags;
#define ENABLE_ALL_TRIGGERS (0x7fffffff)
#define TRIGGER_MDUMP_ONCE (1 << 31)
};
struct shmem2_region {
uint32_t size; /* 0x0000 */
uint32_t dcc_support; /* 0x0004 */
#define SHMEM_DCC_SUPPORT_NONE 0x00000000
#define SHMEM_DCC_SUPPORT_DISABLE_ENABLE_PF_TLV 0x00000001
#define SHMEM_DCC_SUPPORT_BANDWIDTH_ALLOCATION_TLV 0x00000004
#define SHMEM_DCC_SUPPORT_CHANGE_MAC_ADDRESS_TLV 0x00000008
#define SHMEM_DCC_SUPPORT_SET_PROTOCOL_TLV 0x00000040
#define SHMEM_DCC_SUPPORT_SET_PRIORITY_TLV 0x00000080
uint32_t ext_phy_fw_version2[PORT_MAX]; /* 0x0008 */
/*
* For backwards compatibility, if the mf_cfg_addr does not exist
* (the size field is smaller than 0xc) the mf_cfg resides at the
* end of struct shmem_region
*/
uint32_t mf_cfg_addr; /* 0x0010 */
#define SHMEM_MF_CFG_ADDR_NONE 0x00000000
struct fw_flr_mb flr_mb; /* 0x0014 */
uint32_t dcbx_lldp_params_offset; /* 0x0028 */
#define SHMEM_LLDP_DCBX_PARAMS_NONE 0x00000000
uint32_t dcbx_neg_res_offset; /* 0x002c */
#define SHMEM_DCBX_NEG_RES_NONE 0x00000000
uint32_t dcbx_remote_mib_offset; /* 0x0030 */
#define SHMEM_DCBX_REMOTE_MIB_NONE 0x00000000
/*
* The other shmemX_base_addr holds the other path's shmem address,
* required for example in case of common phy init, or for path1 to know
* the address of the mcp debug trace, which is located at an offset from
* the shmem of path0
*/
uint32_t other_shmem_base_addr; /* 0x0034 */
uint32_t other_shmem2_base_addr; /* 0x0038 */
/*
* mcp_vf_disabled is set by the MCP to notify the driver about VFs
* which were disabled/FLRed
*/
uint32_t mcp_vf_disabled[E2_VF_MAX / 32]; /* 0x003c */
/*
* drv_ack_vf_disabled is set by the PF driver to acknowledge the handled
* disabled VFs
*/
uint32_t drv_ack_vf_disabled[E2_FUNC_MAX][E2_VF_MAX / 32]; /* 0x0044 */
uint32_t dcbx_lldp_dcbx_stat_offset; /* 0x0064 */
#define SHMEM_LLDP_DCBX_STAT_NONE 0x00000000
/*
* edebug_driver_if field is used to transfer messages between the edebug
* app and the driver through shmem2.
*
* message format:
* bits 0-2 - function number / instance of driver to perform request
* bits 3-5 - op code / is_ack?
* bits 6-63 - data
*/
uint32_t edebug_driver_if[2]; /* 0x0068 */
#define EDEBUG_DRIVER_IF_OP_CODE_GET_PHYS_ADDR 1
#define EDEBUG_DRIVER_IF_OP_CODE_GET_BUS_ADDR 2
#define EDEBUG_DRIVER_IF_OP_CODE_DISABLE_STAT 3
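/*
 * Illustrative sketch (not part of the original header): packing an edebug
 * request into the low dword per the bit layout above. How the 58 data
 * bits are split across the two dwords is an assumption here.
 */
#if 0
static uint32_t edebug_msg_lo(uint32_t func, uint32_t op, uint32_t data_lo)
{
	/* bits 0-2: function/instance, bits 3-5: op code, rest: data */
	return (func & 0x7) | ((op & 0x7) << 3) | (data_lo << 6);
}
#endif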
uint32_t nvm_retain_bitmap_addr; /* 0x0070 */
/* AFEX support of this driver */
uint32_t afex_driver_support; /* 0x0074 */
#define SHMEM_AFEX_VERSION_MASK 0x100f
#define SHMEM_AFEX_SUPPORTED_VERSION_ONE 0x1001
#define SHMEM_AFEX_REDUCED_DRV_LOADED 0x8000
/* driver receives addr in scratchpad to which it should respond */
uint32_t afex_scratchpad_addr_to_write[E2_FUNC_MAX];
/*
* generic params from MCP to driver (value depends on the msg sent
* to the driver)
*/
uint32_t afex_param1_to_driver[E2_FUNC_MAX]; /* 0x0088 */
uint32_t afex_param2_to_driver[E2_FUNC_MAX]; /* 0x0098 */
uint32_t swim_base_addr; /* 0x00a8 */
uint32_t swim_funcs; /* 0x00ac */
uint32_t swim_main_cb; /* 0x00b0 */
/*
* bitmap notifying which VIF profiles stored in nvram are enabled by
* the switch
*/
uint32_t afex_profiles_enabled[2]; /* 0x00b4 */
/* generic flags controlled by the driver */
uint32_t drv_flags; /* 0x00bc */
#define DRV_FLAGS_DCB_CONFIGURED 0x0
#define DRV_FLAGS_DCB_CONFIGURATION_ABORTED 0x1
#define DRV_FLAGS_DCB_MFW_CONFIGURED 0x2
#define DRV_FLAGS_PORT_MASK ((1 << DRV_FLAGS_DCB_CONFIGURED) | \
(1 << DRV_FLAGS_DCB_CONFIGURATION_ABORTED) | \
(1 << DRV_FLAGS_DCB_MFW_CONFIGURED))
/* Port offset*/
#define DRV_FLAGS_P0_OFFSET 0
#define DRV_FLAGS_P1_OFFSET 16
#define DRV_FLAGS_GET_PORT_OFFSET(_port) ((0 == _port) ? \
DRV_FLAGS_P0_OFFSET : \
DRV_FLAGS_P1_OFFSET)
#define DRV_FLAGS_GET_PORT_MASK(_port) (DRV_FLAGS_PORT_MASK << \
DRV_FLAGS_GET_PORT_OFFSET(_port))
#define DRV_FLAGS_FILED_BY_PORT(_field_bit, _port) (1 << ( \
(_field_bit) + DRV_FLAGS_GET_PORT_OFFSET(_port)))
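/*
 * Illustrative sketch (not part of the original header): testing a per-port
 * DCB flag through the helpers above (port 0 flags live in bits 15:0,
 * port 1 flags in bits 31:16).
 */
#if 0
static int drv_flags_dcb_configured(uint32_t drv_flags, int port)
{
	return (drv_flags &
	    DRV_FLAGS_FILED_BY_PORT(DRV_FLAGS_DCB_CONFIGURED, port)) != 0;
}
#endif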
/* pointer to extended dev_info shared data copied from nvm image */
uint32_t extended_dev_info_shared_addr; /* 0x00c0 */
uint32_t ncsi_oem_data_addr; /* 0x00c4 */
uint32_t sensor_data_addr; /* 0x00c8 */
uint32_t buffer_block_addr; /* 0x00cc */
uint32_t sensor_data_req_update_interval; /* 0x00d0 */
uint32_t temperature_in_half_celsius; /* 0x00d4 */
uint32_t glob_struct_in_host; /* 0x00d8 */
uint32_t dcbx_neg_res_ext_offset; /* 0x00dc */
#define SHMEM_DCBX_NEG_RES_EXT_NONE 0x00000000
uint32_t drv_capabilities_flag[E2_FUNC_MAX]; /* 0x00e0 */
#define DRV_FLAGS_CAPABILITIES_LOADED_SUPPORTED 0x00000001
#define DRV_FLAGS_CAPABILITIES_LOADED_L2 0x00000002
#define DRV_FLAGS_CAPABILITIES_LOADED_FCOE 0x00000004
#define DRV_FLAGS_CAPABILITIES_LOADED_ISCSI 0x00000008
#define DRV_FLAGS_MTU_MASK 0xffff0000
#define DRV_FLAGS_MTU_SHIFT 16
uint32_t extended_dev_info_shared_cfg_size; /* 0x00f0 */
uint32_t dcbx_en[PORT_MAX]; /* 0x00f4 */
/* The offset points to the multi threaded meta structure */
uint32_t multi_thread_data_offset; /* 0x00fc */
/* address of a DMAable host buffer holding values from the drivers */
uint32_t drv_info_host_addr_lo; /* 0x0100 */
uint32_t drv_info_host_addr_hi; /* 0x0104 */
/* general values written by the MFW (such as current version) */
uint32_t drv_info_control; /* 0x0108 */
#define DRV_INFO_CONTROL_VER_MASK 0x000000ff
#define DRV_INFO_CONTROL_VER_SHIFT 0
#define DRV_INFO_CONTROL_OP_CODE_MASK 0x0000ff00
#define DRV_INFO_CONTROL_OP_CODE_SHIFT 8
uint32_t ibft_host_addr; /* initialized by option ROM */ /* 0x010c */
struct eee_remote_vals eee_remote_vals[PORT_MAX]; /* 0x0110 */
uint32_t pf_allocation[E2_FUNC_MAX]; /* 0x0120 */
#define PF_ALLOACTION_MSIX_VECTORS_MASK 0x000000ff /* real value, as PCI config space can show only a maximum of 64 vectors */
#define PF_ALLOACTION_MSIX_VECTORS_SHIFT 0
/* the status of EEE auto-negotiation
* bits 15:0 the configured tx-lpi entry timer value. Depends on bit 31.
* bits 19:16 the supported modes for EEE.
* bits 23:20 the speeds advertised for EEE.
* bits 27:24 the speeds the Link partner advertised for EEE.
* The supported/adv. modes in bits 27:16 originate from the
* SHMEM_EEE_XXX_ADV definitions (where XXX is replaced by speed).
* bit 28 when 1'b1 EEE was requested.
* bit 29 when 1'b1 tx lpi was requested.
* bit 30 when 1'b1 EEE was negotiated. Tx lpi will be asserted iff
* 30:29 are 2'b11.
* bit 31 when 1'b0 bits 15:0 contain a PORT_FEAT_CFG_EEE_ define as
* value. When 1'b1 those bits contain a value in units of 16 microseconds.
*/
uint32_t eee_status[PORT_MAX]; /* 0x0130 */
#define SHMEM_EEE_TIMER_MASK 0x0000ffff
#define SHMEM_EEE_SUPPORTED_MASK 0x000f0000
#define SHMEM_EEE_SUPPORTED_SHIFT 16
#define SHMEM_EEE_ADV_STATUS_MASK 0x00f00000
#define SHMEM_EEE_100M_ADV (1<<0)
#define SHMEM_EEE_1G_ADV (1<<1)
#define SHMEM_EEE_10G_ADV (1<<2)
#define SHMEM_EEE_ADV_STATUS_SHIFT 20
#define SHMEM_EEE_LP_ADV_STATUS_MASK 0x0f000000
#define SHMEM_EEE_LP_ADV_STATUS_SHIFT 24
#define SHMEM_EEE_REQUESTED_BIT 0x10000000
#define SHMEM_EEE_LPI_REQUESTED_BIT 0x20000000
#define SHMEM_EEE_ACTIVE_BIT 0x40000000
#define SHMEM_EEE_TIME_OUTPUT_BIT 0x80000000
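/*
 * Illustrative sketch (not part of the original header): extracting the
 * documented eee_status fields.
 */
#if 0
static void eee_status_decode(uint32_t eee)
{
	uint32_t timer = eee & SHMEM_EEE_TIMER_MASK;
	uint32_t supported = (eee & SHMEM_EEE_SUPPORTED_MASK) >>
	    SHMEM_EEE_SUPPORTED_SHIFT;
	uint32_t adv = (eee & SHMEM_EEE_ADV_STATUS_MASK) >>
	    SHMEM_EEE_ADV_STATUS_SHIFT;
	int negotiated = (eee & SHMEM_EEE_ACTIVE_BIT) != 0;

	(void)timer; (void)supported; (void)adv; (void)negotiated;
}
#endif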
uint32_t sizeof_port_stats; /* 0x0138 */
/* Link Flap Avoidance */
uint32_t lfa_host_addr[PORT_MAX]; /* 0x013c */
/* External PHY temperature in deg C. */
uint32_t extphy_temps_in_celsius; /* 0x0144 */
#define EXTPHY1_TEMP_MASK 0x0000ffff
#define EXTPHY1_TEMP_SHIFT 0
#define ON_BOARD_TEMP_MASK 0xffff0000
#define ON_BOARD_TEMP_SHIFT 16
uint32_t ocdata_info_addr; /* Offset 0x148 */
uint32_t drv_func_info_addr; /* Offset 0x14C */
uint32_t drv_func_info_size; /* Offset 0x150 */
uint32_t link_attr_sync[PORT_MAX]; /* Offset 0x154 */
#define LINK_ATTR_SYNC_KR2_ENABLE 0x00000001
#define LINK_ATTR_84858 0x00000002
#define LINK_SFP_EEPROM_COMP_CODE_MASK 0x0000ff00
#define LINK_SFP_EEPROM_COMP_CODE_SHIFT 8
#define LINK_SFP_EEPROM_COMP_CODE_SR 0x00001000
#define LINK_SFP_EEPROM_COMP_CODE_LR 0x00002000
#define LINK_SFP_EEPROM_COMP_CODE_LRM 0x00004000
uint32_t ibft_host_addr_hi; /* Initialized by uEFI ROM. Offset 0x158 */
uint32_t fcode_ver; /* Offset 0x15c */
uint32_t link_change_count[PORT_MAX]; /* Offset 0x160-0x164 */
#define LINK_CHANGE_COUNT_MASK 0xff /* Offset 0x168 */
/* driver version for each personality*/
struct os_drv_ver func_os_drv_ver[E2_FUNC_MAX]; /* Offset 0x16c */
/* Flag to the driver that PF's drv_info_host_addr buffer was read */
uint32_t mfw_drv_indication; /* Offset 0x19c */
/* We use an indication bit for each PF (0..3) */
#define MFW_DRV_IND_READ_DONE_OFFSET(_pf_) (1 << _pf_)
union { /* For various OEMs */ /* Offset 0x1a0 */
uint8_t storage_boot_prog[E2_FUNC_MAX];
#define STORAGE_BOOT_PROG_MASK 0x000000FF
#define STORAGE_BOOT_PROG_NONE 0x00000000
#define STORAGE_BOOT_PROG_ISCSI_IP_ACQUIRED 0x00000002
#define STORAGE_BOOT_PROG_FCOE_FABRIC_LOGIN_SUCCESS 0x00000002
#define STORAGE_BOOT_PROG_TARGET_FOUND 0x00000004
#define STORAGE_BOOT_PROG_ISCSI_CHAP_SUCCESS 0x00000008
#define STORAGE_BOOT_PROG_FCOE_LUN_FOUND 0x00000008
#define STORAGE_BOOT_PROG_LOGGED_INTO_TGT 0x00000010
#define STORAGE_BOOT_PROG_IMG_DOWNLOADED 0x00000020
#define STORAGE_BOOT_PROG_OS_HANDOFF 0x00000040
#define STORAGE_BOOT_PROG_COMPLETED 0x00000080
uint32_t oem_i2c_data_addr;
} u;
/* 9 entries for the C2S PCP map, one for each inner VLAN PCP + 1 default */
/* For PCP values 0-3 use the lower map */
/* 0xFF000000 - PCP 0, 0x00FF0000 - PCP 1,
 * 0x0000FF00 - PCP 2, 0x000000FF - PCP 3
 */
uint32_t c2s_pcp_map_lower[E2_FUNC_MAX]; /* 0x1a4 */
/* For PCP values 4-7 use the upper map */
/* 0xFF000000 - PCP 4, 0x00FF0000 - PCP 5,
 * 0x0000FF00 - PCP 6, 0x000000FF - PCP 7
 */
uint32_t c2s_pcp_map_upper[E2_FUNC_MAX]; /* 0x1b4 */
/* For PCP default value get the MSB byte of the map default */
uint32_t c2s_pcp_map_default[E2_FUNC_MAX]; /* 0x1c4 */
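/*
 * Illustrative sketch (not part of the original header): looking up the C2S
 * mapping for an inner-VLAN PCP value. PCP 0-3 come from the lower map and
 * PCP 4-7 from the upper map, with PCP 0 (respectively 4) in the most
 * significant byte, per the comments above.
 */
#if 0
static uint8_t c2s_pcp_lookup(uint32_t map_lower, uint32_t map_upper,
    uint8_t pcp)
{
	uint32_t map = (pcp < 4) ? map_lower : map_upper;

	return (map >> ((3 - (pcp & 3)) * 8)) & 0xff;
}
#endif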
/* FC_NPIV table offset in NVRAM */
uint32_t fc_npiv_nvram_tbl_addr[PORT_MAX]; /* 0x1d4 */
/* Shows last method that changed configuration of this device */
enum curr_cfg_method_e curr_cfg; /* 0x1dc */
/* Storm FW version, should be kept in the format 0xMMmmbbdd:
 * MM - Major, mm - Minor, bb - Build, dd - Drop
*/
uint32_t netproc_fw_ver; /* 0x1e0 */
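/*
 * Illustrative sketch (not part of the original header): splitting a
 * 0xMMmmbbdd version word into its components.
 */
#if 0
static void netproc_fw_ver_decode(uint32_t ver, uint8_t *major,
    uint8_t *minor, uint8_t *build, uint8_t *drop)
{
	*major = (ver >> 24) & 0xff;
	*minor = (ver >> 16) & 0xff;
	*build = (ver >> 8) & 0xff;
	*drop = ver & 0xff;
}
#endif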
/* Option ROM SMASH CLP version */
uint32_t clp_ver; /* 0x1e4 */
uint32_t pcie_bus_num; /* 0x1e8 */
uint32_t sriov_switch_mode; /* 0x1ec */
#define SRIOV_SWITCH_MODE_NONE 0x0
#define SRIOV_SWITCH_MODE_VEB 0x1
#define SRIOV_SWITCH_MODE_VEPA 0x2
uint8_t rsrv2[E2_FUNC_MAX]; /* 0x1f0 */
uint32_t img_inv_table_addr; /* Address to INV_TABLE_P */ /* 0x1f4 */
uint32_t mtu_size[E2_FUNC_MAX]; /* 0x1f8 */
uint32_t os_driver_state[E2_FUNC_MAX]; /* 0x208 */
#define OS_DRIVER_STATE_NOT_LOADED 0 /* not installed */
#define OS_DRIVER_STATE_LOADING 1 /* transition state */
#define OS_DRIVER_STATE_DISABLED 2 /* installed but disabled */
#define OS_DRIVER_STATE_ACTIVE 3 /* installed and active */
/* mini dump driver info */
struct mdump_driver_info drv_info; /* 0x218 */
/* 0x22c */
};
struct emac_stats {
uint32_t rx_stat_ifhcinoctets;
uint32_t rx_stat_ifhcinbadoctets;
uint32_t rx_stat_etherstatsfragments;
uint32_t rx_stat_ifhcinucastpkts;
uint32_t rx_stat_ifhcinmulticastpkts;
uint32_t rx_stat_ifhcinbroadcastpkts;
uint32_t rx_stat_dot3statsfcserrors;
uint32_t rx_stat_dot3statsalignmenterrors;
uint32_t rx_stat_dot3statscarriersenseerrors;
uint32_t rx_stat_xonpauseframesreceived;
uint32_t rx_stat_xoffpauseframesreceived;
uint32_t rx_stat_maccontrolframesreceived;
uint32_t rx_stat_xoffstateentered;
uint32_t rx_stat_dot3statsframestoolong;
uint32_t rx_stat_etherstatsjabbers;
uint32_t rx_stat_etherstatsundersizepkts;
uint32_t rx_stat_etherstatspkts64octets;
uint32_t rx_stat_etherstatspkts65octetsto127octets;
uint32_t rx_stat_etherstatspkts128octetsto255octets;
uint32_t rx_stat_etherstatspkts256octetsto511octets;
uint32_t rx_stat_etherstatspkts512octetsto1023octets;
uint32_t rx_stat_etherstatspkts1024octetsto1522octets;
uint32_t rx_stat_etherstatspktsover1522octets;
uint32_t rx_stat_falsecarriererrors;
uint32_t tx_stat_ifhcoutoctets;
uint32_t tx_stat_ifhcoutbadoctets;
uint32_t tx_stat_etherstatscollisions;
uint32_t tx_stat_outxonsent;
uint32_t tx_stat_outxoffsent;
uint32_t tx_stat_flowcontroldone;
uint32_t tx_stat_dot3statssinglecollisionframes;
uint32_t tx_stat_dot3statsmultiplecollisionframes;
uint32_t tx_stat_dot3statsdeferredtransmissions;
uint32_t tx_stat_dot3statsexcessivecollisions;
uint32_t tx_stat_dot3statslatecollisions;
uint32_t tx_stat_ifhcoutucastpkts;
uint32_t tx_stat_ifhcoutmulticastpkts;
uint32_t tx_stat_ifhcoutbroadcastpkts;
uint32_t tx_stat_etherstatspkts64octets;
uint32_t tx_stat_etherstatspkts65octetsto127octets;
uint32_t tx_stat_etherstatspkts128octetsto255octets;
uint32_t tx_stat_etherstatspkts256octetsto511octets;
uint32_t tx_stat_etherstatspkts512octetsto1023octets;
uint32_t tx_stat_etherstatspkts1024octetsto1522octets;
uint32_t tx_stat_etherstatspktsover1522octets;
uint32_t tx_stat_dot3statsinternalmactransmiterrors;
};
struct bmac1_stats {
uint32_t tx_stat_gtpkt_lo;
uint32_t tx_stat_gtpkt_hi;
uint32_t tx_stat_gtxpf_lo;
uint32_t tx_stat_gtxpf_hi;
uint32_t tx_stat_gtfcs_lo;
uint32_t tx_stat_gtfcs_hi;
uint32_t tx_stat_gtmca_lo;
uint32_t tx_stat_gtmca_hi;
uint32_t tx_stat_gtbca_lo;
uint32_t tx_stat_gtbca_hi;
uint32_t tx_stat_gtfrg_lo;
uint32_t tx_stat_gtfrg_hi;
uint32_t tx_stat_gtovr_lo;
uint32_t tx_stat_gtovr_hi;
uint32_t tx_stat_gt64_lo;
uint32_t tx_stat_gt64_hi;
uint32_t tx_stat_gt127_lo;
uint32_t tx_stat_gt127_hi;
uint32_t tx_stat_gt255_lo;
uint32_t tx_stat_gt255_hi;
uint32_t tx_stat_gt511_lo;
uint32_t tx_stat_gt511_hi;
uint32_t tx_stat_gt1023_lo;
uint32_t tx_stat_gt1023_hi;
uint32_t tx_stat_gt1518_lo;
uint32_t tx_stat_gt1518_hi;
uint32_t tx_stat_gt2047_lo;
uint32_t tx_stat_gt2047_hi;
uint32_t tx_stat_gt4095_lo;
uint32_t tx_stat_gt4095_hi;
uint32_t tx_stat_gt9216_lo;
uint32_t tx_stat_gt9216_hi;
uint32_t tx_stat_gt16383_lo;
uint32_t tx_stat_gt16383_hi;
uint32_t tx_stat_gtmax_lo;
uint32_t tx_stat_gtmax_hi;
uint32_t tx_stat_gtufl_lo;
uint32_t tx_stat_gtufl_hi;
uint32_t tx_stat_gterr_lo;
uint32_t tx_stat_gterr_hi;
uint32_t tx_stat_gtbyt_lo;
uint32_t tx_stat_gtbyt_hi;
uint32_t rx_stat_gr64_lo;
uint32_t rx_stat_gr64_hi;
uint32_t rx_stat_gr127_lo;
uint32_t rx_stat_gr127_hi;
uint32_t rx_stat_gr255_lo;
uint32_t rx_stat_gr255_hi;
uint32_t rx_stat_gr511_lo;
uint32_t rx_stat_gr511_hi;
uint32_t rx_stat_gr1023_lo;
uint32_t rx_stat_gr1023_hi;
uint32_t rx_stat_gr1518_lo;
uint32_t rx_stat_gr1518_hi;
uint32_t rx_stat_gr2047_lo;
uint32_t rx_stat_gr2047_hi;
uint32_t rx_stat_gr4095_lo;
uint32_t rx_stat_gr4095_hi;
uint32_t rx_stat_gr9216_lo;
uint32_t rx_stat_gr9216_hi;
uint32_t rx_stat_gr16383_lo;
uint32_t rx_stat_gr16383_hi;
uint32_t rx_stat_grmax_lo;
uint32_t rx_stat_grmax_hi;
uint32_t rx_stat_grpkt_lo;
uint32_t rx_stat_grpkt_hi;
uint32_t rx_stat_grfcs_lo;
uint32_t rx_stat_grfcs_hi;
uint32_t rx_stat_grmca_lo;
uint32_t rx_stat_grmca_hi;
uint32_t rx_stat_grbca_lo;
uint32_t rx_stat_grbca_hi;
uint32_t rx_stat_grxcf_lo;
uint32_t rx_stat_grxcf_hi;
uint32_t rx_stat_grxpf_lo;
uint32_t rx_stat_grxpf_hi;
uint32_t rx_stat_grxuo_lo;
uint32_t rx_stat_grxuo_hi;
uint32_t rx_stat_grjbr_lo;
uint32_t rx_stat_grjbr_hi;
uint32_t rx_stat_grovr_lo;
uint32_t rx_stat_grovr_hi;
uint32_t rx_stat_grflr_lo;
uint32_t rx_stat_grflr_hi;
uint32_t rx_stat_grmeg_lo;
uint32_t rx_stat_grmeg_hi;
uint32_t rx_stat_grmeb_lo;
uint32_t rx_stat_grmeb_hi;
uint32_t rx_stat_grbyt_lo;
uint32_t rx_stat_grbyt_hi;
uint32_t rx_stat_grund_lo;
uint32_t rx_stat_grund_hi;
uint32_t rx_stat_grfrg_lo;
uint32_t rx_stat_grfrg_hi;
uint32_t rx_stat_grerb_lo;
uint32_t rx_stat_grerb_hi;
uint32_t rx_stat_grfre_lo;
uint32_t rx_stat_grfre_hi;
uint32_t rx_stat_gripj_lo;
uint32_t rx_stat_gripj_hi;
};
struct bmac2_stats {
uint32_t tx_stat_gtpk_lo; /* gtpok */
uint32_t tx_stat_gtpk_hi; /* gtpok */
uint32_t tx_stat_gtxpf_lo; /* gtpf */
uint32_t tx_stat_gtxpf_hi; /* gtpf */
uint32_t tx_stat_gtpp_lo; /* NEW BMAC2 */
uint32_t tx_stat_gtpp_hi; /* NEW BMAC2 */
uint32_t tx_stat_gtfcs_lo;
uint32_t tx_stat_gtfcs_hi;
uint32_t tx_stat_gtuca_lo; /* NEW BMAC2 */
uint32_t tx_stat_gtuca_hi; /* NEW BMAC2 */
uint32_t tx_stat_gtmca_lo;
uint32_t tx_stat_gtmca_hi;
uint32_t tx_stat_gtbca_lo;
uint32_t tx_stat_gtbca_hi;
uint32_t tx_stat_gtovr_lo;
uint32_t tx_stat_gtovr_hi;
uint32_t tx_stat_gtfrg_lo;
uint32_t tx_stat_gtfrg_hi;
uint32_t tx_stat_gtpkt1_lo; /* gtpkt */
uint32_t tx_stat_gtpkt1_hi; /* gtpkt */
uint32_t tx_stat_gt64_lo;
uint32_t tx_stat_gt64_hi;
uint32_t tx_stat_gt127_lo;
uint32_t tx_stat_gt127_hi;
uint32_t tx_stat_gt255_lo;
uint32_t tx_stat_gt255_hi;
uint32_t tx_stat_gt511_lo;
uint32_t tx_stat_gt511_hi;
uint32_t tx_stat_gt1023_lo;
uint32_t tx_stat_gt1023_hi;
uint32_t tx_stat_gt1518_lo;
uint32_t tx_stat_gt1518_hi;
uint32_t tx_stat_gt2047_lo;
uint32_t tx_stat_gt2047_hi;
uint32_t tx_stat_gt4095_lo;
uint32_t tx_stat_gt4095_hi;
uint32_t tx_stat_gt9216_lo;
uint32_t tx_stat_gt9216_hi;
uint32_t tx_stat_gt16383_lo;
uint32_t tx_stat_gt16383_hi;
uint32_t tx_stat_gtmax_lo;
uint32_t tx_stat_gtmax_hi;
uint32_t tx_stat_gtufl_lo;
uint32_t tx_stat_gtufl_hi;
uint32_t tx_stat_gterr_lo;
uint32_t tx_stat_gterr_hi;
uint32_t tx_stat_gtbyt_lo;
uint32_t tx_stat_gtbyt_hi;
uint32_t rx_stat_gr64_lo;
uint32_t rx_stat_gr64_hi;
uint32_t rx_stat_gr127_lo;
uint32_t rx_stat_gr127_hi;
uint32_t rx_stat_gr255_lo;
uint32_t rx_stat_gr255_hi;
uint32_t rx_stat_gr511_lo;
uint32_t rx_stat_gr511_hi;
uint32_t rx_stat_gr1023_lo;
uint32_t rx_stat_gr1023_hi;
uint32_t rx_stat_gr1518_lo;
uint32_t rx_stat_gr1518_hi;
uint32_t rx_stat_gr2047_lo;
uint32_t rx_stat_gr2047_hi;
uint32_t rx_stat_gr4095_lo;
uint32_t rx_stat_gr4095_hi;
uint32_t rx_stat_gr9216_lo;
uint32_t rx_stat_gr9216_hi;
uint32_t rx_stat_gr16383_lo;
uint32_t rx_stat_gr16383_hi;
uint32_t rx_stat_grmax_lo;
uint32_t rx_stat_grmax_hi;
uint32_t rx_stat_grpkt_lo;
uint32_t rx_stat_grpkt_hi;
uint32_t rx_stat_grfcs_lo;
uint32_t rx_stat_grfcs_hi;
uint32_t rx_stat_gruca_lo;
uint32_t rx_stat_gruca_hi;
uint32_t rx_stat_grmca_lo;
uint32_t rx_stat_grmca_hi;
uint32_t rx_stat_grbca_lo;
uint32_t rx_stat_grbca_hi;
uint32_t rx_stat_grxpf_lo; /* grpf */
uint32_t rx_stat_grxpf_hi; /* grpf */
uint32_t rx_stat_grpp_lo;
uint32_t rx_stat_grpp_hi;
uint32_t rx_stat_grxuo_lo; /* gruo */
uint32_t rx_stat_grxuo_hi; /* gruo */
uint32_t rx_stat_grjbr_lo;
uint32_t rx_stat_grjbr_hi;
uint32_t rx_stat_grovr_lo;
uint32_t rx_stat_grovr_hi;
uint32_t rx_stat_grxcf_lo; /* grcf */
uint32_t rx_stat_grxcf_hi; /* grcf */
uint32_t rx_stat_grflr_lo;
uint32_t rx_stat_grflr_hi;
uint32_t rx_stat_grpok_lo;
uint32_t rx_stat_grpok_hi;
uint32_t rx_stat_grmeg_lo;
uint32_t rx_stat_grmeg_hi;
uint32_t rx_stat_grmeb_lo;
uint32_t rx_stat_grmeb_hi;
uint32_t rx_stat_grbyt_lo;
uint32_t rx_stat_grbyt_hi;
uint32_t rx_stat_grund_lo;
uint32_t rx_stat_grund_hi;
uint32_t rx_stat_grfrg_lo;
uint32_t rx_stat_grfrg_hi;
uint32_t rx_stat_grerb_lo; /* grerrbyt */
uint32_t rx_stat_grerb_hi; /* grerrbyt */
uint32_t rx_stat_grfre_lo; /* grfrerr */
uint32_t rx_stat_grfre_hi; /* grfrerr */
uint32_t rx_stat_gripj_lo;
uint32_t rx_stat_gripj_hi;
};
struct mstat_stats {
struct {
/* NOTE: MSTAT on E3 has a bug where this register's contents are
 * actually tx_gtxpok + tx_gtxpf + (possibly) tx_gtxpp
 */
uint32_t tx_gtxpok_lo;
uint32_t tx_gtxpok_hi;
uint32_t tx_gtxpf_lo;
uint32_t tx_gtxpf_hi;
uint32_t tx_gtxpp_lo;
uint32_t tx_gtxpp_hi;
uint32_t tx_gtfcs_lo;
uint32_t tx_gtfcs_hi;
uint32_t tx_gtuca_lo;
uint32_t tx_gtuca_hi;
uint32_t tx_gtmca_lo;
uint32_t tx_gtmca_hi;
uint32_t tx_gtgca_lo;
uint32_t tx_gtgca_hi;
uint32_t tx_gtpkt_lo;
uint32_t tx_gtpkt_hi;
uint32_t tx_gt64_lo;
uint32_t tx_gt64_hi;
uint32_t tx_gt127_lo;
uint32_t tx_gt127_hi;
uint32_t tx_gt255_lo;
uint32_t tx_gt255_hi;
uint32_t tx_gt511_lo;
uint32_t tx_gt511_hi;
uint32_t tx_gt1023_lo;
uint32_t tx_gt1023_hi;
uint32_t tx_gt1518_lo;
uint32_t tx_gt1518_hi;
uint32_t tx_gt2047_lo;
uint32_t tx_gt2047_hi;
uint32_t tx_gt4095_lo;
uint32_t tx_gt4095_hi;
uint32_t tx_gt9216_lo;
uint32_t tx_gt9216_hi;
uint32_t tx_gt16383_lo;
uint32_t tx_gt16383_hi;
uint32_t tx_gtufl_lo;
uint32_t tx_gtufl_hi;
uint32_t tx_gterr_lo;
uint32_t tx_gterr_hi;
uint32_t tx_gtbyt_lo;
uint32_t tx_gtbyt_hi;
uint32_t tx_collisions_lo;
uint32_t tx_collisions_hi;
uint32_t tx_singlecollision_lo;
uint32_t tx_singlecollision_hi;
uint32_t tx_multiplecollisions_lo;
uint32_t tx_multiplecollisions_hi;
uint32_t tx_deferred_lo;
uint32_t tx_deferred_hi;
uint32_t tx_excessivecollisions_lo;
uint32_t tx_excessivecollisions_hi;
uint32_t tx_latecollisions_lo;
uint32_t tx_latecollisions_hi;
} stats_tx;
struct {
uint32_t rx_gr64_lo;
uint32_t rx_gr64_hi;
uint32_t rx_gr127_lo;
uint32_t rx_gr127_hi;
uint32_t rx_gr255_lo;
uint32_t rx_gr255_hi;
uint32_t rx_gr511_lo;
uint32_t rx_gr511_hi;
uint32_t rx_gr1023_lo;
uint32_t rx_gr1023_hi;
uint32_t rx_gr1518_lo;
uint32_t rx_gr1518_hi;
uint32_t rx_gr2047_lo;
uint32_t rx_gr2047_hi;
uint32_t rx_gr4095_lo;
uint32_t rx_gr4095_hi;
uint32_t rx_gr9216_lo;
uint32_t rx_gr9216_hi;
uint32_t rx_gr16383_lo;
uint32_t rx_gr16383_hi;
uint32_t rx_grpkt_lo;
uint32_t rx_grpkt_hi;
uint32_t rx_grfcs_lo;
uint32_t rx_grfcs_hi;
uint32_t rx_gruca_lo;
uint32_t rx_gruca_hi;
uint32_t rx_grmca_lo;
uint32_t rx_grmca_hi;
uint32_t rx_grbca_lo;
uint32_t rx_grbca_hi;
uint32_t rx_grxpf_lo;
uint32_t rx_grxpf_hi;
uint32_t rx_grxpp_lo;
uint32_t rx_grxpp_hi;
uint32_t rx_grxuo_lo;
uint32_t rx_grxuo_hi;
uint32_t rx_grovr_lo;
uint32_t rx_grovr_hi;
uint32_t rx_grxcf_lo;
uint32_t rx_grxcf_hi;
uint32_t rx_grflr_lo;
uint32_t rx_grflr_hi;
uint32_t rx_grpok_lo;
uint32_t rx_grpok_hi;
uint32_t rx_grbyt_lo;
uint32_t rx_grbyt_hi;
uint32_t rx_grund_lo;
uint32_t rx_grund_hi;
uint32_t rx_grfrg_lo;
uint32_t rx_grfrg_hi;
uint32_t rx_grerb_lo;
uint32_t rx_grerb_hi;
uint32_t rx_grfre_lo;
uint32_t rx_grfre_hi;
uint32_t rx_alignmenterrors_lo;
uint32_t rx_alignmenterrors_hi;
uint32_t rx_falsecarrier_lo;
uint32_t rx_falsecarrier_hi;
uint32_t rx_llfcmsgcnt_lo;
uint32_t rx_llfcmsgcnt_hi;
} stats_rx;
};
union mac_stats {
struct emac_stats emac_stats;
struct bmac1_stats bmac1_stats;
struct bmac2_stats bmac2_stats;
struct mstat_stats mstat_stats;
};
struct mac_stx {
/* in_bad_octets */
uint32_t rx_stat_ifhcinbadoctets_hi;
uint32_t rx_stat_ifhcinbadoctets_lo;
/* out_bad_octets */
uint32_t tx_stat_ifhcoutbadoctets_hi;
uint32_t tx_stat_ifhcoutbadoctets_lo;
/* crc_receive_errors */
uint32_t rx_stat_dot3statsfcserrors_hi;
uint32_t rx_stat_dot3statsfcserrors_lo;
/* alignment_errors */
uint32_t rx_stat_dot3statsalignmenterrors_hi;
uint32_t rx_stat_dot3statsalignmenterrors_lo;
/* carrier_sense_errors */
uint32_t rx_stat_dot3statscarriersenseerrors_hi;
uint32_t rx_stat_dot3statscarriersenseerrors_lo;
/* false_carrier_detections */
uint32_t rx_stat_falsecarriererrors_hi;
uint32_t rx_stat_falsecarriererrors_lo;
/* runt_packets_received */
uint32_t rx_stat_etherstatsundersizepkts_hi;
uint32_t rx_stat_etherstatsundersizepkts_lo;
/* jabber_packets_received */
uint32_t rx_stat_dot3statsframestoolong_hi;
uint32_t rx_stat_dot3statsframestoolong_lo;
/* error_runt_packets_received */
uint32_t rx_stat_etherstatsfragments_hi;
uint32_t rx_stat_etherstatsfragments_lo;
/* error_jabber_packets_received */
uint32_t rx_stat_etherstatsjabbers_hi;
uint32_t rx_stat_etherstatsjabbers_lo;
/* control_frames_received */
uint32_t rx_stat_maccontrolframesreceived_hi;
uint32_t rx_stat_maccontrolframesreceived_lo;
uint32_t rx_stat_mac_xpf_hi;
uint32_t rx_stat_mac_xpf_lo;
uint32_t rx_stat_mac_xcf_hi;
uint32_t rx_stat_mac_xcf_lo;
/* xoff_state_entered */
uint32_t rx_stat_xoffstateentered_hi;
uint32_t rx_stat_xoffstateentered_lo;
/* pause_xon_frames_received */
uint32_t rx_stat_xonpauseframesreceived_hi;
uint32_t rx_stat_xonpauseframesreceived_lo;
/* pause_xoff_frames_received */
uint32_t rx_stat_xoffpauseframesreceived_hi;
uint32_t rx_stat_xoffpauseframesreceived_lo;
/* pause_xon_frames_transmitted */
uint32_t tx_stat_outxonsent_hi;
uint32_t tx_stat_outxonsent_lo;
/* pause_xoff_frames_transmitted */
uint32_t tx_stat_outxoffsent_hi;
uint32_t tx_stat_outxoffsent_lo;
/* flow_control_done */
uint32_t tx_stat_flowcontroldone_hi;
uint32_t tx_stat_flowcontroldone_lo;
/* ether_stats_collisions */
uint32_t tx_stat_etherstatscollisions_hi;
uint32_t tx_stat_etherstatscollisions_lo;
/* single_collision_transmit_frames */
uint32_t tx_stat_dot3statssinglecollisionframes_hi;
uint32_t tx_stat_dot3statssinglecollisionframes_lo;
/* multiple_collision_transmit_frames */
uint32_t tx_stat_dot3statsmultiplecollisionframes_hi;
uint32_t tx_stat_dot3statsmultiplecollisionframes_lo;
/* deferred_transmissions */
uint32_t tx_stat_dot3statsdeferredtransmissions_hi;
uint32_t tx_stat_dot3statsdeferredtransmissions_lo;
/* excessive_collision_frames */
uint32_t tx_stat_dot3statsexcessivecollisions_hi;
uint32_t tx_stat_dot3statsexcessivecollisions_lo;
/* late_collision_frames */
uint32_t tx_stat_dot3statslatecollisions_hi;
uint32_t tx_stat_dot3statslatecollisions_lo;
/* frames_transmitted_64_bytes */
uint32_t tx_stat_etherstatspkts64octets_hi;
uint32_t tx_stat_etherstatspkts64octets_lo;
/* frames_transmitted_65_127_bytes */
uint32_t tx_stat_etherstatspkts65octetsto127octets_hi;
uint32_t tx_stat_etherstatspkts65octetsto127octets_lo;
/* frames_transmitted_128_255_bytes */
uint32_t tx_stat_etherstatspkts128octetsto255octets_hi;
uint32_t tx_stat_etherstatspkts128octetsto255octets_lo;
/* frames_transmitted_256_511_bytes */
uint32_t tx_stat_etherstatspkts256octetsto511octets_hi;
uint32_t tx_stat_etherstatspkts256octetsto511octets_lo;
/* frames_transmitted_512_1023_bytes */
uint32_t tx_stat_etherstatspkts512octetsto1023octets_hi;
uint32_t tx_stat_etherstatspkts512octetsto1023octets_lo;
/* frames_transmitted_1024_1522_bytes */
uint32_t tx_stat_etherstatspkts1024octetsto1522octets_hi;
uint32_t tx_stat_etherstatspkts1024octetsto1522octets_lo;
/* frames_transmitted_1523_9022_bytes */
uint32_t tx_stat_etherstatspktsover1522octets_hi;
uint32_t tx_stat_etherstatspktsover1522octets_lo;
uint32_t tx_stat_mac_2047_hi;
uint32_t tx_stat_mac_2047_lo;
uint32_t tx_stat_mac_4095_hi;
uint32_t tx_stat_mac_4095_lo;
uint32_t tx_stat_mac_9216_hi;
uint32_t tx_stat_mac_9216_lo;
uint32_t tx_stat_mac_16383_hi;
uint32_t tx_stat_mac_16383_lo;
/* internal_mac_transmit_errors */
uint32_t tx_stat_dot3statsinternalmactransmiterrors_hi;
uint32_t tx_stat_dot3statsinternalmactransmiterrors_lo;
/* if_out_discards */
uint32_t tx_stat_mac_ufl_hi;
uint32_t tx_stat_mac_ufl_lo;
};
#define MAC_STX_IDX_MAX 2
struct host_port_stats {
uint32_t host_port_stats_counter;
struct mac_stx mac_stx[MAC_STX_IDX_MAX];
uint32_t brb_drop_hi;
uint32_t brb_drop_lo;
uint32_t not_used; /* obsolete as of MFW 7.2.1 */
uint32_t pfc_frames_tx_hi;
uint32_t pfc_frames_tx_lo;
uint32_t pfc_frames_rx_hi;
uint32_t pfc_frames_rx_lo;
uint32_t eee_lpi_count_hi;
uint32_t eee_lpi_count_lo;
};
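/*
 * Illustrative sketch (not part of the original header): the _hi/_lo pairs
 * in the statistics structures are the two halves of 64-bit counters and
 * combine like this.
 */
#if 0
static uint64_t stats_hilo_to_u64(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
#endif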
struct host_func_stats {
uint32_t host_func_stats_start;
uint32_t total_bytes_received_hi;
uint32_t total_bytes_received_lo;
uint32_t total_bytes_transmitted_hi;
uint32_t total_bytes_transmitted_lo;
uint32_t total_unicast_packets_received_hi;
uint32_t total_unicast_packets_received_lo;
uint32_t total_multicast_packets_received_hi;
uint32_t total_multicast_packets_received_lo;
uint32_t total_broadcast_packets_received_hi;
uint32_t total_broadcast_packets_received_lo;
uint32_t total_unicast_packets_transmitted_hi;
uint32_t total_unicast_packets_transmitted_lo;
uint32_t total_multicast_packets_transmitted_hi;
uint32_t total_multicast_packets_transmitted_lo;
uint32_t total_broadcast_packets_transmitted_hi;
uint32_t total_broadcast_packets_transmitted_lo;
uint32_t valid_bytes_received_hi;
uint32_t valid_bytes_received_lo;
uint32_t host_func_stats_end;
};
/* VIC definitions */
#define VICSTATST_UIF_INDEX 2
/*
* Stats collected for AFEX.
* NOTE: the structure is exactly as expected to be received by the switch.
* The order must remain exactly as-is unless the protocol changes!
*/
struct afex_stats {
uint32_t tx_unicast_frames_hi;
uint32_t tx_unicast_frames_lo;
uint32_t tx_unicast_bytes_hi;
uint32_t tx_unicast_bytes_lo;
uint32_t tx_multicast_frames_hi;
uint32_t tx_multicast_frames_lo;
uint32_t tx_multicast_bytes_hi;
uint32_t tx_multicast_bytes_lo;
uint32_t tx_broadcast_frames_hi;
uint32_t tx_broadcast_frames_lo;
uint32_t tx_broadcast_bytes_hi;
uint32_t tx_broadcast_bytes_lo;
uint32_t tx_frames_discarded_hi;
uint32_t tx_frames_discarded_lo;
uint32_t tx_frames_dropped_hi;
uint32_t tx_frames_dropped_lo;
uint32_t rx_unicast_frames_hi;
uint32_t rx_unicast_frames_lo;
uint32_t rx_unicast_bytes_hi;
uint32_t rx_unicast_bytes_lo;
uint32_t rx_multicast_frames_hi;
uint32_t rx_multicast_frames_lo;
uint32_t rx_multicast_bytes_hi;
uint32_t rx_multicast_bytes_lo;
uint32_t rx_broadcast_frames_hi;
uint32_t rx_broadcast_frames_lo;
uint32_t rx_broadcast_bytes_hi;
uint32_t rx_broadcast_bytes_lo;
uint32_t rx_frames_discarded_hi;
uint32_t rx_frames_discarded_lo;
uint32_t rx_frames_dropped_hi;
uint32_t rx_frames_dropped_lo;
};
/* To maintain backward compatibility between FW and drivers, new elements */
/* should be added to the end of the structure. */
/* Per Port Statistics */
struct port_info {
uint32_t size; /* size of this structure (i.e. sizeof(port_info)) */
uint32_t enabled; /* 0 =Disabled, 1= Enabled */
uint32_t link_speed; /* multiplier of 100Mb */
uint32_t wol_support; /* WoL Support (i.e. non-zero if WoL supported) */
uint32_t flow_control; /* 802.3X Flow Ctrl. 0=off 1=RX 2=TX 3=RX&TX.*/
uint32_t flex10; /* Flex10 mode enabled. non zero = yes */
uint32_t rx_drops; /* RX Discards. Counters roll over, never reset */
uint32_t rx_errors; /* RX Errors. Physical Port Stats L95, All PFs and NC-SI.
This is flagged by the consumer as an error. */
uint32_t rx_uncast_lo; /* RX Unicast Packets. Free-running counters */
uint32_t rx_uncast_hi; /* RX Unicast Packets. Free-running counters */
uint32_t rx_mcast_lo; /* RX Multicast Packets */
uint32_t rx_mcast_hi; /* RX Multicast Packets */
uint32_t rx_bcast_lo; /* RX Broadcast Packets */
uint32_t rx_bcast_hi; /* RX Broadcast Packets */
uint32_t tx_uncast_lo; /* TX Unicast Packets */
uint32_t tx_uncast_hi; /* TX Unicast Packets */
uint32_t tx_mcast_lo; /* TX Multicast Packets */
uint32_t tx_mcast_hi; /* TX Multicast Packets */
uint32_t tx_bcast_lo; /* TX Broadcast Packets */
uint32_t tx_bcast_hi; /* TX Broadcast Packets */
uint32_t tx_errors; /* TX Errors */
uint32_t tx_discards; /* TX Discards */
uint32_t rx_frames_lo; /* RX Frames received */
uint32_t rx_frames_hi; /* RX Frames received */
uint32_t rx_bytes_lo; /* RX Bytes received */
uint32_t rx_bytes_hi; /* RX Bytes received */
uint32_t tx_frames_lo; /* TX Frames sent */
uint32_t tx_frames_hi; /* TX Frames sent */
uint32_t tx_bytes_lo; /* TX Bytes sent */
uint32_t tx_bytes_hi; /* TX Bytes sent */
uint32_t link_status; /* Port P Link Status. Bit 0: port enabled.
Bit 1: link good.
Bit 2: set if link changed since last poll. */
uint32_t tx_pfc_frames_lo; /* PFC Frames sent. */
uint32_t tx_pfc_frames_hi; /* PFC Frames sent. */
uint32_t rx_pfc_frames_lo; /* PFC Frames Received. */
uint32_t rx_pfc_frames_hi; /* PFC Frames Received. */
};
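/*
 * Illustrative sketch (not part of the original header): decoding
 * port_info.link_status, assuming the bit meanings given in the comment
 * above.
 */
#if 0
static void port_link_status_decode(uint32_t ls, int *enabled, int *good,
    int *changed)
{
	*enabled = (ls & 0x1) != 0;
	*good = (ls & 0x2) != 0;
	*changed = (ls & 0x4) != 0;
}
#endif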
#define BCM_5710_FW_MAJOR_VERSION 7
#define BCM_5710_FW_MINOR_VERSION 13
#define BCM_5710_FW_REVISION_VERSION 1
#define BCM_5710_FW_ENGINEERING_VERSION 0
#define BCM_5710_FW_COMPILE_FLAGS 1
/*
* attention bits $$KEEP_ENDIANNESS$$
*/
struct atten_sp_status_block
{
uint32_t attn_bits /* 16 bits of attention signal lines */;
uint32_t attn_bits_ack /* 16 bits of attention signal ack */;
uint8_t status_block_id /* status block id */;
uint8_t reserved0 /* reserved for padding */;
uint16_t attn_bits_index /* attention bits running index */;
uint32_t reserved1 /* reserved for padding */;
};
/*
* The eth aggregative context of Cstorm
*/
struct cstorm_eth_ag_context
{
uint32_t __reserved0[10];
};
/*
* The iscsi aggregative context of Cstorm
*/
struct cstorm_iscsi_ag_context
{
uint32_t agg_vars1;
#define CSTORM_ISCSI_AG_CONTEXT_STATE (0xFF<<0) /* BitField agg_vars1 Various aggregative variables: The state of the connection */
#define CSTORM_ISCSI_AG_CONTEXT_STATE_SHIFT 0
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<8) /* BitField agg_vars1 Various aggregative variables: The connection is currently registered to the QM with queue index 0 */
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 8
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<9) /* BitField agg_vars1 Various aggregative variables: The connection is currently registered to the QM with queue index 1 */
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 9
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<10) /* BitField agg_vars1 Various aggregative variables: The connection is currently registered to the QM with queue index 2 */
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 10
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<11) /* BitField agg_vars1 Various aggregative variables: The connection is currently registered to the QM with queue index 3 */
#define __CSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 11
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED_ULP_RX_SE_CF_EN (0x1<<12) /* BitField agg_vars1 Various aggregative variables: ULP Rx SE counter flag enable */
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED_ULP_RX_SE_CF_EN_SHIFT 12
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED_ULP_RX_INV_CF_EN (0x1<<13) /* BitField agg_vars1 Various aggregative variables: ULP Rx invalidate counter flag enable */
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED_ULP_RX_INV_CF_EN_SHIFT 13
#define __CSTORM_ISCSI_AG_CONTEXT_AUX4_CF (0x3<<14) /* BitField agg_vars1 Various aggregative variables: Aux 4 counter flag */
#define __CSTORM_ISCSI_AG_CONTEXT_AUX4_CF_SHIFT 14
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED66 (0x3<<16) /* BitField agg_vars1 Various aggregative variables: The connection QOS */
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED66_SHIFT 16
#define __CSTORM_ISCSI_AG_CONTEXT_FIN_RECEIVED_CF_EN (0x1<<18) /* BitField agg_vars1 Various aggregative variables: Enable decision rule for fin_received_cf */
#define __CSTORM_ISCSI_AG_CONTEXT_FIN_RECEIVED_CF_EN_SHIFT 18
#define __CSTORM_ISCSI_AG_CONTEXT_AUX1_CF_EN (0x1<<19) /* BitField agg_vars1 Various aggregative variables: Enable decision rule for auxiliary counter flag 1 */
#define __CSTORM_ISCSI_AG_CONTEXT_AUX1_CF_EN_SHIFT 19
#define __CSTORM_ISCSI_AG_CONTEXT_AUX2_CF_EN (0x1<<20) /* BitField agg_vars1 Various aggregative variables: Enable decision rule for auxiliary counter flag 2 */
#define __CSTORM_ISCSI_AG_CONTEXT_AUX2_CF_EN_SHIFT 20
#define __CSTORM_ISCSI_AG_CONTEXT_AUX3_CF_EN (0x1<<21) /* BitField agg_vars1 Various aggregative variables: Enable decision rule for auxiliary counter flag 3 */
#define __CSTORM_ISCSI_AG_CONTEXT_AUX3_CF_EN_SHIFT 21
#define __CSTORM_ISCSI_AG_CONTEXT_AUX4_CF_EN (0x1<<22) /* BitField agg_vars1 Various aggregative variables: Enable decision rule for auxiliary counter flag 4 */
#define __CSTORM_ISCSI_AG_CONTEXT_AUX4_CF_EN_SHIFT 22
#define __CSTORM_ISCSI_AG_CONTEXT_REL_SEQ_RULE (0x7<<23) /* BitField agg_vars1 Various aggregative variables: 0-NOP, 1-EQ, 2-NEQ, 3-GT, 4-GE, 5-LS, 6-LE */
#define __CSTORM_ISCSI_AG_CONTEXT_REL_SEQ_RULE_SHIFT 23
#define CSTORM_ISCSI_AG_CONTEXT_HQ_PROD_RULE (0x3<<26) /* BitField agg_vars1 Various aggregative variables: 0-NOP, 1-EQ, 2-NEQ */
#define CSTORM_ISCSI_AG_CONTEXT_HQ_PROD_RULE_SHIFT 26
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED52 (0x3<<28) /* BitField agg_vars1 Various aggregative variables: 0-NOP, 1-EQ, 2-NEQ */
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED52_SHIFT 28
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED53 (0x3<<30) /* BitField agg_vars1 Various aggregative variables: 0-NOP, 1-EQ, 2-NEQ */
#define __CSTORM_ISCSI_AG_CONTEXT_RESERVED53_SHIFT 30
#if defined(__BIG_ENDIAN)
uint8_t __aux1_th /* Aux1 threshold for the decision */;
uint8_t __aux1_val /* Aux1 aggregation value */;
uint16_t __agg_vars2 /* Various aggregative variables*/;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_vars2 /* Various aggregative variables*/;
uint8_t __aux1_val /* Aux1 aggregation value */;
uint8_t __aux1_th /* Aux1 threshold for the decision */;
#endif
uint32_t rel_seq /* The sequence to release */;
uint32_t rel_seq_th /* The threshold for the released sequence */;
#if defined(__BIG_ENDIAN)
uint16_t hq_cons /* The HQ Consumer */;
uint16_t hq_prod /* The HQ producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t hq_prod /* The HQ producer */;
uint16_t hq_cons /* The HQ Consumer */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __reserved62 /* Mask value for the decision algorithm of the general flags */;
uint8_t __reserved61 /* General flags */;
uint8_t __reserved60 /* ORQ consumer updated by the completor */;
uint8_t __reserved59 /* ORQ ULP Rx consumer */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __reserved59 /* ORQ ULP Rx consumer */;
uint8_t __reserved60 /* ORQ consumer updated by the completor */;
uint8_t __reserved61 /* General flags */;
uint8_t __reserved62 /* Mask value for the decision algorithm of the general flags */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __reserved64 /* RQ consumer kept by the completor */;
uint16_t cq_u_prod /* Ustorm producer of CQ */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_u_prod /* Ustorm producer of CQ */;
uint16_t __reserved64 /* RQ consumer kept by the completor */;
#endif
uint32_t __cq_u_prod1 /* Ustorm producer of CQ 1 */;
#if defined(__BIG_ENDIAN)
uint16_t __agg_vars3 /* Various aggregative variables*/;
uint16_t cq_u_pend /* Ustorm pending completions of CQ */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_u_pend /* Ustorm pending completions of CQ */;
uint16_t __agg_vars3 /* Various aggregative variables*/;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __aux2_th /* Aux2 threshold for the decision */;
uint16_t aux2_val /* Aux2 aggregation value */;
#elif defined(__LITTLE_ENDIAN)
uint16_t aux2_val /* Aux2 aggregation value */;
uint16_t __aux2_th /* Aux2 threshold for the decision */;
#endif
};
/*
* The toe aggregative context of Cstorm
*/
struct cstorm_toe_ag_context
{
uint32_t __agg_vars1 /* Various aggregative variables*/;
#if defined(__BIG_ENDIAN)
uint8_t __aux1_th /* Aux1 threshold for the decision */;
uint8_t __aux1_val /* Aux1 aggregation value */;
uint16_t __agg_vars2 /* Various aggregative variables*/;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_vars2 /* Various aggregative variables*/;
uint8_t __aux1_val /* Aux1 aggregation value */;
uint8_t __aux1_th /* Aux1 threshold for the decision */;
#endif
uint32_t rel_seq /* The sequence to release */;
uint32_t __rel_seq_threshold /* The threshold for the released sequence */;
#if defined(__BIG_ENDIAN)
uint16_t __reserved58 /* The HQ Consumer */;
uint16_t bd_prod /* The HQ producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t bd_prod /* The HQ producer */;
uint16_t __reserved58 /* The HQ Consumer */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __reserved62 /* Mask value for the decision algorithm of the general flags */;
uint8_t __reserved61 /* General flags */;
uint8_t __reserved60 /* ORQ consumer updated by the completor */;
uint8_t __completion_opcode /* ORQ ULP Rx consumer */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __completion_opcode /* ORQ ULP Rx consumer */;
uint8_t __reserved60 /* ORQ consumer updated by the completor */;
uint8_t __reserved61 /* General flags */;
uint8_t __reserved62 /* Mask value for the decision algorithm of the general flags */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __reserved64 /* RQ consumer kept by the completor */;
uint16_t __reserved63 /* RQ consumer updated by the ULP RX */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __reserved63 /* RQ consumer updated by the ULP RX */;
uint16_t __reserved64 /* RQ consumer kept by the completor */;
#endif
uint32_t snd_max /* The ACK sequence number received in the last completed DDP */;
#if defined(__BIG_ENDIAN)
uint16_t __agg_vars3 /* Various aggregative variables*/;
- uint16_t __reserved67 /* A counter for the number of RQ WQEs with invalidate the the USTORM encountered */;
+ uint16_t __reserved67 /* A counter for the number of RQ WQEs with invalidate the USTORM encountered */;
#elif defined(__LITTLE_ENDIAN)
- uint16_t __reserved67 /* A counter for the number of RQ WQEs with invalidate the the USTORM encountered */;
+ uint16_t __reserved67 /* A counter for the number of RQ WQEs with invalidate the USTORM encountered */;
uint16_t __agg_vars3 /* Various aggregative variables*/;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __aux2_th /* Aux2 threshold for the decision */;
uint16_t __aux2_val /* Aux2 aggregation value */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __aux2_val /* Aux2 aggregation value */;
uint16_t __aux2_th /* Aux2 threshold for the decision */;
#endif
};
/*
* dmae command structure
*/
struct dmae_cmd
{
uint32_t opcode;
#define DMAE_CMD_SRC (0x1<<0) /* BitField opcode Whether the source is the PCIe or the GRC. 0- The source is the PCIe 1- The source is the GRC. */
#define DMAE_CMD_SRC_SHIFT 0
#define DMAE_CMD_DST (0x3<<1) /* BitField opcode The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */
#define DMAE_CMD_DST_SHIFT 1
#define DMAE_CMD_C_DST (0x1<<3) /* BitField opcode The destination of the completion: 0-PCIe 1-GRC */
#define DMAE_CMD_C_DST_SHIFT 3
#define DMAE_CMD_C_TYPE_ENABLE (0x1<<4) /* BitField opcode Whether to write a completion word to the completion destination: 0-Do not write a completion word 1-Write the completion word */
#define DMAE_CMD_C_TYPE_ENABLE_SHIFT 4
#define DMAE_CMD_C_TYPE_CRC_ENABLE (0x1<<5) /* BitField opcode Whether to write a CRC word to the completion destination 0-Do not write a CRC word 1-Write a CRC word */
#define DMAE_CMD_C_TYPE_CRC_ENABLE_SHIFT 5
#define DMAE_CMD_C_TYPE_CRC_OFFSET (0x7<<6) /* BitField opcode The CRC word should be taken from the DMAE GRC space from address 9+X, where X is the value in these bits. */
#define DMAE_CMD_C_TYPE_CRC_OFFSET_SHIFT 6
#define DMAE_CMD_ENDIANITY (0x3<<9) /* BitField opcode swapping mode. */
#define DMAE_CMD_ENDIANITY_SHIFT 9
#define DMAE_CMD_PORT (0x1<<11) /* BitField opcode Which network port ID to present to the PCI request interface */
#define DMAE_CMD_PORT_SHIFT 11
#define DMAE_CMD_CRC_RESET (0x1<<12) /* BitField opcode reset crc result */
#define DMAE_CMD_CRC_RESET_SHIFT 12
#define DMAE_CMD_SRC_RESET (0x1<<13) /* BitField opcode reset source address in next go */
#define DMAE_CMD_SRC_RESET_SHIFT 13
#define DMAE_CMD_DST_RESET (0x1<<14) /* BitField opcode reset dest address in next go */
#define DMAE_CMD_DST_RESET_SHIFT 14
#define DMAE_CMD_E1HVN (0x3<<15) /* BitField opcode vnic number E2 and onwards source vnic */
#define DMAE_CMD_E1HVN_SHIFT 15
#define DMAE_CMD_DST_VN (0x3<<17) /* BitField opcode E2 and onwards dest vnic */
#define DMAE_CMD_DST_VN_SHIFT 17
#define DMAE_CMD_C_FUNC (0x1<<19) /* BitField opcode E2 and onwards which function gets the completion src_vn(e1hvn)-0 dst_vn-1 */
#define DMAE_CMD_C_FUNC_SHIFT 19
#define DMAE_CMD_ERR_POLICY (0x3<<20) /* BitField opcode E2 and onwards what to do when there's a completion and a PCI error: regular-0, error indication-1, no completion-2 */
#define DMAE_CMD_ERR_POLICY_SHIFT 20
#define DMAE_CMD_RESERVED0 (0x3FF<<22) /* BitField opcode */
#define DMAE_CMD_RESERVED0_SHIFT 22
uint32_t src_addr_lo /* source address low/grc address */;
uint32_t src_addr_hi /* source address hi */;
uint32_t dst_addr_lo /* dest address low/grc address */;
uint32_t dst_addr_hi /* dest address hi */;
#if defined(__BIG_ENDIAN)
uint16_t opcode_iov;
#define DMAE_CMD_SRC_VFID (0x3F<<0) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: source VF id */
#define DMAE_CMD_SRC_VFID_SHIFT 0
#define DMAE_CMD_SRC_VFPF (0x1<<6) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: selects the source function PF-0, VF-1 */
#define DMAE_CMD_SRC_VFPF_SHIFT 6
#define DMAE_CMD_RESERVED1 (0x1<<7) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility */
#define DMAE_CMD_RESERVED1_SHIFT 7
#define DMAE_CMD_DST_VFID (0x3F<<8) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: destination VF id */
#define DMAE_CMD_DST_VFID_SHIFT 8
#define DMAE_CMD_DST_VFPF (0x1<<14) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: selects the destination function PF-0, VF-1 */
#define DMAE_CMD_DST_VFPF_SHIFT 14
#define DMAE_CMD_RESERVED2 (0x1<<15) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility */
#define DMAE_CMD_RESERVED2_SHIFT 15
uint16_t len /* copy length */;
#elif defined(__LITTLE_ENDIAN)
uint16_t len /* copy length */;
uint16_t opcode_iov;
#define DMAE_CMD_SRC_VFID (0x3F<<0) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: source VF id */
#define DMAE_CMD_SRC_VFID_SHIFT 0
#define DMAE_CMD_SRC_VFPF (0x1<<6) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: selects the source function PF-0, VF-1 */
#define DMAE_CMD_SRC_VFPF_SHIFT 6
#define DMAE_CMD_RESERVED1 (0x1<<7) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility */
#define DMAE_CMD_RESERVED1_SHIFT 7
#define DMAE_CMD_DST_VFID (0x3F<<8) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: destination VF id */
#define DMAE_CMD_DST_VFID_SHIFT 8
#define DMAE_CMD_DST_VFPF (0x1<<14) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility: selects the destination function PF-0, VF-1 */
#define DMAE_CMD_DST_VFPF_SHIFT 14
#define DMAE_CMD_RESERVED2 (0x1<<15) /* BitField opcode_iov E2 and onward, set to 0 for backward compatibility */
#define DMAE_CMD_RESERVED2_SHIFT 15
#endif
uint32_t comp_addr_lo /* completion address low/grc address */;
uint32_t comp_addr_hi /* completion address hi */;
uint32_t comp_val /* value to write to completion address */;
uint32_t crc32 /* crc32 result */;
uint32_t crc32_c /* crc32_c result */;
#if defined(__BIG_ENDIAN)
uint16_t crc16_c /* crc16_c result */;
uint16_t crc16 /* crc16 result */;
#elif defined(__LITTLE_ENDIAN)
uint16_t crc16 /* crc16 result */;
uint16_t crc16_c /* crc16_c result */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved3;
uint16_t crc_t10 /* crc_t10 result */;
#elif defined(__LITTLE_ENDIAN)
uint16_t crc_t10 /* crc_t10 result */;
uint16_t reserved3;
#endif
#if defined(__BIG_ENDIAN)
uint16_t xsum8 /* checksum8 result */;
uint16_t xsum16 /* checksum16 result */;
#elif defined(__LITTLE_ENDIAN)
uint16_t xsum16 /* checksum16 result */;
uint16_t xsum8 /* checksum8 result */;
#endif
};
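/*
 * Illustrative sketch (not part of the original header): composing a
 * dmae_cmd opcode for a PCIe -> GRC copy with a completion word written to
 * the GRC, using the bit fields above.
 */
#if 0
static uint32_t dmae_opcode_pci_to_grc(uint32_t port, uint32_t vn)
{
	uint32_t op = 0;

	op |= (0 << DMAE_CMD_SRC_SHIFT);   /* source: PCIe */
	op |= (2 << DMAE_CMD_DST_SHIFT);   /* destination: GRC */
	op |= (1 << DMAE_CMD_C_DST_SHIFT); /* completion goes to the GRC */
	op |= DMAE_CMD_C_TYPE_ENABLE;      /* write a completion word */
	op |= (port << DMAE_CMD_PORT_SHIFT);
	op |= (vn << DMAE_CMD_E1HVN_SHIFT);
	return op;
}
#endif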
/*
* common data for all protocols
*/
struct doorbell_hdr_t
{
uint8_t data;
#define DOORBELL_HDR_T_RX (0x1<<0) /* BitField data 1 for rx doorbell, 0 for tx doorbell */
#define DOORBELL_HDR_T_RX_SHIFT 0
#define DOORBELL_HDR_T_DB_TYPE (0x1<<1) /* BitField data 0 for normal doorbell, 1 for advertise wnd doorbell */
#define DOORBELL_HDR_T_DB_TYPE_SHIFT 1
#define DOORBELL_HDR_T_DPM_SIZE (0x3<<2) /* BitField data rdma tx only: DPM transaction size specifier (64/128/256/512 bytes) */
#define DOORBELL_HDR_T_DPM_SIZE_SHIFT 2
#define DOORBELL_HDR_T_CONN_TYPE (0xF<<4) /* BitField data connection type */
#define DOORBELL_HDR_T_CONN_TYPE_SHIFT 4
};
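/*
 * Illustrative sketch (not part of the original header): building a header
 * byte for a normal tx doorbell of a given connection type.
 */
#if 0
static uint8_t doorbell_hdr_tx(uint8_t conn_type)
{
	uint8_t data = 0;

	/* DOORBELL_HDR_T_RX and DOORBELL_HDR_T_DB_TYPE stay 0: tx, normal. */
	data |= (conn_type << DOORBELL_HDR_T_CONN_TYPE_SHIFT) &
	    DOORBELL_HDR_T_CONN_TYPE;
	return data;
}
#endif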
/*
* Ethernet doorbell
*/
struct eth_tx_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t npackets /* number of packets that were added in the doorbell */;
uint8_t params;
#define ETH_TX_DOORBELL_NUM_BDS (0x3F<<0) /* BitField params number of buffer descriptors that were added in the doorbell */
#define ETH_TX_DOORBELL_NUM_BDS_SHIFT 0
#define ETH_TX_DOORBELL_RESERVED_TX_FIN_FLAG (0x1<<6) /* BitField params tx fin command flag */
#define ETH_TX_DOORBELL_RESERVED_TX_FIN_FLAG_SHIFT 6
#define ETH_TX_DOORBELL_SPARE (0x1<<7) /* BitField params doorbell queue spare flag */
#define ETH_TX_DOORBELL_SPARE_SHIFT 7
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define ETH_TX_DOORBELL_NUM_BDS (0x3F<<0) /* BitField params number of buffer descriptors that were added in the doorbell */
#define ETH_TX_DOORBELL_NUM_BDS_SHIFT 0
#define ETH_TX_DOORBELL_RESERVED_TX_FIN_FLAG (0x1<<6) /* BitField params tx fin command flag */
#define ETH_TX_DOORBELL_RESERVED_TX_FIN_FLAG_SHIFT 6
#define ETH_TX_DOORBELL_SPARE (0x1<<7) /* BitField params doorbell queue spare flag */
#define ETH_TX_DOORBELL_SPARE_SHIFT 7
uint16_t npackets /* number of packets that were added in the doorbell */;
#endif
};
/*
* 3 lines. status block $$KEEP_ENDIANNESS$$
*/
struct hc_status_block_e1x
{
uint16_t index_values[HC_SB_MAX_INDICES_E1X] /* indices reported by cstorm */;
uint16_t running_index[HC_SB_MAX_SM] /* Status Block running indices */;
uint32_t rsrv[11];
};
/*
* host status block
*/
struct host_hc_status_block_e1x
{
struct hc_status_block_e1x sb /* fast path indices */;
};
/*
* 3 lines. status block $$KEEP_ENDIANNESS$$
*/
struct hc_status_block_e2
{
uint16_t index_values[HC_SB_MAX_INDICES_E2] /* indices reported by cstorm */;
uint16_t running_index[HC_SB_MAX_SM] /* Status Block running indices */;
uint32_t reserved[11];
};
/*
* host status block
*/
struct host_hc_status_block_e2
{
struct hc_status_block_e2 sb /* fast path indices */;
};
/*
* 5 lines. slow-path status block $$KEEP_ENDIANNESS$$
*/
struct hc_sp_status_block
{
uint16_t index_values[HC_SP_SB_MAX_INDICES] /* indices reported by cstorm */;
uint16_t running_index /* Status Block running index */;
uint16_t rsrv;
uint32_t rsrv1;
};
/*
* host status block
*/
struct host_sp_status_block
{
struct atten_sp_status_block atten_status_block /* attention bits section */;
struct hc_sp_status_block sp_sb /* slow path indices */;
};
/*
* IGU driver acknowledgment register
*/
struct igu_ack_register
{
#if defined(__BIG_ENDIAN)
uint16_t sb_id_and_flags;
#define IGU_ACK_REGISTER_STATUS_BLOCK_ID (0x1F<<0) /* BitField sb_id_and_flags 0-15: non default status blocks, 16: default status block */
#define IGU_ACK_REGISTER_STATUS_BLOCK_ID_SHIFT 0
#define IGU_ACK_REGISTER_STORM_ID (0x7<<5) /* BitField sb_id_and_flags 0-3:storm id, 4: attn status block (valid in default sb only) */
#define IGU_ACK_REGISTER_STORM_ID_SHIFT 5
#define IGU_ACK_REGISTER_UPDATE_INDEX (0x1<<8) /* BitField sb_id_and_flags if set, acknowledges status block index */
#define IGU_ACK_REGISTER_UPDATE_INDEX_SHIFT 8
#define IGU_ACK_REGISTER_INTERRUPT_MODE (0x3<<9) /* BitField sb_id_and_flags interrupt enable/disable/nop: use IGU_INT_xxx constants */
#define IGU_ACK_REGISTER_INTERRUPT_MODE_SHIFT 9
#define IGU_ACK_REGISTER_RESERVED (0x1F<<11) /* BitField sb_id_and_flags */
#define IGU_ACK_REGISTER_RESERVED_SHIFT 11
uint16_t status_block_index /* status block index acknowledgement */;
#elif defined(__LITTLE_ENDIAN)
uint16_t status_block_index /* status block index acknowledgement */;
uint16_t sb_id_and_flags;
#define IGU_ACK_REGISTER_STATUS_BLOCK_ID (0x1F<<0) /* BitField sb_id_and_flags 0-15: non default status blocks, 16: default status block */
#define IGU_ACK_REGISTER_STATUS_BLOCK_ID_SHIFT 0
#define IGU_ACK_REGISTER_STORM_ID (0x7<<5) /* BitField sb_id_and_flags 0-3:storm id, 4: attn status block (valid in default sb only) */
#define IGU_ACK_REGISTER_STORM_ID_SHIFT 5
#define IGU_ACK_REGISTER_UPDATE_INDEX (0x1<<8) /* BitField sb_id_and_flags if set, acknowledges status block index */
#define IGU_ACK_REGISTER_UPDATE_INDEX_SHIFT 8
#define IGU_ACK_REGISTER_INTERRUPT_MODE (0x3<<9) /* BitField sb_id_and_flags interrupt enable/disable/nop: use IGU_INT_xxx constants */
#define IGU_ACK_REGISTER_INTERRUPT_MODE_SHIFT 9
#define IGU_ACK_REGISTER_RESERVED (0x1F<<11) /* BitField sb_id_and_flags */
#define IGU_ACK_REGISTER_RESERVED_SHIFT 11
#endif
};
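/*
 * Illustrative sketch, not part of the original interface: composing
 * the sb_id_and_flags half of an IGU ack. int_mode takes the
 * IGU_INT_xxx values (enum igu_int_cmd, declared further below in this
 * file). The helper name is hypothetical.
 */
static inline uint16_t
igu_ack_compose(uint8_t sb_id, uint8_t storm_id, uint8_t update_index,
    uint8_t int_mode)
{
	return ((sb_id << IGU_ACK_REGISTER_STATUS_BLOCK_ID_SHIFT) &
	    IGU_ACK_REGISTER_STATUS_BLOCK_ID) |
	    ((storm_id << IGU_ACK_REGISTER_STORM_ID_SHIFT) &
	    IGU_ACK_REGISTER_STORM_ID) |
	    ((update_index << IGU_ACK_REGISTER_UPDATE_INDEX_SHIFT) &
	    IGU_ACK_REGISTER_UPDATE_INDEX) |
	    ((int_mode << IGU_ACK_REGISTER_INTERRUPT_MODE_SHIFT) &
	    IGU_ACK_REGISTER_INTERRUPT_MODE);
}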
/*
* IGU driver acknowledgement register
*/
struct igu_backward_compatible
{
uint32_t sb_id_and_flags;
#define IGU_BACKWARD_COMPATIBLE_SB_INDEX (0xFFFF<<0) /* BitField sb_id_and_flags */
#define IGU_BACKWARD_COMPATIBLE_SB_INDEX_SHIFT 0
#define IGU_BACKWARD_COMPATIBLE_SB_SELECT (0x1F<<16) /* BitField sb_id_and_flags */
#define IGU_BACKWARD_COMPATIBLE_SB_SELECT_SHIFT 16
#define IGU_BACKWARD_COMPATIBLE_SEGMENT_ACCESS (0x7<<21) /* BitField sb_id_and_flags 0-3:storm id, 4: attn status block (valid in default sb only) */
#define IGU_BACKWARD_COMPATIBLE_SEGMENT_ACCESS_SHIFT 21
#define IGU_BACKWARD_COMPATIBLE_BUPDATE (0x1<<24) /* BitField sb_id_and_flags if set, acknowledges status block index */
#define IGU_BACKWARD_COMPATIBLE_BUPDATE_SHIFT 24
#define IGU_BACKWARD_COMPATIBLE_ENABLE_INT (0x3<<25) /* BitField sb_id_and_flags interrupt enable/disable/nop: use IGU_INT_xxx constants */
#define IGU_BACKWARD_COMPATIBLE_ENABLE_INT_SHIFT 25
#define IGU_BACKWARD_COMPATIBLE_RESERVED_0 (0x1F<<27) /* BitField sb_id_and_flags */
#define IGU_BACKWARD_COMPATIBLE_RESERVED_0_SHIFT 27
uint32_t reserved_2;
};
/*
* IGU driver acknowledgement register
*/
struct igu_regular
{
uint32_t sb_id_and_flags;
#define IGU_REGULAR_SB_INDEX (0xFFFFF<<0) /* BitField sb_id_and_flags */
#define IGU_REGULAR_SB_INDEX_SHIFT 0
#define IGU_REGULAR_RESERVED0 (0x1<<20) /* BitField sb_id_and_flags */
#define IGU_REGULAR_RESERVED0_SHIFT 20
#define IGU_REGULAR_SEGMENT_ACCESS (0x7<<21) /* BitField sb_id_and_flags 21-23 (use enum igu_seg_access) */
#define IGU_REGULAR_SEGMENT_ACCESS_SHIFT 21
#define IGU_REGULAR_BUPDATE (0x1<<24) /* BitField sb_id_and_flags */
#define IGU_REGULAR_BUPDATE_SHIFT 24
#define IGU_REGULAR_ENABLE_INT (0x3<<25) /* BitField sb_id_and_flags interrupt enable/disable/nop (use enum igu_int_cmd) */
#define IGU_REGULAR_ENABLE_INT_SHIFT 25
#define IGU_REGULAR_RESERVED_1 (0x1<<27) /* BitField sb_id_and_flags */
#define IGU_REGULAR_RESERVED_1_SHIFT 27
#define IGU_REGULAR_CLEANUP_TYPE (0x3<<28) /* BitField sb_id_and_flags */
#define IGU_REGULAR_CLEANUP_TYPE_SHIFT 28
#define IGU_REGULAR_CLEANUP_SET (0x1<<30) /* BitField sb_id_and_flags */
#define IGU_REGULAR_CLEANUP_SET_SHIFT 30
#define IGU_REGULAR_BCLEANUP (0x1<<31) /* BitField sb_id_and_flags */
#define IGU_REGULAR_BCLEANUP_SHIFT 31
uint32_t reserved_2;
};
/*
 * IGU driver acknowledgement register (regular or backward compatible layout)
*/
union igu_consprod_reg
{
struct igu_regular regular;
struct igu_backward_compatible backward_compatible;
};
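/*
 * Illustrative sketch, not part of the original interface: building a
 * regular-layout consumer/producer update through the union. segment
 * takes the igu_seg_access values and int_cmd the igu_int_cmd values
 * declared below; the helper name and parameters are hypothetical.
 */
static inline uint32_t
igu_regular_ack_compose(uint32_t sb_index, uint8_t segment, uint8_t int_cmd)
{
	union igu_consprod_reg r;

	r.regular.sb_id_and_flags =
	    ((sb_index << IGU_REGULAR_SB_INDEX_SHIFT) & IGU_REGULAR_SB_INDEX) |
	    (((uint32_t)segment << IGU_REGULAR_SEGMENT_ACCESS_SHIFT) &
	    IGU_REGULAR_SEGMENT_ACCESS) |
	    IGU_REGULAR_BUPDATE |
	    (((uint32_t)int_cmd << IGU_REGULAR_ENABLE_INT_SHIFT) &
	    IGU_REGULAR_ENABLE_INT);
	r.regular.reserved_2 = 0;
	return r.regular.sb_id_and_flags;
}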
/*
 * IGU control commands
*/
enum igu_ctrl_cmd
{
IGU_CTRL_CMD_TYPE_RD,
IGU_CTRL_CMD_TYPE_WR,
MAX_IGU_CTRL_CMD};
/*
* Control register for the IGU command register
*/
struct igu_ctrl_reg
{
uint32_t ctrl_data;
#define IGU_CTRL_REG_ADDRESS (0xFFF<<0) /* BitField ctrl_data */
#define IGU_CTRL_REG_ADDRESS_SHIFT 0
#define IGU_CTRL_REG_FID (0x7F<<12) /* BitField ctrl_data */
#define IGU_CTRL_REG_FID_SHIFT 12
#define IGU_CTRL_REG_RESERVED (0x1<<19) /* BitField ctrl_data */
#define IGU_CTRL_REG_RESERVED_SHIFT 19
#define IGU_CTRL_REG_TYPE (0x1<<20) /* BitField ctrl_data (use enum igu_ctrl_cmd) */
#define IGU_CTRL_REG_TYPE_SHIFT 20
#define IGU_CTRL_REG_UNUSED (0x7FF<<21) /* BitField ctrl_data */
#define IGU_CTRL_REG_UNUSED_SHIFT 21
};
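/*
 * Illustrative sketch, not part of the original interface: composing
 * ctrl_data for an IGU command register access. addr is the 12-bit
 * register address and fid the 7-bit function id; the helper name is
 * hypothetical.
 */
static inline uint32_t
igu_ctrl_reg_compose(uint16_t addr, uint8_t fid, enum igu_ctrl_cmd type)
{
	return (((uint32_t)addr << IGU_CTRL_REG_ADDRESS_SHIFT) &
	    IGU_CTRL_REG_ADDRESS) |
	    (((uint32_t)fid << IGU_CTRL_REG_FID_SHIFT) & IGU_CTRL_REG_FID) |
	    (((uint32_t)type << IGU_CTRL_REG_TYPE_SHIFT) & IGU_CTRL_REG_TYPE);
}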
/*
 * IGU interrupt command
*/
enum igu_int_cmd
{
IGU_INT_ENABLE,
IGU_INT_DISABLE,
IGU_INT_NOP,
IGU_INT_NOP2,
MAX_IGU_INT_CMD};
/*
 * IGU segments
*/
enum igu_seg_access
{
IGU_SEG_ACCESS_NORM,
IGU_SEG_ACCESS_DEF,
IGU_SEG_ACCESS_ATTN,
MAX_IGU_SEG_ACCESS};
/*
* iscsi doorbell
*/
struct iscsi_tx_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t reserved;
uint8_t params;
#define ISCSI_TX_DOORBELL_NUM_WQES (0x3F<<0) /* BitField params number of buffer descriptors that were added in the doorbell */
#define ISCSI_TX_DOORBELL_NUM_WQES_SHIFT 0
#define ISCSI_TX_DOORBELL_RESERVED_TX_FIN_FLAG (0x1<<6) /* BitField params tx fin command flag */
#define ISCSI_TX_DOORBELL_RESERVED_TX_FIN_FLAG_SHIFT 6
#define ISCSI_TX_DOORBELL_SPARE (0x1<<7) /* BitField params doorbell queue spare flag */
#define ISCSI_TX_DOORBELL_SPARE_SHIFT 7
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define ISCSI_TX_DOORBELL_NUM_WQES (0x3F<<0) /* BitField params number of buffer descriptors that were added in the doorbell */
#define ISCSI_TX_DOORBELL_NUM_WQES_SHIFT 0
#define ISCSI_TX_DOORBELL_RESERVED_TX_FIN_FLAG (0x1<<6) /* BitField params tx fin command flag */
#define ISCSI_TX_DOORBELL_RESERVED_TX_FIN_FLAG_SHIFT 6
#define ISCSI_TX_DOORBELL_SPARE (0x1<<7) /* BitField params doorbell queue spare flag */
#define ISCSI_TX_DOORBELL_SPARE_SHIFT 7
uint16_t reserved;
#endif
};
/*
* Parser parsing flags field
*/
struct parsing_flags
{
uint16_t flags;
#define PARSING_FLAGS_ETHERNET_ADDRESS_TYPE (0x1<<0) /* BitField flagscontext flags 0=non-unicast, 1=unicast (use enum prs_flags_eth_addr_type) */
#define PARSING_FLAGS_ETHERNET_ADDRESS_TYPE_SHIFT 0
#define PARSING_FLAGS_INNER_VLAN_EXIST (0x1<<1) /* BitField flagscontext flags 0 or 1 */
#define PARSING_FLAGS_INNER_VLAN_EXIST_SHIFT 1
#define PARSING_FLAGS_OUTER_VLAN_EXIST (0x1<<2) /* BitField flagscontext flags 0 or 1 */
#define PARSING_FLAGS_OUTER_VLAN_EXIST_SHIFT 2
#define PARSING_FLAGS_OVER_ETHERNET_PROTOCOL (0x3<<3) /* BitField flagscontext flags 0=unknown, 1=IPv4, 2=IPv6, 3=LLC SNAP unknown. LLC SNAP here refers only to LLC/SNAP packets that do not have IPv4 or IPv6 above them. The IPv4 and IPv6 indications are set even if they are over LLC/SNAP and not directly over Ethernet (use enum prs_flags_over_eth) */
#define PARSING_FLAGS_OVER_ETHERNET_PROTOCOL_SHIFT 3
#define PARSING_FLAGS_IP_OPTIONS (0x1<<5) /* BitField flagscontext flags 0=no IP options / extension headers. 1=IP options / extension header exist */
#define PARSING_FLAGS_IP_OPTIONS_SHIFT 5
#define PARSING_FLAGS_FRAGMENTATION_STATUS (0x1<<6) /* BitField flagscontext flags 0=non-fragmented, 1=fragmented */
#define PARSING_FLAGS_FRAGMENTATION_STATUS_SHIFT 6
#define PARSING_FLAGS_OVER_IP_PROTOCOL (0x3<<7) /* BitField flagscontext flags 0=unknown, 1=TCP, 2=UDP (use enum prs_flags_over_ip) */
#define PARSING_FLAGS_OVER_IP_PROTOCOL_SHIFT 7
#define PARSING_FLAGS_PURE_ACK_INDICATION (0x1<<9) /* BitField flagscontext flags 0=packet with data, 1=pure-ACK (use enum prs_flags_ack_type) */
#define PARSING_FLAGS_PURE_ACK_INDICATION_SHIFT 9
#define PARSING_FLAGS_TCP_OPTIONS_EXIST (0x1<<10) /* BitField flagscontext flags 0=no TCP options. 1=TCP options */
#define PARSING_FLAGS_TCP_OPTIONS_EXIST_SHIFT 10
#define PARSING_FLAGS_TIME_STAMP_EXIST_FLAG (0x1<<11) /* BitField flagscontext flags According to the TCP header options parsing */
#define PARSING_FLAGS_TIME_STAMP_EXIST_FLAG_SHIFT 11
#define PARSING_FLAGS_CONNECTION_MATCH (0x1<<12) /* BitField flagscontext flags connection match in searcher indication */
#define PARSING_FLAGS_CONNECTION_MATCH_SHIFT 12
#define PARSING_FLAGS_LLC_SNAP (0x1<<13) /* BitField flagscontext flags LLC SNAP indication */
#define PARSING_FLAGS_LLC_SNAP_SHIFT 13
#define PARSING_FLAGS_RESERVED0 (0x3<<14) /* BitField flagscontext flags */
#define PARSING_FLAGS_RESERVED0_SHIFT 14
};
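/*
 * Illustrative sketch, not part of the original interface: every mask
 * macro in this file has a matching _SHIFT companion, so one
 * token-pasting accessor covers all of them. BITFIELD_GET is a
 * hypothetical name, e.g. BITFIELD_GET(flags, PARSING_FLAGS_OVER_IP_PROTOCOL).
 */
#define BITFIELD_GET(val, FLD) (((val) & (FLD)) >> FLD##_SHIFT)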
/*
* Parsing flags for TCP ACK type
*/
enum prs_flags_ack_type
{
PRS_FLAG_PUREACK_PIGGY,
PRS_FLAG_PUREACK_PURE,
MAX_PRS_FLAGS_ACK_TYPE};
/*
* Parsing flags for Ethernet address type
*/
enum prs_flags_eth_addr_type
{
PRS_FLAG_ETHTYPE_NON_UNICAST,
PRS_FLAG_ETHTYPE_UNICAST,
MAX_PRS_FLAGS_ETH_ADDR_TYPE};
/*
* Parsing flags for over-ethernet protocol
*/
enum prs_flags_over_eth
{
PRS_FLAG_OVERETH_UNKNOWN,
PRS_FLAG_OVERETH_IPV4,
PRS_FLAG_OVERETH_IPV6,
PRS_FLAG_OVERETH_LLCSNAP_UNKNOWN,
MAX_PRS_FLAGS_OVER_ETH};
/*
* Parsing flags for over-IP protocol
*/
enum prs_flags_over_ip
{
PRS_FLAG_OVERIP_UNKNOWN,
PRS_FLAG_OVERIP_TCP,
PRS_FLAG_OVERIP_UDP,
MAX_PRS_FLAGS_OVER_IP};
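/*
 * Illustrative sketch, not part of the original interface: classifying
 * a parsed frame with the enums above and the hypothetical
 * BITFIELD_GET accessor from the earlier sketch.
 */
static inline int
prs_flags_is_tcp_over_ipv4(uint16_t flags)
{
	return BITFIELD_GET(flags, PARSING_FLAGS_OVER_ETHERNET_PROTOCOL) ==
	    PRS_FLAG_OVERETH_IPV4 &&
	    BITFIELD_GET(flags, PARSING_FLAGS_OVER_IP_PROTOCOL) ==
	    PRS_FLAG_OVERIP_TCP;
}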
/*
* SDM operation gen command (generate aggregative interrupt)
*/
struct sdm_op_gen
{
uint32_t command;
#define SDM_OP_GEN_COMP_PARAM (0x1F<<0) /* BitField commandcomp_param and comp_type thread ID/aggr interrupt number/counter depending on the completion type */
#define SDM_OP_GEN_COMP_PARAM_SHIFT 0
#define SDM_OP_GEN_COMP_TYPE (0x7<<5) /* BitField commandcomp_param and comp_type Direct messages to CM / PCI switch are not supported in operation_gen completion */
#define SDM_OP_GEN_COMP_TYPE_SHIFT 5
#define SDM_OP_GEN_AGG_VECT_IDX (0xFF<<8) /* BitField commandcomp_param and comp_type bit index in aggregated interrupt vector */
#define SDM_OP_GEN_AGG_VECT_IDX_SHIFT 8
#define SDM_OP_GEN_AGG_VECT_IDX_VALID (0x1<<16) /* BitField commandcomp_param and comp_type */
#define SDM_OP_GEN_AGG_VECT_IDX_VALID_SHIFT 16
#define SDM_OP_GEN_RESERVED (0x7FFF<<17) /* BitField commandcomp_param and comp_type */
#define SDM_OP_GEN_RESERVED_SHIFT 17
};
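/*
 * Illustrative sketch, not part of the original interface: composing
 * an operation-gen command that fires bit vect_idx of an aggregated
 * interrupt vector. The helper name is hypothetical; comp_type values
 * are device specific.
 */
static inline uint32_t
sdm_op_gen_compose(uint8_t comp_param, uint8_t comp_type, uint8_t vect_idx)
{
	return (((uint32_t)comp_param << SDM_OP_GEN_COMP_PARAM_SHIFT) &
	    SDM_OP_GEN_COMP_PARAM) |
	    (((uint32_t)comp_type << SDM_OP_GEN_COMP_TYPE_SHIFT) &
	    SDM_OP_GEN_COMP_TYPE) |
	    (((uint32_t)vect_idx << SDM_OP_GEN_AGG_VECT_IDX_SHIFT) &
	    SDM_OP_GEN_AGG_VECT_IDX) |
	    SDM_OP_GEN_AGG_VECT_IDX_VALID;
}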
/*
* Timers connection context
*/
struct timers_block_context
{
uint32_t __client0 /* data of client 0 of the timers block */;
uint32_t __client1 /* data of client 1 of the timers block */;
uint32_t __client2 /* data of client 2 of the timers block */;
uint32_t flags;
#define __TIMERS_BLOCK_CONTEXT_NUM_OF_ACTIVE_TIMERS (0x3<<0) /* BitField flagscontext flags number of active timers running */
#define __TIMERS_BLOCK_CONTEXT_NUM_OF_ACTIVE_TIMERS_SHIFT 0
#define TIMERS_BLOCK_CONTEXT_CONN_VALID_FLG (0x1<<2) /* BitField flagscontext flags flag: is connection valid (should be set by driver to 1 in toe/iscsi connections) */
#define TIMERS_BLOCK_CONTEXT_CONN_VALID_FLG_SHIFT 2
#define __TIMERS_BLOCK_CONTEXT_RESERVED0 (0x1FFFFFFF<<3) /* BitField flagscontext flags */
#define __TIMERS_BLOCK_CONTEXT_RESERVED0_SHIFT 3
};
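/*
 * Illustrative sketch, not part of the original interface: per the
 * comment above, the driver sets the connection-valid flag for
 * toe/iscsi connections. Hypothetical helper; any endianness
 * conversion the real driver applies to the context is ignored here.
 */
static inline void
timers_block_mark_conn_valid(struct timers_block_context *t)
{
	t->flags |= TIMERS_BLOCK_CONTEXT_CONN_VALID_FLG;
}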
/*
* advertise window doorbell
*/
struct toe_adv_wnd_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t wnd_sz_lsb /* Least significant bits of advertise window update value */;
uint8_t wnd_sz_msb /* Most significant bits of advertise window update value */;
struct doorbell_hdr_t hdr /* See description of the appropriate type */;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr /* See description of the appropriate type */;
uint8_t wnd_sz_msb /* Most significant bits of advertise window update value */;
uint16_t wnd_sz_lsb /* Least significant bits of advertise window update value */;
#endif
};
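/*
 * Illustrative sketch, not part of the original interface: splitting a
 * 24-bit window update into the lsb/msb fields and marking the header
 * as an advertise-wnd doorbell. The helper name is hypothetical.
 */
static inline void
toe_adv_wnd_doorbell_set(struct toe_adv_wnd_doorbell *db, uint32_t wnd)
{
	db->wnd_sz_lsb = (uint16_t)(wnd & 0xffff);
	db->wnd_sz_msb = (uint8_t)((wnd >> 16) & 0xff);
	db->hdr.data = DOORBELL_HDR_T_DB_TYPE; /* 1 = advertise wnd doorbell */
}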
/*
* toe rx BDs update doorbell
*/
struct toe_rx_bds_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t nbds /* BDs update value */;
uint8_t params;
#define TOE_RX_BDS_DOORBELL_RESERVED (0x1F<<0) /* BitField params reserved */
#define TOE_RX_BDS_DOORBELL_RESERVED_SHIFT 0
#define TOE_RX_BDS_DOORBELL_OPCODE (0x7<<5) /* BitField params BDs update doorbell opcode (2) */
#define TOE_RX_BDS_DOORBELL_OPCODE_SHIFT 5
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define TOE_RX_BDS_DOORBELL_RESERVED (0x1F<<0) /* BitField params reserved */
#define TOE_RX_BDS_DOORBELL_RESERVED_SHIFT 0
#define TOE_RX_BDS_DOORBELL_OPCODE (0x7<<5) /* BitField params BDs update doorbell opcode (2) */
#define TOE_RX_BDS_DOORBELL_OPCODE_SHIFT 5
uint16_t nbds /* BDs update value */;
#endif
};
/*
* toe rx bytes and BDs update doorbell
*/
struct toe_rx_bytes_and_bds_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t nbytes /* nbytes */;
uint8_t params;
#define TOE_RX_BYTES_AND_BDS_DOORBELL_NBDS (0x1F<<0) /* BitField params producer delta from the last doorbell */
#define TOE_RX_BYTES_AND_BDS_DOORBELL_NBDS_SHIFT 0
#define TOE_RX_BYTES_AND_BDS_DOORBELL_OPCODE (0x7<<5) /* BitField params rx bytes and BDs update doorbell opcode (1) */
#define TOE_RX_BYTES_AND_BDS_DOORBELL_OPCODE_SHIFT 5
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define TOE_RX_BYTES_AND_BDS_DOORBELL_NBDS (0x1F<<0) /* BitField params producer delta from the last doorbell */
#define TOE_RX_BYTES_AND_BDS_DOORBELL_NBDS_SHIFT 0
#define TOE_RX_BYTES_AND_BDS_DOORBELL_OPCODE (0x7<<5) /* BitField params rx bytes and BDs update doorbell opcode (1) */
#define TOE_RX_BYTES_AND_BDS_DOORBELL_OPCODE_SHIFT 5
uint16_t nbytes /* nbytes */;
#endif
};
/*
* toe rx bytes doorbell
*/
struct toe_rx_byte_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t nbytes_lsb /* bits [0:15] of nbytes */;
uint8_t params;
#define TOE_RX_BYTE_DOORBELL_NBYTES_MSB (0x1F<<0) /* BitField params bits [20:16] of nbytes */
#define TOE_RX_BYTE_DOORBELL_NBYTES_MSB_SHIFT 0
#define TOE_RX_BYTE_DOORBELL_OPCODE (0x7<<5) /* BitField params rx bytes doorbell opcode (0) */
#define TOE_RX_BYTE_DOORBELL_OPCODE_SHIFT 5
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define TOE_RX_BYTE_DOORBELL_NBYTES_MSB (0x1F<<0) /* BitField params bits [20:16] of nbytes */
#define TOE_RX_BYTE_DOORBELL_NBYTES_MSB_SHIFT 0
#define TOE_RX_BYTE_DOORBELL_OPCODE (0x7<<5) /* BitField params rx bytes doorbell opcode (0) */
#define TOE_RX_BYTE_DOORBELL_OPCODE_SHIFT 5
uint16_t nbytes_lsb /* bits [0:15] of nbytes */;
#endif
};
/*
* toe rx consume GRQ doorbell
*/
struct toe_rx_grq_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t nbytes_lsb /* bits [0:15] of nbytes */;
uint8_t params;
#define TOE_RX_GRQ_DOORBELL_NBYTES_MSB (0x1F<<0) /* BitField params bits [20:16] of nbytes */
#define TOE_RX_GRQ_DOORBELL_NBYTES_MSB_SHIFT 0
#define TOE_RX_GRQ_DOORBELL_OPCODE (0x7<<5) /* BitField params rx GRQ doorbell opcode (4) */
#define TOE_RX_GRQ_DOORBELL_OPCODE_SHIFT 5
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define TOE_RX_GRQ_DOORBELL_NBYTES_MSB (0x1F<<0) /* BitField params bits [20:16] of nbytes */
#define TOE_RX_GRQ_DOORBELL_NBYTES_MSB_SHIFT 0
#define TOE_RX_GRQ_DOORBELL_OPCODE (0x7<<5) /* BitField params rx GRQ doorbell opcode (4) */
#define TOE_RX_GRQ_DOORBELL_OPCODE_SHIFT 5
uint16_t nbytes_lsb /* bits [0:15] of nbytes */;
#endif
};
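/*
 * Illustrative sketch, not part of the original interface: a 21-bit
 * GRQ byte count is split across nbytes_lsb (bits [0:15]) and the
 * NBYTES_MSB field of params (bits [20:16]), with the fixed opcode 4
 * noted above. The helper name is hypothetical.
 */
static inline void
toe_rx_grq_doorbell_set(struct toe_rx_grq_doorbell *db, uint32_t nbytes)
{
	db->nbytes_lsb = (uint16_t)(nbytes & 0xffff);
	db->params = (uint8_t)
	    ((((nbytes >> 16) << TOE_RX_GRQ_DOORBELL_NBYTES_MSB_SHIFT) &
	    TOE_RX_GRQ_DOORBELL_NBYTES_MSB) |
	    ((4 << TOE_RX_GRQ_DOORBELL_OPCODE_SHIFT) &
	    TOE_RX_GRQ_DOORBELL_OPCODE));
}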
/*
 * toe tx doorbell
*/
struct toe_tx_doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t nbytes /* number of data bytes that were added in the doorbell */;
uint8_t params;
#define TOE_TX_DOORBELL_NUM_BDS (0x3F<<0) /* BitField params number of buffer descriptors that were added in the doorbell */
#define TOE_TX_DOORBELL_NUM_BDS_SHIFT 0
#define TOE_TX_DOORBELL_TX_FIN_FLAG (0x1<<6) /* BitField params tx fin command flag */
#define TOE_TX_DOORBELL_TX_FIN_FLAG_SHIFT 6
#define TOE_TX_DOORBELL_FLUSH (0x1<<7) /* BitField params doorbell queue spare flag */
#define TOE_TX_DOORBELL_FLUSH_SHIFT 7
struct doorbell_hdr_t hdr;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t hdr;
uint8_t params;
#define TOE_TX_DOORBELL_NUM_BDS (0x3F<<0) /* BitField params number of buffer descriptors that were added in the doorbell */
#define TOE_TX_DOORBELL_NUM_BDS_SHIFT 0
#define TOE_TX_DOORBELL_TX_FIN_FLAG (0x1<<6) /* BitField params tx fin command flag */
#define TOE_TX_DOORBELL_TX_FIN_FLAG_SHIFT 6
#define TOE_TX_DOORBELL_FLUSH (0x1<<7) /* BitField params doorbell queue spare flag */
#define TOE_TX_DOORBELL_FLUSH_SHIFT 7
uint16_t nbytes /* number of data bytes that were added in the doorbell */;
#endif
};
/*
* The eth aggregative context of Tstorm
*/
struct tstorm_eth_ag_context
{
uint32_t __reserved0[14];
};
/*
* The fcoe extra aggregative context section of Tstorm
*/
struct tstorm_fcoe_extra_ag_context_section
{
uint32_t __agg_val1 /* aggregated value 1 */;
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint16_t __agg_val2 /* aggregated value 2 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val2 /* aggregated value 2 */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val5;
uint8_t __agg_val6;
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
uint8_t __agg_val6;
uint16_t __agg_val5;
#endif
uint32_t __lcq_prod /* Next sequence number to transmit, given by Tx */;
uint32_t rtt_seq /* Rtt recording sequence number */;
uint32_t rtt_time /* Rtt recording real time clock */;
uint32_t __reserved66;
uint32_t wnd_right_edge /* The right edge of the receive window. Updated by the XSTORM when a segment with ACK is transmitted */;
uint32_t tcp_agg_vars1;
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_FIN_SENT_FLAG (0x1<<0) /* BitField tcp_agg_vars1Various aggregative variables Sticky bit that is set when FIN is sent and remains set */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_FIN_SENT_FLAG_SHIFT 0
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG (0x1<<1) /* BitField tcp_agg_vars1Various aggregative variables The Tx indicates that it sent a FIN packet */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG_SHIFT 1
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_WND_UPD_CF (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables Counter flag to indicate a window update */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_WND_UPD_CF_SHIFT 2
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TIMEOUT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables Indicates that a timeout expired */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TIMEOUT_CF_SHIFT 4
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_WND_UPD_CF_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the WndUpd counter flag */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_WND_UPD_CF_EN_SHIFT 6
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TIMEOUT_CF_EN (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the Timeout counter flag */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TIMEOUT_CF_EN_SHIFT 7
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN (0x1<<8) /* BitField tcp_agg_vars1Various aggregative variables If 1 then the Rxmit sequence decision rule is enabled */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN_SHIFT 8
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_LCQ_SND_EN (0x1<<9) /* BitField tcp_agg_vars1Various aggregative variables If set then the SendNext decision rule is enabled */
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_LCQ_SND_EN_SHIFT 9
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<10) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 10
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX2_FLAG (0x1<<11) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX2_FLAG_SHIFT 11
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_CF_EN (0x1<<12) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_CF_EN_SHIFT 12
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX2_CF_EN (0x1<<13) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX2_CF_EN_SHIFT 13
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_CF (0x3<<14) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_CF_SHIFT 14
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX2_CF (0x3<<16) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX2_CF_SHIFT 16
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TX_BLOCKED (0x1<<18) /* BitField tcp_agg_vars1Various aggregative variables Indicates that Tx has more to send, but does not have enough window to send it */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TX_BLOCKED_SHIFT 18
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX10_CF_EN (0x1<<19) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX10_CF_EN_SHIFT 19
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX11_CF_EN (0x1<<20) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX11_CF_EN_SHIFT 20
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX12_CF_EN (0x1<<21) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX12_CF_EN_SHIFT 21
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED1 (0x3<<22) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED1_SHIFT 22
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ (0xF<<24) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command sent */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ_SHIFT 24
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ (0xF<<28) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command performed by the XSTORM */
#define TSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ_SHIFT 28
uint32_t snd_max /* Maximum sequence number that was ever transmitted */;
uint32_t __lcq_cons /* Last ACK sequence number sent by the Tx */;
uint32_t __reserved2;
};
/*
* The fcoe aggregative context of Tstorm
*/
struct tstorm_fcoe_ag_context
{
#if defined(__BIG_ENDIAN)
uint16_t ulp_credit;
uint8_t agg_vars1;
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF (0x3<<4) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF_SHIFT 4
#define __TSTORM_FCOE_AG_CONTEXT_AUX3_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX3_FLAG_SHIFT 6
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_FLAG (0x1<<7) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_FLAG_SHIFT 7
uint8_t state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t state /* The state of the connection */;
uint8_t agg_vars1;
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define TSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF (0x3<<4) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF_SHIFT 4
#define __TSTORM_FCOE_AG_CONTEXT_AUX3_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX3_FLAG_SHIFT 6
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_FLAG (0x1<<7) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_FLAG_SHIFT 7
uint16_t ulp_credit;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val4;
uint16_t agg_vars2;
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_FLAG (0x1<<0) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_FLAG_SHIFT 0
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_FLAG (0x1<<1) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_FLAG_SHIFT 1
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_CF (0x3<<2) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_CF_SHIFT 2
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_CF (0x3<<4) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_CF_SHIFT 4
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_CF (0x3<<6) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_CF_SHIFT 6
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_CF (0x3<<8) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_CF_SHIFT 8
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_FLAG (0x1<<10) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_FLAG_SHIFT 10
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF_EN (0x1<<11) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF_EN_SHIFT 11
#define TSTORM_FCOE_AG_CONTEXT_AUX4_CF_EN (0x1<<12) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX4_CF_EN_SHIFT 12
#define TSTORM_FCOE_AG_CONTEXT_AUX5_CF_EN (0x1<<13) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX5_CF_EN_SHIFT 13
#define TSTORM_FCOE_AG_CONTEXT_AUX6_CF_EN (0x1<<14) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX6_CF_EN_SHIFT 14
#define TSTORM_FCOE_AG_CONTEXT_AUX7_CF_EN (0x1<<15) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX7_CF_EN_SHIFT 15
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_vars2;
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_FLAG (0x1<<0) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_FLAG_SHIFT 0
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_FLAG (0x1<<1) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_FLAG_SHIFT 1
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_CF (0x3<<2) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX4_CF_SHIFT 2
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_CF (0x3<<4) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX5_CF_SHIFT 4
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_CF (0x3<<6) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX6_CF_SHIFT 6
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_CF (0x3<<8) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_CF_SHIFT 8
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_FLAG (0x1<<10) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_AUX7_FLAG_SHIFT 10
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF_EN (0x1<<11) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_FCOE_AG_CONTEXT_QUEUE0_FLUSH_CF_EN_SHIFT 11
#define TSTORM_FCOE_AG_CONTEXT_AUX4_CF_EN (0x1<<12) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX4_CF_EN_SHIFT 12
#define TSTORM_FCOE_AG_CONTEXT_AUX5_CF_EN (0x1<<13) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX5_CF_EN_SHIFT 13
#define TSTORM_FCOE_AG_CONTEXT_AUX6_CF_EN (0x1<<14) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX6_CF_EN_SHIFT 14
#define TSTORM_FCOE_AG_CONTEXT_AUX7_CF_EN (0x1<<15) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_FCOE_AG_CONTEXT_AUX7_CF_EN_SHIFT 15
uint16_t __agg_val4;
#endif
struct tstorm_fcoe_extra_ag_context_section __extra_section /* Extra context section */;
};
/*
* The iscsi aggregative context section of Tstorm
*/
struct tstorm_iscsi_tcp_ag_context_section
{
uint32_t __agg_val1 /* aggregated value 1 */;
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint16_t __agg_val2 /* aggregated value 2 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val2 /* aggregated value 2 */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val5;
uint8_t __agg_val6;
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
uint8_t __agg_val6;
uint16_t __agg_val5;
#endif
uint32_t snd_nxt /* Next sequence number to transmit, given by Tx */;
uint32_t rtt_seq /* Rtt recording sequence number */;
uint32_t rtt_time /* Rtt recording real time clock */;
uint32_t wnd_right_edge_local;
uint32_t wnd_right_edge /* The right edge of the receive window. Updated by the XSTORM when a segment with ACK is transmitted */;
uint32_t tcp_agg_vars1;
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_FIN_SENT_FLAG (0x1<<0) /* BitField tcp_agg_vars1Various aggregative variables Sticky bit that is set when FIN is sent and remains set */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_FIN_SENT_FLAG_SHIFT 0
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG (0x1<<1) /* BitField tcp_agg_vars1Various aggregative variables The Tx indicates that it sent a FIN packet */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG_SHIFT 1
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_WND_UPD_CF (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables Counter flag to indicate a window update */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_WND_UPD_CF_SHIFT 2
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables Indicates that a timeout expired */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_SHIFT 4
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_WND_UPD_CF_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the WndUpd counter flag */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_WND_UPD_CF_EN_SHIFT 6
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_EN (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the Timeout counter flag */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_EN_SHIFT 7
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN (0x1<<8) /* BitField tcp_agg_vars1Various aggregative variables If 1 then the Rxmit sequence decision rule is enabled */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN_SHIFT 8
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_SND_NXT_EN (0x1<<9) /* BitField tcp_agg_vars1Various aggregative variables If set then the SendNext decision rule is enabled */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_SND_NXT_EN_SHIFT 9
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<10) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 10
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX2_FLAG (0x1<<11) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX2_FLAG_SHIFT 11
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX1_CF_EN (0x1<<12) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX1_CF_EN_SHIFT 12
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX2_CF_EN (0x1<<13) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX2_CF_EN_SHIFT 13
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX1_CF (0x3<<14) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX1_CF_SHIFT 14
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX2_CF (0x3<<16) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX2_CF_SHIFT 16
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_TX_BLOCKED (0x1<<18) /* BitField tcp_agg_vars1Various aggregative variables Indicates that Tx has more to send, but does not have enough window to send it */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_TX_BLOCKED_SHIFT 18
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN (0x1<<19) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN_SHIFT 19
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX11_CF_EN (0x1<<20) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX11_CF_EN_SHIFT 20
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX12_CF_EN (0x1<<21) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_AUX12_CF_EN_SHIFT 21
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RESERVED1 (0x3<<22) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RESERVED1_SHIFT 22
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ (0xF<<24) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command sent */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ_SHIFT 24
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ (0xF<<28) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command performed by the XSTORM */
#define TSTORM_ISCSI_TCP_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ_SHIFT 28
uint32_t snd_max /* Maximum sequence number that was ever transmitted */;
uint32_t snd_una /* Last ACK sequence number sent by the Tx */;
uint32_t __reserved2;
};
/*
* The iscsi aggregative context of Tstorm
*/
struct tstorm_iscsi_ag_context
{
#if defined(__BIG_ENDIAN)
uint16_t ulp_credit;
uint8_t agg_vars1;
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF (0x3<<4) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_SHIFT 4
#define __TSTORM_ISCSI_AG_CONTEXT_AUX3_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX3_FLAG_SHIFT 6
#define __TSTORM_ISCSI_AG_CONTEXT_ACK_ON_FIN_SENT_FLAG (0x1<<7) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_ACK_ON_FIN_SENT_FLAG_SHIFT 7
uint8_t state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t state /* The state of the connection */;
uint8_t agg_vars1;
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define TSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF (0x3<<4) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_SHIFT 4
#define __TSTORM_ISCSI_AG_CONTEXT_AUX3_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX3_FLAG_SHIFT 6
#define __TSTORM_ISCSI_AG_CONTEXT_ACK_ON_FIN_SENT_FLAG (0x1<<7) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_ACK_ON_FIN_SENT_FLAG_SHIFT 7
uint16_t ulp_credit;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val4;
uint16_t agg_vars2;
#define __TSTORM_ISCSI_AG_CONTEXT_MSL_TIMER_SET_FLAG (0x1<<0) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_MSL_TIMER_SET_FLAG_SHIFT 0
#define __TSTORM_ISCSI_AG_CONTEXT_FIN_SENT_FIRST_FLAG (0x1<<1) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_FIN_SENT_FIRST_FLAG_SHIFT 1
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF (0x3<<2) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF_SHIFT 2
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF (0x3<<4) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF_SHIFT 4
#define __TSTORM_ISCSI_AG_CONTEXT_AUX6_CF (0x3<<6) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX6_CF_SHIFT 6
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_CF (0x3<<8) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_CF_SHIFT 8
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_FLAG (0x1<<10) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_FLAG_SHIFT 10
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN (0x1<<11) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN_SHIFT 11
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF_EN (0x1<<12) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF_EN_SHIFT 12
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF_EN (0x1<<13) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF_EN_SHIFT 13
#define TSTORM_ISCSI_AG_CONTEXT_AUX6_CF_EN (0x1<<14) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_ISCSI_AG_CONTEXT_AUX6_CF_EN_SHIFT 14
#define TSTORM_ISCSI_AG_CONTEXT_AUX7_CF_EN (0x1<<15) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_ISCSI_AG_CONTEXT_AUX7_CF_EN_SHIFT 15
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_vars2;
#define __TSTORM_ISCSI_AG_CONTEXT_MSL_TIMER_SET_FLAG (0x1<<0) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_MSL_TIMER_SET_FLAG_SHIFT 0
#define __TSTORM_ISCSI_AG_CONTEXT_FIN_SENT_FIRST_FLAG (0x1<<1) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_FIN_SENT_FIRST_FLAG_SHIFT 1
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF (0x3<<2) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF_SHIFT 2
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF (0x3<<4) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF_SHIFT 4
#define __TSTORM_ISCSI_AG_CONTEXT_AUX6_CF (0x3<<6) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX6_CF_SHIFT 6
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_CF (0x3<<8) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_CF_SHIFT 8
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_FLAG (0x1<<10) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_AUX7_FLAG_SHIFT 10
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN (0x1<<11) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN_SHIFT 11
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF_EN (0x1<<12) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_RST_SENT_CF_EN_SHIFT 12
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF_EN (0x1<<13) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_ISCSI_AG_CONTEXT_WAKEUP_CALL_CF_EN_SHIFT 13
#define TSTORM_ISCSI_AG_CONTEXT_AUX6_CF_EN (0x1<<14) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_ISCSI_AG_CONTEXT_AUX6_CF_EN_SHIFT 14
#define TSTORM_ISCSI_AG_CONTEXT_AUX7_CF_EN (0x1<<15) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_ISCSI_AG_CONTEXT_AUX7_CF_EN_SHIFT 15
uint16_t __agg_val4;
#endif
struct tstorm_iscsi_tcp_ag_context_section tcp /* TCP context section, shared in TOE and iSCSI */;
};
/*
* The tcp aggregative context section of Tstorm
*/
struct tstorm_tcp_tcp_ag_context_section
{
uint32_t __agg_val1 /* aggregated value 1 */;
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint16_t __agg_val2 /* aggregated value 2 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val2 /* aggregated value 2 */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val5;
uint8_t __agg_val6;
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
uint8_t __agg_val6;
uint16_t __agg_val5;
#endif
uint32_t snd_nxt /* Next sequence number to transmit, given by Tx */;
uint32_t rtt_seq /* Rtt recording sequence number */;
uint32_t rtt_time /* Rtt recording real time clock */;
uint32_t __reserved66;
uint32_t wnd_right_edge /* The right edge of the receive window. Updated by the XSTORM when a segment with ACK is transmitted */;
uint32_t tcp_agg_vars1;
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_FIN_SENT_FLAG (0x1<<0) /* BitField tcp_agg_vars1Various aggregative variables Sticky bit that is set when FIN is sent and remains set */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_FIN_SENT_FLAG_SHIFT 0
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG (0x1<<1) /* BitField tcp_agg_vars1Various aggregative variables The Tx indicates that it sent a FIN packet */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG_SHIFT 1
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_WND_UPD_CF (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables Counter flag to indicate a window update */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_WND_UPD_CF_SHIFT 2
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables Indicates that a timeout expired */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_SHIFT 4
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_WND_UPD_CF_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the WndUpd counter flag */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_WND_UPD_CF_EN_SHIFT 6
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_EN (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the Timeout counter flag */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_EN_SHIFT 7
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN (0x1<<8) /* BitField tcp_agg_vars1Various aggregative variables If 1 then the Rxmit sequence decision rule is enabled */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN_SHIFT 8
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_SND_NXT_EN (0x1<<9) /* BitField tcp_agg_vars1Various aggregative variables If set then the SendNext decision rule is enabled */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_SND_NXT_EN_SHIFT 9
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<10) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 10
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX2_FLAG (0x1<<11) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX2_FLAG_SHIFT 11
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_CF_EN (0x1<<12) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_CF_EN_SHIFT 12
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX2_CF_EN (0x1<<13) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX2_CF_EN_SHIFT 13
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_CF (0x3<<14) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_CF_SHIFT 14
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX2_CF (0x3<<16) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX2_CF_SHIFT 16
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_BLOCKED (0x1<<18) /* BitField tcp_agg_vars1Various aggregative variables Indicates that Tx has more to send, but does not have enough window to send it */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_BLOCKED_SHIFT 18
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN (0x1<<19) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN_SHIFT 19
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX11_CF_EN (0x1<<20) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX11_CF_EN_SHIFT 20
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX12_CF_EN (0x1<<21) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX12_CF_EN_SHIFT 21
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RESERVED1 (0x3<<22) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RESERVED1_SHIFT 22
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ (0xF<<24) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command sent */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ_SHIFT 24
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ (0xF<<28) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command performed by the XSTORM */
#define TSTORM_TCP_TCP_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ_SHIFT 28
uint32_t snd_max /* Maximum sequence number that was ever transmitted */;
uint32_t snd_una /* Last ACK sequence number sent by the Tx */;
uint32_t __reserved2;
};
/*
* The toe aggregative context section of Tstorm
*/
struct tstorm_toe_tcp_ag_context_section
{
uint32_t __agg_val1 /* aggregated value 1 */;
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint16_t __agg_val2 /* aggregated value 2 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val2 /* aggregated value 2 */;
uint8_t __agg_val3 /* aggregated value 3 */;
uint8_t __tcp_agg_vars2 /* Various aggregative variables */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val5;
uint8_t __agg_val6;
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __tcp_agg_vars3 /* Various aggregative variables */;
uint8_t __agg_val6;
uint16_t __agg_val5;
#endif
uint32_t snd_nxt /* Next sequence number to transmit, given by Tx */;
uint32_t rtt_seq /* Rtt recording sequence number */;
uint32_t rtt_time /* Rtt recording real time clock */;
uint32_t __reserved66;
uint32_t wnd_right_edge /* The right edge of the receive window. Updated by the XSTORM when a segment with ACK is transmitted */;
uint32_t tcp_agg_vars1;
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_FIN_SENT_FLAG (0x1<<0) /* BitField tcp_agg_vars1Various aggregative variables Sticky bit that is set when FIN is sent and remains set */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_FIN_SENT_FLAG_SHIFT 0
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG (0x1<<1) /* BitField tcp_agg_vars1Various aggregative variables The Tx indicates that it sent a FIN packet */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_LAST_PACKET_FIN_FLAG_SHIFT 1
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED52 (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables Counter flag to indicate a window update */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED52_SHIFT 2
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables Indicates that a timeout expired */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_SHIFT 4
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED_WND_UPD_CF_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the WndUpd counter flag */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED_WND_UPD_CF_EN_SHIFT 6
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_EN (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Enable the decision rule that considers the Timeout counter flag */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_TIMEOUT_CF_EN_SHIFT 7
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN (0x1<<8) /* BitField tcp_agg_vars1Various aggregative variables If 1 then the Rxmit sequence decision rule is enabled */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RETRANSMIT_SEQ_EN_SHIFT 8
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_SND_NXT_EN (0x1<<9) /* BitField tcp_agg_vars1Various aggregative variables If set then the SendNext decision rule is enabled */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_SND_NXT_EN_SHIFT 9
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_NEWRTTSAMPLE (0x1<<10) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_NEWRTTSAMPLE_SHIFT 10
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED55 (0x1<<11) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED55_SHIFT 11
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED_AUX1_CF_EN (0x1<<12) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED_AUX1_CF_EN_SHIFT 12
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED_AUX2_CF_EN (0x1<<13) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED_AUX2_CF_EN_SHIFT 13
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED56 (0x3<<14) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED56_SHIFT 14
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED57 (0x3<<16) /* BitField tcp_agg_vars1Various aggregative variables */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED57_SHIFT 16
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_BLOCKED (0x1<<18) /* BitField tcp_agg_vars1Various aggregative variables Indicates that Tx has more to send, but does not have enough window to send it */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_BLOCKED_SHIFT 18
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN (0x1<<19) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN_SHIFT 19
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX11_CF_EN (0x1<<20) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX11_CF_EN_SHIFT 20
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX12_CF_EN (0x1<<21) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX12_CF_EN_SHIFT 21
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED1 (0x3<<22) /* BitField tcp_agg_vars1Various aggregative variables */
#define __TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED1_SHIFT 22
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ (0xF<<24) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command sent */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RETRANSMIT_PEND_SEQ_SHIFT 24
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ (0xF<<28) /* BitField tcp_agg_vars1Various aggregative variables The sequence of the last fast retransmit or Goto SS command performed by the XSTORM */
#define TSTORM_TOE_TCP_AG_CONTEXT_SECTION_RETRANSMIT_DONE_SEQ_SHIFT 28
uint32_t snd_max /* Maximum sequence number that was ever transmitted */;
uint32_t snd_una /* Last ACK sequence number sent by the Tx */;
uint32_t __reserved2;
};
/*
* The toe aggregative context of Tstorm
*/
struct tstorm_toe_ag_context
{
#if defined(__BIG_ENDIAN)
uint16_t reserved54;
uint8_t agg_vars1;
#define TSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define TSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define TSTORM_TOE_AG_CONTEXT_RESERVED51 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define TSTORM_TOE_AG_CONTEXT_RESERVED51_SHIFT 1
#define TSTORM_TOE_AG_CONTEXT_RESERVED52 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define TSTORM_TOE_AG_CONTEXT_RESERVED52_SHIFT 2
#define TSTORM_TOE_AG_CONTEXT_RESERVED53 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define TSTORM_TOE_AG_CONTEXT_RESERVED53_SHIFT 3
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF (0x3<<4) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_SHIFT 4
#define __TSTORM_TOE_AG_CONTEXT_AUX3_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX3_FLAG_SHIFT 6
#define __TSTORM_TOE_AG_CONTEXT_AUX4_FLAG (0x1<<7) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX4_FLAG_SHIFT 7
uint8_t __state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __state /* The state of the connection */;
uint8_t agg_vars1;
#define TSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define TSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define TSTORM_TOE_AG_CONTEXT_RESERVED51 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define TSTORM_TOE_AG_CONTEXT_RESERVED51_SHIFT 1
#define TSTORM_TOE_AG_CONTEXT_RESERVED52 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define TSTORM_TOE_AG_CONTEXT_RESERVED52_SHIFT 2
#define TSTORM_TOE_AG_CONTEXT_RESERVED53 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define TSTORM_TOE_AG_CONTEXT_RESERVED53_SHIFT 3
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF (0x3<<4) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_SHIFT 4
#define __TSTORM_TOE_AG_CONTEXT_AUX3_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX3_FLAG_SHIFT 6
#define __TSTORM_TOE_AG_CONTEXT_AUX4_FLAG (0x1<<7) /* BitField agg_vars1Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX4_FLAG_SHIFT 7
uint16_t reserved54;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val4;
uint16_t agg_vars2;
#define __TSTORM_TOE_AG_CONTEXT_AUX5_FLAG (0x1<<0) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX5_FLAG_SHIFT 0
#define __TSTORM_TOE_AG_CONTEXT_AUX6_FLAG (0x1<<1) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX6_FLAG_SHIFT 1
#define __TSTORM_TOE_AG_CONTEXT_AUX4_CF (0x3<<2) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX4_CF_SHIFT 2
#define __TSTORM_TOE_AG_CONTEXT_AUX5_CF (0x3<<4) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX5_CF_SHIFT 4
#define __TSTORM_TOE_AG_CONTEXT_AUX6_CF (0x3<<6) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX6_CF_SHIFT 6
#define __TSTORM_TOE_AG_CONTEXT_AUX7_CF (0x3<<8) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX7_CF_SHIFT 8
#define __TSTORM_TOE_AG_CONTEXT_AUX7_FLAG (0x1<<10) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX7_FLAG_SHIFT 10
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN (0x1<<11) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN_SHIFT 11
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX4_CF_EN (0x1<<12) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX4_CF_EN_SHIFT 12
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX5_CF_EN (0x1<<13) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX5_CF_EN_SHIFT 13
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX6_CF_EN (0x1<<14) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX6_CF_EN_SHIFT 14
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX7_CF_EN (0x1<<15) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX7_CF_EN_SHIFT 15
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_vars2;
#define __TSTORM_TOE_AG_CONTEXT_AUX5_FLAG (0x1<<0) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX5_FLAG_SHIFT 0
#define __TSTORM_TOE_AG_CONTEXT_AUX6_FLAG (0x1<<1) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX6_FLAG_SHIFT 1
#define __TSTORM_TOE_AG_CONTEXT_AUX4_CF (0x3<<2) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX4_CF_SHIFT 2
#define __TSTORM_TOE_AG_CONTEXT_AUX5_CF (0x3<<4) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX5_CF_SHIFT 4
#define __TSTORM_TOE_AG_CONTEXT_AUX6_CF (0x3<<6) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX6_CF_SHIFT 6
#define __TSTORM_TOE_AG_CONTEXT_AUX7_CF (0x3<<8) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX7_CF_SHIFT 8
#define __TSTORM_TOE_AG_CONTEXT_AUX7_FLAG (0x1<<10) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_AUX7_FLAG_SHIFT 10
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN (0x1<<11) /* BitField agg_vars2Various aggregative variables */
#define __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN_SHIFT 11
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX4_CF_EN (0x1<<12) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX4_CF_EN_SHIFT 12
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX5_CF_EN (0x1<<13) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX5_CF_EN_SHIFT 13
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX6_CF_EN (0x1<<14) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX6_CF_EN_SHIFT 14
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX7_CF_EN (0x1<<15) /* BitField agg_vars2Various aggregative variables */
#define TSTORM_TOE_AG_CONTEXT_RESERVED_AUX7_CF_EN_SHIFT 15
uint16_t __agg_val4;
#endif
struct tstorm_toe_tcp_ag_context_section tcp /* TCP context section, shared in TOE and iSCSI */;
};
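/*
 * Illustrative sketch: writing a multi-bit field (here the 2-bit
 * QUEUES_FLUSH_Q0_CF counter flag at bits 5:4 of agg_vars1) is a
 * read-modify-write using the same MASK/SHIFT pair; nothing beyond the
 * macros defined above is assumed.
 */
static inline void
tstorm_toe_ag_set_flush_q0_cf(struct tstorm_toe_ag_context *c, uint8_t val)
{
	/* clear the old 2-bit value, then merge in the new one */
	c->agg_vars1 &= ~__TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF;
	c->agg_vars1 |= (uint8_t)((val << __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_SHIFT) &
	    __TSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q0_CF);
}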
/*
* The eth aggregative context of Ustorm
*/
struct ustorm_eth_ag_context
{
uint32_t __reserved0;
#if defined(__BIG_ENDIAN)
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
uint8_t __reserved2;
uint16_t __reserved1;
#elif defined(__LITTLE_ENDIAN)
uint16_t __reserved1;
uint8_t __reserved2;
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
#endif
uint32_t __reserved3[6];
};
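/*
 * Illustrative sketch: the __BIG_ENDIAN/__LITTLE_ENDIAN branches above
 * reorder members so the byte layout seen by the chip is identical on
 * either host endianness; the overall size is therefore fixed. A
 * compile-time check (C11 _Static_assert assumed available) would be:
 */
_Static_assert(sizeof(struct ustorm_eth_ag_context) == 32,
    "ustorm_eth_ag_context layout must stay 32 bytes to match the firmware");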
/*
* The fcoe aggregative context of Ustorm
*/
struct ustorm_fcoe_ag_context
{
#if defined(__BIG_ENDIAN)
uint8_t __aux_counter_flags /* auxiliary counter flags*/;
uint8_t agg_vars2;
#define USTORM_FCOE_AG_CONTEXT_TX_CF (0x3<<0) /* BitField agg_vars2various aggregation variables Set when a message was received from the Tx STORM. For future use. */
#define USTORM_FCOE_AG_CONTEXT_TX_CF_SHIFT 0
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF (0x3<<2) /* BitField agg_vars2various aggregation variables Set when a message was received from the Timer. */
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF_SHIFT 2
#define USTORM_FCOE_AG_CONTEXT_AGG_MISC4_RULE (0x7<<4) /* BitField agg_vars2various aggregation variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define USTORM_FCOE_AG_CONTEXT_AGG_MISC4_RULE_SHIFT 4
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL2_MASK (0x1<<7) /* BitField agg_vars2various aggregation variables Used to mask the decision rule of AggVal2. Used in iSCSI. Should be 0 in all other protocols */
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL2_MASK_SHIFT 7
uint8_t agg_vars1;
#define __USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 0 */
#define __USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 1 */
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 2 */
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 3 */
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define USTORM_FCOE_AG_CONTEXT_INV_CF (0x3<<4) /* BitField agg_vars1various aggregation variables Indicates a valid invalidate request. Set by the CMP STORM. */
#define USTORM_FCOE_AG_CONTEXT_INV_CF_SHIFT 4
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF (0x3<<6) /* BitField agg_vars1various aggregation variables Set when a message was received from the CMP STORM. For future use. */
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF_SHIFT 6
uint8_t state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t state /* The state of the connection */;
uint8_t agg_vars1;
#define __USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 0 */
#define __USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 1 */
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 2 */
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 3 */
#define USTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define USTORM_FCOE_AG_CONTEXT_INV_CF (0x3<<4) /* BitField agg_vars1various aggregation variables Indicates a valid invalidate request. Set by the CMP STORM. */
#define USTORM_FCOE_AG_CONTEXT_INV_CF_SHIFT 4
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF (0x3<<6) /* BitField agg_vars1various aggregation variables Set when a message was received from the CMP STORM. For future use. */
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF_SHIFT 6
uint8_t agg_vars2;
#define USTORM_FCOE_AG_CONTEXT_TX_CF (0x3<<0) /* BitField agg_vars2various aggregation variables Set when a message was received from the Tx STORM. For future use. */
#define USTORM_FCOE_AG_CONTEXT_TX_CF_SHIFT 0
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF (0x3<<2) /* BitField agg_vars2various aggregation variables Set when a message was received from the Timer. */
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF_SHIFT 2
#define USTORM_FCOE_AG_CONTEXT_AGG_MISC4_RULE (0x7<<4) /* BitField agg_vars2various aggregation variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define USTORM_FCOE_AG_CONTEXT_AGG_MISC4_RULE_SHIFT 4
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL2_MASK (0x1<<7) /* BitField agg_vars2various aggregation variables Used to mask the decision rule of AggVal2. Used in iSCSI. Should be 0 in all other protocols */
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL2_MASK_SHIFT 7
uint8_t __aux_counter_flags /* auxiliary counter flags*/;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
uint8_t agg_misc2;
uint16_t pbf_tx_seq_ack /* Sequence number of the last sequence transmitted by PBF. */;
#elif defined(__LITTLE_ENDIAN)
uint16_t pbf_tx_seq_ack /* Sequence number of the last sequence transmitted by PBF. */;
uint8_t agg_misc2;
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
#endif
uint32_t agg_misc4;
#if defined(__BIG_ENDIAN)
uint8_t agg_val3_th;
uint8_t agg_val3;
uint16_t agg_misc3;
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_misc3;
uint8_t agg_val3;
uint8_t agg_val3_th;
#endif
uint32_t expired_task_id /* Timer expiration task id */;
uint32_t agg_misc4_th;
#if defined(__BIG_ENDIAN)
uint16_t cq_prod /* CQ producer updated by FW */;
uint16_t cq_cons /* CQ consumer updated by driver via doorbell */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_cons /* CQ consumer updated by driver via doorbell */;
uint16_t cq_prod /* CQ producer updated by FW */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __reserved2;
uint8_t decision_rules;
#define USTORM_FCOE_AG_CONTEXT_CQ_DEC_RULE (0x7<<0) /* BitField decision_rulesVarious decision rules */
#define USTORM_FCOE_AG_CONTEXT_CQ_DEC_RULE_SHIFT 0
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL3_RULE (0x7<<3) /* BitField decision_rulesVarious decision rules */
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL3_RULE_SHIFT 3
#define USTORM_FCOE_AG_CONTEXT_CQ_ARM_N_FLAG (0x1<<6) /* BitField decision_rulesVarious decision rules CQ negative arm indication updated via doorbell */
#define USTORM_FCOE_AG_CONTEXT_CQ_ARM_N_FLAG_SHIFT 6
#define __USTORM_FCOE_AG_CONTEXT_RESERVED1 (0x1<<7) /* BitField decision_rulesVarious decision rules */
#define __USTORM_FCOE_AG_CONTEXT_RESERVED1_SHIFT 7
uint8_t decision_rule_enable_bits;
#define __USTORM_FCOE_AG_CONTEXT_RESERVED_INV_CF_EN (0x1<<0) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_RESERVED_INV_CF_EN_SHIFT 0
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF_EN (0x1<<1) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF_EN_SHIFT 1
#define USTORM_FCOE_AG_CONTEXT_TX_CF_EN (0x1<<2) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_FCOE_AG_CONTEXT_TX_CF_EN_SHIFT 2
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF_EN (0x1<<3) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF_EN_SHIFT 3
#define __USTORM_FCOE_AG_CONTEXT_AUX1_CF_EN (0x1<<4) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_AUX1_CF_EN_SHIFT 4
#define __USTORM_FCOE_AG_CONTEXT_QUEUE0_CF_EN (0x1<<5) /* BitField decision_rule_enable_bitsEnable bits for various decision rules The flush queues counter flag enable. */
#define __USTORM_FCOE_AG_CONTEXT_QUEUE0_CF_EN_SHIFT 5
#define __USTORM_FCOE_AG_CONTEXT_AUX3_CF_EN (0x1<<6) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_AUX3_CF_EN_SHIFT 6
#define __USTORM_FCOE_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_DQ_CF_EN_SHIFT 7
#elif defined(__LITTLE_ENDIAN)
uint8_t decision_rule_enable_bits;
#define __USTORM_FCOE_AG_CONTEXT_RESERVED_INV_CF_EN (0x1<<0) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_RESERVED_INV_CF_EN_SHIFT 0
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF_EN (0x1<<1) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_FCOE_AG_CONTEXT_COMPLETION_CF_EN_SHIFT 1
#define USTORM_FCOE_AG_CONTEXT_TX_CF_EN (0x1<<2) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_FCOE_AG_CONTEXT_TX_CF_EN_SHIFT 2
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF_EN (0x1<<3) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_TIMER_CF_EN_SHIFT 3
#define __USTORM_FCOE_AG_CONTEXT_AUX1_CF_EN (0x1<<4) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_AUX1_CF_EN_SHIFT 4
#define __USTORM_FCOE_AG_CONTEXT_QUEUE0_CF_EN (0x1<<5) /* BitField decision_rule_enable_bitsEnable bits for various decision rules The flush queues counter flag enable. */
#define __USTORM_FCOE_AG_CONTEXT_QUEUE0_CF_EN_SHIFT 5
#define __USTORM_FCOE_AG_CONTEXT_AUX3_CF_EN (0x1<<6) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_AUX3_CF_EN_SHIFT 6
#define __USTORM_FCOE_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_FCOE_AG_CONTEXT_DQ_CF_EN_SHIFT 7
uint8_t decision_rules;
#define USTORM_FCOE_AG_CONTEXT_CQ_DEC_RULE (0x7<<0) /* BitField decision_rulesVarious decision rules */
#define USTORM_FCOE_AG_CONTEXT_CQ_DEC_RULE_SHIFT 0
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL3_RULE (0x7<<3) /* BitField decision_rulesVarious decision rules */
#define __USTORM_FCOE_AG_CONTEXT_AGG_VAL3_RULE_SHIFT 3
#define USTORM_FCOE_AG_CONTEXT_CQ_ARM_N_FLAG (0x1<<6) /* BitField decision_rulesVarious decision rules CQ negative arm indication updated via doorbell */
#define USTORM_FCOE_AG_CONTEXT_CQ_ARM_N_FLAG_SHIFT 6
#define __USTORM_FCOE_AG_CONTEXT_RESERVED1 (0x1<<7) /* BitField decision_rulesVarious decision rules */
#define __USTORM_FCOE_AG_CONTEXT_RESERVED1_SHIFT 7
uint16_t __reserved2;
#endif
};
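/*
 * Illustrative sketch: cq_prod above is advanced by firmware while cq_cons
 * is advanced by the driver via doorbell, so unsigned 16-bit subtraction
 * yields the number of completions not yet consumed, with producer
 * wraparound handled for free:
 */
static inline uint16_t
ustorm_fcoe_ag_cq_pending(const struct ustorm_fcoe_ag_context *c)
{
	/* modular arithmetic on uint16_t absorbs the wrap at 0xffff */
	return ((uint16_t)(c->cq_prod - c->cq_cons));
}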
/*
* The iscsi aggregative context of Ustorm
*/
struct ustorm_iscsi_ag_context
{
#if defined(__BIG_ENDIAN)
uint8_t __aux_counter_flags /* auxiliary counter flags*/;
uint8_t agg_vars2;
#define USTORM_ISCSI_AG_CONTEXT_TX_CF (0x3<<0) /* BitField agg_vars2various aggregation variables Set when a message was received from the Tx STORM. For future use. */
#define USTORM_ISCSI_AG_CONTEXT_TX_CF_SHIFT 0
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF (0x3<<2) /* BitField agg_vars2various aggregation variables Set when a message was received from the Timer. */
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF_SHIFT 2
#define USTORM_ISCSI_AG_CONTEXT_AGG_MISC4_RULE (0x7<<4) /* BitField agg_vars2various aggregation variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define USTORM_ISCSI_AG_CONTEXT_AGG_MISC4_RULE_SHIFT 4
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_MASK (0x1<<7) /* BitField agg_vars2various aggregation variables Used to mask the decision rule of AggVal2. Used in iSCSI. Should be 0 in all other protocols */
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_MASK_SHIFT 7
uint8_t agg_vars1;
#define __USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 0 */
#define __USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 1 */
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 2 */
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 3 */
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define USTORM_ISCSI_AG_CONTEXT_INV_CF (0x3<<4) /* BitField agg_vars1various aggregation variables Indicates a valid invalidate request. Set by the CMP STORM. */
#define USTORM_ISCSI_AG_CONTEXT_INV_CF_SHIFT 4
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF (0x3<<6) /* BitField agg_vars1various aggregation variables Set when a message was received from the CMP STORM. For future use. */
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF_SHIFT 6
uint8_t state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t state /* The state of the connection */;
uint8_t agg_vars1;
#define __USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 0 */
#define __USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 1 */
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 2 */
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1various aggregation variables The connection is currently registered to the QM with queue index 3 */
#define USTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define USTORM_ISCSI_AG_CONTEXT_INV_CF (0x3<<4) /* BitField agg_vars1various aggregation variables Indicates a valid invalidate request. Set by the CMP STORM. */
#define USTORM_ISCSI_AG_CONTEXT_INV_CF_SHIFT 4
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF (0x3<<6) /* BitField agg_vars1various aggregation variables Set when a message was received from the CMP STORM. For future use. */
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF_SHIFT 6
uint8_t agg_vars2;
#define USTORM_ISCSI_AG_CONTEXT_TX_CF (0x3<<0) /* BitField agg_vars2various aggregation variables Set when a message was received from the Tx STORM. For future use. */
#define USTORM_ISCSI_AG_CONTEXT_TX_CF_SHIFT 0
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF (0x3<<2) /* BitField agg_vars2various aggregation variables Set when a message was received from the Timer. */
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF_SHIFT 2
#define USTORM_ISCSI_AG_CONTEXT_AGG_MISC4_RULE (0x7<<4) /* BitField agg_vars2various aggregation variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define USTORM_ISCSI_AG_CONTEXT_AGG_MISC4_RULE_SHIFT 4
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_MASK (0x1<<7) /* BitField agg_vars2various aggregation variables Used to mask the decision rule of AggVal2. Used in iSCSI. Should be 0 in all other protocols */
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_MASK_SHIFT 7
uint8_t __aux_counter_flags /* auxiliary counter flags*/;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
uint8_t agg_misc2;
uint16_t __cq_local_comp_itt_val /* The local completion ITT to complete. Set by the CMP STORM RO for USTORM. */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __cq_local_comp_itt_val /* The local completion ITT to complete. Set by the CMP STORM RO for USTORM. */;
uint8_t agg_misc2;
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
#endif
uint32_t agg_misc4;
#if defined(__BIG_ENDIAN)
uint8_t agg_val3_th;
uint8_t agg_val3;
uint16_t agg_misc3;
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_misc3;
uint8_t agg_val3;
uint8_t agg_val3_th;
#endif
uint32_t agg_val1;
uint32_t agg_misc4_th;
#if defined(__BIG_ENDIAN)
uint16_t agg_val2_th;
uint16_t agg_val2;
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_val2;
uint16_t agg_val2_th;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __reserved2;
uint8_t decision_rules;
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_RULE (0x7<<0) /* BitField decision_rulesVarious decision rules */
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_RULE_SHIFT 0
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL3_RULE (0x7<<3) /* BitField decision_rulesVarious decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL3_RULE_SHIFT 3
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG (0x1<<6) /* BitField decision_rulesVarious decision rules */
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG_SHIFT 6
#define __USTORM_ISCSI_AG_CONTEXT_RESERVED1 (0x1<<7) /* BitField decision_rulesVarious decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_RESERVED1_SHIFT 7
uint8_t decision_rule_enable_bits;
#define USTORM_ISCSI_AG_CONTEXT_INV_CF_EN (0x1<<0) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_ISCSI_AG_CONTEXT_INV_CF_EN_SHIFT 0
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF_EN (0x1<<1) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF_EN_SHIFT 1
#define USTORM_ISCSI_AG_CONTEXT_TX_CF_EN (0x1<<2) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_ISCSI_AG_CONTEXT_TX_CF_EN_SHIFT 2
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF_EN (0x1<<3) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF_EN_SHIFT 3
#define __USTORM_ISCSI_AG_CONTEXT_CQ_LOCAL_COMP_CF_EN (0x1<<4) /* BitField decision_rule_enable_bitsEnable bits for various decision rules The local completion counter flag enable. Enabled by USTORM at the beginning. */
#define __USTORM_ISCSI_AG_CONTEXT_CQ_LOCAL_COMP_CF_EN_SHIFT 4
#define __USTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN (0x1<<5) /* BitField decision_rule_enable_bitsEnable bits for various decision rules The flush queues counter flag enable. */
#define __USTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN_SHIFT 5
#define __USTORM_ISCSI_AG_CONTEXT_AUX3_CF_EN (0x1<<6) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_AUX3_CF_EN_SHIFT 6
#define __USTORM_ISCSI_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_DQ_CF_EN_SHIFT 7
#elif defined(__LITTLE_ENDIAN)
uint8_t decision_rule_enable_bits;
#define USTORM_ISCSI_AG_CONTEXT_INV_CF_EN (0x1<<0) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_ISCSI_AG_CONTEXT_INV_CF_EN_SHIFT 0
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF_EN (0x1<<1) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_ISCSI_AG_CONTEXT_COMPLETION_CF_EN_SHIFT 1
#define USTORM_ISCSI_AG_CONTEXT_TX_CF_EN (0x1<<2) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define USTORM_ISCSI_AG_CONTEXT_TX_CF_EN_SHIFT 2
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF_EN (0x1<<3) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_TIMER_CF_EN_SHIFT 3
#define __USTORM_ISCSI_AG_CONTEXT_CQ_LOCAL_COMP_CF_EN (0x1<<4) /* BitField decision_rule_enable_bitsEnable bits for various decision rules The local completion counter flag enable. Enabled by USTORM at the beginning. */
#define __USTORM_ISCSI_AG_CONTEXT_CQ_LOCAL_COMP_CF_EN_SHIFT 4
#define __USTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN (0x1<<5) /* BitField decision_rule_enable_bitsEnable bits for various decision rules The flush queues counter flag enable. */
#define __USTORM_ISCSI_AG_CONTEXT_QUEUES_FLUSH_Q0_CF_EN_SHIFT 5
#define __USTORM_ISCSI_AG_CONTEXT_AUX3_CF_EN (0x1<<6) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_AUX3_CF_EN_SHIFT 6
#define __USTORM_ISCSI_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField decision_rule_enable_bitsEnable bits for various decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_DQ_CF_EN_SHIFT 7
uint8_t decision_rules;
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_RULE (0x7<<0) /* BitField decision_rulesVarious decision rules */
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_RULE_SHIFT 0
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL3_RULE (0x7<<3) /* BitField decision_rulesVarious decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_AGG_VAL3_RULE_SHIFT 3
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG (0x1<<6) /* BitField decision_rulesVarious decision rules */
#define USTORM_ISCSI_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG_SHIFT 6
#define __USTORM_ISCSI_AG_CONTEXT_RESERVED1 (0x1<<7) /* BitField decision_rulesVarious decision rules */
#define __USTORM_ISCSI_AG_CONTEXT_RESERVED1_SHIFT 7
uint16_t __reserved2;
#endif
};
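/*
 * Illustrative sketch: since every field in this header follows the same
 * <NAME>/<NAME>_SHIFT convention, generic token-pasting accessors can be
 * written once and reused. The macro names below are the sketch's own,
 * not part of this header:
 */
#define HSI_GET_FIELD(reg, name)	(((reg) & (name)) >> name##_SHIFT)
#define HSI_SET_FIELD(reg, name, val)					\
	((reg) = (((reg) & ~(name)) |					\
	    (((val) << name##_SHIFT) & (name))))
/*
 * e.g. HSI_SET_FIELD(ag->agg_vars2, USTORM_ISCSI_AG_CONTEXT_AGG_MISC4_RULE, 1)
 * selects the EQ decision rule per the 0-NOP,1-EQ,... encoding noted above.
 */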
/*
* The toe aggregative context of Ustorm
*/
struct ustorm_toe_ag_context
{
#if defined(__BIG_ENDIAN)
uint8_t __aux_counter_flags /* auxiliary counter flags*/;
uint8_t __agg_vars2 /* various aggregation variables*/;
uint8_t __agg_vars1 /* various aggregation variables*/;
uint8_t __state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __state /* The state of the connection */;
uint8_t __agg_vars1 /* various aggregation variables*/;
uint8_t __agg_vars2 /* various aggregation variables*/;
uint8_t __aux_counter_flags /* auxiliary counter flags*/;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
uint8_t __agg_misc2;
uint16_t __agg_misc1;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_misc1;
uint8_t __agg_misc2;
uint8_t cdu_usage /* Will be used by the CDU for validation of the CID/connection type on doorbells. */;
#endif
uint32_t __agg_misc4;
#if defined(__BIG_ENDIAN)
uint8_t __agg_val3_th;
uint8_t __agg_val3;
uint16_t __agg_misc3;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_misc3;
uint8_t __agg_val3;
uint8_t __agg_val3_th;
#endif
uint32_t driver_doorbell_info_ptr_lo /* the host pointer to the struct of updated info */;
uint32_t driver_doorbell_info_ptr_hi /* the host pointer to the struct of updated info */;
#if defined(__BIG_ENDIAN)
uint16_t __agg_val2_th;
uint16_t rq_prod /* The RQ producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rq_prod /* The RQ producer */;
uint16_t __agg_val2_th;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __reserved2;
uint8_t decision_rules;
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL2_RULE (0x7<<0) /* BitField decision_rulesVarious decision rules */
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL2_RULE_SHIFT 0
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL3_RULE (0x7<<3) /* BitField decision_rulesVarious decision rules */
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL3_RULE_SHIFT 3
#define USTORM_TOE_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG (0x1<<6) /* BitField decision_rulesVarious decision rules */
#define USTORM_TOE_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG_SHIFT 6
#define __USTORM_TOE_AG_CONTEXT_RESERVED1 (0x1<<7) /* BitField decision_rulesVarious decision rules */
#define __USTORM_TOE_AG_CONTEXT_RESERVED1_SHIFT 7
uint8_t __decision_rule_enable_bits /* Enable bits for various decision rules*/;
#elif defined(__LITTLE_ENDIAN)
uint8_t __decision_rule_enable_bits /* Enable bits for various decision rules*/;
uint8_t decision_rules;
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL2_RULE (0x7<<0) /* BitField decision_rulesVarious decision rules */
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL2_RULE_SHIFT 0
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL3_RULE (0x7<<3) /* BitField decision_rulesVarious decision rules */
#define __USTORM_TOE_AG_CONTEXT_AGG_VAL3_RULE_SHIFT 3
#define USTORM_TOE_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG (0x1<<6) /* BitField decision_rulesVarious decision rules */
#define USTORM_TOE_AG_CONTEXT_AGG_VAL2_ARM_N_FLAG_SHIFT 6
#define __USTORM_TOE_AG_CONTEXT_RESERVED1 (0x1<<7) /* BitField decision_rulesVarious decision rules */
#define __USTORM_TOE_AG_CONTEXT_RESERVED1_SHIFT 7
uint16_t __reserved2;
#endif
};
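/*
 * Illustrative sketch: driver_doorbell_info_ptr_lo/hi above carry one
 * 64-bit host address split into 32-bit halves, the usual convention for
 * host pointers in this header. Filling them from a 64-bit bus address:
 */
static inline void
ustorm_toe_ag_set_db_info_ptr(struct ustorm_toe_ag_context *c, uint64_t addr)
{
	c->driver_doorbell_info_ptr_lo = (uint32_t)addr;		/* low 32 bits */
	c->driver_doorbell_info_ptr_hi = (uint32_t)(addr >> 32);	/* high 32 bits */
}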
/*
* The eth aggregative context of Xstorm
*/
struct xstorm_eth_ag_context
{
uint32_t reserved0;
#if defined(__BIG_ENDIAN)
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
uint8_t reserved2;
uint16_t reserved1;
#elif defined(__LITTLE_ENDIAN)
uint16_t reserved1;
uint8_t reserved2;
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
#endif
uint32_t reserved3[30];
};
/*
* The fcoe aggregative context section of Xstorm
*/
struct xstorm_fcoe_extra_ag_context_section
{
#if defined(__BIG_ENDIAN)
uint8_t tcp_agg_vars1;
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED51 (0x3<<0) /* BitField tcp_agg_vars1Various aggregative variables Counter flag used to rewind the DA timer */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED51_SHIFT 0
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 2 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_SHIFT 2
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 3 */
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_SHIFT 4
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_CLEAR_DA_TIMER_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables If set, enables sending clear commands as part of the DA decision rules */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_CLEAR_DA_TIMER_EN_SHIFT 6
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_DA_EXPIRATION_FLAG (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Indicates that there was a delayed ack timer expiration */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_DA_EXPIRATION_FLAG_SHIFT 7
uint8_t __reserved_da_cnt /* Counts the number of ACK requests received from the TSTORM with no registration to QM. */;
uint16_t __mtu /* MSS used for the Nagle algorithm and for transmission */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __mtu /* MSS used for the Nagle algorithm and for transmission */;
uint8_t __reserved_da_cnt /* Counts the number of ACK requests received from the TSTORM with no registration to QM. */;
uint8_t tcp_agg_vars1;
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED51 (0x3<<0) /* BitField tcp_agg_vars1Various aggregative variables Counter flag used to rewind the DA timer */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED51_SHIFT 0
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 2 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_SHIFT 2
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 3 */
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_SHIFT 4
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_CLEAR_DA_TIMER_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables If set, enables sending clear commands as part of the DA decision rules */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_CLEAR_DA_TIMER_EN_SHIFT 6
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_DA_EXPIRATION_FLAG (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Indicates that there was a delayed ack timer expiration */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_DA_EXPIRATION_FLAG_SHIFT 7
#endif
uint32_t snd_nxt /* The current sequence number to send */;
uint32_t __xfrqe_bd_addr_lo /* The current transmission window in bytes */;
uint32_t __xfrqe_bd_addr_hi /* The current Send UNA sequence number */;
uint32_t __xfrqe_data1 /* The current local advertised window to FE. */;
#if defined(__BIG_ENDIAN)
uint8_t __agg_val8_th /* aggregated value 8 - threshold */;
uint8_t __tx_dest /* aggregated value 8 */;
uint16_t tcp_agg_vars2;
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED57 (0x1<<0) /* BitField tcp_agg_vars2Various aggregative variables Used in TOE to indicate that FIN is sent on a BD to bypass the Nagle rule */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED57_SHIFT 0
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED58 (0x1<<1) /* BitField tcp_agg_vars2Various aggregative variables Enables the tx window based decision */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED58_SHIFT 1
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED59 (0x1<<2) /* BitField tcp_agg_vars2Various aggregative variables The DA Timer status. If set, indicates that the delayed ACK timer is active. */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED59_SHIFT 2
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX3_FLAG (0x1<<3) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 3 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX3_FLAG_SHIFT 3
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX4_FLAG (0x1<<4) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 4 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX4_FLAG_SHIFT 4
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED60 (0x1<<5) /* BitField tcp_agg_vars2Various aggregative variables Enable DA for the specific connection */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED60_SHIFT 5
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_ACK_TO_FE_UPDATED_EN (0x1<<6) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux2_cf */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_ACK_TO_FE_UPDATED_EN_SHIFT 6
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN (0x1<<7) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux3_cf */
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN_SHIFT 7
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_TX_FIN_FLAG_EN (0x1<<8) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rule based on tx_fin_flag */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_TX_FIN_FLAG_EN_SHIFT 8
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<9) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 1 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 9
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SET_RTO_CF (0x3<<10) /* BitField tcp_agg_vars2Various aggregative variables counter flag for setting the rto timer */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SET_RTO_CF_SHIFT 10
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF (0x3<<12) /* BitField tcp_agg_vars2Various aggregative variables timestamp was updated counter flag */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF_SHIFT 12
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF (0x3<<14) /* BitField tcp_agg_vars2Various aggregative variables auxiliary counter flag 8 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_SHIFT 14
#elif defined(__LITTLE_ENDIAN)
uint16_t tcp_agg_vars2;
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED57 (0x1<<0) /* BitField tcp_agg_vars2Various aggregative variables Used in TOE to indicate that FIN is sent on a BD to bypass the Nagle rule */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED57_SHIFT 0
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED58 (0x1<<1) /* BitField tcp_agg_vars2Various aggregative variables Enables the tx window based decision */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED58_SHIFT 1
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED59 (0x1<<2) /* BitField tcp_agg_vars2Various aggregative variables The DA Timer status. If set, indicates that the delayed ACK timer is active. */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED59_SHIFT 2
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX3_FLAG (0x1<<3) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 3 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX3_FLAG_SHIFT 3
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX4_FLAG (0x1<<4) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 4 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX4_FLAG_SHIFT 4
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED60 (0x1<<5) /* BitField tcp_agg_vars2Various aggregative variables Enable DA for the specific connection */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED60_SHIFT 5
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_ACK_TO_FE_UPDATED_EN (0x1<<6) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux2_cf */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_ACK_TO_FE_UPDATED_EN_SHIFT 6
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN (0x1<<7) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux3_cf */
#define XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN_SHIFT 7
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_TX_FIN_FLAG_EN (0x1<<8) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rule based on tx_fin_flag */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_RESERVED_TX_FIN_FLAG_EN_SHIFT 8
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<9) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 1 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 9
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SET_RTO_CF (0x3<<10) /* BitField tcp_agg_vars2Various aggregative variables counter flag for setting the rto timer */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_SET_RTO_CF_SHIFT 10
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF (0x3<<12) /* BitField tcp_agg_vars2Various aggregative variables timestamp was updated counter flag */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF_SHIFT 12
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF (0x3<<14) /* BitField tcp_agg_vars2Various aggregative variables auxiliary counter flag 8 */
#define __XSTORM_FCOE_EXTRA_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_SHIFT 14
uint8_t __tx_dest /* aggregated value 8 */;
uint8_t __agg_val8_th /* aggregated value 8 - threshold */;
#endif
uint32_t __sq_base_addr_lo /* The low page address at which the SQ resides in host memory */;
uint32_t __sq_base_addr_hi /* The high page address at which the SQ resides in host memory */;
uint32_t __xfrq_base_addr_lo /* The low page address at which the XFRQ resides in host memory */;
uint32_t __xfrq_base_addr_hi /* The high page address at which the XFRQ resides in host memory */;
#if defined(__BIG_ENDIAN)
uint16_t __xfrq_cons /* The XFRQ consumer */;
uint16_t __xfrq_prod /* The XFRQ producer, updated by Ustorm */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __xfrq_prod /* The XFRQ producer, updated by Ustorm */;
uint16_t __xfrq_cons /* The XFRQ consumer */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars5 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars4 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars3 /* Various aggregative variables*/;
uint8_t __reserved_force_pure_ack_cnt /* The number of force ACK commands that arrived from the TSTORM */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __reserved_force_pure_ack_cnt /* The number of force ACK commands that arrived from the TSTORM */;
uint8_t __tcp_agg_vars3 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars4 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars5 /* Various aggregative variables*/;
#endif
uint32_t __tcp_agg_vars6 /* Various aggregative variables*/;
#if defined(__BIG_ENDIAN)
uint16_t __xfrqe_mng /* Misc aggregated variable 6 */;
uint16_t __tcp_agg_vars7 /* Various aggregative variables*/;
#elif defined(__LITTLE_ENDIAN)
uint16_t __tcp_agg_vars7 /* Various aggregative variables*/;
uint16_t __xfrqe_mng /* Misc aggregated variable 6 */;
#endif
uint32_t __xfrqe_data0 /* aggregated value 10 */;
uint32_t __agg_val10_th /* aggregated value 10 - threshold */;
#if defined(__BIG_ENDIAN)
uint16_t __reserved3;
uint8_t __reserved2;
uint8_t __da_only_cnt /* counts delayed acks and not window updates */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __da_only_cnt /* counts delayed acks and not window updates */;
uint8_t __reserved2;
uint16_t __reserved3;
#endif
};
/*
* The fcoe aggregative context of Xstorm
*/
struct xstorm_fcoe_ag_context
{
#if defined(__BIG_ENDIAN)
uint16_t agg_val1 /* aggregated value 1 */;
uint8_t agg_vars1;
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED51 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED51_SHIFT 2
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED52 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED52_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_MORE_TO_SEND_EN (0x1<<4) /* BitField agg_vars1Various aggregative variables Enables the decision rule of more_to_Send > 0 */
#define __XSTORM_FCOE_AG_CONTEXT_MORE_TO_SEND_EN_SHIFT 4
#define XSTORM_FCOE_AG_CONTEXT_NAGLE_EN (0x1<<5) /* BitField agg_vars1Various aggregative variables Enables the Nagle decision */
#define XSTORM_FCOE_AG_CONTEXT_NAGLE_EN_SHIFT 5
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables Used for future indication by the Driver on a doorbell */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG_SHIFT 6
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED_UNA_GT_NXT_EN (0x1<<7) /* BitField agg_vars1Various aggregative variables Enable decision rules based on equality between snd_una and snd_nxt */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED_UNA_GT_NXT_EN_SHIFT 7
uint8_t __state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __state /* The state of the connection */;
uint8_t agg_vars1;
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define __XSTORM_FCOE_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED51 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED51_SHIFT 2
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED52 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED52_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_MORE_TO_SEND_EN (0x1<<4) /* BitField agg_vars1Various aggregative variables Enables the decision rule of more_to_Send > 0 */
#define __XSTORM_FCOE_AG_CONTEXT_MORE_TO_SEND_EN_SHIFT 4
#define XSTORM_FCOE_AG_CONTEXT_NAGLE_EN (0x1<<5) /* BitField agg_vars1Various aggregative variables Enables the Nagle decision */
#define XSTORM_FCOE_AG_CONTEXT_NAGLE_EN_SHIFT 5
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables Used for future indication by the Driver on a doorbell */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG_SHIFT 6
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED_UNA_GT_NXT_EN (0x1<<7) /* BitField agg_vars1Various aggregative variables Enable decision rules based on equality between snd_una and snd_nxt */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED_UNA_GT_NXT_EN_SHIFT 7
uint16_t agg_val1 /* aggregated value 1 */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
uint8_t __agg_vars4 /* Various aggregative variables*/;
uint8_t agg_vars3;
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2 (0x3F<<0) /* BitField agg_vars3Various aggregative variables The physical queue number of queue index 2 */
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_AUX19_CF (0x3<<6) /* BitField agg_vars3Various aggregative variables auxiliary counter flag 19 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX19_CF_SHIFT 6
uint8_t agg_vars2;
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF (0x3<<0) /* BitField agg_vars2Various aggregative variables auxiliary counter flag 4 */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG_EN (0x1<<2) /* BitField agg_vars2Various aggregative variables Enable decision rule based on dq_spare_flag */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG_EN_SHIFT 2
#define __XSTORM_FCOE_AG_CONTEXT_AUX8_FLAG (0x1<<3) /* BitField agg_vars2Various aggregative variables auxiliary flag 8 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX8_FLAG_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_AUX9_FLAG (0x1<<4) /* BitField agg_vars2Various aggregative variables auxiliary flag 9 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX9_FLAG_SHIFT 4
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE1 (0x3<<5) /* BitField agg_vars2Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE1_SHIFT 5
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField agg_vars2Various aggregative variables Enable decision rules based on aux4_cf */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF_EN_SHIFT 7
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_vars2;
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF (0x3<<0) /* BitField agg_vars2Various aggregative variables auxiliary counter flag 4 */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG_EN (0x1<<2) /* BitField agg_vars2Various aggregative variables Enable decision rule based on dq_spare_flag */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_SPARE_FLAG_EN_SHIFT 2
#define __XSTORM_FCOE_AG_CONTEXT_AUX8_FLAG (0x1<<3) /* BitField agg_vars2Various aggregative variables auxiliary flag 8 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX8_FLAG_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_AUX9_FLAG (0x1<<4) /* BitField agg_vars2Various aggregative variables auxiliary flag 9 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX9_FLAG_SHIFT 4
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE1 (0x3<<5) /* BitField agg_vars2Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE1_SHIFT 5
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField agg_vars2Various aggregative variables Enable decision rules based on aux4_cf */
#define __XSTORM_FCOE_AG_CONTEXT_DQ_CF_EN_SHIFT 7
uint8_t agg_vars3;
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2 (0x3F<<0) /* BitField agg_vars3Various aggregative variables The physical queue number of queue index 2 */
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_AUX19_CF (0x3<<6) /* BitField agg_vars3Various aggregative variables auxiliary counter flag 19 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX19_CF_SHIFT 6
uint8_t __agg_vars4 /* Various aggregative variables*/;
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
#endif
uint32_t more_to_send /* The number of bytes left to send */;
#if defined(__BIG_ENDIAN)
uint16_t agg_vars5;
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE5 (0x3<<0) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE5_SHIFT 0
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0 (0x3F<<2) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 0 */
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0_SHIFT 2
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1 (0x3F<<8) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 1 */
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1_SHIFT 8
#define __XSTORM_FCOE_AG_CONTEXT_CONFQ_DEC_RULE (0x3<<14) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_FCOE_AG_CONTEXT_CONFQ_DEC_RULE_SHIFT 14
uint16_t sq_cons /* The SQ consumer updated by Xstorm after consuming another WQE */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_cons /* The SQ consumer updated by Xstorm after consuming another WQE */;
uint16_t agg_vars5;
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE5 (0x3<<0) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE5_SHIFT 0
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0 (0x3F<<2) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 0 */
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0_SHIFT 2
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1 (0x3F<<8) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 1 */
#define XSTORM_FCOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1_SHIFT 8
#define __XSTORM_FCOE_AG_CONTEXT_CONFQ_DEC_RULE (0x3<<14) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_FCOE_AG_CONTEXT_CONFQ_DEC_RULE_SHIFT 14
#endif
struct xstorm_fcoe_extra_ag_context_section __extra_section /* Extra context section */;
#if defined(__BIG_ENDIAN)
uint16_t agg_vars7;
#define __XSTORM_FCOE_AG_CONTEXT_AGG_VAL11_DECISION_RULE (0x7<<0) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define __XSTORM_FCOE_AG_CONTEXT_AGG_VAL11_DECISION_RULE_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_AUX13_FLAG (0x1<<3) /* BitField agg_vars7Various aggregative variables auxiliary flag 13 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX13_FLAG_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_QUEUE0_CF (0x3<<4) /* BitField agg_vars7Various aggregative variables auxiliary counter flag 18 */
#define __XSTORM_FCOE_AG_CONTEXT_QUEUE0_CF_SHIFT 4
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE3 (0x3<<6) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE3_SHIFT 6
#define XSTORM_FCOE_AG_CONTEXT_AUX1_CF (0x3<<8) /* BitField agg_vars7Various aggregative variables auxiliary counter flag 1 */
#define XSTORM_FCOE_AG_CONTEXT_AUX1_CF_SHIFT 8
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED62 (0x1<<10) /* BitField agg_vars7Various aggregative variables Mask the check of the completion sequence on retransmit */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED62_SHIFT 10
#define __XSTORM_FCOE_AG_CONTEXT_AUX1_CF_EN (0x1<<11) /* BitField agg_vars7Various aggregative variables Enable decision rules based on aux1_cf */
#define __XSTORM_FCOE_AG_CONTEXT_AUX1_CF_EN_SHIFT 11
#define __XSTORM_FCOE_AG_CONTEXT_AUX10_FLAG (0x1<<12) /* BitField agg_vars7Various aggregative variables auxiliary flag 10 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX10_FLAG_SHIFT 12
#define __XSTORM_FCOE_AG_CONTEXT_AUX11_FLAG (0x1<<13) /* BitField agg_vars7Various aggregative variables auxiliary flag 11 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX11_FLAG_SHIFT 13
#define __XSTORM_FCOE_AG_CONTEXT_AUX12_FLAG (0x1<<14) /* BitField agg_vars7Various aggregative variables auxiliary flag 12 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX12_FLAG_SHIFT 14
#define __XSTORM_FCOE_AG_CONTEXT_AUX2_FLAG (0x1<<15) /* BitField agg_vars7Various aggregative variables auxiliary flag 2 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX2_FLAG_SHIFT 15
uint8_t agg_val3_th /* Aggregated value 3 - threshold */;
uint8_t agg_vars6;
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6 (0x7<<0) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_XFRQ_DEC_RULE (0x7<<3) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define __XSTORM_FCOE_AG_CONTEXT_XFRQ_DEC_RULE_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_SQ_DEC_RULE (0x3<<6) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_FCOE_AG_CONTEXT_SQ_DEC_RULE_SHIFT 6
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_vars6;
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6 (0x7<<0) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_XFRQ_DEC_RULE (0x7<<3) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define __XSTORM_FCOE_AG_CONTEXT_XFRQ_DEC_RULE_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_SQ_DEC_RULE (0x3<<6) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_FCOE_AG_CONTEXT_SQ_DEC_RULE_SHIFT 6
uint8_t agg_val3_th /* Aggregated value 3 - threshold */;
uint16_t agg_vars7;
#define __XSTORM_FCOE_AG_CONTEXT_AGG_VAL11_DECISION_RULE (0x7<<0) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define __XSTORM_FCOE_AG_CONTEXT_AGG_VAL11_DECISION_RULE_SHIFT 0
#define __XSTORM_FCOE_AG_CONTEXT_AUX13_FLAG (0x1<<3) /* BitField agg_vars7Various aggregative variables auxiliary flag 13 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX13_FLAG_SHIFT 3
#define __XSTORM_FCOE_AG_CONTEXT_QUEUE0_CF (0x3<<4) /* BitField agg_vars7Various aggregative variables auxiliary counter flag 18 */
#define __XSTORM_FCOE_AG_CONTEXT_QUEUE0_CF_SHIFT 4
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE3 (0x3<<6) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_FCOE_AG_CONTEXT_DECISION_RULE3_SHIFT 6
#define XSTORM_FCOE_AG_CONTEXT_AUX1_CF (0x3<<8) /* BitField agg_vars7Various aggregative variables auxiliary counter flag 1 */
#define XSTORM_FCOE_AG_CONTEXT_AUX1_CF_SHIFT 8
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED62 (0x1<<10) /* BitField agg_vars7Various aggregative variables Mask the check of the completion sequence on retransmit */
#define __XSTORM_FCOE_AG_CONTEXT_RESERVED62_SHIFT 10
#define __XSTORM_FCOE_AG_CONTEXT_AUX1_CF_EN (0x1<<11) /* BitField agg_vars7Various aggregative variables Enable decision rules based on aux1_cf */
#define __XSTORM_FCOE_AG_CONTEXT_AUX1_CF_EN_SHIFT 11
#define __XSTORM_FCOE_AG_CONTEXT_AUX10_FLAG (0x1<<12) /* BitField agg_vars7Various aggregative variables auxiliary flag 10 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX10_FLAG_SHIFT 12
#define __XSTORM_FCOE_AG_CONTEXT_AUX11_FLAG (0x1<<13) /* BitField agg_vars7Various aggregative variables auxiliary flag 11 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX11_FLAG_SHIFT 13
#define __XSTORM_FCOE_AG_CONTEXT_AUX12_FLAG (0x1<<14) /* BitField agg_vars7Various aggregative variables auxiliary flag 12 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX12_FLAG_SHIFT 14
#define __XSTORM_FCOE_AG_CONTEXT_AUX2_FLAG (0x1<<15) /* BitField agg_vars7Various aggregative variables auxiliary flag 2 */
#define __XSTORM_FCOE_AG_CONTEXT_AUX2_FLAG_SHIFT 15
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val11_th /* aggregated value 11 - threshold */;
uint16_t __agg_val11 /* aggregated value 11 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val11 /* aggregated value 11 */;
uint16_t __agg_val11_th /* aggregated value 11 - threshold */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __reserved1;
uint8_t __agg_val6_th /* aggregated value 6 - threshold */;
uint16_t __agg_val9 /* aggregated value 9 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val9 /* aggregated value 9 */;
uint8_t __agg_val6_th /* aggregated value 6 - threshold */;
uint8_t __reserved1;
#endif
#if defined(__BIG_ENDIAN)
uint16_t confq_cons /* CONFQ Consumer */;
uint16_t confq_prod /* CONFQ Producer, updated by Ustorm - AggVal2 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t confq_prod /* CONFQ Producer, updated by Ustorm - AggVal2 */;
uint16_t confq_cons /* CONFQ Consumer */;
#endif
uint32_t agg_varint8_t;
#define XSTORM_FCOE_AG_CONTEXT_AGG_MISC2 (0xFFFFFF<<0) /* BitField agg_varint8_tVarious aggregative variables Misc aggregated variable 2 */
#define XSTORM_FCOE_AG_CONTEXT_AGG_MISC2_SHIFT 0
#define XSTORM_FCOE_AG_CONTEXT_AGG_MISC3 (0xFF<<24) /* BitField agg_varint8_tVarious aggregative variables Misc aggregated variable 3 */
#define XSTORM_FCOE_AG_CONTEXT_AGG_MISC3_SHIFT 24
#if defined(__BIG_ENDIAN)
uint16_t __cache_wqe_db /* Misc aggregated variable 0 */;
uint16_t sq_prod /* The SQ Producer updated by Xstorm after reading a batch of WQEs into the context */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_prod /* The SQ Producer updated by Xstorm after reading a batch of WQEs into the context */;
uint16_t __cache_wqe_db /* Misc aggregated variable 0 */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t agg_val3 /* Aggregated value 3 */;
uint8_t agg_val6 /* Aggregated value 6 */;
uint8_t agg_val5_th /* Aggregated value 5 - threshold */;
uint8_t agg_val5 /* Aggregated value 5 */;
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_val5 /* Aggregated value 5 */;
uint8_t agg_val5_th /* Aggregated value 5 - threshold */;
uint8_t agg_val6 /* Aggregated value 6 */;
uint8_t agg_val3 /* Aggregated value 3 */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_misc1 /* Spare value for aggregation. NOTE: this value is used in the retransmit decision rule if CmpSeqDecMask is 0. In that case it is intended to be CmpBdSize. */;
uint16_t agg_limit1 /* aggregated limit 1 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_limit1 /* aggregated limit 1 */;
uint16_t __agg_misc1 /* Spare value for aggregation. NOTE: this value is used in the retransmit decision rule if CmpSeqDecMask is 0. In that case it is intended to be CmpBdSize. */;
#endif
uint32_t completion_seq /* The sequence number of the start completion point (BD) */;
uint32_t confq_pbl_base_lo /* The CONFQ PBL base low address resides in host memory */;
uint32_t confq_pbl_base_hi /* The CONFQ PBL base high address resides in host memory */;
};
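/*
 * Illustrative sketch, not part of the original header: each bitfield
 * above comes as a MASK/SHIFT pair, and the intended access pattern is
 * mask-then-shift on the containing agg_vars word.  The helper names
 * below are hypothetical, shown only to demonstrate that pattern on the
 * agg_vars6 decision-rule field (0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,
 * 5-LT_CYC,6-LT_ABS); the enclosing struct is assumed to be
 * xstorm_fcoe_ag_context, as the macro prefixes indicate.
 */
static inline uint8_t
xstorm_fcoe_ag_get_decision_rule6(const struct xstorm_fcoe_ag_context *ag)
{
	/* isolate the 3-bit field, then move it down to bit 0 */
	return ((ag->agg_vars6 & XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6) >>
	    XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6_SHIFT);
}

static inline void
xstorm_fcoe_ag_set_decision_rule6(struct xstorm_fcoe_ag_context *ag,
    uint8_t rule)
{
	/* clear the old value, then or in the new one at its shift */
	ag->agg_vars6 &= ~XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6;
	ag->agg_vars6 |= (uint8_t)(rule <<
	    XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6_SHIFT) &
	    XSTORM_FCOE_AG_CONTEXT_DECISION_RULE6;
}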
/*
* The tcp aggregative context section of Xstorm
*/
struct xstorm_tcp_tcp_ag_context_section
{
#if defined(__BIG_ENDIAN)
uint8_t tcp_agg_vars1;
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF (0x3<<0) /* BitField tcp_agg_vars1Various aggregative variables Counter flag used to rewind the DA timer */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF_SHIFT 0
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 2 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_SHIFT 2
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 3 */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_SHIFT 4
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables If set enables sending clear commands as part of the DA decision rules */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN_SHIFT 6
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Indicates that there was a delayed ack timer expiration */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG_SHIFT 7
uint8_t __da_cnt /* Counts the number of ACK requests received from the TSTORM with no registration to QM. */;
uint16_t mss /* MSS used for nagle algorithm and for transmission */;
#elif defined(__LITTLE_ENDIAN)
uint16_t mss /* MSS used for nagle algorithm and for transmission */;
uint8_t __da_cnt /* Counts the number of ACK requests received from the TSTORM with no registration to QM. */;
uint8_t tcp_agg_vars1;
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF (0x3<<0) /* BitField tcp_agg_vars1Various aggregative variables Counter flag used to rewind the DA timer */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF_SHIFT 0
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 2 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_SHIFT 2
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 3 */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_SHIFT 4
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables If set enables sending clear commands as part of the DA decision rules */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN_SHIFT 6
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Indicates that there was a delayed ack timer expiration */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG_SHIFT 7
#endif
uint32_t snd_nxt /* The current sequence number to send */;
uint32_t tx_wnd /* The current transmission window in bytes */;
uint32_t snd_una /* The current Send UNA sequence number */;
uint32_t local_adv_wnd /* The current local advertised window to FE. */;
#if defined(__BIG_ENDIAN)
uint8_t __agg_val8_th /* aggregated value 8 - threshold */;
uint8_t __tx_dest /* aggregated value 8 */;
uint16_t tcp_agg_vars2;
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG (0x1<<0) /* BitField tcp_agg_vars2Various aggregative variables Used in TOE to indicate that FIN is sent on a BD to bypass the nagle rule */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_SHIFT 0
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED (0x1<<1) /* BitField tcp_agg_vars2Various aggregative variables Enables the tx window based decision */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED_SHIFT 1
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE (0x1<<2) /* BitField tcp_agg_vars2Various aggregative variables The DA Timer status. If set indicates that the delayed ACK timer is active. */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE_SHIFT 2
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX3_FLAG (0x1<<3) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 3 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX3_FLAG_SHIFT 3
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX4_FLAG (0x1<<4) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 4 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX4_FLAG_SHIFT 4
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_ENABLE (0x1<<5) /* BitField tcp_agg_vars2Various aggregative variables Enable DA for the specific connection */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_ENABLE_SHIFT 5
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN (0x1<<6) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux2_cf */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN_SHIFT 6
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN (0x1<<7) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux3_cf */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN_SHIFT 7
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN (0x1<<8) /* BitField tcp_agg_vars2Various aggregative variables Enable Decision rule based on tx_fin_flag */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN_SHIFT 8
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<9) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 1 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 9
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_RTO_CF (0x3<<10) /* BitField tcp_agg_vars2Various aggregative variables counter flag for setting the rto timer */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_RTO_CF_SHIFT 10
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF (0x3<<12) /* BitField tcp_agg_vars2Various aggregative variables timestamp was updated counter flag */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF_SHIFT 12
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF (0x3<<14) /* BitField tcp_agg_vars2Various aggregative variables auxiliary counter flag 8 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_SHIFT 14
#elif defined(__LITTLE_ENDIAN)
uint16_t tcp_agg_vars2;
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG (0x1<<0) /* BitField tcp_agg_vars2Various aggregative variables Used in TOE to indicate that FIN is sent on a BD to bypass the nagle rule */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_SHIFT 0
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED (0x1<<1) /* BitField tcp_agg_vars2Various aggregative variables Enables the tx window based decision */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED_SHIFT 1
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE (0x1<<2) /* BitField tcp_agg_vars2Various aggregative variables The DA Timer status. If set indicates that the delayed ACK timer is active. */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE_SHIFT 2
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX3_FLAG (0x1<<3) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 3 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX3_FLAG_SHIFT 3
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX4_FLAG (0x1<<4) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 4 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX4_FLAG_SHIFT 4
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_ENABLE (0x1<<5) /* BitField tcp_agg_vars2Various aggregative variables Enable DA for the specific connection */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DA_ENABLE_SHIFT 5
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN (0x1<<6) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux2_cf */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN_SHIFT 6
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN (0x1<<7) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux3_cf */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN_SHIFT 7
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN (0x1<<8) /* BitField tcp_agg_vars2Various aggregative variables Enable Decision rule based on tx_fin_flag */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN_SHIFT 8
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<9) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 1 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 9
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_RTO_CF (0x3<<10) /* BitField tcp_agg_vars2Various aggregative variables counter flag for setting the rto timer */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_SET_RTO_CF_SHIFT 10
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF (0x3<<12) /* BitField tcp_agg_vars2Various aggregative variables timestamp was updated counter flag */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF_SHIFT 12
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF (0x3<<14) /* BitField tcp_agg_vars2Various aggregative variables auxiliary counter flag 8 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_SHIFT 14
uint8_t __tx_dest /* aggregated value 8 */;
uint8_t __agg_val8_th /* aggregated value 8 - threshold */;
#endif
uint32_t ack_to_far_end /* The ACK sequence to send to far end */;
uint32_t rto_timer /* The RTO timer value */;
uint32_t ka_timer /* The KA timer value */;
uint32_t ts_to_echo /* The time stamp value to echo to far end */;
#if defined(__BIG_ENDIAN)
uint16_t __agg_val7_th /* aggregated value 7 - threshold */;
uint16_t __agg_val7 /* aggregated value 7 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val7 /* aggregated value 7 */;
uint16_t __agg_val7_th /* aggregated value 7 - threshold */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars5 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars4 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars3 /* Various aggregative variables*/;
uint8_t __force_pure_ack_cnt /* The number of force ACK commands that arrived from the TSTORM */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __force_pure_ack_cnt /* The number of force ACK commands that arrived from the TSTORM */;
uint8_t __tcp_agg_vars3 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars4 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars5 /* Various aggregative variables*/;
#endif
uint32_t tcp_agg_vars6;
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_CF_EN (0x1<<0) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux7_cf */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_CF_EN_SHIFT 0
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_EN (0x1<<1) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux8_cf */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_EN_SHIFT 1
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX9_CF_EN (0x1<<2) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux9_cf */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX9_CF_EN_SHIFT 2
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN (0x1<<3) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux10_cf */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN_SHIFT 3
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX6_FLAG (0x1<<4) /* BitField tcp_agg_vars6Various aggregative variables auxiliary flag 6 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX6_FLAG_SHIFT 4
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX7_FLAG (0x1<<5) /* BitField tcp_agg_vars6Various aggregative variables auxiliary flag 7 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX7_FLAG_SHIFT 5
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX5_CF (0x3<<6) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 5 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX5_CF_SHIFT 6
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX9_CF (0x3<<8) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 9 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX9_CF_SHIFT 8
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX10_CF (0x3<<10) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 10 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX10_CF_SHIFT 10
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX11_CF (0x3<<12) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 11 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX11_CF_SHIFT 12
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX12_CF (0x3<<14) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 12 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX12_CF_SHIFT 14
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX13_CF (0x3<<16) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 13 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX13_CF_SHIFT 16
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX14_CF (0x3<<18) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 14 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX14_CF_SHIFT 18
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX15_CF (0x3<<20) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 15 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX15_CF_SHIFT 20
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX16_CF (0x3<<22) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 16 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX16_CF_SHIFT 22
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX17_CF (0x3<<24) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 17 */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_AUX17_CF_SHIFT 24
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ECE_FLAG (0x1<<26) /* BitField tcp_agg_vars6Various aggregative variables Can also be used as general purpose if ECN is not used */
#define XSTORM_TCP_TCP_AG_CONTEXT_SECTION_ECE_FLAG_SHIFT 26
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_RESERVED71 (0x1<<27) /* BitField tcp_agg_vars6Various aggregative variables Can also be used as general purpose if ECN is not used */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_RESERVED71_SHIFT 27
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_FORCE_PURE_ACK_CNT_DIRTY (0x1<<28) /* BitField tcp_agg_vars6Various aggregative variables This flag is set if the Force ACK count is set by the TSTORM. On QM output it is cleared. */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_FORCE_PURE_ACK_CNT_DIRTY_SHIFT 28
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TCP_AUTO_STOP_FLAG (0x1<<29) /* BitField tcp_agg_vars6Various aggregative variables Indicates that the connection is in autostop mode */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_TCP_AUTO_STOP_FLAG_SHIFT 29
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DO_TS_UPDATE_FLAG (0x1<<30) /* BitField tcp_agg_vars6Various aggregative variables This bit is used like a one-shot that the TSTORM fires and the XSTORM arms. Used to allow a single TS update for each transmission */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_DO_TS_UPDATE_FLAG_SHIFT 30
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_CANCEL_RETRANSMIT_FLAG (0x1<<31) /* BitField tcp_agg_vars6Various aggregative variables This bit is set by the TSTORM when it needs to cancel a previous fast retransmit */
#define __XSTORM_TCP_TCP_AG_CONTEXT_SECTION_CANCEL_RETRANSMIT_FLAG_SHIFT 31
#if defined(__BIG_ENDIAN)
uint16_t __agg_misc6 /* Misc aggregated variable 6 */;
uint16_t __tcp_agg_vars7 /* Various aggregative variables*/;
#elif defined(__LITTLE_ENDIAN)
uint16_t __tcp_agg_vars7 /* Various aggregative variables*/;
uint16_t __agg_misc6 /* Misc aggregated variable 6 */;
#endif
uint32_t __agg_val10 /* aggregated value 10 */;
uint32_t __agg_val10_th /* aggregated value 10 - threshold */;
#if defined(__BIG_ENDIAN)
uint16_t __reserved3;
uint8_t __reserved2;
uint8_t __da_only_cnt /* counts delayed acks and not window updates */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __da_only_cnt /* counts delayed acks and not window updates */;
uint8_t __reserved2;
uint16_t __reserved3;
#endif
};
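/*
 * Illustrative sketch, not part of the original header: snd_una,
 * snd_nxt and tx_wnd in the section above follow standard TCP
 * bookkeeping, so bytes in flight and the remaining send room fall out
 * of modular 32-bit sequence arithmetic.  The helper names below are
 * hypothetical, for illustration only.
 */
static inline uint32_t
xstorm_tcp_ag_bytes_in_flight(const struct xstorm_tcp_tcp_ag_context_section *t)
{
	/* uint32_t subtraction handles sequence-number wrap-around */
	return (t->snd_nxt - t->snd_una);
}

static inline uint32_t
xstorm_tcp_ag_send_room(const struct xstorm_tcp_tcp_ag_context_section *t)
{
	uint32_t inflight = xstorm_tcp_ag_bytes_in_flight(t);

	/* clamp to zero if the window has already been overrun */
	return (inflight < t->tx_wnd ? t->tx_wnd - inflight : 0);
}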
/*
* The iscsi aggregative context of Xstorm
*/
struct xstorm_iscsi_ag_context
{
#if defined(__BIG_ENDIAN)
uint16_t agg_val1 /* aggregated value 1 */;
uint8_t agg_vars1;
#define __XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define __XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define __XSTORM_ISCSI_AG_CONTEXT_MORE_TO_SEND_EN (0x1<<4) /* BitField agg_vars1Various aggregative variables Enables the decision rule of more_to_Send > 0 */
#define __XSTORM_ISCSI_AG_CONTEXT_MORE_TO_SEND_EN_SHIFT 4
#define XSTORM_ISCSI_AG_CONTEXT_NAGLE_EN (0x1<<5) /* BitField agg_vars1Various aggregative variables Enables the nagle decision */
#define XSTORM_ISCSI_AG_CONTEXT_NAGLE_EN_SHIFT 5
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables Used for future indication by the Driver on a doorbell */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG_SHIFT 6
#define __XSTORM_ISCSI_AG_CONTEXT_UNA_GT_NXT_EN (0x1<<7) /* BitField agg_vars1Various aggregative variables Enable decision rules based on equality between snd_una and snd_nxt */
#define __XSTORM_ISCSI_AG_CONTEXT_UNA_GT_NXT_EN_SHIFT 7
uint8_t state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t state /* The state of the connection */;
uint8_t agg_vars1;
#define __XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define __XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM1_SHIFT 1
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM2_SHIFT 2
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define XSTORM_ISCSI_AG_CONTEXT_EXISTS_IN_QM3_SHIFT 3
#define __XSTORM_ISCSI_AG_CONTEXT_MORE_TO_SEND_EN (0x1<<4) /* BitField agg_vars1Various aggregative variables Enables the decision rule of more_to_Send > 0 */
#define __XSTORM_ISCSI_AG_CONTEXT_MORE_TO_SEND_EN_SHIFT 4
#define XSTORM_ISCSI_AG_CONTEXT_NAGLE_EN (0x1<<5) /* BitField agg_vars1Various aggregative variables Enables the nagle decision */
#define XSTORM_ISCSI_AG_CONTEXT_NAGLE_EN_SHIFT 5
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables Used for future indication by the Driver on a doorbell */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG_SHIFT 6
#define __XSTORM_ISCSI_AG_CONTEXT_UNA_GT_NXT_EN (0x1<<7) /* BitField agg_vars1Various aggregative variables Enable decision rules based on equality between snd_una and snd_nxt */
#define __XSTORM_ISCSI_AG_CONTEXT_UNA_GT_NXT_EN_SHIFT 7
uint16_t agg_val1 /* aggregated value 1 */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
uint8_t __agg_vars4 /* Various aggregative variables*/;
uint8_t agg_vars3;
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM2 (0x3F<<0) /* BitField agg_vars3Various aggregative variables The physical queue number of queue index 2 */
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM2_SHIFT 0
#define __XSTORM_ISCSI_AG_CONTEXT_RX_TS_EN_CF (0x3<<6) /* BitField agg_vars3Various aggregative variables auxiliary counter flag 19 */
#define __XSTORM_ISCSI_AG_CONTEXT_RX_TS_EN_CF_SHIFT 6
uint8_t agg_vars2;
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF (0x3<<0) /* BitField agg_vars2Various aggregative variables auxiliary counter flag 4 */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF_SHIFT 0
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG_EN (0x1<<2) /* BitField agg_vars2Various aggregative variables Enable decision rule based on dq_spare_flag */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG_EN_SHIFT 2
#define __XSTORM_ISCSI_AG_CONTEXT_AUX8_FLAG (0x1<<3) /* BitField agg_vars2Various aggregative variables auxiliary flag 8 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX8_FLAG_SHIFT 3
#define __XSTORM_ISCSI_AG_CONTEXT_AUX9_FLAG (0x1<<4) /* BitField agg_vars2Various aggregative variables auxiliary flag 9 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX9_FLAG_SHIFT 4
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE1 (0x3<<5) /* BitField agg_vars2Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE1_SHIFT 5
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField agg_vars2Various aggregative variables Enable decision rules based on aux4_cf */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF_EN_SHIFT 7
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_vars2;
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF (0x3<<0) /* BitField agg_vars2Various aggregative variables auxiliary counter flag 4 */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF_SHIFT 0
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG_EN (0x1<<2) /* BitField agg_vars2Various aggregative variables Enable decision rule based on dq_spare_flag */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_SPARE_FLAG_EN_SHIFT 2
#define __XSTORM_ISCSI_AG_CONTEXT_AUX8_FLAG (0x1<<3) /* BitField agg_vars2Various aggregative variables auxiliary flag 8 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX8_FLAG_SHIFT 3
#define __XSTORM_ISCSI_AG_CONTEXT_AUX9_FLAG (0x1<<4) /* BitField agg_vars2Various aggregative variables auxiliary flag 9 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX9_FLAG_SHIFT 4
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE1 (0x3<<5) /* BitField agg_vars2Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE1_SHIFT 5
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField agg_vars2Various aggregative variables Enable decision rules based on aux4_cf */
#define __XSTORM_ISCSI_AG_CONTEXT_DQ_CF_EN_SHIFT 7
uint8_t agg_vars3;
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM2 (0x3F<<0) /* BitField agg_vars3Various aggregative variables The physical queue number of queue index 2 */
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM2_SHIFT 0
#define __XSTORM_ISCSI_AG_CONTEXT_RX_TS_EN_CF (0x3<<6) /* BitField agg_vars3Various aggregative variables auxiliary counter flag 19 */
#define __XSTORM_ISCSI_AG_CONTEXT_RX_TS_EN_CF_SHIFT 6
uint8_t __agg_vars4 /* Various aggregative variables*/;
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
#endif
uint32_t more_to_send /* The number of bytes left to send */;
#if defined(__BIG_ENDIAN)
uint16_t agg_vars5;
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE5 (0x3<<0) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE5_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM0 (0x3F<<2) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 0 */
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM0_SHIFT 2
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM1 (0x3F<<8) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 1 */
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM1_SHIFT 8
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE2 (0x3<<14) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE2_SHIFT 14
uint16_t sq_cons /* aggregated value 4 - threshold */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_cons /* aggregated value 4 - threshold */;
uint16_t agg_vars5;
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE5 (0x3<<0) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE5_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM0 (0x3F<<2) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 0 */
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM0_SHIFT 2
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM1 (0x3F<<8) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 1 */
#define XSTORM_ISCSI_AG_CONTEXT_PHYSICAL_QUEUE_NUM1_SHIFT 8
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE2 (0x3<<14) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE2_SHIFT 14
#endif
struct xstorm_tcp_tcp_ag_context_section tcp /* TCP context section, shared in TOE and ISCSI */;
#if defined(__BIG_ENDIAN)
uint16_t agg_vars7;
#define __XSTORM_ISCSI_AG_CONTEXT_AGG_VAL11_DECISION_RULE (0x7<<0) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define __XSTORM_ISCSI_AG_CONTEXT_AGG_VAL11_DECISION_RULE_SHIFT 0
#define __XSTORM_ISCSI_AG_CONTEXT_AUX13_FLAG (0x1<<3) /* BitField agg_vars7Various aggregative variables auxiliary flag 13 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX13_FLAG_SHIFT 3
#define __XSTORM_ISCSI_AG_CONTEXT_STORMS_SYNC_CF (0x3<<4) /* BitField agg_vars7Various aggregative variables Sync Tstorm and Xstorm */
#define __XSTORM_ISCSI_AG_CONTEXT_STORMS_SYNC_CF_SHIFT 4
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE3 (0x3<<6) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE3_SHIFT 6
#define XSTORM_ISCSI_AG_CONTEXT_AUX1_CF (0x3<<8) /* BitField agg_vars7Various aggregative variables auxiliary counter flag 1 */
#define XSTORM_ISCSI_AG_CONTEXT_AUX1_CF_SHIFT 8
#define __XSTORM_ISCSI_AG_CONTEXT_COMPLETION_SEQ_DECISION_MASK (0x1<<10) /* BitField agg_vars7Various aggregative variables Mask the check of the completion sequence on retransmit */
#define __XSTORM_ISCSI_AG_CONTEXT_COMPLETION_SEQ_DECISION_MASK_SHIFT 10
#define __XSTORM_ISCSI_AG_CONTEXT_AUX1_CF_EN (0x1<<11) /* BitField agg_vars7Various aggregative variables Enable decision rules based on aux1_cf */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX1_CF_EN_SHIFT 11
#define __XSTORM_ISCSI_AG_CONTEXT_AUX10_FLAG (0x1<<12) /* BitField agg_vars7Various aggregative variables auxiliary flag 10 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX10_FLAG_SHIFT 12
#define __XSTORM_ISCSI_AG_CONTEXT_AUX11_FLAG (0x1<<13) /* BitField agg_vars7Various aggregative variables auxiliary flag 11 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX11_FLAG_SHIFT 13
#define __XSTORM_ISCSI_AG_CONTEXT_AUX12_FLAG (0x1<<14) /* BitField agg_vars7Various aggregative variables auxiliary flag 12 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX12_FLAG_SHIFT 14
#define __XSTORM_ISCSI_AG_CONTEXT_RX_WND_SCL_EN (0x1<<15) /* BitField agg_vars7Various aggregative variables auxiliary flag 2 */
#define __XSTORM_ISCSI_AG_CONTEXT_RX_WND_SCL_EN_SHIFT 15
uint8_t agg_val3_th /* Aggregated value 3 - threshold */;
uint8_t agg_vars6;
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE6 (0x7<<0) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE6_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE7 (0x7<<3) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE7_SHIFT 3
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE4 (0x3<<6) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE4_SHIFT 6
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_vars6;
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE6 (0x7<<0) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE6_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE7 (0x7<<3) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE7_SHIFT 3
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE4 (0x3<<6) /* BitField agg_vars6Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE4_SHIFT 6
uint8_t agg_val3_th /* Aggregated value 3 - threshold */;
uint16_t agg_vars7;
#define __XSTORM_ISCSI_AG_CONTEXT_AGG_VAL11_DECISION_RULE (0x7<<0) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ,3-GT_CYC,4-GT_ABS,5-LT_CYC,6-LT_ABS */
#define __XSTORM_ISCSI_AG_CONTEXT_AGG_VAL11_DECISION_RULE_SHIFT 0
#define __XSTORM_ISCSI_AG_CONTEXT_AUX13_FLAG (0x1<<3) /* BitField agg_vars7Various aggregative variables auxiliary flag 13 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX13_FLAG_SHIFT 3
#define __XSTORM_ISCSI_AG_CONTEXT_STORMS_SYNC_CF (0x3<<4) /* BitField agg_vars7Various aggregative variables Sync Tstorm and Xstorm */
#define __XSTORM_ISCSI_AG_CONTEXT_STORMS_SYNC_CF_SHIFT 4
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE3 (0x3<<6) /* BitField agg_vars7Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_ISCSI_AG_CONTEXT_DECISION_RULE3_SHIFT 6
#define XSTORM_ISCSI_AG_CONTEXT_AUX1_CF (0x3<<8) /* BitField agg_vars7Various aggregative variables auxiliary counter flag 1 */
#define XSTORM_ISCSI_AG_CONTEXT_AUX1_CF_SHIFT 8
#define __XSTORM_ISCSI_AG_CONTEXT_COMPLETION_SEQ_DECISION_MASK (0x1<<10) /* BitField agg_vars7Various aggregative variables Mask the check of the completion sequence on retransmit */
#define __XSTORM_ISCSI_AG_CONTEXT_COMPLETION_SEQ_DECISION_MASK_SHIFT 10
#define __XSTORM_ISCSI_AG_CONTEXT_AUX1_CF_EN (0x1<<11) /* BitField agg_vars7Various aggregative variables Enable decision rules based on aux1_cf */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX1_CF_EN_SHIFT 11
#define __XSTORM_ISCSI_AG_CONTEXT_AUX10_FLAG (0x1<<12) /* BitField agg_vars7Various aggregative variables auxiliary flag 10 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX10_FLAG_SHIFT 12
#define __XSTORM_ISCSI_AG_CONTEXT_AUX11_FLAG (0x1<<13) /* BitField agg_vars7Various aggregative variables auxiliary flag 11 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX11_FLAG_SHIFT 13
#define __XSTORM_ISCSI_AG_CONTEXT_AUX12_FLAG (0x1<<14) /* BitField agg_vars7Various aggregative variables auxiliary flag 12 */
#define __XSTORM_ISCSI_AG_CONTEXT_AUX12_FLAG_SHIFT 14
#define __XSTORM_ISCSI_AG_CONTEXT_RX_WND_SCL_EN (0x1<<15) /* BitField agg_vars7Various aggregative variables auxiliary flag 2 */
#define __XSTORM_ISCSI_AG_CONTEXT_RX_WND_SCL_EN_SHIFT 15
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val11_th /* aggregated value 11 - threshold */;
uint16_t __gen_data /* Used for Iscsi. In connection establishment, it is used as rxMss, and in connection termination, it is used as the command Id: 1=L5CM_TX_ACK_ON_FIN_CMD 2=L5CM_SET_MSL_TIMER_CMD 3=L5CM_TX_RST_CMD */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __gen_data /* Used for Iscsi. In connection establishment, it is used as rxMss, and in connection termination, it is used as the command Id: 1=L5CM_TX_ACK_ON_FIN_CMD 2=L5CM_SET_MSL_TIMER_CMD 3=L5CM_TX_RST_CMD */;
uint16_t __agg_val11_th /* aggregated value 11 - threshold */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __reserved1;
uint8_t __agg_val6_th /* aggregated value 6 - threshold */;
uint16_t __agg_val9 /* aggregated value 9 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val9 /* aggregated value 9 */;
uint8_t __agg_val6_th /* aggregated value 6 - threshold */;
uint8_t __reserved1;
#endif
#if defined(__BIG_ENDIAN)
uint16_t hq_prod /* The HQ producer threshold to compare against the HQ consumer, which is the current HQ producer +1 - AggVal2Th */;
uint16_t hq_cons /* HQ Consumer, updated by Cstorm - AggVal2 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t hq_cons /* HQ Consumer, updated by Cstorm - AggVal2 */;
uint16_t hq_prod /* The HQ producer threshold to compare against the HQ consumer, which is the current HQ producer +1 - AggVal2Th */;
#endif
uint32_t agg_varint8_t;
#define XSTORM_ISCSI_AG_CONTEXT_AGG_MISC2 (0xFFFFFF<<0) /* BitField agg_varint8_tVarious aggregative variables Misc aggregated variable 2 */
#define XSTORM_ISCSI_AG_CONTEXT_AGG_MISC2_SHIFT 0
#define XSTORM_ISCSI_AG_CONTEXT_AGG_MISC3 (0xFF<<24) /* BitField agg_varint8_tVarious aggregative variables Misc aggregated variable 3 */
#define XSTORM_ISCSI_AG_CONTEXT_AGG_MISC3_SHIFT 24
#if defined(__BIG_ENDIAN)
uint16_t r2tq_prod /* Misc aggregated variable 0 */;
uint16_t sq_prod /* SQ Producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_prod /* SQ Producer */;
uint16_t r2tq_prod /* Misc aggregated variable 0 */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t agg_val3 /* Aggregated value 3 */;
uint8_t agg_val6 /* Aggregated value 6 */;
uint8_t agg_val5_th /* Aggregated value 5 - threshold */;
uint8_t agg_val5 /* Aggregated value 5 */;
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_val5 /* Aggregated value 5 */;
uint8_t agg_val5_th /* Aggregated value 5 - threshold */;
uint8_t agg_val6 /* Aggregated value 6 */;
uint8_t agg_val3 /* Aggregated value 3 */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_misc1 /* Spare value for aggregation. NOTE: this value is used in the retransmit decision rule if CmpSeqDecMask is 0. In that case it is intended to be CmpBdSize. */;
uint16_t agg_limit1 /* aggregated limit 1 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t agg_limit1 /* aggregated limit 1 */;
uint16_t __agg_misc1 /* Spare value for aggregation. NOTE: this value is used in the retransmit decision rule if CmpSeqDecMask is 0. In that case it is intended to be CmpBdSize. */;
#endif
uint32_t hq_cons_tcp_seq /* TCP sequence of the HQ BD pointed by hq_cons */;
uint32_t exp_stat_sn /* expected status SN, updated by Ustorm */;
uint32_t rst_seq_num /* spare aggregated variable 5 */;
};
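/*
 * Illustrative sketch, not part of the original header: sq_prod and
 * sq_cons above appear to be the usual free-running producer/consumer
 * indices for the SQ ring.  Assuming a power-of-two ring size
 * (EXAMPLE_ISCSI_SQ_SIZE is a hypothetical value, for illustration
 * only), the free-slot count is plain 16-bit modular arithmetic.
 */
#define EXAMPLE_ISCSI_SQ_SIZE 256	/* hypothetical ring size */

static inline uint16_t
xstorm_iscsi_ag_sq_free_slots(const struct xstorm_iscsi_ag_context *ag)
{
	/* uint16_t subtraction wraps, so this holds across wrap-around */
	return (uint16_t)(EXAMPLE_ISCSI_SQ_SIZE -
	    (uint16_t)(ag->sq_prod - ag->sq_cons));
}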
/*
* The toe aggregative context section of Xstorm
*/
struct xstorm_toe_tcp_ag_context_section
{
#if defined(__BIG_ENDIAN)
uint8_t tcp_agg_vars1;
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF (0x3<<0) /* BitField tcp_agg_vars1Various aggregative variables Counter flag used to rewind the DA timer */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF_SHIFT 0
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 2 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_SHIFT 2
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 3 */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_SHIFT 4
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables If set enables sending clear commands as part of the DA decision rules */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN_SHIFT 6
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Indicates that there was a delayed ack timer expiration */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG_SHIFT 7
uint8_t __da_cnt /* Counts the number of ACK requests received from the TSTORM with no registration to QM. */;
uint16_t mss /* MSS used for nagle algorithm and for transmission */;
#elif defined(__LITTLE_ENDIAN)
uint16_t mss /* MSS used for nagle algorithm and for transmission */;
uint8_t __da_cnt /* Counts the number of ACK requests received from the TSTORM with no registration to QM. */;
uint8_t tcp_agg_vars1;
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF (0x3<<0) /* BitField tcp_agg_vars1Various aggregative variables Counter flag used to rewind the DA timer */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_DA_TIMER_CF_SHIFT 0
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED (0x3<<2) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 2 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_SHIFT 2
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF (0x3<<4) /* BitField tcp_agg_vars1Various aggregative variables auxiliary counter flag 3 */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_SHIFT 4
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN (0x1<<6) /* BitField tcp_agg_vars1Various aggregative variables If set enables sending clear commands as part of the DA decision rules */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_CLEAR_DA_TIMER_EN_SHIFT 6
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG (0x1<<7) /* BitField tcp_agg_vars1Various aggregative variables Indicates that there was a delayed ack timer expiration */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_EXPIRATION_FLAG_SHIFT 7
#endif
uint32_t snd_nxt /* The current sequence number to send */;
uint32_t tx_wnd /* The current transmission window in bytes */;
uint32_t snd_una /* The current Send UNA sequence number */;
uint32_t local_adv_wnd /* The current local advertised window to FE. */;
#if defined(__BIG_ENDIAN)
uint8_t __agg_val8_th /* aggregated value 8 - threshold */;
uint8_t __tx_dest /* aggregated value 8 */;
uint16_t tcp_agg_vars2;
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG (0x1<<0) /* BitField tcp_agg_vars2Various aggregative variables Used in TOE to indicate that FIN is sent on a BD to bypass the nagle rule */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_SHIFT 0
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED (0x1<<1) /* BitField tcp_agg_vars2Various aggregative variables Enables the tx window based decision */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED_SHIFT 1
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE (0x1<<2) /* BitField tcp_agg_vars2Various aggregative variables The DA Timer status. If set indicates that the delayed ACK timer is active. */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE_SHIFT 2
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX3_FLAG (0x1<<3) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 3 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX3_FLAG_SHIFT 3
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX4_FLAG (0x1<<4) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 4 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX4_FLAG_SHIFT 4
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_ENABLE (0x1<<5) /* BitField tcp_agg_vars2Various aggregative variables Enable DA for the specific connection */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_ENABLE_SHIFT 5
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN (0x1<<6) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux2_cf */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN_SHIFT 6
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN (0x1<<7) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux3_cf */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN_SHIFT 7
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN (0x1<<8) /* BitField tcp_agg_vars2Various aggregative variables Enable Decision rule based on tx_fin_flag */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN_SHIFT 8
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<9) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 1 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 9
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_RTO_CF (0x3<<10) /* BitField tcp_agg_vars2Various aggregative variables counter flag for setting the rto timer */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_RTO_CF_SHIFT 10
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF (0x3<<12) /* BitField tcp_agg_vars2Various aggregative variables timestamp was updated counter flag */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF_SHIFT 12
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF (0x3<<14) /* BitField tcp_agg_vars2Various aggregative variables auxiliary counter flag 8 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_SHIFT 14
#elif defined(__LITTLE_ENDIAN)
uint16_t tcp_agg_vars2;
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG (0x1<<0) /* BitField tcp_agg_vars2Various aggregative variables Used in TOE to indicate that FIN is sent on a BD to bypass the nagle rule */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_SHIFT 0
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED (0x1<<1) /* BitField tcp_agg_vars2Various aggregative variables Enables the tx window based decision */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_UNBLOCKED_SHIFT 1
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE (0x1<<2) /* BitField tcp_agg_vars2Various aggregative variables The DA Timer status. If set indicates that the delayed ACK timer is active. */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_TIMER_ACTIVE_SHIFT 2
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX3_FLAG (0x1<<3) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 3 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX3_FLAG_SHIFT 3
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX4_FLAG (0x1<<4) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 4 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX4_FLAG_SHIFT 4
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_ENABLE (0x1<<5) /* BitField tcp_agg_vars2Various aggregative variables Enable DA for the specific connection */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DA_ENABLE_SHIFT 5
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN (0x1<<6) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux2_cf */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ACK_TO_FE_UPDATED_EN_SHIFT 6
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN (0x1<<7) /* BitField tcp_agg_vars2Various aggregative variables Enable decision rules based on aux3_cf */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SIDEBAND_SENT_CF_EN_SHIFT 7
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN (0x1<<8) /* BitField tcp_agg_vars2Various aggregative variables Enable Decision rule based on tx_fin_flag */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_FIN_FLAG_EN_SHIFT 8
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX1_FLAG (0x1<<9) /* BitField tcp_agg_vars2Various aggregative variables auxiliary flag 1 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX1_FLAG_SHIFT 9
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_RTO_CF (0x3<<10) /* BitField tcp_agg_vars2Various aggregative variables counter flag for setting the rto timer */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_SET_RTO_CF_SHIFT 10
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF (0x3<<12) /* BitField tcp_agg_vars2Various aggregative variables timestamp was updated counter flag */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_UPDATED_CF_SHIFT 12
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF (0x3<<14) /* BitField tcp_agg_vars2Various aggregative variables auxiliary counter flag 8 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_SHIFT 14
uint8_t __tx_dest /* aggregated value 8 */;
uint8_t __agg_val8_th /* aggregated value 8 - threshold */;
#endif
uint32_t ack_to_far_end /* The ACK sequence to send to far end */;
uint32_t rto_timer /* The RTO timer value */;
uint32_t ka_timer /* The KA timer value */;
uint32_t ts_to_echo /* The time stamp value to echo to far end */;
#if defined(__BIG_ENDIAN)
uint16_t __agg_val7_th /* aggregated value 7 - threshold */;
uint16_t __agg_val7 /* aggregated value 7 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val7 /* aggregated value 7 */;
uint16_t __agg_val7_th /* aggregated value 7 - threshold */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __tcp_agg_vars5 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars4 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars3 /* Various aggregative variables*/;
uint8_t __force_pure_ack_cnt /* The number of force ACK commands that arrived from the TSTORM */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __force_pure_ack_cnt /* The number of force ACK commands that arrived from the TSTORM */;
uint8_t __tcp_agg_vars3 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars4 /* Various aggregative variables*/;
uint8_t __tcp_agg_vars5 /* Various aggregative variables*/;
#endif
uint32_t tcp_agg_vars6;
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_CF_EN (0x1<<0) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux7_cf */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TS_TO_ECHO_CF_EN_SHIFT 0
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_EN (0x1<<1) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux8_cf */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TX_DEST_UPDATED_CF_EN_SHIFT 1
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX9_CF_EN (0x1<<2) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux9_cf */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX9_CF_EN_SHIFT 2
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN (0x1<<3) /* BitField tcp_agg_vars6Various aggregative variables Enable decision rules based on aux10_cf */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX10_CF_EN_SHIFT 3
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX6_FLAG (0x1<<4) /* BitField tcp_agg_vars6Various aggregative variables auxiliary flag 6 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX6_FLAG_SHIFT 4
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX7_FLAG (0x1<<5) /* BitField tcp_agg_vars6Various aggregative variables auxiliary flag 7 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX7_FLAG_SHIFT 5
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX5_CF (0x3<<6) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 5 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX5_CF_SHIFT 6
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX9_CF (0x3<<8) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 9 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX9_CF_SHIFT 8
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX10_CF (0x3<<10) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 10 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX10_CF_SHIFT 10
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX11_CF (0x3<<12) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 11 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX11_CF_SHIFT 12
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX12_CF (0x3<<14) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 12 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX12_CF_SHIFT 14
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX13_CF (0x3<<16) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 13 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX13_CF_SHIFT 16
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX14_CF (0x3<<18) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 14 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX14_CF_SHIFT 18
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX15_CF (0x3<<20) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 15 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX15_CF_SHIFT 20
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX16_CF (0x3<<22) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 16 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX16_CF_SHIFT 22
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX17_CF (0x3<<24) /* BitField tcp_agg_vars6Various aggregative variables auxiliary counter flag 17 */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_AUX17_CF_SHIFT 24
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ECE_FLAG (0x1<<26) /* BitField tcp_agg_vars6Various aggregative variables Can also be used as general purpose if ECN is not used */
#define XSTORM_TOE_TCP_AG_CONTEXT_SECTION_ECE_FLAG_SHIFT 26
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED71 (0x1<<27) /* BitField tcp_agg_vars6Various aggregative variables Can also be used as general purpose if ECN is not used */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_RESERVED71_SHIFT 27
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_FORCE_PURE_ACK_CNT_DIRTY (0x1<<28) /* BitField tcp_agg_vars6Various aggregative variables This flag is set if the Force ACK count is set by the TSTORM. On QM output it is cleared. */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_FORCE_PURE_ACK_CNT_DIRTY_SHIFT 28
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TCP_AUTO_STOP_FLAG (0x1<<29) /* BitField tcp_agg_vars6Various aggregative variables Indicates that the connection is in autostop mode */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_TCP_AUTO_STOP_FLAG_SHIFT 29
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DO_TS_UPDATE_FLAG (0x1<<30) /* BitField tcp_agg_vars6Various aggregative variables This bit is used like a one-shot that the TSTORM fires and the XSTORM arms. Used to allow a single TS update for each transmission */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_DO_TS_UPDATE_FLAG_SHIFT 30
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_CANCEL_RETRANSMIT_FLAG (0x1<<31) /* BitField tcp_agg_vars6Various aggregative variables This bit is set by the TSTORM when it needs to cancel a previous fast retransmit */
#define __XSTORM_TOE_TCP_AG_CONTEXT_SECTION_CANCEL_RETRANSMIT_FLAG_SHIFT 31
#if defined(__BIG_ENDIAN)
uint16_t __agg_misc6 /* Misc aggregated variable 6 */;
uint16_t __tcp_agg_vars7 /* Various aggregative variables*/;
#elif defined(__LITTLE_ENDIAN)
uint16_t __tcp_agg_vars7 /* Various aggregative variables*/;
uint16_t __agg_misc6 /* Misc aggregated variable 6 */;
#endif
uint32_t __agg_val10 /* aggregated value 10 */;
uint32_t __agg_val10_th /* aggregated value 10 - threshold */;
#if defined(__BIG_ENDIAN)
uint16_t __reserved3;
uint8_t __reserved2;
uint8_t __da_only_cnt /* counts delayed acks and not window updates */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __da_only_cnt /* counts delayed acks and not window updates */;
uint8_t __reserved2;
uint16_t __reserved3;
#endif
};
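/*
 * Illustrative note, not part of the original header: the paired
 * __BIG_ENDIAN/__LITTLE_ENDIAN blocks above mirror field order within
 * each 32-bit word so the in-memory layout the chip sees is identical
 * on either host byte order.  Every field is naturally aligned, so the
 * section should pack to exactly 68 bytes (the sum of the fields
 * above); a C11 static assertion makes that assumption explicit.
 */
_Static_assert(sizeof(struct xstorm_toe_tcp_ag_context_section) == 68,
    "xstorm_toe_tcp_ag_context_section layout changed");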
/*
* The toe aggregative context of Xstorm
*/
struct xstorm_toe_ag_context
{
#if defined(__BIG_ENDIAN)
uint16_t agg_val1 /* aggregated value 1 */;
uint8_t agg_vars1;
#define __XSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define __XSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define __XSTORM_TOE_AG_CONTEXT_RESERVED50 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED50_SHIFT 1
#define __XSTORM_TOE_AG_CONTEXT_RESERVED51 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED51_SHIFT 2
#define __XSTORM_TOE_AG_CONTEXT_RESERVED52 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED52_SHIFT 3
#define __XSTORM_TOE_AG_CONTEXT_MORE_TO_SEND_EN (0x1<<4) /* BitField agg_vars1Various aggregative variables Enables the decision rule of more_to_Send > 0 */
#define __XSTORM_TOE_AG_CONTEXT_MORE_TO_SEND_EN_SHIFT 4
#define XSTORM_TOE_AG_CONTEXT_NAGLE_EN (0x1<<5) /* BitField agg_vars1Various aggregative variables Enables the nagle decision */
#define XSTORM_TOE_AG_CONTEXT_NAGLE_EN_SHIFT 5
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables used to indicate last doorbell for specific connection */
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG_SHIFT 6
#define __XSTORM_TOE_AG_CONTEXT_UNA_GT_NXT_EN (0x1<<7) /* BitField agg_vars1Various aggregative variables Enable decision rules based on equality between snd_una and snd_nxt */
#define __XSTORM_TOE_AG_CONTEXT_UNA_GT_NXT_EN_SHIFT 7
uint8_t __state /* The state of the connection */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __state /* The state of the connection */;
uint8_t agg_vars1;
#define __XSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0 (0x1<<0) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 0 */
#define __XSTORM_TOE_AG_CONTEXT_EXISTS_IN_QM0_SHIFT 0
#define __XSTORM_TOE_AG_CONTEXT_RESERVED50 (0x1<<1) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 1 */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED50_SHIFT 1
#define __XSTORM_TOE_AG_CONTEXT_RESERVED51 (0x1<<2) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 2 */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED51_SHIFT 2
#define __XSTORM_TOE_AG_CONTEXT_RESERVED52 (0x1<<3) /* BitField agg_vars1Various aggregative variables The connection is currently registered to the QM with queue index 3 */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED52_SHIFT 3
#define __XSTORM_TOE_AG_CONTEXT_MORE_TO_SEND_EN (0x1<<4) /* BitField agg_vars1Various aggregative variables Enables the decision rule of more_to_Send > 0 */
#define __XSTORM_TOE_AG_CONTEXT_MORE_TO_SEND_EN_SHIFT 4
#define XSTORM_TOE_AG_CONTEXT_NAGLE_EN (0x1<<5) /* BitField agg_vars1Various aggregative variables Enables the nagle decision */
#define XSTORM_TOE_AG_CONTEXT_NAGLE_EN_SHIFT 5
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG (0x1<<6) /* BitField agg_vars1Various aggregative variables used to indicate last doorbell for specific connection */
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG_SHIFT 6
#define __XSTORM_TOE_AG_CONTEXT_UNA_GT_NXT_EN (0x1<<7) /* BitField agg_vars1Various aggregative variables Enable decision rules based on equality between snd_una and snd_nxt */
#define __XSTORM_TOE_AG_CONTEXT_UNA_GT_NXT_EN_SHIFT 7
uint16_t agg_val1 /* aggregated value 1 */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
uint8_t __agg_vars4 /* Various aggregative variables*/;
uint8_t agg_vars3;
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2 (0x3F<<0) /* BitField agg_vars3Various aggregative variables The physical queue number of queue index 2 */
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2_SHIFT 0
#define __XSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q1_CF (0x3<<6) /* BitField agg_vars3Various aggregative variables auxiliary counter flag 19 */
#define __XSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q1_CF_SHIFT 6
uint8_t agg_vars2;
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF (0x3<<0) /* BitField agg_vars2Various aggregative variables auxiliary counter flag 4 */
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF_SHIFT 0
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG_EN (0x1<<2) /* BitField agg_vars2Various aggregative variables Enable decision rule based on dq_spare_flag */
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG_EN_SHIFT 2
#define __XSTORM_TOE_AG_CONTEXT_AUX8_FLAG (0x1<<3) /* BitField agg_vars2Various aggregative variables auxiliary flag 8 */
#define __XSTORM_TOE_AG_CONTEXT_AUX8_FLAG_SHIFT 3
#define __XSTORM_TOE_AG_CONTEXT_AUX9_FLAG (0x1<<4) /* BitField agg_vars2Various aggregative variables auxiliary flag 9 */
#define __XSTORM_TOE_AG_CONTEXT_AUX9_FLAG_SHIFT 4
#define XSTORM_TOE_AG_CONTEXT_RESERVED53 (0x3<<5) /* BitField agg_vars2Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_TOE_AG_CONTEXT_RESERVED53_SHIFT 5
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField agg_vars2Various aggregative variables Enable decision rules based on aux4_cf */
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF_EN_SHIFT 7
#elif defined(__LITTLE_ENDIAN)
uint8_t agg_vars2;
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF (0x3<<0) /* BitField agg_vars2Various aggregative variables auxiliary counter flag 4 */
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF_SHIFT 0
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG_EN (0x1<<2) /* BitField agg_vars2Various aggregative variables Enable decision rule based on dq_spare_flag */
#define __XSTORM_TOE_AG_CONTEXT_DQ_FLUSH_FLAG_EN_SHIFT 2
#define __XSTORM_TOE_AG_CONTEXT_AUX8_FLAG (0x1<<3) /* BitField agg_vars2Various aggregative variables auxiliary flag 8 */
#define __XSTORM_TOE_AG_CONTEXT_AUX8_FLAG_SHIFT 3
#define __XSTORM_TOE_AG_CONTEXT_AUX9_FLAG (0x1<<4) /* BitField agg_vars2Various aggregative variables auxiliary flag 9 */
#define __XSTORM_TOE_AG_CONTEXT_AUX9_FLAG_SHIFT 4
#define XSTORM_TOE_AG_CONTEXT_RESERVED53 (0x3<<5) /* BitField agg_vars2Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define XSTORM_TOE_AG_CONTEXT_RESERVED53_SHIFT 5
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF_EN (0x1<<7) /* BitField agg_vars2Various aggregative variables Enable decision rules based on aux4_cf */
#define __XSTORM_TOE_AG_CONTEXT_DQ_CF_EN_SHIFT 7
uint8_t agg_vars3;
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2 (0x3F<<0) /* BitField agg_vars3Various aggregative variables The physical queue number of queue index 2 */
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM2_SHIFT 0
#define __XSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q1_CF (0x3<<6) /* BitField agg_vars3Various aggregative variables auxiliary counter flag 19 */
#define __XSTORM_TOE_AG_CONTEXT_QUEUES_FLUSH_Q1_CF_SHIFT 6
uint8_t __agg_vars4 /* Various aggregative variables*/;
uint8_t cdu_reserved /* Used by the CDU for validation and debugging */;
#endif
uint32_t more_to_send /* The number of bytes left to send */;
#if defined(__BIG_ENDIAN)
uint16_t agg_vars5;
#define __XSTORM_TOE_AG_CONTEXT_RESERVED54 (0x3<<0) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED54_SHIFT 0
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0 (0x3F<<2) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 0 */
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0_SHIFT 2
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1 (0x3F<<8) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 1 */
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1_SHIFT 8
#define __XSTORM_TOE_AG_CONTEXT_RESERVED56 (0x3<<14) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED56_SHIFT 14
uint16_t __agg_val4_th /* aggregated value 4 - threshold */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val4_th /* aggregated value 4 - threshold */;
uint16_t agg_vars5;
#define __XSTORM_TOE_AG_CONTEXT_RESERVED54 (0x3<<0) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED54_SHIFT 0
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0 (0x3F<<2) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 0 */
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM0_SHIFT 2
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1 (0x3F<<8) /* BitField agg_vars5Various aggregative variables The physical queue number of queue index 1 */
#define XSTORM_TOE_AG_CONTEXT_PHYSICAL_QUEUE_NUM1_SHIFT 8
#define __XSTORM_TOE_AG_CONTEXT_RESERVED56 (0x3<<14) /* BitField agg_vars5Various aggregative variables 0-NOP,1-EQ,2-NEQ */
#define __XSTORM_TOE_AG_CONTEXT_RESERVED56_SHIFT 14
#endif
struct xstorm_toe_tcp_ag_context_section tcp /* TCP context section, shared in TOE and ISCSI */;
#if defined(__BIG_ENDIAN)
uint16_t __agg_vars7 /* Various aggregative variables*/;
uint8_t __agg_val3_th /* Aggregated value 3 - threshold */;
uint8_t __agg_vars6 /* Various aggregative variables*/;
#elif defined(__LITTLE_ENDIAN)
uint8_t __agg_vars6 /* Various aggregative variables*/;
uint8_t __agg_val3_th /* Aggregated value 3 - threshold */;
uint16_t __agg_vars7 /* Various aggregative variables*/;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val11_th /* aggregated value 11 - threshold */;
uint16_t __agg_val11 /* aggregated value 11 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val11 /* aggregated value 11 */;
uint16_t __agg_val11_th /* aggregated value 11 - threshold */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __reserved1;
uint8_t __agg_val6_th /* aggregated value 6 - threshold */;
uint16_t __agg_val9 /* aggregated value 9 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val9 /* aggregated value 9 */;
uint8_t __agg_val6_th /* aggregated value 6 - threshold */;
uint8_t __reserved1;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_val2_th /* Aggregated value 2 - threshold */;
uint16_t cmp_bd_cons /* BD Consumer from the Completor */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cmp_bd_cons /* BD Consumer from the Completor */;
uint16_t __agg_val2_th /* Aggregated value 2 - threshold */;
#endif
uint32_t __agg_varint8_t /* Various aggregative variables*/;
#if defined(__BIG_ENDIAN)
uint16_t __agg_misc0 /* Misc aggregated variable 0 */;
uint16_t __agg_val4 /* aggregated value 4 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __agg_val4 /* aggregated value 4 */;
uint16_t __agg_misc0 /* Misc aggregated variable 0 */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __agg_val3 /* Aggregated value 3 */;
uint8_t __agg_val6 /* Aggregated value 6 */;
uint8_t __agg_val5_th /* Aggregated value 5 - threshold */;
uint8_t __agg_val5 /* Aggregated value 5 */;
#elif defined(__LITTLE_ENDIAN)
uint8_t __agg_val5 /* Aggregated value 5 */;
uint8_t __agg_val5_th /* Aggregated value 5 - threshold */;
uint8_t __agg_val6 /* Aggregated value 6 */;
uint8_t __agg_val3 /* Aggregated value 3 */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t __agg_misc1 /* Spare value for aggregation. NOTE: this value is used in the retransmit decision rule if CmpSeqDecMask is 0. In that case it is intended to be CmpBdSize. */;
uint16_t __bd_ind_max_val /* modulo value for bd_prod */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __bd_ind_max_val /* modulo value for bd_prod */;
uint16_t __agg_misc1 /* Spare value for aggregation. NOTE: this value is used in the retransmit decision rule if CmpSeqDecMask is 0. In that case it is intended to be CmpBdSize. */;
#endif
uint32_t cmp_bd_start_seq /* The sequence number of the start completion point (BD) */;
uint32_t cmp_bd_page_0_to_31 /* Misc aggregated variable 4 */;
uint32_t cmp_bd_page_32_to_63 /* spare aggregated variable 5 */;
};
/*
* doorbell message sent to the chip
*/
struct doorbell
{
#if defined(__BIG_ENDIAN)
uint16_t zero_fill2 /* driver must zero this field! */;
uint8_t zero_fill1 /* driver must zero this field! */;
struct doorbell_hdr_t header;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t header;
uint8_t zero_fill1 /* driver must zero this field! */;
uint16_t zero_fill2 /* driver must zero this field! */;
#endif
};
/*
* doorbell message sent to the chip
*/
struct doorbell_set_prod
{
#if defined(__BIG_ENDIAN)
uint16_t prod /* Producer index to be set */;
uint8_t zero_fill1 /* driver must zero this field! */;
struct doorbell_hdr_t header;
#elif defined(__LITTLE_ENDIAN)
struct doorbell_hdr_t header;
uint8_t zero_fill1 /* driver must zero this field! */;
uint16_t prod /* Producer index to be set */;
#endif
};
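/*
 * Example (editor's sketch, not part of the generated HSI): preparing a
 * producer-update doorbell. The zero_fill fields must be cleared by the
 * driver, as the field comments above require; the header contents and the
 * actual write to the doorbell BAR are driver policy and are not shown.
 * memset() is assumed available from the surrounding kernel environment.
 */
#if 0 /* illustrative only */
static inline void
doorbell_set_prod_prepare(struct doorbell_set_prod *db, uint16_t prod)
{
	memset(db, 0, sizeof(*db));	/* clears zero_fill1 and zero_fill2 */
	db->prod = prod;		/* producer index to publish */
	/* db->header is filled per connection type by the driver */
}
#endif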
struct regpair_native_t
{
uint32_t lo /* low word for reg-pair */;
uint32_t hi /* high word for reg-pair */;
};
struct regpair_t
{
uint32_t lo /* low word for reg-pair */;
uint32_t hi /* high word for reg-pair */;
};
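/*
 * Example (editor's sketch): a regpair_t carries a 64-bit value, typically
 * a DMA address, split into 32-bit low/high words. A hypothetical helper:
 */
#if 0 /* illustrative only */
static inline void
regpair_from_addr(struct regpair_t *rp, uint64_t addr)
{
	rp->lo = (uint32_t)(addr & 0xffffffff);	/* low 32 bits */
	rp->hi = (uint32_t)(addr >> 32);	/* high 32 bits */
}
#endif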
/*
* Classify rule opcodes in E2/E3
*/
enum classify_rule
{
CLASSIFY_RULE_OPCODE_MAC /* Add/remove a MAC address */,
CLASSIFY_RULE_OPCODE_VLAN /* Add/remove a VLAN */,
CLASSIFY_RULE_OPCODE_PAIR /* Add/remove a MAC-VLAN pair */,
CLASSIFY_RULE_OPCODE_IMAC_VNI /* Add/remove an Inner MAC-VNI pair entry */,
MAX_CLASSIFY_RULE};
/*
* Classify rule types in E2/E3
*/
enum classify_rule_action_type
{
CLASSIFY_RULE_REMOVE,
CLASSIFY_RULE_ADD,
MAX_CLASSIFY_RULE_ACTION_TYPE};
/*
* client init ramrod data $$KEEP_ENDIANNESS$$
*/
struct client_init_general_data
{
uint8_t client_id /* client_id */;
uint8_t statistics_counter_id /* statistics counter id */;
uint8_t statistics_en_flg /* statistics en flg */;
uint8_t is_fcoe_flg /* is this an fcoe connection. (1 bit is used) */;
	uint8_t activate_flg /* if 0, the client is deactivated; otherwise the client is activated (1 bit is used) */;
uint8_t sp_client_id /* the slow path rings client Id. */;
uint16_t mtu /* Host MTU from client config */;
uint8_t statistics_zero_flg /* if set FW will reset the statistic counter of this client */;
uint8_t func_id /* PCI function ID (0-71) */;
uint8_t cos /* The connection cos, if applicable */;
uint8_t traffic_type;
uint8_t fp_hsi_ver /* Hsi version */;
uint8_t reserved0[3];
};
/*
* client init rx data $$KEEP_ENDIANNESS$$
*/
struct client_init_rx_data
{
uint8_t tpa_en;
#define CLIENT_INIT_RX_DATA_TPA_EN_IPV4 (0x1<<0) /* BitField tpa_entpa_enable tpa enable flg ipv4 */
#define CLIENT_INIT_RX_DATA_TPA_EN_IPV4_SHIFT 0
#define CLIENT_INIT_RX_DATA_TPA_EN_IPV6 (0x1<<1) /* BitField tpa_entpa_enable tpa enable flg ipv6 */
#define CLIENT_INIT_RX_DATA_TPA_EN_IPV6_SHIFT 1
#define CLIENT_INIT_RX_DATA_TPA_MODE (0x1<<2) /* BitField tpa_entpa_enable tpa mode (LRO or GRO) (use enum tpa_mode) */
#define CLIENT_INIT_RX_DATA_TPA_MODE_SHIFT 2
#define CLIENT_INIT_RX_DATA_RESERVED5 (0x1F<<3) /* BitField tpa_entpa_enable */
#define CLIENT_INIT_RX_DATA_RESERVED5_SHIFT 3
uint8_t vmqueue_mode_en_flg /* If set, working in VMQueue mode (always consume one sge) */;
uint8_t extra_data_over_sgl_en_flg /* if set, put over sgl data from end of input message */;
uint8_t cache_line_alignment_log_size /* The log size of cache line alignment in bytes. Must be a power of 2. */;
uint8_t enable_dynamic_hc /* If set, dynamic HC is enabled */;
	uint8_t max_sges_for_packet /* The maximal number of SGEs that can be used for one packet. Depends on MTU and SGE size; must be 0 if SGEs are disabled */;
uint8_t client_qzone_id /* used in E2 only, to specify the HW queue zone ID used for this client rx producers */;
uint8_t drop_ip_cs_err_flg /* If set, this client drops packets with IP checksum error */;
uint8_t drop_tcp_cs_err_flg /* If set, this client drops packets with TCP checksum error */;
uint8_t drop_ttl0_flg /* If set, this client drops packets with TTL=0 */;
uint8_t drop_udp_cs_err_flg /* If set, this client drops packets with UDP checksum error */;
uint8_t inner_vlan_removal_enable_flg /* If set, inner VLAN removal is enabled for this client */;
uint8_t outer_vlan_removal_enable_flg /* If set, outer VLAN removal is enabled for this client */;
uint8_t status_block_id /* rx status block id */;
uint8_t rx_sb_index_number /* status block indices */;
uint8_t dont_verify_rings_pause_thr_flg /* If set, the rings pause thresholds will not be verified by firmware. */;
uint8_t max_tpa_queues /* maximal TPA queues allowed for this client */;
	uint8_t silent_vlan_removal_flg /* if set, and the vlan is equal to the requested vlan according to the mask, the vlan will be removed without notifying the driver */;
uint16_t max_bytes_on_bd /* Maximum bytes that can be placed on a BD. The BD allocated size should include 2 more bytes (ip alignment) and alignment size (in case the address is not aligned) */;
uint16_t sge_buff_size /* Size of the buffers pointed by SGEs */;
	uint8_t approx_mcast_engine_id /* In Everest2, if is_approx_mcast is set, this field specifies which approximate multicast engine is associated with this client */;
	uint8_t rss_engine_id /* In Everest2, if rss_mode is set, this field specifies which RSS engine is associated with this client */;
struct regpair_t bd_page_base /* BD page base address at the host */;
struct regpair_t sge_page_base /* SGE page base address at the host */;
struct regpair_t cqe_page_base /* Completion queue base address */;
uint8_t is_leading_rss;
uint8_t is_approx_mcast;
	uint16_t max_agg_size /* maximal size for the aggregated TPA packets, reported by the host */;
uint16_t state;
#define CLIENT_INIT_RX_DATA_UCAST_DROP_ALL (0x1<<0) /* BitField staterx filters state drop all unicast packets */
#define CLIENT_INIT_RX_DATA_UCAST_DROP_ALL_SHIFT 0
#define CLIENT_INIT_RX_DATA_UCAST_ACCEPT_ALL (0x1<<1) /* BitField staterx filters state accept all unicast packets (subject to vlan) */
#define CLIENT_INIT_RX_DATA_UCAST_ACCEPT_ALL_SHIFT 1
#define CLIENT_INIT_RX_DATA_UCAST_ACCEPT_UNMATCHED (0x1<<2) /* BitField staterx filters state accept all unmatched unicast packets (subject to vlan) */
#define CLIENT_INIT_RX_DATA_UCAST_ACCEPT_UNMATCHED_SHIFT 2
#define CLIENT_INIT_RX_DATA_MCAST_DROP_ALL (0x1<<3) /* BitField staterx filters state drop all multicast packets */
#define CLIENT_INIT_RX_DATA_MCAST_DROP_ALL_SHIFT 3
#define CLIENT_INIT_RX_DATA_MCAST_ACCEPT_ALL (0x1<<4) /* BitField staterx filters state accept all multicast packets (subject to vlan) */
#define CLIENT_INIT_RX_DATA_MCAST_ACCEPT_ALL_SHIFT 4
#define CLIENT_INIT_RX_DATA_BCAST_ACCEPT_ALL (0x1<<5) /* BitField staterx filters state accept all broadcast packets (subject to vlan) */
#define CLIENT_INIT_RX_DATA_BCAST_ACCEPT_ALL_SHIFT 5
#define CLIENT_INIT_RX_DATA_ACCEPT_ANY_VLAN (0x1<<6) /* BitField staterx filters state accept packets matched only by MAC (without checking vlan) */
#define CLIENT_INIT_RX_DATA_ACCEPT_ANY_VLAN_SHIFT 6
#define CLIENT_INIT_RX_DATA_RESERVED2 (0x1FF<<7) /* BitField staterx filters state */
#define CLIENT_INIT_RX_DATA_RESERVED2_SHIFT 7
uint16_t cqe_pause_thr_low /* number of remaining cqes under which, we send pause message */;
uint16_t cqe_pause_thr_high /* number of remaining cqes above which, we send un-pause message */;
uint16_t bd_pause_thr_low /* number of remaining bds under which, we send pause message */;
uint16_t bd_pause_thr_high /* number of remaining bds above which, we send un-pause message */;
uint16_t sge_pause_thr_low /* number of remaining sges under which, we send pause message */;
uint16_t sge_pause_thr_high /* number of remaining sges above which, we send un-pause message */;
	uint16_t rx_cos_mask /* the bits that will be set in the PFC/SAFC packet which will be generated when this ring is full; for regular flow control set this to 1 */;
uint16_t silent_vlan_value /* The vlan to compare, in case, silent vlan is set */;
uint16_t silent_vlan_mask /* The vlan mask, in case, silent vlan is set */;
uint8_t handle_ptp_pkts_flg /* If set, this client handles PTP Packets */;
uint8_t reserved6[3];
uint32_t reserved7;
};
/*
* client init tx data $$KEEP_ENDIANNESS$$
*/
struct client_init_tx_data
{
uint8_t enforce_security_flg /* if set, security checks will be made for this connection */;
uint8_t tx_status_block_id /* the number of status block to update */;
uint8_t tx_sb_index_number /* the index to use inside the status block */;
uint8_t tss_leading_client_id /* client ID of the leading TSS client, for TX classification source knock out */;
uint8_t tx_switching_flg /* if set, tx switching will be done to packets on this connection */;
uint8_t anti_spoofing_flg /* if set, anti spoofing check will be done to packets on this connection */;
uint16_t default_vlan /* default vlan tag (id+pri). (valid if default_vlan_flg is set) */;
struct regpair_t tx_bd_page_base /* BD page base address at the host for TxBdCons */;
uint16_t state;
#define CLIENT_INIT_TX_DATA_UCAST_ACCEPT_ALL (0x1<<0) /* BitField statetx filters state accept all unicast packets (subject to vlan) */
#define CLIENT_INIT_TX_DATA_UCAST_ACCEPT_ALL_SHIFT 0
#define CLIENT_INIT_TX_DATA_MCAST_ACCEPT_ALL (0x1<<1) /* BitField statetx filters state accept all multicast packets (subject to vlan) */
#define CLIENT_INIT_TX_DATA_MCAST_ACCEPT_ALL_SHIFT 1
#define CLIENT_INIT_TX_DATA_BCAST_ACCEPT_ALL (0x1<<2) /* BitField statetx filters state accept all broadcast packets (subject to vlan) */
#define CLIENT_INIT_TX_DATA_BCAST_ACCEPT_ALL_SHIFT 2
#define CLIENT_INIT_TX_DATA_ACCEPT_ANY_VLAN (0x1<<3) /* BitField statetx filters state accept packets matched only by MAC (without checking vlan) */
#define CLIENT_INIT_TX_DATA_ACCEPT_ANY_VLAN_SHIFT 3
#define CLIENT_INIT_TX_DATA_RESERVED0 (0xFFF<<4) /* BitField statetx filters state */
#define CLIENT_INIT_TX_DATA_RESERVED0_SHIFT 4
uint8_t default_vlan_flg /* is default vlan valid for this client. */;
uint8_t force_default_pri_flg /* if set, force default priority */;
uint8_t tunnel_lso_inc_ip_id /* In case of LSO over IPv4 tunnel, whether to increment IP ID on external IP header or internal IP header */;
	uint8_t refuse_outband_vlan_flg /* if set, the FW will not add outband vlan on the packet (even if it exists on the BD). */;
uint8_t tunnel_non_lso_pcsum_location /* In case of non-Lso encapsulated packets with L4 checksum offload, the pseudo checksum location - on packet or on BD. */;
uint8_t tunnel_non_lso_outer_ip_csum_location /* In case of non-Lso encapsulated packets with outer L3 ip checksum offload, the pseudo checksum location - on packet or on BD. */;
};
/*
* client init ramrod data $$KEEP_ENDIANNESS$$
*/
struct client_init_ramrod_data
{
struct client_init_general_data general /* client init general data */;
struct client_init_rx_data rx /* client init rx data */;
struct client_init_tx_data tx /* client init tx data */;
};
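/*
 * Example (editor's sketch): minimal population of the general section of a
 * client init ramrod. The field choices here (active client, HSI version 2)
 * are assumptions for illustration; the rx/tx sections are filled from the
 * driver's queue configuration and are omitted.
 */
#if 0 /* illustrative only */
static void
client_init_general_prepare(struct client_init_ramrod_data *init,
    uint8_t client_id, uint8_t func_id, uint16_t mtu)
{
	init->general.client_id = client_id;
	init->general.func_id = func_id;	/* PCI function ID (0-71) */
	init->general.mtu = mtu;
	init->general.activate_flg = 1;		/* activate the client */
	init->general.fp_hsi_ver = ETH_FP_HSI_VER_2; /* enum defined below */
}
#endif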
/*
* client update ramrod data $$KEEP_ENDIANNESS$$
*/
struct client_update_ramrod_data
{
uint8_t client_id /* the client to update */;
uint8_t func_id /* PCI function ID this client belongs to (0-71) */;
	uint8_t inner_vlan_removal_enable_flg /* If set, inner VLAN removal is enabled for this client; it will be changed according to the change flag */;
	uint8_t inner_vlan_removal_change_flg /* If set, inner VLAN removal flag will be set according to the enable flag */;
	uint8_t outer_vlan_removal_enable_flg /* If set, outer VLAN removal is enabled for this client; it will be changed according to the change flag */;
	uint8_t outer_vlan_removal_change_flg /* If set, outer VLAN removal flag will be set according to the enable flag */;
	uint8_t anti_spoofing_enable_flg /* If set, anti spoofing is enabled for this client; it will be changed according to the change flag */;
	uint8_t anti_spoofing_change_flg /* If set, anti spoofing flag will be set according to the anti spoofing flag */;
	uint8_t activate_flg /* if 0, the client is deactivated; otherwise the client is activated (1 bit is used) */;
uint8_t activate_change_flg /* If set, activate_flg will be checked */;
uint16_t default_vlan /* default vlan tag (id+pri). (valid if default_vlan_flg is set) */;
uint8_t default_vlan_enable_flg;
uint8_t default_vlan_change_flg;
uint16_t silent_vlan_value /* The vlan to compare, in case, silent vlan is set */;
uint16_t silent_vlan_mask /* The vlan mask, in case, silent vlan is set */;
	uint8_t silent_vlan_removal_flg /* if set, and the vlan is equal to the requested vlan according to the mask, the vlan will be removed without notifying the driver */;
uint8_t silent_vlan_change_flg;
	uint8_t refuse_outband_vlan_flg /* If set, the FW will not add outband vlan on the packet (even if it exists on the BD). */;
uint8_t refuse_outband_vlan_change_flg /* If set, refuse_outband_vlan_flg will be updated. */;
uint8_t tx_switching_flg /* If set, tx switching will be done to packets on this connection. */;
uint8_t tx_switching_change_flg /* If set, tx_switching_flg will be updated. */;
uint8_t handle_ptp_pkts_flg /* If set, this client handles PTP Packets */;
uint8_t handle_ptp_pkts_change_flg /* If set, handle_ptp_pkts_flg will be updated. */;
uint16_t reserved1;
uint32_t echo /* echo value to be sent to driver on event ring */;
};
/*
* The eth storm context of Cstorm
*/
struct cstorm_eth_st_context
{
uint32_t __reserved0[4];
};
struct double_regpair
{
uint32_t regpair0_lo /* low word for reg-pair0 */;
uint32_t regpair0_hi /* high word for reg-pair0 */;
uint32_t regpair1_lo /* low word for reg-pair1 */;
uint32_t regpair1_hi /* high word for reg-pair1 */;
};
/*
* 2nd parse bd type used in ethernet tx BDs
*/
enum eth_2nd_parse_bd_type
{
ETH_2ND_PARSE_BD_TYPE_LSO_TUNNEL,
MAX_ETH_2ND_PARSE_BD_TYPE};
/*
 * Ethernet address types, used in ethernet tx BDs
*/
enum eth_addr_type
{
UNKNOWN_ADDRESS,
UNICAST_ADDRESS,
MULTICAST_ADDRESS,
BROADCAST_ADDRESS,
MAX_ETH_ADDR_TYPE};
/*
* $$KEEP_ENDIANNESS$$
*/
struct eth_classify_cmd_header
{
uint8_t cmd_general_data;
#define ETH_CLASSIFY_CMD_HEADER_RX_CMD (0x1<<0) /* BitField cmd_general_data should this cmd be applied for Rx */
#define ETH_CLASSIFY_CMD_HEADER_RX_CMD_SHIFT 0
#define ETH_CLASSIFY_CMD_HEADER_TX_CMD (0x1<<1) /* BitField cmd_general_data should this cmd be applied for Tx */
#define ETH_CLASSIFY_CMD_HEADER_TX_CMD_SHIFT 1
#define ETH_CLASSIFY_CMD_HEADER_OPCODE (0x3<<2) /* BitField cmd_general_data command opcode for MAC/VLAN/PAIR/IMAC_VNI (use enum classify_rule) */
#define ETH_CLASSIFY_CMD_HEADER_OPCODE_SHIFT 2
#define ETH_CLASSIFY_CMD_HEADER_IS_ADD (0x1<<4) /* BitField cmd_general_data (use enum classify_rule_action_type) */
#define ETH_CLASSIFY_CMD_HEADER_IS_ADD_SHIFT 4
#define ETH_CLASSIFY_CMD_HEADER_RESERVED0 (0x7<<5) /* BitField cmd_general_data */
#define ETH_CLASSIFY_CMD_HEADER_RESERVED0_SHIFT 5
uint8_t func_id /* the function id */;
uint8_t client_id;
uint8_t reserved1;
};
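/*
 * Example (editor's sketch): composing cmd_general_data with the mask/shift
 * macros above. This requests a MAC add rule applied on both Rx and Tx; the
 * opcode and add/remove enums are defined earlier in this file.
 */
#if 0 /* illustrative only */
static inline void
classify_hdr_fill(struct eth_classify_cmd_header *h, uint8_t func_id,
    uint8_t client_id)
{
	h->cmd_general_data =
	    ETH_CLASSIFY_CMD_HEADER_RX_CMD |
	    ETH_CLASSIFY_CMD_HEADER_TX_CMD |
	    (CLASSIFY_RULE_OPCODE_MAC << ETH_CLASSIFY_CMD_HEADER_OPCODE_SHIFT) |
	    (CLASSIFY_RULE_ADD << ETH_CLASSIFY_CMD_HEADER_IS_ADD_SHIFT);
	h->func_id = func_id;
	h->client_id = client_id;
	h->reserved1 = 0;
}
#endif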
/*
* header for eth classification config ramrod $$KEEP_ENDIANNESS$$
*/
struct eth_classify_header
{
uint8_t rule_cnt /* number of rules in classification config ramrod */;
uint8_t reserved0;
uint16_t reserved1;
uint32_t echo /* echo value to be sent to driver on event ring */;
};
/*
* Command for adding/removing a Inner-MAC/VNI classification rule $$KEEP_ENDIANNESS$$
*/
struct eth_classify_imac_vni_cmd
{
struct eth_classify_cmd_header header;
uint32_t vni;
uint16_t imac_lsb;
uint16_t imac_mid;
uint16_t imac_msb;
uint16_t reserved1;
};
/*
* Command for adding/removing a MAC classification rule $$KEEP_ENDIANNESS$$
*/
struct eth_classify_mac_cmd
{
struct eth_classify_cmd_header header;
uint16_t reserved0;
uint16_t inner_mac;
uint16_t mac_lsb;
uint16_t mac_mid;
uint16_t mac_msb;
uint16_t reserved1;
};
/*
* Command for adding/removing a MAC-VLAN pair classification rule $$KEEP_ENDIANNESS$$
*/
struct eth_classify_pair_cmd
{
struct eth_classify_cmd_header header;
uint16_t reserved0;
uint16_t inner_mac;
uint16_t mac_lsb;
uint16_t mac_mid;
uint16_t mac_msb;
uint16_t vlan;
};
/*
* Command for adding/removing a VLAN classification rule $$KEEP_ENDIANNESS$$
*/
struct eth_classify_vlan_cmd
{
struct eth_classify_cmd_header header;
uint32_t reserved0;
uint32_t reserved1;
uint16_t reserved2;
uint16_t vlan;
};
/*
* union for eth classification rule $$KEEP_ENDIANNESS$$
*/
union eth_classify_rule_cmd
{
struct eth_classify_mac_cmd mac;
struct eth_classify_vlan_cmd vlan;
struct eth_classify_pair_cmd pair;
struct eth_classify_imac_vni_cmd imac_vni;
};
/*
* parameters for eth classification configuration ramrod $$KEEP_ENDIANNESS$$
*/
struct eth_classify_rules_ramrod_data
{
struct eth_classify_header header;
union eth_classify_rule_cmd rules[CLASSIFY_RULES_COUNT];
};
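/*
 * Example (editor's sketch): filling the first rule slot with a MAC add
 * command. The 48-bit address is split across three 16-bit words, most
 * significant word first; any byte swapping required by the
 * $$KEEP_ENDIANNESS$$ convention is driver responsibility and not shown.
 */
#if 0 /* illustrative only */
static void
classify_rules_add_mac(struct eth_classify_rules_ramrod_data *data,
    const uint8_t mac[6])
{
	struct eth_classify_mac_cmd *cmd = &data->rules[0].mac;

	cmd->mac_msb = (uint16_t)((mac[0] << 8) | mac[1]);
	cmd->mac_mid = (uint16_t)((mac[2] << 8) | mac[3]);
	cmd->mac_lsb = (uint16_t)((mac[4] << 8) | mac[5]);
	data->header.rule_cnt = 1;	/* one rule in this ramrod */
}
#endif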
/*
 * The data containing the client ID needed by the ramrod $$KEEP_ENDIANNESS$$
*/
struct eth_common_ramrod_data
{
uint32_t client_id /* id of this client. (5 bits are used) */;
uint32_t reserved1;
};
/*
* The eth storm context of Ustorm
*/
struct ustorm_eth_st_context
{
uint32_t reserved0[52];
};
/*
* The eth storm context of Tstorm
*/
struct tstorm_eth_st_context
{
uint32_t __reserved0[28];
};
/*
* The eth storm context of Xstorm
*/
struct xstorm_eth_st_context
{
uint32_t reserved0[60];
};
/*
* Ethernet connection context
*/
struct eth_context
{
struct ustorm_eth_st_context ustorm_st_context /* Ustorm storm context */;
struct tstorm_eth_st_context tstorm_st_context /* Tstorm storm context */;
struct xstorm_eth_ag_context xstorm_ag_context /* Xstorm aggregative context */;
struct tstorm_eth_ag_context tstorm_ag_context /* Tstorm aggregative context */;
struct cstorm_eth_ag_context cstorm_ag_context /* Cstorm aggregative context */;
struct ustorm_eth_ag_context ustorm_ag_context /* Ustorm aggregative context */;
struct timers_block_context timers_context /* Timers block context */;
struct xstorm_eth_st_context xstorm_st_context /* Xstorm storm context */;
struct cstorm_eth_st_context cstorm_st_context /* Cstorm storm context */;
};
/*
* union for sgl and raw data.
*/
union eth_sgl_or_raw_data
{
uint16_t sgl[8] /* Scatter-gather list of SGEs used by this packet. This list includes the indices of the SGEs. */;
uint32_t raw_data[4] /* raw data from Tstorm to the driver. */;
};
/*
* eth FP end aggregation CQE parameters struct $$KEEP_ENDIANNESS$$
*/
struct eth_end_agg_rx_cqe
{
uint8_t type_error_flags;
#define ETH_END_AGG_RX_CQE_TYPE (0x3<<0) /* BitField type_error_flags (use enum eth_rx_cqe_type) */
#define ETH_END_AGG_RX_CQE_TYPE_SHIFT 0
#define ETH_END_AGG_RX_CQE_SGL_RAW_SEL (0x1<<2) /* BitField type_error_flags (use enum eth_rx_fp_sel) */
#define ETH_END_AGG_RX_CQE_SGL_RAW_SEL_SHIFT 2
#define ETH_END_AGG_RX_CQE_RESERVED0 (0x1F<<3) /* BitField type_error_flags */
#define ETH_END_AGG_RX_CQE_RESERVED0_SHIFT 3
uint8_t reserved1;
uint8_t queue_index /* The aggregation queue index of this packet */;
uint8_t reserved2;
uint32_t timestamp_delta /* timestamp delta between first packet to last packet in aggregation */;
uint16_t num_of_coalesced_segs /* Num of coalesced segments. */;
uint16_t pkt_len /* Packet length */;
uint8_t pure_ack_count /* Number of pure acks coalesced. */;
uint8_t reserved3;
uint16_t reserved4;
union eth_sgl_or_raw_data sgl_or_raw_data /* union for sgl and raw data. */;
uint32_t padding[8];
};
/*
* regular eth FP CQE parameters struct $$KEEP_ENDIANNESS$$
*/
struct eth_fast_path_rx_cqe
{
uint8_t type_error_flags;
#define ETH_FAST_PATH_RX_CQE_TYPE (0x3<<0) /* BitField type_error_flags (use enum eth_rx_cqe_type) */
#define ETH_FAST_PATH_RX_CQE_TYPE_SHIFT 0
#define ETH_FAST_PATH_RX_CQE_SGL_RAW_SEL (0x1<<2) /* BitField type_error_flags (use enum eth_rx_fp_sel) */
#define ETH_FAST_PATH_RX_CQE_SGL_RAW_SEL_SHIFT 2
#define ETH_FAST_PATH_RX_CQE_PHY_DECODE_ERR_FLG (0x1<<3) /* BitField type_error_flags Physical layer errors */
#define ETH_FAST_PATH_RX_CQE_PHY_DECODE_ERR_FLG_SHIFT 3
#define ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG (0x1<<4) /* BitField type_error_flags IP checksum error */
#define ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG_SHIFT 4
#define ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG (0x1<<5) /* BitField type_error_flags TCP/UDP checksum error */
#define ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG_SHIFT 5
#define ETH_FAST_PATH_RX_CQE_PTP_PKT (0x1<<6) /* BitField type_error_flags Is a PTP Timesync Packet */
#define ETH_FAST_PATH_RX_CQE_PTP_PKT_SHIFT 6
#define ETH_FAST_PATH_RX_CQE_RESERVED0 (0x1<<7) /* BitField type_error_flags */
#define ETH_FAST_PATH_RX_CQE_RESERVED0_SHIFT 7
uint8_t status_flags;
#define ETH_FAST_PATH_RX_CQE_RSS_HASH_TYPE (0x7<<0) /* BitField status_flags (use enum eth_rss_hash_type) */
#define ETH_FAST_PATH_RX_CQE_RSS_HASH_TYPE_SHIFT 0
#define ETH_FAST_PATH_RX_CQE_RSS_HASH_FLG (0x1<<3) /* BitField status_flags RSS hashing on/off */
#define ETH_FAST_PATH_RX_CQE_RSS_HASH_FLG_SHIFT 3
#define ETH_FAST_PATH_RX_CQE_BROADCAST_FLG (0x1<<4) /* BitField status_flags if set to 1, this is a broadcast packet */
#define ETH_FAST_PATH_RX_CQE_BROADCAST_FLG_SHIFT 4
#define ETH_FAST_PATH_RX_CQE_MAC_MATCH_FLG (0x1<<5) /* BitField status_flags if set to 1, the MAC address was matched in the tstorm CAM search */
#define ETH_FAST_PATH_RX_CQE_MAC_MATCH_FLG_SHIFT 5
#define ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG (0x1<<6) /* BitField status_flags IP checksum validation was not performed (if packet is not IPv4) */
#define ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG_SHIFT 6
#define ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG (0x1<<7) /* BitField status_flags TCP/UDP checksum validation was not performed (if packet is not TCP/UDP or IPv6 extheaders exist) */
#define ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG_SHIFT 7
uint8_t queue_index /* The aggregation queue index of this packet */;
uint8_t placement_offset /* Placement offset from the start of the BD, in bytes */;
uint32_t rss_hash_result /* RSS toeplitz hash result */;
uint16_t vlan_tag /* Ethernet VLAN tag field */;
uint16_t pkt_len_or_gro_seg_len /* Packet length (for non-TPA CQE) or GRO Segment Length (for TPA in GRO Mode) otherwise 0 */;
uint16_t len_on_bd /* Number of bytes placed on the BD */;
struct parsing_flags pars_flags;
union eth_sgl_or_raw_data sgl_or_raw_data /* union for sgl and raw data. */;
uint8_t tunn_type /* packet tunneling type */;
uint8_t tunn_inner_hdrs_offset /* Offset to Inner Headers (for tunn_type != TUNN_TYPE_NONE) */;
uint16_t reserved1;
	uint32_t tunn_tenant_id /* Tenant ID (for tunn_type != TUNN_TYPE_NONE) */;
uint32_t padding[5];
uint32_t marker /* Used internally by the driver */;
};
/*
* Command for setting classification flags for a client $$KEEP_ENDIANNESS$$
*/
struct eth_filter_rules_cmd
{
uint8_t cmd_general_data;
#define ETH_FILTER_RULES_CMD_RX_CMD (0x1<<0) /* BitField cmd_general_data should this cmd be applied for Rx */
#define ETH_FILTER_RULES_CMD_RX_CMD_SHIFT 0
#define ETH_FILTER_RULES_CMD_TX_CMD (0x1<<1) /* BitField cmd_general_data should this cmd be applied for Tx */
#define ETH_FILTER_RULES_CMD_TX_CMD_SHIFT 1
#define ETH_FILTER_RULES_CMD_RESERVED0 (0x3F<<2) /* BitField cmd_general_data */
#define ETH_FILTER_RULES_CMD_RESERVED0_SHIFT 2
uint8_t func_id /* the function id */;
uint8_t client_id /* the client id */;
uint8_t reserved1;
uint16_t state;
#define ETH_FILTER_RULES_CMD_UCAST_DROP_ALL (0x1<<0) /* BitField state drop all unicast packets */
#define ETH_FILTER_RULES_CMD_UCAST_DROP_ALL_SHIFT 0
#define ETH_FILTER_RULES_CMD_UCAST_ACCEPT_ALL (0x1<<1) /* BitField state accept all unicast packets (subject to vlan) */
#define ETH_FILTER_RULES_CMD_UCAST_ACCEPT_ALL_SHIFT 1
#define ETH_FILTER_RULES_CMD_UCAST_ACCEPT_UNMATCHED (0x1<<2) /* BitField state accept all unmatched unicast packets */
#define ETH_FILTER_RULES_CMD_UCAST_ACCEPT_UNMATCHED_SHIFT 2
#define ETH_FILTER_RULES_CMD_MCAST_DROP_ALL (0x1<<3) /* BitField state drop all multicast packets */
#define ETH_FILTER_RULES_CMD_MCAST_DROP_ALL_SHIFT 3
#define ETH_FILTER_RULES_CMD_MCAST_ACCEPT_ALL (0x1<<4) /* BitField state accept all multicast packets (subject to vlan) */
#define ETH_FILTER_RULES_CMD_MCAST_ACCEPT_ALL_SHIFT 4
#define ETH_FILTER_RULES_CMD_BCAST_ACCEPT_ALL (0x1<<5) /* BitField state accept all broadcast packets (subject to vlan) */
#define ETH_FILTER_RULES_CMD_BCAST_ACCEPT_ALL_SHIFT 5
#define ETH_FILTER_RULES_CMD_ACCEPT_ANY_VLAN (0x1<<6) /* BitField state accept packets matched only by MAC (without checking vlan) */
#define ETH_FILTER_RULES_CMD_ACCEPT_ANY_VLAN_SHIFT 6
#define ETH_FILTER_RULES_CMD_RESERVED2 (0x1FF<<7) /* BitField state */
#define ETH_FILTER_RULES_CMD_RESERVED2_SHIFT 7
uint16_t reserved3;
struct regpair_t reserved4;
};
/*
* parameters for eth classification filters ramrod $$KEEP_ENDIANNESS$$
*/
struct eth_filter_rules_ramrod_data
{
struct eth_classify_header header;
struct eth_filter_rules_cmd rules[FILTER_RULES_COUNT];
};
/*
* Hsi version
*/
enum eth_fp_hsi_ver
{
	ETH_FP_HSI_VER_0 /* Hsi which does not support tunneling */,
	ETH_FP_HSI_VER_1 /* Hsi which supports tunneling */,
	ETH_FP_HSI_VER_2 /* Hsi which supports tunneling and UFP */,
MAX_ETH_FP_HSI_VER};
/*
* parameters for eth classification configuration ramrod $$KEEP_ENDIANNESS$$
*/
struct eth_general_rules_ramrod_data
{
struct eth_classify_header header;
union eth_classify_rule_cmd rules[CLASSIFY_RULES_COUNT];
};
/*
* The data for Halt ramrod
*/
struct eth_halt_ramrod_data
{
uint32_t client_id /* id of this client. (5 bits are used) */;
uint32_t reserved0;
};
/*
* destination and source mac address.
*/
struct eth_mac_addresses
{
#if defined(__BIG_ENDIAN)
uint16_t dst_mid /* destination mac address 16 middle bits */;
uint16_t dst_lo /* destination mac address 16 low bits */;
#elif defined(__LITTLE_ENDIAN)
uint16_t dst_lo /* destination mac address 16 low bits */;
uint16_t dst_mid /* destination mac address 16 middle bits */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t src_lo /* source mac address 16 low bits */;
uint16_t dst_hi /* destination mac address 16 high bits */;
#elif defined(__LITTLE_ENDIAN)
uint16_t dst_hi /* destination mac address 16 high bits */;
uint16_t src_lo /* source mac address 16 low bits */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t src_hi /* source mac address 16 high bits */;
uint16_t src_mid /* source mac address 16 middle bits */;
#elif defined(__LITTLE_ENDIAN)
uint16_t src_mid /* source mac address 16 middle bits */;
uint16_t src_hi /* source mac address 16 high bits */;
#endif
};
/*
* tunneling related data. $$KEEP_ENDIANNESS$$
*/
struct eth_tunnel_data
{
uint16_t dst_lo /* destination mac address 16 low bits */;
uint16_t dst_mid /* destination mac address 16 middle bits */;
uint16_t dst_hi /* destination mac address 16 high bits */;
uint16_t fw_ip_hdr_csum /* Fw Ip header checksum (with ALL ip header fields) for the outer IP header */;
uint16_t pseudo_csum /* Pseudo checksum with length field=0 */;
uint8_t ip_hdr_start_inner_w /* Inner IP header offset in WORDs (16-bit) from start of packet */;
uint8_t flags;
#define ETH_TUNNEL_DATA_IPV6_OUTER (0x1<<0) /* BitField flags Set in case outer IP header is ipV6 */
#define ETH_TUNNEL_DATA_IPV6_OUTER_SHIFT 0
#define ETH_TUNNEL_DATA_RESERVED (0x7F<<1) /* BitField flags Should be set with 0 */
#define ETH_TUNNEL_DATA_RESERVED_SHIFT 1
};
/*
* union for mac addresses and for tunneling data. considered as tunneling data only if (tunnel_exist == 1).
*/
union eth_mac_addr_or_tunnel_data
{
struct eth_mac_addresses mac_addr /* destination and source mac addresses. */;
struct eth_tunnel_data tunnel_data /* tunneling related data. */;
};
/*
* Command for setting multicast classification for a client $$KEEP_ENDIANNESS$$
*/
struct eth_multicast_rules_cmd
{
uint8_t cmd_general_data;
#define ETH_MULTICAST_RULES_CMD_RX_CMD (0x1<<0) /* BitField cmd_general_data should this cmd be applied for Rx */
#define ETH_MULTICAST_RULES_CMD_RX_CMD_SHIFT 0
#define ETH_MULTICAST_RULES_CMD_TX_CMD (0x1<<1) /* BitField cmd_general_data should this cmd be applied for Tx */
#define ETH_MULTICAST_RULES_CMD_TX_CMD_SHIFT 1
#define ETH_MULTICAST_RULES_CMD_IS_ADD (0x1<<2) /* BitField cmd_general_data 1 for add rule, 0 for remove rule */
#define ETH_MULTICAST_RULES_CMD_IS_ADD_SHIFT 2
#define ETH_MULTICAST_RULES_CMD_RESERVED0 (0x1F<<3) /* BitField cmd_general_data */
#define ETH_MULTICAST_RULES_CMD_RESERVED0_SHIFT 3
uint8_t func_id /* the function id */;
uint8_t bin_id /* the bin to add this function to (0-255) */;
uint8_t engine_id /* the approximate multicast engine id */;
uint32_t reserved2;
struct regpair_t reserved3;
};
/*
* parameters for multicast classification ramrod $$KEEP_ENDIANNESS$$
*/
struct eth_multicast_rules_ramrod_data
{
struct eth_classify_header header;
struct eth_multicast_rules_cmd rules[MULTICAST_RULES_COUNT];
};
/*
* Place holder for ramrods protocol specific data
*/
struct ramrod_data
{
uint32_t data_lo;
uint32_t data_hi;
};
/*
* union for ramrod data for Ethernet protocol (CQE) (force size of 16 bits)
*/
union eth_ramrod_data
{
struct ramrod_data general;
};
/*
* RSS toeplitz hash type, as reported in CQE
*/
enum eth_rss_hash_type
{
DEFAULT_HASH_TYPE,
IPV4_HASH_TYPE,
TCP_IPV4_HASH_TYPE,
IPV6_HASH_TYPE,
TCP_IPV6_HASH_TYPE,
VLAN_PRI_HASH_TYPE,
E1HOV_PRI_HASH_TYPE,
DSCP_HASH_TYPE,
MAX_ETH_RSS_HASH_TYPE};
/*
* Ethernet RSS mode
*/
enum eth_rss_mode
{
ETH_RSS_MODE_DISABLED,
ETH_RSS_MODE_REGULAR /* Regular (ndis-like) RSS */,
ETH_RSS_MODE_ESX51 /* RSS mode for Vmware ESX 5.1 (Only do RSS for VXLAN packets) */,
ETH_RSS_MODE_VLAN_PRI /* RSS based on inner-vlan priority field (E1/E1h Only) */,
ETH_RSS_MODE_E1HOV_PRI /* RSS based on outer-vlan priority field (E1/E1h Only) */,
ETH_RSS_MODE_IP_DSCP /* RSS based on IPv4 DSCP field (E1/E1h Only) */,
MAX_ETH_RSS_MODE};
/*
* parameters for RSS update ramrod (E2) $$KEEP_ENDIANNESS$$
*/
struct eth_rss_update_ramrod_data
{
uint8_t rss_engine_id;
uint8_t rss_mode /* The RSS mode for this function */;
uint16_t capabilities;
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY (0x1<<0) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV4 2-tuple capability */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY_SHIFT 0
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY (0x1<<1) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV4 4-tuple capability for TCP */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY_SHIFT 1
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY (0x1<<2) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV4 4-tuple capability for UDP */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY_SHIFT 2
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_VXLAN_CAPABILITY (0x1<<3) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV4 4-tuple capability for VXLAN Tunnels */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_VXLAN_CAPABILITY_SHIFT 3
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY (0x1<<4) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV6 2-tuple capability */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY_SHIFT 4
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY (0x1<<5) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV6 4-tuple capability for TCP */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY_SHIFT 5
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY (0x1<<6) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV6 4-tuple capability for UDP */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY_SHIFT 6
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_VXLAN_CAPABILITY (0x1<<7) /* BitField capabilitiesFunction RSS capabilities configuration of the IpV6 4-tuple capability for VXLAN Tunnels */
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_VXLAN_CAPABILITY_SHIFT 7
#define ETH_RSS_UPDATE_RAMROD_DATA_TUNN_INNER_HDRS_CAPABILITY (0x1<<8) /* BitField capabilitiesFunction RSS capabilities configuration of Tunnel Inner Headers capability. */
#define ETH_RSS_UPDATE_RAMROD_DATA_TUNN_INNER_HDRS_CAPABILITY_SHIFT 8
#define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY (0x1<<9) /* BitField capabilitiesFunction RSS capabilities if set update the rss keys */
#define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY_SHIFT 9
#define ETH_RSS_UPDATE_RAMROD_DATA_RESERVED (0x3F<<10) /* BitField capabilitiesFunction RSS capabilities */
#define ETH_RSS_UPDATE_RAMROD_DATA_RESERVED_SHIFT 10
uint8_t rss_result_mask /* The mask for the lower byte of RSS result - defines which section of the indirection table will be used. To enable all table put here 0x7F */;
uint8_t reserved3;
uint16_t reserved4;
uint8_t indirection_table[T_ETH_INDIRECTION_TABLE_SIZE] /* RSS indirection table */;
uint32_t rss_key[T_ETH_RSS_KEY] /* RSS key supplied as by OS */;
uint32_t echo;
uint32_t reserved5;
};
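/*
 * Example (editor's sketch): a plausible RSS update for plain IPv4/IPv6 TCP
 * hashing. The capability choice is an assumption for illustration;
 * indirection_table[] and rss_key[] are filled by the driver and their
 * sizes (T_ETH_INDIRECTION_TABLE_SIZE, T_ETH_RSS_KEY) are defined elsewhere
 * in this file.
 */
#if 0 /* illustrative only */
static void
rss_update_prepare(struct eth_rss_update_ramrod_data *d, uint8_t engine_id)
{
	d->rss_engine_id = engine_id;
	d->rss_mode = ETH_RSS_MODE_REGULAR;
	d->capabilities =
	    ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY |
	    ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY |
	    ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY |
	    ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY;
	d->rss_result_mask = 0x7F;	/* use the whole indirection table */
}
#endif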
/*
* The eth Rx Buffer Descriptor
*/
struct eth_rx_bd
{
uint32_t addr_lo /* Single continuous buffer low pointer */;
uint32_t addr_hi /* Single continuous buffer high pointer */;
};
struct eth_rx_bd_next_page
{
uint32_t addr_lo /* Next page low pointer */;
uint32_t addr_hi /* Next page high pointer */;
uint8_t reserved[8];
};
/*
* Eth Rx Cqe structure- general structure for ramrods $$KEEP_ENDIANNESS$$
*/
struct common_ramrod_eth_rx_cqe
{
uint8_t ramrod_type;
#define COMMON_RAMROD_ETH_RX_CQE_TYPE (0x3<<0) /* BitField ramrod_type (use enum eth_rx_cqe_type) */
#define COMMON_RAMROD_ETH_RX_CQE_TYPE_SHIFT 0
#define COMMON_RAMROD_ETH_RX_CQE_ERROR (0x1<<2) /* BitField ramrod_type */
#define COMMON_RAMROD_ETH_RX_CQE_ERROR_SHIFT 2
#define COMMON_RAMROD_ETH_RX_CQE_RESERVED0 (0x1F<<3) /* BitField ramrod_type */
#define COMMON_RAMROD_ETH_RX_CQE_RESERVED0_SHIFT 3
uint8_t conn_type /* only 3 bits are used */;
uint16_t reserved1 /* protocol specific data */;
uint32_t conn_and_cmd_data;
#define COMMON_RAMROD_ETH_RX_CQE_CID (0xFFFFFF<<0) /* BitField conn_and_cmd_data */
#define COMMON_RAMROD_ETH_RX_CQE_CID_SHIFT 0
#define COMMON_RAMROD_ETH_RX_CQE_CMD_ID (0xFF<<24) /* BitField conn_and_cmd_data command id of the ramrod- use RamrodCommandIdEnum */
#define COMMON_RAMROD_ETH_RX_CQE_CMD_ID_SHIFT 24
struct ramrod_data protocol_data /* protocol specific data */;
uint32_t echo;
uint32_t reserved2[11];
};
/*
* Rx Last CQE in page (in ETH)
*/
struct eth_rx_cqe_next_page
{
uint32_t addr_lo /* Next page low pointer */;
uint32_t addr_hi /* Next page high pointer */;
uint32_t reserved[14];
};
/*
* union for all eth rx cqe types (fix their sizes)
*/
union eth_rx_cqe
{
struct eth_fast_path_rx_cqe fast_path_cqe;
struct common_ramrod_eth_rx_cqe ramrod_cqe;
struct eth_rx_cqe_next_page next_page_cqe;
struct eth_end_agg_rx_cqe end_agg_cqe;
};
/*
* Values for RX ETH CQE type field
*/
enum eth_rx_cqe_type
{
RX_ETH_CQE_TYPE_ETH_FASTPATH /* Fast path CQE */,
RX_ETH_CQE_TYPE_ETH_RAMROD /* Slow path CQE */,
RX_ETH_CQE_TYPE_ETH_START_AGG /* Fast path CQE */,
RX_ETH_CQE_TYPE_ETH_STOP_AGG /* Slow path CQE */,
MAX_ETH_RX_CQE_TYPE};
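/*
 * Example (editor's sketch): the two-bit CQE type occupies the low bits of
 * the first byte of both the fast path and ramrod CQE variants, so either
 * member of union eth_rx_cqe can be used to decode it before dispatching.
 */
#if 0 /* illustrative only */
static inline int
rx_cqe_is_slow_path(const union eth_rx_cqe *cqe)
{
	uint8_t type = (cqe->fast_path_cqe.type_error_flags &
	    ETH_FAST_PATH_RX_CQE_TYPE) >> ETH_FAST_PATH_RX_CQE_TYPE_SHIFT;

	return (type == RX_ETH_CQE_TYPE_ETH_RAMROD);
}
#endif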
/*
* Type of SGL/Raw field in ETH RX fast path CQE
*/
enum eth_rx_fp_sel
{
ETH_FP_CQE_REGULAR /* Regular CQE- no extra data */,
ETH_FP_CQE_RAW /* Extra data is raw data- iscsi OOO */,
MAX_ETH_RX_FP_SEL};
/*
* The eth Rx SGE Descriptor
*/
struct eth_rx_sge
{
uint32_t addr_lo /* Single continuous buffer low pointer */;
uint32_t addr_hi /* Single continuous buffer high pointer */;
};
/*
* common data for all protocols $$KEEP_ENDIANNESS$$
*/
struct spe_hdr_t
{
uint32_t conn_and_cmd_data;
#define SPE_HDR_T_CID (0xFFFFFF<<0) /* BitField conn_and_cmd_data */
#define SPE_HDR_T_CID_SHIFT 0
#define SPE_HDR_T_CMD_ID (0xFFUL<<24) /* BitField conn_and_cmd_data command id of the ramrod- use enum common_spqe_cmd_id/eth_spqe_cmd_id/toe_spqe_cmd_id */
#define SPE_HDR_T_CMD_ID_SHIFT 24
uint16_t type;
#define SPE_HDR_T_CONN_TYPE (0xFF<<0) /* BitField type connection type. (3 bits are used) (use enum connection_type) */
#define SPE_HDR_T_CONN_TYPE_SHIFT 0
#define SPE_HDR_T_FUNCTION_ID (0xFF<<8) /* BitField type */
#define SPE_HDR_T_FUNCTION_ID_SHIFT 8
uint16_t reserved1;
};
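/*
 * Example (editor's sketch): packing the slow path element header with the
 * mask/shift macros above. Callers pass the connection ID, ramrod command
 * ID, connection type and function ID; the helper name is hypothetical.
 */
#if 0 /* illustrative only */
static inline void
spe_hdr_fill(struct spe_hdr_t *hdr, uint32_t cid, uint8_t cmd_id,
    uint8_t conn_type, uint8_t func_id)
{
	hdr->conn_and_cmd_data = (cid & SPE_HDR_T_CID) |
	    ((uint32_t)cmd_id << SPE_HDR_T_CMD_ID_SHIFT);
	hdr->type = (uint16_t)(conn_type & SPE_HDR_T_CONN_TYPE) |
	    (uint16_t)(func_id << SPE_HDR_T_FUNCTION_ID_SHIFT);
	hdr->reserved1 = 0;
}
#endif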
/*
* specific data for ethernet slow path element
*/
union eth_specific_data
{
uint8_t protocol_data[8] /* to fix this structure size to 8 bytes */;
struct regpair_t client_update_ramrod_data /* The address of the data for client update ramrod */;
struct regpair_t client_init_ramrod_init_data /* The data for client setup ramrod */;
struct eth_halt_ramrod_data halt_ramrod_data /* Includes the client id to be deleted */;
struct regpair_t update_data_addr /* physical address of the eth_rss_update_ramrod_data struct, as allocated by the driver */;
	struct eth_common_ramrod_data common_ramrod_data /* The data containing the client ID needed by the ramrod */;
struct regpair_t classify_cfg_addr /* physical address of the eth_classify_rules_ramrod_data struct, as allocated by the driver */;
struct regpair_t filter_cfg_addr /* physical address of the eth_filter_cfg_ramrod_data struct, as allocated by the driver */;
struct regpair_t mcast_cfg_addr /* physical address of the eth_mcast_cfg_ramrod_data struct, as allocated by the driver */;
};
/*
* Ethernet slow path element
*/
struct eth_spe
{
struct spe_hdr_t hdr /* common data for all protocols */;
union eth_specific_data data /* data specific to ethernet protocol */;
};
/*
* Ethernet command ID for slow path elements
*/
enum eth_spqe_cmd_id
{
RAMROD_CMD_ID_ETH_UNUSED,
RAMROD_CMD_ID_ETH_CLIENT_SETUP /* Setup a new L2 client */,
RAMROD_CMD_ID_ETH_HALT /* Halt an L2 client */,
RAMROD_CMD_ID_ETH_FORWARD_SETUP /* Setup a new FW channel */,
RAMROD_CMD_ID_ETH_TX_QUEUE_SETUP /* Setup a new Tx only queue */,
RAMROD_CMD_ID_ETH_CLIENT_UPDATE /* Update an L2 client configuration */,
RAMROD_CMD_ID_ETH_EMPTY /* Empty ramrod - used to synchronize iSCSI OOO */,
RAMROD_CMD_ID_ETH_TERMINATE /* Terminate an L2 client */,
RAMROD_CMD_ID_ETH_TPA_UPDATE /* update the tpa roles in L2 client */,
RAMROD_CMD_ID_ETH_CLASSIFICATION_RULES /* Add/remove classification filters for L2 client (in E2/E3 only) */,
RAMROD_CMD_ID_ETH_FILTER_RULES /* Add/remove classification filters for L2 client (in E2/E3 only) */,
RAMROD_CMD_ID_ETH_MULTICAST_RULES /* Add/remove multicast classification bin (in E2/E3 only) */,
RAMROD_CMD_ID_ETH_RSS_UPDATE /* Update RSS configuration */,
	RAMROD_CMD_ID_ETH_SET_MAC /* Set a MAC address */,
MAX_ETH_SPQE_CMD_ID};
/*
* eth tpa update command
*/
enum eth_tpa_update_command
{
TPA_UPDATE_NONE_COMMAND /* nop command */,
TPA_UPDATE_ENABLE_COMMAND /* enable command */,
TPA_UPDATE_DISABLE_COMMAND /* disable command */,
MAX_ETH_TPA_UPDATE_COMMAND};
/*
* In case of LSO over IPv4 tunnel, whether to increment IP ID on external IP header or internal IP header
*/
enum eth_tunnel_lso_inc_ip_id
{
	EXT_HEADER /* Increment IP ID of external header (HW works on external, FW works on internal) */,
	INT_HEADER /* Increment IP ID of internal header (HW works on internal, FW works on external) */,
MAX_ETH_TUNNEL_LSO_INC_IP_ID};
/*
* In case tunnel exist and L4 checksum offload (or outer ip header checksum), the pseudo checksum location, on packet or on BD.
*/
enum eth_tunnel_non_lso_csum_location
{
CSUM_ON_PKT /* checksum is on the packet. */,
CSUM_ON_BD /* checksum is on the BD. */,
MAX_ETH_TUNNEL_NON_LSO_CSUM_LOCATION};
/*
* Packet Tunneling Type
*/
enum eth_tunn_type
{
TUNN_TYPE_NONE,
TUNN_TYPE_VXLAN,
TUNN_TYPE_L2_GRE /* Ethernet over GRE */,
TUNN_TYPE_IPV4_GRE /* IPv4 over GRE */,
TUNN_TYPE_IPV6_GRE /* IPv6 over GRE */,
TUNN_TYPE_L2_GENEVE /* Ethernet over GENEVE */,
TUNN_TYPE_IPV4_GENEVE /* IPv4 over GENEVE */,
TUNN_TYPE_IPV6_GENEVE /* IPv6 over GENEVE */,
MAX_ETH_TUNN_TYPE};
/*
* Tx regular BD structure $$KEEP_ENDIANNESS$$
*/
struct eth_tx_bd
{
uint32_t addr_lo /* Single continuous buffer low pointer */;
uint32_t addr_hi /* Single continuous buffer high pointer */;
uint16_t total_pkt_bytes /* Size of the entire packet, valid for non-LSO packets */;
uint16_t nbytes /* Size of the data represented by the BD */;
uint8_t reserved[4] /* keeps same size as other eth tx bd types */;
};
/*
* structure for easy accessibility to assembler
*/
struct eth_tx_bd_flags
{
uint8_t as_bitfield;
#define ETH_TX_BD_FLAGS_IP_CSUM (0x1<<0) /* BitField as_bitfield IP CKSUM flag,Relevant in START */
#define ETH_TX_BD_FLAGS_IP_CSUM_SHIFT 0
#define ETH_TX_BD_FLAGS_L4_CSUM (0x1<<1) /* BitField as_bitfield L4 CKSUM flag,Relevant in START */
#define ETH_TX_BD_FLAGS_L4_CSUM_SHIFT 1
#define ETH_TX_BD_FLAGS_VLAN_MODE (0x3<<2) /* BitField as_bitfield 00 - no vlan; 01 - inband Vlan; 10 outband Vlan (use enum eth_tx_vlan_type) */
#define ETH_TX_BD_FLAGS_VLAN_MODE_SHIFT 2
#define ETH_TX_BD_FLAGS_START_BD (0x1<<4) /* BitField as_bitfield Start of packet BD */
#define ETH_TX_BD_FLAGS_START_BD_SHIFT 4
#define ETH_TX_BD_FLAGS_IS_UDP (0x1<<5) /* BitField as_bitfield flag that indicates that the current packet is a udp packet */
#define ETH_TX_BD_FLAGS_IS_UDP_SHIFT 5
#define ETH_TX_BD_FLAGS_SW_LSO (0x1<<6) /* BitField as_bitfield LSO flag, Relevant in START */
#define ETH_TX_BD_FLAGS_SW_LSO_SHIFT 6
#define ETH_TX_BD_FLAGS_IPV6 (0x1<<7) /* BitField as_bitfield set in case ipV6 packet, Relevant in START */
#define ETH_TX_BD_FLAGS_IPV6_SHIFT 7
};
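/*
 * Example (editor's sketch): flag composition for a start BD of an IPv4 TCP
 * packet requesting IP and L4 checksum offload. Other flags (VLAN mode,
 * LSO) follow the same pattern.
 */
#if 0 /* illustrative only */
static inline void
tx_bd_flags_csum_offload(struct eth_tx_bd_flags *f)
{
	f->as_bitfield = ETH_TX_BD_FLAGS_START_BD |
	    ETH_TX_BD_FLAGS_IP_CSUM |
	    ETH_TX_BD_FLAGS_L4_CSUM;
}
#endif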
/*
* The eth Tx Buffer Descriptor $$KEEP_ENDIANNESS$$
*/
struct eth_tx_start_bd
{
uint32_t addr_lo /* Single continuous buffer low pointer */;
uint32_t addr_hi /* Single continuous buffer high pointer */;
uint16_t nbd /* Num of BDs in packet: include parsInfoBD, Relevant in START(only in Everest) */;
uint16_t nbytes /* Size of the data represented by the BD */;
uint16_t vlan_or_ethertype /* Vlan structure: vlan_id is in lsb, then cfi and then priority vlan_id 12 bits (lsb), cfi 1 bit, priority 3 bits. In E2, this field should be set with etherType for VFs with no vlan */;
struct eth_tx_bd_flags bd_flags;
uint8_t general_data;
#define ETH_TX_START_BD_HDR_NBDS (0x7<<0) /* BitField general_data contains the number of BDs that contain Ethernet/IP/TCP headers, for full/partial LSO modes */
#define ETH_TX_START_BD_HDR_NBDS_SHIFT 0
#define ETH_TX_START_BD_NO_ADDED_TAGS (0x1<<3) /* BitField general_data If set, do not add any additional tags to the packet including MF Tags, Default VLAN or VLAN for the sake of DCB */
#define ETH_TX_START_BD_NO_ADDED_TAGS_SHIFT 3
#define ETH_TX_START_BD_FORCE_VLAN_MODE (0x1<<4) /* BitField general_data force vlan mode according to bds (vlan mode can change according to global configuration) */
#define ETH_TX_START_BD_FORCE_VLAN_MODE_SHIFT 4
#define ETH_TX_START_BD_PARSE_NBDS (0x3<<5) /* BitField general_data Determines the number of parsing BDs in packet. Number of parsing BDs in packet is (parse_nbds+1). */
#define ETH_TX_START_BD_PARSE_NBDS_SHIFT 5
#define ETH_TX_START_BD_TUNNEL_EXIST (0x1<<7) /* BitField general_data set in case of tunneling encapsulated packet */
#define ETH_TX_START_BD_TUNNEL_EXIST_SHIFT 7
};
/*
* Tx parsing BD structure for ETH E1/E1h $$KEEP_ENDIANNESS$$
*/
struct eth_tx_parse_bd_e1x
{
uint16_t global_data;
#define ETH_TX_PARSE_BD_E1X_IP_HDR_START_OFFSET_W (0xF<<0) /* BitField global_data IP header Offset in WORDs from start of packet */
#define ETH_TX_PARSE_BD_E1X_IP_HDR_START_OFFSET_W_SHIFT 0
#define ETH_TX_PARSE_BD_E1X_ETH_ADDR_TYPE (0x3<<4) /* BitField global_data marks ethernet address type (use enum eth_addr_type) */
#define ETH_TX_PARSE_BD_E1X_ETH_ADDR_TYPE_SHIFT 4
#define ETH_TX_PARSE_BD_E1X_PSEUDO_CS_WITHOUT_LEN (0x1<<6) /* BitField global_data */
#define ETH_TX_PARSE_BD_E1X_PSEUDO_CS_WITHOUT_LEN_SHIFT 6
#define ETH_TX_PARSE_BD_E1X_LLC_SNAP_EN (0x1<<7) /* BitField global_data */
#define ETH_TX_PARSE_BD_E1X_LLC_SNAP_EN_SHIFT 7
#define ETH_TX_PARSE_BD_E1X_NS_FLG (0x1<<8) /* BitField global_data an optional addition to ECN that protects against accidental or malicious concealment of marked packets from the TCP sender. */
#define ETH_TX_PARSE_BD_E1X_NS_FLG_SHIFT 8
#define ETH_TX_PARSE_BD_E1X_RESERVED0 (0x7F<<9) /* BitField global_data reserved bit, should be set with 0 */
#define ETH_TX_PARSE_BD_E1X_RESERVED0_SHIFT 9
uint8_t tcp_flags;
#define ETH_TX_PARSE_BD_E1X_FIN_FLG (0x1<<0) /* BitField tcp_flagsState flags End of data flag */
#define ETH_TX_PARSE_BD_E1X_FIN_FLG_SHIFT 0
#define ETH_TX_PARSE_BD_E1X_SYN_FLG (0x1<<1) /* BitField tcp_flagsState flags Synchronize sequence numbers flag */
#define ETH_TX_PARSE_BD_E1X_SYN_FLG_SHIFT 1
#define ETH_TX_PARSE_BD_E1X_RST_FLG (0x1<<2) /* BitField tcp_flagsState flags Reset connection flag */
#define ETH_TX_PARSE_BD_E1X_RST_FLG_SHIFT 2
#define ETH_TX_PARSE_BD_E1X_PSH_FLG (0x1<<3) /* BitField tcp_flagsState flags Push flag */
#define ETH_TX_PARSE_BD_E1X_PSH_FLG_SHIFT 3
#define ETH_TX_PARSE_BD_E1X_ACK_FLG (0x1<<4) /* BitField tcp_flagsState flags Acknowledgment number valid flag */
#define ETH_TX_PARSE_BD_E1X_ACK_FLG_SHIFT 4
#define ETH_TX_PARSE_BD_E1X_URG_FLG (0x1<<5) /* BitField tcp_flagsState flags Urgent pointer valid flag */
#define ETH_TX_PARSE_BD_E1X_URG_FLG_SHIFT 5
#define ETH_TX_PARSE_BD_E1X_ECE_FLG (0x1<<6) /* BitField tcp_flagsState flags ECN-Echo */
#define ETH_TX_PARSE_BD_E1X_ECE_FLG_SHIFT 6
#define ETH_TX_PARSE_BD_E1X_CWR_FLG (0x1<<7) /* BitField tcp_flagsState flags Congestion Window Reduced */
#define ETH_TX_PARSE_BD_E1X_CWR_FLG_SHIFT 7
uint8_t ip_hlen_w /* IP header length in WORDs */;
uint16_t total_hlen_w /* IP+TCP+ETH */;
uint16_t tcp_pseudo_csum /* Checksum of pseudo header with length field=0 */;
uint16_t lso_mss /* for LSO mode */;
uint16_t ip_id /* for LSO mode */;
uint32_t tcp_send_seq /* for LSO mode */;
};
/*
* Tx parsing BD structure for ETH E2 $$KEEP_ENDIANNESS$$
*/
struct eth_tx_parse_bd_e2
{
union eth_mac_addr_or_tunnel_data data /* union for mac addresses and for tunneling data. considered as tunneling data only if (tunnel_exist == 1). */;
uint32_t parsing_data;
#define ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W (0x7FF<<0) /* BitField parsing_data TCP/UDP header Offset in WORDs from start of packet */
#define ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W_SHIFT 0
#define ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW (0xF<<11) /* BitField parsing_data TCP header size in DOUBLE WORDS */
#define ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW_SHIFT 11
#define ETH_TX_PARSE_BD_E2_IPV6_WITH_EXT_HDR (0x1<<15) /* BitField parsing_data a flag to indicate an ipv6 packet with extension headers. If set on LSO packet, pseudo CS should be placed in TCP CS field without length field */
#define ETH_TX_PARSE_BD_E2_IPV6_WITH_EXT_HDR_SHIFT 15
#define ETH_TX_PARSE_BD_E2_LSO_MSS (0x3FFF<<16) /* BitField parsing_data for LSO mode */
#define ETH_TX_PARSE_BD_E2_LSO_MSS_SHIFT 16
#define ETH_TX_PARSE_BD_E2_ETH_ADDR_TYPE (0x3<<30) /* BitField parsing_data marks ethernet address type (use enum eth_addr_type) */
#define ETH_TX_PARSE_BD_E2_ETH_ADDR_TYPE_SHIFT 30
};
/*
* Tx 2nd parsing BD structure for ETH packet $$KEEP_ENDIANNESS$$
*/
struct eth_tx_parse_2nd_bd
{
uint16_t global_data;
#define ETH_TX_PARSE_2ND_BD_IP_HDR_START_OUTER_W (0xF<<0) /* BitField global_data Outer IP header offset in WORDs (16-bit) from start of packet */
#define ETH_TX_PARSE_2ND_BD_IP_HDR_START_OUTER_W_SHIFT 0
#define ETH_TX_PARSE_2ND_BD_RESERVED0 (0x1<<4) /* BitField global_data should be set with 0 */
#define ETH_TX_PARSE_2ND_BD_RESERVED0_SHIFT 4
#define ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN (0x1<<5) /* BitField global_data */
#define ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN_SHIFT 5
#define ETH_TX_PARSE_2ND_BD_NS_FLG (0x1<<6) /* BitField global_data an optional addition to ECN that protects against accidental or malicious concealment of marked packets from the TCP sender. */
#define ETH_TX_PARSE_2ND_BD_NS_FLG_SHIFT 6
#define ETH_TX_PARSE_2ND_BD_TUNNEL_UDP_EXIST (0x1<<7) /* BitField global_data Set in case UDP header exists in tunnel outer headers. */
#define ETH_TX_PARSE_2ND_BD_TUNNEL_UDP_EXIST_SHIFT 7
#define ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W (0x1F<<8) /* BitField global_data Outer IP header length in WORDs (16-bit). Valid only for IpV4. */
#define ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W_SHIFT 8
#define ETH_TX_PARSE_2ND_BD_RESERVED1 (0x7<<13) /* BitField global_data should be set to 0 */
#define ETH_TX_PARSE_2ND_BD_RESERVED1_SHIFT 13
uint8_t bd_type;
#define ETH_TX_PARSE_2ND_BD_TYPE (0xF<<0) /* BitField bd_type Type of bd (use enum eth_2nd_parse_bd_type) */
#define ETH_TX_PARSE_2ND_BD_TYPE_SHIFT 0
#define ETH_TX_PARSE_2ND_BD_RESERVED2 (0xF<<4) /* BitField bd_type */
#define ETH_TX_PARSE_2ND_BD_RESERVED2_SHIFT 4
uint8_t reserved3;
uint8_t tcp_flags;
#define ETH_TX_PARSE_2ND_BD_FIN_FLG (0x1<<0) /* BitField tcp_flags State flags End of data flag */
#define ETH_TX_PARSE_2ND_BD_FIN_FLG_SHIFT 0
#define ETH_TX_PARSE_2ND_BD_SYN_FLG (0x1<<1) /* BitField tcp_flags State flags Synchronize sequence numbers flag */
#define ETH_TX_PARSE_2ND_BD_SYN_FLG_SHIFT 1
#define ETH_TX_PARSE_2ND_BD_RST_FLG (0x1<<2) /* BitField tcp_flags State flags Reset connection flag */
#define ETH_TX_PARSE_2ND_BD_RST_FLG_SHIFT 2
#define ETH_TX_PARSE_2ND_BD_PSH_FLG (0x1<<3) /* BitField tcp_flags State flags Push flag */
#define ETH_TX_PARSE_2ND_BD_PSH_FLG_SHIFT 3
#define ETH_TX_PARSE_2ND_BD_ACK_FLG (0x1<<4) /* BitField tcp_flags State flags Acknowledgment number valid flag */
#define ETH_TX_PARSE_2ND_BD_ACK_FLG_SHIFT 4
#define ETH_TX_PARSE_2ND_BD_URG_FLG (0x1<<5) /* BitField tcp_flags State flags Urgent pointer valid flag */
#define ETH_TX_PARSE_2ND_BD_URG_FLG_SHIFT 5
#define ETH_TX_PARSE_2ND_BD_ECE_FLG (0x1<<6) /* BitField tcp_flags State flags ECN-Echo */
#define ETH_TX_PARSE_2ND_BD_ECE_FLG_SHIFT 6
#define ETH_TX_PARSE_2ND_BD_CWR_FLG (0x1<<7) /* BitField tcp_flags State flags Congestion Window Reduced */
#define ETH_TX_PARSE_2ND_BD_CWR_FLG_SHIFT 7
uint8_t reserved4;
uint8_t tunnel_udp_hdr_start_w /* Offset (in WORDs) from start of packet to tunnel UDP header. (if exist) */;
uint8_t fw_ip_hdr_to_payload_w /* In IpV4, the length (in WORDs) from the FW IpV4 header start to the payload start. In IpV6, the length (in WORDs) from the FW IpV6 header end to the payload start. However, if extension headers are included, their length is counted here as well. */;
uint16_t fw_ip_csum_wo_len_flags_frag /* For the IP header which is set by the FW, the IP checksum without length, flags and fragment offset. */;
uint16_t hw_ip_id /* The IP ID to be set by HW for LSO packets in tunnel mode. */;
uint32_t tcp_send_seq /* The TCP sequence number for LSO packets. */;
};
/*
* The last BD in the BD memory will hold a pointer to the next BD memory
*/
struct eth_tx_next_bd
{
uint32_t addr_lo /* Single continuous buffer low pointer */;
uint32_t addr_hi /* Single continuous buffer high pointer */;
uint8_t reserved[8] /* keeps same size as other eth tx bd types */;
};
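/*
 * Illustrative sketch (assumption, not from the original source): the
 * chaining element lets the BD ring span several pages; the driver writes
 * the bus address of the next page into the last BD of the current one.
 * last_bd_of_page is assumed to be a pointer to the union eth_tx_bd_types
 * defined below, and next_page_paddr is a hypothetical 64-bit bus address.
 *
 *	struct eth_tx_next_bd *nbd = &last_bd_of_page->next_bd;
 *	nbd->addr_lo = (uint32_t)(next_page_paddr & 0xffffffffULL);
 *	nbd->addr_hi = (uint32_t)(next_page_paddr >> 32);
 */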
/*
* union for 4 Bd types
*/
union eth_tx_bd_types
{
struct eth_tx_start_bd start_bd /* the first bd in a packet */;
struct eth_tx_bd reg_bd /* the common bd */;
struct eth_tx_parse_bd_e1x parse_bd_e1x /* parsing info BD for e1/e1h */;
struct eth_tx_parse_bd_e2 parse_bd_e2 /* parsing info BD for e2 */;
struct eth_tx_parse_2nd_bd parse_2nd_bd /* 2nd parsing info BD */;
struct eth_tx_next_bd next_bd /* Bd that contains the address of the next page */;
};
/*
* array of 13 bds as appears in the eth xstorm context
*/
struct eth_tx_bds_array
{
union eth_tx_bd_types bds[13];
};
/*
* VLAN mode on TX BDs
*/
enum eth_tx_vlan_type
{
X_ETH_NO_VLAN,
X_ETH_OUTBAND_VLAN,
X_ETH_INBAND_VLAN,
X_ETH_FW_ADDED_VLAN /* Driver should not use this! */,
MAX_ETH_TX_VLAN_TYPE};
/*
* Ethernet VLAN filtering mode in E1x
*/
enum eth_vlan_filter_mode
{
ETH_VLAN_FILTER_ANY_VLAN /* Don't filter by vlan */,
ETH_VLAN_FILTER_SPECIFIC_VLAN /* Only the vlan_id is allowed */,
ETH_VLAN_FILTER_CLASSIFY /* Vlan will be added to CAM for classification */,
MAX_ETH_VLAN_FILTER_MODE};
/*
* MAC filtering configuration command header $$KEEP_ENDIANNESS$$
*/
struct mac_configuration_hdr
{
uint8_t length /* number of entries valid in this command (6 bits) */;
uint8_t offset /* offset of the first entry in the list */;
uint16_t client_id /* the client id which this ramrod is sent on. 5b is used. */;
uint32_t echo /* echo value to be sent to driver on event ring */;
};
/*
* MAC address in list for ramrod $$KEEP_ENDIANNESS$$
*/
struct mac_configuration_entry
{
uint16_t lsb_mac_addr /* 2 LSB of MAC address (should be given in big endian - driver should do hton to this number!!!) */;
uint16_t middle_mac_addr /* 2 middle bytes of MAC address (should be given in big endian - driver should do hton to this number!!!) */;
uint16_t msb_mac_addr /* 2 MSB of MAC address (should be given in big endian - driver should do hton to this number!!!) */;
uint16_t vlan_id /* The inner vlan id (12b). Used either in vlan_in_cam for a mac_vlan pair or for vlan filtering */;
uint8_t pf_id /* The pf id, for multi function mode */;
uint8_t flags;
#define MAC_CONFIGURATION_ENTRY_ACTION_TYPE (0x1<<0) /* BitField flags configures the action to be done in cam (used only in slow path handlers) (use enum set_mac_action_type) */
#define MAC_CONFIGURATION_ENTRY_ACTION_TYPE_SHIFT 0
#define MAC_CONFIGURATION_ENTRY_RDMA_MAC (0x1<<1) /* BitField flags If set, this MAC also belongs to RDMA client */
#define MAC_CONFIGURATION_ENTRY_RDMA_MAC_SHIFT 1
#define MAC_CONFIGURATION_ENTRY_VLAN_FILTERING_MODE (0x3<<2) /* BitField flags (use enum eth_vlan_filter_mode) */
#define MAC_CONFIGURATION_ENTRY_VLAN_FILTERING_MODE_SHIFT 2
#define MAC_CONFIGURATION_ENTRY_OVERRIDE_VLAN_REMOVAL (0x1<<4) /* BitField flags 0 - can't remove vlan, 1 - can remove vlan. relevant only to everest1 */
#define MAC_CONFIGURATION_ENTRY_OVERRIDE_VLAN_REMOVAL_SHIFT 4
#define MAC_CONFIGURATION_ENTRY_BROADCAST (0x1<<5) /* BitField flags 0 - not broadcast, 1 - broadcast. relevant only to everest1 */
#define MAC_CONFIGURATION_ENTRY_BROADCAST_SHIFT 5
#define MAC_CONFIGURATION_ENTRY_RESERVED1 (0x3<<6) /* BitField flags */
#define MAC_CONFIGURATION_ENTRY_RESERVED1_SHIFT 6
uint16_t reserved0;
uint32_t clients_bit_vector /* Bit vector for the clients which should receive this MAC. */;
};
/*
* MAC filtering configuration command
*/
struct mac_configuration_cmd
{
struct mac_configuration_hdr hdr /* header */;
struct mac_configuration_entry config_table[64] /* table of 64 MAC configuration entries: addresses and target table entries */;
};
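/*
 * Illustrative sketch (hypothetical MAC value): each 16-bit MAC word must
 * be handed over big endian, so a little-endian driver swaps every word,
 * as the entry comments above require. htons() is assumed from the host
 * environment.
 *
 *	const uint8_t mac[6] = { 0x00, 0x10, 0x18, 0xaa, 0xbb, 0xcc };
 *	struct mac_configuration_entry *e = &cmd->config_table[0];
 *	e->msb_mac_addr    = htons((mac[0] << 8) | mac[1]);
 *	e->middle_mac_addr = htons((mac[2] << 8) | mac[3]);
 *	e->lsb_mac_addr    = htons((mac[4] << 8) | mac[5]);
 *	cmd->hdr.length = 1;
 */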
/*
* Set-MAC command type (in E1x)
*/
enum set_mac_action_type
{
T_ETH_MAC_COMMAND_INVALIDATE,
T_ETH_MAC_COMMAND_SET,
MAX_SET_MAC_ACTION_TYPE};
/*
* Ethernet TPA Modes
*/
enum tpa_mode
{
TPA_LRO /* LRO mode TPA */,
TPA_GRO /* GRO mode TPA */,
MAX_TPA_MODE};
/*
* tpa update ramrod data $$KEEP_ENDIANNESS$$
*/
struct tpa_update_ramrod_data
{
uint8_t update_ipv4 /* none, enable or disable */;
uint8_t update_ipv6 /* none, enable or disable */;
uint8_t client_id /* client id */;
uint8_t max_tpa_queues /* maximal TPA queues allowed for this client */;
uint8_t max_sges_for_packet /* The maximal number of SGEs that can be used for one packet. Depends on MTU and SGE size; must be 0 if SGEs are disabled */;
uint8_t complete_on_both_clients /* If set and the client has different sp_client, completion will be sent to both rings */;
uint8_t dont_verify_rings_pause_thr_flg /* If set, the rings pause thresholds will not be verified by firmware. */;
uint8_t tpa_mode /* TPA mode to use (LRO or GRO) */;
uint16_t sge_buff_size /* Size of the buffers pointed by SGEs */;
uint16_t max_agg_size /* maximal size for the aggregated TPA packets, reported by the host */;
uint32_t sge_page_base_lo /* The address to fetch the next sges from (low) */;
uint32_t sge_page_base_hi /* The address to fetch the next sges from (high) */;
uint16_t sge_pause_thr_low /* number of remaining SGEs below which we send a pause message */;
uint16_t sge_pause_thr_high /* number of remaining SGEs above which we send an un-pause message */;
};
/*
* approximate-match multicast filtering for E1H per function in Tstorm
*/
struct tstorm_eth_approximate_match_multicast_filtering
{
uint32_t mcast_add_hash_bit_array[8] /* Bit array for multicast hash filtering. Each bit corresponds to a hash function result and indicates whether to accept this multicast dst address. */;
};
/*
* Common configuration parameters per function in Tstorm $$KEEP_ENDIANNESS$$
*/
struct tstorm_eth_function_common_config
{
uint16_t config_flags;
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV4_CAPABILITY (0x1<<0) /* BitField config_flags General configuration flags configuration of the port RSS IpV4 2-tuple capability */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV4_CAPABILITY_SHIFT 0
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV4_TCP_CAPABILITY (0x1<<1) /* BitField config_flags General configuration flags configuration of the port RSS IpV4 4-tuple capability */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV4_TCP_CAPABILITY_SHIFT 1
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_CAPABILITY (0x1<<2) /* BitField config_flags General configuration flags configuration of the port RSS IpV6 2-tuple capability */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_CAPABILITY_SHIFT 2
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_TCP_CAPABILITY (0x1<<3) /* BitField config_flags General configuration flags configuration of the port RSS IpV6 4-tuple capability */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_TCP_CAPABILITY_SHIFT 3
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_MODE (0x7<<4) /* BitField config_flags General configuration flags RSS mode of operation (use enum eth_rss_mode) */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_MODE_SHIFT 4
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_VLAN_FILTERING_ENABLE (0x1<<7) /* BitField config_flags General configuration flags 0 - Don't filter by vlan, 1 - Filter according to the vlans specified in mac_filter_config */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_VLAN_FILTERING_ENABLE_SHIFT 7
#define __TSTORM_ETH_FUNCTION_COMMON_CONFIG_RESERVED0 (0xFF<<8) /* BitField config_flags General configuration flags */
#define __TSTORM_ETH_FUNCTION_COMMON_CONFIG_RESERVED0_SHIFT 8
uint8_t rss_result_mask /* The mask for the lower byte of the RSS result - defines which section of the indirection table will be used. To enable the whole table, put 0x7F here */;
uint8_t reserved1;
uint16_t vlan_id[2] /* VLANs of this function. VLAN filtering is determined according to vlan_filtering_enable. */;
};
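/*
 * Illustrative sketch: rss_result_mask selects which slice of the RSS
 * indirection table is addressable from the low byte of the hash result;
 * 0x7F exposes all 128 entries. rss_hash and cfg are hypothetical.
 *
 *	uint8_t ind_idx = (uint8_t)(rss_hash & 0xff) & cfg->rss_result_mask;
 */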
/*
* MAC filtering configuration parameters per port in Tstorm $$KEEP_ENDIANNESS$$
*/
struct tstorm_eth_mac_filter_config
{
uint32_t ucast_drop_all /* bit vector in which the clients which drop all unicast packets are set */;
uint32_t ucast_accept_all /* bit vector in which clients that accept all unicast packets are set */;
uint32_t mcast_drop_all /* bit vector in which the clients which drop all multicast packets are set */;
uint32_t mcast_accept_all /* bit vector in which clients that accept all multicast packets are set */;
uint32_t bcast_accept_all /* bit vector in which clients that accept all broadcast packets are set */;
uint32_t vlan_filter[2] /* bit vector for VLAN filtering. Clients which enforce filtering of vlan[x] should be marked in vlan_filter[x]. In E1 only vlan_filter[1] is checked. The primary vlan is taken from the CAM target table. */;
uint32_t unmatched_unicast /* bit vector in which clients that accept unmatched unicast packets are set */;
};
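/*
 * Illustrative sketch (hypothetical client id): every field above is a
 * per-client bit vector, so switching client N to "accept all multicast"
 * is a pair of single-bit updates.
 *
 *	cfg->mcast_accept_all |= (1U << client_id);
 *	cfg->mcast_drop_all   &= ~(1U << client_id);
 */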
/*
* tx only queue init ramrod data $$KEEP_ENDIANNESS$$
*/
struct tx_queue_init_ramrod_data
{
struct client_init_general_data general /* client init general data */;
struct client_init_tx_data tx /* client init tx data */;
};
/*
* Three RX producers for ETH
*/
struct ustorm_eth_rx_producers
{
#if defined(__BIG_ENDIAN)
uint16_t bd_prod /* Producer of the RX BD ring */;
uint16_t cqe_prod /* Producer of the RX CQE ring */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cqe_prod /* Producer of the RX CQE ring */;
uint16_t bd_prod /* Producer of the RX BD ring */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved;
uint16_t sge_prod /* Producer of the RX SGE ring */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sge_prod /* Producer of the RX SGE ring */;
uint16_t reserved;
#endif
};
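/*
 * Illustrative sketch: the #if blocks above pair the 16-bit producers so
 * that the resulting 32-bit words are identical on either host byte
 * order; the driver just assigns the named fields. Producer values are
 * hypothetical.
 *
 *	struct ustorm_eth_rx_producers prods = { 0 };
 *	prods.bd_prod  = bd_ring_prod;
 *	prods.cqe_prod = cqe_ring_prod;
 *	prods.sge_prod = sge_ring_prod;
 */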
/*
* ABTS info $$KEEP_ENDIANNESS$$
*/
struct fcoe_abts_info
{
uint16_t aborted_task_id /* Task ID to be aborted */;
uint16_t reserved0;
uint32_t reserved1;
};
/*
* Fixed size structure in order to plant it in Union structure $$KEEP_ENDIANNESS$$
*/
struct fcoe_abts_rsp_union
{
uint8_t r_ctl /* Only R_CTL part of the FC header in ABTS ACC or BA_RJT messages is placed */;
uint8_t rsrv[3];
uint32_t abts_rsp_payload[7] /* The payload of the ABTS ACC (12B) or the BA_RJT (4B) */;
};
/*
* 4 regs size $$KEEP_ENDIANNESS$$
*/
struct fcoe_bd_ctx
{
uint32_t buf_addr_hi /* Higher buffer host address */;
uint32_t buf_addr_lo /* Lower buffer host address */;
uint16_t buf_len /* Buffer length (in bytes) */;
uint16_t rsrv0;
uint16_t flags /* BD flags */;
uint16_t rsrv1;
};
/*
* FCoE cached sges context $$KEEP_ENDIANNESS$$
*/
struct fcoe_cached_sge_ctx
{
struct regpair_t cur_buf_addr /* Current buffer address (in initialization it is the first cached buffer) */;
uint16_t cur_buf_rem /* Remaining data in current buffer (in bytes) */;
uint16_t second_buf_rem /* Remaining data in second buffer (in bytes) */;
struct regpair_t second_buf_addr /* Second cached buffer address */;
};
/*
* Cleanup info $$KEEP_ENDIANNESS$$
*/
struct fcoe_cleanup_info
{
uint16_t cleaned_task_id /* Task ID to be cleaned */;
uint16_t rolled_tx_seq_cnt /* Tx sequence count */;
uint32_t rolled_tx_data_offset /* Tx data offset */;
};
/*
* Fcp RSP flags $$KEEP_ENDIANNESS$$
*/
struct fcoe_fcp_rsp_flags
{
uint8_t flags;
#define FCOE_FCP_RSP_FLAGS_FCP_RSP_LEN_VALID (0x1<<0) /* BitField flags */
#define FCOE_FCP_RSP_FLAGS_FCP_RSP_LEN_VALID_SHIFT 0
#define FCOE_FCP_RSP_FLAGS_FCP_SNS_LEN_VALID (0x1<<1) /* BitField flags */
#define FCOE_FCP_RSP_FLAGS_FCP_SNS_LEN_VALID_SHIFT 1
#define FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER (0x1<<2) /* BitField flags */
#define FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER_SHIFT 2
#define FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER (0x1<<3) /* BitField flags */
#define FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER_SHIFT 3
#define FCOE_FCP_RSP_FLAGS_FCP_CONF_REQ (0x1<<4) /* BitField flags */
#define FCOE_FCP_RSP_FLAGS_FCP_CONF_REQ_SHIFT 4
#define FCOE_FCP_RSP_FLAGS_FCP_BIDI_FLAGS (0x7<<5) /* BitField flags */
#define FCOE_FCP_RSP_FLAGS_FCP_BIDI_FLAGS_SHIFT 5
};
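/*
 * Illustrative sketch: decoding residual information from a received
 * FCP_RSP; rsp is assumed to point at the struct fcoe_fcp_rsp_payload
 * defined below, and requested is the hypothetical original I/O size.
 *
 *	uint32_t done = requested;
 *	if (rsp->fcp_flags.flags & FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER)
 *		done = requested - rsp->fcp_resid;    (underrun)
 *	else if (rsp->fcp_flags.flags & FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER)
 *		;    (overrun: fcp_resid bytes could not be transferred)
 */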
/*
* Fcp RSP payload $$KEEP_ENDIANNESS$$
*/
struct fcoe_fcp_rsp_payload
{
struct regpair_t reserved0;
uint32_t fcp_resid;
uint8_t scsi_status_code;
struct fcoe_fcp_rsp_flags fcp_flags;
uint16_t retry_delay_timer;
uint32_t fcp_rsp_len;
uint32_t fcp_sns_len;
};
/*
* Fixed size structure in order to plant it in Union structure $$KEEP_ENDIANNESS$$
*/
struct fcoe_fcp_rsp_union
{
struct fcoe_fcp_rsp_payload payload;
struct regpair_t reserved0;
};
/*
* FC header $$KEEP_ENDIANNESS$$
*/
struct fcoe_fc_hdr
{
uint8_t s_id[3];
uint8_t cs_ctl;
uint8_t d_id[3];
uint8_t r_ctl;
uint16_t seq_cnt;
uint8_t df_ctl;
uint8_t seq_id;
uint8_t f_ctl[3];
uint8_t type;
uint32_t parameters;
uint16_t rx_id;
uint16_t ox_id;
};
/*
* FC header union $$KEEP_ENDIANNESS$$
*/
struct fcoe_mp_rsp_union
{
struct fcoe_fc_hdr fc_hdr /* FC header copied into task context (middle path flows) */;
uint32_t mp_payload_len /* Length of the MP payload that was placed */;
uint32_t rsrv;
};
/*
* Completion information $$KEEP_ENDIANNESS$$
*/
union fcoe_comp_flow_info
{
struct fcoe_fcp_rsp_union fcp_rsp /* FCP_RSP payload */;
struct fcoe_abts_rsp_union abts_rsp /* R_CTL part of the FC header of the ABTS ACC or BA_RJT payload frame */;
struct fcoe_mp_rsp_union mp_rsp /* FC header copied into task context (middle path flows) */;
uint32_t opaque[8];
};
/*
* External ABTS info $$KEEP_ENDIANNESS$$
*/
struct fcoe_ext_abts_info
{
uint32_t rsrv0[6];
struct fcoe_abts_info ctx /* ABTS information. Initialized by Xstorm */;
};
/*
* External cleanup info $$KEEP_ENDIANNESS$$
*/
struct fcoe_ext_cleanup_info
{
uint32_t rsrv0[6];
struct fcoe_cleanup_info ctx /* Cleanup information */;
};
/*
* Fcoe FW Tx sequence context $$KEEP_ENDIANNESS$$
*/
struct fcoe_fw_tx_seq_ctx
{
uint32_t data_offset /* The amount of data transmitted so far (equal to FCP_DATA PARAMETER field) */;
uint16_t seq_cnt /* The last SEQ_CNT transmitted */;
uint16_t rsrv0;
};
/*
* Fcoe external FW Tx sequence context $$KEEP_ENDIANNESS$$
*/
struct fcoe_ext_fw_tx_seq_ctx
{
uint32_t rsrv0[6];
struct fcoe_fw_tx_seq_ctx ctx /* TX sequence context */;
};
/*
* FCoE multiple sges context $$KEEP_ENDIANNESS$$
*/
struct fcoe_mul_sges_ctx
{
struct regpair_t cur_sge_addr /* Current BD address */;
uint16_t cur_sge_off /* Offset in current BD (in bytes) */;
uint8_t cur_sge_idx /* Current BD index in BD list */;
uint8_t sgl_size /* Total number of BDs */;
};
/*
* FCoE external multiple sges context $$KEEP_ENDIANNESS$$
*/
struct fcoe_ext_mul_sges_ctx
{
struct fcoe_mul_sges_ctx mul_sgl /* SGL context */;
struct regpair_t rsrv0;
};
/*
* FCP CMD payload $$KEEP_ENDIANNESS$$
*/
struct fcoe_fcp_cmd_payload
{
uint32_t opaque[8];
};
/*
* Fcp xfr rdy payload $$KEEP_ENDIANNESS$$
*/
struct fcoe_fcp_xfr_rdy_payload
{
uint32_t burst_len;
uint32_t data_ro;
};
/*
* FC frame $$KEEP_ENDIANNESS$$
*/
struct fcoe_fc_frame
{
struct fcoe_fc_hdr fc_hdr;
uint32_t reserved0[2];
};
/*
* FCoE KCQ CQE parameters $$KEEP_ENDIANNESS$$
*/
union fcoe_kcqe_params
{
uint32_t reserved0[4];
};
/*
* FCoE KCQ CQE $$KEEP_ENDIANNESS$$
*/
struct fcoe_kcqe
{
uint32_t fcoe_conn_id /* Drivers connection ID (only 16 bits are used) */;
uint32_t completion_status /* 0=command completed successfully, 1=command failed */;
uint32_t fcoe_conn_context_id /* Context ID of the FCoE connection */;
union fcoe_kcqe_params params /* command-specific parameters */;
uint16_t qe_self_seq /* Self identifying sequence number */;
uint8_t op_code /* FCoE KCQ opcode */;
uint8_t flags;
#define FCOE_KCQE_RESERVED0 (0x7<<0) /* BitField flags */
#define FCOE_KCQE_RESERVED0_SHIFT 0
#define FCOE_KCQE_RAMROD_COMPLETION (0x1<<3) /* BitField flags Everest only - indicates whether this KCQE is a ramrod completion */
#define FCOE_KCQE_RAMROD_COMPLETION_SHIFT 3
#define FCOE_KCQE_LAYER_CODE (0x7<<4) /* BitField flags protocol layer (L2,L3,L4,L5,iSCSI,FCoE) */
#define FCOE_KCQE_LAYER_CODE_SHIFT 4
#define FCOE_KCQE_LINKED_WITH_NEXT (0x1<<7) /* BitField flags Indicates whether this KCQE is linked with the next KCQE */
#define FCOE_KCQE_LINKED_WITH_NEXT_SHIFT 7
};
/*
* FCoE KWQE header $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_header
{
uint8_t op_code /* FCoE KWQE opcode */;
uint8_t flags;
#define FCOE_KWQE_HEADER_RESERVED0 (0xF<<0) /* BitField flags */
#define FCOE_KWQE_HEADER_RESERVED0_SHIFT 0
#define FCOE_KWQE_HEADER_LAYER_CODE (0x7<<4) /* BitField flags protocol layer (L2,L3,L4,L5) */
#define FCOE_KWQE_HEADER_LAYER_CODE_SHIFT 4
#define FCOE_KWQE_HEADER_RESERVED1 (0x1<<7) /* BitField flags */
#define FCOE_KWQE_HEADER_RESERVED1_SHIFT 7
};
/*
* FCoE firmware init request 1 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_init1
{
uint16_t num_tasks /* Number of tasks in global task list */;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t task_list_pbl_addr_lo /* Lower 32-bit of Task List page table */;
uint32_t task_list_pbl_addr_hi /* Higher 32-bit of Task List page table */;
uint32_t dummy_buffer_addr_lo /* Lower 32-bit of dummy buffer */;
uint32_t dummy_buffer_addr_hi /* Higher 32-bit of dummy buffer */;
uint16_t sq_num_wqes /* Number of entries in the Send Queue */;
uint16_t rq_num_wqes /* Number of entries in the Receive Queue */;
uint16_t rq_buffer_log_size /* Log of the size of a single buffer (entry) in the RQ */;
uint16_t cq_num_wqes /* Number of entries in the Completion Queue */;
uint16_t mtu /* Max transmission unit */;
uint8_t num_sessions_log /* Log of the number of sessions */;
uint8_t flags;
#define FCOE_KWQE_INIT1_LOG_PAGE_SIZE (0xF<<0) /* BitField flags log of page size value */
#define FCOE_KWQE_INIT1_LOG_PAGE_SIZE_SHIFT 0
#define FCOE_KWQE_INIT1_LOG_CACHED_PBES_PER_FUNC (0x7<<4) /* BitField flags */
#define FCOE_KWQE_INIT1_LOG_CACHED_PBES_PER_FUNC_SHIFT 4
#define FCOE_KWQE_INIT1_CLASSIFY_FAILED_ALLOWED (0x1<<7) /* BitField flags Special MF mode where classification failure indication from HW is allowed */
#define FCOE_KWQE_INIT1_CLASSIFY_FAILED_ALLOWED_SHIFT 7
};
/*
* FCoE firmware init request 2 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_init2
{
uint8_t hsi_major_version /* Implies a change that breaks the previous HSI */;
uint8_t hsi_minor_version /* Implies a change which does not break the previous HSI */;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t hash_tbl_pbl_addr_lo /* Lower 32-bit of Hash table PBL */;
uint32_t hash_tbl_pbl_addr_hi /* Higher 32-bit of Hash table PBL */;
uint32_t t2_hash_tbl_addr_lo /* Lower 32-bit of T2 Hash table */;
uint32_t t2_hash_tbl_addr_hi /* Higher 32-bit of T2 Hash table */;
uint32_t t2_ptr_hash_tbl_addr_lo /* Lower 32-bit of T2 ptr Hash table */;
uint32_t t2_ptr_hash_tbl_addr_hi /* Higher 32-bit of T2 ptr Hash table */;
uint32_t free_list_count /* T2 free list count */;
};
/*
* FCoE firmware init request 3 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_init3
{
uint16_t reserved0;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t error_bit_map_lo /* 32 lower bits of error bitmap: 1=error, 0=warning */;
uint32_t error_bit_map_hi /* 32 upper bits of error bitmap: 1=error, 0=warning */;
uint8_t perf_config /* 0= no performance acceleration, 1=cached connection, 2=cached tasks, 3=both */;
uint8_t reserved21[3];
uint32_t reserved2[4];
};
/*
* FCoE connection offload request 1 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_conn_offload1
{
uint16_t fcoe_conn_id /* Drivers connection ID. Should be sent in KCQEs to speed-up drivers access to connection data. */;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t sq_addr_lo /* Lower 32-bit of SQ */;
uint32_t sq_addr_hi /* Higher 32-bit of SQ */;
uint32_t rq_pbl_addr_lo /* Lower 32-bit of RQ page table */;
uint32_t rq_pbl_addr_hi /* Higher 32-bit of RQ page table */;
uint32_t rq_first_pbe_addr_lo /* Lower 32-bit of first RQ pbe */;
uint32_t rq_first_pbe_addr_hi /* Higher 32-bit of first RQ pbe */;
uint16_t rq_prod /* Initial RQ producer */;
uint16_t reserved0;
};
/*
* FCoE connection offload request 2 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_conn_offload2
{
uint16_t tx_max_fc_pay_len /* The maximum acceptable FC payload size (Buffer-to-buffer Receive Data_Field size) supported by target, received during both FLOGI and PLOGI, minimum value should be taken */;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t cq_addr_lo /* Lower 32-bit of CQ */;
uint32_t cq_addr_hi /* Higher 32-bit of CQ */;
uint32_t xferq_addr_lo /* Lower 32-bit of XFERQ */;
uint32_t xferq_addr_hi /* Higher 32-bit of XFERQ */;
uint32_t conn_db_addr_lo /* Lower 32-bit of Conn DB (RQ prod and CQ arm bit) */;
uint32_t conn_db_addr_hi /* Higher 32-bit of Conn DB (RQ prod and CQ arm bit) */;
uint32_t reserved1;
};
/*
* FCoE connection offload request 3 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_conn_offload3
{
uint16_t vlan_tag;
#define FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID (0xFFF<<0) /* BitField vlan_tag Vlan id */
#define FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID_SHIFT 0
#define FCOE_KWQE_CONN_OFFLOAD3_CFI (0x1<<12) /* BitField vlan_tag Canonical format indicator */
#define FCOE_KWQE_CONN_OFFLOAD3_CFI_SHIFT 12
#define FCOE_KWQE_CONN_OFFLOAD3_PRIORITY (0x7<<13) /* BitField vlan_tag Vlan priority */
#define FCOE_KWQE_CONN_OFFLOAD3_PRIORITY_SHIFT 13
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint8_t s_id[3] /* Source ID, received during FLOGI */;
uint8_t tx_max_conc_seqs_c3 /* Maximum concurrent Sequences for Class 3 supported by target, received during PLOGI */;
uint8_t d_id[3] /* Destination ID, received after inquiry of the fabric network */;
uint8_t flags;
#define FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS (0x1<<0) /* BitField flags Supporting multiple N_Port IDs indication, received during FLOGI */
#define FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS_SHIFT 0
#define FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES (0x1<<1) /* BitField flags E_D_TOV resolution (0 - msec, 1 - nsec), negotiated in PLOGI */
#define FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES_SHIFT 1
#define FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT (0x1<<2) /* BitField flags Continuously increasing SEQ_CNT indication, received during PLOGI */
#define FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT_SHIFT 2
#define FCOE_KWQE_CONN_OFFLOAD3_B_CONF_REQ (0x1<<3) /* BitField flags Confirmation request supported */
#define FCOE_KWQE_CONN_OFFLOAD3_B_CONF_REQ_SHIFT 3
#define FCOE_KWQE_CONN_OFFLOAD3_B_REC_VALID (0x1<<4) /* BitField flags REC allowed */
#define FCOE_KWQE_CONN_OFFLOAD3_B_REC_VALID_SHIFT 4
#define FCOE_KWQE_CONN_OFFLOAD3_B_C2_VALID (0x1<<5) /* BitField flags Class 2 valid, received during PLOGI */
#define FCOE_KWQE_CONN_OFFLOAD3_B_C2_VALID_SHIFT 5
#define FCOE_KWQE_CONN_OFFLOAD3_B_ACK_0 (0x1<<6) /* BitField flags ACK_0 capability supported by target, received during PLOGI */
#define FCOE_KWQE_CONN_OFFLOAD3_B_ACK_0_SHIFT 6
#define FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG (0x1<<7) /* BitField flags Indicates whether an inner vlan exists */
#define FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG_SHIFT 7
uint32_t reserved;
uint32_t confq_first_pbe_addr_lo /* The first page used when handling CONFQ - low address */;
uint32_t confq_first_pbe_addr_hi /* The first page used when handling CONFQ - high address */;
uint16_t tx_total_conc_seqs /* Total concurrent Sequences for all Classes supported by target, received during PLOGI */;
uint16_t rx_max_fc_pay_len /* The maximum acceptable FC payload size (Buffer-to-buffer Receive Data_Field size) supported by us, sent during FLOGI/PLOGI */;
uint16_t rx_total_conc_seqs /* Total concurrent Sequences for all Classes supported by us, sent during PLOGI */;
uint8_t rx_max_conc_seqs_c3 /* Maximum Concurrent Sequences for Class 3 supported by us, sent during PLOGI */;
uint8_t rx_open_seqs_exch_c3 /* Maximum Open Sequences per Exchange for Class 3 supported by us, sent during PLOGI */;
};
/*
* FCoE connection offload request 4 $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_conn_offload4
{
uint8_t e_d_tov_timer_val /* E_D_TOV timer value in milliseconds/20, negotiated in PLOGI */;
uint8_t reserved2;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint8_t src_mac_addr_lo[2] /* Lower 16-bit of source MAC address */;
uint8_t src_mac_addr_mid[2] /* Mid 16-bit of source MAC address */;
uint8_t src_mac_addr_hi[2] /* Higher 16-bit of source MAC address */;
uint8_t dst_mac_addr_hi[2] /* Higher 16-bit of destination MAC address */;
uint8_t dst_mac_addr_lo[2] /* Lower 16-bit destination MAC address */;
uint8_t dst_mac_addr_mid[2] /* Mid 16-bit destination MAC address */;
uint32_t lcq_addr_lo /* Lower 32-bit of LCQ */;
uint32_t lcq_addr_hi /* Higher 32-bit of LCQ */;
uint32_t confq_pbl_base_addr_lo /* CONFQ PBL low address */;
uint32_t confq_pbl_base_addr_hi /* CONFQ PBL high address */;
};
/*
* FCoE connection enable request $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_conn_enable_disable
{
uint16_t reserved0;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint8_t src_mac_addr_lo[2] /* Lower 16-bit of source MAC address (HBAs MAC address) */;
uint8_t src_mac_addr_mid[2] /* Mid 16-bit of source MAC address (HBAs MAC address) */;
uint8_t src_mac_addr_hi[2] /* Higher 16-bit of source MAC address (HBAs MAC address) */;
uint16_t vlan_tag;
#define FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID (0xFFF<<0) /* BitField vlan_tag Vlan tag Vlan id */
#define FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID_SHIFT 0
#define FCOE_KWQE_CONN_ENABLE_DISABLE_CFI (0x1<<12) /* BitField vlan_tag Vlan tag Canonical format indicator */
#define FCOE_KWQE_CONN_ENABLE_DISABLE_CFI_SHIFT 12
#define FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY (0x7<<13) /* BitField vlan_tag Vlan tag Vlan priority */
#define FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY_SHIFT 13
uint8_t dst_mac_addr_lo[2] /* Lower 16-bit of destination MAC address (FCFs MAC address) */;
uint8_t dst_mac_addr_mid[2] /* Mid 16-bit of destination MAC address (FCFs MAC address) */;
uint8_t dst_mac_addr_hi[2] /* Higher 16-bit of destination MAC address (FCFs MAC address) */;
uint16_t reserved1;
uint8_t s_id[3] /* Source ID, received during FLOGI */;
uint8_t vlan_flag /* Vlan flag */;
uint8_t d_id[3] /* Destination ID, received after inquiry of the fabric network */;
uint8_t reserved3;
uint32_t context_id /* Context ID (cid) of the connection */;
uint32_t conn_id /* FCoE Connection ID */;
uint32_t reserved4;
};
/*
* FCoE connection destroy request $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_conn_destroy
{
uint16_t reserved0;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t context_id /* Context ID (cid) of the connection */;
uint32_t conn_id /* FCoE Connection ID */;
uint32_t reserved1[5];
};
/*
* FCoe destroy request $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_destroy
{
uint16_t reserved0;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t reserved1[7];
};
/*
* FCoe statistics request $$KEEP_ENDIANNESS$$
*/
struct fcoe_kwqe_stat
{
uint16_t reserved0;
struct fcoe_kwqe_header hdr /* KWQ WQE header */;
uint32_t stat_params_addr_lo /* Statistics host address */;
uint32_t stat_params_addr_hi /* Statistics host address */;
uint32_t reserved1[5];
};
/*
* FCoE KWQ WQE $$KEEP_ENDIANNESS$$
*/
union fcoe_kwqe
{
struct fcoe_kwqe_init1 init1;
struct fcoe_kwqe_init2 init2;
struct fcoe_kwqe_init3 init3;
struct fcoe_kwqe_conn_offload1 conn_offload1;
struct fcoe_kwqe_conn_offload2 conn_offload2;
struct fcoe_kwqe_conn_offload3 conn_offload3;
struct fcoe_kwqe_conn_offload4 conn_offload4;
struct fcoe_kwqe_conn_enable_disable conn_enable_disable;
struct fcoe_kwqe_conn_destroy conn_destroy;
struct fcoe_kwqe_destroy destroy;
struct fcoe_kwqe_stat statistics;
};
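/*
 * Illustrative sketch (hypothetical values): a driver fills exactly one
 * member of the union per KWQE and identifies it through the common
 * header. FCOE_KWQE_OPCODE_INIT1 is an assumed opcode constant, not
 * defined in this header.
 *
 *	union fcoe_kwqe kwqe = { 0 };
 *	kwqe.init1.hdr.op_code = FCOE_KWQE_OPCODE_INIT1;
 *	kwqe.init1.num_tasks   = num_tasks;
 *	kwqe.init1.sq_num_wqes = sq_entries;
 *	kwqe.init1.cq_num_wqes = cq_entries;
 */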
/*
* TX SGL context $$KEEP_ENDIANNESS$$
*/
union fcoe_sgl_union_ctx
{
struct fcoe_cached_sge_ctx cached_sge /* Cached SGEs context */;
struct fcoe_ext_mul_sges_ctx sgl /* SGL context */;
uint32_t opaque[5];
};
/*
* Data-In/ELS/BLS information $$KEEP_ENDIANNESS$$
*/
struct fcoe_read_flow_info
{
union fcoe_sgl_union_ctx sgl_ctx /* The SGL that would be used for data placement (20 bytes) */;
uint32_t rsrv0[3];
};
/*
* Fcoe stat context $$KEEP_ENDIANNESS$$
*/
struct fcoe_s_stat_ctx
{
uint8_t flags;
#define FCOE_S_STAT_CTX_ACTIVE (0x1<<0) /* BitField flags Active Sequence indication (0 - not active; 1 - active) */
#define FCOE_S_STAT_CTX_ACTIVE_SHIFT 0
#define FCOE_S_STAT_CTX_ACK_ABORT_SEQ_COND (0x1<<1) /* BitField flags Abort Sequence requested indication */
#define FCOE_S_STAT_CTX_ACK_ABORT_SEQ_COND_SHIFT 1
#define FCOE_S_STAT_CTX_ABTS_PERFORMED (0x1<<2) /* BitField flags ABTS (on Sequence) protocol complete indication (0 - not completed; 1 - completed by Recipient) */
#define FCOE_S_STAT_CTX_ABTS_PERFORMED_SHIFT 2
#define FCOE_S_STAT_CTX_SEQ_TIMEOUT (0x1<<3) /* BitField flags E_D_TOV timeout indication */
#define FCOE_S_STAT_CTX_SEQ_TIMEOUT_SHIFT 3
#define FCOE_S_STAT_CTX_P_RJT (0x1<<4) /* BitField flags P_RJT transmitted indication */
#define FCOE_S_STAT_CTX_P_RJT_SHIFT 4
#define FCOE_S_STAT_CTX_ACK_EOFT (0x1<<5) /* BitField flags ACK (EOFt) transmitted indication (0 - not transmitted; 1 - transmitted) */
#define FCOE_S_STAT_CTX_ACK_EOFT_SHIFT 5
#define FCOE_S_STAT_CTX_RSRV1 (0x3<<6) /* BitField flags */
#define FCOE_S_STAT_CTX_RSRV1_SHIFT 6
};
/*
* Fcoe rx seq context $$KEEP_ENDIANNESS$$
*/
struct fcoe_rx_seq_ctx
{
uint8_t seq_id /* The Sequence ID */;
struct fcoe_s_stat_ctx s_stat /* The Sequence status */;
uint16_t seq_cnt /* The lowest SEQ_CNT received for the Sequence */;
uint32_t low_exp_ro /* Report on the offset at the beginning of the Sequence */;
uint32_t high_exp_ro /* The highest expected relative offset. The next buffer offset to be received in case of XFER_RDY or in FCP_DATA */;
};
/*
* FCoE RX statistics parameters section#0 $$KEEP_ENDIANNESS$$
*/
struct fcoe_rx_stat_params_section0
{
uint32_t fcoe_rx_pkt_cnt /* Number of FCoE packets that were legally received */;
uint32_t fcoe_rx_byte_cnt /* Number of FCoE bytes that were legally received */;
};
/*
* FCoE RX statistics parameters section#1 $$KEEP_ENDIANNESS$$
*/
struct fcoe_rx_stat_params_section1
{
uint32_t fcoe_ver_cnt /* Number of packets with wrong FCoE version */;
uint32_t fcoe_rx_drop_pkt_cnt /* Number of FCoE packets that were dropped */;
};
/*
* FCoE RX statistics parameters section#2 $$KEEP_ENDIANNESS$$
*/
struct fcoe_rx_stat_params_section2
{
uint32_t fc_crc_cnt /* Number of packets with FC CRC error */;
uint32_t eofa_del_cnt /* Number of packets with EOFa delimiter */;
uint32_t miss_frame_cnt /* Number of missing packets */;
uint32_t seq_timeout_cnt /* Number of sequence timeout expirations (E_D_TOV) */;
uint32_t drop_seq_cnt /* Number of Sequences that were dropped */;
uint32_t fcoe_rx_drop_pkt_cnt /* Number of FCoE packets that were dropped */;
uint32_t fcp_rx_pkt_cnt /* Number of FCP packets that were legally received */;
uint32_t reserved0;
};
/*
* Fcoe rx_wr union context $$KEEP_ENDIANNESS$$
*/
union fcoe_rx_wr_union_ctx
{
struct fcoe_read_flow_info read_info /* Data-In/ELS/BLS information */;
union fcoe_comp_flow_info comp_info /* Completion information */;
uint32_t opaque[8];
};
/*
* FCoE SQ element $$KEEP_ENDIANNESS$$
*/
struct fcoe_sqe
{
uint16_t wqe;
#define FCOE_SQE_TASK_ID (0x7FFF<<0) /* BitField wqe The task ID (OX_ID) to be processed */
#define FCOE_SQE_TASK_ID_SHIFT 0
#define FCOE_SQE_TOGGLE_BIT (0x1<<15) /* BitField wqe Toggle bit updated by the driver */
#define FCOE_SQE_TOGGLE_BIT_SHIFT 15
};
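/*
 * Illustrative sketch: the driver posts a task by writing its ID and the
 * current toggle value, flipping the toggle on every ring wrap so stale
 * entries are distinguishable. task_id and cur_toggle are hypothetical
 * driver-side state.
 *
 *	sqe->wqe  = task_id & FCOE_SQE_TASK_ID;
 *	sqe->wqe |= (cur_toggle << FCOE_SQE_TOGGLE_BIT_SHIFT) &
 *	    FCOE_SQE_TOGGLE_BIT;
 */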
/*
* FCoE TX statistics parameters $$KEEP_ENDIANNESS$$
*/
struct fcoe_tx_stat_params
{
uint32_t fcoe_tx_pkt_cnt /* Number of transmitted FCoE packets */;
uint32_t fcoe_tx_byte_cnt /* Number of transmitted FCoE bytes */;
uint32_t fcp_tx_pkt_cnt /* Number of transmitted FCP packets */;
uint32_t reserved0;
};
/*
* FCoE statistics parameters $$KEEP_ENDIANNESS$$
*/
struct fcoe_statistics_params
{
struct fcoe_tx_stat_params tx_stat /* FCoE TX statistics parameters */;
struct fcoe_rx_stat_params_section0 rx_stat0 /* FCoE RX statistics parameters section#0 */;
struct fcoe_rx_stat_params_section1 rx_stat1 /* FCoE RX statistics parameters section#1 */;
struct fcoe_rx_stat_params_section2 rx_stat2 /* FCoE RX statistics parameters section#2 */;
};
/*
* 14 regs $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_tx_only
{
union fcoe_sgl_union_ctx sgl_ctx /* TX SGL context */;
uint32_t rsrv0;
};
/*
* 32 bytes (8 regs) used for TX only purposes $$KEEP_ENDIANNESS$$
*/
union fcoe_tx_wr_rx_rd_union_ctx
{
struct fcoe_fc_frame tx_frame /* Middle-path/ABTS/Data-Out information */;
struct fcoe_fcp_cmd_payload fcp_cmd /* FCP_CMD payload */;
struct fcoe_ext_cleanup_info cleanup /* Task ID to be cleaned */;
struct fcoe_ext_abts_info abts /* Task ID to be aborted */;
struct fcoe_ext_fw_tx_seq_ctx tx_seq /* TX sequence information */;
uint32_t opaque[8];
};
/*
* tce_tx_wr_rx_rd_const $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_tx_wr_rx_rd_const
{
uint8_t init_flags;
#define FCOE_TCE_TX_WR_RX_RD_CONST_TASK_TYPE (0x7<<0) /* BitField init_flags Task type - Write / Read / Middle / Unsolicited / ABTS / Cleanup */
#define FCOE_TCE_TX_WR_RX_RD_CONST_TASK_TYPE_SHIFT 0
#define FCOE_TCE_TX_WR_RX_RD_CONST_DEV_TYPE (0x1<<3) /* BitField init_flags Tape/Disk device indication */
#define FCOE_TCE_TX_WR_RX_RD_CONST_DEV_TYPE_SHIFT 3
#define FCOE_TCE_TX_WR_RX_RD_CONST_CLASS_TYPE (0x1<<4) /* BitField init_flags Class 3/2 indication */
#define FCOE_TCE_TX_WR_RX_RD_CONST_CLASS_TYPE_SHIFT 4
#define FCOE_TCE_TX_WR_RX_RD_CONST_CACHED_SGE (0x3<<5) /* BitField init_flags Num of cached sges (0 - no cached sge) */
#define FCOE_TCE_TX_WR_RX_RD_CONST_CACHED_SGE_SHIFT 5
#define FCOE_TCE_TX_WR_RX_RD_CONST_SUPPORT_REC_TOV (0x1<<7) /* BitField init_flags Support REC_TOV flag, for FW use only */
#define FCOE_TCE_TX_WR_RX_RD_CONST_SUPPORT_REC_TOV_SHIFT 7
uint8_t tx_flags;
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_VALID (0x1<<0) /* BitField tx_flags Both TX and RX processing could read but only the TX could write Indication of TX valid task */
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_VALID_SHIFT 0
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_STATE (0xF<<1) /* BitField tx_flags Both TX and RX processing could read but only the TX could write The TX state of the task */
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_STATE_SHIFT 1
#define FCOE_TCE_TX_WR_RX_RD_CONST_RSRV1 (0x1<<5) /* BitField tx_flags Both TX and RX processing could read but only the TX could write */
#define FCOE_TCE_TX_WR_RX_RD_CONST_RSRV1_SHIFT 5
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_SEQ_INIT (0x1<<6) /* BitField tx_flags Both TX and RX processing could read but only the TX could write TX Sequence initiative indication */
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_SEQ_INIT_SHIFT 6
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_COMP_TRNS (0x1<<7) /* BitField tx_flags Both TX and RX processing could read but only the TX could write Completed full transmission of this task */
#define FCOE_TCE_TX_WR_RX_RD_CONST_TX_COMP_TRNS_SHIFT 7
uint16_t rsrv3;
uint32_t verify_tx_seq /* Sequence counter snapshot in order to verify target did not send FCP_RSP before the actual transmission of PBF from the SGL */;
};
/*
* tce_tx_wr_rx_rd $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_tx_wr_rx_rd
{
union fcoe_tx_wr_rx_rd_union_ctx union_ctx /* 32 (8 regs) bytes used for TX only purposes */;
struct fcoe_tce_tx_wr_rx_rd_const const_ctx /* Constant TX_WR_RX_RD */;
};
/*
* tce_rx_wr_tx_rd_const $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_rx_wr_tx_rd_const
{
uint32_t data_2_trns /* The maximum amount of data that would be transferred in this task */;
uint32_t init_flags;
#define FCOE_TCE_RX_WR_TX_RD_CONST_CID (0xFFFFFF<<0) /* BitField init_flags The CID of the connection (used by the CHIP) */
#define FCOE_TCE_RX_WR_TX_RD_CONST_CID_SHIFT 0
#define FCOE_TCE_RX_WR_TX_RD_CONST_RSRV0 (0xFF<<24) /* BitField init_flags */
#define FCOE_TCE_RX_WR_TX_RD_CONST_RSRV0_SHIFT 24
};
/*
* tce_rx_wr_tx_rd_var $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_rx_wr_tx_rd_var
{
uint16_t rx_flags;
#define FCOE_TCE_RX_WR_TX_RD_VAR_RSRV1 (0xF<<0) /* BitField rx_flags */
#define FCOE_TCE_RX_WR_TX_RD_VAR_RSRV1_SHIFT 0
#define FCOE_TCE_RX_WR_TX_RD_VAR_NUM_RQ_WQE (0x7<<4) /* BitField rx_flags The number of RQ WQEs that were consumed (for sense data only) */
#define FCOE_TCE_RX_WR_TX_RD_VAR_NUM_RQ_WQE_SHIFT 4
#define FCOE_TCE_RX_WR_TX_RD_VAR_CONF_REQ (0x1<<7) /* BitField rx_flags Confirmation request indication */
#define FCOE_TCE_RX_WR_TX_RD_VAR_CONF_REQ_SHIFT 7
#define FCOE_TCE_RX_WR_TX_RD_VAR_RX_STATE (0xF<<8) /* BitField rx_flags The RX state of the task */
#define FCOE_TCE_RX_WR_TX_RD_VAR_RX_STATE_SHIFT 8
#define FCOE_TCE_RX_WR_TX_RD_VAR_EXP_FIRST_FRAME (0x1<<12) /* BitField rx_flags Indicates that the first frame from the target is expected */
#define FCOE_TCE_RX_WR_TX_RD_VAR_EXP_FIRST_FRAME_SHIFT 12
#define FCOE_TCE_RX_WR_TX_RD_VAR_RX_SEQ_INIT (0x1<<13) /* BitField rx_flags RX Sequence initiative indication */
#define FCOE_TCE_RX_WR_TX_RD_VAR_RX_SEQ_INIT_SHIFT 13
#define FCOE_TCE_RX_WR_TX_RD_VAR_RSRV2 (0x1<<14) /* BitField rx_flags */
#define FCOE_TCE_RX_WR_TX_RD_VAR_RSRV2_SHIFT 14
#define FCOE_TCE_RX_WR_TX_RD_VAR_RX_VALID (0x1<<15) /* BitField rx_flags Indication of RX valid task */
#define FCOE_TCE_RX_WR_TX_RD_VAR_RX_VALID_SHIFT 15
uint16_t rx_id /* The RX_ID read from incoming frame and to be used in subsequent transmitting frames */;
struct fcoe_fcp_xfr_rdy_payload fcp_xfr_rdy /* Data-In/ELS/BLS information */;
};
/*
* tce_rx_wr_tx_rd $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_rx_wr_tx_rd
{
struct fcoe_tce_rx_wr_tx_rd_const const_ctx /* Constant part of the RX_WR_TX_RD section */;
struct fcoe_tce_rx_wr_tx_rd_var var_ctx /* Variable part of the RX_WR_TX_RD section */;
};
/*
* tce_rx_only $$KEEP_ENDIANNESS$$
*/
struct fcoe_tce_rx_only
{
struct fcoe_rx_seq_ctx rx_seq_ctx /* The context of current receiving Sequence */;
union fcoe_rx_wr_union_ctx union_ctx /* Read flow info/ Completion flow info */;
};
/*
* task_ctx_entry $$KEEP_ENDIANNESS$$
*/
struct fcoe_task_ctx_entry
{
struct fcoe_tce_tx_only txwr_only /* TX processing shall be the only one to read/write to this section */;
struct fcoe_tce_tx_wr_rx_rd txwr_rxrd /* TX processing shall write and RX shall read from this section */;
struct fcoe_tce_rx_wr_tx_rd rxwr_txrd /* RX processing shall write and TX shall read from this section */;
struct fcoe_tce_rx_only rxwr_only /* RX processing shall be the only one to read/write to this section */;
};
/*
* FCoE XFRQ element $$KEEP_ENDIANNESS$$
*/
struct fcoe_xfrqe
{
uint16_t wqe;
#define FCOE_XFRQE_TASK_ID (0x7FFF<<0) /* BitField wqe The task ID (OX_ID) to be processed */
#define FCOE_XFRQE_TASK_ID_SHIFT 0
#define FCOE_XFRQE_TOGGLE_BIT (0x1<<15) /* BitField wqe Toggle bit updated by the driver */
#define FCOE_XFRQE_TOGGLE_BIT_SHIFT 15
};
/*
* Cached SGEs $$KEEP_ENDIANNESS$$
*/
struct common_fcoe_sgl
{
struct fcoe_bd_ctx sge[3];
};
/*
 * FCoE SQ/XFRQ element
*/
struct fcoe_cached_wqe
{
struct fcoe_sqe sqe /* SQ WQE */;
struct fcoe_xfrqe xfrqe /* XFRQ WQE */;
};
/*
 * FCoE connection enable/disable params passed by driver to FW in FCoE enable ramrod $$KEEP_ENDIANNESS$$
*/
struct fcoe_conn_enable_disable_ramrod_params
{
struct fcoe_kwqe_conn_enable_disable enable_disable_kwqe;
};
/*
* FCoE connection offload params passed by driver to FW in FCoE offload ramrod $$KEEP_ENDIANNESS$$
*/
struct fcoe_conn_offload_ramrod_params
{
struct fcoe_kwqe_conn_offload1 offload_kwqe1;
struct fcoe_kwqe_conn_offload2 offload_kwqe2;
struct fcoe_kwqe_conn_offload3 offload_kwqe3;
struct fcoe_kwqe_conn_offload4 offload_kwqe4;
};
struct ustorm_fcoe_mng_ctx
{
#if defined(__BIG_ENDIAN)
uint8_t mid_seq_proc_flag /* Middle Sequence received processing */;
uint8_t tce_in_cam_flag /* TCE in CAM indication */;
uint8_t tce_on_ior_flag /* TCE on IOR indication (TCE on IORs but not necessarily in CAM) */;
uint8_t en_cached_tce_flag /* TCE cached functionality enabled indication */;
#elif defined(__LITTLE_ENDIAN)
uint8_t en_cached_tce_flag /* TCE cached functionality enabled indication */;
uint8_t tce_on_ior_flag /* TCE on IOR indication (TCE on IORs but not necessarily in CAM) */;
uint8_t tce_in_cam_flag /* TCE in CAM indication */;
uint8_t mid_seq_proc_flag /* Middle Sequence received processing */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t tce_cam_addr /* CAM address of task context */;
uint8_t cached_conn_flag /* Cached locked connection indication */;
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t cached_conn_flag /* Cached locked connection indication */;
uint8_t tce_cam_addr /* CAM address of task context */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t dma_tce_ram_addr /* RAM address of task context when executing DMA operations (read/write) */;
uint16_t tce_ram_addr /* RAM address of task context (might be in cached table or in scratchpad) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t tce_ram_addr /* RAM address of task context (might be in cached table or in scratchpad) */;
uint16_t dma_tce_ram_addr /* RAM address of task context when executing DMA operations (read/write) */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t ox_id /* Last OX_ID that has been used */;
uint16_t wr_done_seq /* Last task write done in the specific connection */;
#elif defined(__LITTLE_ENDIAN)
uint16_t wr_done_seq /* Last task write done in the specific connection */;
uint16_t ox_id /* Last OX_ID that has been used */;
#endif
struct regpair_t task_addr /* Last task address in use */;
};
/*
* Parameters initialized during offloaded according to FLOGI/PLOGI/PRLI and used in FCoE context section
*/
struct ustorm_fcoe_params
{
#if defined(__BIG_ENDIAN)
uint16_t fcoe_conn_id /* The connection ID that would be used by driver to identify the connection */;
uint16_t flags;
#define USTORM_FCOE_PARAMS_B_MUL_N_PORT_IDS (0x1<<0) /* BitField flags Supporting multiple N_Port IDs indication, received during FLOGI */
#define USTORM_FCOE_PARAMS_B_MUL_N_PORT_IDS_SHIFT 0
#define USTORM_FCOE_PARAMS_B_E_D_TOV_RES (0x1<<1) /* BitField flags E_D_TOV resolution (0 - msec, 1 - nsec), negotiated in PLOGI */
#define USTORM_FCOE_PARAMS_B_E_D_TOV_RES_SHIFT 1
#define USTORM_FCOE_PARAMS_B_CONT_INCR_SEQ_CNT (0x1<<2) /* BitField flags Continuously increasing SEQ_CNT indication, received during PLOGI */
#define USTORM_FCOE_PARAMS_B_CONT_INCR_SEQ_CNT_SHIFT 2
#define USTORM_FCOE_PARAMS_B_CONF_REQ (0x1<<3) /* BitField flags Confirmation request supported */
#define USTORM_FCOE_PARAMS_B_CONF_REQ_SHIFT 3
#define USTORM_FCOE_PARAMS_B_REC_VALID (0x1<<4) /* BitField flags REC allowed */
#define USTORM_FCOE_PARAMS_B_REC_VALID_SHIFT 4
#define USTORM_FCOE_PARAMS_B_CQ_TOGGLE_BIT (0x1<<5) /* BitField flags CQ toggle bit */
#define USTORM_FCOE_PARAMS_B_CQ_TOGGLE_BIT_SHIFT 5
#define USTORM_FCOE_PARAMS_B_XFRQ_TOGGLE_BIT (0x1<<6) /* BitField flags XFRQ toggle bit */
#define USTORM_FCOE_PARAMS_B_XFRQ_TOGGLE_BIT_SHIFT 6
#define USTORM_FCOE_PARAMS_B_CONFQ_TOGGLE_BIT (0x1<<7) /* BitField flags CONFQ toggle bit */
#define USTORM_FCOE_PARAMS_B_CONFQ_TOGGLE_BIT_SHIFT 7
#define USTORM_FCOE_PARAMS_RSRV0 (0xFF<<8) /* BitField flags */
#define USTORM_FCOE_PARAMS_RSRV0_SHIFT 8
#elif defined(__LITTLE_ENDIAN)
uint16_t flags;
#define USTORM_FCOE_PARAMS_B_MUL_N_PORT_IDS (0x1<<0) /* BitField flags Supporting multiple N_Port IDs indication, received during FLOGI */
#define USTORM_FCOE_PARAMS_B_MUL_N_PORT_IDS_SHIFT 0
#define USTORM_FCOE_PARAMS_B_E_D_TOV_RES (0x1<<1) /* BitField flags E_D_TOV resolution (0 - msec, 1 - nsec), negotiated in PLOGI */
#define USTORM_FCOE_PARAMS_B_E_D_TOV_RES_SHIFT 1
#define USTORM_FCOE_PARAMS_B_CONT_INCR_SEQ_CNT (0x1<<2) /* BitField flags Continuously increasing SEQ_CNT indication, received during PLOGI */
#define USTORM_FCOE_PARAMS_B_CONT_INCR_SEQ_CNT_SHIFT 2
#define USTORM_FCOE_PARAMS_B_CONF_REQ (0x1<<3) /* BitField flags Confirmation request supported */
#define USTORM_FCOE_PARAMS_B_CONF_REQ_SHIFT 3
#define USTORM_FCOE_PARAMS_B_REC_VALID (0x1<<4) /* BitField flags REC allowed */
#define USTORM_FCOE_PARAMS_B_REC_VALID_SHIFT 4
#define USTORM_FCOE_PARAMS_B_CQ_TOGGLE_BIT (0x1<<5) /* BitField flags CQ toggle bit */
#define USTORM_FCOE_PARAMS_B_CQ_TOGGLE_BIT_SHIFT 5
#define USTORM_FCOE_PARAMS_B_XFRQ_TOGGLE_BIT (0x1<<6) /* BitField flags XFRQ toggle bit */
#define USTORM_FCOE_PARAMS_B_XFRQ_TOGGLE_BIT_SHIFT 6
#define USTORM_FCOE_PARAMS_B_CONFQ_TOGGLE_BIT (0x1<<7) /* BitField flags CONFQ toggle bit */
#define USTORM_FCOE_PARAMS_B_CONFQ_TOGGLE_BIT_SHIFT 7
#define USTORM_FCOE_PARAMS_RSRV0 (0xFF<<8) /* BitField flags */
#define USTORM_FCOE_PARAMS_RSRV0_SHIFT 8
uint16_t fcoe_conn_id /* The connection ID that would be used by driver to identify the connection */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t hc_csdm_byte_en /* Host coalescing Cstorm RAM address byte enable */;
uint8_t func_id /* Function id */;
uint8_t port_id /* Port id */;
uint8_t vnic_id /* Vnic id */;
#elif defined(__LITTLE_ENDIAN)
uint8_t vnic_id /* Vnic id */;
uint8_t port_id /* Port id */;
uint8_t func_id /* Function id */;
uint8_t hc_csdm_byte_en /* Host coalescing Cstorm RAM address byte enable */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t rx_total_conc_seqs /* Total concurrent Sequences for all Classes supported by us, sent during PLOGI */;
uint16_t rx_max_fc_pay_len /* The maximum acceptable FC payload size (Buffer-to-buffer Receive Data_Field size) supported by us, sent during FLOGI/PLOGI */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rx_max_fc_pay_len /* The maximum acceptable FC payload size (Buffer-to-buffer Receive Data_Field size) supported by us, sent during FLOGI/PLOGI */;
uint16_t rx_total_conc_seqs /* Total concurrent Sequences for all Classes supported by us, sent during PLOGI */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t task_pbe_idx_off /* The first PBE for this specific task list in RAM */;
uint8_t task_in_page_log_size /* Number of tasks in page (log 2) */;
uint16_t rx_max_conc_seqs /* Maximum Concurrent Sequences for Class 3 supported by us, sent during PLOGI */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rx_max_conc_seqs /* Maximum Concurrent Sequences for Class 3 supported by us, sent during PLOGI */;
uint8_t task_in_page_log_size /* Number of tasks in page (log 2) */;
uint8_t task_pbe_idx_off /* The first PBE for this specific task list in RAM */;
#endif
};
/*
* FCoE 16-bits index structure
*/
struct fcoe_idx16_fields
{
uint16_t fields;
#define FCOE_IDX16_FIELDS_IDX (0x7FFF<<0) /* BitField fields */
#define FCOE_IDX16_FIELDS_IDX_SHIFT 0
#define FCOE_IDX16_FIELDS_MSB (0x1<<15) /* BitField fields */
#define FCOE_IDX16_FIELDS_MSB_SHIFT 15
};
/*
* FCoE 16-bits index union
*/
union fcoe_idx16_field_union
{
struct fcoe_idx16_fields fields /* Parameters field */;
uint16_t val /* Global value */;
};
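/*
 * Illustrative sketch: the union overlays a 15-bit ring index and a
 * wrap-indicator MSB on one 16-bit word, so plain increments through
 * .val flip the MSB once every 0x8000 entries. prev_val is hypothetical.
 *
 *	union fcoe_idx16_field_union prod;
 *	prod.val = (uint16_t)(prev_val + 1);
 *	ring_slot = prod.val & FCOE_IDX16_FIELDS_IDX;
 */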
/*
* Parameters required for placement according to SGL
*/
struct ustorm_fcoe_data_place_mng
{
#if defined(__BIG_ENDIAN)
uint16_t sge_off;
uint8_t num_sges /* Number of SGEs left to be used on context */;
uint8_t sge_idx /* 0xFF value indicates loading the SGL */;
#elif defined(__LITTLE_ENDIAN)
uint8_t sge_idx /* 0xFF value indicates loading the SGL */;
uint8_t num_sges /* Number of SGEs left to be used on context */;
uint16_t sge_off;
#endif
};
/*
* Parameters required for placement according to SGL
*/
struct ustorm_fcoe_data_place
{
struct ustorm_fcoe_data_place_mng cached_mng /* 0xFF sge_idx value indicates loading the SGL */;
struct fcoe_bd_ctx cached_sge[2];
};
/*
* TX processing shall write and RX processing shall read from this section
*/
union fcoe_u_tce_tx_wr_rx_rd_union
{
struct fcoe_abts_info abts /* ABTS information */;
struct fcoe_cleanup_info cleanup /* Cleanup information */;
struct fcoe_fw_tx_seq_ctx tx_seq_ctx /* TX sequence context */;
uint32_t opaque[2];
};
/*
* TX processing shall write and RX processing shall read from this section
*/
struct fcoe_u_tce_tx_wr_rx_rd
{
union fcoe_u_tce_tx_wr_rx_rd_union union_ctx /* FW DATA_OUT/CLEANUP information */;
struct fcoe_tce_tx_wr_rx_rd_const const_ctx /* TX processing shall write and RX shall read from this section */;
};
struct ustorm_fcoe_tce
{
struct fcoe_u_tce_tx_wr_rx_rd txwr_rxrd /* TX processing shall write and RX shall read from this section */;
struct fcoe_tce_rx_wr_tx_rd rxwr_txrd /* RX processing shall write and TX shall read from this section */;
struct fcoe_tce_rx_only rxwr /* RX processing shall be the only one to read/write to this section */;
};
struct ustorm_fcoe_cache_ctx
{
uint32_t rsrv0;
struct ustorm_fcoe_data_place data_place;
struct ustorm_fcoe_tce tce /* Task context */;
};
/*
* Ustorm FCoE Storm Context
*/
struct ustorm_fcoe_st_context
{
struct ustorm_fcoe_mng_ctx mng_ctx /* Managing the processing of the flow */;
struct ustorm_fcoe_params fcoe_params /* Align to 128 bytes */;
struct regpair_t cq_base_addr /* CQ current page host address */;
struct regpair_t rq_pbl_base /* PBL host address for RQ */;
struct regpair_t rq_cur_page_addr /* RQ current page host address */;
struct regpair_t confq_pbl_base_addr /* Base address of the CONFQ page list */;
struct regpair_t conn_db_base /* Connection data base address in host memory where RQ producer and CQ arm bit reside in */;
struct regpair_t xfrq_base_addr /* XFRQ base host address */;
struct regpair_t lcq_base_addr /* LCQ base host address */;
#if defined(__BIG_ENDIAN)
union fcoe_idx16_field_union rq_cons /* RQ consumer advance for each RQ WQE consuming */;
union fcoe_idx16_field_union rq_prod /* RQ producer update by driver and read by FW (should be initialized to RQ size) */;
#elif defined(__LITTLE_ENDIAN)
union fcoe_idx16_field_union rq_prod /* RQ producer update by driver and read by FW (should be initialized to RQ size) */;
union fcoe_idx16_field_union rq_cons /* RQ consumer advance for each RQ WQE consuming */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t xfrq_prod /* XFRQ producer (No consumer is needed since Q can not be overloaded) */;
uint16_t cq_cons /* CQ consumer copy of last update from driver (Q can not be overloaded) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_cons /* CQ consumer copy of last update from driver (Q can not be overloaded) */;
uint16_t xfrq_prod /* XFRQ producer (No consumer is needed since Q can not be overloaded) */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t lcq_cons /* lcq consumer */;
uint16_t hc_cram_address /* Host coalescing Cstorm RAM address */;
#elif defined(__LITTLE_ENDIAN)
uint16_t hc_cram_address /* Host coalescing Cstorm RAM address */;
uint16_t lcq_cons /* lcq consumer */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t sq_xfrq_lcq_confq_size /* SQ/XFRQ/LCQ/CONFQ size */;
uint16_t confq_prod /* CONFQ producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t confq_prod /* CONFQ producer */;
uint16_t sq_xfrq_lcq_confq_size /* SQ/XFRQ/LCQ/CONFQ size */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t hc_csdm_agg_int /* Host coalescing CSDM aggregative interrupts */;
uint8_t rsrv2;
uint8_t available_rqes /* Available RQEs */;
uint8_t sp_q_flush_cnt /* The remaining number of queues to be flushed (in QM) */;
#elif defined(__LITTLE_ENDIAN)
uint8_t sp_q_flush_cnt /* The remaining number of queues to be flushed (in QM) */;
uint8_t available_rqes /* Available RQEs */;
uint8_t rsrv2;
uint8_t hc_csdm_agg_int /* Host coalescing CSDM aggregative interrupts */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t num_pend_tasks /* Number of pending tasks */;
uint16_t pbf_ack_ram_addr /* PBF TX sequence ACK ram address */;
#elif defined(__LITTLE_ENDIAN)
uint16_t pbf_ack_ram_addr /* PBF TX sequence ACK ram address */;
uint16_t num_pend_tasks /* Number of pending tasks */;
#endif
struct ustorm_fcoe_cache_ctx cache_ctx /* Cached context */;
};
/*
* The FCoE non-aggregative context of Tstorm
*/
struct tstorm_fcoe_st_context
{
struct regpair_t reserved0;
struct regpair_t reserved1;
};
/*
* Ethernet context section
*/
struct xstorm_fcoe_eth_context_section
{
#if defined(__BIG_ENDIAN)
uint8_t remote_addr_4 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_5 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_0 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_1 /* Local Mac Address, used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t local_addr_1 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_0 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_5 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_4 /* Remote Mac Address, used in PBF Header Builder Command */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t remote_addr_0 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_1 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_2 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_3 /* Remote Mac Address, used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t remote_addr_3 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_2 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_1 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_0 /* Remote Mac Address, used in PBF Header Builder Command */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved_vlan_type /* this field is not an absolute must, but the reserved space was here */;
uint16_t params;
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_VLAN_ID (0xFFF<<0) /* BitField params part of PBF Header Builder Command */
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_VLAN_ID_SHIFT 0
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_CFI (0x1<<12) /* BitField params Canonical format indicator, part of PBF Header Builder Command */
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_CFI_SHIFT 12
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY (0x7<<13) /* BitField params part of PBF Header Builder Command */
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY_SHIFT 13
#elif defined(__LITTLE_ENDIAN)
uint16_t params;
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_VLAN_ID (0xFFF<<0) /* BitField params part of PBF Header Builder Command */
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_VLAN_ID_SHIFT 0
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_CFI (0x1<<12) /* BitField params Canonical format indicator, part of PBF Header Builder Command */
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_CFI_SHIFT 12
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY (0x7<<13) /* BitField params part of PBF Header Builder Command */
#define XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY_SHIFT 13
uint16_t reserved_vlan_type /* this field is not an absolute must, but the reserved space was here */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t local_addr_2 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_3 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_4 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_5 /* Local Mac Address, used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t local_addr_5 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_4 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_3 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_2 /* Local Mac Address, used in PBF Header Builder Command */;
#endif
};
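/*
 * Editorial example (a sketch, not part of the original HSI header): the
 * endian-conditional blocks above reorder byte and word members so that the
 * in-memory image of each 32-bit word is identical on big- and little-endian
 * hosts, and every bitfield comes as a MASK/SHIFT macro pair in which the
 * mask is already shifted into position. Assuming only the definitions
 * above, the resulting accessor pattern looks like this:
 */
static inline uint16_t
xstorm_fcoe_eth_get_vlan_id(const struct xstorm_fcoe_eth_context_section *s)
{
	/* mask first (the mask is pre-shifted), then shift down */
	return ((s->params & XSTORM_FCOE_ETH_CONTEXT_SECTION_VLAN_ID) >>
	    XSTORM_FCOE_ETH_CONTEXT_SECTION_VLAN_ID_SHIFT);
}

static inline void
xstorm_fcoe_eth_set_priority(struct xstorm_fcoe_eth_context_section *s,
    uint16_t prio)
{
	/* clear the field, then OR in the new value shifted into place */
	s->params &= ~XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY;
	s->params |= (prio << XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY_SHIFT) &
	    XSTORM_FCOE_ETH_CONTEXT_SECTION_PRIORITY;
}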
/*
* Flags used in FCoE context section - 1 byte
*/
struct xstorm_fcoe_context_flags
{
uint8_t flags;
#define XSTORM_FCOE_CONTEXT_FLAGS_B_PROC_Q (0x3<<0) /* BitField flags The current queue in process */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_PROC_Q_SHIFT 0
#define XSTORM_FCOE_CONTEXT_FLAGS_B_MID_SEQ (0x1<<2) /* BitField flags Middle of Sequence indication */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_MID_SEQ_SHIFT 2
#define XSTORM_FCOE_CONTEXT_FLAGS_B_BLOCK_SQ (0x1<<3) /* BitField flags Indicates whether the SQ is blocked since we are in the middle of ABTS/Cleanup procedure */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_BLOCK_SQ_SHIFT 3
#define XSTORM_FCOE_CONTEXT_FLAGS_B_REC_SUPPORT (0x1<<4) /* BitField flags REC support */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_REC_SUPPORT_SHIFT 4
#define XSTORM_FCOE_CONTEXT_FLAGS_B_SQ_TOGGLE (0x1<<5) /* BitField flags SQ toggle bit */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_SQ_TOGGLE_SHIFT 5
#define XSTORM_FCOE_CONTEXT_FLAGS_B_XFRQ_TOGGLE (0x1<<6) /* BitField flags XFRQ toggle bit */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_XFRQ_TOGGLE_SHIFT 6
#define XSTORM_FCOE_CONTEXT_FLAGS_B_VNTAG_VLAN (0x1<<7) /* BitField flags Are we using VNTag inner vlan - in this case we have to read it on every VNTag version change */
#define XSTORM_FCOE_CONTEXT_FLAGS_B_VNTAG_VLAN_SHIFT 7
};
struct xstorm_fcoe_tce
{
struct fcoe_tce_tx_only txwr /* TX processing shall be the only one to read/write to this section */;
struct fcoe_tce_tx_wr_rx_rd txwr_rxrd /* TX processing shall write and RX processing shall read from this section */;
};
/*
* FCP_DATA parameters required for transmission
*/
struct xstorm_fcoe_fcp_data
{
uint32_t io_rem /* IO remainder */;
#if defined(__BIG_ENDIAN)
uint16_t cached_sge_off;
uint8_t cached_num_sges /* Number of SGEs on context */;
uint8_t cached_sge_idx /* 0xFF value indicates that the SGL is being loaded */;
#elif defined(__LITTLE_ENDIAN)
uint8_t cached_sge_idx /* 0xFF value indicates that the SGL is being loaded */;
uint8_t cached_num_sges /* Number of SGEs on context */;
uint16_t cached_sge_off;
#endif
uint32_t buf_addr_hi_0 /* Higher buffer host address */;
uint32_t buf_addr_lo_0 /* Lower buffer host address */;
#if defined(__BIG_ENDIAN)
uint16_t num_of_pending_tasks /* Num of pending tasks */;
uint16_t buf_len_0 /* Buffer length (in bytes) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t buf_len_0 /* Buffer length (in bytes) */;
uint16_t num_of_pending_tasks /* Num of pending tasks */;
#endif
uint32_t buf_addr_hi_1 /* Higher buffer host address */;
uint32_t buf_addr_lo_1 /* Lower buffer host address */;
#if defined(__BIG_ENDIAN)
uint16_t task_pbe_idx_off /* Task pbe index offset */;
uint16_t buf_len_1 /* Buffer length (in bytes) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t buf_len_1 /* Buffer length (in bytes) */;
uint16_t task_pbe_idx_off /* Task pbe index offset */;
#endif
uint32_t buf_addr_hi_2 /* Higher buffer host address */;
uint32_t buf_addr_lo_2 /* Lower buffer host address */;
#if defined(__BIG_ENDIAN)
uint16_t ox_id /* OX_ID */;
uint16_t buf_len_2 /* Buffer length (in bytes) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t buf_len_2 /* Buffer length (in bytes) */;
uint16_t ox_id /* OX_ID */;
#endif
};
/*
* Continuation of Flags used in FCoE context section - 1 byte
*/
struct xstorm_fcoe_context_flags_cont
{
uint8_t flags;
#define XSTORM_FCOE_CONTEXT_FLAGS_CONT_B_CONFQ_TOGGLE (0x1<<0) /* BitField flags CONFQ toggle bit */
#define XSTORM_FCOE_CONTEXT_FLAGS_CONT_B_CONFQ_TOGGLE_SHIFT 0
#define XSTORM_FCOE_CONTEXT_FLAGS_CONT_VLAN_FLAG (0x1<<1) /* BitField flags Indicates whether any inner vlan exists */
#define XSTORM_FCOE_CONTEXT_FLAGS_CONT_VLAN_FLAG_SHIFT 1
#define XSTORM_FCOE_CONTEXT_FLAGS_CONT_RESERVED (0x3F<<2) /* BitField flags */
#define XSTORM_FCOE_CONTEXT_FLAGS_CONT_RESERVED_SHIFT 2
};
/*
* vlan configuration
*/
struct xstorm_fcoe_vlan_conf
{
uint8_t vlan_conf;
#define XSTORM_FCOE_VLAN_CONF_INNER_VLAN_PRIORITY (0x7<<0) /* BitField vlan_conf Original inner vlan priority */
#define XSTORM_FCOE_VLAN_CONF_INNER_VLAN_PRIORITY_SHIFT 0
#define XSTORM_FCOE_VLAN_CONF_INNER_VLAN_FLAG (0x1<<3) /* BitField vlan_conf Original inner vlan flag */
#define XSTORM_FCOE_VLAN_CONF_INNER_VLAN_FLAG_SHIFT 3
#define XSTORM_FCOE_VLAN_CONF_OUTER_VLAN_PRIORITY (0x7<<4) /* BitField vlan_conf Original outer vlan priority */
#define XSTORM_FCOE_VLAN_CONF_OUTER_VLAN_PRIORITY_SHIFT 4
#define XSTORM_FCOE_VLAN_CONF_RESERVED (0x1<<7) /* BitField vlan_conf */
#define XSTORM_FCOE_VLAN_CONF_RESERVED_SHIFT 7
};
/*
* FCoE 16-bits vlan structure
*/
struct fcoe_vlan_fields
{
uint16_t fields;
#define FCOE_VLAN_FIELDS_VID (0xFFF<<0) /* BitField fields */
#define FCOE_VLAN_FIELDS_VID_SHIFT 0
#define FCOE_VLAN_FIELDS_CLI (0x1<<12) /* BitField fields */
#define FCOE_VLAN_FIELDS_CLI_SHIFT 12
#define FCOE_VLAN_FIELDS_PRI (0x7<<13) /* BitField fields */
#define FCOE_VLAN_FIELDS_PRI_SHIFT 13
};
/*
* FCoE 16-bits vlan union
*/
union fcoe_vlan_field_union
{
struct fcoe_vlan_fields fields /* Parameters field */;
uint16_t val /* Global value */;
};
/*
* FCoE 16-bits vlan, vif union
*/
union fcoe_vlan_vif_field_union
{
union fcoe_vlan_field_union vlan /* Vlan */;
uint16_t vif /* VIF */;
};
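/*
 * Editorial example (a sketch, not part of the original HSI header): the two
 * unions above let the same 16 bits be read either as a raw value (val/vif)
 * or as the VID/CLI/PRI subfields. Assuming only the types above:
 */
static inline uint16_t
fcoe_vlan_vif_get_vid(const union fcoe_vlan_vif_field_union *u)
{
	/* view the 16 bits as a vlan tag and extract the 12-bit VID */
	return ((u->vlan.fields.fields & FCOE_VLAN_FIELDS_VID) >>
	    FCOE_VLAN_FIELDS_VID_SHIFT);
}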
/*
* FCoE context section
*/
struct xstorm_fcoe_context_section
{
#if defined(__BIG_ENDIAN)
uint8_t cs_ctl /* cs ctl */;
uint8_t s_id[3] /* Source ID, received during FLOGI */;
#elif defined(__LITTLE_ENDIAN)
uint8_t s_id[3] /* Source ID, received during FLOGI */;
uint8_t cs_ctl /* cs ctl */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t rctl /* rctl */;
uint8_t d_id[3] /* Destination ID, received after inquiry of the fabric network */;
#elif defined(__LITTLE_ENDIAN)
uint8_t d_id[3] /* Destination ID, received after inquiry of the fabric network */;
uint8_t rctl /* rctl */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t sq_xfrq_lcq_confq_size /* SQ/XFRQ/LCQ/CONFQ size */;
uint16_t tx_max_fc_pay_len /* The maximum acceptable FC payload size (Buffer-to-buffer Receive Data_Field size) supported by the target, received during both FLOGI and PLOGI; the minimum of the two values should be taken */;
#elif defined(__LITTLE_ENDIAN)
uint16_t tx_max_fc_pay_len /* The maximum acceptable FC payload size (Buffer-to-buffer Receive Data_Field size) supported by the target, received during both FLOGI and PLOGI; the minimum of the two values should be taken */;
uint16_t sq_xfrq_lcq_confq_size /* SQ/XFRQ/LCQ/CONFQ size */;
#endif
uint32_t lcq_prod /* LCQ producer value */;
#if defined(__BIG_ENDIAN)
uint8_t port_id /* Port ID */;
uint8_t func_id /* Function ID */;
uint8_t seq_id /* SEQ ID counter to be used in transmitted FC header */;
struct xstorm_fcoe_context_flags tx_flags;
#elif defined(__LITTLE_ENDIAN)
struct xstorm_fcoe_context_flags tx_flags;
uint8_t seq_id /* SEQ ID counter to be used in transmitted FC header */;
uint8_t func_id /* Function ID */;
uint8_t port_id /* Port ID */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t mtu /* MTU */;
uint8_t func_mode /* Function mode */;
uint8_t vnic_id /* Vnic ID */;
#elif defined(__LITTLE_ENDIAN)
uint8_t vnic_id /* Vnic ID */;
uint8_t func_mode /* Function mode */;
uint16_t mtu /* MTU */;
#endif
struct regpair_t confq_curr_page_addr /* The current page of CONFQ to be processed */;
struct fcoe_cached_wqe cached_wqe[8] /* Up to 8 SQ/XFRQ WQEs read in one shot */;
struct regpair_t lcq_base_addr /* The page address which the LCQ resides in host memory */;
struct xstorm_fcoe_tce tce /* TX section task context */;
struct xstorm_fcoe_fcp_data fcp_data /* The parameters required for FCP_DATA Sequences transmission */;
#if defined(__BIG_ENDIAN)
uint8_t tx_max_conc_seqs_c3 /* Maximum concurrent Sequences for Class 3 supported by target, received during PLOGI */;
struct xstorm_fcoe_context_flags_cont tx_flags_cont;
uint8_t dcb_val /* DCB value - used to detect when the DCB info changes */;
uint8_t data_pb_cmd_size /* Data pb cmd size */;
#elif defined(__LITTLE_ENDIAN)
uint8_t data_pb_cmd_size /* Data pb cmd size */;
uint8_t dcb_val /* DCB value - used to detect when the DCB info changes */;
struct xstorm_fcoe_context_flags_cont tx_flags_cont;
uint8_t tx_max_conc_seqs_c3 /* Maximum concurrent Sequences for Class 3 supported by target, received during PLOGI */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t fcoe_tx_stat_params_ram_addr /* stat Ram Addr */;
uint16_t fcoe_tx_fc_seq_ram_addr /* Tx FC sequence Ram Addr */;
#elif defined(__LITTLE_ENDIAN)
uint16_t fcoe_tx_fc_seq_ram_addr /* Tx FC sequence Ram Addr */;
uint16_t fcoe_tx_stat_params_ram_addr /* stat Ram Addr */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t fcp_cmd_line_credit;
uint8_t eth_hdr_size /* Ethernet header size without eth type */;
uint16_t pbf_addr /* PBF addr */;
#elif defined(__LITTLE_ENDIAN)
uint16_t pbf_addr /* PBF addr */;
uint8_t eth_hdr_size /* Ethernet header size without eth type */;
uint8_t fcp_cmd_line_credit;
#endif
#if defined(__BIG_ENDIAN)
union fcoe_vlan_vif_field_union multi_func_val /* Outer vlan vif union */;
uint8_t page_log_size /* Page log size */;
struct xstorm_fcoe_vlan_conf orig_vlan_conf /* original vlan configuration, used when we switch from DCB enabled to DCB disabled */;
#elif defined(__LITTLE_ENDIAN)
struct xstorm_fcoe_vlan_conf orig_vlan_conf /* original vlan configuration, used when we switch from DCB enabled to DCB disabled */;
uint8_t page_log_size /* Page log size */;
union fcoe_vlan_vif_field_union multi_func_val /* Outer vlan vif union */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t fcp_cmd_frame_size /* FCP_CMD frame size */;
uint16_t pbf_addr_ff /* PBF addr with ff */;
#elif defined(__LITTLE_ENDIAN)
uint16_t pbf_addr_ff /* PBF addr with ff */;
uint16_t fcp_cmd_frame_size /* FCP_CMD frame size */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t vlan_num /* Vlan number */;
uint8_t cos /* Cos */;
uint8_t cache_xfrq_cons /* Cache xferq consumer */;
uint8_t cache_sq_cons /* Cache sq consumer */;
#elif defined(__LITTLE_ENDIAN)
uint8_t cache_sq_cons /* Cache sq consumer */;
uint8_t cache_xfrq_cons /* Cache xferq consumer */;
uint8_t cos /* Cos */;
uint8_t vlan_num /* Vlan number */;
#endif
uint32_t verify_tx_seq /* Sequence number of last transmitted sequence in order to verify target did not send FCP_RSP before the actual transmission of PBF from the SGL */;
};
/*
* Xstorm FCoE Storm Context
*/
struct xstorm_fcoe_st_context
{
struct xstorm_fcoe_eth_context_section eth;
struct xstorm_fcoe_context_section fcoe;
};
/*
* Fcoe connection context
*/
struct fcoe_context
{
struct ustorm_fcoe_st_context ustorm_st_context /* Ustorm storm context */;
struct tstorm_fcoe_st_context tstorm_st_context /* Tstorm storm context */;
struct xstorm_fcoe_ag_context xstorm_ag_context /* Xstorm aggregative context */;
struct tstorm_fcoe_ag_context tstorm_ag_context /* Tstorm aggregative context */;
struct ustorm_fcoe_ag_context ustorm_ag_context /* Ustorm aggregative context */;
struct timers_block_context timers_context /* Timers block context */;
struct xstorm_fcoe_st_context xstorm_st_context /* Xstorm storm context */;
};
/*
* FCoE init params passed by driver to FW in FCoE init ramrod $$KEEP_ENDIANNESS$$
*/
struct fcoe_init_ramrod_params
{
struct fcoe_kwqe_init1 init_kwqe1;
struct fcoe_kwqe_init2 init_kwqe2;
struct fcoe_kwqe_init3 init_kwqe3;
struct regpair_t eq_pbl_base /* Physical address of PBL */;
uint32_t eq_pbl_size /* PBL size */;
uint32_t reserved2;
uint16_t eq_prod /* EQ producer */;
uint16_t sb_num /* Status block number */;
uint8_t sb_id /* Status block id (EQ consumer) */;
uint8_t reserved0;
uint16_t reserved1;
};
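/*
 * Editorial sketch (not part of the original HSI header): filling the init
 * ramrod parameters from an already-built EQ page base list. It assumes
 * regpair_t exposes the hi/lo members used elsewhere in this header; the
 * function name, the caller-supplied address halves and the initial
 * producer value are illustrative, not the verified driver flow.
 */
static inline void
fcoe_fill_init_ramrod(struct fcoe_init_ramrod_params *p, uint32_t pbl_hi,
    uint32_t pbl_lo, uint32_t pbl_size, uint16_t eq_prod, uint16_t sb_num,
    uint8_t sb_id)
{
	p->eq_pbl_base.hi = pbl_hi;	/* physical address of the EQ PBL */
	p->eq_pbl_base.lo = pbl_lo;
	p->eq_pbl_size = pbl_size;
	p->eq_prod = eq_prod;		/* initial EQ producer */
	p->sb_num = sb_num;		/* status block number */
	p->sb_id = sb_id;		/* status block id (EQ consumer) */
	p->reserved0 = 0;
	p->reserved1 = 0;
	p->reserved2 = 0;
}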
/*
* FCoE statistics params buffer passed by driver to FW in FCoE statistics ramrod $$KEEP_ENDIANNESS$$
*/
struct fcoe_stat_ramrod_params
{
struct fcoe_kwqe_stat stat_kwqe;
};
/*
* CQ DB CQ producer and pending completion counter
*/
struct iscsi_cq_db_prod_pnd_cmpltn_cnt
{
#if defined(__BIG_ENDIAN)
uint16_t cntr /* CQ pending completion counter */;
uint16_t prod /* Ustorm CQ producer, updated by Ustorm */;
#elif defined(__LITTLE_ENDIAN)
uint16_t prod /* Ustorm CQ producer, updated by Ustorm */;
uint16_t cntr /* CQ pending completion counter */;
#endif
};
/*
* CQ DB CQ producer and pending completion counter array
*/
struct iscsi_cq_db_prod_pnd_cmpltn_cnt_arr
{
struct iscsi_cq_db_prod_pnd_cmpltn_cnt prod_pend_comp[8] /* CQ producer and pending completion counter array */;
};
/*
* CQ DB pending completion ITT array
*/
struct iscsi_cq_db_pnd_comp_itt_arr
{
uint16_t itt[8] /* CQ pending completion ITT array */;
};
/*
* Cstorm CQ sequence to notify array, updated by driver
*/
struct iscsi_cq_db_sqn_2_notify_arr
{
uint16_t sqn[8] /* Cstorm CQ sequence to notify array, updated by driver */;
};
/*
* CQ DB
*/
struct iscsi_cq_db
{
struct iscsi_cq_db_prod_pnd_cmpltn_cnt_arr cq_u_prod_pend_comp_ctr_arr /* Ustorm CQ producer and pending completion counter array, updated by Ustorm */;
struct iscsi_cq_db_pnd_comp_itt_arr cq_c_pend_comp_itt_arr /* Cstorm CQ pending completion ITT array, updated by Cstorm */;
struct iscsi_cq_db_sqn_2_notify_arr cq_drv_sqn_2_notify_arr /* Cstorm CQ sequence to notify array, updated by driver */;
uint32_t reserved[4] /* 16 byte alignment */;
};
/*
* iSCSI KCQ CQE parameters
*/
union iscsi_kcqe_params
{
uint32_t reserved0[4];
};
/*
* iSCSI KCQ CQE
*/
struct iscsi_kcqe
{
uint32_t iscsi_conn_id /* Driver's connection ID (only 16 bits are used) */;
uint32_t completion_status /* 0=command completed successfully, 1=command failed */;
uint32_t iscsi_conn_context_id /* Context ID of the iSCSI connection */;
union iscsi_kcqe_params params /* command-specific parameters */;
#if defined(__BIG_ENDIAN)
uint8_t flags;
#define ISCSI_KCQE_RESERVED0 (0x7<<0) /* BitField flags */
#define ISCSI_KCQE_RESERVED0_SHIFT 0
#define ISCSI_KCQE_RAMROD_COMPLETION (0x1<<3) /* BitField flags Everest only - indicates whether this KCQE is a ramrod completion */
#define ISCSI_KCQE_RAMROD_COMPLETION_SHIFT 3
#define ISCSI_KCQE_LAYER_CODE (0x7<<4) /* BitField flags protocol layer (L2,L3,L4,L5,iSCSI) */
#define ISCSI_KCQE_LAYER_CODE_SHIFT 4
#define ISCSI_KCQE_LINKED_WITH_NEXT (0x1<<7) /* BitField flags Indicates whether this KCQE is linked with the next KCQE */
#define ISCSI_KCQE_LINKED_WITH_NEXT_SHIFT 7
uint8_t op_code /* iSCSI KCQ opcode */;
uint16_t qe_self_seq /* Self identifying sequence number */;
#elif defined(__LITTLE_ENDIAN)
uint16_t qe_self_seq /* Self identifying sequence number */;
uint8_t op_code /* iSCSI KCQ opcode */;
uint8_t flags;
#define ISCSI_KCQE_RESERVED0 (0x7<<0) /* BitField flags */
#define ISCSI_KCQE_RESERVED0_SHIFT 0
#define ISCSI_KCQE_RAMROD_COMPLETION (0x1<<3) /* BitField flags Everest only - indicates whether this KCQE is a ramrod completion */
#define ISCSI_KCQE_RAMROD_COMPLETION_SHIFT 3
#define ISCSI_KCQE_LAYER_CODE (0x7<<4) /* BitField flags protocol layer (L2,L3,L4,L5,iSCSI) */
#define ISCSI_KCQE_LAYER_CODE_SHIFT 4
#define ISCSI_KCQE_LINKED_WITH_NEXT (0x1<<7) /* BitField flags Indicates whether this KCQE is linked with the next KCQE */
#define ISCSI_KCQE_LINKED_WITH_NEXT_SHIFT 7
#endif
};
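/*
 * Editorial sketch (not part of the original HSI header): decoding a
 * received KCQE with the masks defined above. The helper names are
 * illustrative; treating completion_status 0 as success follows the field
 * comment.
 */
static inline int
iscsi_kcqe_is_ramrod_completion(const struct iscsi_kcqe *kcqe)
{
	/* Everest only: set when this KCQE completes a ramrod */
	return ((kcqe->flags & ISCSI_KCQE_RAMROD_COMPLETION) != 0);
}

static inline uint8_t
iscsi_kcqe_layer_code(const struct iscsi_kcqe *kcqe)
{
	/* protocol layer (L2,L3,L4,L5,iSCSI) carried in bits 4..6 */
	return ((kcqe->flags & ISCSI_KCQE_LAYER_CODE) >>
	    ISCSI_KCQE_LAYER_CODE_SHIFT);
}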
/*
* iSCSI KWQE header
*/
struct iscsi_kwqe_header
{
#if defined(__BIG_ENDIAN)
uint8_t flags;
#define ISCSI_KWQE_HEADER_RESERVED0 (0xF<<0) /* BitField flags */
#define ISCSI_KWQE_HEADER_RESERVED0_SHIFT 0
#define ISCSI_KWQE_HEADER_LAYER_CODE (0x7<<4) /* BitField flags protocol layer (L2,L3,L4,L5,iSCSI) */
#define ISCSI_KWQE_HEADER_LAYER_CODE_SHIFT 4
#define ISCSI_KWQE_HEADER_RESERVED1 (0x1<<7) /* BitField flags */
#define ISCSI_KWQE_HEADER_RESERVED1_SHIFT 7
uint8_t op_code /* iSCSI KWQE opcode */;
#elif defined(__LITTLE_ENDIAN)
uint8_t op_code /* iSCSI KWQE opcode */;
uint8_t flags;
#define ISCSI_KWQE_HEADER_RESERVED0 (0xF<<0) /* BitField flags */
#define ISCSI_KWQE_HEADER_RESERVED0_SHIFT 0
#define ISCSI_KWQE_HEADER_LAYER_CODE (0x7<<4) /* BitField flags protocol layer (L2,L3,L4,L5,iSCSI) */
#define ISCSI_KWQE_HEADER_LAYER_CODE_SHIFT 4
#define ISCSI_KWQE_HEADER_RESERVED1 (0x1<<7) /* BitField flags */
#define ISCSI_KWQE_HEADER_RESERVED1_SHIFT 7
#endif
};
/*
* iSCSI firmware init request 1
*/
struct iscsi_kwqe_init1
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
uint8_t hsi_version /* HSI version number */;
uint8_t num_cqs /* Number of completion queues */;
#elif defined(__LITTLE_ENDIAN)
uint8_t num_cqs /* Number of completion queues */;
uint8_t hsi_version /* HSI version number */;
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
#endif
uint32_t dummy_buffer_addr_lo /* Lower 32-bit of dummy buffer - Teton only */;
uint32_t dummy_buffer_addr_hi /* Higher 32-bit of dummy buffer - Teton only */;
#if defined(__BIG_ENDIAN)
uint16_t num_ccells_per_conn /* Number of ccells per connection */;
uint16_t num_tasks_per_conn /* Number of tasks per connection */;
#elif defined(__LITTLE_ENDIAN)
uint16_t num_tasks_per_conn /* Number of tasks per connection */;
uint16_t num_ccells_per_conn /* Number of ccells per connection */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t sq_wqes_per_page /* Number of work entries in a single page of SQ */;
uint16_t sq_num_wqes /* Number of entries in the Send Queue */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_num_wqes /* Number of entries in the Send Queue */;
uint16_t sq_wqes_per_page /* Number of work entries in a single page of SQ */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t cq_log_wqes_per_page /* Log of number of work entries in a single page of CQ */;
uint8_t flags;
#define ISCSI_KWQE_INIT1_PAGE_SIZE (0xF<<0) /* BitField flags page size code */
#define ISCSI_KWQE_INIT1_PAGE_SIZE_SHIFT 0
#define ISCSI_KWQE_INIT1_DELAYED_ACK_ENABLE (0x1<<4) /* BitField flags if set, delayed ack is enabled */
#define ISCSI_KWQE_INIT1_DELAYED_ACK_ENABLE_SHIFT 4
#define ISCSI_KWQE_INIT1_KEEP_ALIVE_ENABLE (0x1<<5) /* BitField flags if set, keep alive is enabled */
#define ISCSI_KWQE_INIT1_KEEP_ALIVE_ENABLE_SHIFT 5
#define ISCSI_KWQE_INIT1_RESERVED1 (0x3<<6) /* BitField flags */
#define ISCSI_KWQE_INIT1_RESERVED1_SHIFT 6
uint16_t cq_num_wqes /* Number of entries in the Completion Queue */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_num_wqes /* Number of entries in the Completion Queue */;
uint8_t flags;
#define ISCSI_KWQE_INIT1_PAGE_SIZE (0xF<<0) /* BitField flags page size code */
#define ISCSI_KWQE_INIT1_PAGE_SIZE_SHIFT 0
#define ISCSI_KWQE_INIT1_DELAYED_ACK_ENABLE (0x1<<4) /* BitField flags if set, delayed ack is enabled */
#define ISCSI_KWQE_INIT1_DELAYED_ACK_ENABLE_SHIFT 4
#define ISCSI_KWQE_INIT1_KEEP_ALIVE_ENABLE (0x1<<5) /* BitField flags if set, keep alive is enabled */
#define ISCSI_KWQE_INIT1_KEEP_ALIVE_ENABLE_SHIFT 5
#define ISCSI_KWQE_INIT1_RESERVED1 (0x3<<6) /* BitField flags */
#define ISCSI_KWQE_INIT1_RESERVED1_SHIFT 6
uint8_t cq_log_wqes_per_page /* Log of number of work entries in a single page of CQ */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t cq_num_pages /* Number of pages in CQ page table */;
uint16_t sq_num_pages /* Number of pages in SQ page table */;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_num_pages /* Number of pages in SQ page table */;
uint16_t cq_num_pages /* Number of pages in CQ page table */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t rq_buffer_size /* Size of a single buffer (entry) in the RQ */;
uint16_t rq_num_wqes /* Number of entries in the Receive Queue */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rq_num_wqes /* Number of entries in the Receive Queue */;
uint16_t rq_buffer_size /* Size of a single buffer (entry) in the RQ */;
#endif
};
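/*
 * Editorial sketch (not part of the original HSI header):
 * cq_log_wqes_per_page above is the base-2 log of the WQE count per CQ page,
 * so a driver needs a small ilog2 helper when building this request. The
 * helper name is illustrative and the input is assumed to be a power of two.
 */
static inline uint8_t
iscsi_kwqe_ilog2_u16(uint16_t v)
{
	uint8_t r = 0;

	while (v > 1) {		/* v assumed to be a power of two */
		v >>= 1;
		r++;
	}
	return (r);
}
/* e.g.: init1->cq_log_wqes_per_page = iscsi_kwqe_ilog2_u16(cq_wqes_per_page); */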
/*
* iSCSI firmware init request 2
*/
struct iscsi_kwqe_init2
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
uint16_t max_cq_sqn /* CQ wraparound value */;
#elif defined(__LITTLE_ENDIAN)
uint16_t max_cq_sqn /* CQ wraparound value */;
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
#endif
uint32_t error_bit_map[2] /* bit per error type, 0=error, 1=warning */;
uint32_t tcp_keepalive /* TCP keepalive time in seconds */;
uint32_t reserved1[4];
};
/*
* Initial iSCSI connection offload request 1
*/
struct iscsi_kwqe_conn_offload1
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
uint16_t iscsi_conn_id /* Driver's connection ID; sent in KCQEs to speed up the driver's access to connection data. */;
#elif defined(__LITTLE_ENDIAN)
uint16_t iscsi_conn_id /* Driver's connection ID; sent in KCQEs to speed up the driver's access to connection data. */;
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
#endif
uint32_t sq_page_table_addr_lo /* Lower 32-bit of the SQs page table address */;
uint32_t sq_page_table_addr_hi /* Higher 32-bit of the SQs page table address */;
uint32_t cq_page_table_addr_lo /* Lower 32-bit of the CQs page table address */;
uint32_t cq_page_table_addr_hi /* Higher 32-bit of the CQs page table address */;
uint32_t reserved0[3];
};
/*
* iSCSI Page Table Entry (PTE)
*/
struct iscsi_pte
{
uint32_t hi /* Higher 32 bits of address */;
uint32_t lo /* Lower 32 bits of address */;
};
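/*
 * Editorial sketch (not part of the original HSI header): a PTE is a 64-bit
 * bus address split into 32-bit halves, hi first. Filling one from a 64-bit
 * DMA address:
 */
static inline void
iscsi_pte_set(struct iscsi_pte *pte, uint64_t bus_addr)
{
	pte->hi = (uint32_t)(bus_addr >> 32);	/* upper 32 bits */
	pte->lo = (uint32_t)bus_addr;		/* lower 32 bits */
}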
/*
* Initial iSCSI connection offload request 2
*/
struct iscsi_kwqe_conn_offload2
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQE header */;
uint16_t reserved0;
#elif defined(__LITTLE_ENDIAN)
uint16_t reserved0;
struct iscsi_kwqe_header hdr /* KWQE header */;
#endif
uint32_t rq_page_table_addr_lo /* Lower 32-bits of the RQs page table address */;
uint32_t rq_page_table_addr_hi /* Higher 32-bits of the RQs page table address */;
struct iscsi_pte sq_first_pte /* first SQ page table entry (for FW caching) */;
struct iscsi_pte cq_first_pte /* first CQ page table entry (for FW caching) */;
uint32_t num_additional_wqes /* Everest specific - number of offload3 KWQEs that will follow this KWQE */;
};
/*
* Everest specific - Initial iSCSI connection offload request 3
*/
struct iscsi_kwqe_conn_offload3
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQE header */;
uint16_t reserved0;
#elif defined(__LITTLE_ENDIAN)
uint16_t reserved0;
struct iscsi_kwqe_header hdr /* KWQE header */;
#endif
uint32_t reserved1;
struct iscsi_pte qp_first_pte[3] /* first page table entry of some iSCSI ring (for FW caching) */;
};
/*
* iSCSI connection update request
*/
struct iscsi_kwqe_conn_update
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQE header */;
uint16_t reserved0;
#elif defined(__LITTLE_ENDIAN)
uint16_t reserved0;
struct iscsi_kwqe_header hdr /* KWQE header */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t session_error_recovery_level /* iSCSI Error Recovery Level negotiated on this connection */;
uint8_t max_outstanding_r2ts /* Maximum number of outstanding R2ts that a target can send for a command */;
uint8_t reserved2;
uint8_t conn_flags;
#define ISCSI_KWQE_CONN_UPDATE_HEADER_DIGEST (0x1<<0) /* BitField conn_flags 0=off, 1=on */
#define ISCSI_KWQE_CONN_UPDATE_HEADER_DIGEST_SHIFT 0
#define ISCSI_KWQE_CONN_UPDATE_DATA_DIGEST (0x1<<1) /* BitField conn_flags 0=off, 1=on */
#define ISCSI_KWQE_CONN_UPDATE_DATA_DIGEST_SHIFT 1
#define ISCSI_KWQE_CONN_UPDATE_INITIAL_R2T (0x1<<2) /* BitField conn_flags 0=no, 1=yes */
#define ISCSI_KWQE_CONN_UPDATE_INITIAL_R2T_SHIFT 2
#define ISCSI_KWQE_CONN_UPDATE_IMMEDIATE_DATA (0x1<<3) /* BitField conn_flags 0=no, 1=yes */
#define ISCSI_KWQE_CONN_UPDATE_IMMEDIATE_DATA_SHIFT 3
#define ISCSI_KWQE_CONN_UPDATE_OOO_SUPPORT_MODE (0x3<<4) /* BitField conn_flags (use enum tcp_tstorm_ooo) */
#define ISCSI_KWQE_CONN_UPDATE_OOO_SUPPORT_MODE_SHIFT 4
#define ISCSI_KWQE_CONN_UPDATE_RESERVED1 (0x3<<6) /* BitField conn_flags */
#define ISCSI_KWQE_CONN_UPDATE_RESERVED1_SHIFT 6
#elif defined(__LITTLE_ENDIAN)
uint8_t conn_flags;
#define ISCSI_KWQE_CONN_UPDATE_HEADER_DIGEST (0x1<<0) /* BitField conn_flags 0=off, 1=on */
#define ISCSI_KWQE_CONN_UPDATE_HEADER_DIGEST_SHIFT 0
#define ISCSI_KWQE_CONN_UPDATE_DATA_DIGEST (0x1<<1) /* BitField conn_flags 0=off, 1=on */
#define ISCSI_KWQE_CONN_UPDATE_DATA_DIGEST_SHIFT 1
#define ISCSI_KWQE_CONN_UPDATE_INITIAL_R2T (0x1<<2) /* BitField conn_flags 0=no, 1=yes */
#define ISCSI_KWQE_CONN_UPDATE_INITIAL_R2T_SHIFT 2
#define ISCSI_KWQE_CONN_UPDATE_IMMEDIATE_DATA (0x1<<3) /* BitField conn_flags 0=no, 1=yes */
#define ISCSI_KWQE_CONN_UPDATE_IMMEDIATE_DATA_SHIFT 3
#define ISCSI_KWQE_CONN_UPDATE_OOO_SUPPORT_MODE (0x3<<4) /* BitField conn_flags (use enum tcp_tstorm_ooo) */
#define ISCSI_KWQE_CONN_UPDATE_OOO_SUPPORT_MODE_SHIFT 4
#define ISCSI_KWQE_CONN_UPDATE_RESERVED1 (0x3<<6) /* BitField conn_flags */
#define ISCSI_KWQE_CONN_UPDATE_RESERVED1_SHIFT 6
uint8_t reserved2;
uint8_t max_outstanding_r2ts /* Maximum number of outstanding R2ts that a target can send for a command */;
uint8_t session_error_recovery_level /* iSCSI Error Recovery Level negotiated on this connection */;
#endif
uint32_t context_id /* Context ID of the iSCSI connection */;
uint32_t max_send_pdu_length /* Maximum length of a PDU that the target can receive */;
uint32_t max_recv_pdu_length /* Maximum length of a PDU that the Initiator can receive */;
uint32_t first_burst_length /* Maximum length of the immediate and unsolicited data that Initiator can send */;
uint32_t max_burst_length /* Maximum length of the data that Initiator and target can send in one burst */;
uint32_t exp_stat_sn /* Expected Status Sequence Number */;
};
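/*
 * Editorial sketch (not part of the original HSI header): mapping negotiated
 * iSCSI login parameters onto the conn_flags bits defined above. The boolean
 * inputs stand in for the driver's negotiated session state and are
 * illustrative.
 */
static inline void
iscsi_conn_update_set_digests(struct iscsi_kwqe_conn_update *u,
    int hdr_digest_on, int data_digest_on)
{
	if (hdr_digest_on)
		u->conn_flags |= ISCSI_KWQE_CONN_UPDATE_HEADER_DIGEST;
	if (data_digest_on)
		u->conn_flags |= ISCSI_KWQE_CONN_UPDATE_DATA_DIGEST;
}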
/*
* iSCSI destroy connection request
*/
struct iscsi_kwqe_conn_destroy
{
#if defined(__BIG_ENDIAN)
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
uint16_t iscsi_conn_id /* Driver's connection ID; sent in KCQEs to speed up the driver's access to connection data. */;
#elif defined(__LITTLE_ENDIAN)
uint16_t iscsi_conn_id /* Driver's connection ID; sent in KCQEs to speed up the driver's access to connection data. */;
struct iscsi_kwqe_header hdr /* KWQ WQE header */;
#endif
uint32_t context_id /* Context ID of the iSCSI connection */;
uint32_t reserved1[6];
};
/*
* iSCSI KWQ WQE
*/
union iscsi_kwqe
{
struct iscsi_kwqe_init1 init1;
struct iscsi_kwqe_init2 init2;
struct iscsi_kwqe_conn_offload1 conn_offload1;
struct iscsi_kwqe_conn_offload2 conn_offload2;
struct iscsi_kwqe_conn_offload3 conn_offload3;
struct iscsi_kwqe_conn_update conn_update;
struct iscsi_kwqe_conn_destroy conn_destroy;
};
struct iscsi_rq_db
{
#if defined(__BIG_ENDIAN)
uint16_t reserved1;
uint16_t rq_prod;
#elif defined(__LITTLE_ENDIAN)
uint16_t rq_prod;
uint16_t reserved1;
#endif
uint32_t __fw_hdr[15] /* Used by FW for partial header placement */;
};
struct iscsi_sq_db
{
#if defined(__BIG_ENDIAN)
uint16_t reserved0 /* Pad structure size to 16 bytes */;
uint16_t sq_prod;
#elif defined(__LITTLE_ENDIAN)
uint16_t sq_prod;
uint16_t reserved0 /* Pad structure size to 16 bytes */;
#endif
uint32_t reserved1[3] /* Pad structure size to 16 bytes */;
};
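/*
 * Editorial sketch (not part of the original HSI header): both doorbell
 * shadow structures above carry only a producer index plus padding. A
 * plausible update sequence - illustrative only, not the verified driver
 * flow - bumps the shadow producer and makes the store visible before any
 * subsequent MMIO doorbell write:
 */
static inline void
iscsi_sq_db_update_prod(struct iscsi_sq_db *db, uint16_t new_prod)
{
	db->sq_prod = new_prod;
	/* compiler barrier so the store is not reordered past the kick */
	__asm __volatile("" ::: "memory");
}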
/*
* Tstorm Tcp flags
*/
struct tstorm_l5cm_tcp_flags
{
uint16_t flags;
#define TSTORM_L5CM_TCP_FLAGS_VLAN_ID (0xFFF<<0) /* BitField flags */
#define TSTORM_L5CM_TCP_FLAGS_VLAN_ID_SHIFT 0
#define TSTORM_L5CM_TCP_FLAGS_DELAYED_ACK_EN (0x1<<12) /* BitField flags */
#define TSTORM_L5CM_TCP_FLAGS_DELAYED_ACK_EN_SHIFT 12
#define TSTORM_L5CM_TCP_FLAGS_TS_ENABLED (0x1<<13) /* BitField flags */
#define TSTORM_L5CM_TCP_FLAGS_TS_ENABLED_SHIFT 13
#define TSTORM_L5CM_TCP_FLAGS_RSRV1 (0x3<<14) /* BitField flags */
#define TSTORM_L5CM_TCP_FLAGS_RSRV1_SHIFT 14
};
/*
* Cstorm iSCSI Storm Context
*/
struct cstorm_iscsi_st_context
{
struct iscsi_cq_db_prod_pnd_cmpltn_cnt_arr cq_c_prod_pend_comp_ctr_arr /* Cstorm CQ producer and CQ pending completion array, updated by Cstorm */;
struct iscsi_cq_db_sqn_2_notify_arr cq_c_prod_sqn_arr /* Cstorm CQ producer sequence, updated by Cstorm */;
struct iscsi_cq_db_sqn_2_notify_arr cq_c_sqn_2_notify_arr /* Event Coalescing CQ sequence to notify driver, copied by Cstorm from CQ DB that is updated by Driver */;
struct regpair_t hq_pbl_base /* HQ PBL base */;
struct regpair_t hq_curr_pbe /* HQ current PBE */;
struct regpair_t task_pbl_base /* Task Context Entry PBL base */;
struct regpair_t cq_db_base /* pointer to CQ DB array. each CQ DB entry consists of CQ PBL, arm bit and idx to notify */;
#if defined(__BIG_ENDIAN)
uint16_t hq_bd_itt /* copied from HQ BD */;
uint16_t iscsi_conn_id;
#elif defined(__LITTLE_ENDIAN)
uint16_t iscsi_conn_id;
uint16_t hq_bd_itt /* copied from HQ BD */;
#endif
uint32_t hq_bd_data_segment_len /* copied from HQ BD */;
uint32_t hq_bd_buffer_offset /* copied from HQ BD */;
#if defined(__BIG_ENDIAN)
uint8_t rsrv;
uint8_t cq_proc_en_bit_map /* CQ processing enable bit map, 1 bit per CQ */;
uint8_t cq_pend_comp_itt_valid_bit_map /* CQ pending completion ITT valid bit map, 1 bit per CQ */;
uint8_t hq_bd_opcode /* copied from HQ BD */;
#elif defined(__LITTLE_ENDIAN)
uint8_t hq_bd_opcode /* copied from HQ BD */;
uint8_t cq_pend_comp_itt_valid_bit_map /* CQ pending completion ITT valid bit map, 1 bit per CQ */;
uint8_t cq_proc_en_bit_map /* CQ processing enable bit map, 1 bit per CQ */;
uint8_t rsrv;
#endif
uint32_t hq_tcp_seq /* TCP sequence of next BD to release */;
#if defined(__BIG_ENDIAN)
uint16_t flags;
#define CSTORM_ISCSI_ST_CONTEXT_DATA_DIGEST_EN (0x1<<0) /* BitField flags */
#define CSTORM_ISCSI_ST_CONTEXT_DATA_DIGEST_EN_SHIFT 0
#define CSTORM_ISCSI_ST_CONTEXT_HDR_DIGEST_EN (0x1<<1) /* BitField flags */
#define CSTORM_ISCSI_ST_CONTEXT_HDR_DIGEST_EN_SHIFT 1
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_CTXT_VALID (0x1<<2) /* BitField flags copied from HQ BD */
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_CTXT_VALID_SHIFT 2
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_LCL_CMPLN_FLG (0x1<<3) /* BitField flags copied from HQ BD */
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_LCL_CMPLN_FLG_SHIFT 3
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_WRITE_TASK (0x1<<4) /* BitField flags calculated using HQ BD opcode and write flag */
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_WRITE_TASK_SHIFT 4
#define CSTORM_ISCSI_ST_CONTEXT_CTRL_FLAGS_RSRV (0x7FF<<5) /* BitField flags */
#define CSTORM_ISCSI_ST_CONTEXT_CTRL_FLAGS_RSRV_SHIFT 5
uint16_t hq_cons /* HQ consumer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t hq_cons /* HQ consumer */;
uint16_t flags;
#define CSTORM_ISCSI_ST_CONTEXT_DATA_DIGEST_EN (0x1<<0) /* BitField flags */
#define CSTORM_ISCSI_ST_CONTEXT_DATA_DIGEST_EN_SHIFT 0
#define CSTORM_ISCSI_ST_CONTEXT_HDR_DIGEST_EN (0x1<<1) /* BitField flags */
#define CSTORM_ISCSI_ST_CONTEXT_HDR_DIGEST_EN_SHIFT 1
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_CTXT_VALID (0x1<<2) /* BitField flags copied from HQ BD */
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_CTXT_VALID_SHIFT 2
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_LCL_CMPLN_FLG (0x1<<3) /* BitField flags copied from HQ BD */
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_LCL_CMPLN_FLG_SHIFT 3
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_WRITE_TASK (0x1<<4) /* BitField flags calculated using HQ BD opcode and write flag */
#define CSTORM_ISCSI_ST_CONTEXT_HQ_BD_WRITE_TASK_SHIFT 4
#define CSTORM_ISCSI_ST_CONTEXT_CTRL_FLAGS_RSRV (0x7FF<<5) /* BitField flags */
#define CSTORM_ISCSI_ST_CONTEXT_CTRL_FLAGS_RSRV_SHIFT 5
#endif
struct regpair_t rsrv1;
};
/*
* SCSI read/write SQ WQE
*/
struct iscsi_cmd_pdu_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_ATTRIBUTES (0x7<<0) /* BitField op_attr Attributes of the SCSI command. To be sent with the outgoing command PDU. */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_ATTRIBUTES_SHIFT 0
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_RSRV1 (0x3<<3) /* BitField op_attr */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 3
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_WRITE_FLAG (0x1<<5) /* BitField op_attr Write bit. Initiator is expected to send the data to the target */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_WRITE_FLAG_SHIFT 5
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_READ_FLAG (0x1<<6) /* BitField op_attr Read bit. Data from target is expected */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_READ_FLAG_SHIFT 6
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG (0x1<<7) /* BitField op_attr Final bit. Firmware can change this bit based on the command before putting it into the outgoing PDU. */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG_SHIFT 7
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t op_attr;
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_ATTRIBUTES (0x7<<0) /* BitField op_attr Attributes of the SCSI command. To be sent with the outgoing command PDU. */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_ATTRIBUTES_SHIFT 0
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_RSRV1 (0x3<<3) /* BitField op_attr */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 3
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_WRITE_FLAG (0x1<<5) /* BitField op_attr Write bit. Initiator is expected to send the data to the target */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_WRITE_FLAG_SHIFT 5
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_READ_FLAG (0x1<<6) /* BitField op_attr Read bit. Data from target is expected */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_READ_FLAG_SHIFT 6
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG (0x1<<7) /* BitField op_attr Final bit. Firmware can change this bit based on the command before putting it into the outgoing PDU. */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
struct regpair_t lun;
uint32_t itt;
uint32_t expected_data_transfer_length;
uint32_t cmd_sn;
uint32_t exp_stat_sn;
uint32_t scsi_command_block[4];
};
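/*
 * Editorial sketch (not part of the original HSI header): building the
 * little-endian command PDU header for a SCSI read. The 0x01 opcode is the
 * standard iSCSI SCSI-Command opcode (an assumption here, since opcode
 * values are not defined in this file); the caller is assumed to have
 * zeroed the structure and to fill the LUN and CDB separately.
 */
static inline void
iscsi_build_read_cmd_hdr(struct iscsi_cmd_pdu_hdr_little_endian *h,
    uint32_t itt, uint32_t xfer_len, uint32_t cmd_sn)
{
	h->opcode = 0x01;	/* SCSI Command (assumed opcode value) */
	h->op_attr = ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_READ_FLAG |
	    ISCSI_CMD_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG;	/* read, final */
	h->data_fields = 0;	/* no immediate data, no AHS */
	h->itt = itt;
	h->expected_data_transfer_length = xfer_len;
	h->cmd_sn = cmd_sn;
}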
/*
* Buffer per connection, used in Tstorm
*/
struct iscsi_conn_buf
{
struct regpair_t reserved[8];
};
/*
* iSCSI context region, used only in iSCSI
*/
struct ustorm_iscsi_rq_db
{
struct regpair_t pbl_base /* Pointer to the rq page base list. */;
struct regpair_t curr_pbe /* Pointer to the current rq page base. */;
};
/*
* iSCSI context region, used only in iSCSI
*/
struct ustorm_iscsi_r2tq_db
{
struct regpair_t pbl_base /* Pointer to the r2tq page base list. */;
struct regpair_t curr_pbe /* Pointer to the current r2tq page base. */;
};
/*
* iSCSI context region, used only in iSCSI
*/
struct ustorm_iscsi_cq_db
{
#if defined(__BIG_ENDIAN)
uint16_t cq_sn /* CQ serial number */;
uint16_t prod /* CQ producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t prod /* CQ producer */;
uint16_t cq_sn /* CQ serial number */;
#endif
struct regpair_t curr_pbe /* Pointer to the current cq page base. */;
};
/*
* iSCSI context region, used only in iSCSI
*/
struct rings_db
{
struct ustorm_iscsi_rq_db rq /* RQ db. */;
struct ustorm_iscsi_r2tq_db r2tq /* R2TQ db. */;
struct ustorm_iscsi_cq_db cq[8] /* CQ db. */;
#if defined(__BIG_ENDIAN)
uint16_t rq_prod /* RQ prod */;
uint16_t r2tq_prod /* R2TQ producer. */;
#elif defined(__LITTLE_ENDIAN)
uint16_t r2tq_prod /* R2TQ producer. */;
uint16_t rq_prod /* RQ prod */;
#endif
struct regpair_t cq_pbl_base /* Pointer to the cq page base list. */;
};
/*
* iSCSI context region, used only in iSCSI
*/
struct ustorm_iscsi_placement_db
{
uint32_t sgl_base_lo /* SGL base address lo */;
uint32_t sgl_base_hi /* SGL base address hi */;
uint32_t local_sge_0_address_hi /* SGE address hi */;
uint32_t local_sge_0_address_lo /* SGE address lo */;
#if defined(__BIG_ENDIAN)
uint16_t curr_sge_offset /* Current offset in the SGE */;
uint16_t local_sge_0_size /* SGE size */;
#elif defined(__LITTLE_ENDIAN)
uint16_t local_sge_0_size /* SGE size */;
uint16_t curr_sge_offset /* Current offset in the SGE */;
#endif
uint32_t local_sge_1_address_hi /* SGE address hi */;
uint32_t local_sge_1_address_lo /* SGE address lo */;
#if defined(__BIG_ENDIAN)
uint8_t exp_padding_2b /* Number of padding bytes not yet processed */;
uint8_t nal_len_3b /* Non 4 byte aligned bytes in the previous iteration */;
uint16_t local_sge_1_size /* SGE size */;
#elif defined(__LITTLE_ENDIAN)
uint16_t local_sge_1_size /* SGE size */;
uint8_t nal_len_3b /* Non 4 byte aligned bytes in the previous iteration */;
uint8_t exp_padding_2b /* Number of padding bytes not yet processed */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t sgl_size /* Number of SGEs remaining till end of SGL */;
uint8_t local_sge_index_2b /* Index to the local SGE currently used */;
uint16_t reserved7;
#elif defined(__LITTLE_ENDIAN)
uint16_t reserved7;
uint8_t local_sge_index_2b /* Index to the local SGE currently used */;
uint8_t sgl_size /* Number of SGEs remaining till end of SGL */;
#endif
uint32_t rem_pdu /* Number of bytes remaining in PDU */;
uint32_t place_db_bitfield_1;
#define USTORM_ISCSI_PLACEMENT_DB_REM_PDU_PAYLOAD (0xFFFFFF<<0) /* BitField place_db_bitfield_1 Number of bytes remaining in PDU payload */
#define USTORM_ISCSI_PLACEMENT_DB_REM_PDU_PAYLOAD_SHIFT 0
#define USTORM_ISCSI_PLACEMENT_DB_CQ_ID (0xFF<<24) /* BitField place_db_bitfield_1 Temp task context - determines the CQ index for CQE placement */
#define USTORM_ISCSI_PLACEMENT_DB_CQ_ID_SHIFT 24
uint32_t place_db_bitfield_2;
#define USTORM_ISCSI_PLACEMENT_DB_BYTES_2_TRUNCATE (0xFFFFFF<<0) /* BitField place_db_bitfield_2 Bytes to truncate from the payload. */
#define USTORM_ISCSI_PLACEMENT_DB_BYTES_2_TRUNCATE_SHIFT 0
#define USTORM_ISCSI_PLACEMENT_DB_HOST_SGE_INDEX (0xFF<<24) /* BitField place_db_bitfield_2 SGE index on host */
#define USTORM_ISCSI_PLACEMENT_DB_HOST_SGE_INDEX_SHIFT 24
uint32_t nal;
#define USTORM_ISCSI_PLACEMENT_DB_REM_SGE_SIZE (0xFFFFFF<<0) /* BitField nal Non-aligned db - Number of bytes remaining in local SGEs */
#define USTORM_ISCSI_PLACEMENT_DB_REM_SGE_SIZE_SHIFT 0
#define USTORM_ISCSI_PLACEMENT_DB_EXP_DIGEST_3B (0xFF<<24) /* BitField nal Non-aligned db - Number of digest bytes not yet processed */
#define USTORM_ISCSI_PLACEMENT_DB_EXP_DIGEST_3B_SHIFT 24
};
/*
* Ustorm iSCSI Storm Context
*/
struct ustorm_iscsi_st_context
{
uint32_t exp_stat_sn /* Expected status sequence number, incremented with each response/middle path/unsolicited received. */;
uint32_t exp_data_sn /* Expected Data sequence number, incremented with each data in */;
struct rings_db ring /* rq, r2tq ,cq */;
struct regpair_t task_pbl_base /* Task PBL base will be read from RAM to context */;
struct regpair_t tce_phy_addr /* Pointer to the task context physical address */;
struct ustorm_iscsi_placement_db place_db;
uint32_t reserved8 /* reserved */;
uint32_t rem_rcv_len /* Temp task context - Remaining bytes to end of task */;
#if defined(__BIG_ENDIAN)
uint16_t hdr_itt /* field copied from PDU header */;
uint16_t iscsi_conn_id;
#elif defined(__LITTLE_ENDIAN)
uint16_t iscsi_conn_id;
uint16_t hdr_itt /* field copied from PDU header */;
#endif
uint32_t nal_bytes /* nal bytes read from BRB */;
#if defined(__BIG_ENDIAN)
uint8_t hdr_second_byte_union /* field copied from PDU header */;
uint8_t bitfield_0;
#define USTORM_ISCSI_ST_CONTEXT_BMIDDLEOFPDU (0x1<<0) /* BitField bitfield_0 marks that processing of payload has started */
#define USTORM_ISCSI_ST_CONTEXT_BMIDDLEOFPDU_SHIFT 0
#define USTORM_ISCSI_ST_CONTEXT_BFENCECQE (0x1<<1) /* BitField bitfield_0 marks that a fence is needed on the next CQE */
#define USTORM_ISCSI_ST_CONTEXT_BFENCECQE_SHIFT 1
#define USTORM_ISCSI_ST_CONTEXT_BRESETCRC (0x1<<2) /* BitField bitfield_0 marks that a RESET should be sent to the CRC machine. Used in NAL condition at the beginning of a PDU. */
#define USTORM_ISCSI_ST_CONTEXT_BRESETCRC_SHIFT 2
#define USTORM_ISCSI_ST_CONTEXT_RESERVED1 (0x1F<<3) /* BitField bitfield_0 reserved */
#define USTORM_ISCSI_ST_CONTEXT_RESERVED1_SHIFT 3
uint8_t task_pdu_cache_index;
uint8_t task_pbe_cache_index;
#elif defined(__LITTLE_ENDIAN)
uint8_t task_pbe_cache_index;
uint8_t task_pdu_cache_index;
uint8_t bitfield_0;
#define USTORM_ISCSI_ST_CONTEXT_BMIDDLEOFPDU (0x1<<0) /* BitField bitfield_0 marks that processing of payload has started */
#define USTORM_ISCSI_ST_CONTEXT_BMIDDLEOFPDU_SHIFT 0
#define USTORM_ISCSI_ST_CONTEXT_BFENCECQE (0x1<<1) /* BitField bitfield_0 marks that a fence is needed on the next CQE */
#define USTORM_ISCSI_ST_CONTEXT_BFENCECQE_SHIFT 1
#define USTORM_ISCSI_ST_CONTEXT_BRESETCRC (0x1<<2) /* BitField bitfield_0 marks that a RESET should be sent to the CRC machine. Used in NAL condition at the beginning of a PDU. */
#define USTORM_ISCSI_ST_CONTEXT_BRESETCRC_SHIFT 2
#define USTORM_ISCSI_ST_CONTEXT_RESERVED1 (0x1F<<3) /* BitField bitfield_0 reserved */
#define USTORM_ISCSI_ST_CONTEXT_RESERVED1_SHIFT 3
uint8_t hdr_second_byte_union /* field copied from PDU header */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved3 /* reserved */;
uint8_t reserved2 /* reserved */;
uint8_t acDecrement /* Manage the AC decrement that should be done by USDM */;
#elif defined(__LITTLE_ENDIAN)
uint8_t acDecrement /* Manage the AC decrement that should be done by USDM */;
uint8_t reserved2 /* reserved */;
uint16_t reserved3 /* reserved */;
#endif
uint32_t task_stat /* counts Data-In PDUs for reads; holds Data-Outs and R2Ts for writes */;
#if defined(__BIG_ENDIAN)
uint8_t hdr_opcode /* field copied from PDU header */;
uint8_t num_cqs /* Number of CQs supported by this connection */;
uint16_t reserved5 /* reserved */;
#elif defined(__LITTLE_ENDIAN)
uint16_t reserved5 /* reserved */;
uint8_t num_cqs /* Number of CQs supported by this connection */;
uint8_t hdr_opcode /* field copied from PDU header */;
#endif
uint32_t negotiated_rx;
#define USTORM_ISCSI_ST_CONTEXT_MAX_RECV_PDU_LENGTH (0xFFFFFF<<0) /* BitField negotiated_rx */
#define USTORM_ISCSI_ST_CONTEXT_MAX_RECV_PDU_LENGTH_SHIFT 0
#define USTORM_ISCSI_ST_CONTEXT_MAX_OUTSTANDING_R2TS (0xFF<<24) /* BitField negotiated_rx */
#define USTORM_ISCSI_ST_CONTEXT_MAX_OUTSTANDING_R2TS_SHIFT 24
uint32_t negotiated_rx_and_flags;
#define USTORM_ISCSI_ST_CONTEXT_MAX_BURST_LENGTH (0xFFFFFF<<0) /* BitField negotiated_rx_and_flags Negotiated maximum length of sequence */
#define USTORM_ISCSI_ST_CONTEXT_MAX_BURST_LENGTH_SHIFT 0
#define USTORM_ISCSI_ST_CONTEXT_B_CQE_POSTED_OR_HEADER_CACHED (0x1<<24) /* BitField negotiated_rx_and_flags Marks that an invalid CQE was already posted or the PDU header was cached in RAM */
#define USTORM_ISCSI_ST_CONTEXT_B_CQE_POSTED_OR_HEADER_CACHED_SHIFT 24
#define USTORM_ISCSI_ST_CONTEXT_B_HDR_DIGEST_EN (0x1<<25) /* BitField negotiated_rx_and_flags Header digest support enable */
#define USTORM_ISCSI_ST_CONTEXT_B_HDR_DIGEST_EN_SHIFT 25
#define USTORM_ISCSI_ST_CONTEXT_B_DATA_DIGEST_EN (0x1<<26) /* BitField negotiated_rx_and_flags Data digest support enable */
#define USTORM_ISCSI_ST_CONTEXT_B_DATA_DIGEST_EN_SHIFT 26
#define USTORM_ISCSI_ST_CONTEXT_B_PROTOCOL_ERROR (0x1<<27) /* BitField negotiated_rx_and_flags */
#define USTORM_ISCSI_ST_CONTEXT_B_PROTOCOL_ERROR_SHIFT 27
#define USTORM_ISCSI_ST_CONTEXT_B_TASK_VALID (0x1<<28) /* BitField negotiated_rx_and_flags temp task context */
#define USTORM_ISCSI_ST_CONTEXT_B_TASK_VALID_SHIFT 28
#define USTORM_ISCSI_ST_CONTEXT_TASK_TYPE (0x3<<29) /* BitField negotiated_rx_and_flags Task type: 0 = slow-path (non-RW) 1 = read 2 = write */
#define USTORM_ISCSI_ST_CONTEXT_TASK_TYPE_SHIFT 29
#define USTORM_ISCSI_ST_CONTEXT_B_ALL_DATA_ACKED (0x1<<31) /* BitField negotiated_rx_and_flags Set if all data is acked */
#define USTORM_ISCSI_ST_CONTEXT_B_ALL_DATA_ACKED_SHIFT 31
};
/*
* TCP context region, shared in TOE, RDMA and ISCSI
*/
struct tstorm_tcp_st_context_section
{
uint32_t flags1;
#define TSTORM_TCP_ST_CONTEXT_SECTION_RTT_SRTT (0xFFFFFF<<0) /* BitField flags1 (various state flags) 20b only, Smoothed Round Trip Time */
#define TSTORM_TCP_ST_CONTEXT_SECTION_RTT_SRTT_SHIFT 0
#define TSTORM_TCP_ST_CONTEXT_SECTION_PAWS_INVALID (0x1<<24) /* BitField flags1 (various state flags) PAWS asserted as invalid in KA flow */
#define TSTORM_TCP_ST_CONTEXT_SECTION_PAWS_INVALID_SHIFT 24
#define TSTORM_TCP_ST_CONTEXT_SECTION_TIMESTAMP_EXISTS (0x1<<25) /* BitField flags1 (various state flags) Timestamps supported on this connection */
#define TSTORM_TCP_ST_CONTEXT_SECTION_TIMESTAMP_EXISTS_SHIFT 25
#define TSTORM_TCP_ST_CONTEXT_SECTION_RESERVED0 (0x1<<26) /* BitField flags1 (various state flags) */
#define TSTORM_TCP_ST_CONTEXT_SECTION_RESERVED0_SHIFT 26
#define TSTORM_TCP_ST_CONTEXT_SECTION_STOP_RX_PAYLOAD (0x1<<27) /* BitField flags1 (various state flags) stop receiving rx payload */
#define TSTORM_TCP_ST_CONTEXT_SECTION_STOP_RX_PAYLOAD_SHIFT 27
#define TSTORM_TCP_ST_CONTEXT_SECTION_KA_ENABLED (0x1<<28) /* BitField flags1 (various state flags) Keep Alive enabled */
#define TSTORM_TCP_ST_CONTEXT_SECTION_KA_ENABLED_SHIFT 28
#define TSTORM_TCP_ST_CONTEXT_SECTION_FIRST_RTO_ESTIMATE (0x1<<29) /* BitField flags1 (various state flags) First Retransmission Timeout Estimation */
#define TSTORM_TCP_ST_CONTEXT_SECTION_FIRST_RTO_ESTIMATE_SHIFT 29
#define TSTORM_TCP_ST_CONTEXT_SECTION_MAX_SEG_RETRANSMIT_EN (0x1<<30) /* BitField flags1 (various state flags) per connection flag, signals whether to check if rt count exceeds max_seg_retransmit */
#define TSTORM_TCP_ST_CONTEXT_SECTION_MAX_SEG_RETRANSMIT_EN_SHIFT 30
#define TSTORM_TCP_ST_CONTEXT_SECTION_LAST_ISLE_HAS_FIN (0x1<<31) /* BitField flags1 (various state flags) last isle ends with FIN. FIN is counted as 1 byte for isle end sequence */
#define TSTORM_TCP_ST_CONTEXT_SECTION_LAST_ISLE_HAS_FIN_SHIFT 31
uint32_t flags2;
#define TSTORM_TCP_ST_CONTEXT_SECTION_RTT_VARIATION (0xFFFFFF<<0) /* BitField flags2 (various state flags) 20b only, Round Trip Time variation */
#define TSTORM_TCP_ST_CONTEXT_SECTION_RTT_VARIATION_SHIFT 0
#define TSTORM_TCP_ST_CONTEXT_SECTION_DA_EN (0x1<<24) /* BitField flags2 (various state flags) */
#define TSTORM_TCP_ST_CONTEXT_SECTION_DA_EN_SHIFT 24
#define TSTORM_TCP_ST_CONTEXT_SECTION_DA_COUNTER_EN (0x1<<25) /* BitField flags2 (various state flags) per GOS flags, but duplicated for each context */
#define TSTORM_TCP_ST_CONTEXT_SECTION_DA_COUNTER_EN_SHIFT 25
#define __TSTORM_TCP_ST_CONTEXT_SECTION_KA_PROBE_SENT (0x1<<26) /* BitField flags2 (various state flags) keep alive packet was sent */
#define __TSTORM_TCP_ST_CONTEXT_SECTION_KA_PROBE_SENT_SHIFT 26
#define __TSTORM_TCP_ST_CONTEXT_SECTION_PERSIST_PROBE_SENT (0x1<<27) /* BitField flags2 (various state flags) persist packet was sent */
#define __TSTORM_TCP_ST_CONTEXT_SECTION_PERSIST_PROBE_SENT_SHIFT 27
#define TSTORM_TCP_ST_CONTEXT_SECTION_UPDATE_L2_STATSTICS (0x1<<28) /* BitField flags2 (various state flags) determines whether or not to update L2 statistics */
#define TSTORM_TCP_ST_CONTEXT_SECTION_UPDATE_L2_STATSTICS_SHIFT 28
#define TSTORM_TCP_ST_CONTEXT_SECTION_UPDATE_L4_STATSTICS (0x1<<29) /* BitField flags2 (various state flags) determines whether or not to update L4 statistics */
#define TSTORM_TCP_ST_CONTEXT_SECTION_UPDATE_L4_STATSTICS_SHIFT 29
#define __TSTORM_TCP_ST_CONTEXT_SECTION_IN_WINDOW_RST_ATTACK (0x1<<30) /* BitField flags2 (various state flags) possible blind-in-window RST attack detected */
#define __TSTORM_TCP_ST_CONTEXT_SECTION_IN_WINDOW_RST_ATTACK_SHIFT 30
#define __TSTORM_TCP_ST_CONTEXT_SECTION_IN_WINDOW_SYN_ATTACK (0x1<<31) /* BitField flags2 (various state flags) possible blind-in-window SYN attack detected */
#define __TSTORM_TCP_ST_CONTEXT_SECTION_IN_WINDOW_SYN_ATTACK_SHIFT 31
#if defined(__BIG_ENDIAN)
uint16_t mss;
uint8_t tcp_sm_state /* 3b only, Tcp state machine state */;
uint8_t rto_exp /* 3b only, Exponential Backoff index */;
#elif defined(__LITTLE_ENDIAN)
uint8_t rto_exp /* 3b only, Exponential Backoff index */;
uint8_t tcp_sm_state /* 3b only, Tcp state machine state */;
uint16_t mss;
#endif
uint32_t rcv_nxt /* Receive sequence: next expected */;
uint32_t timestamp_recent /* last timestamp from segTS */;
uint32_t timestamp_recent_time /* time at which timestamp_recent has been set */;
uint32_t cwnd /* Congestion window */;
uint32_t ss_thresh /* Slow Start Threshold */;
uint32_t cwnd_accum /* Congestion window accumulation */;
uint32_t prev_seg_seq /* Sequence number used for last sndWnd update (was: snd_wnd_l1) */;
uint32_t expected_rel_seq /* the last update of rel_seq */;
uint32_t recover /* Recording of sndMax when we enter retransmit */;
#if defined(__BIG_ENDIAN)
uint8_t retransmit_count /* Number of times a packet was retransmitted */;
uint8_t ka_max_probe_count /* Keep Alive maximum probe counter */;
uint8_t persist_probe_count /* Persist probe counter */;
uint8_t ka_probe_count /* Keep Alive probe counter */;
#elif defined(__LITTLE_ENDIAN)
uint8_t ka_probe_count /* Keep Alive probe counter */;
uint8_t persist_probe_count /* Persist probe counter */;
uint8_t ka_max_probe_count /* Keep Alive maximum probe counter */;
uint8_t retransmit_count /* Number of times a packet was retransmitted */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t statistics_counter_id /* The ID of the statistics client for counting common/L2 statistics */;
uint8_t ooo_support_mode;
uint8_t snd_wnd_scale /* 4b only, Far-end window (Snd.Wind.Scale) scale */;
uint8_t dup_ack_count /* Duplicate Ack Counter */;
#elif defined(__LITTLE_ENDIAN)
uint8_t dup_ack_count /* Duplicate Ack Counter */;
uint8_t snd_wnd_scale /* 4b only, Far-end window (Snd.Wind.Scale) scale */;
uint8_t ooo_support_mode;
uint8_t statistics_counter_id /* The ID of the statistics client for counting common/L2 statistics */;
#endif
uint32_t retransmit_start_time /* Used by retransmit as a recording of start time */;
uint32_t ka_timeout /* Keep Alive timeout */;
uint32_t ka_interval /* Keep Alive interval */;
uint32_t isle_start_seq /* First Out-of-order isle start sequence */;
uint32_t isle_end_seq /* First Out-of-order isle end sequence */;
#if defined(__BIG_ENDIAN)
uint16_t second_isle_address /* address of the second isle (if exists) in internal RAM */;
uint16_t recent_seg_wnd /* Last far end window received (not scaled!) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t recent_seg_wnd /* Last far end window received (not scaled!) */;
uint16_t second_isle_address /* address of the second isle (if exists) in internal RAM */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t max_isles_ever_happened /* for statistics only - max number of isles ever happened on this connection */;
uint8_t isles_number /* number of isles */;
uint16_t last_isle_address /* address of the last isle (if exists) in internal RAM */;
#elif defined(__LITTLE_ENDIAN)
uint16_t last_isle_address /* address of the last isle (if exists) in internal RAM */;
uint8_t isles_number /* number of isles */;
uint8_t max_isles_ever_happened /* for statistics only - max number of isles ever happened on this connection */;
#endif
uint32_t max_rt_time;
#if defined(__BIG_ENDIAN)
uint16_t lsb_mac_address /* TX source MAC LSB-16 */;
uint16_t vlan_id /* Connection-configured VLAN ID */;
#elif defined(__LITTLE_ENDIAN)
uint16_t vlan_id /* Connection-configured VLAN ID */;
uint16_t lsb_mac_address /* TX source MAC LSB-16 */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t msb_mac_address /* TX source MAC MSB-16 */;
uint16_t mid_mac_address /* TX source MAC MID-16 */;
#elif defined(__LITTLE_ENDIAN)
uint16_t mid_mac_address /* TX source MAC MID-16 */;
uint16_t msb_mac_address /* TX source MAC MSB-16 */;
#endif
uint32_t rightmost_received_seq /* The maximum sequence ever received - used for The New Patent */;
};
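/*
 * Editorial sketch (not part of the original HSI header): the TX source MAC
 * above is stored as three 16-bit pieces (msb/mid/lsb). Assuming each piece
 * holds its two bytes most-significant-first, the canonical 6-byte address
 * is reassembled like this:
 */
static inline void
tstorm_tcp_get_src_mac(const struct tstorm_tcp_st_context_section *s,
    uint8_t mac[6])
{
	mac[0] = (uint8_t)(s->msb_mac_address >> 8);
	mac[1] = (uint8_t)(s->msb_mac_address & 0xFF);
	mac[2] = (uint8_t)(s->mid_mac_address >> 8);
	mac[3] = (uint8_t)(s->mid_mac_address & 0xFF);
	mac[4] = (uint8_t)(s->lsb_mac_address >> 8);
	mac[5] = (uint8_t)(s->lsb_mac_address & 0xFF);
}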
/*
* Termination variables
*/
struct iscsi_term_vars
{
uint8_t BitMap;
#define ISCSI_TERM_VARS_TCP_STATE (0xF<<0) /* BitField BitMap tcp state for the termination process */
#define ISCSI_TERM_VARS_TCP_STATE_SHIFT 0
#define ISCSI_TERM_VARS_FIN_RECEIVED_SBIT (0x1<<4) /* BitField BitMap fin received sticky bit */
#define ISCSI_TERM_VARS_FIN_RECEIVED_SBIT_SHIFT 4
#define ISCSI_TERM_VARS_ACK_ON_FIN_RECEIVED_SBIT (0x1<<5) /* BitField BitMap ack on fin received sticky bit */
#define ISCSI_TERM_VARS_ACK_ON_FIN_RECEIVED_SBIT_SHIFT 5
#define ISCSI_TERM_VARS_TERM_ON_CHIP (0x1<<6) /* BitField BitMap termination on chip ( option2 ) */
#define ISCSI_TERM_VARS_TERM_ON_CHIP_SHIFT 6
#define ISCSI_TERM_VARS_RSRV (0x1<<7) /* BitField BitMap */
#define ISCSI_TERM_VARS_RSRV_SHIFT 7
};
/*
* iSCSI context region, used only in iSCSI
*/
struct tstorm_iscsi_st_context_section
{
uint32_t nalPayload /* Non-aligned payload */;
uint32_t b2nh /* Number of bytes to next iSCSI header */;
#if defined(__BIG_ENDIAN)
uint16_t rq_cons /* RQ consumer */;
uint8_t flags;
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_HDR_DIGEST_EN (0x1<<0) /* BitField flags header digest enable, set at login stage */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_HDR_DIGEST_EN_SHIFT 0
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DATA_DIGEST_EN (0x1<<1) /* BitField flags data digest enable, set at login stage */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DATA_DIGEST_EN_SHIFT 1
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_PARTIAL_HEADER (0x1<<2) /* BitField flags partial header flow indication */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_PARTIAL_HEADER_SHIFT 2
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_FULL_FEATURE (0x1<<3) /* BitField flags */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_FULL_FEATURE_SHIFT 3
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DROP_ALL_PDUS (0x1<<4) /* BitField flags */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DROP_ALL_PDUS_SHIFT 4
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_NALLEN (0x3<<5) /* BitField flags Non-aligned length */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_NALLEN_SHIFT 5
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_RSRV0 (0x1<<7) /* BitField flags */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_RSRV0_SHIFT 7
uint8_t hdr_bytes_2_fetch /* Number of bytes left to fetch to complete iSCSI header */;
#elif defined(__LITTLE_ENDIAN)
uint8_t hdr_bytes_2_fetch /* Number of bytes left to fetch to complete iSCSI header */;
uint8_t flags;
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_HDR_DIGEST_EN (0x1<<0) /* BitField flags header digest enable, set at login stage */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_HDR_DIGEST_EN_SHIFT 0
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DATA_DIGEST_EN (0x1<<1) /* BitField flags data digest enable, set at login stage */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DATA_DIGEST_EN_SHIFT 1
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_PARTIAL_HEADER (0x1<<2) /* BitField flags partial header flow indication */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_PARTIAL_HEADER_SHIFT 2
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_FULL_FEATURE (0x1<<3) /* BitField flags */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_FULL_FEATURE_SHIFT 3
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DROP_ALL_PDUS (0x1<<4) /* BitField flags */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_B_DROP_ALL_PDUS_SHIFT 4
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_NALLEN (0x3<<5) /* BitField flags Non-aligned length */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_NALLEN_SHIFT 5
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_RSRV0 (0x1<<7) /* BitField flags */
#define TSTORM_ISCSI_ST_CONTEXT_SECTION_RSRV0_SHIFT 7
uint16_t rq_cons /* RQ consumer */;
#endif
struct regpair_t rq_db_phy_addr;
#if defined(__BIG_ENDIAN)
struct iscsi_term_vars term_vars /* Termination variables */;
uint8_t rsrv1;
uint16_t iscsi_conn_id;
#elif defined(__LITTLE_ENDIAN)
uint16_t iscsi_conn_id;
uint8_t rsrv1;
struct iscsi_term_vars term_vars /* Termination variables */;
#endif
uint32_t process_nxt /* next TCP sequence to be processed by the iSCSI layer. */;
};
/*
* The iSCSI non-aggregative context of Tstorm
*/
struct tstorm_iscsi_st_context
{
struct tstorm_tcp_st_context_section tcp /* TCP context region, shared in TOE, RDMA and iSCSI */;
struct tstorm_iscsi_st_context_section iscsi /* iSCSI context region, used only in iSCSI */;
};
/*
* Ethernet context section, shared in TOE, RDMA and ISCSI
*/
struct xstorm_eth_context_section
{
#if defined(__BIG_ENDIAN)
uint8_t remote_addr_4 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_5 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_0 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_1 /* Local Mac Address, used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t local_addr_1 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_0 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_5 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_4 /* Remote Mac Address, used in PBF Header Builder Command */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t remote_addr_0 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_1 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_2 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_3 /* Remote Mac Address, used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t remote_addr_3 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_2 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_1 /* Remote Mac Address, used in PBF Header Builder Command */;
uint8_t remote_addr_0 /* Remote Mac Address, used in PBF Header Builder Command */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved_vlan_type /* this field is not strictly required, but the reserved space was here */;
uint16_t vlan_params;
#define XSTORM_ETH_CONTEXT_SECTION_VLAN_ID (0xFFF<<0) /* BitField vlan_params part of PBF Header Builder Command */
#define XSTORM_ETH_CONTEXT_SECTION_VLAN_ID_SHIFT 0
#define XSTORM_ETH_CONTEXT_SECTION_CFI (0x1<<12) /* BitField vlan_params Canonical format indicator, part of PBF Header Builder Command */
#define XSTORM_ETH_CONTEXT_SECTION_CFI_SHIFT 12
#define XSTORM_ETH_CONTEXT_SECTION_PRIORITY (0x7<<13) /* BitField vlan_params part of PBF Header Builder Command */
#define XSTORM_ETH_CONTEXT_SECTION_PRIORITY_SHIFT 13
#elif defined(__LITTLE_ENDIAN)
uint16_t vlan_params;
#define XSTORM_ETH_CONTEXT_SECTION_VLAN_ID (0xFFF<<0) /* BitField vlan_params part of PBF Header Builder Command */
#define XSTORM_ETH_CONTEXT_SECTION_VLAN_ID_SHIFT 0
#define XSTORM_ETH_CONTEXT_SECTION_CFI (0x1<<12) /* BitField vlan_params Canonical format indicator, part of PBF Header Builder Command */
#define XSTORM_ETH_CONTEXT_SECTION_CFI_SHIFT 12
#define XSTORM_ETH_CONTEXT_SECTION_PRIORITY (0x7<<13) /* BitField vlan_params part of PBF Header Builder Command */
#define XSTORM_ETH_CONTEXT_SECTION_PRIORITY_SHIFT 13
uint16_t reserved_vlan_type /* this field is not strictly required, but the reserved space was here */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t local_addr_2 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_3 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_4 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_5 /* Local Mac Address, used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t local_addr_5 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_4 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_3 /* Local Mac Address, used in PBF Header Builder Command */;
uint8_t local_addr_2 /* Local Mac Address, used in PBF Header Builder Command */;
#endif
};
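/*
 * Illustrative helper, not part of the original HSI: shows how the
 * vlan_params bit-field above is composed with the generic
 * (value << _SHIFT) & MASK pattern used throughout this file. The
 * helper name is ours, and callers are assumed to pass in-range values.
 */
static inline uint16_t
xstorm_eth_pack_vlan_params(uint16_t vlan_id, uint16_t cfi, uint16_t priority)
{
	return ((uint16_t)(((vlan_id << XSTORM_ETH_CONTEXT_SECTION_VLAN_ID_SHIFT) &
	    XSTORM_ETH_CONTEXT_SECTION_VLAN_ID) |
	    ((cfi << XSTORM_ETH_CONTEXT_SECTION_CFI_SHIFT) &
	    XSTORM_ETH_CONTEXT_SECTION_CFI) |
	    ((priority << XSTORM_ETH_CONTEXT_SECTION_PRIORITY_SHIFT) &
	    XSTORM_ETH_CONTEXT_SECTION_PRIORITY)));
}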
/*
* IpV4 context section, shared in TOE, RDMA and ISCSI
*/
struct xstorm_ip_v4_context_section
{
#if defined(__BIG_ENDIAN)
uint16_t __pbf_hdr_cmd_rsvd_id;
uint16_t __pbf_hdr_cmd_rsvd_flags_offset;
#elif defined(__LITTLE_ENDIAN)
uint16_t __pbf_hdr_cmd_rsvd_flags_offset;
uint16_t __pbf_hdr_cmd_rsvd_id;
#endif
#if defined(__BIG_ENDIAN)
uint8_t __pbf_hdr_cmd_rsvd_ver_ihl;
uint8_t tos /* Type Of Service, used in PBF Header Builder Command */;
uint16_t __pbf_hdr_cmd_rsvd_length;
#elif defined(__LITTLE_ENDIAN)
uint16_t __pbf_hdr_cmd_rsvd_length;
uint8_t tos /* Type Of Service, used in PBF Header Builder Command */;
uint8_t __pbf_hdr_cmd_rsvd_ver_ihl;
#endif
uint32_t ip_local_addr /* used in PBF Header Builder Command */;
#if defined(__BIG_ENDIAN)
uint8_t ttl /* Time to live, used in PBF Header Builder Command */;
uint8_t __pbf_hdr_cmd_rsvd_protocol;
uint16_t __pbf_hdr_cmd_rsvd_csum;
#elif defined(__LITTLE_ENDIAN)
uint16_t __pbf_hdr_cmd_rsvd_csum;
uint8_t __pbf_hdr_cmd_rsvd_protocol;
uint8_t ttl /* Time to live, used in PBF Header Builder Command */;
#endif
uint32_t __pbf_hdr_cmd_rsvd_1 /* places the ip_remote_addr field in the proper place in the regpair */;
uint32_t ip_remote_addr /* used in PBF Header Builder Command */;
};
/*
* context section, shared in TOE, RDMA and ISCSI
*/
struct xstorm_padded_ip_v4_context_section
{
struct xstorm_ip_v4_context_section ip_v4;
uint32_t reserved1[4];
};
/*
* IpV6 context section, shared in TOE, RDMA and ISCSI
*/
struct xstorm_ip_v6_context_section
{
#if defined(__BIG_ENDIAN)
uint16_t pbf_hdr_cmd_rsvd_payload_len;
uint8_t pbf_hdr_cmd_rsvd_nxt_hdr;
uint8_t hop_limit /* used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t hop_limit /* used in PBF Header Builder Command */;
uint8_t pbf_hdr_cmd_rsvd_nxt_hdr;
uint16_t pbf_hdr_cmd_rsvd_payload_len;
#endif
uint32_t priority_flow_label;
#define XSTORM_IP_V6_CONTEXT_SECTION_FLOW_LABEL (0xFFFFF<<0) /* BitField priority_flow_label used in PBF Header Builder Command */
#define XSTORM_IP_V6_CONTEXT_SECTION_FLOW_LABEL_SHIFT 0
#define XSTORM_IP_V6_CONTEXT_SECTION_TRAFFIC_CLASS (0xFF<<20) /* BitField priority_flow_label used in PBF Header Builder Command */
#define XSTORM_IP_V6_CONTEXT_SECTION_TRAFFIC_CLASS_SHIFT 20
#define XSTORM_IP_V6_CONTEXT_SECTION_PBF_HDR_CMD_RSVD_VER (0xF<<28) /* BitField priority_flow_label */
#define XSTORM_IP_V6_CONTEXT_SECTION_PBF_HDR_CMD_RSVD_VER_SHIFT 28
uint32_t ip_local_addr_lo_hi /* second 32 bits of Ip local Address, used in PBF Header Builder Command */;
uint32_t ip_local_addr_lo_lo /* first 32 bits of Ip local Address, used in PBF Header Builder Command */;
uint32_t ip_local_addr_hi_hi /* fourth 32 bits of Ip local Address, used in PBF Header Builder Command */;
uint32_t ip_local_addr_hi_lo /* third 32 bits of Ip local Address, used in PBF Header Builder Command */;
uint32_t ip_remote_addr_lo_hi /* second 32 bits of Ip remote Address, used in PBF Header Builder Command */;
uint32_t ip_remote_addr_lo_lo /* first 32 bits of Ip remote Address, used in PBF Header Builder Command */;
uint32_t ip_remote_addr_hi_hi /* fourth 32 bits of Ip remote Address, used in PBF Header Builder Command */;
uint32_t ip_remote_addr_hi_lo /* third 32 bits of Ip remote Address, used in PBF Header Builder Command */;
};
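/*
 * Illustrative helper, not part of the original HSI: per the field
 * comments above, lo_lo holds the first 32 bits of the address, lo_hi
 * the second, hi_lo the third and hi_hi the fourth; this gathers the
 * local address words in that order. The helper name is ours.
 */
static inline void
xstorm_ip_v6_local_addr_words(const struct xstorm_ip_v6_context_section *s,
    uint32_t words[4])
{
	words[0] = s->ip_local_addr_lo_lo; /* first 32 bits */
	words[1] = s->ip_local_addr_lo_hi; /* second 32 bits */
	words[2] = s->ip_local_addr_hi_lo; /* third 32 bits */
	words[3] = s->ip_local_addr_hi_hi; /* fourth 32 bits */
}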
union xstorm_ip_context_section_types
{
struct xstorm_padded_ip_v4_context_section padded_ip_v4;
struct xstorm_ip_v6_context_section ip_v6;
};
/*
* TCP context section, shared in TOE, RDMA and ISCSI
*/
struct xstorm_tcp_context_section
{
uint32_t snd_max;
#if defined(__BIG_ENDIAN)
uint16_t remote_port /* used in PBF Header Builder Command */;
uint16_t local_port /* used in PBF Header Builder Command */;
#elif defined(__LITTLE_ENDIAN)
uint16_t local_port /* used in PBF Header Builder Command */;
uint16_t remote_port /* used in PBF Header Builder Command */;
#endif
#if defined(__BIG_ENDIAN)
uint8_t original_nagle_1b;
uint8_t ts_enabled /* Only 1 bit is used */;
uint16_t tcp_params;
#define XSTORM_TCP_CONTEXT_SECTION_TOTAL_HEADER_SIZE (0xFF<<0) /* BitField tcp_params Tcp parameters for ease of pbf command construction */
#define XSTORM_TCP_CONTEXT_SECTION_TOTAL_HEADER_SIZE_SHIFT 0
#define __XSTORM_TCP_CONTEXT_SECTION_ECT_BIT (0x1<<8) /* BitField tcp_params Tcp parameters */
#define __XSTORM_TCP_CONTEXT_SECTION_ECT_BIT_SHIFT 8
#define __XSTORM_TCP_CONTEXT_SECTION_ECN_ENABLED (0x1<<9) /* BitField tcp_params Tcp parameters */
#define __XSTORM_TCP_CONTEXT_SECTION_ECN_ENABLED_SHIFT 9
#define XSTORM_TCP_CONTEXT_SECTION_SACK_ENABLED (0x1<<10) /* BitField tcp_params Tcp parameters Selective Ack Enabled */
#define XSTORM_TCP_CONTEXT_SECTION_SACK_ENABLED_SHIFT 10
#define XSTORM_TCP_CONTEXT_SECTION_SMALL_WIN_ADV (0x1<<11) /* BitField tcp_params Tcp parameters a window smaller than the initial window was advertised to the far end */
#define XSTORM_TCP_CONTEXT_SECTION_SMALL_WIN_ADV_SHIFT 11
#define XSTORM_TCP_CONTEXT_SECTION_FIN_SENT_FLAG (0x1<<12) /* BitField tcp_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_FIN_SENT_FLAG_SHIFT 12
#define XSTORM_TCP_CONTEXT_SECTION_WINDOW_SATURATED (0x1<<13) /* BitField tcp_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_WINDOW_SATURATED_SHIFT 13
#define XSTORM_TCP_CONTEXT_SECTION_SLOWPATH_QUEUES_FLUSH_COUNTER (0x3<<14) /* BitField tcp_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_SLOWPATH_QUEUES_FLUSH_COUNTER_SHIFT 14
#elif defined(__LITTLE_ENDIAN)
uint16_t tcp_params;
#define XSTORM_TCP_CONTEXT_SECTION_TOTAL_HEADER_SIZE (0xFF<<0) /* BitField tcp_params Tcp parameters for ease of pbf command construction */
#define XSTORM_TCP_CONTEXT_SECTION_TOTAL_HEADER_SIZE_SHIFT 0
#define __XSTORM_TCP_CONTEXT_SECTION_ECT_BIT (0x1<<8) /* BitField tcp_params Tcp parameters */
#define __XSTORM_TCP_CONTEXT_SECTION_ECT_BIT_SHIFT 8
#define __XSTORM_TCP_CONTEXT_SECTION_ECN_ENABLED (0x1<<9) /* BitField tcp_params Tcp parameters */
#define __XSTORM_TCP_CONTEXT_SECTION_ECN_ENABLED_SHIFT 9
#define XSTORM_TCP_CONTEXT_SECTION_SACK_ENABLED (0x1<<10) /* BitField tcp_params Tcp parameters Selective Ack Enabled */
#define XSTORM_TCP_CONTEXT_SECTION_SACK_ENABLED_SHIFT 10
#define XSTORM_TCP_CONTEXT_SECTION_SMALL_WIN_ADV (0x1<<11) /* BitField tcp_params Tcp parameters a window smaller than the initial window was advertised to the far end */
#define XSTORM_TCP_CONTEXT_SECTION_SMALL_WIN_ADV_SHIFT 11
#define XSTORM_TCP_CONTEXT_SECTION_FIN_SENT_FLAG (0x1<<12) /* BitField tcp_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_FIN_SENT_FLAG_SHIFT 12
#define XSTORM_TCP_CONTEXT_SECTION_WINDOW_SATURATED (0x1<<13) /* BitField tcp_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_WINDOW_SATURATED_SHIFT 13
#define XSTORM_TCP_CONTEXT_SECTION_SLOWPATH_QUEUES_FLUSH_COUNTER (0x3<<14) /* BitField tcp_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_SLOWPATH_QUEUES_FLUSH_COUNTER_SHIFT 14
uint8_t ts_enabled /* Only 1 bit is used */;
uint8_t original_nagle_1b;
#endif
#if defined(__BIG_ENDIAN)
uint16_t pseudo_csum /* the precalculated pseudo checksum header for pbf command construction */;
uint16_t window_scaling_factor /* local_adv_wnd is shifted by this variable to reach the window advertised to the far end */;
#elif defined(__LITTLE_ENDIAN)
uint16_t window_scaling_factor /* local_adv_wnd is shifted by this variable to reach the window advertised to the far end */;
uint16_t pseudo_csum /* the precalculated pseudo checksum header for pbf command construction */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved2;
uint8_t statistics_counter_id /* The ID of the statistics client for counting common/L2 statistics */;
uint8_t statistics_params;
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L2_STATSTICS (0x1<<0) /* BitField statistics_params Tcp parameters set by the driver, determines whether or not to update l2 statistics */
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L2_STATSTICS_SHIFT 0
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L4_STATSTICS (0x1<<1) /* BitField statistics_params Tcp parameters set by the driver, determines whether or not to update l4 statistics */
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L4_STATSTICS_SHIFT 1
#define XSTORM_TCP_CONTEXT_SECTION_RESERVED (0x3F<<2) /* BitField statistics_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_RESERVED_SHIFT 2
#elif defined(__LITTLE_ENDIAN)
uint8_t statistics_params;
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L2_STATSTICS (0x1<<0) /* BitField statistics_params Tcp parameters set by the driver, determines whether or not to update l2 statistics */
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L2_STATSTICS_SHIFT 0
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L4_STATSTICS (0x1<<1) /* BitField statistics_params Tcp parameters set by the driver, determines whether or not to update l4 statistics */
#define XSTORM_TCP_CONTEXT_SECTION_UPDATE_L4_STATSTICS_SHIFT 1
#define XSTORM_TCP_CONTEXT_SECTION_RESERVED (0x3F<<2) /* BitField statistics_params Tcp parameters */
#define XSTORM_TCP_CONTEXT_SECTION_RESERVED_SHIFT 2
uint8_t statistics_counter_id /* The ID of the statistics client for counting common/L2 statistics */;
uint16_t reserved2;
#endif
uint32_t ts_time_diff /* Time Stamp Offload, used in PBF Header Builder Command */;
uint32_t __next_timer_expir /* Last Packet Real Time Clock Stamp */;
};
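/*
 * Illustrative helper, not part of the original HSI: reading a single
 * bit-field (here, SACK enabled) out of tcp_params with the
 * mask-then-shift pattern implied by the macro pairs above. The helper
 * name is ours.
 */
static inline int
xstorm_tcp_sack_enabled(const struct xstorm_tcp_context_section *tcp)
{
	return ((tcp->tcp_params & XSTORM_TCP_CONTEXT_SECTION_SACK_ENABLED) >>
	    XSTORM_TCP_CONTEXT_SECTION_SACK_ENABLED_SHIFT);
}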
/*
* Common context section, shared in TOE, RDMA and ISCSI
*/
struct xstorm_common_context_section
{
struct xstorm_eth_context_section ethernet;
union xstorm_ip_context_section_types ip_union;
struct xstorm_tcp_context_section tcp;
#if defined(__BIG_ENDIAN)
uint8_t __dcb_val;
uint8_t flags;
#define XSTORM_COMMON_CONTEXT_SECTION_PHYSQ_INITIALIZED (0x1<<0) /* BitField flags Tcp parameters part of the tx switching state machine */
#define XSTORM_COMMON_CONTEXT_SECTION_PHYSQ_INITIALIZED_SHIFT 0
#define XSTORM_COMMON_CONTEXT_SECTION_PBF_PORT (0x7<<1) /* BitField flags Tcp parameters determines to which voq credit will be returned */
#define XSTORM_COMMON_CONTEXT_SECTION_PBF_PORT_SHIFT 1
#define XSTORM_COMMON_CONTEXT_SECTION_VLAN_MODE (0x1<<4) /* BitField flags Tcp parameters Flag that states whether inner vlan was provided by the OS */
#define XSTORM_COMMON_CONTEXT_SECTION_VLAN_MODE_SHIFT 4
#define XSTORM_COMMON_CONTEXT_SECTION_ORIGINAL_PRIORITY (0x7<<5) /* BitField flags Tcp parameters original priority given from the OS */
#define XSTORM_COMMON_CONTEXT_SECTION_ORIGINAL_PRIORITY_SHIFT 5
uint8_t outer_tag_flags;
#define XSTORM_COMMON_CONTEXT_SECTION_DCB_OUTER_PRI (0x7<<0) /* BitField outer_tag_flags Tcp parameters Priority of outer tag in case of DCB enabled */
#define XSTORM_COMMON_CONTEXT_SECTION_DCB_OUTER_PRI_SHIFT 0
#define XSTORM_COMMON_CONTEXT_SECTION_OUTER_PRI (0x7<<3) /* BitField outer_tag_flags Tcp parameters Priority of outer tag in case of DCB disabled */
#define XSTORM_COMMON_CONTEXT_SECTION_OUTER_PRI_SHIFT 3
#define XSTORM_COMMON_CONTEXT_SECTION_RESERVED (0x3<<6) /* BitField outer_tag_flags Tcp parameters */
#define XSTORM_COMMON_CONTEXT_SECTION_RESERVED_SHIFT 6
uint8_t ip_version_1b;
#elif defined(__LITTLE_ENDIAN)
uint8_t ip_version_1b;
uint8_t outer_tag_flags;
#define XSTORM_COMMON_CONTEXT_SECTION_DCB_OUTER_PRI (0x7<<0) /* BitField outer_tag_flags Tcp parameters Priority of outer tag in case of DCB enabled */
#define XSTORM_COMMON_CONTEXT_SECTION_DCB_OUTER_PRI_SHIFT 0
#define XSTORM_COMMON_CONTEXT_SECTION_OUTER_PRI (0x7<<3) /* BitField outer_tag_flags Tcp parameters Priority of outer tag in case of DCB disabled */
#define XSTORM_COMMON_CONTEXT_SECTION_OUTER_PRI_SHIFT 3
#define XSTORM_COMMON_CONTEXT_SECTION_RESERVED (0x3<<6) /* BitField outer_tag_flags Tcp parameters */
#define XSTORM_COMMON_CONTEXT_SECTION_RESERVED_SHIFT 6
uint8_t flags;
#define XSTORM_COMMON_CONTEXT_SECTION_PHYSQ_INITIALIZED (0x1<<0) /* BitField flags Tcp parameters part of the tx switching state machine */
#define XSTORM_COMMON_CONTEXT_SECTION_PHYSQ_INITIALIZED_SHIFT 0
#define XSTORM_COMMON_CONTEXT_SECTION_PBF_PORT (0x7<<1) /* BitField flags Tcp parameters determines to which voq credit will be returned */
#define XSTORM_COMMON_CONTEXT_SECTION_PBF_PORT_SHIFT 1
#define XSTORM_COMMON_CONTEXT_SECTION_VLAN_MODE (0x1<<4) /* BitField flags Tcp parameters Flag that states whether inner vlan was provided by the OS */
#define XSTORM_COMMON_CONTEXT_SECTION_VLAN_MODE_SHIFT 4
#define XSTORM_COMMON_CONTEXT_SECTION_ORIGINAL_PRIORITY (0x7<<5) /* BitField flags Tcp parameters original priority given from the OS */
#define XSTORM_COMMON_CONTEXT_SECTION_ORIGINAL_PRIORITY_SHIFT 5
uint8_t __dcb_val;
#endif
};
/*
* Flags used in ISCSI context section
*/
struct xstorm_iscsi_context_flags
{
uint8_t flags;
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_IMMEDIATE_DATA (0x1<<0) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_IMMEDIATE_DATA_SHIFT 0
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_INITIAL_R2T (0x1<<1) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_INITIAL_R2T_SHIFT 1
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_EN_HEADER_DIGEST (0x1<<2) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_EN_HEADER_DIGEST_SHIFT 2
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_EN_DATA_DIGEST (0x1<<3) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_EN_DATA_DIGEST_SHIFT 3
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_HQ_BD_WRITTEN (0x1<<4) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_HQ_BD_WRITTEN_SHIFT 4
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_LAST_OP_SQ (0x1<<5) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_LAST_OP_SQ_SHIFT 5
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_UPDATE_SND_NXT (0x1<<6) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_B_UPDATE_SND_NXT_SHIFT 6
#define XSTORM_ISCSI_CONTEXT_FLAGS_RESERVED4 (0x1<<7) /* BitField flags */
#define XSTORM_ISCSI_CONTEXT_FLAGS_RESERVED4_SHIFT 7
};
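/*
 * Illustrative helper, not part of the original HSI: setting or
 * clearing one of the single-bit flags above without disturbing its
 * neighbours. The helper name is ours.
 */
static inline void
xstorm_iscsi_set_hq_bd_written(struct xstorm_iscsi_context_flags *f, int on)
{
	if (on)
		f->flags |= XSTORM_ISCSI_CONTEXT_FLAGS_B_HQ_BD_WRITTEN;
	else
		f->flags &= ~XSTORM_ISCSI_CONTEXT_FLAGS_B_HQ_BD_WRITTEN;
}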
struct iscsi_task_context_entry_x
{
uint32_t data_out_buffer_offset;
uint32_t itt;
uint32_t data_sn;
};
struct iscsi_task_context_entry_xuc_x_write_only
{
uint32_t tx_r2t_sn /* Xstorm increments for every data-out seq sent. */;
};
struct iscsi_task_context_entry_xuc_xu_write_both
{
uint32_t sgl_base_lo;
uint32_t sgl_base_hi;
#if defined(__BIG_ENDIAN)
uint8_t sgl_size;
uint8_t sge_index;
uint16_t sge_offset;
#elif defined(__LITTLE_ENDIAN)
uint16_t sge_offset;
uint8_t sge_index;
uint8_t sgl_size;
#endif
};
/*
* iSCSI context section
*/
struct xstorm_iscsi_context_section
{
uint32_t first_burst_length;
uint32_t max_send_pdu_length;
struct regpair_t sq_pbl_base;
struct regpair_t sq_curr_pbe;
struct regpair_t hq_pbl_base;
struct regpair_t hq_curr_pbe_base;
struct regpair_t r2tq_pbl_base;
struct regpair_t r2tq_curr_pbe_base;
struct regpair_t task_pbl_base;
#if defined(__BIG_ENDIAN)
uint16_t data_out_count;
struct xstorm_iscsi_context_flags flags;
uint8_t task_pbl_cache_idx /* All-ones value stands for PBL not cached */;
#elif defined(__LITTLE_ENDIAN)
uint8_t task_pbl_cache_idx /* All-ones value stands for PBL not cached */;
struct xstorm_iscsi_context_flags flags;
uint16_t data_out_count;
#endif
uint32_t seq_more_2_send;
uint32_t pdu_more_2_send;
struct iscsi_task_context_entry_x temp_tce_x;
struct iscsi_task_context_entry_xuc_x_write_only temp_tce_x_wr;
struct iscsi_task_context_entry_xuc_xu_write_both temp_tce_xu_wr;
struct regpair_t lun;
uint32_t exp_data_transfer_len_ttt /* Overloaded with ttt in multi-pdu sequences flow. */;
uint32_t pdu_data_2_rxmit;
uint32_t rxmit_bytes_2_dr;
#if defined(__BIG_ENDIAN)
uint16_t rxmit_sge_offset;
uint16_t hq_rxmit_cons;
#elif defined(__LITTLE_ENDIAN)
uint16_t hq_rxmit_cons;
uint16_t rxmit_sge_offset;
#endif
#if defined(__BIG_ENDIAN)
uint16_t r2tq_cons;
uint8_t rxmit_flags;
#define XSTORM_ISCSI_CONTEXT_SECTION_B_NEW_HQ_BD (0x1<<0) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_NEW_HQ_BD_SHIFT 0
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PDU_HDR (0x1<<1) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PDU_HDR_SHIFT 1
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_END_PDU (0x1<<2) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_END_PDU_SHIFT 2
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_DR (0x1<<3) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_DR_SHIFT 3
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_START_DR (0x1<<4) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_START_DR_SHIFT 4
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PADDING (0x3<<5) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PADDING_SHIFT 5
#define XSTORM_ISCSI_CONTEXT_SECTION_B_ISCSI_CONT_FAST_RXMIT (0x1<<7) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_ISCSI_CONT_FAST_RXMIT_SHIFT 7
uint8_t rxmit_sge_idx;
#elif defined(__LITTLE_ENDIAN)
uint8_t rxmit_sge_idx;
uint8_t rxmit_flags;
#define XSTORM_ISCSI_CONTEXT_SECTION_B_NEW_HQ_BD (0x1<<0) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_NEW_HQ_BD_SHIFT 0
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PDU_HDR (0x1<<1) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PDU_HDR_SHIFT 1
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_END_PDU (0x1<<2) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_END_PDU_SHIFT 2
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_DR (0x1<<3) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_DR_SHIFT 3
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_START_DR (0x1<<4) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_START_DR_SHIFT 4
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PADDING (0x3<<5) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_RXMIT_PADDING_SHIFT 5
#define XSTORM_ISCSI_CONTEXT_SECTION_B_ISCSI_CONT_FAST_RXMIT (0x1<<7) /* BitField rxmit_flags */
#define XSTORM_ISCSI_CONTEXT_SECTION_B_ISCSI_CONT_FAST_RXMIT_SHIFT 7
uint16_t r2tq_cons;
#endif
uint32_t hq_rxmit_tcp_seq;
};
/*
* Xstorm iSCSI Storm Context
*/
struct xstorm_iscsi_st_context
{
struct xstorm_common_context_section common;
struct xstorm_iscsi_context_section iscsi;
};
/*
* Iscsi connection context
*/
struct iscsi_context
{
struct ustorm_iscsi_st_context ustorm_st_context /* Ustorm storm context */;
struct tstorm_iscsi_st_context tstorm_st_context /* Tstorm storm context */;
struct xstorm_iscsi_ag_context xstorm_ag_context /* Xstorm aggregative context */;
struct tstorm_iscsi_ag_context tstorm_ag_context /* Tstorm aggregative context */;
struct cstorm_iscsi_ag_context cstorm_ag_context /* Cstorm aggregative context */;
struct ustorm_iscsi_ag_context ustorm_ag_context /* Ustorm aggregative context */;
struct timers_block_context timers_context /* Timers block context */;
struct regpair_t upb_context /* UPb context */;
struct xstorm_iscsi_st_context xstorm_st_context /* Xstorm storm context */;
struct regpair_t xpb_context /* XPb context (inside the PBF) */;
struct cstorm_iscsi_st_context cstorm_st_context /* Cstorm storm context */;
};
/*
* PDU header of an iSCSI DATA-OUT
*/
struct iscsi_data_pdu_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_RSRV1 (0x7F<<0) /* BitField op_attr */
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 0
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG (0x1<<7) /* BitField op_attr */
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG_SHIFT 7
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t op_attr;
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_RSRV1 (0x7F<<0) /* BitField op_attr */
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 0
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG (0x1<<7) /* BitField op_attr */
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_FINAL_FLAG_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
struct regpair_t lun;
uint32_t itt;
uint32_t ttt;
uint32_t rsrv2;
uint32_t exp_stat_sn;
uint32_t rsrv3;
uint32_t data_sn;
uint32_t buffer_offset;
uint32_t rsrv4;
};
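/*
 * Illustrative helper, not part of the original HSI: the 24-bit data
 * segment length and the 8-bit total AHS length share the data_fields
 * word; this shows how the former is extracted. The helper name is
 * ours.
 */
static inline uint32_t
iscsi_data_pdu_data_segment_length(
    const struct iscsi_data_pdu_hdr_little_endian *hdr)
{
	return ((hdr->data_fields &
	    ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH) >>
	    ISCSI_DATA_PDU_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT);
}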
/*
* PDU header of an iSCSI login request
*/
struct iscsi_login_req_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_NSG (0x3<<0) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_NSG_SHIFT 0
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CSG (0x3<<2) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CSG_SHIFT 2
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_RSRV0 (0x3<<4) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_RSRV0_SHIFT 4
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG (0x1<<6) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG_SHIFT 6
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_TRANSIT (0x1<<7) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_TRANSIT_SHIFT 7
uint8_t version_max;
uint8_t version_min;
#elif defined(__LITTLE_ENDIAN)
uint8_t version_min;
uint8_t version_max;
uint8_t op_attr;
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_NSG (0x3<<0) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_NSG_SHIFT 0
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CSG (0x3<<2) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CSG_SHIFT 2
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_RSRV0 (0x3<<4) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_RSRV0_SHIFT 4
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG (0x1<<6) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG_SHIFT 6
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_TRANSIT (0x1<<7) /* BitField op_attr */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_TRANSIT_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_LOGIN_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
uint32_t isid_lo;
#if defined(__BIG_ENDIAN)
uint16_t isid_hi;
uint16_t tsih;
#elif defined(__LITTLE_ENDIAN)
uint16_t tsih;
uint16_t isid_hi;
#endif
uint32_t itt;
#if defined(__BIG_ENDIAN)
uint16_t cid;
uint16_t rsrv1;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv1;
uint16_t cid;
#endif
uint32_t cmd_sn;
uint32_t exp_stat_sn;
uint32_t rsrv2[4];
};
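/*
 * Illustrative helper, not part of the original HSI: assuming isid_lo
 * carries the low 32 bits and isid_hi the high 16 bits of the 48-bit
 * iSCSI ISID, this reassembles it into one value. Both the helper name
 * and that lo/hi interpretation are our assumptions.
 */
static inline uint64_t
iscsi_login_req_isid(const struct iscsi_login_req_hdr_little_endian *hdr)
{
	return (((uint64_t)hdr->isid_hi << 32) | hdr->isid_lo);
}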
/*
* PDU header of an iSCSI logout request
*/
struct iscsi_logout_req_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_REASON_CODE (0x7F<<0) /* BitField op_attr */
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_REASON_CODE_SHIFT 0
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_RSRV1_1 (0x1<<7) /* BitField op_attr this value must be 1 */
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_RSRV1_1_SHIFT 7
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t op_attr;
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_REASON_CODE (0x7F<<0) /* BitField op_attr */
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_REASON_CODE_SHIFT 0
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_RSRV1_1 (0x1<<7) /* BitField op_attr this value must be 1 */
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_RSRV1_1_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_LOGOUT_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
uint32_t rsrv2[2];
uint32_t itt;
#if defined(__BIG_ENDIAN)
uint16_t cid;
uint16_t rsrv1;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv1;
uint16_t cid;
#endif
uint32_t cmd_sn;
uint32_t exp_stat_sn;
uint32_t rsrv3[4];
};
/*
* PDU header of an iSCSI TMF request
*/
struct iscsi_tmf_req_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_FUNCTION (0x7F<<0) /* BitField op_attr */
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_FUNCTION_SHIFT 0
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_RSRV1_1 (0x1<<7) /* BitField op_attr this value must be 1 */
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_RSRV1_1_SHIFT 7
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t op_attr;
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_FUNCTION (0x7F<<0) /* BitField op_attr */
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_FUNCTION_SHIFT 0
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_RSRV1_1 (0x1<<7) /* BitField op_attr this value must be 1 */
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_RSRV1_1_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_TMF_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
struct regpair_t lun;
uint32_t itt;
uint32_t referenced_task_tag;
uint32_t cmd_sn;
uint32_t exp_stat_sn;
uint32_t ref_cmd_sn;
uint32_t exp_data_sn;
uint32_t rsrv2[2];
};
/*
* PDU header of an iSCSI Text request
*/
struct iscsi_text_req_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_RSRV1 (0x3F<<0) /* BitField op_attr */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 0
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG (0x1<<6) /* BitField op_attr */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG_SHIFT 6
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_FINAL (0x1<<7) /* BitField op_attr */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_FINAL_SHIFT 7
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t op_attr;
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_RSRV1 (0x3F<<0) /* BitField op_attr */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 0
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG (0x1<<6) /* BitField op_attr */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_CONTINUE_FLG_SHIFT 6
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_FINAL (0x1<<7) /* BitField op_attr */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_FINAL_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_TEXT_REQ_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
struct regpair_t lun;
uint32_t itt;
uint32_t ttt;
uint32_t cmd_sn;
uint32_t exp_stat_sn;
uint32_t rsrv3[4];
};
/*
* PDU header of an iSCSI Nop-Out
*/
struct iscsi_nop_out_hdr_little_endian
{
#if defined(__BIG_ENDIAN)
uint8_t opcode;
uint8_t op_attr;
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV1 (0x7F<<0) /* BitField op_attr */
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 0
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV2_1 (0x1<<7) /* BitField op_attr this reserved bit must be set to 1 */
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV2_1_SHIFT 7
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint8_t op_attr;
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV1 (0x7F<<0) /* BitField op_attr */
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV1_SHIFT 0
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV2_1 (0x1<<7) /* BitField op_attr this reserved bit must be set to 1 */
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_RSRV2_1_SHIFT 7
uint8_t opcode;
#endif
uint32_t data_fields;
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH (0xFFFFFF<<0) /* BitField data_fields */
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_DATA_SEGMENT_LENGTH_SHIFT 0
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH (0xFF<<24) /* BitField data_fields */
#define ISCSI_NOP_OUT_HDR_LITTLE_ENDIAN_TOTAL_AHS_LENGTH_SHIFT 24
struct regpair_t lun;
uint32_t itt;
uint32_t ttt;
uint32_t cmd_sn;
uint32_t exp_stat_sn;
uint32_t rsrv3[4];
};
/*
* iscsi pdu headers in little endian form.
*/
union iscsi_pdu_headers_little_endian
{
uint32_t fullHeaderSize[12] /* The full size of the header; protects the union size */;
struct iscsi_cmd_pdu_hdr_little_endian command_pdu_hdr /* PDU header of an iSCSI command - read,write */;
struct iscsi_data_pdu_hdr_little_endian data_out_pdu_hdr /* PDU header of an iSCSI DATA-IN and DATA-OUT PDU */;
struct iscsi_login_req_hdr_little_endian login_req_pdu_hdr /* PDU header of an iSCSI Login request */;
struct iscsi_logout_req_hdr_little_endian logout_req_pdu_hdr /* PDU header of an iSCSI Logout request */;
struct iscsi_tmf_req_hdr_little_endian tmf_req_pdu_hdr /* PDU header of an iSCSI TMF request */;
struct iscsi_text_req_hdr_little_endian text_req_pdu_hdr /* PDU header of an iSCSI Text request */;
struct iscsi_nop_out_hdr_little_endian nop_out_pdu_hdr /* PDU header of an iSCSI Nop-Out */;
};
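/*
 * Illustrative check, not part of the original HSI: fullHeaderSize[12]
 * exists to pin the union at 48 bytes, so a C11 compile-time guard for
 * that invariant could look like this.
 */
_Static_assert(sizeof(union iscsi_pdu_headers_little_endian) ==
    12 * sizeof(uint32_t),
    "iscsi_pdu_headers_little_endian must stay 48 bytes");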
struct iscsi_hq_bd
{
union iscsi_pdu_headers_little_endian pdu_header;
#if defined(__BIG_ENDIAN)
uint16_t reserved1;
uint16_t lcl_cmp_flg;
#elif defined(__LITTLE_ENDIAN)
uint16_t lcl_cmp_flg;
uint16_t reserved1;
#endif
uint32_t sgl_base_lo;
uint32_t sgl_base_hi;
#if defined(__BIG_ENDIAN)
uint8_t sgl_size;
uint8_t sge_index;
uint16_t sge_offset;
#elif defined(__LITTLE_ENDIAN)
uint16_t sge_offset;
uint8_t sge_index;
uint8_t sgl_size;
#endif
};
/*
* CQE data for L2 OOO connection $$KEEP_ENDIANNESS$$
*/
struct iscsi_l2_ooo_data
{
uint32_t iscsi_cid /* iSCSI context ID */;
uint8_t drop_isle /* isle number of the first isle to drop */;
uint8_t drop_size /* number of isles to drop */;
uint8_t ooo_opcode /* Out Of Order opcode (use enum tcp_ooo_event) */;
uint8_t ooo_isle /* OOO isle number to add the packet to */;
uint8_t reserved[8];
};
struct iscsi_task_context_entry_xuc_c_write_only
{
uint32_t total_data_acked /* Xstorm inits to zero. C increments. U validates */;
};
struct iscsi_task_context_r2t_table_entry
{
uint32_t ttt;
uint32_t desired_data_len;
};
struct iscsi_task_context_entry_xuc_u_write_only
{
uint32_t exp_r2t_sn /* Xstorm inits to zero. U increments. */;
struct iscsi_task_context_r2t_table_entry r2t_table[4] /* U updates. X reads */;
#if defined(__BIG_ENDIAN)
uint16_t data_in_count /* X inits to zero. U increments. */;
uint8_t cq_id /* X inits to zero. U uses. */;
uint8_t valid_1b /* X sets. U resets. */;
#elif defined(__LITTLE_ENDIAN)
uint8_t valid_1b /* X sets. U resets. */;
uint8_t cq_id /* X inits to zero. U uses. */;
uint16_t data_in_count /* X inits to zero. U increments. */;
#endif
};
struct iscsi_task_context_entry_xuc
{
struct iscsi_task_context_entry_xuc_c_write_only write_c /* Cstorm only inits data here, without further change by any storm. */;
uint32_t exp_data_transfer_len /* Xstorm only inits data here. */;
struct iscsi_task_context_entry_xuc_x_write_only write_x /* only Xstorm writes data here. */;
uint32_t lun_lo /* Xstorm only inits data here. */;
struct iscsi_task_context_entry_xuc_xu_write_both write_xu /* Both X and U update this struct, but in different flow. */;
uint32_t lun_hi /* Xstorm only inits data here. */;
struct iscsi_task_context_entry_xuc_u_write_only write_u /* Ustorm only inits data here, without further change by any storm. */;
};
struct iscsi_task_context_entry_u
{
uint32_t exp_r2t_buff_offset;
uint32_t rem_rcv_len;
uint32_t exp_data_sn;
};
struct iscsi_task_context_entry
{
struct iscsi_task_context_entry_x tce_x;
#if defined(__BIG_ENDIAN)
uint16_t data_out_count;
uint16_t rsrv0;
#elif defined(__LITTLE_ENDIAN)
uint16_t rsrv0;
uint16_t data_out_count;
#endif
struct iscsi_task_context_entry_xuc tce_xuc;
struct iscsi_task_context_entry_u tce_u;
uint32_t rsrv1[7] /* increase the size to 128 bytes */;
};
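/*
 * Illustrative check, not part of the original HSI: per the rsrv1
 * comment above, the task context entry is padded out to 128 bytes; a
 * C11 compile-time guard for that could look like this.
 */
_Static_assert(sizeof(struct iscsi_task_context_entry) == 128,
    "iscsi_task_context_entry is expected to be 128 bytes");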
struct iscsi_task_context_entry_xuc_x_init_only
{
struct regpair_t lun /* X inits. U validates */;
uint32_t exp_data_transfer_len /* Xstorm inits to SQ WQE data. U validates */;
};
/*
* The data afex vif list ramrod need $$KEEP_ENDIANNESS$$
*/
struct afex_vif_list_ramrod_data
{
uint8_t afex_vif_list_command /* set, get or clear-all of a VIF list; command id defined by enum vif_list_rule_kind */;
uint8_t func_bit_map /* the function bit map to set */;
uint16_t vif_list_index /* the VIF list, in a per pf vector to add this function to */;
uint8_t func_to_clear /* the func id to clear in case of clear func mode */;
uint8_t echo;
uint16_t reserved1;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct c2s_pri_trans_table_entry
{
uint8_t val[MAX_VLAN_PRIORITIES] /* Inner to outer vlan priority translation table entry for current PF */;
};
/*
* cfc delete event data $$KEEP_ENDIANNESS$$
*/
struct cfc_del_event_data
{
uint32_t cid /* cid of deleted connection */;
uint32_t reserved0;
uint32_t reserved1;
};
/*
* per-port SAFC demo variables
*/
struct cmng_flags_per_port
{
uint32_t cmng_enables;
#define CMNG_FLAGS_PER_PORT_FAIRNESS_VN (0x1<<0) /* BitField cmng_enables enables flag for fairness and rate shaping between protocols, vnics and COSes; if set, enable fairness between vnics */
#define CMNG_FLAGS_PER_PORT_FAIRNESS_VN_SHIFT 0
#define CMNG_FLAGS_PER_PORT_RATE_SHAPING_VN (0x1<<1) /* BitField cmng_enables enables flag for fairness and rate shaping between protocols, vnics and COSes; if set, enable rate shaping between vnics */
#define CMNG_FLAGS_PER_PORT_RATE_SHAPING_VN_SHIFT 1
#define CMNG_FLAGS_PER_PORT_FAIRNESS_COS (0x1<<2) /* BitField cmng_enables enables flag for fairness and rate shaping between protocols, vnics and COSes; if set, enable fairness between COSes */
#define CMNG_FLAGS_PER_PORT_FAIRNESS_COS_SHIFT 2
#define CMNG_FLAGS_PER_PORT_FAIRNESS_COS_MODE (0x1<<3) /* BitField cmng_enables enables flag for fairness and rate shaping between protocols, vnics and COSes (use enum fairness_mode) */
#define CMNG_FLAGS_PER_PORT_FAIRNESS_COS_MODE_SHIFT 3
#define __CMNG_FLAGS_PER_PORT_RESERVED0 (0xFFFFFFF<<4) /* BitField cmng_enables enables flag for fairness and rate shaping between protocols, vnics and COSes; reserved */
#define __CMNG_FLAGS_PER_PORT_RESERVED0_SHIFT 4
uint32_t __reserved1;
};
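/*
 * Illustrative helper, not part of the original HSI: turning on the
 * per-vnic fairness and rate-shaping enables in cmng_enables. The
 * helper name is ours.
 */
static inline void
cmng_enable_vn_fairness_and_rate_shaping(struct cmng_flags_per_port *flags)
{
	flags->cmng_enables |= CMNG_FLAGS_PER_PORT_FAIRNESS_VN |
	    CMNG_FLAGS_PER_PORT_RATE_SHAPING_VN;
}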
/*
* per-port rate shaping variables
*/
struct rate_shaping_vars_per_port
{
uint32_t rs_periodic_timeout /* timeout of periodic timer */;
uint32_t rs_threshold /* threshold, below which we start to stop queues */;
};
/*
* per-port fairness variables
*/
struct fairness_vars_per_port
{
uint32_t upper_bound /* Quota for a protocol/vnic */;
uint32_t fair_threshold /* almost-empty threshold */;
uint32_t fairness_timeout /* timeout of fairness timer */;
uint32_t reserved0;
};
/*
* per-port SAFC variables
*/
struct safc_struct_per_port
{
#if defined(__BIG_ENDIAN)
uint16_t __reserved1;
uint8_t __reserved0;
uint8_t safc_timeout_usec /* timeout to stop queues on SAFC pause command */;
#elif defined(__LITTLE_ENDIAN)
uint8_t safc_timeout_usec /* timeout to stop queues on SAFC pause command */;
uint8_t __reserved0;
uint16_t __reserved1;
#endif
uint8_t cos_to_traffic_types[MAX_COS_NUMBER] /* translate cos to service traffic types */;
uint16_t cos_to_pause_mask[NUM_OF_SAFC_BITS] /* QM pause mask for each class of service in the SAFC frame */;
};
/*
* Per-port congestion management variables
*/
struct cmng_struct_per_port
{
struct rate_shaping_vars_per_port rs_vars;
struct fairness_vars_per_port fair_vars;
struct safc_struct_per_port safc_vars;
struct cmng_flags_per_port flags;
};
/*
* A single rate shaping counter; can be used as a protocol or vnic counter
*/
struct rate_shaping_counter
{
uint32_t quota /* Quota for a protocol/vnic */;
#if defined(__BIG_ENDIAN)
uint16_t __reserved0;
uint16_t rate /* Vnic/Protocol rate in units of Mega-bits/sec */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rate /* Vnic/Protocol rate in units of Mega-bits/sec */;
uint16_t __reserved0;
#endif
};
/*
* per-vnic rate shaping variables
*/
struct rate_shaping_vars_per_vn
{
struct rate_shaping_counter vn_counter /* per-vnic counter */;
};
/*
* per-vnic fairness variables
*/
struct fairness_vars_per_vn
{
uint32_t cos_credit_delta[MAX_COS_NUMBER] /* used for incrementing the credit */;
uint32_t vn_credit_delta /* used for incrementing the credit */;
uint32_t __reserved0;
};
/*
* cmng port init state
*/
struct cmng_vnic
{
struct rate_shaping_vars_per_vn vnic_max_rate[4];
struct fairness_vars_per_vn vnic_min_rate[4];
};
/*
* cmng port init state
*/
struct cmng_init
{
struct cmng_struct_per_port port;
struct cmng_vnic vnic;
};
/*
* driver parameters for congestion management init, all rates are in Mbps
*/
struct cmng_init_input
{
uint32_t port_rate;
uint16_t vnic_min_rate[4] /* rates are in Mbps */;
uint16_t vnic_max_rate[4] /* rates are in Mbps */;
uint16_t cos_min_rate[MAX_COS_NUMBER] /* rates are in Mbps */;
uint16_t cos_to_pause_mask[MAX_COS_NUMBER];
struct cmng_flags_per_port flags;
};
/*
* Protocol-common command ID for slow path elements
*/
enum common_spqe_cmd_id
{
RAMROD_CMD_ID_COMMON_UNUSED,
RAMROD_CMD_ID_COMMON_FUNCTION_START /* Start a function (for PFs only) */,
RAMROD_CMD_ID_COMMON_FUNCTION_STOP /* Stop a function (for PFs only) */,
RAMROD_CMD_ID_COMMON_FUNCTION_UPDATE /* niv update function */,
RAMROD_CMD_ID_COMMON_CFC_DEL /* Delete a connection from CFC */,
RAMROD_CMD_ID_COMMON_CFC_DEL_WB /* Delete a connection from CFC (with write back) */,
RAMROD_CMD_ID_COMMON_STAT_QUERY /* Collect statistics counters */,
RAMROD_CMD_ID_COMMON_STOP_TRAFFIC /* Stop Tx traffic (before DCB updates) */,
RAMROD_CMD_ID_COMMON_START_TRAFFIC /* Start Tx traffic (after DCB updates) */,
RAMROD_CMD_ID_COMMON_AFEX_VIF_LISTS /* niv vif lists */,
RAMROD_CMD_ID_COMMON_SET_TIMESYNC /* Set Timesync Parameters (E3 Only) */,
MAX_COMMON_SPQE_CMD_ID};
/*
* Per-protocol connection types
*/
enum connection_type
{
ETH_CONNECTION_TYPE /* Ethernet */,
TOE_CONNECTION_TYPE /* TOE */,
RDMA_CONNECTION_TYPE /* RDMA */,
ISCSI_CONNECTION_TYPE /* iSCSI */,
FCOE_CONNECTION_TYPE /* FCoE */,
RESERVED_CONNECTION_TYPE_0,
RESERVED_CONNECTION_TYPE_1,
RESERVED_CONNECTION_TYPE_2,
NONE_CONNECTION_TYPE /* General- used for common slow path */,
MAX_CONNECTION_TYPE};
/*
* Cos modes
*/
enum cos_mode
{
OVERRIDE_COS /* Firmware deduces cos according to DCB */,
STATIC_COS /* Firmware has constant queues per CoS */,
FW_WRR /* Firmware keeps fairness between different CoSes */,
MAX_COS_MODE};
/*
* Dynamic HC counters set by the driver
*/
struct hc_dynamic_drv_counter
{
uint32_t val[HC_SB_MAX_DYNAMIC_INDICES] /* 4 bytes * 4 indices = 2 lines */;
};
/*
* zone A per-queue data
*/
struct cstorm_queue_zone_data
{
struct hc_dynamic_drv_counter hc_dyn_drv_cnt /* 4 bytes * 4 indices = 2 lines */;
struct regpair_t reserved[2];
};
/*
* Vf-PF channel data in cstorm ram (non-triggered zone)
*/
struct vf_pf_channel_zone_data
{
uint32_t msg_addr_lo /* the message address on VF memory */;
uint32_t msg_addr_hi /* the message address on VF memory */;
};
/*
* zone for VF non-triggered data
*/
struct non_trigger_vf_zone
{
struct vf_pf_channel_zone_data vf_pf_channel /* vf-pf channel zone data */;
};
/*
* Vf-PF channel trigger zone in cstorm ram
*/
struct vf_pf_channel_zone_trigger
{
uint8_t addr_valid /* indicates that a vf-pf message is pending. MUST be set AFTER the message address. */;
};
/*
* zone that triggers the in-bound interrupt
*/
struct trigger_vf_zone
{
#if defined(__BIG_ENDIAN)
uint16_t reserved1;
uint8_t reserved0;
struct vf_pf_channel_zone_trigger vf_pf_channel;
#elif defined(__LITTLE_ENDIAN)
struct vf_pf_channel_zone_trigger vf_pf_channel;
uint8_t reserved0;
uint16_t reserved1;
#endif
uint32_t reserved2;
};
/*
* zone B per-VF data
*/
struct cstorm_vf_zone_data
{
struct non_trigger_vf_zone non_trigger /* zone for VF non-triggered data */;
struct trigger_vf_zone trigger /* zone that triggers the in-bound interrupt */;
};
/*
* Dynamic host coalescing init parameters, per state machine
*/
struct dynamic_hc_sm_config
{
uint32_t threshold[3] /* thresholds of number of outstanding bytes */;
uint8_t shift_per_protocol[HC_SB_MAX_DYNAMIC_INDICES] /* bytes difference of each protocol is shifted right by this value */;
uint8_t hc_timeout0[HC_SB_MAX_DYNAMIC_INDICES] /* timeout for level 0 for each protocol, in units of usec */;
uint8_t hc_timeout1[HC_SB_MAX_DYNAMIC_INDICES] /* timeout for level 1 for each protocol, in units of usec */;
uint8_t hc_timeout2[HC_SB_MAX_DYNAMIC_INDICES] /* timeout for level 2 for each protocol, in units of usec */;
uint8_t hc_timeout3[HC_SB_MAX_DYNAMIC_INDICES] /* timeout for level 3 for each protocol, in units of usec */;
};
/*
* Dynamic host coalescing init parameters
*/
struct dynamic_hc_config
{
struct dynamic_hc_sm_config sm_config[HC_SB_MAX_SM] /* Configuration per state machine */;
};
struct e2_integ_data
{
#if defined(__BIG_ENDIAN)
uint8_t flags;
#define E2_INTEG_DATA_TESTING_EN (0x1<<0) /* BitField flags integration testing enabled */
#define E2_INTEG_DATA_TESTING_EN_SHIFT 0
#define E2_INTEG_DATA_LB_TX (0x1<<1) /* BitField flags flag indicating this connection will transmit on loopback */
#define E2_INTEG_DATA_LB_TX_SHIFT 1
#define E2_INTEG_DATA_COS_TX (0x1<<2) /* BitField flags flag indicating this connection will transmit according to cos field */
#define E2_INTEG_DATA_COS_TX_SHIFT 2
#define E2_INTEG_DATA_OPPORTUNISTICQM (0x1<<3) /* BitField flags flag indicating this connection will activate the opportunistic QM credit flow */
#define E2_INTEG_DATA_OPPORTUNISTICQM_SHIFT 3
#define E2_INTEG_DATA_DPMTESTRELEASEDQ (0x1<<4) /* BitField flags flag indicating this connection will release the door bell queue (DQ) */
#define E2_INTEG_DATA_DPMTESTRELEASEDQ_SHIFT 4
#define E2_INTEG_DATA_RESERVED (0x7<<5) /* BitField flags */
#define E2_INTEG_DATA_RESERVED_SHIFT 5
uint8_t cos /* cos of the connection (relevant only in cos transmitting connections, when cosTx is set) */;
uint8_t voq /* voq to return credit on. Normally equal to port (i.e. always 0 in E2 operational connections). In cos tests equal to cos. In loopback tests equal to LB_PORT (=4) */;
uint8_t pbf_queue /* pbf queue to transmit on. Normally equal to port (i.e. always 0 in E2 operational connections). In cos tests equal to cos. In loopback tests equal to LB_PORT (=4) */;
#elif defined(__LITTLE_ENDIAN)
uint8_t pbf_queue /* pbf queue to transmit on. Normally equal to port (i.e. always 0 in E2 operational connections). In cos tests equal to cos. In loopback tests equal to LB_PORT (=4) */;
uint8_t voq /* voq to return credit on. Normally equal to port (i.e. always 0 in E2 operational connections). In cos tests equal to cos. In loopback tests equal to LB_PORT (=4) */;
uint8_t cos /* cos of the connection (relevant only in cos transmitting connections, when cosTx is set) */;
uint8_t flags;
#define E2_INTEG_DATA_TESTING_EN (0x1<<0) /* BitField flags integration testing enabled */
#define E2_INTEG_DATA_TESTING_EN_SHIFT 0
#define E2_INTEG_DATA_LB_TX (0x1<<1) /* BitField flags flag indicating this connection will transmit on loopback */
#define E2_INTEG_DATA_LB_TX_SHIFT 1
#define E2_INTEG_DATA_COS_TX (0x1<<2) /* BitField flags flag indicating this connection will transmit according to cos field */
#define E2_INTEG_DATA_COS_TX_SHIFT 2
#define E2_INTEG_DATA_OPPORTUNISTICQM (0x1<<3) /* BitField flags flag indicating this connection will activate the opportunistic QM credit flow */
#define E2_INTEG_DATA_OPPORTUNISTICQM_SHIFT 3
#define E2_INTEG_DATA_DPMTESTRELEASEDQ (0x1<<4) /* BitField flags flag indicating this connection will release the door bell queue (DQ) */
#define E2_INTEG_DATA_DPMTESTRELEASEDQ_SHIFT 4
#define E2_INTEG_DATA_RESERVED (0x7<<5) /* BitField flags */
#define E2_INTEG_DATA_RESERVED_SHIFT 5
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved3;
uint8_t reserved2;
uint8_t ramEn /* context area reserved for reading enable bit from ram */;
#elif defined(__LITTLE_ENDIAN)
uint8_t ramEn /* context area reserved for reading enable bit from ram */;
uint8_t reserved2;
uint16_t reserved3;
#endif
};
/*
* set mac event data $$KEEP_ENDIANNESS$$
*/
struct eth_event_data
{
uint32_t echo /* set mac echo data to return to driver */;
uint32_t reserved0;
uint32_t reserved1;
};
/*
* pf-vf event data $$KEEP_ENDIANNESS$$
*/
struct vf_pf_event_data
{
uint8_t vf_id /* VF ID (0-63) */;
uint8_t reserved0;
uint16_t reserved1;
uint32_t msg_addr_lo /* message address on Vf (low 32 bits) */;
uint32_t msg_addr_hi /* message address on Vf (high 32 bits) */;
};
/*
* VF FLR event data $$KEEP_ENDIANNESS$$
*/
struct vf_flr_event_data
{
uint8_t vf_id /* VF ID (0-63) */;
uint8_t reserved0;
uint16_t reserved1;
uint32_t reserved2;
uint32_t reserved3;
};
/*
* malicious VF event data $$KEEP_ENDIANNESS$$
*/
struct malicious_vf_event_data
{
uint8_t vf_id /* VF ID (0-63) */;
uint8_t err_id /* reason for malicious notification */;
uint16_t reserved1;
uint32_t reserved2;
uint32_t reserved3;
};
/*
* vif list event data $$KEEP_ENDIANNESS$$
*/
struct vif_list_event_data
{
uint8_t func_bit_map /* bit map of pf indices */;
uint8_t echo;
uint16_t reserved0;
uint32_t reserved1;
uint32_t reserved2;
};
/*
* function update event data $$KEEP_ENDIANNESS$$
*/
struct function_update_event_data
{
uint8_t echo;
uint8_t reserved;
uint16_t reserved0;
uint32_t reserved1;
uint32_t reserved2;
};
/*
* union for all event ring message types
*/
union event_data
{
struct vf_pf_event_data vf_pf_event /* vf-pf event data */;
struct eth_event_data eth_event /* set mac event data */;
struct cfc_del_event_data cfc_del_event /* cfc delete event data */;
struct vf_flr_event_data vf_flr_event /* vf flr event data */;
struct malicious_vf_event_data malicious_vf_event /* malicious vf event data */;
struct vif_list_event_data vif_list_event /* vif list event data */;
struct function_update_event_data function_update_event /* function update event data */;
};
/*
* per PF event ring data
*/
struct event_ring_data
{
struct regpair_native_t base_addr /* ring base address */;
#if defined(__BIG_ENDIAN)
uint8_t index_id /* index ID within the status block */;
uint8_t sb_id /* status block ID */;
uint16_t producer /* event ring producer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t producer /* event ring producer */;
uint8_t sb_id /* status block ID */;
uint8_t index_id /* index ID within the status block */;
#endif
uint32_t reserved0;
};
/*
* event ring message element (each element is 128 bits) $$KEEP_ENDIANNESS$$
*/
struct event_ring_msg
{
uint8_t opcode;
uint8_t error /* error on the message */;
uint16_t reserved1;
union event_data data /* message data (96 bits data) */;
};
/*
* event ring next page element (128 bits)
*/
struct event_ring_next
{
struct regpair_t addr /* Address of the next page of the ring */;
uint32_t reserved[2];
};
/*
* union for event ring element types (each element is 128 bits)
*/
union event_ring_elem
{
struct event_ring_msg message /* event ring message */;
struct event_ring_next next_page /* event ring next page */;
};
/*
* Common event ring opcodes
*/
enum event_ring_opcode
{
EVENT_RING_OPCODE_VF_PF_CHANNEL,
EVENT_RING_OPCODE_FUNCTION_START /* Start a function (for PFs only) */,
EVENT_RING_OPCODE_FUNCTION_STOP /* Stop a function (for PFs only) */,
EVENT_RING_OPCODE_CFC_DEL /* Delete a connection from CFC */,
EVENT_RING_OPCODE_CFC_DEL_WB /* Delete a connection from CFC (with write back) */,
EVENT_RING_OPCODE_STAT_QUERY /* Collect statistics counters */,
EVENT_RING_OPCODE_STOP_TRAFFIC /* Stop Tx traffic (before DCB updates) */,
EVENT_RING_OPCODE_START_TRAFFIC /* Start Tx traffic (after DCB updates) */,
EVENT_RING_OPCODE_VF_FLR /* VF FLR indication for PF */,
EVENT_RING_OPCODE_MALICIOUS_VF /* Malicious VF operation detected */,
EVENT_RING_OPCODE_FORWARD_SETUP /* Initialize forward channel */,
EVENT_RING_OPCODE_RSS_UPDATE_RULES /* Update RSS configuration */,
EVENT_RING_OPCODE_FUNCTION_UPDATE /* function update */,
EVENT_RING_OPCODE_AFEX_VIF_LISTS /* event ring opcode niv vif lists */,
EVENT_RING_OPCODE_SET_MAC /* Add/remove MAC (in E1x only) */,
EVENT_RING_OPCODE_CLASSIFICATION_RULES /* Add/remove MAC or VLAN (in E2/E3 only) */,
EVENT_RING_OPCODE_FILTERS_RULES /* Add/remove classification filters for L2 client (in E2/E3 only) */,
EVENT_RING_OPCODE_MULTICAST_RULES /* Add/remove multicast classification bin (in E2/E3 only) */,
EVENT_RING_OPCODE_SET_TIMESYNC /* Set Timesync Parameters (E3 Only) */,
MAX_EVENT_RING_OPCODE};
/*
* Modes for fairness algorithm
*/
enum fairness_mode
{
FAIRNESS_COS_WRR_MODE /* Weighted round robin mode (used in Google) */,
FAIRNESS_COS_ETS_MODE /* ETS mode (used in FCoE) */,
MAX_FAIRNESS_MODE};
/*
* Priority and cos $$KEEP_ENDIANNESS$$
*/
struct priority_cos
{
uint8_t priority /* Priority */;
uint8_t cos /* Cos */;
uint16_t reserved1;
};
/*
* The data for flow control configuration $$KEEP_ENDIANNESS$$
*/
struct flow_control_configuration
{
struct priority_cos traffic_type_to_priority_cos[MAX_TRAFFIC_TYPES] /* traffic_type to priority cos */;
uint8_t dcb_enabled /* If DCB mode is enabled, the traffic-class-to-priority array is fully initialized and an inner VLAN must be present */;
uint8_t dcb_version /* DCB version. Increased by one on each DCB update */;
uint8_t dont_add_pri_0 /* If the priority is 0 and the packet has no vlan, the firmware will not add a vlan */;
uint8_t reserved1;
uint32_t reserved2;
uint8_t dcb_outer_pri[MAX_TRAFFIC_TYPES] /* Indicates the updated DCB outer tag priority per protocol */;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct function_start_data
{
uint8_t function_mode /* the function mode */;
uint8_t allow_npar_tx_switching /* If set, inter-pf tx switching is allowed in Switch Independent function mode. (E2/E3 Only) */;
uint16_t sd_vlan_tag /* value of Vlan in case of switch dependent multi-function mode */;
uint16_t vif_id /* value of VIF id in case of NIV multi-function mode */;
uint8_t path_id;
uint8_t network_cos_mode /* The cos mode for network traffic. */;
uint8_t dmae_cmd_id /* The DMAE command id to use for FW DMAE transactions */;
uint8_t no_added_tags /* If set, the mfTag length is always zero (used in UFP) */;
uint16_t reserved0;
uint32_t reserved1;
uint8_t inner_clss_vxlan /* Classification type for VXLAN */;
uint8_t inner_clss_l2gre /* If set, classification on the inner MAC/VLAN of L2GRE tunneled packets is enabled */;
uint8_t inner_clss_l2geneve /* If set, classification on the inner MAC/(VLAN or VNI) of L2GENEVE tunneled packets is enabled */;
uint8_t inner_rss /* If set, RSS on the inner headers of tunneled packets is enabled */;
uint16_t vxlan_dst_port /* UDP Destination Port to be recognised as VXLAN tunneled packets (0 is disabled) */;
uint16_t geneve_dst_port /* UDP Destination Port to be recognised as GENEVE tunneled packets (0 is disabled) */;
uint8_t sd_accept_mf_clss_fail /* If set, accept packets that fail Multi-Function Switch-Dependent classification. Only one VNIC on the port can have this set to 1 */;
uint8_t sd_accept_mf_clss_fail_match_ethtype /* If set, accepted packets must match the ethertype of sd_clss_fail_ethtype */;
uint16_t sd_accept_mf_clss_fail_ethtype /* Ethertype to match in the case of sd_accept_mf_clss_fail_match_ethtype */;
uint16_t sd_vlan_eth_type /* Value of ether-type to use in the case of switch dependent multi-function mode. Setting this to 0 uses the default value of 0x8100 */;
uint8_t sd_vlan_force_pri_flg /* If set, the SD Vlan Priority is forced to the value of the sd_vlan_pri_force_val field regardless of the DCB or inband VLAN priority. */;
uint8_t sd_vlan_force_pri_val /* value to force SD Vlan Priority if sd_vlan_pri_force_flg is set */;
uint8_t c2s_pri_tt_valid /* When set, c2s_pri_trans_table is valid */;
uint8_t c2s_pri_default /* This value will be the sVlan pri value in case no Cvlan is present */;
uint8_t reserved2[6];
struct c2s_pri_trans_table_entry c2s_pri_trans_table /* Inner to outer vlan priority translation table entry for current PF */;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct function_update_data
{
uint8_t vif_id_change_flg /* If set, vif_id will be checked */;
uint8_t afex_default_vlan_change_flg /* If set, afex_default_vlan will be checked */;
uint8_t allowed_priorities_change_flg /* If set, allowed_priorities will be checked */;
uint8_t network_cos_mode_change_flg /* If set, network_cos_mode will be checked */;
uint16_t vif_id /* value of VIF id in case of NIV multi-function mode */;
uint16_t afex_default_vlan /* value of default Vlan in case of NIV mf */;
uint8_t allowed_priorities /* bit vector of allowed Vlan priorities for this VIF */;
uint8_t network_cos_mode /* The cos mode for network traffic. */;
uint8_t lb_mode_en_change_flg /* If set, lb_mode_en will be checked */;
uint8_t lb_mode_en /* If set, niv loopback mode will be enabled */;
uint8_t tx_switch_suspend_change_flg /* If set, tx_switch_suspend will be checked */;
uint8_t tx_switch_suspend /* If set, TX switching TO this function will be disabled and packets will be dropped */;
uint8_t echo;
uint8_t update_tunn_cfg_flg /* If set, tunneling config for the function will be updated according to the following fields */;
uint8_t inner_clss_vxlan /* Classification type for VXLAN */;
uint8_t inner_clss_l2gre /* If set, classification on the inner MAC/VLAN of L2GRE tunneled packets is enabled */;
uint8_t inner_clss_l2geneve /* If set, classification on the inner MAC/(VLAN or VNI) of L2GENEVE tunneled packets is enabled */;
uint8_t inner_rss /* If set, RSS on the inner headers of tunneled packets is enabled */;
uint16_t vxlan_dst_port /* UDP Destination Port to be recognised as VXLAN tunneled packets (0 is disabled) */;
uint16_t geneve_dst_port /* UDP Destination Port to be recognised as GENEVE tunneled packets (0 is disabled) */;
uint8_t sd_vlan_force_pri_change_flg /* If set, the SD VLAN Priority Fixed configuration is updated from fields sd_vlan_pri_force_flg and sd_vlan_pri_force_val */;
uint8_t sd_vlan_force_pri_flg /* If set, the SD Vlan Priority is forced to the value of the sd_vlan_pri_force_val field regardless of the DCB or inband VLAN priority. */;
uint8_t sd_vlan_force_pri_val /* value to force SD Vlan Priority if sd_vlan_pri_force_flg is set */;
uint8_t sd_vlan_tag_change_flg /* If set, the SD VLAN Tag is changed according to the field sd_vlan_tag */;
uint8_t sd_vlan_eth_type_change_flg /* If set, the SD VLAN Ethertype is changed according to the field sd_vlan_eth_type */;
uint8_t reserved1;
uint16_t sd_vlan_tag /* New value of Outer Vlan in case of switch dependent multi-function mode */;
uint16_t sd_vlan_eth_type /* New value of ether-type in the case of switch dependent multi-function mode. Setting this to 0 restores the default value of 0x8100 */;
uint16_t reserved0;
uint32_t reserved2;
};
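/*
 * Illustrative sketch (not part of the original interface): every updatable
 * field in function_update_data is paired with a *_change_flg byte, and the
 * firmware only applies a value whose flag is set. A hypothetical helper that
 * updates just the VIF id while leaving everything else untouched:
 */
static inline void
func_update_set_vif_id(struct function_update_data *data, uint16_t vif_id)
{
	data->vif_id = vif_id;
	data->vif_id_change_flg = 1; /* ask firmware to apply vif_id only */
}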
/*
* FW version stored in the Xstorm RAM
*/
struct fw_version
{
#if defined(__BIG_ENDIAN)
uint8_t engineering /* firmware current engineering version */;
uint8_t revision /* firmware current revision version */;
uint8_t minor /* firmware current minor version */;
uint8_t major /* firmware current major version */;
#elif defined(__LITTLE_ENDIAN)
uint8_t major /* firmware current major version */;
uint8_t minor /* firmware current minor version */;
uint8_t revision /* firmware current revision version */;
uint8_t engineering /* firmware current engineering version */;
#endif
uint32_t flags;
#define FW_VERSION_OPTIMIZED (0x1<<0) /* BitField flags if set, this is optimized ASM */
#define FW_VERSION_OPTIMIZED_SHIFT 0
#define FW_VERSION_BIG_ENDIEN (0x1<<1) /* BitField flags if set, this is big-endian ASM */
#define FW_VERSION_BIG_ENDIEN_SHIFT 1
#define FW_VERSION_CHIP_VERSION (0x3<<2) /* BitField flags 0 - E1, 1 - E1H */
#define FW_VERSION_CHIP_VERSION_SHIFT 2
#define __FW_VERSION_RESERVED (0xFFFFFFF<<4) /* BitField flags */
#define __FW_VERSION_RESERVED_SHIFT 4
};
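/*
 * Illustrative sketch: the paired VALUE/SHIFT macros above follow the usual
 * mask-then-shift convention, so a multi-bit field such as the chip version
 * is read by masking the word first and shifting second. The helper name is
 * hypothetical, not part of the firmware interface.
 */
static inline uint32_t
fw_version_chip(const struct fw_version *fw)
{
	return (fw->flags & FW_VERSION_CHIP_VERSION) >> FW_VERSION_CHIP_VERSION_SHIFT;
}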
/*
* Dynamic Host-Coalescing - Driver(host) counters
*/
struct hc_dynamic_sb_drv_counters
{
uint32_t dynamic_hc_drv_counter[HC_SB_MAX_DYNAMIC_INDICES] /* Dynamic HC counters written by drivers */;
};
/*
* 2 bytes. configuration/state parameters for a single protocol index
*/
struct hc_index_data
{
#if defined(__BIG_ENDIAN)
uint8_t flags;
#define HC_INDEX_DATA_SM_ID (0x1<<0) /* BitField flags Index to a state machine. Can be 0 or 1 */
#define HC_INDEX_DATA_SM_ID_SHIFT 0
#define HC_INDEX_DATA_HC_ENABLED (0x1<<1) /* BitField flags if set, host coalescing would be done for this index */
#define HC_INDEX_DATA_HC_ENABLED_SHIFT 1
#define HC_INDEX_DATA_DYNAMIC_HC_ENABLED (0x1<<2) /* BitField flags if set, dynamic HC will be done for this index */
#define HC_INDEX_DATA_DYNAMIC_HC_ENABLED_SHIFT 2
#define HC_INDEX_DATA_RESERVE (0x1F<<3) /* BitField flags */
#define HC_INDEX_DATA_RESERVE_SHIFT 3
uint8_t timeout /* the timeout values for this index. Units are 4 usec */;
#elif defined(__LITTLE_ENDIAN)
uint8_t timeout /* the timeout values for this index. Units are 4 usec */;
uint8_t flags;
#define HC_INDEX_DATA_SM_ID (0x1<<0) /* BitField flags Index to a state machine. Can be 0 or 1 */
#define HC_INDEX_DATA_SM_ID_SHIFT 0
#define HC_INDEX_DATA_HC_ENABLED (0x1<<1) /* BitField flags if set, host coalescing would be done for this index */
#define HC_INDEX_DATA_HC_ENABLED_SHIFT 1
#define HC_INDEX_DATA_DYNAMIC_HC_ENABLED (0x1<<2) /* BitField flags if set, dynamic HC will be done for this index */
#define HC_INDEX_DATA_DYNAMIC_HC_ENABLED_SHIFT 2
#define HC_INDEX_DATA_RESERVE (0x1F<<3) /* BitField flags */
#define HC_INDEX_DATA_RESERVE_SHIFT 3
#endif
};
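/*
 * Illustrative sketch: configuring a protocol index for host coalescing with
 * the flag macros above. hc_index_config() is a hypothetical helper.
 */
static inline void
hc_index_config(struct hc_index_data *idx, uint8_t sm_id, uint8_t timeout_4us)
{
	idx->flags = (uint8_t)((sm_id << HC_INDEX_DATA_SM_ID_SHIFT) |
	    HC_INDEX_DATA_HC_ENABLED);
	idx->timeout = timeout_4us; /* units of 4 usec, per the field comment */
}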
/*
* HC state-machine
*/
struct hc_status_block_sm
{
#if defined(__BIG_ENDIAN)
uint8_t igu_seg_id;
uint8_t igu_sb_id /* sb_id within the IGU */;
uint8_t timer_value /* Determines the time_to_expire */;
uint8_t __flags;
#elif defined(__LITTLE_ENDIAN)
uint8_t __flags;
uint8_t timer_value /* Determines the time_to_expire */;
uint8_t igu_sb_id /* sb_id within the IGU */;
uint8_t igu_seg_id;
#endif
uint32_t time_to_expire /* The time in which it expects to wake up */;
};
/*
* Holds PCI identification variables - used in various places in firmware
*/
struct pci_entity
{
#if defined(__BIG_ENDIAN)
uint8_t vf_valid /* If set, this is a VF, otherwise it is PF */;
uint8_t vf_id /* VF ID (0-63). Value of 0xFF means VF not valid */;
uint8_t vnic_id /* Virtual NIC ID (0-3) */;
uint8_t pf_id /* PCI physical function number (0-7). The LSB of this field is the port ID */;
#elif defined(__LITTLE_ENDIAN)
uint8_t pf_id /* PCI physical function number (0-7). The LSB of this field is the port ID */;
uint8_t vnic_id /* Virtual NIC ID (0-3) */;
uint8_t vf_id /* VF ID (0-63). Value of 0xFF means VF not valid */;
uint8_t vf_valid /* If set, this is a VF, otherwise it is PF */;
#endif
};
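/*
 * Illustrative note: the mirrored field order in the #if blocks above keeps
 * the byte layout of the structure identical when it is viewed as a 32-bit
 * word, which is how the firmware reads it. A hypothetical union makes the
 * equivalence explicit.
 */
union pci_entity_raw {
	struct pci_entity fields;
	uint32_t raw; /* same four bytes on either host endianness */
};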
/*
* The fast-path status block meta-data, common to all chips
*/
struct hc_sb_data
{
struct regpair_native_t host_sb_addr /* Host status block address */;
struct hc_status_block_sm state_machine[HC_SB_MAX_SM] /* Holds the state machines of the status block */;
struct pci_entity p_func /* vnic / port of the status block to be set by the driver */;
#if defined(__BIG_ENDIAN)
uint8_t rsrv0;
uint8_t state;
uint8_t dhc_qzone_id /* used in E2 only, to specify the HW queue zone ID used for this status block dynamic HC counters */;
uint8_t same_igu_sb_1b /* Indicates that both state machines act as a single SM */;
#elif defined(__LITTLE_ENDIAN)
uint8_t same_igu_sb_1b /* Indicates that both state machines act as a single SM */;
uint8_t dhc_qzone_id /* used in E2 only, to specify the HW queue zone ID used for this status block dynamic HC counters */;
uint8_t state;
uint8_t rsrv0;
#endif
struct regpair_native_t rsrv1[2];
};
/*
* Segment types for host coalescing
*/
enum hc_segment
{
HC_REGULAR_SEGMENT,
HC_DEFAULT_SEGMENT,
MAX_HC_SEGMENT};
/*
* The slow-path status block meta-data
*/
struct hc_sp_status_block_data
{
struct regpair_native_t host_sb_addr /* Host status block address */;
#if defined(__BIG_ENDIAN)
uint8_t rsrv1;
uint8_t state;
uint8_t igu_seg_id /* segment id of the IGU */;
uint8_t igu_sb_id /* sb_id within the IGU */;
#elif defined(__LITTLE_ENDIAN)
uint8_t igu_sb_id /* sb_id within the IGU */;
uint8_t igu_seg_id /* segment id of the IGU */;
uint8_t state;
uint8_t rsrv1;
#endif
struct pci_entity p_func /* vnic / port of the status block to be set by the driver */;
};
/*
* The fast-path status block meta-data
*/
struct hc_status_block_data_e1x
{
struct hc_index_data index_data[HC_SB_MAX_INDICES_E1X] /* configuration/state parameters for a single protocol index */;
struct hc_sb_data common /* The fast-path status block meta-data, common to all chips */;
};
/*
* The fast-path status block meta-data
*/
struct hc_status_block_data_e2
{
struct hc_index_data index_data[HC_SB_MAX_INDICES_E2] /* configuration/state parameters for a single protocol index */;
struct hc_sb_data common /* The fast-path status block meta-data, common to all chips */;
};
/*
* IGU block operation modes (in Everest2)
*/
enum igu_mode
{
HC_IGU_BC_MODE /* Backward compatible mode */,
HC_IGU_NBC_MODE /* Non-backward compatible mode */,
MAX_IGU_MODE};
/*
* Inner Headers Classification Type
*/
enum inner_clss_type
{
INNER_CLSS_DISABLED /* Inner Classification Disabled */,
INNER_CLSS_USE_VLAN /* Inner Classification using MAC/Inner VLAN */,
INNER_CLSS_USE_VNI /* Inner Classification using MAC/VNI (Only for VXLAN and GENEVE) */,
MAX_INNER_CLSS_TYPE};
/*
* IP versions
*/
enum ip_ver
{
IP_V4,
IP_V6,
MAX_IP_VER};
/*
* Malicious VF error ID
*/
enum malicious_vf_error_id
{
MALICIOUS_VF_NO_ERROR /* Zero placeholder value */,
VF_PF_CHANNEL_NOT_READY /* Writing to VF/PF channel when it is not ready */,
ETH_ILLEGAL_BD_LENGTHS /* TX BD lengths error was detected */,
ETH_PACKET_TOO_SHORT /* TX packet is shorter than reported on BDs */,
ETH_PAYLOAD_TOO_BIG /* TX packet is greater than the MTU */,
ETH_ILLEGAL_ETH_TYPE /* TX packet reported without VLAN but eth type is 0x8100 */,
ETH_ILLEGAL_LSO_HDR_LEN /* LSO header length on BDs and on hdr_nbd do not match */,
ETH_TOO_MANY_BDS /* Tx packet has too many BDs */,
ETH_ZERO_HDR_NBDS /* hdr_nbds field is zero */,
ETH_START_BD_NOT_SET /* start_bd should be set on first TX BD in packet */,
ETH_ILLEGAL_PARSE_NBDS /* Tx packet with parse_nbds field which is not legal */,
ETH_IPV6_AND_CHECKSUM /* Tx packet with IP checksum on IPv6 */,
ETH_VLAN_FLG_INCORRECT /* Tx packet with incorrect VLAN flag */,
ETH_ILLEGAL_LSO_MSS /* Tx LSO packet with illegal MSS value */,
ETH_TUNNEL_NOT_SUPPORTED /* Tunneling packets are not supported in current connection */,
MAX_MALICIOUS_VF_ERROR_ID};
/*
* Multi-function modes
*/
enum mf_mode
{
SINGLE_FUNCTION,
MULTI_FUNCTION_SD /* Switch dependent (vlan based) */,
MULTI_FUNCTION_SI /* Switch independent (mac based) */,
MULTI_FUNCTION_AFEX /* Switch dependent (niv based) */,
MAX_MF_MODE};
/*
* Protocol-common statistics collected by the Tstorm (per pf) $$KEEP_ENDIANNESS$$
*/
struct tstorm_per_pf_stats
{
struct regpair_t rcv_error_bytes /* number of bytes received with errors */;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct per_pf_stats
{
struct tstorm_per_pf_stats tstorm_pf_statistics;
};
/*
* Protocol-common statistics collected by the Tstorm (per port) $$KEEP_ENDIANNESS$$
*/
struct tstorm_per_port_stats
{
uint32_t mac_discard /* number of packets with mac errors */;
uint32_t mac_filter_discard /* the number of good frames dropped because of no perfect match to MAC/VLAN address */;
uint32_t brb_truncate_discard /* the number of packets that were dropped because they were truncated in BRB */;
uint32_t mf_tag_discard /* the number of good frames dropped because of no match to the outer vlan/VNtag */;
uint32_t packet_drop /* general packet drop counter - incremented for every packet drop */;
uint32_t reserved;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct per_port_stats
{
struct tstorm_per_port_stats tstorm_port_statistics;
};
/*
* Protocol-common statistics collected by the Tstorm (per client) $$KEEP_ENDIANNESS$$
*/
struct tstorm_per_queue_stats
{
struct regpair_t rcv_ucast_bytes /* number of bytes in unicast packets received without errors that pass the filter */;
uint32_t rcv_ucast_pkts /* number of unicast packets received without errors that pass the filter */;
uint32_t checksum_discard /* total number of packets received with a checksum error */;
struct regpair_t rcv_bcast_bytes /* number of bytes in broadcast packets received without errors that pass the filter */;
uint32_t rcv_bcast_pkts /* number of broadcast packets received without errors that pass the filter */;
uint32_t pkts_too_big_discard /* number of too-long packets received */;
struct regpair_t rcv_mcast_bytes /* number of bytes in multicast packets received without errors that pass the filter */;
uint32_t rcv_mcast_pkts /* number of multicast packets received without errors that pass the filter */;
uint32_t ttl0_discard /* the number of good frames dropped because of TTL=0 */;
uint16_t no_buff_discard;
uint16_t reserved0;
uint32_t reserved1;
};
/*
* Protocol-common statistics collected by the Ustorm (per client) $$KEEP_ENDIANNESS$$
*/
struct ustorm_per_queue_stats
{
struct regpair_t ucast_no_buff_bytes /* the number of unicast bytes received from network dropped because of no buffer at host */;
struct regpair_t mcast_no_buff_bytes /* the number of multicast bytes received from network dropped because of no buffer at host */;
struct regpair_t bcast_no_buff_bytes /* the number of broadcast bytes received from network dropped because of no buffer at host */;
uint32_t ucast_no_buff_pkts /* the number of unicast frames received from network dropped because of no buffer at host */;
uint32_t mcast_no_buff_pkts /* the number of multicast frames received from network dropped because of no buffer at host */;
uint32_t bcast_no_buff_pkts /* the number of broadcast frames received from network dropped because of no buffer at host */;
uint32_t coalesced_pkts /* the number of packets coalesced in all aggregations */;
struct regpair_t coalesced_bytes /* the number of bytes coalesced in all aggregations */;
uint32_t coalesced_events /* the number of aggregations */;
uint32_t coalesced_aborts /* the number of exceptions that prevented aggregation */;
};
/*
* Protocol-common statistics collected by the Xstorm (per client) $$KEEP_ENDIANNESS$$
*/
struct xstorm_per_queue_stats
{
struct regpair_t ucast_bytes_sent /* number of total bytes sent without errors */;
struct regpair_t mcast_bytes_sent /* number of total bytes sent without errors */;
struct regpair_t bcast_bytes_sent /* number of total bytes sent without errors */;
uint32_t ucast_pkts_sent /* number of total packets sent without errors */;
uint32_t mcast_pkts_sent /* number of total packets sent without errors */;
uint32_t bcast_pkts_sent /* number of total packets sent without errors */;
uint32_t error_drop_pkts /* number of total packets dropped due to errors */;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct per_queue_stats
{
struct tstorm_per_queue_stats tstorm_queue_statistics;
struct ustorm_per_queue_stats ustorm_queue_statistics;
struct xstorm_per_queue_stats xstorm_queue_statistics;
};
/*
* FW version stored in first line of pram $$KEEP_ENDIANNESS$$
*/
struct pram_fw_version
{
uint8_t major /* firmware current major version */;
uint8_t minor /* firmware current minor version */;
uint8_t revision /* firmware current revision version */;
uint8_t engineering /* firmware current engineering version */;
uint8_t flags;
#define PRAM_FW_VERSION_OPTIMIZED (0x1<<0) /* BitField flags if set, this is optimized ASM */
#define PRAM_FW_VERSION_OPTIMIZED_SHIFT 0
#define PRAM_FW_VERSION_STORM_ID (0x3<<1) /* BitField flags storm_id identification */
#define PRAM_FW_VERSION_STORM_ID_SHIFT 1
#define PRAM_FW_VERSION_BIG_ENDIEN (0x1<<3) /* BitField flags if set, this is big-endian ASM */
#define PRAM_FW_VERSION_BIG_ENDIEN_SHIFT 3
#define PRAM_FW_VERSION_CHIP_VERSION (0x3<<4) /* BitField flags 0 - E1, 1 - E1H */
#define PRAM_FW_VERSION_CHIP_VERSION_SHIFT 4
#define __PRAM_FW_VERSION_RESERVED0 (0x3<<6) /* BitField flags */
#define __PRAM_FW_VERSION_RESERVED0_SHIFT 6
};
/*
* Ethernet slow path element
*/
union protocol_common_specific_data
{
uint8_t protocol_data[8] /* to fix this structure size to 8 bytes */;
struct regpair_t phy_address /* SPE physical address */;
struct regpair_t mac_config_addr /* physical address of the MAC configuration command, as allocated by the driver */;
struct afex_vif_list_ramrod_data afex_vif_list_data /* The data afex vif list ramrod need */;
};
/*
* The send queue element
*/
struct protocol_common_spe
{
struct spe_hdr_t hdr /* SPE header */;
union protocol_common_specific_data data /* data specific to common protocol */;
};
/*
* The data for the Set Timesync Ramrod $$KEEP_ENDIANNESS$$
*/
struct set_timesync_ramrod_data
{
uint8_t drift_adjust_cmd /* Timesync Drift Adjust Command */;
uint8_t offset_cmd /* Timesync Offset Command */;
uint8_t add_sub_drift_adjust_value /* Whether to add(1)/subtract(0) Drift Adjust Value from the Offset */;
uint8_t drift_adjust_value /* Drift Adjust Value (in ns) */;
uint32_t drift_adjust_period /* Drift Adjust Period (in us) */;
struct regpair_t offset_delta /* Timesync Offset Delta (in ns) */;
};
/*
* The send queue element
*/
struct slow_path_element
{
struct spe_hdr_t hdr /* common data for all protocols */;
struct regpair_t protocol_data /* additional data specific to the protocol */;
};
/*
* Protocol-common statistics counter $$KEEP_ENDIANNESS$$
*/
struct stats_counter
{
uint16_t xstats_counter /* xstorm statistics counter */;
uint16_t reserved0;
uint32_t reserved1;
uint16_t tstats_counter /* tstorm statistics counter */;
uint16_t reserved2;
uint32_t reserved3;
uint16_t ustats_counter /* ustorm statistics counter */;
uint16_t reserved4;
uint32_t reserved5;
uint16_t cstats_counter /* cstorm statistics counter */;
uint16_t reserved6;
uint32_t reserved7;
};
/*
* $$KEEP_ENDIANNESS$$
*/
struct stats_query_entry
{
uint8_t kind;
uint8_t index /* queue index */;
uint16_t funcID /* the function the statistics will be sent to */;
uint32_t reserved;
struct regpair_t address /* pxp address */;
};
/*
* statistic command $$KEEP_ENDIANNESS$$
*/
struct stats_query_cmd_group
{
struct stats_query_entry query[STATS_QUERY_CMD_COUNT];
};
/*
* statistic command header $$KEEP_ENDIANNESS$$
*/
struct stats_query_header
{
uint8_t cmd_num /* command number */;
uint8_t reserved0;
uint16_t drv_stats_counter;
uint32_t reserved1;
struct regpair_t stats_counters_addrs /* stats counter */;
};
/*
* Types of statistics query entry
*/
enum stats_query_type
{
STATS_TYPE_QUEUE,
STATS_TYPE_PORT,
STATS_TYPE_PF,
STATS_TYPE_TOE,
STATS_TYPE_FCOE,
MAX_STATS_QUERY_TYPE};
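/*
 * Illustrative sketch: filling one stats_query_entry for a per-queue query.
 * stats_query_entry_init() is a hypothetical helper; the field meanings
 * follow the comments on the structure above.
 */
static inline void
stats_query_entry_init(struct stats_query_entry *e, uint8_t queue_index,
    uint16_t func_id, struct regpair_t pxp_addr)
{
	e->kind = STATS_TYPE_QUEUE; /* one of enum stats_query_type */
	e->index = queue_index;
	e->funcID = func_id; /* function the statistics will be sent to */
	e->reserved = 0;
	e->address = pxp_addr; /* where firmware writes the results */
}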
/*
* Indication of the function status block state
*/
enum status_block_state
{
SB_DISABLED,
SB_ENABLED,
SB_CLEANED,
MAX_STATUS_BLOCK_STATE};
/*
* Storm IDs (including attentions for IGU related enums)
*/
enum storm_id
{
USTORM_ID,
CSTORM_ID,
XSTORM_ID,
TSTORM_ID,
ATTENTION_ID,
MAX_STORM_ID};
/*
* Traffic types used in ETS and flow control algorithms
*/
enum traffic_type
{
LLFC_TRAFFIC_TYPE_NW /* Networking */,
LLFC_TRAFFIC_TYPE_FCOE /* FCoE */,
LLFC_TRAFFIC_TYPE_ISCSI /* iSCSI */,
MAX_TRAFFIC_TYPE};
/*
* zone A per-queue data
*/
struct tstorm_queue_zone_data
{
struct regpair_t reserved[4];
};
/*
* zone B per-VF data
*/
struct tstorm_vf_zone_data
{
struct regpair_t reserved;
};
/*
* Add or Subtract Value for Set Timesync Ramrod
*/
enum ts_add_sub_value
{
TS_SUB_VALUE /* Subtract Value */,
TS_ADD_VALUE /* Add Value */,
MAX_TS_ADD_SUB_VALUE};
/*
* Drift-Adjust Commands for Set Timesync Ramrod
*/
enum ts_drift_adjust_cmd
{
TS_DRIFT_ADJUST_KEEP /* Keep Drift-Adjust at current values */,
TS_DRIFT_ADJUST_SET /* Set Drift-Adjust */,
TS_DRIFT_ADJUST_RESET /* Reset Drift-Adjust */,
MAX_TS_DRIFT_ADJUST_CMD};
/*
* Offset Commands for Set Timesync Ramrod
*/
enum ts_offset_cmd
{
TS_OFFSET_KEEP /* Keep Offset at current values */,
TS_OFFSET_INC /* Increase Offset by Offset Delta */,
TS_OFFSET_DEC /* Decrease Offset by Offset Delta */,
MAX_TS_OFFSET_CMD};
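/*
 * Illustrative sketch: composing a Set Timesync ramrod that bumps the clock
 * offset by a delta without touching drift adjustment, using the enums above.
 * timesync_inc_offset() is a hypothetical helper.
 */
static inline void
timesync_inc_offset(struct set_timesync_ramrod_data *d, struct regpair_t delta_ns)
{
	d->drift_adjust_cmd = TS_DRIFT_ADJUST_KEEP;
	d->offset_cmd = TS_OFFSET_INC;
	d->add_sub_drift_adjust_value = TS_ADD_VALUE; /* unused with KEEP */
	d->drift_adjust_value = 0;
	d->drift_adjust_period = 0;
	d->offset_delta = delta_ns; /* in ns */
}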
/*
* Input for measuring Pci Latency
*/
struct t_measure_pci_latency_ctrl
{
struct regpair_t read_addr /* Address to read from */;
#if defined(__BIG_ENDIAN)
uint8_t sleep /* Measure including a thread sleep */;
uint8_t enable /* Enable PCI Latency measurements */;
uint8_t func_id /* Function ID */;
uint8_t read_size /* Amount of bytes to read */;
#elif defined(__LITTLE_ENDIAN)
uint8_t read_size /* Amount of bytes to read */;
uint8_t func_id /* Function ID */;
uint8_t enable /* Enable PCI Latency measurements */;
uint8_t sleep /* Measure including a thread sleep */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t num_meas /* Number of measurements to make */;
uint8_t reserved;
uint8_t period_10us /* Number of 10s of microseconds to wait between measurements */;
#elif defined(__LITTLE_ENDIAN)
uint8_t period_10us /* Number of 10s of microseconds to wait between measurements */;
uint8_t reserved;
uint16_t num_meas /* Number of measurements to make */;
#endif
};
/*
* Input for measuring Pci Latency
*/
struct t_measure_pci_latency_data
{
#if defined(__BIG_ENDIAN)
uint16_t max_time_ns /* Maximum Time for a read (in ns) */;
uint16_t min_time_ns /* Minimum Time for a read (in ns) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t min_time_ns /* Minimum Time for a read (in ns) */;
uint16_t max_time_ns /* Maximum Time for a read (in ns) */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t reserved;
uint16_t num_reads /* Number of reads - Used for Average */;
#elif defined(__LITTLE_ENDIAN)
uint16_t num_reads /* Number of reads - Used for Average */;
uint16_t reserved;
#endif
struct regpair_t sum_time_ns /* Sum of all the reads (in ns) - Used for Average */;
};
/*
* zone A per-queue data
*/
struct ustorm_queue_zone_data
{
struct ustorm_eth_rx_producers eth_rx_producers /* ETH RX rings producers */;
struct regpair_t reserved[3];
};
/*
* zone B per-VF data
*/
struct ustorm_vf_zone_data
{
struct regpair_t reserved;
};
/*
* data per VF-PF channel
*/
struct vf_pf_channel_data
{
#if defined(__BIG_ENDIAN)
uint16_t reserved0;
uint8_t valid /* flag for channel validity (cleared when a VF is identified as malicious) */;
uint8_t state /* channel state (ready / waiting for ack) */;
#elif defined(__LITTLE_ENDIAN)
uint8_t state /* channel state (ready / waiting for ack) */;
uint8_t valid /* flag for channel validity (cleared when a VF is identified as malicious) */;
uint16_t reserved0;
#endif
uint32_t reserved1;
};
/*
* State of VF-PF channel
*/
enum vf_pf_channel_state
{
VF_PF_CHANNEL_STATE_READY /* Channel is ready to accept a message from VF */,
VF_PF_CHANNEL_STATE_WAITING_FOR_ACK /* Channel waits for an ACK from PF */,
MAX_VF_PF_CHANNEL_STATE};
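/*
 * Illustrative sketch: a channel may be written only while it is both valid
 * (the VF has not been flagged malicious) and in the READY state. The helper
 * name is hypothetical.
 */
static inline int
vf_pf_channel_writable(const struct vf_pf_channel_data *ch)
{
	return ch->valid && ch->state == VF_PF_CHANNEL_STATE_READY;
}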
/*
* vif_list_rule_kind
*/
enum vif_list_rule_kind
{
VIF_LIST_RULE_SET,
VIF_LIST_RULE_GET,
VIF_LIST_RULE_CLEAR_ALL,
VIF_LIST_RULE_CLEAR_FUNC,
MAX_VIF_LIST_RULE_KIND};
/*
* zone A per-queue data
*/
struct xstorm_queue_zone_data
{
struct regpair_t reserved[4];
};
/*
* zone B per-VF data
*/
struct xstorm_vf_zone_data
{
struct regpair_t reserved;
};
/*
* Out-of-order states
*/
enum tcp_ooo_event
{
TCP_EVENT_ADD_PEN=0,
TCP_EVENT_ADD_NEW_ISLE=1,
TCP_EVENT_ADD_ISLE_RIGHT=2,
TCP_EVENT_ADD_ISLE_LEFT=3,
TCP_EVENT_JOIN=4,
TCP_EVENT_NOP=5,
MAX_TCP_OOO_EVENT};
/*
* OOO support modes
*/
enum tcp_tstorm_ooo
{
TCP_TSTORM_OOO_DROP_AND_PROC_ACK,
TCP_TSTORM_OOO_SEND_PURE_ACK,
TCP_TSTORM_OOO_SUPPORTED,
MAX_TCP_TSTORM_OOO};
/*
* toe statistics collected by the Cstorm (per port)
*/
struct cstorm_toe_stats
{
uint32_t no_tx_cqes /* counts the number of times the storm found that there are no more CQEs */;
uint32_t reserved;
};
/*
* The toe storm context of Cstorm
*/
struct cstorm_toe_st_context
{
uint32_t bds_ring_page_base_addr_lo /* Base address of next page in host bds ring */;
uint32_t bds_ring_page_base_addr_hi /* Base address of next page in host bds ring */;
uint32_t free_seq /* Sequence number of the last byte that was freed, inclusive */;
uint32_t __last_rel_to_notify /* Accumulated release size for the next Chimney completion msg */;
#if defined(__BIG_ENDIAN)
uint16_t __rss_params_ram_line /* The ram line containing the rss params */;
uint16_t bd_cons /* The BD ring consumer */;
#elif defined(__LITTLE_ENDIAN)
uint16_t bd_cons /* The BD ring consumer */;
uint16_t __rss_params_ram_line /* The ram line containing the rss params */;
#endif
uint32_t cpu_id /* CPU id for sending completion for TSS (only 8 bits are used) */;
uint32_t prev_snd_max /* last snd_max that was used for dynamic HC producer update */;
uint32_t __reserved4 /* reserved */;
};
/*
* Cstorm Toe Storm Aligned Context
*/
struct cstorm_toe_st_aligned_context
{
struct cstorm_toe_st_context context /* context */;
};
/*
* prefetched isle bd
*/
struct ustorm_toe_prefetched_isle_bd
{
uint32_t __addr_lo /* receive payload base address - Single continuous buffer (page) pointer */;
uint32_t __addr_hi /* receive payload base address - Single continuous buffer (page) pointer */;
#if defined(__BIG_ENDIAN)
uint8_t __reserved1 /* reserved */;
uint8_t __isle_num /* isle_number of the pre-fetched BD */;
uint16_t __buf_un_used /* Number of bytes left for placement in the prefetched application/GRQ BD; a size of 0 means the buffer is not valid */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __buf_un_used /* Number of bytes left for placement in the prefetched application/GRQ BD; a size of 0 means the buffer is not valid */;
uint8_t __isle_num /* isle_number of the pre-fetched BD */;
uint8_t __reserved1 /* reserved */;
#endif
};
/*
* ring params
*/
struct ustorm_toe_ring_params
{
uint32_t rq_cons_addr_lo /* A pointer to the next to consume application bd */;
uint32_t rq_cons_addr_hi /* A pointer to the next to consume application bd */;
#if defined(__BIG_ENDIAN)
uint8_t __rq_local_cons /* consumer of the local rq ring */;
uint8_t __rq_local_prod /* producer of the local rq ring */;
uint16_t rq_cons /* RQ consumer is the index of the next to consume application bd */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rq_cons /* RQ consumer is the index of the next to consume application bd */;
uint8_t __rq_local_prod /* producer of the local rq ring */;
uint8_t __rq_local_cons /* consumer of the local rq ring */;
#endif
};
/*
* prefetched bd
*/
struct ustorm_toe_prefetched_bd
{
uint32_t __addr_lo /* receive payload base address - Single continuous buffer (page) pointer */;
uint32_t __addr_hi /* receive payload base address - Single continuous buffer (page) pointer */;
#if defined(__BIG_ENDIAN)
uint16_t flags;
#define __USTORM_TOE_PREFETCHED_BD_START (0x1<<0) /* BitField flagsbd command flags this bd is the beginning of an application buffer */
#define __USTORM_TOE_PREFETCHED_BD_START_SHIFT 0
#define __USTORM_TOE_PREFETCHED_BD_END (0x1<<1) /* BitField flagsbd command flags this bd is the end of an application buffer */
#define __USTORM_TOE_PREFETCHED_BD_END_SHIFT 1
#define __USTORM_TOE_PREFETCHED_BD_NO_PUSH (0x1<<2) /* BitField flagsbd command flags this application buffer must not be partially completed */
#define __USTORM_TOE_PREFETCHED_BD_NO_PUSH_SHIFT 2
#define USTORM_TOE_PREFETCHED_BD_SPLIT (0x1<<3) /* BitField flagsbd command flags this application buffer is part of a bigger buffer and this buffer is not the last */
#define USTORM_TOE_PREFETCHED_BD_SPLIT_SHIFT 3
#define __USTORM_TOE_PREFETCHED_BD_RESERVED1 (0xFFF<<4) /* BitField flagsbd command flags reserved */
#define __USTORM_TOE_PREFETCHED_BD_RESERVED1_SHIFT 4
uint16_t __buf_un_used /* Number of bytes left for placement in the prefetched application/GRQ BD; a size of 0 means the buffer is not valid */;
#elif defined(__LITTLE_ENDIAN)
uint16_t __buf_un_used /* Number of bytes left for placement in the prefetched application/GRQ BD; a size of 0 means the buffer is not valid */;
uint16_t flags;
#define __USTORM_TOE_PREFETCHED_BD_START (0x1<<0) /* BitField flagsbd command flags this bd is the beginning of an application buffer */
#define __USTORM_TOE_PREFETCHED_BD_START_SHIFT 0
#define __USTORM_TOE_PREFETCHED_BD_END (0x1<<1) /* BitField flagsbd command flags this bd is the end of an application buffer */
#define __USTORM_TOE_PREFETCHED_BD_END_SHIFT 1
#define __USTORM_TOE_PREFETCHED_BD_NO_PUSH (0x1<<2) /* BitField flagsbd command flags this application buffer must not be partially completed */
#define __USTORM_TOE_PREFETCHED_BD_NO_PUSH_SHIFT 2
#define USTORM_TOE_PREFETCHED_BD_SPLIT (0x1<<3) /* BitField flagsbd command flags this application buffer is part of a bigger buffer and this buffer is not the last */
#define USTORM_TOE_PREFETCHED_BD_SPLIT_SHIFT 3
#define __USTORM_TOE_PREFETCHED_BD_RESERVED1 (0xFFF<<4) /* BitField flagsbd command flags reserved */
#define __USTORM_TOE_PREFETCHED_BD_RESERVED1_SHIFT 4
#endif
};
/*
* Ustorm Toe Storm Context
*/
struct ustorm_toe_st_context
{
uint32_t __pen_rq_placed /* Number of bytes that were placed in the RQ and not completed yet. */;
uint32_t pen_grq_placed_bytes /* The number of in-order bytes (peninsula) that were placed in the GRQ (excluding bytes that were already copied to RQ BDs or RQ dummy BDs) */;
#if defined(__BIG_ENDIAN)
uint8_t flags2;
#define USTORM_TOE_ST_CONTEXT_IGNORE_GRQ_PUSH (0x1<<0) /* BitField flags2various state flags we will ignore grq push unless it is ping pong test */
#define USTORM_TOE_ST_CONTEXT_IGNORE_GRQ_PUSH_SHIFT 0
#define USTORM_TOE_ST_CONTEXT_PUSH_FLAG (0x1<<1) /* BitField flags2various state flags indicates if push timer is set */
#define USTORM_TOE_ST_CONTEXT_PUSH_FLAG_SHIFT 1
#define USTORM_TOE_ST_CONTEXT_RSS_UPDATE_ENABLED (0x1<<2) /* BitField flags2various state flags indicates if RSS update is supported */
#define USTORM_TOE_ST_CONTEXT_RSS_UPDATE_ENABLED_SHIFT 2
#define USTORM_TOE_ST_CONTEXT_RESERVED0 (0x1F<<3) /* BitField flags2various state flags */
#define USTORM_TOE_ST_CONTEXT_RESERVED0_SHIFT 3
uint8_t __indirection_shift /* Offset in bits of the CPU id of this connection within the 64 bits fetched from internal memory */;
uint16_t indirection_ram_offset /* address offset in internal memory from the beginning of the table containing the CPU id of this connection (Only 12 bits are used) */;
#elif defined(__LITTLE_ENDIAN)
uint16_t indirection_ram_offset /* address offset in internal memory from the beginning of the table containing the CPU id of this connection (Only 12 bits are used) */;
uint8_t __indirection_shift /* Offset in bits of the CPU id of this connection within the 64 bits fetched from internal memory */;
uint8_t flags2;
#define USTORM_TOE_ST_CONTEXT_IGNORE_GRQ_PUSH (0x1<<0) /* BitField flags2various state flags we will ignore grq push unless it is ping pong test */
#define USTORM_TOE_ST_CONTEXT_IGNORE_GRQ_PUSH_SHIFT 0
#define USTORM_TOE_ST_CONTEXT_PUSH_FLAG (0x1<<1) /* BitField flags2various state flags indicates if push timer is set */
#define USTORM_TOE_ST_CONTEXT_PUSH_FLAG_SHIFT 1
#define USTORM_TOE_ST_CONTEXT_RSS_UPDATE_ENABLED (0x1<<2) /* BitField flags2various state flags indicates if RSS update is supported */
#define USTORM_TOE_ST_CONTEXT_RSS_UPDATE_ENABLED_SHIFT 2
#define USTORM_TOE_ST_CONTEXT_RESERVED0 (0x1F<<3) /* BitField flags2various state flags */
#define USTORM_TOE_ST_CONTEXT_RESERVED0_SHIFT 3
#endif
uint32_t __rq_available_bytes;
#if defined(__BIG_ENDIAN)
uint8_t isles_counter /* signals that dca is enabled */;
uint8_t __push_timer_state /* indicates if push timer is set */;
uint16_t rcv_indication_size /* The chip will release the current GRQ buffer to the driver when it knows that the driver has no knowledge of other GRQ payload that it can indicate and the current GRQ buffer has at least RcvIndicationSize bytes. */;
#elif defined(__LITTLE_ENDIAN)
uint16_t rcv_indication_size /* The chip will release the current GRQ buffer to the driver when it knows that the driver has no knowledge of other GRQ payload that it can indicate and the current GRQ buffer has at least RcvIndicationSize bytes. */;
uint8_t __push_timer_state /* indicates if push timer is set */;
uint8_t isles_counter /* signals that dca is enabled */;
#endif
uint32_t __min_expiration_time /* if the timer will expire before this time it will be considered as a race */;
uint32_t initial_rcv_wnd /* the maximal advertised window */;
uint32_t __bytes_cons /* the last rq_available_bytes producer that was read from host - used to know how many bytes were added */;
uint32_t __prev_consumed_grq_bytes /* the last rq_available_bytes producer that was read from host - used to know how many bytes were added */;
uint32_t prev_rcv_win_right_edge /* sequence of the last byte that can be received - used to know how many bytes were added */;
uint32_t rcv_nxt /* Receive sequence: next expected - of the right most received packet */;
struct ustorm_toe_prefetched_isle_bd __isle_bd /* prefetched bd for the isle */;
struct ustorm_toe_ring_params pen_ring_params /* peninsula ring params */;
struct ustorm_toe_prefetched_bd __pen_bd_0 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_1 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_2 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_3 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_4 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_5 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_6 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_7 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_8 /* peninsula prefetched bd for the peninsula */;
struct ustorm_toe_prefetched_bd __pen_bd_9 /* peninsula prefetched bd for the peninsula */;
uint32_t __reserved3 /* reserved */;
};
/*
* Ustorm Toe Storm Aligned Context
*/
struct ustorm_toe_st_aligned_context
{
struct ustorm_toe_st_context context /* context */;
};
/*
* TOE context region, used only in TOE
*/
struct tstorm_toe_st_context_section
{
uint32_t reserved0[3];
};
/*
* The TOE non-aggregative context of Tstorm
*/
struct tstorm_toe_st_context
{
struct tstorm_tcp_st_context_section tcp /* TCP context region, shared in TOE, RDMA and ISCSI */;
struct tstorm_toe_st_context_section toe /* TOE context region, used only in TOE */;
};
/*
* The TOE non-aggregative aligned context of Tstorm
*/
struct tstorm_toe_st_aligned_context
{
struct tstorm_toe_st_context context /* context */;
uint8_t padding[16] /* padding to 64 byte aligned */;
};
/*
* TOE context section
*/
struct xstorm_toe_context_section
{
uint32_t tx_bd_page_base_lo /* BD page base address at the host for TxBdCons */;
uint32_t tx_bd_page_base_hi /* BD page base address at the host for TxBdCons */;
#if defined(__BIG_ENDIAN)
uint16_t tx_bd_offset /* The offset within the BD */;
uint16_t tx_bd_cons /* The transmit BD cons pointer to the host ring */;
#elif defined(__LITTLE_ENDIAN)
uint16_t tx_bd_cons /* The transmit BD cons pointer to the host ring */;
uint16_t tx_bd_offset /* The offset within the BD */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t bd_prod;
uint16_t seqMismatchCnt;
#elif defined(__LITTLE_ENDIAN)
uint16_t seqMismatchCnt;
uint16_t bd_prod;
#endif
uint32_t driver_doorbell_info_ptr_lo;
uint32_t driver_doorbell_info_ptr_hi;
};
/*
* Xstorm Toe Storm Context
*/
struct xstorm_toe_st_context
{
struct xstorm_common_context_section common;
struct xstorm_toe_context_section toe;
};
/*
* Xstorm Toe Storm Aligned Context
*/
struct xstorm_toe_st_aligned_context
{
struct xstorm_toe_st_context context /* context */;
};
/*
* Ethernet connection context
*/
struct toe_context
{
struct ustorm_toe_st_aligned_context ustorm_st_context /* Ustorm storm context */;
struct tstorm_toe_st_aligned_context tstorm_st_context /* Tstorm storm context */;
struct xstorm_toe_ag_context xstorm_ag_context /* Xstorm aggregative context */;
struct tstorm_toe_ag_context tstorm_ag_context /* Tstorm aggregative context */;
struct cstorm_toe_ag_context cstorm_ag_context /* Cstorm aggregative context */;
struct ustorm_toe_ag_context ustorm_ag_context /* Ustorm aggregative context */;
struct timers_block_context timers_context /* Timers block context */;
struct xstorm_toe_st_aligned_context xstorm_st_context /* Xstorm storm context */;
struct cstorm_toe_st_aligned_context cstorm_st_context /* Cstorm storm context */;
};
/*
* ramrod data for toe protocol initiate offload ramrod (CQE)
*/
struct toe_initiate_offload_ramrod_data
{
uint32_t flags;
#define TOE_INITIATE_OFFLOAD_RAMROD_DATA_SEARCH_CONFIG_FAILED (0x1<<0) /* BitField flags error in searcher configuration */
#define TOE_INITIATE_OFFLOAD_RAMROD_DATA_SEARCH_CONFIG_FAILED_SHIFT 0
#define TOE_INITIATE_OFFLOAD_RAMROD_DATA_LICENSE_FAILURE (0x1<<1) /* BitField flags license errors */
#define TOE_INITIATE_OFFLOAD_RAMROD_DATA_LICENSE_FAILURE_SHIFT 1
#define TOE_INITIATE_OFFLOAD_RAMROD_DATA_RESERVED0 (0x3FFFFFFF<<2) /* BitField flags */
#define TOE_INITIATE_OFFLOAD_RAMROD_DATA_RESERVED0_SHIFT 2
uint32_t reserved1;
};
/*
* ramrod data for the TOE init ramrod
*/
struct toe_init_ramrod_data
{
#if defined(__BIG_ENDIAN)
uint16_t reserved1;
uint8_t reserved0;
uint8_t rss_num /* the rss num in its rqr to complete this ramrod */;
#elif defined(__LITTLE_ENDIAN)
uint8_t rss_num /* the rss num in its rqr to complete this ramrod */;
uint8_t reserved0;
uint16_t reserved1;
#endif
uint32_t reserved2;
};
/*
* next page pointer bd used in toe CQs and tx/rx bd chains
*/
struct toe_page_addr_bd
{
uint32_t addr_lo /* page pointer */;
uint32_t addr_hi /* page pointer */;
uint8_t reserved[8] /* reserved for driver use */;
};
/*
* union for ramrod data for TOE protocol (CQE) (force size of 16 bits)
*/
union toe_ramrod_data
{
struct ramrod_data general;
struct toe_initiate_offload_ramrod_data initiate_offload;
};
/*
* TOE_RX_CQES_OPCODE_RSS_UPD results
*/
enum toe_rss_update_opcode
{
TOE_RSS_UPD_QUIET,
TOE_RSS_UPD_SLEEPING,
TOE_RSS_UPD_DELAYED,
MAX_TOE_RSS_UPDATE_OPCODE};
/*
* ramrod data for the TOE RSS update ramrod
*/
struct toe_rss_update_ramrod_data
{
uint8_t indirection_table[128] /* RSS indirection table */;
#if defined(__BIG_ENDIAN)
uint16_t reserved0;
uint16_t toe_rss_bitmap /* The bitmap specifies which toe rss chains to complete the ramrod on (a bitmap of 0 is not a valid option). The port is gleaned from the CID */;
#elif defined(__LITTLE_ENDIAN)
uint16_t toe_rss_bitmap /* The bitmap specifies which toe rss chains to complete the ramrod on (a bitmap of 0 is not a valid option). The port is gleaned from the CID */;
uint16_t reserved0;
#endif
uint32_t reserved1;
};
/*
* The toe Rx Buffer Descriptor
*/
struct toe_rx_bd
{
uint32_t addr_lo /* receive payload base address - Single continuous buffer (page) pointer */;
uint32_t addr_hi /* receive payload base address - Single continuous buffer (page) pointer */;
#if defined(__BIG_ENDIAN)
uint16_t flags;
#define TOE_RX_BD_START (0x1<<0) /* BitField flagsbd command flags this bd is the beginning of an application buffer */
#define TOE_RX_BD_START_SHIFT 0
#define TOE_RX_BD_END (0x1<<1) /* BitField flagsbd command flags this bd is the end of an application buffer */
#define TOE_RX_BD_END_SHIFT 1
#define TOE_RX_BD_NO_PUSH (0x1<<2) /* BitField flagsbd command flags this application buffer must not be partially completed */
#define TOE_RX_BD_NO_PUSH_SHIFT 2
#define TOE_RX_BD_SPLIT (0x1<<3) /* BitField flagsbd command flags this application buffer is part of a bigger buffer and this buffer is not the last */
#define TOE_RX_BD_SPLIT_SHIFT 3
#define TOE_RX_BD_RESERVED1 (0xFFF<<4) /* BitField flagsbd command flags reserved */
#define TOE_RX_BD_RESERVED1_SHIFT 4
uint16_t size /* Size of the buffer pointed by the BD */;
#elif defined(__LITTLE_ENDIAN)
uint16_t size /* Size of the buffer pointed by the BD */;
uint16_t flags;
#define TOE_RX_BD_START (0x1<<0) /* BitField flagsbd command flags this bd is the beginning of an application buffer */
#define TOE_RX_BD_START_SHIFT 0
#define TOE_RX_BD_END (0x1<<1) /* BitField flagsbd command flags this bd is the end of an application buffer */
#define TOE_RX_BD_END_SHIFT 1
#define TOE_RX_BD_NO_PUSH (0x1<<2) /* BitField flagsbd command flags this application buffer must not be partially completed */
#define TOE_RX_BD_NO_PUSH_SHIFT 2
#define TOE_RX_BD_SPLIT (0x1<<3) /* BitField flagsbd command flags this application buffer is part of a bigger buffer and this buffer is not the last */
#define TOE_RX_BD_SPLIT_SHIFT 3
#define TOE_RX_BD_RESERVED1 (0xFFF<<4) /* BitField flagsbd command flags reserved */
#define TOE_RX_BD_RESERVED1_SHIFT 4
#endif
uint32_t dbg_bytes_prod /* a cyclic parameter that counts how many bytes were available for placement up to, but not including, this bd */;
};
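/*
 * Illustrative sketch: posting one RX BD of an application buffer. The first
 * BD of the buffer carries TOE_RX_BD_START and the last carries TOE_RX_BD_END;
 * toe_rx_bd_fill() is a hypothetical helper.
 */
static inline void
toe_rx_bd_fill(struct toe_rx_bd *bd, uint64_t phys_addr, uint16_t len,
    uint16_t bd_flags)
{
	bd->addr_lo = (uint32_t)phys_addr;
	bd->addr_hi = (uint32_t)(phys_addr >> 32);
	bd->size = len;
	bd->flags = bd_flags; /* e.g. TOE_RX_BD_START on the first BD */
}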
/*
* ramrod data for toe protocol General rx completion
*/
struct toe_rx_completion_ramrod_data
{
#if defined(__BIG_ENDIAN)
uint16_t reserved0;
uint16_t hash_value /* information for ustorm to use in completion */;
#elif defined(__LITTLE_ENDIAN)
uint16_t hash_value /* information for ustorm to use in completion */;
uint16_t reserved0;
#endif
uint32_t reserved1;
};
/*
* OOO params in union for TOE rx cqe data
*/
struct toe_rx_cqe_ooo_params
{
uint32_t ooo_params;
#define TOE_RX_CQE_OOO_PARAMS_NBYTES (0xFFFFFF<<0) /* BitField ooo_paramsdata params for OOO cqe connection nbytes */
#define TOE_RX_CQE_OOO_PARAMS_NBYTES_SHIFT 0
#define TOE_RX_CQE_OOO_PARAMS_ISLE_NUM (0xFF<<24) /* BitField ooo_paramsdata params for OOO cqe isle number for OOO completions */
#define TOE_RX_CQE_OOO_PARAMS_ISLE_NUM_SHIFT 24
};
/*
* in order params in union for TOE rx cqe data
*/
struct toe_rx_cqe_in_order_params
{
uint32_t in_order_params;
#define TOE_RX_CQE_IN_ORDER_PARAMS_NBYTES (0xFFFFFFFF<<0) /* BitField in_order_paramsdata params for in order cqe connection nbytes */
#define TOE_RX_CQE_IN_ORDER_PARAMS_NBYTES_SHIFT 0
};
/*
* union for TOE rx cqe data
*/
union toe_rx_cqe_data_union
{
struct toe_rx_cqe_ooo_params ooo_params /* data params for OOO cqe - nbytes and isle number */;
struct toe_rx_cqe_in_order_params in_order_params /* data params for in order cqe - nbytes */;
uint32_t raw_data /* global data param */;
};
/*
* The toe Rx cq element
*/
struct toe_rx_cqe
{
uint32_t params1;
#define TOE_RX_CQE_CID (0xFFFFFF<<0) /* BitField params1completion cid and opcode connection id */
#define TOE_RX_CQE_CID_SHIFT 0
#define TOE_RX_CQE_COMPLETION_OPCODE (0xFF<<24) /* BitField params1completion cid and opcode completion opcode - use enum toe_rx_cqe_type or toe_rss_update_opcode */
#define TOE_RX_CQE_COMPLETION_OPCODE_SHIFT 24
union toe_rx_cqe_data_union data /* completion cid and opcode */;
};
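/*
 * Illustrative sketch: unpacking the connection id and completion opcode
 * packed into params1, using the mask/shift macros above. Helper names are
 * hypothetical.
 */
static inline uint32_t
toe_rx_cqe_cid(const struct toe_rx_cqe *cqe)
{
	return (cqe->params1 & TOE_RX_CQE_CID) >> TOE_RX_CQE_CID_SHIFT;
}

static inline uint8_t
toe_rx_cqe_opcode(const struct toe_rx_cqe *cqe)
{
	return (uint8_t)((cqe->params1 & TOE_RX_CQE_COMPLETION_OPCODE) >>
	    TOE_RX_CQE_COMPLETION_OPCODE_SHIFT);
}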
/*
* toe rx doorbell data in host memory
*/
struct toe_rx_db_data
{
uint32_t rcv_win_right_edge /* sequence of the last byte that can be received */;
uint32_t bytes_prod /* cyclic counter of posted bytes */;
#if defined(__BIG_ENDIAN)
uint8_t reserved1 /* reserved */;
uint8_t flags;
#define TOE_RX_DB_DATA_IGNORE_WND_UPDATES (0x1<<0) /* BitField flags ustorm ignores window updates when this flag is set */
#define TOE_RX_DB_DATA_IGNORE_WND_UPDATES_SHIFT 0
#define TOE_RX_DB_DATA_PARTIAL_FILLED_BUF (0x1<<1) /* BitField flags indicates if to set push timer due to partially filled receive request after offload */
#define TOE_RX_DB_DATA_PARTIAL_FILLED_BUF_SHIFT 1
#define TOE_RX_DB_DATA_RESERVED0 (0x3F<<2) /* BitField flags */
#define TOE_RX_DB_DATA_RESERVED0_SHIFT 2
uint16_t bds_prod /* cyclic counter of bds to post */;
#elif defined(__LITTLE_ENDIAN)
uint16_t bds_prod /* cyclic counter of bds to post */;
uint8_t flags;
#define TOE_RX_DB_DATA_IGNORE_WND_UPDATES (0x1<<0) /* BitField flags ustorm ignores window updates when this flag is set */
#define TOE_RX_DB_DATA_IGNORE_WND_UPDATES_SHIFT 0
#define TOE_RX_DB_DATA_PARTIAL_FILLED_BUF (0x1<<1) /* BitField flags indicates if to set push timer due to partially filled receive request after offload */
#define TOE_RX_DB_DATA_PARTIAL_FILLED_BUF_SHIFT 1
#define TOE_RX_DB_DATA_RESERVED0 (0x3F<<2) /* BitField flags */
#define TOE_RX_DB_DATA_RESERVED0_SHIFT 2
uint8_t reserved1 /* reserved */;
#endif
uint32_t consumed_grq_bytes /* cyclic counter of consumed grq bytes */;
};
/*
* The toe Rx Generic Buffer Descriptor
*/
struct toe_rx_grq_bd
{
uint32_t addr_lo /* receive payload base address - Single continuous buffer (page) pointer */;
uint32_t addr_hi /* receive payload base address - Single continuous buffer (page) pointer */;
};
/*
* toe slow path element
*/
union toe_spe_data
{
uint8_t protocol_data[8] /* to fix this structure size to 8 bytes */;
struct regpair_t phys_addr /* used in initiate offload ramrod */;
struct toe_rx_completion_ramrod_data rx_completion /* used in all ramrods that have a general rx completion */;
struct toe_init_ramrod_data toe_init /* used in toe init ramrod */;
};
/*
* toe slow path element
*/
struct toe_spe
{
struct spe_hdr_t hdr /* common data for all protocols */;
union toe_spe_data toe_data /* data specific to toe protocol */;
};
/*
* TOE slow path opcodes (opcode 0 is illegal) - includes commands and completions
*/
enum toe_sq_opcode_type
{
CMP_OPCODE_TOE_GA=1,
CMP_OPCODE_TOE_GR=2,
CMP_OPCODE_TOE_GNI=3,
CMP_OPCODE_TOE_GAIR=4,
CMP_OPCODE_TOE_GAIL=5,
CMP_OPCODE_TOE_GRI=6,
CMP_OPCODE_TOE_GJ=7,
CMP_OPCODE_TOE_DGI=8,
CMP_OPCODE_TOE_CMP=9,
CMP_OPCODE_TOE_REL=10,
CMP_OPCODE_TOE_SKP=11,
CMP_OPCODE_TOE_URG=12,
CMP_OPCODE_TOE_RT_TO=13,
CMP_OPCODE_TOE_KA_TO=14,
CMP_OPCODE_TOE_MAX_RT=15,
CMP_OPCODE_TOE_DBT_RE=16,
CMP_OPCODE_TOE_SYN=17,
CMP_OPCODE_TOE_OPT_ERR=18,
CMP_OPCODE_TOE_FW2_TO=19,
CMP_OPCODE_TOE_2WY_CLS=20,
CMP_OPCODE_TOE_TX_CMP=21,
RAMROD_OPCODE_TOE_INIT=32,
RAMROD_OPCODE_TOE_RSS_UPDATE=33,
RAMROD_OPCODE_TOE_TERMINATE_RING=34,
CMP_OPCODE_TOE_RST_RCV=48,
CMP_OPCODE_TOE_FIN_RCV=49,
CMP_OPCODE_TOE_FIN_UPL=50,
CMP_OPCODE_TOE_SRC_ERR=51,
CMP_OPCODE_TOE_LCN_ERR=52,
RAMROD_OPCODE_TOE_INITIATE_OFFLOAD=80,
RAMROD_OPCODE_TOE_SEARCHER_DELETE=81,
RAMROD_OPCODE_TOE_TERMINATE=82,
RAMROD_OPCODE_TOE_QUERY=83,
RAMROD_OPCODE_TOE_RESET_SEND=84,
RAMROD_OPCODE_TOE_INVALIDATE=85,
RAMROD_OPCODE_TOE_EMPTY_RAMROD=86,
RAMROD_OPCODE_TOE_UPDATE=87,
MAX_TOE_SQ_OPCODE_TYPE};
/*
* Toe statistics collected by the Xstorm (per port)
*/
struct xstorm_toe_stats_section
{
uint32_t tcp_out_segments;
uint32_t tcp_retransmitted_segments;
struct regpair_t ip_out_octets;
uint32_t ip_out_requests;
uint32_t reserved;
};
/*
* Toe statistics collected by the Xstorm (per port)
*/
struct xstorm_toe_stats
{
struct xstorm_toe_stats_section statistics[2] /* 0 - ipv4 , 1 - ipv6 */;
uint32_t reserved[2];
};
/*
* Toe statistics collected by the Tstorm (per port)
*/
struct tstorm_toe_stats_section
{
uint32_t ip_in_receives;
uint32_t ip_in_delivers;
struct regpair_t ip_in_octets;
uint32_t tcp_in_errors /* all discards except discards already counted by Ipv4 stats */;
uint32_t ip_in_header_errors /* IP checksum */;
uint32_t ip_in_discards /* no resources */;
uint32_t ip_in_truncated_packets;
};
/*
* Toe statistics collected by the Tstorm (per port)
*/
struct tstorm_toe_stats
{
struct tstorm_toe_stats_section statistics[2] /* 0 - ipv4 , 1 - ipv6 */;
uint32_t reserved[2];
};
/*
* TOE statistics query structure for the stats query ramrod
*/
struct toe_stats_query
{
struct xstorm_toe_stats xstorm_toe /* Xstorm Toe statistics structure */;
struct tstorm_toe_stats tstorm_toe /* Tstorm Toe statistics structure */;
struct cstorm_toe_stats cstorm_toe /* Cstorm Toe statistics structure */;
};
/*
* The toe Tx Buffer Descriptor
*/
struct toe_tx_bd
{
uint32_t addr_lo /* transmit payload base address - Single continuous buffer (page) pointer */;
uint32_t addr_hi /* transmit payload base address - Single continuous buffer (page) pointer */;
#if defined(__BIG_ENDIAN)
uint16_t flags;
#define TOE_TX_BD_PUSH (0x1<<0) /* BitField flagsbd command flags End of data flag */
#define TOE_TX_BD_PUSH_SHIFT 0
#define TOE_TX_BD_NOTIFY (0x1<<1) /* BitField flagsbd command flags notify driver with released data bytes including this bd */
#define TOE_TX_BD_NOTIFY_SHIFT 1
#define TOE_TX_BD_FIN (0x1<<2) /* BitField flagsbd command flags send fin request */
#define TOE_TX_BD_FIN_SHIFT 2
#define TOE_TX_BD_LARGE_IO (0x1<<3) /* BitField flagsbd command flags this bd is part of an application buffer larger than mss */
#define TOE_TX_BD_LARGE_IO_SHIFT 3
#define TOE_TX_BD_RESERVED1 (0xFFF<<4) /* BitField flagsbd command flags reserved */
#define TOE_TX_BD_RESERVED1_SHIFT 4
uint16_t size /* Size of the data represented by the BD */;
#elif defined(__LITTLE_ENDIAN)
uint16_t size /* Size of the data represented by the BD */;
uint16_t flags;
#define TOE_TX_BD_PUSH (0x1<<0) /* BitField flagsbd command flags End of data flag */
#define TOE_TX_BD_PUSH_SHIFT 0
#define TOE_TX_BD_NOTIFY (0x1<<1) /* BitField flagsbd command flags notify driver with released data bytes including this bd */
#define TOE_TX_BD_NOTIFY_SHIFT 1
#define TOE_TX_BD_FIN (0x1<<2) /* BitField flagsbd command flags send fin request */
#define TOE_TX_BD_FIN_SHIFT 2
#define TOE_TX_BD_LARGE_IO (0x1<<3) /* BitField flagsbd command flags this bd is part of an application buffer larger than mss */
#define TOE_TX_BD_LARGE_IO_SHIFT 3
#define TOE_TX_BD_RESERVED1 (0xFFF<<4) /* BitField flagsbd command flags reserved */
#define TOE_TX_BD_RESERVED1_SHIFT 4
#endif
uint32_t nextBdStartSeq;
};
/*
* The toe Tx cqe
*/
struct toe_tx_cqe
{
uint32_t params;
#define TOE_TX_CQE_CID (0xFFFFFF<<0) /* BitField paramscompletion cid and opcode connection id */
#define TOE_TX_CQE_CID_SHIFT 0
#define TOE_TX_CQE_COMPLETION_OPCODE (0xFF<<24) /* BitField paramscompletion cid and opcode completion opcode (use enum toe_tx_cqe_type) */
#define TOE_TX_CQE_COMPLETION_OPCODE_SHIFT 24
uint32_t len /* the more2release in Bytes */;
};
/*
* toe tx doorbell data in host memory
*/
struct toe_tx_db_data
{
uint32_t bytes_prod_seq /* greatest sequence the chip can transmit */;
#if defined(__BIG_ENDIAN)
uint16_t flags;
#define TOE_TX_DB_DATA_FIN (0x1<<0) /* BitField flags flag for post FIN request */
#define TOE_TX_DB_DATA_FIN_SHIFT 0
#define TOE_TX_DB_DATA_FLUSH (0x1<<1) /* BitField flags flag for last doorbell - flushing doorbell queue */
#define TOE_TX_DB_DATA_FLUSH_SHIFT 1
#define TOE_TX_DB_DATA_RESERVE (0x3FFF<<2) /* BitField flags */
#define TOE_TX_DB_DATA_RESERVE_SHIFT 2
uint16_t bds_prod /* cyclic counter of posted bds */;
#elif defined(__LITTLE_ENDIAN)
uint16_t bds_prod /* cyclic counter of posted bds */;
uint16_t flags;
#define TOE_TX_DB_DATA_FIN (0x1<<0) /* BitField flags flag for post FIN request */
#define TOE_TX_DB_DATA_FIN_SHIFT 0
#define TOE_TX_DB_DATA_FLUSH (0x1<<1) /* BitField flags flag for last doorbell - flushing doorbell queue */
#define TOE_TX_DB_DATA_FLUSH_SHIFT 1
#define TOE_TX_DB_DATA_RESERVE (0x3FFF<<2) /* BitField flags */
#define TOE_TX_DB_DATA_RESERVE_SHIFT 2
#endif
};
/*
* struct used in update ramrod. Driver notifies chip which fields have changed via the bitmap $$KEEP_ENDIANNESS$$
*/
struct toe_update_ramrod_cached_params
{
uint16_t changed_fields;
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_DEST_ADDR_CHANGED (0x1<<0) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_DEST_ADDR_CHANGED_SHIFT 0
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_MSS_CHANGED (0x1<<1) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_MSS_CHANGED_SHIFT 1
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_KA_TIMEOUT_CHANGED (0x1<<2) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_KA_TIMEOUT_CHANGED_SHIFT 2
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_KA_INTERVAL_CHANGED (0x1<<3) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_KA_INTERVAL_CHANGED_SHIFT 3
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_MAX_RT_CHANGED (0x1<<4) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_MAX_RT_CHANGED_SHIFT 4
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_RCV_INDICATION_SIZE_CHANGED (0x1<<5) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_RCV_INDICATION_SIZE_CHANGED_SHIFT 5
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_FLOW_LABEL_CHANGED (0x1<<6) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_FLOW_LABEL_CHANGED_SHIFT 6
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_ENABLE_KEEPALIVE_CHANGED (0x1<<7) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_ENABLE_KEEPALIVE_CHANGED_SHIFT 7
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_ENABLE_NAGLE_CHANGED (0x1<<8) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_ENABLE_NAGLE_CHANGED_SHIFT 8
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_TTL_CHANGED (0x1<<9) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_TTL_CHANGED_SHIFT 9
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_HOP_LIMIT_CHANGED (0x1<<10) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_HOP_LIMIT_CHANGED_SHIFT 10
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_TOS_CHANGED (0x1<<11) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_TOS_CHANGED_SHIFT 11
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_TRAFFIC_CLASS_CHANGED (0x1<<12) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_TRAFFIC_CLASS_CHANGED_SHIFT 12
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_KA_MAX_PROBE_COUNT_CHANGED (0x1<<13) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_KA_MAX_PROBE_COUNT_CHANGED_SHIFT 13
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_USER_PRIORITY_CHANGED (0x1<<14) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_USER_PRIORITY_CHANGED_SHIFT 14
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_INITIAL_RCV_WND_CHANGED (0x1<<15) /* BitField changed_fieldsbitmap for indicating changed fields */
#define TOE_UPDATE_RAMROD_CACHED_PARAMS_INITIAL_RCV_WND_CHANGED_SHIFT 15
uint8_t ka_restart /* Only 1 bit is used */;
uint8_t retransmit_restart /* Only 1 bit is used */;
uint8_t dest_addr[6];
uint16_t mss;
uint32_t ka_timeout;
uint32_t ka_interval;
uint32_t max_rt;
uint32_t flow_label /* Only 20 bits are used */;
uint16_t rcv_indication_size;
uint8_t enable_keepalive /* Only 1 bit is used */;
uint8_t enable_nagle /* Only 1 bit is used */;
uint8_t ttl;
uint8_t hop_limit;
uint8_t tos;
uint8_t traffic_class;
uint8_t ka_max_probe_count;
uint8_t user_priority /* Only 4 bits are used */;
uint16_t reserved2;
uint32_t initial_rcv_wnd;
uint32_t reserved1;
};
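/*
 * Illustrative sketch: the update ramrod carries every cached parameter, but
 * firmware applies only those whose bit is set in changed_fields. A
 * hypothetical helper changing just the MSS:
 */
static inline void
toe_update_set_mss(struct toe_update_ramrod_cached_params *p, uint16_t new_mss)
{
	p->mss = new_mss;
	p->changed_fields |= TOE_UPDATE_RAMROD_CACHED_PARAMS_MSS_CHANGED;
}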
/*
* rx rings pause data for E1h only
*/
struct ustorm_toe_rx_pause_data_e1h
{
#if defined(__BIG_ENDIAN)
uint16_t grq_thr_low /* number of remaining grqes below which we send a pause message */;
uint16_t cq_thr_low /* number of remaining cqes below which we send a pause message */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_thr_low /* number of remaining cqes below which we send a pause message */;
uint16_t grq_thr_low /* number of remaining grqes below which we send a pause message */;
#endif
#if defined(__BIG_ENDIAN)
uint16_t grq_thr_high /* number of remaining grqes above which we send an un-pause message */;
uint16_t cq_thr_high /* number of remaining cqes above which we send an un-pause message */;
#elif defined(__LITTLE_ENDIAN)
uint16_t cq_thr_high /* number of remaining cqes above which we send an un-pause message */;
uint16_t grq_thr_high /* number of remaining grqes above which we send an un-pause message */;
#endif
};
#endif /* ECORE_HSI_H */
Index: head/sys/dev/drm2/i915/intel_crt.c
===================================================================
--- head/sys/dev/drm2/i915/intel_crt.c (revision 300049)
+++ head/sys/dev/drm2/i915/intel_crt.c (revision 300050)
@@ -1,807 +1,807 @@
/*
* Copyright © 2006-2007 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* Authors:
* Eric Anholt <eric@anholt.net>
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/drm2/drmP.h>
#include <dev/drm2/drm_crtc.h>
#include <dev/drm2/drm_crtc_helper.h>
#include <dev/drm2/drm_edid.h>
#include <dev/drm2/i915/intel_drv.h>
#include <dev/drm2/i915/i915_drm.h>
#include <dev/drm2/i915/i915_drv.h>
/* Here's the desired hotplug mode */
#define ADPA_HOTPLUG_BITS (ADPA_CRT_HOTPLUG_PERIOD_128 | \
ADPA_CRT_HOTPLUG_WARMUP_10MS | \
ADPA_CRT_HOTPLUG_SAMPLE_4S | \
ADPA_CRT_HOTPLUG_VOLTAGE_50 | \
ADPA_CRT_HOTPLUG_VOLREF_325MV | \
ADPA_CRT_HOTPLUG_ENABLE)
struct intel_crt {
struct intel_encoder base;
/* DPMS state is stored in the connector, which we need in the
* encoder's enable/disable callbacks */
struct intel_connector *connector;
bool force_hotplug_required;
u32 adpa_reg;
};
static struct intel_crt *intel_attached_crt(struct drm_connector *connector)
{
return container_of(intel_attached_encoder(connector),
struct intel_crt, base);
}
static struct intel_crt *intel_encoder_to_crt(struct intel_encoder *encoder)
{
return container_of(encoder, struct intel_crt, base);
}
static bool intel_crt_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crt *crt = intel_encoder_to_crt(encoder);
u32 tmp;
tmp = I915_READ(crt->adpa_reg);
if (!(tmp & ADPA_DAC_ENABLE))
return false;
if (HAS_PCH_CPT(dev))
*pipe = PORT_TO_PIPE_CPT(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
return true;
}
/* Note: The caller is required to filter out dpms modes not supported by the
* platform. */
static void intel_crt_set_dpms(struct intel_encoder *encoder, int mode)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crt *crt = intel_encoder_to_crt(encoder);
u32 temp;
temp = I915_READ(crt->adpa_reg);
temp &= ~(ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE);
temp &= ~ADPA_DAC_ENABLE;
switch (mode) {
case DRM_MODE_DPMS_ON:
temp |= ADPA_DAC_ENABLE;
break;
case DRM_MODE_DPMS_STANDBY:
temp |= ADPA_DAC_ENABLE | ADPA_HSYNC_CNTL_DISABLE;
break;
case DRM_MODE_DPMS_SUSPEND:
temp |= ADPA_DAC_ENABLE | ADPA_VSYNC_CNTL_DISABLE;
break;
case DRM_MODE_DPMS_OFF:
temp |= ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE;
break;
}
I915_WRITE(crt->adpa_reg, temp);
}
static void intel_disable_crt(struct intel_encoder *encoder)
{
intel_crt_set_dpms(encoder, DRM_MODE_DPMS_OFF);
}
static void intel_enable_crt(struct intel_encoder *encoder)
{
struct intel_crt *crt = intel_encoder_to_crt(encoder);
intel_crt_set_dpms(encoder, crt->connector->base.dpms);
}
static void intel_crt_dpms(struct drm_connector *connector, int mode)
{
struct drm_device *dev = connector->dev;
struct intel_encoder *encoder = intel_attached_encoder(connector);
struct drm_crtc *crtc;
int old_dpms;
/* PCH platforms and VLV only support on/off. */
if (INTEL_INFO(dev)->gen >= 5 && mode != DRM_MODE_DPMS_ON)
mode = DRM_MODE_DPMS_OFF;
if (mode == connector->dpms)
return;
old_dpms = connector->dpms;
connector->dpms = mode;
/* Only need to change hw state when actually enabled */
crtc = encoder->base.crtc;
if (!crtc) {
encoder->connectors_active = false;
return;
}
/* We need the pipe to run for anything but OFF. */
if (mode == DRM_MODE_DPMS_OFF)
encoder->connectors_active = false;
else
encoder->connectors_active = true;
if (mode < old_dpms) {
/* From off to on, enable the pipe first. */
intel_crtc_update_dpms(crtc);
intel_crt_set_dpms(encoder, mode);
} else {
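/* From on to off, set the encoder state first, then the pipe. */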
intel_crt_set_dpms(encoder, mode);
intel_crtc_update_dpms(crtc);
}
intel_modeset_check_state(connector->dev);
}
static int intel_crt_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
int max_clock = 0;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (mode->clock < 25000)
return MODE_CLOCK_LOW;
if (IS_GEN2(dev))
max_clock = 350000;
else
max_clock = 400000;
if (mode->clock > max_clock)
return MODE_CLOCK_HIGH;
/* The FDI receiver on LPT only supports 8bpc and only has 2 lanes. */
if (HAS_PCH_LPT(dev) &&
(ironlake_get_lanes_required(mode->clock, 270000, 24) > 2))
return MODE_CLOCK_HIGH;
return MODE_OK;
}
static bool intel_crt_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
return true;
}
static void intel_crt_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = encoder->dev;
struct drm_crtc *crtc = encoder->crtc;
struct intel_crt *crt =
intel_encoder_to_crt(to_intel_encoder(encoder));
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_i915_private *dev_priv = dev->dev_private;
u32 adpa;
if (HAS_PCH_SPLIT(dev))
adpa = ADPA_HOTPLUG_BITS;
else
adpa = 0;
if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
adpa |= ADPA_HSYNC_ACTIVE_HIGH;
if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
adpa |= ADPA_VSYNC_ACTIVE_HIGH;
/* For CPT allow 3 pipe config, for others just use A or B */
if (HAS_PCH_LPT(dev))
; /* Those bits don't exist here */
else if (HAS_PCH_CPT(dev))
adpa |= PORT_TRANS_SEL_CPT(intel_crtc->pipe);
else if (intel_crtc->pipe == 0)
adpa |= ADPA_PIPE_A_SELECT;
else
adpa |= ADPA_PIPE_B_SELECT;
if (!HAS_PCH_SPLIT(dev))
I915_WRITE(BCLRPAT(intel_crtc->pipe), 0);
I915_WRITE(crt->adpa_reg, adpa);
}
static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct intel_crt *crt = intel_attached_crt(connector);
struct drm_i915_private *dev_priv = dev->dev_private;
u32 adpa;
bool ret;
/* The first time through, trigger an explicit detection cycle */
if (crt->force_hotplug_required) {
bool turn_off_dac = HAS_PCH_SPLIT(dev);
u32 save_adpa;
crt->force_hotplug_required = 0;
save_adpa = adpa = I915_READ(PCH_ADPA);
DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);
adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
if (turn_off_dac)
adpa &= ~ADPA_DAC_ENABLE;
I915_WRITE(PCH_ADPA, adpa);
if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
1000))
DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
if (turn_off_dac) {
I915_WRITE(PCH_ADPA, save_adpa);
POSTING_READ(PCH_ADPA);
}
}
/* Check the status to see if both blue and green are on now */
adpa = I915_READ(PCH_ADPA);
if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
ret = true;
else
ret = false;
DRM_DEBUG_KMS("ironlake hotplug adpa=0x%x, result %d\n", adpa, ret);
return ret;
}
static bool valleyview_crt_detect_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 adpa;
bool ret;
u32 save_adpa;
save_adpa = adpa = I915_READ(ADPA);
DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);
adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
I915_WRITE(ADPA, adpa);
if (wait_for((I915_READ(ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
1000)) {
DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
I915_WRITE(ADPA, save_adpa);
}
/* Check the status to see if both blue and green are on now */
adpa = I915_READ(ADPA);
if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
ret = true;
else
ret = false;
DRM_DEBUG_KMS("valleyview hotplug adpa=0x%x, result %d\n", adpa, ret);
/* FIXME: debug force function and remove */
ret = true;
return ret;
}
/**
* Uses CRT_HOTPLUG_EN and CRT_HOTPLUG_STAT to detect CRT presence.
*
* Not for i915G/i915GM
*
* \return true if CRT is connected.
* \return false if CRT is disconnected.
*/
static bool intel_crt_detect_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 hotplug_en, orig, stat;
bool ret = false;
int i, tries = 0;
if (HAS_PCH_SPLIT(dev))
return intel_ironlake_crt_detect_hotplug(connector);
if (IS_VALLEYVIEW(dev))
return valleyview_crt_detect_hotplug(connector);
/*
* On 4-series desktop chips, the CRT detect sequence needs to be done twice
* to get a reliable result.
*/
if (IS_G4X(dev) && !IS_GM45(dev))
tries = 2;
else
tries = 1;
hotplug_en = orig = I915_READ(PORT_HOTPLUG_EN);
hotplug_en |= CRT_HOTPLUG_FORCE_DETECT;
for (i = 0; i < tries ; i++) {
/* turn on the FORCE_DETECT */
I915_WRITE(PORT_HOTPLUG_EN, hotplug_en);
/* wait for FORCE_DETECT to go off */
if (wait_for((I915_READ(PORT_HOTPLUG_EN) &
CRT_HOTPLUG_FORCE_DETECT) == 0,
1000))
DRM_DEBUG_KMS("timed out waiting for FORCE_DETECT to go off");
}
stat = I915_READ(PORT_HOTPLUG_STAT);
if ((stat & CRT_HOTPLUG_MONITOR_MASK) != CRT_HOTPLUG_MONITOR_NONE)
ret = true;
/* clear the interrupt we just generated, if any */
I915_WRITE(PORT_HOTPLUG_STAT, CRT_HOTPLUG_INT_STATUS);
/* and put the bits back */
I915_WRITE(PORT_HOTPLUG_EN, orig);
return ret;
}
static struct edid *intel_crt_get_edid(struct drm_connector *connector,
device_t i2c)
{
struct edid *edid;
edid = drm_get_edid(connector, i2c);
if (!edid && !intel_gmbus_is_forced_bit(i2c)) {
DRM_DEBUG_KMS("CRT GMBUS EDID read failed, retry using GPIO bit-banging\n");
intel_gmbus_force_bit(i2c, true);
edid = drm_get_edid(connector, i2c);
intel_gmbus_force_bit(i2c, false);
}
return edid;
}
/* local version of intel_ddc_get_modes() to use intel_crt_get_edid() */
static int intel_crt_ddc_get_modes(struct drm_connector *connector,
device_t adapter)
{
struct edid *edid;
int ret;
edid = intel_crt_get_edid(connector, adapter);
if (!edid)
return 0;
ret = intel_connector_update_modes(connector, edid);
free(edid, DRM_MEM_KMS);
return ret;
}
static bool intel_crt_detect_ddc(struct drm_connector *connector)
{
struct intel_crt *crt = intel_attached_crt(connector);
struct drm_i915_private *dev_priv = crt->base.base.dev->dev_private;
struct edid *edid;
device_t i2c;
bool res = false;
BUG_ON(crt->base.type != INTEL_OUTPUT_ANALOG);
i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->crt_ddc_pin);
edid = intel_crt_get_edid(connector, i2c);
if (edid) {
bool is_digital = edid->input & DRM_EDID_INPUT_DIGITAL;
/*
* This may be a DVI-I connector with a shared DDC
* link between analog and digital outputs, so we
* have to check the EDID input spec of the attached device.
*/
if (!is_digital) {
DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n");
res = true;
goto out;
}
DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [EDID reports a digital panel]\n");
} else {
DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [no valid EDID found]\n");
}
out:
free(edid, DRM_MEM_KMS);
return res;
}
static enum drm_connector_status
intel_crt_load_detect(struct intel_crt *crt)
{
struct drm_device *dev = crt->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t pipe = to_intel_crtc(crt->base.base.crtc)->pipe;
uint32_t save_bclrpat;
uint32_t save_vtotal;
uint32_t vtotal, vactive;
uint32_t vsample;
uint32_t vblank, vblank_start, vblank_end;
uint32_t dsl;
uint32_t bclrpat_reg;
uint32_t vtotal_reg;
uint32_t vblank_reg;
uint32_t vsync_reg;
uint32_t pipeconf_reg;
uint32_t pipe_dsl_reg;
uint8_t st00;
enum drm_connector_status status;
DRM_DEBUG_KMS("starting load-detect on CRT\n");
bclrpat_reg = BCLRPAT(pipe);
vtotal_reg = VTOTAL(pipe);
vblank_reg = VBLANK(pipe);
vsync_reg = VSYNC(pipe);
pipeconf_reg = PIPECONF(pipe);
pipe_dsl_reg = PIPEDSL(pipe);
save_bclrpat = I915_READ(bclrpat_reg);
save_vtotal = I915_READ(vtotal_reg);
vblank = I915_READ(vblank_reg);
vtotal = ((save_vtotal >> 16) & 0xfff) + 1;
vactive = (save_vtotal & 0x7ff) + 1;
vblank_start = (vblank & 0xfff) + 1;
vblank_end = ((vblank >> 16) & 0xfff) + 1;
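/* Each timing field is stored by the hardware as (value - 1), hence
* the +1 adjustments above. */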
/* Set the border color to purple. */
I915_WRITE(bclrpat_reg, 0x500050);
if (!IS_GEN2(dev)) {
uint32_t pipeconf = I915_READ(pipeconf_reg);
I915_WRITE(pipeconf_reg, pipeconf | PIPECONF_FORCE_BORDER);
POSTING_READ(pipeconf_reg);
/* Wait for the next vblank to substitute
* the border color for the color info */
intel_wait_for_vblank(dev, pipe);
st00 = I915_READ8(VGA_MSR_WRITE);
status = ((st00 & (1 << 4)) != 0) ?
connector_status_connected :
connector_status_disconnected;
I915_WRITE(pipeconf_reg, pipeconf);
} else {
bool restore_vblank = false;
int count, detect;
/*
* If there isn't any border, add some.
* Yes, this will flicker
*/
if (vblank_start <= vactive && vblank_end >= vtotal) {
uint32_t vsync = I915_READ(vsync_reg);
uint32_t vsync_start = (vsync & 0xffff) + 1;
vblank_start = vsync_start;
I915_WRITE(vblank_reg,
(vblank_start - 1) |
((vblank_end - 1) << 16));
restore_vblank = true;
}
/* sample in the vertical border, selecting the larger one */
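/* The two border regions are [vactive, vblank_start) and
* [vblank_end, vtotal); vsample is the midpoint of the taller one,
* so the sampled scanline sits well inside a border. */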
if (vblank_start - vactive >= vtotal - vblank_end)
vsample = (vblank_start + vactive) >> 1;
else
vsample = (vtotal + vblank_end) >> 1;
/*
* Wait for the border to be displayed
*/
while (I915_READ(pipe_dsl_reg) >= vactive)
;
while ((dsl = I915_READ(pipe_dsl_reg)) <= vsample)
;
/*
* Watch ST00 for an entire scanline
*/
detect = 0;
count = 0;
do {
count++;
/* Read the ST00 VGA status register */
st00 = I915_READ8(VGA_MSR_WRITE);
if (st00 & (1 << 4))
detect++;
} while ((I915_READ(pipe_dsl_reg) == dsl));
/* restore vblank if necessary */
if (restore_vblank)
I915_WRITE(vblank_reg, vblank);
/*
* If more than 3/4 of the scanline detected a monitor,
* then it is assumed to be present. This works even on i830,
* where there isn't any way to force the border color across
* the screen
*/
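/* detect * 4 > count * 3 tests detect/count > 3/4 in pure integer
* arithmetic. */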
status = detect * 4 > count * 3 ?
connector_status_connected :
connector_status_disconnected;
}
/* Restore previous settings */
I915_WRITE(bclrpat_reg, save_bclrpat);
return status;
}
static enum drm_connector_status
intel_crt_detect(struct drm_connector *connector, bool force)
{
struct drm_device *dev = connector->dev;
struct intel_crt *crt = intel_attached_crt(connector);
enum drm_connector_status status;
struct intel_load_detect_pipe tmp;
if (I915_HAS_HOTPLUG(dev)) {
/* We cannot rely on the HPD pin always being correctly wired
* up; for example, many KVMs do not pass it through, so we
* only trust an assertion that the monitor is connected.
*/
if (intel_crt_detect_hotplug(connector)) {
DRM_DEBUG_KMS("CRT detected via hotplug\n");
return connector_status_connected;
} else
DRM_DEBUG_KMS("CRT not detected via hotplug\n");
}
if (intel_crt_detect_ddc(connector))
return connector_status_connected;
/* Load detection is broken on HPD-capable machines. Whoever wants a
* broken monitor (without EDID) to work behind a broken KVM (one that
* lacks the right resistors for HPD detection) needs to fix this up.
* For now just bail out. */
if (I915_HAS_HOTPLUG(dev))
return connector_status_disconnected;
if (!force)
return connector->status;
/* for pre-945g platforms use load detect */
if (intel_get_load_detect_pipe(connector, NULL, &tmp)) {
if (intel_crt_detect_ddc(connector))
status = connector_status_connected;
else
status = intel_crt_load_detect(crt);
intel_release_load_detect_pipe(connector, &tmp);
} else
status = connector_status_unknown;
return status;
}
static void intel_crt_destroy(struct drm_connector *connector)
{
drm_connector_cleanup(connector);
free(connector, DRM_MEM_KMS);
}
static int intel_crt_get_modes(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
device_t i2c;
i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->crt_ddc_pin);
ret = intel_crt_ddc_get_modes(connector, i2c);
if (ret || !IS_G4X(dev))
return ret;
/* Try to probe digital port for output in DVI-I -> VGA mode. */
i2c = intel_gmbus_get_adapter(dev_priv, GMBUS_PORT_DPB);
return intel_crt_ddc_get_modes(connector, i2c);
}
static int intel_crt_set_property(struct drm_connector *connector,
struct drm_property *property,
uint64_t value)
{
return 0;
}
static void intel_crt_reset(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crt *crt = intel_attached_crt(connector);
if (HAS_PCH_SPLIT(dev)) {
u32 adpa;
adpa = I915_READ(PCH_ADPA);
adpa &= ~ADPA_CRT_HOTPLUG_MASK;
adpa |= ADPA_HOTPLUG_BITS;
I915_WRITE(PCH_ADPA, adpa);
POSTING_READ(PCH_ADPA);
DRM_DEBUG_KMS("pch crt adpa set to 0x%x\n", adpa);
crt->force_hotplug_required = 1;
}
}
/*
* Routines for controlling the analog (VGA) port
*/
static const struct drm_encoder_helper_funcs crt_encoder_funcs = {
.mode_fixup = intel_crt_mode_fixup,
.mode_set = intel_crt_mode_set,
.disable = intel_encoder_noop,
};
static const struct drm_connector_funcs intel_crt_connector_funcs = {
.reset = intel_crt_reset,
.dpms = intel_crt_dpms,
.detect = intel_crt_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = intel_crt_destroy,
.set_property = intel_crt_set_property,
};
static const struct drm_connector_helper_funcs intel_crt_connector_helper_funcs = {
.mode_valid = intel_crt_mode_valid,
.get_modes = intel_crt_get_modes,
.best_encoder = intel_best_encoder,
};
static const struct drm_encoder_funcs intel_crt_enc_funcs = {
.destroy = intel_encoder_destroy,
};
static int __init intel_no_crt_dmi_callback(const struct dmi_system_id *id)
{
DRM_INFO("Skipping CRT initialization for %s\n", id->ident);
return 1;
}
static const struct dmi_system_id intel_no_crt[] = {
{
.callback = intel_no_crt_dmi_callback,
.ident = "ACER ZGB",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ACER"),
DMI_MATCH(DMI_PRODUCT_NAME, "ZGB"),
},
},
{ }
};
void intel_crt_init(struct drm_device *dev)
{
struct drm_connector *connector;
struct intel_crt *crt;
struct intel_connector *intel_connector;
struct drm_i915_private *dev_priv = dev->dev_private;
/* Skip machines without VGA that falsely report hotplug events */
if (dmi_check_system(intel_no_crt))
return;
crt = malloc(sizeof(struct intel_crt), DRM_MEM_KMS, M_WAITOK | M_ZERO);
if (!crt)
return;
intel_connector = malloc(sizeof(struct intel_connector), DRM_MEM_KMS, M_WAITOK | M_ZERO);
if (!intel_connector) {
free(crt, DRM_MEM_KMS);
return;
}
connector = &intel_connector->base;
crt->connector = intel_connector;
drm_connector_init(dev, &intel_connector->base,
&intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
drm_encoder_init(dev, &crt->base.base, &intel_crt_enc_funcs,
DRM_MODE_ENCODER_DAC);
intel_connector_attach_encoder(intel_connector, &crt->base);
crt->base.type = INTEL_OUTPUT_ANALOG;
crt->base.cloneable = true;
if (IS_I830(dev))
crt->base.crtc_mask = (1 << 0);
else
crt->base.crtc_mask = (1 << 0) | (1 << 1) | (1 << 2);
if (IS_GEN2(dev))
connector->interlace_allowed = 0;
else
connector->interlace_allowed = 1;
connector->doublescan_allowed = 0;
if (HAS_PCH_SPLIT(dev))
crt->adpa_reg = PCH_ADPA;
else if (IS_VALLEYVIEW(dev))
crt->adpa_reg = VLV_ADPA;
else
crt->adpa_reg = ADPA;
crt->base.disable = intel_disable_crt;
crt->base.enable = intel_enable_crt;
if (IS_HASWELL(dev))
crt->base.get_hw_state = intel_ddi_get_hw_state;
else
crt->base.get_hw_state = intel_crt_get_hw_state;
intel_connector->get_hw_state = intel_connector_get_hw_state;
drm_encoder_helper_add(&crt->base.base, &crt_encoder_funcs);
drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
if (I915_HAS_HOTPLUG(dev))
connector->polled = DRM_CONNECTOR_POLL_HPD;
else
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/*
* Configure the automatic hotplug detection
*/
crt->force_hotplug_required = 0;
dev_priv->hotplug_supported_mask |= CRT_HOTPLUG_INT_STATUS;
/*
- * TODO: find a proper way to discover whether we need to set the the
+ * TODO: find a proper way to discover whether we need to set the
* polarity and link reversal bits or not, instead of relying on the
* BIOS.
*/
if (HAS_PCH_LPT(dev)) {
u32 fdi_config = FDI_RX_POLARITY_REVERSED_LPT |
FDI_RX_LINK_REVERSAL_OVERRIDE;
dev_priv->fdi_rx_config = I915_READ(_FDI_RXA_CTL) & fdi_config;
}
}
Index: head/sys/dev/drm2/i915/intel_display.c
===================================================================
--- head/sys/dev/drm2/i915/intel_display.c (revision 300049)
+++ head/sys/dev/drm2/i915/intel_display.c (revision 300050)
@@ -1,9560 +1,9560 @@
/*
* Copyright © 2006-2007 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* Authors:
* Eric Anholt <eric@anholt.net>
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/drm2/drmP.h>
#include <dev/drm2/drm_edid.h>
#include <dev/drm2/i915/intel_drv.h>
#include <dev/drm2/i915/i915_drm.h>
#include <dev/drm2/i915/i915_drv.h>
#include <dev/drm2/drm_dp_helper.h>
#include <dev/drm2/drm_crtc_helper.h>
bool intel_pipe_has_type(struct drm_crtc *crtc, int type);
static void intel_increase_pllclock(struct drm_crtc *crtc);
static void intel_crtc_update_cursor(struct drm_crtc *crtc, bool on);
typedef struct {
/* given values */
int n;
int m1, m2;
int p1, p2;
/* derived values */
int dot;
int vco;
int m;
int p;
} intel_clock_t;
typedef struct {
int min, max;
} intel_range_t;
typedef struct {
int dot_limit;
int p2_slow, p2_fast;
} intel_p2_t;
#define INTEL_P2_NUM 2
typedef struct intel_limit intel_limit_t;
struct intel_limit {
intel_range_t dot, vco, n, m, m1, m2, p, p1;
intel_p2_t p2;
bool (* find_pll)(const intel_limit_t *, struct drm_crtc *,
int, int, intel_clock_t *, intel_clock_t *);
};
/* FDI */
#define IRONLAKE_FDI_FREQ 2700000 /* in kHz for mode->clock */
int
intel_pch_rawclk(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(!HAS_PCH_SPLIT(dev));
return I915_READ(PCH_RAWCLK_FREQ) & RAWCLK_FREQ_MASK;
}
static bool
intel_find_best_PLL(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock);
static bool
intel_g4x_find_best_PLL(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock);
static bool
intel_find_pll_g4x_dp(const intel_limit_t *, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock);
static bool
intel_find_pll_ironlake_dp(const intel_limit_t *, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock);
static bool
intel_vlv_find_best_pll(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock);
static inline u32 /* units of 100MHz */
intel_fdi_link_freq(struct drm_device *dev)
{
if (IS_GEN5(dev)) {
struct drm_i915_private *dev_priv = dev->dev_private;
return (I915_READ(FDI_PLL_BIOS_0) & FDI_PLL_FB_CLOCK_MASK) + 2;
} else
return 27;
}
static const intel_limit_t intel_limits_i8xx_dvo = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 930000, .max = 1400000 },
.n = { .min = 3, .max = 16 },
.m = { .min = 96, .max = 140 },
.m1 = { .min = 18, .max = 26 },
.m2 = { .min = 6, .max = 16 },
.p = { .min = 4, .max = 128 },
.p1 = { .min = 2, .max = 33 },
.p2 = { .dot_limit = 165000,
.p2_slow = 4, .p2_fast = 2 },
.find_pll = intel_find_best_PLL,
};
static const intel_limit_t intel_limits_i8xx_lvds = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 930000, .max = 1400000 },
.n = { .min = 3, .max = 16 },
.m = { .min = 96, .max = 140 },
.m1 = { .min = 18, .max = 26 },
.m2 = { .min = 6, .max = 16 },
.p = { .min = 4, .max = 128 },
.p1 = { .min = 1, .max = 6 },
.p2 = { .dot_limit = 165000,
.p2_slow = 14, .p2_fast = 7 },
.find_pll = intel_find_best_PLL,
};
static const intel_limit_t intel_limits_i9xx_sdvo = {
.dot = { .min = 20000, .max = 400000 },
.vco = { .min = 1400000, .max = 2800000 },
.n = { .min = 1, .max = 6 },
.m = { .min = 70, .max = 120 },
.m1 = { .min = 8, .max = 18 },
.m2 = { .min = 3, .max = 7 },
.p = { .min = 5, .max = 80 },
.p1 = { .min = 1, .max = 8 },
.p2 = { .dot_limit = 200000,
.p2_slow = 10, .p2_fast = 5 },
.find_pll = intel_find_best_PLL,
};
static const intel_limit_t intel_limits_i9xx_lvds = {
.dot = { .min = 20000, .max = 400000 },
.vco = { .min = 1400000, .max = 2800000 },
.n = { .min = 1, .max = 6 },
.m = { .min = 70, .max = 120 },
.m1 = { .min = 10, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 7, .max = 98 },
.p1 = { .min = 1, .max = 8 },
.p2 = { .dot_limit = 112000,
.p2_slow = 14, .p2_fast = 7 },
.find_pll = intel_find_best_PLL,
};
static const intel_limit_t intel_limits_g4x_sdvo = {
.dot = { .min = 25000, .max = 270000 },
.vco = { .min = 1750000, .max = 3500000},
.n = { .min = 1, .max = 4 },
.m = { .min = 104, .max = 138 },
.m1 = { .min = 17, .max = 23 },
.m2 = { .min = 5, .max = 11 },
.p = { .min = 10, .max = 30 },
.p1 = { .min = 1, .max = 3},
.p2 = { .dot_limit = 270000,
.p2_slow = 10,
.p2_fast = 10
},
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_g4x_hdmi = {
.dot = { .min = 22000, .max = 400000 },
.vco = { .min = 1750000, .max = 3500000},
.n = { .min = 1, .max = 4 },
.m = { .min = 104, .max = 138 },
.m1 = { .min = 16, .max = 23 },
.m2 = { .min = 5, .max = 11 },
.p = { .min = 5, .max = 80 },
.p1 = { .min = 1, .max = 8},
.p2 = { .dot_limit = 165000,
.p2_slow = 10, .p2_fast = 5 },
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_g4x_single_channel_lvds = {
.dot = { .min = 20000, .max = 115000 },
.vco = { .min = 1750000, .max = 3500000 },
.n = { .min = 1, .max = 3 },
.m = { .min = 104, .max = 138 },
.m1 = { .min = 17, .max = 23 },
.m2 = { .min = 5, .max = 11 },
.p = { .min = 28, .max = 112 },
.p1 = { .min = 2, .max = 8 },
.p2 = { .dot_limit = 0,
.p2_slow = 14, .p2_fast = 14
},
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_g4x_dual_channel_lvds = {
.dot = { .min = 80000, .max = 224000 },
.vco = { .min = 1750000, .max = 3500000 },
.n = { .min = 1, .max = 3 },
.m = { .min = 104, .max = 138 },
.m1 = { .min = 17, .max = 23 },
.m2 = { .min = 5, .max = 11 },
.p = { .min = 14, .max = 42 },
.p1 = { .min = 2, .max = 6 },
.p2 = { .dot_limit = 0,
.p2_slow = 7, .p2_fast = 7
},
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_g4x_display_port = {
.dot = { .min = 161670, .max = 227000 },
.vco = { .min = 1750000, .max = 3500000},
.n = { .min = 1, .max = 2 },
.m = { .min = 97, .max = 108 },
.m1 = { .min = 0x10, .max = 0x12 },
.m2 = { .min = 0x05, .max = 0x06 },
.p = { .min = 10, .max = 20 },
.p1 = { .min = 1, .max = 2},
.p2 = { .dot_limit = 0,
.p2_slow = 10, .p2_fast = 10 },
.find_pll = intel_find_pll_g4x_dp,
};
static const intel_limit_t intel_limits_pineview_sdvo = {
.dot = { .min = 20000, .max = 400000},
.vco = { .min = 1700000, .max = 3500000 },
/* Pineview's Ncounter is a ring counter */
.n = { .min = 3, .max = 6 },
.m = { .min = 2, .max = 256 },
/* Pineview only has one combined m divider, which we treat as m2. */
.m1 = { .min = 0, .max = 0 },
.m2 = { .min = 0, .max = 254 },
.p = { .min = 5, .max = 80 },
.p1 = { .min = 1, .max = 8 },
.p2 = { .dot_limit = 200000,
.p2_slow = 10, .p2_fast = 5 },
.find_pll = intel_find_best_PLL,
};
static const intel_limit_t intel_limits_pineview_lvds = {
.dot = { .min = 20000, .max = 400000 },
.vco = { .min = 1700000, .max = 3500000 },
.n = { .min = 3, .max = 6 },
.m = { .min = 2, .max = 256 },
.m1 = { .min = 0, .max = 0 },
.m2 = { .min = 0, .max = 254 },
.p = { .min = 7, .max = 112 },
.p1 = { .min = 1, .max = 8 },
.p2 = { .dot_limit = 112000,
.p2_slow = 14, .p2_fast = 14 },
.find_pll = intel_find_best_PLL,
};
/* Ironlake / Sandybridge
*
* We calculate clock using (register_value + 2) for N/M1/M2, so here
* the range value for them is (actual_value - 2).
*/
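/* For example, .m1 = { .min = 12, .max = 22 } below corresponds to actual
* M1 divider values of 14..24. */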
static const intel_limit_t intel_limits_ironlake_dac = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 1760000, .max = 3510000 },
.n = { .min = 1, .max = 5 },
.m = { .min = 79, .max = 127 },
.m1 = { .min = 12, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 5, .max = 80 },
.p1 = { .min = 1, .max = 8 },
.p2 = { .dot_limit = 225000,
.p2_slow = 10, .p2_fast = 5 },
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_ironlake_single_lvds = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 1760000, .max = 3510000 },
.n = { .min = 1, .max = 3 },
.m = { .min = 79, .max = 118 },
.m1 = { .min = 12, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 28, .max = 112 },
.p1 = { .min = 2, .max = 8 },
.p2 = { .dot_limit = 225000,
.p2_slow = 14, .p2_fast = 14 },
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_ironlake_dual_lvds = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 1760000, .max = 3510000 },
.n = { .min = 1, .max = 3 },
.m = { .min = 79, .max = 127 },
.m1 = { .min = 12, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 14, .max = 56 },
.p1 = { .min = 2, .max = 8 },
.p2 = { .dot_limit = 225000,
.p2_slow = 7, .p2_fast = 7 },
.find_pll = intel_g4x_find_best_PLL,
};
/* LVDS 100MHz refclk limits. */
static const intel_limit_t intel_limits_ironlake_single_lvds_100m = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 1760000, .max = 3510000 },
.n = { .min = 1, .max = 2 },
.m = { .min = 79, .max = 126 },
.m1 = { .min = 12, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 28, .max = 112 },
.p1 = { .min = 2, .max = 8 },
.p2 = { .dot_limit = 225000,
.p2_slow = 14, .p2_fast = 14 },
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_ironlake_dual_lvds_100m = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 1760000, .max = 3510000 },
.n = { .min = 1, .max = 3 },
.m = { .min = 79, .max = 126 },
.m1 = { .min = 12, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 14, .max = 42 },
.p1 = { .min = 2, .max = 6 },
.p2 = { .dot_limit = 225000,
.p2_slow = 7, .p2_fast = 7 },
.find_pll = intel_g4x_find_best_PLL,
};
static const intel_limit_t intel_limits_ironlake_display_port = {
.dot = { .min = 25000, .max = 350000 },
.vco = { .min = 1760000, .max = 3510000},
.n = { .min = 1, .max = 2 },
.m = { .min = 81, .max = 90 },
.m1 = { .min = 12, .max = 22 },
.m2 = { .min = 5, .max = 9 },
.p = { .min = 10, .max = 20 },
.p1 = { .min = 1, .max = 2},
.p2 = { .dot_limit = 0,
.p2_slow = 10, .p2_fast = 10 },
.find_pll = intel_find_pll_ironlake_dp,
};
static const intel_limit_t intel_limits_vlv_dac = {
.dot = { .min = 25000, .max = 270000 },
.vco = { .min = 4000000, .max = 6000000 },
.n = { .min = 1, .max = 7 },
.m = { .min = 22, .max = 450 }, /* guess */
.m1 = { .min = 2, .max = 3 },
.m2 = { .min = 11, .max = 156 },
.p = { .min = 10, .max = 30 },
.p1 = { .min = 2, .max = 3 },
.p2 = { .dot_limit = 270000,
.p2_slow = 2, .p2_fast = 20 },
.find_pll = intel_vlv_find_best_pll,
};
static const intel_limit_t intel_limits_vlv_hdmi = {
.dot = { .min = 20000, .max = 165000 },
.vco = { .min = 4000000, .max = 5994000},
.n = { .min = 1, .max = 7 },
.m = { .min = 60, .max = 300 }, /* guess */
.m1 = { .min = 2, .max = 3 },
.m2 = { .min = 11, .max = 156 },
.p = { .min = 10, .max = 30 },
.p1 = { .min = 2, .max = 3 },
.p2 = { .dot_limit = 270000,
.p2_slow = 2, .p2_fast = 20 },
.find_pll = intel_vlv_find_best_pll,
};
static const intel_limit_t intel_limits_vlv_dp = {
.dot = { .min = 25000, .max = 270000 },
.vco = { .min = 4000000, .max = 6000000 },
.n = { .min = 1, .max = 7 },
.m = { .min = 22, .max = 450 },
.m1 = { .min = 2, .max = 3 },
.m2 = { .min = 11, .max = 156 },
.p = { .min = 10, .max = 30 },
.p1 = { .min = 2, .max = 3 },
.p2 = { .dot_limit = 270000,
.p2_slow = 2, .p2_fast = 20 },
.find_pll = intel_vlv_find_best_pll,
};
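/* DPIO is Valleyview's sideband bus to the display PHY: write the target
* register offset to DPIO_REG, kick off a read or write packet via
* DPIO_PKT, then poll until the BUSY bit clears. */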
u32 intel_dpio_read(struct drm_i915_private *dev_priv, int reg)
{
u32 val = 0;
sx_xlock(&dev_priv->dpio_lock);
if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100)) {
DRM_ERROR("DPIO idle wait timed out\n");
goto out_unlock;
}
I915_WRITE(DPIO_REG, reg);
I915_WRITE(DPIO_PKT, DPIO_RID | DPIO_OP_READ | DPIO_PORTID |
DPIO_BYTE);
if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100)) {
DRM_ERROR("DPIO read wait timed out\n");
goto out_unlock;
}
val = I915_READ(DPIO_DATA);
out_unlock:
sx_xunlock(&dev_priv->dpio_lock);
return val;
}
static void intel_dpio_write(struct drm_i915_private *dev_priv, int reg,
u32 val)
{
sx_xlock(&dev_priv->dpio_lock);
if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100)) {
DRM_ERROR("DPIO idle wait timed out\n");
goto out_unlock;
}
I915_WRITE(DPIO_DATA, val);
I915_WRITE(DPIO_REG, reg);
I915_WRITE(DPIO_PKT, DPIO_RID | DPIO_OP_WRITE | DPIO_PORTID |
DPIO_BYTE);
if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100))
DRM_ERROR("DPIO write wait timed out\n");
out_unlock:
sx_xunlock(&dev_priv->dpio_lock);
}
static void vlv_init_dpio(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
/* Reset the DPIO config */
I915_WRITE(DPIO_CTL, 0);
POSTING_READ(DPIO_CTL);
I915_WRITE(DPIO_CTL, 1);
POSTING_READ(DPIO_CTL);
}
static int intel_dual_link_lvds_callback(const struct dmi_system_id *id)
{
DRM_INFO("Forcing lvds to dual link mode on %s\n", id->ident);
return 1;
}
static const struct dmi_system_id intel_dual_link_lvds[] = {
{
.callback = intel_dual_link_lvds_callback,
.ident = "Apple MacBook Pro (Core i5/i7 Series)",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro8,2"),
},
},
{ } /* terminating entry */
};
static bool is_dual_link_lvds(struct drm_i915_private *dev_priv,
unsigned int reg)
{
unsigned int val;
/* use the module option value if specified */
if (i915_lvds_channel_mode > 0)
return i915_lvds_channel_mode == 2;
if (dmi_check_system(intel_dual_link_lvds))
return true;
if (dev_priv->lvds_val)
val = dev_priv->lvds_val;
else {
/* The BIOS should set the proper LVDS register value at boot, but
* in reality it doesn't set the value when the lid is closed;
* we need to check "the value to be set" in the VBT when the LVDS
* register is uninitialized.
*/
val = I915_READ(reg);
if (!(val & ~(LVDS_PIPE_MASK | LVDS_DETECTED)))
val = dev_priv->bios_lvds_val;
dev_priv->lvds_val = val;
}
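/* Dual-link LVDS powers up the second (B) channel's clock pair, so the
* CLKB power state distinguishes single- from dual-link. */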
return (val & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP;
}
static const intel_limit_t *intel_ironlake_limit(struct drm_crtc *crtc,
int refclk)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
const intel_limit_t *limit;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
if (is_dual_link_lvds(dev_priv, PCH_LVDS)) {
/* LVDS dual channel */
if (refclk == 100000)
limit = &intel_limits_ironlake_dual_lvds_100m;
else
limit = &intel_limits_ironlake_dual_lvds;
} else {
if (refclk == 100000)
limit = &intel_limits_ironlake_single_lvds_100m;
else
limit = &intel_limits_ironlake_single_lvds;
}
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))
limit = &intel_limits_ironlake_display_port;
else
limit = &intel_limits_ironlake_dac;
return limit;
}
static const intel_limit_t *intel_g4x_limit(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
const intel_limit_t *limit;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
if (is_dual_link_lvds(dev_priv, LVDS))
/* LVDS with dual channel */
limit = &intel_limits_g4x_dual_channel_lvds;
else
/* LVDS with single channel */
limit = &intel_limits_g4x_single_channel_lvds;
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_HDMI) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_ANALOG)) {
limit = &intel_limits_g4x_hdmi;
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_SDVO)) {
limit = &intel_limits_g4x_sdvo;
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
limit = &intel_limits_g4x_display_port;
} else /* The option is for other outputs */
limit = &intel_limits_i9xx_sdvo;
return limit;
}
static const intel_limit_t *intel_limit(struct drm_crtc *crtc, int refclk)
{
struct drm_device *dev = crtc->dev;
const intel_limit_t *limit;
if (HAS_PCH_SPLIT(dev))
limit = intel_ironlake_limit(crtc, refclk);
else if (IS_G4X(dev)) {
limit = intel_g4x_limit(crtc);
} else if (IS_PINEVIEW(dev)) {
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
limit = &intel_limits_pineview_lvds;
else
limit = &intel_limits_pineview_sdvo;
} else if (IS_VALLEYVIEW(dev)) {
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_ANALOG))
limit = &intel_limits_vlv_dac;
else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_HDMI))
limit = &intel_limits_vlv_hdmi;
else
limit = &intel_limits_vlv_dp;
} else if (!IS_GEN2(dev)) {
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
limit = &intel_limits_i9xx_lvds;
else
limit = &intel_limits_i9xx_sdvo;
} else {
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
limit = &intel_limits_i8xx_lvds;
else
limit = &intel_limits_i8xx_dvo;
}
return limit;
}
/* m1 is reserved as 0 in Pineview, n is a ring counter */
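/* Note n is used as-is (no +2 bias) since it is a ring-counter value, and
* m1 is unused: the single M divider is carried entirely in m2. */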
static void pineview_clock(int refclk, intel_clock_t *clock)
{
clock->m = clock->m2 + 2;
clock->p = clock->p1 * clock->p2;
clock->vco = refclk * clock->m / clock->n;
clock->dot = clock->vco / clock->p;
}
static void intel_clock(struct drm_device *dev, int refclk, intel_clock_t *clock)
{
if (IS_PINEVIEW(dev)) {
pineview_clock(refclk, clock);
return;
}
clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2);
clock->p = clock->p1 * clock->p2;
clock->vco = refclk * clock->m / (clock->n + 2);
clock->dot = clock->vco / clock->p;
}
/**
* Returns whether any output on the specified pipe is of the specified type
*/
bool intel_pipe_has_type(struct drm_crtc *crtc, int type)
{
struct drm_device *dev = crtc->dev;
struct intel_encoder *encoder;
for_each_encoder_on_crtc(dev, crtc, encoder)
if (encoder->type == type)
return true;
return false;
}
#define INTELPllInvalid(s) do { /* DRM_DEBUG(s); */ return false; } while (0)
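/* The do { } while (0) wrapper makes the macro expand to a single
* statement, so it stays safe inside unbraced if/else bodies. */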
/**
* Returns whether the given set of divisors is valid for a given refclk with
* the given connectors.
*/
static bool intel_PLL_is_valid(struct drm_device *dev,
const intel_limit_t *limit,
const intel_clock_t *clock)
{
if (clock->p1 < limit->p1.min || limit->p1.max < clock->p1)
INTELPllInvalid("p1 out of range\n");
if (clock->p < limit->p.min || limit->p.max < clock->p)
INTELPllInvalid("p out of range\n");
if (clock->m2 < limit->m2.min || limit->m2.max < clock->m2)
INTELPllInvalid("m2 out of range\n");
if (clock->m1 < limit->m1.min || limit->m1.max < clock->m1)
INTELPllInvalid("m1 out of range\n");
if (clock->m1 <= clock->m2 && !IS_PINEVIEW(dev))
INTELPllInvalid("m1 <= m2\n");
if (clock->m < limit->m.min || limit->m.max < clock->m)
INTELPllInvalid("m out of range\n");
if (clock->n < limit->n.min || limit->n.max < clock->n)
INTELPllInvalid("n out of range\n");
if (clock->vco < limit->vco.min || limit->vco.max < clock->vco)
INTELPllInvalid("vco out of range\n");
/* XXX: We may need to be checking "Dot clock" depending on the multiplier,
* connector, etc., rather than just a single range.
*/
if (clock->dot < limit->dot.min || limit->dot.max < clock->dot)
INTELPllInvalid("dot out of range\n");
return true;
}
static bool
intel_find_best_PLL(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
intel_clock_t clock;
int err = target;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
(I915_READ(LVDS)) != 0) {
/*
* For LVDS, if the panel is on, just rely on its current
* settings for dual-channel. We haven't figured out how to
* reliably set up different single/dual channel state, if we
* even can.
*/
if (is_dual_link_lvds(dev_priv, LVDS))
clock.p2 = limit->p2.p2_fast;
else
clock.p2 = limit->p2.p2_slow;
} else {
if (target < limit->p2.dot_limit)
clock.p2 = limit->p2.p2_slow;
else
clock.p2 = limit->p2.p2_fast;
}
memset(best_clock, 0, sizeof(*best_clock));
for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max;
clock.m1++) {
for (clock.m2 = limit->m2.min;
clock.m2 <= limit->m2.max; clock.m2++) {
/* m1 is always 0 in Pineview */
if (clock.m2 >= clock.m1 && !IS_PINEVIEW(dev))
break;
for (clock.n = limit->n.min;
clock.n <= limit->n.max; clock.n++) {
for (clock.p1 = limit->p1.min;
clock.p1 <= limit->p1.max; clock.p1++) {
int this_err;
intel_clock(dev, refclk, &clock);
if (!intel_PLL_is_valid(dev, limit,
&clock))
continue;
if (match_clock &&
clock.p != match_clock->p)
continue;
this_err = abs(clock.dot - target);
if (this_err < err) {
*best_clock = clock;
err = this_err;
}
}
}
}
}
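/* err was initialised to target, so err != target means at least one
* valid divisor combination was found. */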
return (err != target);
}
static bool
intel_g4x_find_best_PLL(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
intel_clock_t clock;
int max_n;
bool found;
/* approximately equals target * 0.00585 */
int err_most = (target >> 8) + (target >> 9);
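/* (target >> 8) + (target >> 9) = target * (1/256 + 1/512)
* = target * 3/512, i.e. about target * 0.00586. */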
found = false;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
int lvds_reg;
if (HAS_PCH_SPLIT(dev))
lvds_reg = PCH_LVDS;
else
lvds_reg = LVDS;
if ((I915_READ(lvds_reg) & LVDS_CLKB_POWER_MASK) ==
LVDS_CLKB_POWER_UP)
clock.p2 = limit->p2.p2_fast;
else
clock.p2 = limit->p2.p2_slow;
} else {
if (target < limit->p2.dot_limit)
clock.p2 = limit->p2.p2_slow;
else
clock.p2 = limit->p2.p2_fast;
}
memset(best_clock, 0, sizeof(*best_clock));
max_n = limit->n.max;
/* based on hardware requirement, prefer smaller n to precision */
for (clock.n = limit->n.min; clock.n <= max_n; clock.n++) {
/* based on hardware requirement, prefer larger m1, m2 */
for (clock.m1 = limit->m1.max;
clock.m1 >= limit->m1.min; clock.m1--) {
for (clock.m2 = limit->m2.max;
clock.m2 >= limit->m2.min; clock.m2--) {
for (clock.p1 = limit->p1.max;
clock.p1 >= limit->p1.min; clock.p1--) {
int this_err;
intel_clock(dev, refclk, &clock);
if (!intel_PLL_is_valid(dev, limit,
&clock))
continue;
if (match_clock &&
clock.p != match_clock->p)
continue;
this_err = abs(clock.dot - target);
if (this_err < err_most) {
*best_clock = clock;
err_most = this_err;
max_n = clock.n;
found = true;
}
}
}
}
}
return found;
}
static bool
intel_find_pll_ironlake_dp(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
struct drm_device *dev = crtc->dev;
intel_clock_t clock;
if (target < 200000) {
clock.n = 1;
clock.p1 = 2;
clock.p2 = 10;
clock.m1 = 12;
clock.m2 = 9;
} else {
clock.n = 2;
clock.p1 = 1;
clock.p2 = 10;
clock.m1 = 14;
clock.m2 = 8;
}
intel_clock(dev, refclk, &clock);
memcpy(best_clock, &clock, sizeof(intel_clock_t));
return true;
}
/* DisplayPort has only two frequencies, 162MHz and 270MHz */
static bool
intel_find_pll_g4x_dp(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
intel_clock_t clock;
if (target < 200000) {
clock.p1 = 2;
clock.p2 = 10;
clock.n = 2;
clock.m1 = 23;
clock.m2 = 8;
} else {
clock.p1 = 1;
clock.p2 = 10;
clock.n = 1;
clock.m1 = 14;
clock.m2 = 2;
}
clock.m = 5 * (clock.m1 + 2) + (clock.m2 + 2);
clock.p = (clock.p1 * clock.p2);
clock.dot = 96000 * clock.m / (clock.n + 2) / clock.p;
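/* Worked through: the low set gives 96000 * 135 / 4 / 20 = 162000 kHz and
* the high set 96000 * 84 / 3 / 10 = 268800 kHz, i.e. the 162MHz and
* ~270MHz DP link rates noted above. */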
clock.vco = 0;
memcpy(best_clock, &clock, sizeof(intel_clock_t));
return true;
}
static bool
intel_vlv_find_best_pll(const intel_limit_t *limit, struct drm_crtc *crtc,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
u32 p1, p2, m1, m2, vco, bestn, bestm1, bestm2, bestp1, bestp2;
u32 m, n, fastclk;
u32 updrate, minupdate, fracbits, p;
unsigned long bestppm, ppm, absppm;
int dotclk, flag;
flag = 0;
dotclk = target * 1000;
bestppm = 1000000;
ppm = absppm = 0;
fastclk = dotclk / (2*100);
updrate = 0;
minupdate = 19200;
fracbits = 1;
n = p = p1 = p2 = m = m1 = m2 = vco = bestn = 0;
bestm1 = bestm2 = bestp1 = bestp2 = 0;
/* based on hardware requirement, prefer smaller n to precision */
for (n = limit->n.min; n <= ((refclk) / minupdate); n++) {
updrate = refclk / n;
for (p1 = limit->p1.max; p1 > limit->p1.min; p1--) {
for (p2 = limit->p2.p2_fast+1; p2 > 0; p2--) {
if (p2 > 10)
p2 = p2 - 1;
p = p1 * p2;
/* based on hardware requirement, prefer bigger m1,m2 values */
for (m1 = limit->m1.min; m1 <= limit->m1.max; m1++) {
m2 = (((2*(fastclk * p * n / m1 )) +
refclk) / (2*refclk));
m = m1 * m2;
vco = updrate * m;
if (vco >= limit->vco.min && vco < limit->vco.max) {
ppm = 1000000 * ((vco / p) - fastclk) / fastclk;
absppm = (ppm > 0) ? ppm : (-ppm);
if (absppm < 100 && ((p1 * p2) > (bestp1 * bestp2))) {
bestppm = 0;
flag = 1;
}
if (absppm < bestppm - 10) {
bestppm = absppm;
flag = 1;
}
if (flag) {
bestn = n;
bestm1 = m1;
bestm2 = m2;
bestp1 = p1;
bestp2 = p2;
flag = 0;
}
}
}
}
}
}
best_clock->n = bestn;
best_clock->m1 = bestm1;
best_clock->m2 = bestm2;
best_clock->p1 = bestp1;
best_clock->p2 = bestp2;
return true;
}
enum transcoder intel_pipe_to_cpu_transcoder(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
return intel_crtc->cpu_transcoder;
}
static void ironlake_wait_for_vblank(struct drm_device *dev, int pipe)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 frame, frame_reg = PIPEFRAME(pipe);
frame = I915_READ(frame_reg);
if (wait_for(I915_READ_NOTRACE(frame_reg) != frame, 50))
DRM_DEBUG_KMS("vblank wait timed out\n");
}
/**
* intel_wait_for_vblank - wait for vblank on a given pipe
* @dev: drm device
* @pipe: pipe to wait for
*
* Wait for vblank to occur on a given pipe. Needed for various bits of
* mode setting code.
*/
void intel_wait_for_vblank(struct drm_device *dev, int pipe)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int pipestat_reg = PIPESTAT(pipe);
if (INTEL_INFO(dev)->gen >= 5) {
ironlake_wait_for_vblank(dev, pipe);
return;
}
/* Clear existing vblank status. Note this will clear any other
* sticky status fields as well.
*
* This races with i915_driver_irq_handler() with the result
* that either function could miss a vblank event. Here it is not
* fatal, as we will either wait upon the next vblank interrupt or
* timeout. Generally speaking intel_wait_for_vblank() is only
* called during modeset at which time the GPU should be idle and
* should *not* be performing page flips and thus not waiting on
* vblanks...
* Currently, the result of us stealing a vblank from the irq
* handler is that a single frame will be skipped during swapbuffers.
*/
I915_WRITE(pipestat_reg,
I915_READ(pipestat_reg) | PIPE_VBLANK_INTERRUPT_STATUS);
/* Wait for vblank interrupt bit to set */
if (wait_for(I915_READ(pipestat_reg) &
PIPE_VBLANK_INTERRUPT_STATUS,
50))
DRM_DEBUG_KMS("vblank wait timed out\n");
}
/*
* intel_wait_for_pipe_off - wait for pipe to turn off
* @dev: drm device
* @pipe: pipe to wait for
*
* After disabling a pipe, we can't wait for vblank in the usual way,
* spinning on the vblank interrupt status bit, since we won't actually
* see an interrupt when the pipe is disabled.
*
* On Gen4 and above:
* wait for the pipe register state bit to turn off
*
* Otherwise:
* wait for the display line value to settle (it usually
* ends up stopping at the start of the next frame).
*
*/
void intel_wait_for_pipe_off(struct drm_device *dev, int pipe)
{
struct drm_i915_private *dev_priv = dev->dev_private;
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
if (INTEL_INFO(dev)->gen >= 4) {
int reg = PIPECONF(cpu_transcoder);
/* Wait for the Pipe State to go off */
if (wait_for((I915_READ(reg) & I965_PIPECONF_ACTIVE) == 0,
100))
WARN(1, "pipe_off wait timed out\n");
} else {
u32 last_line, line_mask;
int reg = PIPEDSL(pipe);
unsigned long timeout = jiffies + msecs_to_jiffies(100);
if (IS_GEN2(dev))
line_mask = DSL_LINEMASK_GEN2;
else
line_mask = DSL_LINEMASK_GEN3;
/* Wait for the display line to settle */
do {
last_line = I915_READ(reg) & line_mask;
mdelay(5);
} while (((I915_READ(reg) & line_mask) != last_line) &&
time_after(timeout, jiffies));
if (time_after(jiffies, timeout))
WARN(1, "pipe_off wait timed out\n");
}
}
static const char *state_string(bool enabled)
{
return enabled ? "on" : "off";
}
/* Only for pre-ILK configs */
static void assert_pll(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state)
{
int reg;
u32 val;
bool cur_state;
reg = DPLL(pipe);
val = I915_READ(reg);
cur_state = !!(val & DPLL_VCO_ENABLE);
WARN(cur_state != state,
"PLL state assertion failure (expected %s, current %s)\n",
state_string(state), state_string(cur_state));
}
#define assert_pll_enabled(d, p) assert_pll(d, p, true)
#define assert_pll_disabled(d, p) assert_pll(d, p, false)
/* For ILK+ */
static void assert_pch_pll(struct drm_i915_private *dev_priv,
struct intel_pch_pll *pll,
struct intel_crtc *crtc,
bool state)
{
u32 val;
bool cur_state;
if (HAS_PCH_LPT(dev_priv->dev)) {
DRM_DEBUG_DRIVER("LPT detected: skipping PCH PLL test\n");
return;
}
if (WARN (!pll,
"asserting PCH PLL %s with no PLL\n", state_string(state)))
return;
val = I915_READ(pll->pll_reg);
cur_state = !!(val & DPLL_VCO_ENABLE);
WARN(cur_state != state,
"PCH PLL state for reg %x assertion failure (expected %s, current %s), val=%08x\n",
pll->pll_reg, state_string(state), state_string(cur_state), val);
/* Make sure the selected PLL is correctly attached to the transcoder */
if (crtc && HAS_PCH_CPT(dev_priv->dev)) {
u32 pch_dpll;
pch_dpll = I915_READ(PCH_DPLL_SEL);
cur_state = pll->pll_reg == _PCH_DPLL_B;
if (!WARN(((pch_dpll >> (4 * crtc->pipe)) & 1) != cur_state,
"PLL[%d] not attached to this transcoder %d: %08x\n",
cur_state, crtc->pipe, pch_dpll)) {
cur_state = !!(val >> (4*crtc->pipe + 3));
WARN(cur_state != state,
"PLL[%d] not %s on this transcoder %d: %08x\n",
pll->pll_reg == _PCH_DPLL_B,
state_string(state),
crtc->pipe,
val);
}
}
}
#define assert_pch_pll_enabled(d, p, c) assert_pch_pll(d, p, c, true)
#define assert_pch_pll_disabled(d, p, c) assert_pch_pll(d, p, c, false)
static void assert_fdi_tx(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state)
{
int reg;
u32 val;
bool cur_state;
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
if (IS_HASWELL(dev_priv->dev)) {
/* On Haswell, DDI is used instead of FDI_TX_CTL */
reg = TRANS_DDI_FUNC_CTL(cpu_transcoder);
val = I915_READ(reg);
cur_state = !!(val & TRANS_DDI_FUNC_ENABLE);
} else {
reg = FDI_TX_CTL(pipe);
val = I915_READ(reg);
cur_state = !!(val & FDI_TX_ENABLE);
}
WARN(cur_state != state,
"FDI TX state assertion failure (expected %s, current %s)\n",
state_string(state), state_string(cur_state));
}
#define assert_fdi_tx_enabled(d, p) assert_fdi_tx(d, p, true)
#define assert_fdi_tx_disabled(d, p) assert_fdi_tx(d, p, false)
static void assert_fdi_rx(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state)
{
int reg;
u32 val;
bool cur_state;
reg = FDI_RX_CTL(pipe);
val = I915_READ(reg);
cur_state = !!(val & FDI_RX_ENABLE);
WARN(cur_state != state,
"FDI RX state assertion failure (expected %s, current %s)\n",
state_string(state), state_string(cur_state));
}
#define assert_fdi_rx_enabled(d, p) assert_fdi_rx(d, p, true)
#define assert_fdi_rx_disabled(d, p) assert_fdi_rx(d, p, false)
static void assert_fdi_tx_pll_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
int reg;
u32 val;
/* ILK FDI PLL is always enabled */
if (dev_priv->info->gen == 5)
return;
/* On Haswell, DDI ports are responsible for the FDI PLL setup */
if (IS_HASWELL(dev_priv->dev))
return;
reg = FDI_TX_CTL(pipe);
val = I915_READ(reg);
WARN(!(val & FDI_TX_PLL_ENABLE), "FDI TX PLL assertion failure, should be active but is disabled\n");
}
static void assert_fdi_rx_pll_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
int reg;
u32 val;
reg = FDI_RX_CTL(pipe);
val = I915_READ(reg);
WARN(!(val & FDI_RX_PLL_ENABLE), "FDI RX PLL assertion failure, should be active but is disabled\n");
}
static void assert_panel_unlocked(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
int pp_reg, lvds_reg;
u32 val;
enum pipe panel_pipe = PIPE_A;
bool locked = true;
if (HAS_PCH_SPLIT(dev_priv->dev)) {
pp_reg = PCH_PP_CONTROL;
lvds_reg = PCH_LVDS;
} else {
pp_reg = PP_CONTROL;
lvds_reg = LVDS;
}
val = I915_READ(pp_reg);
if (!(val & PANEL_POWER_ON) ||
((val & PANEL_UNLOCK_REGS) == PANEL_UNLOCK_REGS))
locked = false;
if (I915_READ(lvds_reg) & LVDS_PIPEB_SELECT)
panel_pipe = PIPE_B;
WARN(panel_pipe == pipe && locked,
"panel assertion failure, pipe %c regs locked\n",
pipe_name(pipe));
}
void assert_pipe(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state)
{
int reg;
u32 val;
bool cur_state;
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
/* if we need the pipe A quirk it must be always on */
if (pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE)
state = true;
reg = PIPECONF(cpu_transcoder);
val = I915_READ(reg);
cur_state = !!(val & PIPECONF_ENABLE);
WARN(cur_state != state,
"pipe %c assertion failure (expected %s, current %s)\n",
pipe_name(pipe), state_string(state), state_string(cur_state));
}
static void assert_plane(struct drm_i915_private *dev_priv,
enum plane plane, bool state)
{
int reg;
u32 val;
bool cur_state;
reg = DSPCNTR(plane);
val = I915_READ(reg);
cur_state = !!(val & DISPLAY_PLANE_ENABLE);
WARN(cur_state != state,
"plane %c assertion failure (expected %s, current %s)\n",
plane_name(plane), state_string(state), state_string(cur_state));
}
#define assert_plane_enabled(d, p) assert_plane(d, p, true)
#define assert_plane_disabled(d, p) assert_plane(d, p, false)
static void assert_planes_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
int reg, i;
u32 val;
int cur_pipe;
/* Planes are fixed to pipes on ILK+ */
if (HAS_PCH_SPLIT(dev_priv->dev)) {
reg = DSPCNTR(pipe);
val = I915_READ(reg);
WARN((val & DISPLAY_PLANE_ENABLE),
"plane %c assertion failure, should be disabled but not\n",
plane_name(pipe));
return;
}
/* Need to check both planes against the pipe */
for (i = 0; i < 2; i++) {
reg = DSPCNTR(i);
val = I915_READ(reg);
cur_pipe = (val & DISPPLANE_SEL_PIPE_MASK) >>
DISPPLANE_SEL_PIPE_SHIFT;
WARN((val & DISPLAY_PLANE_ENABLE) && pipe == cur_pipe,
"plane %c assertion failure, should be off on pipe %c but is still active\n",
plane_name(i), pipe_name(pipe));
}
}
static void assert_pch_refclk_enabled(struct drm_i915_private *dev_priv)
{
u32 val;
bool enabled;
if (HAS_PCH_LPT(dev_priv->dev)) {
DRM_DEBUG_DRIVER("LPT does not has PCH refclk, skipping check\n");
return;
}
val = I915_READ(PCH_DREF_CONTROL);
enabled = !!(val & (DREF_SSC_SOURCE_MASK | DREF_NONSPREAD_SOURCE_MASK |
DREF_SUPERSPREAD_SOURCE_MASK));
WARN(!enabled, "PCH refclk assertion failure, should be active but is disabled\n");
}
static void assert_transcoder_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
int reg;
u32 val;
bool enabled;
reg = TRANSCONF(pipe);
val = I915_READ(reg);
enabled = !!(val & TRANS_ENABLE);
WARN(enabled,
"transcoder assertion failed, should be off on pipe %c but is still active\n",
pipe_name(pipe));
}
static bool dp_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 port_sel, u32 val)
{
if ((val & DP_PORT_EN) == 0)
return false;
if (HAS_PCH_CPT(dev_priv->dev)) {
u32 trans_dp_ctl_reg = TRANS_DP_CTL(pipe);
u32 trans_dp_ctl = I915_READ(trans_dp_ctl_reg);
if ((trans_dp_ctl & TRANS_DP_PORT_SEL_MASK) != port_sel)
return false;
} else {
if ((val & DP_PIPE_MASK) != (pipe << 30))
return false;
}
return true;
}
static bool hdmi_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 val)
{
if ((val & PORT_ENABLE) == 0)
return false;
if (HAS_PCH_CPT(dev_priv->dev)) {
if ((val & PORT_TRANS_SEL_MASK) != PORT_TRANS_SEL_CPT(pipe))
return false;
} else {
if ((val & TRANSCODER_MASK) != TRANSCODER(pipe))
return false;
}
return true;
}
static bool lvds_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 val)
{
if ((val & LVDS_PORT_EN) == 0)
return false;
if (HAS_PCH_CPT(dev_priv->dev)) {
if ((val & PORT_TRANS_SEL_MASK) != PORT_TRANS_SEL_CPT(pipe))
return false;
} else {
if ((val & LVDS_PIPE_MASK) != LVDS_PIPE(pipe))
return false;
}
return true;
}
static bool adpa_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 val)
{
if ((val & ADPA_DAC_ENABLE) == 0)
return false;
if (HAS_PCH_CPT(dev_priv->dev)) {
if ((val & PORT_TRANS_SEL_MASK) != PORT_TRANS_SEL_CPT(pipe))
return false;
} else {
if ((val & ADPA_PIPE_SELECT_MASK) != ADPA_PIPE_SELECT(pipe))
return false;
}
return true;
}
static void assert_pch_dp_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe, int reg, u32 port_sel)
{
u32 val = I915_READ(reg);
WARN(dp_pipe_enabled(dev_priv, pipe, port_sel, val),
"PCH DP (0x%08x) enabled on transcoder %c, should be disabled\n",
reg, pipe_name(pipe));
WARN(HAS_PCH_IBX(dev_priv->dev) && (val & DP_PORT_EN) == 0
&& (val & DP_PIPEB_SELECT),
"IBX PCH dp port still using transcoder B\n");
}
static void assert_pch_hdmi_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe, int reg)
{
u32 val = I915_READ(reg);
WARN(hdmi_pipe_enabled(dev_priv, pipe, val),
"PCH HDMI (0x%08x) enabled on transcoder %c, should be disabled\n",
reg, pipe_name(pipe));
WARN(HAS_PCH_IBX(dev_priv->dev) && (val & PORT_ENABLE) == 0
&& (val & SDVO_PIPE_B_SELECT),
"IBX PCH hdmi port still using transcoder B\n");
}
static void assert_pch_ports_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
int reg;
u32 val;
assert_pch_dp_disabled(dev_priv, pipe, PCH_DP_B, TRANS_DP_PORT_SEL_B);
assert_pch_dp_disabled(dev_priv, pipe, PCH_DP_C, TRANS_DP_PORT_SEL_C);
assert_pch_dp_disabled(dev_priv, pipe, PCH_DP_D, TRANS_DP_PORT_SEL_D);
reg = PCH_ADPA;
val = I915_READ(reg);
WARN(adpa_pipe_enabled(dev_priv, pipe, val),
"PCH VGA enabled on transcoder %c, should be disabled\n",
pipe_name(pipe));
reg = PCH_LVDS;
val = I915_READ(reg);
WARN(lvds_pipe_enabled(dev_priv, pipe, val),
"PCH LVDS enabled on transcoder %c, should be disabled\n",
pipe_name(pipe));
assert_pch_hdmi_disabled(dev_priv, pipe, HDMIB);
assert_pch_hdmi_disabled(dev_priv, pipe, HDMIC);
assert_pch_hdmi_disabled(dev_priv, pipe, HDMID);
}
/**
* intel_enable_pll - enable a PLL
* @dev_priv: i915 private structure
* @pipe: pipe PLL to enable
*
* Enable @pipe's PLL so we can start pumping pixels from a plane. Check to
* make sure the PLL reg is writable first though, since the panel write
* protect mechanism may be enabled.
*
* Note! This is for pre-ILK only.
*
* Unfortunately needed by dvo_ns2501 since the dvo depends on it running.
*/
static void intel_enable_pll(struct drm_i915_private *dev_priv, enum pipe pipe)
{
int reg;
u32 val;
/* No really, not for ILK+ */
BUG_ON(!IS_VALLEYVIEW(dev_priv->dev) && dev_priv->info->gen >= 5);
/* PLL is protected by panel, make sure we can write it */
if (IS_MOBILE(dev_priv->dev) && !IS_I830(dev_priv->dev))
assert_panel_unlocked(dev_priv, pipe);
reg = DPLL(pipe);
val = I915_READ(reg);
val |= DPLL_VCO_ENABLE;
/* We do this three times for luck */
I915_WRITE(reg, val);
POSTING_READ(reg);
udelay(150); /* wait for warmup */
I915_WRITE(reg, val);
POSTING_READ(reg);
udelay(150); /* wait for warmup */
I915_WRITE(reg, val);
POSTING_READ(reg);
udelay(150); /* wait for warmup */
}
/**
* intel_disable_pll - disable a PLL
* @dev_priv: i915 private structure
* @pipe: pipe PLL to disable
*
* Disable the PLL for @pipe, making sure the pipe is off first.
*
* Note! This is for pre-ILK only.
*/
static void intel_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe)
{
int reg;
u32 val;
/* Leave pipe A and its PLL running if the PIPEA_FORCE quirk requires it */
if (pipe == PIPE_A && (dev_priv->quirks & QUIRK_PIPEA_FORCE))
return;
/* Make sure the pipe isn't still relying on us */
assert_pipe_disabled(dev_priv, pipe);
reg = DPLL(pipe);
val = I915_READ(reg);
val &= ~DPLL_VCO_ENABLE;
I915_WRITE(reg, val);
POSTING_READ(reg);
}
/* SBI access */
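/*
 * The sideband interface (SBI) is a mailbox-style register pair: wait
 * for SBI_BUSY to clear, load SBI_ADDR (and SBI_DATA for writes), then
 * kick off the transaction by writing the opcode together with SBI_BUSY
 * to SBI_CTL_STAT and poll until the hardware clears SBI_BUSY again.
 * Both helpers below serialize access through dpio_lock.
 */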
static void
intel_sbi_write(struct drm_i915_private *dev_priv, u16 reg, u32 value,
enum intel_sbi_destination destination)
{
u32 tmp;
sx_xlock(&dev_priv->dpio_lock);
if (wait_for((I915_READ(SBI_CTL_STAT) & SBI_BUSY) == 0, 100)) {
DRM_ERROR("timeout waiting for SBI to become ready\n");
goto out_unlock;
}
I915_WRITE(SBI_ADDR, (reg << 16));
I915_WRITE(SBI_DATA, value);
if (destination == SBI_ICLK)
tmp = SBI_CTL_DEST_ICLK | SBI_CTL_OP_CRWR;
else
tmp = SBI_CTL_DEST_MPHY | SBI_CTL_OP_IOWR;
I915_WRITE(SBI_CTL_STAT, SBI_BUSY | tmp);
if (wait_for((I915_READ(SBI_CTL_STAT) & (SBI_BUSY | SBI_RESPONSE_FAIL)) == 0,
100)) {
DRM_ERROR("timeout waiting for SBI to complete write transaction\n");
goto out_unlock;
}
out_unlock:
sx_xunlock(&dev_priv->dpio_lock);
}
static u32
intel_sbi_read(struct drm_i915_private *dev_priv, u16 reg,
enum intel_sbi_destination destination)
{
u32 value = 0;
sx_xlock(&dev_priv->dpio_lock);
if (wait_for((I915_READ(SBI_CTL_STAT) & SBI_BUSY) == 0, 100)) {
DRM_ERROR("timeout waiting for SBI to become ready\n");
goto out_unlock;
}
I915_WRITE(SBI_ADDR, (reg << 16));
if (destination == SBI_ICLK)
value = SBI_CTL_DEST_ICLK | SBI_CTL_OP_CRRD;
else
value = SBI_CTL_DEST_MPHY | SBI_CTL_OP_IORD;
I915_WRITE(SBI_CTL_STAT, value | SBI_BUSY);
if (wait_for((I915_READ(SBI_CTL_STAT) & (SBI_BUSY | SBI_RESPONSE_FAIL)) == 0,
100)) {
DRM_ERROR("timeout waiting for SBI to complete read transaction\n");
goto out_unlock;
}
value = I915_READ(SBI_DATA);
out_unlock:
sx_xunlock(&dev_priv->dpio_lock);
return value;
}
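/*
 * PCH PLL bookkeeping: pll->refcount counts the CRTCs that have claimed
 * the PLL via intel_get_pch_pll(), while pll->active counts the CRTCs
 * that currently have it enabled; the hardware is only touched on the
 * 0<->1 transitions of pll->active.
 */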
/**
* ironlake_enable_pch_pll - enable a PCH PLL
* @intel_crtc: CRTC whose shared PCH PLL should be enabled
*
* The PCH PLL needs to be enabled before the PCH transcoder, since it
* drives the transcoder clock.
*/
static void ironlake_enable_pch_pll(struct intel_crtc *intel_crtc)
{
struct drm_i915_private *dev_priv = intel_crtc->base.dev->dev_private;
struct intel_pch_pll *pll;
int reg;
u32 val;
/* PCH PLLs only available on ILK, SNB and IVB */
BUG_ON(dev_priv->info->gen < 5);
pll = intel_crtc->pch_pll;
if (pll == NULL)
return;
if (WARN_ON(pll->refcount == 0))
return;
DRM_DEBUG_KMS("enable PCH PLL %x (active %d, on? %d)for crtc %d\n",
pll->pll_reg, pll->active, pll->on,
intel_crtc->base.base.id);
/* PCH refclock must be enabled first */
assert_pch_refclk_enabled(dev_priv);
if (pll->active++ && pll->on) {
assert_pch_pll_enabled(dev_priv, pll, NULL);
return;
}
DRM_DEBUG_KMS("enabling PCH PLL %x\n", pll->pll_reg);
reg = pll->pll_reg;
val = I915_READ(reg);
val |= DPLL_VCO_ENABLE;
I915_WRITE(reg, val);
POSTING_READ(reg);
udelay(200);
pll->on = true;
}
static void intel_disable_pch_pll(struct intel_crtc *intel_crtc)
{
struct drm_i915_private *dev_priv = intel_crtc->base.dev->dev_private;
struct intel_pch_pll *pll = intel_crtc->pch_pll;
int reg;
u32 val;
/* PCH only available on ILK+ */
BUG_ON(dev_priv->info->gen < 5);
if (pll == NULL)
return;
if (WARN_ON(pll->refcount == 0))
return;
DRM_DEBUG_KMS("disable PCH PLL %x (active %d, on? %d) for crtc %d\n",
pll->pll_reg, pll->active, pll->on,
intel_crtc->base.base.id);
if (WARN_ON(pll->active == 0)) {
assert_pch_pll_disabled(dev_priv, pll, NULL);
return;
}
if (--pll->active) {
assert_pch_pll_enabled(dev_priv, pll, NULL);
return;
}
DRM_DEBUG_KMS("disabling PCH PLL %x\n", pll->pll_reg);
/* Make sure transcoder isn't still depending on us */
assert_transcoder_disabled(dev_priv, intel_crtc->pipe);
reg = pll->pll_reg;
val = I915_READ(reg);
val &= ~DPLL_VCO_ENABLE;
I915_WRITE(reg, val);
POSTING_READ(reg);
udelay(200);
pll->on = false;
}
static void ironlake_enable_pch_transcoder(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
struct drm_device *dev = dev_priv->dev;
struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
uint32_t reg, val, pipeconf_val;
/* PCH only available on ILK+ */
BUG_ON(dev_priv->info->gen < 5);
/* Make sure PCH DPLL is enabled */
assert_pch_pll_enabled(dev_priv,
to_intel_crtc(crtc)->pch_pll,
to_intel_crtc(crtc));
/* FDI must be feeding us bits for PCH ports */
assert_fdi_tx_enabled(dev_priv, pipe);
assert_fdi_rx_enabled(dev_priv, pipe);
if (HAS_PCH_CPT(dev)) {
/* Workaround: Set the timing override bit before enabling the
* pch transcoder. */
reg = TRANS_CHICKEN2(pipe);
val = I915_READ(reg);
val |= TRANS_CHICKEN2_TIMING_OVERRIDE;
I915_WRITE(reg, val);
}
reg = TRANSCONF(pipe);
val = I915_READ(reg);
pipeconf_val = I915_READ(PIPECONF(pipe));
if (HAS_PCH_IBX(dev_priv->dev)) {
/*
* make the BPC in transcoder be consistent with
* that in pipeconf reg.
*/
val &= ~PIPE_BPC_MASK;
val |= pipeconf_val & PIPE_BPC_MASK;
}
val &= ~TRANS_INTERLACE_MASK;
if ((pipeconf_val & PIPECONF_INTERLACE_MASK) == PIPECONF_INTERLACED_ILK) {
if (HAS_PCH_IBX(dev_priv->dev) &&
intel_pipe_has_type(crtc, INTEL_OUTPUT_SDVO))
val |= TRANS_LEGACY_INTERLACED_ILK;
else
val |= TRANS_INTERLACED;
} else {
val |= TRANS_PROGRESSIVE;
}
I915_WRITE(reg, val | TRANS_ENABLE);
if (wait_for(I915_READ(reg) & TRANS_STATE_ENABLE, 100))
DRM_ERROR("failed to enable transcoder %d\n", pipe);
}
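/*
 * LPT has a single PCH transcoder, so the helpers below always program
 * the transcoder A registers (_TRANSACONF, _TRANSA_CHICKEN2) regardless
 * of which CPU transcoder is feeding it.
 */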
static void lpt_enable_pch_transcoder(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder)
{
u32 val, pipeconf_val;
/* PCH only available on ILK+ */
BUG_ON(dev_priv->info->gen < 5);
/* FDI must be feeding us bits for PCH ports */
assert_fdi_tx_enabled(dev_priv, (enum pipe)cpu_transcoder);
assert_fdi_rx_enabled(dev_priv, (enum pipe)TRANSCODER_A);
/* Workaround: set timing override bit. */
val = I915_READ(_TRANSA_CHICKEN2);
val |= TRANS_CHICKEN2_TIMING_OVERRIDE;
I915_WRITE(_TRANSA_CHICKEN2, val);
val = TRANS_ENABLE;
pipeconf_val = I915_READ(PIPECONF(cpu_transcoder));
if ((pipeconf_val & PIPECONF_INTERLACE_MASK_HSW) ==
PIPECONF_INTERLACED_ILK)
val |= TRANS_INTERLACED;
else
val |= TRANS_PROGRESSIVE;
I915_WRITE(TRANSCONF(TRANSCODER_A), val);
if (wait_for(I915_READ(_TRANSACONF) & TRANS_STATE_ENABLE, 100))
DRM_ERROR("Failed to enable PCH transcoder\n");
}
static void ironlake_disable_pch_transcoder(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
struct drm_device *dev = dev_priv->dev;
uint32_t reg, val;
/* FDI relies on the transcoder */
assert_fdi_tx_disabled(dev_priv, pipe);
assert_fdi_rx_disabled(dev_priv, pipe);
/* Ports must be off as well */
assert_pch_ports_disabled(dev_priv, pipe);
reg = TRANSCONF(pipe);
val = I915_READ(reg);
val &= ~TRANS_ENABLE;
I915_WRITE(reg, val);
/* wait for PCH transcoder off, transcoder state */
if (wait_for((I915_READ(reg) & TRANS_STATE_ENABLE) == 0, 50))
DRM_ERROR("failed to disable transcoder %d\n", pipe);
if (!HAS_PCH_IBX(dev)) {
/* Workaround: Clear the timing override chicken bit again. */
reg = TRANS_CHICKEN2(pipe);
val = I915_READ(reg);
val &= ~TRANS_CHICKEN2_TIMING_OVERRIDE;
I915_WRITE(reg, val);
}
}
static void lpt_disable_pch_transcoder(struct drm_i915_private *dev_priv)
{
u32 val;
val = I915_READ(_TRANSACONF);
val &= ~TRANS_ENABLE;
I915_WRITE(_TRANSACONF, val);
/* wait for PCH transcoder off, transcoder state */
if (wait_for((I915_READ(_TRANSACONF) & TRANS_STATE_ENABLE) == 0, 50))
DRM_ERROR("Failed to disable PCH transcoder\n");
/* Workaround: clear timing override bit. */
val = I915_READ(_TRANSA_CHICKEN2);
val &= ~TRANS_CHICKEN2_TIMING_OVERRIDE;
I915_WRITE(_TRANSA_CHICKEN2, val);
}
/**
* intel_enable_pipe - enable a pipe, asserting requirements
* @dev_priv: i915 private structure
* @pipe: pipe to enable
* @pch_port: on ILK+, whether this pipe drives a PCH port
*
* Enable @pipe, making sure that various hardware specific requirements
* are met, if applicable, e.g. PLL enabled, LVDS pairs enabled, etc.
*
* @pipe should be %PIPE_A or %PIPE_B.
*
* Will wait until the pipe is actually running (i.e. first vblank) before
* returning.
*/
static void intel_enable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe,
bool pch_port)
{
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
enum transcoder pch_transcoder;
int reg;
u32 val;
if (IS_HASWELL(dev_priv->dev))
pch_transcoder = TRANSCODER_A;
else
pch_transcoder = (enum transcoder)pipe;
/*
* A pipe without a PLL won't actually be able to drive bits from
* a plane. On ILK+ the pipe PLLs are integrated, so we don't
* need the check.
*/
if (!HAS_PCH_SPLIT(dev_priv->dev))
assert_pll_enabled(dev_priv, pipe);
else {
if (pch_port) {
/* if driving the PCH, we need FDI enabled */
assert_fdi_rx_pll_enabled(dev_priv, (enum pipe)pch_transcoder);
assert_fdi_tx_pll_enabled(dev_priv, (enum pipe)cpu_transcoder);
}
/* FIXME: assert CPU port conditions for SNB+ */
}
reg = PIPECONF(cpu_transcoder);
val = I915_READ(reg);
if (val & PIPECONF_ENABLE)
return;
I915_WRITE(reg, val | PIPECONF_ENABLE);
intel_wait_for_vblank(dev_priv->dev, pipe);
}
/**
* intel_disable_pipe - disable a pipe, asserting requirements
* @dev_priv: i915 private structure
* @pipe: pipe to disable
*
* Disable @pipe, making sure that various hardware specific requirements
* are met, if applicable, e.g. plane disabled, panel fitter off, etc.
*
* @pipe should be %PIPE_A or %PIPE_B.
*
* Will wait until the pipe has shut down before returning.
*/
static void intel_disable_pipe(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
int reg;
u32 val;
/*
* Make sure planes won't keep trying to pump pixels to us,
* or we might hang the display.
*/
assert_planes_disabled(dev_priv, pipe);
/* Leave pipe A enabled if the PIPEA_FORCE quirk requires it */
if (pipe == PIPE_A && (dev_priv->quirks & QUIRK_PIPEA_FORCE))
return;
reg = PIPECONF(cpu_transcoder);
val = I915_READ(reg);
if ((val & PIPECONF_ENABLE) == 0)
return;
I915_WRITE(reg, val & ~PIPECONF_ENABLE);
intel_wait_for_pipe_off(dev_priv->dev, pipe);
}
/*
* Plane regs are double buffered, going from enabled->disabled needs a
* trigger in order to latch. The display address reg provides this.
*/
void intel_flush_display_plane(struct drm_i915_private *dev_priv,
enum plane plane)
{
if (dev_priv->info->gen >= 4)
I915_WRITE(DSPSURF(plane), I915_READ(DSPSURF(plane)));
else
I915_WRITE(DSPADDR(plane), I915_READ(DSPADDR(plane)));
}
/**
* intel_enable_plane - enable a display plane on a given pipe
* @dev_priv: i915 private structure
* @plane: plane to enable
* @pipe: pipe being fed
*
* Enable @plane on @pipe, making sure that @pipe is running first.
*/
static void intel_enable_plane(struct drm_i915_private *dev_priv,
enum plane plane, enum pipe pipe)
{
int reg;
u32 val;
/* If the pipe isn't enabled, we can't pump pixels and may hang */
assert_pipe_enabled(dev_priv, pipe);
reg = DSPCNTR(plane);
val = I915_READ(reg);
if (val & DISPLAY_PLANE_ENABLE)
return;
I915_WRITE(reg, val | DISPLAY_PLANE_ENABLE);
intel_flush_display_plane(dev_priv, plane);
intel_wait_for_vblank(dev_priv->dev, pipe);
}
/**
* intel_disable_plane - disable a display plane
* @dev_priv: i915 private structure
* @plane: plane to disable
* @pipe: pipe consuming the data
*
* Disable @plane; should be an independent operation.
*/
static void intel_disable_plane(struct drm_i915_private *dev_priv,
enum plane plane, enum pipe pipe)
{
int reg;
u32 val;
reg = DSPCNTR(plane);
val = I915_READ(reg);
if ((val & DISPLAY_PLANE_ENABLE) == 0)
return;
I915_WRITE(reg, val & ~DISPLAY_PLANE_ENABLE);
intel_flush_display_plane(dev_priv, plane);
intel_wait_for_vblank(dev_priv->dev, pipe);
}
int
intel_pin_and_fence_fb_obj(struct drm_device *dev,
struct drm_i915_gem_object *obj,
struct intel_ring_buffer *pipelined)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 alignment;
int ret;
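/*
 * Pick the GTT alignment the display engine needs for scan-out: linear
 * buffers want 128KiB on 965G/GM (Broadwater/Crestline), otherwise 4KiB
 * on gen4+ and 64KiB on older parts, while X-tiled buffers let pin()
 * derive the alignment from the fence requirements.
 */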
switch (obj->tiling_mode) {
case I915_TILING_NONE:
if (IS_BROADWATER(dev) || IS_CRESTLINE(dev))
alignment = 128 * 1024;
else if (INTEL_INFO(dev)->gen >= 4)
alignment = 4 * 1024;
else
alignment = 64 * 1024;
break;
case I915_TILING_X:
/* pin() will align the object as required by fence */
alignment = 0;
break;
case I915_TILING_Y:
/* FIXME: Is this true? */
DRM_ERROR("Y tiled not allowed for scan out buffers\n");
return -EINVAL;
default:
BUG();
}
dev_priv->mm.interruptible = false;
ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined);
if (ret)
goto err_interruptible;
/* Install a fence for tiled scan-out. Pre-i965 always needs a
* fence, whereas 965+ only requires a fence if using
* framebuffer compression. For simplicity, we always install
* a fence as the cost is not that onerous.
*/
ret = i915_gem_object_get_fence(obj);
if (ret)
goto err_unpin;
i915_gem_object_pin_fence(obj);
dev_priv->mm.interruptible = true;
return 0;
err_unpin:
i915_gem_object_unpin(obj);
err_interruptible:
dev_priv->mm.interruptible = true;
return ret;
}
void intel_unpin_fb_obj(struct drm_i915_gem_object *obj)
{
i915_gem_object_unpin_fence(obj);
i915_gem_object_unpin(obj);
}
/* Computes the linear offset to the base tile and adjusts x, y. Bytes per
* pixel is assumed to be a power of two. */
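/*
 * Worked example for the X-tiled branch below (hypothetical values): with
 * 512-byte-by-8-row tiles, cpp = 4, pitch = 8192, x = 700, y = 50 we get
 * tile_rows = 6 (y becomes 2) and tiles = 700 / 128 = 5 (x becomes 60),
 * so the returned base offset is 6 * 8192 * 8 + 5 * 4096 = 413696 bytes.
 */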
unsigned long intel_gen4_compute_page_offset(int *x, int *y,
unsigned int tiling_mode,
unsigned int cpp,
unsigned int pitch)
{
if (tiling_mode != I915_TILING_NONE) {
unsigned int tile_rows, tiles;
tile_rows = *y / 8;
*y %= 8;
tiles = *x / (512/cpp);
*x %= 512/cpp;
return tile_rows * pitch * 8 + tiles * 4096;
} else {
unsigned int offset;
offset = *y * pitch + *x * cpp;
*y = 0;
*x = (offset & 4095) / cpp;
return offset & -4096;
}
}
static int i9xx_update_plane(struct drm_crtc *crtc, struct drm_framebuffer *fb,
int x, int y)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_framebuffer *intel_fb;
struct drm_i915_gem_object *obj;
int plane = intel_crtc->plane;
unsigned long linear_offset;
u32 dspcntr;
u32 reg;
switch (plane) {
case 0:
case 1:
break;
default:
DRM_ERROR("Can't update plane %d in SAREA\n", plane);
return -EINVAL;
}
intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
reg = DSPCNTR(plane);
dspcntr = I915_READ(reg);
/* Mask out pixel format bits in case we change it */
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (fb->pixel_format) {
case DRM_FORMAT_C8:
dspcntr |= DISPPLANE_8BPP;
break;
case DRM_FORMAT_XRGB1555:
case DRM_FORMAT_ARGB1555:
dspcntr |= DISPPLANE_BGRX555;
break;
case DRM_FORMAT_RGB565:
dspcntr |= DISPPLANE_BGRX565;
break;
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
dspcntr |= DISPPLANE_BGRX888;
break;
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ABGR8888:
dspcntr |= DISPPLANE_RGBX888;
break;
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_ARGB2101010:
dspcntr |= DISPPLANE_BGRX101010;
break;
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_ABGR2101010:
dspcntr |= DISPPLANE_RGBX101010;
break;
default:
DRM_ERROR("Unknown pixel format 0x%08x\n", fb->pixel_format);
return -EINVAL;
}
if (INTEL_INFO(dev)->gen >= 4) {
if (obj->tiling_mode != I915_TILING_NONE)
dspcntr |= DISPPLANE_TILED;
else
dspcntr &= ~DISPPLANE_TILED;
}
I915_WRITE(reg, dspcntr);
linear_offset = y * fb->pitches[0] + x * (fb->bits_per_pixel / 8);
if (INTEL_INFO(dev)->gen >= 4) {
intel_crtc->dspaddr_offset =
intel_gen4_compute_page_offset(&x, &y, obj->tiling_mode,
fb->bits_per_pixel / 8,
fb->pitches[0]);
linear_offset -= intel_crtc->dspaddr_offset;
} else {
intel_crtc->dspaddr_offset = linear_offset;
}
DRM_DEBUG_KMS("Writing base %08X %08lX %d %d %d\n",
obj->gtt_offset, linear_offset, x, y, fb->pitches[0]);
I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
if (INTEL_INFO(dev)->gen >= 4) {
I915_MODIFY_DISPBASE(DSPSURF(plane),
obj->gtt_offset + intel_crtc->dspaddr_offset);
I915_WRITE(DSPTILEOFF(plane), (y << 16) | x);
I915_WRITE(DSPLINOFF(plane), linear_offset);
} else
I915_WRITE(DSPADDR(plane), obj->gtt_offset + linear_offset);
POSTING_READ(reg);
return 0;
}
static int ironlake_update_plane(struct drm_crtc *crtc,
struct drm_framebuffer *fb, int x, int y)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_framebuffer *intel_fb;
struct drm_i915_gem_object *obj;
int plane = intel_crtc->plane;
unsigned long linear_offset;
u32 dspcntr;
u32 reg;
switch (plane) {
case 0:
case 1:
case 2:
break;
default:
DRM_ERROR("Can't update plane %d in SAREA\n", plane);
return -EINVAL;
}
intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
reg = DSPCNTR(plane);
dspcntr = I915_READ(reg);
/* Mask out pixel format bits in case we change it */
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (fb->pixel_format) {
case DRM_FORMAT_C8:
dspcntr |= DISPPLANE_8BPP;
break;
case DRM_FORMAT_RGB565:
dspcntr |= DISPPLANE_BGRX565;
break;
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
dspcntr |= DISPPLANE_BGRX888;
break;
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ABGR8888:
dspcntr |= DISPPLANE_RGBX888;
break;
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_ARGB2101010:
dspcntr |= DISPPLANE_BGRX101010;
break;
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_ABGR2101010:
dspcntr |= DISPPLANE_RGBX101010;
break;
default:
DRM_ERROR("Unknown pixel format 0x%08x\n", fb->pixel_format);
return -EINVAL;
}
if (obj->tiling_mode != I915_TILING_NONE)
dspcntr |= DISPPLANE_TILED;
else
dspcntr &= ~DISPPLANE_TILED;
/* must disable */
dspcntr |= DISPPLANE_TRICKLE_FEED_DISABLE;
I915_WRITE(reg, dspcntr);
linear_offset = y * fb->pitches[0] + x * (fb->bits_per_pixel / 8);
intel_crtc->dspaddr_offset =
intel_gen4_compute_page_offset(&x, &y, obj->tiling_mode,
fb->bits_per_pixel / 8,
fb->pitches[0]);
linear_offset -= intel_crtc->dspaddr_offset;
DRM_DEBUG_KMS("Writing base %08X %08lX %d %d %d\n",
obj->gtt_offset, linear_offset, x, y, fb->pitches[0]);
I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
I915_MODIFY_DISPBASE(DSPSURF(plane),
obj->gtt_offset + intel_crtc->dspaddr_offset);
if (IS_HASWELL(dev)) {
I915_WRITE(DSPOFFSET(plane), (y << 16) | x);
} else {
I915_WRITE(DSPTILEOFF(plane), (y << 16) | x);
I915_WRITE(DSPLINOFF(plane), linear_offset);
}
POSTING_READ(reg);
return 0;
}
/* Assume fb object is pinned & idle & fenced and just update base pointers */
static int
intel_pipe_set_base_atomic(struct drm_crtc *crtc, struct drm_framebuffer *fb,
int x, int y, enum mode_set_atomic state)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
if (dev_priv->display.disable_fbc)
dev_priv->display.disable_fbc(dev);
intel_increase_pllclock(crtc);
return dev_priv->display.update_plane(crtc, fb, x, y);
}
static int
intel_finish_fb(struct drm_framebuffer *old_fb)
{
struct drm_i915_gem_object *obj = to_intel_framebuffer(old_fb)->obj;
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
bool was_interruptible = dev_priv->mm.interruptible;
int ret;
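/*
 * Sleep on pending_flip_queue (under the event lock) until every page
 * flip against this framebuffer has completed, unless the GPU is wedged,
 * in which case waiting would be pointless.
 */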
mtx_lock(&dev->event_lock);
while (!(atomic_read(&dev_priv->mm.wedged) ||
atomic_read(&obj->pending_flip) == 0)) {
msleep(&dev_priv->pending_flip_queue, &dev->event_lock,
0, "915flp", 0);
}
mtx_unlock(&dev->event_lock);
/* Big Hammer, we also need to ensure that any pending
* MI_WAIT_FOR_EVENT inside a user batch buffer on the
* current scanout is retired before unpinning the old
* framebuffer.
*
* This should only fail upon a hung GPU, in which case we
* can safely continue.
*/
dev_priv->mm.interruptible = false;
ret = i915_gem_object_finish_gpu(obj);
dev_priv->mm.interruptible = was_interruptible;
return ret;
}
static void intel_crtc_update_sarea_pos(struct drm_crtc *crtc, int x, int y)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_master_private *master_priv;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
if (!dev->primary->master)
return;
master_priv = dev->primary->master->driver_priv;
if (!master_priv->sarea_priv)
return;
switch (intel_crtc->pipe) {
case 0:
master_priv->sarea_priv->pipeA_x = x;
master_priv->sarea_priv->pipeA_y = y;
break;
case 1:
master_priv->sarea_priv->pipeB_x = x;
master_priv->sarea_priv->pipeB_y = y;
break;
default:
break;
}
}
static int
intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *fb)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_framebuffer *old_fb;
int ret;
/* no fb bound */
if (!fb) {
DRM_ERROR("No FB bound\n");
return 0;
}
if (intel_crtc->plane > dev_priv->num_pipe) {
DRM_ERROR("no plane for crtc: plane %d, num_pipes %d\n",
intel_crtc->plane,
dev_priv->num_pipe);
return -EINVAL;
}
DRM_LOCK(dev);
ret = intel_pin_and_fence_fb_obj(dev,
to_intel_framebuffer(fb)->obj,
NULL);
if (ret != 0) {
DRM_UNLOCK(dev);
DRM_ERROR("pin & fence failed\n");
return ret;
}
if (crtc->fb)
intel_finish_fb(crtc->fb);
ret = dev_priv->display.update_plane(crtc, fb, x, y);
if (ret) {
intel_unpin_fb_obj(to_intel_framebuffer(fb)->obj);
DRM_UNLOCK(dev);
DRM_ERROR("failed to update base address\n");
return ret;
}
old_fb = crtc->fb;
crtc->fb = fb;
crtc->x = x;
crtc->y = y;
if (old_fb) {
intel_wait_for_vblank(dev, intel_crtc->pipe);
intel_unpin_fb_obj(to_intel_framebuffer(old_fb)->obj);
}
intel_update_fbc(dev);
DRM_UNLOCK(dev);
intel_crtc_update_sarea_pos(crtc, x, y);
return 0;
}
static void ironlake_set_pll_edp(struct drm_crtc *crtc, int clock)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 dpa_ctl;
DRM_DEBUG_KMS("eDP PLL enable for clock %d\n", clock);
dpa_ctl = I915_READ(DP_A);
dpa_ctl &= ~DP_PLL_FREQ_MASK;
if (clock < 200000) {
u32 temp;
dpa_ctl |= DP_PLL_FREQ_160MHZ;
/* workaround for 160MHz:
1) program 0x4600c bits 15:0 = 0x8124
2) program 0x46010 bit 0 = 1
3) program 0x46034 bit 24 = 1
4) program 0x64000 bit 14 = 1
*/
temp = I915_READ(0x4600c);
temp &= 0xffff0000;
I915_WRITE(0x4600c, temp | 0x8124);
temp = I915_READ(0x46010);
I915_WRITE(0x46010, temp | 1);
temp = I915_READ(0x46034);
I915_WRITE(0x46034, temp | (1 << 24));
} else {
dpa_ctl |= DP_PLL_FREQ_270MHZ;
}
I915_WRITE(DP_A, dpa_ctl);
POSTING_READ(DP_A);
udelay(500);
}
static void intel_fdi_normal_train(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 reg, temp;
/* enable normal train */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
if (IS_IVYBRIDGE(dev)) {
temp &= ~FDI_LINK_TRAIN_NONE_IVB;
temp |= FDI_LINK_TRAIN_NONE_IVB | FDI_TX_ENHANCE_FRAME_ENABLE;
} else {
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_NONE | FDI_TX_ENHANCE_FRAME_ENABLE;
}
I915_WRITE(reg, temp);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
if (HAS_PCH_CPT(dev)) {
temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
temp |= FDI_LINK_TRAIN_NORMAL_CPT;
} else {
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_NONE;
}
I915_WRITE(reg, temp | FDI_RX_ENHANCE_FRAME_ENABLE);
/* wait one idle pattern time */
POSTING_READ(reg);
udelay(1000);
/* IVB wants error correction enabled */
if (IS_IVYBRIDGE(dev))
I915_WRITE(reg, I915_READ(reg) | FDI_FS_ERRC_ENABLE |
FDI_FE_ERRC_ENABLE);
}
static void ivb_modeset_global_resources(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *pipe_B_crtc =
to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
struct intel_crtc *pipe_C_crtc =
to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_C]);
uint32_t temp;
/* When everything is off, disable fdi C so that we can enable fdi B
* with all lanes. XXX: This misses the case where a pipe is not using
* any pch resources and so doesn't need any fdi lanes. */
if (!pipe_B_crtc->base.enabled && !pipe_C_crtc->base.enabled) {
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
temp = I915_READ(SOUTH_CHICKEN1);
temp &= ~FDI_BC_BIFURCATION_SELECT;
DRM_DEBUG_KMS("disabling fdi C rx\n");
I915_WRITE(SOUTH_CHICKEN1, temp);
}
}
/* The FDI link training functions for ILK/Ibexpeak. */
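/*
 * Training proceeds in two phases: pattern 1 runs until the RX reports
 * FDI_RX_BIT_LOCK, then pattern 2 runs until it reports
 * FDI_RX_SYMBOL_LOCK; each phase polls FDI_RX_IIR a handful of times
 * before giving up with an error.
 */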
static void ironlake_fdi_link_train(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
u32 reg, temp, tries;
/* FDI needs bits from pipe & plane first */
assert_pipe_enabled(dev_priv, pipe);
assert_plane_enabled(dev_priv, plane);
/* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bits
for train result */
reg = FDI_RX_IMR(pipe);
temp = I915_READ(reg);
temp &= ~FDI_RX_SYMBOL_LOCK;
temp &= ~FDI_RX_BIT_LOCK;
I915_WRITE(reg, temp);
I915_READ(reg);
udelay(150);
/* enable CPU FDI TX and PCH FDI RX */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~(7 << 19);
temp |= (intel_crtc->fdi_lanes - 1) << 19;
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_1;
I915_WRITE(reg, temp | FDI_TX_ENABLE);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_1;
I915_WRITE(reg, temp | FDI_RX_ENABLE);
POSTING_READ(reg);
udelay(150);
/* Ironlake workaround, enable clock pointer after FDI enable */
I915_WRITE(FDI_RX_CHICKEN(pipe), FDI_RX_PHASE_SYNC_POINTER_OVR);
I915_WRITE(FDI_RX_CHICKEN(pipe), FDI_RX_PHASE_SYNC_POINTER_OVR |
FDI_RX_PHASE_SYNC_POINTER_EN);
reg = FDI_RX_IIR(pipe);
for (tries = 0; tries < 5; tries++) {
temp = I915_READ(reg);
DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
if ((temp & FDI_RX_BIT_LOCK)) {
DRM_DEBUG_KMS("FDI train 1 done.\n");
I915_WRITE(reg, temp | FDI_RX_BIT_LOCK);
break;
}
}
if (tries == 5)
DRM_ERROR("FDI train 1 fail!\n");
/* Train 2 */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_2;
I915_WRITE(reg, temp);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_2;
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(150);
reg = FDI_RX_IIR(pipe);
for (tries = 0; tries < 5; tries++) {
temp = I915_READ(reg);
DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
if (temp & FDI_RX_SYMBOL_LOCK) {
I915_WRITE(reg, temp | FDI_RX_SYMBOL_LOCK);
DRM_DEBUG_KMS("FDI train 2 done.\n");
break;
}
}
if (tries == 5)
DRM_ERROR("FDI train 2 fail!\n");
DRM_DEBUG_KMS("FDI train done\n");
}
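/*
 * Voltage-swing / pre-emphasis combinations for SNB-B FDI training; the
 * SNB and IVB trainers below sweep these in order until the link locks.
 */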
static const int snb_b_fdi_train_param[] = {
FDI_LINK_TRAIN_400MV_0DB_SNB_B,
FDI_LINK_TRAIN_400MV_6DB_SNB_B,
FDI_LINK_TRAIN_600MV_3_5DB_SNB_B,
FDI_LINK_TRAIN_800MV_0DB_SNB_B,
};
/* The FDI link training functions for SNB/Cougarpoint. */
static void gen6_fdi_link_train(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 reg, temp, i, retry;
/* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bits
for train result */
reg = FDI_RX_IMR(pipe);
temp = I915_READ(reg);
temp &= ~FDI_RX_SYMBOL_LOCK;
temp &= ~FDI_RX_BIT_LOCK;
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(150);
/* enable CPU FDI TX and PCH FDI RX */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~(7 << 19);
temp |= (intel_crtc->fdi_lanes - 1) << 19;
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_1;
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
/* SNB-B */
temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B;
I915_WRITE(reg, temp | FDI_TX_ENABLE);
I915_WRITE(FDI_RX_MISC(pipe),
FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
if (HAS_PCH_CPT(dev)) {
temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
temp |= FDI_LINK_TRAIN_PATTERN_1_CPT;
} else {
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_1;
}
I915_WRITE(reg, temp | FDI_RX_ENABLE);
POSTING_READ(reg);
udelay(150);
for (i = 0; i < 4; i++) {
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
temp |= snb_b_fdi_train_param[i];
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(500);
for (retry = 0; retry < 5; retry++) {
reg = FDI_RX_IIR(pipe);
temp = I915_READ(reg);
DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
if (temp & FDI_RX_BIT_LOCK) {
I915_WRITE(reg, temp | FDI_RX_BIT_LOCK);
DRM_DEBUG_KMS("FDI train 1 done.\n");
break;
}
udelay(50);
}
if (retry < 5)
break;
}
if (i == 4)
DRM_ERROR("FDI train 1 fail!\n");
/* Train 2 */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_2;
if (IS_GEN6(dev)) {
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
/* SNB-B */
temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B;
}
I915_WRITE(reg, temp);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
if (HAS_PCH_CPT(dev)) {
temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
temp |= FDI_LINK_TRAIN_PATTERN_2_CPT;
} else {
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_2;
}
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(150);
for (i = 0; i < 4; i++) {
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
temp |= snb_b_fdi_train_param[i];
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(500);
for (retry = 0; retry < 5; retry++) {
reg = FDI_RX_IIR(pipe);
temp = I915_READ(reg);
DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
if (temp & FDI_RX_SYMBOL_LOCK) {
I915_WRITE(reg, temp | FDI_RX_SYMBOL_LOCK);
DRM_DEBUG_KMS("FDI train 2 done.\n");
break;
}
udelay(50);
}
if (retry < 5)
break;
}
if (i == 4)
DRM_ERROR("FDI train 2 fail!\n");
DRM_DEBUG_KMS("FDI train done.\n");
}
/* Manual link training for Ivy Bridge A0 parts */
static void ivb_manual_fdi_link_train(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 reg, temp, i;
/* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bits
for train result */
reg = FDI_RX_IMR(pipe);
temp = I915_READ(reg);
temp &= ~FDI_RX_SYMBOL_LOCK;
temp &= ~FDI_RX_BIT_LOCK;
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(150);
DRM_DEBUG_KMS("FDI_RX_IIR before link train 0x%x\n",
I915_READ(FDI_RX_IIR(pipe)));
/* enable CPU FDI TX and PCH FDI RX */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~(7 << 19);
temp |= (intel_crtc->fdi_lanes - 1) << 19;
temp &= ~(FDI_LINK_TRAIN_AUTO | FDI_LINK_TRAIN_NONE_IVB);
temp |= FDI_LINK_TRAIN_PATTERN_1_IVB;
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B;
temp |= FDI_COMPOSITE_SYNC;
I915_WRITE(reg, temp | FDI_TX_ENABLE);
I915_WRITE(FDI_RX_MISC(pipe),
FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_AUTO;
temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
temp |= FDI_LINK_TRAIN_PATTERN_1_CPT;
temp |= FDI_COMPOSITE_SYNC;
I915_WRITE(reg, temp | FDI_RX_ENABLE);
POSTING_READ(reg);
udelay(150);
for (i = 0; i < 4; i++) {
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
temp |= snb_b_fdi_train_param[i];
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(500);
reg = FDI_RX_IIR(pipe);
temp = I915_READ(reg);
DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
if (temp & FDI_RX_BIT_LOCK ||
(I915_READ(reg) & FDI_RX_BIT_LOCK)) {
I915_WRITE(reg, temp | FDI_RX_BIT_LOCK);
DRM_DEBUG_KMS("FDI train 1 done, level %i.\n", i);
break;
}
}
if (i == 4)
DRM_ERROR("FDI train 1 fail!\n");
/* Train 2 */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_NONE_IVB;
temp |= FDI_LINK_TRAIN_PATTERN_2_IVB;
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B;
I915_WRITE(reg, temp);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
temp |= FDI_LINK_TRAIN_PATTERN_2_CPT;
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(150);
for (i = 0; i < 4; i++) {
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
temp |= snb_b_fdi_train_param[i];
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(500);
reg = FDI_RX_IIR(pipe);
temp = I915_READ(reg);
DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
if (temp & FDI_RX_SYMBOL_LOCK) {
I915_WRITE(reg, temp | FDI_RX_SYMBOL_LOCK);
DRM_DEBUG_KMS("FDI train 2 done, level %i.\n", i);
break;
}
}
if (i == 4)
DRM_ERROR("FDI train 2 fail!\n");
DRM_DEBUG_KMS("FDI train done.\n");
}
static void ironlake_fdi_pll_enable(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe = intel_crtc->pipe;
u32 reg, temp;
/* enable PCH FDI RX PLL, wait warmup plus DMI latency */
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~((0x7 << 19) | (0x7 << 16));
temp |= (intel_crtc->fdi_lanes - 1) << 19;
temp |= (I915_READ(PIPECONF(pipe)) & PIPE_BPC_MASK) << 11;
I915_WRITE(reg, temp | FDI_RX_PLL_ENABLE);
POSTING_READ(reg);
udelay(200);
/* Switch from Rawclk to PCDclk */
temp = I915_READ(reg);
I915_WRITE(reg, temp | FDI_PCDCLK);
POSTING_READ(reg);
udelay(200);
/* On Haswell, the PLL configuration for ports and pipes is handled
* separately, as part of DDI setup */
if (!IS_HASWELL(dev)) {
/* Enable CPU FDI TX PLL, always on for Ironlake */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
if ((temp & FDI_TX_PLL_ENABLE) == 0) {
I915_WRITE(reg, temp | FDI_TX_PLL_ENABLE);
POSTING_READ(reg);
udelay(100);
}
}
}
static void ironlake_fdi_pll_disable(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe = intel_crtc->pipe;
u32 reg, temp;
/* Switch from PCDclk to Rawclk */
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
I915_WRITE(reg, temp & ~FDI_PCDCLK);
/* Disable CPU FDI TX PLL */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
I915_WRITE(reg, temp & ~FDI_TX_PLL_ENABLE);
POSTING_READ(reg);
udelay(100);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
I915_WRITE(reg, temp & ~FDI_RX_PLL_ENABLE);
/* Wait for the clocks to turn off. */
POSTING_READ(reg);
udelay(100);
}
static void ironlake_fdi_disable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 reg, temp;
/* disable CPU FDI tx and PCH FDI rx */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
I915_WRITE(reg, temp & ~FDI_TX_ENABLE);
POSTING_READ(reg);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~(0x7 << 16);
temp |= (I915_READ(PIPECONF(pipe)) & PIPE_BPC_MASK) << 11;
I915_WRITE(reg, temp & ~FDI_RX_ENABLE);
POSTING_READ(reg);
udelay(100);
/* Ironlake workaround, disable clock pointer after downing FDI */
if (HAS_PCH_IBX(dev)) {
I915_WRITE(FDI_RX_CHICKEN(pipe), FDI_RX_PHASE_SYNC_POINTER_OVR);
}
/* still set train pattern 1 */
reg = FDI_TX_CTL(pipe);
temp = I915_READ(reg);
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_1;
I915_WRITE(reg, temp);
reg = FDI_RX_CTL(pipe);
temp = I915_READ(reg);
if (HAS_PCH_CPT(dev)) {
temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
temp |= FDI_LINK_TRAIN_PATTERN_1_CPT;
} else {
temp &= ~FDI_LINK_TRAIN_NONE;
temp |= FDI_LINK_TRAIN_PATTERN_1;
}
/* BPC in FDI rx is consistent with that in PIPECONF */
temp &= ~(0x07 << 16);
temp |= (I915_READ(PIPECONF(pipe)) & PIPE_BPC_MASK) << 11;
I915_WRITE(reg, temp);
POSTING_READ(reg);
udelay(100);
}
static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
bool pending;
if (atomic_read(&dev_priv->mm.wedged))
return false;
/*
* NOTE Linux<->FreeBSD dev->event_lock is already locked in
* intel_crtc_wait_for_pending_flips().
*/
pending = to_intel_crtc(crtc)->unpin_work != NULL;
return pending;
}
static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
if (crtc->fb == NULL)
return;
mtx_lock(&dev->event_lock);
while (intel_crtc_has_pending_flip(crtc)) {
msleep(&dev_priv->pending_flip_queue, &dev->event_lock,
0, "915flp", 0);
}
mtx_unlock(&dev->event_lock);
DRM_LOCK(dev);
intel_finish_fb(crtc->fb);
DRM_UNLOCK(dev);
}
static bool ironlake_crtc_driving_pch(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct intel_encoder *intel_encoder;
/*
* If there's a non-PCH eDP on this crtc, it must be DP_A, and that
* must be driven by its own crtc; no sharing is possible.
*/
for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
switch (intel_encoder->type) {
case INTEL_OUTPUT_EDP:
if (!intel_encoder_is_pch_edp(&intel_encoder->base))
return false;
continue;
}
}
return true;
}
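/*
 * On Haswell everything but the analog (VGA) port is driven by the CPU
 * DDI ports, so a CRTC needs the PCH path only when an analog output
 * hangs off it.
 */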
static bool haswell_crtc_driving_pch(struct drm_crtc *crtc)
{
return intel_pipe_has_type(crtc, INTEL_OUTPUT_ANALOG);
}
/* Program iCLKIP clock to the desired frequency */
static void lpt_program_iclkip(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 divsel, phaseinc, auxdiv, phasedir = 0;
u32 temp;
/* It is necessary to ungate the pixclk gate prior to programming
* the divisors, and gate it back when it is done.
*/
I915_WRITE(PIXCLK_GATE, PIXCLK_GATE_GATE);
/* Disable SSCCTL */
intel_sbi_write(dev_priv, SBI_SSCCTL6,
intel_sbi_read(dev_priv, SBI_SSCCTL6, SBI_ICLK) |
SBI_SSCCTL_DISABLE,
SBI_ICLK);
/* 20MHz is a corner case which is out of range for the 7-bit divisor */
if (crtc->mode.clock == 20000) {
auxdiv = 1;
divsel = 0x41;
phaseinc = 0x20;
} else {
/* The iCLK virtual clock root frequency is in MHz,
* but the crtc->mode.clock is in kHz. To get the divisors,
* it is necessary to divide one by the other, so we
* convert the virtual clock precision to kHz here for higher
* precision.
*/
u32 iclk_virtual_root_freq = 172800 * 1000;
u32 iclk_pi_range = 64;
u32 desired_divisor, msb_divisor_value, pi_value;
desired_divisor = (iclk_virtual_root_freq / crtc->mode.clock);
msb_divisor_value = desired_divisor / iclk_pi_range;
pi_value = desired_divisor % iclk_pi_range;
auxdiv = 0;
divsel = msb_divisor_value - 2;
phaseinc = pi_value;
}
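/*
 * For example (hypothetical clock), crtc->mode.clock = 108000 kHz gives
 * desired_divisor = 172800000 / 108000 = 1600, so msb_divisor_value =
 * 1600 / 64 = 25 and pi_value = 0, i.e. divsel = 23, phaseinc = 0,
 * auxdiv = 0.
 */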
/* This should not happen with any sane values */
WARN_ON(SBI_SSCDIVINTPHASE_DIVSEL(divsel) &
~SBI_SSCDIVINTPHASE_DIVSEL_MASK);
WARN_ON(SBI_SSCDIVINTPHASE_DIR(phasedir) &
~SBI_SSCDIVINTPHASE_INCVAL_MASK);
DRM_DEBUG_KMS("iCLKIP clock: found settings for %dKHz refresh rate: auxdiv=%x, divsel=%x, phasedir=%x, phaseinc=%x\n",
crtc->mode.clock,
auxdiv,
divsel,
phasedir,
phaseinc);
/* Program SSCDIVINTPHASE6 */
temp = intel_sbi_read(dev_priv, SBI_SSCDIVINTPHASE6, SBI_ICLK);
temp &= ~SBI_SSCDIVINTPHASE_DIVSEL_MASK;
temp |= SBI_SSCDIVINTPHASE_DIVSEL(divsel);
temp &= ~SBI_SSCDIVINTPHASE_INCVAL_MASK;
temp |= SBI_SSCDIVINTPHASE_INCVAL(phaseinc);
temp |= SBI_SSCDIVINTPHASE_DIR(phasedir);
temp |= SBI_SSCDIVINTPHASE_PROPAGATE;
intel_sbi_write(dev_priv, SBI_SSCDIVINTPHASE6, temp, SBI_ICLK);
/* Program SSCAUXDIV */
temp = intel_sbi_read(dev_priv, SBI_SSCAUXDIV6, SBI_ICLK);
temp &= ~SBI_SSCAUXDIV_FINALDIV2SEL(1);
temp |= SBI_SSCAUXDIV_FINALDIV2SEL(auxdiv);
intel_sbi_write(dev_priv, SBI_SSCAUXDIV6, temp, SBI_ICLK);
/* Enable modulator and associated divider */
temp = intel_sbi_read(dev_priv, SBI_SSCCTL6, SBI_ICLK);
temp &= ~SBI_SSCCTL_DISABLE;
intel_sbi_write(dev_priv, SBI_SSCCTL6, temp, SBI_ICLK);
/* Wait for initialization time */
udelay(24);
I915_WRITE(PIXCLK_GATE, PIXCLK_GATE_UNGATE);
}
/*
* Enable PCH resources required for PCH ports:
* - PCH PLLs
* - FDI training & RX/TX
* - update transcoder timings
* - DP transcoding bits
* - transcoder
*/
static void ironlake_pch_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 reg, temp;
assert_transcoder_disabled(dev_priv, pipe);
/* Write the TU size bits before fdi link training, so that error
* detection works. */
I915_WRITE(FDI_RX_TUSIZE1(pipe),
I915_READ(PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
/* For PCH output, training FDI link */
dev_priv->display.fdi_link_train(crtc);
/* XXX: PCH PLLs can be enabled any time before we enable the PCH
* transcoder, and we actually should do this to not upset any PCH
* transcoder that already uses the clock when we share it.
*
* Note that enable_pch_pll tries to do the right thing, but get_pch_pll
* unconditionally resets the pll - we need that to have the right LVDS
* enable sequence. */
ironlake_enable_pch_pll(intel_crtc);
if (HAS_PCH_CPT(dev)) {
u32 sel;
temp = I915_READ(PCH_DPLL_SEL);
switch (pipe) {
default:
case 0:
temp |= TRANSA_DPLL_ENABLE;
sel = TRANSA_DPLLB_SEL;
break;
case 1:
temp |= TRANSB_DPLL_ENABLE;
sel = TRANSB_DPLLB_SEL;
break;
case 2:
temp |= TRANSC_DPLL_ENABLE;
sel = TRANSC_DPLLB_SEL;
break;
}
if (intel_crtc->pch_pll->pll_reg == _PCH_DPLL_B)
temp |= sel;
else
temp &= ~sel;
I915_WRITE(PCH_DPLL_SEL, temp);
}
/* set transcoder timing, panel must allow it */
assert_panel_unlocked(dev_priv, pipe);
I915_WRITE(TRANS_HTOTAL(pipe), I915_READ(HTOTAL(pipe)));
I915_WRITE(TRANS_HBLANK(pipe), I915_READ(HBLANK(pipe)));
I915_WRITE(TRANS_HSYNC(pipe), I915_READ(HSYNC(pipe)));
I915_WRITE(TRANS_VTOTAL(pipe), I915_READ(VTOTAL(pipe)));
I915_WRITE(TRANS_VBLANK(pipe), I915_READ(VBLANK(pipe)));
I915_WRITE(TRANS_VSYNC(pipe), I915_READ(VSYNC(pipe)));
I915_WRITE(TRANS_VSYNCSHIFT(pipe), I915_READ(VSYNCSHIFT(pipe)));
intel_fdi_normal_train(crtc);
/* For PCH DP, enable TRANS_DP_CTL */
if (HAS_PCH_CPT(dev) &&
(intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))) {
u32 bpc = (I915_READ(PIPECONF(pipe)) & PIPE_BPC_MASK) >> 5;
reg = TRANS_DP_CTL(pipe);
temp = I915_READ(reg);
temp &= ~(TRANS_DP_PORT_SEL_MASK |
TRANS_DP_SYNC_MASK |
TRANS_DP_BPC_MASK);
temp |= (TRANS_DP_OUTPUT_ENABLE |
TRANS_DP_ENH_FRAMING);
temp |= bpc << 9; /* same format but at 11:9 */
if (crtc->mode.flags & DRM_MODE_FLAG_PHSYNC)
temp |= TRANS_DP_HSYNC_ACTIVE_HIGH;
if (crtc->mode.flags & DRM_MODE_FLAG_PVSYNC)
temp |= TRANS_DP_VSYNC_ACTIVE_HIGH;
switch (intel_trans_dp_port_sel(crtc)) {
case PCH_DP_B:
temp |= TRANS_DP_PORT_SEL_B;
break;
case PCH_DP_C:
temp |= TRANS_DP_PORT_SEL_C;
break;
case PCH_DP_D:
temp |= TRANS_DP_PORT_SEL_D;
break;
default:
BUG();
}
I915_WRITE(reg, temp);
}
ironlake_enable_pch_transcoder(dev_priv, pipe);
}
static void lpt_pch_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
assert_transcoder_disabled(dev_priv, (enum pipe)TRANSCODER_A);
lpt_program_iclkip(crtc);
/* Set transcoder timing. */
I915_WRITE(_TRANS_HTOTAL_A, I915_READ(HTOTAL(cpu_transcoder)));
I915_WRITE(_TRANS_HBLANK_A, I915_READ(HBLANK(cpu_transcoder)));
I915_WRITE(_TRANS_HSYNC_A, I915_READ(HSYNC(cpu_transcoder)));
I915_WRITE(_TRANS_VTOTAL_A, I915_READ(VTOTAL(cpu_transcoder)));
I915_WRITE(_TRANS_VBLANK_A, I915_READ(VBLANK(cpu_transcoder)));
I915_WRITE(_TRANS_VSYNC_A, I915_READ(VSYNC(cpu_transcoder)));
I915_WRITE(_TRANS_VSYNCSHIFT_A, I915_READ(VSYNCSHIFT(cpu_transcoder)));
lpt_enable_pch_transcoder(dev_priv, cpu_transcoder);
}
static void intel_put_pch_pll(struct intel_crtc *intel_crtc)
{
struct intel_pch_pll *pll = intel_crtc->pch_pll;
if (pll == NULL)
return;
if (pll->refcount == 0) {
WARN(1, "bad PCH PLL refcount\n");
return;
}
--pll->refcount;
intel_crtc->pch_pll = NULL;
}
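/*
 * PLL allocation strategy: reuse the PLL the CRTC already holds, honour
 * the fixed pipe->PLL mapping on IBX, otherwise try to share an in-use
 * PLL with identical DPLL/FP0 settings before falling back to any free
 * one; NULL means every PLL is taken with incompatible timings.
 */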
static struct intel_pch_pll *intel_get_pch_pll(struct intel_crtc *intel_crtc, u32 dpll, u32 fp)
{
struct drm_i915_private *dev_priv = intel_crtc->base.dev->dev_private;
struct intel_pch_pll *pll;
int i;
pll = intel_crtc->pch_pll;
if (pll) {
DRM_DEBUG_KMS("CRTC:%d reusing existing PCH PLL %x\n",
intel_crtc->base.base.id, pll->pll_reg);
goto prepare;
}
if (HAS_PCH_IBX(dev_priv->dev)) {
/* Ironlake PCH has a fixed PLL->PCH pipe mapping. */
i = intel_crtc->pipe;
pll = &dev_priv->pch_plls[i];
DRM_DEBUG_KMS("CRTC:%d using pre-allocated PCH PLL %x\n",
intel_crtc->base.base.id, pll->pll_reg);
goto found;
}
for (i = 0; i < dev_priv->num_pch_pll; i++) {
pll = &dev_priv->pch_plls[i];
/* First pass: only consider PLLs that are already in use, looking for matching timings */
if (pll->refcount == 0)
continue;
if (dpll == (I915_READ(pll->pll_reg) & 0x7fffffff) &&
fp == I915_READ(pll->fp0_reg)) {
DRM_DEBUG_KMS("CRTC:%d sharing existing PCH PLL %x (refcount %d, ative %d)\n",
intel_crtc->base.base.id,
pll->pll_reg, pll->refcount, pll->active);
goto found;
}
}
/* Ok no matching timings, maybe there's a free one? */
for (i = 0; i < dev_priv->num_pch_pll; i++) {
pll = &dev_priv->pch_plls[i];
if (pll->refcount == 0) {
DRM_DEBUG_KMS("CRTC:%d allocated PCH PLL %x\n",
intel_crtc->base.base.id, pll->pll_reg);
goto found;
}
}
return NULL;
found:
intel_crtc->pch_pll = pll;
pll->refcount++;
DRM_DEBUG_DRIVER("using pll %d for pipe %d\n", i, intel_crtc->pipe);
prepare: /* separate function? */
DRM_DEBUG_DRIVER("switching PLL %x off\n", pll->pll_reg);
/* Wait for the clocks to stabilize before rewriting the regs */
I915_WRITE(pll->pll_reg, dpll & ~DPLL_VCO_ENABLE);
POSTING_READ(pll->pll_reg);
udelay(150);
I915_WRITE(pll->fp0_reg, fp);
I915_WRITE(pll->pll_reg, dpll & ~DPLL_VCO_ENABLE);
pll->on = false;
return pll;
}
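/*
 * Sanity check after a CPT mode set: the scanline counter (PIPEDSL)
 * should advance once the pipe is running, so sample it, wait, and warn
 * if it never moves.
 */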
void intel_cpt_verify_modeset(struct drm_device *dev, int pipe)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int dslreg = PIPEDSL(pipe);
u32 temp;
temp = I915_READ(dslreg);
udelay(500);
if (wait_for(I915_READ(dslreg) != temp, 5)) {
if (wait_for(I915_READ(dslreg) != temp, 5))
DRM_ERROR("mode set failed: pipe %d stuck\n", pipe);
}
}
static void ironlake_crtc_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
u32 temp;
bool is_pch_port;
WARN_ON(!crtc->enabled);
if (intel_crtc->active)
return;
intel_crtc->active = true;
intel_update_watermarks(dev);
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
temp = I915_READ(PCH_LVDS);
if ((temp & LVDS_PORT_EN) == 0)
I915_WRITE(PCH_LVDS, temp | LVDS_PORT_EN);
}
is_pch_port = ironlake_crtc_driving_pch(crtc);
if (is_pch_port) {
/* Note: FDI PLL enabling _must_ be done before we enable the
* cpu pipes, hence this is separate from all the other fdi/pch
* enabling. */
ironlake_fdi_pll_enable(intel_crtc);
} else {
assert_fdi_tx_disabled(dev_priv, pipe);
assert_fdi_rx_disabled(dev_priv, pipe);
}
for_each_encoder_on_crtc(dev, crtc, encoder)
if (encoder->pre_enable)
encoder->pre_enable(encoder);
/* Enable panel fitting for LVDS */
if (dev_priv->pch_pf_size &&
(intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))) {
/* Force use of hard-coded filter coefficients
* as some pre-programmed values are broken,
* e.g. x201.
*/
if (IS_IVYBRIDGE(dev))
I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3 |
PF_PIPE_SEL_IVB(pipe));
else
I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3);
I915_WRITE(PF_WIN_POS(pipe), dev_priv->pch_pf_pos);
I915_WRITE(PF_WIN_SZ(pipe), dev_priv->pch_pf_size);
}
/*
* On ILK+ LUT must be loaded before the pipe is running but with
* clocks enabled
*/
intel_crtc_load_lut(crtc);
intel_enable_pipe(dev_priv, pipe, is_pch_port);
intel_enable_plane(dev_priv, plane, pipe);
if (is_pch_port)
ironlake_pch_enable(crtc);
DRM_LOCK(dev);
intel_update_fbc(dev);
DRM_UNLOCK(dev);
intel_crtc_update_cursor(crtc, true);
for_each_encoder_on_crtc(dev, crtc, encoder)
encoder->enable(encoder);
if (HAS_PCH_CPT(dev))
intel_cpt_verify_modeset(dev, intel_crtc->pipe);
/*
* There seems to be a race in PCH platform hw (at least on some
* outputs) where an enabled pipe still completes any pageflip right
* away (as if the pipe were off) instead of waiting for vblank. As soon
* as the first vblank has happened, everything works as expected. Hence just
* wait for one vblank before returning to avoid strange things
* happening.
*/
intel_wait_for_vblank(dev, intel_crtc->pipe);
}
static void haswell_crtc_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
bool is_pch_port;
WARN_ON(!crtc->enabled);
if (intel_crtc->active)
return;
intel_crtc->active = true;
intel_update_watermarks(dev);
is_pch_port = haswell_crtc_driving_pch(crtc);
if (is_pch_port)
dev_priv->display.fdi_link_train(crtc);
for_each_encoder_on_crtc(dev, crtc, encoder)
if (encoder->pre_enable)
encoder->pre_enable(encoder);
intel_ddi_enable_pipe_clock(intel_crtc);
/* Enable panel fitting for eDP */
if (dev_priv->pch_pf_size &&
intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP)) {
/* Force use of hard-coded filter coefficients
* as some pre-programmed values are broken,
* e.g. x201.
*/
I915_WRITE(PF_CTL(pipe), PF_ENABLE | PF_FILTER_MED_3x3 |
PF_PIPE_SEL_IVB(pipe));
I915_WRITE(PF_WIN_POS(pipe), dev_priv->pch_pf_pos);
I915_WRITE(PF_WIN_SZ(pipe), dev_priv->pch_pf_size);
}
/*
* On ILK+ LUT must be loaded before the pipe is running but with
* clocks enabled
*/
intel_crtc_load_lut(crtc);
intel_ddi_set_pipe_settings(crtc);
intel_ddi_enable_pipe_func(crtc);
intel_enable_pipe(dev_priv, pipe, is_pch_port);
intel_enable_plane(dev_priv, plane, pipe);
if (is_pch_port)
lpt_pch_enable(crtc);
DRM_LOCK(dev);
intel_update_fbc(dev);
DRM_UNLOCK(dev);
intel_crtc_update_cursor(crtc, true);
for_each_encoder_on_crtc(dev, crtc, encoder)
encoder->enable(encoder);
/*
* There seems to be a race in PCH platform hw (at least on some
* outputs) where an enabled pipe still completes any pageflip right
* away (as if the pipe were off) instead of waiting for vblank. As soon
* as the first vblank has happened, everything works as expected. Hence just
* wait for one vblank before returning to avoid strange things
* happening.
*/
intel_wait_for_vblank(dev, intel_crtc->pipe);
}
static void ironlake_crtc_disable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
u32 reg, temp;
if (!intel_crtc->active)
return;
for_each_encoder_on_crtc(dev, crtc, encoder)
encoder->disable(encoder);
intel_crtc_wait_for_pending_flips(crtc);
drm_vblank_off(dev, pipe);
intel_crtc_update_cursor(crtc, false);
intel_disable_plane(dev_priv, plane, pipe);
if (dev_priv->cfb_plane == plane)
intel_disable_fbc(dev);
intel_disable_pipe(dev_priv, pipe);
/* Disable PF */
I915_WRITE(PF_CTL(pipe), 0);
I915_WRITE(PF_WIN_SZ(pipe), 0);
for_each_encoder_on_crtc(dev, crtc, encoder)
if (encoder->post_disable)
encoder->post_disable(encoder);
ironlake_fdi_disable(crtc);
ironlake_disable_pch_transcoder(dev_priv, pipe);
if (HAS_PCH_CPT(dev)) {
/* disable TRANS_DP_CTL */
reg = TRANS_DP_CTL(pipe);
temp = I915_READ(reg);
temp &= ~(TRANS_DP_OUTPUT_ENABLE | TRANS_DP_PORT_SEL_MASK);
temp |= TRANS_DP_PORT_SEL_NONE;
I915_WRITE(reg, temp);
/* disable DPLL_SEL */
temp = I915_READ(PCH_DPLL_SEL);
switch (pipe) {
case 0:
temp &= ~(TRANSA_DPLL_ENABLE | TRANSA_DPLLB_SEL);
break;
case 1:
temp &= ~(TRANSB_DPLL_ENABLE | TRANSB_DPLLB_SEL);
break;
case 2:
/* C shares PLL A or B */
temp &= ~(TRANSC_DPLL_ENABLE | TRANSC_DPLLB_SEL);
break;
default:
BUG(); /* wtf */
}
I915_WRITE(PCH_DPLL_SEL, temp);
}
/* disable PCH DPLL */
intel_disable_pch_pll(intel_crtc);
ironlake_fdi_pll_disable(intel_crtc);
intel_crtc->active = false;
intel_update_watermarks(dev);
DRM_LOCK(dev);
intel_update_fbc(dev);
DRM_UNLOCK(dev);
}
static void haswell_crtc_disable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
bool is_pch_port;
if (!intel_crtc->active)
return;
is_pch_port = haswell_crtc_driving_pch(crtc);
for_each_encoder_on_crtc(dev, crtc, encoder)
encoder->disable(encoder);
intel_crtc_wait_for_pending_flips(crtc);
drm_vblank_off(dev, pipe);
intel_crtc_update_cursor(crtc, false);
intel_disable_plane(dev_priv, plane, pipe);
if (dev_priv->cfb_plane == plane)
intel_disable_fbc(dev);
intel_disable_pipe(dev_priv, pipe);
intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder);
/* Disable PF */
I915_WRITE(PF_CTL(pipe), 0);
I915_WRITE(PF_WIN_SZ(pipe), 0);
intel_ddi_disable_pipe_clock(intel_crtc);
for_each_encoder_on_crtc(dev, crtc, encoder)
if (encoder->post_disable)
encoder->post_disable(encoder);
if (is_pch_port) {
lpt_disable_pch_transcoder(dev_priv);
intel_ddi_fdi_disable(crtc);
}
intel_crtc->active = false;
intel_update_watermarks(dev);
DRM_LOCK(dev);
intel_update_fbc(dev);
DRM_UNLOCK(dev);
}
static void ironlake_crtc_off(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
intel_put_pch_pll(intel_crtc);
}
static void haswell_crtc_off(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
/* Stop saying we're using TRANSCODER_EDP because some other CRTC might
* start using it. */
intel_crtc->cpu_transcoder = (enum transcoder)intel_crtc->pipe;
intel_ddi_put_crtc_pll(crtc);
}
static void intel_crtc_dpms_overlay(struct intel_crtc *intel_crtc, bool enable)
{
if (!enable && intel_crtc->overlay) {
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
DRM_LOCK(dev);
dev_priv->mm.interruptible = false;
(void) intel_overlay_switch_off(intel_crtc->overlay);
dev_priv->mm.interruptible = true;
DRM_UNLOCK(dev);
}
/* Let userspace switch the overlay on again. In most cases userspace
* has to recompute where to put it anyway.
*/
}
static void i9xx_crtc_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
WARN_ON(!crtc->enabled);
if (intel_crtc->active)
return;
intel_crtc->active = true;
intel_update_watermarks(dev);
intel_enable_pll(dev_priv, pipe);
intel_enable_pipe(dev_priv, pipe, false);
intel_enable_plane(dev_priv, plane, pipe);
intel_crtc_load_lut(crtc);
intel_update_fbc(dev);
/* Give the overlay scaler a chance to enable if it's on this pipe */
intel_crtc_dpms_overlay(intel_crtc, true);
intel_crtc_update_cursor(crtc, true);
for_each_encoder_on_crtc(dev, crtc, encoder)
encoder->enable(encoder);
}
static void i9xx_crtc_disable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
u32 pctl;
if (!intel_crtc->active)
return;
for_each_encoder_on_crtc(dev, crtc, encoder)
encoder->disable(encoder);
/* Give the overlay scaler a chance to disable if it's on this pipe */
intel_crtc_wait_for_pending_flips(crtc);
drm_vblank_off(dev, pipe);
intel_crtc_dpms_overlay(intel_crtc, false);
intel_crtc_update_cursor(crtc, false);
if (dev_priv->cfb_plane == plane)
intel_disable_fbc(dev);
intel_disable_plane(dev_priv, plane, pipe);
intel_disable_pipe(dev_priv, pipe);
/* Disable panel fitter if it is on this pipe. */
pctl = I915_READ(PFIT_CONTROL);
if ((pctl & PFIT_ENABLE) &&
((pctl & PFIT_PIPE_MASK) >> PFIT_PIPE_SHIFT) == pipe)
I915_WRITE(PFIT_CONTROL, 0);
intel_disable_pll(dev_priv, pipe);
intel_crtc->active = false;
intel_update_fbc(dev);
intel_update_watermarks(dev);
}
static void i9xx_crtc_off(struct drm_crtc *crtc)
{
}
static void intel_crtc_update_sarea(struct drm_crtc *crtc,
bool enabled)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_master_private *master_priv;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
if (!dev->primary->master)
return;
master_priv = dev->primary->master->driver_priv;
if (!master_priv->sarea_priv)
return;
switch (pipe) {
case 0:
master_priv->sarea_priv->pipeA_w = enabled ? crtc->mode.hdisplay : 0;
master_priv->sarea_priv->pipeA_h = enabled ? crtc->mode.vdisplay : 0;
break;
case 1:
master_priv->sarea_priv->pipeB_w = enabled ? crtc->mode.hdisplay : 0;
master_priv->sarea_priv->pipeB_h = enabled ? crtc->mode.vdisplay : 0;
break;
default:
DRM_ERROR("Can't update pipe %c in SAREA\n", pipe_name(pipe));
break;
}
}
/**
* Sets the power management mode of the pipe and plane.
*/
void intel_crtc_update_dpms(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_encoder *intel_encoder;
bool enable = false;
for_each_encoder_on_crtc(dev, crtc, intel_encoder)
enable |= intel_encoder->connectors_active;
if (enable)
dev_priv->display.crtc_enable(crtc);
else
dev_priv->display.crtc_disable(crtc);
intel_crtc_update_sarea(crtc, enable);
}
static void intel_crtc_noop(struct drm_crtc *crtc)
{
}
static void intel_crtc_disable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_connector *connector;
struct drm_i915_private *dev_priv = dev->dev_private;
/* crtc should still be enabled when we disable it. */
WARN_ON(!crtc->enabled);
dev_priv->display.crtc_disable(crtc);
intel_crtc_update_sarea(crtc, false);
dev_priv->display.off(crtc);
assert_plane_disabled(dev->dev_private, to_intel_crtc(crtc)->plane);
assert_pipe_disabled(dev->dev_private, to_intel_crtc(crtc)->pipe);
if (crtc->fb) {
DRM_LOCK(dev);
intel_unpin_fb_obj(to_intel_framebuffer(crtc->fb)->obj);
DRM_UNLOCK(dev);
crtc->fb = NULL;
}
/* Update computed state. */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (!connector->encoder || !connector->encoder->crtc)
continue;
if (connector->encoder->crtc != crtc)
continue;
connector->dpms = DRM_MODE_DPMS_OFF;
to_intel_encoder(connector->encoder)->connectors_active = false;
}
}
void intel_modeset_disable(struct drm_device *dev)
{
struct drm_crtc *crtc;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
if (crtc->enabled)
intel_crtc_disable(crtc);
}
}
void intel_encoder_noop(struct drm_encoder *encoder)
{
}
void intel_encoder_destroy(struct drm_encoder *encoder)
{
struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
drm_encoder_cleanup(encoder);
free(intel_encoder, DRM_MEM_KMS);
}
/* Simple dpms helper for encoders with just one connector, no cloning and only
* one kind of off state. It clamps all !ON modes to fully OFF and changes the
* state of the entire output pipe. */
void intel_encoder_dpms(struct intel_encoder *encoder, int mode)
{
if (mode == DRM_MODE_DPMS_ON) {
encoder->connectors_active = true;
intel_crtc_update_dpms(encoder->base.crtc);
} else {
encoder->connectors_active = false;
intel_crtc_update_dpms(encoder->base.crtc);
}
}
/* Cross check the actual hw state with our own modeset state tracking (and its
* internal consistency). */
static void intel_connector_check_state(struct intel_connector *connector)
{
if (connector->get_hw_state(connector)) {
struct intel_encoder *encoder = connector->encoder;
struct drm_crtc *crtc;
bool encoder_enabled;
enum pipe pipe;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
connector->base.base.id,
drm_get_connector_name(&connector->base));
WARN(connector->base.dpms == DRM_MODE_DPMS_OFF,
"wrong connector dpms state\n");
WARN(connector->base.encoder != &encoder->base,
"active connector not linked to encoder\n");
WARN(!encoder->connectors_active,
"encoder->connectors_active not set\n");
encoder_enabled = encoder->get_hw_state(encoder, &pipe);
WARN(!encoder_enabled, "encoder not enabled\n");
if (WARN_ON(!encoder->base.crtc))
return;
crtc = encoder->base.crtc;
WARN(!crtc->enabled, "crtc not enabled\n");
WARN(!to_intel_crtc(crtc)->active, "crtc not active\n");
WARN(pipe != to_intel_crtc(crtc)->pipe,
"encoder active on the wrong pipe\n");
}
}
/* Even simpler default implementation, if there's really no special case to
* consider. */
void intel_connector_dpms(struct drm_connector *connector, int mode)
{
struct intel_encoder *encoder = intel_attached_encoder(connector);
/* All the simple cases only support two dpms states. */
if (mode != DRM_MODE_DPMS_ON)
mode = DRM_MODE_DPMS_OFF;
if (mode == connector->dpms)
return;
connector->dpms = mode;
/* Only need to change hw state when actually enabled */
if (encoder->base.crtc)
intel_encoder_dpms(encoder, mode);
else
WARN_ON(encoder->connectors_active != false);
intel_modeset_check_state(connector->dev);
}
/* Simple connector->get_hw_state implementation for encoders that support only
* one connector and no cloning and hence the encoder state determines the state
* of the connector. */
bool intel_connector_get_hw_state(struct intel_connector *connector)
{
enum pipe pipe = 0;
struct intel_encoder *encoder = connector->encoder;
return encoder->get_hw_state(encoder, &pipe);
}
static bool intel_crtc_mode_fixup(struct drm_crtc *crtc,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = crtc->dev;
if (HAS_PCH_SPLIT(dev)) {
/* FDI link clock is fixed at 2.7G */
if (mode->clock * 3 > IRONLAKE_FDI_FREQ * 4)
return false;
}
/* All interlaced capable intel hw wants timings in frames. Note though
* that intel_lvds_mode_fixup does some funny tricks with the crtc
* timings, so we need to be careful not to clobber these. */
if (!(adjusted_mode->private_flags & INTEL_MODE_CRTC_TIMINGS_SET))
drm_mode_set_crtcinfo(adjusted_mode, 0);
/* WaPruneModeWithIncorrectHsyncOffset: Cantiga+ cannot handle modes
* with a hsync front porch of 0.
*/
if ((INTEL_INFO(dev)->gen > 4 || IS_G4X(dev)) &&
adjusted_mode->hsync_start == adjusted_mode->hdisplay)
return false;
return true;
}
static int valleyview_get_display_clock_speed(struct drm_device *dev)
{
return 400000; /* FIXME */
}
static int i945_get_display_clock_speed(struct drm_device *dev)
{
return 400000;
}
static int i915_get_display_clock_speed(struct drm_device *dev)
{
return 333000;
}
static int i9xx_misc_get_display_clock_speed(struct drm_device *dev)
{
return 200000;
}
static int i915gm_get_display_clock_speed(struct drm_device *dev)
{
u16 gcfgc = 0;
pci_read_config_word(dev->dev, GCFGC, &gcfgc);
if (gcfgc & GC_LOW_FREQUENCY_ENABLE)
return 133000;
else {
switch (gcfgc & GC_DISPLAY_CLOCK_MASK) {
case GC_DISPLAY_CLOCK_333_MHZ:
return 333000;
default:
case GC_DISPLAY_CLOCK_190_200_MHZ:
return 190000;
}
}
}
static int i865_get_display_clock_speed(struct drm_device *dev)
{
return 266000;
}
static int i855_get_display_clock_speed(struct drm_device *dev)
{
u16 hpllcc = 0;
/* Assume that the hardware is in the high speed state. This
* should be the default.
*/
switch (hpllcc & GC_CLOCK_CONTROL_MASK) {
case GC_CLOCK_133_200:
case GC_CLOCK_100_200:
return 200000;
case GC_CLOCK_166_250:
return 250000;
case GC_CLOCK_100_133:
return 133000;
}
/* Shouldn't happen */
return 0;
}
static int i830_get_display_clock_speed(struct drm_device *dev)
{
return 133000;
}
struct fdi_m_n {
u32 tu;
u32 gmch_m;
u32 gmch_n;
u32 link_m;
u32 link_n;
};
static void
fdi_reduce_ratio(u32 *num, u32 *den)
{
while (*num > 0xffffff || *den > 0xffffff) {
*num >>= 1;
*den >>= 1;
}
}
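/*
 * Illustrative note (not in the original source): the M/N register
 * fields are 24 bits wide, so the ratio is halved until both values
 * fit. E.g. num = 0x2000000, den = 0x3000000 needs two iterations,
 * ending at num = 0x800000, den = 0xC00000, which preserves the
 * ratio to within rounding.
 */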
static void
ironlake_compute_m_n(int bits_per_pixel, int nlanes, int pixel_clock,
int link_clock, struct fdi_m_n *m_n)
{
m_n->tu = 64; /* default size */
/* BUG_ON(pixel_clock > INT_MAX / 36); */
m_n->gmch_m = bits_per_pixel * pixel_clock;
m_n->gmch_n = link_clock * nlanes * 8;
fdi_reduce_ratio(&m_n->gmch_m, &m_n->gmch_n);
m_n->link_m = pixel_clock;
m_n->link_n = link_clock;
fdi_reduce_ratio(&m_n->link_m, &m_n->link_n);
}
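/*
 * Worked example (illustrative values only): a 24 bpp mode with a
 * 148500 kHz pixel clock on 4 FDI lanes at link_clock = 270000 gives
 * gmch_m = 24 * 148500 = 3564000 and gmch_n = 270000 * 4 * 8 =
 * 8640000; both already fit in 24 bits, so no reduction happens, and
 * link_m/link_n is simply 148500/270000.
 */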
static inline bool intel_panel_use_ssc(struct drm_i915_private *dev_priv)
{
if (i915_panel_use_ssc >= 0)
return i915_panel_use_ssc != 0;
return dev_priv->lvds_use_ssc
&& !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE);
}
/**
* intel_choose_pipe_bpp_dither - figure out what color depth the pipe should send
* @crtc: CRTC structure
* @mode: requested mode
*
* A pipe may be connected to one or more outputs. Based on the depth of the
* attached framebuffer, choose a good color depth to use on the pipe.
*
* If possible, match the pipe depth to the fb depth. In some cases, this
* isn't ideal, because the connected output supports a lesser or restricted
* set of depths. Resolve that here:
* LVDS typically supports only 6bpc, so clamp down in that case
* HDMI supports only 8bpc or 12bpc, so clamp to 8bpc with dither for 10bpc
* Displays may support a restricted set as well, check EDID and clamp as
* appropriate.
* DP may want to dither down to 6bpc to fit larger modes
*
* RETURNS:
* Dithering requirement (i.e. false if display bpc and pipe bpc match,
* true if they don't match).
*/
static bool intel_choose_pipe_bpp_dither(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
unsigned int *pipe_bpp,
struct drm_display_mode *mode)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_connector *connector;
struct intel_encoder *intel_encoder;
unsigned int display_bpc = UINT_MAX, bpc;
/* Walk the encoders & connectors on this crtc, get min bpc */
for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
if (intel_encoder->type == INTEL_OUTPUT_LVDS) {
unsigned int lvds_bpc;
if ((I915_READ(PCH_LVDS) & LVDS_A3_POWER_MASK) ==
LVDS_A3_POWER_UP)
lvds_bpc = 8;
else
lvds_bpc = 6;
if (lvds_bpc < display_bpc) {
DRM_DEBUG_KMS("clamping display bpc (was %d) to LVDS (%d)\n", display_bpc, lvds_bpc);
display_bpc = lvds_bpc;
}
continue;
}
/* Not one of the known troublemakers, check the EDID */
list_for_each_entry(connector, &dev->mode_config.connector_list,
head) {
if (connector->encoder != &intel_encoder->base)
continue;
/* Don't use an invalid EDID bpc value */
if (connector->display_info.bpc &&
connector->display_info.bpc < display_bpc) {
DRM_DEBUG_KMS("clamping display bpc (was %d) to EDID reported max of %d\n", display_bpc, connector->display_info.bpc);
display_bpc = connector->display_info.bpc;
}
}
if (intel_encoder->type == INTEL_OUTPUT_EDP) {
/* Use VBT settings if we have an eDP panel */
unsigned int edp_bpc = dev_priv->edp.bpp / 3;
if (edp_bpc && edp_bpc < display_bpc) {
DRM_DEBUG_KMS("clamping display bpc (was %d) to eDP (%d)\n", display_bpc, edp_bpc);
display_bpc = edp_bpc;
}
continue;
}
/*
* HDMI is either 12 or 8, so if the display lets 10bpc sneak
* through, clamp it down. (Note: >12bpc will be caught below.)
*/
if (intel_encoder->type == INTEL_OUTPUT_HDMI) {
if (display_bpc > 8 && display_bpc < 12) {
DRM_DEBUG_KMS("forcing bpc to 12 for HDMI\n");
display_bpc = 12;
} else {
DRM_DEBUG_KMS("forcing bpc to 8 for HDMI\n");
display_bpc = 8;
}
}
}
if (mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
DRM_DEBUG_KMS("Dithering DP to 6bpc\n");
display_bpc = 6;
}
/*
* We could just drive the pipe at the highest bpc all the time and
* enable dithering as needed, but that costs bandwidth. So choose
* the minimum value that expresses the full color range of the fb but
* also stays within the max display bpc discovered above.
*/
switch (fb->depth) {
case 8:
bpc = 8; /* since we go through a colormap */
break;
case 15:
case 16:
bpc = 6; /* min is 18bpp */
break;
case 24:
bpc = 8;
break;
case 30:
bpc = 10;
break;
case 48:
bpc = 12;
break;
default:
DRM_DEBUG("unsupported depth, assuming 24 bits\n");
bpc = min((unsigned int)8, display_bpc);
break;
}
display_bpc = min(display_bpc, bpc);
DRM_DEBUG_KMS("setting pipe bpc to %d (max display bpc %d)\n",
bpc, display_bpc);
*pipe_bpp = display_bpc * 3;
return display_bpc != bpc;
}
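/*
 * Worked example (illustrative only): with a depth-24 framebuffer
 * (bpc = 8) driving a 6bpc LVDS panel, display_bpc is clamped to 6,
 * *pipe_bpp becomes 18 and the function returns true, i.e. the pipe
 * must dither the 8bpc framebuffer down to the panel's 6bpc.
 */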
static int vlv_get_refclk(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int refclk = 27000; /* for DP & HDMI */
return 100000; /* only one validated so far */
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_ANALOG)) {
refclk = 96000;
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
if (intel_panel_use_ssc(dev_priv))
refclk = 100000;
else
refclk = 96000;
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP)) {
refclk = 100000;
}
return refclk;
}
static int i9xx_get_refclk(struct drm_crtc *crtc, int num_connectors)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int refclk;
if (IS_VALLEYVIEW(dev)) {
refclk = vlv_get_refclk(crtc);
} else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2) {
refclk = dev_priv->lvds_ssc_freq * 1000;
DRM_DEBUG_KMS("using SSC reference clock of %d MHz\n",
refclk / 1000);
} else if (!IS_GEN2(dev)) {
refclk = 96000;
} else {
refclk = 48000;
}
return refclk;
}
static void i9xx_adjust_sdvo_tv_clock(struct drm_display_mode *adjusted_mode,
intel_clock_t *clock)
{
/* SDVO TV has fixed PLL values that depend on its clock range;
this mirrors the vbios setting. */
if (adjusted_mode->clock >= 100000
&& adjusted_mode->clock < 140500) {
clock->p1 = 2;
clock->p2 = 10;
clock->n = 3;
clock->m1 = 16;
clock->m2 = 8;
} else if (adjusted_mode->clock >= 140500
&& adjusted_mode->clock <= 200000) {
clock->p1 = 1;
clock->p2 = 10;
clock->n = 6;
clock->m1 = 12;
clock->m2 = 8;
}
}
static void i9xx_update_pll_dividers(struct drm_crtc *crtc,
intel_clock_t *clock,
intel_clock_t *reduced_clock)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 fp, fp2 = 0;
if (IS_PINEVIEW(dev)) {
fp = (1 << clock->n) << 16 | clock->m1 << 8 | clock->m2;
if (reduced_clock)
fp2 = (1 << reduced_clock->n) << 16 |
reduced_clock->m1 << 8 | reduced_clock->m2;
} else {
fp = clock->n << 16 | clock->m1 << 8 | clock->m2;
if (reduced_clock)
fp2 = reduced_clock->n << 16 | reduced_clock->m1 << 8 |
reduced_clock->m2;
}
I915_WRITE(FP0(pipe), fp);
intel_crtc->lowfreq_avail = false;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
reduced_clock && i915_powersave) {
I915_WRITE(FP1(pipe), fp2);
intel_crtc->lowfreq_avail = true;
} else {
I915_WRITE(FP1(pipe), fp);
}
}
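/*
 * Illustrative note (not in the original source): on non-Pineview
 * parts the FP register packs the dividers as n << 16 | m1 << 8 | m2,
 * so e.g. n = 2, m1 = 16, m2 = 8 yields fp = 0x21008. Pineview
 * stores n one-hot ((1 << n) << 16) instead.
 */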
static void intel_update_lvds(struct drm_crtc *crtc, intel_clock_t *clock,
struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 temp;
temp = I915_READ(LVDS);
temp |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
if (pipe == 1) {
temp |= LVDS_PIPEB_SELECT;
} else {
temp &= ~LVDS_PIPEB_SELECT;
}
/* set the corresponding LVDS_BORDER bit */
temp |= dev_priv->lvds_border_bits;
/* Set the B0-B3 data pairs corresponding to whether we're going to
* set the DPLLs for dual-channel mode or not.
*/
if (clock->p2 == 7)
temp |= LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP;
else
temp &= ~(LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP);
/* It would be nice to set 24 vs 18-bit mode (LVDS_A3_POWER_UP)
* appropriately here, but we need to look more thoroughly into how
* panels behave in the two modes.
*/
/* set the dithering flag on LVDS as needed */
if (INTEL_INFO(dev)->gen >= 4) {
if (dev_priv->lvds_dither)
temp |= LVDS_ENABLE_DITHER;
else
temp &= ~LVDS_ENABLE_DITHER;
}
temp &= ~(LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY);
if (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC)
temp |= LVDS_HSYNC_POLARITY;
if (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC)
temp |= LVDS_VSYNC_POLARITY;
I915_WRITE(LVDS, temp);
}
static void vlv_update_pll(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
intel_clock_t *clock, intel_clock_t *reduced_clock,
int num_connectors)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 dpll, mdiv, pdiv;
u32 bestn, bestm1, bestm2, bestp1, bestp2;
bool is_sdvo;
u32 temp;
is_sdvo = intel_pipe_has_type(crtc, INTEL_OUTPUT_SDVO) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_HDMI);
dpll = DPLL_VGA_MODE_DIS;
dpll |= DPLL_EXT_BUFFER_ENABLE_VLV;
dpll |= DPLL_REFA_CLK_ENABLE_VLV;
dpll |= DPLL_INTEGRATED_CLOCK_VLV;
I915_WRITE(DPLL(pipe), dpll);
POSTING_READ(DPLL(pipe));
bestn = clock->n;
bestm1 = clock->m1;
bestm2 = clock->m2;
bestp1 = clock->p1;
bestp2 = clock->p2;
/*
* In Valleyview, the PLL and lane counter registers are exposed
* through the DPIO interface.
*/
mdiv = ((bestm1 << DPIO_M1DIV_SHIFT) | (bestm2 & DPIO_M2DIV_MASK));
mdiv |= ((bestp1 << DPIO_P1_SHIFT) | (bestp2 << DPIO_P2_SHIFT));
mdiv |= ((bestn << DPIO_N_SHIFT));
mdiv |= (1 << DPIO_POST_DIV_SHIFT);
mdiv |= (1 << DPIO_K_SHIFT);
mdiv |= DPIO_ENABLE_CALIBRATION;
intel_dpio_write(dev_priv, DPIO_DIV(pipe), mdiv);
intel_dpio_write(dev_priv, DPIO_CORE_CLK(pipe), 0x01000000);
pdiv = (1 << DPIO_REFSEL_OVERRIDE) | (5 << DPIO_PLL_MODESEL_SHIFT) |
(3 << DPIO_BIAS_CURRENT_CTL_SHIFT) | (1<<20) |
(7 << DPIO_PLL_REFCLK_SEL_SHIFT) | (8 << DPIO_DRIVER_CTL_SHIFT) |
(5 << DPIO_CLK_BIAS_CTL_SHIFT);
intel_dpio_write(dev_priv, DPIO_REFSFR(pipe), pdiv);
intel_dpio_write(dev_priv, DPIO_LFP_COEFF(pipe), 0x005f003b);
dpll |= DPLL_VCO_ENABLE;
I915_WRITE(DPLL(pipe), dpll);
POSTING_READ(DPLL(pipe));
if (wait_for(((I915_READ(DPLL(pipe)) & DPLL_LOCK_VLV) == DPLL_LOCK_VLV), 1))
DRM_ERROR("DPLL %d failed to lock\n", pipe);
intel_dpio_write(dev_priv, DPIO_FASTCLK_DISABLE, 0x620);
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT))
intel_dp_set_m_n(crtc, mode, adjusted_mode);
I915_WRITE(DPLL(pipe), dpll);
/* Wait for the clocks to stabilize. */
POSTING_READ(DPLL(pipe));
udelay(150);
temp = 0;
if (is_sdvo) {
temp = intel_mode_get_pixel_multiplier(adjusted_mode);
if (temp > 1)
temp = (temp - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT;
else
temp = 0;
}
I915_WRITE(DPLL_MD(pipe), temp);
POSTING_READ(DPLL_MD(pipe));
/* Now program lane control registers */
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_HDMI)) {
temp = 0x1000C4;
if (pipe == 1)
temp |= (1 << 21);
intel_dpio_write(dev_priv, DPIO_DATA_CHANNEL1, temp);
}
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP)) {
temp = 0x1000C4;
if (pipe == 1)
temp |= (1 << 21);
intel_dpio_write(dev_priv, DPIO_DATA_CHANNEL2, temp);
}
}
static void i9xx_update_pll(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
intel_clock_t *clock, intel_clock_t *reduced_clock,
int num_connectors)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 dpll;
bool is_sdvo;
i9xx_update_pll_dividers(crtc, clock, reduced_clock);
is_sdvo = intel_pipe_has_type(crtc, INTEL_OUTPUT_SDVO) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_HDMI);
dpll = DPLL_VGA_MODE_DIS;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
dpll |= DPLLB_MODE_LVDS;
else
dpll |= DPLLB_MODE_DAC_SERIAL;
if (is_sdvo) {
int pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode);
if (pixel_multiplier > 1) {
if (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))
dpll |= (pixel_multiplier - 1) << SDVO_MULTIPLIER_SHIFT_HIRES;
}
dpll |= DPLL_DVO_HIGH_SPEED;
}
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT))
dpll |= DPLL_DVO_HIGH_SPEED;
/* compute bitmask from p1 value */
if (IS_PINEVIEW(dev))
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT_PINEVIEW;
else {
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
if (IS_G4X(dev) && reduced_clock)
dpll |= (1 << (reduced_clock->p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT;
}
switch (clock->p2) {
case 5:
dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5;
break;
case 7:
dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7;
break;
case 10:
dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10;
break;
case 14:
dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14;
break;
}
if (INTEL_INFO(dev)->gen >= 4)
dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT);
if (is_sdvo && intel_pipe_has_type(crtc, INTEL_OUTPUT_TVOUT))
dpll |= PLL_REF_INPUT_TVCLKINBC;
else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_TVOUT))
/* XXX: just matching BIOS for now */
/* dpll |= PLL_REF_INPUT_TVCLKINBC; */
dpll |= 3;
else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
dpll |= PLL_REF_INPUT_DREFCLK;
dpll |= DPLL_VCO_ENABLE;
I915_WRITE(DPLL(pipe), dpll & ~DPLL_VCO_ENABLE);
POSTING_READ(DPLL(pipe));
udelay(150);
/* The LVDS pin pair needs to be on before the DPLLs are enabled.
* This is an exception to the general rule that mode_set doesn't turn
* things on.
*/
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
intel_update_lvds(crtc, clock, adjusted_mode);
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT))
intel_dp_set_m_n(crtc, mode, adjusted_mode);
I915_WRITE(DPLL(pipe), dpll);
/* Wait for the clocks to stabilize. */
POSTING_READ(DPLL(pipe));
udelay(150);
if (INTEL_INFO(dev)->gen >= 4) {
u32 temp = 0;
if (is_sdvo) {
temp = intel_mode_get_pixel_multiplier(adjusted_mode);
if (temp > 1)
temp = (temp - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT;
else
temp = 0;
}
I915_WRITE(DPLL_MD(pipe), temp);
} else {
/* The pixel multiplier can only be updated once the
* DPLL is enabled and the clocks are stable.
*
* So write it again.
*/
I915_WRITE(DPLL(pipe), dpll);
}
}
static void i8xx_update_pll(struct drm_crtc *crtc,
struct drm_display_mode *adjusted_mode,
intel_clock_t *clock, intel_clock_t *reduced_clock,
int num_connectors)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 dpll;
i9xx_update_pll_dividers(crtc, clock, reduced_clock);
dpll = DPLL_VGA_MODE_DIS;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
} else {
if (clock->p1 == 2)
dpll |= PLL_P1_DIVIDE_BY_TWO;
else
dpll |= (clock->p1 - 2) << DPLL_FPA01_P1_POST_DIV_SHIFT;
if (clock->p2 == 4)
dpll |= PLL_P2_DIVIDE_BY_4;
}
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_TVOUT))
/* XXX: just matching BIOS for now */
/* dpll |= PLL_REF_INPUT_TVCLKINBC; */
dpll |= 3;
else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
dpll |= PLL_REF_INPUT_DREFCLK;
dpll |= DPLL_VCO_ENABLE;
I915_WRITE(DPLL(pipe), dpll & ~DPLL_VCO_ENABLE);
POSTING_READ(DPLL(pipe));
udelay(150);
/* The LVDS pin pair needs to be on before the DPLLs are enabled.
* This is an exception to the general rule that mode_set doesn't turn
* things on.
*/
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
intel_update_lvds(crtc, clock, adjusted_mode);
I915_WRITE(DPLL(pipe), dpll);
/* Wait for the clocks to stabilize. */
POSTING_READ(DPLL(pipe));
udelay(150);
/* The pixel multiplier can only be updated once the
* DPLL is enabled and the clocks are stable.
*
* So write it again.
*/
I915_WRITE(DPLL(pipe), dpll);
}
static void intel_set_pipe_timings(struct intel_crtc *intel_crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe = intel_crtc->pipe;
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
uint32_t vsyncshift;
if (!IS_GEN2(dev) && adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
/* the chip adds 2 halflines automatically */
adjusted_mode->crtc_vtotal -= 1;
adjusted_mode->crtc_vblank_end -= 1;
vsyncshift = adjusted_mode->crtc_hsync_start
- adjusted_mode->crtc_htotal / 2;
} else {
vsyncshift = 0;
}
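/*
 * Illustrative example (assuming standard CEA-861 1080i timings, not
 * from the original source): with crtc_hsync_start = 2008 and
 * crtc_htotal = 2200, vsyncshift = 2008 - 2200 / 2 = 908, i.e. the
 * second field's vsync is shifted by 908 pixels relative to the
 * first.
 */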
if (INTEL_INFO(dev)->gen > 3)
I915_WRITE(VSYNCSHIFT(cpu_transcoder), vsyncshift);
I915_WRITE(HTOTAL(cpu_transcoder),
(adjusted_mode->crtc_hdisplay - 1) |
((adjusted_mode->crtc_htotal - 1) << 16));
I915_WRITE(HBLANK(cpu_transcoder),
(adjusted_mode->crtc_hblank_start - 1) |
((adjusted_mode->crtc_hblank_end - 1) << 16));
I915_WRITE(HSYNC(cpu_transcoder),
(adjusted_mode->crtc_hsync_start - 1) |
((adjusted_mode->crtc_hsync_end - 1) << 16));
I915_WRITE(VTOTAL(cpu_transcoder),
(adjusted_mode->crtc_vdisplay - 1) |
((adjusted_mode->crtc_vtotal - 1) << 16));
I915_WRITE(VBLANK(cpu_transcoder),
(adjusted_mode->crtc_vblank_start - 1) |
((adjusted_mode->crtc_vblank_end - 1) << 16));
I915_WRITE(VSYNC(cpu_transcoder),
(adjusted_mode->crtc_vsync_start - 1) |
((adjusted_mode->crtc_vsync_end - 1) << 16));
/* Workaround: when the EDP input selection is B, the VTOTAL_B must be
* programmed with the VTOTAL_EDP value. Same for VTOTAL_C. This is
* documented on the DDI_FUNC_CTL register description, EDP Input Select
* bits. */
if (IS_HASWELL(dev) && cpu_transcoder == TRANSCODER_EDP &&
(pipe == PIPE_B || pipe == PIPE_C))
I915_WRITE(VTOTAL(pipe), I915_READ(VTOTAL(cpu_transcoder)));
/* pipesrc controls the size that is scaled from, which should
* always be the user's requested size.
*/
I915_WRITE(PIPESRC(pipe),
((mode->hdisplay - 1) << 16) | (mode->vdisplay - 1));
}
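/*
 * Illustrative note (not in the original source): each timing
 * register packs two values minus one, active in the low 16 bits and
 * total in the high 16. For a 1920-wide mode with htotal = 2200 the
 * HTOTAL write would be (1920 - 1) | ((2200 - 1) << 16) = 0x0897077F.
 */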
static int i9xx_crtc_mode_set(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
int x, int y,
struct drm_framebuffer *fb)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
int refclk, num_connectors = 0;
intel_clock_t clock, reduced_clock;
u32 dspcntr, pipeconf;
bool ok, has_reduced_clock = false, is_sdvo = false;
bool is_lvds = false, is_tv = false, is_dp = false;
struct intel_encoder *encoder;
const intel_limit_t *limit;
int ret;
for_each_encoder_on_crtc(dev, crtc, encoder) {
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_SDVO:
case INTEL_OUTPUT_HDMI:
is_sdvo = true;
if (encoder->needs_tv_clock)
is_tv = true;
break;
case INTEL_OUTPUT_TVOUT:
is_tv = true;
break;
case INTEL_OUTPUT_DISPLAYPORT:
is_dp = true;
break;
}
num_connectors++;
}
refclk = i9xx_get_refclk(crtc, num_connectors);
/*
* Returns a set of divisors for the desired target clock with the given
* refclk, or FALSE. The returned values represent the clock equation:
* refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2.
*/
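/*
 * Worked example (illustrative divider values only): with refclk =
 * 96000, n = 3, m1 = 16, m2 = 8, p1 = 2, p2 = 10 the equation gives
 * 96000 * (5 * 18 + 10) / 5 / 2 / 10 = 96000 kHz, i.e. a 96 MHz dot
 * clock.
 */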
limit = intel_limit(crtc, refclk);
ok = limit->find_pll(limit, crtc, adjusted_mode->clock, refclk, NULL,
&clock);
if (!ok) {
DRM_ERROR("Couldn't find PLL settings for mode!\n");
return -EINVAL;
}
/* Ensure that the cursor is valid for the new mode before changing... */
intel_crtc_update_cursor(crtc, true);
if (is_lvds && dev_priv->lvds_downclock_avail) {
/*
* Ensure we match the reduced clock's P to the target clock.
* If the clocks don't match, we can't switch the display clock
* by using the FP0/FP1. In such a case we will disable the LVDS
* downclock feature.
*/
has_reduced_clock = limit->find_pll(limit, crtc,
dev_priv->lvds_downclock,
refclk,
&clock,
&reduced_clock);
}
if (is_sdvo && is_tv)
i9xx_adjust_sdvo_tv_clock(adjusted_mode, &clock);
if (IS_GEN2(dev))
i8xx_update_pll(crtc, adjusted_mode, &clock,
has_reduced_clock ? &reduced_clock : NULL,
num_connectors);
else if (IS_VALLEYVIEW(dev))
vlv_update_pll(crtc, mode, adjusted_mode, &clock,
has_reduced_clock ? &reduced_clock : NULL,
num_connectors);
else
i9xx_update_pll(crtc, mode, adjusted_mode, &clock,
has_reduced_clock ? &reduced_clock : NULL,
num_connectors);
/* setup pipeconf */
pipeconf = I915_READ(PIPECONF(pipe));
/* Set up the display plane register */
dspcntr = DISPPLANE_GAMMA_ENABLE;
if (pipe == 0)
dspcntr &= ~DISPPLANE_SEL_PIPE_MASK;
else
dspcntr |= DISPPLANE_SEL_PIPE_B;
if (pipe == 0 && INTEL_INFO(dev)->gen < 4) {
/* Enable pixel doubling when the dot clock is > 90% of the (display)
* core speed.
*
* XXX: No double-wide on 915GM pipe B. Is that the only reason for the
* pipe == 0 check?
*/
if (mode->clock >
dev_priv->display.get_display_clock_speed(dev) * 9 / 10)
pipeconf |= PIPECONF_DOUBLE_WIDE;
else
pipeconf &= ~PIPECONF_DOUBLE_WIDE;
}
/* default to 8bpc */
pipeconf &= ~(PIPECONF_BPP_MASK | PIPECONF_DITHER_EN);
if (is_dp) {
if (adjusted_mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
pipeconf |= PIPECONF_BPP_6 |
PIPECONF_DITHER_EN |
PIPECONF_DITHER_TYPE_SP;
}
}
if (IS_VALLEYVIEW(dev) && intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP)) {
if (adjusted_mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
pipeconf |= PIPECONF_BPP_6 |
PIPECONF_ENABLE |
I965_PIPECONF_ACTIVE;
}
}
DRM_DEBUG_KMS("Mode for pipe %c:\n", pipe == 0 ? 'A' : 'B');
drm_mode_debug_printmodeline(mode);
if (HAS_PIPE_CXSR(dev)) {
if (intel_crtc->lowfreq_avail) {
DRM_DEBUG_KMS("enabling CxSR downclocking\n");
pipeconf |= PIPECONF_CXSR_DOWNCLOCK;
} else {
DRM_DEBUG_KMS("disabling CxSR downclocking\n");
pipeconf &= ~PIPECONF_CXSR_DOWNCLOCK;
}
}
pipeconf &= ~PIPECONF_INTERLACE_MASK;
if (!IS_GEN2(dev) &&
adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
pipeconf |= PIPECONF_INTERLACE_W_FIELD_INDICATION;
else
pipeconf |= PIPECONF_PROGRESSIVE;
intel_set_pipe_timings(intel_crtc, mode, adjusted_mode);
/* pipesrc and dspsize control the size that is scaled from,
* which should always be the user's requested size.
*/
I915_WRITE(DSPSIZE(plane),
((mode->vdisplay - 1) << 16) |
(mode->hdisplay - 1));
I915_WRITE(DSPPOS(plane), 0);
I915_WRITE(PIPECONF(pipe), pipeconf);
POSTING_READ(PIPECONF(pipe));
intel_enable_pipe(dev_priv, pipe, false);
intel_wait_for_vblank(dev, pipe);
I915_WRITE(DSPCNTR(plane), dspcntr);
POSTING_READ(DSPCNTR(plane));
ret = intel_pipe_set_base(crtc, x, y, fb);
intel_update_watermarks(dev);
return ret;
}
static void ironlake_init_pch_refclk(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_mode_config *mode_config = &dev->mode_config;
struct intel_encoder *encoder;
u32 temp;
bool has_lvds = false;
bool has_cpu_edp = false;
bool has_pch_edp = false;
bool has_panel = false;
bool has_ck505 = false;
bool can_ssc = false;
/* We need to take the global config into account */
list_for_each_entry(encoder, &mode_config->encoder_list,
base.head) {
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
has_panel = true;
has_lvds = true;
break;
case INTEL_OUTPUT_EDP:
has_panel = true;
if (intel_encoder_is_pch_edp(&encoder->base))
has_pch_edp = true;
else
has_cpu_edp = true;
break;
}
}
if (HAS_PCH_IBX(dev)) {
has_ck505 = dev_priv->display_clock_mode;
can_ssc = has_ck505;
} else {
has_ck505 = false;
can_ssc = true;
}
DRM_DEBUG_KMS("has_panel %d has_lvds %d has_pch_edp %d has_cpu_edp %d has_ck505 %d\n",
has_panel, has_lvds, has_pch_edp, has_cpu_edp,
has_ck505);
/* Ironlake: try to set up the display reference clock before
* enabling the DPLL. This is only under the driver's control
* after the PCH B stepping; earlier chipset steppings should
* ignore this setting.
*/
temp = I915_READ(PCH_DREF_CONTROL);
/* Always enable nonspread source */
temp &= ~DREF_NONSPREAD_SOURCE_MASK;
if (has_ck505)
temp |= DREF_NONSPREAD_CK505_ENABLE;
else
temp |= DREF_NONSPREAD_SOURCE_ENABLE;
if (has_panel) {
temp &= ~DREF_SSC_SOURCE_MASK;
temp |= DREF_SSC_SOURCE_ENABLE;
/* SSC must be turned on before enabling the CPU output */
if (intel_panel_use_ssc(dev_priv) && can_ssc) {
DRM_DEBUG_KMS("Using SSC on panel\n");
temp |= DREF_SSC1_ENABLE;
} else
temp &= ~DREF_SSC1_ENABLE;
/* Get SSC going before enabling the outputs */
I915_WRITE(PCH_DREF_CONTROL, temp);
POSTING_READ(PCH_DREF_CONTROL);
udelay(200);
temp &= ~DREF_CPU_SOURCE_OUTPUT_MASK;
/* Enable CPU source on CPU attached eDP */
if (has_cpu_edp) {
if (intel_panel_use_ssc(dev_priv) && can_ssc) {
DRM_DEBUG_KMS("Using SSC on eDP\n");
temp |= DREF_CPU_SOURCE_OUTPUT_DOWNSPREAD;
}
else
temp |= DREF_CPU_SOURCE_OUTPUT_NONSPREAD;
} else
temp |= DREF_CPU_SOURCE_OUTPUT_DISABLE;
I915_WRITE(PCH_DREF_CONTROL, temp);
POSTING_READ(PCH_DREF_CONTROL);
udelay(200);
} else {
DRM_DEBUG_KMS("Disabling SSC entirely\n");
temp &= ~DREF_CPU_SOURCE_OUTPUT_MASK;
/* Turn off CPU output */
temp |= DREF_CPU_SOURCE_OUTPUT_DISABLE;
I915_WRITE(PCH_DREF_CONTROL, temp);
POSTING_READ(PCH_DREF_CONTROL);
udelay(200);
/* Turn off the SSC source */
temp &= ~DREF_SSC_SOURCE_MASK;
temp |= DREF_SSC_SOURCE_DISABLE;
/* Turn off SSC1 */
temp &= ~DREF_SSC1_ENABLE;
I915_WRITE(PCH_DREF_CONTROL, temp);
POSTING_READ(PCH_DREF_CONTROL);
udelay(200);
}
}
/* Sequence to enable CLKOUT_DP for FDI usage and configure PCH FDI I/O. */
static void lpt_init_pch_refclk(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_mode_config *mode_config = &dev->mode_config;
struct intel_encoder *encoder;
bool has_vga = false;
bool is_sdv = false;
u32 tmp;
list_for_each_entry(encoder, &mode_config->encoder_list, base.head) {
switch (encoder->type) {
case INTEL_OUTPUT_ANALOG:
has_vga = true;
break;
}
}
if (!has_vga)
return;
/* XXX: Rip out SDV support once Haswell ships for real. */
if (IS_HASWELL(dev) && (dev->pci_device & 0xFF00) == 0x0C00)
is_sdv = true;
tmp = intel_sbi_read(dev_priv, SBI_SSCCTL, SBI_ICLK);
tmp &= ~SBI_SSCCTL_DISABLE;
tmp |= SBI_SSCCTL_PATHALT;
intel_sbi_write(dev_priv, SBI_SSCCTL, tmp, SBI_ICLK);
udelay(24);
tmp = intel_sbi_read(dev_priv, SBI_SSCCTL, SBI_ICLK);
tmp &= ~SBI_SSCCTL_PATHALT;
intel_sbi_write(dev_priv, SBI_SSCCTL, tmp, SBI_ICLK);
if (!is_sdv) {
tmp = I915_READ(SOUTH_CHICKEN2);
tmp |= FDI_MPHY_IOSFSB_RESET_CTL;
I915_WRITE(SOUTH_CHICKEN2, tmp);
if (wait_for_atomic_us(I915_READ(SOUTH_CHICKEN2) &
FDI_MPHY_IOSFSB_RESET_STATUS, 100))
DRM_ERROR("FDI mPHY reset assert timeout\n");
tmp = I915_READ(SOUTH_CHICKEN2);
tmp &= ~FDI_MPHY_IOSFSB_RESET_CTL;
I915_WRITE(SOUTH_CHICKEN2, tmp);
if (wait_for_atomic_us((I915_READ(SOUTH_CHICKEN2) &
FDI_MPHY_IOSFSB_RESET_STATUS) == 0,
100))
DRM_ERROR("FDI mPHY reset de-assert timeout\n");
}
tmp = intel_sbi_read(dev_priv, 0x8008, SBI_MPHY);
tmp &= ~(0xFF << 24);
tmp |= (0x12 << 24);
intel_sbi_write(dev_priv, 0x8008, tmp, SBI_MPHY);
if (!is_sdv) {
tmp = intel_sbi_read(dev_priv, 0x808C, SBI_MPHY);
tmp &= ~(0x3 << 6);
tmp |= (1 << 6) | (1 << 0);
intel_sbi_write(dev_priv, 0x808C, tmp, SBI_MPHY);
}
if (is_sdv) {
tmp = intel_sbi_read(dev_priv, 0x800C, SBI_MPHY);
tmp |= 0x7FFF;
intel_sbi_write(dev_priv, 0x800C, tmp, SBI_MPHY);
}
tmp = intel_sbi_read(dev_priv, 0x2008, SBI_MPHY);
tmp |= (1 << 11);
intel_sbi_write(dev_priv, 0x2008, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x2108, SBI_MPHY);
tmp |= (1 << 11);
intel_sbi_write(dev_priv, 0x2108, tmp, SBI_MPHY);
if (is_sdv) {
tmp = intel_sbi_read(dev_priv, 0x2038, SBI_MPHY);
tmp |= (0x3F << 24) | (0xF << 20) | (0xF << 16);
intel_sbi_write(dev_priv, 0x2038, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x2138, SBI_MPHY);
tmp |= (0x3F << 24) | (0xF << 20) | (0xF << 16);
intel_sbi_write(dev_priv, 0x2138, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x203C, SBI_MPHY);
tmp |= (0x3F << 8);
intel_sbi_write(dev_priv, 0x203C, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x213C, SBI_MPHY);
tmp |= (0x3F << 8);
intel_sbi_write(dev_priv, 0x213C, tmp, SBI_MPHY);
}
tmp = intel_sbi_read(dev_priv, 0x206C, SBI_MPHY);
tmp |= (1 << 24) | (1 << 21) | (1 << 18);
intel_sbi_write(dev_priv, 0x206C, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x216C, SBI_MPHY);
tmp |= (1 << 24) | (1 << 21) | (1 << 18);
intel_sbi_write(dev_priv, 0x216C, tmp, SBI_MPHY);
if (!is_sdv) {
tmp = intel_sbi_read(dev_priv, 0x2080, SBI_MPHY);
tmp &= ~(7 << 13);
tmp |= (5 << 13);
intel_sbi_write(dev_priv, 0x2080, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x2180, SBI_MPHY);
tmp &= ~(7 << 13);
tmp |= (5 << 13);
intel_sbi_write(dev_priv, 0x2180, tmp, SBI_MPHY);
}
tmp = intel_sbi_read(dev_priv, 0x208C, SBI_MPHY);
tmp &= ~0xFF;
tmp |= 0x1C;
intel_sbi_write(dev_priv, 0x208C, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x218C, SBI_MPHY);
tmp &= ~0xFF;
tmp |= 0x1C;
intel_sbi_write(dev_priv, 0x218C, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x2098, SBI_MPHY);
tmp &= ~(0xFF << 16);
tmp |= (0x1C << 16);
intel_sbi_write(dev_priv, 0x2098, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x2198, SBI_MPHY);
tmp &= ~(0xFF << 16);
tmp |= (0x1C << 16);
intel_sbi_write(dev_priv, 0x2198, tmp, SBI_MPHY);
if (!is_sdv) {
tmp = intel_sbi_read(dev_priv, 0x20C4, SBI_MPHY);
tmp |= (1 << 27);
intel_sbi_write(dev_priv, 0x20C4, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x21C4, SBI_MPHY);
tmp |= (1 << 27);
intel_sbi_write(dev_priv, 0x21C4, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x20EC, SBI_MPHY);
tmp &= ~(0xF << 28);
tmp |= (4 << 28);
intel_sbi_write(dev_priv, 0x20EC, tmp, SBI_MPHY);
tmp = intel_sbi_read(dev_priv, 0x21EC, SBI_MPHY);
tmp &= ~(0xF << 28);
tmp |= (4 << 28);
intel_sbi_write(dev_priv, 0x21EC, tmp, SBI_MPHY);
}
/* ULT uses SBI_GEN0, but ULT doesn't have VGA, so we don't care. */
tmp = intel_sbi_read(dev_priv, SBI_DBUFF0, SBI_ICLK);
tmp |= SBI_DBUFF0_ENABLE;
intel_sbi_write(dev_priv, SBI_DBUFF0, tmp, SBI_ICLK);
}
/*
* Initialize reference clocks when the driver loads
*/
void intel_init_pch_refclk(struct drm_device *dev)
{
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
ironlake_init_pch_refclk(dev);
else if (HAS_PCH_LPT(dev))
lpt_init_pch_refclk(dev);
}
static int ironlake_get_refclk(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_encoder *encoder;
struct intel_encoder *edp_encoder = NULL;
int num_connectors = 0;
bool is_lvds = false;
for_each_encoder_on_crtc(dev, crtc, encoder) {
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_EDP:
edp_encoder = encoder;
break;
}
num_connectors++;
}
if (is_lvds && intel_panel_use_ssc(dev_priv) && num_connectors < 2) {
DRM_DEBUG_KMS("using SSC reference clock of %d MHz\n",
dev_priv->lvds_ssc_freq);
return dev_priv->lvds_ssc_freq * 1000;
}
return 120000;
}
static void ironlake_set_pipeconf(struct drm_crtc *crtc,
struct drm_display_mode *adjusted_mode,
bool dither)
{
struct drm_i915_private *dev_priv = crtc->dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
uint32_t val;
val = I915_READ(PIPECONF(pipe));
val &= ~PIPE_BPC_MASK;
switch (intel_crtc->bpp) {
case 18:
val |= PIPE_6BPC;
break;
case 24:
val |= PIPE_8BPC;
break;
case 30:
val |= PIPE_10BPC;
break;
case 36:
val |= PIPE_12BPC;
break;
default:
/* Case prevented by intel_choose_pipe_bpp_dither. */
BUG();
}
val &= ~(PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_MASK);
if (dither)
val |= (PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_SP);
val &= ~PIPECONF_INTERLACE_MASK;
if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
val |= PIPECONF_INTERLACED_ILK;
else
val |= PIPECONF_PROGRESSIVE;
I915_WRITE(PIPECONF(pipe), val);
POSTING_READ(PIPECONF(pipe));
}
static void haswell_set_pipeconf(struct drm_crtc *crtc,
struct drm_display_mode *adjusted_mode,
bool dither)
{
struct drm_i915_private *dev_priv = crtc->dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
uint32_t val;
val = I915_READ(PIPECONF(cpu_transcoder));
val &= ~(PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_MASK);
if (dither)
val |= (PIPECONF_DITHER_EN | PIPECONF_DITHER_TYPE_SP);
val &= ~PIPECONF_INTERLACE_MASK_HSW;
if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
val |= PIPECONF_INTERLACED_ILK;
else
val |= PIPECONF_PROGRESSIVE;
I915_WRITE(PIPECONF(cpu_transcoder), val);
POSTING_READ(PIPECONF(cpu_transcoder));
}
static bool ironlake_compute_clocks(struct drm_crtc *crtc,
struct drm_display_mode *adjusted_mode,
intel_clock_t *clock,
bool *has_reduced_clock,
intel_clock_t *reduced_clock)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_encoder *intel_encoder;
int refclk;
const intel_limit_t *limit;
bool ret, is_sdvo = false, is_tv = false, is_lvds = false;
for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
switch (intel_encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_SDVO:
case INTEL_OUTPUT_HDMI:
is_sdvo = true;
if (intel_encoder->needs_tv_clock)
is_tv = true;
break;
case INTEL_OUTPUT_TVOUT:
is_tv = true;
break;
}
}
refclk = ironlake_get_refclk(crtc);
/*
* Returns a set of divisors for the desired target clock with the given
* refclk, or FALSE. The returned values represent the clock equation:
* refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2.
*/
limit = intel_limit(crtc, refclk);
ret = limit->find_pll(limit, crtc, adjusted_mode->clock, refclk, NULL,
clock);
if (!ret)
return false;
if (is_lvds && dev_priv->lvds_downclock_avail) {
/*
* Ensure we match the reduced clock's P to the target clock.
* If the clocks don't match, we can't switch the display clock
* by using the FP0/FP1. In such a case we will disable the LVDS
* downclock feature.
*/
*has_reduced_clock = limit->find_pll(limit, crtc,
dev_priv->lvds_downclock,
refclk,
clock,
reduced_clock);
}
if (is_sdvo && is_tv)
i9xx_adjust_sdvo_tv_clock(adjusted_mode, clock);
return true;
}
static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t temp;
temp = I915_READ(SOUTH_CHICKEN1);
if (temp & FDI_BC_BIFURCATION_SELECT)
return;
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
temp |= FDI_BC_BIFURCATION_SELECT;
DRM_DEBUG_KMS("enabling fdi C rx\n");
I915_WRITE(SOUTH_CHICKEN1, temp);
POSTING_READ(SOUTH_CHICKEN1);
}
static bool ironlake_check_fdi_lanes(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *pipe_B_crtc =
to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
DRM_DEBUG_KMS("checking fdi config on pipe %i, lanes %i\n",
intel_crtc->pipe, intel_crtc->fdi_lanes);
if (intel_crtc->fdi_lanes > 4) {
DRM_DEBUG_KMS("invalid fdi lane config on pipe %i: %i lanes\n",
intel_crtc->pipe, intel_crtc->fdi_lanes);
/* Clamp lanes to avoid programming the hw with bogus values. */
intel_crtc->fdi_lanes = 4;
return false;
}
if (dev_priv->num_pipe == 2)
return true;
switch (intel_crtc->pipe) {
case PIPE_A:
return true;
case PIPE_B:
if (dev_priv->pipe_to_crtc_mapping[PIPE_C]->enabled &&
intel_crtc->fdi_lanes > 2) {
DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %i: %i lanes\n",
intel_crtc->pipe, intel_crtc->fdi_lanes);
/* Clamp lanes to avoid programming the hw with bogus values. */
intel_crtc->fdi_lanes = 2;
return false;
}
if (intel_crtc->fdi_lanes > 2)
WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
else
cpt_enable_fdi_bc_bifurcation(dev);
return true;
case PIPE_C:
if (!pipe_B_crtc->base.enabled || pipe_B_crtc->fdi_lanes <= 2) {
if (intel_crtc->fdi_lanes > 2) {
DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %i: %i lanes\n",
intel_crtc->pipe, intel_crtc->fdi_lanes);
/* Clamp lanes to avoid programming the hw with bogus values. */
intel_crtc->fdi_lanes = 2;
return false;
}
} else {
DRM_DEBUG_KMS("fdi link B uses too many lanes to enable link C\n");
return false;
}
cpt_enable_fdi_bc_bifurcation(dev);
return true;
default:
BUG();
}
}
int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp)
{
/*
* Account for spread spectrum to avoid
* oversubscribing the link. Max center spread
* is 2.5%; use 5% for safety's sake.
*/
u32 bps = target_clock * bpp * 21 / 20;
return bps / (link_bw * 8) + 1;
}
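/*
 * Worked example (illustrative only): target_clock = 148500 (kHz),
 * bpp = 24 and link_bw = 270000 give bps = 148500 * 24 * 21 / 20 =
 * 3742200, and 3742200 / (270000 * 8) + 1 = 2 lanes.
 */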
static void ironlake_set_m_n(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
struct intel_encoder *intel_encoder, *edp_encoder = NULL;
struct fdi_m_n m_n = {0};
int target_clock, pixel_multiplier, lane, link_bw;
bool is_dp = false, is_cpu_edp = false;
for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
switch (intel_encoder->type) {
case INTEL_OUTPUT_DISPLAYPORT:
is_dp = true;
break;
case INTEL_OUTPUT_EDP:
is_dp = true;
if (!intel_encoder_is_pch_edp(&intel_encoder->base))
is_cpu_edp = true;
edp_encoder = intel_encoder;
break;
}
}
/* FDI link */
pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode);
lane = 0;
/* CPU eDP doesn't require an FDI link, so just set DP M/N
according to the current link config */
if (is_cpu_edp) {
intel_edp_link_config(edp_encoder, &lane, &link_bw);
} else {
/* FDI is a binary signal running at ~2.7GHz, encoding
* each output octet as 10 bits. The actual frequency
* is stored as a divider into a 100MHz clock, and the
* mode pixel clock is stored in units of 1KHz.
* Hence the bw of each lane in terms of the mode signal
* is:
*/
link_bw = intel_fdi_link_freq(dev) * MHz(100)/KHz(1)/10;
}
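/*
 * Illustrative arithmetic (not in the original source, and assuming
 * MHz()/KHz() expand to the usual 10^6/10^3 multipliers): if
 * intel_fdi_link_freq() reports 27 (a 2.7 GHz link), link_bw =
 * 27 * 100000 / 10 = 270000, the per-lane octet rate in the same
 * 1 kHz units as the mode's pixel clock.
 */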
/* [e]DP over FDI requires target mode clock instead of link clock. */
if (edp_encoder)
target_clock = intel_edp_target_clock(edp_encoder, mode);
else if (is_dp)
target_clock = mode->clock;
else
target_clock = adjusted_mode->clock;
if (!lane)
lane = ironlake_get_lanes_required(target_clock, link_bw,
intel_crtc->bpp);
intel_crtc->fdi_lanes = lane;
if (pixel_multiplier > 1)
link_bw *= pixel_multiplier;
ironlake_compute_m_n(intel_crtc->bpp, lane, target_clock, link_bw,
&m_n);
I915_WRITE(PIPE_DATA_M1(cpu_transcoder), TU_SIZE(m_n.tu) | m_n.gmch_m);
I915_WRITE(PIPE_DATA_N1(cpu_transcoder), m_n.gmch_n);
I915_WRITE(PIPE_LINK_M1(cpu_transcoder), m_n.link_m);
I915_WRITE(PIPE_LINK_N1(cpu_transcoder), m_n.link_n);
}
static uint32_t ironlake_compute_dpll(struct intel_crtc *intel_crtc,
struct drm_display_mode *adjusted_mode,
intel_clock_t *clock, u32 fp)
{
struct drm_crtc *crtc = &intel_crtc->base;
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_encoder *intel_encoder;
uint32_t dpll;
int factor, pixel_multiplier, num_connectors = 0;
bool is_lvds = false, is_sdvo = false, is_tv = false;
bool is_dp = false, is_cpu_edp = false;
for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
switch (intel_encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_SDVO:
case INTEL_OUTPUT_HDMI:
is_sdvo = true;
if (intel_encoder->needs_tv_clock)
is_tv = true;
break;
case INTEL_OUTPUT_TVOUT:
is_tv = true;
break;
case INTEL_OUTPUT_DISPLAYPORT:
is_dp = true;
break;
case INTEL_OUTPUT_EDP:
is_dp = true;
if (!intel_encoder_is_pch_edp(&intel_encoder->base))
is_cpu_edp = true;
break;
}
num_connectors++;
}
/* Enable autotuning of the PLL clock (if permissible) */
factor = 21;
if (is_lvds) {
if ((intel_panel_use_ssc(dev_priv) &&
dev_priv->lvds_ssc_freq == 100) ||
(I915_READ(PCH_LVDS) & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP)
factor = 25;
} else if (is_sdvo && is_tv)
factor = 20;
if (clock->m < factor * clock->n)
fp |= FP_CB_TUNE;
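/*
 * Illustrative note (hypothetical divider values): with factor = 21,
 * n = 4 and m = 80, m < 21 * 4 = 84 holds, so FP_CB_TUNE would be
 * set, enabling PLL autotuning for the low m/n ratio.
 */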
dpll = 0;
if (is_lvds)
dpll |= DPLLB_MODE_LVDS;
else
dpll |= DPLLB_MODE_DAC_SERIAL;
if (is_sdvo) {
pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode);
if (pixel_multiplier > 1) {
dpll |= (pixel_multiplier - 1) << PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT;
}
dpll |= DPLL_DVO_HIGH_SPEED;
}
if (is_dp && !is_cpu_edp)
dpll |= DPLL_DVO_HIGH_SPEED;
/* compute bitmask from p1 value */
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
/* also FPA1 */
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT;
switch (clock->p2) {
case 5:
dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5;
break;
case 7:
dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7;
break;
case 10:
dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10;
break;
case 14:
dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14;
break;
}
if (is_sdvo && is_tv)
dpll |= PLL_REF_INPUT_TVCLKINBC;
else if (is_tv)
/* XXX: just matching BIOS for now */
/* dpll |= PLL_REF_INPUT_TVCLKINBC; */
dpll |= 3;
else if (is_lvds && intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
dpll |= PLL_REF_INPUT_DREFCLK;
return dpll;
}
static int ironlake_crtc_mode_set(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
int x, int y,
struct drm_framebuffer *fb)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
int num_connectors = 0;
intel_clock_t clock, reduced_clock;
u32 dpll, fp = 0, fp2 = 0;
bool ok, has_reduced_clock = false;
bool is_lvds = false, is_dp = false, is_cpu_edp = false;
struct intel_encoder *encoder;
u32 temp;
int ret;
bool dither, fdi_config_ok;
for_each_encoder_on_crtc(dev, crtc, encoder) {
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_DISPLAYPORT:
is_dp = true;
break;
case INTEL_OUTPUT_EDP:
is_dp = true;
if (!intel_encoder_is_pch_edp(&encoder->base))
is_cpu_edp = true;
break;
}
num_connectors++;
}
WARN(!(HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)),
"Unexpected PCH type %d\n", INTEL_PCH_TYPE(dev));
ok = ironlake_compute_clocks(crtc, adjusted_mode, &clock,
&has_reduced_clock, &reduced_clock);
if (!ok) {
DRM_ERROR("Couldn't find PLL settings for mode!\n");
return -EINVAL;
}
/* Ensure that the cursor is valid for the new mode before changing... */
intel_crtc_update_cursor(crtc, true);
/* determine panel color depth */
dither = intel_choose_pipe_bpp_dither(crtc, fb, &intel_crtc->bpp,
adjusted_mode);
if (is_lvds && dev_priv->lvds_dither)
dither = true;
fp = clock.n << 16 | clock.m1 << 8 | clock.m2;
if (has_reduced_clock)
fp2 = reduced_clock.n << 16 | reduced_clock.m1 << 8 |
reduced_clock.m2;
dpll = ironlake_compute_dpll(intel_crtc, adjusted_mode, &clock, fp);
DRM_DEBUG_KMS("Mode for pipe %d:\n", pipe);
drm_mode_debug_printmodeline(mode);
/* CPU eDP is the only output that doesn't need a PCH PLL of its own. */
if (!is_cpu_edp) {
struct intel_pch_pll *pll;
pll = intel_get_pch_pll(intel_crtc, dpll, fp);
if (pll == NULL) {
DRM_DEBUG_DRIVER("failed to find PLL for pipe %d\n",
pipe);
return -EINVAL;
}
} else
intel_put_pch_pll(intel_crtc);
/* The LVDS pin pair needs to be on before the DPLLs are enabled.
* This is an exception to the general rule that mode_set doesn't turn
* things on.
*/
if (is_lvds) {
temp = I915_READ(PCH_LVDS);
temp |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
if (HAS_PCH_CPT(dev)) {
temp &= ~PORT_TRANS_SEL_MASK;
temp |= PORT_TRANS_SEL_CPT(pipe);
} else {
if (pipe == 1)
temp |= LVDS_PIPEB_SELECT;
else
temp &= ~LVDS_PIPEB_SELECT;
}
/* set the corresponding LVDS_BORDER bit */
temp |= dev_priv->lvds_border_bits;
/* Set the B0-B3 data pairs corresponding to whether we're going to
* set the DPLLs for dual-channel mode or not.
*/
if (clock.p2 == 7)
temp |= LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP;
else
temp &= ~(LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP);
/* It would be nice to set 24 vs 18-bit mode (LVDS_A3_POWER_UP)
* appropriately here, but we need to look more thoroughly into how
* panels behave in the two modes.
*/
temp &= ~(LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY);
if (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC)
temp |= LVDS_HSYNC_POLARITY;
if (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC)
temp |= LVDS_VSYNC_POLARITY;
I915_WRITE(PCH_LVDS, temp);
}
if (is_dp && !is_cpu_edp) {
intel_dp_set_m_n(crtc, mode, adjusted_mode);
} else {
/* For non-DP output, clear any trans DP clock recovery setting. */
I915_WRITE(TRANSDATA_M1(pipe), 0);
I915_WRITE(TRANSDATA_N1(pipe), 0);
I915_WRITE(TRANSDPLINK_M1(pipe), 0);
I915_WRITE(TRANSDPLINK_N1(pipe), 0);
}
if (intel_crtc->pch_pll) {
I915_WRITE(intel_crtc->pch_pll->pll_reg, dpll);
/* Wait for the clocks to stabilize. */
POSTING_READ(intel_crtc->pch_pll->pll_reg);
udelay(150);
/* The pixel multiplier can only be updated once the
* DPLL is enabled and the clocks are stable.
*
* So write it again.
*/
I915_WRITE(intel_crtc->pch_pll->pll_reg, dpll);
}
intel_crtc->lowfreq_avail = false;
if (intel_crtc->pch_pll) {
if (is_lvds && has_reduced_clock && i915_powersave) {
I915_WRITE(intel_crtc->pch_pll->fp1_reg, fp2);
intel_crtc->lowfreq_avail = true;
} else {
I915_WRITE(intel_crtc->pch_pll->fp1_reg, fp);
}
}
intel_set_pipe_timings(intel_crtc, mode, adjusted_mode);
/* Note, this also computes intel_crtc->fdi_lanes which is used below in
* ironlake_check_fdi_lanes. */
ironlake_set_m_n(crtc, mode, adjusted_mode);
fdi_config_ok = ironlake_check_fdi_lanes(intel_crtc);
if (is_cpu_edp)
ironlake_set_pll_edp(crtc, adjusted_mode->clock);
ironlake_set_pipeconf(crtc, adjusted_mode, dither);
intel_wait_for_vblank(dev, pipe);
/* Set up the display plane register */
I915_WRITE(DSPCNTR(plane), DISPPLANE_GAMMA_ENABLE);
POSTING_READ(DSPCNTR(plane));
ret = intel_pipe_set_base(crtc, x, y, fb);
intel_update_watermarks(dev);
intel_update_linetime_watermarks(dev, pipe, adjusted_mode);
return fdi_config_ok ? ret : -EINVAL;
}
static int haswell_crtc_mode_set(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
int x, int y,
struct drm_framebuffer *fb)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int plane = intel_crtc->plane;
int num_connectors = 0;
intel_clock_t clock, reduced_clock;
u32 dpll = 0, fp = 0, fp2 = 0;
bool ok, has_reduced_clock = false;
bool is_lvds = false, is_dp = false, is_cpu_edp = false;
struct intel_encoder *encoder;
u32 temp;
int ret;
bool dither;
for_each_encoder_on_crtc(dev, crtc, encoder) {
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_DISPLAYPORT:
is_dp = true;
break;
case INTEL_OUTPUT_EDP:
is_dp = true;
if (!intel_encoder_is_pch_edp(&encoder->base))
is_cpu_edp = true;
break;
}
num_connectors++;
}
if (is_cpu_edp)
intel_crtc->cpu_transcoder = TRANSCODER_EDP;
else
intel_crtc->cpu_transcoder = pipe;
/* We are not sure yet this won't happen. */
WARN(!HAS_PCH_LPT(dev), "Unexpected PCH type %d\n",
INTEL_PCH_TYPE(dev));
WARN(num_connectors != 1, "%d connectors attached to pipe %c\n",
num_connectors, pipe_name(pipe));
WARN_ON(I915_READ(PIPECONF(intel_crtc->cpu_transcoder)) &
(PIPECONF_ENABLE | I965_PIPECONF_ACTIVE));
WARN_ON(I915_READ(DSPCNTR(plane)) & DISPLAY_PLANE_ENABLE);
if (!intel_ddi_pll_mode_set(crtc, adjusted_mode->clock))
return -EINVAL;
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)) {
ok = ironlake_compute_clocks(crtc, adjusted_mode, &clock,
&has_reduced_clock,
&reduced_clock);
if (!ok) {
DRM_ERROR("Couldn't find PLL settings for mode!\n");
return -EINVAL;
}
}
/* Ensure that the cursor is valid for the new mode before changing... */
intel_crtc_update_cursor(crtc, true);
/* determine panel color depth */
dither = intel_choose_pipe_bpp_dither(crtc, fb, &intel_crtc->bpp,
adjusted_mode);
if (is_lvds && dev_priv->lvds_dither)
dither = true;
DRM_DEBUG_KMS("Mode for pipe %d:\n", pipe);
drm_mode_debug_printmodeline(mode);
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)) {
fp = clock.n << 16 | clock.m1 << 8 | clock.m2;
if (has_reduced_clock)
fp2 = reduced_clock.n << 16 | reduced_clock.m1 << 8 |
reduced_clock.m2;
dpll = ironlake_compute_dpll(intel_crtc, adjusted_mode, &clock,
fp);
/* CPU eDP is the only output that doesn't need a PCH PLL of its
* own on pre-Haswell/LPT generation */
if (!is_cpu_edp) {
struct intel_pch_pll *pll;
pll = intel_get_pch_pll(intel_crtc, dpll, fp);
if (pll == NULL) {
DRM_DEBUG_DRIVER("failed to find PLL for pipe %d\n",
pipe);
return -EINVAL;
}
} else
intel_put_pch_pll(intel_crtc);
/* The LVDS pin pair needs to be on before the DPLLs are
* enabled. This is an exception to the general rule that
* mode_set doesn't turn things on.
*/
if (is_lvds) {
temp = I915_READ(PCH_LVDS);
temp |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
if (HAS_PCH_CPT(dev)) {
temp &= ~PORT_TRANS_SEL_MASK;
temp |= PORT_TRANS_SEL_CPT(pipe);
} else {
if (pipe == 1)
temp |= LVDS_PIPEB_SELECT;
else
temp &= ~LVDS_PIPEB_SELECT;
}
/* set the corresponding LVDS_BORDER bit */
temp |= dev_priv->lvds_border_bits;
/* Set the B0-B3 data pairs corresponding to whether
* we're going to set the DPLLs for dual-channel mode or
* not.
*/
if (clock.p2 == 7)
temp |= LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP;
else
temp &= ~(LVDS_B0B3_POWER_UP |
LVDS_CLKB_POWER_UP);
/* It would be nice to set 24 vs 18-bit mode
* (LVDS_A3_POWER_UP) appropriately here, but we need to
* look more thoroughly into how panels behave in the
* two modes.
*/
temp &= ~(LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY);
if (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC)
temp |= LVDS_HSYNC_POLARITY;
if (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC)
temp |= LVDS_VSYNC_POLARITY;
I915_WRITE(PCH_LVDS, temp);
}
}
if (is_dp && !is_cpu_edp) {
intel_dp_set_m_n(crtc, mode, adjusted_mode);
} else {
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)) {
/* For non-DP output, clear any trans DP clock recovery
* setting. */
I915_WRITE(TRANSDATA_M1(pipe), 0);
I915_WRITE(TRANSDATA_N1(pipe), 0);
I915_WRITE(TRANSDPLINK_M1(pipe), 0);
I915_WRITE(TRANSDPLINK_N1(pipe), 0);
}
}
intel_crtc->lowfreq_avail = false;
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)) {
if (intel_crtc->pch_pll) {
I915_WRITE(intel_crtc->pch_pll->pll_reg, dpll);
/* Wait for the clocks to stabilize. */
POSTING_READ(intel_crtc->pch_pll->pll_reg);
udelay(150);
/* The pixel multiplier can only be updated once the
* DPLL is enabled and the clocks are stable.
*
* So write it again.
*/
I915_WRITE(intel_crtc->pch_pll->pll_reg, dpll);
}
if (intel_crtc->pch_pll) {
if (is_lvds && has_reduced_clock && i915_powersave) {
I915_WRITE(intel_crtc->pch_pll->fp1_reg, fp2);
intel_crtc->lowfreq_avail = true;
} else {
I915_WRITE(intel_crtc->pch_pll->fp1_reg, fp);
}
}
}
intel_set_pipe_timings(intel_crtc, mode, adjusted_mode);
if (!is_dp || is_cpu_edp)
ironlake_set_m_n(crtc, mode, adjusted_mode);
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
if (is_cpu_edp)
ironlake_set_pll_edp(crtc, adjusted_mode->clock);
haswell_set_pipeconf(crtc, adjusted_mode, dither);
/* Set up the display plane register */
I915_WRITE(DSPCNTR(plane), DISPPLANE_GAMMA_ENABLE);
POSTING_READ(DSPCNTR(plane));
ret = intel_pipe_set_base(crtc, x, y, fb);
intel_update_watermarks(dev);
intel_update_linetime_watermarks(dev, pipe, adjusted_mode);
return ret;
}
static int intel_crtc_mode_set(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
int x, int y,
struct drm_framebuffer *fb)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_encoder_helper_funcs *encoder_funcs;
struct intel_encoder *encoder;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int ret;
drm_vblank_pre_modeset(dev, pipe);
ret = dev_priv->display.crtc_mode_set(crtc, mode, adjusted_mode,
x, y, fb);
drm_vblank_post_modeset(dev, pipe);
if (ret != 0)
return ret;
for_each_encoder_on_crtc(dev, crtc, encoder) {
DRM_DEBUG_KMS("[ENCODER:%d:%s] set [MODE:%d:%s]\n",
encoder->base.base.id,
drm_get_encoder_name(&encoder->base),
mode->base.id, mode->name);
encoder_funcs = encoder->base.helper_private;
encoder_funcs->mode_set(&encoder->base, mode, adjusted_mode);
}
return 0;
}
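/* Returns true if the hardware ELD already matches connector->eld, so
 * the rewrite can be skipped. An empty ELD is up to date only if the
 * valid bits are clear; otherwise the ELD address is rewound and the
 * buffer compared dword by dword.
 */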
static bool intel_eld_uptodate(struct drm_connector *connector,
int reg_eldv, uint32_t bits_eldv,
int reg_elda, uint32_t bits_elda,
int reg_edid)
{
struct drm_i915_private *dev_priv = connector->dev->dev_private;
uint8_t *eld = connector->eld;
uint32_t i;
i = I915_READ(reg_eldv);
i &= bits_eldv;
if (!eld[0])
return !i;
if (!i)
return false;
i = I915_READ(reg_elda);
i &= ~bits_elda;
I915_WRITE(reg_elda, i);
for (i = 0; i < eld[2]; i++)
if (I915_READ(reg_edid) != *((uint32_t *)eld + i))
return false;
return true;
}
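/* Write the ELD (EDID-Like Data, describing the monitor's audio
 * capabilities) for this connector to the G4x audio registers, choosing
 * the valid bit according to the audio device the hardware reports.
 */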
static void g4x_write_eld(struct drm_connector *connector,
struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = connector->dev->dev_private;
uint8_t *eld = connector->eld;
uint32_t eldv;
uint32_t len;
uint32_t i;
i = I915_READ(G4X_AUD_VID_DID);
if (i == INTEL_AUDIO_DEVBLC || i == INTEL_AUDIO_DEVCL)
eldv = G4X_ELDV_DEVCL_DEVBLC;
else
eldv = G4X_ELDV_DEVCTG;
if (intel_eld_uptodate(connector,
G4X_AUD_CNTL_ST, eldv,
G4X_AUD_CNTL_ST, G4X_ELD_ADDR,
G4X_HDMIW_HDMIEDID))
return;
i = I915_READ(G4X_AUD_CNTL_ST);
i &= ~(eldv | G4X_ELD_ADDR);
len = (i >> 9) & 0x1f; /* ELD buffer size */
I915_WRITE(G4X_AUD_CNTL_ST, i);
if (!eld[0])
return;
len = min_t(uint8_t, eld[2], len);
DRM_DEBUG_DRIVER("ELD size %d\n", len);
for (i = 0; i < len; i++)
I915_WRITE(G4X_HDMIW_HDMIEDID, *((uint32_t *)eld + i));
i = I915_READ(G4X_AUD_CNTL_ST);
i |= eldv;
I915_WRITE(G4X_AUD_CNTL_ST, i);
}
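/* Haswell variant of the ELD write: the audio registers are per-pipe,
 * so enable the audio codec and the ELD valid bit for this pipe, select
 * HDMI or DisplayPort mode in the audio config, then write the buffer.
 */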
static void haswell_write_eld(struct drm_connector *connector,
struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = connector->dev->dev_private;
uint8_t *eld = connector->eld;
struct drm_device *dev = crtc->dev;
uint32_t eldv;
uint32_t i;
int len;
int pipe = to_intel_crtc(crtc)->pipe;
int tmp;
int hdmiw_hdmiedid = HSW_AUD_EDID_DATA(pipe);
int aud_cntl_st = HSW_AUD_DIP_ELD_CTRL(pipe);
int aud_config = HSW_AUD_CFG(pipe);
int aud_cntrl_st2 = HSW_AUD_PIN_ELD_CP_VLD;
DRM_DEBUG_DRIVER("HDMI: Haswell Audio initialize....\n");
/* Audio output enable */
DRM_DEBUG_DRIVER("HDMI audio: enable codec\n");
tmp = I915_READ(aud_cntrl_st2);
tmp |= (AUDIO_OUTPUT_ENABLE_A << (pipe * 4));
I915_WRITE(aud_cntrl_st2, tmp);
/* Wait for 1 vertical blank */
intel_wait_for_vblank(dev, pipe);
/* Set ELD valid state */
tmp = I915_READ(aud_cntrl_st2);
DRM_DEBUG_DRIVER("HDMI audio: pin eld vld status=0x%8x\n", tmp);
tmp |= (AUDIO_ELD_VALID_A << (pipe * 4));
I915_WRITE(aud_cntrl_st2, tmp);
tmp = I915_READ(aud_cntrl_st2);
DRM_DEBUG_DRIVER("HDMI audio: eld vld status=0x%8x\n", tmp);
/* Enable HDMI mode */
tmp = I915_READ(aud_config);
DRM_DEBUG_DRIVER("HDMI audio: audio conf: 0x%8x\n", tmp);
/* clear N_programming_enable and N_value_index */
tmp &= ~(AUD_CONFIG_N_VALUE_INDEX | AUD_CONFIG_N_PROG_ENABLE);
I915_WRITE(aud_config, tmp);
DRM_DEBUG_DRIVER("ELD on pipe %c\n", pipe_name(pipe));
eldv = AUDIO_ELD_VALID_A << (pipe * 4);
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
DRM_DEBUG_DRIVER("ELD: DisplayPort detected\n");
eld[5] |= (1 << 2); /* Conn_Type, 0x1 = DisplayPort */
I915_WRITE(aud_config, AUD_CONFIG_N_VALUE_INDEX); /* 0x1 = DP */
} else
I915_WRITE(aud_config, 0);
if (intel_eld_uptodate(connector,
aud_cntrl_st2, eldv,
aud_cntl_st, IBX_ELD_ADDRESS,
hdmiw_hdmiedid))
return;
i = I915_READ(aud_cntrl_st2);
i &= ~eldv;
I915_WRITE(aud_cntrl_st2, i);
if (!eld[0])
return;
i = I915_READ(aud_cntl_st);
i &= ~IBX_ELD_ADDRESS;
I915_WRITE(aud_cntl_st, i);
i = (i >> 29) & DIP_PORT_SEL_MASK; /* DIP_Port_Select, 0x1 = PortB */
DRM_DEBUG_DRIVER("port num:%d\n", i);
len = min_t(uint8_t, eld[2], 21); /* 84 bytes of hw ELD buffer */
DRM_DEBUG_DRIVER("ELD size %d\n", len);
for (i = 0; i < len; i++)
I915_WRITE(hdmiw_hdmiedid, *((uint32_t *)eld + i));
i = I915_READ(aud_cntrl_st2);
i |= eldv;
I915_WRITE(aud_cntrl_st2, i);
}
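/* Ironlake/PCH variant of the ELD write: the register layout differs
 * between IBX and CPT. The target port comes from DIP_Port_Select; if
 * it cannot be determined, all ports' valid bits are operated on
 * blindly.
 */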
static void ironlake_write_eld(struct drm_connector *connector,
struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = connector->dev->dev_private;
uint8_t *eld = connector->eld;
uint32_t eldv;
uint32_t i;
int len;
int hdmiw_hdmiedid;
int aud_config;
int aud_cntl_st;
int aud_cntrl_st2;
int pipe = to_intel_crtc(crtc)->pipe;
if (HAS_PCH_IBX(connector->dev)) {
hdmiw_hdmiedid = IBX_HDMIW_HDMIEDID(pipe);
aud_config = IBX_AUD_CFG(pipe);
aud_cntl_st = IBX_AUD_CNTL_ST(pipe);
aud_cntrl_st2 = IBX_AUD_CNTL_ST2;
} else {
hdmiw_hdmiedid = CPT_HDMIW_HDMIEDID(pipe);
aud_config = CPT_AUD_CFG(pipe);
aud_cntl_st = CPT_AUD_CNTL_ST(pipe);
aud_cntrl_st2 = CPT_AUD_CNTRL_ST2;
}
DRM_DEBUG_DRIVER("ELD on pipe %c\n", pipe_name(pipe));
i = I915_READ(aud_cntl_st);
i = (i >> 29) & DIP_PORT_SEL_MASK; /* DIP_Port_Select, 0x1 = PortB */
if (!i) {
DRM_DEBUG_DRIVER("Audio directed to unknown port\n");
/* operate blindly on all ports */
eldv = IBX_ELD_VALIDB;
eldv |= IBX_ELD_VALIDB << 4;
eldv |= IBX_ELD_VALIDB << 8;
} else {
DRM_DEBUG_DRIVER("ELD on port %c\n", 'A' + i);
eldv = IBX_ELD_VALIDB << ((i - 1) * 4);
}
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
DRM_DEBUG_DRIVER("ELD: DisplayPort detected\n");
eld[5] |= (1 << 2); /* Conn_Type, 0x1 = DisplayPort */
I915_WRITE(aud_config, AUD_CONFIG_N_VALUE_INDEX); /* 0x1 = DP */
} else
I915_WRITE(aud_config, 0);
if (intel_eld_uptodate(connector,
aud_cntrl_st2, eldv,
aud_cntl_st, IBX_ELD_ADDRESS,
hdmiw_hdmiedid))
return;
i = I915_READ(aud_cntrl_st2);
i &= ~eldv;
I915_WRITE(aud_cntrl_st2, i);
if (!eld[0])
return;
i = I915_READ(aud_cntl_st);
i &= ~IBX_ELD_ADDRESS;
I915_WRITE(aud_cntl_st, i);
len = min_t(uint8_t, eld[2], 21); /* 84 bytes of hw ELD buffer */
DRM_DEBUG_DRIVER("ELD size %d\n", len);
for (i = 0; i < len; i++)
I915_WRITE(hdmiw_hdmiedid, *((uint32_t *)eld + i));
i = I915_READ(aud_cntrl_st2);
i |= eldv;
I915_WRITE(aud_cntrl_st2, i);
}
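/* Common ELD entry point: pick the audio-capable connector for this
 * encoder and mode, patch the AV sync delay into the ELD, and hand off
 * to the per-platform write_eld hook.
 */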
void intel_write_eld(struct drm_encoder *encoder,
struct drm_display_mode *mode)
{
struct drm_crtc *crtc = encoder->crtc;
struct drm_connector *connector;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
connector = drm_select_eld(encoder, mode);
if (!connector)
return;
DRM_DEBUG_DRIVER("ELD on [CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
connector->base.id,
drm_get_connector_name(connector),
connector->encoder->base.id,
drm_get_encoder_name(connector->encoder));
connector->eld[6] = drm_av_sync_delay(connector, mode) / 2;
if (dev_priv->display.write_eld)
dev_priv->display.write_eld(connector, crtc);
}
/** Loads the palette/gamma unit for the CRTC with the prepared values */
void intel_crtc_load_lut(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int palreg = PALETTE(intel_crtc->pipe);
int i;
/* The clocks have to be on to load the palette. */
if (!crtc->enabled || !intel_crtc->active)
return;
/* use legacy palette for Ironlake */
if (HAS_PCH_SPLIT(dev))
palreg = LGC_PALETTE(intel_crtc->pipe);
for (i = 0; i < 256; i++) {
I915_WRITE(palreg + 4 * i,
(intel_crtc->lut_r[i] << 16) |
(intel_crtc->lut_g[i] << 8) |
intel_crtc->lut_b[i]);
}
}
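/* Show or hide the cursor on 845G/865G, where the cursor base register
 * may only be written while the cursor is disabled.
 */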
static void i845_update_cursor(struct drm_crtc *crtc, u32 base)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
bool visible = base != 0;
u32 cntl;
if (intel_crtc->cursor_visible == visible)
return;
cntl = I915_READ(_CURACNTR);
if (visible) {
/* On these chipsets we can only modify the base whilst
* the cursor is disabled.
*/
I915_WRITE(_CURABASE, base);
cntl &= ~(CURSOR_FORMAT_MASK);
/* XXX width must be 64, stride 256 => 0x00 << 28 */
cntl |= CURSOR_ENABLE |
CURSOR_GAMMA_ENABLE |
CURSOR_FORMAT_ARGB;
} else
cntl &= ~(CURSOR_ENABLE | CURSOR_GAMMA_ENABLE);
I915_WRITE(_CURACNTR, cntl);
intel_crtc->cursor_visible = visible;
}
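/* Show or hide the cursor on i9xx-style hardware; the CURBASE write at
 * the end latches the new state on the next vblank.
 */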
static void i9xx_update_cursor(struct drm_crtc *crtc, u32 base)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
bool visible = base != 0;
if (intel_crtc->cursor_visible != visible) {
uint32_t cntl = I915_READ(CURCNTR(pipe));
if (base) {
cntl &= ~(CURSOR_MODE | MCURSOR_PIPE_SELECT);
cntl |= CURSOR_MODE_64_ARGB_AX | MCURSOR_GAMMA_ENABLE;
cntl |= pipe << 28; /* Connect to correct pipe */
} else {
cntl &= ~(CURSOR_MODE | MCURSOR_GAMMA_ENABLE);
cntl |= CURSOR_MODE_DISABLE;
}
I915_WRITE(CURCNTR(pipe), cntl);
intel_crtc->cursor_visible = visible;
}
/* and commit changes on next vblank */
I915_WRITE(CURBASE(pipe), base);
}
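/* Ivybridge/Haswell variant of i9xx_update_cursor; the IVB cursor
 * registers have no pipe-select field, otherwise the flow is the same.
 */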
static void ivb_update_cursor(struct drm_crtc *crtc, u32 base)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
bool visible = base != 0;
if (intel_crtc->cursor_visible != visible) {
uint32_t cntl = I915_READ(CURCNTR_IVB(pipe));
if (base) {
cntl &= ~CURSOR_MODE;
cntl |= CURSOR_MODE_64_ARGB_AX | MCURSOR_GAMMA_ENABLE;
} else {
cntl &= ~(CURSOR_MODE | MCURSOR_GAMMA_ENABLE);
cntl |= CURSOR_MODE_DISABLE;
}
I915_WRITE(CURCNTR_IVB(pipe), cntl);
intel_crtc->cursor_visible = visible;
}
/* and commit changes on next vblank */
I915_WRITE(CURBASE_IVB(pipe), base);
}
/* If no part of the cursor is visible on the framebuffer, then the GPU may hang... */
static void intel_crtc_update_cursor(struct drm_crtc *crtc,
bool on)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int x = intel_crtc->cursor_x;
int y = intel_crtc->cursor_y;
u32 base, pos;
bool visible;
pos = 0;
if (on && crtc->enabled && crtc->fb) {
base = intel_crtc->cursor_addr;
if (x > (int) crtc->fb->width)
base = 0;
if (y > (int) crtc->fb->height)
base = 0;
} else
base = 0;
if (x < 0) {
if (x + intel_crtc->cursor_width < 0)
base = 0;
pos |= CURSOR_POS_SIGN << CURSOR_X_SHIFT;
x = -x;
}
pos |= x << CURSOR_X_SHIFT;
if (y < 0) {
if (y + intel_crtc->cursor_height < 0)
base = 0;
pos |= CURSOR_POS_SIGN << CURSOR_Y_SHIFT;
y = -y;
}
pos |= y << CURSOR_Y_SHIFT;
visible = base != 0;
if (!visible && !intel_crtc->cursor_visible)
return;
if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
I915_WRITE(CURPOS_IVB(pipe), pos);
ivb_update_cursor(crtc, base);
} else {
I915_WRITE(CURPOS(pipe), pos);
if (IS_845G(dev) || IS_I865G(dev))
i845_update_cursor(crtc, base);
else
i9xx_update_cursor(crtc, base);
}
}
static int intel_crtc_cursor_set(struct drm_crtc *crtc,
struct drm_file *file,
uint32_t handle,
uint32_t width, uint32_t height)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_i915_gem_object *obj;
uint32_t addr;
int ret;
/* if we want to turn off the cursor ignore width and height */
if (!handle) {
DRM_DEBUG_KMS("cursor off\n");
addr = 0;
obj = NULL;
DRM_LOCK(dev);
goto finish;
}
/* Currently we only support 64x64 cursors */
if (width != 64 || height != 64) {
DRM_ERROR("we currently only support 64x64 cursors\n");
return -EINVAL;
}
obj = to_intel_bo(drm_gem_object_lookup(dev, file, handle));
if (&obj->base == NULL)
return -ENOENT;
if (obj->base.size < width * height * 4) {
DRM_ERROR("buffer is to small\n");
ret = -ENOMEM;
goto fail;
}
/* we only need to pin inside GTT if cursor is non-phy */
DRM_LOCK(dev);
if (!dev_priv->info->cursor_needs_physical) {
if (obj->tiling_mode) {
DRM_ERROR("cursor cannot be tiled\n");
ret = -EINVAL;
goto fail_locked;
}
ret = i915_gem_object_pin_to_display_plane(obj, 0, NULL);
if (ret) {
DRM_ERROR("failed to move cursor bo into the GTT\n");
goto fail_locked;
}
ret = i915_gem_object_put_fence(obj);
if (ret) {
DRM_ERROR("failed to release fence for cursor");
goto fail_unpin;
}
addr = obj->gtt_offset;
} else {
int align = IS_I830(dev) ? 16 * 1024 : 256;
ret = i915_gem_attach_phys_object(dev, obj,
(intel_crtc->pipe == 0) ? I915_GEM_PHYS_CURSOR_0 : I915_GEM_PHYS_CURSOR_1,
align);
if (ret) {
DRM_ERROR("failed to attach phys object\n");
goto fail_locked;
}
addr = obj->phys_obj->handle->busaddr;
}
if (IS_GEN2(dev))
I915_WRITE(CURSIZE, (height << 12) | width);
finish:
if (intel_crtc->cursor_bo) {
if (dev_priv->info->cursor_needs_physical) {
if (intel_crtc->cursor_bo != obj)
i915_gem_detach_phys_object(dev, intel_crtc->cursor_bo);
} else
i915_gem_object_unpin(intel_crtc->cursor_bo);
drm_gem_object_unreference(&intel_crtc->cursor_bo->base);
}
DRM_UNLOCK(dev);
intel_crtc->cursor_addr = addr;
intel_crtc->cursor_bo = obj;
intel_crtc->cursor_width = width;
intel_crtc->cursor_height = height;
intel_crtc_update_cursor(crtc, true);
return 0;
fail_unpin:
i915_gem_object_unpin(obj);
fail_locked:
DRM_UNLOCK(dev);
fail:
drm_gem_object_unreference_unlocked(&obj->base);
return ret;
}
static int intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
intel_crtc->cursor_x = x;
intel_crtc->cursor_y = y;
intel_crtc_update_cursor(crtc, true);
return 0;
}
/** Sets the color ramps on behalf of RandR */
void intel_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
u16 blue, int regno)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
intel_crtc->lut_r[regno] = red >> 8;
intel_crtc->lut_g[regno] = green >> 8;
intel_crtc->lut_b[regno] = blue >> 8;
}
void intel_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, int regno)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
*red = intel_crtc->lut_r[regno] << 8;
*green = intel_crtc->lut_g[regno] << 8;
*blue = intel_crtc->lut_b[regno] << 8;
}
static void intel_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, uint32_t start, uint32_t size)
{
int end = (start + size > 256) ? 256 : start + size, i;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
for (i = start; i < end; i++) {
intel_crtc->lut_r[i] = red[i] >> 8;
intel_crtc->lut_g[i] = green[i] >> 8;
intel_crtc->lut_b[i] = blue[i] >> 8;
}
intel_crtc_load_lut(crtc);
}
/**
* Get a pipe with a simple mode set on it for doing load-based monitor
* detection.
*
* It will be up to the load-detect code to adjust the pipe as appropriate for
* its requirements. The pipe will be connected to no other encoders.
*
* Currently this code will only succeed if there is a pipe with no encoders
* configured for it. In the future, it could choose to temporarily disable
* some outputs to free up a pipe for its use.
*
* \return crtc, or NULL if no pipes are available.
*/
/* VESA 640x480x72Hz mode to set on the pipe */
static struct drm_display_mode load_detect_mode = {
DRM_MODE("640x480", DRM_MODE_TYPE_DEFAULT, 31500, 640, 664,
704, 832, 0, 480, 489, 491, 520, 0, DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
};
static int
intel_framebuffer_create(struct drm_device *dev,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj,
struct drm_framebuffer **res)
{
struct intel_framebuffer *intel_fb;
int ret;
intel_fb = malloc(sizeof(*intel_fb), DRM_MEM_KMS, M_WAITOK | M_ZERO);
if (!intel_fb) {
drm_gem_object_unreference_unlocked(&obj->base);
return -ENOMEM;
}
ret = intel_framebuffer_init(dev, intel_fb, mode_cmd, obj);
if (ret) {
drm_gem_object_unreference_unlocked(&obj->base);
free(intel_fb, DRM_MEM_KMS);
return ret;
}
*res = &intel_fb->base;
return 0;
}
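/* Bytes per scanline for a linear framebuffer of the given width and
 * bits per pixel, rounded up to the 64-byte stride alignment required
 * by the display engine.
 */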
static u32
intel_framebuffer_pitch_for_width(int width, int bpp)
{
u32 pitch = DIV_ROUND_UP(width * bpp, 8);
return roundup2(pitch, 64);
}
static u32
intel_framebuffer_size_for_mode(struct drm_display_mode *mode, int bpp)
{
u32 pitch = intel_framebuffer_pitch_for_width(mode->hdisplay, bpp);
return roundup2(pitch * mode->vdisplay, PAGE_SIZE);
}
static int
intel_framebuffer_create_for_mode(struct drm_device *dev,
struct drm_display_mode *mode,
int depth, int bpp,
struct drm_framebuffer **res)
{
struct drm_i915_gem_object *obj;
struct drm_mode_fb_cmd2 mode_cmd = { 0 };
obj = i915_gem_alloc_object(dev,
intel_framebuffer_size_for_mode(mode, bpp));
if (obj == NULL)
return -ENOMEM;
mode_cmd.width = mode->hdisplay;
mode_cmd.height = mode->vdisplay;
mode_cmd.pitches[0] = intel_framebuffer_pitch_for_width(mode_cmd.width,
bpp);
mode_cmd.pixel_format = drm_mode_legacy_fb_format(bpp, depth);
return intel_framebuffer_create(dev, &mode_cmd, obj, res);
}
static struct drm_framebuffer *
mode_fits_in_fbdev(struct drm_device *dev,
struct drm_display_mode *mode)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj;
struct drm_framebuffer *fb;
if (dev_priv->fbdev == NULL)
return NULL;
obj = dev_priv->fbdev->ifb.obj;
if (obj == NULL)
return NULL;
fb = &dev_priv->fbdev->ifb.base;
if (fb->pitches[0] < intel_framebuffer_pitch_for_width(mode->hdisplay,
fb->bits_per_pixel))
return NULL;
if (obj->base.size < mode->vdisplay * fb->pitches[0])
return NULL;
return fb;
}
bool intel_get_load_detect_pipe(struct drm_connector *connector,
struct drm_display_mode *mode,
struct intel_load_detect_pipe *old)
{
struct intel_crtc *intel_crtc;
struct intel_encoder *intel_encoder =
intel_attached_encoder(connector);
struct drm_crtc *possible_crtc;
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_crtc *crtc = NULL;
struct drm_device *dev = encoder->dev;
struct drm_framebuffer *fb;
int i = -1;
int ret;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
connector->base.id, drm_get_connector_name(connector),
encoder->base.id, drm_get_encoder_name(encoder));
/*
* Algorithm gets a little messy:
*
* - if the connector already has an assigned crtc, use it (but make
* sure it's on first)
*
* - try to find the first unused crtc that can drive this connector,
* and use that if we find one
*/
/* See if we already have a CRTC for this connector */
if (encoder->crtc) {
crtc = encoder->crtc;
old->dpms_mode = connector->dpms;
old->load_detect_temp = false;
/* Make sure the crtc and connector are running */
if (connector->dpms != DRM_MODE_DPMS_ON)
connector->funcs->dpms(connector, DRM_MODE_DPMS_ON);
return true;
}
/* Find an unused one (if possible) */
list_for_each_entry(possible_crtc, &dev->mode_config.crtc_list, head) {
i++;
if (!(encoder->possible_crtcs & (1 << i)))
continue;
if (!possible_crtc->enabled) {
crtc = possible_crtc;
break;
}
}
/*
* If we didn't find an unused CRTC, don't use any.
*/
if (!crtc) {
DRM_DEBUG_KMS("no pipe available for load-detect\n");
return false;
}
intel_encoder->new_crtc = to_intel_crtc(crtc);
to_intel_connector(connector)->new_encoder = intel_encoder;
intel_crtc = to_intel_crtc(crtc);
old->dpms_mode = connector->dpms;
old->load_detect_temp = true;
old->release_fb = NULL;
if (!mode)
mode = &load_detect_mode;
/* We need a framebuffer large enough to accommodate all accesses
* that the plane may generate whilst we perform load detection.
* We cannot rely on the fbcon either being present (we get called
* during its initialisation to detect all boot displays, or it may
* not even exist) or being large enough to satisfy the
* requested mode.
*/
ret = 0;
fb = mode_fits_in_fbdev(dev, mode);
if (fb == NULL) {
DRM_DEBUG_KMS("creating tmp fb for load-detection\n");
ret = intel_framebuffer_create_for_mode(dev, mode, 24, 32, &fb);
old->release_fb = fb;
} else
DRM_DEBUG_KMS("reusing fbdev for load-detection framebuffer\n");
if (ret) {
DRM_DEBUG_KMS("failed to allocate framebuffer for load-detection\n");
return false;
}
if (!intel_set_mode(crtc, mode, 0, 0, fb)) {
DRM_DEBUG_KMS("failed to set mode on load-detect pipe\n");
if (old->release_fb)
old->release_fb->funcs->destroy(old->release_fb);
return false;
}
/* let the connector get through one full cycle before testing */
intel_wait_for_vblank(dev, intel_crtc->pipe);
return true;
}
void intel_release_load_detect_pipe(struct drm_connector *connector,
struct intel_load_detect_pipe *old)
{
struct intel_encoder *intel_encoder =
intel_attached_encoder(connector);
struct drm_encoder *encoder = &intel_encoder->base;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
connector->base.id, drm_get_connector_name(connector),
encoder->base.id, drm_get_encoder_name(encoder));
if (old->load_detect_temp) {
struct drm_crtc *crtc = encoder->crtc;
to_intel_connector(connector)->new_encoder = NULL;
intel_encoder->new_crtc = NULL;
intel_set_mode(crtc, NULL, 0, 0, NULL);
if (old->release_fb)
old->release_fb->funcs->destroy(old->release_fb);
return;
}
/* Switch crtc and encoder back off if necessary */
if (old->dpms_mode != DRM_MODE_DPMS_ON)
connector->funcs->dpms(connector, old->dpms_mode);
}
/* Returns the clock of the currently programmed mode of the given pipe. */
static int intel_crtc_clock_get(struct drm_device *dev, struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
u32 dpll = I915_READ(DPLL(pipe));
u32 fp;
intel_clock_t clock;
if ((dpll & DISPLAY_RATE_SELECT_FPA1) == 0)
fp = I915_READ(FP0(pipe));
else
fp = I915_READ(FP1(pipe));
clock.m1 = (fp & FP_M1_DIV_MASK) >> FP_M1_DIV_SHIFT;
if (IS_PINEVIEW(dev)) {
clock.n = ffs((fp & FP_N_PINEVIEW_DIV_MASK) >> FP_N_DIV_SHIFT) - 1;
clock.m2 = (fp & FP_M2_PINEVIEW_DIV_MASK) >> FP_M2_DIV_SHIFT;
} else {
clock.n = (fp & FP_N_DIV_MASK) >> FP_N_DIV_SHIFT;
clock.m2 = (fp & FP_M2_DIV_MASK) >> FP_M2_DIV_SHIFT;
}
if (!IS_GEN2(dev)) {
if (IS_PINEVIEW(dev))
clock.p1 = ffs((dpll & DPLL_FPA01_P1_POST_DIV_MASK_PINEVIEW) >>
DPLL_FPA01_P1_POST_DIV_SHIFT_PINEVIEW);
else
clock.p1 = ffs((dpll & DPLL_FPA01_P1_POST_DIV_MASK) >>
DPLL_FPA01_P1_POST_DIV_SHIFT);
switch (dpll & DPLL_MODE_MASK) {
case DPLLB_MODE_DAC_SERIAL:
clock.p2 = dpll & DPLL_DAC_SERIAL_P2_CLOCK_DIV_5 ?
5 : 10;
break;
case DPLLB_MODE_LVDS:
clock.p2 = dpll & DPLLB_LVDS_P2_CLOCK_DIV_7 ?
7 : 14;
break;
default:
DRM_DEBUG_KMS("Unknown DPLL mode %08x in programmed "
"mode\n", (int)(dpll & DPLL_MODE_MASK));
return 0;
}
/* XXX: Handle the 100 MHz refclk */
intel_clock(dev, 96000, &clock);
} else {
bool is_lvds = (pipe == 1) && (I915_READ(LVDS) & LVDS_PORT_EN);
if (is_lvds) {
clock.p1 = ffs((dpll & DPLL_FPA01_P1_POST_DIV_MASK_I830_LVDS) >>
DPLL_FPA01_P1_POST_DIV_SHIFT);
clock.p2 = 14;
if ((dpll & PLL_REF_INPUT_MASK) ==
PLLB_REF_INPUT_SPREADSPECTRUMIN) {
/* XXX: might not be 66MHz */
intel_clock(dev, 66000, &clock);
} else
intel_clock(dev, 48000, &clock);
} else {
if (dpll & PLL_P1_DIVIDE_BY_TWO)
clock.p1 = 2;
else {
clock.p1 = ((dpll & DPLL_FPA01_P1_POST_DIV_MASK_I830) >>
DPLL_FPA01_P1_POST_DIV_SHIFT) + 2;
}
if (dpll & PLL_P2_DIVIDE_BY_4)
clock.p2 = 4;
else
clock.p2 = 2;
intel_clock(dev, 48000, &clock);
}
}
/* XXX: It would be nice to validate the clocks, but we can't reuse
* i830PllIsValid() because it relies on the xf86_config connector
* configuration being accurate, which it isn't necessarily.
*/
return clock.dot;
}
/** Returns the currently programmed mode of the given pipe. */
struct drm_display_mode *intel_crtc_mode_get(struct drm_device *dev,
struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
struct drm_display_mode *mode;
int htot = I915_READ(HTOTAL(cpu_transcoder));
int hsync = I915_READ(HSYNC(cpu_transcoder));
int vtot = I915_READ(VTOTAL(cpu_transcoder));
int vsync = I915_READ(VSYNC(cpu_transcoder));
mode = malloc(sizeof(*mode), DRM_MEM_KMS, M_WAITOK | M_ZERO);
if (!mode)
return NULL;
mode->clock = intel_crtc_clock_get(dev, crtc);
mode->hdisplay = (htot & 0xffff) + 1;
mode->htotal = ((htot & 0xffff0000) >> 16) + 1;
mode->hsync_start = (hsync & 0xffff) + 1;
mode->hsync_end = ((hsync & 0xffff0000) >> 16) + 1;
mode->vdisplay = (vtot & 0xffff) + 1;
mode->vtotal = ((vtot & 0xffff0000) >> 16) + 1;
mode->vsync_start = (vsync & 0xffff) + 1;
mode->vsync_end = ((vsync & 0xffff0000) >> 16) + 1;
drm_mode_set_name(mode);
return mode;
}
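/* Switch LVDS back to the full dot clock if it was previously
 * downclocked for power saving; the inverse of intel_decrease_pllclock
 * below.
 */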
static void intel_increase_pllclock(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
int dpll_reg = DPLL(pipe);
int dpll;
if (HAS_PCH_SPLIT(dev))
return;
if (!dev_priv->lvds_downclock_avail)
return;
dpll = I915_READ(dpll_reg);
if (!HAS_PIPE_CXSR(dev) && (dpll & DISPLAY_RATE_SELECT_FPA1)) {
DRM_DEBUG_DRIVER("upclocking LVDS\n");
assert_panel_unlocked(dev_priv, pipe);
dpll &= ~DISPLAY_RATE_SELECT_FPA1;
I915_WRITE(dpll_reg, dpll);
intel_wait_for_vblank(dev, pipe);
dpll = I915_READ(dpll_reg);
if (dpll & DISPLAY_RATE_SELECT_FPA1)
DRM_DEBUG_DRIVER("failed to upclock LVDS!\n");
}
}
static void intel_decrease_pllclock(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
if (HAS_PCH_SPLIT(dev))
return;
if (!dev_priv->lvds_downclock_avail)
return;
/*
* Since this is called by a timer, we should never get here in
* the manual case.
*/
if (!HAS_PIPE_CXSR(dev) && intel_crtc->lowfreq_avail) {
int pipe = intel_crtc->pipe;
int dpll_reg = DPLL(pipe);
int dpll;
DRM_DEBUG_DRIVER("downclocking LVDS\n");
assert_panel_unlocked(dev_priv, pipe);
dpll = I915_READ(dpll_reg);
dpll |= DISPLAY_RATE_SELECT_FPA1;
I915_WRITE(dpll_reg, dpll);
intel_wait_for_vblank(dev, pipe);
dpll = I915_READ(dpll_reg);
if (!(dpll & DISPLAY_RATE_SELECT_FPA1))
DRM_DEBUG_DRIVER("failed to downclock LVDS!\n");
}
}
void intel_mark_busy(struct drm_device *dev)
{
i915_update_gfx_val(dev->dev_private);
}
void intel_mark_idle(struct drm_device *dev)
{
struct drm_crtc *crtc;
if (!i915_powersave)
return;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
if (!crtc->fb)
continue;
intel_decrease_pllclock(crtc);
}
}
void intel_mark_fb_busy(struct drm_i915_gem_object *obj)
{
struct drm_device *dev = obj->base.dev;
struct drm_crtc *crtc;
if (!i915_powersave)
return;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
if (!crtc->fb)
continue;
if (to_intel_framebuffer(crtc->fb)->obj == obj)
intel_increase_pllclock(crtc);
}
}
static void intel_crtc_destroy(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_unpin_work *work;
mtx_lock(&dev->event_lock);
work = intel_crtc->unpin_work;
intel_crtc->unpin_work = NULL;
mtx_unlock(&dev->event_lock);
if (work) {
taskqueue_cancel(dev_priv->wq, &work->work, NULL);
taskqueue_drain(dev_priv->wq, &work->work);
free(work, DRM_MEM_KMS);
}
drm_crtc_cleanup(crtc);
free(intel_crtc, DRM_MEM_KMS);
}
static void intel_unpin_work_fn(void *arg, int pending)
{
struct intel_unpin_work *work = arg;
struct drm_device *dev = work->crtc->dev;
DRM_LOCK(dev);
intel_unpin_fb_obj(work->old_fb_obj);
drm_gem_object_unreference(&work->pending_flip_obj->base);
drm_gem_object_unreference(&work->old_fb_obj->base);
intel_update_fbc(dev);
DRM_UNLOCK(dev);
BUG_ON(atomic_read(&to_intel_crtc(work->crtc)->unpin_work_count) == 0);
atomic_dec(&to_intel_crtc(work->crtc)->unpin_work_count);
free(work, DRM_MEM_KMS);
}
static void do_intel_finish_page_flip(struct drm_device *dev,
struct drm_crtc *crtc)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_unpin_work *work;
struct drm_i915_gem_object *obj;
/* Ignore early vblank irqs */
if (intel_crtc == NULL)
return;
mtx_lock(&dev->event_lock);
work = intel_crtc->unpin_work;
/* Ensure we don't miss a work->pending update ... */
smp_rmb();
if (work == NULL || atomic_read(&work->pending) < INTEL_FLIP_COMPLETE) {
mtx_unlock(&dev->event_lock);
return;
}
/* and that the unpin work is consistent wrt ->pending. */
smp_rmb();
intel_crtc->unpin_work = NULL;
if (work->event)
drm_send_vblank_event(dev, intel_crtc->pipe, work->event);
drm_vblank_put(dev, intel_crtc->pipe);
mtx_unlock(&dev->event_lock);
obj = work->old_fb_obj;
atomic_clear_mask(1 << intel_crtc->plane,
&obj->pending_flip);
wake_up(&dev_priv->pending_flip_queue);
taskqueue_enqueue(dev_priv->wq, &work->work);
CTR2(KTR_DRM, "i915_flip_complete %d %p", intel_crtc->plane,
work->pending_flip_obj);
}
void intel_finish_page_flip(struct drm_device *dev, int pipe)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
do_intel_finish_page_flip(dev, crtc);
}
void intel_finish_page_flip_plane(struct drm_device *dev, int plane)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_crtc *crtc = dev_priv->plane_to_crtc_mapping[plane];
do_intel_finish_page_flip(dev, crtc);
}
void intel_prepare_page_flip(struct drm_device *dev, int plane)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc =
to_intel_crtc(dev_priv->plane_to_crtc_mapping[plane]);
/* NB: An MMIO update of the plane base pointer will also
* generate a page-flip completion irq, i.e. every modeset
* is also accompanied by a spurious intel_prepare_page_flip().
*/
mtx_lock(&dev->event_lock);
if (intel_crtc->unpin_work)
atomic_inc_not_zero(&intel_crtc->unpin_work->pending);
mtx_unlock(&dev->event_lock);
}
inline static void intel_mark_page_flip_active(struct intel_crtc *intel_crtc)
{
/* Ensure that the work item is consistent when activating it ... */
smp_wmb();
atomic_set(&intel_crtc->unpin_work->pending, INTEL_FLIP_PENDING);
/* and that it is marked active as soon as the irq could fire. */
smp_wmb();
}
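/* Queue a page flip on gen2: wait for any pending flip on this plane
 * (the hardware cannot queue multiple flips), then emit MI_DISPLAY_FLIP
 * on the render ring. The gen3/gen4/gen6/gen7 variants below differ in
 * the flip command payload (and, for gen7, the ring used).
 */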
static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
struct intel_ring_buffer *ring = &dev_priv->ring[RCS];
int ret;
ret = intel_pin_and_fence_fb_obj(dev, obj, ring);
if (ret)
goto err;
ret = intel_ring_begin(ring, 6);
if (ret)
goto err_unpin;
/* Can't queue multiple flips, so wait for the previous
* one to finish before executing the next.
*/
if (intel_crtc->plane)
flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
else
flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
intel_ring_emit(ring, MI_NOOP);
intel_ring_emit(ring, MI_DISPLAY_FLIP |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(ring, fb->pitches[0]);
intel_ring_emit(ring, obj->gtt_offset + intel_crtc->dspaddr_offset);
intel_ring_emit(ring, 0); /* aux display base address, unused */
intel_mark_page_flip_active(intel_crtc);
intel_ring_advance(ring);
return 0;
err_unpin:
intel_unpin_fb_obj(obj);
err:
return ret;
}
static int intel_gen3_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
struct intel_ring_buffer *ring = &dev_priv->ring[RCS];
int ret;
ret = intel_pin_and_fence_fb_obj(dev, obj, ring);
if (ret)
goto err;
ret = intel_ring_begin(ring, 6);
if (ret)
goto err_unpin;
if (intel_crtc->plane)
flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
else
flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
intel_ring_emit(ring, MI_NOOP);
intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(ring, fb->pitches[0]);
intel_ring_emit(ring, obj->gtt_offset + intel_crtc->dspaddr_offset);
intel_ring_emit(ring, MI_NOOP);
intel_mark_page_flip_active(intel_crtc);
intel_ring_advance(ring);
return 0;
err_unpin:
intel_unpin_fb_obj(obj);
err:
return ret;
}
static int intel_gen4_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
struct intel_ring_buffer *ring = &dev_priv->ring[RCS];
int ret;
ret = intel_pin_and_fence_fb_obj(dev, obj, ring);
if (ret)
goto err;
ret = intel_ring_begin(ring, 4);
if (ret)
goto err_unpin;
/* i965+ uses the linear or tiled offsets from the
* Display Registers (which do not change across a page-flip)
* so we need only reprogram the base address.
*/
intel_ring_emit(ring, MI_DISPLAY_FLIP |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(ring, fb->pitches[0]);
intel_ring_emit(ring,
(obj->gtt_offset + intel_crtc->dspaddr_offset) |
obj->tiling_mode);
/* XXX Enabling the panel-fitter across page-flip is so far
* untested on non-native modes, so ignore it for now.
* pf = I915_READ(pipe == 0 ? PFA_CTL_1 : PFB_CTL_1) & PF_ENABLE;
*/
pf = 0;
pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
intel_ring_emit(ring, pf | pipesrc);
intel_mark_page_flip_active(intel_crtc);
intel_ring_advance(ring);
return 0;
err_unpin:
intel_unpin_fb_obj(obj);
err:
return ret;
}
static int intel_gen6_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_ring_buffer *ring = &dev_priv->ring[RCS];
uint32_t pf, pipesrc;
int ret;
ret = intel_pin_and_fence_fb_obj(dev, obj, ring);
if (ret)
goto err;
ret = intel_ring_begin(ring, 4);
if (ret)
goto err_unpin;
intel_ring_emit(ring, MI_DISPLAY_FLIP |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(ring, fb->pitches[0] | obj->tiling_mode);
intel_ring_emit(ring, obj->gtt_offset + intel_crtc->dspaddr_offset);
/* Contrary to the suggestions in the documentation,
* "Enable Panel Fitter" does not seem to be required when page
* flipping with a non-native mode, and worse, causes a normal
* modeset to fail.
* pf = I915_READ(PF_CTL(intel_crtc->pipe)) & PF_ENABLE;
*/
pf = 0;
pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
intel_ring_emit(ring, pf | pipesrc);
intel_mark_page_flip_active(intel_crtc);
intel_ring_advance(ring);
return 0;
err_unpin:
intel_unpin_fb_obj(obj);
err:
return ret;
}
/*
* On gen7 we currently use the blit ring because (in early silicon at least)
* the render ring doesn't give us interrupts for page flip completion, which
* means clients will hang after the first flip is queued. Fortunately the
* blit ring generates interrupts properly, so use it instead.
*/
static int intel_gen7_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_ring_buffer *ring = &dev_priv->ring[BCS];
uint32_t plane_bit = 0;
int ret;
ret = intel_pin_and_fence_fb_obj(dev, obj, ring);
if (ret)
goto err;
switch (intel_crtc->plane) {
case PLANE_A:
plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_A;
break;
case PLANE_B:
plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_B;
break;
case PLANE_C:
plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_C;
break;
default:
WARN_ONCE(1, "unknown plane in flip command\n");
ret = -ENODEV;
goto err_unpin;
}
ret = intel_ring_begin(ring, 4);
if (ret)
goto err_unpin;
intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | plane_bit);
intel_ring_emit(ring, (fb->pitches[0] | obj->tiling_mode));
intel_ring_emit(ring, obj->gtt_offset + intel_crtc->dspaddr_offset);
intel_ring_emit(ring, (MI_NOOP));
intel_mark_page_flip_active(intel_crtc);
intel_ring_advance(ring);
return 0;
err_unpin:
intel_unpin_fb_obj(obj);
err:
return ret;
}
static int intel_default_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj)
{
return -ENODEV;
}
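/* Common ->page_flip entry point: reject flips the MI display flip
 * command cannot express (pixel format, offset or pitch changes), set
 * up the unpin work that releases the old framebuffer after the flip
 * completes, and queue the flip through the per-generation hook.
 */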
static int intel_crtc_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_framebuffer *old_fb = crtc->fb;
struct drm_i915_gem_object *obj = to_intel_framebuffer(fb)->obj;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_unpin_work *work;
int ret;
/* Can't change pixel format via MI display flips. */
if (fb->pixel_format != crtc->fb->pixel_format)
return -EINVAL;
/*
* TILEOFF/LINOFF registers can't be changed via MI display flips.
* Note that pitch changes could also affect these registers.
*/
if (INTEL_INFO(dev)->gen > 3 &&
(fb->offsets[0] != crtc->fb->offsets[0] ||
fb->pitches[0] != crtc->fb->pitches[0]))
return -EINVAL;
work = malloc(sizeof *work, DRM_MEM_KMS, M_WAITOK | M_ZERO);
if (work == NULL)
return -ENOMEM;
work->event = event;
work->crtc = crtc;
work->old_fb_obj = to_intel_framebuffer(old_fb)->obj;
TASK_INIT(&work->work, 0, intel_unpin_work_fn, work);
ret = drm_vblank_get(dev, intel_crtc->pipe);
if (ret)
goto free_work;
/* We borrow the event spin lock for protecting unpin_work */
mtx_lock(&dev->event_lock);
if (intel_crtc->unpin_work) {
mtx_unlock(&dev->event_lock);
free(work, DRM_MEM_KMS);
drm_vblank_put(dev, intel_crtc->pipe);
DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
return -EBUSY;
}
intel_crtc->unpin_work = work;
mtx_unlock(&dev->event_lock);
if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
taskqueue_drain_all(dev_priv->wq);
ret = i915_mutex_lock_interruptible(dev);
if (ret)
goto cleanup;
/* Reference the objects for the scheduled work. */
drm_gem_object_reference(&work->old_fb_obj->base);
drm_gem_object_reference(&obj->base);
crtc->fb = fb;
work->pending_flip_obj = obj;
work->enable_stall_check = true;
/* Block clients from rendering to the new back buffer until
* the flip occurs and the object is no longer visible.
*/
atomic_add(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip);
atomic_inc(&intel_crtc->unpin_work_count);
ret = dev_priv->display.queue_flip(dev, crtc, fb, obj);
if (ret)
goto cleanup_pending;
intel_disable_fbc(dev);
intel_mark_fb_busy(obj);
DRM_UNLOCK(dev);
CTR2(KTR_DRM, "i915_flip_request %d %p", intel_crtc->plane, obj);
return 0;
cleanup_pending:
atomic_dec(&intel_crtc->unpin_work_count);
crtc->fb = old_fb;
atomic_sub(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip);
drm_gem_object_unreference(&work->old_fb_obj->base);
drm_gem_object_unreference(&obj->base);
DRM_UNLOCK(dev);
cleanup:
mtx_lock(&dev->event_lock);
intel_crtc->unpin_work = NULL;
mtx_unlock(&dev->event_lock);
drm_vblank_put(dev, intel_crtc->pipe);
free_work:
free(work, DRM_MEM_KMS);
return ret;
}
static struct drm_crtc_helper_funcs intel_helper_funcs = {
.mode_set_base_atomic = intel_pipe_set_base_atomic,
.load_lut = intel_crtc_load_lut,
.disable = intel_crtc_noop,
};
bool intel_encoder_check_is_cloned(struct intel_encoder *encoder)
{
struct intel_encoder *other_encoder;
struct drm_crtc *crtc = &encoder->new_crtc->base;
if (WARN_ON(!crtc))
return false;
list_for_each_entry(other_encoder,
&crtc->dev->mode_config.encoder_list,
base.head) {
if (&other_encoder->new_crtc->base != crtc ||
encoder == other_encoder)
continue;
else
return true;
}
return false;
}
static bool intel_encoder_crtc_ok(struct drm_encoder *encoder,
struct drm_crtc *crtc)
{
struct drm_device *dev;
struct drm_crtc *tmp;
int crtc_mask = 1;
WARN(!crtc, "checking null crtc?\n");
dev = crtc->dev;
list_for_each_entry(tmp, &dev->mode_config.crtc_list, head) {
if (tmp == crtc)
break;
crtc_mask <<= 1;
}
if (encoder->possible_crtcs & crtc_mask)
return true;
return false;
}
/**
* intel_modeset_update_staged_output_state
*
* Updates the staged output configuration state, e.g. after we've read out the
* current hw state.
*/
static void intel_modeset_update_staged_output_state(struct drm_device *dev)
{
struct intel_encoder *encoder;
struct intel_connector *connector;
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
connector->new_encoder =
to_intel_encoder(connector->base.encoder);
}
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
encoder->new_crtc =
to_intel_crtc(encoder->base.crtc);
}
}
/**
* intel_modeset_commit_output_state
*
* This function copies the staged display pipe configuration to the real one.
*/
static void intel_modeset_commit_output_state(struct drm_device *dev)
{
struct intel_encoder *encoder;
struct intel_connector *connector;
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
connector->base.encoder = &connector->new_encoder->base;
}
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
encoder->base.crtc = &encoder->new_crtc->base;
}
}
static int
intel_modeset_adjusted_mode(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode **res)
{
struct drm_device *dev = crtc->dev;
struct drm_display_mode *adjusted_mode;
struct drm_encoder_helper_funcs *encoder_funcs;
struct intel_encoder *encoder;
adjusted_mode = drm_mode_duplicate(dev, mode);
if (!adjusted_mode)
return -ENOMEM;
/* Pass our mode to the connectors and the CRTC to give them a chance to
* adjust it according to limitations or connector properties, and also
* a chance to reject the mode entirely.
*/
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
if (&encoder->new_crtc->base != crtc)
continue;
encoder_funcs = encoder->base.helper_private;
if (!(encoder_funcs->mode_fixup(&encoder->base, mode,
adjusted_mode))) {
DRM_DEBUG_KMS("Encoder fixup failed\n");
goto fail;
}
}
if (!(intel_crtc_mode_fixup(crtc, mode, adjusted_mode))) {
DRM_DEBUG_KMS("CRTC fixup failed\n");
goto fail;
}
DRM_DEBUG_KMS("[CRTC:%d]\n", crtc->base.id);
*res = adjusted_mode;
return 0;
fail:
drm_mode_destroy(dev, adjusted_mode);
return -EINVAL;
}
/* Computes which crtcs are affected and sets the relevant bits in the mask. For
* simplicity we use the crtc's pipe number (because it's easier to obtain). */
static void
intel_modeset_affected_pipes(struct drm_crtc *crtc, unsigned *modeset_pipes,
unsigned *prepare_pipes, unsigned *disable_pipes)
{
struct intel_crtc *intel_crtc;
struct drm_device *dev = crtc->dev;
struct intel_encoder *encoder;
struct intel_connector *connector;
struct drm_crtc *tmp_crtc;
*disable_pipes = *modeset_pipes = *prepare_pipes = 0;
/* Check which crtcs have changed outputs connected to them, these need
* to be part of the prepare_pipes mask. We don't (yet) support global
* modeset across multiple crtcs, so modeset_pipes will only have one
* bit set at most. */
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
if (connector->base.encoder == &connector->new_encoder->base)
continue;
if (connector->base.encoder) {
tmp_crtc = connector->base.encoder->crtc;
*prepare_pipes |= 1 << to_intel_crtc(tmp_crtc)->pipe;
}
if (connector->new_encoder)
*prepare_pipes |=
1 << connector->new_encoder->new_crtc->pipe;
}
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
if (encoder->base.crtc == &encoder->new_crtc->base)
continue;
if (encoder->base.crtc) {
tmp_crtc = encoder->base.crtc;
*prepare_pipes |= 1 << to_intel_crtc(tmp_crtc)->pipe;
}
if (encoder->new_crtc)
*prepare_pipes |= 1 << encoder->new_crtc->pipe;
}
/* Check for any pipes that will be fully disabled ... */
list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list,
base.head) {
bool used = false;
/* Don't try to disable disabled crtcs. */
if (!intel_crtc->base.enabled)
continue;
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
if (encoder->new_crtc == intel_crtc)
used = true;
}
if (!used)
*disable_pipes |= 1 << intel_crtc->pipe;
}
/* set_mode is also used to update properties on live display pipes. */
intel_crtc = to_intel_crtc(crtc);
if (crtc->enabled)
*prepare_pipes |= 1 << intel_crtc->pipe;
/*
* For simplicity do a full modeset on any pipe where the output routing
* changed. We could be more clever, but that would require us to be
* more careful with calling the relevant encoder->mode_set functions.
*/
if (*prepare_pipes)
*modeset_pipes = *prepare_pipes;
/* ... and mask these out. */
*modeset_pipes &= ~(*disable_pipes);
*prepare_pipes &= ~(*disable_pipes);
/*
* HACK: We don't (yet) fully support global modesets. intel_set_config
* obeys this rule, but the modeset restore mode of
* intel_modeset_setup_hw_state does not.
*/
*modeset_pipes &= 1 << intel_crtc->pipe;
*prepare_pipes &= 1 << intel_crtc->pipe;
}
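/* Returns true if any encoder is currently routed to this crtc. */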
static bool intel_crtc_in_use(struct drm_crtc *crtc)
{
struct drm_encoder *encoder;
struct drm_device *dev = crtc->dev;
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head)
if (encoder->crtc == crtc)
return true;
return false;
}
static void
intel_modeset_update_state(struct drm_device *dev, unsigned prepare_pipes)
{
struct intel_encoder *intel_encoder;
struct intel_crtc *intel_crtc;
struct drm_connector *connector;
list_for_each_entry(intel_encoder, &dev->mode_config.encoder_list,
base.head) {
if (!intel_encoder->base.crtc)
continue;
intel_crtc = to_intel_crtc(intel_encoder->base.crtc);
if (prepare_pipes & (1 << intel_crtc->pipe))
intel_encoder->connectors_active = false;
}
intel_modeset_commit_output_state(dev);
/* Update computed state. */
list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list,
base.head) {
intel_crtc->base.enabled = intel_crtc_in_use(&intel_crtc->base);
}
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (!connector->encoder || !connector->encoder->crtc)
continue;
intel_crtc = to_intel_crtc(connector->encoder->crtc);
if (prepare_pipes & (1 << intel_crtc->pipe)) {
struct drm_property *dpms_property =
dev->mode_config.dpms_property;
connector->dpms = DRM_MODE_DPMS_ON;
drm_object_property_set_value(&connector->base,
dpms_property,
DRM_MODE_DPMS_ON);
intel_encoder = to_intel_encoder(connector->encoder);
intel_encoder->connectors_active = true;
}
}
}
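/* Iterate over all intel crtcs whose pipe bit is set in mask. */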
#define for_each_intel_crtc_masked(dev, mask, intel_crtc) \
list_for_each_entry((intel_crtc), \
&(dev)->mode_config.crtc_list, \
base.head) \
if (mask & (1 << (intel_crtc)->pipe))

void
intel_modeset_check_state(struct drm_device *dev)
{
struct intel_crtc *crtc;
struct intel_encoder *encoder;
struct intel_connector *connector;
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
/* This also checks the encoder/connector hw state with the
* ->get_hw_state callbacks. */
intel_connector_check_state(connector);
WARN(&connector->new_encoder->base != connector->base.encoder,
"connector's staged encoder doesn't match current encoder\n");
}
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
bool enabled = false;
bool active = false;
enum pipe pipe, tracked_pipe;
DRM_DEBUG_KMS("[ENCODER:%d:%s]\n",
encoder->base.base.id,
drm_get_encoder_name(&encoder->base));
WARN(&encoder->new_crtc->base != encoder->base.crtc,
"encoder's stage crtc doesn't match current crtc\n");
WARN(encoder->connectors_active && !encoder->base.crtc,
"encoder's active_connectors set, but no crtc\n");
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
if (connector->base.encoder != &encoder->base)
continue;
enabled = true;
if (connector->base.dpms != DRM_MODE_DPMS_OFF)
active = true;
}
WARN(!!encoder->base.crtc != enabled,
"encoder's enabled state mismatch "
"(expected %i, found %i)\n",
!!encoder->base.crtc, enabled);
WARN(active && !encoder->base.crtc,
"active encoder with no crtc\n");
WARN(encoder->connectors_active != active,
"encoder's computed active state doesn't match tracked active state "
"(expected %i, found %i)\n", active, encoder->connectors_active);
active = encoder->get_hw_state(encoder, &pipe);
WARN(active != encoder->connectors_active,
"encoder's hw state doesn't match sw tracking "
"(expected %i, found %i)\n",
encoder->connectors_active, active);
if (!encoder->base.crtc)
continue;
tracked_pipe = to_intel_crtc(encoder->base.crtc)->pipe;
WARN(active && pipe != tracked_pipe,
"active encoder's pipe doesn't match"
"(expected %i, found %i)\n",
tracked_pipe, pipe);
}
list_for_each_entry(crtc, &dev->mode_config.crtc_list,
base.head) {
bool enabled = false;
bool active = false;
DRM_DEBUG_KMS("[CRTC:%d]\n",
crtc->base.base.id);
WARN(crtc->active && !crtc->base.enabled,
"active crtc, but not enabled in sw tracking\n");
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
if (encoder->base.crtc != &crtc->base)
continue;
enabled = true;
if (encoder->connectors_active)
active = true;
}
WARN(active != crtc->active,
"crtc's computed active state doesn't match tracked active state "
"(expected %i, found %i)\n", active, crtc->active);
WARN(enabled != crtc->base.enabled,
"crtc's computed enabled state doesn't match tracked enabled state "
"(expected %i, found %i)\n", enabled, crtc->base.enabled);
assert_pipe(dev->dev_private, crtc->pipe, crtc->active);
}
}
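/* Top-level modeset: compute the affected pipes, disable the outgoing
 * configuration, program the new mode on the modeset pipes and
 * re-enable everything. Returns true on success; on failure the saved
 * software mode is restored for an enabled crtc.
 */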
bool intel_set_mode(struct drm_crtc *crtc,
struct drm_display_mode *mode,
int x, int y, struct drm_framebuffer *fb)
{
struct drm_device *dev = crtc->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_display_mode *adjusted_mode, saved_mode, saved_hwmode;
struct intel_crtc *intel_crtc;
unsigned disable_pipes, prepare_pipes, modeset_pipes;
bool ret = true;
intel_modeset_affected_pipes(crtc, &modeset_pipes,
&prepare_pipes, &disable_pipes);
DRM_DEBUG_KMS("set mode pipe masks: modeset: %x, prepare: %x, disable: %x\n",
modeset_pipes, prepare_pipes, disable_pipes);
for_each_intel_crtc_masked(dev, disable_pipes, intel_crtc)
intel_crtc_disable(&intel_crtc->base);
saved_hwmode = crtc->hwmode;
saved_mode = crtc->mode;
/* Hack: Because we don't (yet) support global modeset on multiple
* crtcs, we don't keep track of the new mode for more than one crtc.
* Hence simply check whether any bit is set in modeset_pipes in all the
* pieces of code that are not yet converted to deal with multiple crtcs
* changing their mode at the same time. */
adjusted_mode = NULL;
if (modeset_pipes) {
int err = intel_modeset_adjusted_mode(crtc, mode, &adjusted_mode);
if (err) {
return false;
}
}
for_each_intel_crtc_masked(dev, prepare_pipes, intel_crtc) {
if (intel_crtc->base.enabled)
dev_priv->display.crtc_disable(&intel_crtc->base);
}
/* crtc->mode is already used by the ->mode_set callbacks, hence we need
* to set it here already even though we also pass it down the callchain.
*/
if (modeset_pipes)
crtc->mode = *mode;
/* Only after disabling all output pipelines that will be changed can we
- * update the the output configuration. */
+ * update the output configuration. */
intel_modeset_update_state(dev, prepare_pipes);
if (dev_priv->display.modeset_global_resources)
dev_priv->display.modeset_global_resources(dev);
/* Set up the DPLL and any encoders state that needs to adjust or depend
* on the DPLL.
*/
for_each_intel_crtc_masked(dev, modeset_pipes, intel_crtc) {
ret = !intel_crtc_mode_set(&intel_crtc->base,
mode, adjusted_mode,
x, y, fb);
if (!ret)
goto done;
}
/* Now enable the clocks, plane, pipe, and connectors that we set up. */
for_each_intel_crtc_masked(dev, prepare_pipes, intel_crtc)
dev_priv->display.crtc_enable(&intel_crtc->base);
if (modeset_pipes) {
/* Store real post-adjustment hardware mode. */
crtc->hwmode = *adjusted_mode;
/* Calculate and store various constants which
* are later needed by vblank and swap-completion
* timestamping. They are derived from true hwmode.
*/
drm_calc_timestamping_constants(crtc);
}
/* FIXME: add subpixel order */
done:
drm_mode_destroy(dev, adjusted_mode);
if (!ret && crtc->enabled) {
crtc->hwmode = saved_hwmode;
crtc->mode = saved_mode;
} else {
intel_modeset_check_state(dev);
}
return ret;
}
#undef for_each_intel_crtc_masked
static void intel_set_config_free(struct intel_set_config *config)
{
if (!config)
return;
free(config->save_connector_encoders, DRM_MEM_KMS);
free(config->save_encoder_crtcs, DRM_MEM_KMS);
free(config, DRM_MEM_KMS);
}
static int intel_set_config_save_state(struct drm_device *dev,
struct intel_set_config *config)
{
struct drm_encoder *encoder;
struct drm_connector *connector;
int count;
config->save_encoder_crtcs =
malloc(dev->mode_config.num_encoder *
sizeof(struct drm_crtc *), DRM_MEM_KMS, M_NOWAIT | M_ZERO);
if (!config->save_encoder_crtcs)
return -ENOMEM;
config->save_connector_encoders =
malloc(dev->mode_config.num_connector *
sizeof(struct drm_encoder *), DRM_MEM_KMS, M_NOWAIT | M_ZERO);
if (!config->save_connector_encoders)
return -ENOMEM;
/* Copy data. Note that driver private data is not affected.
* Should anything bad happen only the expected state is
* restored, not the driver's personal bookkeeping.
*/
count = 0;
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
config->save_encoder_crtcs[count++] = encoder->crtc;
}
count = 0;
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
config->save_connector_encoders[count++] = connector->encoder;
}
return 0;
}
static void intel_set_config_restore_state(struct drm_device *dev,
struct intel_set_config *config)
{
struct intel_encoder *encoder;
struct intel_connector *connector;
int count;
count = 0;
list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
encoder->new_crtc =
to_intel_crtc(config->save_encoder_crtcs[count++]);
}
count = 0;
list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) {
connector->new_encoder =
to_intel_encoder(config->save_connector_encoders[count++]);
}
}
static void
intel_set_config_compute_mode_changes(struct drm_mode_set *set,
struct intel_set_config *config)
{
/* We should be able to check here if the fb has the same properties
* and then just flip_or_move it */
if (set->crtc->fb != set->fb) {
/* If we have no fb then treat it as a full mode set */
if (set->crtc->fb == NULL) {
DRM_DEBUG_KMS("crtc has no fb, full mode set\n");
config->mode_changed = true;
} else if (set->fb == NULL) {
config->mode_changed = true;
} else if (set->fb->depth != set->crtc->fb->depth) {
config->mode_changed = true;
} else if (set->fb->bits_per_pixel !=
set->crtc->fb->bits_per_pixel) {
config->mode_changed = true;
} else
config->fb_changed = true;
}
if (set->fb && (set->x != set->crtc->x || set->y != set->crtc->y))
config->fb_changed = true;
if (set->mode && !drm_mode_equal(set->mode, &set->crtc->mode)) {
DRM_DEBUG_KMS("modes are different, full mode set\n");
drm_mode_debug_printmodeline(&set->crtc->mode);
drm_mode_debug_printmodeline(set->mode);
config->mode_changed = true;
}
}
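/* Stage the output configuration requested in the mode set: update each
 * connector's new_encoder and each encoder's new_crtc, flagging a full
 * mode change whenever the routing differs from the current state.
 */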
static int
intel_modeset_stage_output_state(struct drm_device *dev,
struct drm_mode_set *set,
struct intel_set_config *config)
{
struct drm_crtc *new_crtc;
struct intel_connector *connector;
struct intel_encoder *encoder;
int count, ro;
/* The upper layers ensure that we either disable a crtc or have a list
* of connectors. For paranoia, double-check this. */
WARN_ON(!set->fb && (set->num_connectors != 0));
WARN_ON(set->fb && (set->num_connectors == 0));
count = 0;
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
/* Traverse the passed-in connector list and get encoders
* for them. */
for (ro = 0; ro < set->num_connectors; ro++) {
if (set->connectors[ro] == &connector->base) {
connector->new_encoder = connector->encoder;
break;
}
}
/* If we disable the crtc, disable all its connectors. Also, if
* the connector is on the changing crtc but not on the new
* connector list, disable it. */
if ((!set->fb || ro == set->num_connectors) &&
connector->base.encoder &&
connector->base.encoder->crtc == set->crtc) {
connector->new_encoder = NULL;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] to [NOCRTC]\n",
connector->base.base.id,
drm_get_connector_name(&connector->base));
}
if (&connector->new_encoder->base != connector->base.encoder) {
DRM_DEBUG_KMS("encoder changed, full mode switch\n");
config->mode_changed = true;
}
}
/* connector->new_encoder is now updated for all connectors. */
/* Update crtc of enabled connectors. */
count = 0;
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
if (!connector->new_encoder)
continue;
new_crtc = connector->new_encoder->base.crtc;
for (ro = 0; ro < set->num_connectors; ro++) {
if (set->connectors[ro] == &connector->base)
new_crtc = set->crtc;
}
/* Make sure the new CRTC will work with the encoder */
if (!intel_encoder_crtc_ok(&connector->new_encoder->base,
new_crtc)) {
return -EINVAL;
}
connector->encoder->new_crtc = to_intel_crtc(new_crtc);
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] to [CRTC:%d]\n",
connector->base.base.id,
drm_get_connector_name(&connector->base),
new_crtc->base.id);
}
/* Check for any encoders that need to be disabled. */
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
list_for_each_entry(connector,
&dev->mode_config.connector_list,
base.head) {
if (connector->new_encoder == encoder) {
WARN_ON(!connector->new_encoder->new_crtc);
goto next_encoder;
}
}
encoder->new_crtc = NULL;
next_encoder:
/* Only now check for crtc changes so we don't miss encoders
* that will be disabled. */
if (&encoder->new_crtc->base != encoder->base.crtc) {
DRM_DEBUG_KMS("crtc changed, full mode switch\n");
config->mode_changed = true;
}
}
/* Now we've also updated encoder->new_crtc for all encoders. */
return 0;
}
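/* Top-level ->set_config entry point: save the current configuration,
* work out whether a full modeset or only a framebuffer base update is
* needed, apply it, and restore the saved configuration on failure. */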
static int intel_crtc_set_config(struct drm_mode_set *set)
{
struct drm_device *dev;
struct drm_mode_set save_set;
struct intel_set_config *config;
int ret;
BUG_ON(!set);
BUG_ON(!set->crtc);
BUG_ON(!set->crtc->helper_private);
if (!set->mode)
set->fb = NULL;
/* The fb helper likes to play gross jokes with ->mode_set_config.
* Unfortunately the crtc helper doesn't do much at all for this case,
* so we have to cope with this madness until the fb helper is fixed up. */
if (set->fb && set->num_connectors == 0)
return 0;
if (set->fb) {
DRM_DEBUG_KMS("[CRTC:%d] [FB:%d] #connectors=%d (x y) (%i %i)\n",
set->crtc->base.id, set->fb->base.id,
(int)set->num_connectors, set->x, set->y);
} else {
DRM_DEBUG_KMS("[CRTC:%d] [NOFB]\n", set->crtc->base.id);
}
dev = set->crtc->dev;
ret = -ENOMEM;
config = malloc(sizeof(*config), DRM_MEM_KMS, M_NOWAIT | M_ZERO);
if (!config)
goto out_config;
ret = intel_set_config_save_state(dev, config);
if (ret)
goto out_config;
save_set.crtc = set->crtc;
save_set.mode = &set->crtc->mode;
save_set.x = set->crtc->x;
save_set.y = set->crtc->y;
save_set.fb = set->crtc->fb;
/* Compute whether we need a full modeset, only an fb base update or no
* change at all. In the future we might also check whether only the
* mode changed, e.g. for LVDS where we only change the panel fitter in
* such cases. */
intel_set_config_compute_mode_changes(set, config);
ret = intel_modeset_stage_output_state(dev, set, config);
if (ret)
goto fail;
if (config->mode_changed) {
if (set->mode) {
DRM_DEBUG_KMS("attempting to set mode from"
" userspace\n");
drm_mode_debug_printmodeline(set->mode);
}
if (!intel_set_mode(set->crtc, set->mode,
set->x, set->y, set->fb)) {
DRM_ERROR("failed to set mode on [CRTC:%d]\n",
set->crtc->base.id);
ret = -EINVAL;
goto fail;
}
} else if (config->fb_changed) {
ret = intel_pipe_set_base(set->crtc,
set->x, set->y, set->fb);
}
intel_set_config_free(config);
return 0;
fail:
intel_set_config_restore_state(dev, config);
/* Try to restore the config */
if (config->mode_changed &&
!intel_set_mode(save_set.crtc, save_set.mode,
save_set.x, save_set.y, save_set.fb))
DRM_ERROR("failed to restore config after modeset failure\n");
out_config:
intel_set_config_free(config);
return ret;
}
static const struct drm_crtc_funcs intel_crtc_funcs = {
.cursor_set = intel_crtc_cursor_set,
.cursor_move = intel_crtc_cursor_move,
.gamma_set = intel_crtc_gamma_set,
.set_config = intel_crtc_set_config,
.destroy = intel_crtc_destroy,
.page_flip = intel_crtc_page_flip,
};
static void intel_cpu_pll_init(struct drm_device *dev)
{
if (IS_HASWELL(dev))
intel_ddi_pll_init(dev);
}
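/* Record the register offsets of each PCH DPLL, if this PCH has any. */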
static void intel_pch_pll_init(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
int i;
if (dev_priv->num_pch_pll == 0) {
DRM_DEBUG_KMS("No PCH PLLs on this hardware, skipping initialisation\n");
return;
}
for (i = 0; i < dev_priv->num_pch_pll; i++) {
dev_priv->pch_plls[i].pll_reg = _PCH_DPLL(i);
dev_priv->pch_plls[i].fp0_reg = _PCH_FP0(i);
dev_priv->pch_plls[i].fp1_reg = _PCH_FP1(i);
}
}
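/* Allocate and register a single crtc, with an identity gamma LUT and the
* default pipe -> plane mapping (swapped on mobile gen3 to help FBC). */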
static void intel_crtc_init(struct drm_device *dev, int pipe)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc;
int i;
intel_crtc = malloc(sizeof(struct intel_crtc) + (INTELFB_CONN_LIMIT * sizeof(struct drm_connector *)), DRM_MEM_KMS, M_WAITOK | M_ZERO);
if (intel_crtc == NULL)
return;
drm_crtc_init(dev, &intel_crtc->base, &intel_crtc_funcs);
drm_mode_crtc_set_gamma_size(&intel_crtc->base, 256);
for (i = 0; i < 256; i++) {
intel_crtc->lut_r[i] = i;
intel_crtc->lut_g[i] = i;
intel_crtc->lut_b[i] = i;
}
/* Swap pipes & planes for FBC on pre-965 */
intel_crtc->pipe = pipe;
intel_crtc->plane = pipe;
intel_crtc->cpu_transcoder = pipe;
if (IS_MOBILE(dev) && IS_GEN3(dev)) {
DRM_DEBUG_KMS("swapping pipes & planes for FBC\n");
intel_crtc->plane = !pipe;
}
BUG_ON(pipe >= ARRAY_SIZE(dev_priv->plane_to_crtc_mapping) ||
dev_priv->plane_to_crtc_mapping[intel_crtc->plane] != NULL);
dev_priv->plane_to_crtc_mapping[intel_crtc->plane] = &intel_crtc->base;
dev_priv->pipe_to_crtc_mapping[intel_crtc->pipe] = &intel_crtc->base;
intel_crtc->bpp = 24; /* default for pre-Ironlake */
drm_crtc_helper_add(&intel_crtc->base, &intel_helper_funcs);
}
int intel_get_pipe_from_crtc_id(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_get_pipe_from_crtc_id *pipe_from_crtc_id = data;
struct drm_mode_object *drmmode_obj;
struct intel_crtc *crtc;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -ENODEV;
drmmode_obj = drm_mode_object_find(dev, pipe_from_crtc_id->crtc_id,
DRM_MODE_OBJECT_CRTC);
if (!drmmode_obj) {
DRM_ERROR("no such CRTC id\n");
return -EINVAL;
}
crtc = to_intel_crtc(obj_to_crtc(drmmode_obj));
pipe_from_crtc_id->pipe = crtc->pipe;
return 0;
}
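/* Build the possible_clones bitmask for an encoder: an encoder can always
* "clone" itself, and two encoders may share a crtc only if both are
* cloneable. */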
static int intel_encoder_clones(struct intel_encoder *encoder)
{
struct drm_device *dev = encoder->base.dev;
struct intel_encoder *source_encoder;
int index_mask = 0;
int entry = 0;
list_for_each_entry(source_encoder,
&dev->mode_config.encoder_list, base.head) {
if (encoder == source_encoder)
index_mask |= (1 << entry);
/* Intel hw has only one MUX where encoders could be cloned. */
if (encoder->cloneable && source_encoder->cloneable)
index_mask |= (1 << entry);
entry++;
}
return index_mask;
}
static bool has_edp_a(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (!IS_MOBILE(dev))
return false;
if ((I915_READ(DP_A) & DP_DETECTED) == 0)
return false;
if (IS_GEN5(dev) &&
(I915_READ(ILK_DISPLAY_CHICKEN_FUSES) & ILK_eDP_A_DISABLE))
return false;
return true;
}
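/* Probe the platform-specific detection straps and registers and register
* an encoder for each output (LVDS, CRT, DDI, SDVO, HDMI, DP, TV, DVO)
* that is actually present. */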
static void intel_setup_outputs(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_encoder *encoder;
bool dpd_is_edp = false;
bool has_lvds;
has_lvds = intel_lvds_init(dev);
if (!has_lvds && !HAS_PCH_SPLIT(dev)) {
/* disable the panel fitter on everything but LVDS */
I915_WRITE(PFIT_CONTROL, 0);
}
if (!(IS_HASWELL(dev) &&
(I915_READ(DDI_BUF_CTL(PORT_A)) & DDI_A_4_LANES)))
intel_crt_init(dev);
if (IS_HASWELL(dev)) {
int found;
/* Haswell uses DDI functions to detect digital outputs */
found = I915_READ(DDI_BUF_CTL_A) & DDI_INIT_DISPLAY_DETECTED;
/* DDI A only supports eDP */
if (found)
intel_ddi_init(dev, PORT_A);
/* DDI B, C and D detection is indicated by the SFUSE_STRAP
* register */
found = I915_READ(SFUSE_STRAP);
if (found & SFUSE_STRAP_DDIB_DETECTED)
intel_ddi_init(dev, PORT_B);
if (found & SFUSE_STRAP_DDIC_DETECTED)
intel_ddi_init(dev, PORT_C);
if (found & SFUSE_STRAP_DDID_DETECTED)
intel_ddi_init(dev, PORT_D);
} else if (HAS_PCH_SPLIT(dev)) {
int found;
dpd_is_edp = intel_dpd_is_edp(dev);
if (has_edp_a(dev))
intel_dp_init(dev, DP_A, PORT_A);
if (I915_READ(HDMIB) & PORT_DETECTED) {
/* PCH SDVOB multiplex with HDMIB */
found = intel_sdvo_init(dev, PCH_SDVOB, true);
if (!found)
intel_hdmi_init(dev, HDMIB, PORT_B);
if (!found && (I915_READ(PCH_DP_B) & DP_DETECTED))
intel_dp_init(dev, PCH_DP_B, PORT_B);
}
if (I915_READ(HDMIC) & PORT_DETECTED)
intel_hdmi_init(dev, HDMIC, PORT_C);
if (!dpd_is_edp && I915_READ(HDMID) & PORT_DETECTED)
intel_hdmi_init(dev, HDMID, PORT_D);
if (I915_READ(PCH_DP_C) & DP_DETECTED)
intel_dp_init(dev, PCH_DP_C, PORT_C);
if (I915_READ(PCH_DP_D) & DP_DETECTED)
intel_dp_init(dev, PCH_DP_D, PORT_D);
} else if (IS_VALLEYVIEW(dev)) {
int found;
/* Check for built-in panel first. Shares lanes with HDMI on SDVOC */
if (I915_READ(DP_C) & DP_DETECTED)
intel_dp_init(dev, DP_C, PORT_C);
if (I915_READ(SDVOB) & PORT_DETECTED) {
/* SDVOB multiplex with HDMIB */
found = intel_sdvo_init(dev, SDVOB, true);
if (!found)
intel_hdmi_init(dev, SDVOB, PORT_B);
if (!found && (I915_READ(DP_B) & DP_DETECTED))
intel_dp_init(dev, DP_B, PORT_B);
}
if (I915_READ(SDVOC) & PORT_DETECTED)
intel_hdmi_init(dev, SDVOC, PORT_C);
} else if (SUPPORTS_DIGITAL_OUTPUTS(dev)) {
bool found = false;
if (I915_READ(SDVOB) & SDVO_DETECTED) {
DRM_DEBUG_KMS("probing SDVOB\n");
found = intel_sdvo_init(dev, SDVOB, true);
if (!found && SUPPORTS_INTEGRATED_HDMI(dev)) {
DRM_DEBUG_KMS("probing HDMI on SDVOB\n");
intel_hdmi_init(dev, SDVOB, PORT_B);
}
if (!found && SUPPORTS_INTEGRATED_DP(dev)) {
DRM_DEBUG_KMS("probing DP_B\n");
intel_dp_init(dev, DP_B, PORT_B);
}
}
/* Before G4X, SDVOC doesn't have its own detect register */
if (I915_READ(SDVOB) & SDVO_DETECTED) {
DRM_DEBUG_KMS("probing SDVOC\n");
found = intel_sdvo_init(dev, SDVOC, false);
}
if (!found && (I915_READ(SDVOC) & SDVO_DETECTED)) {
if (SUPPORTS_INTEGRATED_HDMI(dev)) {
DRM_DEBUG_KMS("probing HDMI on SDVOC\n");
intel_hdmi_init(dev, SDVOC, PORT_C);
}
if (SUPPORTS_INTEGRATED_DP(dev)) {
DRM_DEBUG_KMS("probing DP_C\n");
intel_dp_init(dev, DP_C, PORT_C);
}
}
if (SUPPORTS_INTEGRATED_DP(dev) &&
(I915_READ(DP_D) & DP_DETECTED)) {
DRM_DEBUG_KMS("probing DP_D\n");
intel_dp_init(dev, DP_D, PORT_D);
}
} else if (IS_GEN2(dev))
intel_dvo_init(dev);
if (SUPPORTS_TV(dev))
intel_tv_init(dev);
list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
encoder->base.possible_crtcs = encoder->crtc_mask;
encoder->base.possible_clones =
intel_encoder_clones(encoder);
}
intel_init_pch_refclk(dev);
drm_helper_move_panel_connectors_to_head(dev);
}
static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb)
{
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
drm_framebuffer_cleanup(fb);
drm_gem_object_unreference_unlocked(&intel_fb->obj->base);
free(intel_fb, DRM_MEM_KMS);
}
static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
struct drm_file *file,
unsigned int *handle)
{
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
struct drm_i915_gem_object *obj = intel_fb->obj;
return drm_gem_handle_create(file, &obj->base, handle);
}
static const struct drm_framebuffer_funcs intel_fb_funcs = {
.destroy = intel_user_framebuffer_destroy,
.create_handle = intel_user_framebuffer_create_handle,
};
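/* Validate a framebuffer request (tiling mode, pitch, pixel format and
* offsets) against the hardware limits before wrapping the GEM object in
* a drm_framebuffer. */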
int intel_framebuffer_init(struct drm_device *dev,
struct intel_framebuffer *intel_fb,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj)
{
int ret;
if (obj->tiling_mode == I915_TILING_Y) {
DRM_DEBUG("hardware does not support tiling Y\n");
return -EINVAL;
}
if (mode_cmd->pitches[0] & 63) {
DRM_DEBUG("pitch (%d) must be at least 64 byte aligned\n",
mode_cmd->pitches[0]);
return -EINVAL;
}
/* FIXME <= Gen4 stride limits are a bit unclear */
if (mode_cmd->pitches[0] > 32768) {
DRM_DEBUG("pitch (%d) must be at less than 32768\n",
mode_cmd->pitches[0]);
return -EINVAL;
}
if (obj->tiling_mode != I915_TILING_NONE &&
mode_cmd->pitches[0] != obj->stride) {
DRM_DEBUG("pitch (%d) must match tiling stride (%d)\n",
mode_cmd->pitches[0], obj->stride);
return -EINVAL;
}
/* Reject formats not supported by any plane early. */
switch (mode_cmd->pixel_format) {
case DRM_FORMAT_C8:
case DRM_FORMAT_RGB565:
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
break;
case DRM_FORMAT_XRGB1555:
case DRM_FORMAT_ARGB1555:
if (INTEL_INFO(dev)->gen > 3) {
DRM_DEBUG("invalid format: 0x%08x\n", mode_cmd->pixel_format);
return -EINVAL;
}
break;
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ABGR8888:
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_ARGB2101010:
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_ABGR2101010:
if (INTEL_INFO(dev)->gen < 4) {
DRM_DEBUG("invalid format: 0x%08x\n", mode_cmd->pixel_format);
return -EINVAL;
}
break;
case DRM_FORMAT_YUYV:
case DRM_FORMAT_UYVY:
case DRM_FORMAT_YVYU:
case DRM_FORMAT_VYUY:
if (INTEL_INFO(dev)->gen < 5) {
DRM_DEBUG("invalid format: 0x%08x\n", mode_cmd->pixel_format);
return -EINVAL;
}
break;
default:
DRM_DEBUG("unsupported pixel format 0x%08x\n", mode_cmd->pixel_format);
return -EINVAL;
}
/* FIXME need to adjust LINOFF/TILEOFF accordingly. */
if (mode_cmd->offsets[0] != 0)
return -EINVAL;
ret = drm_framebuffer_init(dev, &intel_fb->base, &intel_fb_funcs);
if (ret) {
DRM_ERROR("framebuffer init failed %d\n", ret);
return ret;
}
drm_helper_mode_fill_fb_struct(&intel_fb->base, mode_cmd);
intel_fb->obj = obj;
return 0;
}
static int
intel_user_framebuffer_create(struct drm_device *dev,
struct drm_file *filp,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_framebuffer **res)
{
struct drm_i915_gem_object *obj;
obj = to_intel_bo(drm_gem_object_lookup(dev, filp,
mode_cmd->handles[0]));
if (&obj->base == NULL)
return -ENOENT;
return intel_framebuffer_create(dev, mode_cmd, obj, res);
}
static const struct drm_mode_config_funcs intel_mode_funcs = {
.fb_create = intel_user_framebuffer_create,
.output_poll_changed = intel_fb_output_poll_changed,
};
/* Set up chip specific display functions */
static void intel_init_display(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
/* We always want a DPMS function */
if (IS_HASWELL(dev)) {
dev_priv->display.crtc_mode_set = haswell_crtc_mode_set;
dev_priv->display.crtc_enable = haswell_crtc_enable;
dev_priv->display.crtc_disable = haswell_crtc_disable;
dev_priv->display.off = haswell_crtc_off;
dev_priv->display.update_plane = ironlake_update_plane;
} else if (HAS_PCH_SPLIT(dev)) {
dev_priv->display.crtc_mode_set = ironlake_crtc_mode_set;
dev_priv->display.crtc_enable = ironlake_crtc_enable;
dev_priv->display.crtc_disable = ironlake_crtc_disable;
dev_priv->display.off = ironlake_crtc_off;
dev_priv->display.update_plane = ironlake_update_plane;
} else {
dev_priv->display.crtc_mode_set = i9xx_crtc_mode_set;
dev_priv->display.crtc_enable = i9xx_crtc_enable;
dev_priv->display.crtc_disable = i9xx_crtc_disable;
dev_priv->display.off = i9xx_crtc_off;
dev_priv->display.update_plane = i9xx_update_plane;
}
/* Returns the core display clock speed */
if (IS_VALLEYVIEW(dev))
dev_priv->display.get_display_clock_speed =
valleyview_get_display_clock_speed;
else if (IS_I945G(dev) || (IS_G33(dev) && !IS_PINEVIEW_M(dev)))
dev_priv->display.get_display_clock_speed =
i945_get_display_clock_speed;
else if (IS_I915G(dev))
dev_priv->display.get_display_clock_speed =
i915_get_display_clock_speed;
else if (IS_I945GM(dev) || IS_845G(dev) || IS_PINEVIEW_M(dev))
dev_priv->display.get_display_clock_speed =
i9xx_misc_get_display_clock_speed;
else if (IS_I915GM(dev))
dev_priv->display.get_display_clock_speed =
i915gm_get_display_clock_speed;
else if (IS_I865G(dev))
dev_priv->display.get_display_clock_speed =
i865_get_display_clock_speed;
else if (IS_I85X(dev))
dev_priv->display.get_display_clock_speed =
i855_get_display_clock_speed;
else /* 852, 830 */
dev_priv->display.get_display_clock_speed =
i830_get_display_clock_speed;
if (HAS_PCH_SPLIT(dev)) {
if (IS_GEN5(dev)) {
dev_priv->display.fdi_link_train = ironlake_fdi_link_train;
dev_priv->display.write_eld = ironlake_write_eld;
} else if (IS_GEN6(dev)) {
dev_priv->display.fdi_link_train = gen6_fdi_link_train;
dev_priv->display.write_eld = ironlake_write_eld;
} else if (IS_IVYBRIDGE(dev)) {
/* FIXME: detect B0+ stepping and use auto training */
dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train;
dev_priv->display.write_eld = ironlake_write_eld;
dev_priv->display.modeset_global_resources =
ivb_modeset_global_resources;
} else if (IS_HASWELL(dev)) {
dev_priv->display.fdi_link_train = hsw_fdi_link_train;
dev_priv->display.write_eld = haswell_write_eld;
} else
dev_priv->display.update_wm = NULL;
} else if (IS_G4X(dev)) {
dev_priv->display.write_eld = g4x_write_eld;
}
/* Default just returns -ENODEV to indicate unsupported */
dev_priv->display.queue_flip = intel_default_queue_flip;
switch (INTEL_INFO(dev)->gen) {
case 2:
dev_priv->display.queue_flip = intel_gen2_queue_flip;
break;
case 3:
dev_priv->display.queue_flip = intel_gen3_queue_flip;
break;
case 4:
case 5:
dev_priv->display.queue_flip = intel_gen4_queue_flip;
break;
case 6:
dev_priv->display.queue_flip = intel_gen6_queue_flip;
break;
case 7:
dev_priv->display.queue_flip = intel_gen7_queue_flip;
break;
}
}
/*
* Some BIOSes insist on assuming the GPU's pipe A is enabled at suspend,
* resume, or other times. This quirk makes sure that's the case for
* affected systems.
*/
static void quirk_pipea_force(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
dev_priv->quirks |= QUIRK_PIPEA_FORCE;
DRM_INFO("applying pipe a force quirk\n");
}
/*
* Some machines (Lenovo U160) do not work with SSC on LVDS for some reason
*/
static void quirk_ssc_force_disable(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
dev_priv->quirks |= QUIRK_LVDS_SSC_DISABLE;
DRM_INFO("applying lvds SSC disable quirk\n");
}
/*
* A machine (e.g. Acer Aspire 5734Z) may need to invert the panel backlight
* brightness value
*/
static void quirk_invert_brightness(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
dev_priv->quirks |= QUIRK_INVERT_BRIGHTNESS;
DRM_INFO("applying inverted panel brightness quirk\n");
}
struct intel_quirk {
int device;
int subsystem_vendor;
int subsystem_device;
void (*hook)(struct drm_device *dev);
};
/* For systems that don't have a meaningful PCI subdevice/subvendor ID */
struct intel_dmi_quirk {
void (*hook)(struct drm_device *dev);
const struct dmi_system_id (*dmi_id_list)[];
};
static int intel_dmi_reverse_brightness(const struct dmi_system_id *id)
{
DRM_INFO("Backlight polarity reversed on %s\n", id->ident);
return 1;
}
static const struct intel_dmi_quirk intel_dmi_quirks[] = {
{
.dmi_id_list = &(const struct dmi_system_id[]) {
{
.callback = intel_dmi_reverse_brightness,
.ident = "NCR Corporation",
.matches = {DMI_MATCH(DMI_SYS_VENDOR, "NCR Corporation"),
DMI_MATCH(DMI_PRODUCT_NAME, ""),
},
},
{ } /* terminating entry */
},
.hook = quirk_invert_brightness,
},
};
#define PCI_ANY_ID (~0u)
static struct intel_quirk intel_quirks[] = {
/* HP Mini needs pipe A force quirk (LP: #322104) */
{ 0x27ae, 0x103c, 0x361a, quirk_pipea_force },
/* Toshiba Protege R-205, S-209 needs pipe A force quirk */
{ 0x2592, 0x1179, 0x0001, quirk_pipea_force },
/* ThinkPad T60 needs pipe A force quirk (bug #16494) */
{ 0x2782, 0x17aa, 0x201a, quirk_pipea_force },
/* 830/845 need to leave pipe A & dpll A up */
{ 0x2562, PCI_ANY_ID, PCI_ANY_ID, quirk_pipea_force },
{ 0x3577, PCI_ANY_ID, PCI_ANY_ID, quirk_pipea_force },
/* Lenovo U160 cannot use SSC on LVDS */
{ 0x0046, 0x17aa, 0x3920, quirk_ssc_force_disable },
/* Sony Vaio Y cannot use SSC on LVDS */
{ 0x0046, 0x104d, 0x9076, quirk_ssc_force_disable },
/* Acer Aspire 5734Z must invert backlight brightness */
{ 0x2a42, 0x1025, 0x0459, quirk_invert_brightness },
/* Acer Aspire 4736Z */
{ 0x2a42, 0x1025, 0x0260, quirk_invert_brightness },
/* Acer/eMachines G725 */
{ 0x2a42, 0x1025, 0x0210, quirk_invert_brightness },
/* Acer/eMachines e725 */
{ 0x2a42, 0x1025, 0x0212, quirk_invert_brightness },
/* Acer/Packard Bell NCL20 */
{ 0x2a42, 0x1025, 0x034b, quirk_invert_brightness },
};
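/* Match this device's PCI and DMI identity against the quirk tables above
* and run the hook of every entry that matches. */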
static void intel_init_quirks(struct drm_device *dev)
{
int i;
for (i = 0; i < ARRAY_SIZE(intel_quirks); i++) {
struct intel_quirk *q = &intel_quirks[i];
if (pci_get_device(dev->dev) == q->device &&
(pci_get_subvendor(dev->dev) == q->subsystem_vendor ||
q->subsystem_vendor == PCI_ANY_ID) &&
(pci_get_subdevice(dev->dev) == q->subsystem_device ||
q->subsystem_device == PCI_ANY_ID))
q->hook(dev);
}
for (i = 0; i < ARRAY_SIZE(intel_dmi_quirks); i++) {
if (dmi_check_system(*intel_dmi_quirks[i].dmi_id_list) != 0)
intel_dmi_quirks[i].hook(dev);
}
}
/* Disable the VGA plane that we never use */
static void i915_disable_vga(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u8 sr1;
u32 vga_reg;
if (HAS_PCH_SPLIT(dev))
vga_reg = CPU_VGACNTRL;
else
vga_reg = VGACNTRL;
#ifdef FREEBSD_WIP
vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO);
#endif /* FREEBSD_WIP */
outb(VGA_SR_INDEX, SR01);
sr1 = inb(VGA_SR_DATA);
outb(VGA_SR_DATA, sr1 | 1<<5);
#ifdef FREEBSD_WIP
vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
#endif /* FREEBSD_WIP */
udelay(300);
I915_WRITE(vga_reg, VGA_DISP_DISABLE);
POSTING_READ(vga_reg);
}
void intel_modeset_init_hw(struct drm_device *dev)
{
/* We attempt to init the necessary power wells early during initialization,
* so the subsystems that expect power to be enabled can work.
*/
intel_init_power_wells(dev);
intel_prepare_ddi(dev);
intel_init_clock_gating(dev);
DRM_LOCK(dev);
intel_enable_gt_powersave(dev);
DRM_UNLOCK(dev);
}
void intel_modeset_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int i, ret;
drm_mode_config_init(dev);
dev->mode_config.min_width = 0;
dev->mode_config.min_height = 0;
dev->mode_config.preferred_depth = 24;
dev->mode_config.prefer_shadow = 1;
dev->mode_config.funcs = &intel_mode_funcs;
intel_init_quirks(dev);
intel_init_pm(dev);
intel_init_display(dev);
if (IS_GEN2(dev)) {
dev->mode_config.max_width = 2048;
dev->mode_config.max_height = 2048;
} else if (IS_GEN3(dev)) {
dev->mode_config.max_width = 4096;
dev->mode_config.max_height = 4096;
} else {
dev->mode_config.max_width = 8192;
dev->mode_config.max_height = 8192;
}
dev->mode_config.fb_base = dev_priv->mm.gtt_base_addr;
DRM_DEBUG_KMS("%d display pipe%s available.\n",
dev_priv->num_pipe, dev_priv->num_pipe > 1 ? "s" : "");
for (i = 0; i < dev_priv->num_pipe; i++) {
intel_crtc_init(dev, i);
ret = intel_plane_init(dev, i);
if (ret)
DRM_DEBUG_KMS("plane %d init failed: %d\n", i, ret);
}
intel_cpu_pll_init(dev);
intel_pch_pll_init(dev);
/* Just disable it once at startup */
i915_disable_vga(dev);
intel_setup_outputs(dev);
}
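/* Sever the connector -> encoder -> crtc links of a connector whose pipe
* is being shut off during state sanitization. */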
static void
intel_connector_break_all_links(struct intel_connector *connector)
{
connector->base.dpms = DRM_MODE_DPMS_OFF;
connector->base.encoder = NULL;
connector->encoder->connectors_active = false;
connector->encoder->base.crtc = NULL;
}
static void intel_enable_pipe_a(struct drm_device *dev)
{
struct intel_connector *connector;
struct drm_connector *crt = NULL;
struct intel_load_detect_pipe load_detect_temp;
/* We can't just switch on the pipe A, we need to set things up with a
* proper mode and output configuration. As a gross hack, enable pipe A
* by enabling the load detect pipe once. */
list_for_each_entry(connector,
&dev->mode_config.connector_list,
base.head) {
if (connector->encoder->type == INTEL_OUTPUT_ANALOG) {
crt = &connector->base;
break;
}
}
if (!crt)
return;
if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp))
intel_release_load_detect_pipe(crt, &load_detect_temp);
}
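/* On pre-gen4 hardware the plane -> pipe routing is programmable; return
* false if the other plane is enabled and scanning out on our pipe, i.e.
* the BIOS left a crossed mapping behind. */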
static bool
intel_check_plane_mapping(struct intel_crtc *crtc)
{
struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
u32 reg, val;
if (dev_priv->num_pipe == 1)
return true;
reg = DSPCNTR(!crtc->plane);
val = I915_READ(reg);
if ((val & DISPLAY_PLANE_ENABLE) &&
(!!(val & DISPPLANE_SEL_PIPE_MASK) == crtc->pipe))
return false;
return true;
}
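/* Bring one crtc's software state into line with the hardware: fix up a
* wrong plane mapping, honour the pipe A force quirk, and break stale
* connector/encoder links when the pipe turns out to be off. */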
static void intel_sanitize_crtc(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 reg;
/* Clear any frame start delays used for debugging left by the BIOS */
reg = PIPECONF(crtc->cpu_transcoder);
I915_WRITE(reg, I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
/* We need to sanitize the plane -> pipe mapping first because this will
* disable the crtc (and hence change the state) if it is wrong. Note
* that gen4+ has a fixed plane -> pipe mapping. */
if (INTEL_INFO(dev)->gen < 4 && !intel_check_plane_mapping(crtc)) {
struct intel_connector *connector;
bool plane;
DRM_DEBUG_KMS("[CRTC:%d] wrong plane connection detected!\n",
crtc->base.base.id);
/* Pipe has the wrong plane attached and the plane is active.
* Temporarily change the plane mapping and disable everything
* ... */
plane = crtc->plane;
crtc->plane = !plane;
dev_priv->display.crtc_disable(&crtc->base);
crtc->plane = plane;
/* ... and break all links. */
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
if (connector->encoder->base.crtc != &crtc->base)
continue;
intel_connector_break_all_links(connector);
}
WARN_ON(crtc->active);
crtc->base.enabled = false;
}
if (dev_priv->quirks & QUIRK_PIPEA_FORCE &&
crtc->pipe == PIPE_A && !crtc->active) {
/* BIOS forgot to enable pipe A, this mostly happens after
* resume. Force-enable the pipe to fix this; the update_dpms
* call below will restore the pipe to the right state, but leave
* the required bits on. */
intel_enable_pipe_a(dev);
}
/* Adjust the state of the output pipe according to whether we
* have active connectors/encoders. */
intel_crtc_update_dpms(&crtc->base);
if (crtc->active != crtc->base.enabled) {
struct intel_encoder *encoder;
/* This can happen either due to bugs in the get_hw_state
* functions or because the pipe is force-enabled due to the
* pipe A quirk. */
DRM_DEBUG_KMS("[CRTC:%d] hw state adjusted, was %s, now %s\n",
crtc->base.base.id,
crtc->base.enabled ? "enabled" : "disabled",
crtc->active ? "enabled" : "disabled");
crtc->base.enabled = crtc->active;
/* Because we only establish the connector -> encoder ->
* crtc links if something is active, this means the
* crtc is now deactivated. Break the links. connector
* -> encoder links are only established when things are
* actually up, hence no need to break them. */
WARN_ON(crtc->active);
for_each_encoder_on_crtc(dev, &crtc->base, encoder) {
WARN_ON(encoder->connectors_active);
encoder->base.crtc = NULL;
}
}
}
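/* Fix up an encoder that claims active connectors without a running pipe,
* typically fallout from the resume register restore: disable it manually
* and clamp its connectors to off. */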
static void intel_sanitize_encoder(struct intel_encoder *encoder)
{
struct intel_connector *connector;
struct drm_device *dev = encoder->base.dev;
/* We need to check both for a crtc link (meaning that the
* encoder is active and trying to read from a pipe) and the
* pipe itself being active. */
bool has_active_crtc = encoder->base.crtc &&
to_intel_crtc(encoder->base.crtc)->active;
if (encoder->connectors_active && !has_active_crtc) {
DRM_DEBUG_KMS("[ENCODER:%d:%s] has active connectors but no active pipe!\n",
encoder->base.base.id,
drm_get_encoder_name(&encoder->base));
/* Connector is active, but has no active pipe. This is
* fallout from our resume register restoring. Disable
* the encoder manually again. */
if (encoder->base.crtc) {
DRM_DEBUG_KMS("[ENCODER:%d:%s] manually disabled\n",
encoder->base.base.id,
drm_get_encoder_name(&encoder->base));
encoder->disable(encoder);
}
/* Inconsistent output/port/pipe state happens presumably due to
* a bug in one of the get_hw_state functions. Or someplace else
* in our code, like the register restore mess on resume. Clamp
* things to off as a safer default. */
list_for_each_entry(connector,
&dev->mode_config.connector_list,
base.head) {
if (connector->encoder != encoder)
continue;
intel_connector_break_all_links(connector);
}
}
/* Enabled encoders without active connectors will be fixed in
* the crtc fixup. */
}
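/* Re-disable the legacy VGA plane if something (typically the BIOS) has
* turned it back on behind our back. */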
static void i915_redisable_vga(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 vga_reg;
if (HAS_PCH_SPLIT(dev))
vga_reg = CPU_VGACNTRL;
else
vga_reg = VGACNTRL;
if (I915_READ(vga_reg) != VGA_DISP_DISABLE) {
DRM_DEBUG_KMS("Something enabled VGA plane, disabling it\n");
I915_WRITE(vga_reg, VGA_DISP_DISABLE);
POSTING_READ(vga_reg);
}
}
/* Scan out the current hw modeset state, sanitize it and map it into the drm
* and i915 state tracking structures. */
void intel_modeset_setup_hw_state(struct drm_device *dev,
bool force_restore)
{
struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe;
u32 tmp;
struct intel_crtc *crtc;
struct intel_encoder *encoder;
struct intel_connector *connector;
if (IS_HASWELL(dev)) {
tmp = I915_READ(TRANS_DDI_FUNC_CTL(TRANSCODER_EDP));
if (tmp & TRANS_DDI_FUNC_ENABLE) {
switch (tmp & TRANS_DDI_EDP_INPUT_MASK) {
case TRANS_DDI_EDP_INPUT_A_ON:
case TRANS_DDI_EDP_INPUT_A_ONOFF:
pipe = PIPE_A;
break;
case TRANS_DDI_EDP_INPUT_B_ONOFF:
pipe = PIPE_B;
break;
case TRANS_DDI_EDP_INPUT_C_ONOFF:
pipe = PIPE_C;
break;
}
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
crtc->cpu_transcoder = TRANSCODER_EDP;
DRM_DEBUG_KMS("Pipe %c using transcoder EDP\n",
pipe_name(pipe));
}
}
for_each_pipe(pipe) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
tmp = I915_READ(PIPECONF(crtc->cpu_transcoder));
if (tmp & PIPECONF_ENABLE)
crtc->active = true;
else
crtc->active = false;
crtc->base.enabled = crtc->active;
DRM_DEBUG_KMS("[CRTC:%d] hw state readout: %s\n",
crtc->base.base.id,
crtc->active ? "enabled" : "disabled");
}
if (IS_HASWELL(dev))
intel_ddi_setup_hw_pll_state(dev);
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
pipe = 0;
if (encoder->get_hw_state(encoder, &pipe)) {
encoder->base.crtc =
dev_priv->pipe_to_crtc_mapping[pipe];
} else {
encoder->base.crtc = NULL;
}
encoder->connectors_active = false;
DRM_DEBUG_KMS("[ENCODER:%d:%s] hw state readout: %s, pipe=%i\n",
encoder->base.base.id,
drm_get_encoder_name(&encoder->base),
encoder->base.crtc ? "enabled" : "disabled",
pipe);
}
list_for_each_entry(connector, &dev->mode_config.connector_list,
base.head) {
if (connector->get_hw_state(connector)) {
connector->base.dpms = DRM_MODE_DPMS_ON;
connector->encoder->connectors_active = true;
connector->base.encoder = &connector->encoder->base;
} else {
connector->base.dpms = DRM_MODE_DPMS_OFF;
connector->base.encoder = NULL;
}
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] hw state readout: %s\n",
connector->base.base.id,
drm_get_connector_name(&connector->base),
connector->base.encoder ? "enabled" : "disabled");
}
/* HW state is read out, now we need to sanitize this mess. */
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
base.head) {
intel_sanitize_encoder(encoder);
}
for_each_pipe(pipe) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
intel_sanitize_crtc(crtc);
}
if (force_restore) {
for_each_pipe(pipe) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
intel_set_mode(&crtc->base, &crtc->base.mode,
crtc->base.x, crtc->base.y, crtc->base.fb);
}
i915_redisable_vga(dev);
} else {
intel_modeset_update_staged_output_state(dev);
}
intel_modeset_check_state(dev);
drm_mode_config_reset(dev);
}
void intel_modeset_gem_init(struct drm_device *dev)
{
intel_modeset_init_hw(dev);
intel_setup_overlay(dev);
intel_modeset_setup_hw_state(dev, false);
}
void intel_modeset_cleanup(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *crtc;
struct intel_crtc *intel_crtc;
drm_kms_helper_poll_fini(dev);
DRM_LOCK(dev);
intel_unregister_dsm_handler();
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
/* Skip inactive CRTCs */
if (!crtc->fb)
continue;
intel_crtc = to_intel_crtc(crtc);
intel_increase_pllclock(crtc);
}
intel_disable_fbc(dev);
intel_disable_gt_powersave(dev);
ironlake_teardown_rc6(dev);
if (IS_VALLEYVIEW(dev))
vlv_init_dpio(dev);
DRM_UNLOCK(dev);
/* Disable the irq before mode object teardown, for the irq might
* enqueue unpin/hotplug work. */
drm_irq_uninstall(dev);
if (taskqueue_cancel(dev_priv->wq, &dev_priv->hotplug_work, NULL))
taskqueue_drain(dev_priv->wq, &dev_priv->hotplug_work);
if (taskqueue_cancel(dev_priv->wq, &dev_priv->rps.work, NULL))
taskqueue_drain(dev_priv->wq, &dev_priv->rps.work);
/* flush any delayed tasks or pending work */
taskqueue_drain_all(dev_priv->wq);
/* destroy backlight, if any, before the connectors */
intel_panel_destroy_backlight(dev);
drm_mode_config_cleanup(dev);
}
/*
* Return the encoder currently attached to the connector.
*/
struct drm_encoder *intel_best_encoder(struct drm_connector *connector)
{
return &intel_attached_encoder(connector)->base;
}
void intel_connector_attach_encoder(struct intel_connector *connector,
struct intel_encoder *encoder)
{
connector->encoder = encoder;
drm_mode_connector_attach_encoder(&connector->base,
&encoder->base);
}
/*
* set vga decode state - true == enable VGA decode
*/
int intel_modeset_vga_set_state(struct drm_device *dev, bool state)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u16 gmch_ctrl;
pci_read_config_word(dev_priv->bridge_dev, INTEL_GMCH_CTRL, &gmch_ctrl);
if (state)
gmch_ctrl &= ~INTEL_GMCH_VGA_DISABLE;
else
gmch_ctrl |= INTEL_GMCH_VGA_DISABLE;
pci_write_config_word(dev_priv->bridge_dev, INTEL_GMCH_CTRL, gmch_ctrl);
return 0;
}
//#ifdef CONFIG_DEBUG_FS
#define seq_printf(m, fmt, ...) sbuf_printf((m), (fmt), ##__VA_ARGS__)
struct intel_display_error_state {
struct intel_cursor_error_state {
u32 control;
u32 position;
u32 base;
u32 size;
} cursor[I915_MAX_PIPES];
struct intel_pipe_error_state {
u32 conf;
u32 source;
u32 htotal;
u32 hblank;
u32 hsync;
u32 vtotal;
u32 vblank;
u32 vsync;
} pipe[I915_MAX_PIPES];
struct intel_plane_error_state {
u32 control;
u32 stride;
u32 size;
u32 pos;
u32 addr;
u32 surface;
u32 tile_offset;
} plane[I915_MAX_PIPES];
};
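/* Snapshot the cursor, plane and pipe registers of every pipe so they can
* be dumped alongside a GPU error report. */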
struct intel_display_error_state *
intel_display_capture_error_state(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_display_error_state *error;
enum transcoder cpu_transcoder;
int i;
error = malloc(sizeof(*error), DRM_MEM_KMS, M_NOWAIT);
if (error == NULL)
return NULL;
for_each_pipe(i) {
cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv, i);
error->cursor[i].control = I915_READ(CURCNTR(i));
error->cursor[i].position = I915_READ(CURPOS(i));
error->cursor[i].base = I915_READ(CURBASE(i));
error->plane[i].control = I915_READ(DSPCNTR(i));
error->plane[i].stride = I915_READ(DSPSTRIDE(i));
error->plane[i].size = I915_READ(DSPSIZE(i));
error->plane[i].pos = I915_READ(DSPPOS(i));
error->plane[i].addr = I915_READ(DSPADDR(i));
if (INTEL_INFO(dev)->gen >= 4) {
error->plane[i].surface = I915_READ(DSPSURF(i));
error->plane[i].tile_offset = I915_READ(DSPTILEOFF(i));
}
error->pipe[i].conf = I915_READ(PIPECONF(cpu_transcoder));
error->pipe[i].source = I915_READ(PIPESRC(i));
error->pipe[i].htotal = I915_READ(HTOTAL(cpu_transcoder));
error->pipe[i].hblank = I915_READ(HBLANK(cpu_transcoder));
error->pipe[i].hsync = I915_READ(HSYNC(cpu_transcoder));
error->pipe[i].vtotal = I915_READ(VTOTAL(cpu_transcoder));
error->pipe[i].vblank = I915_READ(VBLANK(cpu_transcoder));
error->pipe[i].vsync = I915_READ(VSYNC(cpu_transcoder));
}
return error;
}
void
intel_display_print_error_state(struct sbuf *m,
struct drm_device *dev,
struct intel_display_error_state *error)
{
drm_i915_private_t *dev_priv = dev->dev_private;
int i;
seq_printf(m, "Num Pipes: %d\n", dev_priv->num_pipe);
for_each_pipe(i) {
seq_printf(m, "Pipe [%d]:\n", i);
seq_printf(m, " CONF: %08x\n", error->pipe[i].conf);
seq_printf(m, " SRC: %08x\n", error->pipe[i].source);
seq_printf(m, " HTOTAL: %08x\n", error->pipe[i].htotal);
seq_printf(m, " HBLANK: %08x\n", error->pipe[i].hblank);
seq_printf(m, " HSYNC: %08x\n", error->pipe[i].hsync);
seq_printf(m, " VTOTAL: %08x\n", error->pipe[i].vtotal);
seq_printf(m, " VBLANK: %08x\n", error->pipe[i].vblank);
seq_printf(m, " VSYNC: %08x\n", error->pipe[i].vsync);
seq_printf(m, "Plane [%d]:\n", i);
seq_printf(m, " CNTR: %08x\n", error->plane[i].control);
seq_printf(m, " STRIDE: %08x\n", error->plane[i].stride);
seq_printf(m, " SIZE: %08x\n", error->plane[i].size);
seq_printf(m, " POS: %08x\n", error->plane[i].pos);
seq_printf(m, " ADDR: %08x\n", error->plane[i].addr);
if (INTEL_INFO(dev)->gen >= 4) {
seq_printf(m, " SURF: %08x\n", error->plane[i].surface);
seq_printf(m, " TILEOFF: %08x\n", error->plane[i].tile_offset);
}
seq_printf(m, "Cursor [%d]:\n", i);
seq_printf(m, " CNTR: %08x\n", error->cursor[i].control);
seq_printf(m, " POS: %08x\n", error->cursor[i].position);
seq_printf(m, " BASE: %08x\n", error->cursor[i].base);
}
}
//#endif
Index: head/sys/dev/drm2/radeon/atombios.h
===================================================================
--- head/sys/dev/drm2/radeon/atombios.h (revision 300049)
+++ head/sys/dev/drm2/radeon/atombios.h (revision 300050)
@@ -1,8013 +1,8013 @@
/*
* Copyright 2006-2007 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/****************************************************************************/
/*Portion I: Definitions shared between VBIOS and Driver */
/****************************************************************************/
#ifndef _ATOMBIOS_H
#define _ATOMBIOS_H
#define ATOM_VERSION_MAJOR 0x00020000
#define ATOM_VERSION_MINOR 0x00000002
#define ATOM_HEADER_VERSION (ATOM_VERSION_MAJOR | ATOM_VERSION_MINOR)
/* Endianness must be specified before inclusion;
* there is no default (see the #error below).
*/
#ifndef ATOM_BIG_ENDIAN
#error Endian not specified
#endif
#ifdef _H2INC
#ifndef ULONG
typedef unsigned long ULONG;
#endif
#ifndef UCHAR
typedef unsigned char UCHAR;
#endif
#ifndef USHORT
typedef unsigned short USHORT;
#endif
#endif
#define ATOM_DAC_A 0
#define ATOM_DAC_B 1
#define ATOM_EXT_DAC 2
#define ATOM_CRTC1 0
#define ATOM_CRTC2 1
#define ATOM_CRTC3 2
#define ATOM_CRTC4 3
#define ATOM_CRTC5 4
#define ATOM_CRTC6 5
#define ATOM_CRTC_INVALID 0xFF
#define ATOM_DIGA 0
#define ATOM_DIGB 1
#define ATOM_PPLL1 0
#define ATOM_PPLL2 1
#define ATOM_DCPLL 2
#define ATOM_PPLL0 2
#define ATOM_EXT_PLL1 8
#define ATOM_EXT_PLL2 9
#define ATOM_EXT_CLOCK 10
#define ATOM_PPLL_INVALID 0xFF
#define ENCODER_REFCLK_SRC_P1PLL 0
#define ENCODER_REFCLK_SRC_P2PLL 1
#define ENCODER_REFCLK_SRC_DCPLL 2
#define ENCODER_REFCLK_SRC_EXTCLK 3
#define ENCODER_REFCLK_SRC_INVALID 0xFF
#define ATOM_SCALER1 0
#define ATOM_SCALER2 1
#define ATOM_SCALER_DISABLE 0
#define ATOM_SCALER_CENTER 1
#define ATOM_SCALER_EXPANSION 2
#define ATOM_SCALER_MULTI_EX 3
#define ATOM_DISABLE 0
#define ATOM_ENABLE 1
#define ATOM_LCD_BLOFF (ATOM_DISABLE+2)
#define ATOM_LCD_BLON (ATOM_ENABLE+2)
#define ATOM_LCD_BL_BRIGHTNESS_CONTROL (ATOM_ENABLE+3)
#define ATOM_LCD_SELFTEST_START (ATOM_DISABLE+5)
#define ATOM_LCD_SELFTEST_STOP (ATOM_ENABLE+5)
#define ATOM_ENCODER_INIT (ATOM_DISABLE+7)
#define ATOM_INIT (ATOM_DISABLE+7)
#define ATOM_GET_STATUS (ATOM_DISABLE+8)
#define ATOM_BLANKING 1
#define ATOM_BLANKING_OFF 0
#define ATOM_CURSOR1 0
#define ATOM_CURSOR2 1
#define ATOM_ICON1 0
#define ATOM_ICON2 1
#define ATOM_CRT1 0
#define ATOM_CRT2 1
#define ATOM_TV_NTSC 1
#define ATOM_TV_NTSCJ 2
#define ATOM_TV_PAL 3
#define ATOM_TV_PALM 4
#define ATOM_TV_PALCN 5
#define ATOM_TV_PALN 6
#define ATOM_TV_PAL60 7
#define ATOM_TV_SECAM 8
#define ATOM_TV_CV 16
#define ATOM_DAC1_PS2 1
#define ATOM_DAC1_CV 2
#define ATOM_DAC1_NTSC 3
#define ATOM_DAC1_PAL 4
#define ATOM_DAC2_PS2 ATOM_DAC1_PS2
#define ATOM_DAC2_CV ATOM_DAC1_CV
#define ATOM_DAC2_NTSC ATOM_DAC1_NTSC
#define ATOM_DAC2_PAL ATOM_DAC1_PAL
#define ATOM_PM_ON 0
#define ATOM_PM_STANDBY 1
#define ATOM_PM_SUSPEND 2
#define ATOM_PM_OFF 3
/* Bit0:{=0:single, =1:dual},
Bit1 {=0:666RGB, =1:888RGB},
Bit2:3:{Grey level}
Bit4:{=0:LDI format for RGB888, =1 FPDI format for RGB888}*/
#define ATOM_PANEL_MISC_DUAL 0x00000001
#define ATOM_PANEL_MISC_888RGB 0x00000002
#define ATOM_PANEL_MISC_GREY_LEVEL 0x0000000C
#define ATOM_PANEL_MISC_FPDI 0x00000010
#define ATOM_PANEL_MISC_GREY_LEVEL_SHIFT 2
#define ATOM_PANEL_MISC_SPATIAL 0x00000020
#define ATOM_PANEL_MISC_TEMPORAL 0x00000040
#define ATOM_PANEL_MISC_API_ENABLED 0x00000080
#define MEMTYPE_DDR1 "DDR1"
#define MEMTYPE_DDR2 "DDR2"
#define MEMTYPE_DDR3 "DDR3"
#define MEMTYPE_DDR4 "DDR4"
#define ASIC_BUS_TYPE_PCI "PCI"
#define ASIC_BUS_TYPE_AGP "AGP"
#define ASIC_BUS_TYPE_PCIE "PCI_EXPRESS"
/* Maximum size of that FireGL flag string */
#define ATOM_FIREGL_FLAG_STRING "FGL" //Flag used to enable FireGL Support
#define ATOM_MAX_SIZE_OF_FIREGL_FLAG_STRING 3 //sizeof( ATOM_FIREGL_FLAG_STRING )
#define ATOM_FAKE_DESKTOP_STRING "DSK" //Flag used to enable mobile ASIC on Desktop
#define ATOM_MAX_SIZE_OF_FAKE_DESKTOP_STRING ATOM_MAX_SIZE_OF_FIREGL_FLAG_STRING
#define ATOM_M54T_FLAG_STRING "M54T" //Flag used to enable M54T Support
#define ATOM_MAX_SIZE_OF_M54T_FLAG_STRING 4 //sizeof( ATOM_M54T_FLAG_STRING )
#define HW_ASSISTED_I2C_STATUS_FAILURE 2
#define HW_ASSISTED_I2C_STATUS_SUCCESS 1
#pragma pack(1) /* BIOS data must use byte alignment */
/* Define offset to location of ROM header. */
#define OFFSET_TO_POINTER_TO_ATOM_ROM_HEADER 0x00000048L
#define OFFSET_TO_ATOM_ROM_IMAGE_SIZE 0x00000002L
#define OFFSET_TO_ATOMBIOS_ASIC_BUS_MEM_TYPE 0x94
#define MAXSIZE_OF_ATOMBIOS_ASIC_BUS_MEM_TYPE 20 /* including the terminator 0x0! */
#define OFFSET_TO_GET_ATOMBIOS_STRINGS_NUMBER 0x002f
#define OFFSET_TO_GET_ATOMBIOS_STRINGS_START 0x006e
/* Common header for all ROM Data tables.
Every table pointed to by _ATOM_MASTER_DATA_TABLE has this common header,
and the pointer actually points to this header. */
typedef struct _ATOM_COMMON_TABLE_HEADER
{
USHORT usStructureSize;
UCHAR ucTableFormatRevision; /*Change it when the Parser is not backward compatible */
UCHAR ucTableContentRevision; /*Change it only when the table needs to change but the firmware */
/*Image can't be updated, while Driver needs to carry the new table! */
}ATOM_COMMON_TABLE_HEADER;
/****************************************************************************/
// Structure stores the ROM header.
/****************************************************************************/
typedef struct _ATOM_ROM_HEADER
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR uaFirmWareSignature[4]; /*Signature to distinguish between Atombios and non-atombios,
atombios should init it as "ATOM", don't change the position */
USHORT usBiosRuntimeSegmentAddress;
USHORT usProtectedModeInfoOffset;
USHORT usConfigFilenameOffset;
USHORT usCRC_BlockOffset;
USHORT usBIOS_BootupMessageOffset;
USHORT usInt10Offset;
USHORT usPciBusDevInitCode;
USHORT usIoBaseAddress;
USHORT usSubsystemVendorID;
USHORT usSubsystemID;
USHORT usPCI_InfoOffset;
USHORT usMasterCommandTableOffset; /*Offset for SW to get all command table offsets, Don't change the position */
USHORT usMasterDataTableOffset; /*Offset for SW to get all data table offsets, Don't change the position */
UCHAR ucExtendedFunctionCode;
UCHAR ucReserved;
}ATOM_ROM_HEADER;
/*==============================Command Table Portion==================================== */
#ifdef UEFI_BUILD
#define UTEMP USHORT
#define USHORT void*
#endif
/****************************************************************************/
// Structures used in Command.mtb
/****************************************************************************/
typedef struct _ATOM_MASTER_LIST_OF_COMMAND_TABLES{
USHORT ASIC_Init; //Function Table, used by various SW components,latest version 1.1
USHORT GetDisplaySurfaceSize; //Atomic Table, Used by Bios when enabling HW ICON
USHORT ASIC_RegistersInit; //Atomic Table, indirectly used by various SW components,called from ASIC_Init
USHORT VRAM_BlockVenderDetection; //Atomic Table, used only by Bios
USHORT DIGxEncoderControl; //Only used by Bios
USHORT MemoryControllerInit; //Atomic Table, indirectly used by various SW components,called from ASIC_Init
USHORT EnableCRTCMemReq; //Function Table,directly used by various SW components,latest version 2.1
USHORT MemoryParamAdjust; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock if needed
USHORT DVOEncoderControl; //Function Table,directly used by various SW components,latest version 1.2
USHORT GPIOPinControl; //Atomic Table, only used by Bios
USHORT SetEngineClock; //Function Table,directly used by various SW components,latest version 1.1
USHORT SetMemoryClock; //Function Table,directly used by various SW components,latest version 1.1
USHORT SetPixelClock; //Function Table,directly used by various SW components,latest version 1.2
USHORT EnableDispPowerGating; //Atomic Table, indirectly used by various SW components,called from ASIC_Init
USHORT ResetMemoryDLL; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock
USHORT ResetMemoryDevice; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock
USHORT MemoryPLLInit; //Atomic Table, used only by Bios
USHORT AdjustDisplayPll; //Atomic Table, used by various SW components.
USHORT AdjustMemoryController; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock
USHORT EnableASIC_StaticPwrMgt; //Atomic Table, only used by Bios
USHORT ASIC_StaticPwrMgtStatusChange; //Obsolete , only used by Bios
USHORT DAC_LoadDetection; //Atomic Table, directly used by various SW components,latest version 1.2
USHORT LVTMAEncoderControl; //Atomic Table,directly used by various SW components,latest version 1.3
USHORT HW_Misc_Operation; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT DAC1EncoderControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT DAC2EncoderControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT DVOOutputControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT CV1OutputControl; //Atomic Table, Atomic Table, Obsolete from Ry6xx, use DAC2 Output instead
USHORT GetConditionalGoldenSetting; //Only used by Bios
USHORT TVEncoderControl; //Function Table,directly used by various SW components,latest version 1.1
USHORT PatchMCSetting; //only used by BIOS
USHORT MC_SEQ_Control; //only used by BIOS
USHORT TV1OutputControl; //Atomic Table, Obsolete from Ry6xx, use DAC2 Output instead
USHORT EnableScaler; //Atomic Table, used only by Bios
USHORT BlankCRTC; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT EnableCRTC; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT GetPixelClock; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT EnableVGA_Render; //Function Table,directly used by various SW components,latest version 1.1
USHORT GetSCLKOverMCLKRatio; //Atomic Table, only used by Bios
USHORT SetCRTC_Timing; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT SetCRTC_OverScan; //Atomic Table, used by various SW components,latest version 1.1
USHORT SetCRTC_Replication; //Atomic Table, used only by Bios
USHORT SelectCRTC_Source; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT EnableGraphSurfaces; //Atomic Table, used only by Bios
USHORT UpdateCRTC_DoubleBufferRegisters; //Atomic Table, used only by Bios
USHORT LUT_AutoFill; //Atomic Table, only used by Bios
USHORT EnableHW_IconCursor; //Atomic Table, only used by Bios
USHORT GetMemoryClock; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT GetEngineClock; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT SetCRTC_UsingDTDTiming; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT ExternalEncoderControl; //Atomic Table, directly used by various SW components,latest version 2.1
USHORT LVTMAOutputControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT VRAM_BlockDetectionByStrap; //Atomic Table, used only by Bios
USHORT MemoryCleanUp; //Atomic Table, only used by Bios
USHORT ProcessI2cChannelTransaction; //Function Table,only used by Bios
USHORT WriteOneByteToHWAssistedI2C; //Function Table,indirectly used by various SW components
USHORT ReadHWAssistedI2CStatus; //Atomic Table, indirectly used by various SW components
USHORT SpeedFanControl; //Function Table,indirectly used by various SW components,called from ASIC_Init
USHORT PowerConnectorDetection; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT MC_Synchronization; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock
USHORT ComputeMemoryEnginePLL; //Atomic Table, indirectly used by various SW components,called from SetMemory/EngineClock
USHORT MemoryRefreshConversion; //Atomic Table, indirectly used by various SW components,called from SetMemory or SetEngineClock
USHORT VRAM_GetCurrentInfoBlock; //Atomic Table, used only by Bios
USHORT DynamicMemorySettings; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock
USHORT MemoryTraining; //Atomic Table, used only by Bios
USHORT EnableSpreadSpectrumOnPPLL; //Atomic Table, directly used by various SW components,latest version 1.2
USHORT TMDSAOutputControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT SetVoltage; //Function Table,directly and/or indirectly used by various SW components,latest version 1.1
USHORT DAC1OutputControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT DAC2OutputControl; //Atomic Table, directly used by various SW components,latest version 1.1
USHORT ComputeMemoryClockParam; //Function Table,only used by Bios, obsolete soon. Switch to use "ReadEDIDFromHWAssistedI2C"
USHORT ClockSource; //Atomic Table, indirectly used by various SW components,called from ASIC_Init
USHORT MemoryDeviceInit; //Atomic Table, indirectly used by various SW components,called from SetMemoryClock
USHORT GetDispObjectInfo; //Atomic Table, indirectly used by various SW components,called from EnableVGARender
USHORT DIG1EncoderControl; //Atomic Table,directly used by various SW components,latest version 1.1
USHORT DIG2EncoderControl; //Atomic Table,directly used by various SW components,latest version 1.1
USHORT DIG1TransmitterControl; //Atomic Table,directly used by various SW components,latest version 1.1
USHORT DIG2TransmitterControl; //Atomic Table,directly used by various SW components,latest version 1.1
USHORT ProcessAuxChannelTransaction; //Function Table,only used by Bios
USHORT DPEncoderService; //Function Table,only used by Bios
USHORT GetVoltageInfo; //Function Table,only used by Bios since SI
}ATOM_MASTER_LIST_OF_COMMAND_TABLES;
// For backward compatibility
#define ReadEDIDFromHWAssistedI2C ProcessI2cChannelTransaction
#define DPTranslatorControl DIG2EncoderControl
#define UNIPHYTransmitterControl DIG1TransmitterControl
#define LVTMATransmitterControl DIG2TransmitterControl
#define SetCRTC_DPM_State GetConditionalGoldenSetting
#define SetUniphyInstance ASIC_StaticPwrMgtStatusChange
#define HPDInterruptService ReadHWAssistedI2CStatus
#define EnableVGA_Access GetSCLKOverMCLKRatio
#define EnableYUV GetDispObjectInfo
#define DynamicClockGating EnableDispPowerGating
#define SetupHWAssistedI2CStatus ComputeMemoryClockParam
#define TMDSAEncoderControl PatchMCSetting
#define LVDSEncoderControl MC_SEQ_Control
#define LCD1OutputControl HW_Misc_Operation
typedef struct _ATOM_MASTER_COMMAND_TABLE
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_MASTER_LIST_OF_COMMAND_TABLES ListOfCommandTables;
}ATOM_MASTER_COMMAND_TABLE;
/****************************************************************************/
// Structures used in every command table
/****************************************************************************/
typedef struct _ATOM_TABLE_ATTRIBUTE
{
#if ATOM_BIG_ENDIAN
USHORT UpdatedByUtility:1; //[15]=Table updated by utility flag
USHORT PS_SizeInBytes:7; //[14:8]=Size of parameter space in Bytes (multiple of a dword),
USHORT WS_SizeInBytes:8; //[7:0]=Size of workspace in Bytes (in multiple of a dword),
#else
USHORT WS_SizeInBytes:8; //[7:0]=Size of workspace in Bytes (in multiple of a dword),
USHORT PS_SizeInBytes:7; //[14:8]=Size of parameter space in Bytes (multiple of a dword),
USHORT UpdatedByUtility:1; //[15]=Table updated by utility flag
#endif
}ATOM_TABLE_ATTRIBUTE;
typedef union _ATOM_TABLE_ATTRIBUTE_ACCESS
{
ATOM_TABLE_ATTRIBUTE sbfAccess;
USHORT susAccess;
}ATOM_TABLE_ATTRIBUTE_ACCESS;
/****************************************************************************/
// Common header for all command tables.
// Every table pointed by _ATOM_MASTER_COMMAND_TABLE has this common header.
// And the pointer actually points to this header.
/****************************************************************************/
typedef struct _ATOM_COMMON_ROM_COMMAND_TABLE_HEADER
{
ATOM_COMMON_TABLE_HEADER CommonHeader;
ATOM_TABLE_ATTRIBUTE TableAttribute;
}ATOM_COMMON_ROM_COMMAND_TABLE_HEADER;
/****************************************************************************/
// Structures used by ComputeMemoryEnginePLLTable
/****************************************************************************/
#define COMPUTE_MEMORY_PLL_PARAM 1
#define COMPUTE_ENGINE_PLL_PARAM 2
#define ADJUST_MC_SETTING_PARAM 3
/****************************************************************************/
// Structures used by AdjustMemoryControllerTable
/****************************************************************************/
typedef struct _ATOM_ADJUST_MEMORY_CLOCK_FREQ
{
#if ATOM_BIG_ENDIAN
ULONG ulPointerReturnFlag:1; // BYTE_3[7]=1 - Return the pointer to the right Data Block; BYTE_3[7]=0 - Program the right Data Block
ULONG ulMemoryModuleNumber:7; // BYTE_3[6:0]
ULONG ulClockFreq:24;
#else
ULONG ulClockFreq:24;
ULONG ulMemoryModuleNumber:7; // BYTE_3[6:0]
ULONG ulPointerReturnFlag:1; // BYTE_3[7]=1 - Return the pointer to the right Data Block; BYTE_3[7]=0 - Program the right Data Block
#endif
}ATOM_ADJUST_MEMORY_CLOCK_FREQ;
#define POINTER_RETURN_FLAG 0x80
typedef struct _COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS
{
ULONG ulClock; //When returned, it's the re-calculated clock based on the given Fb_div, Post_Div and ref_div
UCHAR ucAction; //0:reserved //1:Memory //2:Engine
UCHAR ucReserved; //may expand to return larger Fbdiv later
UCHAR ucFbDiv; //return value
UCHAR ucPostDiv; //return value
}COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS;
typedef struct _COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V2
{
ULONG ulClock; //On return, [23:0] holds the real clock
UCHAR ucAction; //0:reserved;COMPUTE_MEMORY_PLL_PARAM:Memory;COMPUTE_ENGINE_PLL_PARAM:Engine. It returns the ref_div to be written to the register
USHORT usFbDiv; //return Feedback value to be written to register
UCHAR ucPostDiv; //return post div to be written to register
}COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V2;
#define COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_PS_ALLOCATION COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS
#define SET_CLOCK_FREQ_MASK 0x00FFFFFF //Clock change tables only take bit [23:0] as the requested clock value
#define USE_NON_BUS_CLOCK_MASK 0x01000000 //Applicable to both memory and engine clock change, when set, it uses another clock as the temporary clock (engine uses memory and vice versa)
#define USE_MEMORY_SELF_REFRESH_MASK 0x02000000 //Only applicable to memory clock change, when set, using memory self refresh during clock transition
#define SKIP_INTERNAL_MEMORY_PARAMETER_CHANGE 0x04000000 //Only applicable to memory clock change, when set, the table will skip predefined internal memory parameter change
#define FIRST_TIME_CHANGE_CLOCK 0x08000000 //Applicable to both memory and engine clock change,when set, it means this is 1st time to change clock after ASIC bootup
#define SKIP_SW_PROGRAM_PLL 0x10000000 //Applicable to both memory and engine clock change, when set, it means the table will not program SPLL/MPLL
#define USE_SS_ENABLED_PIXEL_CLOCK USE_NON_BUS_CLOCK_MASK
#define b3USE_NON_BUS_CLOCK_MASK 0x01 //Applicable to both memory and engine clock change, when set, it uses another clock as the temporary clock (engine uses memory and vice versa)
#define b3USE_MEMORY_SELF_REFRESH 0x02 //Only applicable to memory clock change, when set, using memory self refresh during clock transition
#define b3SKIP_INTERNAL_MEMORY_PARAMETER_CHANGE 0x04 //Only applicable to memory clock change, when set, the table will skip predefined internal memory parameter change
#define b3FIRST_TIME_CHANGE_CLOCK 0x08 //Applicable to both memory and engine clock change,when set, it means this is 1st time to change clock after ASIC bootup
#define b3SKIP_SW_PROGRAM_PLL 0x10 //Applicable to both memory and engine clock change, when set, it means the table will not program SPLL/MPLL
typedef struct _ATOM_COMPUTE_CLOCK_FREQ
{
#if ATOM_BIG_ENDIAN
ULONG ulComputeClockFlag:8; // =1: COMPUTE_MEMORY_PLL_PARAM, =2: COMPUTE_ENGINE_PLL_PARAM
ULONG ulClockFreq:24; // in unit of 10kHz
#else
ULONG ulClockFreq:24; // in unit of 10kHz
ULONG ulComputeClockFlag:8; // =1: COMPUTE_MEMORY_PLL_PARAM, =2: COMPUTE_ENGINE_PLL_PARAM
#endif
}ATOM_COMPUTE_CLOCK_FREQ;
typedef struct _ATOM_S_MPLL_FB_DIVIDER
{
USHORT usFbDivFrac;
USHORT usFbDiv;
}ATOM_S_MPLL_FB_DIVIDER;
typedef struct _COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V3
{
union
{
ATOM_COMPUTE_CLOCK_FREQ ulClock; //Input Parameter
ATOM_S_MPLL_FB_DIVIDER ulFbDiv; //Output Parameter
};
UCHAR ucRefDiv; //Output Parameter
UCHAR ucPostDiv; //Output Parameter
UCHAR ucCntlFlag; //Output Parameter
UCHAR ucReserved;
}COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V3;
// ucCntlFlag
#define ATOM_PLL_CNTL_FLAG_PLL_POST_DIV_EN 1
#define ATOM_PLL_CNTL_FLAG_MPLL_VCO_MODE 2
#define ATOM_PLL_CNTL_FLAG_FRACTION_DISABLE 4
#define ATOM_PLL_CNTL_FLAG_SPLL_ISPARE_9 8
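// --- Illustrative example (editor's sketch, not part of the original header) ---
// The V3 parameter block is an in/out union: the caller fills ulClock before
// executing the command table, and the same storage holds the feedback divider
// on return. The function name and the target value are hypothetical.
static void example_PrepareComputePllV3(COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V3 *p,
                                        ULONG ulTarget10kHz)
{
    p->ulClock.ulClockFreq        = ulTarget10kHz & 0x00FFFFFF; // 24-bit clock, 10 kHz units
    p->ulClock.ulComputeClockFlag = 2;  // =2: COMPUTE_ENGINE_PLL_PARAM, per the comment above
    // After the table runs, p->ulFbDiv.usFbDiv/usFbDivFrac, p->ucRefDiv,
    // p->ucPostDiv and p->ucCntlFlag carry the computed results.
}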
// V4 is only used for APUs whose PLL is outside the GPU
typedef struct _COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V4
{
#if ATOM_BIG_ENDIAN
ULONG ucPostDiv:8; //return parameter: post divider which is programmed to the register directly
ULONG ulClock:24; //Input= target clock, output = actual clock
#else
ULONG ulClock:24; //Input= target clock, output = actual clock
ULONG ucPostDiv:8; //return parameter: post divider which is programmed to the register directly
#endif
}COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V4;
typedef struct _COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V5
{
union
{
ATOM_COMPUTE_CLOCK_FREQ ulClock; //Input Parameter
ATOM_S_MPLL_FB_DIVIDER ulFbDiv; //Output Parameter
};
UCHAR ucRefDiv; //Output Parameter
UCHAR ucPostDiv; //Output Parameter
union
{
UCHAR ucCntlFlag; //Output Flags
UCHAR ucInputFlag; //Input Flags. ucInputFlag[0] - Strobe(1)/Performance(0) mode
};
UCHAR ucReserved;
}COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V5;
// ucInputFlag
#define ATOM_PLL_INPUT_FLAG_PLL_STROBE_MODE_EN 1 // 1-StrobeMode, 0-PerformanceMode
// use for ComputeMemoryClockParamTable
typedef struct _COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1
{
union
{
ULONG ulClock;
ATOM_S_MPLL_FB_DIVIDER ulFbDiv; //Output:UPPER_WORD=FB_DIV_INTEGER, LOWER_WORD=FB_DIV_FRAC shl (16-FB_FRACTION_BITS)
};
UCHAR ucDllSpeed; //Output
UCHAR ucPostDiv; //Output
union{
UCHAR ucInputFlag; //Input : ATOM_PLL_INPUT_FLAG_PLL_STROBE_MODE_EN: 1-StrobeMode, 0-PerformanceMode
UCHAR ucPllCntlFlag; //Output:
};
UCHAR ucBWCntl;
}COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1;
// definition of ucInputFlag
#define MPLL_INPUT_FLAG_STROBE_MODE_EN 0x01
// definition of ucPllCntlFlag
#define MPLL_CNTL_FLAG_VCO_MODE_MASK 0x03
#define MPLL_CNTL_FLAG_BYPASS_DQ_PLL 0x04
#define MPLL_CNTL_FLAG_QDR_ENABLE 0x08
#define MPLL_CNTL_FLAG_AD_HALF_RATE 0x10
//MPLL_CNTL_FLAG_BYPASS_AD_PLL is misnamed; it should be BYPASS_DQ_PLL
#define MPLL_CNTL_FLAG_BYPASS_AD_PLL 0x04
typedef struct _DYNAMICE_MEMORY_SETTINGS_PARAMETER
{
ATOM_COMPUTE_CLOCK_FREQ ulClock;
ULONG ulReserved[2];
}DYNAMICE_MEMORY_SETTINGS_PARAMETER;
typedef struct _DYNAMICE_ENGINE_SETTINGS_PARAMETER
{
ATOM_COMPUTE_CLOCK_FREQ ulClock;
ULONG ulMemoryClock;
ULONG ulReserved;
}DYNAMICE_ENGINE_SETTINGS_PARAMETER;
/****************************************************************************/
// Structures used by SetEngineClockTable
/****************************************************************************/
typedef struct _SET_ENGINE_CLOCK_PARAMETERS
{
ULONG ulTargetEngineClock; //In 10Khz unit
}SET_ENGINE_CLOCK_PARAMETERS;
typedef struct _SET_ENGINE_CLOCK_PS_ALLOCATION
{
ULONG ulTargetEngineClock; //In 10Khz unit
COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_PS_ALLOCATION sReserved;
}SET_ENGINE_CLOCK_PS_ALLOCATION;
/****************************************************************************/
// Structures used by SetMemoryClockTable
/****************************************************************************/
typedef struct _SET_MEMORY_CLOCK_PARAMETERS
{
ULONG ulTargetMemoryClock; //In 10Khz unit
}SET_MEMORY_CLOCK_PARAMETERS;
typedef struct _SET_MEMORY_CLOCK_PS_ALLOCATION
{
ULONG ulTargetMemoryClock; //In 10Khz unit
COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_PS_ALLOCATION sReserved;
}SET_MEMORY_CLOCK_PS_ALLOCATION;
/****************************************************************************/
// Structures used by ASIC_Init.ctb
/****************************************************************************/
typedef struct _ASIC_INIT_PARAMETERS
{
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
}ASIC_INIT_PARAMETERS;
typedef struct _ASIC_INIT_PS_ALLOCATION
{
ASIC_INIT_PARAMETERS sASICInitClocks;
SET_ENGINE_CLOCK_PS_ALLOCATION sReserved; //Caller doesn't need to init this structure
}ASIC_INIT_PS_ALLOCATION;
/****************************************************************************/
// Structure used by DynamicClockGatingTable.ctb
/****************************************************************************/
typedef struct _DYNAMIC_CLOCK_GATING_PARAMETERS
{
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucPadding[3];
}DYNAMIC_CLOCK_GATING_PARAMETERS;
#define DYNAMIC_CLOCK_GATING_PS_ALLOCATION DYNAMIC_CLOCK_GATING_PARAMETERS
/****************************************************************************/
// Structure used by EnableDispPowerGatingTable.ctb
/****************************************************************************/
typedef struct _ENABLE_DISP_POWER_GATING_PARAMETERS_V2_1
{
UCHAR ucDispPipeId; // ATOM_CRTC1, ATOM_CRTC2, ...
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucPadding[2];
}ENABLE_DISP_POWER_GATING_PARAMETERS_V2_1;
/****************************************************************************/
// Structure used by EnableASIC_StaticPwrMgtTable.ctb
/****************************************************************************/
typedef struct _ENABLE_ASIC_STATIC_PWR_MGT_PARAMETERS
{
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucPadding[3];
}ENABLE_ASIC_STATIC_PWR_MGT_PARAMETERS;
#define ENABLE_ASIC_STATIC_PWR_MGT_PS_ALLOCATION ENABLE_ASIC_STATIC_PWR_MGT_PARAMETERS
/****************************************************************************/
// Structures used by DAC_LoadDetectionTable.ctb
/****************************************************************************/
typedef struct _DAC_LOAD_DETECTION_PARAMETERS
{
USHORT usDeviceID; //{ATOM_DEVICE_CRTx_SUPPORT,ATOM_DEVICE_TVx_SUPPORT,ATOM_DEVICE_CVx_SUPPORT}
UCHAR ucDacType; //{ATOM_DAC_A,ATOM_DAC_B, ATOM_EXT_DAC}
UCHAR ucMisc; //Valid only when table revision =1.3 and above
}DAC_LOAD_DETECTION_PARAMETERS;
// DAC_LOAD_DETECTION_PARAMETERS.ucMisc
#define DAC_LOAD_MISC_YPrPb 0x01
typedef struct _DAC_LOAD_DETECTION_PS_ALLOCATION
{
DAC_LOAD_DETECTION_PARAMETERS sDacload;
ULONG Reserved[2];// Don't set this one, allocation for EXT DAC
}DAC_LOAD_DETECTION_PS_ALLOCATION;
/****************************************************************************/
// Structures used by DAC1EncoderControlTable.ctb and DAC2EncoderControlTable.ctb
/****************************************************************************/
typedef struct _DAC_ENCODER_CONTROL_PARAMETERS
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
UCHAR ucDacStandard; // See definition of ATOM_DACx_xxx; for DCE3.0, bit 7 is used as an internal flag to indicate DAC2 (==1) or DAC1 (==0)
UCHAR ucAction; // 0: turn off encoder
// 1: setup and turn on encoder
// 7: ATOM_ENCODER_INIT Initialize DAC
}DAC_ENCODER_CONTROL_PARAMETERS;
#define DAC_ENCODER_CONTROL_PS_ALLOCATION DAC_ENCODER_CONTROL_PARAMETERS
/****************************************************************************/
// Structures used by DIG1EncoderControlTable
// DIG2EncoderControlTable
// ExternalEncoderControlTable
/****************************************************************************/
typedef struct _DIG_ENCODER_CONTROL_PARAMETERS
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
UCHAR ucConfig;
// [2] Link Select:
// =0: PHY linkA if bfLanes<3
// =1: PHY linkB if bfLanes<3
// =0: PHY linkA+B if bfLanes=3
// [3] Transmitter Sel
// =0: UNIPHY or PCIEPHY
// =1: LVTMA
UCHAR ucAction; // =0: turn off encoder
// =1: turn on encoder
UCHAR ucEncoderMode;
// =0: DP encoder
// =1: LVDS encoder
// =2: DVI encoder
// =3: HDMI encoder
// =4: SDVO encoder
UCHAR ucLaneNum; // how many lanes to enable
UCHAR ucReserved[2];
}DIG_ENCODER_CONTROL_PARAMETERS;
#define DIG_ENCODER_CONTROL_PS_ALLOCATION DIG_ENCODER_CONTROL_PARAMETERS
#define EXTERNAL_ENCODER_CONTROL_PARAMETER DIG_ENCODER_CONTROL_PARAMETERS
//ucConfig
#define ATOM_ENCODER_CONFIG_DPLINKRATE_MASK 0x01
#define ATOM_ENCODER_CONFIG_DPLINKRATE_1_62GHZ 0x00
#define ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ 0x01
#define ATOM_ENCODER_CONFIG_DPLINKRATE_5_40GHZ 0x02
#define ATOM_ENCODER_CONFIG_LINK_SEL_MASK 0x04
#define ATOM_ENCODER_CONFIG_LINKA 0x00
#define ATOM_ENCODER_CONFIG_LINKB 0x04
#define ATOM_ENCODER_CONFIG_LINKA_B ATOM_TRANSMITTER_CONFIG_LINKA
#define ATOM_ENCODER_CONFIG_LINKB_A ATOM_ENCODER_CONFIG_LINKB
#define ATOM_ENCODER_CONFIG_TRANSMITTER_SEL_MASK 0x08
#define ATOM_ENCODER_CONFIG_UNIPHY 0x00
#define ATOM_ENCODER_CONFIG_LVTMA 0x08
#define ATOM_ENCODER_CONFIG_TRANSMITTER1 0x00
#define ATOM_ENCODER_CONFIG_TRANSMITTER2 0x08
#define ATOM_ENCODER_CONFIG_DIGB 0x80 // VBIOS Internal use, outside SW should set this bit=0
// ucAction
// ATOM_ENABLE: Enable Encoder
// ATOM_DISABLE: Disable Encoder
//ucEncoderMode
#define ATOM_ENCODER_MODE_DP 0
#define ATOM_ENCODER_MODE_LVDS 1
#define ATOM_ENCODER_MODE_DVI 2
#define ATOM_ENCODER_MODE_HDMI 3
#define ATOM_ENCODER_MODE_SDVO 4
#define ATOM_ENCODER_MODE_DP_AUDIO 5
#define ATOM_ENCODER_MODE_TV 13
#define ATOM_ENCODER_MODE_CV 14
#define ATOM_ENCODER_MODE_CRT 15
#define ATOM_ENCODER_MODE_DVO 16
#define ATOM_ENCODER_MODE_DP_SST ATOM_ENCODER_MODE_DP // For DP1.2
#define ATOM_ENCODER_MODE_DP_MST 5 // For DP1.2
typedef struct _ATOM_DIG_ENCODER_CONFIG_V2
{
#if ATOM_BIG_ENDIAN
UCHAR ucReserved1:2;
UCHAR ucTransmitterSel:2; // =0: UniphyAB, =1: UniphyCD =2: UniphyEF
UCHAR ucLinkSel:1; // =0: linkA/C/E =1: linkB/D/F
UCHAR ucReserved:1;
UCHAR ucDPLinkRate:1; // =0: 1.62Ghz, =1: 2.7Ghz
#else
UCHAR ucDPLinkRate:1; // =0: 1.62Ghz, =1: 2.7Ghz
UCHAR ucReserved:1;
UCHAR ucLinkSel:1; // =0: linkA/C/E =1: linkB/D/F
UCHAR ucTransmitterSel:2; // =0: UniphyAB, =1: UniphyCD =2: UniphyEF
UCHAR ucReserved1:2;
#endif
}ATOM_DIG_ENCODER_CONFIG_V2;
typedef struct _DIG_ENCODER_CONTROL_PARAMETERS_V2
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
ATOM_DIG_ENCODER_CONFIG_V2 acConfig;
UCHAR ucAction;
UCHAR ucEncoderMode;
// =0: DP encoder
// =1: LVDS encoder
// =2: DVI encoder
// =3: HDMI encoder
// =4: SDVO encoder
UCHAR ucLaneNum; // how many lanes to enable
UCHAR ucStatus; // = DP_LINK_TRAINING_COMPLETE or DP_LINK_TRAINING_INCOMPLETE, only used by VBIOS with command ATOM_ENCODER_CMD_QUERY_DP_LINK_TRAINING_STATUS
UCHAR ucReserved;
}DIG_ENCODER_CONTROL_PARAMETERS_V2;
//ucConfig
#define ATOM_ENCODER_CONFIG_V2_DPLINKRATE_MASK 0x01
#define ATOM_ENCODER_CONFIG_V2_DPLINKRATE_1_62GHZ 0x00
#define ATOM_ENCODER_CONFIG_V2_DPLINKRATE_2_70GHZ 0x01
#define ATOM_ENCODER_CONFIG_V2_LINK_SEL_MASK 0x04
#define ATOM_ENCODER_CONFIG_V2_LINKA 0x00
#define ATOM_ENCODER_CONFIG_V2_LINKB 0x04
#define ATOM_ENCODER_CONFIG_V2_TRANSMITTER_SEL_MASK 0x18
#define ATOM_ENCODER_CONFIG_V2_TRANSMITTER1 0x00
#define ATOM_ENCODER_CONFIG_V2_TRANSMITTER2 0x08
#define ATOM_ENCODER_CONFIG_V2_TRANSMITTER3 0x10
// ucAction:
// ATOM_DISABLE
// ATOM_ENABLE
#define ATOM_ENCODER_CMD_DP_LINK_TRAINING_START 0x08
#define ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN1 0x09
#define ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN2 0x0a
#define ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN3 0x13
#define ATOM_ENCODER_CMD_DP_LINK_TRAINING_COMPLETE 0x0b
#define ATOM_ENCODER_CMD_DP_VIDEO_OFF 0x0c
#define ATOM_ENCODER_CMD_DP_VIDEO_ON 0x0d
#define ATOM_ENCODER_CMD_QUERY_DP_LINK_TRAINING_STATUS 0x0e
#define ATOM_ENCODER_CMD_SETUP 0x0f
#define ATOM_ENCODER_CMD_SETUP_PANEL_MODE 0x10
// ucStatus
#define ATOM_ENCODER_STATUS_LINK_TRAINING_COMPLETE 0x10
#define ATOM_ENCODER_STATUS_LINK_TRAINING_INCOMPLETE 0x00
//ucTableFormatRevision=1
//ucTableContentRevision=3
// For the following table, the driver uses the ENABLE sub-function when TMDS/HDMI/LVDS is used, and the DISABLE sub-function to turn the encoder off
typedef struct _ATOM_DIG_ENCODER_CONFIG_V3
{
#if ATOM_BIG_ENDIAN
UCHAR ucReserved1:1;
UCHAR ucDigSel:3; // =0/1/2/3/4/5: DIG0/1/2/3/4/5 (in the register spec also referred to as DIGA/B/C/D/E/F)
UCHAR ucReserved:3;
UCHAR ucDPLinkRate:1; // =0: 1.62Ghz, =1: 2.7Ghz
#else
UCHAR ucDPLinkRate:1; // =0: 1.62Ghz, =1: 2.7Ghz
UCHAR ucReserved:3;
UCHAR ucDigSel:3; // =0/1/2/3/4/5: DIG0/1/2/3/4/5 (in the register spec also referred to as DIGA/B/C/D/E/F)
UCHAR ucReserved1:1;
#endif
}ATOM_DIG_ENCODER_CONFIG_V3;
#define ATOM_ENCODER_CONFIG_V3_DPLINKRATE_MASK 0x03
#define ATOM_ENCODER_CONFIG_V3_DPLINKRATE_1_62GHZ 0x00
#define ATOM_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ 0x01
#define ATOM_ENCODER_CONFIG_V3_ENCODER_SEL 0x70
#define ATOM_ENCODER_CONFIG_V3_DIG0_ENCODER 0x00
#define ATOM_ENCODER_CONFIG_V3_DIG1_ENCODER 0x10
#define ATOM_ENCODER_CONFIG_V3_DIG2_ENCODER 0x20
#define ATOM_ENCODER_CONFIG_V3_DIG3_ENCODER 0x30
#define ATOM_ENCODER_CONFIG_V3_DIG4_ENCODER 0x40
#define ATOM_ENCODER_CONFIG_V3_DIG5_ENCODER 0x50
typedef struct _DIG_ENCODER_CONTROL_PARAMETERS_V3
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
ATOM_DIG_ENCODER_CONFIG_V3 acConfig;
UCHAR ucAction;
union {
UCHAR ucEncoderMode;
// =0: DP encoder
// =1: LVDS encoder
// =2: DVI encoder
// =3: HDMI encoder
// =4: SDVO encoder
// =5: DP audio
UCHAR ucPanelMode; // only valid when ucAction == ATOM_ENCODER_CMD_SETUP_PANEL_MODE
// =0: external DP
// =1: internal DP2
// =0x11: internal DP1 for NutMeg/Travis DP translator
};
UCHAR ucLaneNum; // how many lanes to enable
UCHAR ucBitPerColor; // only valid for DP mode when ucAction = ATOM_ENCODER_CMD_SETUP
UCHAR ucReserved;
}DIG_ENCODER_CONTROL_PARAMETERS_V3;
//ucTableFormatRevision=1
//ucTableContentRevision=4
// start from NI
// For the following table, the driver uses the ENABLE sub-function when TMDS/HDMI/LVDS is used, and the DISABLE sub-function to turn the encoder off
typedef struct _ATOM_DIG_ENCODER_CONFIG_V4
{
#if ATOM_BIG_ENDIAN
UCHAR ucReserved1:1;
UCHAR ucDigSel:3; // =0/1/2/3/4/5: DIG0/1/2/3/4/5 (in the register spec also referred to as DIGA/B/C/D/E/F)
UCHAR ucReserved:2;
UCHAR ucDPLinkRate:2; // =0: 1.62Ghz, =1: 2.7Ghz, =2: 5.4Ghz <= changed compared to the previous version
#else
UCHAR ucDPLinkRate:2; // =0: 1.62Ghz, =1: 2.7Ghz, =2: 5.4Ghz <= changed compared to the previous version
UCHAR ucReserved:2;
UCHAR ucDigSel:3; // =0/1/2/3/4/5: DIG0/1/2/3/4/5 (in the register spec also referred to as DIGA/B/C/D/E/F)
UCHAR ucReserved1:1;
#endif
}ATOM_DIG_ENCODER_CONFIG_V4;
#define ATOM_ENCODER_CONFIG_V4_DPLINKRATE_MASK 0x03
#define ATOM_ENCODER_CONFIG_V4_DPLINKRATE_1_62GHZ 0x00
#define ATOM_ENCODER_CONFIG_V4_DPLINKRATE_2_70GHZ 0x01
#define ATOM_ENCODER_CONFIG_V4_DPLINKRATE_5_40GHZ 0x02
#define ATOM_ENCODER_CONFIG_V4_DPLINKRATE_3_24GHZ 0x03
#define ATOM_ENCODER_CONFIG_V4_ENCODER_SEL 0x70
#define ATOM_ENCODER_CONFIG_V4_DIG0_ENCODER 0x00
#define ATOM_ENCODER_CONFIG_V4_DIG1_ENCODER 0x10
#define ATOM_ENCODER_CONFIG_V4_DIG2_ENCODER 0x20
#define ATOM_ENCODER_CONFIG_V4_DIG3_ENCODER 0x30
#define ATOM_ENCODER_CONFIG_V4_DIG4_ENCODER 0x40
#define ATOM_ENCODER_CONFIG_V4_DIG5_ENCODER 0x50
#define ATOM_ENCODER_CONFIG_V4_DIG6_ENCODER 0x60
typedef struct _DIG_ENCODER_CONTROL_PARAMETERS_V4
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
union{
ATOM_DIG_ENCODER_CONFIG_V4 acConfig;
UCHAR ucConfig;
};
UCHAR ucAction;
union {
UCHAR ucEncoderMode;
// =0: DP encoder
// =1: LVDS encoder
// =2: DVI encoder
// =3: HDMI encoder
// =4: SDVO encoder
// =5: DP audio
UCHAR ucPanelMode; // only valid when ucAction == ATOM_ENCODER_CMD_SETUP_PANEL_MODE
// =0: external DP
// =1: internal DP2
// =0x11: internal DP1 for NutMeg/Travis DP translator
};
UCHAR ucLaneNum; // how many lanes to enable
UCHAR ucBitPerColor; // only valid for DP mode when ucAction = ATOM_ENCODER_CMD_SETUP
UCHAR ucHPD_ID; // HPD ID (1-6). =0 means skip HPD programming. New compared to the previous version
}DIG_ENCODER_CONTROL_PARAMETERS_V4;
// define ucBitPerColor:
#define PANEL_BPC_UNDEFINE 0x00
#define PANEL_6BIT_PER_COLOR 0x01
#define PANEL_8BIT_PER_COLOR 0x02
#define PANEL_10BIT_PER_COLOR 0x03
#define PANEL_12BIT_PER_COLOR 0x04
#define PANEL_16BIT_PER_COLOR 0x05
//define ucPanelMode
#define DP_PANEL_MODE_EXTERNAL_DP_MODE 0x00
#define DP_PANEL_MODE_INTERNAL_DP2_MODE 0x01
#define DP_PANEL_MODE_INTERNAL_DP1_MODE 0x11
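// --- Illustrative example (editor's sketch, not part of the original header) ---
// Filling the V4 encoder-control block for a 4-lane DP setup on DIG1 at the
// 2.70 GHz link rate, using only macros defined above. The function name and
// the 162.00 MHz pixel clock are hypothetical placeholders.
static void example_SetupDigEncoderV4(DIG_ENCODER_CONTROL_PARAMETERS_V4 *p)
{
    p->usPixelClock  = 16200;                                    // 162.00 MHz in 10 kHz units
    p->ucConfig      = ATOM_ENCODER_CONFIG_V4_DIG1_ENCODER |
                       ATOM_ENCODER_CONFIG_V4_DPLINKRATE_2_70GHZ;
    p->ucAction      = ATOM_ENCODER_CMD_SETUP;
    p->ucEncoderMode = ATOM_ENCODER_MODE_DP;
    p->ucLaneNum     = 4;
    p->ucBitPerColor = PANEL_8BIT_PER_COLOR;
    p->ucHPD_ID      = 1;                                        // HPD1; =0 would skip HPD programming
}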
/****************************************************************************/
// Structures used by UNIPHYTransmitterControlTable
// LVTMATransmitterControlTable
// DVOOutputControlTable
/****************************************************************************/
typedef struct _ATOM_DP_VS_MODE
{
UCHAR ucLaneSel;
UCHAR ucLaneSet;
}ATOM_DP_VS_MODE;
typedef struct _DIG_TRANSMITTER_CONTROL_PARAMETERS
{
union
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
USHORT usInitInfo; // when initializing UNIPHY, the lower 8 bits hold the connector type defined in objectid.h
ATOM_DP_VS_MODE asMode; // DP Voltage swing mode
};
UCHAR ucConfig;
// [0]=0: 4 lane Link,
// =1: 8 lane Link ( Dual Links TMDS )
// [1]=0: InCoherent mode
// =1: Coherent Mode
// [2] Link Select:
// =0: PHY linkA if bfLane<3
// =1: PHY linkB if bfLanes<3
// =0: PHY linkA+B if bfLanes=3
// [5:4]PCIE lane Sel
// =0: lane 0~3 or 0~7
// =1: lane 4~7
// =2: lane 8~11 or 8~15
// =3: lane 12~15
UCHAR ucAction; // =0: turn off encoder
// =1: turn on encoder
UCHAR ucReserved[4];
}DIG_TRANSMITTER_CONTROL_PARAMETERS;
#define DIG_TRANSMITTER_CONTROL_PS_ALLOCATION DIG_TRANSMITTER_CONTROL_PARAMETERS
//ucInitInfo
#define ATOM_TRAMITTER_INITINFO_CONNECTOR_MASK 0x00ff
//ucConfig
#define ATOM_TRANSMITTER_CONFIG_8LANE_LINK 0x01
#define ATOM_TRANSMITTER_CONFIG_COHERENT 0x02
#define ATOM_TRANSMITTER_CONFIG_LINK_SEL_MASK 0x04
#define ATOM_TRANSMITTER_CONFIG_LINKA 0x00
#define ATOM_TRANSMITTER_CONFIG_LINKB 0x04
#define ATOM_TRANSMITTER_CONFIG_LINKA_B 0x00
#define ATOM_TRANSMITTER_CONFIG_LINKB_A 0x04
#define ATOM_TRANSMITTER_CONFIG_ENCODER_SEL_MASK 0x08 // only used when ATOM_TRANSMITTER_ACTION_ENABLE
#define ATOM_TRANSMITTER_CONFIG_DIG1_ENCODER 0x00 // only used when ATOM_TRANSMITTER_ACTION_ENABLE
#define ATOM_TRANSMITTER_CONFIG_DIG2_ENCODER 0x08 // only used when ATOM_TRANSMITTER_ACTION_ENABLE
#define ATOM_TRANSMITTER_CONFIG_CLKSRC_MASK 0x30
#define ATOM_TRANSMITTER_CONFIG_CLKSRC_PPLL 0x00
#define ATOM_TRANSMITTER_CONFIG_CLKSRC_PCIE 0x20
#define ATOM_TRANSMITTER_CONFIG_CLKSRC_XTALIN 0x30
#define ATOM_TRANSMITTER_CONFIG_LANE_SEL_MASK 0xc0
#define ATOM_TRANSMITTER_CONFIG_LANE_0_3 0x00
#define ATOM_TRANSMITTER_CONFIG_LANE_0_7 0x00
#define ATOM_TRANSMITTER_CONFIG_LANE_4_7 0x40
#define ATOM_TRANSMITTER_CONFIG_LANE_8_11 0x80
#define ATOM_TRANSMITTER_CONFIG_LANE_8_15 0x80
#define ATOM_TRANSMITTER_CONFIG_LANE_12_15 0xc0
//ucAction
#define ATOM_TRANSMITTER_ACTION_DISABLE 0
#define ATOM_TRANSMITTER_ACTION_ENABLE 1
#define ATOM_TRANSMITTER_ACTION_LCD_BLOFF 2
#define ATOM_TRANSMITTER_ACTION_LCD_BLON 3
#define ATOM_TRANSMITTER_ACTION_BL_BRIGHTNESS_CONTROL 4
#define ATOM_TRANSMITTER_ACTION_LCD_SELFTEST_START 5
#define ATOM_TRANSMITTER_ACTION_LCD_SELFTEST_STOP 6
#define ATOM_TRANSMITTER_ACTION_INIT 7
#define ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT 8
#define ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT 9
#define ATOM_TRANSMITTER_ACTION_SETUP 10
#define ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH 11
#define ATOM_TRANSMITTER_ACTION_POWER_ON 12
#define ATOM_TRANSMITTER_ACTION_POWER_OFF 13
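// --- Illustrative example (editor's sketch, not part of the original header) ---
// Composing ucConfig for a coherent dual-link TMDS enable on link A, routed to
// the DIG1 encoder with the PPLL as clock source. The function name is
// hypothetical; all macros are defined above.
static UCHAR example_BuildTransmitterConfig(void)
{
    return ATOM_TRANSMITTER_CONFIG_8LANE_LINK |   // dual-link TMDS drives 8 lanes
           ATOM_TRANSMITTER_CONFIG_COHERENT |
           ATOM_TRANSMITTER_CONFIG_LINKA |
           ATOM_TRANSMITTER_CONFIG_DIG1_ENCODER |
           ATOM_TRANSMITTER_CONFIG_CLKSRC_PPLL;
}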
// Following are used for DigTransmitterControlTable ver1.2
typedef struct _ATOM_DIG_TRANSMITTER_CONFIG_V2
{
#if ATOM_BIG_ENDIAN
UCHAR ucTransmitterSel:2; //bit7:6: =0 Dig Transmitter 1 ( Uniphy AB )
// =1 Dig Transmitter 2 ( Uniphy CD )
// =2 Dig Transmitter 3 ( Uniphy EF )
UCHAR ucReserved:1;
UCHAR fDPConnector:1; //bit4=0: DP connector =1: non-DP connector
UCHAR ucEncoderSel:1; //bit3=0: Data/Clk path source from DIGA( DIG inst0 ). =1: Data/clk path source from DIGB ( DIG inst1 )
UCHAR ucLinkSel:1; //bit2=0: Uniphy LINKA or C or E when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is A or C or E
// =1: Uniphy LINKB or D or F when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is B or D or F
UCHAR fCoherentMode:1; //bit1=1: Coherent Mode ( for DVI/HDMI mode )
UCHAR fDualLinkConnector:1; //bit0=1: Dual Link DVI connector
#else
UCHAR fDualLinkConnector:1; //bit0=1: Dual Link DVI connector
UCHAR fCoherentMode:1; //bit1=1: Coherent Mode ( for DVI/HDMI mode )
UCHAR ucLinkSel:1; //bit2=0: Uniphy LINKA or C or E when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is A or C or E
// =1: Uniphy LINKB or D or F when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is B or D or F
UCHAR ucEncoderSel:1; //bit3=0: Data/Clk path source from DIGA( DIG inst0 ). =1: Data/clk path source from DIGB ( DIG inst1 )
UCHAR fDPConnector:1; //bit4=0: DP connector =1: non-DP connector
UCHAR ucReserved:1;
UCHAR ucTransmitterSel:2; //bit7:6: =0 Dig Transmitter 1 ( Uniphy AB )
// =1 Dig Transmitter 2 ( Uniphy CD )
// =2 Dig Transmitter 3 ( Uniphy EF )
#endif
}ATOM_DIG_TRANSMITTER_CONFIG_V2;
//ucConfig
//Bit0
#define ATOM_TRANSMITTER_CONFIG_V2_DUAL_LINK_CONNECTOR 0x01
//Bit1
#define ATOM_TRANSMITTER_CONFIG_V2_COHERENT 0x02
//Bit2
#define ATOM_TRANSMITTER_CONFIG_V2_LINK_SEL_MASK 0x04
#define ATOM_TRANSMITTER_CONFIG_V2_LINKA 0x00
#define ATOM_TRANSMITTER_CONFIG_V2_LINKB 0x04
// Bit3
#define ATOM_TRANSMITTER_CONFIG_V2_ENCODER_SEL_MASK 0x08
#define ATOM_TRANSMITTER_CONFIG_V2_DIG1_ENCODER 0x00 // only used when ucAction == ATOM_TRANSMITTER_ACTION_ENABLE or ATOM_TRANSMITTER_ACTION_SETUP
#define ATOM_TRANSMITTER_CONFIG_V2_DIG2_ENCODER 0x08 // only used when ucAction == ATOM_TRANSMITTER_ACTION_ENABLE or ATOM_TRANSMITTER_ACTION_SETUP
// Bit4
#define ATOM_TRASMITTER_CONFIG_V2_DP_CONNECTOR 0x10
// Bit7:6
#define ATOM_TRANSMITTER_CONFIG_V2_TRANSMITTER_SEL_MASK 0xC0
#define ATOM_TRANSMITTER_CONFIG_V2_TRANSMITTER1 0x00 //AB
#define ATOM_TRANSMITTER_CONFIG_V2_TRANSMITTER2 0x40 //CD
#define ATOM_TRANSMITTER_CONFIG_V2_TRANSMITTER3 0x80 //EF
typedef struct _DIG_TRANSMITTER_CONTROL_PARAMETERS_V2
{
union
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
USHORT usInitInfo; // when initializing UNIPHY, the lower 8 bits hold the connector type defined in objectid.h
ATOM_DP_VS_MODE asMode; // DP Voltage swing mode
};
ATOM_DIG_TRANSMITTER_CONFIG_V2 acConfig;
UCHAR ucAction; // defined as ATOM_TRANSMITTER_ACTION_XXX
UCHAR ucReserved[4];
}DIG_TRANSMITTER_CONTROL_PARAMETERS_V2;
typedef struct _ATOM_DIG_TRANSMITTER_CONFIG_V3
{
#if ATOM_BIG_ENDIAN
UCHAR ucTransmitterSel:2; //bit7:6: =0 Dig Transmitter 1 ( Uniphy AB )
// =1 Dig Transmitter 2 ( Uniphy CD )
// =2 Dig Transmitter 3 ( Uniphy EF )
UCHAR ucRefClkSource:2; //bit5:4: PPLL1 =0, PPLL2=1, EXT_CLK=2
UCHAR ucEncoderSel:1; //bit3=0: Data/Clk path source from DIGA/C/E. =1: Data/clk path source from DIGB/D/F
UCHAR ucLinkSel:1; //bit2=0: Uniphy LINKA or C or E when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is A or C or E
// =1: Uniphy LINKB or D or F when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is B or D or F
UCHAR fCoherentMode:1; //bit1=1: Coherent Mode ( for DVI/HDMI mode )
UCHAR fDualLinkConnector:1; //bit0=1: Dual Link DVI connector
#else
UCHAR fDualLinkConnector:1; //bit0=1: Dual Link DVI connector
UCHAR fCoherentMode:1; //bit1=1: Coherent Mode ( for DVI/HDMI mode )
UCHAR ucLinkSel:1; //bit2=0: Uniphy LINKA or C or E when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is A or C or E
// =1: Uniphy LINKB or D or F when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is B or D or F
UCHAR ucEncoderSel:1; //bit3=0: Data/Clk path source from DIGA/C/E. =1: Data/clk path source from DIGB/D/F
UCHAR ucRefClkSource:2; //bit5:4: PPLL1 =0, PPLL2=1, EXT_CLK=2
UCHAR ucTransmitterSel:2; //bit7:6: =0 Dig Transmitter 1 ( Uniphy AB )
// =1 Dig Transmitter 2 ( Uniphy CD )
// =2 Dig Transmitter 3 ( Uniphy EF )
#endif
}ATOM_DIG_TRANSMITTER_CONFIG_V3;
typedef struct _DIG_TRANSMITTER_CONTROL_PARAMETERS_V3
{
union
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
USHORT usInitInfo; // when initializing UNIPHY, the lower 8 bits hold the connector type defined in objectid.h
ATOM_DP_VS_MODE asMode; // DP Voltage swing mode
};
ATOM_DIG_TRANSMITTER_CONFIG_V3 acConfig;
UCHAR ucAction; // defined as ATOM_TRANSMITTER_ACTION_XXX
UCHAR ucLaneNum;
UCHAR ucReserved[3];
}DIG_TRANSMITTER_CONTROL_PARAMETERS_V3;
//ucConfig
//Bit0
#define ATOM_TRANSMITTER_CONFIG_V3_DUAL_LINK_CONNECTOR 0x01
//Bit1
#define ATOM_TRANSMITTER_CONFIG_V3_COHERENT 0x02
//Bit2
#define ATOM_TRANSMITTER_CONFIG_V3_LINK_SEL_MASK 0x04
#define ATOM_TRANSMITTER_CONFIG_V3_LINKA 0x00
#define ATOM_TRANSMITTER_CONFIG_V3_LINKB 0x04
// Bit3
#define ATOM_TRANSMITTER_CONFIG_V3_ENCODER_SEL_MASK 0x08
#define ATOM_TRANSMITTER_CONFIG_V3_DIG1_ENCODER 0x00
#define ATOM_TRANSMITTER_CONFIG_V3_DIG2_ENCODER 0x08
// Bit5:4
#define ATOM_TRASMITTER_CONFIG_V3_REFCLK_SEL_MASK 0x30
#define ATOM_TRASMITTER_CONFIG_V3_P1PLL 0x00
#define ATOM_TRASMITTER_CONFIG_V3_P2PLL 0x10
#define ATOM_TRASMITTER_CONFIG_V3_REFCLK_SRC_EXT 0x20
// Bit7:6
#define ATOM_TRANSMITTER_CONFIG_V3_TRANSMITTER_SEL_MASK 0xC0
#define ATOM_TRANSMITTER_CONFIG_V3_TRANSMITTER1 0x00 //AB
#define ATOM_TRANSMITTER_CONFIG_V3_TRANSMITTER2 0x40 //CD
#define ATOM_TRANSMITTER_CONFIG_V3_TRANSMITTER3 0x80 //EF
/****************************************************************************/
// Structures used by UNIPHYTransmitterControlTable V1.4
// ASIC Families: NI
// ucTableFormatRevision=1
// ucTableContentRevision=4
/****************************************************************************/
typedef struct _ATOM_DP_VS_MODE_V4
{
UCHAR ucLaneSel;
union
{
UCHAR ucLaneSet;
struct {
#if ATOM_BIG_ENDIAN
UCHAR ucPOST_CURSOR2:2; //Bit[7:6] Post Cursor2 Level <= New in V4
UCHAR ucPRE_EMPHASIS:3; //Bit[5:3] Pre-emphasis Level
UCHAR ucVOLTAGE_SWING:3; //Bit[2:0] Voltage Swing Level
#else
UCHAR ucVOLTAGE_SWING:3; //Bit[2:0] Voltage Swing Level
UCHAR ucPRE_EMPHASIS:3; //Bit[5:3] Pre-emphasis Level
UCHAR ucPOST_CURSOR2:2; //Bit[7:6] Post Cursor2 Level <= New in V4
#endif
};
};
}ATOM_DP_VS_MODE_V4;
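// --- Illustrative example (editor's sketch, not part of the original header) ---
// ucLaneSet packs the DP drive settings as [2:0] voltage swing, [5:3]
// pre-emphasis and [7:6] post-cursor2, so the byte can also be built with
// plain shifts when the bit-field view is inconvenient. The helper name is
// hypothetical.
static UCHAR example_PackLaneSetV4(UCHAR ucSwing, UCHAR ucPreEmph, UCHAR ucPostCursor2)
{
    return (UCHAR)((ucSwing & 0x07) |
                   ((ucPreEmph & 0x07) << 3) |
                   ((ucPostCursor2 & 0x03) << 6));
}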
typedef struct _ATOM_DIG_TRANSMITTER_CONFIG_V4
{
#if ATOM_BIG_ENDIAN
UCHAR ucTransmitterSel:2; //bit7:6: =0 Dig Transmitter 1 ( Uniphy AB )
// =1 Dig Transmitter 2 ( Uniphy CD )
// =2 Dig Transmitter 3 ( Uniphy EF )
UCHAR ucRefClkSource:2; //bit5:4: PPLL1 =0, PPLL2=1, DCPLL=2, EXT_CLK=3 <= New
UCHAR ucEncoderSel:1; //bit3=0: Data/Clk path source from DIGA/C/E. =1: Data/clk path source from DIGB/D/F
UCHAR ucLinkSel:1; //bit2=0: Uniphy LINKA or C or E when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is A or C or E
// =1: Uniphy LINKB or D or F when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is B or D or F
UCHAR fCoherentMode:1; //bit1=1: Coherent Mode ( for DVI/HDMI mode )
UCHAR fDualLinkConnector:1; //bit0=1: Dual Link DVI connector
#else
UCHAR fDualLinkConnector:1; //bit0=1: Dual Link DVI connector
UCHAR fCoherentMode:1; //bit1=1: Coherent Mode ( for DVI/HDMI mode )
UCHAR ucLinkSel:1; //bit2=0: Uniphy LINKA or C or E when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is A or C or E
// =1: Uniphy LINKB or D or F when fDualLinkConnector=0. when fDualLinkConnector=1, it means master link of dual link is B or D or F
UCHAR ucEncoderSel:1; //bit3=0: Data/Clk path source from DIGA/C/E. =1: Data/clk path source from DIGB/D/F
UCHAR ucRefClkSource:2; //bit5:4: PPLL1 =0, PPLL2=1, DCPLL=2, EXT_CLK=3 <= New
UCHAR ucTransmitterSel:2; //bit7:6: =0 Dig Transmitter 1 ( Uniphy AB )
// =1 Dig Transmitter 2 ( Uniphy CD )
// =2 Dig Transmitter 3 ( Uniphy EF )
#endif
}ATOM_DIG_TRANSMITTER_CONFIG_V4;
typedef struct _DIG_TRANSMITTER_CONTROL_PARAMETERS_V4
{
union
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
USHORT usInitInfo; // when initializing UNIPHY, the lower 8 bits hold the connector type defined in objectid.h
ATOM_DP_VS_MODE_V4 asMode; // DP voltage swing mode; redefined compared to the previous version
};
union
{
ATOM_DIG_TRANSMITTER_CONFIG_V4 acConfig;
UCHAR ucConfig;
};
UCHAR ucAction; // defined as ATOM_TRANSMITTER_ACTION_XXX
UCHAR ucLaneNum;
UCHAR ucReserved[3];
}DIG_TRANSMITTER_CONTROL_PARAMETERS_V4;
//ucConfig
//Bit0
#define ATOM_TRANSMITTER_CONFIG_V4_DUAL_LINK_CONNECTOR 0x01
//Bit1
#define ATOM_TRANSMITTER_CONFIG_V4_COHERENT 0x02
//Bit2
#define ATOM_TRANSMITTER_CONFIG_V4_LINK_SEL_MASK 0x04
#define ATOM_TRANSMITTER_CONFIG_V4_LINKA 0x00
#define ATOM_TRANSMITTER_CONFIG_V4_LINKB 0x04
// Bit3
#define ATOM_TRANSMITTER_CONFIG_V4_ENCODER_SEL_MASK 0x08
#define ATOM_TRANSMITTER_CONFIG_V4_DIG1_ENCODER 0x00
#define ATOM_TRANSMITTER_CONFIG_V4_DIG2_ENCODER 0x08
// Bit5:4
#define ATOM_TRANSMITTER_CONFIG_V4_REFCLK_SEL_MASK 0x30
#define ATOM_TRANSMITTER_CONFIG_V4_P1PLL 0x00
#define ATOM_TRANSMITTER_CONFIG_V4_P2PLL 0x10
#define ATOM_TRANSMITTER_CONFIG_V4_DCPLL 0x20 // New in _V4
#define ATOM_TRANSMITTER_CONFIG_V4_REFCLK_SRC_EXT 0x30 // Changed comparing to V3
// Bit7:6
#define ATOM_TRANSMITTER_CONFIG_V4_TRANSMITTER_SEL_MASK 0xC0
#define ATOM_TRANSMITTER_CONFIG_V4_TRANSMITTER1 0x00 //AB
#define ATOM_TRANSMITTER_CONFIG_V4_TRANSMITTER2 0x40 //CD
#define ATOM_TRANSMITTER_CONFIG_V4_TRANSMITTER3 0x80 //EF
typedef struct _ATOM_DIG_TRANSMITTER_CONFIG_V5
{
#if ATOM_BIG_ENDIAN
UCHAR ucReservd1:1;
UCHAR ucHPDSel:3;
UCHAR ucPhyClkSrcId:2;
UCHAR ucCoherentMode:1;
UCHAR ucReserved:1;
#else
UCHAR ucReserved:1;
UCHAR ucCoherentMode:1;
UCHAR ucPhyClkSrcId:2;
UCHAR ucHPDSel:3;
UCHAR ucReservd1:1;
#endif
}ATOM_DIG_TRANSMITTER_CONFIG_V5;
typedef struct _DIG_TRANSMITTER_CONTROL_PARAMETERS_V1_5
{
USHORT usSymClock; // Encoder clock in 10kHz: (DP mode) = link clock/10, (TMDS/LVDS/HDMI) = pixel clock, (HDMI deep color) = pixel clock * deep_color_ratio
UCHAR ucPhyId; // 0=UNIPHYA, 1=UNIPHYB, 2=UNIPHYC, 3=UNIPHYD, 4=UNIPHYE, 5=UNIPHYF
UCHAR ucAction; // defined as ATOM_TRANSMITTER_ACTION_xxx
UCHAR ucLaneNum; // indicate lane number 1-8
UCHAR ucConnObjId; // Connector Object Id defined in ObjectId.h
UCHAR ucDigMode; // indicate DIG mode
union{
ATOM_DIG_TRANSMITTER_CONFIG_V5 asConfig;
UCHAR ucConfig;
};
UCHAR ucDigEncoderSel; // indicate DIG front end encoder
UCHAR ucDPLaneSet;
UCHAR ucReserved;
UCHAR ucReserved1;
}DIG_TRANSMITTER_CONTROL_PARAMETERS_V1_5;
//ucPhyId
#define ATOM_PHY_ID_UNIPHYA 0
#define ATOM_PHY_ID_UNIPHYB 1
#define ATOM_PHY_ID_UNIPHYC 2
#define ATOM_PHY_ID_UNIPHYD 3
#define ATOM_PHY_ID_UNIPHYE 4
#define ATOM_PHY_ID_UNIPHYF 5
#define ATOM_PHY_ID_UNIPHYG 6
// ucDigEncoderSel
#define ATOM_TRANMSITTER_V5__DIGA_SEL 0x01
#define ATOM_TRANMSITTER_V5__DIGB_SEL 0x02
#define ATOM_TRANMSITTER_V5__DIGC_SEL 0x04
#define ATOM_TRANMSITTER_V5__DIGD_SEL 0x08
#define ATOM_TRANMSITTER_V5__DIGE_SEL 0x10
#define ATOM_TRANMSITTER_V5__DIGF_SEL 0x20
#define ATOM_TRANMSITTER_V5__DIGG_SEL 0x40
// ucDigMode
#define ATOM_TRANSMITTER_DIGMODE_V5_DP 0
#define ATOM_TRANSMITTER_DIGMODE_V5_LVDS 1
#define ATOM_TRANSMITTER_DIGMODE_V5_DVI 2
#define ATOM_TRANSMITTER_DIGMODE_V5_HDMI 3
#define ATOM_TRANSMITTER_DIGMODE_V5_SDVO 4
#define ATOM_TRANSMITTER_DIGMODE_V5_DP_MST 5
// ucDPLaneSet
#define DP_LANE_SET__0DB_0_4V 0x00
#define DP_LANE_SET__0DB_0_6V 0x01
#define DP_LANE_SET__0DB_0_8V 0x02
#define DP_LANE_SET__0DB_1_2V 0x03
#define DP_LANE_SET__3_5DB_0_4V 0x08
#define DP_LANE_SET__3_5DB_0_6V 0x09
#define DP_LANE_SET__3_5DB_0_8V 0x0a
#define DP_LANE_SET__6DB_0_4V 0x10
#define DP_LANE_SET__6DB_0_6V 0x11
#define DP_LANE_SET__9_5DB_0_4V 0x18
// ATOM_DIG_TRANSMITTER_CONFIG_V5 asConfig;
// Bit1
#define ATOM_TRANSMITTER_CONFIG_V5_COHERENT 0x02
// Bit3:2
#define ATOM_TRANSMITTER_CONFIG_V5_REFCLK_SEL_MASK 0x0c
#define ATOM_TRANSMITTER_CONFIG_V5_REFCLK_SEL_SHIFT 0x02
#define ATOM_TRANSMITTER_CONFIG_V5_P1PLL 0x00
#define ATOM_TRANSMITTER_CONFIG_V5_P2PLL 0x04
#define ATOM_TRANSMITTER_CONFIG_V5_P0PLL 0x08
#define ATOM_TRANSMITTER_CONFIG_V5_REFCLK_SRC_EXT 0x0c
// Bit6:4
#define ATOM_TRANSMITTER_CONFIG_V5_HPD_SEL_MASK 0x70
#define ATOM_TRANSMITTER_CONFIG_V5_HPD_SEL_SHIFT 0x04
#define ATOM_TRANSMITTER_CONFIG_V5_NO_HPD_SEL 0x00
#define ATOM_TRANSMITTER_CONFIG_V5_HPD1_SEL 0x10
#define ATOM_TRANSMITTER_CONFIG_V5_HPD2_SEL 0x20
#define ATOM_TRANSMITTER_CONFIG_V5_HPD3_SEL 0x30
#define ATOM_TRANSMITTER_CONFIG_V5_HPD4_SEL 0x40
#define ATOM_TRANSMITTER_CONFIG_V5_HPD5_SEL 0x50
#define ATOM_TRANSMITTER_CONFIG_V5_HPD6_SEL 0x60
#define DIG_TRANSMITTER_CONTROL_PS_ALLOCATION_V1_5 DIG_TRANSMITTER_CONTROL_PARAMETERS_V1_5
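// --- Illustrative example (editor's sketch, not part of the original header) ---
// Composing the V1.5 ucConfig byte: coherent mode, P1PLL as the PHY clock
// source and HPD2 routed to the PHY, using the shifted-value macros above.
// The function name is hypothetical.
static UCHAR example_BuildTransmitterConfigV5(void)
{
    return ATOM_TRANSMITTER_CONFIG_V5_COHERENT |
           ATOM_TRANSMITTER_CONFIG_V5_P1PLL |
           ATOM_TRANSMITTER_CONFIG_V5_HPD2_SEL;
}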
/****************************************************************************/
// Structures used by ExternalEncoderControlTable V1.3
// ASIC Families: Evergreen, Llano, NI
// ucTableFormatRevision=1
// ucTableContentRevision=3
/****************************************************************************/
typedef struct _EXTERNAL_ENCODER_CONTROL_PARAMETERS_V3
{
union{
USHORT usPixelClock; // pixel clock in 10Khz, valid when ucAction=SETUP/ENABLE_OUTPUT
USHORT usConnectorId; // connector id, valid when ucAction = INIT
};
UCHAR ucConfig; // indicate which encoder, and DP link rate when ucAction = SETUP/ENABLE_OUTPUT
UCHAR ucAction; //
UCHAR ucEncoderMode; // encoder mode, only used when ucAction = SETUP/ENABLE_OUTPUT
UCHAR ucLaneNum; // lane number, only used when ucAction = SETUP/ENABLE_OUTPUT
UCHAR ucBitPerColor; // output bit per color, only valid when ucAction = SETUP/ENABLE_OUTPUT and ucEncodeMode= DP
UCHAR ucReserved;
}EXTERNAL_ENCODER_CONTROL_PARAMETERS_V3;
// ucAction
#define EXTERNAL_ENCODER_ACTION_V3_DISABLE_OUTPUT 0x00
#define EXTERNAL_ENCODER_ACTION_V3_ENABLE_OUTPUT 0x01
#define EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT 0x07
#define EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP 0x0f
#define EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING_OFF 0x10
#define EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING 0x11
#define EXTERNAL_ENCODER_ACTION_V3_DACLOAD_DETECTION 0x12
#define EXTERNAL_ENCODER_ACTION_V3_DDC_SETUP 0x14
// ucConfig
#define EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_MASK 0x03
#define EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_1_62GHZ 0x00
#define EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ 0x01
#define EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_5_40GHZ 0x02
#define EXTERNAL_ENCODER_CONFIG_V3_ENCODER_SEL_MASK 0x70
#define EXTERNAL_ENCODER_CONFIG_V3_ENCODER1 0x00
#define EXTERNAL_ENCODER_CONFIG_V3_ENCODER2 0x10
#define EXTERNAL_ENCODER_CONFIG_V3_ENCODER3 0x20
typedef struct _EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION_V3
{
EXTERNAL_ENCODER_CONTROL_PARAMETERS_V3 sExtEncoder;
ULONG ulReserved[2];
}EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION_V3;
/****************************************************************************/
// Structures used by DAC1OutputControlTable
// DAC2OutputControlTable
// LVTMAOutputControlTable (Before DCE30)
// TMDSAOutputControlTable (Before DCE30)
/****************************************************************************/
typedef struct _DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
{
UCHAR ucAction; // Possible input: ATOM_ENABLE || ATOM_DISABLE
// When the display is LCD, in addition to above:
// ATOM_LCD_BLOFF|| ATOM_LCD_BLON ||ATOM_LCD_BL_BRIGHTNESS_CONTROL||ATOM_LCD_SELFTEST_START||
// ATOM_LCD_SELFTEST_STOP
UCHAR aucPadding[3]; // padding to DWORD aligned
}DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS;
#define DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define CRT1_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define CRT1_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define CRT2_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define CRT2_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define CV1_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define CV1_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define TV1_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define TV1_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define DFP1_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define DFP1_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define DFP2_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define DFP2_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define LCD1_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define LCD1_OUTPUT_CONTROL_PS_ALLOCATION DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION
#define DVO_OUTPUT_CONTROL_PARAMETERS DISPLAY_DEVICE_OUTPUT_CONTROL_PARAMETERS
#define DVO_OUTPUT_CONTROL_PS_ALLOCATION DIG_TRANSMITTER_CONTROL_PS_ALLOCATION
#define DVO_OUTPUT_CONTROL_PARAMETERS_V3 DIG_TRANSMITTER_CONTROL_PARAMETERS
/****************************************************************************/
// Structures used by BlankCRTCTable
/****************************************************************************/
typedef struct _BLANK_CRTC_PARAMETERS
{
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucBlanking; // ATOM_BLANKING or ATOM_BLANKINGOFF
USHORT usBlackColorRCr;
USHORT usBlackColorGY;
USHORT usBlackColorBCb;
}BLANK_CRTC_PARAMETERS;
#define BLANK_CRTC_PS_ALLOCATION BLANK_CRTC_PARAMETERS
/****************************************************************************/
// Structures used by EnableCRTCTable
// EnableCRTCMemReqTable
// UpdateCRTC_DoubleBufferRegistersTable
/****************************************************************************/
typedef struct _ENABLE_CRTC_PARAMETERS
{
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucPadding[2];
}ENABLE_CRTC_PARAMETERS;
#define ENABLE_CRTC_PS_ALLOCATION ENABLE_CRTC_PARAMETERS
/****************************************************************************/
// Structures used by SetCRTC_OverScanTable
/****************************************************************************/
typedef struct _SET_CRTC_OVERSCAN_PARAMETERS
{
USHORT usOverscanRight; // right
USHORT usOverscanLeft; // left
USHORT usOverscanBottom; // bottom
USHORT usOverscanTop; // top
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucPadding[3];
}SET_CRTC_OVERSCAN_PARAMETERS;
#define SET_CRTC_OVERSCAN_PS_ALLOCATION SET_CRTC_OVERSCAN_PARAMETERS
/****************************************************************************/
// Structures used by SetCRTC_ReplicationTable
/****************************************************************************/
typedef struct _SET_CRTC_REPLICATION_PARAMETERS
{
UCHAR ucH_Replication; // horizontal replication
UCHAR ucV_Replication; // vertical replication
UCHAR usCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucPadding;
}SET_CRTC_REPLICATION_PARAMETERS;
#define SET_CRTC_REPLICATION_PS_ALLOCATION SET_CRTC_REPLICATION_PARAMETERS
/****************************************************************************/
// Structures used by SelectCRTC_SourceTable
/****************************************************************************/
typedef struct _SELECT_CRTC_SOURCE_PARAMETERS
{
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucDevice; // ATOM_DEVICE_CRT1|ATOM_DEVICE_CRT2|....
UCHAR ucPadding[2];
}SELECT_CRTC_SOURCE_PARAMETERS;
#define SELECT_CRTC_SOURCE_PS_ALLOCATION SELECT_CRTC_SOURCE_PARAMETERS
typedef struct _SELECT_CRTC_SOURCE_PARAMETERS_V2
{
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucEncoderID; // DAC1/DAC2/TVOUT/DIG1/DIG2/DVO
UCHAR ucEncodeMode; // Encoding mode, only valid when using DIG1/DIG2/DVO
UCHAR ucPadding;
}SELECT_CRTC_SOURCE_PARAMETERS_V2;
//ucEncoderID
//#define ASIC_INT_DAC1_ENCODER_ID 0x00
//#define ASIC_INT_TV_ENCODER_ID 0x02
//#define ASIC_INT_DIG1_ENCODER_ID 0x03
//#define ASIC_INT_DAC2_ENCODER_ID 0x04
//#define ASIC_EXT_TV_ENCODER_ID 0x06
//#define ASIC_INT_DVO_ENCODER_ID 0x07
//#define ASIC_INT_DIG2_ENCODER_ID 0x09
//#define ASIC_EXT_DIG_ENCODER_ID 0x05
//ucEncodeMode
//#define ATOM_ENCODER_MODE_DP 0
//#define ATOM_ENCODER_MODE_LVDS 1
//#define ATOM_ENCODER_MODE_DVI 2
//#define ATOM_ENCODER_MODE_HDMI 3
//#define ATOM_ENCODER_MODE_SDVO 4
//#define ATOM_ENCODER_MODE_TV 13
//#define ATOM_ENCODER_MODE_CV 14
//#define ATOM_ENCODER_MODE_CRT 15
/****************************************************************************/
// Structures used by SetPixelClockTable
// GetPixelClockTable
/****************************************************************************/
//Major revision=1, Minor revision=1
typedef struct _PIXEL_CLOCK_PARAMETERS
{
USHORT usPixelClock; // in 10kHz unit; for bios convenient = (RefClk*FB_Div)/(Ref_Div*Post_Div)
// 0 means disable PPLL
USHORT usRefDiv; // Reference divider
USHORT usFbDiv; // feedback divider
UCHAR ucPostDiv; // post divider
UCHAR ucFracFbDiv; // fractional feedback divider
UCHAR ucPpll; // ATOM_PPLL1 or ATOM_PPLL2
UCHAR ucRefDivSrc; // ATOM_PJITTER or ATOM_NONPJITTER
UCHAR ucCRTC; // Which CRTC uses this Ppll
UCHAR ucPadding;
}PIXEL_CLOCK_PARAMETERS;
//Major revision=1, Minor revision=2, add ucMiscInfo
//ucMiscInfo:
#define MISC_FORCE_REPROG_PIXEL_CLOCK 0x1
#define MISC_DEVICE_INDEX_MASK 0xF0
#define MISC_DEVICE_INDEX_SHIFT 4
typedef struct _PIXEL_CLOCK_PARAMETERS_V2
{
USHORT usPixelClock; // in 10kHz unit; for bios convenient = (RefClk*FB_Div)/(Ref_Div*Post_Div)
// 0 means disable PPLL
USHORT usRefDiv; // Reference divider
USHORT usFbDiv; // feedback divider
UCHAR ucPostDiv; // post divider
UCHAR ucFracFbDiv; // fractional feedback divider
UCHAR ucPpll; // ATOM_PPLL1 or ATOM_PPLL2
UCHAR ucRefDivSrc; // ATOM_PJITTER or ATOM_NONPJITTER
UCHAR ucCRTC; // Which CRTC uses this Ppll
UCHAR ucMiscInfo; // Different bits for different purpose, bit [7:4] as device index, bit[0]=Force prog
}PIXEL_CLOCK_PARAMETERS_V2;
//Major revision=1, Minor revision=3, structure/definition change
//ucEncoderMode:
//ATOM_ENCODER_MODE_DP
//ATOM_ENCODER_MODE_LVDS
//ATOM_ENCODER_MODE_DVI
//ATOM_ENCODER_MODE_HDMI
//ATOM_ENCODER_MODE_SDVO
//ATOM_ENCODER_MODE_TV 13
//ATOM_ENCODER_MODE_CV 14
//ATOM_ENCODER_MODE_CRT 15
//ucDVOConfig
//#define DVO_ENCODER_CONFIG_RATE_SEL 0x01
//#define DVO_ENCODER_CONFIG_DDR_SPEED 0x00
//#define DVO_ENCODER_CONFIG_SDR_SPEED 0x01
//#define DVO_ENCODER_CONFIG_OUTPUT_SEL 0x0c
//#define DVO_ENCODER_CONFIG_LOW12BIT 0x00
//#define DVO_ENCODER_CONFIG_UPPER12BIT 0x04
//#define DVO_ENCODER_CONFIG_24BIT 0x08
//ucMiscInfo: also changed, see below
#define PIXEL_CLOCK_MISC_FORCE_PROG_PPLL 0x01
#define PIXEL_CLOCK_MISC_VGA_MODE 0x02
#define PIXEL_CLOCK_MISC_CRTC_SEL_MASK 0x04
#define PIXEL_CLOCK_MISC_CRTC_SEL_CRTC1 0x00
#define PIXEL_CLOCK_MISC_CRTC_SEL_CRTC2 0x04
#define PIXEL_CLOCK_MISC_USE_ENGINE_FOR_DISPCLK 0x08
#define PIXEL_CLOCK_MISC_REF_DIV_SRC 0x10
// V1.4 for RoadRunner
#define PIXEL_CLOCK_V4_MISC_SS_ENABLE 0x10
#define PIXEL_CLOCK_V4_MISC_COHERENT_MODE 0x20
typedef struct _PIXEL_CLOCK_PARAMETERS_V3
{
USHORT usPixelClock; // in 10kHz unit; for bios convenient = (RefClk*FB_Div)/(Ref_Div*Post_Div)
// 0 means disable PPLL. For VGA PPLL, make sure this value is not 0.
USHORT usRefDiv; // Reference divider
USHORT usFbDiv; // feedback divider
UCHAR ucPostDiv; // post divider
UCHAR ucFracFbDiv; // fractional feedback divider
UCHAR ucPpll; // ATOM_PPLL1 or ATOM_PPLL2
UCHAR ucTransmitterId; // graphic encoder id defined in objectId.h
union
{
UCHAR ucEncoderMode; // encoder type defined as ATOM_ENCODER_MODE_DP/DVI/HDMI/
UCHAR ucDVOConfig; // when use DVO, need to know SDR/DDR, 12bit or 24bit
};
UCHAR ucMiscInfo; // bit[0]=Force program, bit[1]= set pclk for VGA, b[2]= CRTC sel
// bit[3]=0:use PPLL for dispclk source, =1: use engine clock for dispclock source
// bit[4]=0:use XTALIN as the source of reference divider,=1 use the pre-defined clock as the source of reference divider
}PIXEL_CLOCK_PARAMETERS_V3;
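// --- Illustrative example (editor's sketch, not part of the original header) ---
// Sanity-checking divider programming against the formula in the comment
// above: pixel clock (10 kHz units) = RefClk*FB_Div/(Ref_Div*Post_Div).
// ulRefClk10kHz would come from the firmware info table; the fractional
// feedback divider is omitted for brevity. The helper name is hypothetical.
static ULONG example_PixelClockFromDividers(ULONG ulRefClk10kHz,
                                            const PIXEL_CLOCK_PARAMETERS_V3 *p)
{
    return (ulRefClk10kHz * p->usFbDiv) / ((ULONG)p->usRefDiv * p->ucPostDiv);
}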
#define PIXEL_CLOCK_PARAMETERS_LAST PIXEL_CLOCK_PARAMETERS_V2
#define GET_PIXEL_CLOCK_PS_ALLOCATION PIXEL_CLOCK_PARAMETERS_LAST
typedef struct _PIXEL_CLOCK_PARAMETERS_V5
{
UCHAR ucCRTC; // ATOM_CRTC1~6, indicates the CRTC controller to
// drive the pixel clock; not used in the DCPLL case.
union{
UCHAR ucReserved;
UCHAR ucFracFbDiv; // [gphan] temporary to prevent build problem. remove it after driver code is changed.
};
USHORT usPixelClock; // target pixel clock to drive the CRTC timing.
// 0 means disable PPLL/DCPLL.
USHORT usFbDiv; // feedback divider integer part.
UCHAR ucPostDiv; // post divider.
UCHAR ucRefDiv; // Reference divider
UCHAR ucPpll; // ATOM_PPLL1/ATOM_PPLL2/ATOM_DCPLL
UCHAR ucTransmitterID; // ASIC encoder id defined in objectId.h,
// indicate which graphic encoder will be used.
UCHAR ucEncoderMode; // Encoder mode:
UCHAR ucMiscInfo; // bit[0]= Force program PPLL
// bit[1]= when VGA timing is used.
// bit[3:2]= HDMI panel bit depth: =0: 24bpp =1:30bpp, =2:32bpp
// bit[4]= RefClock source for PPLL.
// =0: XTALIN (default mode)
// =1: other external clock source, which is pre-defined
// by the VBIOS depending on the feature required.
// bit[7:5]: reserved.
ULONG ulFbDivDecFrac; // 20 bit feedback divider decimal fraction part, range from 1~999999 ( 0.000001 to 0.999999 )
}PIXEL_CLOCK_PARAMETERS_V5;
#define PIXEL_CLOCK_V5_MISC_FORCE_PROG_PPLL 0x01
#define PIXEL_CLOCK_V5_MISC_VGA_MODE 0x02
#define PIXEL_CLOCK_V5_MISC_HDMI_BPP_MASK 0x0c
#define PIXEL_CLOCK_V5_MISC_HDMI_24BPP 0x00
#define PIXEL_CLOCK_V5_MISC_HDMI_30BPP 0x04
#define PIXEL_CLOCK_V5_MISC_HDMI_32BPP 0x08
#define PIXEL_CLOCK_V5_MISC_REF_DIV_SRC 0x10
typedef struct _CRTC_PIXEL_CLOCK_FREQ
{
#if ATOM_BIG_ENDIAN
ULONG ucCRTC:8; // ATOM_CRTC1~6, indicates the CRTC controller to
// drive the pixel clock; not used in the DCPLL case.
ULONG ulPixelClock:24; // target pixel clock to drive the CRTC timing.
// 0 means disable PPLL/DCPLL. Expanded to 24 bits compared to the previous version.
#else
ULONG ulPixelClock:24; // target pixel clock to drive the CRTC timing.
// 0 means disable PPLL/DCPLL. Expanded to 24 bits compared to the previous version.
ULONG ucCRTC:8; // ATOM_CRTC1~6, indicates the CRTC controller to
// drive the pixel clock; not used in the DCPLL case.
#endif
}CRTC_PIXEL_CLOCK_FREQ;
typedef struct _PIXEL_CLOCK_PARAMETERS_V6
{
union{
CRTC_PIXEL_CLOCK_FREQ ulCrtcPclkFreq; // pixel clock and CRTC id frequency
ULONG ulDispEngClkFreq; // dispclk frequency
};
USHORT usFbDiv; // feedback divider integer part.
UCHAR ucPostDiv; // post divider.
UCHAR ucRefDiv; // Reference divider
UCHAR ucPpll; // ATOM_PPLL1/ATOM_PPLL2/ATOM_DCPLL
UCHAR ucTransmitterID; // ASIC encoder id defined in objectId.h,
// indicate which graphic encoder will be used.
UCHAR ucEncoderMode; // Encoder mode:
UCHAR ucMiscInfo; // bit[0]= Force program PPLL
// bit[1]= when VGA timing is used.
// bit[3:2]= HDMI panel bit depth: =0: 24bpp =1:30bpp, =2:32bpp
// bit[4]= RefClock source for PPLL.
// =0: XTALIN (default mode)
// =1: other external clock source, which is pre-defined
// by the VBIOS depending on the feature required.
// bit[7:5]: reserved.
ULONG ulFbDivDecFrac; // 20 bit feedback divider decimal fraction part, range from 1~999999 ( 0.000001 to 0.999999 )
}PIXEL_CLOCK_PARAMETERS_V6;
#define PIXEL_CLOCK_V6_MISC_FORCE_PROG_PPLL 0x01
#define PIXEL_CLOCK_V6_MISC_VGA_MODE 0x02
#define PIXEL_CLOCK_V6_MISC_HDMI_BPP_MASK 0x0c
#define PIXEL_CLOCK_V6_MISC_HDMI_24BPP 0x00
#define PIXEL_CLOCK_V6_MISC_HDMI_36BPP 0x04
#define PIXEL_CLOCK_V6_MISC_HDMI_30BPP 0x08
#define PIXEL_CLOCK_V6_MISC_HDMI_48BPP 0x0c
#define PIXEL_CLOCK_V6_MISC_REF_DIV_SRC 0x10
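// --- Illustrative example (editor's sketch, not part of the original header) ---
// Filling the V6 block: the 24-bit pixel clock and the CRTC id share one DWORD
// via CRTC_PIXEL_CLOCK_FREQ. The function name and all values are hypothetical
// placeholders; the CRTC and PPLL ids correspond to the ATOM_CRTCx/ATOM_PPLLx
// values defined earlier in this header.
static void example_FillPixelClockV6(PIXEL_CLOCK_PARAMETERS_V6 *p)
{
    p->ulCrtcPclkFreq.ulPixelClock = 27000;  // 270.00 MHz in 10 kHz units
    p->ulCrtcPclkFreq.ucCRTC       = 0;      // ATOM_CRTC1
    p->ucPpll                      = 0;      // ATOM_PPLL1
    p->ucMiscInfo                  = PIXEL_CLOCK_V6_MISC_FORCE_PROG_PPLL;
}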
typedef struct _GET_DISP_PLL_STATUS_INPUT_PARAMETERS_V2
{
PIXEL_CLOCK_PARAMETERS_V3 sDispClkInput;
}GET_DISP_PLL_STATUS_INPUT_PARAMETERS_V2;
typedef struct _GET_DISP_PLL_STATUS_OUTPUT_PARAMETERS_V2
{
UCHAR ucStatus;
UCHAR ucRefDivSrc; // =1: reference clock source from XTALIN, =0: source from PCIE ref clock
UCHAR ucReserved[2];
}GET_DISP_PLL_STATUS_OUTPUT_PARAMETERS_V2;
typedef struct _GET_DISP_PLL_STATUS_INPUT_PARAMETERS_V3
{
PIXEL_CLOCK_PARAMETERS_V5 sDispClkInput;
}GET_DISP_PLL_STATUS_INPUT_PARAMETERS_V3;
/****************************************************************************/
// Structures used by AdjustDisplayPllTable
/****************************************************************************/
typedef struct _ADJUST_DISPLAY_PLL_PARAMETERS
{
USHORT usPixelClock;
UCHAR ucTransmitterID;
UCHAR ucEncodeMode;
union
{
UCHAR ucDVOConfig; //if DVO, needs the link rate and whether the output is low 12-bit or 24-bit
UCHAR ucConfig; //if not DVO, not defined yet
};
UCHAR ucReserved[3];
}ADJUST_DISPLAY_PLL_PARAMETERS;
#define ADJUST_DISPLAY_CONFIG_SS_ENABLE 0x10
#define ADJUST_DISPLAY_PLL_PS_ALLOCATION ADJUST_DISPLAY_PLL_PARAMETERS
typedef struct _ADJUST_DISPLAY_PLL_INPUT_PARAMETERS_V3
{
USHORT usPixelClock; // target pixel clock
UCHAR ucTransmitterID; // GPU transmitter id defined in objectid.h
UCHAR ucEncodeMode; // encoder mode: CRT, LVDS, DP, TMDS or HDMI
UCHAR ucDispPllConfig; // display pll configure parameter defined as following DISPPLL_CONFIG_XXXX
UCHAR ucExtTransmitterID; // external encoder id.
UCHAR ucReserved[2];
}ADJUST_DISPLAY_PLL_INPUT_PARAMETERS_V3;
// ucDispPllConfig v1.2 for RoadRunner
#define DISPPLL_CONFIG_DVO_RATE_SEL 0x0001 // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_DVO_DDR_SPEED 0x0000 // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_DVO_SDR_SPEED 0x0001 // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_DVO_OUTPUT_SEL 0x000c // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_DVO_LOW12BIT 0x0000 // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_DVO_UPPER12BIT 0x0004 // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_DVO_24BIT 0x0008 // need only when ucTransmitterID = DVO
#define DISPPLL_CONFIG_SS_ENABLE 0x0010 // Only used when ucEncoderMode = DP or LVDS
#define DISPPLL_CONFIG_COHERENT_MODE 0x0020 // Only used when ucEncoderMode = TMDS or HDMI
#define DISPPLL_CONFIG_DUAL_LINK 0x0040 // Only used when ucEncoderMode = TMDS or LVDS
typedef struct _ADJUST_DISPLAY_PLL_OUTPUT_PARAMETERS_V3
{
ULONG ulDispPllFreq; // returns the display PPLL frequency used to generate the pixel clock, and the related idclk, symclk etc.
UCHAR ucRefDiv; // if non-zero, it is used to calculate the other PPLL parameters, fb_divider and post_div (if not given)
UCHAR ucPostDiv; // if non-zero, it is used to calculate the other PPLL parameter, fb_divider
UCHAR ucReserved[2];
}ADJUST_DISPLAY_PLL_OUTPUT_PARAMETERS_V3;
typedef struct _ADJUST_DISPLAY_PLL_PS_ALLOCATION_V3
{
union
{
ADJUST_DISPLAY_PLL_INPUT_PARAMETERS_V3 sInput;
ADJUST_DISPLAY_PLL_OUTPUT_PARAMETERS_V3 sOutput;
};
} ADJUST_DISPLAY_PLL_PS_ALLOCATION_V3;
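// --- Illustrative example (editor's sketch, not part of the original header) ---
// The V3 allocation overlays input and output: fill sInput, execute the
// AdjustDisplayPll table, then read sOutput from the same storage. The
// function name and its parameters are hypothetical.
static void example_FillAdjustDisplayPllV3(ADJUST_DISPLAY_PLL_PS_ALLOCATION_V3 *ps,
                                           USHORT usPclk10kHz, UCHAR ucTransmitterId)
{
    ps->sInput.usPixelClock       = usPclk10kHz;
    ps->sInput.ucTransmitterID    = ucTransmitterId;       // id from objectid.h
    ps->sInput.ucEncodeMode       = ATOM_ENCODER_MODE_DP;
    ps->sInput.ucDispPllConfig    = DISPPLL_CONFIG_SS_ENABLE;
    ps->sInput.ucExtTransmitterID = 0;                     // no external encoder
    // After execution, ps->sOutput.ulDispPllFreq, ucRefDiv and ucPostDiv
    // hold the adjusted PLL settings.
}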
/****************************************************************************/
// Structures used by EnableYUVTable
/****************************************************************************/
typedef struct _ENABLE_YUV_PARAMETERS
{
UCHAR ucEnable; // ATOM_ENABLE:Enable YUV or ATOM_DISABLE:Disable YUV (RGB)
UCHAR ucCRTC; // Which CRTC needs this YUV or RGB format
UCHAR ucPadding[2];
}ENABLE_YUV_PARAMETERS;
#define ENABLE_YUV_PS_ALLOCATION ENABLE_YUV_PARAMETERS
/****************************************************************************/
// Structures used by GetMemoryClockTable
/****************************************************************************/
typedef struct _GET_MEMORY_CLOCK_PARAMETERS
{
ULONG ulReturnMemoryClock; // current memory speed in 10KHz unit
} GET_MEMORY_CLOCK_PARAMETERS;
#define GET_MEMORY_CLOCK_PS_ALLOCATION GET_MEMORY_CLOCK_PARAMETERS
/****************************************************************************/
// Structures used by GetEngineClockTable
/****************************************************************************/
typedef struct _GET_ENGINE_CLOCK_PARAMETERS
{
ULONG ulReturnEngineClock; // current engine speed in 10KHz unit
} GET_ENGINE_CLOCK_PARAMETERS;
#define GET_ENGINE_CLOCK_PS_ALLOCATION GET_ENGINE_CLOCK_PARAMETERS
/****************************************************************************/
// Following Structures and constant may be obsolete
/****************************************************************************/
//Maximum 8 bytes; the data read in will be placed in the parameter space.
//The read operation succeeded when the parameter space is non-zero; otherwise it failed.
typedef struct _READ_EDID_FROM_HW_I2C_DATA_PARAMETERS
{
USHORT usPrescale; //Ratio between Engine clock and I2C clock
USHORT usVRAMAddress; //Address in frame buffer where to place the raw EDID
USHORT usStatus; //When used as output: lower byte is the EDID checksum, high byte is the hardware status
//When used as input: lower byte is 'bytes to read'; currently limited to 128 bytes or 1 byte
UCHAR ucSlaveAddr; //Read from which slave
UCHAR ucLineNumber; //Read from which HW assisted line
}READ_EDID_FROM_HW_I2C_DATA_PARAMETERS;
#define READ_EDID_FROM_HW_I2C_DATA_PS_ALLOCATION READ_EDID_FROM_HW_I2C_DATA_PARAMETERS
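// --- Illustrative example (editor's sketch, not part of the original header) ---
// Requesting a 128-byte EDID read over HW-assisted I2C line 0 from the
// standard 0xA0 DDC slave address. The function name, prescale ratio and
// VRAM offset are hypothetical placeholders.
static void example_FillEdidRead(READ_EDID_FROM_HW_I2C_DATA_PARAMETERS *p)
{
    p->usPrescale    = 50;    // engine-clock/I2C-clock ratio (placeholder)
    p->usVRAMAddress = 0;     // frame-buffer offset that receives the raw EDID
    p->usStatus      = 128;   // as input: number of bytes to read
    p->ucSlaveAddr   = 0xA0;  // standard DDC EDID slave
    p->ucLineNumber  = 0;     // HW-assisted I2C line to use
}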
#define ATOM_WRITE_I2C_FORMAT_PSOFFSET_PSDATABYTE 0
#define ATOM_WRITE_I2C_FORMAT_PSOFFSET_PSTWODATABYTES 1
#define ATOM_WRITE_I2C_FORMAT_PSCOUNTER_PSOFFSET_IDDATABLOCK 2
#define ATOM_WRITE_I2C_FORMAT_PSCOUNTER_IDOFFSET_PLUS_IDDATABLOCK 3
#define ATOM_WRITE_I2C_FORMAT_IDCOUNTER_IDOFFSET_IDDATABLOCK 4
typedef struct _WRITE_ONE_BYTE_HW_I2C_DATA_PARAMETERS
{
USHORT usPrescale; //Ratio between Engine clock and I2C clock
USHORT usByteOffset; //Write to which byte
//Upper portion of usByteOffset is Format of data
//1bytePS+offsetPS
//2bytesPS+offsetPS
//blockID+offsetPS
//blockID+offsetID
//blockID+counterID+offsetID
UCHAR ucData; //PS data1
UCHAR ucStatus; //Status byte: 1=success, 2=failure. Also used as PS data2
UCHAR ucSlaveAddr; //Write to which slave
UCHAR ucLineNumber; //Write from which HW assisted line
}WRITE_ONE_BYTE_HW_I2C_DATA_PARAMETERS;
#define WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION WRITE_ONE_BYTE_HW_I2C_DATA_PARAMETERS
typedef struct _SET_UP_HW_I2C_DATA_PARAMETERS
{
USHORT usPrescale; //Ratio between Engine clock and I2C clock
UCHAR ucSlaveAddr; //Write to which slave
UCHAR ucLineNumber; //Write from which HW assisted line
}SET_UP_HW_I2C_DATA_PARAMETERS;
/**************************************************************************/
#define SPEED_FAN_CONTROL_PS_ALLOCATION WRITE_ONE_BYTE_HW_I2C_DATA_PARAMETERS
/****************************************************************************/
// Structures used by PowerConnectorDetectionTable
/****************************************************************************/
typedef struct _POWER_CONNECTOR_DETECTION_PARAMETERS
{
UCHAR ucPowerConnectorStatus; //Used for return value: 0 = detected, 1 = not detected
UCHAR ucPwrBehaviorId;
USHORT usPwrBudget; //power budget the system currently boots to, in watts
}POWER_CONNECTOR_DETECTION_PARAMETERS;
typedef struct POWER_CONNECTOR_DETECTION_PS_ALLOCATION
{
UCHAR ucPowerConnectorStatus; //Used for return value: 0 = detected, 1 = not detected
UCHAR ucReserved;
USHORT usPwrBudget; //power budget the system currently boots to, in watts
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved;
}POWER_CONNECTOR_DETECTION_PS_ALLOCATION;
/****************************LVDS SS Command Table Definitions**********************/
/****************************************************************************/
// Structures used by EnableSpreadSpectrumOnPPLLTable
/****************************************************************************/
typedef struct _ENABLE_LVDS_SS_PARAMETERS
{
USHORT usSpreadSpectrumPercentage;
UCHAR ucSpreadSpectrumType; //Bit0=0 Down Spread, =1 Center Spread. Bit1=1 Ext., =0 Int. Others: TBD
UCHAR ucSpreadSpectrumStepSize_Delay; //bits[3:2] SS_STEP_SIZE; bits[6:4] SS_DELAY
UCHAR ucEnable; //ATOM_ENABLE or ATOM_DISABLE
UCHAR ucPadding[3];
}ENABLE_LVDS_SS_PARAMETERS;
//ucTableFormatRevision=1,ucTableContentRevision=2
typedef struct _ENABLE_LVDS_SS_PARAMETERS_V2
{
USHORT usSpreadSpectrumPercentage;
UCHAR ucSpreadSpectrumType; //Bit0=0 Down Spread, =1 Center Spread. Bit1=1 Ext., =0 Int. Others: TBD
UCHAR ucSpreadSpectrumStep; //
UCHAR ucEnable; //ATOM_ENABLE or ATOM_DISABLE
UCHAR ucSpreadSpectrumDelay;
UCHAR ucSpreadSpectrumRange;
UCHAR ucPadding;
}ENABLE_LVDS_SS_PARAMETERS_V2;
//This new structure is based on ENABLE_LVDS_SS_PARAMETERS but expands to SS on PPLL, so other devices can use SS.
typedef struct _ENABLE_SPREAD_SPECTRUM_ON_PPLL
{
USHORT usSpreadSpectrumPercentage;
UCHAR ucSpreadSpectrumType; // Bit0=0 Down Spread, =1 Center Spread. Bit1=1 Ext., =0 Int. Others: TBD
UCHAR ucSpreadSpectrumStep; //
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucSpreadSpectrumDelay;
UCHAR ucSpreadSpectrumRange;
UCHAR ucPpll; // ATOM_PPLL1/ATOM_PPLL2
}ENABLE_SPREAD_SPECTRUM_ON_PPLL;
typedef struct _ENABLE_SPREAD_SPECTRUM_ON_PPLL_V2
{
USHORT usSpreadSpectrumPercentage;
UCHAR ucSpreadSpectrumType; // Bit[0]: 0-Down Spread,1-Center Spread.
// Bit[1]: 1-Ext. 0-Int.
// Bit[3:2]: =0 P1PLL =1 P2PLL =2 DCPLL
// Bits[7:4] reserved
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
USHORT usSpreadSpectrumAmount; // Includes SS_AMOUNT_FBDIV[7:0] and SS_AMOUNT_NFRAC_SLIP[11:8]
USHORT usSpreadSpectrumStep; // SS_STEP_SIZE_DSFRAC
}ENABLE_SPREAD_SPECTRUM_ON_PPLL_V2;
#define ATOM_PPLL_SS_TYPE_V2_DOWN_SPREAD 0x00
#define ATOM_PPLL_SS_TYPE_V2_CENTRE_SPREAD 0x01
#define ATOM_PPLL_SS_TYPE_V2_EXT_SPREAD 0x02
#define ATOM_PPLL_SS_TYPE_V2_PPLL_SEL_MASK 0x0c
#define ATOM_PPLL_SS_TYPE_V2_P1PLL 0x00
#define ATOM_PPLL_SS_TYPE_V2_P2PLL 0x04
#define ATOM_PPLL_SS_TYPE_V2_DCPLL 0x08
#define ATOM_PPLL_SS_AMOUNT_V2_FBDIV_MASK 0x00FF
#define ATOM_PPLL_SS_AMOUNT_V2_FBDIV_SHIFT 0
#define ATOM_PPLL_SS_AMOUNT_V2_NFRAC_MASK 0x0F00
#define ATOM_PPLL_SS_AMOUNT_V2_NFRAC_SHIFT 8
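// Illustrative sketch (not part of the original header): packing the
// SS_AMOUNT_FBDIV and SS_AMOUNT_NFRAC_SLIP components into
// usSpreadSpectrumAmount with the V2 masks above. The helper name is
// hypothetical.
static inline USHORT atom_ss_amount_v2_pack(UCHAR ucFbDiv, UCHAR ucNfracSlip)
{
    USHORT usAmount = ((USHORT)ucFbDiv << ATOM_PPLL_SS_AMOUNT_V2_FBDIV_SHIFT) & ATOM_PPLL_SS_AMOUNT_V2_FBDIV_MASK;
    usAmount |= ((USHORT)ucNfracSlip << ATOM_PPLL_SS_AMOUNT_V2_NFRAC_SHIFT) & ATOM_PPLL_SS_AMOUNT_V2_NFRAC_MASK;
    return usAmount;
}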
// Used by DCE5.0
typedef struct _ENABLE_SPREAD_SPECTRUM_ON_PPLL_V3
{
USHORT usSpreadSpectrumAmountFrac; // SS_AMOUNT_DSFRAC New in DCE5.0
UCHAR ucSpreadSpectrumType; // Bit[0]: 0-Down Spread,1-Center Spread.
// Bit[1]: 1-Ext. 0-Int.
// Bit[3:2]: =0 P1PLL =1 P2PLL =2 DCPLL
// Bits[7:4] reserved
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
USHORT usSpreadSpectrumAmount; // Includes SS_AMOUNT_FBDIV[7:0] and SS_AMOUNT_NFRAC_SLIP[11:8]
USHORT usSpreadSpectrumStep; // SS_STEP_SIZE_DSFRAC
}ENABLE_SPREAD_SPECTRUM_ON_PPLL_V3;
#define ATOM_PPLL_SS_TYPE_V3_DOWN_SPREAD 0x00
#define ATOM_PPLL_SS_TYPE_V3_CENTRE_SPREAD 0x01
#define ATOM_PPLL_SS_TYPE_V3_EXT_SPREAD 0x02
#define ATOM_PPLL_SS_TYPE_V3_PPLL_SEL_MASK 0x0c
#define ATOM_PPLL_SS_TYPE_V3_P1PLL 0x00
#define ATOM_PPLL_SS_TYPE_V3_P2PLL 0x04
#define ATOM_PPLL_SS_TYPE_V3_DCPLL 0x08
#define ATOM_PPLL_SS_TYPE_V3_P0PLL ATOM_PPLL_SS_TYPE_V3_DCPLL
#define ATOM_PPLL_SS_AMOUNT_V3_FBDIV_MASK 0x00FF
#define ATOM_PPLL_SS_AMOUNT_V3_FBDIV_SHIFT 0
#define ATOM_PPLL_SS_AMOUNT_V3_NFRAC_MASK 0x0F00
#define ATOM_PPLL_SS_AMOUNT_V3_NFRAC_SHIFT 8
#define ENABLE_SPREAD_SPECTRUM_ON_PPLL_PS_ALLOCATION ENABLE_SPREAD_SPECTRUM_ON_PPLL
/**************************************************************************/
typedef struct _SET_PIXEL_CLOCK_PS_ALLOCATION
{
PIXEL_CLOCK_PARAMETERS sPCLKInput;
ENABLE_SPREAD_SPECTRUM_ON_PPLL sReserved;//Caller doesn't need to init this portion
}SET_PIXEL_CLOCK_PS_ALLOCATION;
#define ENABLE_VGA_RENDER_PS_ALLOCATION SET_PIXEL_CLOCK_PS_ALLOCATION
/****************************************************************************/
// Structures used by ###
/****************************************************************************/
typedef struct _MEMORY_TRAINING_PARAMETERS
{
ULONG ulTargetMemoryClock; //In 10Khz unit
}MEMORY_TRAINING_PARAMETERS;
#define MEMORY_TRAINING_PS_ALLOCATION MEMORY_TRAINING_PARAMETERS
/****************************LVDS and other encoder command table definitions **********************/
/****************************************************************************/
// Structures used by LVDSEncoderControlTable (Before DCE30)
// LVTMAEncoderControlTable (Before DCE30)
// TMDSAEncoderControlTable (Before DCE30)
/****************************************************************************/
typedef struct _LVDS_ENCODER_CONTROL_PARAMETERS
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
UCHAR ucMisc; // bit0=0: Enable single link
// =1: Enable dual link
// Bit1=0: 666RGB
// =1: 888RGB
UCHAR ucAction; // 0: turn off encoder
// 1: setup and turn on encoder
}LVDS_ENCODER_CONTROL_PARAMETERS;
#define LVDS_ENCODER_CONTROL_PS_ALLOCATION LVDS_ENCODER_CONTROL_PARAMETERS
#define TMDS1_ENCODER_CONTROL_PARAMETERS LVDS_ENCODER_CONTROL_PARAMETERS
#define TMDS1_ENCODER_CONTROL_PS_ALLOCATION TMDS1_ENCODER_CONTROL_PARAMETERS
#define TMDS2_ENCODER_CONTROL_PARAMETERS TMDS1_ENCODER_CONTROL_PARAMETERS
#define TMDS2_ENCODER_CONTROL_PS_ALLOCATION TMDS2_ENCODER_CONTROL_PARAMETERS
//ucTableFormatRevision=1,ucTableContentRevision=2
typedef struct _LVDS_ENCODER_CONTROL_PARAMETERS_V2
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
UCHAR ucMisc; // see PANEL_ENCODER_MISC_xx definitions below
UCHAR ucAction; // 0: turn off encoder
// 1: setup and turn on encoder
UCHAR ucTruncate; // bit0=0: Disable truncate
// =1: Enable truncate
// bit4=0: 666RGB
// =1: 888RGB
UCHAR ucSpatial; // bit0=0: Disable spatial dithering
// =1: Enable spatial dithering
// bit4=0: 666RGB
// =1: 888RGB
UCHAR ucTemporal; // bit0=0: Disable temporal dithering
// =1: Enable temporal dithering
// bit4=0: 666RGB
// =1: 888RGB
// bit5=0: Gray level 2
// =1: Gray level 4
UCHAR ucFRC; // bit4=0: 25FRC_SEL pattern E
// =1: 25FRC_SEL pattern F
// bit6:5=0: 50FRC_SEL pattern A
// =1: 50FRC_SEL pattern B
// =2: 50FRC_SEL pattern C
// =3: 50FRC_SEL pattern D
// bit7=0: 75FRC_SEL pattern E
// =1: 75FRC_SEL pattern F
}LVDS_ENCODER_CONTROL_PARAMETERS_V2;
#define LVDS_ENCODER_CONTROL_PS_ALLOCATION_V2 LVDS_ENCODER_CONTROL_PARAMETERS_V2
#define TMDS1_ENCODER_CONTROL_PARAMETERS_V2 LVDS_ENCODER_CONTROL_PARAMETERS_V2
#define TMDS1_ENCODER_CONTROL_PS_ALLOCATION_V2 TMDS1_ENCODER_CONTROL_PARAMETERS_V2
#define TMDS2_ENCODER_CONTROL_PARAMETERS_V2 TMDS1_ENCODER_CONTROL_PARAMETERS_V2
#define TMDS2_ENCODER_CONTROL_PS_ALLOCATION_V2 TMDS2_ENCODER_CONTROL_PARAMETERS_V2
#define LVDS_ENCODER_CONTROL_PARAMETERS_V3 LVDS_ENCODER_CONTROL_PARAMETERS_V2
#define LVDS_ENCODER_CONTROL_PS_ALLOCATION_V3 LVDS_ENCODER_CONTROL_PARAMETERS_V3
#define TMDS1_ENCODER_CONTROL_PARAMETERS_V3 LVDS_ENCODER_CONTROL_PARAMETERS_V3
#define TMDS1_ENCODER_CONTROL_PS_ALLOCATION_V3 TMDS1_ENCODER_CONTROL_PARAMETERS_V3
#define TMDS2_ENCODER_CONTROL_PARAMETERS_V3 LVDS_ENCODER_CONTROL_PARAMETERS_V3
#define TMDS2_ENCODER_CONTROL_PS_ALLOCATION_V3 TMDS2_ENCODER_CONTROL_PARAMETERS_V3
/****************************************************************************/
// Structures used by ###
/****************************************************************************/
typedef struct _ENABLE_EXTERNAL_TMDS_ENCODER_PARAMETERS
{
UCHAR ucEnable; // Enable or Disable External TMDS encoder
UCHAR ucMisc; // Bit0=0:Enable Single link;=1:Enable Dual link;Bit1 {=0:666RGB, =1:888RGB}
UCHAR ucPadding[2];
}ENABLE_EXTERNAL_TMDS_ENCODER_PARAMETERS;
typedef struct _ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION
{
ENABLE_EXTERNAL_TMDS_ENCODER_PARAMETERS sXTmdsEncoder;
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved; //Caller doesn't need to init this portion
}ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION;
#define ENABLE_EXTERNAL_TMDS_ENCODER_PARAMETERS_V2 LVDS_ENCODER_CONTROL_PARAMETERS_V2
typedef struct _ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION_V2
{
ENABLE_EXTERNAL_TMDS_ENCODER_PARAMETERS_V2 sXTmdsEncoder;
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved; //Caller doesn't need to init this portion
}ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION_V2;
typedef struct _EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION
{
DIG_ENCODER_CONTROL_PARAMETERS sDigEncoder;
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved;
}EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION;
/****************************************************************************/
// Structures used by DVOEncoderControlTable
/****************************************************************************/
//ucTableFormatRevision=1,ucTableContentRevision=3
//ucDVOConfig:
#define DVO_ENCODER_CONFIG_RATE_SEL 0x01
#define DVO_ENCODER_CONFIG_DDR_SPEED 0x00
#define DVO_ENCODER_CONFIG_SDR_SPEED 0x01
#define DVO_ENCODER_CONFIG_OUTPUT_SEL 0x0c
#define DVO_ENCODER_CONFIG_LOW12BIT 0x00
#define DVO_ENCODER_CONFIG_UPPER12BIT 0x04
#define DVO_ENCODER_CONFIG_24BIT 0x08
typedef struct _DVO_ENCODER_CONTROL_PARAMETERS_V3
{
USHORT usPixelClock;
UCHAR ucDVOConfig;
UCHAR ucAction; //ATOM_ENABLE/ATOM_DISABLE/ATOM_HPD_INIT
UCHAR ucReseved[4];
}DVO_ENCODER_CONTROL_PARAMETERS_V3;
#define DVO_ENCODER_CONTROL_PS_ALLOCATION_V3 DVO_ENCODER_CONTROL_PARAMETERS_V3
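// Illustrative sketch (not part of the original header): composing ucDVOConfig
// from the rate-select and output-select fields defined above. The helper name
// is hypothetical.
static inline UCHAR atom_dvo_config(UCHAR ucRateSel, UCHAR ucOutputSel)
{
    return (ucRateSel & DVO_ENCODER_CONFIG_RATE_SEL) |
           (ucOutputSel & DVO_ENCODER_CONFIG_OUTPUT_SEL);
}
// e.g. atom_dvo_config(DVO_ENCODER_CONFIG_SDR_SPEED, DVO_ENCODER_CONFIG_UPPER12BIT)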
//ucTableFormatRevision=1
//ucTableContentRevision=3: the structure is not changed, but usMisc adds bit 1 as another input:
// bit1=0: non-coherent mode
// =1: coherent mode
//==========================================================================================
//Only change is here next time when changing encoder parameter definitions again!
#define LVDS_ENCODER_CONTROL_PARAMETERS_LAST LVDS_ENCODER_CONTROL_PARAMETERS_V3
#define LVDS_ENCODER_CONTROL_PS_ALLOCATION_LAST LVDS_ENCODER_CONTROL_PARAMETERS_LAST
#define TMDS1_ENCODER_CONTROL_PARAMETERS_LAST LVDS_ENCODER_CONTROL_PARAMETERS_V3
#define TMDS1_ENCODER_CONTROL_PS_ALLOCATION_LAST TMDS1_ENCODER_CONTROL_PARAMETERS_LAST
#define TMDS2_ENCODER_CONTROL_PARAMETERS_LAST LVDS_ENCODER_CONTROL_PARAMETERS_V3
#define TMDS2_ENCODER_CONTROL_PS_ALLOCATION_LAST TMDS2_ENCODER_CONTROL_PARAMETERS_LAST
#define DVO_ENCODER_CONTROL_PARAMETERS_LAST DVO_ENCODER_CONTROL_PARAMETERS
#define DVO_ENCODER_CONTROL_PS_ALLOCATION_LAST DVO_ENCODER_CONTROL_PS_ALLOCATION
//==========================================================================================
#define PANEL_ENCODER_MISC_DUAL 0x01
#define PANEL_ENCODER_MISC_COHERENT 0x02
#define PANEL_ENCODER_MISC_TMDS_LINKB 0x04
#define PANEL_ENCODER_MISC_HDMI_TYPE 0x08
#define PANEL_ENCODER_ACTION_DISABLE ATOM_DISABLE
#define PANEL_ENCODER_ACTION_ENABLE ATOM_ENABLE
#define PANEL_ENCODER_ACTION_COHERENTSEQ (ATOM_ENABLE+1)
#define PANEL_ENCODER_TRUNCATE_EN 0x01
#define PANEL_ENCODER_TRUNCATE_DEPTH 0x10
#define PANEL_ENCODER_SPATIAL_DITHER_EN 0x01
#define PANEL_ENCODER_SPATIAL_DITHER_DEPTH 0x10
#define PANEL_ENCODER_TEMPORAL_DITHER_EN 0x01
#define PANEL_ENCODER_TEMPORAL_DITHER_DEPTH 0x10
#define PANEL_ENCODER_TEMPORAL_LEVEL_4 0x20
#define PANEL_ENCODER_25FRC_MASK 0x10
#define PANEL_ENCODER_25FRC_E 0x00
#define PANEL_ENCODER_25FRC_F 0x10
#define PANEL_ENCODER_50FRC_MASK 0x60
#define PANEL_ENCODER_50FRC_A 0x00
#define PANEL_ENCODER_50FRC_B 0x20
#define PANEL_ENCODER_50FRC_C 0x40
#define PANEL_ENCODER_50FRC_D 0x60
#define PANEL_ENCODER_75FRC_MASK 0x80
#define PANEL_ENCODER_75FRC_E 0x00
#define PANEL_ENCODER_75FRC_F 0x80
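// Illustrative sketch (not part of the original header): how a caller might
// fill the dithering fields of LVDS_ENCODER_CONTROL_PARAMETERS_V2 using the
// PANEL_ENCODER_* flags above, here enabling 888RGB spatial dithering plus
// 4-level temporal dithering and one FRC pattern choice. The helper name is
// hypothetical.
static inline void atom_panel_dither_888(LVDS_ENCODER_CONTROL_PARAMETERS_V2 *psParam)
{
    psParam->ucSpatial  = PANEL_ENCODER_SPATIAL_DITHER_EN | PANEL_ENCODER_SPATIAL_DITHER_DEPTH;
    psParam->ucTemporal = PANEL_ENCODER_TEMPORAL_DITHER_EN | PANEL_ENCODER_TEMPORAL_DITHER_DEPTH |
                          PANEL_ENCODER_TEMPORAL_LEVEL_4;
    psParam->ucFRC      = PANEL_ENCODER_25FRC_F | PANEL_ENCODER_50FRC_B | PANEL_ENCODER_75FRC_F;
}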
/****************************************************************************/
// Structures used by SetVoltageTable
/****************************************************************************/
#define SET_VOLTAGE_TYPE_ASIC_VDDC 1
#define SET_VOLTAGE_TYPE_ASIC_MVDDC 2
#define SET_VOLTAGE_TYPE_ASIC_MVDDQ 3
#define SET_VOLTAGE_TYPE_ASIC_VDDCI 4
#define SET_VOLTAGE_INIT_MODE 5
#define SET_VOLTAGE_GET_MAX_VOLTAGE 6 //Gets the Max. voltage for the soldered ASIC
#define SET_ASIC_VOLTAGE_MODE_ALL_SOURCE 0x1
#define SET_ASIC_VOLTAGE_MODE_SOURCE_A 0x2
#define SET_ASIC_VOLTAGE_MODE_SOURCE_B 0x4
#define SET_ASIC_VOLTAGE_MODE_SET_VOLTAGE 0x0
#define SET_ASIC_VOLTAGE_MODE_GET_GPIOVAL 0x1
#define SET_ASIC_VOLTAGE_MODE_GET_GPIOMASK 0x2
typedef struct _SET_VOLTAGE_PARAMETERS
{
UCHAR ucVoltageType; // To tell which voltage to set up, VDDC/MVDDC/MVDDQ
UCHAR ucVoltageMode; // To set all, to set source A or source B or ...
UCHAR ucVoltageIndex; // An index to tell which voltage level
UCHAR ucReserved;
}SET_VOLTAGE_PARAMETERS;
typedef struct _SET_VOLTAGE_PARAMETERS_V2
{
UCHAR ucVoltageType; // To tell which voltage to set up, VDDC/MVDDC/MVDDQ
UCHAR ucVoltageMode; // Not used; may be used for a state machine for different power modes
USHORT usVoltageLevel; // real voltage level
}SET_VOLTAGE_PARAMETERS_V2;
typedef struct _SET_VOLTAGE_PARAMETERS_V1_3
{
UCHAR ucVoltageType; // To tell which voltage to set up, VDDC/MVDDC/MVDDQ/VDDCI
UCHAR ucVoltageMode; // Indicate action: Set voltage level
USHORT usVoltageLevel; // real voltage level in unit of mv or Voltage Phase (0, 1, 2, .. )
}SET_VOLTAGE_PARAMETERS_V1_3;
//ucVoltageType
#define VOLTAGE_TYPE_VDDC 1
#define VOLTAGE_TYPE_MVDDC 2
#define VOLTAGE_TYPE_MVDDQ 3
#define VOLTAGE_TYPE_VDDCI 4
//SET_VOLTAGE_PARAMETERS_V3.ucVoltageMode
#define ATOM_SET_VOLTAGE 0 //Set voltage Level
#define ATOM_INIT_VOLTAGE_REGULATOR 3 //Init Regulator
#define ATOM_SET_VOLTAGE_PHASE 4 //Set Vregulator Phase
#define ATOM_GET_MAX_VOLTAGE 6 //Get Max Voltage, not used in SetVoltageTable v1.3
#define ATOM_GET_VOLTAGE_LEVEL 6 //Get Voltage level from virtual voltage ID
// define virtual voltage id in usVoltageLevel
#define ATOM_VIRTUAL_VOLTAGE_ID0 0xff01
#define ATOM_VIRTUAL_VOLTAGE_ID1 0xff02
#define ATOM_VIRTUAL_VOLTAGE_ID2 0xff03
#define ATOM_VIRTUAL_VOLTAGE_ID3 0xff04
typedef struct _SET_VOLTAGE_PS_ALLOCATION
{
SET_VOLTAGE_PARAMETERS sASICSetVoltage;
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved;
}SET_VOLTAGE_PS_ALLOCATION;
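// Illustrative sketch (not part of the original header): preparing a v1.3
// SetVoltage call that programs a VDDC level in mV. The helper name and the
// 1100 mV target value are hypothetical.
static inline void atom_set_vddc_1100mv(SET_VOLTAGE_PARAMETERS_V1_3 *psParam)
{
    psParam->ucVoltageType  = VOLTAGE_TYPE_VDDC;   // which rail to program
    psParam->ucVoltageMode  = ATOM_SET_VOLTAGE;    // action: set voltage level
    psParam->usVoltageLevel = 1100;                // target level in mV
}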
// Newly added from SI for GetVoltageInfoTable, input parameter structure
typedef struct _GET_VOLTAGE_INFO_INPUT_PARAMETER_V1_1
{
UCHAR ucVoltageType; // Input: To tell which voltage to set up, VDDC/MVDDC/MVDDQ/VDDCI
UCHAR ucVoltageMode; // Input: Indicate action: Get voltage info
USHORT usVoltageLevel; // Input: real voltage level in unit of mv or Voltage Phase (0, 1, 2, .. ) or Leakage Id
ULONG ulReserved;
}GET_VOLTAGE_INFO_INPUT_PARAMETER_V1_1;
// Newly added from SI for GetVoltageInfoTable, output parameter structure when ucVoltageMode == ATOM_GET_VOLTAGE_VID
typedef struct _GET_VOLTAGE_INFO_OUTPUT_PARAMETER_V1_1
{
ULONG ulVotlageGpioState;
ULONG ulVoltageGPioMask;
}GET_VOLTAGE_INFO_OUTPUT_PARAMETER_V1_1;
// Newly added from SI for GetVoltageInfoTable, output parameter structure when ucVoltageMode == ATOM_GET_VOLTAGE_STATEx_LEAKAGE_VID
typedef struct _GET_LEAKAGE_VOLTAGE_INFO_OUTPUT_PARAMETER_V1_1
{
USHORT usVoltageLevel;
USHORT usVoltageId; // Voltage Id programmed in Voltage Regulator
ULONG ulReseved;
}GET_LEAKAGE_VOLTAGE_INFO_OUTPUT_PARAMETER_V1_1;
// GetVoltageInfo v1.1 ucVoltageMode
#define ATOM_GET_VOLTAGE_VID 0x00
#define ATOM_GET_VOTLAGE_INIT_SEQ 0x03
#define ATOM_GET_VOLTTAGE_PHASE_PHASE_VID 0x04
// for SI, this state maps to the 0xff02 voltage state in the Power Play table, which is the power boost state
#define ATOM_GET_VOLTAGE_STATE0_LEAKAGE_VID 0x10
// for SI, this state maps to the 0xff01 voltage state in the Power Play table, which is the performance state
#define ATOM_GET_VOLTAGE_STATE1_LEAKAGE_VID 0x11
// undefined power state
#define ATOM_GET_VOLTAGE_STATE2_LEAKAGE_VID 0x12
#define ATOM_GET_VOLTAGE_STATE3_LEAKAGE_VID 0x13
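// Illustrative sketch (not part of the original header): querying the leakage
// voltage for a virtual voltage ID through GetVoltageInfoTable, pairing the
// STATE1 mode with the 0xff01 virtual ID as described in the comments above.
// The helper name is hypothetical.
static inline void atom_query_leakage_vid(GET_VOLTAGE_INFO_INPUT_PARAMETER_V1_1 *psInput)
{
    psInput->ucVoltageType  = VOLTAGE_TYPE_VDDC;
    psInput->ucVoltageMode  = ATOM_GET_VOLTAGE_STATE1_LEAKAGE_VID;
    psInput->usVoltageLevel = ATOM_VIRTUAL_VOLTAGE_ID0;  // leakage id input (0xff01)
    psInput->ulReserved     = 0;
}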
/****************************************************************************/
// Structures used by TVEncoderControlTable
/****************************************************************************/
typedef struct _TV_ENCODER_CONTROL_PARAMETERS
{
USHORT usPixelClock; // in 10KHz; for BIOS convenience
UCHAR ucTvStandard; // See definition "ATOM_TV_NTSC ..."
UCHAR ucAction; // 0: turn off encoder
// 1: setup and turn on encoder
}TV_ENCODER_CONTROL_PARAMETERS;
typedef struct _TV_ENCODER_CONTROL_PS_ALLOCATION
{
TV_ENCODER_CONTROL_PARAMETERS sTVEncoder;
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved; // Don't set this one
}TV_ENCODER_CONTROL_PS_ALLOCATION;
//==============================Data Table Portion====================================
/****************************************************************************/
// Structure used in Data.mtb
/****************************************************************************/
typedef struct _ATOM_MASTER_LIST_OF_DATA_TABLES
{
USHORT UtilityPipeLine; // Offset for the utility to get parser info. Don't change this position!
USHORT MultimediaCapabilityInfo; // Only used by MM Lib, latest version 1.1, not configurable from Bios, need to include the table to build Bios
USHORT MultimediaConfigInfo; // Only used by MM Lib, latest version 2.1, not configurable from Bios, need to include the table to build Bios
USHORT StandardVESA_Timing; // Only used by Bios
USHORT FirmwareInfo; // Shared by various SW components,latest version 1.4
USHORT PaletteData; // Only used by BIOS
USHORT LCD_Info; // Shared by various SW components,latest version 1.3, was called LVDS_Info
USHORT DIGTransmitterInfo; // Internally used by VBIOS only, version 3.1
USHORT AnalogTV_Info; // Shared by various SW components,latest version 1.1
USHORT SupportedDevicesInfo; // Will be obsolete from R600
USHORT GPIO_I2C_Info; // Shared by various SW components,latest version 1.2 will be used from R600
USHORT VRAM_UsageByFirmware; // Shared by various SW components,latest version 1.3 will be used from R600
USHORT GPIO_Pin_LUT; // Shared by various SW components,latest version 1.1
USHORT VESA_ToInternalModeLUT; // Only used by Bios
USHORT ComponentVideoInfo; // Shared by various SW components,latest version 2.1 will be used from R600
USHORT PowerPlayInfo; // Shared by various SW components,latest version 2.1,new design from R600
USHORT CompassionateData; // Will be obsolete from R600
USHORT SaveRestoreInfo; // Only used by Bios
USHORT PPLL_SS_Info; // Shared by various SW components, latest version 1.2; used to be called SS_Info, changed to the new name because of internal ASIC SS info
USHORT OemInfo; // Defined and used by external SW, should be obsolete soon
USHORT XTMDS_Info; // Will be obsolete from R600
USHORT MclkSS_Info; // Shared by various SW components,latest version 1.1, only enabled when ext SS chip is used
USHORT Object_Header; // Shared by various SW components,latest version 1.1
USHORT IndirectIOAccess; // Only used by Bios,this table position can't change at all!!
USHORT MC_InitParameter; // Only used by command table
USHORT ASIC_VDDC_Info; // Will be obsolete from R600
USHORT ASIC_InternalSS_Info; // New table name from R600, used to be called "ASIC_MVDDC_Info"
USHORT TV_VideoMode; // Only used by command table
USHORT VRAM_Info; // Only used by command table, latest version 1.3
USHORT MemoryTrainingInfo; // Used by VBIOS and the Diag utility for memory training purposes since R600. The new table rev starts from 2.1
USHORT IntegratedSystemInfo; // Shared by various SW components
USHORT ASIC_ProfilingInfo; // New table name from R600, used to be called "ASIC_VDDCI_Info" for pre-R600
USHORT VoltageObjectInfo; // Shared by various SW components, latest version 1.1
USHORT PowerSourceInfo; // Shared by various SW components, latest version 1.1
}ATOM_MASTER_LIST_OF_DATA_TABLES;
typedef struct _ATOM_MASTER_DATA_TABLE
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_MASTER_LIST_OF_DATA_TABLES ListOfDataTables;
}ATOM_MASTER_DATA_TABLE;
// For backward compatible
#define LVDS_Info LCD_Info
#define DAC_Info PaletteData
#define TMDS_Info DIGTransmitterInfo
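// Illustrative sketch (not part of the original header): resolving a data
// table from the master list, assuming (as in typical ATOM parsers) that each
// USHORT entry is a byte offset from the ROM image base and 0 means the table
// is absent. The helper name is hypothetical.
static inline ATOM_COMMON_TABLE_HEADER *atom_get_firmware_info(UCHAR *pucRomBase,
                                                               ATOM_MASTER_DATA_TABLE *psMaster)
{
    USHORT usOffset = psMaster->ListOfDataTables.FirmwareInfo;
    return usOffset ? (ATOM_COMMON_TABLE_HEADER *)(pucRomBase + usOffset)
                    : (ATOM_COMMON_TABLE_HEADER *)0;
}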
/****************************************************************************/
// Structure used in MultimediaCapabilityInfoTable
/****************************************************************************/
typedef struct _ATOM_MULTIMEDIA_CAPABILITY_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulSignature; // HW info table signature string "$ATI"
UCHAR ucI2C_Type; // I2C type (normal GP_IO, ImpactTV GP_IO, Dedicated I2C pin, etc)
UCHAR ucTV_OutInfo; // Type of TV out supported (3:0) and video out crystal frequency (6:4) and TV data port (7)
UCHAR ucVideoPortInfo; // Provides the video port capabilities
UCHAR ucHostPortInfo; // Provides host port configuration information
}ATOM_MULTIMEDIA_CAPABILITY_INFO;
/****************************************************************************/
// Structure used in MultimediaConfigInfoTable
/****************************************************************************/
typedef struct _ATOM_MULTIMEDIA_CONFIG_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulSignature; // MM info table signature string "$MMT"
UCHAR ucTunerInfo; // Type of tuner installed on the adapter (4:0) and video input for tuner (7:5)
UCHAR ucAudioChipInfo; // List the audio chip type (3:0) product type (4) and OEM revision (7:5)
UCHAR ucProductID; // Defines as OEM ID or ATI board ID dependent on product type setting
UCHAR ucMiscInfo1; // Tuner voltage (1:0) HW teletext support (3:2) FM audio decoder (5:4) reserved (6) audio scrambling (7)
UCHAR ucMiscInfo2; // I2S input config (0) I2S output config (1) I2S Audio Chip (4:2) SPDIF Output Config (5) reserved (7:6)
UCHAR ucMiscInfo3; // Video Decoder Type (3:0) Video In Standard/Crystal (7:4)
UCHAR ucMiscInfo4; // Video Decoder Host Config (2:0) reserved (7:3)
UCHAR ucVideoInput0Info;// Video Input 0 Type (1:0) F/B setting (2) physical connector ID (5:3) reserved (7:6)
UCHAR ucVideoInput1Info;// Video Input 1 Type (1:0) F/B setting (2) physical connector ID (5:3) reserved (7:6)
UCHAR ucVideoInput2Info;// Video Input 2 Type (1:0) F/B setting (2) physical connector ID (5:3) reserved (7:6)
UCHAR ucVideoInput3Info;// Video Input 3 Type (1:0) F/B setting (2) physical connector ID (5:3) reserved (7:6)
UCHAR ucVideoInput4Info;// Video Input 4 Type (1:0) F/B setting (2) physical connector ID (5:3) reserved (7:6)
}ATOM_MULTIMEDIA_CONFIG_INFO;
/****************************************************************************/
// Structures used in FirmwareInfoTable
/****************************************************************************/
// usBIOSCapability Definition:
// Bit 0 = 0: Bios image is not Posted, =1:Bios image is Posted;
// Bit 1 = 0: Dual CRTC is not supported, =1: Dual CRTC is supported;
// Bit 2 = 0: Extended Desktop is not supported, =1: Extended Desktop is supported;
// Others: Reserved
#define ATOM_BIOS_INFO_ATOM_FIRMWARE_POSTED 0x0001
#define ATOM_BIOS_INFO_DUAL_CRTC_SUPPORT 0x0002
#define ATOM_BIOS_INFO_EXTENDED_DESKTOP_SUPPORT 0x0004
#define ATOM_BIOS_INFO_MEMORY_CLOCK_SS_SUPPORT 0x0008 // (valid from v1.1 ~v1.4):=1: memclk SS enable, =0 memclk SS disable.
#define ATOM_BIOS_INFO_ENGINE_CLOCK_SS_SUPPORT 0x0010 // (valid from v1.1 ~v1.4):=1: engclk SS enable, =0 engclk SS disable.
#define ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU 0x0020
#define ATOM_BIOS_INFO_WMI_SUPPORT 0x0040
#define ATOM_BIOS_INFO_PPMODE_ASSIGNGED_BY_SYSTEM 0x0080
#define ATOM_BIOS_INFO_HYPERMEMORY_SUPPORT 0x0100
#define ATOM_BIOS_INFO_HYPERMEMORY_SIZE_MASK 0x1E00
#define ATOM_BIOS_INFO_VPOST_WITHOUT_FIRST_MODE_SET 0x2000
#define ATOM_BIOS_INFO_BIOS_SCRATCH6_SCL2_REDEFINE 0x4000
#define ATOM_BIOS_INFO_MEMORY_CLOCK_EXT_SS_SUPPORT 0x0008 // (valid from v2.1 ): =1: memclk ss enable with external ss chip
#define ATOM_BIOS_INFO_ENGINE_CLOCK_EXT_SS_SUPPORT 0x0010 // (valid from v2.1 ): =1: engclk ss enable with external ss chip
#ifndef _H2INC
//Please don't add to or expand this bitfield structure below; it will be retired soon!
typedef struct _ATOM_FIRMWARE_CAPABILITY
{
#if ATOM_BIG_ENDIAN
USHORT Reserved:1;
USHORT SCL2Redefined:1;
USHORT PostWithoutModeSet:1;
USHORT HyperMemory_Size:4;
USHORT HyperMemory_Support:1;
USHORT PPMode_Assigned:1;
USHORT WMI_SUPPORT:1;
USHORT GPUControlsBL:1;
USHORT EngineClockSS_Support:1;
USHORT MemoryClockSS_Support:1;
USHORT ExtendedDesktopSupport:1;
USHORT DualCRTC_Support:1;
USHORT FirmwarePosted:1;
#else
USHORT FirmwarePosted:1;
USHORT DualCRTC_Support:1;
USHORT ExtendedDesktopSupport:1;
USHORT MemoryClockSS_Support:1;
USHORT EngineClockSS_Support:1;
USHORT GPUControlsBL:1;
USHORT WMI_SUPPORT:1;
USHORT PPMode_Assigned:1;
USHORT HyperMemory_Support:1;
USHORT HyperMemory_Size:4;
USHORT PostWithoutModeSet:1;
USHORT SCL2Redefined:1;
USHORT Reserved:1;
#endif
}ATOM_FIRMWARE_CAPABILITY;
typedef union _ATOM_FIRMWARE_CAPABILITY_ACCESS
{
ATOM_FIRMWARE_CAPABILITY sbfAccess;
USHORT susAccess;
}ATOM_FIRMWARE_CAPABILITY_ACCESS;
#else
typedef union _ATOM_FIRMWARE_CAPABILITY_ACCESS
{
USHORT susAccess;
}ATOM_FIRMWARE_CAPABILITY_ACCESS;
#endif
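// Illustrative sketch (not part of the original header): the ATOM_BIOS_INFO_*
// masks above mirror the (little-endian) bitfield layout, so capability bits
// can be tested through usAccess without the bitfield view. The helper name is
// hypothetical.
static inline int atom_fw_dual_crtc_supported(ATOM_FIRMWARE_CAPABILITY_ACCESS sCap)
{
    return (sCap.susAccess & ATOM_BIOS_INFO_DUAL_CRTC_SUPPORT) != 0;
}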
typedef struct _ATOM_FIRMWARE_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulFirmwareRevision;
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
ULONG ulDriverTargetEngineClock; //In 10Khz unit
ULONG ulDriverTargetMemoryClock; //In 10Khz unit
ULONG ulMaxEngineClockPLL_Output; //In 10Khz unit
ULONG ulMaxMemoryClockPLL_Output; //In 10Khz unit
ULONG ulMaxPixelClockPLL_Output; //In 10Khz unit
ULONG ulASICMaxEngineClock; //In 10Khz unit
ULONG ulASICMaxMemoryClock; //In 10Khz unit
UCHAR ucASICMaxTemperature;
UCHAR ucPadding[3]; //Don't use them
ULONG aulReservedForBIOS[3]; //Don't use them
USHORT usMinEngineClockPLL_Input; //In 10Khz unit
USHORT usMaxEngineClockPLL_Input; //In 10Khz unit
USHORT usMinEngineClockPLL_Output; //In 10Khz unit
USHORT usMinMemoryClockPLL_Input; //In 10Khz unit
USHORT usMaxMemoryClockPLL_Input; //In 10Khz unit
USHORT usMinMemoryClockPLL_Output; //In 10Khz unit
USHORT usMaxPixelClock; //In 10Khz unit, Max. Pclk
USHORT usMinPixelClockPLL_Input; //In 10Khz unit
USHORT usMaxPixelClockPLL_Input; //In 10Khz unit
USHORT usMinPixelClockPLL_Output; //In 10Khz unit, the definitions above can't change!!!
ATOM_FIRMWARE_CAPABILITY_ACCESS usFirmwareCapability;
USHORT usReferenceClock; //In 10Khz unit
USHORT usPM_RTS_Location; //RTS PM4 starting location in ROM in 1Kb unit
UCHAR ucPM_RTS_StreamSize; //RTS PM4 packets in Kb unit
UCHAR ucDesign_ID; //Indicates the board design
UCHAR ucMemoryModule_ID; //Indicates the board design
}ATOM_FIRMWARE_INFO;
typedef struct _ATOM_FIRMWARE_INFO_V1_2
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulFirmwareRevision;
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
ULONG ulDriverTargetEngineClock; //In 10Khz unit
ULONG ulDriverTargetMemoryClock; //In 10Khz unit
ULONG ulMaxEngineClockPLL_Output; //In 10Khz unit
ULONG ulMaxMemoryClockPLL_Output; //In 10Khz unit
ULONG ulMaxPixelClockPLL_Output; //In 10Khz unit
ULONG ulASICMaxEngineClock; //In 10Khz unit
ULONG ulASICMaxMemoryClock; //In 10Khz unit
UCHAR ucASICMaxTemperature;
UCHAR ucMinAllowedBL_Level;
UCHAR ucPadding[2]; //Don't use them
ULONG aulReservedForBIOS[2]; //Don't use them
ULONG ulMinPixelClockPLL_Output; //In 10Khz unit
USHORT usMinEngineClockPLL_Input; //In 10Khz unit
USHORT usMaxEngineClockPLL_Input; //In 10Khz unit
USHORT usMinEngineClockPLL_Output; //In 10Khz unit
USHORT usMinMemoryClockPLL_Input; //In 10Khz unit
USHORT usMaxMemoryClockPLL_Input; //In 10Khz unit
USHORT usMinMemoryClockPLL_Output; //In 10Khz unit
USHORT usMaxPixelClock; //In 10Khz unit, Max. Pclk
USHORT usMinPixelClockPLL_Input; //In 10Khz unit
USHORT usMaxPixelClockPLL_Input; //In 10Khz unit
USHORT usMinPixelClockPLL_Output; //In 10Khz unit - lower 16bit of ulMinPixelClockPLL_Output
ATOM_FIRMWARE_CAPABILITY_ACCESS usFirmwareCapability;
USHORT usReferenceClock; //In 10Khz unit
USHORT usPM_RTS_Location; //RTS PM4 starting location in ROM in 1Kb unit
UCHAR ucPM_RTS_StreamSize; //RTS PM4 packets in Kb unit
UCHAR ucDesign_ID; //Indicates the board design
UCHAR ucMemoryModule_ID; //Indicates the board design
}ATOM_FIRMWARE_INFO_V1_2;
typedef struct _ATOM_FIRMWARE_INFO_V1_3
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulFirmwareRevision;
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
ULONG ulDriverTargetEngineClock; //In 10Khz unit
ULONG ulDriverTargetMemoryClock; //In 10Khz unit
ULONG ulMaxEngineClockPLL_Output; //In 10Khz unit
ULONG ulMaxMemoryClockPLL_Output; //In 10Khz unit
ULONG ulMaxPixelClockPLL_Output; //In 10Khz unit
ULONG ulASICMaxEngineClock; //In 10Khz unit
ULONG ulASICMaxMemoryClock; //In 10Khz unit
UCHAR ucASICMaxTemperature;
UCHAR ucMinAllowedBL_Level;
UCHAR ucPadding[2]; //Don't use them
ULONG aulReservedForBIOS; //Don't use them
ULONG ul3DAccelerationEngineClock;//In 10Khz unit
ULONG ulMinPixelClockPLL_Output; //In 10Khz unit
USHORT usMinEngineClockPLL_Input; //In 10Khz unit
USHORT usMaxEngineClockPLL_Input; //In 10Khz unit
USHORT usMinEngineClockPLL_Output; //In 10Khz unit
USHORT usMinMemoryClockPLL_Input; //In 10Khz unit
USHORT usMaxMemoryClockPLL_Input; //In 10Khz unit
USHORT usMinMemoryClockPLL_Output; //In 10Khz unit
USHORT usMaxPixelClock; //In 10Khz unit, Max. Pclk
USHORT usMinPixelClockPLL_Input; //In 10Khz unit
USHORT usMaxPixelClockPLL_Input; //In 10Khz unit
USHORT usMinPixelClockPLL_Output; //In 10Khz unit - lower 16bit of ulMinPixelClockPLL_Output
ATOM_FIRMWARE_CAPABILITY_ACCESS usFirmwareCapability;
USHORT usReferenceClock; //In 10Khz unit
USHORT usPM_RTS_Location; //RTS PM4 starting location in ROM in 1Kb unit
UCHAR ucPM_RTS_StreamSize; //RTS PM4 packets in Kb unit
UCHAR ucDesign_ID; //Indicates the board design
UCHAR ucMemoryModule_ID; //Indicates the board design
}ATOM_FIRMWARE_INFO_V1_3;
typedef struct _ATOM_FIRMWARE_INFO_V1_4
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulFirmwareRevision;
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
ULONG ulDriverTargetEngineClock; //In 10Khz unit
ULONG ulDriverTargetMemoryClock; //In 10Khz unit
ULONG ulMaxEngineClockPLL_Output; //In 10Khz unit
ULONG ulMaxMemoryClockPLL_Output; //In 10Khz unit
ULONG ulMaxPixelClockPLL_Output; //In 10Khz unit
ULONG ulASICMaxEngineClock; //In 10Khz unit
ULONG ulASICMaxMemoryClock; //In 10Khz unit
UCHAR ucASICMaxTemperature;
UCHAR ucMinAllowedBL_Level;
USHORT usBootUpVDDCVoltage; //In MV unit
USHORT usLcdMinPixelClockPLL_Output; // In MHz unit
USHORT usLcdMaxPixelClockPLL_Output; // In MHz unit
ULONG ul3DAccelerationEngineClock;//In 10Khz unit
ULONG ulMinPixelClockPLL_Output; //In 10Khz unit
USHORT usMinEngineClockPLL_Input; //In 10Khz unit
USHORT usMaxEngineClockPLL_Input; //In 10Khz unit
USHORT usMinEngineClockPLL_Output; //In 10Khz unit
USHORT usMinMemoryClockPLL_Input; //In 10Khz unit
USHORT usMaxMemoryClockPLL_Input; //In 10Khz unit
USHORT usMinMemoryClockPLL_Output; //In 10Khz unit
USHORT usMaxPixelClock; //In 10Khz unit, Max. Pclk
USHORT usMinPixelClockPLL_Input; //In 10Khz unit
USHORT usMaxPixelClockPLL_Input; //In 10Khz unit
USHORT usMinPixelClockPLL_Output; //In 10Khz unit - lower 16bit of ulMinPixelClockPLL_Output
ATOM_FIRMWARE_CAPABILITY_ACCESS usFirmwareCapability;
USHORT usReferenceClock; //In 10Khz unit
USHORT usPM_RTS_Location; //RTS PM4 starting location in ROM in 1Kb unit
UCHAR ucPM_RTS_StreamSize; //RTS PM4 packets in Kb unit
UCHAR ucDesign_ID; //Indicates the board design
UCHAR ucMemoryModule_ID; //Indicates the board design
}ATOM_FIRMWARE_INFO_V1_4;
//the structure below is to be used from Cypress
typedef struct _ATOM_FIRMWARE_INFO_V2_1
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulFirmwareRevision;
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
ULONG ulReserved1;
ULONG ulReserved2;
ULONG ulMaxEngineClockPLL_Output; //In 10Khz unit
ULONG ulMaxMemoryClockPLL_Output; //In 10Khz unit
ULONG ulMaxPixelClockPLL_Output; //In 10Khz unit
ULONG ulBinaryAlteredInfo; //Was ulASICMaxEngineClock
ULONG ulDefaultDispEngineClkFreq; //In 10Khz unit
UCHAR ucReserved1; //Was ucASICMaxTemperature;
UCHAR ucMinAllowedBL_Level;
USHORT usBootUpVDDCVoltage; //In MV unit
USHORT usLcdMinPixelClockPLL_Output; // In MHz unit
USHORT usLcdMaxPixelClockPLL_Output; // In MHz unit
ULONG ulReserved4; //Was ulAsicMaximumVoltage
ULONG ulMinPixelClockPLL_Output; //In 10Khz unit
USHORT usMinEngineClockPLL_Input; //In 10Khz unit
USHORT usMaxEngineClockPLL_Input; //In 10Khz unit
USHORT usMinEngineClockPLL_Output; //In 10Khz unit
USHORT usMinMemoryClockPLL_Input; //In 10Khz unit
USHORT usMaxMemoryClockPLL_Input; //In 10Khz unit
USHORT usMinMemoryClockPLL_Output; //In 10Khz unit
USHORT usMaxPixelClock; //In 10Khz unit, Max. Pclk
USHORT usMinPixelClockPLL_Input; //In 10Khz unit
USHORT usMaxPixelClockPLL_Input; //In 10Khz unit
USHORT usMinPixelClockPLL_Output; //In 10Khz unit - lower 16bit of ulMinPixelClockPLL_Output
ATOM_FIRMWARE_CAPABILITY_ACCESS usFirmwareCapability;
USHORT usCoreReferenceClock; //In 10Khz unit
USHORT usMemoryReferenceClock; //In 10Khz unit
USHORT usUniphyDPModeExtClkFreq; //In 10Khz unit; if 0, the Uniphy input clock in DP mode comes from the internal PPLL, otherwise from the external spread clock
UCHAR ucMemoryModule_ID; //Indicates the board design
UCHAR ucReserved4[3];
}ATOM_FIRMWARE_INFO_V2_1;
//the structure below is to be used from NI
//ucTableFormatRevision=2
//ucTableContentRevision=2
typedef struct _ATOM_FIRMWARE_INFO_V2_2
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulFirmwareRevision;
ULONG ulDefaultEngineClock; //In 10Khz unit
ULONG ulDefaultMemoryClock; //In 10Khz unit
ULONG ulReserved[2];
ULONG ulReserved1; //Was ulMaxEngineClockPLL_Output; //In 10Khz unit*
ULONG ulReserved2; //Was ulMaxMemoryClockPLL_Output; //In 10Khz unit*
ULONG ulMaxPixelClockPLL_Output; //In 10Khz unit
ULONG ulBinaryAlteredInfo; //Was ulASICMaxEngineClock ?
ULONG ulDefaultDispEngineClkFreq; //In 10Khz unit. This is the frequency before DCDTO, corresponding to usBootUpVDDCVoltage.
UCHAR ucReserved3; //Was ucASICMaxTemperature;
UCHAR ucMinAllowedBL_Level;
USHORT usBootUpVDDCVoltage; //In MV unit
USHORT usLcdMinPixelClockPLL_Output; // In MHz unit
USHORT usLcdMaxPixelClockPLL_Output; // In MHz unit
ULONG ulReserved4; //Was ulAsicMaximumVoltage
ULONG ulMinPixelClockPLL_Output; //In 10Khz unit
UCHAR ucRemoteDisplayConfig;
UCHAR ucReserved5[3]; //Was usMinEngineClockPLL_Input and usMaxEngineClockPLL_Input
ULONG ulReserved6; //Was usMinEngineClockPLL_Output and usMinMemoryClockPLL_Input
ULONG ulReserved7; //Was usMaxMemoryClockPLL_Input and usMinMemoryClockPLL_Output
USHORT usReserved11; //Was usMaxPixelClock; //In 10Khz unit, Max. Pclk used only for DAC
USHORT usMinPixelClockPLL_Input; //In 10Khz unit
USHORT usMaxPixelClockPLL_Input; //In 10Khz unit
USHORT usBootUpVDDCIVoltage; //In unit of mv; Was usMinPixelClockPLL_Output;
ATOM_FIRMWARE_CAPABILITY_ACCESS usFirmwareCapability;
USHORT usCoreReferenceClock; //In 10Khz unit
USHORT usMemoryReferenceClock; //In 10Khz unit
USHORT usUniphyDPModeExtClkFreq; //In 10Khz unit; if 0, the Uniphy input clock in DP mode comes from the internal PPLL, otherwise from the external spread clock
UCHAR ucMemoryModule_ID; //Indicates the board design
UCHAR ucReserved9[3];
USHORT usBootUpMVDDCVoltage; //In unit of mv; Was usMinPixelClockPLL_Output;
USHORT usReserved12;
ULONG ulReserved10[3]; // New added comparing to previous version
}ATOM_FIRMWARE_INFO_V2_2;
#define ATOM_FIRMWARE_INFO_LAST ATOM_FIRMWARE_INFO_V2_2
// definition of ucRemoteDisplayConfig
#define REMOTE_DISPLAY_DISABLE 0x00
#define REMOTE_DISPLAY_ENABLE 0x01
/****************************************************************************/
// Structures used in IntegratedSystemInfoTable
/****************************************************************************/
#define IGP_CAP_FLAG_DYNAMIC_CLOCK_EN 0x2
#define IGP_CAP_FLAG_AC_CARD 0x4
#define IGP_CAP_FLAG_SDVO_CARD 0x8
#define IGP_CAP_FLAG_POSTDIV_BY_2_MODE 0x10
typedef struct _ATOM_INTEGRATED_SYSTEM_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulBootUpEngineClock; //in 10kHz unit
ULONG ulBootUpMemoryClock; //in 10kHz unit
ULONG ulMaxSystemMemoryClock; //in 10kHz unit
ULONG ulMinSystemMemoryClock; //in 10kHz unit
UCHAR ucNumberOfCyclesInPeriodHi;
UCHAR ucLCDTimingSel; //=0: not valid; !=0: select this timing descriptor from LCD EDID.
USHORT usReserved1;
USHORT usInterNBVoltageLow; //An intermediate PWM value to set the voltage
USHORT usInterNBVoltageHigh; //Another intermediate PWM value to set the voltage
ULONG ulReserved[2];
USHORT usFSBClock; //In MHz unit
USHORT usCapabilityFlag; //Bit0=1 indicates the fake HDMI support,Bit1=0/1 for Dynamic clocking dis/enable
//Bit[3:2]== 0:No PCIE card, 1:AC card, 2:SDVO card
//Bit[4]==1: P/2 mode, ==0: P/1 mode
USHORT usPCIENBCfgReg7; //bit[7:0]=MUX_Sel, bit[9:8]=MUX_SEL_LEVEL2, bit[10]=Lane_Reversal
USHORT usK8MemoryClock; //in MHz unit
USHORT usK8SyncStartDelay; //in 0.01 us unit
USHORT usK8DataReturnTime; //in 0.01 us unit
UCHAR ucMaxNBVoltage;
UCHAR ucMinNBVoltage;
UCHAR ucMemoryType; //[7:4]=1:DDR1;=2:DDR2;=3:DDR3.[3:0] is reserved
UCHAR ucNumberOfCyclesInPeriod; //CG.FVTHROT_PWM_CTRL_REG0.NumberOfCyclesInPeriod
UCHAR ucStartingPWM_HighTime; //CG.FVTHROT_PWM_CTRL_REG0.StartingPWM_HighTime
UCHAR ucHTLinkWidth; //16 bit vs. 8 bit
UCHAR ucMaxNBVoltageHigh;
UCHAR ucMinNBVoltageHigh;
}ATOM_INTEGRATED_SYSTEM_INFO;
/* Explanation on entries in ATOM_INTEGRATED_SYSTEM_INFO
ulBootUpMemoryClock: For Intel IGP,it's the UMA system memory clock
For AMD IGP,it's 0 if no SidePort memory installed or it's the boot-up SidePort memory clock
ulMaxSystemMemoryClock: For Intel IGP,it's the Max freq from memory SPD if memory runs in ASYNC mode or otherwise (SYNC mode) it's 0
For AMD IGP,for now this can be 0
ulMinSystemMemoryClock: For Intel IGP,it's 133MHz if memory runs in ASYNC mode or otherwise (SYNC mode) it's 0
For AMD IGP,for now this can be 0
usFSBClock: For Intel IGP,it's FSB Freq
For AMD IGP,it's HT Link Speed
usK8MemoryClock: For AMD IGP only. For RevF CPU, set it to 200
usK8SyncStartDelay: For AMD IGP only. Memory access latency in K8, required for watermark calculation
usK8DataReturnTime: For AMD IGP only. Memory access latency in K8, required for watermark calculation
VC:Voltage Control
ucMaxNBVoltage: Voltage regulator dependent PWM value. Low 8 bits of the value for the max voltage.Set this one to 0xFF if VC without PWM. Set this to 0x0 if no VC at all.
ucMinNBVoltage: Voltage regulator dependent PWM value. Low 8 bits of the value for the min voltage.Set this one to 0x00 if VC without PWM or no VC at all.
ucNumberOfCyclesInPeriod: Indicate how many cycles when PWM duty is 100%. low 8 bits of the value.
ucNumberOfCyclesInPeriodHi: Indicate how many cycles when PWM duty is 100%. high 8 bits of the value.If the PWM has an inverter,set bit [7]==1,otherwise set it 0
ucMaxNBVoltageHigh: Voltage regulator dependent PWM value. High 8 bits of the value for the max voltage.Set this one to 0xFF if VC without PWM. Set this to 0x0 if no VC at all.
ucMinNBVoltageHigh: Voltage regulator dependent PWM value. High 8 bits of the value for the min voltage.Set this one to 0x00 if VC without PWM or no VC at all.
-usInterNBVoltageLow: Voltage regulator dependent PWM value. The value makes the the voltage >=Min NB voltage but <=InterNBVoltageHigh. Set this to 0x0000 if VC without PWM or no VC at all.
-usInterNBVoltageHigh: Voltage regulator dependent PWM value. The value makes the the voltage >=InterNBVoltageLow but <=Max NB voltage.Set this to 0x0000 if VC without PWM or no VC at all.
+usInterNBVoltageLow: Voltage regulator dependent PWM value. The value makes the voltage >=Min NB voltage but <=InterNBVoltageHigh. Set this to 0x0000 if VC without PWM or no VC at all.
+usInterNBVoltageHigh: Voltage regulator dependent PWM value. The value makes the voltage >=InterNBVoltageLow but <=Max NB voltage.Set this to 0x0000 if VC without PWM or no VC at all.
*/
/*
The following IGP table is introduced from RS780; it is supposed to be put by SBIOS in FB before the IGP VBIOS starts VPOST.
Then VBIOS will copy the whole structure to its image so all GPU SW components can access this data structure to get whatever they need.
The ample reservation should allow us to never change table revisions. Whenever needed, a GPU SW component can use the reserved portion for new data entries.
SW components can access the IGP system info structure in the same way as before.
*/
typedef struct _ATOM_INTEGRATED_SYSTEM_INFO_V2
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulBootUpEngineClock; //in 10kHz unit
ULONG ulReserved1[2]; //must be 0x0 for the reserved
ULONG ulBootUpUMAClock; //in 10kHz unit
ULONG ulBootUpSidePortClock; //in 10kHz unit
ULONG ulMinSidePortClock; //in 10kHz unit
ULONG ulReserved2[6]; //must be 0x0 for the reserved
ULONG ulSystemConfig; //see explanation below
ULONG ulBootUpReqDisplayVector;
ULONG ulOtherDisplayMisc;
ULONG ulDDISlot1Config;
ULONG ulDDISlot2Config;
UCHAR ucMemoryType; //[3:0]=1:DDR1;=2:DDR2;=3:DDR3.[7:4] is reserved
UCHAR ucUMAChannelNumber;
UCHAR ucDockingPinBit;
UCHAR ucDockingPinPolarity;
ULONG ulDockingPinCFGInfo;
ULONG ulCPUCapInfo;
USHORT usNumberOfCyclesInPeriod;
USHORT usMaxNBVoltage;
USHORT usMinNBVoltage;
USHORT usBootUpNBVoltage;
ULONG ulHTLinkFreq; //in 10Khz
USHORT usMinHTLinkWidth;
USHORT usMaxHTLinkWidth;
USHORT usUMASyncStartDelay;
USHORT usUMADataReturnTime;
USHORT usLinkStatusZeroTime;
USHORT usDACEfuse; //for storing bandgap value (for RS880 only)
ULONG ulHighVoltageHTLinkFreq; // in 10Khz
ULONG ulLowVoltageHTLinkFreq; // in 10Khz
USHORT usMaxUpStreamHTLinkWidth;
USHORT usMaxDownStreamHTLinkWidth;
USHORT usMinUpStreamHTLinkWidth;
USHORT usMinDownStreamHTLinkWidth;
USHORT usFirmwareVersion; //0 means FW is not supported. Otherwise it's the FW version loaded by SBIOS and driver should enable FW.
USHORT usFullT0Time; // Input to calculate minimum HT link change time required by NB P-State. Unit is 0.01us.
ULONG ulReserved3[96]; //must be 0x0
}ATOM_INTEGRATED_SYSTEM_INFO_V2;
/*
ulBootUpEngineClock: Boot-up Engine Clock in 10Khz;
ulBootUpUMAClock: Boot-up UMA Clock in 10Khz; it must be 0x0 when UMA is not present
ulBootUpSidePortClock: Boot-up SidePort Clock in 10Khz; it must be 0x0 when SidePort Memory is not present,this could be equal to or less than maximum supported Sideport memory clock
ulSystemConfig:
Bit[0]=1: PowerExpress mode =0 Non-PowerExpress mode;
Bit[1]=1: system boots up at AMD overdrive state or user customized mode. In this case, driver will just stick to this boot-up mode. No other PowerPlay state
=0: system boots up at driver control state. Power state depends on PowerPlay table.
Bit[2]=1: PWM method is used on NB voltage control. =0: GPIO method is used.
Bit[3]=1: Only one power state(Performance) will be supported.
=0: Multiple power states supported from PowerPlay table.
Bit[4]=1: CLMC is supported and enabled on current system.
=0: CLMC is not supported or enabled on current system. SBIOS need to support HT link/freq change through ATIF interface.
Bit[5]=1: Enable CDLW for all driver control power states. Max HT width is from SBIOS, while Min HT width is determined by display requirement.
=0: CDLW is disabled. If CLMC is enabled case, Min HT width will be set equal to Max HT width. If CLMC disabled case, Max HT width will be applied.
Bit[6]=1: High Voltage requested for all power states. In this case, voltage will be forced at 1.1v and powerplay table voltage drop/throttling request will be ignored.
=0: Voltage setting is determined by the PowerPlay table.
Bit[7]=1: Enable CLMC as hybrid Mode. CDLD and CILR will be disabled in this case and we're using legacy C1E. This is workaround for CPU(Griffin) performance issue.
=0: Enable CLMC as regular mode, CDLD and CILR will be enabled.
Bit[8]=1: CDLF is supported and enabled on current system.
=0: CDLF is not supported or enabled on current system.
Bit[9]=1: DLL Shut Down feature is enabled on current system.
=0: DLL Shut Down feature is not enabled or supported on current system.
ulBootUpReqDisplayVector: This dword is a bit vector that indicates which display devices are requested during boot-up. Refer to ATOM_DEVICE_xxx_SUPPORT for the bit vector definitions.
ulOtherDisplayMisc: [15:8]- Bootup LCD Expansion selection; 0-center, 1-full panel size expansion;
[7:0] - BootupTV standard selection; This is a bit vector to indicate what TV standards are supported by the system. Refer to ucTVSupportedStd definition;
ulDDISlot1Config: Describes the PCIE lane configuration on this DDI PCIE slot (ADD2 card) or connector (Mobile design).
[3:0] - Bit vector to indicate PCIE lane config of the DDI slot/connector on chassis (bit 0=1 lane 3:0; bit 1=1 lane 7:4; bit 2=1 lane 11:8; bit 3=1 lane 15:12)
[7:4] - Bit vector to indicate PCIE lane config of the same DDI slot/connector on docking station (bit 4=1 lane 3:0; bit 5=1 lane 7:4; bit 6=1 lane 11:8; bit 7=1 lane 15:12)
When a DDI connector is not "paired" (meaning two mutually exclusive connections on chassis and docking, where only one of them can be connected at a time),
SBIOS has to duplicate the same PCIE lane info from chassis to docking or vice versa. For example: if
one DDI connector is only populated in docking with PCIE lanes 8-11, but there is no paired connection on chassis, SBIOS has to copy bit 6 to bit 2.
[15:8] - Lane configuration attribute;
[23:16]- Connector type, possible value:
CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D
CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D
CONNECTOR_OBJECT_ID_HDMI_TYPE_A
CONNECTOR_OBJECT_ID_DISPLAYPORT
CONNECTOR_OBJECT_ID_eDP
[31:24]- Reserved
ulDDISlot2Config: Same as Slot1.
ucMemoryType: SidePort memory type, set it to 0x0 when Sideport memory is not installed. Driver needs this info to change sideport memory clock. Not for display in CCC.
For IGP, Hypermemory is the only memory type shown in CCC.
ucUMAChannelNumber: how many channels for the UMA;
ulDockingPinCFGInfo: [15:0]-Bus/Device/Function # to CFG to read this Docking Pin; [31:16]-reg offset in CFG to read this pin
ucDockingPinBit: which bit in this register to read the pin status;
ucDockingPinPolarity:Polarity of the pin when docked;
ulCPUCapInfo: [7:0]=1:Griffin;[7:0]=2:Greyhound;[7:0]=3:K8, [7:0]=4:Pharaoh, other bits reserved for now and must be 0x0
usNumberOfCyclesInPeriod:Indicate how many cycles when PWM duty is 100%.
usMaxNBVoltage:Max. voltage control value in either PWM or GPIO mode.
usMinNBVoltage:Min. voltage control value in either PWM or GPIO mode.
GPIO mode: both usMaxNBVoltage & usMinNBVoltage have a valid value ulSystemConfig.SYSTEM_CONFIG_USE_PWM_ON_VOLTAGE=0
PWM mode: both usMaxNBVoltage & usMinNBVoltage have a valid value ulSystemConfig.SYSTEM_CONFIG_USE_PWM_ON_VOLTAGE=1
Mode not controlled by GPU SW: usMaxNBVoltage & usMinNBVoltage=0; ulSystemConfig.SYSTEM_CONFIG_USE_PWM_ON_VOLTAGE doesn't matter
usBootUpNBVoltage:Boot-up voltage regulator dependent PWM value.
ulHTLinkFreq: Bootup HT link Frequency in 10Khz.
usMinHTLinkWidth: Bootup minimum HT link width. If CDLW disabled, this is equal to usMaxHTLinkWidth.
If CDLW enabled, both upstream and downstream width should be the same during bootup.
usMaxHTLinkWidth: Bootup maximum HT link width. If CDLW disabled, this is equal to usMinHTLinkWidth.
If CDLW enabled, both upstream and downstream width should be the same during bootup.
usUMASyncStartDelay: Memory access latency, required for watermark calculation
usUMADataReturnTime: Memory access latency, required for watermark calculation
usLinkStatusZeroTime:Memory access latency required for watermark calculation; set this to 0x0 for K8 CPU, set a proper value in units of 0.01 us
for Griffin or Greyhound. SBIOS needs to convert to actual time as follows (see the conversion sketch after this comment block):
if T0Ttime [5:4]=00b, then usLinkStatusZeroTime=T0Ttime [3:0]*0.1us (0.0 to 1.5us)
if T0Ttime [5:4]=01b, then usLinkStatusZeroTime=T0Ttime [3:0]*0.5us (0.0 to 7.5us)
if T0Ttime [5:4]=10b, then usLinkStatusZeroTime=T0Ttime [3:0]*2.0us (0.0 to 30us)
if T0Ttime [5:4]=11b, and T0Ttime [3:0]=0x0 to 0xa, then usLinkStatusZeroTime=T0Ttime [3:0]*20us (0.0 to 200us)
ulHighVoltageHTLinkFreq: HT link frequency for power state with high voltage. If boot up runs in HT1, this must be 0.
This must be less than or equal to ulHTLinkFreq(bootup frequency).
ulLowVoltageHTLinkFreq: HT link frequency for power state with low voltage or voltage scaling 1.0v~1.1v. If boot up runs in HT1, this must be 0.
This must be less than or equal to ulHighVoltageHTLinkFreq.
usMaxUpStreamHTLinkWidth: Asymmetric link width support in the future, to replace usMaxHTLinkWidth. Not used for now.
usMaxDownStreamHTLinkWidth: same as above.
usMinUpStreamHTLinkWidth: Asymmetric link width support in the future, to replace usMinHTLinkWidth. Not used for now.
usMinDownStreamHTLinkWidth: same as above.
*/
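// Illustrative sketch (not part of the original header): converting a raw
// T0Ttime value to usLinkStatusZeroTime per the encoding described in the
// comment block above. Returns the time in units of 0.01us; the function name
// is hypothetical.
static inline USHORT atom_t0ttime_to_link_status_zero_time(UCHAR ucT0Ttime)
{
    UCHAR ucMantissa = ucT0Ttime & 0x0F;      // T0Ttime[3:0]
    switch ((ucT0Ttime >> 4) & 0x03)          // T0Ttime[5:4]
    {
    case 0:  return ucMantissa * 10;          // *0.1us  (0.0 to 1.5us)
    case 1:  return ucMantissa * 50;          // *0.5us  (0.0 to 7.5us)
    case 2:  return ucMantissa * 200;         // *2.0us  (0.0 to 30us)
    default: return ucMantissa * 2000;        // *20us   (T0Ttime[3:0]=0x0..0xa, 0.0 to 200us)
    }
}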
// ATOM_INTEGRATED_SYSTEM_INFO::ulCPUCapInfo - CPU type definition
#define INTEGRATED_SYSTEM_INFO__UNKNOWN_CPU 0
#define INTEGRATED_SYSTEM_INFO__AMD_CPU__GRIFFIN 1
#define INTEGRATED_SYSTEM_INFO__AMD_CPU__GREYHOUND 2
#define INTEGRATED_SYSTEM_INFO__AMD_CPU__K8 3
#define INTEGRATED_SYSTEM_INFO__AMD_CPU__PHARAOH 4
#define INTEGRATED_SYSTEM_INFO__AMD_CPU__OROCHI 5
#define INTEGRATED_SYSTEM_INFO__AMD_CPU__MAX_CODE INTEGRATED_SYSTEM_INFO__AMD_CPU__OROCHI // this define reflects the max defined CPU code
#define SYSTEM_CONFIG_POWEREXPRESS_ENABLE 0x00000001
#define SYSTEM_CONFIG_RUN_AT_OVERDRIVE_ENGINE 0x00000002
#define SYSTEM_CONFIG_USE_PWM_ON_VOLTAGE 0x00000004
#define SYSTEM_CONFIG_PERFORMANCE_POWERSTATE_ONLY 0x00000008
#define SYSTEM_CONFIG_CLMC_ENABLED 0x00000010
#define SYSTEM_CONFIG_CDLW_ENABLED 0x00000020
#define SYSTEM_CONFIG_HIGH_VOLTAGE_REQUESTED 0x00000040
#define SYSTEM_CONFIG_CLMC_HYBRID_MODE_ENABLED 0x00000080
#define SYSTEM_CONFIG_CDLF_ENABLED 0x00000100
#define SYSTEM_CONFIG_DLL_SHUTDOWN_ENABLED 0x00000200
#define IGP_DDI_SLOT_LANE_CONFIG_MASK 0x000000FF
#define b0IGP_DDI_SLOT_LANE_MAP_MASK 0x0F
#define b0IGP_DDI_SLOT_DOCKING_LANE_MAP_MASK 0xF0
#define b0IGP_DDI_SLOT_CONFIG_LANE_0_3 0x01
#define b0IGP_DDI_SLOT_CONFIG_LANE_4_7 0x02
#define b0IGP_DDI_SLOT_CONFIG_LANE_8_11 0x04
#define b0IGP_DDI_SLOT_CONFIG_LANE_12_15 0x08
#define IGP_DDI_SLOT_ATTRIBUTE_MASK 0x0000FF00
#define IGP_DDI_SLOT_CONFIG_REVERSED 0x00000100
#define b1IGP_DDI_SLOT_CONFIG_REVERSED 0x01
#define IGP_DDI_SLOT_CONNECTOR_TYPE_MASK 0x00FF0000
// IntegratedSystemInfoTable new rev is V5 after V2, because the real rev of V2 is v1.4. This rev is used for RR
typedef struct _ATOM_INTEGRATED_SYSTEM_INFO_V5
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulBootUpEngineClock; //in 10kHz unit
ULONG ulDentistVCOFreq; //Dentist VCO clock in 10kHz unit, the source of GPU SCLK, LCLK, UCLK and VCLK.
ULONG ulLClockFreq; //GPU Lclk freq in 10kHz unit, have relationship with NCLK in NorthBridge
ULONG ulBootUpUMAClock; //in 10kHz unit
ULONG ulReserved1[8]; //must be 0x0 for the reserved
ULONG ulBootUpReqDisplayVector;
ULONG ulOtherDisplayMisc;
ULONG ulReserved2[4]; //must be 0x0 for the reserved
ULONG ulSystemConfig; //TBD
ULONG ulCPUCapInfo; //TBD
USHORT usMaxNBVoltage; //high NB voltage, calculated using current VDDNB (D24F2xDC) and VDDNB offset fuse;
USHORT usMinNBVoltage; //low NB voltage, calculated using current VDDNB (D24F2xDC) and VDDNB offset fuse;
USHORT usBootUpNBVoltage; //boot up NB voltage
UCHAR ucHtcTmpLmt; //bit [22:16] of D24F3x64 Hardware Thermal Control (HTC) Register, may not be needed, TBD
UCHAR ucTjOffset; //bit [28:22] of D24F3xE4 Thermtrip Status Register,may not be needed, TBD
ULONG ulReserved3[4]; //must be 0x0 for the reserved
ULONG ulDDISlot1Config; //see above ulDDISlot1Config definition
ULONG ulDDISlot2Config;
ULONG ulDDISlot3Config;
ULONG ulDDISlot4Config;
ULONG ulReserved4[4]; //must be 0x0 for the reserved
UCHAR ucMemoryType; //[3:0]=1:DDR1;=2:DDR2;=3:DDR3.[7:4] is reserved
UCHAR ucUMAChannelNumber;
USHORT usReserved;
ULONG ulReserved5[4]; //must be 0x0 for the reserved
ULONG ulCSR_M3_ARB_CNTL_DEFAULT[10];//arrays with values for CSR M3 arbiter for default
ULONG ulCSR_M3_ARB_CNTL_UVD[10]; //arrays with values for CSR M3 arbiter for UVD playback
ULONG ulCSR_M3_ARB_CNTL_FS3D[10];//arrays with values for CSR M3 arbiter for Full Screen 3D applications
ULONG ulReserved6[61]; //must be 0x0
}ATOM_INTEGRATED_SYSTEM_INFO_V5;
#define ATOM_CRT_INT_ENCODER1_INDEX 0x00000000
#define ATOM_LCD_INT_ENCODER1_INDEX 0x00000001
#define ATOM_TV_INT_ENCODER1_INDEX 0x00000002
#define ATOM_DFP_INT_ENCODER1_INDEX 0x00000003
#define ATOM_CRT_INT_ENCODER2_INDEX 0x00000004
#define ATOM_LCD_EXT_ENCODER1_INDEX 0x00000005
#define ATOM_TV_EXT_ENCODER1_INDEX 0x00000006
#define ATOM_DFP_EXT_ENCODER1_INDEX 0x00000007
#define ATOM_CV_INT_ENCODER1_INDEX 0x00000008
#define ATOM_DFP_INT_ENCODER2_INDEX 0x00000009
#define ATOM_CRT_EXT_ENCODER1_INDEX 0x0000000A
#define ATOM_CV_EXT_ENCODER1_INDEX 0x0000000B
#define ATOM_DFP_INT_ENCODER3_INDEX 0x0000000C
#define ATOM_DFP_INT_ENCODER4_INDEX 0x0000000D
// define ASIC internal encoder id ( bit vector ), used for CRTC_SourceSelTable
#define ASIC_INT_DAC1_ENCODER_ID 0x00
#define ASIC_INT_TV_ENCODER_ID 0x02
#define ASIC_INT_DIG1_ENCODER_ID 0x03
#define ASIC_INT_DAC2_ENCODER_ID 0x04
#define ASIC_EXT_TV_ENCODER_ID 0x06
#define ASIC_INT_DVO_ENCODER_ID 0x07
#define ASIC_INT_DIG2_ENCODER_ID 0x09
#define ASIC_EXT_DIG_ENCODER_ID 0x05
#define ASIC_EXT_DIG2_ENCODER_ID 0x08
#define ASIC_INT_DIG3_ENCODER_ID 0x0a
#define ASIC_INT_DIG4_ENCODER_ID 0x0b
#define ASIC_INT_DIG5_ENCODER_ID 0x0c
#define ASIC_INT_DIG6_ENCODER_ID 0x0d
#define ASIC_INT_DIG7_ENCODER_ID 0x0e
//define Encoder attribute
#define ATOM_ANALOG_ENCODER 0
#define ATOM_DIGITAL_ENCODER 1
#define ATOM_DP_ENCODER 2
#define ATOM_ENCODER_ENUM_MASK 0x70
#define ATOM_ENCODER_ENUM_ID1 0x00
#define ATOM_ENCODER_ENUM_ID2 0x10
#define ATOM_ENCODER_ENUM_ID3 0x20
#define ATOM_ENCODER_ENUM_ID4 0x30
#define ATOM_ENCODER_ENUM_ID5 0x40
#define ATOM_ENCODER_ENUM_ID6 0x50
#define ATOM_DEVICE_CRT1_INDEX 0x00000000
#define ATOM_DEVICE_LCD1_INDEX 0x00000001
#define ATOM_DEVICE_TV1_INDEX 0x00000002
#define ATOM_DEVICE_DFP1_INDEX 0x00000003
#define ATOM_DEVICE_CRT2_INDEX 0x00000004
#define ATOM_DEVICE_LCD2_INDEX 0x00000005
#define ATOM_DEVICE_DFP6_INDEX 0x00000006
#define ATOM_DEVICE_DFP2_INDEX 0x00000007
#define ATOM_DEVICE_CV_INDEX 0x00000008
#define ATOM_DEVICE_DFP3_INDEX 0x00000009
#define ATOM_DEVICE_DFP4_INDEX 0x0000000A
#define ATOM_DEVICE_DFP5_INDEX 0x0000000B
#define ATOM_DEVICE_RESERVEDC_INDEX 0x0000000C
#define ATOM_DEVICE_RESERVEDD_INDEX 0x0000000D
#define ATOM_DEVICE_RESERVEDE_INDEX 0x0000000E
#define ATOM_DEVICE_RESERVEDF_INDEX 0x0000000F
#define ATOM_MAX_SUPPORTED_DEVICE_INFO (ATOM_DEVICE_DFP3_INDEX+1)
#define ATOM_MAX_SUPPORTED_DEVICE_INFO_2 ATOM_MAX_SUPPORTED_DEVICE_INFO
#define ATOM_MAX_SUPPORTED_DEVICE_INFO_3 (ATOM_DEVICE_DFP5_INDEX + 1 )
#define ATOM_MAX_SUPPORTED_DEVICE (ATOM_DEVICE_RESERVEDF_INDEX+1)
#define ATOM_DEVICE_CRT1_SUPPORT (0x1L << ATOM_DEVICE_CRT1_INDEX )
#define ATOM_DEVICE_LCD1_SUPPORT (0x1L << ATOM_DEVICE_LCD1_INDEX )
#define ATOM_DEVICE_TV1_SUPPORT (0x1L << ATOM_DEVICE_TV1_INDEX )
#define ATOM_DEVICE_DFP1_SUPPORT (0x1L << ATOM_DEVICE_DFP1_INDEX )
#define ATOM_DEVICE_CRT2_SUPPORT (0x1L << ATOM_DEVICE_CRT2_INDEX )
#define ATOM_DEVICE_LCD2_SUPPORT (0x1L << ATOM_DEVICE_LCD2_INDEX )
#define ATOM_DEVICE_DFP6_SUPPORT (0x1L << ATOM_DEVICE_DFP6_INDEX )
#define ATOM_DEVICE_DFP2_SUPPORT (0x1L << ATOM_DEVICE_DFP2_INDEX )
#define ATOM_DEVICE_CV_SUPPORT (0x1L << ATOM_DEVICE_CV_INDEX )
#define ATOM_DEVICE_DFP3_SUPPORT (0x1L << ATOM_DEVICE_DFP3_INDEX )
#define ATOM_DEVICE_DFP4_SUPPORT (0x1L << ATOM_DEVICE_DFP4_INDEX )
#define ATOM_DEVICE_DFP5_SUPPORT (0x1L << ATOM_DEVICE_DFP5_INDEX )
#define ATOM_DEVICE_CRT_SUPPORT (ATOM_DEVICE_CRT1_SUPPORT | ATOM_DEVICE_CRT2_SUPPORT)
#define ATOM_DEVICE_DFP_SUPPORT (ATOM_DEVICE_DFP1_SUPPORT | ATOM_DEVICE_DFP2_SUPPORT | ATOM_DEVICE_DFP3_SUPPORT | ATOM_DEVICE_DFP4_SUPPORT | ATOM_DEVICE_DFP5_SUPPORT | ATOM_DEVICE_DFP6_SUPPORT)
#define ATOM_DEVICE_TV_SUPPORT (ATOM_DEVICE_TV1_SUPPORT)
#define ATOM_DEVICE_LCD_SUPPORT (ATOM_DEVICE_LCD1_SUPPORT | ATOM_DEVICE_LCD2_SUPPORT)
#define ATOM_DEVICE_CONNECTOR_TYPE_MASK 0x000000F0
#define ATOM_DEVICE_CONNECTOR_TYPE_SHIFT 0x00000004
#define ATOM_DEVICE_CONNECTOR_VGA 0x00000001
#define ATOM_DEVICE_CONNECTOR_DVI_I 0x00000002
#define ATOM_DEVICE_CONNECTOR_DVI_D 0x00000003
#define ATOM_DEVICE_CONNECTOR_DVI_A 0x00000004
#define ATOM_DEVICE_CONNECTOR_SVIDEO 0x00000005
#define ATOM_DEVICE_CONNECTOR_COMPOSITE 0x00000006
#define ATOM_DEVICE_CONNECTOR_LVDS 0x00000007
#define ATOM_DEVICE_CONNECTOR_DIGI_LINK 0x00000008
#define ATOM_DEVICE_CONNECTOR_SCART 0x00000009
#define ATOM_DEVICE_CONNECTOR_HDMI_TYPE_A 0x0000000A
#define ATOM_DEVICE_CONNECTOR_HDMI_TYPE_B 0x0000000B
#define ATOM_DEVICE_CONNECTOR_CASE_1 0x0000000E
#define ATOM_DEVICE_CONNECTOR_DISPLAYPORT 0x0000000F
#define ATOM_DEVICE_DAC_INFO_MASK 0x0000000F
#define ATOM_DEVICE_DAC_INFO_SHIFT 0x00000000
#define ATOM_DEVICE_DAC_INFO_NODAC 0x00000000
#define ATOM_DEVICE_DAC_INFO_DACA 0x00000001
#define ATOM_DEVICE_DAC_INFO_DACB 0x00000002
#define ATOM_DEVICE_DAC_INFO_EXDAC 0x00000003
#define ATOM_DEVICE_I2C_ID_NOI2C 0x00000000
#define ATOM_DEVICE_I2C_LINEMUX_MASK 0x0000000F
#define ATOM_DEVICE_I2C_LINEMUX_SHIFT 0x00000000
#define ATOM_DEVICE_I2C_ID_MASK 0x00000070
#define ATOM_DEVICE_I2C_ID_SHIFT 0x00000004
#define ATOM_DEVICE_I2C_ID_IS_FOR_NON_MM_USE 0x00000001
#define ATOM_DEVICE_I2C_ID_IS_FOR_MM_USE 0x00000002
#define ATOM_DEVICE_I2C_ID_IS_FOR_SDVO_USE 0x00000003 //For IGP RS600
#define ATOM_DEVICE_I2C_ID_IS_FOR_DAC_SCL 0x00000004 //For IGP RS690
#define ATOM_DEVICE_I2C_HARDWARE_CAP_MASK 0x00000080
#define ATOM_DEVICE_I2C_HARDWARE_CAP_SHIFT 0x00000007
#define ATOM_DEVICE_USES_SOFTWARE_ASSISTED_I2C 0x00000000
#define ATOM_DEVICE_USES_HARDWARE_ASSISTED_I2C 0x00000001
// usDeviceSupport:
// Bit 0 = 0 - no CRT1 support, = 1 - CRT1 is supported
// Bit 1 = 0 - no LCD1 support, = 1 - LCD1 is supported
// Bit 2 = 0 - no TV1 support, = 1 - TV1 is supported
// Bit 3 = 0 - no DFP1 support, = 1 - DFP1 is supported
// Bit 4 = 0 - no CRT2 support, = 1 - CRT2 is supported
// Bit 5 = 0 - no LCD2 support, = 1 - LCD2 is supported
// Bit 6 = 0 - no DFP6 support, = 1 - DFP6 is supported
// Bit 7 = 0 - no DFP2 support, = 1 - DFP2 is supported
// Bit 8 = 0 - no CV support, = 1 - CV is supported
// Bit 9 = 0 - no DFP3 support, = 1 - DFP3 is supported
// Bit 10 = 0 - no DFP4 support, = 1 - DFP4 is supported
// Bit 11 = 0 - no DFP5 support, = 1 - DFP5 is supported
//
//
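// Illustrative sketch (not part of the original header): testing the
// usDeviceSupport bit vector above against the ATOM_DEVICE_xxx_SUPPORT masks.
// The helper name is hypothetical.
static inline int atom_device_supported(USHORT usDeviceSupport, ULONG ulSupportMask)
{
    return (usDeviceSupport & ulSupportMask) != 0;
}
// e.g. atom_device_supported(usDeviceSupport, ATOM_DEVICE_LCD1_SUPPORT)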
/****************************************************************************/
/* Structure used in MclkSS_InfoTable */
/****************************************************************************/
// ucI2C_ConfigID
// [7:0] - I2C LINE Associate ID
// = 0 - no I2C
// [7] - HW_Cap = 1, [6:0]=HW assisted I2C ID(HW line selection)
// = 0, [6:0]=SW assisted I2C ID
// [6:4] - HW_ENGINE_ID = 1, HW engine for NON multimedia use
// = 2, HW engine for Multimedia use
// = 3-7 Reserved for future I2C engines
// [3:0] - I2C_LINE_MUX = A Mux number when it's HW assisted I2C or GPIO ID when it's SW I2C
typedef struct _ATOM_I2C_ID_CONFIG
{
#if ATOM_BIG_ENDIAN
UCHAR bfHW_Capable:1;
UCHAR bfHW_EngineID:3;
UCHAR bfI2C_LineMux:4;
#else
UCHAR bfI2C_LineMux:4;
UCHAR bfHW_EngineID:3;
UCHAR bfHW_Capable:1;
#endif
}ATOM_I2C_ID_CONFIG;
typedef union _ATOM_I2C_ID_CONFIG_ACCESS
{
ATOM_I2C_ID_CONFIG sbfAccess;
UCHAR ucAccess;
}ATOM_I2C_ID_CONFIG_ACCESS;
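// Decode sketch (illustrative only, following the ucI2C_ConfigID layout
// documented above): pulling the fields out of a raw config byte through the
// access union.
static inline void atom_decode_i2c_id(UCHAR ucConfigId, UCHAR *pucHwCapable,
                                      UCHAR *pucEngineId, UCHAR *pucLineMux)
{
    ATOM_I2C_ID_CONFIG_ACCESS sAccess;
    sAccess.ucAccess = ucConfigId;
    *pucHwCapable = sAccess.sbfAccess.bfHW_Capable;    // [7]   HW_Cap
    *pucEngineId  = sAccess.sbfAccess.bfHW_EngineID;   // [6:4] HW engine ID
    *pucLineMux   = sAccess.sbfAccess.bfI2C_LineMux;   // [3:0] mux number / GPIO ID
}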
/****************************************************************************/
// Structure used in GPIO_I2C_InfoTable
/****************************************************************************/
typedef struct _ATOM_GPIO_I2C_ASSIGMENT
{
USHORT usClkMaskRegisterIndex;
USHORT usClkEnRegisterIndex;
USHORT usClkY_RegisterIndex;
USHORT usClkA_RegisterIndex;
USHORT usDataMaskRegisterIndex;
USHORT usDataEnRegisterIndex;
USHORT usDataY_RegisterIndex;
USHORT usDataA_RegisterIndex;
ATOM_I2C_ID_CONFIG_ACCESS sucI2cId;
UCHAR ucClkMaskShift;
UCHAR ucClkEnShift;
UCHAR ucClkY_Shift;
UCHAR ucClkA_Shift;
UCHAR ucDataMaskShift;
UCHAR ucDataEnShift;
UCHAR ucDataY_Shift;
UCHAR ucDataA_Shift;
UCHAR ucReserved1;
UCHAR ucReserved2;
}ATOM_GPIO_I2C_ASSIGMENT;
typedef struct _ATOM_GPIO_I2C_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_GPIO_I2C_ASSIGMENT asGPIO_Info[ATOM_MAX_SUPPORTED_DEVICE];
}ATOM_GPIO_I2C_INFO;
/****************************************************************************/
// Common Structure used in other structures
/****************************************************************************/
#ifndef _H2INC
//Please don't add to or expand this bitfield structure below; it will be retired soon!
typedef struct _ATOM_MODE_MISC_INFO
{
#if ATOM_BIG_ENDIAN
USHORT Reserved:6;
USHORT RGB888:1;
USHORT DoubleClock:1;
USHORT Interlace:1;
USHORT CompositeSync:1;
USHORT V_ReplicationBy2:1;
USHORT H_ReplicationBy2:1;
USHORT VerticalCutOff:1;
USHORT VSyncPolarity:1; //0=Active High, 1=Active Low
USHORT HSyncPolarity:1; //0=Active High, 1=Active Low
USHORT HorizontalCutOff:1;
#else
USHORT HorizontalCutOff:1;
USHORT HSyncPolarity:1; //0=Active High, 1=Active Low
USHORT VSyncPolarity:1; //0=Active High, 1=Active Low
USHORT VerticalCutOff:1;
USHORT H_ReplicationBy2:1;
USHORT V_ReplicationBy2:1;
USHORT CompositeSync:1;
USHORT Interlace:1;
USHORT DoubleClock:1;
USHORT RGB888:1;
USHORT Reserved:6;
#endif
}ATOM_MODE_MISC_INFO;
typedef union _ATOM_MODE_MISC_INFO_ACCESS
{
ATOM_MODE_MISC_INFO sbfAccess;
USHORT usAccess;
}ATOM_MODE_MISC_INFO_ACCESS;
#else
typedef union _ATOM_MODE_MISC_INFO_ACCESS
{
USHORT usAccess;
}ATOM_MODE_MISC_INFO_ACCESS;
#endif
// usModeMiscInfo-
#define ATOM_H_CUTOFF 0x01
#define ATOM_HSYNC_POLARITY 0x02 //0=Active High, 1=Active Low
#define ATOM_VSYNC_POLARITY 0x04 //0=Active High, 1=Active Low
#define ATOM_V_CUTOFF 0x08
#define ATOM_H_REPLICATIONBY2 0x10
#define ATOM_V_REPLICATIONBY2 0x20
#define ATOM_COMPOSITESYNC 0x40
#define ATOM_INTERLACE 0x80
#define ATOM_DOUBLE_CLOCK_MODE 0x100
#define ATOM_RGB888_MODE 0x200
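// Usage sketch (illustrative only): the usModeMiscInfo masks above mirror the
// ATOM_MODE_MISC_INFO bitfield, so a raw usAccess value can be tested
// directly against them.
static inline int atom_mode_is_interlaced(ATOM_MODE_MISC_INFO_ACCESS susInfo)
{
    return (susInfo.usAccess & ATOM_INTERLACE) != 0;
}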
//usRefreshRate-
#define ATOM_REFRESH_43 43
#define ATOM_REFRESH_47 47
#define ATOM_REFRESH_56 56
#define ATOM_REFRESH_60 60
#define ATOM_REFRESH_65 65
#define ATOM_REFRESH_70 70
#define ATOM_REFRESH_72 72
#define ATOM_REFRESH_75 75
#define ATOM_REFRESH_85 85
// ATOM_MODE_TIMING data are exactly the same as VESA timing data.
// To translate from EDID to ATOM_MODE_TIMING, use the following formulas.
//
// VESA_HTOTAL = VESA_ACTIVE + 2* VESA_BORDER + VESA_BLANK
// = EDID_HA + EDID_HBL
// VESA_HDISP = VESA_ACTIVE = EDID_HA
// VESA_HSYNC_START = VESA_ACTIVE + VESA_BORDER + VESA_FRONT_PORCH
// = EDID_HA + EDID_HSO
// VESA_HSYNC_WIDTH = VESA_HSYNC_TIME = EDID_HSPW
// VESA_BORDER = EDID_BORDER
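// Translation sketch (illustrative only, applying the formulas above; the
// edid_* parameter names are hypothetical stand-ins for EDID detailed-timing
// fields):
static inline USHORT atom_vesa_htotal(USHORT edid_ha, USHORT edid_hbl)
{
    return edid_ha + edid_hbl;          // VESA_HTOTAL = EDID_HA + EDID_HBL
}
static inline USHORT atom_vesa_hsync_start(USHORT edid_ha, USHORT edid_hso)
{
    return edid_ha + edid_hso;          // VESA_HSYNC_START = EDID_HA + EDID_HSO
}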
/****************************************************************************/
// Structure used in SetCRTC_UsingDTDTimingTable
/****************************************************************************/
typedef struct _SET_CRTC_USING_DTD_TIMING_PARAMETERS
{
USHORT usH_Size;
USHORT usH_Blanking_Time;
USHORT usV_Size;
USHORT usV_Blanking_Time;
USHORT usH_SyncOffset;
USHORT usH_SyncWidth;
USHORT usV_SyncOffset;
USHORT usV_SyncWidth;
ATOM_MODE_MISC_INFO_ACCESS susModeMiscInfo;
UCHAR ucH_Border; // From DFP EDID
UCHAR ucV_Border;
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucPadding[3];
}SET_CRTC_USING_DTD_TIMING_PARAMETERS;
/****************************************************************************/
// Structure used in SetCRTC_TimingTable
/****************************************************************************/
typedef struct _SET_CRTC_TIMING_PARAMETERS
{
USHORT usH_Total; // horizontal total
USHORT usH_Disp; // horizontal display
USHORT usH_SyncStart; // horizontal Sync start
USHORT usH_SyncWidth; // horizontal Sync width
USHORT usV_Total; // vertical total
USHORT usV_Disp; // vertical display
USHORT usV_SyncStart; // vertical Sync start
USHORT usV_SyncWidth; // vertical Sync width
ATOM_MODE_MISC_INFO_ACCESS susModeMiscInfo;
UCHAR ucCRTC; // ATOM_CRTC1 or ATOM_CRTC2
UCHAR ucOverscanRight; // right
UCHAR ucOverscanLeft; // left
UCHAR ucOverscanBottom; // bottom
UCHAR ucOverscanTop; // top
UCHAR ucReserved;
}SET_CRTC_TIMING_PARAMETERS;
#define SET_CRTC_TIMING_PARAMETERS_PS_ALLOCATION SET_CRTC_TIMING_PARAMETERS
/****************************************************************************/
// Structure used in StandardVESA_TimingTable
// AnalogTV_InfoTable
// ComponentVideoInfoTable
/****************************************************************************/
typedef struct _ATOM_MODE_TIMING
{
USHORT usCRTC_H_Total;
USHORT usCRTC_H_Disp;
USHORT usCRTC_H_SyncStart;
USHORT usCRTC_H_SyncWidth;
USHORT usCRTC_V_Total;
USHORT usCRTC_V_Disp;
USHORT usCRTC_V_SyncStart;
USHORT usCRTC_V_SyncWidth;
USHORT usPixelClock; //in 10kHz units
ATOM_MODE_MISC_INFO_ACCESS susModeMiscInfo;
USHORT usCRTC_OverscanRight;
USHORT usCRTC_OverscanLeft;
USHORT usCRTC_OverscanBottom;
USHORT usCRTC_OverscanTop;
USHORT usReserve;
UCHAR ucInternalModeNumber;
UCHAR ucRefreshRate;
}ATOM_MODE_TIMING;
typedef struct _ATOM_DTD_FORMAT
{
USHORT usPixClk;
USHORT usHActive;
USHORT usHBlanking_Time;
USHORT usVActive;
USHORT usVBlanking_Time;
USHORT usHSyncOffset;
USHORT usHSyncWidth;
USHORT usVSyncOffset;
USHORT usVSyncWidth;
USHORT usImageHSize;
USHORT usImageVSize;
UCHAR ucHBorder;
UCHAR ucVBorder;
ATOM_MODE_MISC_INFO_ACCESS susModeMiscInfo;
UCHAR ucInternalModeNumber;
UCHAR ucRefreshRate;
}ATOM_DTD_FORMAT;
/****************************************************************************/
// Structure used in LVDS_InfoTable
// * Need a document to describe this table
/****************************************************************************/
#define SUPPORTED_LCD_REFRESHRATE_30Hz 0x0004
#define SUPPORTED_LCD_REFRESHRATE_40Hz 0x0008
#define SUPPORTED_LCD_REFRESHRATE_50Hz 0x0010
#define SUPPORTED_LCD_REFRESHRATE_60Hz 0x0020
//ucTableFormatRevision=1
//ucTableContentRevision=1
typedef struct _ATOM_LVDS_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_DTD_FORMAT sLCDTiming;
USHORT usModePatchTableOffset;
USHORT usSupportedRefreshRate; //Refer to panel info table in ATOMBIOS extension Spec.
USHORT usOffDelayInMs;
UCHAR ucPowerSequenceDigOntoDEin10Ms;
UCHAR ucPowerSequenceDEtoBLOnin10Ms;
UCHAR ucLVDS_Misc; // Bit0:{=0:single, =1:dual},Bit1 {=0:666RGB, =1:888RGB},Bit2:3:{Grey level}
// Bit4:{=0:LDI format for RGB888, =1 FPDI format for RGB888}
// Bit5:{=0:Spatial Dithering disabled; =1:Spatial Dithering enabled}
// Bit6:{=0:Temporal Dithering disabled; =1:Temporal Dithering enabled}
UCHAR ucPanelDefaultRefreshRate;
UCHAR ucPanelIdentification;
UCHAR ucSS_Id;
}ATOM_LVDS_INFO;
//ucTableFormatRevision=1
//ucTableContentRevision=2
typedef struct _ATOM_LVDS_INFO_V12
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_DTD_FORMAT sLCDTiming;
USHORT usExtInfoTableOffset;
USHORT usSupportedRefreshRate; //Refer to panel info table in ATOMBIOS extension Spec.
USHORT usOffDelayInMs;
UCHAR ucPowerSequenceDigOntoDEin10Ms;
UCHAR ucPowerSequenceDEtoBLOnin10Ms;
UCHAR ucLVDS_Misc; // Bit0:{=0:single, =1:dual},Bit1 {=0:666RGB, =1:888RGB},Bit2:3:{Grey level}
// Bit4:{=0:LDI format for RGB888, =1 FPDI format for RGB888}
// Bit5:{=0:Spatial Dithering disabled; =1:Spatial Dithering enabled}
// Bit6:{=0:Temporal Dithering disabled; =1:Temporal Dithering enabled}
UCHAR ucPanelDefaultRefreshRate;
UCHAR ucPanelIdentification;
UCHAR ucSS_Id;
USHORT usLCDVenderID;
USHORT usLCDProductID;
UCHAR ucLCDPanel_SpecialHandlingCap;
UCHAR ucPanelInfoSize; // start from ATOM_DTD_FORMAT to end of panel info, include ExtInfoTable
UCHAR ucReserved[2];
}ATOM_LVDS_INFO_V12;
//Definitions for ucLCDPanel_SpecialHandlingCap:
//Once DAL sees this CAP is set, it will read EDID from LCD on its own instead of using sLCDTiming in ATOM_LVDS_INFO_V12.
//Other entries in ATOM_LVDS_INFO_V12 are still valid/useful to DAL
#define LCDPANEL_CAP_READ_EDID 0x1
//If a design supports DRR (dynamic refresh rate) on internal panels (LVDS or EDP), this cap is set in ucLCDPanel_SpecialHandlingCap together
//with multiple supported refresh rates@usSupportedRefreshRate. This cap should not be set when only slow refresh rate is supported (static
//refresh rate switch by SW). This is only valid from ATOM_LVDS_INFO_V12 onwards.
#define LCDPANEL_CAP_DRR_SUPPORTED 0x2
//Use this cap bit for a quick reference whether an embedded panel (LCD1) is LVDS or eDP.
#define LCDPANEL_CAP_eDP 0x4
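// Quick-reference sketch (illustrative only): testing
// ucLCDPanel_SpecialHandlingCap against the cap bits above.
static inline int atom_panel_is_edp(UCHAR ucCap)
{
    return (ucCap & LCDPANEL_CAP_eDP) != 0;
}
static inline int atom_panel_supports_drr(UCHAR ucCap)
{
    return (ucCap & LCDPANEL_CAP_DRR_SUPPORTED) != 0;
}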
//Color Bit Depth definition in EDID V1.4 @BYTE 14h
//Bit 6 5 4
// 0 0 0 - Color bit depth is undefined
// 0 0 1 - 6 Bits per Primary Color
// 0 1 0 - 8 Bits per Primary Color
// 0 1 1 - 10 Bits per Primary Color
// 1 0 0 - 12 Bits per Primary Color
// 1 0 1 - 14 Bits per Primary Color
// 1 1 0 - 16 Bits per Primary Color
// 1 1 1 - Reserved
#define PANEL_COLOR_BIT_DEPTH_MASK 0x70
// Bit7:{=0:Random Dithering disabled; =1:Random Dithering enabled}
#define PANEL_RANDOM_DITHER 0x80
#define PANEL_RANDOM_DITHER_MASK 0x80
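// Decode sketch (illustrative only, following the EDID V1.4 byte 14h table
// above): codes 1..6 in bits [6:4] map linearly to 6..16 bits per primary
// color; 0 is undefined and 7 is reserved.
static inline int atom_panel_color_bit_depth(UCHAR ucMisc)
{
    UCHAR ucCode = (ucMisc & PANEL_COLOR_BIT_DEPTH_MASK) >> 4;
    if (ucCode == 0 || ucCode == 7)
        return 0;                       // undefined or reserved
    return 4 + 2 * ucCode;              // 1 -> 6 bpc, 2 -> 8 bpc, ... 6 -> 16 bpc
}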
#define ATOM_LVDS_INFO_LAST ATOM_LVDS_INFO_V12 // no need to change this
/****************************************************************************/
// Structures used by LCD_InfoTable V1.3 Note: previous version was called ATOM_LVDS_INFO_V12
// ASIC Families: NI
// ucTableFormatRevision=1
// ucTableContentRevision=3
/****************************************************************************/
typedef struct _ATOM_LCD_INFO_V13
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_DTD_FORMAT sLCDTiming;
USHORT usExtInfoTableOffset;
USHORT usSupportedRefreshRate; //Refer to panel info table in ATOMBIOS extension Spec.
ULONG ulReserved0;
UCHAR ucLCD_Misc; // Reorganized in V13
// Bit0: {=0:single, =1:dual},
// Bit1: {=0:LDI format for RGB888, =1 FPDI format for RGB888} // was {=0:666RGB, =1:888RGB},
// Bit3:2: {Grey level}
// Bit6:4 Color Bit Depth definition (see below definition in EDID V1.4 @BYTE 14h)
// Bit7 Reserved (was ATOM_PANEL_MISC_API_ENABLED; still needed?)
UCHAR ucPanelDefaultRefreshRate;
UCHAR ucPanelIdentification;
UCHAR ucSS_Id;
USHORT usLCDVenderID;
USHORT usLCDProductID;
UCHAR ucLCDPanel_SpecialHandlingCap; // Reorganized in V13
// Bit0: Once DAL sees this CAP is set, it will read EDID from LCD on its own
// Bit1: See LCDPANEL_CAP_DRR_SUPPORTED
// Bit2: a quick reference whether an embedded panel (LCD1) is LVDS (0) or eDP (1)
// Bit7-3: Reserved
UCHAR ucPanelInfoSize; // start from ATOM_DTD_FORMAT to end of panel info, include ExtInfoTable
USHORT usBacklightPWM; // Backlight PWM in Hz. New in _V13
UCHAR ucPowerSequenceDIGONtoDE_in4Ms;
UCHAR ucPowerSequenceDEtoVARY_BL_in4Ms;
UCHAR ucPowerSequenceVARY_BLtoDE_in4Ms;
UCHAR ucPowerSequenceDEtoDIGON_in4Ms;
UCHAR ucOffDelay_in4Ms;
UCHAR ucPowerSequenceVARY_BLtoBLON_in4Ms;
UCHAR ucPowerSequenceBLONtoVARY_BL_in4Ms;
UCHAR ucReserved1;
UCHAR ucDPCD_eDP_CONFIGURATION_CAP; // dpcd 0dh
UCHAR ucDPCD_MAX_LINK_RATE; // dpcd 01h
UCHAR ucDPCD_MAX_LANE_COUNT; // dpcd 02h
UCHAR ucDPCD_MAX_DOWNSPREAD; // dpcd 03h
USHORT usMaxPclkFreqInSingleLink; // Max PixelClock frequency in single link mode.
UCHAR uceDPToLVDSRxId;
UCHAR ucLcdReservd;
ULONG ulReserved[2];
}ATOM_LCD_INFO_V13;
#define ATOM_LCD_INFO_LAST ATOM_LCD_INFO_V13
//Definitions for ucLCD_Misc
#define ATOM_PANEL_MISC_V13_DUAL 0x00000001
#define ATOM_PANEL_MISC_V13_FPDI 0x00000002
#define ATOM_PANEL_MISC_V13_GREY_LEVEL 0x0000000C
#define ATOM_PANEL_MISC_V13_GREY_LEVEL_SHIFT 2
#define ATOM_PANEL_MISC_V13_COLOR_BIT_DEPTH_MASK 0x70
#define ATOM_PANEL_MISC_V13_6BIT_PER_COLOR 0x10
#define ATOM_PANEL_MISC_V13_8BIT_PER_COLOR 0x20
//Color Bit Depth definition in EDID V1.4 @BYTE 14h
//Bit 6 5 4
// 0 0 0 - Color bit depth is undefined
// 0 0 1 - 6 Bits per Primary Color
// 0 1 0 - 8 Bits per Primary Color
// 0 1 1 - 10 Bits per Primary Color
// 1 0 0 - 12 Bits per Primary Color
// 1 0 1 - 14 Bits per Primary Color
// 1 1 0 - 16 Bits per Primary Color
// 1 1 1 - Reserved
//Definitions for ucLCDPanel_SpecialHandlingCap:
//Once DAL sees this CAP is set, it will read EDID from LCD on its own instead of using sLCDTiming in ATOM_LVDS_INFO_V12.
//Other entries in ATOM_LVDS_INFO_V12 are still valid/useful to DAL
#define LCDPANEL_CAP_V13_READ_EDID 0x1 // = LCDPANEL_CAP_READ_EDID, no change compared to previous version
//If a design supports DRR (dynamic refresh rate) on internal panels (LVDS or EDP), this cap is set in ucLCDPanel_SpecialHandlingCap together
//with multiple supported refresh rates@usSupportedRefreshRate. This cap should not be set when only slow refresh rate is supported (static
//refresh rate switch by SW). This is only valid from ATOM_LVDS_INFO_V12 onwards.
#define LCDPANEL_CAP_V13_DRR_SUPPORTED 0x2 // = LCDPANEL_CAP_DRR_SUPPORTED, no change compared to previous version
//Use this cap bit for a quick reference whether an embedded panel (LCD1) is LVDS or eDP.
#define LCDPANEL_CAP_V13_eDP 0x4 // = LCDPANEL_CAP_eDP, no change compared to previous version
//uceDPToLVDSRxId
#define eDP_TO_LVDS_RX_DISABLE 0x00 // no eDP->LVDS translator chip
#define eDP_TO_LVDS_COMMON_ID 0x01 // common eDP->LVDS translator chip without AMD SW init
#define eDP_TO_LVDS_RT_ID 0x02 // RT translator which requires AMD SW init
typedef struct _ATOM_PATCH_RECORD_MODE
{
UCHAR ucRecordType;
USHORT usHDisp;
USHORT usVDisp;
}ATOM_PATCH_RECORD_MODE;
typedef struct _ATOM_LCD_RTS_RECORD
{
UCHAR ucRecordType;
UCHAR ucRTSValue;
}ATOM_LCD_RTS_RECORD;
//!! If the record below exists, it should always be the first record for easy use in command tables!!!
// The record below is only used when LVDS_Info is present. From ATOM_LVDS_INFO_V12, use ucLCDPanel_SpecialHandlingCap instead.
typedef struct _ATOM_LCD_MODE_CONTROL_CAP
{
UCHAR ucRecordType;
USHORT usLCDCap;
}ATOM_LCD_MODE_CONTROL_CAP;
#define LCD_MODE_CAP_BL_OFF 1
#define LCD_MODE_CAP_CRTC_OFF 2
#define LCD_MODE_CAP_PANEL_OFF 4
typedef struct _ATOM_FAKE_EDID_PATCH_RECORD
{
UCHAR ucRecordType;
UCHAR ucFakeEDIDLength;
UCHAR ucFakeEDIDString[1]; // This actually has ucFakeEDIDLength elements.
} ATOM_FAKE_EDID_PATCH_RECORD;
typedef struct _ATOM_PANEL_RESOLUTION_PATCH_RECORD
{
UCHAR ucRecordType;
USHORT usHSize;
USHORT usVSize;
}ATOM_PANEL_RESOLUTION_PATCH_RECORD;
#define LCD_MODE_PATCH_RECORD_MODE_TYPE 1
#define LCD_RTS_RECORD_TYPE 2
#define LCD_CAP_RECORD_TYPE 3
#define LCD_FAKE_EDID_PATCH_RECORD_TYPE 4
#define LCD_PANEL_RESOLUTION_RECORD_TYPE 5
#define LCD_EDID_OFFSET_PATCH_RECORD_TYPE 6
#define ATOM_RECORD_END_TYPE 0xFF
/****************************Spread Spectrum Info Table Definitions **********************/
//ucTableFormatRevision=1
//ucTableContentRevision=2
typedef struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT
{
USHORT usSpreadSpectrumPercentage;
UCHAR ucSpreadSpectrumType; //Bit0=0: Down Spread, =1: Center Spread. Bit1=1: Ext., =0: Int. Bit2=1: PCIE REFCLK SS, =0: internal PPLL SS. Others: TBD
UCHAR ucSS_Step;
UCHAR ucSS_Delay;
UCHAR ucSS_Id;
UCHAR ucRecommendedRef_Div;
UCHAR ucSS_Range; //it was reserved for V11
}ATOM_SPREAD_SPECTRUM_ASSIGNMENT;
#define ATOM_MAX_SS_ENTRY 16
#define ATOM_DP_SS_ID1 0x0f1 // SS ID for internal DP stream at 2.7GHz. If ATOM_DP_SS_ID2 does not exist in SS_InfoTable, it is used for internal DP stream at 1.62GHz as well.
#define ATOM_DP_SS_ID2 0x0f2 // SS ID for internal DP stream at 1.62GHz, if it exists in SS_InfoTable.
#define ATOM_LVLINK_2700MHz_SS_ID 0x0f3 // SS ID for LV link translator chip at 2.7GHz
#define ATOM_LVLINK_1620MHz_SS_ID 0x0f4 // SS ID for LV link translator chip at 1.62GHz
#define ATOM_SS_DOWN_SPREAD_MODE_MASK 0x00000000
#define ATOM_SS_DOWN_SPREAD_MODE 0x00000000
#define ATOM_SS_CENTRE_SPREAD_MODE_MASK 0x00000001
#define ATOM_SS_CENTRE_SPREAD_MODE 0x00000001
#define ATOM_INTERNAL_SS_MASK 0x00000000
#define ATOM_EXTERNAL_SS_MASK 0x00000002
#define EXEC_SS_STEP_SIZE_SHIFT 2
#define EXEC_SS_DELAY_SHIFT 4
#define ACTIVEDATA_TO_BLON_DELAY_SHIFT 4
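// Decode sketch (illustrative only): interpreting ucSpreadSpectrumType with
// the mode/source masks above.
static inline int atom_ss_is_center_spread(UCHAR ucType)
{
    return (ucType & ATOM_SS_CENTRE_SPREAD_MODE_MASK) == ATOM_SS_CENTRE_SPREAD_MODE;
}
static inline int atom_ss_is_external(UCHAR ucType)
{
    return (ucType & ATOM_EXTERNAL_SS_MASK) != 0;
}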
typedef struct _ATOM_SPREAD_SPECTRUM_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_SPREAD_SPECTRUM_ASSIGNMENT asSS_Info[ATOM_MAX_SS_ENTRY];
}ATOM_SPREAD_SPECTRUM_INFO;
/****************************************************************************/
// Structure used in AnalogTV_InfoTable (Top level)
/****************************************************************************/
//ucTVBootUpDefaultStd definition:
//ATOM_TV_NTSC 1
//ATOM_TV_NTSCJ 2
//ATOM_TV_PAL 3
//ATOM_TV_PALM 4
//ATOM_TV_PALCN 5
//ATOM_TV_PALN 6
//ATOM_TV_PAL60 7
//ATOM_TV_SECAM 8
//ucTVSupportedStd definition:
#define NTSC_SUPPORT 0x1
#define NTSCJ_SUPPORT 0x2
#define PAL_SUPPORT 0x4
#define PALM_SUPPORT 0x8
#define PALCN_SUPPORT 0x10
#define PALN_SUPPORT 0x20
#define PAL60_SUPPORT 0x40
#define SECAM_SUPPORT 0x80
#define MAX_SUPPORTED_TV_TIMING 2
typedef struct _ATOM_ANALOG_TV_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucTV_SupportedStandard;
UCHAR ucTV_BootUpDefaultStandard;
UCHAR ucExt_TV_ASIC_ID;
UCHAR ucExt_TV_ASIC_SlaveAddr;
/*ATOM_DTD_FORMAT aModeTimings[MAX_SUPPORTED_TV_TIMING];*/
ATOM_MODE_TIMING aModeTimings[MAX_SUPPORTED_TV_TIMING];
}ATOM_ANALOG_TV_INFO;
#define MAX_SUPPORTED_TV_TIMING_V1_2 3
typedef struct _ATOM_ANALOG_TV_INFO_V1_2
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucTV_SupportedStandard;
UCHAR ucTV_BootUpDefaultStandard;
UCHAR ucExt_TV_ASIC_ID;
UCHAR ucExt_TV_ASIC_SlaveAddr;
ATOM_DTD_FORMAT aModeTimings[MAX_SUPPORTED_TV_TIMING_V1_2];
}ATOM_ANALOG_TV_INFO_V1_2;
typedef struct _ATOM_DPCD_INFO
{
UCHAR ucRevisionNumber; //10h : Revision 1.0; 11h : Revision 1.1
UCHAR ucMaxLinkRate; //06h : 1.62Gbps per lane; 0Ah = 2.7Gbps per lane
UCHAR ucMaxLane; //Bits 4:0 = MAX_LANE_COUNT (1/2/4). Bit 7 = ENHANCED_FRAME_CAP
UCHAR ucMaxDownSpread; //Bit0 = 0: No Down spread; Bit0 = 1: 0.5% (Subject to change according to DP spec)
}ATOM_DPCD_INFO;
#define ATOM_DPCD_MAX_LANE_MASK 0x1F
/**************************************************************************/
// VRAM usage and its definitions
// One chunk of VRAM used by the BIOS holds HWICON surfaces, EDID data, and
// current Mode timing, Detailed (DTD) Timing and/or STD timing data for EACH device. They can be broken down as below.
// All the addresses below are offsets from the frame buffer start. They all MUST be Dword aligned!
// To driver: The physical address of this memory portion=mmFB_START(4K aligned)+ATOMBIOS_VRAM_USAGE_START_ADDR+ATOM_x_ADDR
// To Bios: ATOMBIOS_VRAM_USAGE_START_ADDR+ATOM_x_ADDR->MM_INDEX
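// Address computation sketch (illustrative only; mmFB_START and
// ATOMBIOS_VRAM_USAGE_START_ADDR are assumed to be provided by the driver or
// defined elsewhere, so they are passed in as parameters here):
static inline ULONG atom_vram_block_phys_addr(ULONG ulFbStart,        // mmFB_START, 4K aligned
                                              ULONG ulUsageStart,     // ATOMBIOS_VRAM_USAGE_START_ADDR
                                              ULONG ulBlockOffset)    // an ATOM_x_ADDR offset below
{
    return ulFbStart + ulUsageStart + ulBlockOffset;
}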
#ifndef VESA_MEMORY_IN_64K_BLOCK
#define VESA_MEMORY_IN_64K_BLOCK 0x100 //256*64KB=16MB (Max. VESA memory is 16MB!)
#endif
#define ATOM_EDID_RAW_DATASIZE 256 //In Bytes
#define ATOM_HWICON_SURFACE_SIZE 4096 //In Bytes
#define ATOM_HWICON_INFOTABLE_SIZE 32
#define MAX_DTD_MODE_IN_VRAM 6
#define ATOM_DTD_MODE_SUPPORT_TBL_SIZE (MAX_DTD_MODE_IN_VRAM*28) //28= (SIZEOF ATOM_DTD_FORMAT)
#define ATOM_STD_MODE_SUPPORT_TBL_SIZE 32*8 //32 is a predefined number,8= (SIZEOF ATOM_STD_FORMAT)
//20 bytes for Encoder Type and DPCD in STD EDID area
#define DFP_ENCODER_TYPE_OFFSET (ATOM_EDID_RAW_DATASIZE + ATOM_DTD_MODE_SUPPORT_TBL_SIZE + ATOM_STD_MODE_SUPPORT_TBL_SIZE - 20)
#define ATOM_DP_DPCD_OFFSET (DFP_ENCODER_TYPE_OFFSET + 4 )
#define ATOM_HWICON1_SURFACE_ADDR 0
#define ATOM_HWICON2_SURFACE_ADDR (ATOM_HWICON1_SURFACE_ADDR + ATOM_HWICON_SURFACE_SIZE)
#define ATOM_HWICON_INFOTABLE_ADDR (ATOM_HWICON2_SURFACE_ADDR + ATOM_HWICON_SURFACE_SIZE)
#define ATOM_CRT1_EDID_ADDR (ATOM_HWICON_INFOTABLE_ADDR + ATOM_HWICON_INFOTABLE_SIZE)
#define ATOM_CRT1_DTD_MODE_TBL_ADDR (ATOM_CRT1_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_CRT1_STD_MODE_TBL_ADDR (ATOM_CRT1_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_LCD1_EDID_ADDR (ATOM_CRT1_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_LCD1_DTD_MODE_TBL_ADDR (ATOM_LCD1_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_LCD1_STD_MODE_TBL_ADDR (ATOM_LCD1_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_TV1_DTD_MODE_TBL_ADDR (ATOM_LCD1_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP1_EDID_ADDR (ATOM_TV1_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP1_DTD_MODE_TBL_ADDR (ATOM_DFP1_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_DFP1_STD_MODE_TBL_ADDR (ATOM_DFP1_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_CRT2_EDID_ADDR (ATOM_DFP1_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_CRT2_DTD_MODE_TBL_ADDR (ATOM_CRT2_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_CRT2_STD_MODE_TBL_ADDR (ATOM_CRT2_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_LCD2_EDID_ADDR (ATOM_CRT2_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_LCD2_DTD_MODE_TBL_ADDR (ATOM_LCD2_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_LCD2_STD_MODE_TBL_ADDR (ATOM_LCD2_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP6_EDID_ADDR (ATOM_LCD2_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP6_DTD_MODE_TBL_ADDR (ATOM_DFP6_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_DFP6_STD_MODE_TBL_ADDR (ATOM_DFP6_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP2_EDID_ADDR (ATOM_DFP6_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP2_DTD_MODE_TBL_ADDR (ATOM_DFP2_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_DFP2_STD_MODE_TBL_ADDR (ATOM_DFP2_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_CV_EDID_ADDR (ATOM_DFP2_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_CV_DTD_MODE_TBL_ADDR (ATOM_CV_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_CV_STD_MODE_TBL_ADDR (ATOM_CV_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP3_EDID_ADDR (ATOM_CV_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP3_DTD_MODE_TBL_ADDR (ATOM_DFP3_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_DFP3_STD_MODE_TBL_ADDR (ATOM_DFP3_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP4_EDID_ADDR (ATOM_DFP3_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP4_DTD_MODE_TBL_ADDR (ATOM_DFP4_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_DFP4_STD_MODE_TBL_ADDR (ATOM_DFP4_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP5_EDID_ADDR (ATOM_DFP4_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DFP5_DTD_MODE_TBL_ADDR (ATOM_DFP5_EDID_ADDR + ATOM_EDID_RAW_DATASIZE)
#define ATOM_DFP5_STD_MODE_TBL_ADDR (ATOM_DFP5_DTD_MODE_TBL_ADDR + ATOM_DTD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_DP_TRAINING_TBL_ADDR (ATOM_DFP5_STD_MODE_TBL_ADDR + ATOM_STD_MODE_SUPPORT_TBL_SIZE)
#define ATOM_STACK_STORAGE_START (ATOM_DP_TRAINING_TBL_ADDR + 1024)
#define ATOM_STACK_STORAGE_END (ATOM_STACK_STORAGE_START + 512)
//The size below is in KB!
#define ATOM_VRAM_RESERVE_SIZE ((((ATOM_STACK_STORAGE_END - ATOM_HWICON1_SURFACE_ADDR)>>10)+4)&0xFFFC)
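// Worked example (illustrative): the macro above converts the reserved byte
// span to KB (>>10), adds 4KB of slack and clears the low two bits (&0xFFFC),
// so the result is a 4KB-granular size that always covers the raw span.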
#define ATOM_VRAM_RESERVE_V2_SIZE 32
#define ATOM_VRAM_OPERATION_FLAGS_MASK 0xC0000000L
#define ATOM_VRAM_OPERATION_FLAGS_SHIFT 30
#define ATOM_VRAM_BLOCK_NEEDS_NO_RESERVATION 0x1
#define ATOM_VRAM_BLOCK_NEEDS_RESERVATION 0x0
/***********************************************************************************/
// Structure used in VRAM_UsageByFirmwareTable
// Note1: This table is filled by SetBiosReservationStartInFB in CoreCommSubs.asm
// at running time.
// Note2: From RV770 on, the memory is more than 32bit addressable, so we change to
// ucTableFormatRevision=1, ucTableContentRevision=4; the structure remains
// exactly the same as 1.1 and 1.2 (1.3 is never in use), but ulStartAddrUsedByFirmware
// (an offset from the start of the memory address) is KB aligned instead of byte aligned.
/***********************************************************************************/
// Note3:
/* If we change usReserved to "usFBUsedbyDrvInKB", then to VBIOS this usFBUsedbyDrvInKB is a predefined, unchanged constant across VGA and non VGA adapters.
For CAIL, the size of the FB access area is known; the only thing missing is the offset of the FB access area, so we can have:
If (ulStartAddrUsedByFirmware!=0)
FBAccessAreaOffset= ulStartAddrUsedByFirmware - usFBUsedbyDrvInKB;
Reserved area has been claimed by VBIOS including this FB access area; CAIL doesn't need to reserve any extra area for this purpose
else //Non VGA case
if (FB_Size<=2GB)
FBAccessAreaOffset= FB_Size - usFBUsedbyDrvInKB;
else
FBAccessAreaOffset= Aper_Size - usFBUsedbyDrvInKB;
CAIL needs to claim a reserved area defined by FBAccessAreaOffset and usFBUsedbyDrvInKB in the non VGA case.*/
/***********************************************************************************/
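// A minimal C sketch (illustrative only, not part of the original header) of
// the CAIL logic described in Note3 above. For simplicity it treats all
// quantities as KB-granular offsets/sizes; the ulFbSizeInKb/ulAperSizeInKb
// parameter names are hypothetical.
static inline ULONG cail_fb_access_area_offset(ULONG ulStartAddrUsedByFirmware,
                                               ULONG ulFbSizeInKb,
                                               ULONG ulAperSizeInKb,
                                               USHORT usFBUsedbyDrvInKB)
{
    if (ulStartAddrUsedByFirmware != 0)              // VGA case: VBIOS already reserved the area
        return ulStartAddrUsedByFirmware - usFBUsedbyDrvInKB;
    if (ulFbSizeInKb <= 2UL * 1024 * 1024)           // non VGA case, FB_Size <= 2GB
        return ulFbSizeInKb - usFBUsedbyDrvInKB;
    return ulAperSizeInKb - usFBUsedbyDrvInKB;       // non VGA case, FB larger than 2GB
}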
#define ATOM_MAX_FIRMWARE_VRAM_USAGE_INFO 1
typedef struct _ATOM_FIRMWARE_VRAM_RESERVE_INFO
{
ULONG ulStartAddrUsedByFirmware;
USHORT usFirmwareUseInKb;
USHORT usReserved;
}ATOM_FIRMWARE_VRAM_RESERVE_INFO;
typedef struct _ATOM_VRAM_USAGE_BY_FIRMWARE
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_FIRMWARE_VRAM_RESERVE_INFO asFirmwareVramReserveInfo[ATOM_MAX_FIRMWARE_VRAM_USAGE_INFO];
}ATOM_VRAM_USAGE_BY_FIRMWARE;
// Changed version to 1.5 to allow the driver to allocate the VRAM area for command table access.
typedef struct _ATOM_FIRMWARE_VRAM_RESERVE_INFO_V1_5
{
ULONG ulStartAddrUsedByFirmware;
USHORT usFirmwareUseInKb;
USHORT usFBUsedByDrvInKb;
}ATOM_FIRMWARE_VRAM_RESERVE_INFO_V1_5;
typedef struct _ATOM_VRAM_USAGE_BY_FIRMWARE_V1_5
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_FIRMWARE_VRAM_RESERVE_INFO_V1_5 asFirmwareVramReserveInfo[ATOM_MAX_FIRMWARE_VRAM_USAGE_INFO];
}ATOM_VRAM_USAGE_BY_FIRMWARE_V1_5;
/****************************************************************************/
// Structure used in GPIO_Pin_LUTTable
/****************************************************************************/
typedef struct _ATOM_GPIO_PIN_ASSIGNMENT
{
USHORT usGpioPin_AIndex;
UCHAR ucGpioPinBitShift;
UCHAR ucGPIO_ID;
}ATOM_GPIO_PIN_ASSIGNMENT;
typedef struct _ATOM_GPIO_PIN_LUT
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_GPIO_PIN_ASSIGNMENT asGPIO_Pin[1];
}ATOM_GPIO_PIN_LUT;
/****************************************************************************/
// Structure used in ComponentVideoInfoTable
/****************************************************************************/
#define GPIO_PIN_ACTIVE_HIGH 0x1
#define MAX_SUPPORTED_CV_STANDARDS 5
// definitions for ATOM_GPIO_INFO.ucSettings
#define ATOM_GPIO_SETTINGS_BITSHIFT_MASK 0x1F // [4:0]
#define ATOM_GPIO_SETTINGS_RESERVED_MASK 0x60 // [6:5] = must be zeroed out
#define ATOM_GPIO_SETTINGS_ACTIVE_MASK 0x80 // [7]
typedef struct _ATOM_GPIO_INFO
{
USHORT usAOffset;
UCHAR ucSettings;
UCHAR ucReserved;
}ATOM_GPIO_INFO;
// definitions for ATOM_COMPONENT_VIDEO_INFO.ucMiscInfo (bit vector)
#define ATOM_CV_RESTRICT_FORMAT_SELECTION 0x2
// definitions for ATOM_COMPONENT_VIDEO_INFO.uc480i/uc480p/uc720p/uc1080i
#define ATOM_GPIO_DEFAULT_MODE_EN 0x80 //[7];
#define ATOM_GPIO_SETTING_PERMODE_MASK 0x7F //[6:0]
// definitions for ATOM_COMPONENT_VIDEO_INFO.ucLetterBoxMode
//Line 3 outputs 5V.
#define ATOM_CV_LINE3_ASPECTRATIO_16_9_GPIO_A 0x01 //represent gpio 3 state for 16:9
#define ATOM_CV_LINE3_ASPECTRATIO_16_9_GPIO_B 0x02 //represent gpio 4 state for 16:9
#define ATOM_CV_LINE3_ASPECTRATIO_16_9_GPIO_SHIFT 0x0
//Line 3 outputs 2.2V
#define ATOM_CV_LINE3_ASPECTRATIO_4_3_LETBOX_GPIO_A 0x04 //represent gpio 3 state for 4:3 Letter box
#define ATOM_CV_LINE3_ASPECTRATIO_4_3_LETBOX_GPIO_B 0x08 //represent gpio 4 state for 4:3 Letter box
#define ATOM_CV_LINE3_ASPECTRATIO_4_3_LETBOX_GPIO_SHIFT 0x2
//Line 3 outputs 0V
#define ATOM_CV_LINE3_ASPECTRATIO_4_3_GPIO_A 0x10 //represent gpio 3 state for 4:3
#define ATOM_CV_LINE3_ASPECTRATIO_4_3_GPIO_B 0x20 //represent gpio 4 state for 4:3
#define ATOM_CV_LINE3_ASPECTRATIO_4_3_GPIO_SHIFT 0x4
#define ATOM_CV_LINE3_ASPECTRATIO_MASK 0x3F // bit [5:0]
#define ATOM_CV_LINE3_ASPECTRATIO_EXIST 0x80 //bit 7
//GPIO bit index in gpio setting per mode value, also represents the block no. in gpio blocks.
#define ATOM_GPIO_INDEX_LINE3_ASPECRATIO_GPIO_A 3 //bit 3 in uc480i/uc480p/uc720p/uc1080i, which represents the default gpio bit setting for the mode.
#define ATOM_GPIO_INDEX_LINE3_ASPECRATIO_GPIO_B 4 //bit 4 in uc480i/uc480p/uc720p/uc1080i, which represents the default gpio bit setting for the mode.
typedef struct _ATOM_COMPONENT_VIDEO_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usMask_PinRegisterIndex;
USHORT usEN_PinRegisterIndex;
USHORT usY_PinRegisterIndex;
USHORT usA_PinRegisterIndex;
UCHAR ucBitShift;
UCHAR ucPinActiveState; //ucPinActiveState: Bit0=1 active high, =0 active low
ATOM_DTD_FORMAT sReserved; // must be zeroed out
UCHAR ucMiscInfo;
UCHAR uc480i;
UCHAR uc480p;
UCHAR uc720p;
UCHAR uc1080i;
UCHAR ucLetterBoxMode;
UCHAR ucReserved[3];
UCHAR ucNumOfWbGpioBlocks; //For Component video D-Connector support. If zero, NTSC type connector
ATOM_GPIO_INFO aWbGpioStateBlock[MAX_SUPPORTED_CV_STANDARDS];
ATOM_DTD_FORMAT aModeTimings[MAX_SUPPORTED_CV_STANDARDS];
}ATOM_COMPONENT_VIDEO_INFO;
//ucTableFormatRevision=2
//ucTableContentRevision=1
typedef struct _ATOM_COMPONENT_VIDEO_INFO_V21
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucMiscInfo;
UCHAR uc480i;
UCHAR uc480p;
UCHAR uc720p;
UCHAR uc1080i;
UCHAR ucReserved;
UCHAR ucLetterBoxMode;
UCHAR ucNumOfWbGpioBlocks; //For Component video D-Connector support. If zero, NTSC type connector
ATOM_GPIO_INFO aWbGpioStateBlock[MAX_SUPPORTED_CV_STANDARDS];
ATOM_DTD_FORMAT aModeTimings[MAX_SUPPORTED_CV_STANDARDS];
}ATOM_COMPONENT_VIDEO_INFO_V21;
#define ATOM_COMPONENT_VIDEO_INFO_LAST ATOM_COMPONENT_VIDEO_INFO_V21
/****************************************************************************/
// Structure used in object_InfoTable
/****************************************************************************/
typedef struct _ATOM_OBJECT_HEADER
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usDeviceSupport;
USHORT usConnectorObjectTableOffset;
USHORT usRouterObjectTableOffset;
USHORT usEncoderObjectTableOffset;
USHORT usProtectionObjectTableOffset; //only available when Protection block is independent.
USHORT usDisplayPathTableOffset;
}ATOM_OBJECT_HEADER;
typedef struct _ATOM_OBJECT_HEADER_V3
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usDeviceSupport;
USHORT usConnectorObjectTableOffset;
USHORT usRouterObjectTableOffset;
USHORT usEncoderObjectTableOffset;
USHORT usProtectionObjectTableOffset; //only available when Protection block is independent.
USHORT usDisplayPathTableOffset;
USHORT usMiscObjectTableOffset;
}ATOM_OBJECT_HEADER_V3;
typedef struct _ATOM_DISPLAY_OBJECT_PATH
{
USHORT usDeviceTag; //supported device
USHORT usSize; //the size of ATOM_DISPLAY_OBJECT_PATH
USHORT usConnObjectId; //Connector Object ID
USHORT usGPUObjectId; //GPU ID
USHORT usGraphicObjIds[1]; //1st Encoder Obj sourced from GPU to last Graphic Obj destined for the connector.
}ATOM_DISPLAY_OBJECT_PATH;
typedef struct _ATOM_DISPLAY_EXTERNAL_OBJECT_PATH
{
USHORT usDeviceTag; //supported device
USHORT usSize; //the size of ATOM_DISPLAY_OBJECT_PATH
USHORT usConnObjectId; //Connector Object ID
USHORT usGPUObjectId; //GPU ID
USHORT usGraphicObjIds[2]; //usGraphicObjIds[0]= GPU internal encoder, usGraphicObjIds[1]= external encoder
}ATOM_DISPLAY_EXTERNAL_OBJECT_PATH;
typedef struct _ATOM_DISPLAY_OBJECT_PATH_TABLE
{
UCHAR ucNumOfDispPath;
UCHAR ucVersion;
UCHAR ucPadding[2];
ATOM_DISPLAY_OBJECT_PATH asDispPath[1];
}ATOM_DISPLAY_OBJECT_PATH_TABLE;
typedef struct _ATOM_OBJECT //each object has this structure
{
USHORT usObjectID;
USHORT usSrcDstTableOffset;
USHORT usRecordOffset; //this pointing to a bunch of records defined below
USHORT usReserved;
}ATOM_OBJECT;
typedef struct _ATOM_OBJECT_TABLE //Above 4 object table offset pointing to a bunch of objects all have this structure
{
UCHAR ucNumberOfObjects;
UCHAR ucPadding[3];
ATOM_OBJECT asObjects[1];
}ATOM_OBJECT_TABLE;
typedef struct _ATOM_SRC_DST_TABLE_FOR_ONE_OBJECT //usSrcDstTableOffset pointing to this structure
{
UCHAR ucNumberOfSrc;
USHORT usSrcObjectID[1];
UCHAR ucNumberOfDst;
USHORT usDstObjectID[1];
}ATOM_SRC_DST_TABLE_FOR_ONE_OBJECT;
//Two definitions below are for OPM on MXM module designs
#define EXT_HPDPIN_LUTINDEX_0 0
#define EXT_HPDPIN_LUTINDEX_1 1
#define EXT_HPDPIN_LUTINDEX_2 2
#define EXT_HPDPIN_LUTINDEX_3 3
#define EXT_HPDPIN_LUTINDEX_4 4
#define EXT_HPDPIN_LUTINDEX_5 5
#define EXT_HPDPIN_LUTINDEX_6 6
#define EXT_HPDPIN_LUTINDEX_7 7
#define MAX_NUMBER_OF_EXT_HPDPIN_LUT_ENTRIES (EXT_HPDPIN_LUTINDEX_7+1)
#define EXT_AUXDDC_LUTINDEX_0 0
#define EXT_AUXDDC_LUTINDEX_1 1
#define EXT_AUXDDC_LUTINDEX_2 2
#define EXT_AUXDDC_LUTINDEX_3 3
#define EXT_AUXDDC_LUTINDEX_4 4
#define EXT_AUXDDC_LUTINDEX_5 5
#define EXT_AUXDDC_LUTINDEX_6 6
#define EXT_AUXDDC_LUTINDEX_7 7
#define MAX_NUMBER_OF_EXT_AUXDDC_LUT_ENTRIES (EXT_AUXDDC_LUTINDEX_7+1)
//ucChannelMapping is defined as follows
//for DP connector, eDP, DP to VGA/LVDS
//Bit[1:0]: Define which pin connect to DP connector DP_Lane0, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
//Bit[3:2]: Define which pin connect to DP connector DP_Lane1, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
//Bit[5:4]: Define which pin connect to DP connector DP_Lane2, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
//Bit[7:6]: Define which pin connect to DP connector DP_Lane3, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
typedef struct _ATOM_DP_CONN_CHANNEL_MAPPING
{
#if ATOM_BIG_ENDIAN
UCHAR ucDP_Lane3_Source:2;
UCHAR ucDP_Lane2_Source:2;
UCHAR ucDP_Lane1_Source:2;
UCHAR ucDP_Lane0_Source:2;
#else
UCHAR ucDP_Lane0_Source:2;
UCHAR ucDP_Lane1_Source:2;
UCHAR ucDP_Lane2_Source:2;
UCHAR ucDP_Lane3_Source:2;
#endif
}ATOM_DP_CONN_CHANNEL_MAPPING;
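// Decode sketch (illustrative only): per the Bit[2n+1:2n] layout documented
// above, the TX source for DP lane n can be read straight out of a raw
// ucChannelMapping byte.
static inline UCHAR atom_dp_lane_source(UCHAR ucChannelMapping, UCHAR ucLane)
{
    return (ucChannelMapping >> (2 * ucLane)) & 0x3;   // 0..3 => GPU pin TX0..TX3
}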
//for DVI/HDMI, in the dual link case, both links have to have the same mapping.
//Bit[1:0]: Define which pin connect to DVI connector data Lane2, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
//Bit[3:2]: Define which pin connect to DVI connector data Lane1, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
//Bit[5:4]: Define which pin connect to DVI connector data Lane0, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
//Bit[7:6]: Define which pin connect to DVI connector clock lane, =0: source from GPU pin TX0, =1: from GPU pin TX1, =2: from GPU pin TX2, =3 from GPU pin TX3
typedef struct _ATOM_DVI_CONN_CHANNEL_MAPPING
{
#if ATOM_BIG_ENDIAN
UCHAR ucDVI_CLK_Source:2;
UCHAR ucDVI_DATA0_Source:2;
UCHAR ucDVI_DATA1_Source:2;
UCHAR ucDVI_DATA2_Source:2;
#else
UCHAR ucDVI_DATA2_Source:2;
UCHAR ucDVI_DATA1_Source:2;
UCHAR ucDVI_DATA0_Source:2;
UCHAR ucDVI_CLK_Source:2;
#endif
}ATOM_DVI_CONN_CHANNEL_MAPPING;
typedef struct _EXT_DISPLAY_PATH
{
USHORT usDeviceTag; //A bit vector to show what devices are supported
USHORT usDeviceACPIEnum; //16bit device ACPI id.
USHORT usDeviceConnector; //A physical connector for displays to plug in, using object connector definitions
UCHAR ucExtAUXDDCLutIndex; //An index into external AUX/DDC channel LUT
UCHAR ucExtHPDPINLutIndex; //An index into external HPD pin LUT
USHORT usExtEncoderObjId; //external encoder object id
union{
UCHAR ucChannelMapping; // if ucChannelMapping=0, use the default one-to-one mapping
ATOM_DP_CONN_CHANNEL_MAPPING asDPMapping;
ATOM_DVI_CONN_CHANNEL_MAPPING asDVIMapping;
};
UCHAR ucChPNInvert; // bit vector for up to 8 lanes, =0: P and N are not inverted, =1: P and N are inverted
USHORT usCaps;
USHORT usReserved;
}EXT_DISPLAY_PATH;
#define NUMBER_OF_UCHAR_FOR_GUID 16
#define MAX_NUMBER_OF_EXT_DISPLAY_PATH 7
//usCaps
#define EXT_DISPLAY_PATH_CAPS__HBR2_DISABLE 0x01
typedef struct _ATOM_EXTERNAL_DISPLAY_CONNECTION_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucGuid [NUMBER_OF_UCHAR_FOR_GUID]; // a GUID is a 16 byte long string
EXT_DISPLAY_PATH sPath[MAX_NUMBER_OF_EXT_DISPLAY_PATH]; // total of fixed 7 entries.
UCHAR ucChecksum; // a simple checksum: the sum of the whole structure equals 0x0.
UCHAR uc3DStereoPinId; // use for eDP panel
UCHAR ucRemoteDisplayConfig;
UCHAR uceDPToLVDSRxId;
UCHAR Reserved[4]; // for potential expansion
}ATOM_EXTERNAL_DISPLAY_CONNECTION_INFO;
//Related definitions, all records are different but they have a common header
typedef struct _ATOM_COMMON_RECORD_HEADER
{
UCHAR ucRecordType; //An enum to indicate the record type
UCHAR ucRecordSize; //The size of the whole record in bytes
}ATOM_COMMON_RECORD_HEADER;
#define ATOM_I2C_RECORD_TYPE 1
#define ATOM_HPD_INT_RECORD_TYPE 2
#define ATOM_OUTPUT_PROTECTION_RECORD_TYPE 3
#define ATOM_CONNECTOR_DEVICE_TAG_RECORD_TYPE 4
#define ATOM_CONNECTOR_DVI_EXT_INPUT_RECORD_TYPE 5 //Obsolete, switch to use GPIO_CNTL_RECORD_TYPE
#define ATOM_ENCODER_FPGA_CONTROL_RECORD_TYPE 6 //Obsolete, switch to use GPIO_CNTL_RECORD_TYPE
#define ATOM_CONNECTOR_CVTV_SHARE_DIN_RECORD_TYPE 7
#define ATOM_JTAG_RECORD_TYPE 8 //Obsolete, switch to use GPIO_CNTL_RECORD_TYPE
#define ATOM_OBJECT_GPIO_CNTL_RECORD_TYPE 9
#define ATOM_ENCODER_DVO_CF_RECORD_TYPE 10
#define ATOM_CONNECTOR_CF_RECORD_TYPE 11
#define ATOM_CONNECTOR_HARDCODE_DTD_RECORD_TYPE 12
#define ATOM_CONNECTOR_PCIE_SUBCONNECTOR_RECORD_TYPE 13
#define ATOM_ROUTER_DDC_PATH_SELECT_RECORD_TYPE 14
#define ATOM_ROUTER_DATA_CLOCK_PATH_SELECT_RECORD_TYPE 15
#define ATOM_CONNECTOR_HPDPIN_LUT_RECORD_TYPE 16 //This is for the case when connectors are not known to object table
#define ATOM_CONNECTOR_AUXDDC_LUT_RECORD_TYPE 17 //This is for the case when connectors are not known to object table
#define ATOM_OBJECT_LINK_RECORD_TYPE 18 //Once this record is present under one object, it indicates the object is linked to another object described by the record
#define ATOM_CONNECTOR_REMOTE_CAP_RECORD_TYPE 19
#define ATOM_ENCODER_CAP_RECORD_TYPE 20
//Must be updated when a new record type is added; keep equal to that record's definition!
#define ATOM_MAX_OBJECT_RECORD_NUMBER ATOM_ENCODER_CAP_RECORD_TYPE
typedef struct _ATOM_I2C_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
ATOM_I2C_ID_CONFIG sucI2cId;
UCHAR ucI2CAddr; //The slave address, it's 0 when the record is attached to connector for DDC
}ATOM_I2C_RECORD;
typedef struct _ATOM_HPD_INT_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucHPDIntGPIOID; //Corresponding block in GPIO_PIN_INFO table gives the pin info
UCHAR ucPlugged_PinState;
}ATOM_HPD_INT_RECORD;
typedef struct _ATOM_OUTPUT_PROTECTION_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucProtectionFlag;
UCHAR ucReserved;
}ATOM_OUTPUT_PROTECTION_RECORD;
typedef struct _ATOM_CONNECTOR_DEVICE_TAG
{
ULONG ulACPIDeviceEnum; //Reserved for now
USHORT usDeviceID; //This Id is same as "ATOM_DEVICE_XXX_SUPPORT"
USHORT usPadding;
}ATOM_CONNECTOR_DEVICE_TAG;
typedef struct _ATOM_CONNECTOR_DEVICE_TAG_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucNumberOfDevice;
UCHAR ucReserved;
ATOM_CONNECTOR_DEVICE_TAG asDeviceTag[1]; //This Id is same as "ATOM_DEVICE_XXX_SUPPORT", 1 is only for allocation
}ATOM_CONNECTOR_DEVICE_TAG_RECORD;
typedef struct _ATOM_CONNECTOR_DVI_EXT_INPUT_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucConfigGPIOID;
UCHAR ucConfigGPIOState; //Set to 1 when it's active high to enable external flow in
UCHAR ucFlowinGPIPID;
UCHAR ucExtInGPIPID;
}ATOM_CONNECTOR_DVI_EXT_INPUT_RECORD;
typedef struct _ATOM_ENCODER_FPGA_CONTROL_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucCTL1GPIO_ID;
UCHAR ucCTL1GPIOState; //Set to 1 when it's active high
UCHAR ucCTL2GPIO_ID;
UCHAR ucCTL2GPIOState; //Set to 1 when it's active high
UCHAR ucCTL3GPIO_ID;
UCHAR ucCTL3GPIOState; //Set to 1 when it's active high
UCHAR ucCTLFPGA_IN_ID;
UCHAR ucPadding[3];
}ATOM_ENCODER_FPGA_CONTROL_RECORD;
typedef struct _ATOM_CONNECTOR_CVTV_SHARE_DIN_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucGPIOID; //Corresponding block in GPIO_PIN_INFO table gives the pin info
UCHAR ucTVActiveState; //Indicates whether the pin is 0 or 1 when TV is connected
}ATOM_CONNECTOR_CVTV_SHARE_DIN_RECORD;
typedef struct _ATOM_JTAG_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucTMSGPIO_ID;
UCHAR ucTMSGPIOState; //Set to 1 when it's active high
UCHAR ucTCKGPIO_ID;
UCHAR ucTCKGPIOState; //Set to 1 when it's active high
UCHAR ucTDOGPIO_ID;
UCHAR ucTDOGPIOState; //Set to 1 when it's active high
UCHAR ucTDIGPIO_ID;
UCHAR ucTDIGPIOState; //Set to 1 when it's active high
UCHAR ucPadding[2];
}ATOM_JTAG_RECORD;
//The following generic object gpio pin control record type will replace JTAG_RECORD/FPGA_CONTROL_RECORD/DVI_EXT_INPUT_RECORD above gradually
typedef struct _ATOM_GPIO_PIN_CONTROL_PAIR
{
UCHAR ucGPIOID; // GPIO_ID, find the corresponding ID in GPIO_LUT table
UCHAR ucGPIO_PinState; // Pin state showing how to set-up the pin
}ATOM_GPIO_PIN_CONTROL_PAIR;
typedef struct _ATOM_OBJECT_GPIO_CNTL_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucFlags; // Future expandability
UCHAR ucNumberOfPins; // Number of GPIO pins used to control the object
ATOM_GPIO_PIN_CONTROL_PAIR asGpio[1]; // the real gpio pin pair determined by number of pins ucNumberOfPins
}ATOM_OBJECT_GPIO_CNTL_RECORD;
//Definitions for GPIO pin state
#define GPIO_PIN_TYPE_INPUT 0x00
#define GPIO_PIN_TYPE_OUTPUT 0x10
#define GPIO_PIN_TYPE_HW_CONTROL 0x20
//For GPIO_PIN_TYPE_OUTPUT the following is defined
#define GPIO_PIN_OUTPUT_STATE_MASK 0x01
#define GPIO_PIN_OUTPUT_STATE_SHIFT 0
#define GPIO_PIN_STATE_ACTIVE_LOW 0x0
#define GPIO_PIN_STATE_ACTIVE_HIGH 0x1
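// Composition sketch (illustrative only): per the defines above, an output
// pin driven active-high would carry ucGPIO_PinState ==
// (GPIO_PIN_TYPE_OUTPUT | GPIO_PIN_STATE_ACTIVE_HIGH), i.e. 0x11.
static inline UCHAR atom_gpio_output_pin_state(int bActiveHigh)
{
    return GPIO_PIN_TYPE_OUTPUT |
           (bActiveHigh ? GPIO_PIN_STATE_ACTIVE_HIGH : GPIO_PIN_STATE_ACTIVE_LOW);
}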
// Indexes to GPIO array in GLSync record
// GLSync record is for Frame Lock/Gen Lock feature.
#define ATOM_GPIO_INDEX_GLSYNC_REFCLK 0
#define ATOM_GPIO_INDEX_GLSYNC_HSYNC 1
#define ATOM_GPIO_INDEX_GLSYNC_VSYNC 2
#define ATOM_GPIO_INDEX_GLSYNC_SWAP_REQ 3
#define ATOM_GPIO_INDEX_GLSYNC_SWAP_GNT 4
#define ATOM_GPIO_INDEX_GLSYNC_INTERRUPT 5
#define ATOM_GPIO_INDEX_GLSYNC_V_RESET 6
#define ATOM_GPIO_INDEX_GLSYNC_SWAP_CNTL 7
#define ATOM_GPIO_INDEX_GLSYNC_SWAP_SEL 8
#define ATOM_GPIO_INDEX_GLSYNC_MAX 9
typedef struct _ATOM_ENCODER_DVO_CF_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
ULONG ulStrengthControl; // DVOA strength control for CF
UCHAR ucPadding[2];
}ATOM_ENCODER_DVO_CF_RECORD;
// Bit maps for ATOM_ENCODER_CAP_RECORD.ucEncoderCap
#define ATOM_ENCODER_CAP_RECORD_HBR2 0x01 // DP1.2 HBR2 is supported by HW encoder
#define ATOM_ENCODER_CAP_RECORD_HBR2_EN 0x02 // DP1.2 HBR2 setting is qualified and HBR2 can be enabled
typedef struct _ATOM_ENCODER_CAP_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
union {
USHORT usEncoderCap;
struct {
#if ATOM_BIG_ENDIAN
USHORT usReserved:14; // Bit2-15 may be defined for other capabilities in the future
USHORT usHBR2En:1; // Bit1 is for DP1.2 HBR2 enable
USHORT usHBR2Cap:1; // Bit0 is for DP1.2 HBR2 capability.
#else
USHORT usHBR2Cap:1; // Bit0 is for DP1.2 HBR2 capability.
USHORT usHBR2En:1; // Bit1 is for DP1.2 HBR2 enable
USHORT usReserved:14; // Bit2-15 may be defined for other capabilities in the future
#endif
};
};
}ATOM_ENCODER_CAP_RECORD;
// value for ATOM_CONNECTOR_CF_RECORD.ucConnectedDvoBundle
#define ATOM_CONNECTOR_CF_RECORD_CONNECTED_UPPER12BITBUNDLEA 1
#define ATOM_CONNECTOR_CF_RECORD_CONNECTED_LOWER12BITBUNDLEB 2
typedef struct _ATOM_CONNECTOR_CF_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
USHORT usMaxPixClk;
UCHAR ucFlowCntlGpioId;
UCHAR ucSwapCntlGpioId;
UCHAR ucConnectedDvoBundle;
UCHAR ucPadding;
}ATOM_CONNECTOR_CF_RECORD;
typedef struct _ATOM_CONNECTOR_HARDCODE_DTD_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
ATOM_DTD_FORMAT asTiming;
}ATOM_CONNECTOR_HARDCODE_DTD_RECORD;
typedef struct _ATOM_CONNECTOR_PCIE_SUBCONNECTOR_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader; //ATOM_CONNECTOR_PCIE_SUBCONNECTOR_RECORD_TYPE
UCHAR ucSubConnectorType; //CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D|X_ID_DUAL_LINK_DVI_D|HDMI_TYPE_A
UCHAR ucReserved;
}ATOM_CONNECTOR_PCIE_SUBCONNECTOR_RECORD;
typedef struct _ATOM_ROUTER_DDC_PATH_SELECT_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucMuxType; //decides the number of ucMuxState: =0, no pin state; =1: single state with complement; >1: multiple states
UCHAR ucMuxControlPin;
UCHAR ucMuxState[2]; //for alignment purposes
}ATOM_ROUTER_DDC_PATH_SELECT_RECORD;
typedef struct _ATOM_ROUTER_DATA_CLOCK_PATH_SELECT_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucMuxType;
UCHAR ucMuxControlPin;
UCHAR ucMuxState[2]; //for alignment purposes
}ATOM_ROUTER_DATA_CLOCK_PATH_SELECT_RECORD;
// define ucMuxType
#define ATOM_ROUTER_MUX_PIN_STATE_MASK 0x0f
#define ATOM_ROUTER_MUX_PIN_SINGLE_STATE_COMPLEMENT 0x01
typedef struct _ATOM_CONNECTOR_HPDPIN_LUT_RECORD //record for ATOM_CONNECTOR_HPDPIN_LUT_RECORD_TYPE
{
ATOM_COMMON_RECORD_HEADER sheader;
UCHAR ucHPDPINMap[MAX_NUMBER_OF_EXT_HPDPIN_LUT_ENTRIES]; //A fixed size array which maps external pins to the internal GPIO_PIN_INFO table
}ATOM_CONNECTOR_HPDPIN_LUT_RECORD;
typedef struct _ATOM_CONNECTOR_AUXDDC_LUT_RECORD //record for ATOM_CONNECTOR_AUXDDC_LUT_RECORD_TYPE
{
ATOM_COMMON_RECORD_HEADER sheader;
ATOM_I2C_ID_CONFIG ucAUXDDCMap[MAX_NUMBER_OF_EXT_AUXDDC_LUT_ENTRIES]; //A fixed size array which maps external pins to internal DDC IDs
}ATOM_CONNECTOR_AUXDDC_LUT_RECORD;
typedef struct _ATOM_OBJECT_LINK_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
USHORT usObjectID; //could be a connector, encoder or other object in object.h
}ATOM_OBJECT_LINK_RECORD;
typedef struct _ATOM_CONNECTOR_REMOTE_CAP_RECORD
{
ATOM_COMMON_RECORD_HEADER sheader;
USHORT usReserved;
}ATOM_CONNECTOR_REMOTE_CAP_RECORD;
/****************************************************************************/
// ASIC voltage data table
/****************************************************************************/
typedef struct _ATOM_VOLTAGE_INFO_HEADER
{
USHORT usVDDCBaseLevel; //In units of 50mV
USHORT usReserved; //For possible extension table offset
UCHAR ucNumOfVoltageEntries;
UCHAR ucBytesPerVoltageEntry;
UCHAR ucVoltageStep; //Indicates how many mV one step is, in 0.5mV units
UCHAR ucDefaultVoltageEntry;
UCHAR ucVoltageControlI2cLine;
UCHAR ucVoltageControlAddress;
UCHAR ucVoltageControlOffset;
}ATOM_VOLTAGE_INFO_HEADER;
typedef struct _ATOM_VOLTAGE_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_VOLTAGE_INFO_HEADER viHeader;
UCHAR ucVoltageEntries[64]; //64 is for allocation, the actual number of entries is ucNumOfVoltageEntries*ucBytesPerVoltageEntry
}ATOM_VOLTAGE_INFO;
typedef struct _ATOM_VOLTAGE_FORMULA
{
USHORT usVoltageBaseLevel; // In number of 1mv unit
USHORT usVoltageStep; // Indicates how many mV one step is, in 1mV units
UCHAR ucNumOfVoltageEntries; // Number of Voltage Entries, which indicates max Voltage
UCHAR ucFlag; // bit0=0: step is 1mV; =1: 0.5mV
UCHAR ucBaseVID; // if there is no lookup table, VID = BaseVID + ( Vol - BaseLevel ) / VoltageStep
UCHAR ucReserved;
UCHAR ucVIDAdjustEntries[32]; // 32 is for allocation, the actual number of entries is ucNumOfVoltageEntries
}ATOM_VOLTAGE_FORMULA;
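// Worked example (illustrative only) of the VID formula in the ucBaseVID
// comment above, ignoring the 0.5mV ucFlag case for brevity: with
// usVoltageBaseLevel=800 (mV), usVoltageStep=25 (mV) and ucBaseVID=0, a
// request for 1100mV yields VID = 0 + (1100 - 800) / 25 = 12.
static inline UCHAR atom_voltage_to_vid(const ATOM_VOLTAGE_FORMULA *pFormula,
                                        USHORT usVoltageInMv)
{
    return (UCHAR)(pFormula->ucBaseVID +
        (usVoltageInMv - pFormula->usVoltageBaseLevel) / pFormula->usVoltageStep);
}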
typedef struct _VOLTAGE_LUT_ENTRY
{
USHORT usVoltageCode; // The Voltage ID, either GPIO or I2C code
USHORT usVoltageValue; // The corresponding Voltage Value, in mV
}VOLTAGE_LUT_ENTRY;
typedef struct _ATOM_VOLTAGE_FORMULA_V2
{
UCHAR ucNumOfVoltageEntries; // Number of Voltage Entry, which indicate max Voltage
UCHAR ucReserved[3];
VOLTAGE_LUT_ENTRY asVIDAdjustEntries[32];// 32 is for allocation, the actual number of entries is in ucNumOfVoltageEntries
}ATOM_VOLTAGE_FORMULA_V2;
typedef struct _ATOM_VOLTAGE_CONTROL
{
UCHAR ucVoltageControlId; //Indicate it is controlled by I2C or GPIO or HW state machine
UCHAR ucVoltageControlI2cLine;
UCHAR ucVoltageControlAddress;
UCHAR ucVoltageControlOffset;
USHORT usGpioPin_AIndex; //GPIO_PAD register index
UCHAR ucGpioPinBitShift[9]; //at most 8 pins support 255 VIDs, terminate with 0xff
UCHAR ucReserved;
}ATOM_VOLTAGE_CONTROL;
// Define ucVoltageControlId
#define VOLTAGE_CONTROLLED_BY_HW 0x00
#define VOLTAGE_CONTROLLED_BY_I2C_MASK 0x7F
#define VOLTAGE_CONTROLLED_BY_GPIO 0x80
#define VOLTAGE_CONTROL_ID_LM64 0x01 //I2C control, used for R5xx Core Voltage
#define VOLTAGE_CONTROL_ID_DAC 0x02 //I2C control, used for R5xx/R6xx MVDDC,MVDDQ or VDDCI
#define VOLTAGE_CONTROL_ID_VT116xM 0x03 //I2C control, used for R6xx Core Voltage
#define VOLTAGE_CONTROL_ID_DS4402 0x04
#define VOLTAGE_CONTROL_ID_UP6266 0x05
#define VOLTAGE_CONTROL_ID_SCORPIO 0x06
#define VOLTAGE_CONTROL_ID_VT1556M 0x07
#define VOLTAGE_CONTROL_ID_CHL822x 0x08
#define VOLTAGE_CONTROL_ID_VT1586M 0x09
#define VOLTAGE_CONTROL_ID_UP1637 0x0A
typedef struct _ATOM_VOLTAGE_OBJECT
{
UCHAR ucVoltageType; //Indicate Voltage Source: VDDC, MVDDC, MVDDQ or MVDDCI
UCHAR ucSize; //Size of Object
ATOM_VOLTAGE_CONTROL asControl; //describes how to control
ATOM_VOLTAGE_FORMULA asFormula; //Indicate How to convert real Voltage to VID
}ATOM_VOLTAGE_OBJECT;
typedef struct _ATOM_VOLTAGE_OBJECT_V2
{
UCHAR ucVoltageType; //Indicate Voltage Source: VDDC, MVDDC, MVDDQ or MVDDCI
UCHAR ucSize; //Size of Object
ATOM_VOLTAGE_CONTROL asControl; //describes how to control
ATOM_VOLTAGE_FORMULA_V2 asFormula; //Indicate How to convert real Voltage to VID
}ATOM_VOLTAGE_OBJECT_V2;
typedef struct _ATOM_VOLTAGE_OBJECT_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_VOLTAGE_OBJECT asVoltageObj[3]; //Info for Voltage control
}ATOM_VOLTAGE_OBJECT_INFO;
typedef struct _ATOM_VOLTAGE_OBJECT_INFO_V2
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_VOLTAGE_OBJECT_V2 asVoltageObj[3]; //Info for Voltage control
}ATOM_VOLTAGE_OBJECT_INFO_V2;
typedef struct _ATOM_LEAKID_VOLTAGE
{
UCHAR ucLeakageId;
UCHAR ucReserved;
USHORT usVoltage;
}ATOM_LEAKID_VOLTAGE;
typedef struct _ATOM_VOLTAGE_OBJECT_HEADER_V3{
UCHAR ucVoltageType; //Indicate Voltage Source: VDDC, MVDDC, MVDDQ or MVDDCI
UCHAR ucVoltageMode; //Indicate voltage control mode: Init/Set/Leakage/Set phase
USHORT usSize; //Size of Object
}ATOM_VOLTAGE_OBJECT_HEADER_V3;
typedef struct _VOLTAGE_LUT_ENTRY_V2
{
ULONG ulVoltageId; // The Voltage ID which is used to program GPIO register
USHORT usVoltageValue; // The corresponding Voltage Value, in mV
}VOLTAGE_LUT_ENTRY_V2;
typedef struct _LEAKAGE_VOLTAGE_LUT_ENTRY_V2
{
USHORT usVoltageLevel; // The Voltage ID which is used to program GPIO register
USHORT usVoltageId;
USHORT usLeakageId; // The corresponding Voltage Value, in mV
}LEAKAGE_VOLTAGE_LUT_ENTRY_V2;
typedef struct _ATOM_I2C_VOLTAGE_OBJECT_V3
{
ATOM_VOLTAGE_OBJECT_HEADER_V3 sHeader;
UCHAR ucVoltageRegulatorId; //Indicate Voltage Regulator Id
UCHAR ucVoltageControlI2cLine;
UCHAR ucVoltageControlAddress;
UCHAR ucVoltageControlOffset;
ULONG ulReserved;
VOLTAGE_LUT_ENTRY asVolI2cLut[1]; // end with 0xff
}ATOM_I2C_VOLTAGE_OBJECT_V3;
typedef struct _ATOM_GPIO_VOLTAGE_OBJECT_V3
{
ATOM_VOLTAGE_OBJECT_HEADER_V3 sHeader;
UCHAR ucVoltageGpioCntlId; // default is 0, which indicates control through CG VID mode
UCHAR ucGpioEntryNum; // indicates the number of entries in the Voltage/GPIO value look-up table
UCHAR ucPhaseDelay; // phase delay in units of microseconds
UCHAR ucReserved;
ULONG ulGpioMaskVal; // GPIO Mask value
VOLTAGE_LUT_ENTRY_V2 asVolGpioLut[1];
}ATOM_GPIO_VOLTAGE_OBJECT_V3;
typedef struct _ATOM_LEAKAGE_VOLTAGE_OBJECT_V3
{
ATOM_VOLTAGE_OBJECT_HEADER_V3 sHeader;
UCHAR ucLeakageCntlId; // default is 0
UCHAR ucLeakageEntryNum; // indicates the number of entries in the LeakageId/Voltage LUT table
UCHAR ucReserved[2];
ULONG ulMaxVoltageLevel;
LEAKAGE_VOLTAGE_LUT_ENTRY_V2 asLeakageIdLut[1];
}ATOM_LEAKAGE_VOLTAGE_OBJECT_V3;
typedef union _ATOM_VOLTAGE_OBJECT_V3{
ATOM_GPIO_VOLTAGE_OBJECT_V3 asGpioVoltageObj;
ATOM_I2C_VOLTAGE_OBJECT_V3 asI2cVoltageObj;
ATOM_LEAKAGE_VOLTAGE_OBJECT_V3 asLeakageObj;
}ATOM_VOLTAGE_OBJECT_V3;
typedef struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_VOLTAGE_OBJECT_V3 asVoltageObj[3]; //Info for Voltage control
}ATOM_VOLTAGE_OBJECT_INFO_V3_1;
typedef struct _ATOM_ASIC_PROFILE_VOLTAGE
{
UCHAR ucProfileId;
UCHAR ucReserved;
USHORT usSize;
USHORT usEfuseSpareStartAddr;
USHORT usFuseIndex[8]; //from LSB to MSB, max 8 bits; terminated with 0xffff if fewer than 8 efuse IDs
ATOM_LEAKID_VOLTAGE asLeakVol[2]; //Leakage ID and related voltage
}ATOM_ASIC_PROFILE_VOLTAGE;
//ucProfileId
#define ATOM_ASIC_PROFILE_ID_EFUSE_VOLTAGE 1
#define ATOM_ASIC_PROFILE_ID_EFUSE_PERFORMANCE_VOLTAGE 1
#define ATOM_ASIC_PROFILE_ID_EFUSE_THERMAL_VOLTAGE 2
typedef struct _ATOM_ASIC_PROFILING_INFO
{
ATOM_COMMON_TABLE_HEADER asHeader;
ATOM_ASIC_PROFILE_VOLTAGE asVoltage;
}ATOM_ASIC_PROFILING_INFO;
typedef struct _ATOM_POWER_SOURCE_OBJECT
{
UCHAR ucPwrSrcId; // Power source
UCHAR ucPwrSensorType; // GPIO, I2C or none
UCHAR ucPwrSensId; // if GPIO detect, it is GPIO id, if I2C detect, it is I2C id
UCHAR ucPwrSensSlaveAddr; // Slave address if I2C detect
UCHAR ucPwrSensRegIndex; // I2C register Index if I2C detect
UCHAR ucPwrSensRegBitMask; // detect which bit is used if I2C detect
UCHAR ucPwrSensActiveState; // high active or low active
UCHAR ucReserve[3]; // reserve
USHORT usSensPwr; // in unit of watt
}ATOM_POWER_SOURCE_OBJECT;
typedef struct _ATOM_POWER_SOURCE_INFO
{
ATOM_COMMON_TABLE_HEADER asHeader;
UCHAR asPwrbehave[16];
ATOM_POWER_SOURCE_OBJECT asPwrObj[1];
}ATOM_POWER_SOURCE_INFO;
//Define ucPwrSrcId
#define POWERSOURCE_PCIE_ID1 0x00
#define POWERSOURCE_6PIN_CONNECTOR_ID1 0x01
#define POWERSOURCE_8PIN_CONNECTOR_ID1 0x02
#define POWERSOURCE_6PIN_CONNECTOR_ID2 0x04
#define POWERSOURCE_8PIN_CONNECTOR_ID2 0x08
//define ucPwrSensorId
#define POWER_SENSOR_ALWAYS 0x00
#define POWER_SENSOR_GPIO 0x01
#define POWER_SENSOR_I2C 0x02
typedef struct _ATOM_CLK_VOLT_CAPABILITY
{
ULONG ulVoltageIndex; // The Voltage Index indicated by FUSE, same voltage index shared with SCLK DPM fuse table
ULONG ulMaximumSupportedCLK; // Maximum clock supported with specified voltage index, unit in 10kHz
}ATOM_CLK_VOLT_CAPABILITY;
typedef struct _ATOM_AVAILABLE_SCLK_LIST
{
ULONG ulSupportedSCLK; // Maximum clock supported with specified voltage index, unit in 10kHz
USHORT usVoltageIndex; // The Voltage Index indicated by FUSE for specified SCLK
USHORT usVoltageID; // The Voltage ID indicated by FUSE for specified SCLK
}ATOM_AVAILABLE_SCLK_LIST;
// ATOM_INTEGRATED_SYSTEM_INFO_V6 ulSystemConfig cap definition
#define ATOM_IGP_INFO_V6_SYSTEM_CONFIG__PCIE_POWER_GATING_ENABLE 1 // refer to ulSystemConfig bit[0]
// this IntegratedSystemInfoTable is used for Llano/Ontario APUs
typedef struct _ATOM_INTEGRATED_SYSTEM_INFO_V6
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulBootUpEngineClock;
ULONG ulDentistVCOFreq;
ULONG ulBootUpUMAClock;
ATOM_CLK_VOLT_CAPABILITY sDISPCLK_Voltage[4];
ULONG ulBootUpReqDisplayVector;
ULONG ulOtherDisplayMisc;
ULONG ulGPUCapInfo;
ULONG ulSB_MMIO_Base_Addr;
USHORT usRequestedPWMFreqInHz;
UCHAR ucHtcTmpLmt;
UCHAR ucHtcHystLmt;
ULONG ulMinEngineClock;
ULONG ulSystemConfig;
ULONG ulCPUCapInfo;
USHORT usNBP0Voltage;
USHORT usNBP1Voltage;
USHORT usBootUpNBVoltage;
USHORT usExtDispConnInfoOffset;
USHORT usPanelRefreshRateRange;
UCHAR ucMemoryType;
UCHAR ucUMAChannelNumber;
ULONG ulCSR_M3_ARB_CNTL_DEFAULT[10];
ULONG ulCSR_M3_ARB_CNTL_UVD[10];
ULONG ulCSR_M3_ARB_CNTL_FS3D[10];
ATOM_AVAILABLE_SCLK_LIST sAvail_SCLK[5];
ULONG ulGMCRestoreResetTime;
ULONG ulMinimumNClk;
ULONG ulIdleNClk;
ULONG ulDDR_DLL_PowerUpTime;
ULONG ulDDR_PLL_PowerUpTime;
USHORT usPCIEClkSSPercentage;
USHORT usPCIEClkSSType;
USHORT usLvdsSSPercentage;
USHORT usLvdsSSpreadRateIn10Hz;
USHORT usHDMISSPercentage;
USHORT usHDMISSpreadRateIn10Hz;
USHORT usDVISSPercentage;
USHORT usDVISSpreadRateIn10Hz;
ULONG SclkDpmBoostMargin;
ULONG SclkDpmThrottleMargin;
USHORT SclkDpmTdpLimitPG;
USHORT SclkDpmTdpLimitBoost;
ULONG ulBoostEngineCLock;
UCHAR ulBoostVid_2bit;
UCHAR EnableBoost;
USHORT GnbTdpLimit;
USHORT usMaxLVDSPclkFreqInSingleLink;
UCHAR ucLvdsMisc;
UCHAR ucLVDSReserved;
ULONG ulReserved3[15];
ATOM_EXTERNAL_DISPLAY_CONNECTION_INFO sExtDispConnInfo;
}ATOM_INTEGRATED_SYSTEM_INFO_V6;
// ulGPUCapInfo
#define INTEGRATED_SYSTEM_INFO_V6_GPUCAPINFO__TMDSHDMI_COHERENT_SINGLEPLL_MODE 0x01
#define INTEGRATED_SYSTEM_INFO_V6_GPUCAPINFO__DISABLE_AUX_HW_MODE_DETECTION 0x08
//ucLVDSMisc:
#define SYS_INFO_LVDSMISC__888_FPDI_MODE 0x01
#define SYS_INFO_LVDSMISC__DL_CH_SWAP 0x02
#define SYS_INFO_LVDSMISC__888_BPC 0x04
#define SYS_INFO_LVDSMISC__OVERRIDE_EN 0x08
#define SYS_INFO_LVDSMISC__BLON_ACTIVE_LOW 0x10
// not used any more
#define SYS_INFO_LVDSMISC__VSYNC_ACTIVE_LOW 0x04
#define SYS_INFO_LVDSMISC__HSYNC_ACTIVE_LOW 0x08
/**********************************************************************************************************************
ATOM_INTEGRATED_SYSTEM_INFO_V6 Description
ulBootUpEngineClock: VBIOS bootup Engine clock frequency, in 10kHz units. If it equals 0, VBIOS uses a pre-defined bootup engine clock.
ulDentistVCOFreq: Dentist VCO clock in 10kHz units.
ulBootUpUMAClock: System memory boot up clock frequency in 10kHz units.
sDISPCLK_Voltage: Report Display clock voltage requirement.
ulBootUpReqDisplayVector: VBIOS boot up display IDs; the following are supported devices in Llano/Ontario projects:
ATOM_DEVICE_CRT1_SUPPORT 0x0001
ATOM_DEVICE_CRT2_SUPPORT 0x0010
ATOM_DEVICE_DFP1_SUPPORT 0x0008
ATOM_DEVICE_DFP6_SUPPORT 0x0040
ATOM_DEVICE_DFP2_SUPPORT 0x0080
ATOM_DEVICE_DFP3_SUPPORT 0x0200
ATOM_DEVICE_DFP4_SUPPORT 0x0400
ATOM_DEVICE_DFP5_SUPPORT 0x0800
ATOM_DEVICE_LCD1_SUPPORT 0x0002
ulOtherDisplayMisc: Other display related flags, not defined yet.
ulGPUCapInfo: bit[0]=0: TMDS/HDMI Coherent Mode use cascade PLL mode.
=1: TMDS/HDMI Coherent Mode use single PLL mode.
bit[3]=0: Enable HW AUX mode detection logic
=1: Disable HW AUX mode detection logic
ulSB_MMIO_Base_Addr: Physical Base address to SB MMIO space. Driver needs to initialize it for SMU usage.
usRequestedPWMFreqInHz: When it's set to 0x0 by SBIOS: the LCD BackLight is not controlled by GPU(SW).
Any attempt to change BL using VBIOS function or enable VariBri from PP table is not effective since ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU==0;
When it's set to a non-zero frequency, the BackLight is controlled by GPU (SW) in one of two ways below:
1. SW uses the GPU BL PWM output to control the BL; in this case, this non-zero frequency determines what freq the GPU should use;
VBIOS will set up the proper PWM frequency and ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU==1; as a result,
changing BL using the VBIOS function is functional in both driver and non-driver present environments;
and enabling VariBri under the driver environment from PP table is optional.
2. SW uses other means to control BL (like DPCD); this non-zero frequency serves only as a flag indicating
that BL control from GPU is expected.
VBIOS will NOT set up PWM frequency but make ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU==1
Changing BL using the VBIOS function could be functional in both driver and non-driver present environments, but
it is platform dependent,
and enabling VariBri under the driver environment from PP table is optional.
ucHtcTmpLmt: Refer to D18F3x64 bit[22:16], HtcTmpLmt.
Threshold on value to enter HTC_active state.
ucHtcHystLmt: Refer to D18F3x64 bit[27:24], HtcHystLmt.
The threshold off value to exit the HTC_active state is the threshold on value minus ucHtcHystLmt.
ulMinEngineClock: Minimum SCLK allowed in 10kHz unit. This is calculated based on WRCK Fuse settings.
ulSystemConfig: Bit[0]=0: PCIE Power Gating Disabled
=1: PCIE Power Gating Enabled
Bit[1]=0: DDR-DLL shut-down feature disabled.
1: DDR-DLL shut-down feature enabled.
Bit[2]=0: DDR-PLL Power down feature disabled.
1: DDR-PLL Power down feature enabled.
ulCPUCapInfo: TBD
usNBP0Voltage: VID for voltage on NB P0 State
usNBP1Voltage: VID for voltage on NB P1 State
usBootUpNBVoltage: Voltage Index of GNB voltage configured by SBIOS, which is sufficient to support the VBIOS DISPCLK requirement.
usExtDispConnInfoOffset: Offset to sExtDispConnInfo inside the structure
usPanelRefreshRateRange: Bit vector for LCD supported refresh rate range. If DRR is requested by the platform, at least two bits need to be set
to indicate a range.
SUPPORTED_LCD_REFRESHRATE_30Hz 0x0004
SUPPORTED_LCD_REFRESHRATE_40Hz 0x0008
SUPPORTED_LCD_REFRESHRATE_50Hz 0x0010
SUPPORTED_LCD_REFRESHRATE_60Hz 0x0020
ucMemoryType: [3:0]=1:DDR1;=2:DDR2;=3:DDR3.[7:4] is reserved.
ucUMAChannelNumber: System memory channel numbers.
ulCSR_M3_ARB_CNTL_DEFAULT[10]: Array with values for the CSR M3 arbiter for the default case.
ulCSR_M3_ARB_CNTL_UVD[10]: Array with values for the CSR M3 arbiter for UVD playback.
ulCSR_M3_ARB_CNTL_FS3D[10]: Array with values for the CSR M3 arbiter for Full Screen 3D applications.
sAvail_SCLK[5]: Array providing the available list of SCLK and corresponding voltage, ordered from low to high
ulGMCRestoreResetTime: GMC power restore and GMC reset time to calculate data reconnection latency. Unit in ns.
ulMinimumNClk: Minimum NCLK speed among all NB-Pstates to calculate data reconnection latency. Unit in 10kHz.
ulIdleNClk: NCLK speed while memory runs in self-refresh state. Unit in 10kHz.
ulDDR_DLL_PowerUpTime: DDR PHY DLL power up time. Unit in ns.
ulDDR_PLL_PowerUpTime: DDR PHY PLL power up time. Unit in ns.
usPCIEClkSSPercentage: PCIE Clock Spread Spectrum Percentage in units of 0.01%; 100 means 1%.
usPCIEClkSSType: PCIE Clock Spread Spectrum Type. 0 for Down spread(default); 1 for Center spread.
usLvdsSSPercentage: LVDS panel ( not including eDP ) Spread Spectrum Percentage in units of 0.01%; =0: use VBIOS default setting.
usLvdsSSpreadRateIn10Hz: LVDS panel ( not including eDP ) Spread Spectrum frequency in units of 10Hz; =0: use VBIOS default setting.
usHDMISSPercentage: HDMI Spread Spectrum Percentage in units of 0.01%; 100 means 1%; =0: use VBIOS default setting.
usHDMISSpreadRateIn10Hz: HDMI Spread Spectrum frequency in units of 10Hz; =0: use VBIOS default setting.
usDVISSPercentage: DVI Spread Spectrum Percentage in units of 0.01%; 100 means 1%; =0: use VBIOS default setting.
usDVISSpreadRateIn10Hz: DVI Spread Spectrum frequency in units of 10Hz; =0: use VBIOS default setting.
usMaxLVDSPclkFreqInSingleLink: Max pixel clock of a single-link LVDS panel; if =0, VBIOS uses the default threshold, currently 85MHz.
ucLVDSMisc: [bit0] LVDS 888bit panel mode =0: LVDS 888 panel in LDI mode, =1: LVDS 888 panel in FPDI mode
[bit1] LVDS panel lower and upper link mapping =0: lower link and upper link not swapped, =1: lower link and upper link are swapped
[bit2] LVDS 888bit per color mode =0: 666 bits per color =1: 888 bits per color
[bit3] LVDS parameter override enable =0: ucLvdsMisc parameters are not used =1: ucLvdsMisc parameters should be used
[bit4] Polarity of signal sent to digital BLON output pin. =0: not inverted(active high) =1: inverted ( active low )
**********************************************************************************************************************/
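// Illustrative helpers (not part of the original header): a minimal sketch of
// how a driver might consume the ATOM_INTEGRATED_SYSTEM_INFO_V6 fields
// described above; the helper names are hypothetical.
static inline int sys_info_v6_tmds_single_pll(const ATOM_INTEGRATED_SYSTEM_INFO_V6 *info)
{
  // ulGPUCapInfo bit[0]: 0 = cascade PLL, 1 = single PLL for coherent TMDS/HDMI
  return (info->ulGPUCapInfo &
          INTEGRATED_SYSTEM_INFO_V6_GPUCAPINFO__TMDSHDMI_COHERENT_SINGLEPLL_MODE) != 0;
}
static inline int sys_info_v6_bl_controlled_by_gpu(const ATOM_INTEGRATED_SYSTEM_INFO_V6 *info)
{
  // Per the description: 0 Hz means the backlight is not GPU(SW)-controlled;
  // any non-zero value means GPU control (PWM output, or a flag when e.g. DPCD is used).
  return info->usRequestedPWMFreqInHz != 0;
}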
// this Table is used for the Llano/Ontario APUs
typedef struct _ATOM_FUSION_SYSTEM_INFO_V1
{
ATOM_INTEGRATED_SYSTEM_INFO_V6 sIntegratedSysInfo;
ULONG ulPowerplayTable[128];
}ATOM_FUSION_SYSTEM_INFO_V1;
/**********************************************************************************************************************
ATOM_FUSION_SYSTEM_INFO_V1 Description
sIntegratedSysInfo: refer to ATOM_INTEGRATED_SYSTEM_INFO_V6 definition.
ulPowerplayTable[128]: These 512 bytes of memory are used to save ATOM_PPLIB_POWERPLAYTABLE3, starting from ulPowerplayTable[0]
**********************************************************************************************************************/
// this IntegratedSystemInfoTable is used for the Trinity APU
typedef struct _ATOM_INTEGRATED_SYSTEM_INFO_V1_7
{
ATOM_COMMON_TABLE_HEADER sHeader;
ULONG ulBootUpEngineClock;
ULONG ulDentistVCOFreq;
ULONG ulBootUpUMAClock;
ATOM_CLK_VOLT_CAPABILITY sDISPCLK_Voltage[4];
ULONG ulBootUpReqDisplayVector;
ULONG ulOtherDisplayMisc;
ULONG ulGPUCapInfo;
ULONG ulSB_MMIO_Base_Addr;
USHORT usRequestedPWMFreqInHz;
UCHAR ucHtcTmpLmt;
UCHAR ucHtcHystLmt;
ULONG ulMinEngineClock;
ULONG ulSystemConfig;
ULONG ulCPUCapInfo;
USHORT usNBP0Voltage;
USHORT usNBP1Voltage;
USHORT usBootUpNBVoltage;
USHORT usExtDispConnInfoOffset;
USHORT usPanelRefreshRateRange;
UCHAR ucMemoryType;
UCHAR ucUMAChannelNumber;
UCHAR strVBIOSMsg[40];
ULONG ulReserved[20];
ATOM_AVAILABLE_SCLK_LIST sAvail_SCLK[5];
ULONG ulGMCRestoreResetTime;
ULONG ulMinimumNClk;
ULONG ulIdleNClk;
ULONG ulDDR_DLL_PowerUpTime;
ULONG ulDDR_PLL_PowerUpTime;
USHORT usPCIEClkSSPercentage;
USHORT usPCIEClkSSType;
USHORT usLvdsSSPercentage;
USHORT usLvdsSSpreadRateIn10Hz;
USHORT usHDMISSPercentage;
USHORT usHDMISSpreadRateIn10Hz;
USHORT usDVISSPercentage;
USHORT usDVISSpreadRateIn10Hz;
ULONG SclkDpmBoostMargin;
ULONG SclkDpmThrottleMargin;
USHORT SclkDpmTdpLimitPG;
USHORT SclkDpmTdpLimitBoost;
ULONG ulBoostEngineCLock;
UCHAR ulBoostVid_2bit;
UCHAR EnableBoost;
USHORT GnbTdpLimit;
USHORT usMaxLVDSPclkFreqInSingleLink;
UCHAR ucLvdsMisc;
UCHAR ucLVDSReserved;
UCHAR ucLVDSPwrOnSeqDIGONtoDE_in4Ms;
UCHAR ucLVDSPwrOnSeqDEtoVARY_BL_in4Ms;
UCHAR ucLVDSPwrOffSeqVARY_BLtoDE_in4Ms;
UCHAR ucLVDSPwrOffSeqDEtoDIGON_in4Ms;
UCHAR ucLVDSOffToOnDelay_in4Ms;
UCHAR ucLVDSPwrOnSeqVARY_BLtoBLON_in4Ms;
UCHAR ucLVDSPwrOffSeqBLONtoVARY_BL_in4Ms;
UCHAR ucLVDSReserved1;
ULONG ulLCDBitDepthControlVal;
ULONG ulNbpStateMemclkFreq[4];
USHORT usNBP2Voltage;
USHORT usNBP3Voltage;
ULONG ulNbpStateNClkFreq[4];
UCHAR ucNBDPMEnable;
UCHAR ucReserved[3];
UCHAR ucDPMState0VclkFid;
UCHAR ucDPMState0DclkFid;
UCHAR ucDPMState1VclkFid;
UCHAR ucDPMState1DclkFid;
UCHAR ucDPMState2VclkFid;
UCHAR ucDPMState2DclkFid;
UCHAR ucDPMState3VclkFid;
UCHAR ucDPMState3DclkFid;
ATOM_EXTERNAL_DISPLAY_CONNECTION_INFO sExtDispConnInfo;
}ATOM_INTEGRATED_SYSTEM_INFO_V1_7;
// ulOtherDisplayMisc
#define INTEGRATED_SYSTEM_INFO__GET_EDID_CALLBACK_FUNC_SUPPORT 0x01
#define INTEGRATED_SYSTEM_INFO__GET_BOOTUP_DISPLAY_CALLBACK_FUNC_SUPPORT 0x02
#define INTEGRATED_SYSTEM_INFO__GET_EXPANSION_CALLBACK_FUNC_SUPPORT 0x04
#define INTEGRATED_SYSTEM_INFO__FAST_BOOT_SUPPORT 0x08
// ulGPUCapInfo
#define SYS_INFO_GPUCAPS__TMDSHDMI_COHERENT_SINGLEPLL_MODE 0x01
#define SYS_INFO_GPUCAPS__DP_SINGLEPLL_MODE 0x02
#define SYS_INFO_GPUCAPS__DISABLE_AUX_MODE_DETECT 0x08
/**********************************************************************************************************************
ATOM_INTEGRATED_SYSTEM_INFO_V1_7 Description
ulBootUpEngineClock: VBIOS bootup Engine clock frequency, in 10kHz unit. If it equals 0, VBIOS uses the pre-defined bootup engine clock
ulDentistVCOFreq: Dentist VCO clock in 10kHz unit.
ulBootUpUMAClock: System memory boot up clock frequency in 10kHz unit.
sDISPCLK_Voltage: Report Display clock voltage requirement.
ulBootUpReqDisplayVector: VBIOS boot up display IDs, following are supported devices in Trinity projects:
ATOM_DEVICE_CRT1_SUPPORT 0x0001
ATOM_DEVICE_DFP1_SUPPORT 0x0008
ATOM_DEVICE_DFP6_SUPPORT 0x0040
ATOM_DEVICE_DFP2_SUPPORT 0x0080
ATOM_DEVICE_DFP3_SUPPORT 0x0200
ATOM_DEVICE_DFP4_SUPPORT 0x0400
ATOM_DEVICE_DFP5_SUPPORT 0x0800
ATOM_DEVICE_LCD1_SUPPORT 0x0002
ulOtherDisplayMisc: bit[0]=0: INT15 callback function Get LCD EDID ( ax=4e08, bl=1b ) is not supported by SBIOS.
=1: INT15 callback function Get LCD EDID ( ax=4e08, bl=1b ) is supported by SBIOS.
bit[1]=0: INT15 callback function Get boot display( ax=4e08, bl=01h) is not supported by SBIOS
=1: INT15 callback function Get boot display( ax=4e08, bl=01h) is supported by SBIOS
bit[2]=0: INT15 callback function Get panel Expansion ( ax=4e08, bl=02h) is not supported by SBIOS
=1: INT15 callback function Get panel Expansion ( ax=4e08, bl=02h) is supported by SBIOS
bit[3]=0: VBIOS fast boot is disabled
=1: VBIOS fast boot is enabled. ( VBIOS skips display device detection in every set mode if an LCD panel is connected and the LID is open)
ulGPUCapInfo: bit[0]=0: TMDS/HDMI Coherent Mode use cascade PLL mode.
=1: TMDS/HDMI Coherent Mode use single PLL mode.
bit[1]=0: DP mode use cascade PLL mode ( New for Trinity )
=1: DP mode use single PLL mode
bit[3]=0: Enable AUX HW mode detection logic
=1: Disable AUX HW mode detection logic
ulSB_MMIO_Base_Addr: Physical Base address to SB MMIO space. Driver needs to initialize it for SMU usage.
usRequestedPWMFreqInHz: When it's set to 0x0 by SBIOS: the LCD BackLight is not controlled by GPU(SW).
Any attempt to change BL using VBIOS function or enable VariBri from PP table is not effective since ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU==0;
When it's set to a non-zero frequency, the BackLight is controlled by GPU (SW) in one of two ways below:
1. SW uses the GPU BL PWM output to control the BL; in this case, this non-zero frequency determines what freq the GPU should use;
VBIOS will set up the proper PWM frequency and ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU==1; as a result,
changing BL using the VBIOS function is functional in both driver and non-driver present environments;
and enabling VariBri under the driver environment from PP table is optional.
2. SW uses other means to control BL (like DPCD); this non-zero frequency serves only as a flag indicating
that BL control from GPU is expected.
VBIOS will NOT set up PWM frequency but make ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU==1
Changing BL using the VBIOS function could be functional in both driver and non-driver present environments, but
it is platform dependent,
and enabling VariBri under the driver environment from PP table is optional.
ucHtcTmpLmt: Refer to D18F3x64 bit[22:16], HtcTmpLmt.
Threshold on value to enter HTC_active state.
ucHtcHystLmt: Refer to D18F3x64 bit[27:24], HtcHystLmt.
The threshold off value to exit the HTC_active state is the threshold on value minus ucHtcHystLmt.
ulMinEngineClock: Minimum SCLK allowed in 10kHz unit. This is calculated based on WRCK Fuse settings.
ulSystemConfig: Bit[0]=0: PCIE Power Gating Disabled
=1: PCIE Power Gating Enabled
Bit[1]=0: DDR-DLL shut-down feature disabled.
1: DDR-DLL shut-down feature enabled.
Bit[2]=0: DDR-PLL Power down feature disabled.
1: DDR-PLL Power down feature enabled.
ulCPUCapInfo: TBD
usNBP0Voltage: VID for voltage on NB P0 State
usNBP1Voltage: VID for voltage on NB P1 State
usNBP2Voltage: VID for voltage on NB P2 State
usNBP3Voltage: VID for voltage on NB P3 State
usBootUpNBVoltage: Voltage Index of GNB voltage configured by SBIOS, which is sufficient to support the VBIOS DISPCLK requirement.
usExtDispConnInfoOffset: Offset to sExtDispConnInfo inside the structure
usPanelRefreshRateRange: Bit vector for LCD supported refresh rate range. If DRR is requested by the platform, at least two bits need to be set
to indicate a range.
SUPPORTED_LCD_REFRESHRATE_30Hz 0x0004
SUPPORTED_LCD_REFRESHRATE_40Hz 0x0008
SUPPORTED_LCD_REFRESHRATE_50Hz 0x0010
SUPPORTED_LCD_REFRESHRATE_60Hz 0x0020
ucMemoryType: [3:0]=1:DDR1;=2:DDR2;=3:DDR3.[7:4] is reserved.
ucUMAChannelNumber: System memory channel numbers.
ulCSR_M3_ARB_CNTL_DEFAULT[10]: Array with values for the CSR M3 arbiter for the default case.
ulCSR_M3_ARB_CNTL_UVD[10]: Array with values for the CSR M3 arbiter for UVD playback.
ulCSR_M3_ARB_CNTL_FS3D[10]: Array with values for the CSR M3 arbiter for Full Screen 3D applications.
sAvail_SCLK[5]: Array providing the available list of SCLK and corresponding voltage, ordered from low to high
ulGMCRestoreResetTime: GMC power restore and GMC reset time to calculate data reconnection latency. Unit in ns.
ulMinimumNClk: Minimum NCLK speed among all NB-Pstates to calculate data reconnection latency. Unit in 10kHz.
ulIdleNClk: NCLK speed while memory runs in self-refresh state. Unit in 10kHz.
ulDDR_DLL_PowerUpTime: DDR PHY DLL power up time. Unit in ns.
ulDDR_PLL_PowerUpTime: DDR PHY PLL power up time. Unit in ns.
usPCIEClkSSPercentage: PCIE Clock Spread Spectrum Percentage in units of 0.01%; 100 means 1%.
usPCIEClkSSType: PCIE Clock Spread Spectrum Type. 0 for Down spread(default); 1 for Center spread.
usLvdsSSPercentage: LVDS panel ( not including eDP ) Spread Spectrum Percentage in units of 0.01%; =0: use VBIOS default setting.
usLvdsSSpreadRateIn10Hz: LVDS panel ( not including eDP ) Spread Spectrum frequency in units of 10Hz; =0: use VBIOS default setting.
usHDMISSPercentage: HDMI Spread Spectrum Percentage in units of 0.01%; 100 means 1%; =0: use VBIOS default setting.
usHDMISSpreadRateIn10Hz: HDMI Spread Spectrum frequency in units of 10Hz; =0: use VBIOS default setting.
usDVISSPercentage: DVI Spread Spectrum Percentage in units of 0.01%; 100 means 1%; =0: use VBIOS default setting.
usDVISSpreadRateIn10Hz: DVI Spread Spectrum frequency in units of 10Hz; =0: use VBIOS default setting.
usMaxLVDSPclkFreqInSingleLink: Max pixel clock of a single-link LVDS panel; if =0, VBIOS uses the default threshold, currently 85MHz.
ucLVDSMisc: [bit0] LVDS 888bit panel mode =0: LVDS 888 panel in LDI mode, =1: LVDS 888 panel in FPDI mode
[bit1] LVDS panel lower and upper link mapping =0: lower link and upper link not swapped, =1: lower link and upper link are swapped
[bit2] LVDS 888bit per color mode =0: 666 bits per color =1: 888 bits per color
[bit3] LVDS parameter override enable =0: ucLvdsMisc parameters are not used =1: ucLvdsMisc parameters should be used
[bit4] Polarity of signal sent to digital BLON output pin. =0: not inverted(active high) =1: inverted ( active low )
ucLVDSPwrOnSeqDIGONtoDE_in4Ms: LVDS power up sequence time in unit of 4ms, time delay from DIGON signal active to data enable signal active( DE ).
=0 means use the VBIOS default, which is 8 ( 32ms ). The LVDS power up sequence is as following: DIGON->DE->VARY_BL->BLON.
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ucLVDSPwrOnSeqDEtoVARY_BL_in4Ms: LVDS power up sequence time in unit of 4ms, time delay from DE ( data enable ) active to Vary Brightness enable signal active ( VARY_BL ).
=0 means use the VBIOS default, which is 90 ( 360ms ). The LVDS power up sequence is as following: DIGON->DE->VARY_BL->BLON.
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ucLVDSPwrOffSeqVARY_BLtoDE_in4Ms: LVDS power down sequence time in unit of 4ms, time delay from vary brightness enable signal ( VARY_BL ) off to data enable ( DE ) signal off.
=0 means use the VBIOS default delay, which is 8 ( 32ms ). The LVDS power down sequence is as following: BLON->VARY_BL->DE->DIGON
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ucLVDSPwrOffSeqDEtoDIGON_in4Ms: LVDS power down sequence time in unit of 4ms, time delay from data enable ( DE ) signal off to LCDVCC ( DIGON ) off.
=0 means use the VBIOS default, which is 90 ( 360ms ). The LVDS power down sequence is as following: BLON->VARY_BL->DE->DIGON
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ucLVDSOffToOnDelay_in4Ms: LVDS off-to-on delay in unit of 4ms. Time delay from DIGON signal off to DIGON signal active.
=0 means to use VBIOS default delay which is 125 ( 500ms ).
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ucLVDSPwrOnSeqVARY_BLtoBLON_in4Ms: LVDS power up sequence time in unit of 4ms. Time delay from VARY_BL signal on to BLON signal active.
=0 means to use VBIOS default delay which is 0 ( 0ms ).
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ucLVDSPwrOffSeqBLONtoVARY_BL_in4Ms: LVDS power down sequence time in unit of 4ms. Time delay from BLON signal off to VARY_BL signal off.
=0 means to use VBIOS default delay which is 0 ( 0ms ).
This parameter is used by VBIOS only. VBIOS will patch LVDS_InfoTable.
ulNbpStateMemclkFreq[4]: system memory clock frequency in unit of 10kHz at the different NB P-states.
**********************************************************************************************************************/
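// Illustrative helper (not part of the original header): converts one of the
// ucLVDSPwrOnSeq*/ucLVDSPwrOffSeq* fields described above to milliseconds,
// applying the VBIOS default when the field is 0 (per the description, e.g. a
// default of 8 yields 32ms and 90 yields 360ms). The helper name and
// parameters are hypothetical.
static inline USHORT lvds_seq_delay_in_ms(UCHAR field_in_4ms, UCHAR default_in_4ms)
{
  UCHAR v = (field_in_4ms != 0) ? field_in_4ms : default_in_4ms; // 0 = use VBIOS default
  return (USHORT)(v * 4);                                        // field unit is 4ms
}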
/**************************************************************************/
// This portion is only used when an external thermal chip or an engine/memory clock SS chip is populated on a design
//Memory SS Info Table
//Define Memory Clock SS chip ID
#define ICS91719 1
#define ICS91720 2
//Define one structure to inform SW of a "block of data" to write to the external SS chip via the I2C protocol
typedef struct _ATOM_I2C_DATA_RECORD
{
UCHAR ucNunberOfBytes; //Indicates how many bytes SW needs to write to the external ASIC for one block, in addition to "Start" and "Stop"
UCHAR ucI2CData[1]; //I2C data in bytes, should be less than 16 bytes usually
}ATOM_I2C_DATA_RECORD;
//Define one structure to inform SW how many blocks of data to write to the external SS chip via the I2C protocol, in addition to other information
typedef struct _ATOM_I2C_DEVICE_SETUP_INFO
{
ATOM_I2C_ID_CONFIG_ACCESS sucI2cId; //I2C line and HW/SW assisted cap.
UCHAR ucSSChipID; //SS chip being used
UCHAR ucSSChipSlaveAddr; //Slave Address to set up this SS chip
UCHAR ucNumOfI2CDataRecords; //number of data block
ATOM_I2C_DATA_RECORD asI2CData[1];
}ATOM_I2C_DEVICE_SETUP_INFO;
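// Illustrative walker (not part of the original header): ATOM_I2C_DATA_RECORD
// ends with a variable-length ucI2CData[] payload, so consecutive records
// cannot be indexed as a plain array; each record is one length byte
// (ucNunberOfBytes) followed by that many data bytes. A minimal sketch for
// stepping to the next record; the helper name is hypothetical.
static inline ATOM_I2C_DATA_RECORD *next_i2c_data_record(ATOM_I2C_DATA_RECORD *rec)
{
  return (ATOM_I2C_DATA_RECORD *)((UCHAR *)rec + 1 + rec->ucNunberOfBytes);
}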
//==========================================================================================
typedef struct _ATOM_ASIC_MVDD_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_I2C_DEVICE_SETUP_INFO asI2CSetup[1];
}ATOM_ASIC_MVDD_INFO;
//==========================================================================================
#define ATOM_MCLK_SS_INFO ATOM_ASIC_MVDD_INFO
//==========================================================================================
/**************************************************************************/
typedef struct _ATOM_ASIC_SS_ASSIGNMENT
{
ULONG ulTargetClockRange; //Clock output frequency (VCO), in units of 10kHz
USHORT usSpreadSpectrumPercentage; //in unit of 0.01%
USHORT usSpreadRateInKhz; //in unit of kHz, modulation freq
UCHAR ucClockIndication; //Indicate which clock source needs SS
UCHAR ucSpreadSpectrumMode; //Bit1=0 Down Spread,=1 Center Spread.
UCHAR ucReserved[2];
}ATOM_ASIC_SS_ASSIGNMENT;
//Define ucClockIndication. SW uses the IDs below to search whether SS is required/enabled on a clock branch/signal type.
//SS is not required or enabled if a match is not found.
#define ASIC_INTERNAL_MEMORY_SS 1
#define ASIC_INTERNAL_ENGINE_SS 2
#define ASIC_INTERNAL_UVD_SS 3
#define ASIC_INTERNAL_SS_ON_TMDS 4
#define ASIC_INTERNAL_SS_ON_HDMI 5
#define ASIC_INTERNAL_SS_ON_LVDS 6
#define ASIC_INTERNAL_SS_ON_DP 7
#define ASIC_INTERNAL_SS_ON_DCPLL 8
#define ASIC_EXTERNAL_SS_ON_DP_CLOCK 9
#define ASIC_INTERNAL_VCE_SS 10
typedef struct _ATOM_ASIC_SS_ASSIGNMENT_V2
{
ULONG ulTargetClockRange; //For mem/engine/uvd, clock output frequency (VCO), in units of 10kHz
//For TMDS/HDMI/LVDS, it is the pixel clock; for DP, it is the link clock ( 27000 or 16200 )
USHORT usSpreadSpectrumPercentage; //in unit of 0.01%
USHORT usSpreadRateIn10Hz; //in unit of 10Hz, modulation freq
UCHAR ucClockIndication; //Indicate which clock source needs SS
UCHAR ucSpreadSpectrumMode; //Bit0=0 Down Spread,=1 Center Spread, bit1=0: internal SS bit1=1: external SS
UCHAR ucReserved[2];
}ATOM_ASIC_SS_ASSIGNMENT_V2;
//ucSpreadSpectrumMode
//#define ATOM_SS_DOWN_SPREAD_MODE_MASK 0x00000000
//#define ATOM_SS_DOWN_SPREAD_MODE 0x00000000
//#define ATOM_SS_CENTRE_SPREAD_MODE_MASK 0x00000001
//#define ATOM_SS_CENTRE_SPREAD_MODE 0x00000001
//#define ATOM_INTERNAL_SS_MASK 0x00000000
//#define ATOM_EXTERNAL_SS_MASK 0x00000002
typedef struct _ATOM_ASIC_INTERNAL_SS_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_ASIC_SS_ASSIGNMENT asSpreadSpectrum[4];
}ATOM_ASIC_INTERNAL_SS_INFO;
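// Illustrative lookup (not part of the original header): per the comment on
// the ASIC_INTERNAL_* IDs above, SW searches the assignment table for a
// matching ucClockIndication and treats SS as not required/enabled when no
// entry matches. A minimal sketch against the fixed 4-entry table; the helper
// name is hypothetical.
static inline const ATOM_ASIC_SS_ASSIGNMENT *
find_ss_assignment(const ATOM_ASIC_INTERNAL_SS_INFO *tbl, UCHAR clock_id)
{
  int i;
  for (i = 0; i < 4; i++)
    if (tbl->asSpreadSpectrum[i].ucClockIndication == clock_id)
      return &tbl->asSpreadSpectrum[i];
  return (const ATOM_ASIC_SS_ASSIGNMENT *)0; // no match: SS not enabled on this clock
}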
typedef struct _ATOM_ASIC_INTERNAL_SS_INFO_V2
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_ASIC_SS_ASSIGNMENT_V2 asSpreadSpectrum[1]; //this is a pointer only.
}ATOM_ASIC_INTERNAL_SS_INFO_V2;
typedef struct _ATOM_ASIC_SS_ASSIGNMENT_V3
{
ULONG ulTargetClockRange; //For mem/engine/uvd, clock output frequency (VCO), in units of 10kHz
//For TMDS/HDMI/LVDS, it is the pixel clock; for DP, it is the link clock ( 27000 or 16200 )
USHORT usSpreadSpectrumPercentage; //in unit of 0.01%
USHORT usSpreadRateIn10Hz; //in unit of 10Hz, modulation freq
UCHAR ucClockIndication; //Indicate which clock source needs SS
UCHAR ucSpreadSpectrumMode; //Bit0=0 Down Spread,=1 Center Spread, bit1=0: internal SS bit1=1: external SS
UCHAR ucReserved[2];
}ATOM_ASIC_SS_ASSIGNMENT_V3;
typedef struct _ATOM_ASIC_INTERNAL_SS_INFO_V3
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_ASIC_SS_ASSIGNMENT_V3 asSpreadSpectrum[1]; //this is a pointer only.
}ATOM_ASIC_INTERNAL_SS_INFO_V3;
//==============================Scratch Pad Definition Portion===============================
#define ATOM_DEVICE_CONNECT_INFO_DEF 0
#define ATOM_ROM_LOCATION_DEF 1
#define ATOM_TV_STANDARD_DEF 2
#define ATOM_ACTIVE_INFO_DEF 3
#define ATOM_LCD_INFO_DEF 4
#define ATOM_DOS_REQ_INFO_DEF 5
#define ATOM_ACC_CHANGE_INFO_DEF 6
#define ATOM_DOS_MODE_INFO_DEF 7
#define ATOM_I2C_CHANNEL_STATUS_DEF 8
#define ATOM_I2C_CHANNEL_STATUS1_DEF 9
#define ATOM_INTERNAL_TIMER_DEF 10
// BIOS_0_SCRATCH Definition
#define ATOM_S0_CRT1_MONO 0x00000001L
#define ATOM_S0_CRT1_COLOR 0x00000002L
#define ATOM_S0_CRT1_MASK (ATOM_S0_CRT1_MONO+ATOM_S0_CRT1_COLOR)
#define ATOM_S0_TV1_COMPOSITE_A 0x00000004L
#define ATOM_S0_TV1_SVIDEO_A 0x00000008L
#define ATOM_S0_TV1_MASK_A (ATOM_S0_TV1_COMPOSITE_A+ATOM_S0_TV1_SVIDEO_A)
#define ATOM_S0_CV_A 0x00000010L
#define ATOM_S0_CV_DIN_A 0x00000020L
#define ATOM_S0_CV_MASK_A (ATOM_S0_CV_A+ATOM_S0_CV_DIN_A)
#define ATOM_S0_CRT2_MONO 0x00000100L
#define ATOM_S0_CRT2_COLOR 0x00000200L
#define ATOM_S0_CRT2_MASK (ATOM_S0_CRT2_MONO+ATOM_S0_CRT2_COLOR)
#define ATOM_S0_TV1_COMPOSITE 0x00000400L
#define ATOM_S0_TV1_SVIDEO 0x00000800L
#define ATOM_S0_TV1_SCART 0x00004000L
#define ATOM_S0_TV1_MASK (ATOM_S0_TV1_COMPOSITE+ATOM_S0_TV1_SVIDEO+ATOM_S0_TV1_SCART)
#define ATOM_S0_CV 0x00001000L
#define ATOM_S0_CV_DIN 0x00002000L
#define ATOM_S0_CV_MASK (ATOM_S0_CV+ATOM_S0_CV_DIN)
#define ATOM_S0_DFP1 0x00010000L
#define ATOM_S0_DFP2 0x00020000L
#define ATOM_S0_LCD1 0x00040000L
#define ATOM_S0_LCD2 0x00080000L
#define ATOM_S0_DFP6 0x00100000L
#define ATOM_S0_DFP3 0x00200000L
#define ATOM_S0_DFP4 0x00400000L
#define ATOM_S0_DFP5 0x00800000L
#define ATOM_S0_DFP_MASK (ATOM_S0_DFP1 | ATOM_S0_DFP2 | ATOM_S0_DFP3 | ATOM_S0_DFP4 | ATOM_S0_DFP5 | ATOM_S0_DFP6)
#define ATOM_S0_FAD_REGISTER_BUG 0x02000000L // If set, indicates we are running a PCIE asic with
// the FAD/HDP reg access bug. Bit is read by DAL, this is obsolete from RV5xx
#define ATOM_S0_THERMAL_STATE_MASK 0x1C000000L
#define ATOM_S0_THERMAL_STATE_SHIFT 26
#define ATOM_S0_SYSTEM_POWER_STATE_MASK 0xE0000000L
#define ATOM_S0_SYSTEM_POWER_STATE_SHIFT 29
#define ATOM_S0_SYSTEM_POWER_STATE_VALUE_AC 1
#define ATOM_S0_SYSTEM_POWER_STATE_VALUE_DC 2
#define ATOM_S0_SYSTEM_POWER_STATE_VALUE_LITEAC 3
#define ATOM_S0_SYSTEM_POWER_STATE_VALUE_LIT2AC 4
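// Illustrative decode (not part of the original header): the *_MASK/*_SHIFT
// pairs above extract multi-bit fields from BIOS_0_SCRATCH; e.g. the system
// power state below yields one of the ATOM_S0_SYSTEM_POWER_STATE_VALUE_*
// constants. The helper name is hypothetical.
static inline ULONG atom_s0_system_power_state(ULONG bios_0_scratch)
{
  return (bios_0_scratch & ATOM_S0_SYSTEM_POWER_STATE_MASK) >>
         ATOM_S0_SYSTEM_POWER_STATE_SHIFT;
}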
//Byte aligned definition for BIOS usage
#define ATOM_S0_CRT1_MONOb0 0x01
#define ATOM_S0_CRT1_COLORb0 0x02
#define ATOM_S0_CRT1_MASKb0 (ATOM_S0_CRT1_MONOb0+ATOM_S0_CRT1_COLORb0)
#define ATOM_S0_TV1_COMPOSITEb0 0x04
#define ATOM_S0_TV1_SVIDEOb0 0x08
#define ATOM_S0_TV1_MASKb0 (ATOM_S0_TV1_COMPOSITEb0+ATOM_S0_TV1_SVIDEOb0)
#define ATOM_S0_CVb0 0x10
#define ATOM_S0_CV_DINb0 0x20
#define ATOM_S0_CV_MASKb0 (ATOM_S0_CVb0+ATOM_S0_CV_DINb0)
#define ATOM_S0_CRT2_MONOb1 0x01
#define ATOM_S0_CRT2_COLORb1 0x02
#define ATOM_S0_CRT2_MASKb1 (ATOM_S0_CRT2_MONOb1+ATOM_S0_CRT2_COLORb1)
#define ATOM_S0_TV1_COMPOSITEb1 0x04
#define ATOM_S0_TV1_SVIDEOb1 0x08
#define ATOM_S0_TV1_SCARTb1 0x40
#define ATOM_S0_TV1_MASKb1 (ATOM_S0_TV1_COMPOSITEb1+ATOM_S0_TV1_SVIDEOb1+ATOM_S0_TV1_SCARTb1)
#define ATOM_S0_CVb1 0x10
#define ATOM_S0_CV_DINb1 0x20
#define ATOM_S0_CV_MASKb1 (ATOM_S0_CVb1+ATOM_S0_CV_DINb1)
#define ATOM_S0_DFP1b2 0x01
#define ATOM_S0_DFP2b2 0x02
#define ATOM_S0_LCD1b2 0x04
#define ATOM_S0_LCD2b2 0x08
#define ATOM_S0_DFP6b2 0x10
#define ATOM_S0_DFP3b2 0x20
#define ATOM_S0_DFP4b2 0x40
#define ATOM_S0_DFP5b2 0x80
#define ATOM_S0_THERMAL_STATE_MASKb3 0x1C
#define ATOM_S0_THERMAL_STATE_SHIFTb3 2
#define ATOM_S0_SYSTEM_POWER_STATE_MASKb3 0xE0
#define ATOM_S0_LCD1_SHIFT 18
// BIOS_1_SCRATCH Definition
#define ATOM_S1_ROM_LOCATION_MASK 0x0000FFFFL
#define ATOM_S1_PCI_BUS_DEV_MASK 0xFFFF0000L
// BIOS_2_SCRATCH Definition
#define ATOM_S2_TV1_STANDARD_MASK 0x0000000FL
#define ATOM_S2_CURRENT_BL_LEVEL_MASK 0x0000FF00L
#define ATOM_S2_CURRENT_BL_LEVEL_SHIFT 8
#define ATOM_S2_FORCEDLOWPWRMODE_STATE_MASK 0x0C000000L
#define ATOM_S2_FORCEDLOWPWRMODE_STATE_MASK_SHIFT 26
#define ATOM_S2_FORCEDLOWPWRMODE_STATE_CHANGE 0x10000000L
#define ATOM_S2_DEVICE_DPMS_STATE 0x00010000L
#define ATOM_S2_VRI_BRIGHT_ENABLE 0x20000000L
#define ATOM_S2_DISPLAY_ROTATION_0_DEGREE 0x0
#define ATOM_S2_DISPLAY_ROTATION_90_DEGREE 0x1
#define ATOM_S2_DISPLAY_ROTATION_180_DEGREE 0x2
#define ATOM_S2_DISPLAY_ROTATION_270_DEGREE 0x3
#define ATOM_S2_DISPLAY_ROTATION_DEGREE_SHIFT 30
#define ATOM_S2_DISPLAY_ROTATION_ANGLE_MASK 0xC0000000L
//Byte aligned definition for BIOS usage
#define ATOM_S2_TV1_STANDARD_MASKb0 0x0F
#define ATOM_S2_CURRENT_BL_LEVEL_MASKb1 0xFF
#define ATOM_S2_DEVICE_DPMS_STATEb2 0x01
#define ATOM_S2_DEVICE_DPMS_MASKw1 0x3FF
#define ATOM_S2_FORCEDLOWPWRMODE_STATE_MASKb3 0x0C
#define ATOM_S2_FORCEDLOWPWRMODE_STATE_CHANGEb3 0x10
#define ATOM_S2_TMDS_COHERENT_MODEb3 0x10 // used by VBIOS code only, use coherent mode for TMDS/HDMI mode
#define ATOM_S2_VRI_BRIGHT_ENABLEb3 0x20
#define ATOM_S2_ROTATION_STATE_MASKb3 0xC0
// BIOS_3_SCRATCH Definition
#define ATOM_S3_CRT1_ACTIVE 0x00000001L
#define ATOM_S3_LCD1_ACTIVE 0x00000002L
#define ATOM_S3_TV1_ACTIVE 0x00000004L
#define ATOM_S3_DFP1_ACTIVE 0x00000008L
#define ATOM_S3_CRT2_ACTIVE 0x00000010L
#define ATOM_S3_LCD2_ACTIVE 0x00000020L
#define ATOM_S3_DFP6_ACTIVE 0x00000040L
#define ATOM_S3_DFP2_ACTIVE 0x00000080L
#define ATOM_S3_CV_ACTIVE 0x00000100L
#define ATOM_S3_DFP3_ACTIVE 0x00000200L
#define ATOM_S3_DFP4_ACTIVE 0x00000400L
#define ATOM_S3_DFP5_ACTIVE 0x00000800L
#define ATOM_S3_DEVICE_ACTIVE_MASK 0x00000FFFL
#define ATOM_S3_LCD_FULLEXPANSION_ACTIVE 0x00001000L
#define ATOM_S3_LCD_EXPANSION_ASPEC_RATIO_ACTIVE 0x00002000L
#define ATOM_S3_CRT1_CRTC_ACTIVE 0x00010000L
#define ATOM_S3_LCD1_CRTC_ACTIVE 0x00020000L
#define ATOM_S3_TV1_CRTC_ACTIVE 0x00040000L
#define ATOM_S3_DFP1_CRTC_ACTIVE 0x00080000L
#define ATOM_S3_CRT2_CRTC_ACTIVE 0x00100000L
#define ATOM_S3_LCD2_CRTC_ACTIVE 0x00200000L
#define ATOM_S3_DFP6_CRTC_ACTIVE 0x00400000L
#define ATOM_S3_DFP2_CRTC_ACTIVE 0x00800000L
#define ATOM_S3_CV_CRTC_ACTIVE 0x01000000L
#define ATOM_S3_DFP3_CRTC_ACTIVE 0x02000000L
#define ATOM_S3_DFP4_CRTC_ACTIVE 0x04000000L
#define ATOM_S3_DFP5_CRTC_ACTIVE 0x08000000L
#define ATOM_S3_DEVICE_CRTC_ACTIVE_MASK 0x0FFF0000L
#define ATOM_S3_ASIC_GUI_ENGINE_HUNG 0x20000000L
//Below two definitions are not supported in pplib, but in the old powerplay in DAL
#define ATOM_S3_ALLOW_FAST_PWR_SWITCH 0x40000000L
#define ATOM_S3_RQST_GPU_USE_MIN_PWR 0x80000000L
//Byte aligned definition for BIOS usage
#define ATOM_S3_CRT1_ACTIVEb0 0x01
#define ATOM_S3_LCD1_ACTIVEb0 0x02
#define ATOM_S3_TV1_ACTIVEb0 0x04
#define ATOM_S3_DFP1_ACTIVEb0 0x08
#define ATOM_S3_CRT2_ACTIVEb0 0x10
#define ATOM_S3_LCD2_ACTIVEb0 0x20
#define ATOM_S3_DFP6_ACTIVEb0 0x40
#define ATOM_S3_DFP2_ACTIVEb0 0x80
#define ATOM_S3_CV_ACTIVEb1 0x01
#define ATOM_S3_DFP3_ACTIVEb1 0x02
#define ATOM_S3_DFP4_ACTIVEb1 0x04
#define ATOM_S3_DFP5_ACTIVEb1 0x08
#define ATOM_S3_ACTIVE_CRTC1w0 0xFFF
#define ATOM_S3_CRT1_CRTC_ACTIVEb2 0x01
#define ATOM_S3_LCD1_CRTC_ACTIVEb2 0x02
#define ATOM_S3_TV1_CRTC_ACTIVEb2 0x04
#define ATOM_S3_DFP1_CRTC_ACTIVEb2 0x08
#define ATOM_S3_CRT2_CRTC_ACTIVEb2 0x10
#define ATOM_S3_LCD2_CRTC_ACTIVEb2 0x20
#define ATOM_S3_DFP6_CRTC_ACTIVEb2 0x40
#define ATOM_S3_DFP2_CRTC_ACTIVEb2 0x80
#define ATOM_S3_CV_CRTC_ACTIVEb3 0x01
#define ATOM_S3_DFP3_CRTC_ACTIVEb3 0x02
#define ATOM_S3_DFP4_CRTC_ACTIVEb3 0x04
#define ATOM_S3_DFP5_CRTC_ACTIVEb3 0x08
#define ATOM_S3_ACTIVE_CRTC2w1 0xFFF
// BIOS_4_SCRATCH Definition
#define ATOM_S4_LCD1_PANEL_ID_MASK 0x000000FFL
#define ATOM_S4_LCD1_REFRESH_MASK 0x0000FF00L
#define ATOM_S4_LCD1_REFRESH_SHIFT 8
//Byte aligned definition for BIOS usage
#define ATOM_S4_LCD1_PANEL_ID_MASKb0 0x0FF
#define ATOM_S4_LCD1_REFRESH_MASKb1 ATOM_S4_LCD1_PANEL_ID_MASKb0
#define ATOM_S4_VRAM_INFO_MASKb2 ATOM_S4_LCD1_PANEL_ID_MASKb0
// BIOS_5_SCRATCH Definition, BIOS_5_SCRATCH is used by Firmware only !!!!
#define ATOM_S5_DOS_REQ_CRT1b0 0x01
#define ATOM_S5_DOS_REQ_LCD1b0 0x02
#define ATOM_S5_DOS_REQ_TV1b0 0x04
#define ATOM_S5_DOS_REQ_DFP1b0 0x08
#define ATOM_S5_DOS_REQ_CRT2b0 0x10
#define ATOM_S5_DOS_REQ_LCD2b0 0x20
#define ATOM_S5_DOS_REQ_DFP6b0 0x40
#define ATOM_S5_DOS_REQ_DFP2b0 0x80
#define ATOM_S5_DOS_REQ_CVb1 0x01
#define ATOM_S5_DOS_REQ_DFP3b1 0x02
#define ATOM_S5_DOS_REQ_DFP4b1 0x04
#define ATOM_S5_DOS_REQ_DFP5b1 0x08
#define ATOM_S5_DOS_REQ_DEVICEw0 0x0FFF
#define ATOM_S5_DOS_REQ_CRT1 0x0001
#define ATOM_S5_DOS_REQ_LCD1 0x0002
#define ATOM_S5_DOS_REQ_TV1 0x0004
#define ATOM_S5_DOS_REQ_DFP1 0x0008
#define ATOM_S5_DOS_REQ_CRT2 0x0010
#define ATOM_S5_DOS_REQ_LCD2 0x0020
#define ATOM_S5_DOS_REQ_DFP6 0x0040
#define ATOM_S5_DOS_REQ_DFP2 0x0080
#define ATOM_S5_DOS_REQ_CV 0x0100
#define ATOM_S5_DOS_REQ_DFP3 0x0200
#define ATOM_S5_DOS_REQ_DFP4 0x0400
#define ATOM_S5_DOS_REQ_DFP5 0x0800
#define ATOM_S5_DOS_FORCE_CRT1b2 ATOM_S5_DOS_REQ_CRT1b0
#define ATOM_S5_DOS_FORCE_TV1b2 ATOM_S5_DOS_REQ_TV1b0
#define ATOM_S5_DOS_FORCE_CRT2b2 ATOM_S5_DOS_REQ_CRT2b0
#define ATOM_S5_DOS_FORCE_CVb3 ATOM_S5_DOS_REQ_CVb1
#define ATOM_S5_DOS_FORCE_DEVICEw1 (ATOM_S5_DOS_FORCE_CRT1b2+ATOM_S5_DOS_FORCE_TV1b2+ATOM_S5_DOS_FORCE_CRT2b2+\
(ATOM_S5_DOS_FORCE_CVb3<<8))
// BIOS_6_SCRATCH Definition
#define ATOM_S6_DEVICE_CHANGE 0x00000001L
#define ATOM_S6_SCALER_CHANGE 0x00000002L
#define ATOM_S6_LID_CHANGE 0x00000004L
#define ATOM_S6_DOCKING_CHANGE 0x00000008L
#define ATOM_S6_ACC_MODE 0x00000010L
#define ATOM_S6_EXT_DESKTOP_MODE 0x00000020L
#define ATOM_S6_LID_STATE 0x00000040L
#define ATOM_S6_DOCK_STATE 0x00000080L
#define ATOM_S6_CRITICAL_STATE 0x00000100L
#define ATOM_S6_HW_I2C_BUSY_STATE 0x00000200L
#define ATOM_S6_THERMAL_STATE_CHANGE 0x00000400L
#define ATOM_S6_INTERRUPT_SET_BY_BIOS 0x00000800L
#define ATOM_S6_REQ_LCD_EXPANSION_FULL 0x00001000L //Normal expansion Request bit for LCD
#define ATOM_S6_REQ_LCD_EXPANSION_ASPEC_RATIO 0x00002000L //Aspect ratio expansion Request bit for LCD
#define ATOM_S6_DISPLAY_STATE_CHANGE 0x00004000L //This bit is recycled when ATOM_BIOS_INFO_BIOS_SCRATCH6_SCL2_REDEFINE is set; previously it was SCL2_H_expansion
#define ATOM_S6_I2C_STATE_CHANGE 0x00008000L //This bit is recycled when ATOM_BIOS_INFO_BIOS_SCRATCH6_SCL2_REDEFINE is set; previously it was SCL2_V_expansion
#define ATOM_S6_ACC_REQ_CRT1 0x00010000L
#define ATOM_S6_ACC_REQ_LCD1 0x00020000L
#define ATOM_S6_ACC_REQ_TV1 0x00040000L
#define ATOM_S6_ACC_REQ_DFP1 0x00080000L
#define ATOM_S6_ACC_REQ_CRT2 0x00100000L
#define ATOM_S6_ACC_REQ_LCD2 0x00200000L
#define ATOM_S6_ACC_REQ_DFP6 0x00400000L
#define ATOM_S6_ACC_REQ_DFP2 0x00800000L
#define ATOM_S6_ACC_REQ_CV 0x01000000L
#define ATOM_S6_ACC_REQ_DFP3 0x02000000L
#define ATOM_S6_ACC_REQ_DFP4 0x04000000L
#define ATOM_S6_ACC_REQ_DFP5 0x08000000L
#define ATOM_S6_ACC_REQ_MASK 0x0FFF0000L
#define ATOM_S6_SYSTEM_POWER_MODE_CHANGE 0x10000000L
#define ATOM_S6_ACC_BLOCK_DISPLAY_SWITCH 0x20000000L
#define ATOM_S6_VRI_BRIGHTNESS_CHANGE 0x40000000L
#define ATOM_S6_CONFIG_DISPLAY_CHANGE_MASK 0x80000000L
//Byte aligned definition for BIOS usage
#define ATOM_S6_DEVICE_CHANGEb0 0x01
#define ATOM_S6_SCALER_CHANGEb0 0x02
#define ATOM_S6_LID_CHANGEb0 0x04
#define ATOM_S6_DOCKING_CHANGEb0 0x08
#define ATOM_S6_ACC_MODEb0 0x10
#define ATOM_S6_EXT_DESKTOP_MODEb0 0x20
#define ATOM_S6_LID_STATEb0 0x40
#define ATOM_S6_DOCK_STATEb0 0x80
#define ATOM_S6_CRITICAL_STATEb1 0x01
#define ATOM_S6_HW_I2C_BUSY_STATEb1 0x02
#define ATOM_S6_THERMAL_STATE_CHANGEb1 0x04
#define ATOM_S6_INTERRUPT_SET_BY_BIOSb1 0x08
#define ATOM_S6_REQ_LCD_EXPANSION_FULLb1 0x10
#define ATOM_S6_REQ_LCD_EXPANSION_ASPEC_RATIOb1 0x20
#define ATOM_S6_ACC_REQ_CRT1b2 0x01
#define ATOM_S6_ACC_REQ_LCD1b2 0x02
#define ATOM_S6_ACC_REQ_TV1b2 0x04
#define ATOM_S6_ACC_REQ_DFP1b2 0x08
#define ATOM_S6_ACC_REQ_CRT2b2 0x10
#define ATOM_S6_ACC_REQ_LCD2b2 0x20
#define ATOM_S6_ACC_REQ_DFP6b2 0x40
#define ATOM_S6_ACC_REQ_DFP2b2 0x80
#define ATOM_S6_ACC_REQ_CVb3 0x01
#define ATOM_S6_ACC_REQ_DFP3b3 0x02
#define ATOM_S6_ACC_REQ_DFP4b3 0x04
#define ATOM_S6_ACC_REQ_DFP5b3 0x08
#define ATOM_S6_ACC_REQ_DEVICEw1 ATOM_S5_DOS_REQ_DEVICEw0
#define ATOM_S6_SYSTEM_POWER_MODE_CHANGEb3 0x10
#define ATOM_S6_ACC_BLOCK_DISPLAY_SWITCHb3 0x20
#define ATOM_S6_VRI_BRIGHTNESS_CHANGEb3 0x40
#define ATOM_S6_CONFIG_DISPLAY_CHANGEb3 0x80
#define ATOM_S6_DEVICE_CHANGE_SHIFT 0
#define ATOM_S6_SCALER_CHANGE_SHIFT 1
#define ATOM_S6_LID_CHANGE_SHIFT 2
#define ATOM_S6_DOCKING_CHANGE_SHIFT 3
#define ATOM_S6_ACC_MODE_SHIFT 4
#define ATOM_S6_EXT_DESKTOP_MODE_SHIFT 5
#define ATOM_S6_LID_STATE_SHIFT 6
#define ATOM_S6_DOCK_STATE_SHIFT 7
#define ATOM_S6_CRITICAL_STATE_SHIFT 8
#define ATOM_S6_HW_I2C_BUSY_STATE_SHIFT 9
#define ATOM_S6_THERMAL_STATE_CHANGE_SHIFT 10
#define ATOM_S6_INTERRUPT_SET_BY_BIOS_SHIFT 11
#define ATOM_S6_REQ_SCALER_SHIFT 12
#define ATOM_S6_REQ_SCALER_ARATIO_SHIFT 13
#define ATOM_S6_DISPLAY_STATE_CHANGE_SHIFT 14
#define ATOM_S6_I2C_STATE_CHANGE_SHIFT 15
#define ATOM_S6_SYSTEM_POWER_MODE_CHANGE_SHIFT 28
#define ATOM_S6_ACC_BLOCK_DISPLAY_SWITCH_SHIFT 29
#define ATOM_S6_VRI_BRIGHTNESS_CHANGE_SHIFT 30
#define ATOM_S6_CONFIG_DISPLAY_CHANGE_SHIFT 31
// BIOS_7_SCRATCH Definition, BIOS_7_SCRATCH is used by Firmware only !!!!
#define ATOM_S7_DOS_MODE_TYPEb0 0x03
#define ATOM_S7_DOS_MODE_VGAb0 0x00
#define ATOM_S7_DOS_MODE_VESAb0 0x01
#define ATOM_S7_DOS_MODE_EXTb0 0x02
#define ATOM_S7_DOS_MODE_PIXEL_DEPTHb0 0x0C
#define ATOM_S7_DOS_MODE_PIXEL_FORMATb0 0xF0
#define ATOM_S7_DOS_8BIT_DAC_ENb1 0x01
#define ATOM_S7_DOS_MODE_NUMBERw1 0x0FFFF
#define ATOM_S7_DOS_8BIT_DAC_EN_SHIFT 8
// BIOS_8_SCRATCH Definition
#define ATOM_S8_I2C_CHANNEL_BUSY_MASK 0x00000FFFF
#define ATOM_S8_I2C_HW_ENGINE_BUSY_MASK 0x0FFFF0000
#define ATOM_S8_I2C_CHANNEL_BUSY_SHIFT 0
#define ATOM_S8_I2C_ENGINE_BUSY_SHIFT 16
// BIOS_9_SCRATCH Definition
#ifndef ATOM_S9_I2C_CHANNEL_COMPLETED_MASK
#define ATOM_S9_I2C_CHANNEL_COMPLETED_MASK 0x0000FFFF
#endif
#ifndef ATOM_S9_I2C_CHANNEL_ABORTED_MASK
#define ATOM_S9_I2C_CHANNEL_ABORTED_MASK 0xFFFF0000
#endif
#ifndef ATOM_S9_I2C_CHANNEL_COMPLETED_SHIFT
#define ATOM_S9_I2C_CHANNEL_COMPLETED_SHIFT 0
#endif
#ifndef ATOM_S9_I2C_CHANNEL_ABORTED_SHIFT
#define ATOM_S9_I2C_CHANNEL_ABORTED_SHIFT 16
#endif
#define ATOM_FLAG_SET 0x20
#define ATOM_FLAG_CLEAR 0
#define CLEAR_ATOM_S6_ACC_MODE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_ACC_MODE_SHIFT | ATOM_FLAG_CLEAR)
#define SET_ATOM_S6_DEVICE_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_DEVICE_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_VRI_BRIGHTNESS_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_VRI_BRIGHTNESS_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_SCALER_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_SCALER_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_LID_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_LID_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_LID_STATE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_LID_STATE_SHIFT | ATOM_FLAG_SET)
#define CLEAR_ATOM_S6_LID_STATE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_LID_STATE_SHIFT | ATOM_FLAG_CLEAR)
#define SET_ATOM_S6_DOCK_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_DOCKING_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_DOCK_STATE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_DOCK_STATE_SHIFT | ATOM_FLAG_SET)
#define CLEAR_ATOM_S6_DOCK_STATE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_DOCK_STATE_SHIFT | ATOM_FLAG_CLEAR)
#define SET_ATOM_S6_THERMAL_STATE_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_THERMAL_STATE_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_SYSTEM_POWER_MODE_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_SYSTEM_POWER_MODE_CHANGE_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_INTERRUPT_SET_BY_BIOS ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_INTERRUPT_SET_BY_BIOS_SHIFT | ATOM_FLAG_SET)
#define SET_ATOM_S6_CRITICAL_STATE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_CRITICAL_STATE_SHIFT | ATOM_FLAG_SET)
#define CLEAR_ATOM_S6_CRITICAL_STATE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_CRITICAL_STATE_SHIFT | ATOM_FLAG_CLEAR)
#define SET_ATOM_S6_REQ_SCALER ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_REQ_SCALER_SHIFT | ATOM_FLAG_SET)
#define CLEAR_ATOM_S6_REQ_SCALER ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_REQ_SCALER_SHIFT | ATOM_FLAG_CLEAR )
#define SET_ATOM_S6_REQ_SCALER_ARATIO ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_REQ_SCALER_ARATIO_SHIFT | ATOM_FLAG_SET )
#define CLEAR_ATOM_S6_REQ_SCALER_ARATIO ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_REQ_SCALER_ARATIO_SHIFT | ATOM_FLAG_CLEAR )
#define SET_ATOM_S6_I2C_STATE_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_I2C_STATE_CHANGE_SHIFT | ATOM_FLAG_SET )
#define SET_ATOM_S6_DISPLAY_STATE_CHANGE ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_DISPLAY_STATE_CHANGE_SHIFT | ATOM_FLAG_SET )
#define SET_ATOM_S6_DEVICE_RECONFIG ((ATOM_ACC_CHANGE_INFO_DEF << 8 )|ATOM_S6_CONFIG_DISPLAY_CHANGE_SHIFT | ATOM_FLAG_SET)
#define CLEAR_ATOM_S0_LCD1 ((ATOM_DEVICE_CONNECT_INFO_DEF << 8 )| ATOM_S0_LCD1_SHIFT | ATOM_FLAG_CLEAR )
#define SET_ATOM_S7_DOS_8BIT_DAC_EN ((ATOM_DOS_MODE_INFO_DEF << 8 )|ATOM_S7_DOS_8BIT_DAC_EN_SHIFT | ATOM_FLAG_SET )
#define CLEAR_ATOM_S7_DOS_8BIT_DAC_EN ((ATOM_DOS_MODE_INFO_DEF << 8 )|ATOM_S7_DOS_8BIT_DAC_EN_SHIFT | ATOM_FLAG_CLEAR )
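// Illustrative decode (not part of the original header): each SET_*/CLEAR_*
// value above packs three fields into one word: the scratch-register selector
// (ATOM_*_INFO_DEF) in bits [15:8], the bit position within that register in
// bits [4:0], and ATOM_FLAG_SET/ATOM_FLAG_CLEAR in bit [5]. A minimal
// unpacking sketch; the helper names are hypothetical.
static inline UCHAR atom_flag_reg_index(USHORT op) { return (UCHAR)(op >> 8); }          // ATOM_*_INFO_DEF
static inline UCHAR atom_flag_bit_shift(USHORT op) { return (UCHAR)(op & 0x1F); }        // bit position
static inline int   atom_flag_is_set(USHORT op)    { return (op & ATOM_FLAG_SET) != 0; } // set vs clear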
/****************************************************************************/
//Portion II: Definitions only used in Driver
/****************************************************************************/
// Macros used by driver
#ifdef __cplusplus
#define GetIndexIntoMasterTable(MasterOrData, FieldName) ((reinterpret_cast<char*>(&(static_cast<ATOM_MASTER_LIST_OF_##MasterOrData##_TABLES*>(0))->FieldName)-static_cast<char*>(0))/sizeof(USHORT))
#define GET_COMMAND_TABLE_COMMANDSET_REVISION(TABLE_HEADER_OFFSET) (((static_cast<ATOM_COMMON_TABLE_HEADER*>(TABLE_HEADER_OFFSET))->ucTableFormatRevision )&0x3F)
#define GET_COMMAND_TABLE_PARAMETER_REVISION(TABLE_HEADER_OFFSET) (((static_cast<ATOM_COMMON_TABLE_HEADER*>(TABLE_HEADER_OFFSET))->ucTableContentRevision)&0x3F)
#else // not __cplusplus
#define GetIndexIntoMasterTable(MasterOrData, FieldName) (((char*)(&((ATOM_MASTER_LIST_OF_##MasterOrData##_TABLES*)0)->FieldName)-(char*)0)/sizeof(USHORT))
#define GET_COMMAND_TABLE_COMMANDSET_REVISION(TABLE_HEADER_OFFSET) ((((ATOM_COMMON_TABLE_HEADER*)TABLE_HEADER_OFFSET)->ucTableFormatRevision)&0x3F)
#define GET_COMMAND_TABLE_PARAMETER_REVISION(TABLE_HEADER_OFFSET) ((((ATOM_COMMON_TABLE_HEADER*)TABLE_HEADER_OFFSET)->ucTableContentRevision)&0x3F)
#endif // __cplusplus
#define GET_DATA_TABLE_MAJOR_REVISION GET_COMMAND_TABLE_COMMANDSET_REVISION
#define GET_DATA_TABLE_MINOR_REVISION GET_COMMAND_TABLE_PARAMETER_REVISION
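// Illustrative usage (not part of the original header): GetIndexIntoMasterTable
// divides the byte offset of a field inside the master table list by
// sizeof(USHORT), yielding that table's index into the array of USHORT
// offsets. A minimal sketch, assuming the ATOM_MASTER_LIST_OF_DATA_TABLES
// declared earlier in this header has a FirmwareInfo member:
//
//   int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
//   // index now selects the FirmwareInfo entry in the data-table offset list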
/****************************************************************************/
//Portion III: Definitions only used in VBIOS
/****************************************************************************/
#define ATOM_DAC_SRC 0x80
#define ATOM_SRC_DAC1 0
#define ATOM_SRC_DAC2 0x80
typedef struct _MEMORY_PLLINIT_PARAMETERS
{
ULONG ulTargetMemoryClock; //In 10Khz unit
UCHAR ucAction; //not defined yet
UCHAR ucFbDiv_Hi; //Fbdiv Hi byte
UCHAR ucFbDiv; //FB value
UCHAR ucPostDiv; //Post div
}MEMORY_PLLINIT_PARAMETERS;
#define MEMORY_PLLINIT_PS_ALLOCATION MEMORY_PLLINIT_PARAMETERS
#define GPIO_PIN_WRITE 0x01
#define GPIO_PIN_READ 0x00
typedef struct _GPIO_PIN_CONTROL_PARAMETERS
{
UCHAR ucGPIO_ID; //return value, read from GPIO pins
UCHAR ucGPIOBitShift; //define which bit in ucGPIOBitVal needs to be updated
UCHAR ucGPIOBitVal; //Set/Reset corresponding bit defined in ucGPIOBitMask
UCHAR ucAction; //=GPIO_PIN_WRITE: Write; =GPIO_PIN_READ: Read
}GPIO_PIN_CONTROL_PARAMETERS;
typedef struct _ENABLE_SCALER_PARAMETERS
{
UCHAR ucScaler; // ATOM_SCALER1, ATOM_SCALER2
UCHAR ucEnable; // ATOM_SCALER_DISABLE or ATOM_SCALER_CENTER or ATOM_SCALER_EXPANSION
UCHAR ucTVStandard; //
UCHAR ucPadding[1];
}ENABLE_SCALER_PARAMETERS;
#define ENABLE_SCALER_PS_ALLOCATION ENABLE_SCALER_PARAMETERS
//ucEnable:
#define SCALER_BYPASS_AUTO_CENTER_NO_REPLICATION 0
#define SCALER_BYPASS_AUTO_CENTER_AUTO_REPLICATION 1
#define SCALER_ENABLE_2TAP_ALPHA_MODE 2
#define SCALER_ENABLE_MULTITAP_MODE 3
typedef struct _ENABLE_HARDWARE_ICON_CURSOR_PARAMETERS
{
ULONG usHWIconHorzVertPosn; // Hardware Icon Horizontal and Vertical position
UCHAR ucHWIconVertOffset; // Hardware Icon Vertical offset
UCHAR ucHWIconHorzOffset; // Hardware Icon Horizontal offset
UCHAR ucSelection; // ATOM_CURSOR1 or ATOM_ICON1 or ATOM_CURSOR2 or ATOM_ICON2
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
}ENABLE_HARDWARE_ICON_CURSOR_PARAMETERS;
typedef struct _ENABLE_HARDWARE_ICON_CURSOR_PS_ALLOCATION
{
ENABLE_HARDWARE_ICON_CURSOR_PARAMETERS sEnableIcon;
ENABLE_CRTC_PARAMETERS sReserved;
}ENABLE_HARDWARE_ICON_CURSOR_PS_ALLOCATION;
typedef struct _ENABLE_GRAPH_SURFACE_PARAMETERS
{
USHORT usHight; // Image Height
USHORT usWidth; // Image Width
UCHAR ucSurface; // Surface 1 or 2
UCHAR ucPadding[3];
}ENABLE_GRAPH_SURFACE_PARAMETERS;
typedef struct _ENABLE_GRAPH_SURFACE_PARAMETERS_V1_2
{
USHORT usHight; // Image Height
USHORT usWidth; // Image Width
UCHAR ucSurface; // Surface 1 or 2
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucPadding[2];
}ENABLE_GRAPH_SURFACE_PARAMETERS_V1_2;
typedef struct _ENABLE_GRAPH_SURFACE_PARAMETERS_V1_3
{
USHORT usHight; // Image Height
USHORT usWidth; // Image Width
UCHAR ucSurface; // Surface 1 or 2
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
USHORT usDeviceId; // Active Device Id for this surface. If no device, set to 0.
}ENABLE_GRAPH_SURFACE_PARAMETERS_V1_3;
typedef struct _ENABLE_GRAPH_SURFACE_PARAMETERS_V1_4
{
USHORT usHight; // Image Height
USHORT usWidth; // Image Width
USHORT usGraphPitch;
UCHAR ucColorDepth;
UCHAR ucPixelFormat;
UCHAR ucSurface; // Surface 1 or 2
UCHAR ucEnable; // ATOM_ENABLE or ATOM_DISABLE
UCHAR ucModeType;
UCHAR ucReserved;
}ENABLE_GRAPH_SURFACE_PARAMETERS_V1_4;
// ucEnable
#define ATOM_GRAPH_CONTROL_SET_PITCH 0x0f
#define ATOM_GRAPH_CONTROL_SET_DISP_START 0x10
typedef struct _ENABLE_GRAPH_SURFACE_PS_ALLOCATION
{
ENABLE_GRAPH_SURFACE_PARAMETERS sSetSurface;
ENABLE_YUV_PS_ALLOCATION sReserved; // Don't set this one
}ENABLE_GRAPH_SURFACE_PS_ALLOCATION;
typedef struct _MEMORY_CLEAN_UP_PARAMETERS
{
USHORT usMemoryStart; //on an 8KB boundary, offset from memory base address
USHORT usMemorySize; //aligned to 8KB blocks
}MEMORY_CLEAN_UP_PARAMETERS;
#define MEMORY_CLEAN_UP_PS_ALLOCATION MEMORY_CLEAN_UP_PARAMETERS
typedef struct _GET_DISPLAY_SURFACE_SIZE_PARAMETERS
{
USHORT usX_Size; //When used as an input parameter, usX_Size indicates which CRTC
USHORT usY_Size;
}GET_DISPLAY_SURFACE_SIZE_PARAMETERS;
typedef struct _GET_DISPLAY_SURFACE_SIZE_PARAMETERS_V2
{
union{
USHORT usX_Size; //When used as an input parameter, usX_Size indicates which CRTC
USHORT usSurface;
};
USHORT usY_Size;
USHORT usDispXStart;
USHORT usDispYStart;
}GET_DISPLAY_SURFACE_SIZE_PARAMETERS_V2;
typedef struct _PALETTE_DATA_CONTROL_PARAMETERS_V3
{
UCHAR ucLutId;
UCHAR ucAction;
USHORT usLutStartIndex;
USHORT usLutLength;
USHORT usLutOffsetInVram;
}PALETTE_DATA_CONTROL_PARAMETERS_V3;
// ucAction:
#define PALETTE_DATA_AUTO_FILL 1
#define PALETTE_DATA_READ 2
#define PALETTE_DATA_WRITE 3
typedef struct _INTERRUPT_SERVICE_PARAMETERS_V2
{
UCHAR ucInterruptId;
UCHAR ucServiceId;
UCHAR ucStatus;
UCHAR ucReserved;
}INTERRUPT_SERVICE_PARAMETER_V2;
// ucInterruptId
#define HDP1_INTERRUPT_ID 1
#define HDP2_INTERRUPT_ID 2
#define HDP3_INTERRUPT_ID 3
#define HDP4_INTERRUPT_ID 4
#define HDP5_INTERRUPT_ID 5
#define HDP6_INTERRUPT_ID 6
#define SW_INTERRUPT_ID 11
// ucServiceId
#define INTERRUPT_SERVICE_GEN_SW_INT 1
#define INTERRUPT_SERVICE_GET_STATUS 2
// ucStatus
#define INTERRUPT_STATUS__INT_TRIGGER 1
#define INTERRUPT_STATUS__HPD_HIGH 2
typedef struct _INDIRECT_IO_ACCESS
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR IOAccessSequence[256];
} INDIRECT_IO_ACCESS;
#define INDIRECT_READ 0x00
#define INDIRECT_WRITE 0x80
#define INDIRECT_IO_MM 0
#define INDIRECT_IO_PLL 1
#define INDIRECT_IO_MC 2
#define INDIRECT_IO_PCIE 3
#define INDIRECT_IO_PCIEP 4
#define INDIRECT_IO_NBMISC 5
#define INDIRECT_IO_PLL_READ (INDIRECT_IO_PLL | INDIRECT_READ)
#define INDIRECT_IO_PLL_WRITE (INDIRECT_IO_PLL | INDIRECT_WRITE)
#define INDIRECT_IO_MC_READ (INDIRECT_IO_MC | INDIRECT_READ)
#define INDIRECT_IO_MC_WRITE (INDIRECT_IO_MC | INDIRECT_WRITE)
#define INDIRECT_IO_PCIE_READ (INDIRECT_IO_PCIE | INDIRECT_READ)
#define INDIRECT_IO_PCIE_WRITE (INDIRECT_IO_PCIE | INDIRECT_WRITE)
#define INDIRECT_IO_PCIEP_READ (INDIRECT_IO_PCIEP | INDIRECT_READ)
#define INDIRECT_IO_PCIEP_WRITE (INDIRECT_IO_PCIEP | INDIRECT_WRITE)
#define INDIRECT_IO_NBMISC_READ (INDIRECT_IO_NBMISC | INDIRECT_READ)
#define INDIRECT_IO_NBMISC_WRITE (INDIRECT_IO_NBMISC | INDIRECT_WRITE)
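// Illustrative note (not part of the original header): each composite opcode
// above ORs a port selector (low bits) with the INDIRECT_READ/INDIRECT_WRITE
// direction flag in bit 7, e.g.
//   INDIRECT_IO_MC_WRITE  == (INDIRECT_IO_MC | INDIRECT_WRITE)  == 0x82
//   INDIRECT_IO_PCIE_READ == (INDIRECT_IO_PCIE | INDIRECT_READ) == 0x03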
typedef struct _ATOM_OEM_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_I2C_ID_CONFIG_ACCESS sucI2cId;
}ATOM_OEM_INFO;
typedef struct _ATOM_TV_MODE
{
UCHAR ucVMode_Num; //Video mode number
UCHAR ucTV_Mode_Num; //Internal TV mode number
}ATOM_TV_MODE;
typedef struct _ATOM_BIOS_INT_TVSTD_MODE
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usTV_Mode_LUT_Offset; // Pointer to standard to internal number conversion table
USHORT usTV_FIFO_Offset; // Pointer to FIFO entry table
USHORT usNTSC_Tbl_Offset; // Pointer to SDTV_Mode_NTSC table
USHORT usPAL_Tbl_Offset; // Pointer to SDTV_Mode_PAL table
USHORT usCV_Tbl_Offset; // Pointer to CV mode table
}ATOM_BIOS_INT_TVSTD_MODE;
typedef struct _ATOM_TV_MODE_SCALER_PTR
{
USHORT ucFilter0_Offset; //Pointer to filter format 0 coefficients
USHORT usFilter1_Offset; //Pointer to filter format 1 coefficients
UCHAR ucTV_Mode_Num;
}ATOM_TV_MODE_SCALER_PTR;
typedef struct _ATOM_STANDARD_VESA_TIMING
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_DTD_FORMAT aModeTimings[16]; // 16 is not the real array number, just for initial allocation
}ATOM_STANDARD_VESA_TIMING;
typedef struct _ATOM_STD_FORMAT
{
USHORT usSTD_HDisp;
USHORT usSTD_VDisp;
USHORT usSTD_RefreshRate;
USHORT usReserved;
}ATOM_STD_FORMAT;
typedef struct _ATOM_VESA_TO_EXTENDED_MODE
{
USHORT usVESA_ModeNumber;
USHORT usExtendedModeNumber;
}ATOM_VESA_TO_EXTENDED_MODE;
typedef struct _ATOM_VESA_TO_INTENAL_MODE_LUT
{
ATOM_COMMON_TABLE_HEADER sHeader;
ATOM_VESA_TO_EXTENDED_MODE asVESA_ToExtendedModeInfo[76];
}ATOM_VESA_TO_INTENAL_MODE_LUT;
/*************** ATOM Memory Related Data Structure ***********************/
typedef struct _ATOM_MEMORY_VENDOR_BLOCK{
UCHAR ucMemoryType;
UCHAR ucMemoryVendor;
UCHAR ucAdjMCId;
UCHAR ucDynClkId;
ULONG ulDllResetClkRange;
}ATOM_MEMORY_VENDOR_BLOCK;
typedef struct _ATOM_MEMORY_SETTING_ID_CONFIG{
#if ATOM_BIG_ENDIAN
ULONG ucMemBlkId:8;
ULONG ulMemClockRange:24;
#else
ULONG ulMemClockRange:24;
ULONG ucMemBlkId:8;
#endif
}ATOM_MEMORY_SETTING_ID_CONFIG;
typedef union _ATOM_MEMORY_SETTING_ID_CONFIG_ACCESS
{
ATOM_MEMORY_SETTING_ID_CONFIG slAccess;
ULONG ulAccess;
}ATOM_MEMORY_SETTING_ID_CONFIG_ACCESS;
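// Illustrative accessor (not part of the original header): the union above
// lets SW read the packed memory-setting ID either through the bitfields
// (slAccess) or as one raw ULONG (ulAccess); the ATOM_BIG_ENDIAN-conditional
// field order keeps the raw 32-bit layout the same on both endiannesses.
// A minimal sketch; the helper name is hypothetical.
static inline UCHAR mem_setting_block_id(ATOM_MEMORY_SETTING_ID_CONFIG_ACCESS id)
{
  return (UCHAR)id.slAccess.ucMemBlkId; // bits [31:24] of the raw value
}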
typedef struct _ATOM_MEMORY_SETTING_DATA_BLOCK{
ATOM_MEMORY_SETTING_ID_CONFIG_ACCESS ulMemoryID;
ULONG aulMemData[1];
}ATOM_MEMORY_SETTING_DATA_BLOCK;
typedef struct _ATOM_INIT_REG_INDEX_FORMAT{
USHORT usRegIndex; // MC register index
UCHAR ucPreRegDataLength; // offset in ATOM_INIT_REG_DATA_BLOCK.saRegDataBuf
}ATOM_INIT_REG_INDEX_FORMAT;
typedef struct _ATOM_INIT_REG_BLOCK{
USHORT usRegIndexTblSize; //size of asRegIndexBuf
USHORT usRegDataBlkSize; //size of ATOM_MEMORY_SETTING_DATA_BLOCK
ATOM_INIT_REG_INDEX_FORMAT asRegIndexBuf[1];
ATOM_MEMORY_SETTING_DATA_BLOCK asRegDataBuf[1];
}ATOM_INIT_REG_BLOCK;
#define END_OF_REG_INDEX_BLOCK 0x0ffff
#define END_OF_REG_DATA_BLOCK 0x00000000
#define ATOM_INIT_REG_MASK_FLAG 0x80 //Not used in BIOS
#define CLOCK_RANGE_HIGHEST 0x00ffffff
#define VALUE_DWORD (sizeof(ULONG))
#define VALUE_SAME_AS_ABOVE 0
#define VALUE_MASK_DWORD 0x84
#define INDEX_ACCESS_RANGE_BEGIN (VALUE_DWORD + 1)
#define INDEX_ACCESS_RANGE_END (INDEX_ACCESS_RANGE_BEGIN + 1)
#define VALUE_INDEX_ACCESS_SINGLE (INDEX_ACCESS_RANGE_END + 1)
//#define ACCESS_MCIODEBUGIND 0x40 //defined in BIOS code
#define ACCESS_PLACEHOLDER 0x80
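// Illustrative walker (not part of the original header): per
// END_OF_REG_INDEX_BLOCK above, the asRegIndexBuf[] array in
// ATOM_INIT_REG_BLOCK is terminated by a 0xffff register index rather than a
// stored count. A minimal sketch counting the live entries; the helper name
// is hypothetical.
static inline int count_reg_index_entries(const ATOM_INIT_REG_INDEX_FORMAT *buf)
{
  int n = 0;
  while (buf[n].usRegIndex != END_OF_REG_INDEX_BLOCK)
    n++;
  return n;
}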
typedef struct _ATOM_MC_INIT_PARAM_TABLE
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usAdjustARB_SEQDataOffset;
USHORT usMCInitMemTypeTblOffset;
USHORT usMCInitCommonTblOffset;
USHORT usMCInitPowerDownTblOffset;
ULONG ulARB_SEQDataBuf[32];
ATOM_INIT_REG_BLOCK asMCInitMemType;
ATOM_INIT_REG_BLOCK asMCInitCommon;
}ATOM_MC_INIT_PARAM_TABLE;
#define _4Mx16 0x2
#define _4Mx32 0x3
#define _8Mx16 0x12
#define _8Mx32 0x13
#define _16Mx16 0x22
#define _16Mx32 0x23
#define _32Mx16 0x32
#define _32Mx32 0x33
#define _64Mx8 0x41
#define _64Mx16 0x42
#define _64Mx32 0x43
#define _128Mx8 0x51
#define _128Mx16 0x52
#define _256Mx8 0x61
#define _256Mx16 0x62
#define SAMSUNG 0x1
#define INFINEON 0x2
#define ELPIDA 0x3
#define ETRON 0x4
#define NANYA 0x5
#define HYNIX 0x6
#define MOSEL 0x7
#define WINBOND 0x8
#define ESMT 0x9
#define MICRON 0xF
#define QIMONDA INFINEON
#define PROMOS MOSEL
#define KRETON INFINEON
#define ELIXIR NANYA
/////////////Support for GDDR5 MC uCode to reside in upper 64K of ROM/////////////
#define UCODE_ROM_START_ADDRESS 0x1b800
#define UCODE_SIGNATURE 0x4375434d // 'MCuC' - MC uCode
//uCode block header for reference
typedef struct _MCuCodeHeader
{
ULONG ulSignature;
UCHAR ucRevision;
UCHAR ucChecksum;
UCHAR ucReserved1;
UCHAR ucReserved2;
USHORT usParametersLength;
USHORT usUCodeLength;
USHORT usReserved1;
USHORT usReserved2;
} MCuCodeHeader;
//////////////////////////////////////////////////////////////////////////////////
#define ATOM_MAX_NUMBER_OF_VRAM_MODULE 16
#define ATOM_VRAM_MODULE_MEMORY_VENDOR_ID_MASK 0xF
typedef struct _ATOM_VRAM_MODULE_V1
{
ULONG ulReserved;
USHORT usEMRSValue;
USHORT usMRSValue;
USHORT usReserved;
UCHAR ucExtMemoryID; // An external indicator (by hardcode, callback or pin) to tell which memory module is currently present
UCHAR ucMemoryType; // [7:4]=0x1:DDR1;=0x2:DDR2;=0x3:DDR3;=0x4:DDR4;[3:0] reserved;
UCHAR ucMemoryVenderID; // Predefined, never changes across designs or memory type/vendor
UCHAR ucMemoryDeviceCfg; // [7:4]=0x0:4M;=0x1:8M;=0x2:16M;0x3:32M....[3:0]=0x0:x4;=0x1:x8;=0x2:x16;=0x3:x32...
UCHAR ucRow; // Number of Rows, as a power of 2;
UCHAR ucColumn; // Number of Columns, as a power of 2;
UCHAR ucBank; // Number of Banks;
UCHAR ucRank; // Number of Ranks, as a power of 2
UCHAR ucChannelNum; // Number of channels;
UCHAR ucChannelConfig; // [3:0]=Indication of channel combination; [7:4]=Channel bit width, as a power of 2
UCHAR ucDefaultMVDDQ_ID; // Default MVDDQ setting for this memory block, ID linking to MVDDQ info table to find real set-up data;
UCHAR ucDefaultMVDDC_ID; // Default MVDDC setting for this memory block, ID linking to MVDDC info table to find real set-up data;
UCHAR ucReserved[2];
}ATOM_VRAM_MODULE_V1;
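// Illustrative decode (not part of the original header): per the
// ucMemoryDeviceCfg comment above, [7:4] encodes the device density
// (0x0:4M, 0x1:8M, 0x2:16M, ...) and [3:0] the device width (0x0:x4, 0x1:x8,
// 0x2:x16, 0x3:x32, ...), matching the _4Mx16-style codes defined earlier in
// this header. Helper names are hypothetical.
static inline UCHAR vram_device_density(UCHAR cfg) { return (UCHAR)(cfg >> 4); } // [7:4]
static inline UCHAR vram_device_width(UCHAR cfg)   { return (UCHAR)(cfg & 0xF); } // [3:0]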
typedef struct _ATOM_VRAM_MODULE_V2
{
ULONG ulReserved;
ULONG ulFlags; // To enable/disable functionalities based on memory type
ULONG ulEngineClock; // Override of default engine clock for particular memory type
ULONG ulMemoryClock; // Override of default memory clock for particular memory type
USHORT usEMRS2Value; // EMRS2 Value is used for GDDR2 and GDDR4 memory type
USHORT usEMRS3Value; // EMRS3 Value is used for GDDR2 and GDDR4 memory type
USHORT usEMRSValue;
USHORT usMRSValue;
USHORT usReserved;
UCHAR ucExtMemoryID; // An external indicator (by hardcode, callback or pin) to tell which memory module is currently present
UCHAR ucMemoryType; // [7:4]=0x1:DDR1;=0x2:DDR2;=0x3:DDR3;=0x4:DDR4;[3:0] - must not be used for now;
UCHAR ucMemoryVenderID; // Predefined, never changes across designs or memory type/vendor. If not predefined, the vendor detection table gets executed
UCHAR ucMemoryDeviceCfg; // [7:4]=0x0:4M;=0x1:8M;=0x2:16M;0x3:32M....[3:0]=0x0:x4;=0x1:x8;=0x2:x16;=0x3:x32...
UCHAR ucRow; // Number of Rows, as a power of 2;
UCHAR ucColumn; // Number of Columns, as a power of 2;
UCHAR ucBank; // Number of Banks;
UCHAR ucRank; // Number of Ranks, as a power of 2
UCHAR ucChannelNum; // Number of channels;
UCHAR ucChannelConfig; // [3:0]=Indication of channel combination; [7:4]=Channel bit width, as a power of 2
UCHAR ucDefaultMVDDQ_ID; // Default MVDDQ setting for this memory block, ID linking to MVDDQ info table to find real set-up data;
UCHAR ucDefaultMVDDC_ID; // Default MVDDC setting for this memory block, ID linking to MVDDC info table to find real set-up data;
UCHAR ucRefreshRateFactor;
UCHAR ucReserved[3];
}ATOM_VRAM_MODULE_V2;
typedef struct _ATOM_MEMORY_TIMING_FORMAT
{
ULONG ulClkRange; // memory clock in 10kHz unit, when target memory clock is below this clock, use this memory timing
union{
USHORT usMRS; // mode register
USHORT usDDR3_MR0;
};
union{
USHORT usEMRS; // extended mode register
USHORT usDDR3_MR1;
};
UCHAR ucCL; // CAS latency
UCHAR ucWL; // WRITE Latency
UCHAR uctRAS; // tRAS
UCHAR uctRC; // tRC
UCHAR uctRFC; // tRFC
UCHAR uctRCDR; // tRCDR
UCHAR uctRCDW; // tRCDW
UCHAR uctRP; // tRP
UCHAR uctRRD; // tRRD
UCHAR uctWR; // tWR
UCHAR uctWTR; // tWTR
UCHAR uctPDIX; // tPDIX
UCHAR uctFAW; // tFAW
UCHAR uctAOND; // tAOND
union
{
struct {
UCHAR ucflag; // flag to control memory timing calculation. bit0= control EMRS2 Infineon
UCHAR ucReserved;
};
USHORT usDDR3_MR2;
};
}ATOM_MEMORY_TIMING_FORMAT;
typedef struct _ATOM_MEMORY_TIMING_FORMAT_V1
{
ULONG ulClkRange; // memory clock in 10kHz unit, when target memory clock is below this clock, use this memory timing
USHORT usMRS; // mode register
USHORT usEMRS; // extended mode register
UCHAR ucCL; // CAS latency
UCHAR ucWL; // WRITE Latency
UCHAR uctRAS; // tRAS
UCHAR uctRC; // tRC
UCHAR uctRFC; // tRFC
UCHAR uctRCDR; // tRCDR
UCHAR uctRCDW; // tRCDW
UCHAR uctRP; // tRP
UCHAR uctRRD; // tRRD
UCHAR uctWR; // tWR
UCHAR uctWTR; // tWTR
UCHAR uctPDIX; // tPDIX
UCHAR uctFAW; // tFAW
UCHAR uctAOND; // tAOND
UCHAR ucflag; // flag to control memory timing calculation. bit0= control EMRS2 Infineon
////////////////////////////////////GDDR parameters///////////////////////////////////
UCHAR uctCCDL; //
UCHAR uctCRCRL; //
UCHAR uctCRCWL; //
UCHAR uctCKE; //
UCHAR uctCKRSE; //
UCHAR uctCKRSX; //
UCHAR uctFAW32; //
UCHAR ucMR5lo; //
UCHAR ucMR5hi; //
UCHAR ucTerminator;
}ATOM_MEMORY_TIMING_FORMAT_V1;
typedef struct _ATOM_MEMORY_TIMING_FORMAT_V2
{
ULONG ulClkRange; // memory clock in 10kHz unit, when target memory clock is below this clock, use this memory timing
USHORT usMRS; // mode register
USHORT usEMRS; // extended mode register
UCHAR ucCL; // CAS latency
UCHAR ucWL; // WRITE Latency
UCHAR uctRAS; // tRAS
UCHAR uctRC; // tRC
UCHAR uctRFC; // tRFC
UCHAR uctRCDR; // tRCDR
UCHAR uctRCDW; // tRCDW
UCHAR uctRP; // tRP
UCHAR uctRRD; // tRRD
UCHAR uctWR; // tWR
UCHAR uctWTR; // tWTR
UCHAR uctPDIX; // tPDIX
UCHAR uctFAW; // tFAW
UCHAR uctAOND; // tAOND
UCHAR ucflag; // flag to control memory timing calculation. bit0= control EMRS2 Infineon
////////////////////////////////////GDDR parameters///////////////////////////////////
UCHAR uctCCDL; //
UCHAR uctCRCRL; //
UCHAR uctCRCWL; //
UCHAR uctCKE; //
UCHAR uctCKRSE; //
UCHAR uctCKRSX; //
UCHAR uctFAW32; //
UCHAR ucMR4lo; //
UCHAR ucMR4hi; //
UCHAR ucMR5lo; //
UCHAR ucMR5hi; //
UCHAR ucTerminator;
UCHAR ucReserved;
}ATOM_MEMORY_TIMING_FORMAT_V2;
typedef struct _ATOM_MEMORY_FORMAT
{
ULONG ulDllDisClock; // memory DLL will be disabled when the target memory clock is below this clock
union{
USHORT usEMRS2Value; // EMRS2 Value is used for GDDR2 and GDDR4 memory type
USHORT usDDR3_Reserved; // Not used for DDR3 memory
};
union{
USHORT usEMRS3Value; // EMRS3 Value is used for GDDR2 and GDDR4 memory type
USHORT usDDR3_MR3; // Used for DDR3 memory
};
UCHAR ucMemoryType; // [7:4]=0x1:DDR1;=0x2:DDR2;=0x3:DDR3;=0x4:DDR4;[3:0] - must not be used for now;
UCHAR ucMemoryVenderID; // Predefined; never changes across designs or memory type/vendor. If not predefined, the vendor detection table gets executed
UCHAR ucRow; // Number of rows, in power of 2;
UCHAR ucColumn; // Number of columns, in power of 2;
UCHAR ucBank; // Number of banks;
UCHAR ucRank; // Number of ranks, in power of 2
UCHAR ucBurstSize; // burst size, 0= burst size=4 1= burst size=8
UCHAR ucDllDisBit; // position of DLL Enable/Disable bit in EMRS ( Extended Mode Register )
UCHAR ucRefreshRateFactor; // memory refresh rate in unit of ms
UCHAR ucDensity; // _8Mx32, _16Mx32, _16Mx16, _32Mx16
UCHAR ucPreamble; //[7:4] Write Preamble, [3:0] Read Preamble
UCHAR ucMemAttrib; // Memory device attribute, like RDBI/WDBI etc.
ATOM_MEMORY_TIMING_FORMAT asMemTiming[5]; //Memory Timing block sort from lower clock to higher clock
}ATOM_MEMORY_FORMAT;
typedef struct _ATOM_VRAM_MODULE_V3
{
ULONG ulChannelMapCfg; // board dependent parameter: Channel combination
USHORT usSize; // size of ATOM_VRAM_MODULE_V3
USHORT usDefaultMVDDQ; // board dependent parameter:Default Memory Core Voltage
USHORT usDefaultMVDDC; // board dependent parameter:Default Memory IO Voltage
UCHAR ucExtMemoryID; // An external indicator (by hardcode, callback or pin) to tell what is the current memory module
UCHAR ucChannelNum; // board dependent parameter:Number of channel;
UCHAR ucChannelSize; // board dependent parameter:32bit or 64bit
UCHAR ucVREFI; // board dependent parameter: EXT or INT, +160mV to -140mV
UCHAR ucNPL_RT; // board dependent parameter: NPL round trip delay, used to calculate memory timing parameters
UCHAR ucFlag; // To enable/disable functionalities based on memory type
ATOM_MEMORY_FORMAT asMemory; // describes all video memory parameters from the memory spec
}ATOM_VRAM_MODULE_V3;
//ATOM_VRAM_MODULE_V3.ucNPL_RT
#define NPL_RT_MASK 0x0f
#define BATTERY_ODT_MASK 0xc0
#define ATOM_VRAM_MODULE ATOM_VRAM_MODULE_V3
typedef struct _ATOM_VRAM_MODULE_V4
{
ULONG ulChannelMapCfg; // board dependent parameter: Channel combination
USHORT usModuleSize; // size of ATOM_VRAM_MODULE_V4, make it easy for VBIOS to look for next entry of VRAM_MODULE
USHORT usPrivateReserved; // BIOS internal reserved space to optimize code size, updated by the compiler, shouldn't be modified manually!!
// MC_ARB_RAMCFG (includes NOOFBANK,NOOFRANKS,NOOFROWS,NOOFCOLS)
USHORT usReserved;
UCHAR ucExtMemoryID; // An external indicator (by hardcode, callback or pin) to tell what is the current memory module
UCHAR ucMemoryType; // [7:4]=0x1:DDR1;=0x2:DDR2;=0x3:DDR3;=0x4:DDR4; 0x5:DDR5 [3:0] - Must be 0x0 for now;
UCHAR ucChannelNum; // Number of channels present in this module config
UCHAR ucChannelWidth; // 0 - 32 bits; 1 - 64 bits
UCHAR ucDensity; // _8Mx32, _16Mx32, _16Mx16, _32Mx16
UCHAR ucFlag; // To enable/disable functionalities based on memory type
UCHAR ucMisc; // bit0: 0 - single rank; 1 - dual rank; bit2: 0 - burstlength 4, 1 - burstlength 8
UCHAR ucVREFI; // board dependent parameter
UCHAR ucNPL_RT; // board dependent parameter: NPL round trip delay, used to calculate memory timing parameters
UCHAR ucPreamble; // [7:4] Write Preamble, [3:0] Read Preamble
UCHAR ucMemorySize; // BIOS internal reserved space to optimize code size, updated by the compiler, shouldn't be modified manually!!
// Total memory size in unit of 16MB for CONFIG_MEMSIZE - bit[23:0] zeros
UCHAR ucReserved[3];
//compared with V3, we flatten the struct by merging ATOM_MEMORY_FORMAT (as is) into V4 at the same level
union{
USHORT usEMRS2Value; // EMRS2 Value is used for GDDR2 and GDDR4 memory type
USHORT usDDR3_Reserved;
};
union{
USHORT usEMRS3Value; // EMRS3 Value is used for GDDR2 and GDDR4 memory type
USHORT usDDR3_MR3; // Used for DDR3 memory
};
UCHAR ucMemoryVenderID; // Predefined. If not predefined, the vendor detection table gets executed
UCHAR ucRefreshRateFactor; // [1:0]=RefreshFactor (00=8ms, 01=16ms, 10=32ms,11=64ms)
UCHAR ucReserved2[2];
ATOM_MEMORY_TIMING_FORMAT asMemTiming[5];//Memory Timing block sort from lower clock to higher clock
}ATOM_VRAM_MODULE_V4;
#define VRAM_MODULE_V4_MISC_RANK_MASK 0x3
#define VRAM_MODULE_V4_MISC_DUAL_RANK 0x1
#define VRAM_MODULE_V4_MISC_BL_MASK 0x4
#define VRAM_MODULE_V4_MISC_BL8 0x4
#define VRAM_MODULE_V4_MISC_DUAL_CS 0x10
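// Illustrative sketch (not part of the VBIOS interface): interpreting
// ATOM_VRAM_MODULE_V4.ucMisc with the VRAM_MODULE_V4_MISC_* masks above
// (rank selection in bit0, burst length in bit2). Helper names are
// assumptions for illustration only.
static int atom_v4_is_dual_rank(const ATOM_VRAM_MODULE_V4 *psModule)
{
    return (psModule->ucMisc & VRAM_MODULE_V4_MISC_RANK_MASK) ==
           VRAM_MODULE_V4_MISC_DUAL_RANK;
}
static int atom_v4_burst_length(const ATOM_VRAM_MODULE_V4 *psModule)
{
    return (psModule->ucMisc & VRAM_MODULE_V4_MISC_BL_MASK) ? 8 : 4;
}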
typedef struct _ATOM_VRAM_MODULE_V5
{
ULONG ulChannelMapCfg; // board dependent parameter: Channel combination
USHORT usModuleSize; // size of ATOM_VRAM_MODULE_V5, make it easy for VBIOS to look for next entry of VRAM_MODULE
USHORT usPrivateReserved; // BIOS internal reserved space to optimize code size, updated by the compiler, shouldn't be modified manually!!
// MC_ARB_RAMCFG (includes NOOFBANK,NOOFRANKS,NOOFROWS,NOOFCOLS)
USHORT usReserved;
UCHAR ucExtMemoryID; // An external indicator (by hardcode, callback or pin) to tell what is the current memory module
UCHAR ucMemoryType; // [7:4]=0x1:DDR1;=0x2:DDR2;=0x3:DDR3;=0x4:DDR4; 0x5:DDR5 [3:0] - Must be 0x0 for now;
UCHAR ucChannelNum; // Number of channels present in this module config
UCHAR ucChannelWidth; // 0 - 32 bits; 1 - 64 bits
UCHAR ucDensity; // _8Mx32, _16Mx32, _16Mx16, _32Mx16
UCHAR ucFlag; // To enable/disable functionalities based on memory type
UCHAR ucMisc; // bit0: 0 - single rank; 1 - dual rank; bit2: 0 - burstlength 4, 1 - burstlength 8
UCHAR ucVREFI; // board dependent parameter
UCHAR ucNPL_RT; // board dependent parameter: NPL round trip delay, used to calculate memory timing parameters
UCHAR ucPreamble; // [7:4] Write Preamble, [3:0] Read Preamble
UCHAR ucMemorySize; // BIOS internal reserved space to optimize code size, updated by the compiler, shouldn't be modified manually!!
// Total memory size in unit of 16MB for CONFIG_MEMSIZE - bit[23:0] zeros
UCHAR ucReserved[3];
//compared with V3, we flatten the struct by merging ATOM_MEMORY_FORMAT (as is) into V5 at the same level
USHORT usEMRS2Value; // EMRS2 Value is used for GDDR2 and GDDR4 memory type
USHORT usEMRS3Value; // EMRS3 Value is used for GDDR2 and GDDR4 memory type
UCHAR ucMemoryVenderID; // Predefined. If not predefined, the vendor detection table gets executed
UCHAR ucRefreshRateFactor; // [1:0]=RefreshFactor (00=8ms, 01=16ms, 10=32ms,11=64ms)
UCHAR ucFIFODepth; // FIFO depth is supposed to be detected during vendor detection, but if we don't do vendor detection we have to hardcode the FIFO depth
UCHAR ucCDR_Bandwidth; // [0:3]=Read CDR bandwidth, [4:7] - Write CDR Bandwidth
ATOM_MEMORY_TIMING_FORMAT_V1 asMemTiming[5];//Memory Timing block sort from lower clock to higher clock
}ATOM_VRAM_MODULE_V5;
typedef struct _ATOM_VRAM_MODULE_V6
{
ULONG ulChannelMapCfg; // board dependent parameter: Channel combination
USHORT usModuleSize; // size of ATOM_VRAM_MODULE_V6, make it easy for VBIOS to look for next entry of VRAM_MODULE
USHORT usPrivateReserved; // BIOS internal reserved space to optimize code size, updated by the compiler, shouldn't be modified manually!!
// MC_ARB_RAMCFG (includes NOOFBANK,NOOFRANKS,NOOFROWS,NOOFCOLS)
USHORT usReserved;
UCHAR ucExtMemoryID; // An external indicator (by hardcode, callback or pin) to tell what is the current memory module
UCHAR ucMemoryType; // [7:4]=0x1:DDR1;=0x2:DDR2;=0x3:DDR3;=0x4:DDR4; 0x5:DDR5 [3:0] - Must be 0x0 for now;
UCHAR ucChannelNum; // Number of channels present in this module config
UCHAR ucChannelWidth; // 0 - 32 bits; 1 - 64 bits
UCHAR ucDensity; // _8Mx32, _16Mx32, _16Mx16, _32Mx16
UCHAR ucFlag; // To enable/disable functionalities based on memory type
UCHAR ucMisc; // bit0: 0 - single rank; 1 - dual rank; bit2: 0 - burstlength 4, 1 - burstlength 8
UCHAR ucVREFI; // board dependent parameter
UCHAR ucNPL_RT; // board dependent parameter: NPL round trip delay, used to calculate memory timing parameters
UCHAR ucPreamble; // [7:4] Write Preamble, [3:0] Read Preamble
UCHAR ucMemorySize; // BIOS internal reserved space to optimize code size, updated by the compiler, shouldn't be modified manually!!
// Total memory size in unit of 16MB for CONFIG_MEMSIZE - bit[23:0] zeros
UCHAR ucReserved[3];
//compared with V3, we flatten the struct by merging ATOM_MEMORY_FORMAT (as is) into V6 at the same level
USHORT usEMRS2Value; // EMRS2 Value is used for GDDR2 and GDDR4 memory type
USHORT usEMRS3Value; // EMRS3 Value is used for GDDR2 and GDDR4 memory type
UCHAR ucMemoryVenderID; // Predefined. If not predefined, the vendor detection table gets executed
UCHAR ucRefreshRateFactor; // [1:0]=RefreshFactor (00=8ms, 01=16ms, 10=32ms,11=64ms)
UCHAR ucFIFODepth; // FIFO depth is supposed to be detected during vendor detection, but if we don't do vendor detection we have to hardcode the FIFO depth
UCHAR ucCDR_Bandwidth; // [0:3]=Read CDR bandwidth, [4:7] - Write CDR Bandwidth
ATOM_MEMORY_TIMING_FORMAT_V2 asMemTiming[5];//Memory Timing block sort from lower clock to higher clock
}ATOM_VRAM_MODULE_V6;
typedef struct _ATOM_VRAM_MODULE_V7
{
// Design Specific Values
ULONG ulChannelMapCfg; // mmMC_SHARED_CHREMAP
USHORT usModuleSize; // Size of ATOM_VRAM_MODULE_V7
USHORT usPrivateReserved; // MC_ARB_RAMCFG (includes NOOFBANK,NOOFRANKS,NOOFROWS,NOOFCOLS)
USHORT usEnableChannels; // bit vector which indicates which channels are enabled
UCHAR ucExtMemoryID; // Current memory module ID
UCHAR ucMemoryType; // MEM_TYPE_DDR2/DDR3/GDDR3/GDDR5
UCHAR ucChannelNum; // Number of mem. channels supported in this module
UCHAR ucChannelWidth; // CHANNEL_16BIT/CHANNEL_32BIT/CHANNEL_64BIT
UCHAR ucDensity; // _8Mx32, _16Mx32, _16Mx16, _32Mx16
UCHAR ucReserve; // Former container for Mx_FLAGS like DBI_AC_MODE_ENABLE_ASIC for GDDR4. Not used now.
UCHAR ucMisc; // RANK_OF_THISMEMORY etc.
UCHAR ucVREFI; // Not used.
UCHAR ucNPL_RT; // Round trip delay (MC_SEQ_CAS_TIMING [28:24]:TCL=CL+NPL_RT-2). Always 2.
UCHAR ucPreamble; // [7:4] Write Preamble, [3:0] Read Preamble
UCHAR ucMemorySize; // Total memory size in unit of 16MB for CONFIG_MEMSIZE - bit[23:0] zeros
USHORT usSEQSettingOffset;
UCHAR ucReserved;
// Memory Module specific values
USHORT usEMRS2Value; // EMRS2/MR2 Value.
USHORT usEMRS3Value; // EMRS3/MR3 Value.
UCHAR ucMemoryVenderID; // [7:4] Revision, [3:0] Vendor code
UCHAR ucRefreshRateFactor; // [1:0]=RefreshFactor (00=8ms, 01=16ms, 10=32ms,11=64ms)
UCHAR ucFIFODepth; // FIFO depth can be detected during vendor detection, here is hardcoded per memory
UCHAR ucCDR_Bandwidth; // [0:3]=Read CDR bandwidth, [4:7] - Write CDR Bandwidth
char strMemPNString[20]; // part number string, terminated with '0'.
}ATOM_VRAM_MODULE_V7;
typedef struct _ATOM_VRAM_INFO_V2
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucNumOfVRAMModule;
ATOM_VRAM_MODULE aVramInfo[ATOM_MAX_NUMBER_OF_VRAM_MODULE]; // just for allocation, real number of blocks is in ucNumOfVRAMModule;
}ATOM_VRAM_INFO_V2;
typedef struct _ATOM_VRAM_INFO_V3
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usMemAdjustTblOffset; // offset of ATOM_INIT_REG_BLOCK structure for memory vendor specific MC adjust setting
USHORT usMemClkPatchTblOffset; // offset of ATOM_INIT_REG_BLOCK structure for memory clock specific MC setting
USHORT usRerseved;
UCHAR aVID_PinsShift[9]; // 8 bit strap maximum+terminator
UCHAR ucNumOfVRAMModule;
ATOM_VRAM_MODULE aVramInfo[ATOM_MAX_NUMBER_OF_VRAM_MODULE]; // just for allocation, real number of blocks is in ucNumOfVRAMModule;
ATOM_INIT_REG_BLOCK asMemPatch; // for allocation
// ATOM_INIT_REG_BLOCK aMemAdjust;
}ATOM_VRAM_INFO_V3;
#define ATOM_VRAM_INFO_LAST ATOM_VRAM_INFO_V3
typedef struct _ATOM_VRAM_INFO_V4
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usMemAdjustTblOffset; // offset of ATOM_INIT_REG_BLOCK structure for memory vendor specific MC adjust setting
USHORT usMemClkPatchTblOffset; // offset of ATOM_INIT_REG_BLOCK structure for memory clock specific MC setting
USHORT usRerseved;
UCHAR ucMemDQ7_0ByteRemap; // DQ line byte remap, =0: Memory Data line BYTE0, =1: BYTE1, =2: BYTE2, =3: BYTE3
ULONG ulMemDQ7_0BitRemap; // each DQ line ( 7~0) use 3bits, like: DQ0=Bit[2:0], DQ1:[5:3], ... DQ7:[23:21]
UCHAR ucReservde[4];
UCHAR ucNumOfVRAMModule;
ATOM_VRAM_MODULE_V4 aVramInfo[ATOM_MAX_NUMBER_OF_VRAM_MODULE]; // just for allocation, real number of blocks is in ucNumOfVRAMModule;
ATOM_INIT_REG_BLOCK asMemPatch; // for allocation
// ATOM_INIT_REG_BLOCK aMemAdjust;
}ATOM_VRAM_INFO_V4;
typedef struct _ATOM_VRAM_INFO_HEADER_V2_1
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usMemAdjustTblOffset; // offset of ATOM_INIT_REG_BLOCK structure for memory vendor specific MC adjust setting
USHORT usMemClkPatchTblOffset; // offset of ATOM_INIT_REG_BLOCK structure for memory clock specific MC setting
USHORT usPerBytePresetOffset; // offset of ATOM_INIT_REG_BLOCK structure for Per Byte Offset Preset Settings
USHORT usReserved[3];
UCHAR ucNumOfVRAMModule; // indicate number of VRAM module
UCHAR ucMemoryClkPatchTblVer; // version of memory AC timing register list
UCHAR ucVramModuleVer; // indicate ATOM_VRAM_MODUE version
UCHAR ucReserved;
ATOM_VRAM_MODULE_V7 aVramInfo[ATOM_MAX_NUMBER_OF_VRAM_MODULE]; // just for allocation, real number of blocks is in ucNumOfVRAMModule;
}ATOM_VRAM_INFO_HEADER_V2_1;
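// Illustrative sketch (not part of the VBIOS interface): walking the VRAM
// module entries of ATOM_VRAM_INFO_HEADER_V2_1. usModuleSize exists exactly
// so the next entry can be found without knowing the full layout, and
// ucNumOfVRAMModule gives the real count (the array dimension is only for
// allocation). The helper name is an assumption for illustration.
static const ATOM_VRAM_MODULE_V7 *
atom_vram_module_by_index(const ATOM_VRAM_INFO_HEADER_V2_1 *psInfo, UCHAR ucIdx)
{
    const UCHAR *pucEntry = (const UCHAR *)&psInfo->aVramInfo[0];
    UCHAR i;
    if (ucIdx >= psInfo->ucNumOfVRAMModule)
        return NULL;
    for (i = 0; i < ucIdx; i++)
        pucEntry += ((const ATOM_VRAM_MODULE_V7 *)pucEntry)->usModuleSize;
    return (const ATOM_VRAM_MODULE_V7 *)pucEntry;
}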
typedef struct _ATOM_VRAM_GPIO_DETECTION_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR aVID_PinsShift[9]; //8 bit strap maximum+terminator
}ATOM_VRAM_GPIO_DETECTION_INFO;
typedef struct _ATOM_MEMORY_TRAINING_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucTrainingLoop;
UCHAR ucReserved[3];
ATOM_INIT_REG_BLOCK asMemTrainingSetting;
}ATOM_MEMORY_TRAINING_INFO;
typedef struct SW_I2C_CNTL_DATA_PARAMETERS
{
UCHAR ucControl;
UCHAR ucData;
UCHAR ucSatus;
UCHAR ucTemp;
} SW_I2C_CNTL_DATA_PARAMETERS;
#define SW_I2C_CNTL_DATA_PS_ALLOCATION SW_I2C_CNTL_DATA_PARAMETERS
typedef struct _SW_I2C_IO_DATA_PARAMETERS
{
USHORT GPIO_Info;
UCHAR ucAct;
UCHAR ucData;
} SW_I2C_IO_DATA_PARAMETERS;
#define SW_I2C_IO_DATA_PS_ALLOCATION SW_I2C_IO_DATA_PARAMETERS
/****************************SW I2C CNTL DEFINITIONS**********************/
#define SW_I2C_IO_RESET 0
#define SW_I2C_IO_GET 1
#define SW_I2C_IO_DRIVE 2
#define SW_I2C_IO_SET 3
#define SW_I2C_IO_START 4
#define SW_I2C_IO_CLOCK 0
#define SW_I2C_IO_DATA 0x80
#define SW_I2C_IO_ZERO 0
#define SW_I2C_IO_ONE 0x100
#define SW_I2C_CNTL_READ 0
#define SW_I2C_CNTL_WRITE 1
#define SW_I2C_CNTL_START 2
#define SW_I2C_CNTL_STOP 3
#define SW_I2C_CNTL_OPEN 4
#define SW_I2C_CNTL_CLOSE 5
#define SW_I2C_CNTL_WRITE1BIT 6
//==============================VESA definition Portion===============================
#define VESA_OEM_PRODUCT_REV "01.00"
#define VESA_MODE_ATTRIBUTE_MODE_SUPPORT 0xBB //refer to VBE spec p.32, no TTY support
#define VESA_MODE_WIN_ATTRIBUTE 7
#define VESA_WIN_SIZE 64
typedef struct _PTR_32_BIT_STRUCTURE
{
USHORT Offset16;
USHORT Segment16;
} PTR_32_BIT_STRUCTURE;
typedef union _PTR_32_BIT_UNION
{
PTR_32_BIT_STRUCTURE SegmentOffset;
ULONG Ptr32_Bit;
} PTR_32_BIT_UNION;
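// Illustrative sketch: converting the real-mode segment:offset form of
// PTR_32_BIT_UNION into a linear address using the standard real-mode
// translation (segment * 16 + offset). The helper name is an assumption
// introduced here for illustration.
static ULONG ptr32_to_linear(PTR_32_BIT_UNION uPtr)
{
    return ((ULONG)uPtr.SegmentOffset.Segment16 << 4) +
           uPtr.SegmentOffset.Offset16;
}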
typedef struct _VBE_1_2_INFO_BLOCK_UPDATABLE
{
UCHAR VbeSignature[4];
USHORT VbeVersion;
PTR_32_BIT_UNION OemStringPtr;
UCHAR Capabilities[4];
PTR_32_BIT_UNION VideoModePtr;
USHORT TotalMemory;
} VBE_1_2_INFO_BLOCK_UPDATABLE;
typedef struct _VBE_2_0_INFO_BLOCK_UPDATABLE
{
VBE_1_2_INFO_BLOCK_UPDATABLE CommonBlock;
USHORT OemSoftRev;
PTR_32_BIT_UNION OemVendorNamePtr;
PTR_32_BIT_UNION OemProductNamePtr;
PTR_32_BIT_UNION OemProductRevPtr;
} VBE_2_0_INFO_BLOCK_UPDATABLE;
typedef union _VBE_VERSION_UNION
{
VBE_2_0_INFO_BLOCK_UPDATABLE VBE_2_0_InfoBlock;
VBE_1_2_INFO_BLOCK_UPDATABLE VBE_1_2_InfoBlock;
} VBE_VERSION_UNION;
typedef struct _VBE_INFO_BLOCK
{
VBE_VERSION_UNION UpdatableVBE_Info;
UCHAR Reserved[222];
UCHAR OemData[256];
} VBE_INFO_BLOCK;
typedef struct _VBE_FP_INFO
{
USHORT HSize;
USHORT VSize;
USHORT FPType;
UCHAR RedBPP;
UCHAR GreenBPP;
UCHAR BlueBPP;
UCHAR ReservedBPP;
ULONG RsvdOffScrnMemSize;
ULONG RsvdOffScrnMEmPtr;
UCHAR Reserved[14];
} VBE_FP_INFO;
typedef struct _VESA_MODE_INFO_BLOCK
{
// Mandatory information for all VBE revisions
USHORT ModeAttributes; // dw ? ; mode attributes
UCHAR WinAAttributes; // db ? ; window A attributes
UCHAR WinBAttributes; // db ? ; window B attributes
USHORT WinGranularity; // dw ? ; window granularity
USHORT WinSize; // dw ? ; window size
USHORT WinASegment; // dw ? ; window A start segment
USHORT WinBSegment; // dw ? ; window B start segment
ULONG WinFuncPtr; // dd ? ; real mode pointer to window function
USHORT BytesPerScanLine;// dw ? ; bytes per scan line
//; Mandatory information for VBE 1.2 and above
USHORT XResolution; // dw ? ; horizontal resolution in pixels or characters
USHORT YResolution; // dw ? ; vertical resolution in pixels or characters
UCHAR XCharSize; // db ? ; character cell width in pixels
UCHAR YCharSize; // db ? ; character cell height in pixels
UCHAR NumberOfPlanes; // db ? ; number of memory planes
UCHAR BitsPerPixel; // db ? ; bits per pixel
UCHAR NumberOfBanks; // db ? ; number of banks
UCHAR MemoryModel; // db ? ; memory model type
UCHAR BankSize; // db ? ; bank size in KB
UCHAR NumberOfImagePages;// db ? ; number of images
UCHAR ReservedForPageFunction;//db 1 ; reserved for page function
//; Direct Color fields(required for direct/6 and YUV/7 memory models)
UCHAR RedMaskSize; // db ? ; size of direct color red mask in bits
UCHAR RedFieldPosition; // db ? ; bit position of lsb of red mask
UCHAR GreenMaskSize; // db ? ; size of direct color green mask in bits
UCHAR GreenFieldPosition; // db ? ; bit position of lsb of green mask
UCHAR BlueMaskSize; // db ? ; size of direct color blue mask in bits
UCHAR BlueFieldPosition; // db ? ; bit position of lsb of blue mask
UCHAR RsvdMaskSize; // db ? ; size of direct color reserved mask in bits
UCHAR RsvdFieldPosition; // db ? ; bit position of lsb of reserved mask
UCHAR DirectColorModeInfo;// db ? ; direct color mode attributes
//; Mandatory information for VBE 2.0 and above
ULONG PhysBasePtr; // dd ? ; physical address for flat memory frame buffer
ULONG Reserved_1; // dd 0 ; reserved - always set to 0
USHORT Reserved_2; // dw 0 ; reserved - always set to 0
//; Mandatory information for VBE 3.0 and above
USHORT LinBytesPerScanLine; // dw ? ; bytes per scan line for linear modes
UCHAR BnkNumberOfImagePages;// db ? ; number of images for banked modes
UCHAR LinNumberOfImagPages; // db ? ; number of images for linear modes
UCHAR LinRedMaskSize; // db ? ; size of direct color red mask(linear modes)
UCHAR LinRedFieldPosition; // db ? ; bit position of lsb of red mask(linear modes)
UCHAR LinGreenMaskSize; // db ? ; size of direct color green mask(linear modes)
UCHAR LinGreenFieldPosition;// db ? ; bit position of lsb of green mask(linear modes)
UCHAR LinBlueMaskSize; // db ? ; size of direct color blue mask(linear modes)
UCHAR LinBlueFieldPosition; // db ? ; bit position of lsb of blue mask(linear modes)
UCHAR LinRsvdMaskSize; // db ? ; size of direct color reserved mask(linear modes)
UCHAR LinRsvdFieldPosition; // db ? ; bit position of lsb of reserved mask(linear modes)
ULONG MaxPixelClock; // dd ? ; maximum pixel clock(in Hz) for graphics mode
UCHAR Reserved; // db 190 dup (0)
} VESA_MODE_INFO_BLOCK;
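// Illustrative sketch: two common consumers of VESA_MODE_INFO_BLOCK. In the
// VBE spec, bit 7 of ModeAttributes advertises linear-framebuffer support,
// and for linear modes the pitch comes from LinBytesPerScanLine (VBE 3.0)
// rather than BytesPerScanLine. The macro and helper names below are
// assumptions for illustration, not definitions from this header.
#define VESA_MODE_ATTR_LFB_AVAILABLE 0x80 // assumed name; bit 7 per VBE spec
static int vesa_mode_has_lfb(const VESA_MODE_INFO_BLOCK *psMode)
{
    return (psMode->ModeAttributes & VESA_MODE_ATTR_LFB_AVAILABLE) != 0;
}
static ULONG vesa_lfb_image_size(const VESA_MODE_INFO_BLOCK *psMode)
{
    return (ULONG)psMode->LinBytesPerScanLine * psMode->YResolution;
}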
// BIOS function CALLS
#define ATOM_BIOS_EXTENDED_FUNCTION_CODE 0xA0 // ATI Extended Function code
#define ATOM_BIOS_FUNCTION_COP_MODE 0x00
#define ATOM_BIOS_FUNCTION_SHORT_QUERY1 0x04
#define ATOM_BIOS_FUNCTION_SHORT_QUERY2 0x05
#define ATOM_BIOS_FUNCTION_SHORT_QUERY3 0x06
#define ATOM_BIOS_FUNCTION_GET_DDC 0x0B
#define ATOM_BIOS_FUNCTION_ASIC_DSTATE 0x0E
#define ATOM_BIOS_FUNCTION_DEBUG_PLAY 0x0F
#define ATOM_BIOS_FUNCTION_STV_STD 0x16
#define ATOM_BIOS_FUNCTION_DEVICE_DET 0x17
#define ATOM_BIOS_FUNCTION_DEVICE_SWITCH 0x18
#define ATOM_BIOS_FUNCTION_PANEL_CONTROL 0x82
#define ATOM_BIOS_FUNCTION_OLD_DEVICE_DET 0x83
#define ATOM_BIOS_FUNCTION_OLD_DEVICE_SWITCH 0x84
#define ATOM_BIOS_FUNCTION_HW_ICON 0x8A
#define ATOM_BIOS_FUNCTION_SET_CMOS 0x8B
#define SUB_FUNCTION_UPDATE_DISPLAY_INFO 0x8000 // Sub function 80
#define SUB_FUNCTION_UPDATE_EXPANSION_INFO 0x8100 // Sub function 80
#define ATOM_BIOS_FUNCTION_DISPLAY_INFO 0x8D
#define ATOM_BIOS_FUNCTION_DEVICE_ON_OFF 0x8E
#define ATOM_BIOS_FUNCTION_VIDEO_STATE 0x8F
#define ATOM_SUB_FUNCTION_GET_CRITICAL_STATE 0x0300 // Sub function 03
#define ATOM_SUB_FUNCTION_GET_LIDSTATE 0x0700 // Sub function 7
#define ATOM_SUB_FUNCTION_THERMAL_STATE_NOTICE 0x1400 // Notify caller the current thermal state
#define ATOM_SUB_FUNCTION_CRITICAL_STATE_NOTICE 0x8300 // Notify caller the current critical state
#define ATOM_SUB_FUNCTION_SET_LIDSTATE 0x8500 // Sub function 85
#define ATOM_SUB_FUNCTION_GET_REQ_DISPLAY_FROM_SBIOS_MODE 0x8900// Sub function 89
#define ATOM_SUB_FUNCTION_INFORM_ADC_SUPPORT 0x9400 // Notify caller that ADC is supported
#define ATOM_BIOS_FUNCTION_VESA_DPMS 0x4F10 // Set DPMS
#define ATOM_SUB_FUNCTION_SET_DPMS 0x0001 // BL: Sub function 01
#define ATOM_SUB_FUNCTION_GET_DPMS 0x0002 // BL: Sub function 02
#define ATOM_PARAMETER_VESA_DPMS_ON 0x0000 // BH Parameter for DPMS ON.
#define ATOM_PARAMETER_VESA_DPMS_STANDBY 0x0100 // BH Parameter for DPMS STANDBY
#define ATOM_PARAMETER_VESA_DPMS_SUSPEND 0x0200 // BH Parameter for DPMS SUSPEND
#define ATOM_PARAMETER_VESA_DPMS_OFF 0x0400 // BH Parameter for DPMS OFF
#define ATOM_PARAMETER_VESA_DPMS_REDUCE_ON 0x0800 // BH Parameter for DPMS REDUCE ON (NOT SUPPORTED)
#define ATOM_BIOS_RETURN_CODE_MASK 0x0000FF00L
#define ATOM_BIOS_REG_HIGH_MASK 0x0000FF00L
#define ATOM_BIOS_REG_LOW_MASK 0x000000FFL
// structure used for VBIOS only
//DispOutInfoTable
typedef struct _ASIC_TRANSMITTER_INFO
{
USHORT usTransmitterObjId;
USHORT usSupportDevice;
UCHAR ucTransmitterCmdTblId;
UCHAR ucConfig;
UCHAR ucEncoderID; //available 1st encoder ( default )
UCHAR ucOptionEncoderID; //available 2nd encoder ( optional )
UCHAR uc2ndEncoderID;
UCHAR ucReserved;
}ASIC_TRANSMITTER_INFO;
#define ASIC_TRANSMITTER_INFO_CONFIG__DVO_SDR_MODE 0x01
#define ASIC_TRANSMITTER_INFO_CONFIG__COHERENT_MODE 0x02
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODEROBJ_ID_MASK 0xc4
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODER_A 0x00
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODER_B 0x04
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODER_C 0x40
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODER_D 0x44
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODER_E 0x80
#define ASIC_TRANSMITTER_INFO_CONFIG__ENCODER_F 0x84
typedef struct _ASIC_ENCODER_INFO
{
UCHAR ucEncoderID;
UCHAR ucEncoderConfig;
USHORT usEncoderCmdTblId;
}ASIC_ENCODER_INFO;
typedef struct _ATOM_DISP_OUT_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT ptrTransmitterInfo;
USHORT ptrEncoderInfo;
ASIC_TRANSMITTER_INFO asTransmitterInfo[1];
ASIC_ENCODER_INFO asEncoderInfo[1];
}ATOM_DISP_OUT_INFO;
typedef struct _ATOM_DISP_OUT_INFO_V2
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT ptrTransmitterInfo;
USHORT ptrEncoderInfo;
USHORT ptrMainCallParserFar; // direct address of main parser call in VBIOS binary.
ASIC_TRANSMITTER_INFO asTransmitterInfo[1];
ASIC_ENCODER_INFO asEncoderInfo[1];
}ATOM_DISP_OUT_INFO_V2;
typedef struct _ATOM_DISP_CLOCK_ID {
UCHAR ucPpllId;
UCHAR ucPpllAttribute;
}ATOM_DISP_CLOCK_ID;
// ucPpllAttribute
#define CLOCK_SOURCE_SHAREABLE 0x01
#define CLOCK_SOURCE_DP_MODE 0x02
#define CLOCK_SOURCE_NONE_DP_MODE 0x04
//DispOutInfoTable
typedef struct _ASIC_TRANSMITTER_INFO_V2
{
USHORT usTransmitterObjId;
USHORT usDispClkIdOffset; // point to clock source id list supported by Encoder Object
UCHAR ucTransmitterCmdTblId;
UCHAR ucConfig;
UCHAR ucEncoderID; // available 1st encoder ( default )
UCHAR ucOptionEncoderID; // available 2nd encoder ( optional )
UCHAR uc2ndEncoderID;
UCHAR ucReserved;
}ASIC_TRANSMITTER_INFO_V2;
typedef struct _ATOM_DISP_OUT_INFO_V3
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT ptrTransmitterInfo;
USHORT ptrEncoderInfo;
USHORT ptrMainCallParserFar; // direct address of main parser call in VBIOS binary.
USHORT usReserved;
UCHAR ucDCERevision;
UCHAR ucMaxDispEngineNum;
UCHAR ucMaxActiveDispEngineNum;
UCHAR ucMaxPPLLNum;
UCHAR ucCoreRefClkSource; // value of CORE_REF_CLK_SOURCE
UCHAR ucReserved[3];
ASIC_TRANSMITTER_INFO_V2 asTransmitterInfo[1]; // for alignment only
}ATOM_DISP_OUT_INFO_V3;
typedef enum CORE_REF_CLK_SOURCE{
CLOCK_SRC_XTALIN=0,
CLOCK_SRC_XO_IN=1,
CLOCK_SRC_XO_IN2=2,
}CORE_REF_CLK_SOURCE;
// DispDevicePriorityInfo
typedef struct _ATOM_DISPLAY_DEVICE_PRIORITY_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT asDevicePriority[16];
}ATOM_DISPLAY_DEVICE_PRIORITY_INFO;
//ProcessAuxChannelTransactionTable
typedef struct _PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS
{
USHORT lpAuxRequest;
USHORT lpDataOut;
UCHAR ucChannelID;
union
{
UCHAR ucReplyStatus;
UCHAR ucDelay;
};
UCHAR ucDataOutLen;
UCHAR ucReserved;
}PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS;
//ProcessAuxChannelTransactionTable
typedef struct _PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS_V2
{
USHORT lpAuxRequest;
USHORT lpDataOut;
UCHAR ucChannelID;
union
{
UCHAR ucReplyStatus;
UCHAR ucDelay;
};
UCHAR ucDataOutLen;
UCHAR ucHPD_ID; //=0: HPD1, =1: HPD2, =2: HPD3, =3: HPD4, =4: HPD5, =5: HPD6
}PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS_V2;
#define PROCESS_AUX_CHANNEL_TRANSACTION_PS_ALLOCATION PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS
//GetSinkType
typedef struct _DP_ENCODER_SERVICE_PARAMETERS
{
USHORT ucLinkClock;
union
{
UCHAR ucConfig; // for DP training command
UCHAR ucI2cId; // use for GET_SINK_TYPE command
};
UCHAR ucAction;
UCHAR ucStatus;
UCHAR ucLaneNum;
UCHAR ucReserved[2];
}DP_ENCODER_SERVICE_PARAMETERS;
// ucAction
#define ATOM_DP_ACTION_GET_SINK_TYPE 0x01
/* obsolete */
#define ATOM_DP_ACTION_TRAINING_START 0x02
#define ATOM_DP_ACTION_TRAINING_COMPLETE 0x03
#define ATOM_DP_ACTION_TRAINING_PATTERN_SEL 0x04
#define ATOM_DP_ACTION_SET_VSWING_PREEMP 0x05
#define ATOM_DP_ACTION_GET_VSWING_PREEMP 0x06
#define ATOM_DP_ACTION_BLANKING 0x07
// ucConfig
#define ATOM_DP_CONFIG_ENCODER_SEL_MASK 0x03
#define ATOM_DP_CONFIG_DIG1_ENCODER 0x00
#define ATOM_DP_CONFIG_DIG2_ENCODER 0x01
#define ATOM_DP_CONFIG_EXTERNAL_ENCODER 0x02
#define ATOM_DP_CONFIG_LINK_SEL_MASK 0x04
#define ATOM_DP_CONFIG_LINK_A 0x00
#define ATOM_DP_CONFIG_LINK_B 0x04
/* /obsolete */
#define DP_ENCODER_SERVICE_PS_ALLOCATION WRITE_ONE_BYTE_HW_I2C_DATA_PARAMETERS
typedef struct _DP_ENCODER_SERVICE_PARAMETERS_V2
{
USHORT usExtEncoderObjId; // External Encoder Object Id, output parameter only, use when ucAction = DP_SERVICE_V2_ACTION_DET_EXT_CONNECTION
UCHAR ucAuxId;
UCHAR ucAction;
UCHAR ucSinkType; // Input and output parameters.
UCHAR ucHPDId; // Input parameter, used when ucAction = DP_SERVICE_V2_ACTION_DET_EXT_CONNECTION
UCHAR ucReserved[2];
}DP_ENCODER_SERVICE_PARAMETERS_V2;
typedef struct _DP_ENCODER_SERVICE_PS_ALLOCATION_V2
{
DP_ENCODER_SERVICE_PARAMETERS_V2 asDPServiceParam;
PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS_V2 asAuxParam;
}DP_ENCODER_SERVICE_PS_ALLOCATION_V2;
// ucAction
#define DP_SERVICE_V2_ACTION_GET_SINK_TYPE 0x01
#define DP_SERVICE_V2_ACTION_DET_LCD_CONNECTION 0x02
// DP_TRAINING_TABLE
#define DPCD_SET_LINKRATE_LANENUM_PATTERN1_TBL_ADDR ATOM_DP_TRAINING_TBL_ADDR
#define DPCD_SET_SS_CNTL_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 8 )
#define DPCD_SET_LANE_VSWING_PREEMP_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 16 )
#define DPCD_SET_TRAINING_PATTERN0_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 24 )
#define DPCD_SET_TRAINING_PATTERN2_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 32)
#define DPCD_GET_LINKRATE_LANENUM_SS_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 40)
#define DPCD_GET_LANE_STATUS_ADJUST_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 48)
#define DP_I2C_AUX_DDC_WRITE_START_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 60)
#define DP_I2C_AUX_DDC_WRITE_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 64)
#define DP_I2C_AUX_DDC_READ_START_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 72)
#define DP_I2C_AUX_DDC_READ_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 76)
#define DP_I2C_AUX_DDC_WRITE_END_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 80)
#define DP_I2C_AUX_DDC_READ_END_TBL_ADDR (ATOM_DP_TRAINING_TBL_ADDR + 84)
typedef struct _PROCESS_I2C_CHANNEL_TRANSACTION_PARAMETERS
{
UCHAR ucI2CSpeed;
union
{
UCHAR ucRegIndex;
UCHAR ucStatus;
};
USHORT lpI2CDataOut;
UCHAR ucFlag;
UCHAR ucTransBytes;
UCHAR ucSlaveAddr;
UCHAR ucLineNumber;
}PROCESS_I2C_CHANNEL_TRANSACTION_PARAMETERS;
#define PROCESS_I2C_CHANNEL_TRANSACTION_PS_ALLOCATION PROCESS_I2C_CHANNEL_TRANSACTION_PARAMETERS
//ucFlag
#define HW_I2C_WRITE 1
#define HW_I2C_READ 0
#define I2C_2BYTE_ADDR 0x02
/****************************************************************************/
// Structures used by HW_Misc_OperationTable
/****************************************************************************/
typedef struct _ATOM_HW_MISC_OPERATION_INPUT_PARAMETER_V1_1
{
UCHAR ucCmd; // Input: To tell which action to take
UCHAR ucReserved[3];
ULONG ulReserved;
}ATOM_HW_MISC_OPERATION_INPUT_PARAMETER_V1_1;
typedef struct _ATOM_HW_MISC_OPERATION_OUTPUT_PARAMETER_V1_1
{
UCHAR ucReturnCode; // Output: Return value based on the action taken
UCHAR ucReserved[3];
ULONG ulReserved;
}ATOM_HW_MISC_OPERATION_OUTPUT_PARAMETER_V1_1;
// Action codes
#define ATOM_GET_SDI_SUPPORT 0xF0
// Return code
#define ATOM_UNKNOWN_CMD 0
#define ATOM_FEATURE_NOT_SUPPORTED 1
#define ATOM_FEATURE_SUPPORTED 2
typedef struct _ATOM_HW_MISC_OPERATION_PS_ALLOCATION
{
ATOM_HW_MISC_OPERATION_INPUT_PARAMETER_V1_1 sInput_Output;
PROCESS_I2C_CHANNEL_TRANSACTION_PARAMETERS sReserved;
}ATOM_HW_MISC_OPERATION_PS_ALLOCATION;
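// Illustrative sketch (not part of the VBIOS interface): querying SDI support
// through the HW_Misc_OperationTable parameter block. The assumption here is
// that the same allocation carries both input and output, so ucCmd is written
// before the table is executed and ucReturnCode is read back afterwards; the
// execution call itself lives outside this header and is only hinted at.
static int atom_query_sdi_support(ATOM_HW_MISC_OPERATION_PS_ALLOCATION *psPs)
{
    ATOM_HW_MISC_OPERATION_OUTPUT_PARAMETER_V1_1 *psOut;
    psPs->sInput_Output.ucCmd = ATOM_GET_SDI_SUPPORT;
    /* ... execute the HW_Misc_OperationTable command table here ... */
    psOut = (ATOM_HW_MISC_OPERATION_OUTPUT_PARAMETER_V1_1 *)
            &psPs->sInput_Output;  // output overlays the input block
    return psOut->ucReturnCode == ATOM_FEATURE_SUPPORTED;
}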
/****************************************************************************/
typedef struct _SET_HWBLOCK_INSTANCE_PARAMETER_V2
{
UCHAR ucHWBlkInst; // HW block instance, 0, 1, 2, ...
UCHAR ucReserved[3];
}SET_HWBLOCK_INSTANCE_PARAMETER_V2;
#define HWBLKINST_INSTANCE_MASK 0x07
#define HWBLKINST_HWBLK_MASK 0xF0
#define HWBLKINST_HWBLK_SHIFT 0x04
//ucHWBlock
#define SELECT_DISP_ENGINE 0
#define SELECT_DISP_PLL 1
#define SELECT_DCIO_UNIPHY_LINK0 2
#define SELECT_DCIO_UNIPHY_LINK1 3
#define SELECT_DCIO_IMPCAL 4
#define SELECT_DCIO_DIG 6
#define SELECT_CRTC_PIXEL_RATE 7
#define SELECT_VGA_BLK 8
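// Illustrative sketch: composing SET_HWBLOCK_INSTANCE_PARAMETER_V2.ucHWBlkInst
// from a block selector (one of the SELECT_* values above) and an instance
// number, using the HWBLKINST_* masks and shift. Helper name is an assumption.
static UCHAR atom_pack_hwblk_instance(UCHAR ucBlock, UCHAR ucInstance)
{
    return (UCHAR)(((ucBlock << HWBLKINST_HWBLK_SHIFT) & HWBLKINST_HWBLK_MASK) |
                   (ucInstance & HWBLKINST_INSTANCE_MASK));
}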
// DIGTransmitterInfoTable structure used to program UNIPHY settings
typedef struct _DIG_TRANSMITTER_INFO_HEADER_V3_1{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usDPVsPreEmphSettingOffset; // offset of PHY_ANALOG_SETTING_INFO * with DP Voltage Swing and Pre-Emphasis for each Link clock
USHORT usPhyAnalogRegListOffset; // offset of CLOCK_CONDITION_REGESTER_INFO* with non-DP mode analog setting register info
USHORT usPhyAnalogSettingOffset; // offset of CLOCK_CONDITION_SETTING_ENTRY* with non-DP mode analog settings for each link clock range
USHORT usPhyPllRegListOffset; // offset of CLOCK_CONDITION_REGESTER_INFO* with Phy Pll register Info
USHORT usPhyPllSettingOffset; // offset of CLOCK_CONDITION_SETTING_ENTRY* with Phy Pll Settings
}DIG_TRANSMITTER_INFO_HEADER_V3_1;
typedef struct _CLOCK_CONDITION_REGESTER_INFO{
USHORT usRegisterIndex;
UCHAR ucStartBit;
UCHAR ucEndBit;
}CLOCK_CONDITION_REGESTER_INFO;
typedef struct _CLOCK_CONDITION_SETTING_ENTRY{
USHORT usMaxClockFreq;
UCHAR ucEncodeMode;
UCHAR ucPhySel;
ULONG ulAnalogSetting[1];
}CLOCK_CONDITION_SETTING_ENTRY;
typedef struct _CLOCK_CONDITION_SETTING_INFO{
USHORT usEntrySize;
CLOCK_CONDITION_SETTING_ENTRY asClkCondSettingEntry[1];
}CLOCK_CONDITION_SETTING_INFO;
typedef struct _PHY_CONDITION_REG_VAL{
ULONG ulCondition;
ULONG ulRegVal;
}PHY_CONDITION_REG_VAL;
typedef struct _PHY_CONDITION_REG_INFO{
USHORT usRegIndex;
USHORT usSize;
PHY_CONDITION_REG_VAL asRegVal[1];
}PHY_CONDITION_REG_INFO;
typedef struct _PHY_ANALOG_SETTING_INFO{
UCHAR ucEncodeMode;
UCHAR ucPhySel;
USHORT usSize;
PHY_CONDITION_REG_INFO asAnalogSetting[1];
}PHY_ANALOG_SETTING_INFO;
/****************************************************************************/
//Portion VI: Definitions for VBIOS MC scratch registers that the driver uses
/****************************************************************************/
#define MC_MISC0__MEMORY_TYPE_MASK 0xF0000000
#define MC_MISC0__MEMORY_TYPE__GDDR1 0x10000000
#define MC_MISC0__MEMORY_TYPE__DDR2 0x20000000
#define MC_MISC0__MEMORY_TYPE__GDDR3 0x30000000
#define MC_MISC0__MEMORY_TYPE__GDDR4 0x40000000
#define MC_MISC0__MEMORY_TYPE__GDDR5 0x50000000
#define MC_MISC0__MEMORY_TYPE__DDR3 0xB0000000
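// Illustrative sketch: classifying the memory type carried in the MC_MISC0
// scratch register with the masks above. Helper name is an assumption.
static int mc_misc0_is_gddr5(ULONG ulMcMisc0)
{
    return (ulMcMisc0 & MC_MISC0__MEMORY_TYPE_MASK) ==
           MC_MISC0__MEMORY_TYPE__GDDR5;
}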
/****************************************************************************/
//Portion VI: Definitions that are obsolete
/****************************************************************************/
//==========================================================================================
//Remove the definitions below when driver is ready!
typedef struct _ATOM_DAC_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usMaxFrequency; // in 10kHz unit
USHORT usReserved;
}ATOM_DAC_INFO;
typedef struct _COMPASSIONATE_DATA
{
ATOM_COMMON_TABLE_HEADER sHeader;
//============================== DAC1 portion
UCHAR ucDAC1_BG_Adjustment;
UCHAR ucDAC1_DAC_Adjustment;
USHORT usDAC1_FORCE_Data;
//============================== DAC2 portion
UCHAR ucDAC2_CRT2_BG_Adjustment;
UCHAR ucDAC2_CRT2_DAC_Adjustment;
USHORT usDAC2_CRT2_FORCE_Data;
USHORT usDAC2_CRT2_MUX_RegisterIndex;
UCHAR ucDAC2_CRT2_MUX_RegisterInfo; //Bit[4:0]=Bit position,Bit[7]=1:Active High;=0 Active Low
UCHAR ucDAC2_NTSC_BG_Adjustment;
UCHAR ucDAC2_NTSC_DAC_Adjustment;
USHORT usDAC2_TV1_FORCE_Data;
USHORT usDAC2_TV1_MUX_RegisterIndex;
UCHAR ucDAC2_TV1_MUX_RegisterInfo; //Bit[4:0]=Bit position,Bit[7]=1:Active High;=0 Active Low
UCHAR ucDAC2_CV_BG_Adjustment;
UCHAR ucDAC2_CV_DAC_Adjustment;
USHORT usDAC2_CV_FORCE_Data;
USHORT usDAC2_CV_MUX_RegisterIndex;
UCHAR ucDAC2_CV_MUX_RegisterInfo; //Bit[4:0]=Bit position,Bit[7]=1:Active High;=0 Active Low
UCHAR ucDAC2_PAL_BG_Adjustment;
UCHAR ucDAC2_PAL_DAC_Adjustment;
USHORT usDAC2_TV2_FORCE_Data;
}COMPASSIONATE_DATA;
/****************************Supported Device Info Table Definitions**********************/
// ucConnectInfo:
// [7:4] - connector type
// = 1 - VGA connector
// = 2 - DVI-I
// = 3 - DVI-D
// = 4 - DVI-A
// = 5 - SVIDEO
// = 6 - COMPOSITE
// = 7 - LVDS
// = 8 - DIGITAL LINK
// = 9 - SCART
// = 0xA - HDMI_type A
// = 0xB - HDMI_type B
// = 0xE - Special case1 (DVI+DIN)
// Others=TBD
// [3:0] - DAC Associated
// = 0 - no DAC
// = 1 - DACA
// = 2 - DACB
// = 3 - External DAC
// Others=TBD
//
typedef struct _ATOM_CONNECTOR_INFO
{
#if ATOM_BIG_ENDIAN
UCHAR bfConnectorType:4;
UCHAR bfAssociatedDAC:4;
#else
UCHAR bfAssociatedDAC:4;
UCHAR bfConnectorType:4;
#endif
}ATOM_CONNECTOR_INFO;
typedef union _ATOM_CONNECTOR_INFO_ACCESS
{
ATOM_CONNECTOR_INFO sbfAccess;
UCHAR ucAccess;
}ATOM_CONNECTOR_INFO_ACCESS;
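// Illustrative sketch: reading the packed connector type and associated DAC
// through the ATOM_CONNECTOR_INFO_ACCESS union, matching the [7:4]/[3:0]
// layout documented in the ucConnectInfo comment block above. Going through
// ucAccess avoids depending on the endian-specific bitfield ordering.
// Helper names are assumptions for illustration.
static UCHAR atom_connector_type(ATOM_CONNECTOR_INFO_ACCESS sInfo)
{
    return (sInfo.ucAccess >> 4) & 0x0F;  // 1=VGA, 2=DVI-I, ... 0xA=HDMI type A
}
static UCHAR atom_connector_dac(ATOM_CONNECTOR_INFO_ACCESS sInfo)
{
    return sInfo.ucAccess & 0x0F;         // 0=no DAC, 1=DACA, 2=DACB, 3=external
}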
typedef struct _ATOM_CONNECTOR_INFO_I2C
{
ATOM_CONNECTOR_INFO_ACCESS sucConnectorInfo;
ATOM_I2C_ID_CONFIG_ACCESS sucI2cId;
}ATOM_CONNECTOR_INFO_I2C;
typedef struct _ATOM_SUPPORTED_DEVICES_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usDeviceSupport;
ATOM_CONNECTOR_INFO_I2C asConnInfo[ATOM_MAX_SUPPORTED_DEVICE_INFO];
}ATOM_SUPPORTED_DEVICES_INFO;
#define NO_INT_SRC_MAPPED 0xFF
typedef struct _ATOM_CONNECTOR_INC_SRC_BITMAP
{
UCHAR ucIntSrcBitmap;
}ATOM_CONNECTOR_INC_SRC_BITMAP;
typedef struct _ATOM_SUPPORTED_DEVICES_INFO_2
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usDeviceSupport;
ATOM_CONNECTOR_INFO_I2C asConnInfo[ATOM_MAX_SUPPORTED_DEVICE_INFO_2];
ATOM_CONNECTOR_INC_SRC_BITMAP asIntSrcInfo[ATOM_MAX_SUPPORTED_DEVICE_INFO_2];
}ATOM_SUPPORTED_DEVICES_INFO_2;
typedef struct _ATOM_SUPPORTED_DEVICES_INFO_2d1
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usDeviceSupport;
ATOM_CONNECTOR_INFO_I2C asConnInfo[ATOM_MAX_SUPPORTED_DEVICE];
ATOM_CONNECTOR_INC_SRC_BITMAP asIntSrcInfo[ATOM_MAX_SUPPORTED_DEVICE];
}ATOM_SUPPORTED_DEVICES_INFO_2d1;
#define ATOM_SUPPORTED_DEVICES_INFO_LAST ATOM_SUPPORTED_DEVICES_INFO_2d1
typedef struct _ATOM_MISC_CONTROL_INFO
{
USHORT usFrequency;
UCHAR ucPLL_ChargePump; // PLL charge-pump gain control
UCHAR ucPLL_DutyCycle; // PLL duty cycle control
UCHAR ucPLL_VCO_Gain; // PLL VCO gain control
UCHAR ucPLL_VoltageSwing; // PLL driver voltage swing control
}ATOM_MISC_CONTROL_INFO;
#define ATOM_MAX_MISC_INFO 4
typedef struct _ATOM_TMDS_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usMaxFrequency; // in 10 kHz
ATOM_MISC_CONTROL_INFO asMiscInfo[ATOM_MAX_MISC_INFO];
}ATOM_TMDS_INFO;
typedef struct _ATOM_ENCODER_ANALOG_ATTRIBUTE
{
UCHAR ucTVStandard; //Same as TV standards defined above,
UCHAR ucPadding[1];
}ATOM_ENCODER_ANALOG_ATTRIBUTE;
typedef struct _ATOM_ENCODER_DIGITAL_ATTRIBUTE
{
UCHAR ucAttribute; //Same as other digital encoder attributes defined above
UCHAR ucPadding[1];
}ATOM_ENCODER_DIGITAL_ATTRIBUTE;
typedef union _ATOM_ENCODER_ATTRIBUTE
{
ATOM_ENCODER_ANALOG_ATTRIBUTE sAlgAttrib;
ATOM_ENCODER_DIGITAL_ATTRIBUTE sDigAttrib;
}ATOM_ENCODER_ATTRIBUTE;
typedef struct _DVO_ENCODER_CONTROL_PARAMETERS
{
USHORT usPixelClock;
USHORT usEncoderID;
UCHAR ucDeviceType; //Use ATOM_DEVICE_xxx1_Index to indicate device type only.
UCHAR ucAction; //ATOM_ENABLE/ATOM_DISABLE/ATOM_HPD_INIT
ATOM_ENCODER_ATTRIBUTE usDevAttr;
}DVO_ENCODER_CONTROL_PARAMETERS;
typedef struct _DVO_ENCODER_CONTROL_PS_ALLOCATION
{
DVO_ENCODER_CONTROL_PARAMETERS sDVOEncoder;
WRITE_ONE_BYTE_HW_I2C_DATA_PS_ALLOCATION sReserved; //Caller doesn't need to init this portion
}DVO_ENCODER_CONTROL_PS_ALLOCATION;
#define ATOM_XTMDS_ASIC_SI164_ID 1
#define ATOM_XTMDS_ASIC_SI178_ID 2
#define ATOM_XTMDS_ASIC_TFP513_ID 3
#define ATOM_XTMDS_SUPPORTED_SINGLELINK 0x00000001
#define ATOM_XTMDS_SUPPORTED_DUALLINK 0x00000002
#define ATOM_XTMDS_MVPU_FPGA 0x00000004
typedef struct _ATOM_XTMDS_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
USHORT usSingleLinkMaxFrequency;
ATOM_I2C_ID_CONFIG_ACCESS sucI2cId; //Point the ID on which I2C is used to control external chip
UCHAR ucXtransimitterID;
UCHAR ucSupportedLink; // Bit field, bit0=1, single link supported;bit1=1,dual link supported
UCHAR ucSequnceAlterID; // Even with the same external TMDS ASIC, it's possible that the programming sequence alters
// due to design. This ID is used to alert the driver that the sequence is not "standard"!
UCHAR ucMasterAddress; // Address to control Master xTMDS Chip
UCHAR ucSlaveAddress; // Address to control Slave xTMDS Chip
}ATOM_XTMDS_INFO;
typedef struct _DFP_DPMS_STATUS_CHANGE_PARAMETERS
{
UCHAR ucEnable; // ATOM_ENABLE=On or ATOM_DISABLE=Off
UCHAR ucDevice; // ATOM_DEVICE_DFP1_INDEX....
UCHAR ucPadding[2];
}DFP_DPMS_STATUS_CHANGE_PARAMETERS;
/****************************Legacy Power Play Table Definitions **********************/
//Definitions for ulPowerPlayMiscInfo
#define ATOM_PM_MISCINFO_SPLIT_CLOCK 0x00000000L
#define ATOM_PM_MISCINFO_USING_MCLK_SRC 0x00000001L
#define ATOM_PM_MISCINFO_USING_SCLK_SRC 0x00000002L
#define ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT 0x00000004L
#define ATOM_PM_MISCINFO_VOLTAGE_DROP_ACTIVE_HIGH 0x00000008L
#define ATOM_PM_MISCINFO_LOAD_PERFORMANCE_EN 0x00000010L
#define ATOM_PM_MISCINFO_ENGINE_CLOCK_CONTRL_EN 0x00000020L
#define ATOM_PM_MISCINFO_MEMORY_CLOCK_CONTRL_EN 0x00000040L
#define ATOM_PM_MISCINFO_PROGRAM_VOLTAGE 0x00000080L //When this bit is set, ucVoltageDropIndex is not an index for a GPIO pin, but a voltage ID that SW needs to program
#define ATOM_PM_MISCINFO_ASIC_REDUCED_SPEED_SCLK_EN 0x00000100L
#define ATOM_PM_MISCINFO_ASIC_DYNAMIC_VOLTAGE_EN 0x00000200L
#define ATOM_PM_MISCINFO_ASIC_SLEEP_MODE_EN 0x00000400L
#define ATOM_PM_MISCINFO_LOAD_BALANCE_EN 0x00000800L
#define ATOM_PM_MISCINFO_DEFAULT_DC_STATE_ENTRY_TRUE 0x00001000L
#define ATOM_PM_MISCINFO_DEFAULT_LOW_DC_STATE_ENTRY_TRUE 0x00002000L
#define ATOM_PM_MISCINFO_LOW_LCD_REFRESH_RATE 0x00004000L
#define ATOM_PM_MISCINFO_DRIVER_DEFAULT_MODE 0x00008000L
#define ATOM_PM_MISCINFO_OVER_CLOCK_MODE 0x00010000L
#define ATOM_PM_MISCINFO_OVER_DRIVE_MODE 0x00020000L
#define ATOM_PM_MISCINFO_POWER_SAVING_MODE 0x00040000L
#define ATOM_PM_MISCINFO_THERMAL_DIODE_MODE 0x00080000L
#define ATOM_PM_MISCINFO_FRAME_MODULATION_MASK 0x00300000L //0-FM Disable, 1-2 level FM, 2-4 level FM, 3-Reserved
#define ATOM_PM_MISCINFO_FRAME_MODULATION_SHIFT 20
#define ATOM_PM_MISCINFO_DYN_CLK_3D_IDLE 0x00400000L
#define ATOM_PM_MISCINFO_DYNAMIC_CLOCK_DIVIDER_BY_2 0x00800000L
#define ATOM_PM_MISCINFO_DYNAMIC_CLOCK_DIVIDER_BY_4 0x01000000L
#define ATOM_PM_MISCINFO_DYNAMIC_HDP_BLOCK_EN 0x02000000L //When set, Dynamic
#define ATOM_PM_MISCINFO_DYNAMIC_MC_HOST_BLOCK_EN 0x04000000L //When set, Dynamic
#define ATOM_PM_MISCINFO_3D_ACCELERATION_EN 0x08000000L //When set, this mode is for accelerated 3D
#define ATOM_PM_MISCINFO_POWERPLAY_SETTINGS_GROUP_MASK 0x70000000L //1-Optimal Battery Life Group, 2-High Battery, 3-Balanced, 4-High Performance, 5- Optimal Performance (Default state with Default clocks)
#define ATOM_PM_MISCINFO_POWERPLAY_SETTINGS_GROUP_SHIFT 28
#define ATOM_PM_MISCINFO_ENABLE_BACK_BIAS 0x80000000L
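// Illustrative sketch: extracting the multi-bit fields of ulPowerPlayMiscInfo
// with the masks and shifts defined above. Helper names are assumptions.
static UCHAR atom_pm_settings_group(ULONG ulMiscInfo)
{
    // 1=Optimal Battery Life ... 5=Optimal Performance, per the mask comment
    return (UCHAR)((ulMiscInfo & ATOM_PM_MISCINFO_POWERPLAY_SETTINGS_GROUP_MASK)
                   >> ATOM_PM_MISCINFO_POWERPLAY_SETTINGS_GROUP_SHIFT);
}
static UCHAR atom_pm_frame_modulation(ULONG ulMiscInfo)
{
    // 0=FM disable, 1=2-level FM, 2=4-level FM, 3=reserved
    return (UCHAR)((ulMiscInfo & ATOM_PM_MISCINFO_FRAME_MODULATION_MASK)
                   >> ATOM_PM_MISCINFO_FRAME_MODULATION_SHIFT);
}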
#define ATOM_PM_MISCINFO2_SYSTEM_AC_LITE_MODE 0x00000001L
#define ATOM_PM_MISCINFO2_MULTI_DISPLAY_SUPPORT 0x00000002L
#define ATOM_PM_MISCINFO2_DYNAMIC_BACK_BIAS_EN 0x00000004L
#define ATOM_PM_MISCINFO2_FS3D_OVERDRIVE_INFO 0x00000008L
#define ATOM_PM_MISCINFO2_FORCEDLOWPWR_MODE 0x00000010L
#define ATOM_PM_MISCINFO2_VDDCI_DYNAMIC_VOLTAGE_EN 0x00000020L
#define ATOM_PM_MISCINFO2_VIDEO_PLAYBACK_CAPABLE 0x00000040L //If this bit is set in multi-pp mode, then the driver will pick the one with the minimal power consumption.
//If it's not set in any pp mode, the driver will use its default logic to pick a pp mode for video playback
#define ATOM_PM_MISCINFO2_NOT_VALID_ON_DC 0x00000080L
#define ATOM_PM_MISCINFO2_STUTTER_MODE_EN 0x00000100L
#define ATOM_PM_MISCINFO2_UVD_SUPPORT_MODE 0x00000200L
//ucTableFormatRevision=1
//ucTableContentRevision=1
typedef struct _ATOM_POWERMODE_INFO
{
ULONG ulMiscInfo; //The power level should be arranged in ascending order
ULONG ulReserved1; // must set to 0
ULONG ulReserved2; // must set to 0
USHORT usEngineClock;
USHORT usMemoryClock;
UCHAR ucVoltageDropIndex; // index to GPIO table
UCHAR ucSelectedPanel_RefreshRate;// panel refresh rate
UCHAR ucMinTemperature;
UCHAR ucMaxTemperature;
UCHAR ucNumPciELanes; // number of PCIE lanes
}ATOM_POWERMODE_INFO;
//ucTableFormatRevision=2
//ucTableContentRevision=1
typedef struct _ATOM_POWERMODE_INFO_V2
{
ULONG ulMiscInfo; //The power level should be arranged in ascending order
ULONG ulMiscInfo2;
ULONG ulEngineClock;
ULONG ulMemoryClock;
UCHAR ucVoltageDropIndex; // index to GPIO table
UCHAR ucSelectedPanel_RefreshRate;// panel refresh rate
UCHAR ucMinTemperature;
UCHAR ucMaxTemperature;
UCHAR ucNumPciELanes; // number of PCIE lanes
}ATOM_POWERMODE_INFO_V2;
//ucTableFormatRevision=2
//ucTableContentRevision=2
typedef struct _ATOM_POWERMODE_INFO_V3
{
ULONG ulMiscInfo; //The power level should be arranged in ascending order
ULONG ulMiscInfo2;
ULONG ulEngineClock;
ULONG ulMemoryClock;
UCHAR ucVoltageDropIndex; // index to Core (VDDC) voltage table
UCHAR ucSelectedPanel_RefreshRate;// panel refresh rate
UCHAR ucMinTemperature;
UCHAR ucMaxTemperature;
UCHAR ucNumPciELanes; // number of PCIE lanes
UCHAR ucVDDCI_VoltageDropIndex; // index to VDDCI voltage table
}ATOM_POWERMODE_INFO_V3;
#define ATOM_MAX_NUMBEROF_POWER_BLOCK 8
#define ATOM_PP_OVERDRIVE_INTBITMAP_AUXWIN 0x01
#define ATOM_PP_OVERDRIVE_INTBITMAP_OVERDRIVE 0x02
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_LM63 0x01
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_ADM1032 0x02
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_ADM1030 0x03
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_MUA6649 0x04
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_LM64 0x05
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_F75375 0x06
#define ATOM_PP_OVERDRIVE_THERMALCONTROLLER_ASC7512 0x07 // Andigilog
typedef struct _ATOM_POWERPLAY_INFO
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucOverdriveThermalController;
UCHAR ucOverdriveI2cLine;
UCHAR ucOverdriveIntBitmap;
UCHAR ucOverdriveControllerAddress;
UCHAR ucSizeOfPowerModeEntry;
UCHAR ucNumOfPowerModeEntries;
ATOM_POWERMODE_INFO asPowerPlayInfo[ATOM_MAX_NUMBEROF_POWER_BLOCK];
}ATOM_POWERPLAY_INFO;
typedef struct _ATOM_POWERPLAY_INFO_V2
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucOverdriveThermalController;
UCHAR ucOverdriveI2cLine;
UCHAR ucOverdriveIntBitmap;
UCHAR ucOverdriveControllerAddress;
UCHAR ucSizeOfPowerModeEntry;
UCHAR ucNumOfPowerModeEntries;
ATOM_POWERMODE_INFO_V2 asPowerPlayInfo[ATOM_MAX_NUMBEROF_POWER_BLOCK];
}ATOM_POWERPLAY_INFO_V2;
typedef struct _ATOM_POWERPLAY_INFO_V3
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucOverdriveThermalController;
UCHAR ucOverdriveI2cLine;
UCHAR ucOverdriveIntBitmap;
UCHAR ucOverdriveControllerAddress;
UCHAR ucSizeOfPowerModeEntry;
UCHAR ucNumOfPowerModeEntries;
ATOM_POWERMODE_INFO_V3 asPowerPlayInfo[ATOM_MAX_NUMBEROF_POWER_BLOCK];
}ATOM_POWERPLAY_INFO_V3;
/* New PPlib */
/**************************************************************************/
typedef struct _ATOM_PPLIB_THERMALCONTROLLER
{
UCHAR ucType; // one of ATOM_PP_THERMALCONTROLLER_*
UCHAR ucI2cLine; // as interpreted by DAL I2C
UCHAR ucI2cAddress;
UCHAR ucFanParameters; // Fan Control Parameters.
UCHAR ucFanMinRPM; // Fan Minimum RPM (hundreds) -- for display purposes only.
UCHAR ucFanMaxRPM; // Fan Maximum RPM (hundreds) -- for display purposes only.
UCHAR ucReserved; // ----
UCHAR ucFlags; // to be defined
} ATOM_PPLIB_THERMALCONTROLLER;
#define ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK 0x0f
#define ATOM_PP_FANPARAMETERS_NOFAN 0x80 // No fan is connected to this controller.
#define ATOM_PP_THERMALCONTROLLER_NONE 0
#define ATOM_PP_THERMALCONTROLLER_LM63 1 // Not used by PPLib
#define ATOM_PP_THERMALCONTROLLER_ADM1032 2 // Not used by PPLib
#define ATOM_PP_THERMALCONTROLLER_ADM1030 3 // Not used by PPLib
#define ATOM_PP_THERMALCONTROLLER_MUA6649 4 // Not used by PPLib
#define ATOM_PP_THERMALCONTROLLER_LM64 5
#define ATOM_PP_THERMALCONTROLLER_F75375 6 // Not used by PPLib
#define ATOM_PP_THERMALCONTROLLER_RV6xx 7
#define ATOM_PP_THERMALCONTROLLER_RV770 8
#define ATOM_PP_THERMALCONTROLLER_ADT7473 9
#define ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO 11
#define ATOM_PP_THERMALCONTROLLER_EVERGREEN 12
#define ATOM_PP_THERMALCONTROLLER_EMC2103 13 /* 0x0D */ // Only fan control will be implemented, do NOT show this in PPGen.
#define ATOM_PP_THERMALCONTROLLER_SUMO 14 /* 0x0E */ // Sumo type, used internally
#define ATOM_PP_THERMALCONTROLLER_NISLANDS 15
#define ATOM_PP_THERMALCONTROLLER_SISLANDS 16
#define ATOM_PP_THERMALCONTROLLER_LM96163 17
// Thermal controller 'combo type' to use an external controller for Fan control and an internal controller for thermal.
// We probably should reserve the bit 0x80 for this use.
// To keep the number of these types low we should also use the same code for all ASICs (i.e. do not distinguish RV6xx and RV7xx Internal here).
// The driver can pick the correct internal controller based on the ASIC.
#define ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL 0x89 // ADT7473 Fan Control + Internal Thermal Controller
#define ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL 0x8D // EMC2103 Fan Control + Internal Thermal Controller
typedef struct _ATOM_PPLIB_STATE
{
UCHAR ucNonClockStateIndex;
UCHAR ucClockStateIndices[1]; // variable-sized
} ATOM_PPLIB_STATE;
typedef struct _ATOM_PPLIB_FANTABLE
{
UCHAR ucFanTableFormat; // Change this if the table format changes or version changes so that the other fields are not the same.
UCHAR ucTHyst; // Temperature hysteresis. Integer.
USHORT usTMin; // The temperature, in 0.01 centigrades, below which we just run at a minimal PWM.
USHORT usTMed; // The middle temperature where we change slopes.
USHORT usTHigh; // The high point above TMed for adjusting the second slope.
USHORT usPWMMin; // The minimum PWM value in percent (0.01% increments).
USHORT usPWMMed; // The PWM value (in percent) at TMed.
USHORT usPWMHigh; // The PWM value at THigh.
} ATOM_PPLIB_FANTABLE;
typedef struct _ATOM_PPLIB_FANTABLE2
{
ATOM_PPLIB_FANTABLE basicTable;
USHORT usTMax; // The max temperature
} ATOM_PPLIB_FANTABLE2;
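// Illustrative sketch: evaluating the two-slope fan curve described by
// ATOM_PPLIB_FANTABLE. Temperatures are in 0.01 degC and PWM values in 0.01
// percent, per the field comments. Treating usTHigh as an absolute point and
// interpolating linearly between (TMin,PWMMin), (TMed,PWMMed) and
// (THigh,PWMHigh) is an assumption about how a driver would consume the
// table, not a mandated algorithm; the helper name is also an assumption.
static USHORT atom_fan_pwm_for_temp(const ATOM_PPLIB_FANTABLE *psFan, USHORT usT)
{
    if (usT <= psFan->usTMin)
        return psFan->usPWMMin;            // below TMin: minimal PWM
    if (usT <= psFan->usTMed)              // first slope: TMin..TMed
        return psFan->usPWMMin +
               (USHORT)((ULONG)(psFan->usPWMMed - psFan->usPWMMin) *
                        (usT - psFan->usTMin) / (psFan->usTMed - psFan->usTMin));
    if (usT <= psFan->usTHigh)             // second slope: TMed..THigh
        return psFan->usPWMMed +
               (USHORT)((ULONG)(psFan->usPWMHigh - psFan->usPWMMed) *
                        (usT - psFan->usTMed) / (psFan->usTHigh - psFan->usTMed));
    return psFan->usPWMHigh;               // above THigh: clamp
}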
typedef struct _ATOM_PPLIB_EXTENDEDHEADER
{
USHORT usSize;
ULONG ulMaxEngineClock; // For Overdrive.
ULONG ulMaxMemoryClock; // For Overdrive.
// Add extra system parameters here, always adjust size to include all fields.
USHORT usVCETableOffset; //points to ATOM_PPLIB_VCE_Table
USHORT usUVDTableOffset; //points to ATOM_PPLIB_UVD_Table
} ATOM_PPLIB_EXTENDEDHEADER;
//// ATOM_PPLIB_POWERPLAYTABLE::ulPlatformCaps
#define ATOM_PP_PLATFORM_CAP_BACKBIAS 1
#define ATOM_PP_PLATFORM_CAP_POWERPLAY 2
#define ATOM_PP_PLATFORM_CAP_SBIOSPOWERSOURCE 4
#define ATOM_PP_PLATFORM_CAP_ASPM_L0s 8
#define ATOM_PP_PLATFORM_CAP_ASPM_L1 16
#define ATOM_PP_PLATFORM_CAP_HARDWAREDC 32
#define ATOM_PP_PLATFORM_CAP_GEMINIPRIMARY 64
#define ATOM_PP_PLATFORM_CAP_STEPVDDC 128
#define ATOM_PP_PLATFORM_CAP_VOLTAGECONTROL 256
#define ATOM_PP_PLATFORM_CAP_SIDEPORTCONTROL 512
#define ATOM_PP_PLATFORM_CAP_TURNOFFPLL_ASPML1 1024
#define ATOM_PP_PLATFORM_CAP_HTLINKCONTROL 2048
#define ATOM_PP_PLATFORM_CAP_MVDDCONTROL 4096
#define ATOM_PP_PLATFORM_CAP_GOTO_BOOT_ON_ALERT 0x2000 // Go to boot state on alerts, e.g. on an AC->DC transition.
#define ATOM_PP_PLATFORM_CAP_DONT_WAIT_FOR_VBLANK_ON_ALERT 0x4000 // Do NOT wait for VBLANK during an alert (e.g. AC->DC transition).
#define ATOM_PP_PLATFORM_CAP_VDDCI_CONTROL 0x8000 // Whether the driver controls VDDCI independently from VDDC.
#define ATOM_PP_PLATFORM_CAP_REGULATOR_HOT 0x00010000 // Enable the 'regulator hot' feature.
#define ATOM_PP_PLATFORM_CAP_BACO 0x00020000 // Whether the driver supports the BACO state.
typedef struct _ATOM_PPLIB_POWERPLAYTABLE
{
ATOM_COMMON_TABLE_HEADER sHeader;
UCHAR ucDataRevision;
UCHAR ucNumStates;
UCHAR ucStateEntrySize;
UCHAR ucClockInfoSize;
UCHAR ucNonClockSize;
// offset from start of this table to array of ucNumStates ATOM_PPLIB_STATE structures
USHORT usStateArrayOffset;
// offset from start of this table to array of ASIC-specific structures,
// currently ATOM_PPLIB_CLOCK_INFO.
USHORT usClockInfoArrayOffset;
// offset from start of this table to array of ATOM_PPLIB_NONCLOCK_INFO
USHORT usNonClockInfoArrayOffset;
USHORT usBackbiasTime; // in microseconds
USHORT usVoltageTime; // in microseconds
USHORT usTableSize; //the size of this structure, or the extended structure
ULONG ulPlatformCaps; // See ATOM_PP_PLATFORM_CAP_*
ATOM_PPLIB_THERMALCONTROLLER sThermalController;
USHORT usBootClockInfoOffset;
USHORT usBootNonClockInfoOffset;
} ATOM_PPLIB_POWERPLAYTABLE;
typedef struct _ATOM_PPLIB_POWERPLAYTABLE2
{
ATOM_PPLIB_POWERPLAYTABLE basicTable;
UCHAR ucNumCustomThermalPolicy;
USHORT usCustomThermalPolicyArrayOffset;
}ATOM_PPLIB_POWERPLAYTABLE2, *LPATOM_PPLIB_POWERPLAYTABLE2;
typedef struct _ATOM_PPLIB_POWERPLAYTABLE3
{
ATOM_PPLIB_POWERPLAYTABLE2 basicTable2;
USHORT usFormatID; // To be used ONLY by PPGen.
USHORT usFanTableOffset;
USHORT usExtendendedHeaderOffset;
} ATOM_PPLIB_POWERPLAYTABLE3, *LPATOM_PPLIB_POWERPLAYTABLE3;
typedef struct _ATOM_PPLIB_POWERPLAYTABLE4
{
ATOM_PPLIB_POWERPLAYTABLE3 basicTable3;
ULONG ulGoldenPPID; // PPGen use only
ULONG ulGoldenRevision; // PPGen use only
USHORT usVddcDependencyOnSCLKOffset;
USHORT usVddciDependencyOnMCLKOffset;
USHORT usVddcDependencyOnMCLKOffset;
USHORT usMaxClockVoltageOnDCOffset;
USHORT usVddcPhaseShedLimitsTableOffset; // Points to ATOM_PPLIB_PhaseSheddingLimits_Table
USHORT usReserved;
} ATOM_PPLIB_POWERPLAYTABLE4, *LPATOM_PPLIB_POWERPLAYTABLE4;
typedef struct _ATOM_PPLIB_POWERPLAYTABLE5
{
ATOM_PPLIB_POWERPLAYTABLE4 basicTable4;
ULONG ulTDPLimit;
ULONG ulNearTDPLimit;
ULONG ulSQRampingThreshold;
USHORT usCACLeakageTableOffset; // Points to ATOM_PPLIB_CAC_Leakage_Table
ULONG ulCACLeakage; // The iLeakage for driver calculated CAC leakage table
USHORT usTDPODLimit;
USHORT usLoadLineSlope; // in milliOhms * 100
} ATOM_PPLIB_POWERPLAYTABLE5, *LPATOM_PPLIB_POWERPLAYTABLE5;
//// ATOM_PPLIB_NONCLOCK_INFO::usClassification
#define ATOM_PPLIB_CLASSIFICATION_UI_MASK 0x0007
#define ATOM_PPLIB_CLASSIFICATION_UI_SHIFT 0
#define ATOM_PPLIB_CLASSIFICATION_UI_NONE 0
#define ATOM_PPLIB_CLASSIFICATION_UI_BATTERY 1
#define ATOM_PPLIB_CLASSIFICATION_UI_BALANCED 3
#define ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE 5
// 2, 4, 6, 7 are reserved
#define ATOM_PPLIB_CLASSIFICATION_BOOT 0x0008
#define ATOM_PPLIB_CLASSIFICATION_THERMAL 0x0010
#define ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE 0x0020
#define ATOM_PPLIB_CLASSIFICATION_REST 0x0040
#define ATOM_PPLIB_CLASSIFICATION_FORCED 0x0080
#define ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE 0x0100
#define ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE 0x0200
#define ATOM_PPLIB_CLASSIFICATION_UVDSTATE 0x0400
#define ATOM_PPLIB_CLASSIFICATION_3DLOW 0x0800
#define ATOM_PPLIB_CLASSIFICATION_ACPI 0x1000
#define ATOM_PPLIB_CLASSIFICATION_HD2STATE 0x2000
#define ATOM_PPLIB_CLASSIFICATION_HDSTATE 0x4000
#define ATOM_PPLIB_CLASSIFICATION_SDSTATE 0x8000
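// Illustrative sketch: pulling the UI label out of
// ATOM_PPLIB_NONCLOCK_INFO::usClassification with the mask and shift above.
// Helper name is an assumption for illustration.
static USHORT atom_pplib_ui_class(USHORT usClassification)
{
    return (usClassification & ATOM_PPLIB_CLASSIFICATION_UI_MASK)
           >> ATOM_PPLIB_CLASSIFICATION_UI_SHIFT;  // e.g. _UI_BATTERY
}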
//// ATOM_PPLIB_NONCLOCK_INFO::usClassification2
#define ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2 0x0001
#define ATOM_PPLIB_CLASSIFICATION2_ULV 0x0002
#define ATOM_PPLIB_CLASSIFICATION2_MVC 0x0004 //Multi-View Codec (BD-3D)
//// ATOM_PPLIB_NONCLOCK_INFO::ulCapsAndSettings
#define ATOM_PPLIB_SINGLE_DISPLAY_ONLY 0x00000001
#define ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK 0x00000002
// 0 is 2.5Gb/s, 1 is 5Gb/s
#define ATOM_PPLIB_PCIE_LINK_SPEED_MASK 0x00000004
#define ATOM_PPLIB_PCIE_LINK_SPEED_SHIFT 2
// lanes - 1: 1, 2, 4, 8, 12, 16 permitted by PCIE spec
#define ATOM_PPLIB_PCIE_LINK_WIDTH_MASK 0x000000F8
#define ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT 3
// lookup into reduced refresh-rate table
#define ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK 0x00000F00
#define ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT 8
#define ATOM_PPLIB_LIMITED_REFRESHRATE_UNLIMITED 0
#define ATOM_PPLIB_LIMITED_REFRESHRATE_50HZ 1
// 2-15 TBD as needed.
#define ATOM_PPLIB_SOFTWARE_DISABLE_LOADBALANCING 0x00001000
#define ATOM_PPLIB_SOFTWARE_ENABLE_SLEEP_FOR_TIMESTAMPS 0x00002000
#define ATOM_PPLIB_DISALLOW_ON_DC 0x00004000
#define ATOM_PPLIB_ENABLE_VARIBRIGHT 0x00008000
//memory related flags
#define ATOM_PPLIB_SWSTATE_MEMORY_DLL_OFF 0x00010000
//M3 Arb //2bits, current 3 sets of parameters in total
#define ATOM_PPLIB_M3ARB_MASK 0x00060000
#define ATOM_PPLIB_M3ARB_SHIFT 17
#define ATOM_PPLIB_ENABLE_DRR 0x00080000
// remaining 16 bits are reserved
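// Illustrative sketch: decoding the PCIe fields of ulCapsAndSettings. The
// width field stores "lanes - 1" per the comment above, so 1 is added back;
// the speed bit is 0 for 2.5Gb/s and 1 for 5Gb/s. Helper names are assumptions.
static int atom_pplib_pcie_lanes(ULONG ulCaps)
{
    return (int)((ulCaps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK)
                 >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT) + 1;
}
static int atom_pplib_pcie_gen2(ULONG ulCaps)
{
    return (ulCaps & ATOM_PPLIB_PCIE_LINK_SPEED_MASK) != 0;
}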
typedef struct _ATOM_PPLIB_THERMAL_STATE
{
UCHAR ucMinTemperature;
UCHAR ucMaxTemperature;
UCHAR ucThermalAction;
}ATOM_PPLIB_THERMAL_STATE, *LPATOM_PPLIB_THERMAL_STATE;
// Contained in an array starting at the offset
// in ATOM_PPLIB_POWERPLAYTABLE::usNonClockInfoArrayOffset.
// referenced from ATOM_PPLIB_STATE_INFO::ucNonClockStateIndex
#define ATOM_PPLIB_NONCLOCKINFO_VER1 12
#define ATOM_PPLIB_NONCLOCKINFO_VER2 24
typedef struct _ATOM_PPLIB_NONCLOCK_INFO
{
USHORT usClassification;
UCHAR ucMinTemperature;
UCHAR ucMaxTemperature;
ULONG ulCapsAndSettings;
UCHAR ucRequiredPower;
USHORT usClassification2;
ULONG ulVCLK;
ULONG ulDCLK;
UCHAR ucUnused[5];
} ATOM_PPLIB_NONCLOCK_INFO;
// Contained in an array starting at the offset
// in ATOM_PPLIB_POWERPLAYTABLE::usClockInfoArrayOffset.
// referenced from ATOM_PPLIB_STATE::ucClockStateIndices
typedef struct _ATOM_PPLIB_R600_CLOCK_INFO
{
USHORT usEngineClockLow;
UCHAR ucEngineClockHigh;
USHORT usMemoryClockLow;
UCHAR ucMemoryClockHigh;
USHORT usVDDC;
USHORT usUnused1;
USHORT usUnused2;
ULONG ulFlags; // ATOM_PPLIB_R600_FLAGS_*
} ATOM_PPLIB_R600_CLOCK_INFO;
// ulFlags in ATOM_PPLIB_R600_CLOCK_INFO
#define ATOM_PPLIB_R600_FLAGS_PCIEGEN2 1
#define ATOM_PPLIB_R600_FLAGS_UVDSAFE 2
#define ATOM_PPLIB_R600_FLAGS_BACKBIASENABLE 4
#define ATOM_PPLIB_R600_FLAGS_MEMORY_ODT_OFF 8
#define ATOM_PPLIB_R600_FLAGS_MEMORY_DLL_OFF 16
#define ATOM_PPLIB_R600_FLAGS_LOWPOWER 32 // On the RV770 use 'low power' setting (sequencer S0).
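// Illustrative sketch: the clock fields in these clock-info structures are
// split into a 16-bit low word and an 8-bit high byte; reassembling them
// yields the 24-bit value (in 10 kHz units, as used elsewhere in this
// header). Helper name is an assumption for illustration.
static ULONG atom_pplib_r600_sclk(const ATOM_PPLIB_R600_CLOCK_INFO *psInfo)
{
    return ((ULONG)psInfo->ucEngineClockHigh << 16) | psInfo->usEngineClockLow;
}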
typedef struct _ATOM_PPLIB_EVERGREEN_CLOCK_INFO
{
USHORT usEngineClockLow;
UCHAR ucEngineClockHigh;
USHORT usMemoryClockLow;
UCHAR ucMemoryClockHigh;
USHORT usVDDC;
USHORT usVDDCI;
USHORT usUnused;
ULONG ulFlags; // ATOM_PPLIB_R600_FLAGS_*
} ATOM_PPLIB_EVERGREEN_CLOCK_INFO;
typedef struct _ATOM_PPLIB_SI_CLOCK_INFO
{
USHORT usEngineClockLow;
UCHAR ucEngineClockHigh;
USHORT usMemoryClockLow;
UCHAR ucMemoryClockHigh;
USHORT usVDDC;
USHORT usVDDCI;
UCHAR ucPCIEGen;
UCHAR ucUnused1;
ULONG ulFlags; // ATOM_PPLIB_SI_FLAGS_*, no flag is necessary for now
} ATOM_PPLIB_SI_CLOCK_INFO;
typedef struct _ATOM_PPLIB_RS780_CLOCK_INFO
{
USHORT usLowEngineClockLow; // Low Engine clock in MHz (the same way as on the R600).
UCHAR ucLowEngineClockHigh;
USHORT usHighEngineClockLow; // High Engine clock in MHz.
UCHAR ucHighEngineClockHigh;
USHORT usMemoryClockLow; // For now one of the ATOM_PPLIB_RS780_SPMCLK_XXXX constants.
UCHAR ucMemoryClockHigh; // Currently unused.
UCHAR ucPadding; // For proper alignment and size.
USHORT usVDDC; // For the 780, use: None, Low, High, Variable
UCHAR ucMaxHTLinkWidth; // From SBIOS - {2, 4, 8, 16}
UCHAR ucMinHTLinkWidth; // From SBIOS - {2, 4, 8, 16}. Effective only if CDLW enabled. Minimum downstream width could be bigger due to display BW requirements.
USHORT usHTLinkFreq; // See definition ATOM_PPLIB_RS780_HTLINKFREQ_xxx or in MHz(>=200).
ULONG ulFlags;
} ATOM_PPLIB_RS780_CLOCK_INFO;
#define ATOM_PPLIB_RS780_VOLTAGE_NONE 0
#define ATOM_PPLIB_RS780_VOLTAGE_LOW 1
#define ATOM_PPLIB_RS780_VOLTAGE_HIGH 2
#define ATOM_PPLIB_RS780_VOLTAGE_VARIABLE 3
#define ATOM_PPLIB_RS780_SPMCLK_NONE 0 // We cannot change the side port memory clock, leave it as it is.
#define ATOM_PPLIB_RS780_SPMCLK_LOW 1
#define ATOM_PPLIB_RS780_SPMCLK_HIGH 2
#define ATOM_PPLIB_RS780_HTLINKFREQ_NONE 0
#define ATOM_PPLIB_RS780_HTLINKFREQ_LOW 1
#define ATOM_PPLIB_RS780_HTLINKFREQ_HIGH 2
typedef struct _ATOM_PPLIB_SUMO_CLOCK_INFO{
USHORT usEngineClockLow; //clock frequency & 0xFFFF. The unit is 10 kHz
UCHAR ucEngineClockHigh; //clock frequency >> 16.
UCHAR vddcIndex; //2-bit vddc index;
USHORT tdpLimit;
//please initialize to 0
USHORT rsv1;
//please initialize to 0s
ULONG rsv2[2];
}ATOM_PPLIB_SUMO_CLOCK_INFO;
typedef struct _ATOM_PPLIB_STATE_V2
{
//number of valid dpm levels in this state; Driver uses it to calculate the whole
//size of the state: sizeof(ATOM_PPLIB_STATE_V2) + (ucNumDPMLevels - 1) * sizeof(UCHAR)
UCHAR ucNumDPMLevels;
//an index into the array of nonClockInfos
UCHAR nonClockInfoIndex;
/**
* Driver will read the first ucNumDPMLevels in this array
*/
UCHAR clockInfoIndex[1];
} ATOM_PPLIB_STATE_V2;
typedef struct _StateArray{
//how many states we have
UCHAR ucNumEntries;
ATOM_PPLIB_STATE_V2 states[1];
}StateArray;
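// Illustration only: a sketch of walking the variable-size states, using the
// size rule documented in ATOM_PPLIB_STATE_V2 above (assumes ucNumDPMLevels
// >= 1 in each state). 'visit' is a hypothetical callback, not driver code.
static __inline void pplib_for_each_state(const StateArray *pArray,
    void (*visit)(const ATOM_PPLIB_STATE_V2 *))
{
    const UCHAR *p = (const UCHAR *)&pArray->states[0];
    UCHAR i;
    for (i = 0; i < pArray->ucNumEntries; i++) {
        const ATOM_PPLIB_STATE_V2 *s = (const ATOM_PPLIB_STATE_V2 *)p;
        visit(s);
        // advance by the documented per-state size
        p += sizeof(ATOM_PPLIB_STATE_V2) +
            (s->ucNumDPMLevels - 1) * sizeof(UCHAR);
    }
}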
typedef struct _ClockInfoArray{
//how many clock levels we have
UCHAR ucNumEntries;
//sizeof(ATOM_PPLIB_CLOCK_INFO)
UCHAR ucEntrySize;
UCHAR clockInfo[1];
}ClockInfoArray;
typedef struct _NonClockInfoArray{
//how many non-clock levels we have; normally the same as the number of states
UCHAR ucNumEntries;
//sizeof(ATOM_PPLIB_NONCLOCK_INFO)
UCHAR ucEntrySize;
ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[1];
}NonClockInfoArray;
typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Record
{
USHORT usClockLow;
UCHAR ucClockHigh;
USHORT usVoltage;
}ATOM_PPLIB_Clock_Voltage_Dependency_Record;
typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Table
{
UCHAR ucNumEntries; // Number of entries.
ATOM_PPLIB_Clock_Voltage_Dependency_Record entries[1]; // Dynamically allocate entries.
}ATOM_PPLIB_Clock_Voltage_Dependency_Table;
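// Illustration only: the clock in each record is split into a 16-bit low part
// and an 8-bit high part; a sketch of reassembling the 24-bit value,
// assuming the little-endian field layout above. Helper name is hypothetical.
static __inline ULONG pplib_dep_record_clock(
    const ATOM_PPLIB_Clock_Voltage_Dependency_Record *pRec)
{
    return (ULONG)pRec->usClockLow | ((ULONG)pRec->ucClockHigh << 16);
}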
typedef struct _ATOM_PPLIB_Clock_Voltage_Limit_Record
{
USHORT usSclkLow;
UCHAR ucSclkHigh;
USHORT usMclkLow;
UCHAR ucMclkHigh;
USHORT usVddc;
USHORT usVddci;
}ATOM_PPLIB_Clock_Voltage_Limit_Record;
typedef struct _ATOM_PPLIB_Clock_Voltage_Limit_Table
{
UCHAR ucNumEntries; // Number of entries.
ATOM_PPLIB_Clock_Voltage_Limit_Record entries[1]; // Dynamically allocate entries.
}ATOM_PPLIB_Clock_Voltage_Limit_Table;
typedef struct _ATOM_PPLIB_CAC_Leakage_Record
{
USHORT usVddc; // We use this field for the "fake" standardized VDDC for power calculations
ULONG ulLeakageValue;
}ATOM_PPLIB_CAC_Leakage_Record;
typedef struct _ATOM_PPLIB_CAC_Leakage_Table
{
UCHAR ucNumEntries; // Number of entries.
ATOM_PPLIB_CAC_Leakage_Record entries[1]; // Dynamically allocate entries.
}ATOM_PPLIB_CAC_Leakage_Table;
typedef struct _ATOM_PPLIB_PhaseSheddingLimits_Record
{
USHORT usVoltage;
USHORT usSclkLow;
UCHAR ucSclkHigh;
USHORT usMclkLow;
UCHAR ucMclkHigh;
}ATOM_PPLIB_PhaseSheddingLimits_Record;
typedef struct _ATOM_PPLIB_PhaseSheddingLimits_Table
{
UCHAR ucNumEntries; // Number of entries.
ATOM_PPLIB_PhaseSheddingLimits_Record entries[1]; // Dynamically allocate entries.
}ATOM_PPLIB_PhaseSheddingLimits_Table;
typedef struct _VCEClockInfo{
USHORT usEVClkLow;
UCHAR ucEVClkHigh;
USHORT usECClkLow;
UCHAR ucECClkHigh;
}VCEClockInfo;
typedef struct _VCEClockInfoArray{
UCHAR ucNumEntries;
VCEClockInfo entries[1];
}VCEClockInfoArray;
typedef struct _ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record
{
USHORT usVoltage;
UCHAR ucVCEClockInfoIndex;
}ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record;
typedef struct _ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table
{
UCHAR numEntries;
ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record entries[1];
}ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table;
typedef struct _ATOM_PPLIB_VCE_State_Record
{
UCHAR ucVCEClockInfoIndex;
UCHAR ucClockInfoIndex; //highest 2 bits indicate memory p-states, lower 6 bits index into ClockInfoArray
}ATOM_PPLIB_VCE_State_Record;
typedef struct _ATOM_PPLIB_VCE_State_Table
{
UCHAR numEntries;
ATOM_PPLIB_VCE_State_Record entries[1];
}ATOM_PPLIB_VCE_State_Table;
typedef struct _ATOM_PPLIB_VCE_Table
{
UCHAR revid;
// VCEClockInfoArray array;
// ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table limits;
// ATOM_PPLIB_VCE_State_Table states;
}ATOM_PPLIB_VCE_Table;
typedef struct _UVDClockInfo{
USHORT usVClkLow;
UCHAR ucVClkHigh;
USHORT usDClkLow;
UCHAR ucDClkHigh;
}UVDClockInfo;
typedef struct _UVDClockInfoArray{
UCHAR ucNumEntries;
UVDClockInfo entries[1];
}UVDClockInfoArray;
typedef struct _ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record
{
USHORT usVoltage;
UCHAR ucUVDClockInfoIndex;
}ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record;
typedef struct _ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table
{
UCHAR numEntries;
ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record entries[1];
}ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table;
typedef struct _ATOM_PPLIB_UVD_State_Record
{
UCHAR ucUVDClockInfoIndex;
UCHAR ucClockInfoIndex; //highest 2 bits indicate memory p-states, lower 6 bits index into ClockInfoArray
}ATOM_PPLIB_UVD_State_Record;
typedef struct _ATOM_PPLIB_UVD_State_Table
{
UCHAR numEntries;
ATOM_PPLIB_UVD_State_Record entries[1];
}ATOM_PPLIB_UVD_State_Table;
typedef struct _ATOM_PPLIB_UVD_Table
{
UCHAR revid;
// UVDClockInfoArray array;
// ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table limits;
// ATOM_PPLIB_UVD_State_Table states;
}ATOM_PPLIB_UVD_Table;
/**************************************************************************/
// The following definitions are for compatibility issues between different SW components.
#define ATOM_MASTER_DATA_TABLE_REVISION 0x01
#define Object_Info Object_Header
#define AdjustARB_SEQ MC_InitParameter
#define VRAM_GPIO_DetectionInfo VoltageObjectInfo
#define ASIC_VDDCI_Info ASIC_ProfilingInfo
#define ASIC_MVDDQ_Info MemoryTrainingInfo
#define SS_Info PPLL_SS_Info
#define ASIC_MVDDC_Info ASIC_InternalSS_Info
#define DispDevicePriorityInfo SaveRestoreInfo
#define DispOutInfo TV_VideoMode
#define ATOM_ENCODER_OBJECT_TABLE ATOM_OBJECT_TABLE
#define ATOM_CONNECTOR_OBJECT_TABLE ATOM_OBJECT_TABLE
//New device naming; remove them when both DAL and VBIOS are ready
#define DFP2I_OUTPUT_CONTROL_PARAMETERS CRT1_OUTPUT_CONTROL_PARAMETERS
#define DFP2I_OUTPUT_CONTROL_PS_ALLOCATION DFP2I_OUTPUT_CONTROL_PARAMETERS
#define DFP1X_OUTPUT_CONTROL_PARAMETERS CRT1_OUTPUT_CONTROL_PARAMETERS
#define DFP1X_OUTPUT_CONTROL_PS_ALLOCATION DFP1X_OUTPUT_CONTROL_PARAMETERS
#define DFP1I_OUTPUT_CONTROL_PARAMETERS DFP1_OUTPUT_CONTROL_PARAMETERS
#define DFP1I_OUTPUT_CONTROL_PS_ALLOCATION DFP1_OUTPUT_CONTROL_PS_ALLOCATION
#define ATOM_DEVICE_DFP1I_SUPPORT ATOM_DEVICE_DFP1_SUPPORT
#define ATOM_DEVICE_DFP1X_SUPPORT ATOM_DEVICE_DFP2_SUPPORT
#define ATOM_DEVICE_DFP1I_INDEX ATOM_DEVICE_DFP1_INDEX
#define ATOM_DEVICE_DFP1X_INDEX ATOM_DEVICE_DFP2_INDEX
#define ATOM_DEVICE_DFP2I_INDEX 0x00000009
#define ATOM_DEVICE_DFP2I_SUPPORT (0x1L << ATOM_DEVICE_DFP2I_INDEX)
#define ATOM_S0_DFP1I ATOM_S0_DFP1
#define ATOM_S0_DFP1X ATOM_S0_DFP2
#define ATOM_S0_DFP2I 0x00200000L
#define ATOM_S0_DFP2Ib2 0x20
#define ATOM_S2_DFP1I_DPMS_STATE ATOM_S2_DFP1_DPMS_STATE
#define ATOM_S2_DFP1X_DPMS_STATE ATOM_S2_DFP2_DPMS_STATE
#define ATOM_S2_DFP2I_DPMS_STATE 0x02000000L
#define ATOM_S2_DFP2I_DPMS_STATEb3 0x02
#define ATOM_S3_DFP2I_ACTIVEb1 0x02
#define ATOM_S3_DFP1I_ACTIVE ATOM_S3_DFP1_ACTIVE
#define ATOM_S3_DFP1X_ACTIVE ATOM_S3_DFP2_ACTIVE
#define ATOM_S3_DFP2I_ACTIVE 0x00000200L
#define ATOM_S3_DFP1I_CRTC_ACTIVE ATOM_S3_DFP1_CRTC_ACTIVE
#define ATOM_S3_DFP1X_CRTC_ACTIVE ATOM_S3_DFP2_CRTC_ACTIVE
#define ATOM_S3_DFP2I_CRTC_ACTIVE 0x02000000L
#define ATOM_S3_DFP2I_CRTC_ACTIVEb3 0x02
#define ATOM_S5_DOS_REQ_DFP2Ib1 0x02
#define ATOM_S5_DOS_REQ_DFP2I 0x0200
#define ATOM_S6_ACC_REQ_DFP1I ATOM_S6_ACC_REQ_DFP1
#define ATOM_S6_ACC_REQ_DFP1X ATOM_S6_ACC_REQ_DFP2
#define ATOM_S6_ACC_REQ_DFP2Ib3 0x02
#define ATOM_S6_ACC_REQ_DFP2I 0x02000000L
#define TMDS1XEncoderControl DVOEncoderControl
#define DFP1XOutputControl DVOOutputControl
#define ExternalDFPOutputControl DFP1XOutputControl
#define EnableExternalTMDS_Encoder TMDS1XEncoderControl
#define DFP1IOutputControl TMDSAOutputControl
#define DFP2IOutputControl LVTMAOutputControl
#define DAC1_ENCODER_CONTROL_PARAMETERS DAC_ENCODER_CONTROL_PARAMETERS
#define DAC1_ENCODER_CONTROL_PS_ALLOCATION DAC_ENCODER_CONTROL_PS_ALLOCATION
#define DAC2_ENCODER_CONTROL_PARAMETERS DAC_ENCODER_CONTROL_PARAMETERS
#define DAC2_ENCODER_CONTROL_PS_ALLOCATION DAC_ENCODER_CONTROL_PS_ALLOCATION
#define ucDac1Standard ucDacStandard
#define ucDac2Standard ucDacStandard
#define TMDS1EncoderControl TMDSAEncoderControl
#define TMDS2EncoderControl LVTMAEncoderControl
#define DFP1OutputControl TMDSAOutputControl
#define DFP2OutputControl LVTMAOutputControl
#define CRT1OutputControl DAC1OutputControl
#define CRT2OutputControl DAC2OutputControl
//These two lines will be removed for sure in a few days, will follow up with Michael V.
#define EnableLVDS_SS EnableSpreadSpectrumOnPPLL
#define ENABLE_LVDS_SS_PARAMETERS_V3 ENABLE_SPREAD_SPECTRUM_ON_PPLL
//#define ATOM_S2_CRT1_DPMS_STATE 0x00010000L
//#define ATOM_S2_LCD1_DPMS_STATE ATOM_S2_CRT1_DPMS_STATE
//#define ATOM_S2_TV1_DPMS_STATE ATOM_S2_CRT1_DPMS_STATE
//#define ATOM_S2_DFP1_DPMS_STATE ATOM_S2_CRT1_DPMS_STATE
//#define ATOM_S2_CRT2_DPMS_STATE ATOM_S2_CRT1_DPMS_STATE
#define ATOM_S6_ACC_REQ_TV2 0x00400000L
#define ATOM_DEVICE_TV2_INDEX 0x00000006
#define ATOM_DEVICE_TV2_SUPPORT (0x1L << ATOM_DEVICE_TV2_INDEX)
#define ATOM_S0_TV2 0x00100000L
#define ATOM_S3_TV2_ACTIVE ATOM_S3_DFP6_ACTIVE
#define ATOM_S3_TV2_CRTC_ACTIVE ATOM_S3_DFP6_CRTC_ACTIVE
//
#define ATOM_S2_CRT1_DPMS_STATE 0x00010000L
#define ATOM_S2_LCD1_DPMS_STATE 0x00020000L
#define ATOM_S2_TV1_DPMS_STATE 0x00040000L
#define ATOM_S2_DFP1_DPMS_STATE 0x00080000L
#define ATOM_S2_CRT2_DPMS_STATE 0x00100000L
#define ATOM_S2_LCD2_DPMS_STATE 0x00200000L
#define ATOM_S2_TV2_DPMS_STATE 0x00400000L
#define ATOM_S2_DFP2_DPMS_STATE 0x00800000L
#define ATOM_S2_CV_DPMS_STATE 0x01000000L
#define ATOM_S2_DFP3_DPMS_STATE 0x02000000L
#define ATOM_S2_DFP4_DPMS_STATE 0x04000000L
#define ATOM_S2_DFP5_DPMS_STATE 0x08000000L
#define ATOM_S2_CRT1_DPMS_STATEb2 0x01
#define ATOM_S2_LCD1_DPMS_STATEb2 0x02
#define ATOM_S2_TV1_DPMS_STATEb2 0x04
#define ATOM_S2_DFP1_DPMS_STATEb2 0x08
#define ATOM_S2_CRT2_DPMS_STATEb2 0x10
#define ATOM_S2_LCD2_DPMS_STATEb2 0x20
#define ATOM_S2_TV2_DPMS_STATEb2 0x40
#define ATOM_S2_DFP2_DPMS_STATEb2 0x80
#define ATOM_S2_CV_DPMS_STATEb3 0x01
#define ATOM_S2_DFP3_DPMS_STATEb3 0x02
#define ATOM_S2_DFP4_DPMS_STATEb3 0x04
#define ATOM_S2_DFP5_DPMS_STATEb3 0x08
#define ATOM_S3_ASIC_GUI_ENGINE_HUNGb3 0x20
#define ATOM_S3_ALLOW_FAST_PWR_SWITCHb3 0x40
#define ATOM_S3_RQST_GPU_USE_MIN_PWRb3 0x80
/*********************************************************************************/
#pragma pack() // BIOS data must use byte alignment
//
// AMD ACPI Table
//
#pragma pack(1)
typedef struct {
ULONG Signature;
ULONG TableLength; //Length
UCHAR Revision;
UCHAR Checksum;
UCHAR OemId[6];
UCHAR OemTableId[8]; //UINT64 OemTableId;
ULONG OemRevision;
ULONG CreatorId;
ULONG CreatorRevision;
} AMD_ACPI_DESCRIPTION_HEADER;
/*
//EFI_ACPI_DESCRIPTION_HEADER from AcpiCommon.h
typedef struct {
UINT32 Signature; //0x0
UINT32 Length; //0x4
UINT8 Revision; //0x8
UINT8 Checksum; //0x9
UINT8 OemId[6]; //0xA
UINT64 OemTableId; //0x10
UINT32 OemRevision; //0x18
UINT32 CreatorId; //0x1C
UINT32 CreatorRevision; //0x20
}EFI_ACPI_DESCRIPTION_HEADER;
*/
typedef struct {
AMD_ACPI_DESCRIPTION_HEADER SHeader;
UCHAR TableUUID[16]; //0x24
ULONG VBIOSImageOffset; //0x34. Offset to the first GOP_VBIOS_CONTENT block from the beginning of the structure.
ULONG Lib1ImageOffset; //0x38. Offset to the first GOP_LIB1_CONTENT block from the beginning of the structure.
ULONG Reserved[4]; //0x3C
}UEFI_ACPI_VFCT;
typedef struct {
ULONG PCIBus; //0x4C
ULONG PCIDevice; //0x50
ULONG PCIFunction; //0x54
USHORT VendorID; //0x58
USHORT DeviceID; //0x5A
USHORT SSVID; //0x5C
USHORT SSID; //0x5E
ULONG Revision; //0x60
ULONG ImageLength; //0x64
}VFCT_IMAGE_HEADER;
typedef struct {
VFCT_IMAGE_HEADER VbiosHeader;
UCHAR VbiosContent[1];
}GOP_VBIOS_CONTENT;
typedef struct {
VFCT_IMAGE_HEADER Lib1Header;
UCHAR Lib1Content[1];
}GOP_LIB1_CONTENT;
#pragma pack()
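// Illustration only: a sketch of locating the first GOP_VBIOS_CONTENT block
// from an in-memory UEFI_ACPI_VFCT table, using the offsets documented above.
// A real consumer would validate the signature and checksum first; the helper
// name is hypothetical.
static __inline const GOP_VBIOS_CONTENT *vfct_first_vbios(
    const UEFI_ACPI_VFCT *pVfct)
{
    // VBIOSImageOffset is from the beginning of the structure
    return (const GOP_VBIOS_CONTENT *)
        ((const UCHAR *)pVfct + pVfct->VBIOSImageOffset);
}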
#endif /* _ATOMBIOS_H */
Index: head/sys/dev/drm2/radeon/r300_reg.h
===================================================================
--- head/sys/dev/drm2/radeon/r300_reg.h (revision 300049)
+++ head/sys/dev/drm2/radeon/r300_reg.h (revision 300050)
@@ -1,1792 +1,1792 @@
/*
* Copyright 2005 Nicolai Haehnle et al.
* Copyright 2008 Advanced Micro Devices, Inc.
* Copyright 2009 Jerome Glisse.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Nicolai Haehnle
* Jerome Glisse
*/
#ifndef _R300_REG_H_
#define _R300_REG_H_
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#define R300_SURF_TILE_MACRO (1<<16)
#define R300_SURF_TILE_MICRO (2<<16)
#define R300_SURF_TILE_BOTH (3<<16)
#define R300_MC_INIT_MISC_LAT_TIMER 0x180
# define R300_MC_MISC__MC_CPR_INIT_LAT_SHIFT 0
# define R300_MC_MISC__MC_VF_INIT_LAT_SHIFT 4
# define R300_MC_MISC__MC_DISP0R_INIT_LAT_SHIFT 8
# define R300_MC_MISC__MC_DISP1R_INIT_LAT_SHIFT 12
# define R300_MC_MISC__MC_FIXED_INIT_LAT_SHIFT 16
# define R300_MC_MISC__MC_E2R_INIT_LAT_SHIFT 20
# define R300_MC_MISC__MC_SAME_PAGE_PRIO_SHIFT 24
# define R300_MC_MISC__MC_GLOBW_INIT_LAT_SHIFT 28
#define R300_MC_INIT_GFX_LAT_TIMER 0x154
# define R300_MC_MISC__MC_G3D0R_INIT_LAT_SHIFT 0
# define R300_MC_MISC__MC_G3D1R_INIT_LAT_SHIFT 4
# define R300_MC_MISC__MC_G3D2R_INIT_LAT_SHIFT 8
# define R300_MC_MISC__MC_G3D3R_INIT_LAT_SHIFT 12
# define R300_MC_MISC__MC_TX0R_INIT_LAT_SHIFT 16
# define R300_MC_MISC__MC_TX1R_INIT_LAT_SHIFT 20
# define R300_MC_MISC__MC_GLOBR_INIT_LAT_SHIFT 24
# define R300_MC_MISC__MC_GLOBW_FULL_LAT_SHIFT 28
/*
* This file contains registers and constants for the R300. They have been
* found mostly by examining command buffers captured using glxtest, as well
* as by extrapolating some known registers and constants from the R200.
* I am fairly certain that they are correct unless stated otherwise
* in comments.
*/
#define R300_SE_VPORT_XSCALE 0x1D98
#define R300_SE_VPORT_XOFFSET 0x1D9C
#define R300_SE_VPORT_YSCALE 0x1DA0
#define R300_SE_VPORT_YOFFSET 0x1DA4
#define R300_SE_VPORT_ZSCALE 0x1DA8
#define R300_SE_VPORT_ZOFFSET 0x1DAC
/*
* Vertex Array Processing (VAP) Control
* Stolen from r200 code from Christoph Brill (It's a guess!)
*/
#define R300_VAP_CNTL 0x2080
/* This register is written directly and also starts data section
* in many 3d CP_PACKET3's
*/
#define R300_VAP_VF_CNTL 0x2084
# define R300_VAP_VF_CNTL__PRIM_TYPE__SHIFT 0
# define R300_VAP_VF_CNTL__PRIM_NONE (0<<0)
# define R300_VAP_VF_CNTL__PRIM_POINTS (1<<0)
# define R300_VAP_VF_CNTL__PRIM_LINES (2<<0)
# define R300_VAP_VF_CNTL__PRIM_LINE_STRIP (3<<0)
# define R300_VAP_VF_CNTL__PRIM_TRIANGLES (4<<0)
# define R300_VAP_VF_CNTL__PRIM_TRIANGLE_FAN (5<<0)
# define R300_VAP_VF_CNTL__PRIM_TRIANGLE_STRIP (6<<0)
# define R300_VAP_VF_CNTL__PRIM_LINE_LOOP (12<<0)
# define R300_VAP_VF_CNTL__PRIM_QUADS (13<<0)
# define R300_VAP_VF_CNTL__PRIM_QUAD_STRIP (14<<0)
# define R300_VAP_VF_CNTL__PRIM_POLYGON (15<<0)
# define R300_VAP_VF_CNTL__PRIM_WALK__SHIFT 4
/* State based - direct writes to registers trigger vertex
generation */
# define R300_VAP_VF_CNTL__PRIM_WALK_STATE_BASED (0<<4)
# define R300_VAP_VF_CNTL__PRIM_WALK_INDICES (1<<4)
# define R300_VAP_VF_CNTL__PRIM_WALK_VERTEX_LIST (2<<4)
# define R300_VAP_VF_CNTL__PRIM_WALK_VERTEX_EMBEDDED (3<<4)
/* I don't think I saw these three used.. */
# define R300_VAP_VF_CNTL__COLOR_ORDER__SHIFT 6
# define R300_VAP_VF_CNTL__TCL_OUTPUT_CTL_ENA__SHIFT 9
# define R300_VAP_VF_CNTL__PROG_STREAM_ENA__SHIFT 10
/* index size - when not set the indices are assumed to be 16 bit */
# define R300_VAP_VF_CNTL__INDEX_SIZE_32bit (1<<11)
/* number of vertices */
# define R300_VAP_VF_CNTL__NUM_VERTICES__SHIFT 16
/* BEGIN: Wild guesses */
#define R300_VAP_OUTPUT_VTX_FMT_0 0x2090
# define R300_VAP_OUTPUT_VTX_FMT_0__POS_PRESENT (1<<0)
# define R300_VAP_OUTPUT_VTX_FMT_0__COLOR_PRESENT (1<<1)
# define R300_VAP_OUTPUT_VTX_FMT_0__COLOR_1_PRESENT (1<<2) /* GUESS */
# define R300_VAP_OUTPUT_VTX_FMT_0__COLOR_2_PRESENT (1<<3) /* GUESS */
# define R300_VAP_OUTPUT_VTX_FMT_0__COLOR_3_PRESENT (1<<4) /* GUESS */
# define R300_VAP_OUTPUT_VTX_FMT_0__PT_SIZE_PRESENT (1<<16) /* GUESS */
#define R300_VAP_OUTPUT_VTX_FMT_1 0x2094
/* each of the following is 3 bits wide, specifies number
of components */
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_0_COMP_CNT_SHIFT 0
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_1_COMP_CNT_SHIFT 3
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_2_COMP_CNT_SHIFT 6
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_3_COMP_CNT_SHIFT 9
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_4_COMP_CNT_SHIFT 12
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_5_COMP_CNT_SHIFT 15
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_6_COMP_CNT_SHIFT 18
# define R300_VAP_OUTPUT_VTX_FMT_1__TEX_7_COMP_CNT_SHIFT 21
/* END: Wild guesses */
#define R300_SE_VTE_CNTL 0x20b0
# define R300_VPORT_X_SCALE_ENA 0x00000001
# define R300_VPORT_X_OFFSET_ENA 0x00000002
# define R300_VPORT_Y_SCALE_ENA 0x00000004
# define R300_VPORT_Y_OFFSET_ENA 0x00000008
# define R300_VPORT_Z_SCALE_ENA 0x00000010
# define R300_VPORT_Z_OFFSET_ENA 0x00000020
# define R300_VTX_XY_FMT 0x00000100
# define R300_VTX_Z_FMT 0x00000200
# define R300_VTX_W0_FMT 0x00000400
# define R300_VTX_W0_NORMALIZE 0x00000800
# define R300_VTX_ST_DENORMALIZED 0x00001000
/* BEGIN: Vertex data assembly - lots of uncertainties */
/* gap */
#define R300_VAP_CNTL_STATUS 0x2140
# define R300_VC_NO_SWAP (0 << 0)
# define R300_VC_16BIT_SWAP (1 << 0)
# define R300_VC_32BIT_SWAP (2 << 0)
# define R300_VAP_TCL_BYPASS (1 << 8)
/* gap */
/* Where do we get our vertex data?
*
 * Vertex data comes either from immediate mode registers or from
* vertex arrays.
* There appears to be no mixed mode (though we can force the pitch of
* vertex arrays to 0, effectively reusing the same element over and over
* again).
*
* Immediate mode is controlled by the INPUT_CNTL registers. I am not sure
* if these registers influence vertex array processing.
*
* Vertex arrays are controlled via the 3D_LOAD_VBPNTR packet3.
*
* In both cases, vertex attributes are then passed through INPUT_ROUTE.
*
* Beginning with INPUT_ROUTE_0_0 is a list of WORDs that route vertex data
* into the vertex processor's input registers.
* The first word routes the first input, the second word the second, etc.
* The corresponding input is routed into the register with the given index.
* The list is ended by a word with INPUT_ROUTE_END set.
*
* Always set COMPONENTS_4 in immediate mode.
*/
#define R300_VAP_INPUT_ROUTE_0_0 0x2150
# define R300_INPUT_ROUTE_COMPONENTS_1 (0 << 0)
# define R300_INPUT_ROUTE_COMPONENTS_2 (1 << 0)
# define R300_INPUT_ROUTE_COMPONENTS_3 (2 << 0)
# define R300_INPUT_ROUTE_COMPONENTS_4 (3 << 0)
# define R300_INPUT_ROUTE_COMPONENTS_RGBA (4 << 0) /* GUESS */
# define R300_VAP_INPUT_ROUTE_IDX_SHIFT 8
# define R300_VAP_INPUT_ROUTE_IDX_MASK (31 << 8) /* GUESS */
# define R300_VAP_INPUT_ROUTE_END (1 << 13)
# define R300_INPUT_ROUTE_IMMEDIATE_MODE (0 << 14) /* GUESS */
# define R300_INPUT_ROUTE_FLOAT (1 << 14) /* GUESS */
# define R300_INPUT_ROUTE_UNSIGNED_BYTE (2 << 14) /* GUESS */
# define R300_INPUT_ROUTE_FLOAT_COLOR (3 << 14) /* GUESS */
#define R300_VAP_INPUT_ROUTE_0_1 0x2154
#define R300_VAP_INPUT_ROUTE_0_2 0x2158
#define R300_VAP_INPUT_ROUTE_0_3 0x215C
#define R300_VAP_INPUT_ROUTE_0_4 0x2160
#define R300_VAP_INPUT_ROUTE_0_5 0x2164
#define R300_VAP_INPUT_ROUTE_0_6 0x2168
#define R300_VAP_INPUT_ROUTE_0_7 0x216C
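/* Illustration only: a sketch of packing one INPUT_ROUTE_0 word that routes a
 * 4-component float input to vertex processor register 'idx', per the list
 * protocol described above; 'last' tags the final word with ROUTE_END. Note
 * the FLOAT/COMPONENTS constants above are themselves marked as guesses, and
 * the helper name is hypothetical.
 */
static __inline unsigned int r300_input_route0(unsigned int idx, int last)
{
	unsigned int w = R300_INPUT_ROUTE_COMPONENTS_4 |
	    (idx << R300_VAP_INPUT_ROUTE_IDX_SHIFT) |
	    R300_INPUT_ROUTE_FLOAT;
	if (last)
		w |= R300_VAP_INPUT_ROUTE_END;	/* terminates the route list */
	return w;
}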
/* gap */
/* Notes:
* - always set up to produce at least two attributes:
* if vertex program uses only position, fglrx will set normal, too
* - INPUT_CNTL_0_COLOR and INPUT_CNTL_COLOR bits are always equal.
*/
#define R300_VAP_INPUT_CNTL_0 0x2180
# define R300_INPUT_CNTL_0_COLOR 0x00000001
#define R300_VAP_INPUT_CNTL_1 0x2184
# define R300_INPUT_CNTL_POS 0x00000001
# define R300_INPUT_CNTL_NORMAL 0x00000002
# define R300_INPUT_CNTL_COLOR 0x00000004
# define R300_INPUT_CNTL_TC0 0x00000400
# define R300_INPUT_CNTL_TC1 0x00000800
# define R300_INPUT_CNTL_TC2 0x00001000 /* GUESS */
# define R300_INPUT_CNTL_TC3 0x00002000 /* GUESS */
# define R300_INPUT_CNTL_TC4 0x00004000 /* GUESS */
# define R300_INPUT_CNTL_TC5 0x00008000 /* GUESS */
# define R300_INPUT_CNTL_TC6 0x00010000 /* GUESS */
# define R300_INPUT_CNTL_TC7 0x00020000 /* GUESS */
/* gap */
/* Words parallel to INPUT_ROUTE_0; All words that are active in INPUT_ROUTE_0
* are set to a swizzling bit pattern, other words are 0.
*
* In immediate mode, the pattern is always set to xyzw. In vertex array
* mode, the swizzling pattern is e.g. used to set zw components in texture
 * coordinates with only two components.
*/
#define R300_VAP_INPUT_ROUTE_1_0 0x21E0
# define R300_INPUT_ROUTE_SELECT_X 0
# define R300_INPUT_ROUTE_SELECT_Y 1
# define R300_INPUT_ROUTE_SELECT_Z 2
# define R300_INPUT_ROUTE_SELECT_W 3
# define R300_INPUT_ROUTE_SELECT_ZERO 4
# define R300_INPUT_ROUTE_SELECT_ONE 5
# define R300_INPUT_ROUTE_SELECT_MASK 7
# define R300_INPUT_ROUTE_X_SHIFT 0
# define R300_INPUT_ROUTE_Y_SHIFT 3
# define R300_INPUT_ROUTE_Z_SHIFT 6
# define R300_INPUT_ROUTE_W_SHIFT 9
# define R300_INPUT_ROUTE_ENABLE (15 << 12)
#define R300_VAP_INPUT_ROUTE_1_1 0x21E4
#define R300_VAP_INPUT_ROUTE_1_2 0x21E8
#define R300_VAP_INPUT_ROUTE_1_3 0x21EC
#define R300_VAP_INPUT_ROUTE_1_4 0x21F0
#define R300_VAP_INPUT_ROUTE_1_5 0x21F4
#define R300_VAP_INPUT_ROUTE_1_6 0x21F8
#define R300_VAP_INPUT_ROUTE_1_7 0x21FC
/* END: Vertex data assembly */
/* gap */
/* BEGIN: Upload vertex program and data */
/*
* The programmable vertex shader unit has a memory bank of unknown size
* that can be written to in 16 byte units by writing the address into
* UPLOAD_ADDRESS, followed by data in UPLOAD_DATA (multiples of 4 DWORDs).
*
* Pointers into the memory bank are always in multiples of 16 bytes.
*
* The memory bank is divided into areas with fixed meaning.
*
* Starting at address UPLOAD_PROGRAM: Vertex program instructions.
* Native limits reported by drivers from ATI suggest size 256 (i.e. 4KB),
* whereas the difference between known addresses suggests size 512.
*
* Starting at address UPLOAD_PARAMETERS: Vertex program parameters.
* Native reported limits and the VPI layout suggest size 256, whereas
* difference between known addresses suggests size 512.
*
* At address UPLOAD_POINTSIZE is a vector (0, 0, ps, 0), where ps is the
* floating point pointsize. The exact purpose of this state is uncertain,
* as there is also the R300_RE_POINTSIZE register.
*
* Multiple vertex programs and parameter sets can be loaded at once,
* which could explain the size discrepancy.
*/
#define R300_VAP_PVS_UPLOAD_ADDRESS 0x2200
# define R300_PVS_UPLOAD_PROGRAM 0x00000000
# define R300_PVS_UPLOAD_PARAMETERS 0x00000200
# define R300_PVS_UPLOAD_POINTSIZE 0x00000406
/* gap */
#define R300_VAP_PVS_UPLOAD_DATA 0x2208
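/* Illustration only: the upload protocol described above, sketched with a
 * hypothetical MMIO helper write32(reg, val). 'addr' is in 16-byte units;
 * data is streamed as multiples of 4 DWORDs through UPLOAD_DATA.
 */
static __inline void r300_pvs_upload(unsigned int addr,
    const unsigned int *data, unsigned int ndwords,
    void (*write32)(unsigned int reg, unsigned int val))
{
	unsigned int i;
	write32(R300_VAP_PVS_UPLOAD_ADDRESS, addr);
	for (i = 0; i < ndwords; i++)
		write32(R300_VAP_PVS_UPLOAD_DATA, data[i]);
}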
/* END: Upload vertex program and data */
/* gap */
/* I do not know the purpose of this register. However, I do know that
* it is set to 221C_CLEAR for clear operations and to 221C_NORMAL
* for normal rendering.
*/
#define R300_VAP_UNKNOWN_221C 0x221C
# define R300_221C_NORMAL 0x00000000
# define R300_221C_CLEAR 0x0001C000
/* These seem to be per-pixel and per-vertex X and Y clipping planes. The first
* plane is per-pixel and the second plane is per-vertex.
*
* This was determined by experimentation alone but I believe it is correct.
*
* These registers are called X_QUAD0_1_FL to X_QUAD0_4_FL by glxtest.
*/
#define R300_VAP_CLIP_X_0 0x2220
#define R300_VAP_CLIP_X_1 0x2224
#define R300_VAP_CLIP_Y_0 0x2228
#define R300_VAP_CLIP_Y_1 0x2230
/* gap */
/* Sometimes, END_OF_PKT and 0x2284=0 are the only commands sent between
* rendering commands and overwriting vertex program parameters.
* Therefore, I suspect writing zero to 0x2284 synchronizes the engine and
* avoids bugs caused by still running shaders reading bad data from memory.
*/
#define R300_VAP_PVS_STATE_FLUSH_REG 0x2284
/* Absolutely no clue what this register is about. */
#define R300_VAP_UNKNOWN_2288 0x2288
# define R300_2288_R300 0x00750000 /* -- nh */
# define R300_2288_RV350 0x0000FFFF /* -- Vladimir */
/* gap */
/* Addresses are relative to the vertex program instruction area of the
* memory bank. PROGRAM_END points to the last instruction of the active
* program
*
* The meaning of the two UNKNOWN fields is obviously not known. However,
* experiments so far have shown that both *must* point to an instruction
* inside the vertex program, otherwise the GPU locks up.
*
* fglrx usually sets CNTL_3_UNKNOWN to the end of the program and
 * R300_PVS_CNTL_1_POS_END_SHIFT points to the instruction where the last
 * write to position takes place.
 *
 * Most likely this is used to skip the rest of the program in cases
 * where a group of verts isn't visible. For some reason this "section"
 * sometimes accepts other instructions that have no relationship with
 * position calculations.
*/
#define R300_VAP_PVS_CNTL_1 0x22D0
# define R300_PVS_CNTL_1_PROGRAM_START_SHIFT 0
# define R300_PVS_CNTL_1_POS_END_SHIFT 10
# define R300_PVS_CNTL_1_PROGRAM_END_SHIFT 20
-/* Addresses are relative the the vertex program parameters area. */
+/* Addresses are relative to the vertex program parameters area. */
#define R300_VAP_PVS_CNTL_2 0x22D4
# define R300_PVS_CNTL_2_PARAM_OFFSET_SHIFT 0
# define R300_PVS_CNTL_2_PARAM_COUNT_SHIFT 16
#define R300_VAP_PVS_CNTL_3 0x22D8
# define R300_PVS_CNTL_3_PROGRAM_UNKNOWN_SHIFT 10
# define R300_PVS_CNTL_3_PROGRAM_UNKNOWN2_SHIFT 0
/* The entire range from 0x2300 to 0x24AC inclusive seems to be used for
* immediate vertices
*/
#define R300_VAP_VTX_COLOR_R 0x2464
#define R300_VAP_VTX_COLOR_G 0x2468
#define R300_VAP_VTX_COLOR_B 0x246C
#define R300_VAP_VTX_POS_0_X_1 0x2490 /* used for glVertex2*() */
#define R300_VAP_VTX_POS_0_Y_1 0x2494
#define R300_VAP_VTX_COLOR_PKD 0x249C /* RGBA */
#define R300_VAP_VTX_POS_0_X_2 0x24A0 /* used for glVertex3*() */
#define R300_VAP_VTX_POS_0_Y_2 0x24A4
#define R300_VAP_VTX_POS_0_Z_2 0x24A8
/* write 0 to indicate end of packet? */
#define R300_VAP_VTX_END_OF_PKT 0x24AC
/* gap */
/* These are values from r300_reg/r300_reg.h - they are known to be correct
* and are here so we can use one register file instead of several
* - Vladimir
*/
#define R300_GB_VAP_RASTER_VTX_FMT_0 0x4000
# define R300_GB_VAP_RASTER_VTX_FMT_0__POS_PRESENT (1<<0)
# define R300_GB_VAP_RASTER_VTX_FMT_0__COLOR_0_PRESENT (1<<1)
# define R300_GB_VAP_RASTER_VTX_FMT_0__COLOR_1_PRESENT (1<<2)
# define R300_GB_VAP_RASTER_VTX_FMT_0__COLOR_2_PRESENT (1<<3)
# define R300_GB_VAP_RASTER_VTX_FMT_0__COLOR_3_PRESENT (1<<4)
# define R300_GB_VAP_RASTER_VTX_FMT_0__COLOR_SPACE (0xf<<5)
# define R300_GB_VAP_RASTER_VTX_FMT_0__PT_SIZE_PRESENT (0x1<<16)
#define R300_GB_VAP_RASTER_VTX_FMT_1 0x4004
/* each of the following is 3 bits wide, specifies number
of components */
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_0_COMP_CNT_SHIFT 0
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_1_COMP_CNT_SHIFT 3
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_2_COMP_CNT_SHIFT 6
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_3_COMP_CNT_SHIFT 9
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_4_COMP_CNT_SHIFT 12
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_5_COMP_CNT_SHIFT 15
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_6_COMP_CNT_SHIFT 18
# define R300_GB_VAP_RASTER_VTX_FMT_1__TEX_7_COMP_CNT_SHIFT 21
/* UNK30 seems to enable point-to-quad transformation on textures
 * (or something closely related to that).
 * This bit is rather fatal at the time being due to lacking support
 * on the pixel shader side
*/
#define R300_GB_ENABLE 0x4008
# define R300_GB_POINT_STUFF_ENABLE (1<<0)
# define R300_GB_LINE_STUFF_ENABLE (1<<1)
# define R300_GB_TRIANGLE_STUFF_ENABLE (1<<2)
# define R300_GB_STENCIL_AUTO_ENABLE (1<<4)
# define R300_GB_UNK31 (1<<31)
/* each of the following is 2 bits wide */
#define R300_GB_TEX_REPLICATE 0
#define R300_GB_TEX_ST 1
#define R300_GB_TEX_STR 2
# define R300_GB_TEX0_SOURCE_SHIFT 16
# define R300_GB_TEX1_SOURCE_SHIFT 18
# define R300_GB_TEX2_SOURCE_SHIFT 20
# define R300_GB_TEX3_SOURCE_SHIFT 22
# define R300_GB_TEX4_SOURCE_SHIFT 24
# define R300_GB_TEX5_SOURCE_SHIFT 26
# define R300_GB_TEX6_SOURCE_SHIFT 28
# define R300_GB_TEX7_SOURCE_SHIFT 30
/* MSPOS - positions for multisample antialiasing (?) */
#define R300_GB_MSPOS0 0x4010
/* shifts - each of the fields is 4 bits */
# define R300_GB_MSPOS0__MS_X0_SHIFT 0
# define R300_GB_MSPOS0__MS_Y0_SHIFT 4
# define R300_GB_MSPOS0__MS_X1_SHIFT 8
# define R300_GB_MSPOS0__MS_Y1_SHIFT 12
# define R300_GB_MSPOS0__MS_X2_SHIFT 16
# define R300_GB_MSPOS0__MS_Y2_SHIFT 20
# define R300_GB_MSPOS0__MSBD0_Y 24
# define R300_GB_MSPOS0__MSBD0_X 28
#define R300_GB_MSPOS1 0x4014
# define R300_GB_MSPOS1__MS_X3_SHIFT 0
# define R300_GB_MSPOS1__MS_Y3_SHIFT 4
# define R300_GB_MSPOS1__MS_X4_SHIFT 8
# define R300_GB_MSPOS1__MS_Y4_SHIFT 12
# define R300_GB_MSPOS1__MS_X5_SHIFT 16
# define R300_GB_MSPOS1__MS_Y5_SHIFT 20
# define R300_GB_MSPOS1__MSBD1 24
#define R300_GB_TILE_CONFIG 0x4018
# define R300_GB_TILE_ENABLE (1<<0)
# define R300_GB_TILE_PIPE_COUNT_RV300 0
# define R300_GB_TILE_PIPE_COUNT_R300 (3<<1)
# define R300_GB_TILE_PIPE_COUNT_R420 (7<<1)
# define R300_GB_TILE_PIPE_COUNT_RV410 (3<<1)
# define R300_GB_TILE_SIZE_8 0
# define R300_GB_TILE_SIZE_16 (1<<4)
# define R300_GB_TILE_SIZE_32 (2<<4)
# define R300_GB_SUPER_SIZE_1 (0<<6)
# define R300_GB_SUPER_SIZE_2 (1<<6)
# define R300_GB_SUPER_SIZE_4 (2<<6)
# define R300_GB_SUPER_SIZE_8 (3<<6)
# define R300_GB_SUPER_SIZE_16 (4<<6)
# define R300_GB_SUPER_SIZE_32 (5<<6)
# define R300_GB_SUPER_SIZE_64 (6<<6)
# define R300_GB_SUPER_SIZE_128 (7<<6)
# define R300_GB_SUPER_X_SHIFT 9 /* 3 bits wide */
# define R300_GB_SUPER_Y_SHIFT 12 /* 3 bits wide */
# define R300_GB_SUPER_TILE_A 0
# define R300_GB_SUPER_TILE_B (1<<15)
# define R300_GB_SUBPIXEL_1_12 0
# define R300_GB_SUBPIXEL_1_16 (1<<16)
#define R300_GB_FIFO_SIZE 0x4024
/* each of the following is 2 bits wide */
#define R300_GB_FIFO_SIZE_32 0
#define R300_GB_FIFO_SIZE_64 1
#define R300_GB_FIFO_SIZE_128 2
#define R300_GB_FIFO_SIZE_256 3
# define R300_SC_IFIFO_SIZE_SHIFT 0
# define R300_SC_TZFIFO_SIZE_SHIFT 2
# define R300_SC_BFIFO_SIZE_SHIFT 4
# define R300_US_OFIFO_SIZE_SHIFT 12
# define R300_US_WFIFO_SIZE_SHIFT 14
/* the following use the same constants as above, but the meaning is
 times 2 (i.e. instead of 32 words it means 64) */
# define R300_RS_TFIFO_SIZE_SHIFT 6
# define R300_RS_CFIFO_SIZE_SHIFT 8
# define R300_US_RAM_SIZE_SHIFT 10
/* watermarks, 3 bits wide */
# define R300_RS_HIGHWATER_COL_SHIFT 16
# define R300_RS_HIGHWATER_TEX_SHIFT 19
# define R300_OFIFO_HIGHWATER_SHIFT 22 /* two bits only */
# define R300_CUBE_FIFO_HIGHWATER_COL_SHIFT 24
#define R300_GB_SELECT 0x401C
# define R300_GB_FOG_SELECT_C0A 0
# define R300_GB_FOG_SELECT_C1A 1
# define R300_GB_FOG_SELECT_C2A 2
# define R300_GB_FOG_SELECT_C3A 3
# define R300_GB_FOG_SELECT_1_1_W 4
# define R300_GB_FOG_SELECT_Z 5
# define R300_GB_DEPTH_SELECT_Z 0
# define R300_GB_DEPTH_SELECT_1_1_W (1<<3)
# define R300_GB_W_SELECT_1_W 0
# define R300_GB_W_SELECT_1 (1<<4)
#define R300_GB_AA_CONFIG 0x4020
# define R300_AA_DISABLE 0x00
# define R300_AA_ENABLE 0x01
# define R300_AA_SUBSAMPLES_2 0
# define R300_AA_SUBSAMPLES_3 (1<<1)
# define R300_AA_SUBSAMPLES_4 (2<<1)
# define R300_AA_SUBSAMPLES_6 (3<<1)
/* gap */
/* Zero to flush caches. */
#define R300_TX_INVALTAGS 0x4100
#define R300_TX_FLUSH 0x0
/* The upper enable bits are guessed, based on fglrx reported limits. */
#define R300_TX_ENABLE 0x4104
# define R300_TX_ENABLE_0 (1 << 0)
# define R300_TX_ENABLE_1 (1 << 1)
# define R300_TX_ENABLE_2 (1 << 2)
# define R300_TX_ENABLE_3 (1 << 3)
# define R300_TX_ENABLE_4 (1 << 4)
# define R300_TX_ENABLE_5 (1 << 5)
# define R300_TX_ENABLE_6 (1 << 6)
# define R300_TX_ENABLE_7 (1 << 7)
# define R300_TX_ENABLE_8 (1 << 8)
# define R300_TX_ENABLE_9 (1 << 9)
# define R300_TX_ENABLE_10 (1 << 10)
# define R300_TX_ENABLE_11 (1 << 11)
# define R300_TX_ENABLE_12 (1 << 12)
# define R300_TX_ENABLE_13 (1 << 13)
# define R300_TX_ENABLE_14 (1 << 14)
# define R300_TX_ENABLE_15 (1 << 15)
/* The pointsize is given in multiples of 6. The pointsize can be
* enormous: Clear() renders a single point that fills the entire
* framebuffer.
*/
#define R300_RE_POINTSIZE 0x421C
# define R300_POINTSIZE_Y_SHIFT 0
# define R300_POINTSIZE_Y_MASK (0xFFFF << 0) /* GUESS */
# define R300_POINTSIZE_X_SHIFT 16
# define R300_POINTSIZE_X_MASK (0xFFFF << 16) /* GUESS */
# define R300_POINTSIZE_MAX (R300_POINTSIZE_Y_MASK / 6)
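/* Illustration only: encoding a square point of 'px' pixels, assuming the
 * register value is the size in pixels times 6 as noted above; the helper
 * name is hypothetical.
 */
static __inline unsigned int r300_pointsize(unsigned int px)
{
	unsigned int v = px * 6;	/* hardware units of 1/6 pixel */
	return (v << R300_POINTSIZE_X_SHIFT) | (v << R300_POINTSIZE_Y_SHIFT);
}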
/* The line width is given in multiples of 6.
* In default mode lines are classified as vertical lines.
* HO: horizontal
* VE: vertical or horizontal
* HO & VE: no classification
*/
#define R300_RE_LINE_CNT 0x4234
# define R300_LINESIZE_SHIFT 0
# define R300_LINESIZE_MASK (0xFFFF << 0) /* GUESS */
# define R300_LINESIZE_MAX (R300_LINESIZE_MASK / 6)
# define R300_LINE_CNT_HO (1 << 16)
# define R300_LINE_CNT_VE (1 << 17)
/* Some sort of scale or clamp value for texcoordless textures. */
#define R300_RE_UNK4238 0x4238
/* Something shade related */
#define R300_RE_SHADE 0x4274
#define R300_RE_SHADE_MODEL 0x4278
# define R300_RE_SHADE_MODEL_SMOOTH 0x3aaaa
# define R300_RE_SHADE_MODEL_FLAT 0x39595
/* Dangerous */
#define R300_RE_POLYGON_MODE 0x4288
# define R300_PM_ENABLED (1 << 0)
# define R300_PM_FRONT_POINT (0 << 0)
# define R300_PM_BACK_POINT (0 << 0)
# define R300_PM_FRONT_LINE (1 << 4)
# define R300_PM_FRONT_FILL (1 << 5)
# define R300_PM_BACK_LINE (1 << 7)
# define R300_PM_BACK_FILL (1 << 8)
/* Fog parameters */
#define R300_RE_FOG_SCALE 0x4294
#define R300_RE_FOG_START 0x4298
/* Not sure why there are duplicates of the factor and constant values.
* My best guess so far is that there are separate zbiases for test and write.
* Ordering might be wrong.
* Some of the tests indicate that fgl has a fallback implementation of zbias
* via pixel shaders.
*/
#define R300_RE_ZBIAS_CNTL 0x42A0 /* GUESS */
#define R300_RE_ZBIAS_T_FACTOR 0x42A4
#define R300_RE_ZBIAS_T_CONSTANT 0x42A8
#define R300_RE_ZBIAS_W_FACTOR 0x42AC
#define R300_RE_ZBIAS_W_CONSTANT 0x42B0
/* This register needs to be set to (1<<1) for RV350 to correctly
* perform depth test (see --vb-triangles in r300_demo)
* Don't know about other chips. - Vladimir
* This is set to 3 when GL_POLYGON_OFFSET_FILL is on.
* My guess is that there are two bits for each zbias primitive
* (FILL, LINE, POINT).
* One to enable depth test and one for depth write.
* Yet this doesn't explain why depth writes work ...
*/
#define R300_RE_OCCLUSION_CNTL 0x42B4
# define R300_OCCLUSION_ON (1<<1)
#define R300_RE_CULL_CNTL 0x42B8
# define R300_CULL_FRONT (1 << 0)
# define R300_CULL_BACK (1 << 1)
# define R300_FRONT_FACE_CCW (0 << 2)
# define R300_FRONT_FACE_CW (1 << 2)
/* BEGIN: Rasterization / Interpolators - many guesses */
/* 0_UNKNOWN_18 has always been set except for clear operations.
* TC_CNT is the number of incoming texture coordinate sets (i.e. it depends
* on the vertex program, *not* the fragment program)
*/
#define R300_RS_CNTL_0 0x4300
# define R300_RS_CNTL_TC_CNT_SHIFT 2
# define R300_RS_CNTL_TC_CNT_MASK (7 << 2)
/* number of color interpolators used */
# define R300_RS_CNTL_CI_CNT_SHIFT 7
# define R300_RS_CNTL_0_UNKNOWN_18 (1 << 18)
/* Guess: RS_CNTL_1 holds the index of the highest used RS_ROUTE_n
register. */
#define R300_RS_CNTL_1 0x4304
/* gap */
/* Only used for texture coordinates.
* Use the source field to route texture coordinate input from the
* vertex program to the desired interpolator. Note that the source
* field is relative to the outputs the vertex program *actually*
* writes. If a vertex program only writes texcoord[1], this will
* be source index 0.
* Set INTERP_USED on all interpolators that produce data used by
* the fragment program. INTERP_USED looks like a swizzling mask,
* but I haven't seen it used that way.
*
* Note: The _UNKNOWN constants are always set in their respective
* register. I don't know if this is necessary.
*/
#define R300_RS_INTERP_0 0x4310
#define R300_RS_INTERP_1 0x4314
# define R300_RS_INTERP_1_UNKNOWN 0x40
#define R300_RS_INTERP_2 0x4318
# define R300_RS_INTERP_2_UNKNOWN 0x80
#define R300_RS_INTERP_3 0x431C
# define R300_RS_INTERP_3_UNKNOWN 0xC0
#define R300_RS_INTERP_4 0x4320
#define R300_RS_INTERP_5 0x4324
#define R300_RS_INTERP_6 0x4328
#define R300_RS_INTERP_7 0x432C
# define R300_RS_INTERP_SRC_SHIFT 2
# define R300_RS_INTERP_SRC_MASK (7 << 2)
# define R300_RS_INTERP_USED 0x00D10000
/* These DWORDs control how vertex data is routed into fragment program
* registers, after interpolators.
*/
#define R300_RS_ROUTE_0 0x4330
#define R300_RS_ROUTE_1 0x4334
#define R300_RS_ROUTE_2 0x4338
#define R300_RS_ROUTE_3 0x433C /* GUESS */
#define R300_RS_ROUTE_4 0x4340 /* GUESS */
#define R300_RS_ROUTE_5 0x4344 /* GUESS */
#define R300_RS_ROUTE_6 0x4348 /* GUESS */
#define R300_RS_ROUTE_7 0x434C /* GUESS */
# define R300_RS_ROUTE_SOURCE_INTERP_0 0
# define R300_RS_ROUTE_SOURCE_INTERP_1 1
# define R300_RS_ROUTE_SOURCE_INTERP_2 2
# define R300_RS_ROUTE_SOURCE_INTERP_3 3
# define R300_RS_ROUTE_SOURCE_INTERP_4 4
# define R300_RS_ROUTE_SOURCE_INTERP_5 5 /* GUESS */
# define R300_RS_ROUTE_SOURCE_INTERP_6 6 /* GUESS */
# define R300_RS_ROUTE_SOURCE_INTERP_7 7 /* GUESS */
# define R300_RS_ROUTE_ENABLE (1 << 3) /* GUESS */
# define R300_RS_ROUTE_DEST_SHIFT 6
# define R300_RS_ROUTE_DEST_MASK (31 << 6) /* GUESS */
/* Special handling for color: When the fragment program uses color,
* the ROUTE_0_COLOR bit is set and ROUTE_0_COLOR_DEST contains the
* color register index.
*
 * Apparently you may set the R300_RS_ROUTE_0_COLOR bit, but not provide any
* R300_RS_ROUTE_0_COLOR_DEST value; this setup is used for clearing the state.
* See r300_ioctl.c:r300EmitClearState. I'm not sure if this setup is strictly
* correct or not. - Oliver.
*/
# define R300_RS_ROUTE_0_COLOR (1 << 14)
# define R300_RS_ROUTE_0_COLOR_DEST_SHIFT 17
# define R300_RS_ROUTE_0_COLOR_DEST_MASK (31 << 17) /* GUESS */
/* As above, but for secondary color */
# define R300_RS_ROUTE_1_COLOR1 (1 << 14)
# define R300_RS_ROUTE_1_COLOR1_DEST_SHIFT 17
# define R300_RS_ROUTE_1_COLOR1_DEST_MASK (31 << 17)
# define R300_RS_ROUTE_1_UNKNOWN11 (1 << 11)
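/* Illustration only: a sketch of an RS_ROUTE_0 word that takes interpolator 0
 * and routes it into fragment program color register 'dst', per the special
 * color handling described above. ROUTE_ENABLE is itself marked as a guess,
 * and the helper name is hypothetical.
 */
static __inline unsigned int r300_rs_route0_color(unsigned int dst)
{
	return R300_RS_ROUTE_SOURCE_INTERP_0 | R300_RS_ROUTE_ENABLE |
	    R300_RS_ROUTE_0_COLOR |
	    (dst << R300_RS_ROUTE_0_COLOR_DEST_SHIFT);
}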
/* END: Rasterization / Interpolators - many guesses */
/* Hierarchical Z Enable */
#define R300_SC_HYPERZ 0x43a4
# define R300_SC_HYPERZ_DISABLE (0 << 0)
# define R300_SC_HYPERZ_ENABLE (1 << 0)
# define R300_SC_HYPERZ_MIN (0 << 1)
# define R300_SC_HYPERZ_MAX (1 << 1)
# define R300_SC_HYPERZ_ADJ_256 (0 << 2)
# define R300_SC_HYPERZ_ADJ_128 (1 << 2)
# define R300_SC_HYPERZ_ADJ_64 (2 << 2)
# define R300_SC_HYPERZ_ADJ_32 (3 << 2)
# define R300_SC_HYPERZ_ADJ_16 (4 << 2)
# define R300_SC_HYPERZ_ADJ_8 (5 << 2)
# define R300_SC_HYPERZ_ADJ_4 (6 << 2)
# define R300_SC_HYPERZ_ADJ_2 (7 << 2)
# define R300_SC_HYPERZ_HZ_Z0MIN_NO (0 << 5)
# define R300_SC_HYPERZ_HZ_Z0MIN (1 << 5)
# define R300_SC_HYPERZ_HZ_Z0MAX_NO (0 << 6)
# define R300_SC_HYPERZ_HZ_Z0MAX (1 << 6)
#define R300_SC_EDGERULE 0x43a8
/* BEGIN: Scissors and cliprects */
/* There are four clipping rectangles. Their corner coordinates are inclusive.
 * Every pixel is assigned a number from 0 to 15 by setting bits 0-3 depending
* on whether the pixel is inside cliprects 0-3, respectively. For example,
* if a pixel is inside cliprects 0 and 1, but outside 2 and 3, it is assigned
* the number 3 (binary 0011).
* Iff the bit corresponding to the pixel's number in RE_CLIPRECT_CNTL is set,
* the pixel is rasterized.
*
* In addition to this, there is a scissors rectangle. Only pixels inside the
* scissors rectangle are drawn. (coordinates are inclusive)
*
* For some reason, the top-left corner of the framebuffer is at (1440, 1440)
* for the purpose of clipping and scissors.
*/
#define R300_RE_CLIPRECT_TL_0 0x43B0
#define R300_RE_CLIPRECT_BR_0 0x43B4
#define R300_RE_CLIPRECT_TL_1 0x43B8
#define R300_RE_CLIPRECT_BR_1 0x43BC
#define R300_RE_CLIPRECT_TL_2 0x43C0
#define R300_RE_CLIPRECT_BR_2 0x43C4
#define R300_RE_CLIPRECT_TL_3 0x43C8
#define R300_RE_CLIPRECT_BR_3 0x43CC
# define R300_CLIPRECT_OFFSET 1440
# define R300_CLIPRECT_MASK 0x1FFF
# define R300_CLIPRECT_X_SHIFT 0
# define R300_CLIPRECT_X_MASK (0x1FFF << 0)
# define R300_CLIPRECT_Y_SHIFT 13
# define R300_CLIPRECT_Y_MASK (0x1FFF << 13)
#define R300_RE_CLIPRECT_CNTL 0x43D0
# define R300_CLIP_OUT (1 << 0)
# define R300_CLIP_0 (1 << 1)
# define R300_CLIP_1 (1 << 2)
# define R300_CLIP_10 (1 << 3)
# define R300_CLIP_2 (1 << 4)
# define R300_CLIP_20 (1 << 5)
# define R300_CLIP_21 (1 << 6)
# define R300_CLIP_210 (1 << 7)
# define R300_CLIP_3 (1 << 8)
# define R300_CLIP_30 (1 << 9)
# define R300_CLIP_31 (1 << 10)
# define R300_CLIP_310 (1 << 11)
# define R300_CLIP_32 (1 << 12)
# define R300_CLIP_320 (1 << 13)
# define R300_CLIP_321 (1 << 14)
# define R300_CLIP_3210 (1 << 15)
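/* Illustration only: packing one cliprect corner, applying the 1440 offset
 * described above so that framebuffer coordinate (0,0) encodes as
 * (1440,1440); the helper name is hypothetical.
 */
static __inline unsigned int r300_cliprect_corner(int x, int y)
{
	return (((x + R300_CLIPRECT_OFFSET) & R300_CLIPRECT_MASK)
	    << R300_CLIPRECT_X_SHIFT) |
	    (((y + R300_CLIPRECT_OFFSET) & R300_CLIPRECT_MASK)
	    << R300_CLIPRECT_Y_SHIFT);
}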
/* gap */
#define R300_RE_SCISSORS_TL 0x43E0
#define R300_RE_SCISSORS_BR 0x43E4
# define R300_SCISSORS_OFFSET 1440
# define R300_SCISSORS_X_SHIFT 0
# define R300_SCISSORS_X_MASK (0x1FFF << 0)
# define R300_SCISSORS_Y_SHIFT 13
# define R300_SCISSORS_Y_MASK (0x1FFF << 13)
/* END: Scissors and cliprects */
/* BEGIN: Texture specification */
/*
* The texture specification dwords are grouped by meaning and not by texture
* unit. This means that e.g. the offset for texture image unit N is found in
* register TX_OFFSET_0 + (4*N)
*/
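/* Illustration only: per-unit addressing as described above, e.g. (macro
 * names are hypothetical):
 */
#define R300_TX_FILTER_N(n) (R300_TX_FILTER_0 + 4 * (n))
#define R300_TX_OFFSET_N(n) (R300_TX_OFFSET_0 + 4 * (n))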
#define R300_TX_FILTER_0 0x4400
# define R300_TX_REPEAT 0
# define R300_TX_MIRRORED 1
# define R300_TX_CLAMP 4
# define R300_TX_CLAMP_TO_EDGE 2
# define R300_TX_CLAMP_TO_BORDER 6
# define R300_TX_WRAP_S_SHIFT 0
# define R300_TX_WRAP_S_MASK (7 << 0)
# define R300_TX_WRAP_T_SHIFT 3
# define R300_TX_WRAP_T_MASK (7 << 3)
# define R300_TX_WRAP_Q_SHIFT 6
# define R300_TX_WRAP_Q_MASK (7 << 6)
# define R300_TX_MAG_FILTER_NEAREST (1 << 9)
# define R300_TX_MAG_FILTER_LINEAR (2 << 9)
# define R300_TX_MAG_FILTER_MASK (3 << 9)
# define R300_TX_MIN_FILTER_NEAREST (1 << 11)
# define R300_TX_MIN_FILTER_LINEAR (2 << 11)
# define R300_TX_MIN_FILTER_NEAREST_MIP_NEAREST (5 << 11)
# define R300_TX_MIN_FILTER_NEAREST_MIP_LINEAR (9 << 11)
# define R300_TX_MIN_FILTER_LINEAR_MIP_NEAREST (6 << 11)
# define R300_TX_MIN_FILTER_LINEAR_MIP_LINEAR (10 << 11)
/* NOTE: NEAREST doesn't seem to exist.
 * I'm not setting MAG_FILTER_MASK and (3 << 11) for all
 * anisotropy modes because that would void the selected mag filter
*/
# define R300_TX_MIN_FILTER_ANISO_NEAREST (0 << 13)
# define R300_TX_MIN_FILTER_ANISO_LINEAR (0 << 13)
# define R300_TX_MIN_FILTER_ANISO_NEAREST_MIP_NEAREST (1 << 13)
# define R300_TX_MIN_FILTER_ANISO_NEAREST_MIP_LINEAR (2 << 13)
# define R300_TX_MIN_FILTER_MASK ( (15 << 11) | (3 << 13) )
# define R300_TX_MAX_ANISO_1_TO_1 (0 << 21)
# define R300_TX_MAX_ANISO_2_TO_1 (2 << 21)
# define R300_TX_MAX_ANISO_4_TO_1 (4 << 21)
# define R300_TX_MAX_ANISO_8_TO_1 (6 << 21)
# define R300_TX_MAX_ANISO_16_TO_1 (8 << 21)
# define R300_TX_MAX_ANISO_MASK (14 << 21)
#define R300_TX_FILTER1_0 0x4440
# define R300_CHROMA_KEY_MODE_DISABLE 0
# define R300_CHROMA_KEY_FORCE 1
# define R300_CHROMA_KEY_BLEND 2
# define R300_MC_ROUND_NORMAL (0<<2)
# define R300_MC_ROUND_MPEG4 (1<<2)
# define R300_LOD_BIAS_MASK 0x1fff
# define R300_EDGE_ANISO_EDGE_DIAG (0<<13)
# define R300_EDGE_ANISO_EDGE_ONLY (1<<13)
# define R300_MC_COORD_TRUNCATE_DISABLE (0<<14)
# define R300_MC_COORD_TRUNCATE_MPEG (1<<14)
# define R300_TX_TRI_PERF_0_8 (0<<15)
# define R300_TX_TRI_PERF_1_8 (1<<15)
# define R300_TX_TRI_PERF_1_4 (2<<15)
# define R300_TX_TRI_PERF_3_8 (3<<15)
# define R300_ANISO_THRESHOLD_MASK (7<<17)
#define R300_TX_SIZE_0 0x4480
# define R300_TX_WIDTHMASK_SHIFT 0
# define R300_TX_WIDTHMASK_MASK (2047 << 0)
# define R300_TX_HEIGHTMASK_SHIFT 11
# define R300_TX_HEIGHTMASK_MASK (2047 << 11)
# define R300_TX_UNK23 (1 << 23)
# define R300_TX_MAX_MIP_LEVEL_SHIFT 26
# define R300_TX_MAX_MIP_LEVEL_MASK (0xf << 26)
# define R300_TX_SIZE_PROJECTED (1<<30)
# define R300_TX_SIZE_TXPITCH_EN (1<<31)
#define R300_TX_FORMAT_0 0x44C0
/* The interpretation of the format word by Wladimir van der Laan */
/* The X, Y, Z and W refer to the layout of the components.
They are given meanings as R, G, B and Alpha by the swizzle
specification */
# define R300_TX_FORMAT_X8 0x0
# define R300_TX_FORMAT_X16 0x1
# define R300_TX_FORMAT_Y4X4 0x2
# define R300_TX_FORMAT_Y8X8 0x3
# define R300_TX_FORMAT_Y16X16 0x4
# define R300_TX_FORMAT_Z3Y3X2 0x5
# define R300_TX_FORMAT_Z5Y6X5 0x6
# define R300_TX_FORMAT_Z6Y5X5 0x7
# define R300_TX_FORMAT_Z11Y11X10 0x8
# define R300_TX_FORMAT_Z10Y11X11 0x9
# define R300_TX_FORMAT_W4Z4Y4X4 0xA
# define R300_TX_FORMAT_W1Z5Y5X5 0xB
# define R300_TX_FORMAT_W8Z8Y8X8 0xC
# define R300_TX_FORMAT_W2Z10Y10X10 0xD
# define R300_TX_FORMAT_W16Z16Y16X16 0xE
# define R300_TX_FORMAT_DXT1 0xF
# define R300_TX_FORMAT_DXT3 0x10
# define R300_TX_FORMAT_DXT5 0x11
# define R300_TX_FORMAT_D3DMFT_CxV8U8 0x12 /* no swizzle */
# define R300_TX_FORMAT_A8R8G8B8 0x13 /* no swizzle */
# define R300_TX_FORMAT_B8G8_B8G8 0x14 /* no swizzle */
# define R300_TX_FORMAT_G8R8_G8B8 0x15 /* no swizzle */
/* 0x16 - some 16 bit green format.. ?? */
# define R300_TX_FORMAT_UNK25 (1 << 25) /* no swizzle */
# define R300_TX_FORMAT_CUBIC_MAP (1 << 26)
/* gap */
/* Floating point formats */
/* Note - hardware supports both 16 and 32 bit floating point */
# define R300_TX_FORMAT_FL_I16 0x18
# define R300_TX_FORMAT_FL_I16A16 0x19
# define R300_TX_FORMAT_FL_R16G16B16A16 0x1A
# define R300_TX_FORMAT_FL_I32 0x1B
# define R300_TX_FORMAT_FL_I32A32 0x1C
# define R300_TX_FORMAT_FL_R32G32B32A32 0x1D
# define R300_TX_FORMAT_ATI2N 0x1F
/* alpha modes, convenience mostly */
/* if you have alpha, pick the constant appropriate to the
 number of channels (1 for I8, 2 for I8A8, 4 for R8G8B8A8, etc.) */
# define R300_TX_FORMAT_ALPHA_1CH 0x000
# define R300_TX_FORMAT_ALPHA_2CH 0x200
# define R300_TX_FORMAT_ALPHA_4CH 0x600
# define R300_TX_FORMAT_ALPHA_NONE 0xA00
/* Swizzling */
/* constants */
# define R300_TX_FORMAT_X 0
# define R300_TX_FORMAT_Y 1
# define R300_TX_FORMAT_Z 2
# define R300_TX_FORMAT_W 3
# define R300_TX_FORMAT_ZERO 4
# define R300_TX_FORMAT_ONE 5
/* 2.0*Z, everything above 1.0 is set to 0.0 */
# define R300_TX_FORMAT_CUT_Z 6
/* 2.0*W, everything above 1.0 is set to 0.0 */
# define R300_TX_FORMAT_CUT_W 7
# define R300_TX_FORMAT_B_SHIFT 18
# define R300_TX_FORMAT_G_SHIFT 15
# define R300_TX_FORMAT_R_SHIFT 12
# define R300_TX_FORMAT_A_SHIFT 9
/* Convenience macro to take care of layout and swizzling */
# define R300_EASY_TX_FORMAT(B, G, R, A, FMT) ( \
((R300_TX_FORMAT_##B)<<R300_TX_FORMAT_B_SHIFT) \
| ((R300_TX_FORMAT_##G)<<R300_TX_FORMAT_G_SHIFT) \
| ((R300_TX_FORMAT_##R)<<R300_TX_FORMAT_R_SHIFT) \
| ((R300_TX_FORMAT_##A)<<R300_TX_FORMAT_A_SHIFT) \
| (R300_TX_FORMAT_##FMT) \
)
/* These can be ORed with the result of R300_EASY_TX_FORMAT().
We don't really know what they do. Take values from a
constant color ? */
# define R300_TX_FORMAT_CONST_X (1<<5)
# define R300_TX_FORMAT_CONST_Y (2<<5)
# define R300_TX_FORMAT_CONST_Z (4<<5)
# define R300_TX_FORMAT_CONST_W (8<<5)
# define R300_TX_FORMAT_YUV_MODE 0x00800000
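/* Illustration only: a typical R300_EASY_TX_FORMAT() use, selecting the
 * W8Z8Y8X8 layout with X/Y/Z/W mapped to B/G/R/A and 4-channel alpha; a
 * sketch, not taken from driver code.
 */
static const unsigned int r300_tx_format_example =
	R300_EASY_TX_FORMAT(X, Y, Z, W, W8Z8Y8X8) | R300_TX_FORMAT_ALPHA_4CH;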
#define R300_TX_PITCH_0 0x4500 /* obviously missing in the gap */
#define R300_TX_OFFSET_0 0x4540
/* BEGIN: Guess from R200 */
# define R300_TXO_ENDIAN_NO_SWAP (0 << 0)
# define R300_TXO_ENDIAN_BYTE_SWAP (1 << 0)
# define R300_TXO_ENDIAN_WORD_SWAP (2 << 0)
# define R300_TXO_ENDIAN_HALFDW_SWAP (3 << 0)
# define R300_TXO_MACRO_TILE (1 << 2)
# define R300_TXO_MICRO_TILE (1 << 3)
# define R300_TXO_MICRO_TILE_SQUARE (2 << 3)
# define R300_TXO_OFFSET_MASK 0xffffffe0
# define R300_TXO_OFFSET_SHIFT 5
/* END: Guess from R200 */
/* 32 bit chroma key */
#define R300_TX_CHROMA_KEY_0 0x4580
/* ff00ff00 == { 0, 1.0, 0, 1.0 } */
#define R300_TX_BORDER_COLOR_0 0x45C0
/* END: Texture specification */
/* BEGIN: Fragment program instruction set */
/* Fragment programs are written directly into register space.
* There are separate instruction streams for texture instructions and ALU
* instructions.
* In order to synchronize these streams, the program is divided into up
* to 4 nodes. Each node begins with a number of TEX operations, followed
* by a number of ALU operations.
 * The first node can have zero TEX ops; all subsequent nodes must have at
 * least one TEX op.
* All nodes must have at least one ALU op.
*
* The index of the last node is stored in PFS_CNTL_0: A value of 0 means
* 1 node, a value of 3 means 4 nodes.
* The total amount of instructions is defined in PFS_CNTL_2. The offsets are
* offsets into the respective instruction streams, while *_END points to the
* last instruction relative to this offset.
*/
#define R300_PFS_CNTL_0 0x4600
# define R300_PFS_CNTL_LAST_NODES_SHIFT 0
# define R300_PFS_CNTL_LAST_NODES_MASK (3 << 0)
# define R300_PFS_CNTL_FIRST_NODE_HAS_TEX (1 << 3)
#define R300_PFS_CNTL_1 0x4604
/* There is an unshifted value here which has so far always been equal to the
* index of the highest used temporary register.
*/
#define R300_PFS_CNTL_2 0x4608
# define R300_PFS_CNTL_ALU_OFFSET_SHIFT 0
# define R300_PFS_CNTL_ALU_OFFSET_MASK (63 << 0)
# define R300_PFS_CNTL_ALU_END_SHIFT 6
# define R300_PFS_CNTL_ALU_END_MASK (63 << 6)
# define R300_PFS_CNTL_TEX_OFFSET_SHIFT 12
# define R300_PFS_CNTL_TEX_OFFSET_MASK (31 << 12) /* GUESS */
# define R300_PFS_CNTL_TEX_END_SHIFT 18
# define R300_PFS_CNTL_TEX_END_MASK (31 << 18) /* GUESS */
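/* Illustration only: a sketch of packing PFS_CNTL_2 from the ALU/TEX stream
 * offsets and their last-instruction indices, as described above; the helper
 * name is hypothetical.
 */
static __inline unsigned int r300_pfs_cntl2(unsigned int alu_off,
    unsigned int alu_end, unsigned int tex_off, unsigned int tex_end)
{
	return (alu_off << R300_PFS_CNTL_ALU_OFFSET_SHIFT) |
	    (alu_end << R300_PFS_CNTL_ALU_END_SHIFT) |
	    (tex_off << R300_PFS_CNTL_TEX_OFFSET_SHIFT) |
	    (tex_end << R300_PFS_CNTL_TEX_END_SHIFT);
}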
/* gap */
/* Nodes are stored backwards. The last active node is always stored in
* PFS_NODE_3.
* Example: In a 2-node program, NODE_0 and NODE_1 are set to 0. The
* first node is stored in NODE_2, the second node is stored in NODE_3.
*
* Offsets are relative to the master offset from PFS_CNTL_2.
*/
#define R300_PFS_NODE_0 0x4610
#define R300_PFS_NODE_1 0x4614
#define R300_PFS_NODE_2 0x4618
#define R300_PFS_NODE_3 0x461C
# define R300_PFS_NODE_ALU_OFFSET_SHIFT 0
# define R300_PFS_NODE_ALU_OFFSET_MASK (63 << 0)
# define R300_PFS_NODE_ALU_END_SHIFT 6
# define R300_PFS_NODE_ALU_END_MASK (63 << 6)
# define R300_PFS_NODE_TEX_OFFSET_SHIFT 12
# define R300_PFS_NODE_TEX_OFFSET_MASK (31 << 12)
# define R300_PFS_NODE_TEX_END_SHIFT 17
# define R300_PFS_NODE_TEX_END_MASK (31 << 17)
# define R300_PFS_NODE_OUTPUT_COLOR (1 << 22)
# define R300_PFS_NODE_OUTPUT_DEPTH (1 << 23)
/* TEX
* As far as I can tell, texture instructions cannot write into output
* registers directly. A subsequent ALU instruction is always necessary,
* even if it's just MAD o0, r0, 1, 0
*/
#define R300_PFS_TEXI_0 0x4620
# define R300_FPITX_SRC_SHIFT 0
# define R300_FPITX_SRC_MASK (31 << 0)
/* GUESS */
# define R300_FPITX_SRC_CONST (1 << 5)
# define R300_FPITX_DST_SHIFT 6
# define R300_FPITX_DST_MASK (31 << 6)
# define R300_FPITX_IMAGE_SHIFT 11
/* GUESS based on layout and native limits */
# define R300_FPITX_IMAGE_MASK (15 << 11)
/* Unsure if these are opcodes, or some kind of bitfield, but this is how
* they were set when I checked
*/
# define R300_FPITX_OPCODE_SHIFT 15
# define R300_FPITX_OP_TEX 1
# define R300_FPITX_OP_KIL 2
# define R300_FPITX_OP_TXP 3
# define R300_FPITX_OP_TXB 4
# define R300_FPITX_OPCODE_MASK (7 << 15)
/* ALU
* The ALU instructions register blocks are enumerated according to the order
* in which fglrx. I assume there is space for 64 instructions, since
* each block has space for a maximum of 64 DWORDs, and this matches reported
* native limits.
*
* The basic functional block seems to be one MAD for each color and alpha,
* and an adder that adds all components after the MUL.
* - ADD, MUL, MAD etc.: use MAD with appropriate neutral operands
* - DP4: Use OUTC_DP4, OUTA_DP4
* - DP3: Use OUTC_DP3, OUTA_DP4, appropriate alpha operands
* - DPH: Use OUTC_DP4, OUTA_DP4, appropriate alpha operands
* - CMPH: If ARG2 > 0.5, return ARG0, else return ARG1
* - CMP: If ARG2 < 0, return ARG1, else return ARG0
* - FLR: use FRC+MAD
* - XPD: use MAD+MAD
* - SGE, SLT: use MAD+CMP
* - RSQ: use ABS modifier for argument
* - Use OUTC_REPL_ALPHA to write results of an alpha-only operation
* (e.g. RCP) into color register
* - apparently, there's no quick DST operation
* - fglrx set FPI2_UNKNOWN_31 on a "MAD fragment.color, tmp0, tmp1, tmp2"
* - fglrx set FPI2_UNKNOWN_31 on a "MAX r2, r1, c0"
* - fglrx once set FPI0_UNKNOWN_31 on a "FRC r1, r1"
*
* Operand selection
* First stage selects three sources from the available registers and
* constant parameters. This is defined in INSTR1 (color) and INSTR3 (alpha).
* fglrx sorts the three source fields: Registers before constants,
* lower indices before higher indices; I do not know whether this is
* necessary.
*
* fglrx fills unused sources with "read constant 0"
* According to specs, you cannot select more than two different constants.
*
* Second stage selects the operands from the sources. This is defined in
* INSTR0 (color) and INSTR2 (alpha). You can also select the special constants
* zero and one.
* Swizzling and negation happens in this stage, as well.
*
* Important: Color and alpha seem to be mostly separate, i.e. their sources
* selection appears to be fully independent (the register storage is probably
* physically split into a color and an alpha section).
* However (because of the apparent physical split), there is some interaction
* WRT swizzling. If, for example, you want to load an R component into an
* Alpha operand, this R component is taken from a *color* source, not from
* an alpha source. The corresponding register doesn't even have to appear in
* the alpha sources list. (I hope this all makes sense to you)
*
* Destination selection
* The destination register index is in FPI1 (color) and FPI3 (alpha)
* together with enable bits.
* There are separate enable bits for writing into temporary registers
 * (DSTC_REG_* /DSTA_REG) and program output registers (DSTC_OUTPUT_*
* /DSTA_OUTPUT). You can write to both at once, or not write at all (the
* same index must be used for both).
*
* Note: There is a special form for LRP
* - Argument order is the same as in ARB_fragment_program.
* - Operation is MAD
* - ARG1 is set to ARGC_SRC1C_LRP/ARGC_SRC1A_LRP
* - Set FPI0/FPI2_SPECIAL_LRP
* Arbitrary LRP (including support for swizzling) requires vanilla MAD+MAD
*/
#define R300_PFS_INSTR1_0 0x46C0
# define R300_FPI1_SRC0C_SHIFT 0
# define R300_FPI1_SRC0C_MASK (31 << 0)
# define R300_FPI1_SRC0C_CONST (1 << 5)
# define R300_FPI1_SRC1C_SHIFT 6
# define R300_FPI1_SRC1C_MASK (31 << 6)
# define R300_FPI1_SRC1C_CONST (1 << 11)
# define R300_FPI1_SRC2C_SHIFT 12
# define R300_FPI1_SRC2C_MASK (31 << 12)
# define R300_FPI1_SRC2C_CONST (1 << 17)
# define R300_FPI1_SRC_MASK 0x0003ffff
# define R300_FPI1_DSTC_SHIFT 18
# define R300_FPI1_DSTC_MASK (31 << 18)
# define R300_FPI1_DSTC_REG_MASK_SHIFT 23
# define R300_FPI1_DSTC_REG_X (1 << 23)
# define R300_FPI1_DSTC_REG_Y (1 << 24)
# define R300_FPI1_DSTC_REG_Z (1 << 25)
# define R300_FPI1_DSTC_OUTPUT_MASK_SHIFT 26
# define R300_FPI1_DSTC_OUTPUT_X (1 << 26)
# define R300_FPI1_DSTC_OUTPUT_Y (1 << 27)
# define R300_FPI1_DSTC_OUTPUT_Z (1 << 28)
#define R300_PFS_INSTR3_0 0x47C0
# define R300_FPI3_SRC0A_SHIFT 0
# define R300_FPI3_SRC0A_MASK (31 << 0)
# define R300_FPI3_SRC0A_CONST (1 << 5)
# define R300_FPI3_SRC1A_SHIFT 6
# define R300_FPI3_SRC1A_MASK (31 << 6)
# define R300_FPI3_SRC1A_CONST (1 << 11)
# define R300_FPI3_SRC2A_SHIFT 12
# define R300_FPI3_SRC2A_MASK (31 << 12)
# define R300_FPI3_SRC2A_CONST (1 << 17)
# define R300_FPI3_SRC_MASK 0x0003ffff
# define R300_FPI3_DSTA_SHIFT 18
# define R300_FPI3_DSTA_MASK (31 << 18)
# define R300_FPI3_DSTA_REG (1 << 23)
# define R300_FPI3_DSTA_OUTPUT (1 << 24)
# define R300_FPI3_DSTA_DEPTH (1 << 27)
#define R300_PFS_INSTR0_0 0x48C0
# define R300_FPI0_ARGC_SRC0C_XYZ 0
# define R300_FPI0_ARGC_SRC0C_XXX 1
# define R300_FPI0_ARGC_SRC0C_YYY 2
# define R300_FPI0_ARGC_SRC0C_ZZZ 3
# define R300_FPI0_ARGC_SRC1C_XYZ 4
# define R300_FPI0_ARGC_SRC1C_XXX 5
# define R300_FPI0_ARGC_SRC1C_YYY 6
# define R300_FPI0_ARGC_SRC1C_ZZZ 7
# define R300_FPI0_ARGC_SRC2C_XYZ 8
# define R300_FPI0_ARGC_SRC2C_XXX 9
# define R300_FPI0_ARGC_SRC2C_YYY 10
# define R300_FPI0_ARGC_SRC2C_ZZZ 11
# define R300_FPI0_ARGC_SRC0A 12
# define R300_FPI0_ARGC_SRC1A 13
# define R300_FPI0_ARGC_SRC2A 14
# define R300_FPI0_ARGC_SRC1C_LRP 15
# define R300_FPI0_ARGC_ZERO 20
# define R300_FPI0_ARGC_ONE 21
/* GUESS */
# define R300_FPI0_ARGC_HALF 22
# define R300_FPI0_ARGC_SRC0C_YZX 23
# define R300_FPI0_ARGC_SRC1C_YZX 24
# define R300_FPI0_ARGC_SRC2C_YZX 25
# define R300_FPI0_ARGC_SRC0C_ZXY 26
# define R300_FPI0_ARGC_SRC1C_ZXY 27
# define R300_FPI0_ARGC_SRC2C_ZXY 28
# define R300_FPI0_ARGC_SRC0CA_WZY 29
# define R300_FPI0_ARGC_SRC1CA_WZY 30
# define R300_FPI0_ARGC_SRC2CA_WZY 31
# define R300_FPI0_ARG0C_SHIFT 0
# define R300_FPI0_ARG0C_MASK (31 << 0)
# define R300_FPI0_ARG0C_NEG (1 << 5)
# define R300_FPI0_ARG0C_ABS (1 << 6)
# define R300_FPI0_ARG1C_SHIFT 7
# define R300_FPI0_ARG1C_MASK (31 << 7)
# define R300_FPI0_ARG1C_NEG (1 << 12)
# define R300_FPI0_ARG1C_ABS (1 << 13)
# define R300_FPI0_ARG2C_SHIFT 14
# define R300_FPI0_ARG2C_MASK (31 << 14)
# define R300_FPI0_ARG2C_NEG (1 << 19)
# define R300_FPI0_ARG2C_ABS (1 << 20)
# define R300_FPI0_SPECIAL_LRP (1 << 21)
# define R300_FPI0_OUTC_MAD (0 << 23)
# define R300_FPI0_OUTC_DP3 (1 << 23)
# define R300_FPI0_OUTC_DP4 (2 << 23)
# define R300_FPI0_OUTC_MIN (4 << 23)
# define R300_FPI0_OUTC_MAX (5 << 23)
# define R300_FPI0_OUTC_CMPH (7 << 23)
# define R300_FPI0_OUTC_CMP (8 << 23)
# define R300_FPI0_OUTC_FRC (9 << 23)
# define R300_FPI0_OUTC_REPL_ALPHA (10 << 23)
# define R300_FPI0_OUTC_SAT (1 << 30)
# define R300_FPI0_INSERT_NOP (1U << 31)
#define R300_PFS_INSTR2_0 0x49C0
# define R300_FPI2_ARGA_SRC0C_X 0
# define R300_FPI2_ARGA_SRC0C_Y 1
# define R300_FPI2_ARGA_SRC0C_Z 2
# define R300_FPI2_ARGA_SRC1C_X 3
# define R300_FPI2_ARGA_SRC1C_Y 4
# define R300_FPI2_ARGA_SRC1C_Z 5
# define R300_FPI2_ARGA_SRC2C_X 6
# define R300_FPI2_ARGA_SRC2C_Y 7
# define R300_FPI2_ARGA_SRC2C_Z 8
# define R300_FPI2_ARGA_SRC0A 9
# define R300_FPI2_ARGA_SRC1A 10
# define R300_FPI2_ARGA_SRC2A 11
# define R300_FPI2_ARGA_SRC1A_LRP 15
# define R300_FPI2_ARGA_ZERO 16
# define R300_FPI2_ARGA_ONE 17
/* GUESS */
# define R300_FPI2_ARGA_HALF 18
# define R300_FPI2_ARG0A_SHIFT 0
# define R300_FPI2_ARG0A_MASK (31 << 0)
# define R300_FPI2_ARG0A_NEG (1 << 5)
/* GUESS */
# define R300_FPI2_ARG0A_ABS (1 << 6)
# define R300_FPI2_ARG1A_SHIFT 7
# define R300_FPI2_ARG1A_MASK (31 << 7)
# define R300_FPI2_ARG1A_NEG (1 << 12)
/* GUESS */
# define R300_FPI2_ARG1A_ABS (1 << 13)
# define R300_FPI2_ARG2A_SHIFT 14
# define R300_FPI2_ARG2A_MASK (31 << 14)
# define R300_FPI2_ARG2A_NEG (1 << 19)
/* GUESS */
# define R300_FPI2_ARG2A_ABS (1 << 20)
# define R300_FPI2_SPECIAL_LRP (1 << 21)
# define R300_FPI2_OUTA_MAD (0 << 23)
# define R300_FPI2_OUTA_DP4 (1 << 23)
# define R300_FPI2_OUTA_MIN (2 << 23)
# define R300_FPI2_OUTA_MAX (3 << 23)
# define R300_FPI2_OUTA_CMP (6 << 23)
# define R300_FPI2_OUTA_FRC (7 << 23)
# define R300_FPI2_OUTA_EX2 (8 << 23)
# define R300_FPI2_OUTA_LG2 (9 << 23)
# define R300_FPI2_OUTA_RCP (10 << 23)
# define R300_FPI2_OUTA_RSQ (11 << 23)
# define R300_FPI2_OUTA_SAT (1 << 30)
# define R300_FPI2_UNKNOWN_31 (1U << 31)
/* END: Fragment program instruction set */
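/*
 * Illustrative sketch (not part of the original header): how the fields
 * above might combine into the color half of a "MAD tmp2.xyz, tmp0, tmp1,
 * tmp2" ALU instruction. The helper name and register indices are
 * hypothetical; uint32_t assumes <stdint.h>.
 */
static inline void r300_pack_color_mad(uint32_t *instr1, uint32_t *instr0)
{
	/* INSTR1: select temporaries 0, 1 and 2 as the three color sources
	 * and write the result to temporary register 2 (xyz enabled). */
	*instr1 = (0 << R300_FPI1_SRC0C_SHIFT) |
		  (1 << R300_FPI1_SRC1C_SHIFT) |
		  (2 << R300_FPI1_SRC2C_SHIFT) |
		  (2 << R300_FPI1_DSTC_SHIFT) |
		  R300_FPI1_DSTC_REG_X | R300_FPI1_DSTC_REG_Y |
		  R300_FPI1_DSTC_REG_Z;
	/* INSTR0: feed each argument unswizzled from the matching source
	 * and select the MAD opcode. */
	*instr0 = (R300_FPI0_ARGC_SRC0C_XYZ << R300_FPI0_ARG0C_SHIFT) |
		  (R300_FPI0_ARGC_SRC1C_XYZ << R300_FPI0_ARG1C_SHIFT) |
		  (R300_FPI0_ARGC_SRC2C_XYZ << R300_FPI0_ARG2C_SHIFT) |
		  R300_FPI0_OUTC_MAD;
}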
/* Fog state and color */
#define R300_RE_FOG_STATE 0x4BC0
# define R300_FOG_ENABLE (1 << 0)
# define R300_FOG_MODE_LINEAR (0 << 1)
# define R300_FOG_MODE_EXP (1 << 1)
# define R300_FOG_MODE_EXP2 (2 << 1)
# define R300_FOG_MODE_MASK (3 << 1)
#define R300_FOG_COLOR_R 0x4BC8
#define R300_FOG_COLOR_G 0x4BCC
#define R300_FOG_COLOR_B 0x4BD0
#define R300_PP_ALPHA_TEST 0x4BD4
# define R300_REF_ALPHA_MASK 0x000000ff
# define R300_ALPHA_TEST_FAIL (0 << 8)
# define R300_ALPHA_TEST_LESS (1 << 8)
# define R300_ALPHA_TEST_LEQUAL (3 << 8)
# define R300_ALPHA_TEST_EQUAL (2 << 8)
# define R300_ALPHA_TEST_GEQUAL (6 << 8)
# define R300_ALPHA_TEST_GREATER (4 << 8)
# define R300_ALPHA_TEST_NEQUAL (5 << 8)
# define R300_ALPHA_TEST_PASS (7 << 8)
# define R300_ALPHA_TEST_OP_MASK (7 << 8)
# define R300_ALPHA_TEST_ENABLE (1 << 11)
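/*
 * Illustrative sketch (not part of the original header): composing a
 * complete alpha test word, e.g. "pass if alpha >= ref". The helper is
 * hypothetical; uint32_t assumes <stdint.h>.
 */
static inline uint32_t r300_alpha_test_gequal(uint32_t ref)
{
	return (ref & R300_REF_ALPHA_MASK) |
	       R300_ALPHA_TEST_GEQUAL |
	       R300_ALPHA_TEST_ENABLE;
}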
/* gap */
/* Fragment program parameters in 7.16 floating point */
#define R300_PFS_PARAM_0_X 0x4C00
#define R300_PFS_PARAM_0_Y 0x4C04
#define R300_PFS_PARAM_0_Z 0x4C08
#define R300_PFS_PARAM_0_W 0x4C0C
/* GUESS: PARAM_31 is last, based on native limits reported by fglrx */
#define R300_PFS_PARAM_31_X 0x4DF0
#define R300_PFS_PARAM_31_Y 0x4DF4
#define R300_PFS_PARAM_31_Z 0x4DF8
#define R300_PFS_PARAM_31_W 0x4DFC
/* Notes:
* - AFAIK fglrx always sets BLEND_UNKNOWN when blending is used in
* the application
 * - AFAIK fglrx always sets BLEND_NO_SEPARATE when CBLEND and ABLEND
 * are set to the same function
 * (both registers are always set up completely in any case)
* - Most blend flags are simply copied from R200 and not tested yet
*/
#define R300_RB3D_CBLEND 0x4E04
#define R300_RB3D_ABLEND 0x4E08
/* the following only appear in CBLEND */
# define R300_BLEND_ENABLE (1 << 0)
# define R300_BLEND_UNKNOWN (3 << 1)
# define R300_BLEND_NO_SEPARATE (1 << 3)
/* the following are shared between CBLEND and ABLEND */
# define R300_FCN_MASK (3 << 12)
# define R300_COMB_FCN_ADD_CLAMP (0 << 12)
# define R300_COMB_FCN_ADD_NOCLAMP (1 << 12)
# define R300_COMB_FCN_SUB_CLAMP (2 << 12)
# define R300_COMB_FCN_SUB_NOCLAMP (3 << 12)
# define R300_COMB_FCN_MIN (4 << 12)
# define R300_COMB_FCN_MAX (5 << 12)
# define R300_COMB_FCN_RSUB_CLAMP (6 << 12)
# define R300_COMB_FCN_RSUB_NOCLAMP (7 << 12)
# define R300_BLEND_GL_ZERO (32)
# define R300_BLEND_GL_ONE (33)
# define R300_BLEND_GL_SRC_COLOR (34)
# define R300_BLEND_GL_ONE_MINUS_SRC_COLOR (35)
# define R300_BLEND_GL_DST_COLOR (36)
# define R300_BLEND_GL_ONE_MINUS_DST_COLOR (37)
# define R300_BLEND_GL_SRC_ALPHA (38)
# define R300_BLEND_GL_ONE_MINUS_SRC_ALPHA (39)
# define R300_BLEND_GL_DST_ALPHA (40)
# define R300_BLEND_GL_ONE_MINUS_DST_ALPHA (41)
# define R300_BLEND_GL_SRC_ALPHA_SATURATE (42)
# define R300_BLEND_GL_CONST_COLOR (43)
# define R300_BLEND_GL_ONE_MINUS_CONST_COLOR (44)
# define R300_BLEND_GL_CONST_ALPHA (45)
# define R300_BLEND_GL_ONE_MINUS_CONST_ALPHA (46)
# define R300_BLEND_MASK (63)
# define R300_SRC_BLEND_SHIFT (16)
# define R300_DST_BLEND_SHIFT (24)
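/*
 * Illustrative sketch (not part of the original header): classic
 * GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blending packed into a CBLEND
 * value. The helper is hypothetical; uint32_t assumes <stdint.h>.
 */
static inline uint32_t r300_cblend_src_alpha(void)
{
	return R300_BLEND_ENABLE |
	       R300_COMB_FCN_ADD_CLAMP |
	       (R300_BLEND_GL_SRC_ALPHA << R300_SRC_BLEND_SHIFT) |
	       (R300_BLEND_GL_ONE_MINUS_SRC_ALPHA << R300_DST_BLEND_SHIFT);
}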
#define R300_RB3D_BLEND_COLOR 0x4E10
#define R300_RB3D_COLORMASK 0x4E0C
# define R300_COLORMASK0_B (1<<0)
# define R300_COLORMASK0_G (1<<1)
# define R300_COLORMASK0_R (1<<2)
# define R300_COLORMASK0_A (1<<3)
/* gap */
#define R300_RB3D_COLOROFFSET0 0x4E28
# define R300_COLOROFFSET_MASK 0xFFFFFFF0 /* GUESS */
#define R300_RB3D_COLOROFFSET1 0x4E2C /* GUESS */
#define R300_RB3D_COLOROFFSET2 0x4E30 /* GUESS */
#define R300_RB3D_COLOROFFSET3 0x4E34 /* GUESS */
/* gap */
/* Bit 16: Larger tiles
* Bit 17: 4x2 tiles
 * Bit 18: Extremely weird tile-like mode, but with some pixels duplicated?
*/
#define R300_RB3D_COLORPITCH0 0x4E38
# define R300_COLORPITCH_MASK 0x00001FF8 /* GUESS */
# define R300_COLOR_TILE_ENABLE (1 << 16) /* GUESS */
# define R300_COLOR_MICROTILE_ENABLE (1 << 17) /* GUESS */
# define R300_COLOR_MICROTILE_SQUARE_ENABLE (2 << 17)
# define R300_COLOR_ENDIAN_NO_SWAP (0 << 18) /* GUESS */
# define R300_COLOR_ENDIAN_WORD_SWAP (1 << 18) /* GUESS */
# define R300_COLOR_ENDIAN_DWORD_SWAP (2 << 18) /* GUESS */
# define R300_COLOR_FORMAT_RGB565 (2 << 22)
# define R300_COLOR_FORMAT_ARGB8888 (3 << 22)
#define R300_RB3D_COLORPITCH1 0x4E3C /* GUESS */
#define R300_RB3D_COLORPITCH2 0x4E40 /* GUESS */
#define R300_RB3D_COLORPITCH3 0x4E44 /* GUESS */
#define R300_RB3D_AARESOLVE_OFFSET 0x4E80
#define R300_RB3D_AARESOLVE_PITCH 0x4E84
#define R300_RB3D_AARESOLVE_CTL 0x4E88
/* gap */
/* Guess by Vladimir.
* Set to 0A before 3D operations, set to 02 afterwards.
*/
/*#define R300_RB3D_DSTCACHE_CTLSTAT 0x4E4C*/
# define R300_RB3D_DSTCACHE_UNKNOWN_02 0x00000002
# define R300_RB3D_DSTCACHE_UNKNOWN_0A 0x0000000A
/* gap */
/* There seems to be no "write only" setting, so use Z-test = ALWAYS
* for this.
 * Bit (1<<8) is the "test" bit, so plain write is 6. - vd
*/
#define R300_ZB_CNTL 0x4F00
# define R300_STENCIL_ENABLE (1 << 0)
# define R300_Z_ENABLE (1 << 1)
# define R300_Z_WRITE_ENABLE (1 << 2)
# define R300_Z_SIGNED_COMPARE (1 << 3)
# define R300_STENCIL_FRONT_BACK (1 << 4)
#define R300_ZB_ZSTENCILCNTL 0x4f04
/* functions */
# define R300_ZS_NEVER 0
# define R300_ZS_LESS 1
# define R300_ZS_LEQUAL 2
# define R300_ZS_EQUAL 3
# define R300_ZS_GEQUAL 4
# define R300_ZS_GREATER 5
# define R300_ZS_NOTEQUAL 6
# define R300_ZS_ALWAYS 7
# define R300_ZS_MASK 7
/* operations */
# define R300_ZS_KEEP 0
# define R300_ZS_ZERO 1
# define R300_ZS_REPLACE 2
# define R300_ZS_INCR 3
# define R300_ZS_DECR 4
# define R300_ZS_INVERT 5
# define R300_ZS_INCR_WRAP 6
# define R300_ZS_DECR_WRAP 7
# define R300_Z_FUNC_SHIFT 0
/* front and back refer to operations done for front
and back faces, i.e. separate stencil function support */
# define R300_S_FRONT_FUNC_SHIFT 3
# define R300_S_FRONT_SFAIL_OP_SHIFT 6
# define R300_S_FRONT_ZPASS_OP_SHIFT 9
# define R300_S_FRONT_ZFAIL_OP_SHIFT 12
# define R300_S_BACK_FUNC_SHIFT 15
# define R300_S_BACK_SFAIL_OP_SHIFT 18
# define R300_S_BACK_ZPASS_OP_SHIFT 21
# define R300_S_BACK_ZFAIL_OP_SHIFT 24
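/*
 * Illustrative sketch (not part of the original header): a LEQUAL depth
 * test with "keep" for every stencil operation on both faces, packed for
 * ZB_ZSTENCILCNTL. The helper is hypothetical; uint32_t assumes <stdint.h>.
 */
static inline uint32_t r300_zstencil_lequal_keep(void)
{
	return (R300_ZS_LEQUAL << R300_Z_FUNC_SHIFT) |
	       (R300_ZS_ALWAYS << R300_S_FRONT_FUNC_SHIFT) |
	       (R300_ZS_KEEP << R300_S_FRONT_SFAIL_OP_SHIFT) |
	       (R300_ZS_KEEP << R300_S_FRONT_ZPASS_OP_SHIFT) |
	       (R300_ZS_KEEP << R300_S_FRONT_ZFAIL_OP_SHIFT) |
	       (R300_ZS_ALWAYS << R300_S_BACK_FUNC_SHIFT) |
	       (R300_ZS_KEEP << R300_S_BACK_SFAIL_OP_SHIFT) |
	       (R300_ZS_KEEP << R300_S_BACK_ZPASS_OP_SHIFT) |
	       (R300_ZS_KEEP << R300_S_BACK_ZFAIL_OP_SHIFT);
}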
#define R300_ZB_STENCILREFMASK 0x4f08
# define R300_STENCILREF_SHIFT 0
# define R300_STENCILREF_MASK 0x000000ff
# define R300_STENCILMASK_SHIFT 8
# define R300_STENCILMASK_MASK 0x0000ff00
# define R300_STENCILWRITEMASK_SHIFT 16
# define R300_STENCILWRITEMASK_MASK 0x00ff0000
/* gap */
#define R300_ZB_FORMAT 0x4f10
# define R300_DEPTHFORMAT_16BIT_INT_Z (0 << 0)
# define R300_DEPTHFORMAT_16BIT_13E3 (1 << 0)
# define R300_DEPTHFORMAT_24BIT_INT_Z_8BIT_STENCIL (2 << 0)
/* reserved up to (15 << 0) */
# define R300_INVERT_13E3_LEADING_ONES (0 << 4)
# define R300_INVERT_13E3_LEADING_ZEROS (1 << 4)
#define R300_ZB_ZTOP 0x4F14
# define R300_ZTOP_DISABLE (0 << 0)
# define R300_ZTOP_ENABLE (1 << 0)
/* gap */
#define R300_ZB_ZCACHE_CTLSTAT 0x4f18
# define R300_ZB_ZCACHE_CTLSTAT_ZC_FLUSH_NO_EFFECT (0 << 0)
# define R300_ZB_ZCACHE_CTLSTAT_ZC_FLUSH_FLUSH_AND_FREE (1 << 0)
# define R300_ZB_ZCACHE_CTLSTAT_ZC_FREE_NO_EFFECT (0 << 1)
# define R300_ZB_ZCACHE_CTLSTAT_ZC_FREE_FREE (1 << 1)
# define R300_ZB_ZCACHE_CTLSTAT_ZC_BUSY_IDLE (0 << 31)
# define R300_ZB_ZCACHE_CTLSTAT_ZC_BUSY_BUSY (1U << 31)
#define R300_ZB_BW_CNTL 0x4f1c
# define R300_HIZ_DISABLE (0 << 0)
# define R300_HIZ_ENABLE (1 << 0)
# define R300_HIZ_MIN (0 << 1)
# define R300_HIZ_MAX (1 << 1)
# define R300_FAST_FILL_DISABLE (0 << 2)
# define R300_FAST_FILL_ENABLE (1 << 2)
# define R300_RD_COMP_DISABLE (0 << 3)
# define R300_RD_COMP_ENABLE (1 << 3)
# define R300_WR_COMP_DISABLE (0 << 4)
# define R300_WR_COMP_ENABLE (1 << 4)
# define R300_ZB_CB_CLEAR_RMW (0 << 5)
# define R300_ZB_CB_CLEAR_CACHE_LINEAR (1 << 5)
# define R300_FORCE_COMPRESSED_STENCIL_VALUE_DISABLE (0 << 6)
# define R300_FORCE_COMPRESSED_STENCIL_VALUE_ENABLE (1 << 6)
# define R500_ZEQUAL_OPTIMIZE_ENABLE (0 << 7)
# define R500_ZEQUAL_OPTIMIZE_DISABLE (1 << 7)
# define R500_SEQUAL_OPTIMIZE_ENABLE (0 << 8)
# define R500_SEQUAL_OPTIMIZE_DISABLE (1 << 8)
# define R500_BMASK_ENABLE (0 << 10)
# define R500_BMASK_DISABLE (1 << 10)
# define R500_HIZ_EQUAL_REJECT_DISABLE (0 << 11)
# define R500_HIZ_EQUAL_REJECT_ENABLE (1 << 11)
# define R500_HIZ_FP_EXP_BITS_DISABLE (0 << 12)
# define R500_HIZ_FP_EXP_BITS_1 (1 << 12)
# define R500_HIZ_FP_EXP_BITS_2 (2 << 12)
# define R500_HIZ_FP_EXP_BITS_3 (3 << 12)
# define R500_HIZ_FP_EXP_BITS_4 (4 << 12)
# define R500_HIZ_FP_EXP_BITS_5 (5 << 12)
# define R500_HIZ_FP_INVERT_LEADING_ONES (0 << 15)
# define R500_HIZ_FP_INVERT_LEADING_ZEROS (1 << 15)
# define R500_TILE_OVERWRITE_RECOMPRESSION_ENABLE (0 << 16)
# define R500_TILE_OVERWRITE_RECOMPRESSION_DISABLE (1 << 16)
# define R500_CONTIGUOUS_6XAA_SAMPLES_ENABLE (0 << 17)
# define R500_CONTIGUOUS_6XAA_SAMPLES_DISABLE (1 << 17)
# define R500_PEQ_PACKING_DISABLE (0 << 18)
# define R500_PEQ_PACKING_ENABLE (1 << 18)
# define R500_COVERED_PTR_MASKING_DISABLE (0 << 18)
# define R500_COVERED_PTR_MASKING_ENABLE (1 << 18)
/* gap */
/* Z Buffer Address Offset.
* Bits 31 to 5 are used for aligned Z buffer address offset for macro tiles.
*/
#define R300_ZB_DEPTHOFFSET 0x4f20
/* Z Buffer Pitch and Endian Control */
#define R300_ZB_DEPTHPITCH 0x4f24
# define R300_DEPTHPITCH_MASK 0x00003FFC
# define R300_DEPTHMACROTILE_DISABLE (0 << 16)
# define R300_DEPTHMACROTILE_ENABLE (1 << 16)
# define R300_DEPTHMICROTILE_LINEAR (0 << 17)
# define R300_DEPTHMICROTILE_TILED (1 << 17)
# define R300_DEPTHMICROTILE_TILED_SQUARE (2 << 17)
# define R300_DEPTHENDIAN_NO_SWAP (0 << 18)
# define R300_DEPTHENDIAN_WORD_SWAP (1 << 18)
# define R300_DEPTHENDIAN_DWORD_SWAP (2 << 18)
# define R300_DEPTHENDIAN_HALF_DWORD_SWAP (3 << 18)
/* Z Buffer Clear Value */
#define R300_ZB_DEPTHCLEARVALUE 0x4f28
#define R300_ZB_ZMASK_OFFSET 0x4f30
#define R300_ZB_ZMASK_PITCH 0x4f34
#define R300_ZB_ZMASK_WRINDEX 0x4f38
#define R300_ZB_ZMASK_DWORD 0x4f3c
#define R300_ZB_ZMASK_RDINDEX 0x4f40
/* Hierarchical Z Memory Offset */
#define R300_ZB_HIZ_OFFSET 0x4f44
/* Hierarchical Z Write Index */
#define R300_ZB_HIZ_WRINDEX 0x4f48
/* Hierarchical Z Data */
#define R300_ZB_HIZ_DWORD 0x4f4c
/* Hierarchical Z Read Index */
#define R300_ZB_HIZ_RDINDEX 0x4f50
/* Hierarchical Z Pitch */
#define R300_ZB_HIZ_PITCH 0x4f54
/* Z Buffer Z Pass Counter Data */
#define R300_ZB_ZPASS_DATA 0x4f58
/* Z Buffer Z Pass Counter Address */
#define R300_ZB_ZPASS_ADDR 0x4f5c
/* Depth buffer X and Y coordinate offset */
#define R300_ZB_DEPTHXY_OFFSET 0x4f60
# define R300_DEPTHX_OFFSET_SHIFT 1
# define R300_DEPTHX_OFFSET_MASK 0x000007FE
# define R300_DEPTHY_OFFSET_SHIFT 17
# define R300_DEPTHY_OFFSET_MASK 0x07FE0000
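/*
 * Illustrative sketch (not part of the original header): packing the
 * depth buffer X/Y coordinate offsets. The helper is hypothetical;
 * uint32_t assumes <stdint.h>.
 */
static inline uint32_t r300_depthxy_offset(uint32_t x, uint32_t y)
{
	return ((x << R300_DEPTHX_OFFSET_SHIFT) & R300_DEPTHX_OFFSET_MASK) |
	       ((y << R300_DEPTHY_OFFSET_SHIFT) & R300_DEPTHY_OFFSET_MASK);
}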
/* Sets the fifo sizes */
#define R500_ZB_FIFO_SIZE 0x4fd0
# define R500_OP_FIFO_SIZE_FULL (0 << 0)
# define R500_OP_FIFO_SIZE_HALF (1 << 0)
# define R500_OP_FIFO_SIZE_QUATER (2 << 0)
# define R500_OP_FIFO_SIZE_EIGTHS (4 << 0)
/* Stencil Reference Value and Mask for backfacing quads */
/* R300_ZB_STENCILREFMASK handles front face */
#define R500_ZB_STENCILREFMASK_BF 0x4fd4
# define R500_STENCILREF_SHIFT 0
# define R500_STENCILREF_MASK 0x000000ff
# define R500_STENCILMASK_SHIFT 8
# define R500_STENCILMASK_MASK 0x0000ff00
# define R500_STENCILWRITEMASK_SHIFT 16
# define R500_STENCILWRITEMASK_MASK 0x00ff0000
/* BEGIN: Vertex program instruction set */
/* Every instruction is four dwords long:
* DWORD 0: output and opcode
* DWORD 1: first argument
* DWORD 2: second argument
* DWORD 3: third argument
*
* Notes:
* - ABS r, a is implemented as MAX r, a, -a
* - MOV is implemented as ADD to zero
* - XPD is implemented as MUL + MAD
* - FLR is implemented as FRC + ADD
* - apparently, fglrx tries to schedule instructions so that there is at
* least one instruction between the write to a temporary and the first
* read from said temporary; however, violations of this scheduling are
* allowed
 * - register indices seem to be unrelated to OpenGL aliasing to
* conventional state
* - only one attribute and one parameter can be loaded at a time; however,
* the same attribute/parameter can be used for more than one argument
* - the second software argument for POW is the third hardware argument
* (no idea why)
* - MAD with only temporaries as input seems to use VPI_OUT_SELECT_MAD_2
*
* There is some magic surrounding LIT:
* The single argument is replicated across all three inputs, but swizzled:
* First argument: xyzy
* Second argument: xyzx
* Third argument: xyzw
* Whenever the result is used later in the fragment program, fglrx forces
* x and w to be 1.0 in the input selection; I don't know whether this is
* strictly necessary
*/
#define R300_VPI_OUT_OP_DOT (1 << 0)
#define R300_VPI_OUT_OP_MUL (2 << 0)
#define R300_VPI_OUT_OP_ADD (3 << 0)
#define R300_VPI_OUT_OP_MAD (4 << 0)
#define R300_VPI_OUT_OP_DST (5 << 0)
#define R300_VPI_OUT_OP_FRC (6 << 0)
#define R300_VPI_OUT_OP_MAX (7 << 0)
#define R300_VPI_OUT_OP_MIN (8 << 0)
#define R300_VPI_OUT_OP_SGE (9 << 0)
#define R300_VPI_OUT_OP_SLT (10 << 0)
/* Used in GL_POINT_DISTANCE_ATTENUATION_ARB, vector(scalar, vector) */
#define R300_VPI_OUT_OP_UNK12 (12 << 0)
#define R300_VPI_OUT_OP_ARL (13 << 0)
#define R300_VPI_OUT_OP_EXP (65 << 0)
#define R300_VPI_OUT_OP_LOG (66 << 0)
/* Used in fog computations, scalar(scalar) */
#define R300_VPI_OUT_OP_UNK67 (67 << 0)
#define R300_VPI_OUT_OP_LIT (68 << 0)
#define R300_VPI_OUT_OP_POW (69 << 0)
#define R300_VPI_OUT_OP_RCP (70 << 0)
#define R300_VPI_OUT_OP_RSQ (72 << 0)
/* Used in GL_POINT_DISTANCE_ATTENUATION_ARB, scalar(scalar) */
#define R300_VPI_OUT_OP_UNK73 (73 << 0)
#define R300_VPI_OUT_OP_EX2 (75 << 0)
#define R300_VPI_OUT_OP_LG2 (76 << 0)
#define R300_VPI_OUT_OP_MAD_2 (128 << 0)
/* all temps, vector(scalar, vector, vector) */
#define R300_VPI_OUT_OP_UNK129 (129 << 0)
#define R300_VPI_OUT_REG_CLASS_TEMPORARY (0 << 8)
#define R300_VPI_OUT_REG_CLASS_ADDR (1 << 8)
#define R300_VPI_OUT_REG_CLASS_RESULT (2 << 8)
#define R300_VPI_OUT_REG_CLASS_MASK (31 << 8)
#define R300_VPI_OUT_REG_INDEX_SHIFT 13
/* GUESS based on fglrx native limits */
#define R300_VPI_OUT_REG_INDEX_MASK (31 << 13)
#define R300_VPI_OUT_WRITE_X (1 << 20)
#define R300_VPI_OUT_WRITE_Y (1 << 21)
#define R300_VPI_OUT_WRITE_Z (1 << 22)
#define R300_VPI_OUT_WRITE_W (1 << 23)
#define R300_VPI_IN_REG_CLASS_TEMPORARY (0 << 0)
#define R300_VPI_IN_REG_CLASS_ATTRIBUTE (1 << 0)
#define R300_VPI_IN_REG_CLASS_PARAMETER (2 << 0)
#define R300_VPI_IN_REG_CLASS_NONE (9 << 0)
#define R300_VPI_IN_REG_CLASS_MASK (31 << 0)
#define R300_VPI_IN_REG_INDEX_SHIFT 5
/* GUESS based on fglrx native limits */
#define R300_VPI_IN_REG_INDEX_MASK (255 << 5)
/* The R300 can select components from the input register arbitrarily.
* Use the following constants, shifted by the component shift you
* want to select
*/
#define R300_VPI_IN_SELECT_X 0
#define R300_VPI_IN_SELECT_Y 1
#define R300_VPI_IN_SELECT_Z 2
#define R300_VPI_IN_SELECT_W 3
#define R300_VPI_IN_SELECT_ZERO 4
#define R300_VPI_IN_SELECT_ONE 5
#define R300_VPI_IN_SELECT_MASK 7
#define R300_VPI_IN_X_SHIFT 13
#define R300_VPI_IN_Y_SHIFT 16
#define R300_VPI_IN_Z_SHIFT 19
#define R300_VPI_IN_W_SHIFT 22
#define R300_VPI_IN_NEG_X (1 << 25)
#define R300_VPI_IN_NEG_Y (1 << 26)
#define R300_VPI_IN_NEG_Z (1 << 27)
#define R300_VPI_IN_NEG_W (1 << 28)
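/*
 * Illustrative sketch (not part of the original header): encoding a source
 * operand as "temporary <index>, swizzled .yzxw" using the component
 * selects above; the same pattern produces the xyzy/xyzx swizzles that
 * fglrx uses around LIT. The helper is hypothetical; uint32_t assumes
 * <stdint.h>.
 */
static inline uint32_t r300_vpi_src_temp_yzxw(uint32_t index)
{
	return R300_VPI_IN_REG_CLASS_TEMPORARY |
	       (index << R300_VPI_IN_REG_INDEX_SHIFT) |
	       (R300_VPI_IN_SELECT_Y << R300_VPI_IN_X_SHIFT) |
	       (R300_VPI_IN_SELECT_Z << R300_VPI_IN_Y_SHIFT) |
	       (R300_VPI_IN_SELECT_X << R300_VPI_IN_Z_SHIFT) |
	       (R300_VPI_IN_SELECT_W << R300_VPI_IN_W_SHIFT);
}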
/* END: Vertex program instruction set */
/* BEGIN: Packet 3 commands */
/* A primitive emission dword. */
#define R300_PRIM_TYPE_NONE (0 << 0)
#define R300_PRIM_TYPE_POINT (1 << 0)
#define R300_PRIM_TYPE_LINE (2 << 0)
#define R300_PRIM_TYPE_LINE_STRIP (3 << 0)
#define R300_PRIM_TYPE_TRI_LIST (4 << 0)
#define R300_PRIM_TYPE_TRI_FAN (5 << 0)
#define R300_PRIM_TYPE_TRI_STRIP (6 << 0)
#define R300_PRIM_TYPE_TRI_TYPE2 (7 << 0)
#define R300_PRIM_TYPE_RECT_LIST (8 << 0)
#define R300_PRIM_TYPE_3VRT_POINT_LIST (9 << 0)
#define R300_PRIM_TYPE_3VRT_LINE_LIST (10 << 0)
/* GUESS (based on r200) */
#define R300_PRIM_TYPE_POINT_SPRITES (11 << 0)
#define R300_PRIM_TYPE_LINE_LOOP (12 << 0)
#define R300_PRIM_TYPE_QUADS (13 << 0)
#define R300_PRIM_TYPE_QUAD_STRIP (14 << 0)
#define R300_PRIM_TYPE_POLYGON (15 << 0)
#define R300_PRIM_TYPE_MASK 0xF
#define R300_PRIM_WALK_IND (1 << 4)
#define R300_PRIM_WALK_LIST (2 << 4)
#define R300_PRIM_WALK_RING (3 << 4)
#define R300_PRIM_WALK_MASK (3 << 4)
/* GUESS (based on r200) */
#define R300_PRIM_COLOR_ORDER_BGRA (0 << 6)
#define R300_PRIM_COLOR_ORDER_RGBA (1 << 6)
#define R300_PRIM_NUM_VERTICES_SHIFT 16
#define R300_PRIM_NUM_VERTICES_MASK 0xffff
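/*
 * Illustrative sketch (not part of the original header): a primitive
 * emission dword for an indexed triangle list of num_verts vertices.
 * The helper is hypothetical; uint32_t assumes <stdint.h>.
 */
static inline uint32_t r300_prim_emit_tri_list(uint32_t num_verts)
{
	return R300_PRIM_TYPE_TRI_LIST |
	       R300_PRIM_WALK_IND |
	       ((num_verts & R300_PRIM_NUM_VERTICES_MASK) <<
		R300_PRIM_NUM_VERTICES_SHIFT);
}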
/* Draw a primitive from vertex data in arrays loaded via 3D_LOAD_VBPNTR.
* Two parameter dwords:
 * 0. The first parameter always appears to be 0.
* 1. The second parameter is a standard primitive emission dword.
*/
#define R300_PACKET3_3D_DRAW_VBUF 0x00002800
/* Specify the full set of vertex arrays as (address, stride).
* The first parameter is the number of vertex arrays specified.
* The rest of the command is a variable length list of blocks, where
* each block is three dwords long and specifies two arrays.
 * The first dword of a block is split into two words: the less significant
 * word refers to the first array, the more significant word to the second
 * array in the block.
* The low byte of each word contains the size of an array entry in dwords,
* the high byte contains the stride of the array.
* The second dword of a block contains the pointer to the first array,
* the third dword of a block contains the pointer to the second array.
* Note that if the total number of arrays is odd, the third dword of
* the last block is omitted.
*/
#define R300_PACKET3_3D_LOAD_VBPNTR 0x00002F00
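/*
 * Illustrative sketch (not part of the original header): the first dword
 * of a 3D_LOAD_VBPNTR block, combining (size, stride) for two arrays as
 * described above. Sizes and strides are in dwords; the helper is
 * hypothetical; uint32_t assumes <stdint.h>.
 */
static inline uint32_t r300_vbpntr_sizes(uint32_t size0, uint32_t stride0,
					 uint32_t size1, uint32_t stride1)
{
	return (size0 & 0xff) | ((stride0 & 0xff) << 8) |
	       ((size1 & 0xff) << 16) | ((stride1 & 0xff) << 24);
}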
#define R300_PACKET3_INDX_BUFFER 0x00003300
# define R300_EB_UNK1_SHIFT 24
# define R300_EB_UNK1 (0x80<<24)
# define R300_EB_UNK2 0x0810
#define R300_PACKET3_3D_DRAW_VBUF_2 0x00003400
#define R300_PACKET3_3D_DRAW_INDX_2 0x00003600
/* END: Packet 3 commands */
/* Color formats for 2d packets
*/
#define R300_CP_COLOR_FORMAT_CI8 2
#define R300_CP_COLOR_FORMAT_ARGB1555 3
#define R300_CP_COLOR_FORMAT_RGB565 4
#define R300_CP_COLOR_FORMAT_ARGB8888 6
#define R300_CP_COLOR_FORMAT_RGB332 7
#define R300_CP_COLOR_FORMAT_RGB8 9
#define R300_CP_COLOR_FORMAT_ARGB4444 15
/*
* CP type-3 packets
*/
#define R300_CP_CMD_BITBLT_MULTI 0xC0009B00
#define R500_VAP_INDEX_OFFSET 0x208c
#define R500_GA_US_VECTOR_INDEX 0x4250
#define R500_GA_US_VECTOR_DATA 0x4254
#define R500_RS_IP_0 0x4074
#define R500_RS_INST_0 0x4320
#define R500_US_CONFIG 0x4600
#define R500_US_FC_CTRL 0x4624
#define R500_US_CODE_ADDR 0x4630
#define R500_RB3D_COLOR_CLEAR_VALUE_AR 0x46c0
#define R500_RB3D_CONSTANT_COLOR_AR 0x4ef8
#define R300_SU_REG_DEST 0x42c8
#define RV530_FG_ZBREG_DEST 0x4be8
#define R300_ZB_ZPASS_DATA 0x4f58
#define R300_ZB_ZPASS_ADDR 0x4f5c
#endif /* _R300_REG_H */
Index: head/sys/dev/drm2/radeon/radeon_device.c
===================================================================
--- head/sys/dev/drm2/radeon/radeon_device.c (revision 300049)
+++ head/sys/dev/drm2/radeon/radeon_device.c (revision 300050)
@@ -1,1569 +1,1569 @@
/*
* Copyright 2008 Advanced Micro Devices, Inc.
* Copyright 2008 Red Hat Inc.
* Copyright 2009 Jerome Glisse.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Dave Airlie
* Alex Deucher
* Jerome Glisse
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/drm2/drmP.h>
#include <dev/drm2/drm_crtc_helper.h>
#include <dev/drm2/radeon/radeon_drm.h>
#include "radeon_reg.h"
#include "radeon.h"
#include "atom.h"
static const char radeon_family_name[][16] = {
"R100",
"RV100",
"RS100",
"RV200",
"RS200",
"R200",
"RV250",
"RS300",
"RV280",
"R300",
"R350",
"RV350",
"RV380",
"R420",
"R423",
"RV410",
"RS400",
"RS480",
"RS600",
"RS690",
"RS740",
"RV515",
"R520",
"RV530",
"RV560",
"RV570",
"R580",
"R600",
"RV610",
"RV630",
"RV670",
"RV620",
"RV635",
"RS780",
"RS880",
"RV770",
"RV730",
"RV710",
"RV740",
"CEDAR",
"REDWOOD",
"JUNIPER",
"CYPRESS",
"HEMLOCK",
"PALM",
"SUMO",
"SUMO2",
"BARTS",
"TURKS",
"CAICOS",
"CAYMAN",
"ARUBA",
"TAHITI",
"PITCAIRN",
"VERDE",
"LAST",
};
/**
* radeon_surface_init - Clear GPU surface registers.
*
* @rdev: radeon_device pointer
*
* Clear GPU surface registers (r1xx-r5xx).
*/
void radeon_surface_init(struct radeon_device *rdev)
{
/* FIXME: check this out */
if (rdev->family < CHIP_R600) {
int i;
for (i = 0; i < RADEON_GEM_MAX_SURFACES; i++) {
if (rdev->surface_regs[i].bo)
radeon_bo_get_surface_reg(rdev->surface_regs[i].bo);
else
radeon_clear_surface_reg(rdev, i);
}
/* enable surfaces */
WREG32(RADEON_SURFACE_CNTL, 0);
}
}
/*
* GPU scratch registers helpers function.
*/
/**
* radeon_scratch_init - Init scratch register driver information.
*
* @rdev: radeon_device pointer
*
* Init CP scratch register driver information (r1xx-r5xx)
*/
void radeon_scratch_init(struct radeon_device *rdev)
{
int i;
/* FIXME: check this out */
if (rdev->family < CHIP_R300) {
rdev->scratch.num_reg = 5;
} else {
rdev->scratch.num_reg = 7;
}
rdev->scratch.reg_base = RADEON_SCRATCH_REG0;
for (i = 0; i < rdev->scratch.num_reg; i++) {
rdev->scratch.free[i] = true;
rdev->scratch.reg[i] = rdev->scratch.reg_base + (i * 4);
}
}
/**
* radeon_scratch_get - Allocate a scratch register
*
* @rdev: radeon_device pointer
* @reg: scratch register mmio offset
*
* Allocate a CP scratch register for use by the driver (all asics).
* Returns 0 on success or -EINVAL on failure.
*/
int radeon_scratch_get(struct radeon_device *rdev, uint32_t *reg)
{
int i;
for (i = 0; i < rdev->scratch.num_reg; i++) {
if (rdev->scratch.free[i]) {
rdev->scratch.free[i] = false;
*reg = rdev->scratch.reg[i];
return 0;
}
}
return -EINVAL;
}
/**
* radeon_scratch_free - Free a scratch register
*
* @rdev: radeon_device pointer
* @reg: scratch register mmio offset
*
* Free a CP scratch register allocated for use by the driver (all asics)
*/
void radeon_scratch_free(struct radeon_device *rdev, uint32_t reg)
{
int i;
for (i = 0; i < rdev->scratch.num_reg; i++) {
if (rdev->scratch.reg[i] == reg) {
rdev->scratch.free[i] = true;
return;
}
}
}
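/*
 * Illustrative usage sketch (not part of the original file): a typical
 * allocate/use/free cycle for a CP scratch register. The function and the
 * 0xCAFEDEAD marker value are hypothetical.
 */
static void radeon_scratch_example(struct radeon_device *rdev)
{
	uint32_t reg;

	if (radeon_scratch_get(rdev, &reg))
		return;	/* no scratch register is free */
	WREG32(reg, 0xCAFEDEAD);	/* e.g. have the CP write it back */
	radeon_scratch_free(rdev, reg);
}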
/*
* radeon_wb_*()
- * Writeback is the the method by which the the GPU updates special pages
+ * Writeback is the method by which the GPU updates special pages
* in memory with the status of certain GPU events (fences, ring pointers,
* etc.).
*/
/**
* radeon_wb_disable - Disable Writeback
*
* @rdev: radeon_device pointer
*
* Disables Writeback (all asics). Used for suspend.
*/
void radeon_wb_disable(struct radeon_device *rdev)
{
int r;
if (rdev->wb.wb_obj) {
r = radeon_bo_reserve(rdev->wb.wb_obj, false);
if (unlikely(r != 0))
return;
radeon_bo_kunmap(rdev->wb.wb_obj);
radeon_bo_unpin(rdev->wb.wb_obj);
radeon_bo_unreserve(rdev->wb.wb_obj);
}
rdev->wb.enabled = false;
}
/**
* radeon_wb_fini - Disable Writeback and free memory
*
* @rdev: radeon_device pointer
*
* Disables Writeback and frees the Writeback memory (all asics).
* Used at driver shutdown.
*/
void radeon_wb_fini(struct radeon_device *rdev)
{
radeon_wb_disable(rdev);
if (rdev->wb.wb_obj) {
radeon_bo_unref(&rdev->wb.wb_obj);
rdev->wb.wb = NULL;
rdev->wb.wb_obj = NULL;
}
}
/**
 * radeon_wb_init - Init Writeback driver info and allocate memory
*
* @rdev: radeon_device pointer
*
 * Initializes Writeback and allocates the Writeback memory (all asics).
* Used at driver startup.
* Returns 0 on success or an -error on failure.
*/
int radeon_wb_init(struct radeon_device *rdev)
{
int r;
void *wb_ptr; /* FreeBSD: to please GCC 4.2. */
if (rdev->wb.wb_obj == NULL) {
r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT, NULL, &rdev->wb.wb_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create WB bo failed\n", r);
return r;
}
}
r = radeon_bo_reserve(rdev->wb.wb_obj, false);
if (unlikely(r != 0)) {
radeon_wb_fini(rdev);
return r;
}
r = radeon_bo_pin(rdev->wb.wb_obj, RADEON_GEM_DOMAIN_GTT,
&rdev->wb.gpu_addr);
if (r) {
radeon_bo_unreserve(rdev->wb.wb_obj);
dev_warn(rdev->dev, "(%d) pin WB bo failed\n", r);
radeon_wb_fini(rdev);
return r;
}
wb_ptr = &rdev->wb.wb;
r = radeon_bo_kmap(rdev->wb.wb_obj, wb_ptr);
radeon_bo_unreserve(rdev->wb.wb_obj);
if (r) {
dev_warn(rdev->dev, "(%d) map WB bo failed\n", r);
radeon_wb_fini(rdev);
return r;
}
/* clear wb memory */
memset(*(void **)wb_ptr, 0, RADEON_GPU_PAGE_SIZE);
/* disable event_write fences */
rdev->wb.use_event = false;
/* disabled via module param */
if (radeon_no_wb == 1) {
rdev->wb.enabled = false;
} else {
if (rdev->flags & RADEON_IS_AGP) {
/* often unreliable on AGP */
rdev->wb.enabled = false;
} else if (rdev->family < CHIP_R300) {
/* often unreliable on pre-r300 */
rdev->wb.enabled = false;
} else {
rdev->wb.enabled = true;
/* event_write fences are only available on r600+ */
if (rdev->family >= CHIP_R600) {
rdev->wb.use_event = true;
}
}
}
/* always use writeback/events on NI, APUs */
if (rdev->family >= CHIP_PALM) {
rdev->wb.enabled = true;
rdev->wb.use_event = true;
}
dev_info(rdev->dev, "WB %sabled\n", rdev->wb.enabled ? "en" : "dis");
return 0;
}
/**
* radeon_vram_location - try to find VRAM location
 * @rdev: radeon device structure holding all necessary information
 * @mc: memory controller structure holding memory information
* @base: base address at which to put VRAM
*
 * Function will try to place VRAM at the base address provided
* as parameter (which is so far either PCI aperture address or
* for IGP TOM base address).
*
 * If there is not enough space to fit the invisible VRAM in the 32-bit
* address space then we limit the VRAM size to the aperture.
*
* If we are using AGP and if the AGP aperture doesn't allow us to have
 * room for all the VRAM then we restrict the VRAM to the PCI aperture
* size and print a warning.
*
 * This function never fails; the worst case is limiting VRAM.
*
* Note: GTT start, end, size should be initialized before calling this
* function on AGP platform.
*
* Note: We don't explicitly enforce VRAM start to be aligned on VRAM size,
* this shouldn't be a problem as we are using the PCI aperture as a reference.
* Otherwise this would be needed for rv280, all r3xx, and all r4xx, but
* not IGP.
*
 * Note: we use mc_vram_size because on some boards we need to program the mc
 * to cover the whole aperture even if the VRAM size is smaller than the
 * aperture size (Novell bug 204882, along with lots of Ubuntu ones).
*
 * Note: when limiting VRAM it's safe to overwrite real_vram_size because
 * we are not in the case where real_vram_size is smaller than mc_vram_size
 * (i.e. not affected by the bogus hw of Novell bug 204882, along with lots
 * of Ubuntu ones).
*
 * Note: the IGP TOM addr should be the same as the aperture addr; we don't
 * explicitly check for that, though.
*
* FIXME: when reducing VRAM size align new size on power of 2.
*/
void radeon_vram_location(struct radeon_device *rdev, struct radeon_mc *mc, u64 base)
{
uint64_t limit = (uint64_t)radeon_vram_limit << 20;
mc->vram_start = base;
if (mc->mc_vram_size > (0xFFFFFFFF - base + 1)) {
dev_warn(rdev->dev, "limiting VRAM to PCI aperture size\n");
mc->real_vram_size = mc->aper_size;
mc->mc_vram_size = mc->aper_size;
}
mc->vram_end = mc->vram_start + mc->mc_vram_size - 1;
if (rdev->flags & RADEON_IS_AGP && mc->vram_end > mc->gtt_start && mc->vram_start <= mc->gtt_end) {
dev_warn(rdev->dev, "limiting VRAM to PCI aperture size\n");
mc->real_vram_size = mc->aper_size;
mc->mc_vram_size = mc->aper_size;
}
mc->vram_end = mc->vram_start + mc->mc_vram_size - 1;
if (limit && limit < mc->real_vram_size)
mc->real_vram_size = limit;
dev_info(rdev->dev, "VRAM: %juM 0x%016jX - 0x%016jX (%juM used)\n",
(uintmax_t)mc->mc_vram_size >> 20, (uintmax_t)mc->vram_start,
(uintmax_t)mc->vram_end, (uintmax_t)mc->real_vram_size >> 20);
}
/**
* radeon_gtt_location - try to find GTT location
 * @rdev: radeon device structure holding all necessary information
 * @mc: memory controller structure holding memory information
*
 * Function will try to place GTT before or after VRAM.
*
 * If the GTT size is bigger than the space left, then we adjust the GTT
 * size. Thus this function never fails.
*
* FIXME: when reducing GTT size align new size on power of 2.
*/
void radeon_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc)
{
u64 size_af, size_bf;
size_af = ((0xFFFFFFFF - mc->vram_end) + mc->gtt_base_align) & ~mc->gtt_base_align;
size_bf = mc->vram_start & ~mc->gtt_base_align;
if (size_bf > size_af) {
if (mc->gtt_size > size_bf) {
dev_warn(rdev->dev, "limiting GTT\n");
mc->gtt_size = size_bf;
}
mc->gtt_start = (mc->vram_start & ~mc->gtt_base_align) - mc->gtt_size;
} else {
if (mc->gtt_size > size_af) {
dev_warn(rdev->dev, "limiting GTT\n");
mc->gtt_size = size_af;
}
mc->gtt_start = (mc->vram_end + 1 + mc->gtt_base_align) & ~mc->gtt_base_align;
}
mc->gtt_end = mc->gtt_start + mc->gtt_size - 1;
dev_info(rdev->dev, "GTT: %juM 0x%016jX - 0x%016jX\n",
(uintmax_t)mc->gtt_size >> 20, (uintmax_t)mc->gtt_start, (uintmax_t)mc->gtt_end);
}
/*
* GPU helpers function.
*/
/**
* radeon_card_posted - check if the hw has already been initialized
*
* @rdev: radeon_device pointer
*
* Check if the asic has been initialized (all asics).
* Used at driver startup.
* Returns true if initialized or false if not.
*/
bool radeon_card_posted(struct radeon_device *rdev)
{
uint32_t reg;
#ifdef FREEBSD_WIP
if (efi_enabled(EFI_BOOT) &&
rdev->dev->pci_subvendor == PCI_VENDOR_ID_APPLE)
return false;
#endif /* FREEBSD_WIP */
/* first check CRTCs */
if (ASIC_IS_DCE41(rdev)) {
reg = RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET) |
RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC1_REGISTER_OFFSET);
if (reg & EVERGREEN_CRTC_MASTER_EN)
return true;
} else if (ASIC_IS_DCE4(rdev)) {
reg = RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET) |
RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC1_REGISTER_OFFSET) |
RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC2_REGISTER_OFFSET) |
RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC3_REGISTER_OFFSET) |
RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET) |
RREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET);
if (reg & EVERGREEN_CRTC_MASTER_EN)
return true;
} else if (ASIC_IS_AVIVO(rdev)) {
reg = RREG32(AVIVO_D1CRTC_CONTROL) |
RREG32(AVIVO_D2CRTC_CONTROL);
if (reg & AVIVO_CRTC_EN) {
return true;
}
} else {
reg = RREG32(RADEON_CRTC_GEN_CNTL) |
RREG32(RADEON_CRTC2_GEN_CNTL);
if (reg & RADEON_CRTC_EN) {
return true;
}
}
/* then check MEM_SIZE, in case the crtcs are off */
if (rdev->family >= CHIP_R600)
reg = RREG32(R600_CONFIG_MEMSIZE);
else
reg = RREG32(RADEON_CONFIG_MEMSIZE);
if (reg)
return true;
return false;
}
/**
* radeon_update_bandwidth_info - update display bandwidth params
*
* @rdev: radeon_device pointer
*
* Used when sclk/mclk are switched or display modes are set.
 * The params are used to calculate display watermarks (all asics).
*/
void radeon_update_bandwidth_info(struct radeon_device *rdev)
{
fixed20_12 a;
u32 sclk = rdev->pm.current_sclk;
u32 mclk = rdev->pm.current_mclk;
/* sclk/mclk in Mhz */
a.full = dfixed_const(100);
rdev->pm.sclk.full = dfixed_const(sclk);
rdev->pm.sclk.full = dfixed_div(rdev->pm.sclk, a);
rdev->pm.mclk.full = dfixed_const(mclk);
rdev->pm.mclk.full = dfixed_div(rdev->pm.mclk, a);
if (rdev->flags & RADEON_IS_IGP) {
a.full = dfixed_const(16);
/* core_bandwidth = sclk(Mhz) * 16 */
rdev->pm.core_bandwidth.full = dfixed_div(rdev->pm.sclk, a);
}
}
/**
* radeon_boot_test_post_card - check and possibly initialize the hw
*
* @rdev: radeon_device pointer
*
* Check if the asic is initialized and if not, attempt to initialize
* it (all asics).
* Returns true if initialized or false if not.
*/
bool radeon_boot_test_post_card(struct radeon_device *rdev)
{
if (radeon_card_posted(rdev))
return true;
if (rdev->bios) {
DRM_INFO("GPU not posted. posting now...\n");
if (rdev->is_atom_bios)
atom_asic_init(rdev->mode_info.atom_context);
else
radeon_combios_asic_init(rdev->ddev);
return true;
} else {
dev_err(rdev->dev, "Card not posted and no BIOS - ignoring\n");
return false;
}
}
/**
* radeon_dummy_page_init - init dummy page used by the driver
*
* @rdev: radeon_device pointer
*
* Allocate the dummy page used by the driver (all asics).
* This dummy page is used by the driver as a filler for gart entries
* when pages are taken out of the GART
* Returns 0 on success, -ENOMEM on failure.
*/
int radeon_dummy_page_init(struct radeon_device *rdev)
{
if (rdev->dummy_page.dmah)
return 0;
rdev->dummy_page.dmah = drm_pci_alloc(rdev->ddev,
PAGE_SIZE, PAGE_SIZE, BUS_SPACE_MAXADDR_32BIT);
if (rdev->dummy_page.dmah == NULL)
return -ENOMEM;
rdev->dummy_page.addr = rdev->dummy_page.dmah->busaddr;
return 0;
}
/**
* radeon_dummy_page_fini - free dummy page used by the driver
*
* @rdev: radeon_device pointer
*
* Frees the dummy page used by the driver (all asics).
*/
void radeon_dummy_page_fini(struct radeon_device *rdev)
{
if (rdev->dummy_page.dmah == NULL)
return;
drm_pci_free(rdev->ddev, rdev->dummy_page.dmah);
rdev->dummy_page.dmah = NULL;
rdev->dummy_page.addr = 0;
}
/* ATOM accessor methods */
/*
* ATOM is an interpreted byte code stored in tables in the vbios. The
* driver registers callbacks to access registers and the interpreter
 * in the driver parses the tables and executes them to program specific
* actions (set display modes, asic init, etc.). See radeon_atombios.c,
* atombios.h, and atom.c
*/
/**
* cail_pll_read - read PLL register
*
* @info: atom card_info pointer
* @reg: PLL register offset
*
* Provides a PLL register accessor for the atom interpreter (r4xx+).
* Returns the value of the PLL register.
*/
static uint32_t cail_pll_read(struct card_info *info, uint32_t reg)
{
struct radeon_device *rdev = info->dev->dev_private;
uint32_t r;
r = rdev->pll_rreg(rdev, reg);
return r;
}
/**
* cail_pll_write - write PLL register
*
* @info: atom card_info pointer
* @reg: PLL register offset
* @val: value to write to the pll register
*
* Provides a PLL register accessor for the atom interpreter (r4xx+).
*/
static void cail_pll_write(struct card_info *info, uint32_t reg, uint32_t val)
{
struct radeon_device *rdev = info->dev->dev_private;
rdev->pll_wreg(rdev, reg, val);
}
/**
* cail_mc_read - read MC (Memory Controller) register
*
* @info: atom card_info pointer
* @reg: MC register offset
*
* Provides an MC register accessor for the atom interpreter (r4xx+).
* Returns the value of the MC register.
*/
static uint32_t cail_mc_read(struct card_info *info, uint32_t reg)
{
struct radeon_device *rdev = info->dev->dev_private;
uint32_t r;
r = rdev->mc_rreg(rdev, reg);
return r;
}
/**
* cail_mc_write - write MC (Memory Controller) register
*
* @info: atom card_info pointer
* @reg: MC register offset
 * @val: value to write to the MC register
 *
 * Provides an MC register accessor for the atom interpreter (r4xx+).
*/
static void cail_mc_write(struct card_info *info, uint32_t reg, uint32_t val)
{
struct radeon_device *rdev = info->dev->dev_private;
rdev->mc_wreg(rdev, reg, val);
}
/**
* cail_reg_write - write MMIO register
*
* @info: atom card_info pointer
* @reg: MMIO register offset
 * @val: value to write to the MMIO register
 *
 * Provides an MMIO register accessor for the atom interpreter (r4xx+).
*/
static void cail_reg_write(struct card_info *info, uint32_t reg, uint32_t val)
{
struct radeon_device *rdev = info->dev->dev_private;
WREG32(reg*4, val);
}
/**
* cail_reg_read - read MMIO register
*
* @info: atom card_info pointer
* @reg: MMIO register offset
*
* Provides an MMIO register accessor for the atom interpreter (r4xx+).
* Returns the value of the MMIO register.
*/
static uint32_t cail_reg_read(struct card_info *info, uint32_t reg)
{
struct radeon_device *rdev = info->dev->dev_private;
uint32_t r;
r = RREG32(reg*4);
return r;
}
/**
* cail_ioreg_write - write IO register
*
* @info: atom card_info pointer
* @reg: IO register offset
 * @val: value to write to the IO register
 *
 * Provides an IO register accessor for the atom interpreter (r4xx+).
*/
static void cail_ioreg_write(struct card_info *info, uint32_t reg, uint32_t val)
{
struct radeon_device *rdev = info->dev->dev_private;
WREG32_IO(reg*4, val);
}
/**
* cail_ioreg_read - read IO register
*
* @info: atom card_info pointer
* @reg: IO register offset
*
* Provides an IO register accessor for the atom interpreter (r4xx+).
* Returns the value of the IO register.
*/
static uint32_t cail_ioreg_read(struct card_info *info, uint32_t reg)
{
struct radeon_device *rdev = info->dev->dev_private;
uint32_t r;
r = RREG32_IO(reg*4);
return r;
}
/**
* radeon_atombios_init - init the driver info and callbacks for atombios
*
* @rdev: radeon_device pointer
*
* Initializes the driver info and register access callbacks for the
* ATOM interpreter (r4xx+).
* Returns 0 on success, -ENOMEM on failure.
* Called at driver startup.
*/
int radeon_atombios_init(struct radeon_device *rdev)
{
struct card_info *atom_card_info =
malloc(sizeof(struct card_info),
DRM_MEM_DRIVER, M_NOWAIT | M_ZERO);
if (!atom_card_info)
return -ENOMEM;
rdev->mode_info.atom_card_info = atom_card_info;
atom_card_info->dev = rdev->ddev;
atom_card_info->reg_read = cail_reg_read;
atom_card_info->reg_write = cail_reg_write;
/* needed for iio ops */
if (rdev->rio_mem) {
atom_card_info->ioreg_read = cail_ioreg_read;
atom_card_info->ioreg_write = cail_ioreg_write;
} else {
DRM_ERROR("Unable to find PCI I/O BAR; using MMIO for ATOM IIO\n");
atom_card_info->ioreg_read = cail_reg_read;
atom_card_info->ioreg_write = cail_reg_write;
}
atom_card_info->mc_read = cail_mc_read;
atom_card_info->mc_write = cail_mc_write;
atom_card_info->pll_read = cail_pll_read;
atom_card_info->pll_write = cail_pll_write;
rdev->mode_info.atom_context = atom_parse(atom_card_info, rdev->bios);
sx_init(&rdev->mode_info.atom_context->mutex,
"drm__radeon_device__mode_info__atom_context__mutex");
radeon_atom_initialize_bios_scratch_regs(rdev->ddev);
atom_allocate_fb_scratch(rdev->mode_info.atom_context);
return 0;
}
/**
* radeon_atombios_fini - free the driver info and callbacks for atombios
*
* @rdev: radeon_device pointer
*
* Frees the driver info and register access callbacks for the ATOM
* interpreter (r4xx+).
* Called at driver shutdown.
*/
void radeon_atombios_fini(struct radeon_device *rdev)
{
if (rdev->mode_info.atom_context) {
free(rdev->mode_info.atom_context->scratch, DRM_MEM_DRIVER);
atom_destroy(rdev->mode_info.atom_context);
}
free(rdev->mode_info.atom_card_info, DRM_MEM_DRIVER);
}
/* COMBIOS */
/*
* COMBIOS is the bios format prior to ATOM. It provides
* command tables similar to ATOM, but doesn't have a unified
* parser. See radeon_combios.c
*/
/**
* radeon_combios_init - init the driver info for combios
*
* @rdev: radeon_device pointer
*
* Initializes the driver info for combios (r1xx-r3xx).
* Returns 0 on success.
* Called at driver startup.
*/
int radeon_combios_init(struct radeon_device *rdev)
{
radeon_combios_initialize_bios_scratch_regs(rdev->ddev);
return 0;
}
/**
* radeon_combios_fini - free the driver info for combios
*
* @rdev: radeon_device pointer
*
* Frees the driver info for combios (r1xx-r3xx).
* Called at driver shutdown.
*/
void radeon_combios_fini(struct radeon_device *rdev)
{
}
#ifdef FREEBSD_WIP
/* if we get transitioned to only one device, take VGA back */
/**
* radeon_vga_set_decode - enable/disable vga decode
*
* @cookie: radeon_device pointer
* @state: enable/disable vga decode
*
* Enable/disable vga decode (all asics).
* Returns VGA resource flags.
*/
static unsigned int radeon_vga_set_decode(void *cookie, bool state)
{
struct radeon_device *rdev = cookie;
radeon_vga_set_state(rdev, state);
if (state)
return VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM |
VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
else
return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
}
#endif /* FREEBSD_WIP */
/**
* radeon_check_pot_argument - check that argument is a power of two
*
* @arg: value to check
*
* Validates that a certain argument is a power of two (all asics).
* Returns true if argument is valid.
*/
static bool radeon_check_pot_argument(int arg)
{
return (arg & (arg - 1)) == 0;
}
/**
* radeon_check_arguments - validate module params
*
* @rdev: radeon_device pointer
*
* Validates certain module parameters and updates
* the associated values used by the driver (all asics).
*/
static void radeon_check_arguments(struct radeon_device *rdev)
{
/* vramlimit must be a power of two */
if (!radeon_check_pot_argument(radeon_vram_limit)) {
dev_warn(rdev->dev, "vram limit (%d) must be a power of 2\n",
radeon_vram_limit);
radeon_vram_limit = 0;
}
/* gtt size must be power of two and greater or equal to 32M */
if (radeon_gart_size < 32) {
dev_warn(rdev->dev, "gart size (%d) too small forcing to 512M\n",
radeon_gart_size);
radeon_gart_size = 512;
} else if (!radeon_check_pot_argument(radeon_gart_size)) {
dev_warn(rdev->dev, "gart size (%d) must be a power of 2\n",
radeon_gart_size);
radeon_gart_size = 512;
}
rdev->mc.gtt_size = (uint64_t)radeon_gart_size << 20;
/* AGP mode can only be -1, 1, 2, 4, 8 */
switch (radeon_agpmode) {
case -1:
case 0:
case 1:
case 2:
case 4:
case 8:
break;
default:
dev_warn(rdev->dev, "invalid AGP mode %d (valid mode: "
"-1, 0, 1, 2, 4, 8)\n", radeon_agpmode);
radeon_agpmode = 0;
break;
}
}
/**
* radeon_switcheroo_quirk_long_wakeup - return true if longer d3 delay is
* needed for waking up.
*
* @pdev: pci dev pointer
*/
#ifdef FREEBSD_WIP
static bool radeon_switcheroo_quirk_long_wakeup(struct pci_dev *pdev)
{
/* 6600m in a macbook pro */
if (pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE &&
pdev->subsystem_device == 0x00e2) {
printk(KERN_INFO "radeon: quirking longer d3 wakeup delay\n");
return true;
}
return false;
}
#endif /* FREEBSD_WIP */
/**
* radeon_switcheroo_set_state - set switcheroo state
*
* @pdev: pci dev pointer
* @state: vga switcheroo state
*
 * Callback for the switcheroo driver. Suspends or resumes the
 * asics before or after it is powered up using ACPI methods.
*/
#ifdef FREEBSD_WIP
static void radeon_switcheroo_set_state(struct pci_dev *pdev, enum vga_switcheroo_state state)
{
struct drm_device *dev = pci_get_drvdata(pdev);
pm_message_t pmm = { .event = PM_EVENT_SUSPEND };
if (state == VGA_SWITCHEROO_ON) {
unsigned d3_delay = dev->pdev->d3_delay;
printk(KERN_INFO "radeon: switched on\n");
/* don't suspend or resume card normally */
dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
if (d3_delay < 20 && radeon_switcheroo_quirk_long_wakeup(pdev))
dev->pdev->d3_delay = 20;
radeon_resume_kms(dev);
dev->pdev->d3_delay = d3_delay;
dev->switch_power_state = DRM_SWITCH_POWER_ON;
drm_kms_helper_poll_enable(dev);
} else {
printk(KERN_INFO "radeon: switched off\n");
drm_kms_helper_poll_disable(dev);
dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
radeon_suspend_kms(dev, pmm);
dev->switch_power_state = DRM_SWITCH_POWER_OFF;
}
}
#endif /* FREEBSD_WIP */
/**
* radeon_switcheroo_can_switch - see if switcheroo state can change
*
* @pdev: pci dev pointer
*
 * Callback for the switcheroo driver. Checks if the switcheroo
 * state can be changed.
* Returns true if the state can be changed, false if not.
*/
#ifdef FREEBSD_WIP
static bool radeon_switcheroo_can_switch(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
bool can_switch;
spin_lock(&dev->count_lock);
can_switch = (dev->open_count == 0);
spin_unlock(&dev->count_lock);
return can_switch;
}
static const struct vga_switcheroo_client_ops radeon_switcheroo_ops = {
.set_gpu_state = radeon_switcheroo_set_state,
.reprobe = NULL,
.can_switch = radeon_switcheroo_can_switch,
};
#endif /* FREEBSD_WIP */
/**
* radeon_device_init - initialize the driver
*
* @rdev: radeon_device pointer
 * @ddev: drm dev pointer
* @flags: driver flags
*
* Initializes the driver info and hw (all asics).
* Returns 0 for success or an error on failure.
* Called at driver startup.
*/
int radeon_device_init(struct radeon_device *rdev,
struct drm_device *ddev,
uint32_t flags)
{
int r, i;
int dma_bits;
rdev->shutdown = false;
rdev->dev = ddev->dev;
rdev->ddev = ddev;
rdev->flags = flags;
rdev->family = flags & RADEON_FAMILY_MASK;
rdev->is_atom_bios = false;
rdev->usec_timeout = RADEON_MAX_USEC_TIMEOUT;
rdev->mc.gtt_size = radeon_gart_size * 1024 * 1024;
rdev->accel_working = false;
rdev->fictitious_range_registered = false;
rdev->fictitious_agp_range_registered = false;
/* set up ring ids */
for (i = 0; i < RADEON_NUM_RINGS; i++) {
rdev->ring[i].idx = i;
}
DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X 0x%04X:0x%04X).\n",
radeon_family_name[rdev->family], ddev->pci_vendor, ddev->pci_device,
ddev->pci_subvendor, ddev->pci_subdevice);
/* mutex initializations are all done here so we
 * can call these functions again without locking issues */
sx_init(&rdev->ring_lock, "drm__radeon_device__ring_lock");
sx_init(&rdev->dc_hw_i2c_mutex, "drm__radeon_device__dc_hw_i2c_mutex");
atomic_set(&rdev->ih.lock, 0);
sx_init(&rdev->gem.mutex, "drm__radeon_device__gem__mutex");
sx_init(&rdev->pm.mutex, "drm__radeon_device__pm__mutex");
sx_init(&rdev->gpu_clock_mutex, "drm__radeon_device__gpu_clock_mutex");
sx_init(&rdev->pm.mclk_lock, "drm__radeon_device__pm__mclk_lock");
sx_init(&rdev->exclusive_lock, "drm__radeon_device__exclusive_lock");
DRM_INIT_WAITQUEUE(&rdev->irq.vblank_queue);
r = radeon_gem_init(rdev);
if (r)
return r;
/* initialize vm here */
sx_init(&rdev->vm_manager.lock, "drm__radeon_device__vm_manager__lock");
/* Adjust VM size here.
* Currently set to 4GB ((1 << 20) 4k pages).
* Max GPUVM size for cayman and SI is 40 bits.
*/
rdev->vm_manager.max_pfn = 1 << 20;
INIT_LIST_HEAD(&rdev->vm_manager.lru_vm);
/* Set asic functions */
r = radeon_asic_init(rdev);
if (r)
return r;
radeon_check_arguments(rdev);
/* all of the newer IGP chips have an internal gart.
 * However, some rs4xx report as AGP, so remove that here.
*/
if ((rdev->family >= CHIP_RS400) &&
(rdev->flags & RADEON_IS_IGP)) {
rdev->flags &= ~RADEON_IS_AGP;
}
if (rdev->flags & RADEON_IS_AGP && radeon_agpmode == -1) {
radeon_agp_disable(rdev);
}
/* set DMA mask + need_dma32 flags.
* PCIE - can handle 40-bits.
* IGP - can handle 40-bits
* AGP - generally dma32 is safest
* PCI - dma32 for legacy pci gart, 40 bits on newer asics
*/
rdev->need_dma32 = false;
if (rdev->flags & RADEON_IS_AGP)
rdev->need_dma32 = true;
if ((rdev->flags & RADEON_IS_PCI) &&
(rdev->family <= CHIP_RS740))
rdev->need_dma32 = true;
dma_bits = rdev->need_dma32 ? 32 : 40;
#ifdef FREEBSD_WIP
r = pci_set_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
if (r) {
rdev->need_dma32 = true;
dma_bits = 32;
printk(KERN_WARNING "radeon: No suitable DMA available.\n");
}
r = pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
if (r) {
pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(32));
printk(KERN_WARNING "radeon: No coherent DMA available.\n");
}
#endif /* FREEBSD_WIP */
/* Registers mapping */
/* TODO: block userspace mapping of io register */
DRM_SPININIT(&rdev->mmio_idx_lock, "drm__radeon_device__mmio_idx_lock");
rdev->rmmio_rid = PCIR_BAR(2);
rdev->rmmio = bus_alloc_resource_any(rdev->dev, SYS_RES_MEMORY,
&rdev->rmmio_rid, RF_ACTIVE | RF_SHAREABLE);
if (rdev->rmmio == NULL) {
return -ENOMEM;
}
rdev->rmmio_base = rman_get_start(rdev->rmmio);
rdev->rmmio_size = rman_get_size(rdev->rmmio);
DRM_INFO("register mmio base: 0x%08X\n", (uint32_t)rdev->rmmio_base);
DRM_INFO("register mmio size: %u\n", (unsigned)rdev->rmmio_size);
/* io port mapping */
for (i = 0; i < DRM_MAX_PCI_RESOURCE; i++) {
uint32_t data;
data = pci_read_config(rdev->dev, PCIR_BAR(i), 4);
if (PCI_BAR_IO(data)) {
rdev->rio_rid = PCIR_BAR(i);
rdev->rio_mem = bus_alloc_resource_any(rdev->dev,
SYS_RES_IOPORT, &rdev->rio_rid,
RF_ACTIVE | RF_SHAREABLE);
break;
}
}
if (rdev->rio_mem == NULL)
DRM_ERROR("Unable to find PCI I/O BAR\n");
rdev->tq = taskqueue_create("radeonkms", M_WAITOK,
taskqueue_thread_enqueue, &rdev->tq);
taskqueue_start_threads(&rdev->tq, 1, PWAIT, "radeon taskq");
#ifdef FREEBSD_WIP
/* if we have > 1 VGA cards, then disable the radeon VGA resources */
/* this will fail for cards that aren't VGA class devices, just
* ignore it */
vga_client_register(rdev->pdev, rdev, NULL, radeon_vga_set_decode);
vga_switcheroo_register_client(rdev->pdev, &radeon_switcheroo_ops);
#endif /* FREEBSD_WIP */
r = radeon_init(rdev);
if (r)
return r;
r = radeon_ib_ring_tests(rdev);
if (r)
DRM_ERROR("ib ring test failed (%d).\n", r);
if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) {
/* Acceleration not working on AGP card, try again
* with fallback to PCI or PCIE GART
*/
radeon_asic_reset(rdev);
radeon_fini(rdev);
radeon_agp_disable(rdev);
r = radeon_init(rdev);
if (r)
return r;
}
DRM_INFO("%s: Taking over the fictitious range 0x%jx-0x%jx\n",
__func__, (uintmax_t)rdev->mc.aper_base,
(uintmax_t)rdev->mc.aper_base + rdev->mc.visible_vram_size);
r = vm_phys_fictitious_reg_range(
rdev->mc.aper_base,
rdev->mc.aper_base + rdev->mc.visible_vram_size,
VM_MEMATTR_WRITE_COMBINING);
if (r != 0) {
DRM_ERROR("Failed to register fictitious range "
"0x%jx-0x%jx (%d).\n", (uintmax_t)rdev->mc.aper_base,
(uintmax_t)rdev->mc.aper_base + rdev->mc.visible_vram_size, r);
return (-r);
}
rdev->fictitious_range_registered = true;
#if __OS_HAS_AGP
if (rdev->flags & RADEON_IS_AGP) {
DRM_INFO("%s: Taking over the fictitious range 0x%jx-0x%jx\n",
__func__, (uintmax_t)rdev->mc.agp_base,
(uintmax_t)rdev->mc.agp_base + rdev->mc.gtt_size);
r = vm_phys_fictitious_reg_range(
rdev->mc.agp_base,
rdev->mc.agp_base + rdev->mc.gtt_size,
VM_MEMATTR_WRITE_COMBINING);
if (r != 0) {
DRM_ERROR("Failed to register fictitious range "
"0x%jx-0x%jx (%d).\n", (uintmax_t)rdev->mc.agp_base,
(uintmax_t)rdev->mc.agp_base + rdev->mc.gtt_size, r);
return (-r);
}
rdev->fictitious_agp_range_registered = true;
}
#endif
if ((radeon_testing & 1)) {
radeon_test_moves(rdev);
}
if ((radeon_testing & 2)) {
radeon_test_syncing(rdev);
}
if (radeon_benchmarking) {
radeon_benchmark(rdev, radeon_benchmarking);
}
return 0;
}
#ifdef FREEBSD_WIP
static void radeon_debugfs_remove_files(struct radeon_device *rdev);
#endif /* FREEBSD_WIP */
/**
* radeon_device_fini - tear down the driver
*
* @rdev: radeon_device pointer
*
* Tear down the driver info (all asics).
* Called at driver shutdown.
*/
void radeon_device_fini(struct radeon_device *rdev)
{
DRM_INFO("radeon: finishing device.\n");
rdev->shutdown = true;
/* evict vram memory */
radeon_bo_evict_vram(rdev);
if (rdev->fictitious_range_registered) {
vm_phys_fictitious_unreg_range(
rdev->mc.aper_base,
rdev->mc.aper_base + rdev->mc.visible_vram_size);
}
#if __OS_HAS_AGP
if (rdev->fictitious_agp_range_registered) {
vm_phys_fictitious_unreg_range(
rdev->mc.agp_base,
rdev->mc.agp_base + rdev->mc.gtt_size);
}
#endif
radeon_fini(rdev);
#ifdef FREEBSD_WIP
vga_switcheroo_unregister_client(rdev->pdev);
vga_client_register(rdev->pdev, NULL, NULL, NULL);
#endif /* FREEBSD_WIP */
if (rdev->tq != NULL) {
taskqueue_free(rdev->tq);
rdev->tq = NULL;
}
if (rdev->rio_mem)
bus_release_resource(rdev->dev, SYS_RES_IOPORT, rdev->rio_rid,
rdev->rio_mem);
rdev->rio_mem = NULL;
bus_release_resource(rdev->dev, SYS_RES_MEMORY, rdev->rmmio_rid,
rdev->rmmio);
rdev->rmmio = NULL;
#ifdef FREEBSD_WIP
radeon_debugfs_remove_files(rdev);
#endif /* FREEBSD_WIP */
}
/*
* Suspend & resume.
*/
/**
* radeon_suspend_kms - initiate device suspend
*
 * @dev: drm dev pointer
*
* Puts the hw in the suspend state (all asics).
* Returns 0 for success or an error on failure.
* Called at driver suspend.
*/
int radeon_suspend_kms(struct drm_device *dev)
{
struct radeon_device *rdev;
struct drm_crtc *crtc;
struct drm_connector *connector;
int i, r;
bool force_completion = false;
if (dev == NULL || dev->dev_private == NULL) {
return -ENODEV;
}
#ifdef FREEBSD_WIP
if (state.event == PM_EVENT_PRETHAW) {
return 0;
}
#endif /* FREEBSD_WIP */
rdev = dev->dev_private;
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
drm_kms_helper_poll_disable(dev);
/* turn off display hw */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
}
/* unpin the front buffers */
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
struct radeon_framebuffer *rfb = to_radeon_framebuffer(crtc->fb);
struct radeon_bo *robj;
if (rfb == NULL || rfb->obj == NULL) {
continue;
}
robj = gem_to_radeon_bo(rfb->obj);
/* don't unpin kernel fb objects */
if (!radeon_fbdev_robj_is_fb(rdev, robj)) {
r = radeon_bo_reserve(robj, false);
if (r == 0) {
radeon_bo_unpin(robj);
radeon_bo_unreserve(robj);
}
}
}
/* evict vram memory */
radeon_bo_evict_vram(rdev);
sx_xlock(&rdev->ring_lock);
/* wait for gpu to finish processing current batch */
for (i = 0; i < RADEON_NUM_RINGS; i++) {
r = radeon_fence_wait_empty_locked(rdev, i);
if (r) {
/* delay GPU reset to resume */
force_completion = true;
}
}
if (force_completion) {
radeon_fence_driver_force_completion(rdev);
}
sx_xunlock(&rdev->ring_lock);
radeon_save_bios_scratch_regs(rdev);
radeon_pm_suspend(rdev);
radeon_suspend(rdev);
radeon_hpd_fini(rdev);
/* evict remaining vram memory */
radeon_bo_evict_vram(rdev);
radeon_agp_suspend(rdev);
#ifdef FREEBSD_WIP
if (state.event == PM_EVENT_SUSPEND) {
/* Shut down the device */
pci_disable_device(dev->pdev);
}
console_lock();
#endif /* FREEBSD_WIP */
radeon_fbdev_set_suspend(rdev, 1);
#ifdef FREEBSD_WIP
console_unlock();
#endif /* FREEBSD_WIP */
return 0;
}
/**
* radeon_resume_kms - initiate device resume
*
* @dev: drm dev pointer
*
* Bring the hw back to operating state (all asics).
* Returns 0 for success or an error on failure.
* Called at driver resume.
*/
int radeon_resume_kms(struct drm_device *dev)
{
struct drm_connector *connector;
struct radeon_device *rdev = dev->dev_private;
int r;
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
#ifdef FREEBSD_WIP
console_lock();
if (pci_enable_device(dev->pdev)) {
console_unlock();
return -1;
}
#endif /* FREEBSD_WIP */
/* resume AGP if in use */
radeon_agp_resume(rdev);
radeon_resume(rdev);
r = radeon_ib_ring_tests(rdev);
if (r)
DRM_ERROR("ib ring test failed (%d).\n", r);
radeon_pm_resume(rdev);
radeon_restore_bios_scratch_regs(rdev);
radeon_fbdev_set_suspend(rdev, 0);
#ifdef FREEBSD_WIP
console_unlock();
#endif /* FREEBSD_WIP */
/* init dig PHYs, disp eng pll */
if (rdev->is_atom_bios) {
radeon_atom_encoder_init(rdev);
radeon_atom_disp_eng_pll_init(rdev);
/* turn on the BL */
if (rdev->mode_info.bl_encoder) {
u8 bl_level = radeon_get_backlight_level(rdev,
rdev->mode_info.bl_encoder);
radeon_set_backlight_level(rdev, rdev->mode_info.bl_encoder,
bl_level);
}
}
/* reset hpd state */
radeon_hpd_init(rdev);
/* blat the mode back in */
drm_helper_resume_force_mode(dev);
/* turn on display hw */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
}
drm_kms_helper_poll_enable(dev);
return 0;
}
/**
* radeon_gpu_reset - reset the asic
*
* @rdev: radeon device pointer
*
* Attempt to reset the GPU if it has hung (all asics).
* Returns 0 for success or an error on failure.
*/
int radeon_gpu_reset(struct radeon_device *rdev)
{
unsigned ring_sizes[RADEON_NUM_RINGS];
uint32_t *ring_data[RADEON_NUM_RINGS];
bool saved = false;
int i, r;
int resched;
sx_xlock(&rdev->exclusive_lock);
radeon_save_bios_scratch_regs(rdev);
/* block TTM */
resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
radeon_suspend(rdev);
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
ring_sizes[i] = radeon_ring_backup(rdev, &rdev->ring[i],
&ring_data[i]);
if (ring_sizes[i]) {
saved = true;
dev_info(rdev->dev, "Saved %d dwords of commands "
"on ring %d.\n", ring_sizes[i], i);
}
}
retry:
r = radeon_asic_reset(rdev);
if (!r) {
dev_info(rdev->dev, "GPU reset succeeded, trying to resume\n");
radeon_resume(rdev);
}
radeon_restore_bios_scratch_regs(rdev);
if (!r) {
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
radeon_ring_restore(rdev, &rdev->ring[i],
ring_sizes[i], ring_data[i]);
ring_sizes[i] = 0;
ring_data[i] = NULL;
}
r = radeon_ib_ring_tests(rdev);
if (r) {
dev_err(rdev->dev, "ib ring test failed (%d).\n", r);
if (saved) {
saved = false;
radeon_suspend(rdev);
goto retry;
}
}
} else {
radeon_fence_driver_force_completion(rdev);
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
free(ring_data[i], DRM_MEM_DRIVER);
}
}
drm_helper_resume_force_mode(rdev->ddev);
ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
if (r) {
/* bad news, how do we tell userspace? */
dev_info(rdev->dev, "GPU reset failed\n");
}
sx_xunlock(&rdev->exclusive_lock);
return r;
}
/*
* Debugfs
*/
#ifdef FREEBSD_WIP
int radeon_debugfs_add_files(struct radeon_device *rdev,
struct drm_info_list *files,
unsigned nfiles)
{
unsigned i;
for (i = 0; i < rdev->debugfs_count; i++) {
if (rdev->debugfs[i].files == files) {
/* Already registered */
return 0;
}
}
i = rdev->debugfs_count + 1;
if (i > RADEON_DEBUGFS_MAX_COMPONENTS) {
DRM_ERROR("Reached maximum number of debugfs components.\n");
DRM_ERROR("Report so we increase "
"RADEON_DEBUGFS_MAX_COMPONENTS.\n");
return -EINVAL;
}
rdev->debugfs[rdev->debugfs_count].files = files;
rdev->debugfs[rdev->debugfs_count].num_files = nfiles;
rdev->debugfs_count = i;
#if defined(CONFIG_DEBUG_FS)
drm_debugfs_create_files(files, nfiles,
rdev->ddev->control->debugfs_root,
rdev->ddev->control);
drm_debugfs_create_files(files, nfiles,
rdev->ddev->primary->debugfs_root,
rdev->ddev->primary);
#endif
return 0;
}
static void radeon_debugfs_remove_files(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
unsigned i;
for (i = 0; i < rdev->debugfs_count; i++) {
drm_debugfs_remove_files(rdev->debugfs[i].files,
rdev->debugfs[i].num_files,
rdev->ddev->control);
drm_debugfs_remove_files(rdev->debugfs[i].files,
rdev->debugfs[i].num_files,
rdev->ddev->primary);
}
#endif
}
#if defined(CONFIG_DEBUG_FS)
int radeon_debugfs_init(struct drm_minor *minor)
{
return 0;
}
void radeon_debugfs_cleanup(struct drm_minor *minor)
{
}
#endif /* FREEBSD_WIP */
#endif
Index: head/sys/dev/drm2/radeon/radeon_fence.c
===================================================================
--- head/sys/dev/drm2/radeon/radeon_fence.c (revision 300049)
+++ head/sys/dev/drm2/radeon/radeon_fence.c (revision 300050)
@@ -1,987 +1,987 @@
/*
* Copyright 2009 Jerome Glisse.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
*/
/*
* Authors:
* Jerome Glisse <glisse@freedesktop.org>
* Dave Airlie
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/drm2/drmP.h>
#include "radeon_reg.h"
#include "radeon.h"
#ifdef FREEBSD_WIP
#include "radeon_trace.h"
#endif /* FREEBSD_WIP */
/*
* Fences
* Fences mark an event in the GPU's pipeline and are used
* for GPU/CPU synchronization. When the fence is written,
* it is expected that all buffers associated with that fence
* are no longer in use by the associated ring on the GPU and
- * that the the relevant GPU caches have been flushed. Whether
+ * that the relevant GPU caches have been flushed. Whether
* we use a scratch register or memory location depends on the asic
* and whether writeback is enabled.
*/
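As a concrete illustration of the scheme described above, here is a minimal userland sketch of a single writeback fence slot; the names are hypothetical and 32-bit wrap handling is deliberately omitted, so this models the idea rather than the driver:

#include <stdint.h>
#include <stdio.h>

/* one fence slot: a model of the memory location (or scratch register)
 * the GPU writes once the commands preceding the fence have retired */
static volatile uint32_t fence_mem;

static void gpu_writes_fence(uint32_t seq) { fence_mem = seq; }

static int fence_signaled(uint32_t wanted)
{
    return fence_mem >= wanted;    /* wrap handling omitted for brevity */
}

int main(void)
{
    gpu_writes_fence(41);
    printf("seq 42 signaled? %d\n", fence_signaled(42));    /* 0 */
    gpu_writes_fence(42);
    printf("seq 42 signaled? %d\n", fence_signaled(42));    /* 1 */
    return 0;
}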
/**
* radeon_fence_write - write a fence value
*
* @rdev: radeon_device pointer
* @seq: sequence number to write
* @ring: ring index the fence is associated with
*
* Writes a fence value to memory or a scratch register (all asics).
*/
static void radeon_fence_write(struct radeon_device *rdev, u32 seq, int ring)
{
struct radeon_fence_driver *drv = &rdev->fence_drv[ring];
if (likely(rdev->wb.enabled || !drv->scratch_reg)) {
*drv->cpu_addr = cpu_to_le32(seq);
} else {
WREG32(drv->scratch_reg, seq);
}
}
/**
* radeon_fence_read - read a fence value
*
* @rdev: radeon_device pointer
* @ring: ring index the fence is associated with
*
* Reads a fence value from memory or a scratch register (all asics).
* Returns the value of the fence read from memory or register.
*/
static u32 radeon_fence_read(struct radeon_device *rdev, int ring)
{
struct radeon_fence_driver *drv = &rdev->fence_drv[ring];
u32 seq = 0;
if (likely(rdev->wb.enabled || !drv->scratch_reg)) {
seq = le32_to_cpu(*drv->cpu_addr);
} else {
seq = RREG32(drv->scratch_reg);
}
return seq;
}
/**
* radeon_fence_emit - emit a fence on the requested ring
*
* @rdev: radeon_device pointer
* @fence: radeon fence object
* @ring: ring index the fence is associated with
*
* Emits a fence command on the requested ring (all asics).
* Returns 0 on success, -ENOMEM on failure.
*/
int radeon_fence_emit(struct radeon_device *rdev,
struct radeon_fence **fence,
int ring)
{
/* we are protected by the ring emission mutex */
*fence = malloc(sizeof(struct radeon_fence), DRM_MEM_DRIVER, M_NOWAIT);
if ((*fence) == NULL) {
return -ENOMEM;
}
refcount_init(&((*fence)->kref), 1);
(*fence)->rdev = rdev;
(*fence)->seq = ++rdev->fence_drv[ring].sync_seq[ring];
(*fence)->ring = ring;
radeon_fence_ring_emit(rdev, ring, *fence);
CTR2(KTR_DRM, "radeon fence: emit (ring=%d, seq=%d)", ring, (*fence)->seq);
return 0;
}
/**
* radeon_fence_process - process a fence
*
* @rdev: radeon_device pointer
* @ring: ring index the fence is associated with
*
* Checks the current fence value and wakes the fence queue
* if the sequence number has increased (all asics).
*/
void radeon_fence_process(struct radeon_device *rdev, int ring)
{
uint64_t seq, last_seq, last_emitted;
unsigned count_loop = 0;
bool wake = false;
/* Note there is a scenario here for an infinite loop but it's
* very unlikely to happen. For it to happen, the current polling
* process needs to be interrupted by another process, and the other
* process needs to update the last_seq between the atomic read and
* xchg of the current process.
*
* Moreover, for this to go into an infinite loop there needs to be
* a continuous stream of newly signaled fences, i.e. radeon_fence_read
* needs to return a different value each time for both the currently
* polling process and the other process that xchgs the last_seq
* between the atomic read and xchg of the current process. And the
* value the other process sets as last_seq must be higher than
* the seq value we just read. Which means that the current process
* needs to be interrupted after radeon_fence_read and before the
* atomic xchg.
*
* To be even safer we count the number of times we loop and
* bail out after 10 loops, accepting the fact that we might
* have temporarily set the last_seq not to the true last
* seq but to an older one.
*/
last_seq = atomic64_read(&rdev->fence_drv[ring].last_seq);
do {
last_emitted = rdev->fence_drv[ring].sync_seq[ring];
seq = radeon_fence_read(rdev, ring);
seq |= last_seq & 0xffffffff00000000LL;
if (seq < last_seq) {
seq &= 0xffffffff;
seq |= last_emitted & 0xffffffff00000000LL;
}
if (seq <= last_seq || seq > last_emitted) {
break;
}
/* If we loop over, we don't want to return without
* checking if a fence is signaled, as it means that the
* seq we just read is different from the previous one.
*/
wake = true;
last_seq = seq;
if ((count_loop++) > 10) {
/* We looped too many times; leave with the
* fact that we might have set an older fence
* seq than the current real last seq as signaled
* by the hw.
*/
break;
}
} while (atomic64_xchg(&rdev->fence_drv[ring].last_seq, seq) > seq);
if (wake) {
rdev->fence_drv[ring].last_activity = jiffies;
cv_broadcast(&rdev->fence_queue);
}
}
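The wrap handling above can be isolated into a self-contained sketch. Assuming a 64-bit software sequence space and a 32-bit hardware read (the function and the values below are made up for illustration), the extension step is:

#include <stdint.h>
#include <stdio.h>

static uint64_t extend_seq(uint64_t last_seq, uint64_t last_emitted,
    uint32_t hw_seq)
{
    uint64_t seq = (uint64_t)hw_seq | (last_seq & 0xffffffff00000000ULL);

    if (seq < last_seq) {
        /* the 32-bit counter wrapped; borrow the upper bits
         * from the last emitted sequence instead */
        seq &= 0xffffffffULL;
        seq |= last_emitted & 0xffffffff00000000ULL;
    }
    return seq;
}

int main(void)
{
    /* hardware counter wrapped from 0xffffffff to 2 */
    printf("0x%016llx\n", (unsigned long long)
        extend_seq(0xffffffffULL, 0x100000002ULL, 2));  /* 0x0000000100000002 */
    return 0;
}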
/**
* radeon_fence_destroy - destroy a fence
*
* @kref: fence kref
*
* Frees the fence object (all asics).
*/
static void radeon_fence_destroy(struct radeon_fence *fence)
{
free(fence, DRM_MEM_DRIVER);
}
/**
* radeon_fence_seq_signaled - check if a fence sequence number has signaled
*
* @rdev: radeon device pointer
* @seq: sequence number
* @ring: ring index the fence is associated with
*
* Check if the last signaled fence sequence number is >= the requested
* sequence number (all asics).
* Returns true if the fence has signaled (current fence value
* is >= requested value) or false if it has not (current fence
* value is < the requested value). Helper function for
* radeon_fence_signaled().
*/
static bool radeon_fence_seq_signaled(struct radeon_device *rdev,
u64 seq, unsigned ring)
{
if (atomic64_read(&rdev->fence_drv[ring].last_seq) >= seq) {
return true;
}
/* poll new last sequence at least once */
radeon_fence_process(rdev, ring);
if (atomic64_read(&rdev->fence_drv[ring].last_seq) >= seq) {
return true;
}
return false;
}
/**
* radeon_fence_signaled - check if a fence has signaled
*
* @fence: radeon fence object
*
* Check if the requested fence has signaled (all asics).
* Returns true if the fence has signaled or false if it has not.
*/
bool radeon_fence_signaled(struct radeon_fence *fence)
{
if (!fence) {
return true;
}
if (fence->seq == RADEON_FENCE_SIGNALED_SEQ) {
return true;
}
if (radeon_fence_seq_signaled(fence->rdev, fence->seq, fence->ring)) {
fence->seq = RADEON_FENCE_SIGNALED_SEQ;
return true;
}
return false;
}
/**
* radeon_fence_wait_seq - wait for a specific sequence number
*
* @rdev: radeon device pointer
* @target_seq: sequence number we want to wait for
* @ring: ring index the fence is associated with
* @intr: use interruptible sleep
* @lock_ring: whether the ring should be locked or not
*
* Wait for the requested sequence number to be written (all asics).
* @intr selects whether to use interruptible (true) or non-interruptible
* (false) sleep when waiting for the sequence number. Helper function
* for radeon_fence_wait(), et al.
* Returns 0 if the sequence number has passed, error for all other cases.
* -EDEADLK is returned when a GPU lockup has been detected and the ring is
* marked as not ready so no further jobs get scheduled until a successful
* reset.
*/
static int radeon_fence_wait_seq(struct radeon_device *rdev, u64 target_seq,
unsigned ring, bool intr, bool lock_ring)
{
unsigned long timeout, last_activity;
uint64_t seq;
unsigned i;
bool signaled, fence_queue_locked;
int r;
while (target_seq > atomic64_read(&rdev->fence_drv[ring].last_seq)) {
if (!rdev->ring[ring].ready) {
return -EBUSY;
}
timeout = jiffies - RADEON_FENCE_JIFFIES_TIMEOUT;
if (time_after(rdev->fence_drv[ring].last_activity, timeout)) {
/* the normal case, timeout is somewhere before last_activity */
timeout = rdev->fence_drv[ring].last_activity - timeout;
} else {
/* either jiffies wrapped around, or no fence was signaled in the last 500ms;
* either way, just wait the minimum amount and then check for a lockup
*/
timeout = 1;
}
seq = atomic64_read(&rdev->fence_drv[ring].last_seq);
/* Save current last activity value, used to check for GPU lockups */
last_activity = rdev->fence_drv[ring].last_activity;
CTR2(KTR_DRM, "radeon fence: wait begin (ring=%d, seq=%d)",
ring, seq);
radeon_irq_kms_sw_irq_get(rdev, ring);
fence_queue_locked = false;
r = 0;
while (!(signaled = radeon_fence_seq_signaled(rdev,
target_seq, ring))) {
if (!fence_queue_locked) {
mtx_lock(&rdev->fence_queue_mtx);
fence_queue_locked = true;
}
if (intr) {
r = cv_timedwait_sig(&rdev->fence_queue,
&rdev->fence_queue_mtx,
timeout);
} else {
r = cv_timedwait(&rdev->fence_queue,
&rdev->fence_queue_mtx,
timeout);
}
if (r == EINTR)
r = ERESTARTSYS;
if (r != 0) {
if (r == EWOULDBLOCK) {
signaled =
radeon_fence_seq_signaled(
rdev, target_seq, ring);
}
break;
}
}
if (fence_queue_locked) {
mtx_unlock(&rdev->fence_queue_mtx);
}
radeon_irq_kms_sw_irq_put(rdev, ring);
if (unlikely(r == ERESTARTSYS)) {
return -r;
}
CTR2(KTR_DRM, "radeon fence: wait end (ring=%d, seq=%d)",
ring, seq);
if (unlikely(!signaled)) {
#ifndef __FreeBSD__
/* we were interrupted for some reason and fence
* isn't signaled yet, resume waiting */
if (r) {
continue;
}
#endif
/* check if sequence value has changed since last_activity */
if (seq != atomic64_read(&rdev->fence_drv[ring].last_seq)) {
continue;
}
if (lock_ring) {
sx_xlock(&rdev->ring_lock);
}
/* test if somebody else has already decided that this is a lockup */
if (last_activity != rdev->fence_drv[ring].last_activity) {
if (lock_ring) {
sx_xunlock(&rdev->ring_lock);
}
continue;
}
if (radeon_ring_is_lockup(rdev, ring, &rdev->ring[ring])) {
/* good news, we believe it's a lockup */
dev_warn(rdev->dev, "GPU lockup (waiting for 0x%016jx last fence id 0x%016jx)\n",
(uintmax_t)target_seq, (uintmax_t)seq);
/* change last activity so nobody else thinks there is a lockup */
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
rdev->fence_drv[i].last_activity = jiffies;
}
/* mark the ring as not ready any more */
rdev->ring[ring].ready = false;
if (lock_ring) {
sx_xunlock(&rdev->ring_lock);
}
return -EDEADLK;
}
if (lock_ring) {
sx_xunlock(&rdev->ring_lock);
}
}
}
return 0;
}
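The timeout arithmetic at the top of the wait loop relies on Linux-style wrap-safe tick comparison. A standalone sketch, where TIME_AFTER is a stand-in for time_after() and the tick values are contrived to straddle a counter wrap:

#include <stdio.h>

/* signed-difference comparison, analogous to Linux's time_after() */
#define TIME_AFTER(a, b) ((long)((b) - (a)) < 0)

int main(void)
{
    unsigned long jiffies = 10UL;                     /* clock wrapped around */
    unsigned long last_activity = (unsigned long)-5;  /* just before the wrap */
    unsigned long timeout = jiffies - 500;            /* window start, underflows */

    if (TIME_AFTER(last_activity, timeout))
        timeout = last_activity - timeout;  /* normal case: remaining window */
    else
        timeout = 1;  /* wrap or long idle: wait minimum, then recheck */
    printf("%lu\n", timeout);                         /* 485 */
    return 0;
}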
/**
* radeon_fence_wait - wait for a fence to signal
*
* @fence: radeon fence object
* @intr: use interruptible sleep
*
* Wait for the requested fence to signal (all asics).
* @intr selects whether to use interruptible (true) or non-interruptible
* (false) sleep when waiting for the fence.
* Returns 0 if the fence has passed, error for all other cases.
*/
int radeon_fence_wait(struct radeon_fence *fence, bool intr)
{
int r;
if (fence == NULL) {
DRM_ERROR("Querying an invalid fence : %p !\n", fence);
return -EINVAL;
}
r = radeon_fence_wait_seq(fence->rdev, fence->seq,
fence->ring, intr, true);
if (r) {
return r;
}
fence->seq = RADEON_FENCE_SIGNALED_SEQ;
return 0;
}
static bool radeon_fence_any_seq_signaled(struct radeon_device *rdev, u64 *seq)
{
unsigned i;
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
if (seq[i] && radeon_fence_seq_signaled(rdev, seq[i], i)) {
return true;
}
}
return false;
}
/**
* radeon_fence_wait_any_seq - wait for a sequence number on any ring
*
* @rdev: radeon device pointer
* @target_seq: sequence number(s) we want to wait for
* @intr: use interruptible sleep
*
* Wait for the requested sequence number(s) to be written by any ring
* (all asics). Sequence number array is indexed by ring id.
* @intr selects whether to use interruptible (true) or non-interruptible
* (false) sleep when waiting for the sequence number. Helper function
* for radeon_fence_wait_any(), et al.
* Returns 0 if the sequence number has passed, error for all other cases.
*/
static int radeon_fence_wait_any_seq(struct radeon_device *rdev,
u64 *target_seq, bool intr)
{
unsigned long timeout, last_activity, tmp;
unsigned i, ring = RADEON_NUM_RINGS;
bool signaled, fence_queue_locked;
int r;
for (i = 0, last_activity = 0; i < RADEON_NUM_RINGS; ++i) {
if (!target_seq[i]) {
continue;
}
/* use the most recent one as indicator */
if (time_after(rdev->fence_drv[i].last_activity, last_activity)) {
last_activity = rdev->fence_drv[i].last_activity;
}
/* For lockup detection just pick the lowest ring we are
* actively waiting for
*/
if (i < ring) {
ring = i;
}
}
/* nothing to wait for? */
if (ring == RADEON_NUM_RINGS) {
return -ENOENT;
}
while (!radeon_fence_any_seq_signaled(rdev, target_seq)) {
timeout = jiffies - RADEON_FENCE_JIFFIES_TIMEOUT;
if (time_after(last_activity, timeout)) {
/* the normal case, timeout is somewhere before last_activity */
timeout = last_activity - timeout;
} else {
/* either jiffies wrapped around, or no fence was signaled in the last 500ms;
* either way, just wait the minimum amount and then check for a lockup
*/
timeout = 1;
}
CTR2(KTR_DRM, "radeon fence: wait begin (ring=%d, target_seq=%d)",
ring, target_seq[ring]);
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
if (target_seq[i]) {
radeon_irq_kms_sw_irq_get(rdev, i);
}
}
fence_queue_locked = false;
r = 0;
while (!(signaled = radeon_fence_any_seq_signaled(rdev,
target_seq))) {
if (!fence_queue_locked) {
mtx_lock(&rdev->fence_queue_mtx);
fence_queue_locked = true;
}
if (intr) {
r = cv_timedwait_sig(&rdev->fence_queue,
&rdev->fence_queue_mtx,
timeout);
} else {
r = cv_timedwait(&rdev->fence_queue,
&rdev->fence_queue_mtx,
timeout);
}
if (r == EINTR)
r = ERESTARTSYS;
if (r != 0) {
if (r == EWOULDBLOCK) {
signaled =
radeon_fence_any_seq_signaled(
rdev, target_seq);
}
break;
}
}
if (fence_queue_locked) {
mtx_unlock(&rdev->fence_queue_mtx);
}
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
if (target_seq[i]) {
radeon_irq_kms_sw_irq_put(rdev, i);
}
}
if (unlikely(r == ERESTARTSYS)) {
return -r;
}
CTR2(KTR_DRM, "radeon fence: wait end (ring=%d, target_seq=%d)",
ring, target_seq[ring]);
if (unlikely(!signaled)) {
#ifndef __FreeBSD__
/* we were interrupted for some reason and fence
* isn't signaled yet, resume waiting */
if (r) {
continue;
}
#endif
sx_xlock(&rdev->ring_lock);
for (i = 0, tmp = 0; i < RADEON_NUM_RINGS; ++i) {
if (time_after(rdev->fence_drv[i].last_activity, tmp)) {
tmp = rdev->fence_drv[i].last_activity;
}
}
/* test if somebody else has already decided that this is a lockup */
if (last_activity != tmp) {
last_activity = tmp;
sx_xunlock(&rdev->ring_lock);
continue;
}
if (radeon_ring_is_lockup(rdev, ring, &rdev->ring[ring])) {
/* good news, we believe it's a lockup */
dev_warn(rdev->dev, "GPU lockup (waiting for 0x%016jx)\n",
(uintmax_t)target_seq[ring]);
/* change last activity so nobody else thinks there is a lockup */
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
rdev->fence_drv[i].last_activity = jiffies;
}
/* mark the ring as not ready any more */
rdev->ring[ring].ready = false;
sx_xunlock(&rdev->ring_lock);
return -EDEADLK;
}
sx_xunlock(&rdev->ring_lock);
}
}
return 0;
}
/**
* radeon_fence_wait_any - wait for a fence to signal on any ring
*
* @rdev: radeon device pointer
* @fences: radeon fence object(s)
* @intr: use interruptible sleep
*
* Wait for any requested fence to signal (all asics). Fence
* array is indexed by ring id. @intr selects whether to use
* interruptible (true) or non-interruptible (false) sleep when
* waiting for the fences. Used by the suballocator.
* Returns 0 if any fence has passed, error for all other cases.
*/
int radeon_fence_wait_any(struct radeon_device *rdev,
struct radeon_fence **fences,
bool intr)
{
uint64_t seq[RADEON_NUM_RINGS];
unsigned i;
int r;
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
seq[i] = 0;
if (!fences[i]) {
continue;
}
if (fences[i]->seq == RADEON_FENCE_SIGNALED_SEQ) {
/* something was already signaled */
return 0;
}
seq[i] = fences[i]->seq;
}
r = radeon_fence_wait_any_seq(rdev, seq, intr);
if (r) {
return r;
}
return 0;
}
/**
* radeon_fence_wait_next_locked - wait for the next fence to signal
*
* @rdev: radeon device pointer
* @ring: ring index the fence is associated with
*
* Wait for the next fence on the requested ring to signal (all asics).
* Returns 0 if the next fence has passed, error for all other cases.
* Caller must hold ring lock.
*/
int radeon_fence_wait_next_locked(struct radeon_device *rdev, int ring)
{
uint64_t seq;
seq = atomic64_read(&rdev->fence_drv[ring].last_seq) + 1ULL;
if (seq >= rdev->fence_drv[ring].sync_seq[ring]) {
/* nothing to wait for, last_seq is
already the last emitted fence */
return -ENOENT;
}
return radeon_fence_wait_seq(rdev, seq, ring, false, false);
}
/**
* radeon_fence_wait_empty_locked - wait for all fences to signal
*
* @rdev: radeon device pointer
* @ring: ring index the fence is associated with
*
* Wait for all fences on the requested ring to signal (all asics).
* Returns 0 if the fences have passed, error for all other cases.
* Caller must hold ring lock.
*/
int radeon_fence_wait_empty_locked(struct radeon_device *rdev, int ring)
{
uint64_t seq = rdev->fence_drv[ring].sync_seq[ring];
int r;
r = radeon_fence_wait_seq(rdev, seq, ring, false, false);
if (r) {
if (r == -EDEADLK) {
return -EDEADLK;
}
dev_err(rdev->dev, "error waiting for ring[%d] to become idle (%d)\n",
ring, r);
}
return 0;
}
/**
* radeon_fence_ref - take a ref on a fence
*
* @fence: radeon fence object
*
* Take a reference on a fence (all asics).
* Returns the fence.
*/
struct radeon_fence *radeon_fence_ref(struct radeon_fence *fence)
{
refcount_acquire(&fence->kref);
return fence;
}
/**
* radeon_fence_unref - remove a ref on a fence
*
* @fence: radeon fence object
*
* Remove a reference on a fence (all asics).
*/
void radeon_fence_unref(struct radeon_fence **fence)
{
struct radeon_fence *tmp = *fence;
*fence = NULL;
if (tmp) {
if (refcount_release(&tmp->kref)) {
radeon_fence_destroy(tmp);
}
}
}
/**
* radeon_fence_count_emitted - get the count of emitted fences
*
* @rdev: radeon device pointer
* @ring: ring index the fence is associated with
*
* Get the number of fences emitted on the requested ring (all asics).
* Returns the number of emitted fences on the ring. Used by the
* dynpm code to track ring activity.
*/
unsigned radeon_fence_count_emitted(struct radeon_device *rdev, int ring)
{
uint64_t emitted;
/* We are not protected by ring lock when reading the last sequence
* but it's ok to report slightly wrong fence count here.
*/
radeon_fence_process(rdev, ring);
emitted = rdev->fence_drv[ring].sync_seq[ring]
- atomic64_read(&rdev->fence_drv[ring].last_seq);
/* to avoid 32-bit wrap-around */
if (emitted > 0x10000000) {
emitted = 0x10000000;
}
return (unsigned)emitted;
}
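A small sketch of the clamp, with made-up sequence values: the emitted count is the distance between the last emitted and the last signaled 64-bit sequence, capped so the value still makes sense after truncation to 32 bits:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t sync_seq = 0x200000005ULL;    /* last emitted (made up) */
    uint64_t last_seq = 0x200000001ULL;    /* last signaled (made up) */
    uint64_t emitted = sync_seq - last_seq;

    /* cap the count so truncating it to 32 bits cannot wrap */
    if (emitted > 0x10000000)
        emitted = 0x10000000;
    printf("%u fences outstanding\n", (unsigned)emitted);    /* 4 */
    return 0;
}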
/**
* radeon_fence_need_sync - do we need a semaphore
*
* @fence: radeon fence object
* @dst_ring: which ring to check against
*
* Check if the fence needs to be synced against another ring
* (all asics). If so, we need to emit a semaphore.
* Returns true if we need to sync with another ring, false if
* not.
*/
bool radeon_fence_need_sync(struct radeon_fence *fence, int dst_ring)
{
struct radeon_fence_driver *fdrv;
if (!fence) {
return false;
}
if (fence->ring == dst_ring) {
return false;
}
/* we are protected by the ring mutex */
fdrv = &fence->rdev->fence_drv[dst_ring];
if (fence->seq <= fdrv->sync_seq[fence->ring]) {
return false;
}
return true;
}
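The decision above can be exercised with a toy table of already-synced sequence numbers. This is an illustrative model, not the driver's actual data structure:

#include <stdbool.h>
#include <stdio.h>

#define NUM_RINGS 5

/* per destination ring: the last source-ring sequence number it has
 * already synchronized with (mirrors fence_drv[].sync_seq[]) */
static unsigned long long synced_to[NUM_RINGS][NUM_RINGS];

static bool need_semaphore(int src_ring, unsigned long long seq, int dst_ring)
{
    if (src_ring == dst_ring)
        return false;    /* same ring: FIFO ordering already suffices */
    return seq > synced_to[dst_ring][src_ring];
}

int main(void)
{
    synced_to[1][0] = 10;    /* ring 1 already waited for ring 0's seq 10 */
    printf("%d\n", need_semaphore(0, 9, 1));     /* 0: already covered */
    printf("%d\n", need_semaphore(0, 11, 1));    /* 1: emit a semaphore */
    return 0;
}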
/**
* radeon_fence_note_sync - record the sync point
*
* @fence: radeon fence object
* @dst_ring: which ring to check against
*
* Note the sequence number at which point the fence will
* be synced with the requested ring (all asics).
*/
void radeon_fence_note_sync(struct radeon_fence *fence, int dst_ring)
{
struct radeon_fence_driver *dst, *src;
unsigned i;
if (!fence) {
return;
}
if (fence->ring == dst_ring) {
return;
}
/* we are protected by the ring mutex */
src = &fence->rdev->fence_drv[fence->ring];
dst = &fence->rdev->fence_drv[dst_ring];
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
if (i == dst_ring) {
continue;
}
dst->sync_seq[i] = max(dst->sync_seq[i], src->sync_seq[i]);
}
}
/**
* radeon_fence_driver_start_ring - make the fence driver
* ready for use on the requested ring.
*
* @rdev: radeon device pointer
* @ring: ring index to start the fence driver on
*
* Make the fence driver ready for processing (all asics).
* Not all asics have all rings, so each asic will only
* start the fence driver on the rings it has.
* Returns 0 for success, errors for failure.
*/
int radeon_fence_driver_start_ring(struct radeon_device *rdev, int ring)
{
uint64_t index;
int r;
radeon_scratch_free(rdev, rdev->fence_drv[ring].scratch_reg);
if (rdev->wb.use_event || !radeon_ring_supports_scratch_reg(rdev, &rdev->ring[ring])) {
rdev->fence_drv[ring].scratch_reg = 0;
index = R600_WB_EVENT_OFFSET + ring * 4;
} else {
r = radeon_scratch_get(rdev, &rdev->fence_drv[ring].scratch_reg);
if (r) {
dev_err(rdev->dev, "fence failed to get scratch register\n");
return r;
}
index = RADEON_WB_SCRATCH_OFFSET +
rdev->fence_drv[ring].scratch_reg -
rdev->scratch.reg_base;
}
rdev->fence_drv[ring].cpu_addr = &rdev->wb.wb[index/4];
rdev->fence_drv[ring].gpu_addr = rdev->wb.gpu_addr + index;
radeon_fence_write(rdev, atomic64_read(&rdev->fence_drv[ring].last_seq), ring);
rdev->fence_drv[ring].initialized = true;
dev_info(rdev->dev, "fence driver on ring %d use gpu addr 0x%016jx and cpu addr 0x%p\n",
ring, (uintmax_t)rdev->fence_drv[ring].gpu_addr, rdev->fence_drv[ring].cpu_addr);
return 0;
}
/**
* radeon_fence_driver_init_ring - init the fence driver
* for the requested ring.
*
* @rdev: radeon device pointer
* @ring: ring index to start the fence driver on
*
* Init the fence driver for the requested ring (all asics).
* Helper function for radeon_fence_driver_init().
*/
static void radeon_fence_driver_init_ring(struct radeon_device *rdev, int ring)
{
int i;
rdev->fence_drv[ring].scratch_reg = -1;
rdev->fence_drv[ring].cpu_addr = NULL;
rdev->fence_drv[ring].gpu_addr = 0;
for (i = 0; i < RADEON_NUM_RINGS; ++i)
rdev->fence_drv[ring].sync_seq[i] = 0;
atomic64_set(&rdev->fence_drv[ring].last_seq, 0);
rdev->fence_drv[ring].last_activity = jiffies;
rdev->fence_drv[ring].initialized = false;
}
/**
* radeon_fence_driver_init - init the fence driver
* for all possible rings.
*
* @rdev: radeon device pointer
*
* Init the fence driver for all possible rings (all asics).
* Not all asics have all rings, so each asic will only
* start the fence driver on the rings it has using
* radeon_fence_driver_start_ring().
* Returns 0 for success.
*/
int radeon_fence_driver_init(struct radeon_device *rdev)
{
int ring;
mtx_init(&rdev->fence_queue_mtx,
"drm__radeon_device__fence_queue_mtx", NULL, MTX_DEF);
cv_init(&rdev->fence_queue, "drm__radeon_device__fence_queue");
for (ring = 0; ring < RADEON_NUM_RINGS; ring++) {
radeon_fence_driver_init_ring(rdev, ring);
}
if (radeon_debugfs_fence_init(rdev)) {
dev_err(rdev->dev, "fence debugfs file creation failed\n");
}
return 0;
}
/**
* radeon_fence_driver_fini - tear down the fence driver
* for all possible rings.
*
* @rdev: radeon device pointer
*
* Tear down the fence driver for all possible rings (all asics).
*/
void radeon_fence_driver_fini(struct radeon_device *rdev)
{
int ring, r;
sx_xlock(&rdev->ring_lock);
for (ring = 0; ring < RADEON_NUM_RINGS; ring++) {
if (!rdev->fence_drv[ring].initialized)
continue;
r = radeon_fence_wait_empty_locked(rdev, ring);
if (r) {
/* no need to trigger GPU reset as we are unloading */
radeon_fence_driver_force_completion(rdev);
}
cv_broadcast(&rdev->fence_queue);
radeon_scratch_free(rdev, rdev->fence_drv[ring].scratch_reg);
rdev->fence_drv[ring].initialized = false;
cv_destroy(&rdev->fence_queue);
}
sx_xunlock(&rdev->ring_lock);
}
/**
* radeon_fence_driver_force_completion - force all fence waiters to complete
*
* @rdev: radeon device pointer
*
* In case of GPU reset failure, make sure no process keeps waiting on a
* fence that will never complete.
*/
void radeon_fence_driver_force_completion(struct radeon_device *rdev)
{
int ring;
for (ring = 0; ring < RADEON_NUM_RINGS; ring++) {
if (!rdev->fence_drv[ring].initialized)
continue;
radeon_fence_write(rdev, rdev->fence_drv[ring].sync_seq[ring], ring);
}
}
/*
* Fence debugfs
*/
#if defined(CONFIG_DEBUG_FS)
static int radeon_debugfs_fence_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *)m->private;
struct drm_device *dev = node->minor->dev;
struct radeon_device *rdev = dev->dev_private;
int i, j;
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
if (!rdev->fence_drv[i].initialized)
continue;
seq_printf(m, "--- ring %d ---\n", i);
seq_printf(m, "Last signaled fence 0x%016llx\n",
(unsigned long long)atomic64_read(&rdev->fence_drv[i].last_seq));
seq_printf(m, "Last emitted 0x%016llx\n",
rdev->fence_drv[i].sync_seq[i]);
for (j = 0; j < RADEON_NUM_RINGS; ++j) {
if (i != j && rdev->fence_drv[j].initialized)
seq_printf(m, "Last sync to ring %d 0x%016llx\n",
j, rdev->fence_drv[i].sync_seq[j]);
}
}
return 0;
}
static struct drm_info_list radeon_debugfs_fence_list[] = {
{"radeon_fence_info", &radeon_debugfs_fence_info, 0, NULL},
};
#endif
int radeon_debugfs_fence_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
return radeon_debugfs_add_files(rdev, radeon_debugfs_fence_list, 1);
#else
return 0;
#endif
}
Index: head/sys/dev/drm2/radeon/radeon_gart.c
===================================================================
--- head/sys/dev/drm2/radeon/radeon_gart.c (revision 300049)
+++ head/sys/dev/drm2/radeon/radeon_gart.c (revision 300050)
@@ -1,1309 +1,1309 @@
/*
* Copyright 2008 Advanced Micro Devices, Inc.
* Copyright 2008 Red Hat Inc.
* Copyright 2009 Jerome Glisse.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Dave Airlie
* Alex Deucher
* Jerome Glisse
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/drm2/drmP.h>
#include <dev/drm2/radeon/radeon_drm.h>
#include "radeon.h"
#include "radeon_reg.h"
/*
* GART
* The GART (Graphics Aperture Remapping Table) is an aperture
* in the GPU's address space. System pages can be mapped into
* the aperture and look like contiguous pages from the GPU's
* perspective. A page table maps the pages in the aperture
* to the actual backing pages in system memory.
*
* Radeon GPUs support both an internal GART, as described above,
* and AGP. AGP works similarly, but the GART table is configured
* and maintained by the northbridge rather than the driver.
* Radeon hw has a separate AGP aperture that is programmed to
* point to the AGP aperture provided by the northbridge and the
* requests are passed through to the northbridge aperture.
* Both AGP and internal GART can be used at the same time, however
* that is not currently supported by the driver.
*
* This file handles the common internal GART management.
*/
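The index arithmetic this file uses to translate a GART offset into a GART entry and its backing CPU page can be demonstrated in isolation. The CPU page size below is an assumption chosen so the two sizes differ; radeon's GPU page is 4 KB:

#include <stdio.h>

#define PAGE_SIZE              16384u    /* CPU page; illustrative assumption */
#define RADEON_GPU_PAGE_SIZE   4096u     /* radeon GPU page */

int main(void)
{
    unsigned offset = 5 * PAGE_SIZE + 2 * RADEON_GPU_PAGE_SIZE;
    unsigned t = offset / RADEON_GPU_PAGE_SIZE;               /* GART entry index */
    unsigned p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);      /* CPU page index */

    printf("gart entry %u, backed by cpu page %u\n", t, p);   /* 22, 5 */
    return 0;
}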
/*
* Common GART table functions.
*/
/**
* radeon_gart_table_ram_alloc - allocate system ram for gart page table
*
* @rdev: radeon_device pointer
*
* Allocate system memory for GART page table
* (r1xx-r3xx, non-pcie r4xx, rs400). These asics require the
* gart table to be in system memory.
* Returns 0 for success, -ENOMEM for failure.
*/
int radeon_gart_table_ram_alloc(struct radeon_device *rdev)
{
drm_dma_handle_t *dmah;
dmah = drm_pci_alloc(rdev->ddev, rdev->gart.table_size,
PAGE_SIZE, BUS_SPACE_MAXADDR);
if (dmah == NULL) {
return -ENOMEM;
}
rdev->gart.dmah = dmah;
rdev->gart.ptr = dmah->vaddr;
#ifdef CONFIG_X86
if (rdev->family == CHIP_RS400 || rdev->family == CHIP_RS480 ||
rdev->family == CHIP_RS690 || rdev->family == CHIP_RS740) {
pmap_change_attr((vm_offset_t)rdev->gart.ptr,
rdev->gart.table_size >> PAGE_SHIFT, PAT_UNCACHED);
}
#endif
rdev->gart.table_addr = dmah->busaddr;
memset((void *)rdev->gart.ptr, 0, rdev->gart.table_size);
return 0;
}
/**
* radeon_gart_table_ram_free - free system ram for gart page table
*
* @rdev: radeon_device pointer
*
* Free system memory for GART page table
* (r1xx-r3xx, non-pcie r4xx, rs400). These asics require the
* gart table to be in system memory.
*/
void radeon_gart_table_ram_free(struct radeon_device *rdev)
{
if (rdev->gart.ptr == NULL) {
return;
}
#ifdef CONFIG_X86
if (rdev->family == CHIP_RS400 || rdev->family == CHIP_RS480 ||
rdev->family == CHIP_RS690 || rdev->family == CHIP_RS740) {
pmap_change_attr((vm_offset_t)rdev->gart.ptr,
rdev->gart.table_size >> PAGE_SHIFT, PAT_WRITE_COMBINING);
}
#endif
drm_pci_free(rdev->ddev, rdev->gart.dmah);
rdev->gart.dmah = NULL;
rdev->gart.ptr = NULL;
rdev->gart.table_addr = 0;
}
/**
* radeon_gart_table_vram_alloc - allocate vram for gart page table
*
* @rdev: radeon_device pointer
*
* Allocate video memory for GART page table
* (pcie r4xx, r5xx+). These asics require the
* gart table to be in video memory.
* Returns 0 for success, error for failure.
*/
int radeon_gart_table_vram_alloc(struct radeon_device *rdev)
{
int r;
if (rdev->gart.robj == NULL) {
r = radeon_bo_create(rdev, rdev->gart.table_size,
PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
NULL, &rdev->gart.robj);
if (r) {
return r;
}
}
return 0;
}
/**
* radeon_gart_table_vram_pin - pin gart page table in vram
*
* @rdev: radeon_device pointer
*
* Pin the GART page table in vram so it will not be moved
* by the memory manager (pcie r4xx, r5xx+). These asics require the
* gart table to be in video memory.
* Returns 0 for success, error for failure.
*/
int radeon_gart_table_vram_pin(struct radeon_device *rdev)
{
uint64_t gpu_addr;
int r;
r = radeon_bo_reserve(rdev->gart.robj, false);
if (unlikely(r != 0))
return r;
r = radeon_bo_pin(rdev->gart.robj,
RADEON_GEM_DOMAIN_VRAM, &gpu_addr);
if (r) {
radeon_bo_unreserve(rdev->gart.robj);
return r;
}
r = radeon_bo_kmap(rdev->gart.robj, &rdev->gart.ptr);
if (r)
radeon_bo_unpin(rdev->gart.robj);
radeon_bo_unreserve(rdev->gart.robj);
rdev->gart.table_addr = gpu_addr;
return r;
}
/**
* radeon_gart_table_vram_unpin - unpin gart page table in vram
*
* @rdev: radeon_device pointer
*
* Unpin the GART page table in vram (pcie r4xx, r5xx+).
* These asics require the gart table to be in video memory.
*/
void radeon_gart_table_vram_unpin(struct radeon_device *rdev)
{
int r;
if (rdev->gart.robj == NULL) {
return;
}
r = radeon_bo_reserve(rdev->gart.robj, false);
if (likely(r == 0)) {
radeon_bo_kunmap(rdev->gart.robj);
radeon_bo_unpin(rdev->gart.robj);
radeon_bo_unreserve(rdev->gart.robj);
rdev->gart.ptr = NULL;
}
}
/**
* radeon_gart_table_vram_free - free gart page table vram
*
* @rdev: radeon_device pointer
*
* Free the video memory used for the GART page table
* (pcie r4xx, r5xx+). These asics require the gart table to
* be in video memory.
*/
void radeon_gart_table_vram_free(struct radeon_device *rdev)
{
if (rdev->gart.robj == NULL) {
return;
}
radeon_gart_table_vram_unpin(rdev);
radeon_bo_unref(&rdev->gart.robj);
}
/*
* Common gart functions.
*/
/**
* radeon_gart_unbind - unbind pages from the gart page table
*
* @rdev: radeon_device pointer
* @offset: offset into the GPU's gart aperture
* @pages: number of pages to unbind
*
* Unbinds the requested pages from the gart page table and
* replaces them with the dummy page (all asics).
*/
void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset,
int pages)
{
unsigned t;
unsigned p;
int i, j;
u64 page_base;
if (!rdev->gart.ready) {
DRM_ERROR("trying to unbind memory from uninitialized GART !\n");
return;
}
t = offset / RADEON_GPU_PAGE_SIZE;
p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
for (i = 0; i < pages; i++, p++) {
if (rdev->gart.pages[p]) {
rdev->gart.pages[p] = NULL;
rdev->gart.pages_addr[p] = rdev->dummy_page.addr;
page_base = rdev->gart.pages_addr[p];
for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
if (rdev->gart.ptr) {
radeon_gart_set_page(rdev, t, page_base);
}
page_base += RADEON_GPU_PAGE_SIZE;
}
}
}
mb();
radeon_gart_tlb_flush(rdev);
}
/**
* radeon_gart_bind - bind pages into the gart page table
*
* @rdev: radeon_device pointer
* @offset: offset into the GPU's gart aperture
* @pages: number of pages to bind
* @pagelist: pages to bind
* @dma_addr: DMA addresses of pages
*
* Binds the requested pages to the gart page table
* (all asics).
* Returns 0 for success, -EINVAL for failure.
*/
int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
int pages, vm_page_t *pagelist, dma_addr_t *dma_addr)
{
unsigned t;
unsigned p;
uint64_t page_base;
int i, j;
if (!rdev->gart.ready) {
DRM_ERROR("trying to bind memory to uninitialized GART !\n");
return -EINVAL;
}
t = offset / RADEON_GPU_PAGE_SIZE;
p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
for (i = 0; i < pages; i++, p++) {
rdev->gart.pages_addr[p] = dma_addr[i];
rdev->gart.pages[p] = pagelist[i];
if (rdev->gart.ptr) {
page_base = rdev->gart.pages_addr[p];
for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
radeon_gart_set_page(rdev, t, page_base);
page_base += RADEON_GPU_PAGE_SIZE;
}
}
}
mb();
radeon_gart_tlb_flush(rdev);
return 0;
}
/**
* radeon_gart_restore - bind all pages in the gart page table
*
* @rdev: radeon_device pointer
*
* Binds all pages in the gart page table (all asics).
* Used to rebuild the gart table on device startup or resume.
*/
void radeon_gart_restore(struct radeon_device *rdev)
{
int i, j, t;
u64 page_base;
if (!rdev->gart.ptr) {
return;
}
for (i = 0, t = 0; i < rdev->gart.num_cpu_pages; i++) {
page_base = rdev->gart.pages_addr[i];
for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
radeon_gart_set_page(rdev, t, page_base);
page_base += RADEON_GPU_PAGE_SIZE;
}
}
mb();
radeon_gart_tlb_flush(rdev);
}
/**
* radeon_gart_init - init the driver info for managing the gart
*
* @rdev: radeon_device pointer
*
* Allocate the dummy page and init the gart driver info (all asics).
* Returns 0 for success, error for failure.
*/
int radeon_gart_init(struct radeon_device *rdev)
{
int r, i;
if (rdev->gart.pages) {
return 0;
}
/* We need PAGE_SIZE >= RADEON_GPU_PAGE_SIZE */
if (PAGE_SIZE < RADEON_GPU_PAGE_SIZE) {
DRM_ERROR("Page size is smaller than GPU page size!\n");
return -EINVAL;
}
r = radeon_dummy_page_init(rdev);
if (r)
return r;
/* Compute table size */
rdev->gart.num_cpu_pages = rdev->mc.gtt_size / PAGE_SIZE;
rdev->gart.num_gpu_pages = rdev->mc.gtt_size / RADEON_GPU_PAGE_SIZE;
DRM_INFO("GART: num cpu pages %u, num gpu pages %u\n",
rdev->gart.num_cpu_pages, rdev->gart.num_gpu_pages);
/* Allocate pages table */
rdev->gart.pages = malloc(sizeof(void *) * rdev->gart.num_cpu_pages,
DRM_MEM_DRIVER, M_NOWAIT | M_ZERO);
if (rdev->gart.pages == NULL) {
radeon_gart_fini(rdev);
return -ENOMEM;
}
rdev->gart.pages_addr = malloc(sizeof(dma_addr_t) *
rdev->gart.num_cpu_pages,
DRM_MEM_DRIVER, M_NOWAIT | M_ZERO);
if (rdev->gart.pages_addr == NULL) {
radeon_gart_fini(rdev);
return -ENOMEM;
}
/* set GART entry to point to the dummy page by default */
for (i = 0; i < rdev->gart.num_cpu_pages; i++) {
rdev->gart.pages_addr[i] = rdev->dummy_page.addr;
}
return 0;
}
/**
* radeon_gart_fini - tear down the driver info for managing the gart
*
* @rdev: radeon_device pointer
*
* Tear down the gart driver info and free the dummy page (all asics).
*/
void radeon_gart_fini(struct radeon_device *rdev)
{
if (rdev->gart.pages && rdev->gart.pages_addr && rdev->gart.ready) {
/* unbind pages */
radeon_gart_unbind(rdev, 0, rdev->gart.num_cpu_pages);
}
rdev->gart.ready = false;
free(rdev->gart.pages, DRM_MEM_DRIVER);
free(rdev->gart.pages_addr, DRM_MEM_DRIVER);
rdev->gart.pages = NULL;
rdev->gart.pages_addr = NULL;
radeon_dummy_page_fini(rdev);
}
/*
* GPUVM
* GPUVM is similar to the legacy gart on older asics, however
* rather than there being a single global gart table
* for the entire GPU, there are multiple VM page tables active
* at any given time. The VM page tables can contain a mix of
* vram pages and system memory pages, and system memory pages
* can be mapped as snooped (cached system pages) or unsnooped
* (uncached system pages).
* Each VM has an ID associated with it and there is a page table
* associated with each VMID. When executing a command buffer,
- * the kernel tells the the ring what VMID to use for that command
+ * the kernel tells the ring what VMID to use for that command
* buffer. VMIDs are allocated dynamically as commands are submitted.
* The userspace drivers maintain their own address space and the kernel
* sets up their page tables accordingly when they submit their
* command buffers and a VMID is assigned.
* Cayman/Trinity support up to 8 active VMs at any given time;
* SI supports 16.
*/
/*
* vm helpers
*
* TODO bind a default page at vm initialization for default address
*/
/**
* radeon_vm_num_pdes - return the number of page directory entries
*
* @rdev: radeon_device pointer
*
* Calculate the number of page directory entries (cayman+).
*/
static unsigned radeon_vm_num_pdes(struct radeon_device *rdev)
{
return rdev->vm_manager.max_pfn >> RADEON_VM_BLOCK_SIZE;
}
/**
* radeon_vm_directory_size - returns the size of the page directory in bytes
*
* @rdev: radeon_device pointer
*
* Calculate the size of the page directory in bytes (cayman+).
*/
static unsigned radeon_vm_directory_size(struct radeon_device *rdev)
{
return RADEON_GPU_PAGE_ALIGN(radeon_vm_num_pdes(rdev) * 8);
}
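Plugging representative numbers into the two helpers above: with a block size of 9 (512 PTEs per page table) and a hypothetical 4 GB VM address space, the page directory needs 2048 entries. The constants below mirror the driver's but are restated here as assumptions:

#include <stdio.h>

#define RADEON_VM_BLOCK_SIZE   9        /* 512 PTEs per page table */
#define RADEON_GPU_PAGE_SIZE   4096u

int main(void)
{
    unsigned max_pfn = 1u << 20;    /* 1M GPU pages = 4 GB, assumption */
    unsigned num_pdes = max_pfn >> RADEON_VM_BLOCK_SIZE;
    unsigned dir_size = (num_pdes * 8 + RADEON_GPU_PAGE_SIZE - 1) &
        ~(RADEON_GPU_PAGE_SIZE - 1);    /* what RADEON_GPU_PAGE_ALIGN does */

    printf("%u PDEs, %u-byte page directory\n", num_pdes, dir_size); /* 2048, 16384 */
    return 0;
}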
/**
* radeon_vm_manager_init - init the vm manager
*
* @rdev: radeon_device pointer
*
* Init the vm manager (cayman+).
* Returns 0 for success, error for failure.
*/
int radeon_vm_manager_init(struct radeon_device *rdev)
{
struct radeon_vm *vm;
struct radeon_bo_va *bo_va;
int r;
unsigned size;
if (!rdev->vm_manager.enabled) {
/* allocate enough for 2 full VM pts */
size = radeon_vm_directory_size(rdev);
size += rdev->vm_manager.max_pfn * 8;
size *= 2;
r = radeon_sa_bo_manager_init(rdev, &rdev->vm_manager.sa_manager,
RADEON_GPU_PAGE_ALIGN(size),
RADEON_GEM_DOMAIN_VRAM);
if (r) {
dev_err(rdev->dev, "failed to allocate vm bo (%dKB)\n",
(rdev->vm_manager.max_pfn * 8) >> 10);
return r;
}
r = radeon_asic_vm_init(rdev);
if (r)
return r;
rdev->vm_manager.enabled = true;
r = radeon_sa_bo_manager_start(rdev, &rdev->vm_manager.sa_manager);
if (r)
return r;
}
/* restore page table */
list_for_each_entry(vm, &rdev->vm_manager.lru_vm, list) {
if (vm->page_directory == NULL)
continue;
list_for_each_entry(bo_va, &vm->va, vm_list) {
bo_va->valid = false;
}
}
return 0;
}
/**
* radeon_vm_free_pt - free the page table for a specific vm
*
* @rdev: radeon_device pointer
* @vm: vm to unbind
*
* Free the page table of a specific vm (cayman+).
*
* Global and local mutex must be locked!
*/
static void radeon_vm_free_pt(struct radeon_device *rdev,
struct radeon_vm *vm)
{
struct radeon_bo_va *bo_va;
int i;
if (!vm->page_directory)
return;
list_del_init(&vm->list);
radeon_sa_bo_free(rdev, &vm->page_directory, vm->fence);
list_for_each_entry(bo_va, &vm->va, vm_list) {
bo_va->valid = false;
}
if (vm->page_tables == NULL)
return;
for (i = 0; i < radeon_vm_num_pdes(rdev); i++)
radeon_sa_bo_free(rdev, &vm->page_tables[i], vm->fence);
free(vm->page_tables, DRM_MEM_DRIVER);
}
/**
* radeon_vm_manager_fini - tear down the vm manager
*
* @rdev: radeon_device pointer
*
* Tear down the VM manager (cayman+).
*/
void radeon_vm_manager_fini(struct radeon_device *rdev)
{
struct radeon_vm *vm, *tmp;
int i;
if (!rdev->vm_manager.enabled)
return;
sx_xlock(&rdev->vm_manager.lock);
/* free all allocated page tables */
list_for_each_entry_safe(vm, tmp, &rdev->vm_manager.lru_vm, list) {
sx_xlock(&vm->mutex);
radeon_vm_free_pt(rdev, vm);
sx_xunlock(&vm->mutex);
}
for (i = 0; i < RADEON_NUM_VM; ++i) {
radeon_fence_unref(&rdev->vm_manager.active[i]);
}
radeon_asic_vm_fini(rdev);
sx_xunlock(&rdev->vm_manager.lock);
radeon_sa_bo_manager_suspend(rdev, &rdev->vm_manager.sa_manager);
radeon_sa_bo_manager_fini(rdev, &rdev->vm_manager.sa_manager);
rdev->vm_manager.enabled = false;
}
/**
* radeon_vm_evict - evict page table to make room for new one
*
* @rdev: radeon_device pointer
* @vm: VM we want to allocate something for
*
* Evict a VM from the lru, making sure that it isn't @vm. (cayman+).
* Returns 0 for success, -ENOMEM for failure.
*
* Global and local mutex must be locked!
*/
static int radeon_vm_evict(struct radeon_device *rdev, struct radeon_vm *vm)
{
struct radeon_vm *vm_evict;
if (list_empty(&rdev->vm_manager.lru_vm))
return -ENOMEM;
vm_evict = list_first_entry(&rdev->vm_manager.lru_vm,
struct radeon_vm, list);
if (vm_evict == vm)
return -ENOMEM;
sx_xlock(&vm_evict->mutex);
radeon_vm_free_pt(rdev, vm_evict);
sx_xunlock(&vm_evict->mutex);
return 0;
}
/**
* radeon_vm_alloc_pt - allocates a page table for a VM
*
* @rdev: radeon_device pointer
* @vm: vm to bind
*
* Allocate a page table for the requested vm (cayman+).
* Returns 0 for success, error for failure.
*
* Global and local mutex must be locked!
*/
int radeon_vm_alloc_pt(struct radeon_device *rdev, struct radeon_vm *vm)
{
unsigned pd_size, pts_size;
u64 *pd_addr;
int r;
if (vm == NULL) {
return -EINVAL;
}
if (vm->page_directory != NULL) {
return 0;
}
retry:
pd_size = RADEON_GPU_PAGE_ALIGN(radeon_vm_directory_size(rdev));
r = radeon_sa_bo_new(rdev, &rdev->vm_manager.sa_manager,
&vm->page_directory, pd_size,
RADEON_GPU_PAGE_SIZE, false);
if (r == -ENOMEM) {
r = radeon_vm_evict(rdev, vm);
if (r)
return r;
goto retry;
} else if (r) {
return r;
}
vm->pd_gpu_addr = radeon_sa_bo_gpu_addr(vm->page_directory);
/* Initially clear the page directory */
pd_addr = radeon_sa_bo_cpu_addr(vm->page_directory);
memset(pd_addr, 0, pd_size);
pts_size = radeon_vm_num_pdes(rdev) * sizeof(struct radeon_sa_bo *);
vm->page_tables = malloc(pts_size, DRM_MEM_DRIVER, M_NOWAIT | M_ZERO);
if (vm->page_tables == NULL) {
DRM_ERROR("Cannot allocate memory for page table array\n");
radeon_sa_bo_free(rdev, &vm->page_directory, vm->fence);
return -ENOMEM;
}
return 0;
}
/**
* radeon_vm_add_to_lru - add VMs page table to LRU list
*
* @rdev: radeon_device pointer
* @vm: vm to add to LRU
*
* Add the allocated page table to the LRU list (cayman+).
*
* Global mutex must be locked!
*/
void radeon_vm_add_to_lru(struct radeon_device *rdev, struct radeon_vm *vm)
{
list_del_init(&vm->list);
list_add_tail(&vm->list, &rdev->vm_manager.lru_vm);
}
/**
* radeon_vm_grab_id - allocate the next free VMID
*
* @rdev: radeon_device pointer
* @vm: vm to allocate id for
* @ring: ring we want to submit job to
*
* Allocate an id for the vm (cayman+).
* Returns the fence we need to sync to (if any).
*
* Global and local mutex must be locked!
*/
struct radeon_fence *radeon_vm_grab_id(struct radeon_device *rdev,
struct radeon_vm *vm, int ring)
{
struct radeon_fence *best[RADEON_NUM_RINGS] = {};
unsigned choices[2] = {};
unsigned i;
/* check if the id is still valid */
if (vm->fence && vm->fence == rdev->vm_manager.active[vm->id])
return NULL;
/* we definitely need to flush */
radeon_fence_unref(&vm->last_flush);
/* skip over VMID 0, since it is the system VM */
for (i = 1; i < rdev->vm_manager.nvm; ++i) {
struct radeon_fence *fence = rdev->vm_manager.active[i];
if (fence == NULL) {
/* found a free one */
vm->id = i;
return NULL;
}
if (radeon_fence_is_earlier(fence, best[fence->ring])) {
best[fence->ring] = fence;
choices[fence->ring == ring ? 0 : 1] = i;
}
}
for (i = 0; i < 2; ++i) {
if (choices[i]) {
vm->id = choices[i];
return rdev->vm_manager.active[choices[i]];
}
}
/* should never happen */
panic("%s: failed to allocate next VMID", __func__);
return NULL;
}
/**
* radeon_vm_fence - remember fence for vm
*
* @rdev: radeon_device pointer
* @vm: vm we want to fence
* @fence: fence to remember
*
* Fence the vm (cayman+).
* Set the fence used to protect page table and id.
*
* Global and local mutex must be locked!
*/
void radeon_vm_fence(struct radeon_device *rdev,
struct radeon_vm *vm,
struct radeon_fence *fence)
{
radeon_fence_unref(&rdev->vm_manager.active[vm->id]);
rdev->vm_manager.active[vm->id] = radeon_fence_ref(fence);
radeon_fence_unref(&vm->fence);
vm->fence = radeon_fence_ref(fence);
}
/**
* radeon_vm_bo_find - find the bo_va for a specific vm & bo
*
* @vm: requested vm
* @bo: requested buffer object
*
* Find @bo inside the requested vm (cayman+).
* Search inside the @bo's vm list for the requested vm
* Returns the found bo_va or NULL if none is found
*
* Object has to be reserved!
*/
struct radeon_bo_va *radeon_vm_bo_find(struct radeon_vm *vm,
struct radeon_bo *bo)
{
struct radeon_bo_va *bo_va;
list_for_each_entry(bo_va, &bo->va, bo_list) {
if (bo_va->vm == vm) {
return bo_va;
}
}
return NULL;
}
/**
* radeon_vm_bo_add - add a bo to a specific vm
*
* @rdev: radeon_device pointer
* @vm: requested vm
* @bo: radeon buffer object
*
* Add @bo into the requested vm (cayman+).
* Add @bo to the list of bos associated with the vm
* Returns newly added bo_va or NULL for failure
*
* Object has to be reserved!
*/
struct radeon_bo_va *radeon_vm_bo_add(struct radeon_device *rdev,
struct radeon_vm *vm,
struct radeon_bo *bo)
{
struct radeon_bo_va *bo_va;
bo_va = malloc(sizeof(struct radeon_bo_va),
DRM_MEM_DRIVER, M_NOWAIT | M_ZERO);
if (bo_va == NULL) {
return NULL;
}
bo_va->vm = vm;
bo_va->bo = bo;
bo_va->soffset = 0;
bo_va->eoffset = 0;
bo_va->flags = 0;
bo_va->valid = false;
bo_va->ref_count = 1;
INIT_LIST_HEAD(&bo_va->bo_list);
INIT_LIST_HEAD(&bo_va->vm_list);
sx_xlock(&vm->mutex);
list_add(&bo_va->vm_list, &vm->va);
list_add_tail(&bo_va->bo_list, &bo->va);
sx_xunlock(&vm->mutex);
return bo_va;
}
/**
* radeon_vm_bo_set_addr - set bo's virtual address inside a vm
*
* @rdev: radeon_device pointer
* @bo_va: bo_va to store the address
* @soffset: requested offset of the buffer in the VM address space
* @flags: attributes of pages (read/write/valid/etc.)
*
* Set offset of @bo_va (cayman+).
* Validate and set the offset requested within the vm address space.
* Returns 0 for success, error for failure.
*
* Object has to be reserved!
*/
int radeon_vm_bo_set_addr(struct radeon_device *rdev,
struct radeon_bo_va *bo_va,
uint64_t soffset,
uint32_t flags)
{
uint64_t size = radeon_bo_size(bo_va->bo);
uint64_t eoffset, last_offset = 0;
struct radeon_vm *vm = bo_va->vm;
struct radeon_bo_va *tmp;
struct list_head *head;
unsigned last_pfn;
if (soffset) {
/* make sure the object fits at this offset */
eoffset = soffset + size;
if (soffset >= eoffset) {
return -EINVAL;
}
last_pfn = eoffset / RADEON_GPU_PAGE_SIZE;
if (last_pfn > rdev->vm_manager.max_pfn) {
dev_err(rdev->dev, "va above limit (0x%08X > 0x%08X)\n",
last_pfn, rdev->vm_manager.max_pfn);
return -EINVAL;
}
} else {
eoffset = last_pfn = 0;
}
sx_xlock(&vm->mutex);
head = &vm->va;
last_offset = 0;
list_for_each_entry(tmp, &vm->va, vm_list) {
if (bo_va == tmp) {
/* skip over currently modified bo */
continue;
}
if (soffset >= last_offset && eoffset <= tmp->soffset) {
/* bo can be added before this one */
break;
}
if (eoffset > tmp->soffset && soffset < tmp->eoffset) {
/* bo and tmp overlap, invalid offset */
dev_err(rdev->dev, "bo %p va 0x%08X conflict with (bo %p 0x%08X 0x%08X)\n",
bo_va->bo, (unsigned)bo_va->soffset, tmp->bo,
(unsigned)tmp->soffset, (unsigned)tmp->eoffset);
sx_xunlock(&vm->mutex);
return -EINVAL;
}
last_offset = tmp->eoffset;
head = &tmp->vm_list;
}
bo_va->soffset = soffset;
bo_va->eoffset = eoffset;
bo_va->flags = flags;
bo_va->valid = false;
list_move(&bo_va->vm_list, head);
sx_xunlock(&vm->mutex);
return 0;
}
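The conflict test in the loop above is the standard overlap check for half-open ranges [soffset, eoffset): two ranges collide iff each one starts before the other ends. A sketch with made-up names:

#include <stdbool.h>
#include <stdio.h>

static bool ranges_overlap(unsigned long s1, unsigned long e1,
    unsigned long s2, unsigned long e2)
{
    /* same shape as the eoffset/soffset comparison above */
    return e1 > s2 && s1 < e2;
}

int main(void)
{
    printf("%d\n", ranges_overlap(0x1000, 0x3000, 0x2000, 0x4000));  /* 1 */
    printf("%d\n", ranges_overlap(0x1000, 0x2000, 0x2000, 0x4000));  /* 0: abut */
    return 0;
}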
/**
* radeon_vm_map_gart - get the physical address of a gart page
*
* @rdev: radeon_device pointer
* @addr: the unmapped addr
*
* Look up the physical address of the page that the pte resolves
* to (cayman+).
* Returns the physical address of the page.
*/
uint64_t radeon_vm_map_gart(struct radeon_device *rdev, uint64_t addr)
{
uint64_t result;
/* page table offset */
result = rdev->gart.pages_addr[addr >> PAGE_SHIFT];
/* in case cpu page size != gpu page size */
/*
* FreeBSD port note: FreeBSD's PAGE_MASK is the inverse of
* Linux's one. That's why the test below doesn't inverse the
* constant.
*/
result |= addr & (PAGE_MASK);
return result;
}
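The port note above is easy to verify standalone. A sketch showing why Linux's addr & ~PAGE_MASK becomes addr & PAGE_MASK on FreeBSD (a PAGE_SHIFT of 12 is an assumption for illustration):

#include <stdio.h>

#define PAGE_SHIFT      12
#define FBSD_PAGE_MASK  ((1u << PAGE_SHIFT) - 1)    /* 0x00000fff */
#define LINUX_PAGE_MASK (~FBSD_PAGE_MASK)           /* 0xfffff000 */

int main(void)
{
    unsigned addr = 0x12345678;

    /* FreeBSD's PAGE_MASK selects the offset within the page directly... */
    printf("offset 0x%x\n", addr & FBSD_PAGE_MASK);      /* 0x678 */
    /* ...so the Linux idiom needs the extra inversion instead */
    printf("offset 0x%x\n", addr & ~LINUX_PAGE_MASK);    /* 0x678 */
    return 0;
}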
/**
* radeon_vm_update_pdes - make sure that page directory is valid
*
* @rdev: radeon_device pointer
* @vm: requested vm
* @start: start of GPU address range
* @end: end of GPU address range
*
* Allocates new page tables if necessary
* and updates the page directory (cayman+).
* Returns 0 for success, error for failure.
*
* Global and local mutex must be locked!
*/
static int radeon_vm_update_pdes(struct radeon_device *rdev,
struct radeon_vm *vm,
uint64_t start, uint64_t end)
{
static const uint32_t incr = RADEON_VM_PTE_COUNT * 8;
uint64_t last_pde = ~0, last_pt = ~0;
unsigned count = 0;
uint64_t pt_idx;
int r;
start = (start / RADEON_GPU_PAGE_SIZE) >> RADEON_VM_BLOCK_SIZE;
end = (end / RADEON_GPU_PAGE_SIZE) >> RADEON_VM_BLOCK_SIZE;
/* walk over the address space and update the page directory */
for (pt_idx = start; pt_idx <= end; ++pt_idx) {
uint64_t pde, pt;
if (vm->page_tables[pt_idx])
continue;
retry:
r = radeon_sa_bo_new(rdev, &rdev->vm_manager.sa_manager,
&vm->page_tables[pt_idx],
RADEON_VM_PTE_COUNT * 8,
RADEON_GPU_PAGE_SIZE, false);
if (r == -ENOMEM) {
r = radeon_vm_evict(rdev, vm);
if (r)
return r;
goto retry;
} else if (r) {
return r;
}
pde = vm->pd_gpu_addr + pt_idx * 8;
pt = radeon_sa_bo_gpu_addr(vm->page_tables[pt_idx]);
if (((last_pde + 8 * count) != pde) ||
((last_pt + incr * count) != pt)) {
if (count) {
radeon_asic_vm_set_page(rdev, last_pde,
last_pt, count, incr,
RADEON_VM_PAGE_VALID);
}
count = 1;
last_pde = pde;
last_pt = pt;
} else {
++count;
}
}
if (count) {
radeon_asic_vm_set_page(rdev, last_pde, last_pt, count,
incr, RADEON_VM_PAGE_VALID);
}
return 0;
}
/**
* radeon_vm_update_ptes - make sure that page tables are valid
*
* @rdev: radeon_device pointer
* @vm: requested vm
* @start: start of GPU address range
* @end: end of GPU address range
* @dst: destination address to map to
* @flags: mapping flags
*
* Update the page tables in the range @start - @end (cayman+).
*
* Global and local mutex must be locked!
*/
static void radeon_vm_update_ptes(struct radeon_device *rdev,
struct radeon_vm *vm,
uint64_t start, uint64_t end,
uint64_t dst, uint32_t flags)
{
static const uint64_t mask = RADEON_VM_PTE_COUNT - 1;
uint64_t last_pte = ~0, last_dst = ~0;
unsigned count = 0;
uint64_t addr;
start = start / RADEON_GPU_PAGE_SIZE;
end = end / RADEON_GPU_PAGE_SIZE;
/* walk over the address space and update the page tables */
for (addr = start; addr < end; ) {
uint64_t pt_idx = addr >> RADEON_VM_BLOCK_SIZE;
unsigned nptes;
uint64_t pte;
if ((addr & ~mask) == (end & ~mask))
nptes = end - addr;
else
nptes = RADEON_VM_PTE_COUNT - (addr & mask);
pte = radeon_sa_bo_gpu_addr(vm->page_tables[pt_idx]);
pte += (addr & mask) * 8;
if ((last_pte + 8 * count) != pte) {
if (count) {
radeon_asic_vm_set_page(rdev, last_pte,
last_dst, count,
RADEON_GPU_PAGE_SIZE,
flags);
}
count = nptes;
last_pte = pte;
last_dst = dst;
} else {
count += nptes;
}
addr += nptes;
dst += nptes * RADEON_GPU_PAGE_SIZE;
}
if (count) {
radeon_asic_vm_set_page(rdev, last_pte, last_dst, count,
RADEON_GPU_PAGE_SIZE, flags);
}
}
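/*
 * Both update routines above share one batching idiom: walk the range
 * and, while the next entry is contiguous with the current run, grow
 * "count" instead of issuing a write; flush the accumulated run
 * through radeon_asic_vm_set_page() only when contiguity breaks, and
 * once more after the loop. A minimal sketch of the idiom, with
 * hypothetical emit() and next() helpers standing in for the ASIC
 * call and the iteration step:
 *
 * uint64_t last = ~0ULL;
 * unsigned count = 0;
 * for (cur = first; cur != end; cur = next(cur)) {
 *         if (last + stride * count != cur) {
 *                 if (count)
 *                         emit(last, count);  // flush previous run
 *                 count = 1;
 *                 last = cur;
 *         } else {
 *                 ++count;                    // extend current run
 *         }
 * }
 * if (count)
 *         emit(last, count);                  // flush the tail
 */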
/**
* radeon_vm_bo_update_pte - map a bo into the vm page table
*
* @rdev: radeon_device pointer
* @vm: requested vm
* @bo: radeon buffer object
* @mem: ttm mem
*
* Fill in the page table entries for @bo (cayman+).
* Returns 0 for success, -EINVAL for failure.
*
* Object has to be reserved and the global and local mutex must be locked!
*/
int radeon_vm_bo_update_pte(struct radeon_device *rdev,
struct radeon_vm *vm,
struct radeon_bo *bo,
struct ttm_mem_reg *mem)
{
unsigned ridx = rdev->asic->vm.pt_ring_index;
struct radeon_ring *ring = &rdev->ring[ridx];
struct radeon_semaphore *sem = NULL;
struct radeon_bo_va *bo_va;
unsigned nptes, npdes, ndw;
uint64_t addr;
int r;
/* nothing to do if vm isn't bound */
if (vm->page_directory == NULL)
return 0;
bo_va = radeon_vm_bo_find(vm, bo);
if (bo_va == NULL) {
dev_err(rdev->dev, "bo %p not in vm %p\n", bo, vm);
return -EINVAL;
}
if (!bo_va->soffset) {
dev_err(rdev->dev, "bo %p don't has a mapping in vm %p\n",
bo, vm);
return -EINVAL;
}
if ((bo_va->valid && mem) || (!bo_va->valid && mem == NULL))
return 0;
bo_va->flags &= ~RADEON_VM_PAGE_VALID;
bo_va->flags &= ~RADEON_VM_PAGE_SYSTEM;
if (mem) {
addr = mem->start << PAGE_SHIFT;
if (mem->mem_type != TTM_PL_SYSTEM) {
bo_va->flags |= RADEON_VM_PAGE_VALID;
bo_va->valid = true;
}
if (mem->mem_type == TTM_PL_TT) {
bo_va->flags |= RADEON_VM_PAGE_SYSTEM;
} else {
addr += rdev->vm_manager.vram_base_offset;
}
} else {
addr = 0;
bo_va->valid = false;
}
if (vm->fence && radeon_fence_signaled(vm->fence)) {
radeon_fence_unref(&vm->fence);
}
if (vm->fence && vm->fence->ring != ridx) {
r = radeon_semaphore_create(rdev, &sem);
if (r) {
return r;
}
}
nptes = radeon_bo_ngpu_pages(bo);
/* assume two extra pdes in case the mapping overlaps the borders */
npdes = (nptes >> RADEON_VM_BLOCK_SIZE) + 2;
/* estimate number of dw needed */
/* semaphore, fence and padding */
ndw = 32;
if (RADEON_VM_BLOCK_SIZE > 11)
/* reserve space for one header for every 2k dwords */
ndw += (nptes >> 11) * 4;
else
/* reserve space for one header for every (1 << BLOCK_SIZE) entries */
ndw += (nptes >> RADEON_VM_BLOCK_SIZE) * 4;
/* reserve space for pte addresses */
ndw += nptes * 2;
/* reserve space for one header for every 2k dwords */
ndw += (npdes >> 11) * 4;
/* reserve space for pde addresses */
ndw += npdes * 2;
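/*
 * Worked example of the estimate (illustrative numbers, assuming
 * 4 KiB GPU pages and RADEON_VM_BLOCK_SIZE == 9): a 4 MiB bo has
 * nptes = 1024 and npdes = (1024 >> 9) + 2 = 4, so ndw = 32 +
 * (1024 >> 9) * 4 + 1024 * 2 + (4 >> 11) * 4 + 4 * 2 = 2096 dwords
 * reserved on the ring.
 */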
r = radeon_ring_lock(rdev, ring, ndw);
if (r) {
return r;
}
if (sem && radeon_fence_need_sync(vm->fence, ridx)) {
radeon_semaphore_sync_rings(rdev, sem, vm->fence->ring, ridx);
radeon_fence_note_sync(vm->fence, ridx);
}
r = radeon_vm_update_pdes(rdev, vm, bo_va->soffset, bo_va->eoffset);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
return r;
}
radeon_vm_update_ptes(rdev, vm, bo_va->soffset, bo_va->eoffset,
addr, bo_va->flags);
radeon_fence_unref(&vm->fence);
r = radeon_fence_emit(rdev, &vm->fence, ridx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
return r;
}
radeon_ring_unlock_commit(rdev, ring);
radeon_semaphore_free(rdev, &sem, vm->fence);
radeon_fence_unref(&vm->last_flush);
return 0;
}
/**
* radeon_vm_bo_rmv - remove a bo from a specific vm
*
* @rdev: radeon_device pointer
* @bo_va: requested bo_va
*
* Remove @bo_va->bo from the requested vm (cayman+).
* Remove @bo_va->bo from the list of bos associated with the bo_va->vm and
* remove the ptes for @bo_va in the page table.
* Returns 0 for success.
*
* Object has to be reserved!
*/
int radeon_vm_bo_rmv(struct radeon_device *rdev,
struct radeon_bo_va *bo_va)
{
int r;
sx_xlock(&rdev->vm_manager.lock);
sx_xlock(&bo_va->vm->mutex);
r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL);
sx_xunlock(&rdev->vm_manager.lock);
list_del(&bo_va->vm_list);
sx_xunlock(&bo_va->vm->mutex);
list_del(&bo_va->bo_list);
free(bo_va, DRM_MEM_DRIVER);
return r;
}
/**
* radeon_vm_bo_invalidate - mark the bo as invalid
*
* @rdev: radeon_device pointer
* @bo: radeon buffer object
*
* Mark @bo as invalid (cayman+).
*/
void radeon_vm_bo_invalidate(struct radeon_device *rdev,
struct radeon_bo *bo)
{
struct radeon_bo_va *bo_va;
list_for_each_entry(bo_va, &bo->va, bo_list) {
bo_va->valid = false;
}
}
/**
* radeon_vm_init - initialize a vm instance
*
* @rdev: radeon_device pointer
* @vm: requested vm
*
* Init @vm fields (cayman+).
*/
void radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm)
{
vm->id = 0;
vm->fence = NULL;
sx_init(&vm->mutex, "drm__radeon_vm__mutex");
INIT_LIST_HEAD(&vm->list);
INIT_LIST_HEAD(&vm->va);
}
/**
* radeon_vm_fini - tear down a vm instance
*
* @rdev: radeon_device pointer
* @vm: requested vm
*
* Tear down @vm (cayman+).
* Unbind the VM and remove all bos from the vm bo list
*/
void radeon_vm_fini(struct radeon_device *rdev, struct radeon_vm *vm)
{
struct radeon_bo_va *bo_va, *tmp;
int r;
sx_xlock(&rdev->vm_manager.lock);
sx_xlock(&vm->mutex);
radeon_vm_free_pt(rdev, vm);
sx_xunlock(&rdev->vm_manager.lock);
if (!list_empty(&vm->va)) {
dev_err(rdev->dev, "still active bo inside vm\n");
}
list_for_each_entry_safe(bo_va, tmp, &vm->va, vm_list) {
list_del_init(&bo_va->vm_list);
r = radeon_bo_reserve(bo_va->bo, false);
if (!r) {
list_del_init(&bo_va->bo_list);
radeon_bo_unreserve(bo_va->bo);
free(bo_va, DRM_MEM_DRIVER);
}
}
radeon_fence_unref(&vm->fence);
radeon_fence_unref(&vm->last_flush);
sx_xunlock(&vm->mutex);
}
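/*
 * Putting the routines above together, the life cycle of a VM looks
 * roughly as follows (a sketch only; bo_va comes from
 * radeon_vm_bo_add(), which is outside this hunk):
 *
 * radeon_vm_init(rdev, &vm);                      // lists and mutex
 * radeon_vm_bo_set_addr(rdev, bo_va, va, flags);  // claim a VA range
 * radeon_vm_bo_update_pte(rdev, &vm, bo, mem);    // write the PTEs
 * radeon_vm_bo_rmv(rdev, bo_va);                  // unmap, free bo_va
 * radeon_vm_fini(rdev, &vm);                      // tear it all down
 */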
Index: head/sys/dev/e1000/e1000_82575.c
===================================================================
--- head/sys/dev/e1000/e1000_82575.c (revision 300049)
+++ head/sys/dev/e1000/e1000_82575.c (revision 300050)
@@ -1,3779 +1,3779 @@
/******************************************************************************
Copyright (c) 2001-2015, Intel Corporation
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the Intel Corporation nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
/*$FreeBSD$*/
/*
* 82575EB Gigabit Network Connection
* 82575EB Gigabit Backplane Connection
* 82575GB Gigabit Network Connection
* 82576 Gigabit Network Connection
* 82576 Quad Port Gigabit Mezzanine Adapter
* 82580 Gigabit Network Connection
* I350 Gigabit Network Connection
*/
#include "e1000_api.h"
#include "e1000_i210.h"
static s32 e1000_init_phy_params_82575(struct e1000_hw *hw);
static s32 e1000_init_mac_params_82575(struct e1000_hw *hw);
static s32 e1000_acquire_phy_82575(struct e1000_hw *hw);
static void e1000_release_phy_82575(struct e1000_hw *hw);
static s32 e1000_acquire_nvm_82575(struct e1000_hw *hw);
static void e1000_release_nvm_82575(struct e1000_hw *hw);
static s32 e1000_check_for_link_82575(struct e1000_hw *hw);
static s32 e1000_check_for_link_media_swap(struct e1000_hw *hw);
static s32 e1000_get_cfg_done_82575(struct e1000_hw *hw);
static s32 e1000_get_link_up_info_82575(struct e1000_hw *hw, u16 *speed,
u16 *duplex);
static s32 e1000_phy_hw_reset_sgmii_82575(struct e1000_hw *hw);
static s32 e1000_read_phy_reg_sgmii_82575(struct e1000_hw *hw, u32 offset,
u16 *data);
static s32 e1000_reset_hw_82575(struct e1000_hw *hw);
static s32 e1000_reset_hw_82580(struct e1000_hw *hw);
static s32 e1000_read_phy_reg_82580(struct e1000_hw *hw,
u32 offset, u16 *data);
static s32 e1000_write_phy_reg_82580(struct e1000_hw *hw,
u32 offset, u16 data);
static s32 e1000_set_d0_lplu_state_82580(struct e1000_hw *hw,
bool active);
static s32 e1000_set_d3_lplu_state_82580(struct e1000_hw *hw,
bool active);
static s32 e1000_set_d0_lplu_state_82575(struct e1000_hw *hw,
bool active);
static s32 e1000_setup_copper_link_82575(struct e1000_hw *hw);
static s32 e1000_setup_serdes_link_82575(struct e1000_hw *hw);
static s32 e1000_get_media_type_82575(struct e1000_hw *hw);
static s32 e1000_set_sfp_media_type_82575(struct e1000_hw *hw);
static s32 e1000_valid_led_default_82575(struct e1000_hw *hw, u16 *data);
static s32 e1000_write_phy_reg_sgmii_82575(struct e1000_hw *hw,
u32 offset, u16 data);
static void e1000_clear_hw_cntrs_82575(struct e1000_hw *hw);
static s32 e1000_acquire_swfw_sync_82575(struct e1000_hw *hw, u16 mask);
static s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw,
u16 *speed, u16 *duplex);
static s32 e1000_get_phy_id_82575(struct e1000_hw *hw);
static void e1000_release_swfw_sync_82575(struct e1000_hw *hw, u16 mask);
static bool e1000_sgmii_active_82575(struct e1000_hw *hw);
static s32 e1000_reset_init_script_82575(struct e1000_hw *hw);
static s32 e1000_read_mac_addr_82575(struct e1000_hw *hw);
static void e1000_config_collision_dist_82575(struct e1000_hw *hw);
static void e1000_power_down_phy_copper_82575(struct e1000_hw *hw);
static void e1000_shutdown_serdes_link_82575(struct e1000_hw *hw);
static void e1000_power_up_serdes_link_82575(struct e1000_hw *hw);
static s32 e1000_set_pcie_completion_timeout(struct e1000_hw *hw);
static s32 e1000_reset_mdicnfg_82580(struct e1000_hw *hw);
static s32 e1000_validate_nvm_checksum_82580(struct e1000_hw *hw);
static s32 e1000_update_nvm_checksum_82580(struct e1000_hw *hw);
static s32 e1000_update_nvm_checksum_with_offset(struct e1000_hw *hw,
u16 offset);
static s32 e1000_validate_nvm_checksum_with_offset(struct e1000_hw *hw,
u16 offset);
static s32 e1000_validate_nvm_checksum_i350(struct e1000_hw *hw);
static s32 e1000_update_nvm_checksum_i350(struct e1000_hw *hw);
static void e1000_write_vfta_i350(struct e1000_hw *hw, u32 offset, u32 value);
static void e1000_clear_vfta_i350(struct e1000_hw *hw);
static void e1000_i2c_start(struct e1000_hw *hw);
static void e1000_i2c_stop(struct e1000_hw *hw);
static s32 e1000_clock_in_i2c_byte(struct e1000_hw *hw, u8 *data);
static s32 e1000_clock_out_i2c_byte(struct e1000_hw *hw, u8 data);
static s32 e1000_get_i2c_ack(struct e1000_hw *hw);
static s32 e1000_clock_in_i2c_bit(struct e1000_hw *hw, bool *data);
static s32 e1000_clock_out_i2c_bit(struct e1000_hw *hw, bool data);
static void e1000_raise_i2c_clk(struct e1000_hw *hw, u32 *i2cctl);
static void e1000_lower_i2c_clk(struct e1000_hw *hw, u32 *i2cctl);
static s32 e1000_set_i2c_data(struct e1000_hw *hw, u32 *i2cctl, bool data);
static bool e1000_get_i2c_data(u32 *i2cctl);
static const u16 e1000_82580_rxpbs_table[] = {
36, 72, 144, 1, 2, 4, 8, 16, 35, 70, 140 };
#define E1000_82580_RXPBS_TABLE_SIZE \
(sizeof(e1000_82580_rxpbs_table) / \
sizeof(e1000_82580_rxpbs_table[0]))
/**
* e1000_sgmii_uses_mdio_82575 - Determine if I2C pins are for external MDIO
* @hw: pointer to the HW structure
*
* Called to determine if the I2C pins are being used for I2C or as an
* external MDIO interface since the two options are mutually exclusive.
**/
static bool e1000_sgmii_uses_mdio_82575(struct e1000_hw *hw)
{
u32 reg = 0;
bool ext_mdio = FALSE;
DEBUGFUNC("e1000_sgmii_uses_mdio_82575");
switch (hw->mac.type) {
case e1000_82575:
case e1000_82576:
reg = E1000_READ_REG(hw, E1000_MDIC);
ext_mdio = !!(reg & E1000_MDIC_DEST);
break;
case e1000_82580:
case e1000_i350:
case e1000_i354:
case e1000_i210:
case e1000_i211:
reg = E1000_READ_REG(hw, E1000_MDICNFG);
ext_mdio = !!(reg & E1000_MDICNFG_EXT_MDIO);
break;
default:
break;
}
return ext_mdio;
}
/**
* e1000_init_phy_params_82575 - Init PHY func ptrs.
* @hw: pointer to the HW structure
**/
static s32 e1000_init_phy_params_82575(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
u32 ctrl_ext;
DEBUGFUNC("e1000_init_phy_params_82575");
phy->ops.read_i2c_byte = e1000_read_i2c_byte_generic;
phy->ops.write_i2c_byte = e1000_write_i2c_byte_generic;
if (hw->phy.media_type != e1000_media_type_copper) {
phy->type = e1000_phy_none;
goto out;
}
phy->ops.power_up = e1000_power_up_phy_copper;
phy->ops.power_down = e1000_power_down_phy_copper_82575;
phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;
phy->reset_delay_us = 100;
phy->ops.acquire = e1000_acquire_phy_82575;
phy->ops.check_reset_block = e1000_check_reset_block_generic;
phy->ops.commit = e1000_phy_sw_reset_generic;
phy->ops.get_cfg_done = e1000_get_cfg_done_82575;
phy->ops.release = e1000_release_phy_82575;
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
if (e1000_sgmii_active_82575(hw)) {
phy->ops.reset = e1000_phy_hw_reset_sgmii_82575;
ctrl_ext |= E1000_CTRL_I2C_ENA;
} else {
phy->ops.reset = e1000_phy_hw_reset_generic;
ctrl_ext &= ~E1000_CTRL_I2C_ENA;
}
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
e1000_reset_mdicnfg_82580(hw);
if (e1000_sgmii_active_82575(hw) && !e1000_sgmii_uses_mdio_82575(hw)) {
phy->ops.read_reg = e1000_read_phy_reg_sgmii_82575;
phy->ops.write_reg = e1000_write_phy_reg_sgmii_82575;
} else {
switch (hw->mac.type) {
case e1000_82580:
case e1000_i350:
case e1000_i354:
phy->ops.read_reg = e1000_read_phy_reg_82580;
phy->ops.write_reg = e1000_write_phy_reg_82580;
break;
case e1000_i210:
case e1000_i211:
phy->ops.read_reg = e1000_read_phy_reg_gs40g;
phy->ops.write_reg = e1000_write_phy_reg_gs40g;
break;
default:
phy->ops.read_reg = e1000_read_phy_reg_igp;
phy->ops.write_reg = e1000_write_phy_reg_igp;
}
}
/* Set phy->addr and phy->id. */
ret_val = e1000_get_phy_id_82575(hw);
/* Verify phy id and set remaining function pointers */
switch (phy->id) {
case M88E1543_E_PHY_ID:
case M88E1512_E_PHY_ID:
case I347AT4_E_PHY_ID:
case M88E1112_E_PHY_ID:
case M88E1340M_E_PHY_ID:
case M88E1111_I_PHY_ID:
phy->type = e1000_phy_m88;
phy->ops.check_polarity = e1000_check_polarity_m88;
phy->ops.get_info = e1000_get_phy_info_m88;
if (phy->id == I347AT4_E_PHY_ID ||
phy->id == M88E1112_E_PHY_ID ||
phy->id == M88E1340M_E_PHY_ID)
phy->ops.get_cable_length =
e1000_get_cable_length_m88_gen2;
else if (phy->id == M88E1543_E_PHY_ID ||
phy->id == M88E1512_E_PHY_ID)
phy->ops.get_cable_length =
e1000_get_cable_length_m88_gen2;
else
phy->ops.get_cable_length = e1000_get_cable_length_m88;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_m88;
/* Check if this PHY is configured for media swap. */
if (phy->id == M88E1112_E_PHY_ID) {
u16 data;
ret_val = phy->ops.write_reg(hw,
E1000_M88E1112_PAGE_ADDR,
2);
if (ret_val)
goto out;
ret_val = phy->ops.read_reg(hw,
E1000_M88E1112_MAC_CTRL_1,
&data);
if (ret_val)
goto out;
data = (data & E1000_M88E1112_MAC_CTRL_1_MODE_MASK) >>
E1000_M88E1112_MAC_CTRL_1_MODE_SHIFT;
if (data == E1000_M88E1112_AUTO_COPPER_SGMII ||
data == E1000_M88E1112_AUTO_COPPER_BASEX)
hw->mac.ops.check_for_link =
e1000_check_for_link_media_swap;
}
if (phy->id == M88E1512_E_PHY_ID) {
ret_val = e1000_initialize_M88E1512_phy(hw);
if (ret_val)
goto out;
}
if (phy->id == M88E1543_E_PHY_ID) {
ret_val = e1000_initialize_M88E1543_phy(hw);
if (ret_val)
goto out;
}
break;
case IGP03E1000_E_PHY_ID:
case IGP04E1000_E_PHY_ID:
phy->type = e1000_phy_igp_3;
phy->ops.check_polarity = e1000_check_polarity_igp;
phy->ops.get_info = e1000_get_phy_info_igp;
phy->ops.get_cable_length = e1000_get_cable_length_igp_2;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_igp;
phy->ops.set_d0_lplu_state = e1000_set_d0_lplu_state_82575;
phy->ops.set_d3_lplu_state = e1000_set_d3_lplu_state_generic;
break;
case I82580_I_PHY_ID:
case I350_I_PHY_ID:
phy->type = e1000_phy_82580;
phy->ops.check_polarity = e1000_check_polarity_82577;
phy->ops.force_speed_duplex =
e1000_phy_force_speed_duplex_82577;
phy->ops.get_cable_length = e1000_get_cable_length_82577;
phy->ops.get_info = e1000_get_phy_info_82577;
phy->ops.set_d0_lplu_state = e1000_set_d0_lplu_state_82580;
phy->ops.set_d3_lplu_state = e1000_set_d3_lplu_state_82580;
break;
case I210_I_PHY_ID:
phy->type = e1000_phy_i210;
phy->ops.check_polarity = e1000_check_polarity_m88;
phy->ops.get_info = e1000_get_phy_info_m88;
phy->ops.get_cable_length = e1000_get_cable_length_m88_gen2;
phy->ops.set_d0_lplu_state = e1000_set_d0_lplu_state_82580;
phy->ops.set_d3_lplu_state = e1000_set_d3_lplu_state_82580;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_m88;
break;
default:
ret_val = -E1000_ERR_PHY;
goto out;
}
out:
return ret_val;
}
/**
* e1000_init_nvm_params_82575 - Init NVM func ptrs.
* @hw: pointer to the HW structure
**/
s32 e1000_init_nvm_params_82575(struct e1000_hw *hw)
{
struct e1000_nvm_info *nvm = &hw->nvm;
u32 eecd = E1000_READ_REG(hw, E1000_EECD);
u16 size;
DEBUGFUNC("e1000_init_nvm_params_82575");
size = (u16)((eecd & E1000_EECD_SIZE_EX_MASK) >>
E1000_EECD_SIZE_EX_SHIFT);
/*
* Added to a constant, "size" becomes the left-shift value
* for setting word_size.
*/
size += NVM_WORD_SIZE_BASE_SHIFT;
/* Just in case size is out of range, cap it to the largest
* EEPROM size supported
*/
if (size > 15)
size = 15;
nvm->word_size = 1 << size;
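/*
 * For example (assuming NVM_WORD_SIZE_BASE_SHIFT is 6): an EECD size
 * field of 2 yields size = 8 and word_size = 1 << 8 = 256 16-bit
 * words, while the cap at 15 limits word_size to 32768 words.
 */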
if (hw->mac.type < e1000_i210) {
nvm->opcode_bits = 8;
nvm->delay_usec = 1;
switch (nvm->override) {
case e1000_nvm_override_spi_large:
nvm->page_size = 32;
nvm->address_bits = 16;
break;
case e1000_nvm_override_spi_small:
nvm->page_size = 8;
nvm->address_bits = 8;
break;
default:
nvm->page_size = eecd & E1000_EECD_ADDR_BITS ? 32 : 8;
nvm->address_bits = eecd & E1000_EECD_ADDR_BITS ?
16 : 8;
break;
}
if (nvm->word_size == (1 << 15))
nvm->page_size = 128;
nvm->type = e1000_nvm_eeprom_spi;
} else {
nvm->type = e1000_nvm_flash_hw;
}
/* Function Pointers */
nvm->ops.acquire = e1000_acquire_nvm_82575;
nvm->ops.release = e1000_release_nvm_82575;
if (nvm->word_size < (1 << 15))
nvm->ops.read = e1000_read_nvm_eerd;
else
nvm->ops.read = e1000_read_nvm_spi;
nvm->ops.write = e1000_write_nvm_spi;
nvm->ops.validate = e1000_validate_nvm_checksum_generic;
nvm->ops.update = e1000_update_nvm_checksum_generic;
nvm->ops.valid_led_default = e1000_valid_led_default_82575;
/* override generic family function pointers for specific descendants */
switch (hw->mac.type) {
case e1000_82580:
nvm->ops.validate = e1000_validate_nvm_checksum_82580;
nvm->ops.update = e1000_update_nvm_checksum_82580;
break;
case e1000_i350:
case e1000_i354:
nvm->ops.validate = e1000_validate_nvm_checksum_i350;
nvm->ops.update = e1000_update_nvm_checksum_i350;
break;
default:
break;
}
return E1000_SUCCESS;
}
/**
* e1000_init_mac_params_82575 - Init MAC func ptrs.
* @hw: pointer to the HW structure
**/
static s32 e1000_init_mac_params_82575(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
struct e1000_dev_spec_82575 *dev_spec = &hw->dev_spec._82575;
DEBUGFUNC("e1000_init_mac_params_82575");
/* Derives media type */
e1000_get_media_type_82575(hw);
/* Set mta register count */
mac->mta_reg_count = 128;
/* Set uta register count */
mac->uta_reg_count = (hw->mac.type == e1000_82575) ? 0 : 128;
/* Set rar entry count */
mac->rar_entry_count = E1000_RAR_ENTRIES_82575;
if (mac->type == e1000_82576)
mac->rar_entry_count = E1000_RAR_ENTRIES_82576;
if (mac->type == e1000_82580)
mac->rar_entry_count = E1000_RAR_ENTRIES_82580;
if (mac->type == e1000_i350 || mac->type == e1000_i354)
mac->rar_entry_count = E1000_RAR_ENTRIES_I350;
/* Enable EEE default settings for EEE supported devices */
if (mac->type >= e1000_i350)
dev_spec->eee_disable = FALSE;
/* Allow a single clear of the SW semaphore on I210 and newer */
if (mac->type >= e1000_i210)
dev_spec->clear_semaphore_once = TRUE;
/* Set if part includes ASF firmware */
mac->asf_firmware_present = TRUE;
/* FWSM register */
mac->has_fwsm = TRUE;
/* ARC supported; valid only if manageability features are enabled. */
mac->arc_subsystem_valid =
!!(E1000_READ_REG(hw, E1000_FWSM) & E1000_FWSM_MODE_MASK);
/* Function pointers */
/* bus type/speed/width */
mac->ops.get_bus_info = e1000_get_bus_info_pcie_generic;
/* reset */
if (mac->type >= e1000_82580)
mac->ops.reset_hw = e1000_reset_hw_82580;
else
mac->ops.reset_hw = e1000_reset_hw_82575;
/* hw initialization */
if ((mac->type == e1000_i210) || (mac->type == e1000_i211))
mac->ops.init_hw = e1000_init_hw_i210;
else
mac->ops.init_hw = e1000_init_hw_82575;
/* link setup */
mac->ops.setup_link = e1000_setup_link_generic;
/* physical interface link setup */
mac->ops.setup_physical_interface =
(hw->phy.media_type == e1000_media_type_copper)
? e1000_setup_copper_link_82575 : e1000_setup_serdes_link_82575;
/* physical interface shutdown */
mac->ops.shutdown_serdes = e1000_shutdown_serdes_link_82575;
/* physical interface power up */
mac->ops.power_up_serdes = e1000_power_up_serdes_link_82575;
/* check for link */
mac->ops.check_for_link = e1000_check_for_link_82575;
/* read mac address */
mac->ops.read_mac_addr = e1000_read_mac_addr_82575;
/* configure collision distance */
mac->ops.config_collision_dist = e1000_config_collision_dist_82575;
/* multicast address update */
mac->ops.update_mc_addr_list = e1000_update_mc_addr_list_generic;
if (hw->mac.type == e1000_i350 || mac->type == e1000_i354) {
/* writing VFTA */
mac->ops.write_vfta = e1000_write_vfta_i350;
/* clearing VFTA */
mac->ops.clear_vfta = e1000_clear_vfta_i350;
} else {
/* writing VFTA */
mac->ops.write_vfta = e1000_write_vfta_generic;
/* clearing VFTA */
mac->ops.clear_vfta = e1000_clear_vfta_generic;
}
if (hw->mac.type >= e1000_82580)
mac->ops.validate_mdi_setting =
e1000_validate_mdi_setting_crossover_generic;
/* ID LED init */
mac->ops.id_led_init = e1000_id_led_init_generic;
/* blink LED */
mac->ops.blink_led = e1000_blink_led_generic;
/* setup LED */
mac->ops.setup_led = e1000_setup_led_generic;
/* cleanup LED */
mac->ops.cleanup_led = e1000_cleanup_led_generic;
/* turn on/off LED */
mac->ops.led_on = e1000_led_on_generic;
mac->ops.led_off = e1000_led_off_generic;
/* clear hardware counters */
mac->ops.clear_hw_cntrs = e1000_clear_hw_cntrs_82575;
/* link info */
mac->ops.get_link_up_info = e1000_get_link_up_info_82575;
/* acquire SW_FW sync */
mac->ops.acquire_swfw_sync = e1000_acquire_swfw_sync_82575;
mac->ops.release_swfw_sync = e1000_release_swfw_sync_82575;
if (mac->type >= e1000_i210) {
mac->ops.acquire_swfw_sync = e1000_acquire_swfw_sync_i210;
mac->ops.release_swfw_sync = e1000_release_swfw_sync_i210;
}
/* set lan id for port to determine which phy lock to use */
hw->mac.ops.set_lan_id(hw);
return E1000_SUCCESS;
}
/**
* e1000_init_function_pointers_82575 - Init func ptrs.
* @hw: pointer to the HW structure
*
* Called to initialize all function pointers and parameters.
**/
void e1000_init_function_pointers_82575(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_init_function_pointers_82575");
hw->mac.ops.init_params = e1000_init_mac_params_82575;
hw->nvm.ops.init_params = e1000_init_nvm_params_82575;
hw->phy.ops.init_params = e1000_init_phy_params_82575;
hw->mbx.ops.init_params = e1000_init_mbx_params_pf;
}
/**
* e1000_acquire_phy_82575 - Acquire rights to access PHY
* @hw: pointer to the HW structure
*
* Acquire access rights to the correct PHY.
**/
static s32 e1000_acquire_phy_82575(struct e1000_hw *hw)
{
u16 mask = E1000_SWFW_PHY0_SM;
DEBUGFUNC("e1000_acquire_phy_82575");
if (hw->bus.func == E1000_FUNC_1)
mask = E1000_SWFW_PHY1_SM;
else if (hw->bus.func == E1000_FUNC_2)
mask = E1000_SWFW_PHY2_SM;
else if (hw->bus.func == E1000_FUNC_3)
mask = E1000_SWFW_PHY3_SM;
return hw->mac.ops.acquire_swfw_sync(hw, mask);
}
/**
* e1000_release_phy_82575 - Release rights to access PHY
* @hw: pointer to the HW structure
*
* A wrapper to release access rights to the correct PHY.
**/
static void e1000_release_phy_82575(struct e1000_hw *hw)
{
u16 mask = E1000_SWFW_PHY0_SM;
DEBUGFUNC("e1000_release_phy_82575");
if (hw->bus.func == E1000_FUNC_1)
mask = E1000_SWFW_PHY1_SM;
else if (hw->bus.func == E1000_FUNC_2)
mask = E1000_SWFW_PHY2_SM;
else if (hw->bus.func == E1000_FUNC_3)
mask = E1000_SWFW_PHY3_SM;
hw->mac.ops.release_swfw_sync(hw, mask);
}
/**
* e1000_read_phy_reg_sgmii_82575 - Read PHY register using sgmii
* @hw: pointer to the HW structure
* @offset: register offset to be read
* @data: pointer to the read data
*
* Reads the PHY register at offset using the serial gigabit media independent
* interface and stores the retrieved information in data.
**/
static s32 e1000_read_phy_reg_sgmii_82575(struct e1000_hw *hw, u32 offset,
u16 *data)
{
s32 ret_val = -E1000_ERR_PARAM;
DEBUGFUNC("e1000_read_phy_reg_sgmii_82575");
if (offset > E1000_MAX_SGMII_PHY_REG_ADDR) {
DEBUGOUT1("PHY Address %u is out of range\n", offset);
goto out;
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
ret_val = e1000_read_phy_reg_i2c(hw, offset, data);
hw->phy.ops.release(hw);
out:
return ret_val;
}
/**
* e1000_write_phy_reg_sgmii_82575 - Write PHY register using sgmii
* @hw: pointer to the HW structure
* @offset: register offset to write to
* @data: data to write at register offset
*
* Writes the data to PHY register at the offset using the serial gigabit
* media independent interface.
**/
static s32 e1000_write_phy_reg_sgmii_82575(struct e1000_hw *hw, u32 offset,
u16 data)
{
s32 ret_val = -E1000_ERR_PARAM;
DEBUGFUNC("e1000_write_phy_reg_sgmii_82575");
if (offset > E1000_MAX_SGMII_PHY_REG_ADDR) {
DEBUGOUT1("PHY Address %d is out of range\n", offset);
goto out;
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
ret_val = e1000_write_phy_reg_i2c(hw, offset, data);
hw->phy.ops.release(hw);
out:
return ret_val;
}
/**
* e1000_get_phy_id_82575 - Retrieve PHY addr and id
* @hw: pointer to the HW structure
*
* Retrieves the PHY address and ID for PHYs that do and do not use the
* SGMII interface.
**/
static s32 e1000_get_phy_id_82575(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
u16 phy_id;
u32 ctrl_ext;
u32 mdic;
DEBUGFUNC("e1000_get_phy_id_82575");
/* some i354 devices need an extra read for phy id */
if (hw->mac.type == e1000_i354)
e1000_get_phy_id(hw);
/*
* For SGMII PHYs, we try the list of possible addresses until
* we find one that works. For non-SGMII PHYs
* (e.g. integrated copper PHYs), an address of 1 should
* work. On success, phy->addr and phy->id are set correctly.
*/
if (!e1000_sgmii_active_82575(hw)) {
phy->addr = 1;
ret_val = e1000_get_phy_id(hw);
goto out;
}
if (e1000_sgmii_uses_mdio_82575(hw)) {
switch (hw->mac.type) {
case e1000_82575:
case e1000_82576:
mdic = E1000_READ_REG(hw, E1000_MDIC);
mdic &= E1000_MDIC_PHY_MASK;
phy->addr = mdic >> E1000_MDIC_PHY_SHIFT;
break;
case e1000_82580:
case e1000_i350:
case e1000_i354:
case e1000_i210:
case e1000_i211:
mdic = E1000_READ_REG(hw, E1000_MDICNFG);
mdic &= E1000_MDICNFG_PHY_MASK;
phy->addr = mdic >> E1000_MDICNFG_PHY_SHIFT;
break;
default:
ret_val = -E1000_ERR_PHY;
goto out;
}
ret_val = e1000_get_phy_id(hw);
goto out;
}
/* Power on sgmii phy if it is disabled */
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
E1000_WRITE_REG(hw, E1000_CTRL_EXT,
ctrl_ext & ~E1000_CTRL_EXT_SDP3_DATA);
E1000_WRITE_FLUSH(hw);
msec_delay(300);
/*
* The address field in the I2CCMD register is 3 bits and 0 is invalid.
* Therefore, we need to test 1-7
*/
for (phy->addr = 1; phy->addr < 8; phy->addr++) {
ret_val = e1000_read_phy_reg_sgmii_82575(hw, PHY_ID1, &phy_id);
if (ret_val == E1000_SUCCESS) {
DEBUGOUT2("Vendor ID 0x%08X read at address %u\n",
phy_id, phy->addr);
/*
* At the time of this writing, the M88 part is
* the only supported SGMII PHY product.
*/
if (phy_id == M88_VENDOR)
break;
} else {
DEBUGOUT1("PHY address %u was unreadable\n",
phy->addr);
}
}
/* A valid PHY type couldn't be found. */
if (phy->addr == 8) {
phy->addr = 0;
ret_val = -E1000_ERR_PHY;
} else {
ret_val = e1000_get_phy_id(hw);
}
/* restore previous sfp cage power state */
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
out:
return ret_val;
}
/**
* e1000_phy_hw_reset_sgmii_82575 - Performs a PHY reset
* @hw: pointer to the HW structure
*
* Resets the PHY using the serial gigabit media independent interface.
**/
static s32 e1000_phy_hw_reset_sgmii_82575(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
struct e1000_phy_info *phy = &hw->phy;
DEBUGFUNC("e1000_phy_hw_reset_sgmii_82575");
/*
* This isn't a true "hard" reset, but is the only reset
* available to us at this time.
*/
DEBUGOUT("Soft resetting SGMII attached PHY...\n");
if (!(hw->phy.ops.write_reg))
goto out;
/*
* SFP documentation requires the following to configure the SFP module
* to work on SGMII. No further documentation is given.
*/
ret_val = hw->phy.ops.write_reg(hw, 0x1B, 0x8084);
if (ret_val)
goto out;
ret_val = hw->phy.ops.commit(hw);
if (ret_val)
goto out;
if (phy->id == M88E1512_E_PHY_ID)
ret_val = e1000_initialize_M88E1512_phy(hw);
out:
return ret_val;
}
/**
* e1000_set_d0_lplu_state_82575 - Set Low Power Linkup D0 state
* @hw: pointer to the HW structure
* @active: TRUE to enable LPLU, FALSE to disable
*
* Sets the LPLU D0 state according to the active flag. When
* activating LPLU this function also disables smart speed
* and vice versa. LPLU will not be activated unless the
* device's autonegotiation advertisement supports 10, 10/100,
* or 10/100/1000 at all duplexes.
* This is a function pointer entry point only called by
* PHY setup routines.
**/
static s32 e1000_set_d0_lplu_state_82575(struct e1000_hw *hw, bool active)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
u16 data;
DEBUGFUNC("e1000_set_d0_lplu_state_82575");
if (!(hw->phy.ops.read_reg))
goto out;
ret_val = phy->ops.read_reg(hw, IGP02E1000_PHY_POWER_MGMT, &data);
if (ret_val)
goto out;
if (active) {
data |= IGP02E1000_PM_D0_LPLU;
ret_val = phy->ops.write_reg(hw, IGP02E1000_PHY_POWER_MGMT,
data);
if (ret_val)
goto out;
/* When LPLU is enabled, we should disable SmartSpeed */
ret_val = phy->ops.read_reg(hw, IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
goto out;
data &= ~IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw, IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
goto out;
} else {
data &= ~IGP02E1000_PM_D0_LPLU;
ret_val = phy->ops.write_reg(hw, IGP02E1000_PHY_POWER_MGMT,
data);
/*
* LPLU and SmartSpeed are mutually exclusive. LPLU is used
* during Dx states where the power conservation is most
* important. During driver activity we should enable
* SmartSpeed, so performance is maintained.
*/
if (phy->smart_speed == e1000_smart_speed_on) {
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
goto out;
data |= IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
goto out;
} else if (phy->smart_speed == e1000_smart_speed_off) {
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
goto out;
data &= ~IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
goto out;
}
}
out:
return ret_val;
}
/**
* e1000_set_d0_lplu_state_82580 - Set Low Power Linkup D0 state
* @hw: pointer to the HW structure
* @active: TRUE to enable LPLU, FALSE to disable
*
* Sets the LPLU D0 state according to the active flag. When
* activating LPLU this function also disables smart speed
* and vice versa. LPLU will not be activated unless the
* device's autonegotiation advertisement supports 10, 10/100,
* or 10/100/1000 at all duplexes.
* This is a function pointer entry point only called by
* PHY setup routines.
**/
static s32 e1000_set_d0_lplu_state_82580(struct e1000_hw *hw, bool active)
{
struct e1000_phy_info *phy = &hw->phy;
u32 data;
DEBUGFUNC("e1000_set_d0_lplu_state_82580");
data = E1000_READ_REG(hw, E1000_82580_PHY_POWER_MGMT);
if (active) {
data |= E1000_82580_PM_D0_LPLU;
/* When LPLU is enabled, we should disable SmartSpeed */
data &= ~E1000_82580_PM_SPD;
} else {
data &= ~E1000_82580_PM_D0_LPLU;
/*
* LPLU and SmartSpeed are mutually exclusive. LPLU is used
* during Dx states where the power conservation is most
* important. During driver activity we should enable
* SmartSpeed, so performance is maintained.
*/
if (phy->smart_speed == e1000_smart_speed_on)
data |= E1000_82580_PM_SPD;
else if (phy->smart_speed == e1000_smart_speed_off)
data &= ~E1000_82580_PM_SPD;
}
E1000_WRITE_REG(hw, E1000_82580_PHY_POWER_MGMT, data);
return E1000_SUCCESS;
}
/**
* e1000_set_d3_lplu_state_82580 - Sets low power link up state for D3
* @hw: pointer to the HW structure
* @active: boolean used to enable/disable lplu
*
* Success returns 0, Failure returns 1
*
* The low power link up (lplu) state is set to the power management level D3
* and SmartSpeed is disabled when active is TRUE, else clear lplu for D3
* and enable SmartSpeed. LPLU and SmartSpeed are mutually exclusive. LPLU
* is used during Dx states where the power conservation is most important.
* During driver activity, SmartSpeed should be enabled so performance is
* maintained.
**/
s32 e1000_set_d3_lplu_state_82580(struct e1000_hw *hw, bool active)
{
struct e1000_phy_info *phy = &hw->phy;
u32 data;
DEBUGFUNC("e1000_set_d3_lplu_state_82580");
data = E1000_READ_REG(hw, E1000_82580_PHY_POWER_MGMT);
if (!active) {
data &= ~E1000_82580_PM_D3_LPLU;
/*
* LPLU and SmartSpeed are mutually exclusive. LPLU is used
* during Dx states where the power conservation is most
* important. During driver activity we should enable
* SmartSpeed, so performance is maintained.
*/
if (phy->smart_speed == e1000_smart_speed_on)
data |= E1000_82580_PM_SPD;
else if (phy->smart_speed == e1000_smart_speed_off)
data &= ~E1000_82580_PM_SPD;
} else if ((phy->autoneg_advertised == E1000_ALL_SPEED_DUPLEX) ||
(phy->autoneg_advertised == E1000_ALL_NOT_GIG) ||
(phy->autoneg_advertised == E1000_ALL_10_SPEED)) {
data |= E1000_82580_PM_D3_LPLU;
/* When LPLU is enabled, we should disable SmartSpeed */
data &= ~E1000_82580_PM_SPD;
}
E1000_WRITE_REG(hw, E1000_82580_PHY_POWER_MGMT, data);
return E1000_SUCCESS;
}
/**
* e1000_acquire_nvm_82575 - Request for access to EEPROM
* @hw: pointer to the HW structure
*
* Acquire the necessary semaphores for exclusive access to the EEPROM.
* Set the EEPROM access request bit and wait for EEPROM access grant bit.
* Return successful if access grant bit set, else clear the request for
* EEPROM access and return -E1000_ERR_NVM (-1).
**/
static s32 e1000_acquire_nvm_82575(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
DEBUGFUNC("e1000_acquire_nvm_82575");
ret_val = e1000_acquire_swfw_sync_82575(hw, E1000_SWFW_EEP_SM);
if (ret_val)
goto out;
/*
* Check whether a previous NVM access left error flags set
* that could interfere with this access.
*/
if (hw->mac.type == e1000_i350) {
u32 eecd = E1000_READ_REG(hw, E1000_EECD);
if (eecd & (E1000_EECD_BLOCKED | E1000_EECD_ABORT |
E1000_EECD_TIMEOUT)) {
/* Clear all access error flags */
E1000_WRITE_REG(hw, E1000_EECD, eecd |
E1000_EECD_ERROR_CLR);
DEBUGOUT("Nvm bit banging access error detected and cleared.\n");
}
}
if (hw->mac.type == e1000_82580) {
u32 eecd = E1000_READ_REG(hw, E1000_EECD);
if (eecd & E1000_EECD_BLOCKED) {
/* Clear access error flag */
E1000_WRITE_REG(hw, E1000_EECD, eecd |
E1000_EECD_BLOCKED);
DEBUGOUT("Nvm bit banging access error detected and cleared.\n");
}
}
ret_val = e1000_acquire_nvm_generic(hw);
if (ret_val)
e1000_release_swfw_sync_82575(hw, E1000_SWFW_EEP_SM);
out:
return ret_val;
}
/**
* e1000_release_nvm_82575 - Release exclusive access to EEPROM
* @hw: pointer to the HW structure
*
* Stop any current commands to the EEPROM and clear the EEPROM request bit,
* then release the semaphores acquired.
**/
static void e1000_release_nvm_82575(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_release_nvm_82575");
e1000_release_nvm_generic(hw);
e1000_release_swfw_sync_82575(hw, E1000_SWFW_EEP_SM);
}
/**
* e1000_acquire_swfw_sync_82575 - Acquire SW/FW semaphore
* @hw: pointer to the HW structure
* @mask: specifies which semaphore to acquire
*
* Acquire the SW/FW semaphore to access the PHY or NVM. The mask
* will also specify which port we're acquiring the lock for.
**/
static s32 e1000_acquire_swfw_sync_82575(struct e1000_hw *hw, u16 mask)
{
u32 swfw_sync;
u32 swmask = mask;
u32 fwmask = mask << 16;
s32 ret_val = E1000_SUCCESS;
s32 i = 0, timeout = 200;
DEBUGFUNC("e1000_acquire_swfw_sync_82575");
while (i < timeout) {
if (e1000_get_hw_semaphore_generic(hw)) {
ret_val = -E1000_ERR_SWFW_SYNC;
goto out;
}
swfw_sync = E1000_READ_REG(hw, E1000_SW_FW_SYNC);
if (!(swfw_sync & (fwmask | swmask)))
break;
/*
* Firmware currently using resource (fwmask)
* or other software thread using resource (swmask)
*/
e1000_put_hw_semaphore_generic(hw);
msec_delay_irq(5);
i++;
}
if (i == timeout) {
DEBUGOUT("Driver can't access resource, SW_FW_SYNC timeout.\n");
ret_val = -E1000_ERR_SWFW_SYNC;
goto out;
}
swfw_sync |= swmask;
E1000_WRITE_REG(hw, E1000_SW_FW_SYNC, swfw_sync);
e1000_put_hw_semaphore_generic(hw);
out:
return ret_val;
}
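/*
 * Typical pairing of the SW/FW semaphore helpers (a sketch; callers
 * normally go through the hw->mac.ops function pointers rather than
 * calling these statics directly, as e1000_acquire_phy_82575() does):
 *
 * s32 ret = hw->mac.ops.acquire_swfw_sync(hw, E1000_SWFW_PHY0_SM);
 * if (ret)
 *         return ret;
 * ... access the PHY or NVM registers ...
 * hw->mac.ops.release_swfw_sync(hw, E1000_SWFW_PHY0_SM);
 */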
/**
* e1000_release_swfw_sync_82575 - Release SW/FW semaphore
* @hw: pointer to the HW structure
* @mask: specifies which semaphore to acquire
*
* Release the SW/FW semaphore used to access the PHY or NVM. The mask
* will also specify which port we're releasing the lock for.
**/
static void e1000_release_swfw_sync_82575(struct e1000_hw *hw, u16 mask)
{
u32 swfw_sync;
DEBUGFUNC("e1000_release_swfw_sync_82575");
while (e1000_get_hw_semaphore_generic(hw) != E1000_SUCCESS)
; /* Empty */
swfw_sync = E1000_READ_REG(hw, E1000_SW_FW_SYNC);
swfw_sync &= ~mask;
E1000_WRITE_REG(hw, E1000_SW_FW_SYNC, swfw_sync);
e1000_put_hw_semaphore_generic(hw);
}
/**
* e1000_get_cfg_done_82575 - Read config done bit
* @hw: pointer to the HW structure
*
* Read the management control register for the config done bit for
* completion status. NOTE: EEPROM-less silicon will fail trying to
* read the config done bit, so the error is *ONLY* logged and
* E1000_SUCCESS is returned. If we were to return an error,
* EEPROM-less silicon would not be able to be reset or change link.
**/
static s32 e1000_get_cfg_done_82575(struct e1000_hw *hw)
{
s32 timeout = PHY_CFG_TIMEOUT;
u32 mask = E1000_NVM_CFG_DONE_PORT_0;
DEBUGFUNC("e1000_get_cfg_done_82575");
if (hw->bus.func == E1000_FUNC_1)
mask = E1000_NVM_CFG_DONE_PORT_1;
else if (hw->bus.func == E1000_FUNC_2)
mask = E1000_NVM_CFG_DONE_PORT_2;
else if (hw->bus.func == E1000_FUNC_3)
mask = E1000_NVM_CFG_DONE_PORT_3;
while (timeout) {
if (E1000_READ_REG(hw, E1000_EEMNGCTL) & mask)
break;
msec_delay(1);
timeout--;
}
if (!timeout)
DEBUGOUT("MNG configuration cycle has not completed.\n");
/* If EEPROM is not marked present, init the PHY manually */
if (!(E1000_READ_REG(hw, E1000_EECD) & E1000_EECD_PRES) &&
(hw->phy.type == e1000_phy_igp_3))
e1000_phy_init_script_igp3(hw);
return E1000_SUCCESS;
}
/**
* e1000_get_link_up_info_82575 - Get link speed/duplex info
* @hw: pointer to the HW structure
* @speed: stores the current speed
* @duplex: stores the current duplex
*
* This is a wrapper function: if using the serial gigabit media
* independent interface, PCS is used to retrieve the link speed and
* duplex information. Otherwise, the generic function is used to get
* the link speed and duplex info.
**/
static s32 e1000_get_link_up_info_82575(struct e1000_hw *hw, u16 *speed,
u16 *duplex)
{
s32 ret_val;
DEBUGFUNC("e1000_get_link_up_info_82575");
if (hw->phy.media_type != e1000_media_type_copper)
ret_val = e1000_get_pcs_speed_and_duplex_82575(hw, speed,
duplex);
else
ret_val = e1000_get_speed_and_duplex_copper_generic(hw, speed,
duplex);
return ret_val;
}
/**
* e1000_check_for_link_82575 - Check for link
* @hw: pointer to the HW structure
*
* If sgmii is enabled, then use the pcs register to determine link, otherwise
* use the generic interface for determining link.
**/
static s32 e1000_check_for_link_82575(struct e1000_hw *hw)
{
s32 ret_val;
u16 speed, duplex;
DEBUGFUNC("e1000_check_for_link_82575");
if (hw->phy.media_type != e1000_media_type_copper) {
ret_val = e1000_get_pcs_speed_and_duplex_82575(hw, &speed,
&duplex);
/*
* Use this flag to determine if link needs to be checked or
* not. If we have link clear the flag so that we do not
* continue to check for link.
*/
hw->mac.get_link_status = !hw->mac.serdes_has_link;
/*
* Configure Flow Control now that Auto-Neg has completed.
* First, we need to restore the desired flow control
* settings because we may have had to re-autoneg with a
* different link partner.
*/
ret_val = e1000_config_fc_after_link_up_generic(hw);
if (ret_val)
DEBUGOUT("Error configuring flow control\n");
} else {
ret_val = e1000_check_for_copper_link_generic(hw);
}
return ret_val;
}
/**
* e1000_check_for_link_media_swap - Check which M88E1112 interface linked
* @hw: pointer to the HW structure
*
* Poll the M88E1112 interfaces to see which interface achieved link.
**/
static s32 e1000_check_for_link_media_swap(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val;
u16 data;
u8 port = 0;
DEBUGFUNC("e1000_check_for_link_media_swap");
/* Check for copper. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1112_PAGE_ADDR, 0);
if (ret_val)
return ret_val;
ret_val = phy->ops.read_reg(hw, E1000_M88E1112_STATUS, &data);
if (ret_val)
return ret_val;
if (data & E1000_M88E1112_STATUS_LINK)
port = E1000_MEDIA_PORT_COPPER;
/* Check for other. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1112_PAGE_ADDR, 1);
if (ret_val)
return ret_val;
ret_val = phy->ops.read_reg(hw, E1000_M88E1112_STATUS, &data);
if (ret_val)
return ret_val;
if (data & E1000_M88E1112_STATUS_LINK)
port = E1000_MEDIA_PORT_OTHER;
/* Determine if a swap needs to happen. */
if (port && (hw->dev_spec._82575.media_port != port)) {
hw->dev_spec._82575.media_port = port;
hw->dev_spec._82575.media_changed = TRUE;
}
if (port == E1000_MEDIA_PORT_COPPER) {
/* reset page to 0 */
ret_val = phy->ops.write_reg(hw, E1000_M88E1112_PAGE_ADDR, 0);
if (ret_val)
return ret_val;
e1000_check_for_link_82575(hw);
} else {
e1000_check_for_link_82575(hw);
/* reset page to 0 */
ret_val = phy->ops.write_reg(hw, E1000_M88E1112_PAGE_ADDR, 0);
if (ret_val)
return ret_val;
}
return E1000_SUCCESS;
}
/**
* e1000_power_up_serdes_link_82575 - Power up the serdes link after shutdown
* @hw: pointer to the HW structure
**/
static void e1000_power_up_serdes_link_82575(struct e1000_hw *hw)
{
u32 reg;
DEBUGFUNC("e1000_power_up_serdes_link_82575");
if ((hw->phy.media_type != e1000_media_type_internal_serdes) &&
!e1000_sgmii_active_82575(hw))
return;
/* Enable PCS to turn on link */
reg = E1000_READ_REG(hw, E1000_PCS_CFG0);
reg |= E1000_PCS_CFG_PCS_EN;
E1000_WRITE_REG(hw, E1000_PCS_CFG0, reg);
/* Power up the laser */
reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
reg &= ~E1000_CTRL_EXT_SDP3_DATA;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* flush the write to verify completion */
E1000_WRITE_FLUSH(hw);
msec_delay(1);
}
/**
* e1000_get_pcs_speed_and_duplex_82575 - Retrieve current speed/duplex
* @hw: pointer to the HW structure
* @speed: stores the current speed
* @duplex: stores the current duplex
*
* Using the physical coding sub-layer (PCS), retrieve the current speed and
* duplex, then store the values in the pointers provided.
**/
static s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw,
u16 *speed, u16 *duplex)
{
struct e1000_mac_info *mac = &hw->mac;
u32 pcs;
u32 status;
DEBUGFUNC("e1000_get_pcs_speed_and_duplex_82575");
/*
* Read the PCS Status register for link state. In non-copper mode
* the MAC status register is not accurate, so the PCS status
* register is used instead.
*/
pcs = E1000_READ_REG(hw, E1000_PCS_LSTAT);
/* The link-up bit indicates when link is up during autoneg. */
if (pcs & E1000_PCS_LSTS_LINK_OK) {
mac->serdes_has_link = TRUE;
/* Detect and store PCS speed */
if (pcs & E1000_PCS_LSTS_SPEED_1000)
*speed = SPEED_1000;
else if (pcs & E1000_PCS_LSTS_SPEED_100)
*speed = SPEED_100;
else
*speed = SPEED_10;
/* Detect and store PCS duplex */
if (pcs & E1000_PCS_LSTS_DUPLEX_FULL)
*duplex = FULL_DUPLEX;
else
*duplex = HALF_DUPLEX;
/* Check if it is an I354 2.5Gb backplane connection. */
if (mac->type == e1000_i354) {
status = E1000_READ_REG(hw, E1000_STATUS);
if ((status & E1000_STATUS_2P5_SKU) &&
!(status & E1000_STATUS_2P5_SKU_OVER)) {
*speed = SPEED_2500;
*duplex = FULL_DUPLEX;
DEBUGOUT("2500 Mbs, ");
DEBUGOUT("Full Duplex\n");
}
}
} else {
mac->serdes_has_link = FALSE;
*speed = 0;
*duplex = 0;
}
return E1000_SUCCESS;
}
/**
* e1000_shutdown_serdes_link_82575 - Remove link during power down
* @hw: pointer to the HW structure
*
* In the case of serdes, shut down the SFP module and PCS on driver
* unload when management pass-through is not enabled.
**/
void e1000_shutdown_serdes_link_82575(struct e1000_hw *hw)
{
u32 reg;
DEBUGFUNC("e1000_shutdown_serdes_link_82575");
if ((hw->phy.media_type != e1000_media_type_internal_serdes) &&
!e1000_sgmii_active_82575(hw))
return;
if (!e1000_enable_mng_pass_thru(hw)) {
/* Disable PCS to turn off link */
reg = E1000_READ_REG(hw, E1000_PCS_CFG0);
reg &= ~E1000_PCS_CFG_PCS_EN;
E1000_WRITE_REG(hw, E1000_PCS_CFG0, reg);
/* shutdown the laser */
reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
reg |= E1000_CTRL_EXT_SDP3_DATA;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* flush the write to verify completion */
E1000_WRITE_FLUSH(hw);
msec_delay(1);
}
return;
}
/**
* e1000_reset_hw_82575 - Reset hardware
* @hw: pointer to the HW structure
*
* This resets the hardware into a known state.
**/
static s32 e1000_reset_hw_82575(struct e1000_hw *hw)
{
u32 ctrl;
s32 ret_val;
DEBUGFUNC("e1000_reset_hw_82575");
/*
* Prevent the PCI-E bus from sticking if there is no TLP connection
* on the last TLP read/write transaction when MAC is reset.
*/
ret_val = e1000_disable_pcie_master_generic(hw);
if (ret_val)
DEBUGOUT("PCI-E Master disable polling has failed.\n");
/* set the completion timeout for interface */
ret_val = e1000_set_pcie_completion_timeout(hw);
if (ret_val)
DEBUGOUT("PCI-E Set completion timeout has failed.\n");
DEBUGOUT("Masking off all interrupts\n");
E1000_WRITE_REG(hw, E1000_IMC, 0xffffffff);
E1000_WRITE_REG(hw, E1000_RCTL, 0);
E1000_WRITE_REG(hw, E1000_TCTL, E1000_TCTL_PSP);
E1000_WRITE_FLUSH(hw);
msec_delay(10);
ctrl = E1000_READ_REG(hw, E1000_CTRL);
DEBUGOUT("Issuing a global reset to MAC\n");
E1000_WRITE_REG(hw, E1000_CTRL, ctrl | E1000_CTRL_RST);
ret_val = e1000_get_auto_rd_done_generic(hw);
if (ret_val) {
/*
* When the auto config read does not complete, do not return
* an error. This can happen when there is no EEPROM, and
* returning an error would prevent link from being established.
*/
DEBUGOUT("Auto Read Done did not complete\n");
}
/* If EEPROM is not present, run manual init scripts */
if (!(E1000_READ_REG(hw, E1000_EECD) & E1000_EECD_PRES))
e1000_reset_init_script_82575(hw);
/* Clear any pending interrupt events. */
E1000_WRITE_REG(hw, E1000_IMC, 0xffffffff);
E1000_READ_REG(hw, E1000_ICR);
/* Install any alternate MAC address into RAR0 */
ret_val = e1000_check_alt_mac_addr_generic(hw);
return ret_val;
}
/**
* e1000_init_hw_82575 - Initialize hardware
* @hw: pointer to the HW structure
*
* This inits the hardware readying it for operation.
**/
s32 e1000_init_hw_82575(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
s32 ret_val;
u16 i, rar_count = mac->rar_entry_count;
DEBUGFUNC("e1000_init_hw_82575");
/* Initialize identification LED */
ret_val = mac->ops.id_led_init(hw);
if (ret_val) {
DEBUGOUT("Error initializing identification LED\n");
/* This is not fatal and we should not stop init due to this */
}
/* Disabling VLAN filtering */
DEBUGOUT("Initializing the IEEE VLAN\n");
mac->ops.clear_vfta(hw);
/* Setup the receive address */
e1000_init_rx_addrs_generic(hw, rar_count);
/* Zero out the Multicast HASH table */
DEBUGOUT("Zeroing the MTA\n");
for (i = 0; i < mac->mta_reg_count; i++)
E1000_WRITE_REG_ARRAY(hw, E1000_MTA, i, 0);
/* Zero out the Unicast HASH table */
DEBUGOUT("Zeroing the UTA\n");
for (i = 0; i < mac->uta_reg_count; i++)
E1000_WRITE_REG_ARRAY(hw, E1000_UTA, i, 0);
/* Setup link and flow control */
ret_val = mac->ops.setup_link(hw);
/* Set the default MTU size */
hw->dev_spec._82575.mtu = 1500;
/*
* Clear all of the statistics registers (clear on read). It is
* important that we do this after we have tried to establish link
* because the symbol error count will increment wildly if there
* is no link.
*/
e1000_clear_hw_cntrs_82575(hw);
return ret_val;
}
/**
* e1000_setup_copper_link_82575 - Configure copper link settings
* @hw: pointer to the HW structure
*
* Configures the link for auto-neg or forced speed and duplex. Then
* we check for link; once link is established, collision distance and
* flow control are configured.
**/
static s32 e1000_setup_copper_link_82575(struct e1000_hw *hw)
{
u32 ctrl;
s32 ret_val;
u32 phpm_reg;
DEBUGFUNC("e1000_setup_copper_link_82575");
ctrl = E1000_READ_REG(hw, E1000_CTRL);
ctrl |= E1000_CTRL_SLU;
ctrl &= ~(E1000_CTRL_FRCSPD | E1000_CTRL_FRCDPX);
E1000_WRITE_REG(hw, E1000_CTRL, ctrl);
/* Clear Go Link Disconnect bit on supported devices */
switch (hw->mac.type) {
case e1000_82580:
case e1000_i350:
case e1000_i210:
case e1000_i211:
phpm_reg = E1000_READ_REG(hw, E1000_82580_PHY_POWER_MGMT);
phpm_reg &= ~E1000_82580_PM_GO_LINKD;
E1000_WRITE_REG(hw, E1000_82580_PHY_POWER_MGMT, phpm_reg);
break;
default:
break;
}
ret_val = e1000_setup_serdes_link_82575(hw);
if (ret_val)
goto out;
if (e1000_sgmii_active_82575(hw)) {
/* allow time for the SFP cage to power up the phy */
msec_delay(300);
ret_val = hw->phy.ops.reset(hw);
if (ret_val) {
DEBUGOUT("Error resetting the PHY.\n");
goto out;
}
}
switch (hw->phy.type) {
case e1000_phy_i210:
case e1000_phy_m88:
switch (hw->phy.id) {
case I347AT4_E_PHY_ID:
case M88E1112_E_PHY_ID:
case M88E1340M_E_PHY_ID:
case M88E1543_E_PHY_ID:
case M88E1512_E_PHY_ID:
case I210_I_PHY_ID:
ret_val = e1000_copper_link_setup_m88_gen2(hw);
break;
default:
ret_val = e1000_copper_link_setup_m88(hw);
break;
}
break;
case e1000_phy_igp_3:
ret_val = e1000_copper_link_setup_igp(hw);
break;
case e1000_phy_82580:
ret_val = e1000_copper_link_setup_82577(hw);
break;
default:
ret_val = -E1000_ERR_PHY;
break;
}
if (ret_val)
goto out;
ret_val = e1000_setup_copper_link_generic(hw);
out:
return ret_val;
}
/**
* e1000_setup_serdes_link_82575 - Setup link for serdes
* @hw: pointer to the HW structure
*
* Configure the physical coding sub-layer (PCS) link. The PCS link is
* used on copper connections where the serial gigabit media independent
* interface (SGMII) or serdes fiber is being used. Configures the link
* for auto-negotiation or forces speed/duplex.
**/
static s32 e1000_setup_serdes_link_82575(struct e1000_hw *hw)
{
u32 ctrl_ext, ctrl_reg, reg, anadv_reg;
bool pcs_autoneg;
s32 ret_val = E1000_SUCCESS;
u16 data;
DEBUGFUNC("e1000_setup_serdes_link_82575");
if ((hw->phy.media_type != e1000_media_type_internal_serdes) &&
!e1000_sgmii_active_82575(hw))
return ret_val;
/*
* On the 82575, SerDes loopback mode persists until it is
* explicitly turned off or a power cycle is performed. A read to
* the register does not indicate its status. Therefore, we ensure
* loopback mode is disabled during initialization.
*/
E1000_WRITE_REG(hw, E1000_SCTL, E1000_SCTL_DISABLE_SERDES_LOOPBACK);
/* power on the sfp cage if present */
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
ctrl_ext &= ~E1000_CTRL_EXT_SDP3_DATA;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
ctrl_reg = E1000_READ_REG(hw, E1000_CTRL);
ctrl_reg |= E1000_CTRL_SLU;
/* set both sw defined pins on 82575/82576 */
if (hw->mac.type == e1000_82575 || hw->mac.type == e1000_82576)
ctrl_reg |= E1000_CTRL_SWDPIN0 | E1000_CTRL_SWDPIN1;
reg = E1000_READ_REG(hw, E1000_PCS_LCTL);
/* default pcs_autoneg to the same setting as mac autoneg */
pcs_autoneg = hw->mac.autoneg;
switch (ctrl_ext & E1000_CTRL_EXT_LINK_MODE_MASK) {
case E1000_CTRL_EXT_LINK_MODE_SGMII:
/* sgmii mode lets the phy handle forcing speed/duplex */
pcs_autoneg = TRUE;
/* autoneg timeout should be disabled for SGMII mode */
reg &= ~(E1000_PCS_LCTL_AN_TIMEOUT);
break;
case E1000_CTRL_EXT_LINK_MODE_1000BASE_KX:
/* disable PCS autoneg and support parallel detect only */
pcs_autoneg = FALSE;
/* fall through to default case */
default:
if (hw->mac.type == e1000_82575 ||
hw->mac.type == e1000_82576) {
ret_val = hw->nvm.ops.read(hw, NVM_COMPAT, 1, &data);
if (ret_val) {
DEBUGOUT("NVM Read Error\n");
return ret_val;
}
if (data & E1000_EEPROM_PCS_AUTONEG_DISABLE_BIT)
pcs_autoneg = FALSE;
}
/*
* non-SGMII modes only support a speed of 1000/Full for the
* link, so it is best to just force the MAC and let the PCS
* link either autoneg or be forced to 1000/Full
*/
ctrl_reg |= E1000_CTRL_SPD_1000 | E1000_CTRL_FRCSPD |
E1000_CTRL_FD | E1000_CTRL_FRCDPX;
/* set speed of 1000/Full if speed/duplex is forced */
reg |= E1000_PCS_LCTL_FSV_1000 | E1000_PCS_LCTL_FDV_FULL;
break;
}
E1000_WRITE_REG(hw, E1000_CTRL, ctrl_reg);
/*
* New SerDes mode allows for forcing speed or autonegotiating speed
* at 1gb. Autoneg should be the default set by most drivers. This is the
* mode that will be compatible with older link partners and switches.
* However, both are supported by the hardware and some drivers/tools.
*/
reg &= ~(E1000_PCS_LCTL_AN_ENABLE | E1000_PCS_LCTL_FLV_LINK_UP |
E1000_PCS_LCTL_FSD | E1000_PCS_LCTL_FORCE_LINK);
if (pcs_autoneg) {
/* Set PCS register for autoneg */
reg |= E1000_PCS_LCTL_AN_ENABLE | /* Enable Autoneg */
E1000_PCS_LCTL_AN_RESTART; /* Restart autoneg */
/* Disable force flow control for autoneg */
reg &= ~E1000_PCS_LCTL_FORCE_FCTRL;
/* Configure flow control advertisement for autoneg */
anadv_reg = E1000_READ_REG(hw, E1000_PCS_ANADV);
anadv_reg &= ~(E1000_TXCW_ASM_DIR | E1000_TXCW_PAUSE);
switch (hw->fc.requested_mode) {
case e1000_fc_full:
case e1000_fc_rx_pause:
anadv_reg |= E1000_TXCW_ASM_DIR;
anadv_reg |= E1000_TXCW_PAUSE;
break;
case e1000_fc_tx_pause:
anadv_reg |= E1000_TXCW_ASM_DIR;
break;
default:
break;
}
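/*
 * The encoding above follows the 802.3 Annex 28B pause resolution:
 * PAUSE plus ASM_DIR advertises symmetric and/or rx-only pause
 * (e1000_fc_full, e1000_fc_rx_pause), ASM_DIR alone advertises
 * tx-only pause (e1000_fc_tx_pause), and neither bit advertises no
 * pause support at all.
 */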
E1000_WRITE_REG(hw, E1000_PCS_ANADV, anadv_reg);
DEBUGOUT1("Configuring Autoneg:PCS_LCTL=0x%08X\n", reg);
} else {
/* Set PCS register for forced link */
reg |= E1000_PCS_LCTL_FSD; /* Force Speed */
/* Force flow control for forced link */
reg |= E1000_PCS_LCTL_FORCE_FCTRL;
DEBUGOUT1("Configuring Forced Link:PCS_LCTL=0x%08X\n", reg);
}
E1000_WRITE_REG(hw, E1000_PCS_LCTL, reg);
if (!pcs_autoneg && !e1000_sgmii_active_82575(hw))
e1000_force_mac_fc_generic(hw);
return ret_val;
}
/**
* e1000_get_media_type_82575 - Derive the current media type
* @hw: pointer to the HW structure
*
* The media type is chosen based on a few settings.
* The following are taken into account:
* - link mode set in the current port Init Control Word #3
* - current link mode settings in CSR register
* - MDIO vs. I2C PHY control interface chosen
* - SFP module media type
**/
static s32 e1000_get_media_type_82575(struct e1000_hw *hw)
{
struct e1000_dev_spec_82575 *dev_spec = &hw->dev_spec._82575;
s32 ret_val = E1000_SUCCESS;
u32 ctrl_ext = 0;
u32 link_mode = 0;
/* Set internal phy as default */
dev_spec->sgmii_active = FALSE;
dev_spec->module_plugged = FALSE;
/* Get CSR setting */
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
/* extract link mode setting */
link_mode = ctrl_ext & E1000_CTRL_EXT_LINK_MODE_MASK;
switch (link_mode) {
case E1000_CTRL_EXT_LINK_MODE_1000BASE_KX:
hw->phy.media_type = e1000_media_type_internal_serdes;
break;
case E1000_CTRL_EXT_LINK_MODE_GMII:
hw->phy.media_type = e1000_media_type_copper;
break;
case E1000_CTRL_EXT_LINK_MODE_SGMII:
/* Get phy control interface type set (MDIO vs. I2C)*/
if (e1000_sgmii_uses_mdio_82575(hw)) {
hw->phy.media_type = e1000_media_type_copper;
dev_spec->sgmii_active = TRUE;
break;
}
/* fall through for I2C based SGMII */
case E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES:
/* read media type from SFP EEPROM */
ret_val = e1000_set_sfp_media_type_82575(hw);
if ((ret_val != E1000_SUCCESS) ||
(hw->phy.media_type == e1000_media_type_unknown)) {
/*
* If media type was not identified then return media
* type defined by the CTRL_EXT settings.
*/
hw->phy.media_type = e1000_media_type_internal_serdes;
if (link_mode == E1000_CTRL_EXT_LINK_MODE_SGMII) {
hw->phy.media_type = e1000_media_type_copper;
dev_spec->sgmii_active = TRUE;
}
break;
}
/* do not change link mode for 100BaseFX */
if (dev_spec->eth_flags.e100_base_fx)
break;
/* change current link mode setting */
ctrl_ext &= ~E1000_CTRL_EXT_LINK_MODE_MASK;
if (hw->phy.media_type == e1000_media_type_copper)
ctrl_ext |= E1000_CTRL_EXT_LINK_MODE_SGMII;
else
ctrl_ext |= E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
break;
}
return ret_val;
}
/**
* e1000_set_sfp_media_type_82575 - derives SFP module media type.
* @hw: pointer to the HW structure
*
* The media type is chosen based on the SFP module
* compatibility flags retrieved from the SFP ID EEPROM.
**/
static s32 e1000_set_sfp_media_type_82575(struct e1000_hw *hw)
{
s32 ret_val = E1000_ERR_CONFIG;
u32 ctrl_ext = 0;
struct e1000_dev_spec_82575 *dev_spec = &hw->dev_spec._82575;
struct sfp_e1000_flags *eth_flags = &dev_spec->eth_flags;
u8 transceiver_type = 0;
s32 timeout = 3;
/* Turn I2C interface ON and power on sfp cage */
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
ctrl_ext &= ~E1000_CTRL_EXT_SDP3_DATA;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext | E1000_CTRL_I2C_ENA);
E1000_WRITE_FLUSH(hw);
/* Read SFP module data */
while (timeout) {
ret_val = e1000_read_sfp_data_byte(hw,
E1000_I2CCMD_SFP_DATA_ADDR(E1000_SFF_IDENTIFIER_OFFSET),
&transceiver_type);
if (ret_val == E1000_SUCCESS)
break;
msec_delay(100);
timeout--;
}
if (ret_val != E1000_SUCCESS)
goto out;
ret_val = e1000_read_sfp_data_byte(hw,
E1000_I2CCMD_SFP_DATA_ADDR(E1000_SFF_ETH_FLAGS_OFFSET),
(u8 *)eth_flags);
if (ret_val != E1000_SUCCESS)
goto out;
/* Check if there is an SFP module plugged in and powered */
if ((transceiver_type == E1000_SFF_IDENTIFIER_SFP) ||
(transceiver_type == E1000_SFF_IDENTIFIER_SFF)) {
dev_spec->module_plugged = TRUE;
if (eth_flags->e1000_base_lx || eth_flags->e1000_base_sx) {
hw->phy.media_type = e1000_media_type_internal_serdes;
} else if (eth_flags->e100_base_fx) {
dev_spec->sgmii_active = TRUE;
hw->phy.media_type = e1000_media_type_internal_serdes;
} else if (eth_flags->e1000_base_t) {
dev_spec->sgmii_active = TRUE;
hw->phy.media_type = e1000_media_type_copper;
} else {
hw->phy.media_type = e1000_media_type_unknown;
DEBUGOUT("PHY module has not been recognized\n");
goto out;
}
} else {
hw->phy.media_type = e1000_media_type_unknown;
}
ret_val = E1000_SUCCESS;
out:
/* Restore I2C interface setting */
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
return ret_val;
}
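/*
 * Summary of the decision table implemented above (a restatement of the
 * code, not a datasheet): the identifier byte distinguishes SFP/SFF
 * cages, and the Ethernet compliance flags then select the media type.
 *
 *	identifier	eth compliance flag	resulting media type
 *	----------	-------------------	--------------------
 *	SFP or SFF	1000BASE-LX/SX		internal SerDes
 *	SFP or SFF	100BASE-FX		internal SerDes, SGMII active
 *	SFP or SFF	1000BASE-T		copper, SGMII active
 *	SFP or SFF	none of the above	unknown
 *	other		-			unknown
 */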
/**
* e1000_valid_led_default_82575 - Verify a valid default LED config
* @hw: pointer to the HW structure
* @data: pointer to the NVM (EEPROM)
*
* Read the EEPROM for the current default LED configuration. If the
* LED configuration is not valid, set to a valid LED configuration.
**/
static s32 e1000_valid_led_default_82575(struct e1000_hw *hw, u16 *data)
{
s32 ret_val;
DEBUGFUNC("e1000_valid_led_default_82575");
ret_val = hw->nvm.ops.read(hw, NVM_ID_LED_SETTINGS, 1, data);
if (ret_val) {
DEBUGOUT("NVM Read Error\n");
goto out;
}
if (*data == ID_LED_RESERVED_0000 || *data == ID_LED_RESERVED_FFFF) {
switch (hw->phy.media_type) {
case e1000_media_type_internal_serdes:
*data = ID_LED_DEFAULT_82575_SERDES;
break;
case e1000_media_type_copper:
default:
*data = ID_LED_DEFAULT;
break;
}
}
out:
return ret_val;
}
/**
* e1000_sgmii_active_82575 - Return sgmii state
* @hw: pointer to the HW structure
*
* 82575 silicon has a serialized gigabit media independent interface (sgmii)
* which can be enabled for use in embedded applications. Simply
* return the current state of the sgmii interface.
**/
static bool e1000_sgmii_active_82575(struct e1000_hw *hw)
{
struct e1000_dev_spec_82575 *dev_spec = &hw->dev_spec._82575;
return dev_spec->sgmii_active;
}
/**
* e1000_reset_init_script_82575 - Inits HW defaults after reset
* @hw: pointer to the HW structure
*
* Inits recommended HW defaults after a reset when there is no EEPROM
* detected. This is only for the 82575.
**/
static s32 e1000_reset_init_script_82575(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_reset_init_script_82575");
if (hw->mac.type == e1000_82575) {
DEBUGOUT("Running reset init script for 82575\n");
/* SerDes configuration via SERDESCTRL */
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCTL, 0x00, 0x0C);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCTL, 0x01, 0x78);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCTL, 0x1B, 0x23);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCTL, 0x23, 0x15);
/* CCM configuration via CCMCTL register */
e1000_write_8bit_ctrl_reg_generic(hw, E1000_CCMCTL, 0x14, 0x00);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_CCMCTL, 0x10, 0x00);
/* PCIe lanes configuration */
e1000_write_8bit_ctrl_reg_generic(hw, E1000_GIOCTL, 0x00, 0xEC);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_GIOCTL, 0x61, 0xDF);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_GIOCTL, 0x34, 0x05);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_GIOCTL, 0x2F, 0x81);
/* PCIe PLL Configuration */
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCCTL, 0x02, 0x47);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCCTL, 0x14, 0x00);
e1000_write_8bit_ctrl_reg_generic(hw, E1000_SCCTL, 0x10, 0x00);
}
return E1000_SUCCESS;
}
/**
* e1000_read_mac_addr_82575 - Read device MAC address
* @hw: pointer to the HW structure
**/
static s32 e1000_read_mac_addr_82575(struct e1000_hw *hw)
{
s32 ret_val;
DEBUGFUNC("e1000_read_mac_addr_82575");
/*
* If there's an alternate MAC address, place it in RAR0 so
* that it will override the silicon-installed default
* permanent address.
*/
ret_val = e1000_check_alt_mac_addr_generic(hw);
if (ret_val)
goto out;
ret_val = e1000_read_mac_addr_generic(hw);
out:
return ret_val;
}
/**
* e1000_config_collision_dist_82575 - Configure collision distance
* @hw: pointer to the HW structure
*
* Configures the collision distance to the default value and is used
* during link setup.
**/
static void e1000_config_collision_dist_82575(struct e1000_hw *hw)
{
u32 tctl_ext;
DEBUGFUNC("e1000_config_collision_dist_82575");
tctl_ext = E1000_READ_REG(hw, E1000_TCTL_EXT);
tctl_ext &= ~E1000_TCTL_EXT_COLD;
tctl_ext |= E1000_COLLISION_DISTANCE << E1000_TCTL_EXT_COLD_SHIFT;
E1000_WRITE_REG(hw, E1000_TCTL_EXT, tctl_ext);
E1000_WRITE_FLUSH(hw);
}
/**
* e1000_power_down_phy_copper_82575 - Remove link during PHY power down
* @hw: pointer to the HW structure
*
* In the case of a PHY power down to save power, or to turn off link during a
* driver unload, or when wake on LAN is not enabled, remove the link.
**/
static void e1000_power_down_phy_copper_82575(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
if (!(phy->ops.check_reset_block))
return;
/* If the management interface is not enabled, then power down */
if (!(e1000_enable_mng_pass_thru(hw) || phy->ops.check_reset_block(hw)))
e1000_power_down_phy_copper(hw);
return;
}
/**
* e1000_clear_hw_cntrs_82575 - Clear device specific hardware counters
* @hw: pointer to the HW structure
*
* Clears the hardware counters by reading the counter registers.
**/
static void e1000_clear_hw_cntrs_82575(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_clear_hw_cntrs_82575");
e1000_clear_hw_cntrs_base_generic(hw);
E1000_READ_REG(hw, E1000_PRC64);
E1000_READ_REG(hw, E1000_PRC127);
E1000_READ_REG(hw, E1000_PRC255);
E1000_READ_REG(hw, E1000_PRC511);
E1000_READ_REG(hw, E1000_PRC1023);
E1000_READ_REG(hw, E1000_PRC1522);
E1000_READ_REG(hw, E1000_PTC64);
E1000_READ_REG(hw, E1000_PTC127);
E1000_READ_REG(hw, E1000_PTC255);
E1000_READ_REG(hw, E1000_PTC511);
E1000_READ_REG(hw, E1000_PTC1023);
E1000_READ_REG(hw, E1000_PTC1522);
E1000_READ_REG(hw, E1000_ALGNERRC);
E1000_READ_REG(hw, E1000_RXERRC);
E1000_READ_REG(hw, E1000_TNCRS);
E1000_READ_REG(hw, E1000_CEXTERR);
E1000_READ_REG(hw, E1000_TSCTC);
E1000_READ_REG(hw, E1000_TSCTFC);
E1000_READ_REG(hw, E1000_MGTPRC);
E1000_READ_REG(hw, E1000_MGTPDC);
E1000_READ_REG(hw, E1000_MGTPTC);
E1000_READ_REG(hw, E1000_IAC);
E1000_READ_REG(hw, E1000_ICRXOC);
E1000_READ_REG(hw, E1000_ICRXPTC);
E1000_READ_REG(hw, E1000_ICRXATC);
E1000_READ_REG(hw, E1000_ICTXPTC);
E1000_READ_REG(hw, E1000_ICTXATC);
E1000_READ_REG(hw, E1000_ICTXQEC);
E1000_READ_REG(hw, E1000_ICTXQMTC);
E1000_READ_REG(hw, E1000_ICRXDMTC);
E1000_READ_REG(hw, E1000_CBTMPC);
E1000_READ_REG(hw, E1000_HTDPMC);
E1000_READ_REG(hw, E1000_CBRMPC);
E1000_READ_REG(hw, E1000_RPTHC);
E1000_READ_REG(hw, E1000_HGPTC);
E1000_READ_REG(hw, E1000_HTCBDPC);
E1000_READ_REG(hw, E1000_HGORCL);
E1000_READ_REG(hw, E1000_HGORCH);
E1000_READ_REG(hw, E1000_HGOTCL);
E1000_READ_REG(hw, E1000_HGOTCH);
E1000_READ_REG(hw, E1000_LENERRS);
/* This register should not be read in copper configurations */
if ((hw->phy.media_type == e1000_media_type_internal_serdes) ||
e1000_sgmii_active_82575(hw))
E1000_READ_REG(hw, E1000_SCVPC);
}
/**
* e1000_rx_fifo_flush_82575 - Clean rx fifo after Rx enable
* @hw: pointer to the HW structure
*
* After Rx enable, if manageability is enabled then there is likely some
* bad data at the start of the fifo and possibly in the DMA fifo. This
* function clears the fifos and flushes any packets that came in as rx was
* being enabled.
**/
void e1000_rx_fifo_flush_82575(struct e1000_hw *hw)
{
u32 rctl, rlpml, rxdctl[4], rfctl, temp_rctl, rx_enabled;
int i, ms_wait;
DEBUGFUNC("e1000_rx_fifo_flush_82575");
/* disable IPv6 options as per hardware errata */
rfctl = E1000_READ_REG(hw, E1000_RFCTL);
rfctl |= E1000_RFCTL_IPV6_EX_DIS;
E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
if (hw->mac.type != e1000_82575 ||
!(E1000_READ_REG(hw, E1000_MANC) & E1000_MANC_RCV_TCO_EN))
return;
/* Disable all Rx queues */
for (i = 0; i < 4; i++) {
rxdctl[i] = E1000_READ_REG(hw, E1000_RXDCTL(i));
E1000_WRITE_REG(hw, E1000_RXDCTL(i),
rxdctl[i] & ~E1000_RXDCTL_QUEUE_ENABLE);
}
/* Poll all queues to verify they have shut down */
for (ms_wait = 0; ms_wait < 10; ms_wait++) {
msec_delay(1);
rx_enabled = 0;
for (i = 0; i < 4; i++)
rx_enabled |= E1000_READ_REG(hw, E1000_RXDCTL(i));
if (!(rx_enabled & E1000_RXDCTL_QUEUE_ENABLE))
break;
}
if (ms_wait == 10)
DEBUGOUT("Queue disable timed out after 10ms\n");
/* Clear RLPML, RCTL.SBP, RFCTL.LEF, and set RCTL.LPE so that all
* incoming packets are rejected. Set enable and wait 2ms so that
* any packets that were arriving while RCTL.EN was set are flushed
*/
E1000_WRITE_REG(hw, E1000_RFCTL, rfctl & ~E1000_RFCTL_LEF);
rlpml = E1000_READ_REG(hw, E1000_RLPML);
E1000_WRITE_REG(hw, E1000_RLPML, 0);
rctl = E1000_READ_REG(hw, E1000_RCTL);
temp_rctl = rctl & ~(E1000_RCTL_EN | E1000_RCTL_SBP);
temp_rctl |= E1000_RCTL_LPE;
E1000_WRITE_REG(hw, E1000_RCTL, temp_rctl);
E1000_WRITE_REG(hw, E1000_RCTL, temp_rctl | E1000_RCTL_EN);
E1000_WRITE_FLUSH(hw);
msec_delay(2);
/* Enable Rx queues that were previously enabled and restore our
* previous state
*/
for (i = 0; i < 4; i++)
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl[i]);
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_FLUSH(hw);
E1000_WRITE_REG(hw, E1000_RLPML, rlpml);
E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
/* Flush receive errors generated by workaround */
E1000_READ_REG(hw, E1000_ROC);
E1000_READ_REG(hw, E1000_RNBC);
E1000_READ_REG(hw, E1000_MPC);
}
/**
* e1000_set_pcie_completion_timeout - set pci-e completion timeout
* @hw: pointer to the HW structure
*
* The defaults for 82575 and 82576 should be in the range of 50us to 50ms;
* however, the hardware default for these parts is 500us to 1ms, which is
* less than the 10ms recommended by the PCI-E spec. To address this we need
* to increase the value to either 10ms to 200ms for capability version 1
* config, or 16ms to 55ms for version 2.
**/
static s32 e1000_set_pcie_completion_timeout(struct e1000_hw *hw)
{
u32 gcr = E1000_READ_REG(hw, E1000_GCR);
s32 ret_val = E1000_SUCCESS;
u16 pcie_devctl2;
/* only take action if timeout value is defaulted to 0 */
if (gcr & E1000_GCR_CMPL_TMOUT_MASK)
goto out;
/*
* if the capabilities version is type 1 we can write the
* timeout of 10ms to 200ms through the GCR register
*/
if (!(gcr & E1000_GCR_CAP_VER2)) {
gcr |= E1000_GCR_CMPL_TMOUT_10ms;
goto out;
}
/*
* for version 2 capabilities we need to write the config space
* directly in order to set the completion timeout value for
* 16ms to 55ms
*/
ret_val = e1000_read_pcie_cap_reg(hw, PCIE_DEVICE_CONTROL2,
&pcie_devctl2);
if (ret_val)
goto out;
pcie_devctl2 |= PCIE_DEVICE_CONTROL2_16ms;
ret_val = e1000_write_pcie_cap_reg(hw, PCIE_DEVICE_CONTROL2,
&pcie_devctl2);
out:
/* disable completion timeout resend */
gcr &= ~E1000_GCR_CMPL_TMOUT_RESEND;
E1000_WRITE_REG(hw, E1000_GCR, gcr);
return ret_val;
}
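/*
 * Decision summary for the routine above (a restatement of its control
 * flow, not an additional code path):
 *
 *	GCR timeout already nonzero	-> leave the value alone
 *	capability version 1		-> program the 10ms to 200ms range
 *					   through the GCR register
 *	capability version 2		-> program the 16ms to 55ms range
 *					   through PCIE_DEVICE_CONTROL2
 *
 * In every case completion timeout resend is disabled on the way out.
 */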
/**
* e1000_vmdq_set_anti_spoofing_pf - enable or disable anti-spoofing
* @hw: pointer to the hardware struct
* @enable: state to enter, either enabled or disabled
* @pf: Physical Function pool - do not set anti-spoofing for the PF
*
* enables/disables L2 switch anti-spoofing functionality.
**/
void e1000_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
{
u32 reg_val, reg_offset;
switch (hw->mac.type) {
case e1000_82576:
reg_offset = E1000_DTXSWC;
break;
case e1000_i350:
case e1000_i354:
reg_offset = E1000_TXSWC;
break;
default:
return;
}
reg_val = E1000_READ_REG(hw, reg_offset);
if (enable) {
reg_val |= (E1000_DTXSWC_MAC_SPOOF_MASK |
E1000_DTXSWC_VLAN_SPOOF_MASK);
/* The PF can spoof - it has to in order to
* support emulation mode NICs
*/
reg_val ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
} else {
reg_val &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
E1000_DTXSWC_VLAN_SPOOF_MASK);
}
E1000_WRITE_REG(hw, reg_offset, reg_val);
}
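/*
 * Worked example of the mask arithmetic above, with assumed values of
 * pf == 0 and MAX_NUM_VFS == 8 (chosen only for illustration): the OR
 * sets every MAC and VLAN spoof-check bit, and the XOR of
 * (1 << 0) | (1 << 8) then clears bit 0 (the PF's MAC check) and bit 8
 * (the PF's VLAN check). Anti-spoofing therefore stays enabled for
 * every VF pool while the PF, which must be able to spoof to support
 * emulation mode NICs, is exempted.
 */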
/**
* e1000_vmdq_set_loopback_pf - enable or disable vmdq loopback
* @hw: pointer to the hardware struct
* @enable: state to enter, either enabled or disabled
*
* enables/disables L2 switch loopback functionality.
**/
void e1000_vmdq_set_loopback_pf(struct e1000_hw *hw, bool enable)
{
u32 dtxswc;
switch (hw->mac.type) {
case e1000_82576:
dtxswc = E1000_READ_REG(hw, E1000_DTXSWC);
if (enable)
dtxswc |= E1000_DTXSWC_VMDQ_LOOPBACK_EN;
else
dtxswc &= ~E1000_DTXSWC_VMDQ_LOOPBACK_EN;
E1000_WRITE_REG(hw, E1000_DTXSWC, dtxswc);
break;
case e1000_i350:
case e1000_i354:
dtxswc = E1000_READ_REG(hw, E1000_TXSWC);
if (enable)
dtxswc |= E1000_DTXSWC_VMDQ_LOOPBACK_EN;
else
dtxswc &= ~E1000_DTXSWC_VMDQ_LOOPBACK_EN;
E1000_WRITE_REG(hw, E1000_TXSWC, dtxswc);
break;
default:
/* Currently no other hardware supports loopback */
break;
}
}
/**
* e1000_vmdq_set_replication_pf - enable or disable vmdq replication
* @hw: pointer to the hardware struct
* @enable: state to enter, either enabled or disabled
*
* enables/disables replication of packets across multiple pools.
**/
void e1000_vmdq_set_replication_pf(struct e1000_hw *hw, bool enable)
{
u32 vt_ctl = E1000_READ_REG(hw, E1000_VT_CTL);
if (enable)
vt_ctl |= E1000_VT_CTL_VM_REPL_EN;
else
vt_ctl &= ~E1000_VT_CTL_VM_REPL_EN;
E1000_WRITE_REG(hw, E1000_VT_CTL, vt_ctl);
}
/**
* e1000_read_phy_reg_82580 - Read 82580 MDI control register
* @hw: pointer to the HW structure
* @offset: register offset to be read
* @data: pointer to the read data
*
* Reads the MDI control register in the PHY at offset and stores the
* information read to data.
**/
static s32 e1000_read_phy_reg_82580(struct e1000_hw *hw, u32 offset, u16 *data)
{
s32 ret_val;
DEBUGFUNC("e1000_read_phy_reg_82580");
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
ret_val = e1000_read_phy_reg_mdic(hw, offset, data);
hw->phy.ops.release(hw);
out:
return ret_val;
}
/**
* e1000_write_phy_reg_82580 - Write 82580 MDI control register
* @hw: pointer to the HW structure
* @offset: register offset to write to
* @data: data to write to register at offset
*
* Writes data to MDI control register in the PHY at offset.
**/
static s32 e1000_write_phy_reg_82580(struct e1000_hw *hw, u32 offset, u16 data)
{
s32 ret_val;
DEBUGFUNC("e1000_write_phy_reg_82580");
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
ret_val = e1000_write_phy_reg_mdic(hw, offset, data);
hw->phy.ops.release(hw);
out:
return ret_val;
}
/**
* e1000_reset_mdicnfg_82580 - Reset MDICNFG destination and com_mdio bits
* @hw: pointer to the HW structure
*
- * This resets the the MDICNFG.Destination and MDICNFG.Com_MDIO bits based on
+ * This resets the MDICNFG.Destination and MDICNFG.Com_MDIO bits based on
* the values found in the EEPROM. This addresses an issue in which these
* bits are not restored from EEPROM after reset.
**/
static s32 e1000_reset_mdicnfg_82580(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u32 mdicnfg;
u16 nvm_data = 0;
DEBUGFUNC("e1000_reset_mdicnfg_82580");
if (hw->mac.type != e1000_82580)
goto out;
if (!e1000_sgmii_active_82575(hw))
goto out;
ret_val = hw->nvm.ops.read(hw, NVM_INIT_CONTROL3_PORT_A +
NVM_82580_LAN_FUNC_OFFSET(hw->bus.func), 1,
&nvm_data);
if (ret_val) {
DEBUGOUT("NVM Read Error\n");
goto out;
}
mdicnfg = E1000_READ_REG(hw, E1000_MDICNFG);
if (nvm_data & NVM_WORD24_EXT_MDIO)
mdicnfg |= E1000_MDICNFG_EXT_MDIO;
if (nvm_data & NVM_WORD24_COM_MDIO)
mdicnfg |= E1000_MDICNFG_COM_MDIO;
E1000_WRITE_REG(hw, E1000_MDICNFG, mdicnfg);
out:
return ret_val;
}
/**
* e1000_reset_hw_82580 - Reset hardware
* @hw: pointer to the HW structure
*
* This resets the function or the entire device (all ports, etc.)
* to a known state.
**/
static s32 e1000_reset_hw_82580(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
/* BH SW mailbox bit in SW_FW_SYNC */
u16 swmbsw_mask = E1000_SW_SYNCH_MB;
u32 ctrl;
bool global_device_reset = hw->dev_spec._82575.global_device_reset;
DEBUGFUNC("e1000_reset_hw_82580");
hw->dev_spec._82575.global_device_reset = FALSE;
/* 82580 does not reliably do global_device_reset due to hw errata */
if (hw->mac.type == e1000_82580)
global_device_reset = FALSE;
/* Get current control state. */
ctrl = E1000_READ_REG(hw, E1000_CTRL);
/*
* Prevent the PCI-E bus from sticking if there is no TLP connection
* on the last TLP read/write transaction when MAC is reset.
*/
ret_val = e1000_disable_pcie_master_generic(hw);
if (ret_val)
DEBUGOUT("PCI-E Master disable polling has failed.\n");
DEBUGOUT("Masking off all interrupts\n");
E1000_WRITE_REG(hw, E1000_IMC, 0xffffffff);
E1000_WRITE_REG(hw, E1000_RCTL, 0);
E1000_WRITE_REG(hw, E1000_TCTL, E1000_TCTL_PSP);
E1000_WRITE_FLUSH(hw);
msec_delay(10);
/* Determine whether or not a global dev reset is requested */
if (global_device_reset && hw->mac.ops.acquire_swfw_sync(hw,
swmbsw_mask))
global_device_reset = FALSE;
if (global_device_reset && !(E1000_READ_REG(hw, E1000_STATUS) &
E1000_STAT_DEV_RST_SET))
ctrl |= E1000_CTRL_DEV_RST;
else
ctrl |= E1000_CTRL_RST;
E1000_WRITE_REG(hw, E1000_CTRL, ctrl);
switch (hw->device_id) {
case E1000_DEV_ID_DH89XXCC_SGMII:
break;
default:
E1000_WRITE_FLUSH(hw);
break;
}
/* Add delay to ensure DEV_RST or RST has time to complete */
msec_delay(5);
ret_val = e1000_get_auto_rd_done_generic(hw);
if (ret_val) {
/*
* When auto config read does not complete, do not
* return with an error. This can happen in situations
* where there is no eeprom, and returning an error would
* prevent getting link.
*/
DEBUGOUT("Auto Read Done did not complete\n");
}
/* clear global device reset status bit */
E1000_WRITE_REG(hw, E1000_STATUS, E1000_STAT_DEV_RST_SET);
/* Clear any pending interrupt events. */
E1000_WRITE_REG(hw, E1000_IMC, 0xffffffff);
E1000_READ_REG(hw, E1000_ICR);
ret_val = e1000_reset_mdicnfg_82580(hw);
if (ret_val)
DEBUGOUT("Could not reset MDICNFG based on EEPROM\n");
/* Install any alternate MAC address into RAR0 */
ret_val = e1000_check_alt_mac_addr_generic(hw);
/* Release semaphore */
if (global_device_reset)
hw->mac.ops.release_swfw_sync(hw, swmbsw_mask);
return ret_val;
}
/**
* e1000_rxpbs_adjust_82580 - adjust RXPBS value to reflect actual Rx PBA size
* @data: data received by reading RXPBS register
*
* The 82580 uses a table based approach for packet buffer allocation sizes.
* This function converts the retrieved value into the correct table value:
*     0x0 0x1 0x2 0x3 0x4 0x5 0x6 0x7
* 0x0  36  72 144   1   2   4   8  16
* 0x8  35  70 140 rsv rsv rsv rsv rsv
**/
u16 e1000_rxpbs_adjust_82580(u32 data)
{
u16 ret_val = 0;
if (data < E1000_82580_RXPBS_TABLE_SIZE)
ret_val = e1000_82580_rxpbs_table[data];
return ret_val;
}
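/*
 * Illustrative use of the table above (the buffer sizes are assumed to
 * be in KB, and this caller sketch is not part of the driver): a
 * retrieved value of 0x2 maps to 144, while 0x8 maps to 35.
 *
 *	u16 rx_pba = e1000_rxpbs_adjust_82580(E1000_READ_REG(hw, E1000_RXPBS));
 */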
/**
* e1000_validate_nvm_checksum_with_offset - Validate EEPROM
* checksum
* @hw: pointer to the HW structure
* @offset: offset in words of the checksum protected region
*
* Calculates the EEPROM checksum by reading/adding each word of the EEPROM
* and then verifies that the sum of the EEPROM is equal to 0xBABA.
**/
s32 e1000_validate_nvm_checksum_with_offset(struct e1000_hw *hw, u16 offset)
{
s32 ret_val = E1000_SUCCESS;
u16 checksum = 0;
u16 i, nvm_data;
DEBUGFUNC("e1000_validate_nvm_checksum_with_offset");
for (i = offset; i < ((NVM_CHECKSUM_REG + offset) + 1); i++) {
ret_val = hw->nvm.ops.read(hw, i, 1, &nvm_data);
if (ret_val) {
DEBUGOUT("NVM Read Error\n");
goto out;
}
checksum += nvm_data;
}
if (checksum != (u16) NVM_SUM) {
DEBUGOUT("NVM Checksum Invalid\n");
ret_val = -E1000_ERR_NVM;
goto out;
}
out:
return ret_val;
}
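/*
 * Worked example of the invariant checked above: the checksum word is
 * chosen so that the 16-bit sum of every word in the protected region,
 * checksum word included, wraps to NVM_SUM (0xBABA). If the words
 * before the checksum sum to 0x1234, the update routine below stores
 * 0xBABA - 0x1234 = 0xA886, and re-adding it here yields 0xBABA again.
 */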
/**
* e1000_update_nvm_checksum_with_offset - Update EEPROM
* checksum
* @hw: pointer to the HW structure
* @offset: offset in words of the checksum protected region
*
* Updates the EEPROM checksum by reading/adding each word of the EEPROM
* up to the checksum. Then calculates the EEPROM checksum and writes the
* value to the EEPROM.
**/
s32 e1000_update_nvm_checksum_with_offset(struct e1000_hw *hw, u16 offset)
{
s32 ret_val;
u16 checksum = 0;
u16 i, nvm_data;
DEBUGFUNC("e1000_update_nvm_checksum_with_offset");
for (i = offset; i < (NVM_CHECKSUM_REG + offset); i++) {
ret_val = hw->nvm.ops.read(hw, i, 1, &nvm_data);
if (ret_val) {
DEBUGOUT("NVM Read Error while updating checksum.\n");
goto out;
}
checksum += nvm_data;
}
checksum = (u16) NVM_SUM - checksum;
ret_val = hw->nvm.ops.write(hw, (NVM_CHECKSUM_REG + offset), 1,
&checksum);
if (ret_val)
DEBUGOUT("NVM Write Error while updating checksum.\n");
out:
return ret_val;
}
/**
* e1000_validate_nvm_checksum_82580 - Validate EEPROM checksum
* @hw: pointer to the HW structure
*
* Calculates the EEPROM section checksum by reading/adding each word of
* the EEPROM and then verifies that the sum of the EEPROM is
* equal to 0xBABA.
**/
static s32 e1000_validate_nvm_checksum_82580(struct e1000_hw *hw)
{
s32 ret_val;
u16 eeprom_regions_count = 1;
u16 j, nvm_data;
u16 nvm_offset;
DEBUGFUNC("e1000_validate_nvm_checksum_82580");
ret_val = hw->nvm.ops.read(hw, NVM_COMPATIBILITY_REG_3, 1, &nvm_data);
if (ret_val) {
DEBUGOUT("NVM Read Error\n");
goto out;
}
if (nvm_data & NVM_COMPATIBILITY_BIT_MASK) {
/* if the checksums compatibility bit is set, validate
 * checksums for all 4 ports. */
eeprom_regions_count = 4;
}
for (j = 0; j < eeprom_regions_count; j++) {
nvm_offset = NVM_82580_LAN_FUNC_OFFSET(j);
ret_val = e1000_validate_nvm_checksum_with_offset(hw,
nvm_offset);
if (ret_val != E1000_SUCCESS)
goto out;
}
out:
return ret_val;
}
/**
* e1000_update_nvm_checksum_82580 - Update EEPROM checksum
* @hw: pointer to the HW structure
*
* Updates the EEPROM section checksums for all 4 ports by reading/adding
* each word of the EEPROM up to the checksum. Then calculates the EEPROM
* checksum and writes the value to the EEPROM.
**/
static s32 e1000_update_nvm_checksum_82580(struct e1000_hw *hw)
{
s32 ret_val;
u16 j, nvm_data;
u16 nvm_offset;
DEBUGFUNC("e1000_update_nvm_checksum_82580");
ret_val = hw->nvm.ops.read(hw, NVM_COMPATIBILITY_REG_3, 1, &nvm_data);
if (ret_val) {
DEBUGOUT("NVM Read Error while updating checksum compatibility bit.\n");
goto out;
}
if (!(nvm_data & NVM_COMPATIBILITY_BIT_MASK)) {
/* set compatibility bit to validate checksums appropriately */
nvm_data = nvm_data | NVM_COMPATIBILITY_BIT_MASK;
ret_val = hw->nvm.ops.write(hw, NVM_COMPATIBILITY_REG_3, 1,
&nvm_data);
if (ret_val) {
DEBUGOUT("NVM Write Error while updating checksum compatibility bit.\n");
goto out;
}
}
for (j = 0; j < 4; j++) {
nvm_offset = NVM_82580_LAN_FUNC_OFFSET(j);
ret_val = e1000_update_nvm_checksum_with_offset(hw, nvm_offset);
if (ret_val)
goto out;
}
out:
return ret_val;
}
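/*
 * Layout assumed by the per-port loops in the 82580 routines above and
 * the i350 routines below: each of the 4 LAN functions has its own
 * checksum-protected EEPROM region, located via
 * NVM_82580_LAN_FUNC_OFFSET(port), so the multi-port validate/update
 * routines are simply the single-region helpers applied at each
 * port's offset.
 */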
/**
* e1000_validate_nvm_checksum_i350 - Validate EEPROM checksum
* @hw: pointer to the HW structure
*
* Calculates the EEPROM section checksum by reading/adding each word of
* the EEPROM and then verifies that the sum of the EEPROM is
* equal to 0xBABA.
**/
static s32 e1000_validate_nvm_checksum_i350(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u16 j;
u16 nvm_offset;
DEBUGFUNC("e1000_validate_nvm_checksum_i350");
for (j = 0; j < 4; j++) {
nvm_offset = NVM_82580_LAN_FUNC_OFFSET(j);
ret_val = e1000_validate_nvm_checksum_with_offset(hw,
nvm_offset);
if (ret_val != E1000_SUCCESS)
goto out;
}
out:
return ret_val;
}
/**
* e1000_update_nvm_checksum_i350 - Update EEPROM checksum
* @hw: pointer to the HW structure
*
* Updates the EEPROM section checksums for all 4 ports by reading/adding
* each word of the EEPROM up to the checksum. Then calculates the EEPROM
* checksum and writes the value to the EEPROM.
**/
static s32 e1000_update_nvm_checksum_i350(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u16 j;
u16 nvm_offset;
DEBUGFUNC("e1000_update_nvm_checksum_i350");
for (j = 0; j < 4; j++) {
nvm_offset = NVM_82580_LAN_FUNC_OFFSET(j);
ret_val = e1000_update_nvm_checksum_with_offset(hw, nvm_offset);
if (ret_val != E1000_SUCCESS)
goto out;
}
out:
return ret_val;
}
/**
* __e1000_access_emi_reg - Read/write EMI register
* @hw: pointer to the HW structure
* @address: EMI address to program
* @data: pointer to value to read/write from/to the EMI address
* @read: boolean flag to indicate read or write
**/
static s32 __e1000_access_emi_reg(struct e1000_hw *hw, u16 address,
u16 *data, bool read)
{
s32 ret_val;
DEBUGFUNC("__e1000_access_emi_reg");
ret_val = hw->phy.ops.write_reg(hw, E1000_EMIADD, address);
if (ret_val)
return ret_val;
if (read)
ret_val = hw->phy.ops.read_reg(hw, E1000_EMIDATA, data);
else
ret_val = hw->phy.ops.write_reg(hw, E1000_EMIDATA, *data);
return ret_val;
}
/**
* e1000_read_emi_reg - Read Extended Management Interface register
* @hw: pointer to the HW structure
* @addr: EMI address to program
* @data: value to be read from the EMI address
**/
s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data)
{
DEBUGFUNC("e1000_read_emi_reg");
return __e1000_access_emi_reg(hw, addr, data, TRUE);
}
/**
* e1000_initialize_M88E1512_phy - Initialize M88E1512 PHY
* @hw: pointer to the HW structure
*
* Initialize Marvell 1512 to work correctly with Avoton.
**/
s32 e1000_initialize_M88E1512_phy(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
DEBUGFUNC("e1000_initialize_M88E1512_phy");
/* Check if this is correct PHY. */
if (phy->id != M88E1512_E_PHY_ID)
goto out;
/* Switch to PHY page 0xFF. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x00FF);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0x214B);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2144);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0x0C28);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2146);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0xB233);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x214D);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0xCC0C);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2159);
if (ret_val)
goto out;
/* Switch to PHY page 0xFB. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x00FB);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_3, 0x000D);
if (ret_val)
goto out;
/* Switch to PHY page 0x12. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x12);
if (ret_val)
goto out;
/* Change mode to SGMII-to-Copper */
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_MODE, 0x8001);
if (ret_val)
goto out;
/* Return the PHY to page 0. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0);
if (ret_val)
goto out;
ret_val = phy->ops.commit(hw);
if (ret_val) {
DEBUGOUT("Error committing the PHY changes\n");
return ret_val;
}
msec_delay(1000);
out:
return ret_val;
}
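/*
 * Note on the register sequence above: Marvell PHYs bank their extended
 * registers behind a page-select register (E1000_M88E1543_PAGE_ADDR in
 * this driver), so each step is "select a page, write the registers in
 * that page", and the PHY is returned to page 0 before the changes are
 * committed. The M88E1543 init routine below follows the same pattern.
 */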
/**
* e1000_initialize_M88E1543_phy - Initialize M88E1543 PHY
* @hw: pointer to the HW structure
*
* Initialize Marvell 1543 to work correctly with Avoton.
**/
s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
DEBUGFUNC("e1000_initialize_M88E1543_phy");
/* Check if this is correct PHY. */
if (phy->id != M88E1543_E_PHY_ID)
goto out;
/* Switch to PHY page 0xFF. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x00FF);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0x214B);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2144);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0x0C28);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2146);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0xB233);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x214D);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0xDC0C);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2159);
if (ret_val)
goto out;
/* Switch to PHY page 0xFB. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x00FB);
if (ret_val)
goto out;
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_3, 0xC00D);
if (ret_val)
goto out;
/* Switch to PHY page 0x12. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x12);
if (ret_val)
goto out;
/* Change mode to SGMII-to-Copper */
ret_val = phy->ops.write_reg(hw, E1000_M88E1512_MODE, 0x8001);
if (ret_val)
goto out;
/* Switch to PHY page 1. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x1);
if (ret_val)
goto out;
/* Change mode to 1000BASE-X/SGMII and autoneg enable; reset */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_FIBER_CTRL, 0x9140);
if (ret_val)
goto out;
/* Return the PHY to page 0. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0);
if (ret_val)
goto out;
ret_val = phy->ops.commit(hw);
if (ret_val) {
DEBUGOUT("Error committing the PHY changes\n");
return ret_val;
}
msec_delay(1000);
out:
return ret_val;
}
/**
* e1000_set_eee_i350 - Enable/disable EEE support
* @hw: pointer to the HW structure
* @adv1G: boolean flag enabling 1G EEE advertisement
* @adv100M: boolean flag enabling 100M EEE advertisement
*
* Enable/disable EEE based on setting in dev_spec structure.
*
**/
s32 e1000_set_eee_i350(struct e1000_hw *hw, bool adv1G, bool adv100M)
{
u32 ipcnfg, eeer;
DEBUGFUNC("e1000_set_eee_i350");
if ((hw->mac.type < e1000_i350) ||
(hw->phy.media_type != e1000_media_type_copper))
goto out;
ipcnfg = E1000_READ_REG(hw, E1000_IPCNFG);
eeer = E1000_READ_REG(hw, E1000_EEER);
/* enable or disable per user setting */
if (!(hw->dev_spec._82575.eee_disable)) {
u32 eee_su = E1000_READ_REG(hw, E1000_EEE_SU);
if (adv100M)
ipcnfg |= E1000_IPCNFG_EEE_100M_AN;
else
ipcnfg &= ~E1000_IPCNFG_EEE_100M_AN;
if (adv1G)
ipcnfg |= E1000_IPCNFG_EEE_1G_AN;
else
ipcnfg &= ~E1000_IPCNFG_EEE_1G_AN;
eeer |= (E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
E1000_EEER_LPI_FC);
/* This bit should not be set in normal operation. */
if (eee_su & E1000_EEE_SU_LPI_CLK_STP)
DEBUGOUT("LPI Clock Stop Bit should not be set!\n");
} else {
ipcnfg &= ~(E1000_IPCNFG_EEE_1G_AN | E1000_IPCNFG_EEE_100M_AN);
eeer &= ~(E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
E1000_EEER_LPI_FC);
}
E1000_WRITE_REG(hw, E1000_IPCNFG, ipcnfg);
E1000_WRITE_REG(hw, E1000_EEER, eeer);
E1000_READ_REG(hw, E1000_IPCNFG);
E1000_READ_REG(hw, E1000_EEER);
out:
return E1000_SUCCESS;
}
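/*
 * The trailing reads of IPCNFG and EEER above act as a posted-write
 * flush: reading a register back forces the preceding MMIO writes to
 * complete before the function returns, the same idiom
 * E1000_WRITE_FLUSH() relies on elsewhere in this driver.
 */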
/**
* e1000_set_eee_i354 - Enable/disable EEE support
* @hw: pointer to the HW structure
* @adv1G: boolean flag enabling 1G EEE advertisement
* @adv100M: boolean flag enabling 100M EEE advertisement
*
* Enable/disable EEE legacy mode based on setting in dev_spec structure.
*
**/
s32 e1000_set_eee_i354(struct e1000_hw *hw, bool adv1G, bool adv100M)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
u16 phy_data;
DEBUGFUNC("e1000_set_eee_i354");
if ((hw->phy.media_type != e1000_media_type_copper) ||
((phy->id != M88E1543_E_PHY_ID) &&
(phy->id != M88E1512_E_PHY_ID)))
goto out;
if (!hw->dev_spec._82575.eee_disable) {
/* Switch to PHY page 18. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 18);
if (ret_val)
goto out;
ret_val = phy->ops.read_reg(hw, E1000_M88E1543_EEE_CTRL_1,
&phy_data);
if (ret_val)
goto out;
phy_data |= E1000_M88E1543_EEE_CTRL_1_MS;
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_EEE_CTRL_1,
phy_data);
if (ret_val)
goto out;
/* Return the PHY to page 0. */
ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0);
if (ret_val)
goto out;
/* Turn on EEE advertisement. */
ret_val = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
E1000_EEE_ADV_DEV_I354,
&phy_data);
if (ret_val)
goto out;
if (adv100M)
phy_data |= E1000_EEE_ADV_100_SUPPORTED;
else
phy_data &= ~E1000_EEE_ADV_100_SUPPORTED;
if (adv1G)
phy_data |= E1000_EEE_ADV_1000_SUPPORTED;
else
phy_data &= ~E1000_EEE_ADV_1000_SUPPORTED;
ret_val = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
E1000_EEE_ADV_DEV_I354,
phy_data);
} else {
/* Turn off EEE advertisement. */
ret_val = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
E1000_EEE_ADV_DEV_I354,
&phy_data);
if (ret_val)
goto out;
phy_data &= ~(E1000_EEE_ADV_100_SUPPORTED |
E1000_EEE_ADV_1000_SUPPORTED);
ret_val = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
E1000_EEE_ADV_DEV_I354,
phy_data);
}
out:
return ret_val;
}
/**
* e1000_get_eee_status_i354 - Get EEE status
* @hw: pointer to the HW structure
* @status: EEE status
*
* Get EEE status by guessing based on whether Tx or Rx LPI indications have
* been received.
**/
s32 e1000_get_eee_status_i354(struct e1000_hw *hw, bool *status)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = E1000_SUCCESS;
u16 phy_data;
DEBUGFUNC("e1000_get_eee_status_i354");
/* Check if EEE is supported on this device. */
if ((hw->phy.media_type != e1000_media_type_copper) ||
((phy->id != M88E1543_E_PHY_ID) &&
(phy->id != M88E1512_E_PHY_ID)))
goto out;
ret_val = e1000_read_xmdio_reg(hw, E1000_PCS_STATUS_ADDR_I354,
E1000_PCS_STATUS_DEV_I354,
&phy_data);
if (ret_val)
goto out;
*status = phy_data & (E1000_PCS_STATUS_TX_LPI_RCVD |
E1000_PCS_STATUS_RX_LPI_RCVD) ? TRUE : FALSE;
out:
return ret_val;
}
/* Due to a hw errata, if the host tries to configure the VFTA register
* while performing queries from the BMC or DMA, then the VFTA in some
* cases won't be written.
*/
/**
* e1000_clear_vfta_i350 - Clear VLAN filter table
* @hw: pointer to the HW structure
*
* Clears the register array which contains the VLAN filter table by
* setting all the values to 0.
**/
void e1000_clear_vfta_i350(struct e1000_hw *hw)
{
u32 offset;
int i;
DEBUGFUNC("e1000_clear_vfta_350");
for (offset = 0; offset < E1000_VLAN_FILTER_TBL_SIZE; offset++) {
for (i = 0; i < 10; i++)
E1000_WRITE_REG_ARRAY(hw, E1000_VFTA, offset, 0);
E1000_WRITE_FLUSH(hw);
}
}
/**
* e1000_write_vfta_i350 - Write value to VLAN filter table
* @hw: pointer to the HW structure
* @offset: register offset in VLAN filter table
* @value: register value written to VLAN filter table
*
* Writes value at the given offset in the register array which stores
* the VLAN filter table.
**/
void e1000_write_vfta_i350(struct e1000_hw *hw, u32 offset, u32 value)
{
int i;
DEBUGFUNC("e1000_write_vfta_350");
for (i = 0; i < 10; i++)
E1000_WRITE_REG_ARRAY(hw, E1000_VFTA, offset, value);
E1000_WRITE_FLUSH(hw);
}
/**
* e1000_set_i2c_bb - Enable I2C bit-bang
* @hw: pointer to the HW structure
*
* Enable I2C bit-bang interface
*
**/
s32 e1000_set_i2c_bb(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u32 ctrl_ext, i2cparams;
DEBUGFUNC("e1000_set_i2c_bb");
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
ctrl_ext |= E1000_CTRL_I2C_ENA;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
E1000_WRITE_FLUSH(hw);
i2cparams = E1000_READ_REG(hw, E1000_I2CPARAMS);
i2cparams |= E1000_I2CBB_EN;
i2cparams |= E1000_I2C_DATA_OE_N;
i2cparams |= E1000_I2C_CLK_OE_N;
E1000_WRITE_REG(hw, E1000_I2CPARAMS, i2cparams);
E1000_WRITE_FLUSH(hw);
return ret_val;
}
/**
* e1000_read_i2c_byte_generic - Reads 8 bit word over I2C
* @hw: pointer to hardware structure
* @byte_offset: byte offset to read
* @dev_addr: device address
* @data: value read
*
* Performs byte read operation over I2C interface at
* a specified device address.
**/
s32 e1000_read_i2c_byte_generic(struct e1000_hw *hw, u8 byte_offset,
u8 dev_addr, u8 *data)
{
s32 status = E1000_SUCCESS;
u32 max_retry = 10;
u32 retry = 1;
u16 swfw_mask = 0;
bool nack = TRUE;
DEBUGFUNC("e1000_read_i2c_byte_generic");
swfw_mask = E1000_SWFW_PHY0_SM;
do {
if (hw->mac.ops.acquire_swfw_sync(hw, swfw_mask)
!= E1000_SUCCESS) {
status = E1000_ERR_SWFW_SYNC;
goto read_byte_out;
}
e1000_i2c_start(hw);
/* Device Address and write indication */
status = e1000_clock_out_i2c_byte(hw, dev_addr);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_get_i2c_ack(hw);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_clock_out_i2c_byte(hw, byte_offset);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_get_i2c_ack(hw);
if (status != E1000_SUCCESS)
goto fail;
e1000_i2c_start(hw);
/* Device Address and read indication */
status = e1000_clock_out_i2c_byte(hw, (dev_addr | 0x1));
if (status != E1000_SUCCESS)
goto fail;
status = e1000_get_i2c_ack(hw);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_clock_in_i2c_byte(hw, data);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_clock_out_i2c_bit(hw, nack);
if (status != E1000_SUCCESS)
goto fail;
e1000_i2c_stop(hw);
break;
fail:
hw->mac.ops.release_swfw_sync(hw, swfw_mask);
msec_delay(100);
e1000_i2c_bus_clear(hw);
retry++;
if (retry < max_retry)
DEBUGOUT("I2C byte read error - Retrying.\n");
else
DEBUGOUT("I2C byte read error.\n");
} while (retry < max_retry);
hw->mac.ops.release_swfw_sync(hw, swfw_mask);
read_byte_out:
return status;
}
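/*
 * The combined transaction implemented above, in bus order (a sketch of
 * the standard I2C random-read sequence, not an extra code path):
 *
 *	START, addr+W, ACK, offset, ACK, START, addr+R, ACK, data, NACK, STOP
 *
 * The repeated START keeps ownership of the bus between programming the
 * byte offset and reading it back; the final NACK tells the device that
 * no further bytes will be read before the STOP.
 */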
/**
* e1000_write_i2c_byte_generic - Writes 8 bit word over I2C
* @hw: pointer to hardware structure
* @byte_offset: byte offset to write
* @dev_addr: device address
* @data: value to write
*
* Performs byte write operation over I2C interface at
* a specified device address.
**/
s32 e1000_write_i2c_byte_generic(struct e1000_hw *hw, u8 byte_offset,
u8 dev_addr, u8 data)
{
s32 status = E1000_SUCCESS;
u32 max_retry = 1;
u32 retry = 0;
u16 swfw_mask = 0;
DEBUGFUNC("e1000_write_i2c_byte_generic");
swfw_mask = E1000_SWFW_PHY0_SM;
if (hw->mac.ops.acquire_swfw_sync(hw, swfw_mask) != E1000_SUCCESS) {
status = E1000_ERR_SWFW_SYNC;
goto write_byte_out;
}
do {
e1000_i2c_start(hw);
status = e1000_clock_out_i2c_byte(hw, dev_addr);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_get_i2c_ack(hw);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_clock_out_i2c_byte(hw, byte_offset);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_get_i2c_ack(hw);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_clock_out_i2c_byte(hw, data);
if (status != E1000_SUCCESS)
goto fail;
status = e1000_get_i2c_ack(hw);
if (status != E1000_SUCCESS)
goto fail;
e1000_i2c_stop(hw);
break;
fail:
e1000_i2c_bus_clear(hw);
retry++;
if (retry < max_retry)
DEBUGOUT("I2C byte write error - Retrying.\n");
else
DEBUGOUT("I2C byte write error.\n");
} while (retry < max_retry);
hw->mac.ops.release_swfw_sync(hw, swfw_mask);
write_byte_out:
return status;
}
/**
* e1000_i2c_start - Sets I2C start condition
* @hw: pointer to hardware structure
*
* Sets I2C start condition (High -> Low on SDA while SCL is High)
**/
static void e1000_i2c_start(struct e1000_hw *hw)
{
u32 i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
DEBUGFUNC("e1000_i2c_start");
/* Start condition must begin with data and clock high */
e1000_set_i2c_data(hw, &i2cctl, 1);
e1000_raise_i2c_clk(hw, &i2cctl);
/* Setup time for start condition (4.7us) */
usec_delay(E1000_I2C_T_SU_STA);
e1000_set_i2c_data(hw, &i2cctl, 0);
/* Hold time for start condition (4us) */
usec_delay(E1000_I2C_T_HD_STA);
e1000_lower_i2c_clk(hw, &i2cctl);
/* Minimum low period of clock is 4.7 us */
usec_delay(E1000_I2C_T_LOW);
}
/**
* e1000_i2c_stop - Sets I2C stop condition
* @hw: pointer to hardware structure
*
* Sets I2C stop condition (Low -> High on SDA while SCL is High)
**/
static void e1000_i2c_stop(struct e1000_hw *hw)
{
u32 i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
DEBUGFUNC("e1000_i2c_stop");
/* Stop condition must begin with data low and clock high */
e1000_set_i2c_data(hw, &i2cctl, 0);
e1000_raise_i2c_clk(hw, &i2cctl);
/* Setup time for stop condition (4us) */
usec_delay(E1000_I2C_T_SU_STO);
e1000_set_i2c_data(hw, &i2cctl, 1);
/* bus free time between stop and start (4.7us)*/
usec_delay(E1000_I2C_T_BUF);
}
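/*
 * SDA/SCL behavior produced by the two helpers above (sketch):
 *
 *	START: SDA falls while SCL is high
 *	STOP:  SDA rises while SCL is high
 *
 * Normal data transitions only happen while SCL is low, which is what
 * makes these two edges unambiguous framing markers on the bus.
 */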
/**
* e1000_clock_in_i2c_byte - Clocks in one byte via I2C
* @hw: pointer to hardware structure
* @data: data byte to clock in
*
* Clocks in one byte data via I2C data/clock
**/
static s32 e1000_clock_in_i2c_byte(struct e1000_hw *hw, u8 *data)
{
s32 i;
bool bit = 0;
DEBUGFUNC("e1000_clock_in_i2c_byte");
*data = 0;
for (i = 7; i >= 0; i--) {
e1000_clock_in_i2c_bit(hw, &bit);
*data |= bit << i;
}
return E1000_SUCCESS;
}
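/*
 * Bits arrive most-significant-bit first, so the loop above shifts each
 * sampled bit into place from bit 7 down to bit 0; for example, the bit
 * sequence 1,0,1,0,0,0,0,1 assembles to 0xA1.
 */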
/**
* e1000_clock_out_i2c_byte - Clocks out one byte via I2C
* @hw: pointer to hardware structure
* @data: data byte clocked out
*
* Clocks out one byte data via I2C data/clock
**/
static s32 e1000_clock_out_i2c_byte(struct e1000_hw *hw, u8 data)
{
s32 status = E1000_SUCCESS;
s32 i;
u32 i2cctl;
bool bit = 0;
DEBUGFUNC("e1000_clock_out_i2c_byte");
for (i = 7; i >= 0; i--) {
bit = (data >> i) & 0x1;
status = e1000_clock_out_i2c_bit(hw, bit);
if (status != E1000_SUCCESS)
break;
}
/* Release SDA line (set high) */
i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
i2cctl |= E1000_I2C_DATA_OE_N;
E1000_WRITE_REG(hw, E1000_I2CPARAMS, i2cctl);
E1000_WRITE_FLUSH(hw);
return status;
}
/**
* e1000_get_i2c_ack - Polls for I2C ACK
* @hw: pointer to hardware structure
*
* Polls for and checks the I2C ACK bit
**/
static s32 e1000_get_i2c_ack(struct e1000_hw *hw)
{
s32 status = E1000_SUCCESS;
u32 i = 0;
u32 i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
u32 timeout = 10;
bool ack = TRUE;
DEBUGFUNC("e1000_get_i2c_ack");
e1000_raise_i2c_clk(hw, &i2cctl);
/* Minimum high period of clock is 4us */
usec_delay(E1000_I2C_T_HIGH);
/* Wait until SCL returns high */
for (i = 0; i < timeout; i++) {
usec_delay(1);
i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
if (i2cctl & E1000_I2C_CLK_IN)
break;
}
if (!(i2cctl & E1000_I2C_CLK_IN))
return E1000_ERR_I2C;
ack = e1000_get_i2c_data(&i2cctl);
if (ack) {
DEBUGOUT("I2C ack was not received.\n");
status = E1000_ERR_I2C;
}
e1000_lower_i2c_clk(hw, &i2cctl);
/* Minimum low period of clock is 4.7 us */
usec_delay(E1000_I2C_T_LOW);
return status;
}
/**
* e1000_clock_in_i2c_bit - Clocks in one bit via I2C data/clock
* @hw: pointer to hardware structure
* @data: read data value
*
* Clocks in one bit via I2C data/clock
**/
static s32 e1000_clock_in_i2c_bit(struct e1000_hw *hw, bool *data)
{
u32 i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
DEBUGFUNC("e1000_clock_in_i2c_bit");
e1000_raise_i2c_clk(hw, &i2cctl);
/* Minimum high period of clock is 4us */
usec_delay(E1000_I2C_T_HIGH);
i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
*data = e1000_get_i2c_data(&i2cctl);
e1000_lower_i2c_clk(hw, &i2cctl);
/* Minimum low period of clock is 4.7 us */
usec_delay(E1000_I2C_T_LOW);
return E1000_SUCCESS;
}
/**
* e1000_clock_out_i2c_bit - Clocks out one bit via I2C data/clock
* @hw: pointer to hardware structure
* @data: data value to write
*
* Clocks out one bit via I2C data/clock
**/
static s32 e1000_clock_out_i2c_bit(struct e1000_hw *hw, bool data)
{
s32 status;
u32 i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
DEBUGFUNC("e1000_clock_out_i2c_bit");
status = e1000_set_i2c_data(hw, &i2cctl, data);
if (status == E1000_SUCCESS) {
e1000_raise_i2c_clk(hw, &i2cctl);
/* Minimum high period of clock is 4us */
usec_delay(E1000_I2C_T_HIGH);
e1000_lower_i2c_clk(hw, &i2cctl);
/* Minimum low period of clock is 4.7 us.
* This also takes care of the data hold time.
*/
usec_delay(E1000_I2C_T_LOW);
} else {
status = E1000_ERR_I2C;
DEBUGOUT1("I2C data was not set to %X\n", data);
}
return status;
}
/**
* e1000_raise_i2c_clk - Raises the I2C SCL clock
* @hw: pointer to hardware structure
* @i2cctl: Current value of I2CCTL register
*
* Raises the I2C clock line '0'->'1'
**/
static void e1000_raise_i2c_clk(struct e1000_hw *hw, u32 *i2cctl)
{
DEBUGFUNC("e1000_raise_i2c_clk");
*i2cctl |= E1000_I2C_CLK_OUT;
*i2cctl &= ~E1000_I2C_CLK_OE_N;
E1000_WRITE_REG(hw, E1000_I2CPARAMS, *i2cctl);
E1000_WRITE_FLUSH(hw);
/* SCL rise time (1000ns) */
usec_delay(E1000_I2C_T_RISE);
}
/**
* e1000_lower_i2c_clk - Lowers the I2C SCL clock
* @hw: pointer to hardware structure
* @i2cctl: Current value of I2CCTL register
*
* Lowers the I2C clock line '1'->'0'
**/
static void e1000_lower_i2c_clk(struct e1000_hw *hw, u32 *i2cctl)
{
DEBUGFUNC("e1000_lower_i2c_clk");
*i2cctl &= ~E1000_I2C_CLK_OUT;
*i2cctl &= ~E1000_I2C_CLK_OE_N;
E1000_WRITE_REG(hw, E1000_I2CPARAMS, *i2cctl);
E1000_WRITE_FLUSH(hw);
/* SCL fall time (300ns) */
usec_delay(E1000_I2C_T_FALL);
}
/**
* e1000_set_i2c_data - Sets the I2C data bit
* @hw: pointer to hardware structure
* @i2cctl: Current value of I2CCTL register
* @data: I2C data value (0 or 1) to set
*
* Sets the I2C data bit
**/
static s32 e1000_set_i2c_data(struct e1000_hw *hw, u32 *i2cctl, bool data)
{
s32 status = E1000_SUCCESS;
DEBUGFUNC("e1000_set_i2c_data");
if (data)
*i2cctl |= E1000_I2C_DATA_OUT;
else
*i2cctl &= ~E1000_I2C_DATA_OUT;
*i2cctl &= ~E1000_I2C_DATA_OE_N;
*i2cctl |= E1000_I2C_CLK_OE_N;
E1000_WRITE_REG(hw, E1000_I2CPARAMS, *i2cctl);
E1000_WRITE_FLUSH(hw);
/* Data rise/fall (1000ns/300ns) and set-up time (250ns) */
usec_delay(E1000_I2C_T_RISE + E1000_I2C_T_FALL + E1000_I2C_T_SU_DATA);
*i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
if (data != e1000_get_i2c_data(i2cctl)) {
status = E1000_ERR_I2C;
DEBUGOUT1("Error - I2C data was not set to %X.\n", data);
}
return status;
}
/**
* e1000_get_i2c_data - Reads the I2C SDA data bit
* @i2cctl: Current value of I2CCTL register
*
* Returns the I2C data bit value
**/
static bool e1000_get_i2c_data(u32 *i2cctl)
{
bool data;
DEBUGFUNC("e1000_get_i2c_data");
if (*i2cctl & E1000_I2C_DATA_IN)
data = 1;
else
data = 0;
return data;
}
/**
* e1000_i2c_bus_clear - Clears the I2C bus
* @hw: pointer to hardware structure
*
* Clears the I2C bus by sending nine clock pulses.
* Used when data line is stuck low.
**/
void e1000_i2c_bus_clear(struct e1000_hw *hw)
{
u32 i2cctl = E1000_READ_REG(hw, E1000_I2CPARAMS);
u32 i;
DEBUGFUNC("e1000_i2c_bus_clear");
e1000_i2c_start(hw);
e1000_set_i2c_data(hw, &i2cctl, 1);
for (i = 0; i < 9; i++) {
e1000_raise_i2c_clk(hw, &i2cctl);
/* Min high period of clock is 4us */
usec_delay(E1000_I2C_T_HIGH);
e1000_lower_i2c_clk(hw, &i2cctl);
/* Min low period of clock is 4.7us*/
usec_delay(E1000_I2C_T_LOW);
}
e1000_i2c_start(hw);
/* Put the i2c bus back to default state */
e1000_i2c_stop(hw);
}
Index: head/sys/dev/e1000/e1000_ich8lan.c
===================================================================
--- head/sys/dev/e1000/e1000_ich8lan.c (revision 300049)
+++ head/sys/dev/e1000/e1000_ich8lan.c (revision 300050)
@@ -1,6101 +1,6101 @@
/******************************************************************************
Copyright (c) 2001-2015, Intel Corporation
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the Intel Corporation nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
/*$FreeBSD$*/
/* 82562G 10/100 Network Connection
* 82562G-2 10/100 Network Connection
* 82562GT 10/100 Network Connection
* 82562GT-2 10/100 Network Connection
* 82562V 10/100 Network Connection
* 82562V-2 10/100 Network Connection
* 82566DC-2 Gigabit Network Connection
* 82566DC Gigabit Network Connection
* 82566DM-2 Gigabit Network Connection
* 82566DM Gigabit Network Connection
* 82566MC Gigabit Network Connection
* 82566MM Gigabit Network Connection
* 82567LM Gigabit Network Connection
* 82567LF Gigabit Network Connection
* 82567V Gigabit Network Connection
* 82567LM-2 Gigabit Network Connection
* 82567LF-2 Gigabit Network Connection
* 82567V-2 Gigabit Network Connection
* 82567LF-3 Gigabit Network Connection
* 82567LM-3 Gigabit Network Connection
* 82567LM-4 Gigabit Network Connection
* 82577LM Gigabit Network Connection
* 82577LC Gigabit Network Connection
* 82578DM Gigabit Network Connection
* 82578DC Gigabit Network Connection
* 82579LM Gigabit Network Connection
* 82579V Gigabit Network Connection
* Ethernet Connection I217-LM
* Ethernet Connection I217-V
* Ethernet Connection I218-V
* Ethernet Connection I218-LM
* Ethernet Connection (2) I218-LM
* Ethernet Connection (2) I218-V
* Ethernet Connection (3) I218-LM
* Ethernet Connection (3) I218-V
*/
#include "e1000_api.h"
static s32 e1000_acquire_swflag_ich8lan(struct e1000_hw *hw);
static void e1000_release_swflag_ich8lan(struct e1000_hw *hw);
static s32 e1000_acquire_nvm_ich8lan(struct e1000_hw *hw);
static void e1000_release_nvm_ich8lan(struct e1000_hw *hw);
static bool e1000_check_mng_mode_ich8lan(struct e1000_hw *hw);
static bool e1000_check_mng_mode_pchlan(struct e1000_hw *hw);
static int e1000_rar_set_pch2lan(struct e1000_hw *hw, u8 *addr, u32 index);
static int e1000_rar_set_pch_lpt(struct e1000_hw *hw, u8 *addr, u32 index);
static s32 e1000_sw_lcd_config_ich8lan(struct e1000_hw *hw);
static void e1000_update_mc_addr_list_pch2lan(struct e1000_hw *hw,
u8 *mc_addr_list,
u32 mc_addr_count);
static s32 e1000_check_reset_block_ich8lan(struct e1000_hw *hw);
static s32 e1000_phy_hw_reset_ich8lan(struct e1000_hw *hw);
static s32 e1000_set_lplu_state_pchlan(struct e1000_hw *hw, bool active);
static s32 e1000_set_d0_lplu_state_ich8lan(struct e1000_hw *hw,
bool active);
static s32 e1000_set_d3_lplu_state_ich8lan(struct e1000_hw *hw,
bool active);
static s32 e1000_read_nvm_ich8lan(struct e1000_hw *hw, u16 offset,
u16 words, u16 *data);
static s32 e1000_read_nvm_spt(struct e1000_hw *hw, u16 offset, u16 words,
u16 *data);
static s32 e1000_write_nvm_ich8lan(struct e1000_hw *hw, u16 offset,
u16 words, u16 *data);
static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw);
static s32 e1000_update_nvm_checksum_ich8lan(struct e1000_hw *hw);
static s32 e1000_update_nvm_checksum_spt(struct e1000_hw *hw);
static s32 e1000_valid_led_default_ich8lan(struct e1000_hw *hw,
u16 *data);
static s32 e1000_id_led_init_pchlan(struct e1000_hw *hw);
static s32 e1000_get_bus_info_ich8lan(struct e1000_hw *hw);
static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw);
static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw);
static s32 e1000_setup_link_ich8lan(struct e1000_hw *hw);
static s32 e1000_setup_copper_link_ich8lan(struct e1000_hw *hw);
static s32 e1000_setup_copper_link_pch_lpt(struct e1000_hw *hw);
static s32 e1000_get_link_up_info_ich8lan(struct e1000_hw *hw,
u16 *speed, u16 *duplex);
static s32 e1000_cleanup_led_ich8lan(struct e1000_hw *hw);
static s32 e1000_led_on_ich8lan(struct e1000_hw *hw);
static s32 e1000_led_off_ich8lan(struct e1000_hw *hw);
static s32 e1000_k1_gig_workaround_hv(struct e1000_hw *hw, bool link);
static s32 e1000_setup_led_pchlan(struct e1000_hw *hw);
static s32 e1000_cleanup_led_pchlan(struct e1000_hw *hw);
static s32 e1000_led_on_pchlan(struct e1000_hw *hw);
static s32 e1000_led_off_pchlan(struct e1000_hw *hw);
static void e1000_clear_hw_cntrs_ich8lan(struct e1000_hw *hw);
static s32 e1000_erase_flash_bank_ich8lan(struct e1000_hw *hw, u32 bank);
static void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw);
static s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw);
static s32 e1000_read_flash_byte_ich8lan(struct e1000_hw *hw,
u32 offset, u8 *data);
static s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset,
u8 size, u16 *data);
static s32 e1000_read_flash_data32_ich8lan(struct e1000_hw *hw, u32 offset,
u32 *data);
static s32 e1000_read_flash_dword_ich8lan(struct e1000_hw *hw,
u32 offset, u32 *data);
static s32 e1000_write_flash_data32_ich8lan(struct e1000_hw *hw,
u32 offset, u32 data);
static s32 e1000_retry_write_flash_dword_ich8lan(struct e1000_hw *hw,
u32 offset, u32 dword);
static s32 e1000_read_flash_word_ich8lan(struct e1000_hw *hw,
u32 offset, u16 *data);
static s32 e1000_retry_write_flash_byte_ich8lan(struct e1000_hw *hw,
u32 offset, u8 byte);
static s32 e1000_get_cfg_done_ich8lan(struct e1000_hw *hw);
static void e1000_power_down_phy_copper_ich8lan(struct e1000_hw *hw);
static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw);
static s32 e1000_set_mdio_slow_mode_hv(struct e1000_hw *hw);
static s32 e1000_k1_workaround_lv(struct e1000_hw *hw);
static void e1000_gate_hw_phy_config_ich8lan(struct e1000_hw *hw, bool gate);
static s32 e1000_set_obff_timer_pch_lpt(struct e1000_hw *hw, u32 itr);
/* ICH GbE Flash Hardware Sequencing Flash Status Register bit breakdown */
/* Offset 04h HSFSTS */
union ich8_hws_flash_status {
struct ich8_hsfsts {
u16 flcdone:1; /* bit 0 Flash Cycle Done */
u16 flcerr:1; /* bit 1 Flash Cycle Error */
u16 dael:1; /* bit 2 Direct Access error Log */
u16 berasesz:2; /* bit 4:3 Sector Erase Size */
u16 flcinprog:1; /* bit 5 flash cycle in Progress */
u16 reserved1:2; /* bit 7:6 Reserved */
u16 reserved2:6; /* bit 13:8 Reserved */
u16 fldesvalid:1; /* bit 14 Flash Descriptor Valid */
u16 flockdn:1; /* bit 15 Flash Config Lock-Down */
} hsf_status;
u16 regval;
};
/* ICH GbE Flash Hardware Sequencing Flash control Register bit breakdown */
/* Offset 06h FLCTL */
union ich8_hws_flash_ctrl {
struct ich8_hsflctl {
u16 flcgo:1; /* 0 Flash Cycle Go */
u16 flcycle:2; /* 2:1 Flash Cycle */
u16 reserved:5; /* 7:3 Reserved */
u16 fldbcount:2; /* 9:8 Flash Data Byte Count */
u16 flockdn:6; /* 15:10 Reserved */
} hsf_ctrl;
u16 regval;
};
/* ICH Flash Region Access Permissions */
union ich8_hws_flash_regacc {
struct ich8_flracc {
u32 grra:8; /* 0:7 GbE region Read Access */
u32 grwa:8; /* 8:15 GbE region Write Access */
u32 gmrag:8; /* 23:16 GbE Master Read Access Grant */
u32 gmwag:8; /* 31:24 GbE Master Write Access Grant */
} hsf_flregacc;
u16 regval;
};
/**
* e1000_phy_is_accessible_pchlan - Check if able to access PHY registers
* @hw: pointer to the HW structure
*
* Test access to the PHY registers by reading the PHY ID registers. If
* the PHY ID is already known (e.g. resume path), compare it with the
* known ID; otherwise assume the read PHY ID is correct if it is valid.
*
* Assumes the sw/fw/hw semaphore is already acquired.
**/
static bool e1000_phy_is_accessible_pchlan(struct e1000_hw *hw)
{
u16 phy_reg = 0;
u32 phy_id = 0;
s32 ret_val = 0;
u16 retry_count;
u32 mac_reg = 0;
for (retry_count = 0; retry_count < 2; retry_count++) {
ret_val = hw->phy.ops.read_reg_locked(hw, PHY_ID1, &phy_reg);
if (ret_val || (phy_reg == 0xFFFF))
continue;
phy_id = (u32)(phy_reg << 16);
ret_val = hw->phy.ops.read_reg_locked(hw, PHY_ID2, &phy_reg);
if (ret_val || (phy_reg == 0xFFFF)) {
phy_id = 0;
continue;
}
phy_id |= (u32)(phy_reg & PHY_REVISION_MASK);
break;
}
if (hw->phy.id) {
if (hw->phy.id == phy_id)
goto out;
} else if (phy_id) {
hw->phy.id = phy_id;
hw->phy.revision = (u32)(phy_reg & ~PHY_REVISION_MASK);
goto out;
}
/* In case the PHY needs to be in mdio slow mode,
* set slow mode and try to get the PHY id again.
*/
if (hw->mac.type < e1000_pch_lpt) {
hw->phy.ops.release(hw);
ret_val = e1000_set_mdio_slow_mode_hv(hw);
if (!ret_val)
ret_val = e1000_get_phy_id(hw);
hw->phy.ops.acquire(hw);
}
if (ret_val)
return FALSE;
out:
if ((hw->mac.type == e1000_pch_lpt) ||
(hw->mac.type == e1000_pch_spt)) {
/* Only unforce SMBus if ME is not active */
if (!(E1000_READ_REG(hw, E1000_FWSM) &
E1000_ICH_FWSM_FW_VALID)) {
/* Unforce SMBus mode in PHY */
hw->phy.ops.read_reg_locked(hw, CV_SMB_CTRL, &phy_reg);
phy_reg &= ~CV_SMB_CTRL_FORCE_SMBUS;
hw->phy.ops.write_reg_locked(hw, CV_SMB_CTRL, phy_reg);
/* Unforce SMBus mode in MAC */
mac_reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
mac_reg &= ~E1000_CTRL_EXT_FORCE_SMBUS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg);
}
}
return TRUE;
}
/**
* e1000_toggle_lanphypc_pch_lpt - toggle the LANPHYPC pin value
* @hw: pointer to the HW structure
*
* Toggling the LANPHYPC pin value fully power-cycles the PHY and is
* used to reset the PHY to a quiescent state when necessary.
**/
static void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw)
{
u32 mac_reg;
DEBUGFUNC("e1000_toggle_lanphypc_pch_lpt");
/* Set Phy Config Counter to 50msec */
mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM3);
mac_reg &= ~E1000_FEXTNVM3_PHY_CFG_COUNTER_MASK;
mac_reg |= E1000_FEXTNVM3_PHY_CFG_COUNTER_50MSEC;
E1000_WRITE_REG(hw, E1000_FEXTNVM3, mac_reg);
/* Toggle LANPHYPC Value bit */
mac_reg = E1000_READ_REG(hw, E1000_CTRL);
mac_reg |= E1000_CTRL_LANPHYPC_OVERRIDE;
mac_reg &= ~E1000_CTRL_LANPHYPC_VALUE;
E1000_WRITE_REG(hw, E1000_CTRL, mac_reg);
E1000_WRITE_FLUSH(hw);
usec_delay(10);
mac_reg &= ~E1000_CTRL_LANPHYPC_OVERRIDE;
E1000_WRITE_REG(hw, E1000_CTRL, mac_reg);
E1000_WRITE_FLUSH(hw);
if (hw->mac.type < e1000_pch_lpt) {
msec_delay(50);
} else {
u16 count = 20;
do {
msec_delay(5);
} while (!(E1000_READ_REG(hw, E1000_CTRL_EXT) &
E1000_CTRL_EXT_LPCD) && count--);
msec_delay(30);
}
}
/**
* e1000_init_phy_workarounds_pchlan - PHY initialization workarounds
* @hw: pointer to the HW structure
*
* Workarounds/flow necessary for PHY initialization during driver load
* and resume paths.
**/
static s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw)
{
u32 mac_reg, fwsm = E1000_READ_REG(hw, E1000_FWSM);
s32 ret_val;
DEBUGFUNC("e1000_init_phy_workarounds_pchlan");
/* Gate automatic PHY configuration by hardware on managed and
* non-managed 82579 and newer adapters.
*/
e1000_gate_hw_phy_config_ich8lan(hw, TRUE);
/* It is not possible to be certain of the current state of ULP
* so forcibly disable it.
*/
hw->dev_spec.ich8lan.ulp_state = e1000_ulp_state_unknown;
e1000_disable_ulp_lpt_lp(hw, TRUE);
ret_val = hw->phy.ops.acquire(hw);
if (ret_val) {
DEBUGOUT("Failed to initialize PHY flow\n");
goto out;
}
/* The MAC-PHY interconnect may be in SMBus mode. If the PHY is
* inaccessible and resetting the PHY is not blocked, toggle the
* LANPHYPC Value bit to force the interconnect to PCIe mode.
*/
switch (hw->mac.type) {
case e1000_pch_lpt:
case e1000_pch_spt:
if (e1000_phy_is_accessible_pchlan(hw))
break;
/* Before toggling LANPHYPC, see if PHY is accessible by
* forcing MAC to SMBus mode first.
*/
mac_reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg);
/* Wait 50 milliseconds for MAC to finish any retries
* that it might be trying to perform from previous
* attempts to acknowledge any phy read requests.
*/
msec_delay(50);
/* fall-through */
case e1000_pch2lan:
if (e1000_phy_is_accessible_pchlan(hw))
break;
/* fall-through */
case e1000_pchlan:
if ((hw->mac.type == e1000_pchlan) &&
(fwsm & E1000_ICH_FWSM_FW_VALID))
break;
if (hw->phy.ops.check_reset_block(hw)) {
DEBUGOUT("Required LANPHYPC toggle blocked by ME\n");
ret_val = -E1000_ERR_PHY;
break;
}
/* Toggle LANPHYPC Value bit */
e1000_toggle_lanphypc_pch_lpt(hw);
if (hw->mac.type >= e1000_pch_lpt) {
if (e1000_phy_is_accessible_pchlan(hw))
break;
/* Toggling LANPHYPC brings the PHY out of SMBus mode
* so ensure that the MAC is also out of SMBus mode
*/
mac_reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
mac_reg &= ~E1000_CTRL_EXT_FORCE_SMBUS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg);
if (e1000_phy_is_accessible_pchlan(hw))
break;
ret_val = -E1000_ERR_PHY;
}
break;
default:
break;
}
hw->phy.ops.release(hw);
if (!ret_val) {
/* Check to see if able to reset PHY. Print error if not */
if (hw->phy.ops.check_reset_block(hw)) {
ERROR_REPORT("Reset blocked by ME\n");
goto out;
}
/* Reset the PHY before any access to it. Doing so, ensures
* that the PHY is in a known good state before we read/write
* PHY registers. The generic reset is sufficient here,
* because we haven't determined the PHY type yet.
*/
ret_val = e1000_phy_hw_reset_generic(hw);
if (ret_val)
goto out;
/* On a successful reset, possibly need to wait for the PHY
* to quiesce to an accessible state before returning control
* to the calling function. If the PHY does not quiesce, then
* return E1000_BLK_PHY_RESET, as this is the condition that
* the PHY is in.
*/
ret_val = hw->phy.ops.check_reset_block(hw);
if (ret_val)
ERROR_REPORT("ME blocked access to PHY after reset\n");
}
out:
/* Ungate automatic PHY configuration on non-managed 82579 */
if ((hw->mac.type == e1000_pch2lan) &&
!(fwsm & E1000_ICH_FWSM_FW_VALID)) {
msec_delay(10);
e1000_gate_hw_phy_config_ich8lan(hw, FALSE);
}
return ret_val;
}
/**
* e1000_init_phy_params_pchlan - Initialize PHY function pointers
* @hw: pointer to the HW structure
*
* Initialize family-specific PHY parameters and function pointers.
**/
static s32 e1000_init_phy_params_pchlan(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val;
DEBUGFUNC("e1000_init_phy_params_pchlan");
phy->addr = 1;
phy->reset_delay_us = 100;
phy->ops.acquire = e1000_acquire_swflag_ich8lan;
phy->ops.check_reset_block = e1000_check_reset_block_ich8lan;
phy->ops.get_cfg_done = e1000_get_cfg_done_ich8lan;
phy->ops.set_page = e1000_set_page_igp;
phy->ops.read_reg = e1000_read_phy_reg_hv;
phy->ops.read_reg_locked = e1000_read_phy_reg_hv_locked;
phy->ops.read_reg_page = e1000_read_phy_reg_page_hv;
phy->ops.release = e1000_release_swflag_ich8lan;
phy->ops.reset = e1000_phy_hw_reset_ich8lan;
phy->ops.set_d0_lplu_state = e1000_set_lplu_state_pchlan;
phy->ops.set_d3_lplu_state = e1000_set_lplu_state_pchlan;
phy->ops.write_reg = e1000_write_phy_reg_hv;
phy->ops.write_reg_locked = e1000_write_phy_reg_hv_locked;
phy->ops.write_reg_page = e1000_write_phy_reg_page_hv;
phy->ops.power_up = e1000_power_up_phy_copper;
phy->ops.power_down = e1000_power_down_phy_copper_ich8lan;
phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;
phy->id = e1000_phy_unknown;
ret_val = e1000_init_phy_workarounds_pchlan(hw);
if (ret_val)
return ret_val;
if (phy->id == e1000_phy_unknown)
switch (hw->mac.type) {
default:
ret_val = e1000_get_phy_id(hw);
if (ret_val)
return ret_val;
if ((phy->id != 0) && (phy->id != PHY_REVISION_MASK))
break;
/* fall-through */
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
/* In case the PHY needs to be in mdio slow mode,
* set slow mode and try to get the PHY id again.
*/
ret_val = e1000_set_mdio_slow_mode_hv(hw);
if (ret_val)
return ret_val;
ret_val = e1000_get_phy_id(hw);
if (ret_val)
return ret_val;
break;
}
phy->type = e1000_get_phy_type_from_id(phy->id);
switch (phy->type) {
case e1000_phy_82577:
case e1000_phy_82579:
case e1000_phy_i217:
phy->ops.check_polarity = e1000_check_polarity_82577;
phy->ops.force_speed_duplex =
e1000_phy_force_speed_duplex_82577;
phy->ops.get_cable_length = e1000_get_cable_length_82577;
phy->ops.get_info = e1000_get_phy_info_82577;
phy->ops.commit = e1000_phy_sw_reset_generic;
break;
case e1000_phy_82578:
phy->ops.check_polarity = e1000_check_polarity_m88;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_m88;
phy->ops.get_cable_length = e1000_get_cable_length_m88;
phy->ops.get_info = e1000_get_phy_info_m88;
break;
default:
ret_val = -E1000_ERR_PHY;
break;
}
return ret_val;
}
/**
* e1000_init_phy_params_ich8lan - Initialize PHY function pointers
* @hw: pointer to the HW structure
*
* Initialize family-specific PHY parameters and function pointers.
**/
static s32 e1000_init_phy_params_ich8lan(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val;
u16 i = 0;
DEBUGFUNC("e1000_init_phy_params_ich8lan");
phy->addr = 1;
phy->reset_delay_us = 100;
phy->ops.acquire = e1000_acquire_swflag_ich8lan;
phy->ops.check_reset_block = e1000_check_reset_block_ich8lan;
phy->ops.get_cable_length = e1000_get_cable_length_igp_2;
phy->ops.get_cfg_done = e1000_get_cfg_done_ich8lan;
phy->ops.read_reg = e1000_read_phy_reg_igp;
phy->ops.release = e1000_release_swflag_ich8lan;
phy->ops.reset = e1000_phy_hw_reset_ich8lan;
phy->ops.set_d0_lplu_state = e1000_set_d0_lplu_state_ich8lan;
phy->ops.set_d3_lplu_state = e1000_set_d3_lplu_state_ich8lan;
phy->ops.write_reg = e1000_write_phy_reg_igp;
phy->ops.power_up = e1000_power_up_phy_copper;
phy->ops.power_down = e1000_power_down_phy_copper_ich8lan;
/* We may need to do this twice - once for IGP and, if that fails,
* we'll set the BM function pointers and try again
*/
ret_val = e1000_determine_phy_address(hw);
if (ret_val) {
phy->ops.write_reg = e1000_write_phy_reg_bm;
phy->ops.read_reg = e1000_read_phy_reg_bm;
ret_val = e1000_determine_phy_address(hw);
if (ret_val) {
DEBUGOUT("Cannot determine PHY addr. Erroring out\n");
return ret_val;
}
}
phy->id = 0;
while ((e1000_phy_unknown == e1000_get_phy_type_from_id(phy->id)) &&
(i++ < 100)) {
msec_delay(1);
ret_val = e1000_get_phy_id(hw);
if (ret_val)
return ret_val;
}
/* Verify phy id */
switch (phy->id) {
case IGP03E1000_E_PHY_ID:
phy->type = e1000_phy_igp_3;
phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;
phy->ops.read_reg_locked = e1000_read_phy_reg_igp_locked;
phy->ops.write_reg_locked = e1000_write_phy_reg_igp_locked;
phy->ops.get_info = e1000_get_phy_info_igp;
phy->ops.check_polarity = e1000_check_polarity_igp;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_igp;
break;
case IFE_E_PHY_ID:
case IFE_PLUS_E_PHY_ID:
case IFE_C_E_PHY_ID:
phy->type = e1000_phy_ife;
phy->autoneg_mask = E1000_ALL_NOT_GIG;
phy->ops.get_info = e1000_get_phy_info_ife;
phy->ops.check_polarity = e1000_check_polarity_ife;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_ife;
break;
case BME1000_E_PHY_ID:
phy->type = e1000_phy_bm;
phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;
phy->ops.read_reg = e1000_read_phy_reg_bm;
phy->ops.write_reg = e1000_write_phy_reg_bm;
phy->ops.commit = e1000_phy_sw_reset_generic;
phy->ops.get_info = e1000_get_phy_info_m88;
phy->ops.check_polarity = e1000_check_polarity_m88;
phy->ops.force_speed_duplex = e1000_phy_force_speed_duplex_m88;
break;
default:
return -E1000_ERR_PHY;
}
return E1000_SUCCESS;
}
/**
* e1000_init_nvm_params_ich8lan - Initialize NVM function pointers
* @hw: pointer to the HW structure
*
* Initialize family-specific NVM parameters and function
* pointers.
**/
static s32 e1000_init_nvm_params_ich8lan(struct e1000_hw *hw)
{
struct e1000_nvm_info *nvm = &hw->nvm;
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 gfpreg, sector_base_addr, sector_end_addr;
u16 i;
u32 nvm_size;
DEBUGFUNC("e1000_init_nvm_params_ich8lan");
nvm->type = e1000_nvm_flash_sw;
if (hw->mac.type == e1000_pch_spt) {
/* in SPT, gfpreg doesn't exist. NVM size is taken from the
* STRAP register. This is because in SPT the GbE Flash region
* is no longer accessed through the flash registers. Instead,
* the mechanism has changed, and the Flash region access
* registers are now implemented in GbE memory space.
*/
nvm->flash_base_addr = 0;
nvm_size =
(((E1000_READ_REG(hw, E1000_STRAP) >> 1) & 0x1F) + 1)
* NVM_SIZE_MULTIPLIER;
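/* Worked example (assuming NVM_SIZE_MULTIPLIER is 4096 bytes): a
* STRAP size field of 0x0F yields (0x0F + 1) * 4096 = 64 KB total,
* i.e. two 32 KB banks of 16K words each after the adjustments below.
*/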
nvm->flash_bank_size = nvm_size / 2;
/* Adjust to word count */
nvm->flash_bank_size /= sizeof(u16);
/* Set the base address for flash register access */
hw->flash_address = hw->hw_addr + E1000_FLASH_BASE_ADDR;
} else {
/* Can't read flash registers if register set isn't mapped. */
if (!hw->flash_address) {
DEBUGOUT("ERROR: Flash registers not mapped\n");
return -E1000_ERR_CONFIG;
}
gfpreg = E1000_READ_FLASH_REG(hw, ICH_FLASH_GFPREG);
/* sector_X_addr is a "sector"-aligned address (4096 bytes).
* Add 1 to sector_end_addr since this sector is included in
* the overall size.
*/
sector_base_addr = gfpreg & FLASH_GFPREG_BASE_MASK;
sector_end_addr = ((gfpreg >> 16) & FLASH_GFPREG_BASE_MASK) + 1;
/* flash_base_addr is byte-aligned */
nvm->flash_base_addr = sector_base_addr
<< FLASH_SECTOR_ADDR_SHIFT;
/* find total size of the NVM, then cut in half since the total
* size represents two separate NVM banks.
*/
nvm->flash_bank_size = ((sector_end_addr - sector_base_addr)
<< FLASH_SECTOR_ADDR_SHIFT);
nvm->flash_bank_size /= 2;
/* Adjust to word count */
nvm->flash_bank_size /= sizeof(u16);
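/* Worked example (assumed register value): gfpreg = 0x00200001 gives
* sector_base_addr = 0x001 and sector_end_addr = 0x021, so the math
* above yields (0x20 << 12) / 2 / sizeof(u16) = 32K words per bank.
*/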
}
nvm->word_size = E1000_SHADOW_RAM_WORDS;
/* Clear shadow ram */
for (i = 0; i < nvm->word_size; i++) {
dev_spec->shadow_ram[i].modified = FALSE;
dev_spec->shadow_ram[i].value = 0xFFFF;
}
E1000_MUTEX_INIT(&dev_spec->nvm_mutex);
E1000_MUTEX_INIT(&dev_spec->swflag_mutex);
/* Function Pointers */
nvm->ops.acquire = e1000_acquire_nvm_ich8lan;
nvm->ops.release = e1000_release_nvm_ich8lan;
if (hw->mac.type == e1000_pch_spt) {
nvm->ops.read = e1000_read_nvm_spt;
nvm->ops.update = e1000_update_nvm_checksum_spt;
} else {
nvm->ops.read = e1000_read_nvm_ich8lan;
nvm->ops.update = e1000_update_nvm_checksum_ich8lan;
}
nvm->ops.valid_led_default = e1000_valid_led_default_ich8lan;
nvm->ops.validate = e1000_validate_nvm_checksum_ich8lan;
nvm->ops.write = e1000_write_nvm_ich8lan;
return E1000_SUCCESS;
}
/**
* e1000_init_mac_params_ich8lan - Initialize MAC function pointers
* @hw: pointer to the HW structure
*
* Initialize family-specific MAC parameters and function
* pointers.
**/
static s32 e1000_init_mac_params_ich8lan(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
DEBUGFUNC("e1000_init_mac_params_ich8lan");
/* Set media type function pointer */
hw->phy.media_type = e1000_media_type_copper;
/* Set mta register count */
mac->mta_reg_count = 32;
/* Set rar entry count */
mac->rar_entry_count = E1000_ICH_RAR_ENTRIES;
if (mac->type == e1000_ich8lan)
mac->rar_entry_count--;
/* Set if part includes ASF firmware */
mac->asf_firmware_present = TRUE;
/* FWSM register */
mac->has_fwsm = TRUE;
/* ARC subsystem not supported */
mac->arc_subsystem_valid = FALSE;
/* Adaptive IFS supported */
mac->adaptive_ifs = TRUE;
/* Function pointers */
/* bus type/speed/width */
mac->ops.get_bus_info = e1000_get_bus_info_ich8lan;
/* function id */
mac->ops.set_lan_id = e1000_set_lan_id_single_port;
/* reset */
mac->ops.reset_hw = e1000_reset_hw_ich8lan;
/* hw initialization */
mac->ops.init_hw = e1000_init_hw_ich8lan;
/* link setup */
mac->ops.setup_link = e1000_setup_link_ich8lan;
/* physical interface setup */
mac->ops.setup_physical_interface = e1000_setup_copper_link_ich8lan;
/* check for link */
mac->ops.check_for_link = e1000_check_for_copper_link_ich8lan;
/* link info */
mac->ops.get_link_up_info = e1000_get_link_up_info_ich8lan;
/* multicast address update */
mac->ops.update_mc_addr_list = e1000_update_mc_addr_list_generic;
/* clear hardware counters */
mac->ops.clear_hw_cntrs = e1000_clear_hw_cntrs_ich8lan;
/* LED and other operations */
switch (mac->type) {
case e1000_ich8lan:
case e1000_ich9lan:
case e1000_ich10lan:
/* check management mode */
mac->ops.check_mng_mode = e1000_check_mng_mode_ich8lan;
/* ID LED init */
mac->ops.id_led_init = e1000_id_led_init_generic;
/* blink LED */
mac->ops.blink_led = e1000_blink_led_generic;
/* setup LED */
mac->ops.setup_led = e1000_setup_led_generic;
/* cleanup LED */
mac->ops.cleanup_led = e1000_cleanup_led_ich8lan;
/* turn on/off LED */
mac->ops.led_on = e1000_led_on_ich8lan;
mac->ops.led_off = e1000_led_off_ich8lan;
break;
case e1000_pch2lan:
mac->rar_entry_count = E1000_PCH2_RAR_ENTRIES;
mac->ops.rar_set = e1000_rar_set_pch2lan;
/* fall-through */
case e1000_pch_lpt:
case e1000_pch_spt:
/* multicast address update for pch2 */
mac->ops.update_mc_addr_list =
e1000_update_mc_addr_list_pch2lan;
/* fall-through */
case e1000_pchlan:
/* check management mode */
mac->ops.check_mng_mode = e1000_check_mng_mode_pchlan;
/* ID LED init */
mac->ops.id_led_init = e1000_id_led_init_pchlan;
/* setup LED */
mac->ops.setup_led = e1000_setup_led_pchlan;
/* cleanup LED */
mac->ops.cleanup_led = e1000_cleanup_led_pchlan;
/* turn on/off LED */
mac->ops.led_on = e1000_led_on_pchlan;
mac->ops.led_off = e1000_led_off_pchlan;
break;
default:
break;
}
if ((mac->type == e1000_pch_lpt) ||
(mac->type == e1000_pch_spt)) {
mac->rar_entry_count = E1000_PCH_LPT_RAR_ENTRIES;
mac->ops.rar_set = e1000_rar_set_pch_lpt;
mac->ops.setup_physical_interface = e1000_setup_copper_link_pch_lpt;
mac->ops.set_obff_timer = e1000_set_obff_timer_pch_lpt;
}
/* Enable PCS Lock-loss workaround for ICH8 */
if (mac->type == e1000_ich8lan)
e1000_set_kmrn_lock_loss_workaround_ich8lan(hw, TRUE);
return E1000_SUCCESS;
}
/**
* __e1000_access_emi_reg_locked - Read/write EMI register
* @hw: pointer to the HW structure
* @addr: EMI address to program
* @data: pointer to value to read/write from/to the EMI address
* @read: boolean flag to indicate read or write
*
* This helper function assumes the SW/FW/HW Semaphore is already acquired.
**/
static s32 __e1000_access_emi_reg_locked(struct e1000_hw *hw, u16 address,
u16 *data, bool read)
{
s32 ret_val;
DEBUGFUNC("__e1000_access_emi_reg_locked");
ret_val = hw->phy.ops.write_reg_locked(hw, I82579_EMI_ADDR, address);
if (ret_val)
return ret_val;
if (read)
ret_val = hw->phy.ops.read_reg_locked(hw, I82579_EMI_DATA,
data);
else
ret_val = hw->phy.ops.write_reg_locked(hw, I82579_EMI_DATA,
*data);
return ret_val;
}
/**
* e1000_read_emi_reg_locked - Read Extended Management Interface register
* @hw: pointer to the HW structure
* @addr: EMI address to program
* @data: value to be read from the EMI address
*
* Assumes the SW/FW/HW Semaphore is already acquired.
**/
s32 e1000_read_emi_reg_locked(struct e1000_hw *hw, u16 addr, u16 *data)
{
DEBUGFUNC("e1000_read_emi_reg_locked");
return __e1000_access_emi_reg_locked(hw, addr, data, TRUE);
}
/**
* e1000_write_emi_reg_locked - Write Extended Management Interface register
* @hw: pointer to the HW structure
* @addr: EMI address to program
* @data: value to be written to the EMI address
*
* Assumes the SW/FW/HW Semaphore is already acquired.
**/
s32 e1000_write_emi_reg_locked(struct e1000_hw *hw, u16 addr, u16 data)
{
DEBUGFUNC("e1000_read_emi_reg_locked");
return __e1000_access_emi_reg_locked(hw, addr, &data, FALSE);
}
/**
* e1000_set_eee_pchlan - Enable/disable EEE support
* @hw: pointer to the HW structure
*
* Enable/disable EEE based on the setting in the dev_spec structure, the
* duplex of the link and the EEE capabilities of the link partner. The
* LPI Control register bits will remain set only if/when link is up.
*
* EEE LPI must not be asserted earlier than one second after link is up.
* On the 82579, EEE LPI should not be enabled until after that time;
* otherwise there can be link issues with some switches. Other devices can
* have EEE LPI enabled immediately upon link up since they have a hardware
* timer which prevents LPI from being asserted too early.
**/
s32 e1000_set_eee_pchlan(struct e1000_hw *hw)
{
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
s32 ret_val;
u16 lpa, pcs_status, adv, adv_addr, lpi_ctrl, data;
DEBUGFUNC("e1000_set_eee_pchlan");
switch (hw->phy.type) {
case e1000_phy_82579:
lpa = I82579_EEE_LP_ABILITY;
pcs_status = I82579_EEE_PCS_STATUS;
adv_addr = I82579_EEE_ADVERTISEMENT;
break;
case e1000_phy_i217:
lpa = I217_EEE_LP_ABILITY;
pcs_status = I217_EEE_PCS_STATUS;
adv_addr = I217_EEE_ADVERTISEMENT;
break;
default:
return E1000_SUCCESS;
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.read_reg_locked(hw, I82579_LPI_CTRL, &lpi_ctrl);
if (ret_val)
goto release;
/* Clear bits that enable EEE in various speeds */
lpi_ctrl &= ~I82579_LPI_CTRL_ENABLE_MASK;
/* Enable EEE if not disabled by user */
if (!dev_spec->eee_disable) {
/* Save off link partner's EEE ability */
ret_val = e1000_read_emi_reg_locked(hw, lpa,
&dev_spec->eee_lp_ability);
if (ret_val)
goto release;
/* Read EEE advertisement */
ret_val = e1000_read_emi_reg_locked(hw, adv_addr, &adv);
if (ret_val)
goto release;
/* Enable EEE only for speeds in which the link partner is
* EEE capable and for which we advertise EEE.
*/
if (adv & dev_spec->eee_lp_ability & I82579_EEE_1000_SUPPORTED)
lpi_ctrl |= I82579_LPI_CTRL_1000_ENABLE;
if (adv & dev_spec->eee_lp_ability & I82579_EEE_100_SUPPORTED) {
hw->phy.ops.read_reg_locked(hw, PHY_LP_ABILITY, &data);
if (data & NWAY_LPAR_100TX_FD_CAPS)
lpi_ctrl |= I82579_LPI_CTRL_100_ENABLE;
else
/* EEE is not supported in 100Half, so ignore
* partner's EEE in 100 ability if full-duplex
* is not advertised.
*/
dev_spec->eee_lp_ability &=
~I82579_EEE_100_SUPPORTED;
}
}
if (hw->phy.type == e1000_phy_82579) {
ret_val = e1000_read_emi_reg_locked(hw, I82579_LPI_PLL_SHUT,
&data);
if (ret_val)
goto release;
data &= ~I82579_LPI_100_PLL_SHUT;
ret_val = e1000_write_emi_reg_locked(hw, I82579_LPI_PLL_SHUT,
data);
}
/* R/Clr IEEE MMD 3.1 bits 11:10 - Tx/Rx LPI Received */
ret_val = e1000_read_emi_reg_locked(hw, pcs_status, &data);
if (ret_val)
goto release;
ret_val = hw->phy.ops.write_reg_locked(hw, I82579_LPI_CTRL, lpi_ctrl);
release:
hw->phy.ops.release(hw);
return ret_val;
}
/**
* e1000_k1_workaround_lpt_lp - K1 workaround on Lynxpoint-LP
* @hw: pointer to the HW structure
* @link: link up bool flag
*
* When K1 is enabled for 1Gbps, the MAC can miss 2 DMA completion
* indications, preventing further DMA write requests. Work around the
* issue by disabling the de-assertion of the clock request when in
* 1Gbps mode.
* Also, set appropriate Tx re-transmission timeouts for 10 and 100Half link
* speeds in order to avoid Tx hangs.
**/
static s32 e1000_k1_workaround_lpt_lp(struct e1000_hw *hw, bool link)
{
u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6);
u32 status = E1000_READ_REG(hw, E1000_STATUS);
s32 ret_val = E1000_SUCCESS;
u16 reg;
if (link && (status & E1000_STATUS_SPEED_1000)) {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
ret_val =
e1000_read_kmrn_reg_locked(hw, E1000_KMRNCTRLSTA_K1_CONFIG,
&reg);
if (ret_val)
goto release;
ret_val =
e1000_write_kmrn_reg_locked(hw,
E1000_KMRNCTRLSTA_K1_CONFIG,
reg &
~E1000_KMRNCTRLSTA_K1_ENABLE);
if (ret_val)
goto release;
usec_delay(10);
E1000_WRITE_REG(hw, E1000_FEXTNVM6,
fextnvm6 | E1000_FEXTNVM6_REQ_PLL_CLK);
ret_val =
e1000_write_kmrn_reg_locked(hw,
E1000_KMRNCTRLSTA_K1_CONFIG,
reg);
release:
hw->phy.ops.release(hw);
} else {
/* clear FEXTNVM6 bit 8 on link down or 10/100 */
fextnvm6 &= ~E1000_FEXTNVM6_REQ_PLL_CLK;
if ((hw->phy.revision > 5) || !link ||
((status & E1000_STATUS_SPEED_100) &&
(status & E1000_STATUS_FD)))
goto update_fextnvm6;
ret_val = hw->phy.ops.read_reg(hw, I217_INBAND_CTRL, &reg);
if (ret_val)
return ret_val;
/* Clear link status transmit timeout */
reg &= ~I217_INBAND_CTRL_LINK_STAT_TX_TIMEOUT_MASK;
if (status & E1000_STATUS_SPEED_100) {
/* Set inband Tx timeout to 5x10us for 100Half */
reg |= 5 << I217_INBAND_CTRL_LINK_STAT_TX_TIMEOUT_SHIFT;
/* Do not extend the K1 entry latency for 100Half */
fextnvm6 &= ~E1000_FEXTNVM6_ENABLE_K1_ENTRY_CONDITION;
} else {
/* Set inband Tx timeout to 50x10us for 10Full/Half */
reg |= 50 <<
I217_INBAND_CTRL_LINK_STAT_TX_TIMEOUT_SHIFT;
/* Extend the K1 entry latency for 10 Mbps */
fextnvm6 |= E1000_FEXTNVM6_ENABLE_K1_ENTRY_CONDITION;
}
ret_val = hw->phy.ops.write_reg(hw, I217_INBAND_CTRL, reg);
if (ret_val)
return ret_val;
update_fextnvm6:
E1000_WRITE_REG(hw, E1000_FEXTNVM6, fextnvm6);
}
return ret_val;
}
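/**
* e1000_ltr2ns - Convert a latency tolerance encoding to nanoseconds
* @ltr: encoded LTR value (10-bit value plus 3-bit scale)
*
* Decodes value * 2^(5 * scale), per the PCIe LTR encoding described in
* e1000_platform_pm_pch_lpt() below. Worked example (assumed input):
* ltr = 0x0864 decodes as value 100, scale 2, i.e. 100 * 1024 = 102400 ns.
**/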
static u64 e1000_ltr2ns(u16 ltr)
{
u32 value, scale;
/* Determine the latency in nsec based on the LTR value & scale */
value = ltr & E1000_LTRV_VALUE_MASK;
scale = (ltr & E1000_LTRV_SCALE_MASK) >> E1000_LTRV_SCALE_SHIFT;
return value * (1 << (scale * E1000_LTRV_SCALE_FACTOR));
}
/**
* e1000_platform_pm_pch_lpt - Set platform power management values
* @hw: pointer to the HW structure
* @link: bool indicating link status
*
* Set the Latency Tolerance Reporting (LTR) values for the "PCIe-like"
* GbE MAC in the Lynx Point PCH based on Rx buffer size and link speed
* when link is up (which must not exceed the maximum latency supported
* by the platform), otherwise specify there is no LTR requirement.
* Unlike TRUE-PCIe devices, which set the LTR maximum snoop/no-snoop
* latencies in the LTR Extended Capability Structure of the PCIe Extended
* Capability register set, on this device LTR is set by writing the
* equivalent snoop/no-snoop latencies in the LTRV register in the MAC and
* setting the SEND bit to send an Intel On-chip System Fabric sideband
* (IOSF-SB) message to the PMC.
*
* Use the LTR value to calculate the Optimized Buffer Flush/Fill (OBFF)
* high-water mark.
**/
static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
{
u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
u16 lat_enc = 0; /* latency encoded */
s32 obff_hwm = 0;
DEBUGFUNC("e1000_platform_pm_pch_lpt");
if (link) {
u16 speed, duplex, scale = 0;
u16 max_snoop, max_nosnoop;
u16 max_ltr_enc; /* max LTR latency encoded */
s64 lat_ns;
s64 value;
u32 rxa;
if (!hw->mac.max_frame_size) {
DEBUGOUT("max_frame_size not set.\n");
return -E1000_ERR_CONFIG;
}
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
if (!speed) {
DEBUGOUT("Speed not set.\n");
return -E1000_ERR_CONFIG;
}
/* Rx Packet Buffer Allocation size (KB) */
rxa = E1000_READ_REG(hw, E1000_PBA) & E1000_PBA_RXA_MASK;
/* Determine the maximum latency tolerated by the device.
*
* Per the PCIe spec, the tolerated latencies are encoded as
* a 3-bit encoded scale (only 0-5 are valid) multiplied by
* a 10-bit value (0-1023) to provide a range from 1 ns to
* 2^25*(2^10-1) ns. The scale is encoded as 0=2^0ns,
* 1=2^5ns, 2=2^10ns,...5=2^25ns.
*/
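/* Worked example (assumed values): rxa = 24 KB, max_frame_size = 1522
* and speed = 1000 Mb/s give lat_ns = (24 * 1024 - 2 * 1522) * 8 *
* 1000 / 1000 = 172256 ns, which the loop below encodes as value 169,
* scale 2.
*/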
lat_ns = ((s64)rxa * 1024 -
(2 * (s64)hw->mac.max_frame_size)) * 8 * 1000;
if (lat_ns < 0)
lat_ns = 0;
else
lat_ns /= speed;
value = lat_ns;
while (value > E1000_LTRV_VALUE_MASK) {
scale++;
value = E1000_DIVIDE_ROUND_UP(value, (1 << 5));
}
if (scale > E1000_LTRV_SCALE_MAX) {
DEBUGOUT1("Invalid LTR latency scale %d\n", scale);
return -E1000_ERR_CONFIG;
}
lat_enc = (u16)((scale << E1000_LTRV_SCALE_SHIFT) | value);
/* Determine the maximum latency tolerated by the platform */
e1000_read_pci_cfg(hw, E1000_PCI_LTR_CAP_LPT, &max_snoop);
e1000_read_pci_cfg(hw, E1000_PCI_LTR_CAP_LPT + 2, &max_nosnoop);
max_ltr_enc = E1000_MAX(max_snoop, max_nosnoop);
if (lat_enc > max_ltr_enc) {
lat_enc = max_ltr_enc;
lat_ns = e1000_ltr2ns(max_ltr_enc);
}
if (lat_ns) {
lat_ns *= speed * 1000;
lat_ns /= 8;
lat_ns /= 1000000000;
obff_hwm = (s32)(rxa - lat_ns);
}
if ((obff_hwm < 0) || (obff_hwm > E1000_SVT_OFF_HWM_MASK)) {
DEBUGOUT1("Invalid high water mark %d\n", obff_hwm);
return -E1000_ERR_CONFIG;
}
}
/* Set Snoop and No-Snoop latencies the same */
reg |= lat_enc | (lat_enc << E1000_LTRV_NOSNOOP_SHIFT);
E1000_WRITE_REG(hw, E1000_LTRV, reg);
/* Set OBFF high water mark */
reg = E1000_READ_REG(hw, E1000_SVT) & ~E1000_SVT_OFF_HWM_MASK;
reg |= obff_hwm;
E1000_WRITE_REG(hw, E1000_SVT, reg);
/* Enable OBFF */
reg = E1000_READ_REG(hw, E1000_SVCR);
reg |= E1000_SVCR_OFF_EN;
/* Always unblock interrupts to the CPU even when the system is
* in OBFF mode. This ensures that small round-robin traffic
* (like ping) does not get dropped or experience long latency.
*/
reg |= E1000_SVCR_OFF_MASKINT;
E1000_WRITE_REG(hw, E1000_SVCR, reg);
return E1000_SUCCESS;
}
/**
* e1000_set_obff_timer_pch_lpt - Update Optimized Buffer Flush/Fill timer
* @hw: pointer to the HW structure
* @itr: interrupt throttling rate
*
* Configure OBFF with the updated interrupt rate.
**/
static s32 e1000_set_obff_timer_pch_lpt(struct e1000_hw *hw, u32 itr)
{
u32 svcr;
s32 timer;
DEBUGFUNC("e1000_set_obff_timer_pch_lpt");
/* Convert ITR value into microseconds for OBFF timer */
timer = itr & E1000_ITR_MASK;
timer = (timer * E1000_ITR_MULT) / 1000;
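/* Assuming E1000_ITR_MULT expresses the ITR granularity in
* nanoseconds, dividing by 1000 leaves the timer in microseconds,
* the unit the SVCR OBFF timer field programmed below expects.
*/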
if ((timer < 0) || (timer > E1000_ITR_MASK)) {
DEBUGOUT1("Invalid OBFF timer %d\n", timer);
return -E1000_ERR_CONFIG;
}
svcr = E1000_READ_REG(hw, E1000_SVCR);
svcr &= ~E1000_SVCR_OFF_TIMER_MASK;
svcr |= timer << E1000_SVCR_OFF_TIMER_SHIFT;
E1000_WRITE_REG(hw, E1000_SVCR, svcr);
return E1000_SUCCESS;
}
/**
* e1000_enable_ulp_lpt_lp - configure Ultra Low Power mode for LynxPoint-LP
* @hw: pointer to the HW structure
* @to_sx: boolean indicating a system power state transition to Sx
*
* When link is down, configure ULP mode to significantly reduce the power
* to the PHY. If on a Manageability Engine (ME) enabled system, tell the
* ME firmware to start the ULP configuration. If not on an ME enabled
* system, configure the ULP mode by software.
**/
s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx)
{
u32 mac_reg;
s32 ret_val = E1000_SUCCESS;
u16 phy_reg;
u16 oem_reg = 0;
if ((hw->mac.type < e1000_pch_lpt) ||
(hw->device_id == E1000_DEV_ID_PCH_LPT_I217_LM) ||
(hw->device_id == E1000_DEV_ID_PCH_LPT_I217_V) ||
(hw->device_id == E1000_DEV_ID_PCH_I218_LM2) ||
(hw->device_id == E1000_DEV_ID_PCH_I218_V2) ||
(hw->dev_spec.ich8lan.ulp_state == e1000_ulp_state_on))
return 0;
if (E1000_READ_REG(hw, E1000_FWSM) & E1000_ICH_FWSM_FW_VALID) {
/* Request ME configure ULP mode in the PHY */
mac_reg = E1000_READ_REG(hw, E1000_H2ME);
mac_reg |= E1000_H2ME_ULP | E1000_H2ME_ENFORCE_SETTINGS;
E1000_WRITE_REG(hw, E1000_H2ME, mac_reg);
goto out;
}
if (!to_sx) {
int i = 0;
/* Poll up to 5 seconds for Cable Disconnected indication */
while (!(E1000_READ_REG(hw, E1000_FEXT) &
E1000_FEXT_PHY_CABLE_DISCONNECTED)) {
/* Bail if link is re-acquired */
if (E1000_READ_REG(hw, E1000_STATUS) & E1000_STATUS_LU)
return -E1000_ERR_PHY;
if (i++ == 100)
break;
msec_delay(50);
}
DEBUGOUT2("CABLE_DISCONNECTED %s set after %dmsec\n",
(E1000_READ_REG(hw, E1000_FEXT) &
E1000_FEXT_PHY_CABLE_DISCONNECTED) ? "" : "not",
i * 50);
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
/* Force SMBus mode in PHY */
ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);
if (ret_val)
goto release;
phy_reg |= CV_SMB_CTRL_FORCE_SMBUS;
e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);
/* Force SMBus mode in MAC */
mac_reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg);
/* Si workaround for ULP entry flow on i217/rev6 h/w. Enable
* LPLU and disable Gig speed when entering ULP.
*/
if ((hw->phy.type == e1000_phy_i217) && (hw->phy.revision == 6)) {
ret_val = e1000_read_phy_reg_hv_locked(hw, HV_OEM_BITS,
&oem_reg);
if (ret_val)
goto release;
phy_reg = oem_reg;
phy_reg |= HV_OEM_BITS_LPLU | HV_OEM_BITS_GBE_DIS;
ret_val = e1000_write_phy_reg_hv_locked(hw, HV_OEM_BITS,
phy_reg);
if (ret_val)
goto release;
}
/* Set Inband ULP Exit, Reset to SMBus mode and
* Disable SMBus Release on PERST# in PHY
*/
ret_val = e1000_read_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, &phy_reg);
if (ret_val)
goto release;
phy_reg |= (I218_ULP_CONFIG1_RESET_TO_SMBUS |
I218_ULP_CONFIG1_DISABLE_SMB_PERST);
if (to_sx) {
if (E1000_READ_REG(hw, E1000_WUFC) & E1000_WUFC_LNKC)
phy_reg |= I218_ULP_CONFIG1_WOL_HOST;
else
phy_reg &= ~I218_ULP_CONFIG1_WOL_HOST;
phy_reg |= I218_ULP_CONFIG1_STICKY_ULP;
phy_reg &= ~I218_ULP_CONFIG1_INBAND_EXIT;
} else {
phy_reg |= I218_ULP_CONFIG1_INBAND_EXIT;
phy_reg &= ~I218_ULP_CONFIG1_STICKY_ULP;
phy_reg &= ~I218_ULP_CONFIG1_WOL_HOST;
}
e1000_write_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, phy_reg);
/* Set Disable SMBus Release on PERST# in MAC */
mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM7);
mac_reg |= E1000_FEXTNVM7_DISABLE_SMB_PERST;
E1000_WRITE_REG(hw, E1000_FEXTNVM7, mac_reg);
/* Commit ULP changes in PHY by starting auto ULP configuration */
phy_reg |= I218_ULP_CONFIG1_START;
e1000_write_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, phy_reg);
if ((hw->phy.type == e1000_phy_i217) && (hw->phy.revision == 6) &&
to_sx && (E1000_READ_REG(hw, E1000_STATUS) & E1000_STATUS_LU)) {
ret_val = e1000_write_phy_reg_hv_locked(hw, HV_OEM_BITS,
oem_reg);
if (ret_val)
goto release;
}
release:
hw->phy.ops.release(hw);
out:
if (ret_val)
DEBUGOUT1("Error in ULP enable flow: %d\n", ret_val);
else
hw->dev_spec.ich8lan.ulp_state = e1000_ulp_state_on;
return ret_val;
}
/**
* e1000_disable_ulp_lpt_lp - unconfigure Ultra Low Power mode for LynxPoint-LP
* @hw: pointer to the HW structure
* @force: boolean indicating whether or not to force disabling ULP
*
* Un-configure ULP mode when link is up, the system is transitioned from
* Sx or the driver is unloaded. If on a Manageability Engine (ME) enabled
* system, poll for an indication from ME that ULP has been un-configured.
* If not on an ME enabled system, un-configure the ULP mode by software.
*
* During nominal operation, this function is called when link is acquired
* to disable ULP mode (force=FALSE); otherwise, for example when unloading
* the driver or during Sx->S0 transitions, this is called with force=TRUE
* to forcibly disable ULP.
**/
s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force)
{
s32 ret_val = E1000_SUCCESS;
u32 mac_reg;
u16 phy_reg;
int i = 0;
if ((hw->mac.type < e1000_pch_lpt) ||
(hw->device_id == E1000_DEV_ID_PCH_LPT_I217_LM) ||
(hw->device_id == E1000_DEV_ID_PCH_LPT_I217_V) ||
(hw->device_id == E1000_DEV_ID_PCH_I218_LM2) ||
(hw->device_id == E1000_DEV_ID_PCH_I218_V2) ||
(hw->dev_spec.ich8lan.ulp_state == e1000_ulp_state_off))
return 0;
if (E1000_READ_REG(hw, E1000_FWSM) & E1000_ICH_FWSM_FW_VALID) {
if (force) {
/* Request ME un-configure ULP mode in the PHY */
mac_reg = E1000_READ_REG(hw, E1000_H2ME);
mac_reg &= ~E1000_H2ME_ULP;
mac_reg |= E1000_H2ME_ENFORCE_SETTINGS;
E1000_WRITE_REG(hw, E1000_H2ME, mac_reg);
}
/* Poll up to 300msec for ME to clear ULP_CFG_DONE. */
while (E1000_READ_REG(hw, E1000_FWSM) &
E1000_FWSM_ULP_CFG_DONE) {
if (i++ == 30) {
ret_val = -E1000_ERR_PHY;
goto out;
}
msec_delay(10);
}
DEBUGOUT1("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10);
if (force) {
mac_reg = E1000_READ_REG(hw, E1000_H2ME);
mac_reg &= ~E1000_H2ME_ENFORCE_SETTINGS;
E1000_WRITE_REG(hw, E1000_H2ME, mac_reg);
} else {
/* Clear H2ME.ULP after ME ULP configuration */
mac_reg = E1000_READ_REG(hw, E1000_H2ME);
mac_reg &= ~E1000_H2ME_ULP;
E1000_WRITE_REG(hw, E1000_H2ME, mac_reg);
}
goto out;
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
if (force)
/* Toggle LANPHYPC Value bit */
e1000_toggle_lanphypc_pch_lpt(hw);
/* Unforce SMBus mode in PHY */
ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);
if (ret_val) {
/* The MAC might be in PCIe mode, so temporarily force to
* SMBus mode in order to access the PHY.
*/
mac_reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg);
msec_delay(50);
ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL,
&phy_reg);
if (ret_val)
goto release;
}
phy_reg &= ~CV_SMB_CTRL_FORCE_SMBUS;
e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);
/* Unforce SMBus mode in MAC */
mac_reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
mac_reg &= ~E1000_CTRL_EXT_FORCE_SMBUS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg);
/* When ULP mode was previously entered, K1 was disabled by the
* hardware. Re-Enable K1 in the PHY when exiting ULP.
*/
ret_val = e1000_read_phy_reg_hv_locked(hw, HV_PM_CTRL, &phy_reg);
if (ret_val)
goto release;
phy_reg |= HV_PM_CTRL_K1_ENABLE;
e1000_write_phy_reg_hv_locked(hw, HV_PM_CTRL, phy_reg);
/* Clear ULP enabled configuration */
ret_val = e1000_read_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, &phy_reg);
if (ret_val)
goto release;
phy_reg &= ~(I218_ULP_CONFIG1_IND |
I218_ULP_CONFIG1_STICKY_ULP |
I218_ULP_CONFIG1_RESET_TO_SMBUS |
I218_ULP_CONFIG1_WOL_HOST |
I218_ULP_CONFIG1_INBAND_EXIT |
I218_ULP_CONFIG1_EN_ULP_LANPHYPC |
I218_ULP_CONFIG1_DIS_CLR_STICKY_ON_PERST |
I218_ULP_CONFIG1_DISABLE_SMB_PERST);
e1000_write_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, phy_reg);
/* Commit ULP changes by starting auto ULP configuration */
phy_reg |= I218_ULP_CONFIG1_START;
e1000_write_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, phy_reg);
/* Clear Disable SMBus Release on PERST# in MAC */
mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM7);
mac_reg &= ~E1000_FEXTNVM7_DISABLE_SMB_PERST;
E1000_WRITE_REG(hw, E1000_FEXTNVM7, mac_reg);
release:
hw->phy.ops.release(hw);
if (force) {
hw->phy.ops.reset(hw);
msec_delay(50);
}
out:
if (ret_val)
DEBUGOUT1("Error in ULP disable flow: %d\n", ret_val);
else
hw->dev_spec.ich8lan.ulp_state = e1000_ulp_state_off;
return ret_val;
}
/**
* e1000_check_for_copper_link_ich8lan - Check for link (Copper)
* @hw: pointer to the HW structure
*
* Checks to see if the link status of the hardware has changed. If a
* change in link status has been detected, then we read the PHY registers
* to get the current speed/duplex if link exists.
**/
static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
s32 ret_val, tipg_reg = 0;
u16 emi_addr, emi_val = 0;
bool link;
u16 phy_reg;
DEBUGFUNC("e1000_check_for_copper_link_ich8lan");
/* We only want to go out to the PHY registers to see if Auto-Neg
* has completed and/or if our link status has changed. The
* get_link_status flag is set upon receiving a Link Status
* Change or Rx Sequence Error interrupt.
*/
if (!mac->get_link_status)
return E1000_SUCCESS;
/* First we want to see if the MII Status Register reports
* link. If so, then we want to get the current speed/duplex
* of the PHY.
*/
ret_val = e1000_phy_has_link_generic(hw, 1, 0, &link);
if (ret_val)
return ret_val;
if (hw->mac.type == e1000_pchlan) {
ret_val = e1000_k1_gig_workaround_hv(hw, link);
if (ret_val)
return ret_val;
}
/* When connected at 10Mbps half-duplex, some parts are excessively
* aggressive resulting in many collisions. To avoid this, increase
* the IPG and reduce Rx latency in the PHY.
*/
if (((hw->mac.type == e1000_pch2lan) ||
(hw->mac.type == e1000_pch_lpt) ||
(hw->mac.type == e1000_pch_spt)) && link) {
u16 speed, duplex;
e1000_get_speed_and_duplex_copper_generic(hw, &speed, &duplex);
tipg_reg = E1000_READ_REG(hw, E1000_TIPG);
tipg_reg &= ~E1000_TIPG_IPGT_MASK;
if (duplex == HALF_DUPLEX && speed == SPEED_10) {
tipg_reg |= 0xFF;
/* Reduce Rx latency in analog PHY */
emi_val = 0;
} else if (hw->mac.type == e1000_pch_spt &&
duplex == FULL_DUPLEX && speed != SPEED_1000) {
tipg_reg |= 0xC;
emi_val = 1;
} else {
/* Restore the default values */
tipg_reg |= 0x08;
emi_val = 1;
}
E1000_WRITE_REG(hw, E1000_TIPG, tipg_reg);
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
if (hw->mac.type == e1000_pch2lan)
emi_addr = I82579_RX_CONFIG;
else
emi_addr = I217_RX_CONFIG;
ret_val = e1000_write_emi_reg_locked(hw, emi_addr, emi_val);
if (hw->mac.type == e1000_pch_lpt ||
hw->mac.type == e1000_pch_spt) {
u16 phy_reg;
hw->phy.ops.read_reg_locked(hw, I217_PLL_CLOCK_GATE_REG,
&phy_reg);
phy_reg &= ~I217_PLL_CLOCK_GATE_MASK;
if (speed == SPEED_100 || speed == SPEED_10)
phy_reg |= 0x3E8;
else
phy_reg |= 0xFA;
hw->phy.ops.write_reg_locked(hw,
I217_PLL_CLOCK_GATE_REG,
phy_reg);
}
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
if (hw->mac.type == e1000_pch_spt) {
u16 data;
u16 ptr_gap;
if (speed == SPEED_1000) {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.read_reg_locked(hw,
PHY_REG(776, 20),
&data);
if (ret_val) {
hw->phy.ops.release(hw);
return ret_val;
}
ptr_gap = (data & (0x3FF << 2)) >> 2;
if (ptr_gap < 0x18) {
data &= ~(0x3FF << 2);
data |= (0x18 << 2);
ret_val =
hw->phy.ops.write_reg_locked(hw,
PHY_REG(776, 20), data);
}
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
} else {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.write_reg_locked(hw,
PHY_REG(776, 20),
0xC023);
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
}
}
}
/* I217 Packet Loss issue:
* ensure that FEXTNVM4 Beacon Duration is set correctly
* on power up.
* Set the Beacon Duration for I217 to 8 usec
*/
if ((hw->mac.type == e1000_pch_lpt) ||
(hw->mac.type == e1000_pch_spt)) {
u32 mac_reg;
mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM4);
mac_reg &= ~E1000_FEXTNVM4_BEACON_DURATION_MASK;
mac_reg |= E1000_FEXTNVM4_BEACON_DURATION_8USEC;
E1000_WRITE_REG(hw, E1000_FEXTNVM4, mac_reg);
}
/* Work around the I218 hang issue */
if ((hw->device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) ||
(hw->device_id == E1000_DEV_ID_PCH_LPTLP_I218_V) ||
(hw->device_id == E1000_DEV_ID_PCH_I218_LM3) ||
(hw->device_id == E1000_DEV_ID_PCH_I218_V3)) {
ret_val = e1000_k1_workaround_lpt_lp(hw, link);
if (ret_val)
return ret_val;
}
if ((hw->mac.type == e1000_pch_lpt) ||
(hw->mac.type == e1000_pch_spt)) {
/* Set platform power management values for
* Latency Tolerance Reporting (LTR) and
* Optimized Buffer Flush/Fill (OBFF)
*/
ret_val = e1000_platform_pm_pch_lpt(hw, link);
if (ret_val)
return ret_val;
}
/* Clear link partner's EEE ability */
hw->dev_spec.ich8lan.eee_lp_ability = 0;
/* FEXTNVM6 K1-off workaround */
if (hw->mac.type == e1000_pch_spt) {
u32 pcieanacfg = E1000_READ_REG(hw, E1000_PCIEANACFG);
u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6);
if (pcieanacfg & E1000_FEXTNVM6_K1_OFF_ENABLE)
fextnvm6 |= E1000_FEXTNVM6_K1_OFF_ENABLE;
else
fextnvm6 &= ~E1000_FEXTNVM6_K1_OFF_ENABLE;
E1000_WRITE_REG(hw, E1000_FEXTNVM6, fextnvm6);
}
if (!link)
return E1000_SUCCESS; /* No link detected */
mac->get_link_status = FALSE;
switch (hw->mac.type) {
case e1000_pch2lan:
ret_val = e1000_k1_workaround_lv(hw);
if (ret_val)
return ret_val;
/* fall-through */
case e1000_pchlan:
if (hw->phy.type == e1000_phy_82578) {
ret_val = e1000_link_stall_workaround_hv(hw);
if (ret_val)
return ret_val;
}
/* Workaround for PCHx parts in half-duplex:
* Set the number of preambles removed from the packet
* when it is passed from the PHY to the MAC to prevent
* the MAC from misinterpreting the packet type.
*/
hw->phy.ops.read_reg(hw, HV_KMRN_FIFO_CTRLSTA, &phy_reg);
phy_reg &= ~HV_KMRN_FIFO_CTRLSTA_PREAMBLE_MASK;
if ((E1000_READ_REG(hw, E1000_STATUS) & E1000_STATUS_FD) !=
E1000_STATUS_FD)
phy_reg |= (1 << HV_KMRN_FIFO_CTRLSTA_PREAMBLE_SHIFT);
hw->phy.ops.write_reg(hw, HV_KMRN_FIFO_CTRLSTA, phy_reg);
break;
default:
break;
}
/* Check if there was DownShift; this must be checked
* immediately after link-up
*/
e1000_check_downshift_generic(hw);
/* Enable/Disable EEE after link up */
if (hw->phy.type > e1000_phy_82579) {
ret_val = e1000_set_eee_pchlan(hw);
if (ret_val)
return ret_val;
}
/* If we are forcing speed/duplex, then we simply return since
* we have already determined whether we have link or not.
*/
if (!mac->autoneg)
return -E1000_ERR_CONFIG;
/* Auto-Neg is enabled. Auto Speed Detection takes care
* of MAC speed/duplex configuration. So we only need to
* configure Collision Distance in the MAC.
*/
mac->ops.config_collision_dist(hw);
/* Configure Flow Control now that Auto-Neg has completed.
* First, we need to restore the desired flow control
* settings because we may have had to re-autoneg with a
* different link partner.
*/
ret_val = e1000_config_fc_after_link_up_generic(hw);
if (ret_val)
DEBUGOUT("Error configuring flow control\n");
return ret_val;
}
/**
* e1000_init_function_pointers_ich8lan - Initialize ICH8 function pointers
* @hw: pointer to the HW structure
*
* Initialize family-specific function pointers for PHY, MAC, and NVM.
**/
void e1000_init_function_pointers_ich8lan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_init_function_pointers_ich8lan");
hw->mac.ops.init_params = e1000_init_mac_params_ich8lan;
hw->nvm.ops.init_params = e1000_init_nvm_params_ich8lan;
switch (hw->mac.type) {
case e1000_ich8lan:
case e1000_ich9lan:
case e1000_ich10lan:
hw->phy.ops.init_params = e1000_init_phy_params_ich8lan;
break;
case e1000_pchlan:
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
hw->phy.ops.init_params = e1000_init_phy_params_pchlan;
break;
default:
break;
}
}
/**
* e1000_acquire_nvm_ich8lan - Acquire NVM mutex
* @hw: pointer to the HW structure
*
* Acquires the mutex for performing NVM operations.
**/
static s32 e1000_acquire_nvm_ich8lan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_acquire_nvm_ich8lan");
E1000_MUTEX_LOCK(&hw->dev_spec.ich8lan.nvm_mutex);
return E1000_SUCCESS;
}
/**
* e1000_release_nvm_ich8lan - Release NVM mutex
* @hw: pointer to the HW structure
*
* Releases the mutex used while performing NVM operations.
**/
static void e1000_release_nvm_ich8lan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_release_nvm_ich8lan");
E1000_MUTEX_UNLOCK(&hw->dev_spec.ich8lan.nvm_mutex);
return;
}
/**
* e1000_acquire_swflag_ich8lan - Acquire software control flag
* @hw: pointer to the HW structure
*
* Acquires the software control flag for performing PHY and select
* MAC CSR accesses.
**/
static s32 e1000_acquire_swflag_ich8lan(struct e1000_hw *hw)
{
u32 extcnf_ctrl, timeout = PHY_CFG_TIMEOUT;
s32 ret_val = E1000_SUCCESS;
DEBUGFUNC("e1000_acquire_swflag_ich8lan");
E1000_MUTEX_LOCK(&hw->dev_spec.ich8lan.swflag_mutex);
while (timeout) {
extcnf_ctrl = E1000_READ_REG(hw, E1000_EXTCNF_CTRL);
if (!(extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG))
break;
msec_delay_irq(1);
timeout--;
}
if (!timeout) {
DEBUGOUT("SW has already locked the resource.\n");
ret_val = -E1000_ERR_CONFIG;
goto out;
}
timeout = SW_FLAG_TIMEOUT;
extcnf_ctrl |= E1000_EXTCNF_CTRL_SWFLAG;
E1000_WRITE_REG(hw, E1000_EXTCNF_CTRL, extcnf_ctrl);
while (timeout) {
extcnf_ctrl = E1000_READ_REG(hw, E1000_EXTCNF_CTRL);
if (extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG)
break;
msec_delay_irq(1);
timeout--;
}
if (!timeout) {
DEBUGOUT2("Failed to acquire the semaphore, FW or HW has it: FWSM=0x%8.8x EXTCNF_CTRL=0x%8.8x)\n",
E1000_READ_REG(hw, E1000_FWSM), extcnf_ctrl);
extcnf_ctrl &= ~E1000_EXTCNF_CTRL_SWFLAG;
E1000_WRITE_REG(hw, E1000_EXTCNF_CTRL, extcnf_ctrl);
ret_val = -E1000_ERR_CONFIG;
goto out;
}
out:
if (ret_val)
E1000_MUTEX_UNLOCK(&hw->dev_spec.ich8lan.swflag_mutex);
return ret_val;
}
/**
* e1000_release_swflag_ich8lan - Release software control flag
* @hw: pointer to the HW structure
*
* Releases the software control flag for performing PHY and select
* MAC CSR accesses.
**/
static void e1000_release_swflag_ich8lan(struct e1000_hw *hw)
{
u32 extcnf_ctrl;
DEBUGFUNC("e1000_release_swflag_ich8lan");
extcnf_ctrl = E1000_READ_REG(hw, E1000_EXTCNF_CTRL);
if (extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG) {
extcnf_ctrl &= ~E1000_EXTCNF_CTRL_SWFLAG;
E1000_WRITE_REG(hw, E1000_EXTCNF_CTRL, extcnf_ctrl);
} else {
DEBUGOUT("Semaphore unexpectedly released by sw/fw/hw\n");
}
E1000_MUTEX_UNLOCK(&hw->dev_spec.ich8lan.swflag_mutex);
return;
}
/**
* e1000_check_mng_mode_ich8lan - Checks management mode
* @hw: pointer to the HW structure
*
* This checks if the adapter has any manageability enabled.
* This is a function pointer entry point only called by read/write
* routines for the PHY and NVM parts.
**/
static bool e1000_check_mng_mode_ich8lan(struct e1000_hw *hw)
{
u32 fwsm;
DEBUGFUNC("e1000_check_mng_mode_ich8lan");
fwsm = E1000_READ_REG(hw, E1000_FWSM);
return (fwsm & E1000_ICH_FWSM_FW_VALID) &&
((fwsm & E1000_FWSM_MODE_MASK) ==
(E1000_ICH_MNG_IAMT_MODE << E1000_FWSM_MODE_SHIFT));
}
/**
* e1000_check_mng_mode_pchlan - Checks management mode
* @hw: pointer to the HW structure
*
* This checks if the adapter has iAMT enabled.
* This is a function pointer entry point only called by read/write
* routines for the PHY and NVM parts.
**/
static bool e1000_check_mng_mode_pchlan(struct e1000_hw *hw)
{
u32 fwsm;
DEBUGFUNC("e1000_check_mng_mode_pchlan");
fwsm = E1000_READ_REG(hw, E1000_FWSM);
return (fwsm & E1000_ICH_FWSM_FW_VALID) &&
(fwsm & (E1000_ICH_MNG_IAMT_MODE << E1000_FWSM_MODE_SHIFT));
}
/**
* e1000_rar_set_pch2lan - Set receive address register
* @hw: pointer to the HW structure
* @addr: pointer to the receive address
* @index: receive address array register
*
* Sets the receive address array register at index to the address passed
* in by addr. For 82579, RAR[0] is the base address register that is to
* contain the MAC address but RAR[1-6] are reserved for manageability (ME).
* Use SHRA[0-3] in place of those reserved for ME.
**/
static int e1000_rar_set_pch2lan(struct e1000_hw *hw, u8 *addr, u32 index)
{
u32 rar_low, rar_high;
DEBUGFUNC("e1000_rar_set_pch2lan");
/* HW expects these in little endian so we reverse the byte order
* from network order (big endian) to little endian
*/
rar_low = ((u32) addr[0] |
((u32) addr[1] << 8) |
((u32) addr[2] << 16) | ((u32) addr[3] << 24));
rar_high = ((u32) addr[4] | ((u32) addr[5] << 8));
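/* Example: for MAC address 00:A0:C9:12:34:56 this yields
* rar_low = 0x12C9A000 and rar_high = 0x00005634 (before AV is set).
*/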
/* If MAC address zero, no need to set the AV bit */
if (rar_low || rar_high)
rar_high |= E1000_RAH_AV;
if (index == 0) {
E1000_WRITE_REG(hw, E1000_RAL(index), rar_low);
E1000_WRITE_FLUSH(hw);
E1000_WRITE_REG(hw, E1000_RAH(index), rar_high);
E1000_WRITE_FLUSH(hw);
return E1000_SUCCESS;
}
/* RAR[1-6] are owned by manageability. Skip those and program the
* next address into the SHRA register array.
*/
if (index < (u32) (hw->mac.rar_entry_count)) {
s32 ret_val;
ret_val = e1000_acquire_swflag_ich8lan(hw);
if (ret_val)
goto out;
E1000_WRITE_REG(hw, E1000_SHRAL(index - 1), rar_low);
E1000_WRITE_FLUSH(hw);
E1000_WRITE_REG(hw, E1000_SHRAH(index - 1), rar_high);
E1000_WRITE_FLUSH(hw);
e1000_release_swflag_ich8lan(hw);
/* verify the register updates */
if ((E1000_READ_REG(hw, E1000_SHRAL(index - 1)) == rar_low) &&
(E1000_READ_REG(hw, E1000_SHRAH(index - 1)) == rar_high))
return E1000_SUCCESS;
DEBUGOUT2("SHRA[%d] might be locked by ME - FWSM=0x%8.8x\n",
(index - 1), E1000_READ_REG(hw, E1000_FWSM));
}
out:
DEBUGOUT1("Failed to write receive address at index %d\n", index);
return -E1000_ERR_CONFIG;
}
/**
* e1000_rar_set_pch_lpt - Set receive address registers
* @hw: pointer to the HW structure
* @addr: pointer to the receive address
* @index: receive address array register
*
* Sets the receive address register array at index to the address passed
* in by addr. For LPT, RAR[0] is the base address register that is to
* contain the MAC address. SHRA[0-10] are the shared receive address
* registers that are shared between the Host and manageability engine (ME).
**/
static int e1000_rar_set_pch_lpt(struct e1000_hw *hw, u8 *addr, u32 index)
{
u32 rar_low, rar_high;
u32 wlock_mac;
DEBUGFUNC("e1000_rar_set_pch_lpt");
/* HW expects these in little endian so we reverse the byte order
* from network order (big endian) to little endian
*/
rar_low = ((u32) addr[0] | ((u32) addr[1] << 8) |
((u32) addr[2] << 16) | ((u32) addr[3] << 24));
rar_high = ((u32) addr[4] | ((u32) addr[5] << 8));
/* If MAC address zero, no need to set the AV bit */
if (rar_low || rar_high)
rar_high |= E1000_RAH_AV;
if (index == 0) {
E1000_WRITE_REG(hw, E1000_RAL(index), rar_low);
E1000_WRITE_FLUSH(hw);
E1000_WRITE_REG(hw, E1000_RAH(index), rar_high);
E1000_WRITE_FLUSH(hw);
return E1000_SUCCESS;
}
/* The manageability engine (ME) can lock certain SHRAR registers that
* it is using - those registers are unavailable for use.
*/
if (index < hw->mac.rar_entry_count) {
wlock_mac = E1000_READ_REG(hw, E1000_FWSM) &
E1000_FWSM_WLOCK_MAC_MASK;
wlock_mac >>= E1000_FWSM_WLOCK_MAC_SHIFT;
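/* Per the checks below: wlock_mac == 0 means no SHRAR registers are
* locked, wlock_mac == 1 means ME has locked them all, and any larger
* value still permits host writes up to that index.
*/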
/* Check if all SHRAR registers are locked */
if (wlock_mac == 1)
goto out;
if ((wlock_mac == 0) || (index <= wlock_mac)) {
s32 ret_val;
ret_val = e1000_acquire_swflag_ich8lan(hw);
if (ret_val)
goto out;
E1000_WRITE_REG(hw, E1000_SHRAL_PCH_LPT(index - 1),
rar_low);
E1000_WRITE_FLUSH(hw);
E1000_WRITE_REG(hw, E1000_SHRAH_PCH_LPT(index - 1),
rar_high);
E1000_WRITE_FLUSH(hw);
e1000_release_swflag_ich8lan(hw);
/* verify the register updates */
if ((E1000_READ_REG(hw, E1000_SHRAL_PCH_LPT(index - 1)) == rar_low) &&
(E1000_READ_REG(hw, E1000_SHRAH_PCH_LPT(index - 1)) == rar_high))
return E1000_SUCCESS;
}
}
out:
DEBUGOUT1("Failed to write receive address at index %d\n", index);
return -E1000_ERR_CONFIG;
}
/**
* e1000_update_mc_addr_list_pch2lan - Update Multicast addresses
* @hw: pointer to the HW structure
* @mc_addr_list: array of multicast addresses to program
* @mc_addr_count: number of multicast addresses to program
*
* Updates entire Multicast Table Array of the PCH2 MAC and PHY.
* The caller must have a packed mc_addr_list of multicast addresses.
**/
static void e1000_update_mc_addr_list_pch2lan(struct e1000_hw *hw,
u8 *mc_addr_list,
u32 mc_addr_count)
{
u16 phy_reg = 0;
int i;
s32 ret_val;
DEBUGFUNC("e1000_update_mc_addr_list_pch2lan");
e1000_update_mc_addr_list_generic(hw, mc_addr_list, mc_addr_count);
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return;
ret_val = e1000_enable_phy_wakeup_reg_access_bm(hw, &phy_reg);
if (ret_val)
goto release;
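/* Mirror each 32-bit MTA shadow entry into the PHY wakeup register
* pair: BM_MTA(i) receives the low word, BM_MTA(i) + 1 the high word.
*/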
for (i = 0; i < hw->mac.mta_reg_count; i++) {
hw->phy.ops.write_reg_page(hw, BM_MTA(i),
(u16)(hw->mac.mta_shadow[i] &
0xFFFF));
hw->phy.ops.write_reg_page(hw, (BM_MTA(i) + 1),
(u16)((hw->mac.mta_shadow[i] >> 16) &
0xFFFF));
}
e1000_disable_phy_wakeup_reg_access_bm(hw, &phy_reg);
release:
hw->phy.ops.release(hw);
}
/**
* e1000_check_reset_block_ich8lan - Check if PHY reset is blocked
* @hw: pointer to the HW structure
*
* Checks if firmware is blocking the reset of the PHY.
* This is a function pointer entry point only called by
* reset routines.
**/
static s32 e1000_check_reset_block_ich8lan(struct e1000_hw *hw)
{
u32 fwsm;
bool blocked = FALSE;
int i = 0;
DEBUGFUNC("e1000_check_reset_block_ich8lan");
do {
fwsm = E1000_READ_REG(hw, E1000_FWSM);
if (!(fwsm & E1000_ICH_FWSM_RSPCIPHY)) {
blocked = TRUE;
msec_delay(10);
continue;
}
blocked = FALSE;
} while (blocked && (i++ < 30));
return blocked ? E1000_BLK_PHY_RESET : E1000_SUCCESS;
}
/**
* e1000_write_smbus_addr - Write SMBus address to PHY needed during Sx states
* @hw: pointer to the HW structure
*
* Assumes semaphore already acquired.
**/
static s32 e1000_write_smbus_addr(struct e1000_hw *hw)
{
u16 phy_data;
u32 strap = E1000_READ_REG(hw, E1000_STRAP);
u32 freq = (strap & E1000_STRAP_SMT_FREQ_MASK) >>
E1000_STRAP_SMT_FREQ_SHIFT;
s32 ret_val;
strap &= E1000_STRAP_SMBUS_ADDRESS_MASK;
ret_val = e1000_read_phy_reg_hv_locked(hw, HV_SMB_ADDR, &phy_data);
if (ret_val)
return ret_val;
phy_data &= ~HV_SMB_ADDR_MASK;
phy_data |= (strap >> E1000_STRAP_SMBUS_ADDRESS_SHIFT);
phy_data |= HV_SMB_ADDR_PEC_EN | HV_SMB_ADDR_VALID;
if (hw->phy.type == e1000_phy_i217) {
/* Restore SMBus frequency */
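/* A strap field of 0 means the frequency is unsupported; nonzero
* values are stored biased by one, hence the post-decrement before
* splitting the two bits across the PHY's low/high positions.
*/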
if (freq--) {
phy_data &= ~HV_SMB_ADDR_FREQ_MASK;
phy_data |= (freq & (1 << 0)) <<
HV_SMB_ADDR_FREQ_LOW_SHIFT;
phy_data |= (freq & (1 << 1)) <<
(HV_SMB_ADDR_FREQ_HIGH_SHIFT - 1);
} else {
DEBUGOUT("Unsupported SMB frequency in PHY\n");
}
}
return e1000_write_phy_reg_hv_locked(hw, HV_SMB_ADDR, phy_data);
}
/**
* e1000_sw_lcd_config_ich8lan - SW-based LCD Configuration
* @hw: pointer to the HW structure
*
* SW should configure the LCD from the NVM extended configuration region
* as a workaround for certain parts.
**/
static s32 e1000_sw_lcd_config_ich8lan(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
u32 i, data, cnf_size, cnf_base_addr, sw_cfg_mask;
s32 ret_val = E1000_SUCCESS;
u16 word_addr, reg_data, reg_addr, phy_page = 0;
DEBUGFUNC("e1000_sw_lcd_config_ich8lan");
/* Initialize the PHY from the NVM on ICH platforms. This
* is needed due to an issue where the NVM configuration is
* not properly autoloaded after power transitions.
* Therefore, after each PHY reset, we will load the
* configuration data out of the NVM manually.
*/
switch (hw->mac.type) {
case e1000_ich8lan:
if (phy->type != e1000_phy_igp_3)
return ret_val;
if ((hw->device_id == E1000_DEV_ID_ICH8_IGP_AMT) ||
(hw->device_id == E1000_DEV_ID_ICH8_IGP_C)) {
sw_cfg_mask = E1000_FEXTNVM_SW_CONFIG;
break;
}
/* Fall-through */
case e1000_pchlan:
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
sw_cfg_mask = E1000_FEXTNVM_SW_CONFIG_ICH8M;
break;
default:
return ret_val;
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
data = E1000_READ_REG(hw, E1000_FEXTNVM);
if (!(data & sw_cfg_mask))
goto release;
/* Make sure HW does not configure LCD from PHY
* extended configuration before SW configuration
*/
data = E1000_READ_REG(hw, E1000_EXTCNF_CTRL);
if ((hw->mac.type < e1000_pch2lan) &&
(data & E1000_EXTCNF_CTRL_LCD_WRITE_ENABLE))
goto release;
cnf_size = E1000_READ_REG(hw, E1000_EXTCNF_SIZE);
cnf_size &= E1000_EXTCNF_SIZE_EXT_PCIE_LENGTH_MASK;
cnf_size >>= E1000_EXTCNF_SIZE_EXT_PCIE_LENGTH_SHIFT;
if (!cnf_size)
goto release;
cnf_base_addr = data & E1000_EXTCNF_CTRL_EXT_CNF_POINTER_MASK;
cnf_base_addr >>= E1000_EXTCNF_CTRL_EXT_CNF_POINTER_SHIFT;
if (((hw->mac.type == e1000_pchlan) &&
!(data & E1000_EXTCNF_CTRL_OEM_WRITE_ENABLE)) ||
(hw->mac.type > e1000_pchlan)) {
/* HW configures the SMBus address and LEDs when the
* OEM and LCD Write Enable bits are set in the NVM.
* When both NVM bits are cleared, SW will configure
* them instead.
*/
ret_val = e1000_write_smbus_addr(hw);
if (ret_val)
goto release;
data = E1000_READ_REG(hw, E1000_LEDCTL);
ret_val = e1000_write_phy_reg_hv_locked(hw, HV_LED_CONFIG,
(u16)data);
if (ret_val)
goto release;
}
/* Configure LCD from extended configuration region. */
/* cnf_base_addr is in DWORD */
word_addr = (u16)(cnf_base_addr << 1);
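/* The region is a list of (data, address) word pairs; a
* PAGE_SELECT entry switches the PHY page used for the register
* writes that follow it.
*/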
for (i = 0; i < cnf_size; i++) {
ret_val = hw->nvm.ops.read(hw, (word_addr + i * 2), 1,
&reg_data);
if (ret_val)
goto release;
ret_val = hw->nvm.ops.read(hw, (word_addr + i * 2 + 1),
1, &reg_addr);
if (ret_val)
goto release;
/* Save off the PHY page for future writes. */
if (reg_addr == IGP01E1000_PHY_PAGE_SELECT) {
phy_page = reg_data;
continue;
}
reg_addr &= PHY_REG_MASK;
reg_addr |= phy_page;
ret_val = phy->ops.write_reg_locked(hw, (u32)reg_addr,
reg_data);
if (ret_val)
goto release;
}
release:
hw->phy.ops.release(hw);
return ret_val;
}
/**
* e1000_k1_gig_workaround_hv - K1 Si workaround
* @hw: pointer to the HW structure
* @link: link up bool flag
*
* If K1 is enabled for 1Gbps, the MAC might stall when transitioning
* from a lower speed. This workaround disables K1 whenever link is at 1Gig.
* If link is down, the function will restore the default K1 setting located
* in the NVM.
**/
static s32 e1000_k1_gig_workaround_hv(struct e1000_hw *hw, bool link)
{
s32 ret_val = E1000_SUCCESS;
u16 status_reg = 0;
bool k1_enable = hw->dev_spec.ich8lan.nvm_k1_enabled;
DEBUGFUNC("e1000_k1_gig_workaround_hv");
if (hw->mac.type != e1000_pchlan)
return E1000_SUCCESS;
/* Wrap the whole flow with the sw flag */
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
/* Disable K1 when link is 1Gbps, otherwise use the NVM setting */
if (link) {
if (hw->phy.type == e1000_phy_82578) {
ret_val = hw->phy.ops.read_reg_locked(hw, BM_CS_STATUS,
&status_reg);
if (ret_val)
goto release;
status_reg &= (BM_CS_STATUS_LINK_UP |
BM_CS_STATUS_RESOLVED |
BM_CS_STATUS_SPEED_MASK);
if (status_reg == (BM_CS_STATUS_LINK_UP |
BM_CS_STATUS_RESOLVED |
BM_CS_STATUS_SPEED_1000))
k1_enable = FALSE;
}
if (hw->phy.type == e1000_phy_82577) {
ret_val = hw->phy.ops.read_reg_locked(hw, HV_M_STATUS,
&status_reg);
if (ret_val)
goto release;
status_reg &= (HV_M_STATUS_LINK_UP |
HV_M_STATUS_AUTONEG_COMPLETE |
HV_M_STATUS_SPEED_MASK);
if (status_reg == (HV_M_STATUS_LINK_UP |
HV_M_STATUS_AUTONEG_COMPLETE |
HV_M_STATUS_SPEED_1000))
k1_enable = FALSE;
}
/* Link stall fix for link up */
ret_val = hw->phy.ops.write_reg_locked(hw, PHY_REG(770, 19),
0x0100);
if (ret_val)
goto release;
} else {
/* Link stall fix for link down */
ret_val = hw->phy.ops.write_reg_locked(hw, PHY_REG(770, 19),
0x4100);
if (ret_val)
goto release;
}
ret_val = e1000_configure_k1_ich8lan(hw, k1_enable);
release:
hw->phy.ops.release(hw);
return ret_val;
}
/**
* e1000_configure_k1_ich8lan - Configure K1 power state
* @hw: pointer to the HW structure
* @k1_enable: K1 state to configure
*
* Configure the K1 power state based on the provided parameter.
* Assumes semaphore already acquired.
*
* Success returns 0, Failure returns -E1000_ERR_PHY (-2)
**/
s32 e1000_configure_k1_ich8lan(struct e1000_hw *hw, bool k1_enable)
{
s32 ret_val;
u32 ctrl_reg = 0;
u32 ctrl_ext = 0;
u32 reg = 0;
u16 kmrn_reg = 0;
DEBUGFUNC("e1000_configure_k1_ich8lan");
ret_val = e1000_read_kmrn_reg_locked(hw, E1000_KMRNCTRLSTA_K1_CONFIG,
&kmrn_reg);
if (ret_val)
return ret_val;
if (k1_enable)
kmrn_reg |= E1000_KMRNCTRLSTA_K1_ENABLE;
else
kmrn_reg &= ~E1000_KMRNCTRLSTA_K1_ENABLE;
ret_val = e1000_write_kmrn_reg_locked(hw, E1000_KMRNCTRLSTA_K1_CONFIG,
kmrn_reg);
if (ret_val)
return ret_val;
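/* Cycle the MAC through a forced-speed transition (force the
* speed with SPD_BYPS set, flush, wait, then restore the original
* CTRL and CTRL_EXT values), presumably so the new K1 setting
* takes effect.
*/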
usec_delay(20);
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
ctrl_reg = E1000_READ_REG(hw, E1000_CTRL);
reg = ctrl_reg & ~(E1000_CTRL_SPD_1000 | E1000_CTRL_SPD_100);
reg |= E1000_CTRL_FRCSPD;
E1000_WRITE_REG(hw, E1000_CTRL, reg);
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext | E1000_CTRL_EXT_SPD_BYPS);
E1000_WRITE_FLUSH(hw);
usec_delay(20);
E1000_WRITE_REG(hw, E1000_CTRL, ctrl_reg);
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
E1000_WRITE_FLUSH(hw);
usec_delay(20);
return E1000_SUCCESS;
}
/**
* e1000_oem_bits_config_ich8lan - SW-based LCD Configuration
* @hw: pointer to the HW structure
* @d0_state: boolean if entering d0 or d3 device state
*
* SW will configure Gbe Disable and LPLU based on the NVM. The four bits are
* collectively called OEM bits. The OEM Write Enable bit and SW Config bit
* in NVM determine whether HW should configure LPLU and Gbe Disable.
**/
static s32 e1000_oem_bits_config_ich8lan(struct e1000_hw *hw, bool d0_state)
{
s32 ret_val = 0;
u32 mac_reg;
u16 oem_reg;
DEBUGFUNC("e1000_oem_bits_config_ich8lan");
if (hw->mac.type < e1000_pchlan)
return ret_val;
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
if (hw->mac.type == e1000_pchlan) {
mac_reg = E1000_READ_REG(hw, E1000_EXTCNF_CTRL);
if (mac_reg & E1000_EXTCNF_CTRL_OEM_WRITE_ENABLE)
goto release;
}
mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM);
if (!(mac_reg & E1000_FEXTNVM_SW_CONFIG_ICH8M))
goto release;
mac_reg = E1000_READ_REG(hw, E1000_PHY_CTRL);
ret_val = hw->phy.ops.read_reg_locked(hw, HV_OEM_BITS, &oem_reg);
if (ret_val)
goto release;
oem_reg &= ~(HV_OEM_BITS_GBE_DIS | HV_OEM_BITS_LPLU);
if (d0_state) {
if (mac_reg & E1000_PHY_CTRL_GBE_DISABLE)
oem_reg |= HV_OEM_BITS_GBE_DIS;
if (mac_reg & E1000_PHY_CTRL_D0A_LPLU)
oem_reg |= HV_OEM_BITS_LPLU;
} else {
if (mac_reg & (E1000_PHY_CTRL_GBE_DISABLE |
E1000_PHY_CTRL_NOND0A_GBE_DISABLE))
oem_reg |= HV_OEM_BITS_GBE_DIS;
if (mac_reg & (E1000_PHY_CTRL_D0A_LPLU |
E1000_PHY_CTRL_NOND0A_LPLU))
oem_reg |= HV_OEM_BITS_LPLU;
}
/* Set Restart auto-neg to activate the bits */
if ((d0_state || (hw->mac.type != e1000_pchlan)) &&
!hw->phy.ops.check_reset_block(hw))
oem_reg |= HV_OEM_BITS_RESTART_AN;
ret_val = hw->phy.ops.write_reg_locked(hw, HV_OEM_BITS, oem_reg);
release:
hw->phy.ops.release(hw);
return ret_val;
}
/**
* e1000_set_mdio_slow_mode_hv - Set slow MDIO access mode
* @hw: pointer to the HW structure
**/
static s32 e1000_set_mdio_slow_mode_hv(struct e1000_hw *hw)
{
s32 ret_val;
u16 data;
DEBUGFUNC("e1000_set_mdio_slow_mode_hv");
ret_val = hw->phy.ops.read_reg(hw, HV_KMRN_MODE_CTRL, &data);
if (ret_val)
return ret_val;
data |= HV_KMRN_MDIO_SLOW;
ret_val = hw->phy.ops.write_reg(hw, HV_KMRN_MODE_CTRL, data);
return ret_val;
}
/**
* e1000_hv_phy_workarounds_ich8lan - A series of PHY workarounds to be
* done after every PHY reset.
* @hw: pointer to the HW structure
**/
static s32 e1000_hv_phy_workarounds_ich8lan(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u16 phy_data;
DEBUGFUNC("e1000_hv_phy_workarounds_ich8lan");
if (hw->mac.type != e1000_pchlan)
return E1000_SUCCESS;
/* Set MDIO slow mode before any other MDIO access */
if (hw->phy.type == e1000_phy_82577) {
ret_val = e1000_set_mdio_slow_mode_hv(hw);
if (ret_val)
return ret_val;
}
if (((hw->phy.type == e1000_phy_82577) &&
((hw->phy.revision == 1) || (hw->phy.revision == 2))) ||
((hw->phy.type == e1000_phy_82578) && (hw->phy.revision == 1))) {
/* Disable generation of early preamble */
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(769, 25), 0x4431);
if (ret_val)
return ret_val;
/* Preamble tuning for SSC */
ret_val = hw->phy.ops.write_reg(hw, HV_KMRN_FIFO_CTRLSTA,
0xA204);
if (ret_val)
return ret_val;
}
if (hw->phy.type == e1000_phy_82578) {
/* Return registers to default by doing a soft reset then
* writing 0x3140 to the control register.
*/
if (hw->phy.revision < 2) {
e1000_phy_sw_reset_generic(hw);
ret_val = hw->phy.ops.write_reg(hw, PHY_CONTROL,
0x3140);
}
}
/* Select page 0 */
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
hw->phy.addr = 1;
ret_val = e1000_write_phy_reg_mdic(hw, IGP01E1000_PHY_PAGE_SELECT, 0);
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
/* Configure the K1 Si workaround during phy reset assuming there is
* link so that it disables K1 if link is at 1Gbps.
*/
ret_val = e1000_k1_gig_workaround_hv(hw, TRUE);
if (ret_val)
return ret_val;
/* Workaround for link disconnects on a busy hub in half duplex */
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.read_reg_locked(hw, BM_PORT_GEN_CFG, &phy_data);
if (ret_val)
goto release;
ret_val = hw->phy.ops.write_reg_locked(hw, BM_PORT_GEN_CFG,
phy_data & 0x00FF);
if (ret_val)
goto release;
/* Set the MSE threshold higher so the link stays up when noise is high */
ret_val = e1000_write_emi_reg_locked(hw, I82577_MSE_THRESHOLD, 0x0034);
release:
hw->phy.ops.release(hw);
return ret_val;
}
/**
* e1000_copy_rx_addrs_to_phy_ich8lan - Copy Rx addresses from MAC to PHY
* @hw: pointer to the HW structure
**/
void e1000_copy_rx_addrs_to_phy_ich8lan(struct e1000_hw *hw)
{
u32 mac_reg;
u16 i, phy_reg = 0;
s32 ret_val;
DEBUGFUNC("e1000_copy_rx_addrs_to_phy_ich8lan");
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return;
ret_val = e1000_enable_phy_wakeup_reg_access_bm(hw, &phy_reg);
if (ret_val)
goto release;
/* Copy both RAL/H (rar_entry_count) and SHRAL/H to PHY */
for (i = 0; i < (hw->mac.rar_entry_count); i++) {
mac_reg = E1000_READ_REG(hw, E1000_RAL(i));
hw->phy.ops.write_reg_page(hw, BM_RAR_L(i),
(u16)(mac_reg & 0xFFFF));
hw->phy.ops.write_reg_page(hw, BM_RAR_M(i),
(u16)((mac_reg >> 16) & 0xFFFF));
mac_reg = E1000_READ_REG(hw, E1000_RAH(i));
hw->phy.ops.write_reg_page(hw, BM_RAR_H(i),
(u16)(mac_reg & 0xFFFF));
hw->phy.ops.write_reg_page(hw, BM_RAR_CTRL(i),
(u16)((mac_reg & E1000_RAH_AV)
>> 16));
}
e1000_disable_phy_wakeup_reg_access_bm(hw, &phy_reg);
release:
hw->phy.ops.release(hw);
}
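/**
* e1000_calc_rx_da_crc - CRC-32 of an Rx destination (MAC) address
* @mac: 6-byte destination address
*
* Bitwise CRC-32 over the address using the reflected 802.3
* polynomial 0xEDB88320. The expression (crc & 1) * (-1) expands to
* an all-ones mask when the low bit of the CRC is set, selecting
* whether the polynomial is XORed in on each shift.
**/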
static u32 e1000_calc_rx_da_crc(u8 mac[])
{
u32 poly = 0xEDB88320; /* Polynomial for 802.3 CRC calculation */
u32 i, j, mask, crc;
DEBUGFUNC("e1000_calc_rx_da_crc");
crc = 0xffffffff;
for (i = 0; i < 6; i++) {
crc = crc ^ mac[i];
for (j = 8; j > 0; j--) {
mask = (crc & 1) * (-1);
crc = (crc >> 1) ^ (poly & mask);
}
}
return ~crc;
}
/**
* e1000_lv_jumbo_workaround_ich8lan - required for jumbo frame operation
* with 82579 PHY
* @hw: pointer to the HW structure
* @enable: flag to enable/disable workaround when enabling/disabling jumbos
**/
s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
{
s32 ret_val = E1000_SUCCESS;
u16 phy_reg, data;
u32 mac_reg;
u16 i;
DEBUGFUNC("e1000_lv_jumbo_workaround_ich8lan");
if (hw->mac.type < e1000_pch2lan)
return E1000_SUCCESS;
/* disable Rx path while enabling/disabling workaround */
hw->phy.ops.read_reg(hw, PHY_REG(769, 20), &phy_reg);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(769, 20),
phy_reg | (1 << 14));
if (ret_val)
return ret_val;
if (enable) {
/* Write Rx addresses (rar_entry_count for RAL/H, and
* SHRAL/H) and initial CRC values to the MAC
*/
for (i = 0; i < hw->mac.rar_entry_count; i++) {
u8 mac_addr[ETH_ADDR_LEN] = {0};
u32 addr_high, addr_low;
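/* RAL holds the low 32 bits of the address (first octet
* in the least significant byte); RAH holds the remaining
* 16 bits plus the Address Valid flag.
*/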
addr_high = E1000_READ_REG(hw, E1000_RAH(i));
if (!(addr_high & E1000_RAH_AV))
continue;
addr_low = E1000_READ_REG(hw, E1000_RAL(i));
mac_addr[0] = (addr_low & 0xFF);
mac_addr[1] = ((addr_low >> 8) & 0xFF);
mac_addr[2] = ((addr_low >> 16) & 0xFF);
mac_addr[3] = ((addr_low >> 24) & 0xFF);
mac_addr[4] = (addr_high & 0xFF);
mac_addr[5] = ((addr_high >> 8) & 0xFF);
E1000_WRITE_REG(hw, E1000_PCH_RAICC(i),
e1000_calc_rx_da_crc(mac_addr));
}
/* Write Rx addresses to the PHY */
e1000_copy_rx_addrs_to_phy_ich8lan(hw);
/* Enable jumbo frame workaround in the MAC */
mac_reg = E1000_READ_REG(hw, E1000_FFLT_DBG);
mac_reg &= ~(1 << 14);
mac_reg |= (7 << 15);
E1000_WRITE_REG(hw, E1000_FFLT_DBG, mac_reg);
mac_reg = E1000_READ_REG(hw, E1000_RCTL);
mac_reg |= E1000_RCTL_SECRC;
E1000_WRITE_REG(hw, E1000_RCTL, mac_reg);
ret_val = e1000_read_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_CTRL_OFFSET,
&data);
if (ret_val)
return ret_val;
ret_val = e1000_write_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_CTRL_OFFSET,
data | (1 << 0));
if (ret_val)
return ret_val;
ret_val = e1000_read_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_HD_CTRL,
&data);
if (ret_val)
return ret_val;
data &= ~(0xF << 8);
data |= (0xB << 8);
ret_val = e1000_write_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_HD_CTRL,
data);
if (ret_val)
return ret_val;
/* Enable jumbo frame workaround in the PHY */
hw->phy.ops.read_reg(hw, PHY_REG(769, 23), &data);
data &= ~(0x7F << 5);
data |= (0x37 << 5);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(769, 23), data);
if (ret_val)
return ret_val;
hw->phy.ops.read_reg(hw, PHY_REG(769, 16), &data);
data &= ~(1 << 13);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(769, 16), data);
if (ret_val)
return ret_val;
hw->phy.ops.read_reg(hw, PHY_REG(776, 20), &data);
data &= ~(0x3FF << 2);
data |= (E1000_TX_PTR_GAP << 2);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(776, 20), data);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(776, 23), 0xF100);
if (ret_val)
return ret_val;
hw->phy.ops.read_reg(hw, HV_PM_CTRL, &data);
ret_val = hw->phy.ops.write_reg(hw, HV_PM_CTRL, data |
(1 << 10));
if (ret_val)
return ret_val;
} else {
/* Write MAC register values back to h/w defaults */
mac_reg = E1000_READ_REG(hw, E1000_FFLT_DBG);
mac_reg &= ~(0xF << 14);
E1000_WRITE_REG(hw, E1000_FFLT_DBG, mac_reg);
mac_reg = E1000_READ_REG(hw, E1000_RCTL);
mac_reg &= ~E1000_RCTL_SECRC;
E1000_WRITE_REG(hw, E1000_RCTL, mac_reg);
ret_val = e1000_read_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_CTRL_OFFSET,
&data);
if (ret_val)
return ret_val;
ret_val = e1000_write_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_CTRL_OFFSET,
data & ~(1 << 0));
if (ret_val)
return ret_val;
ret_val = e1000_read_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_HD_CTRL,
&data);
if (ret_val)
return ret_val;
data &= ~(0xF << 8);
data |= (0xB << 8);
ret_val = e1000_write_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_HD_CTRL,
data);
if (ret_val)
return ret_val;
/* Write PHY register values back to h/w defaults */
hw->phy.ops.read_reg(hw, PHY_REG(769, 23), &data);
data &= ~(0x7F << 5);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(769, 23), data);
if (ret_val)
return ret_val;
hw->phy.ops.read_reg(hw, PHY_REG(769, 16), &data);
data |= (1 << 13);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(769, 16), data);
if (ret_val)
return ret_val;
hw->phy.ops.read_reg(hw, PHY_REG(776, 20), &data);
data &= ~(0x3FF << 2);
data |= (0x8 << 2);
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(776, 20), data);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.write_reg(hw, PHY_REG(776, 23), 0x7E00);
if (ret_val)
return ret_val;
hw->phy.ops.read_reg(hw, HV_PM_CTRL, &data);
ret_val = hw->phy.ops.write_reg(hw, HV_PM_CTRL, data &
~(1 << 10));
if (ret_val)
return ret_val;
}
/* re-enable Rx path after enabling/disabling workaround */
return hw->phy.ops.write_reg(hw, PHY_REG(769, 20), phy_reg &
~(1 << 14));
}
/**
* e1000_lv_phy_workarounds_ich8lan - A series of PHY workarounds to be
* done after every PHY reset.
* @hw: pointer to the HW structure
**/
static s32 e1000_lv_phy_workarounds_ich8lan(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
DEBUGFUNC("e1000_lv_phy_workarounds_ich8lan");
if (hw->mac.type != e1000_pch2lan)
return E1000_SUCCESS;
/* Set MDIO slow mode before any other MDIO access */
ret_val = e1000_set_mdio_slow_mode_hv(hw);
if (ret_val)
return ret_val;
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
/* Set the MSE threshold higher so the link stays up when noise is high */
ret_val = e1000_write_emi_reg_locked(hw, I82579_MSE_THRESHOLD, 0x0034);
if (ret_val)
goto release;
/* Drop link after the MSE threshold has been reached 5 times */
ret_val = e1000_write_emi_reg_locked(hw, I82579_MSE_LINK_DOWN, 0x0005);
release:
hw->phy.ops.release(hw);
return ret_val;
}
/**
* e1000_k1_workaround_lv - K1 Si workaround
* @hw: pointer to the HW structure
*
* Workaround to set the K1 beacon duration for 82579 parts at 10Mbps.
* Disables K1 for 1000 and 100 speeds.
**/
static s32 e1000_k1_workaround_lv(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u16 status_reg = 0;
DEBUGFUNC("e1000_k1_workaround_lv");
if (hw->mac.type != e1000_pch2lan)
return E1000_SUCCESS;
/* Set K1 beacon duration based on 10Mbps speed */
ret_val = hw->phy.ops.read_reg(hw, HV_M_STATUS, &status_reg);
if (ret_val)
return ret_val;
if ((status_reg & (HV_M_STATUS_LINK_UP | HV_M_STATUS_AUTONEG_COMPLETE))
== (HV_M_STATUS_LINK_UP | HV_M_STATUS_AUTONEG_COMPLETE)) {
if (status_reg &
(HV_M_STATUS_SPEED_1000 | HV_M_STATUS_SPEED_100)) {
u16 pm_phy_reg;
/* LV 1G/100 packet drop issue workaround */
ret_val = hw->phy.ops.read_reg(hw, HV_PM_CTRL,
&pm_phy_reg);
if (ret_val)
return ret_val;
pm_phy_reg &= ~HV_PM_CTRL_K1_ENABLE;
ret_val = hw->phy.ops.write_reg(hw, HV_PM_CTRL,
pm_phy_reg);
if (ret_val)
return ret_val;
} else {
u32 mac_reg;
mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM4);
mac_reg &= ~E1000_FEXTNVM4_BEACON_DURATION_MASK;
mac_reg |= E1000_FEXTNVM4_BEACON_DURATION_16USEC;
E1000_WRITE_REG(hw, E1000_FEXTNVM4, mac_reg);
}
}
return ret_val;
}
/**
* e1000_gate_hw_phy_config_ich8lan - disable PHY config via hardware
* @hw: pointer to the HW structure
* @gate: boolean set to TRUE to gate, FALSE to ungate
*
* Gate/ungate the automatic PHY configuration via hardware; perform
* the configuration via software instead.
**/
static void e1000_gate_hw_phy_config_ich8lan(struct e1000_hw *hw, bool gate)
{
u32 extcnf_ctrl;
DEBUGFUNC("e1000_gate_hw_phy_config_ich8lan");
if (hw->mac.type < e1000_pch2lan)
return;
extcnf_ctrl = E1000_READ_REG(hw, E1000_EXTCNF_CTRL);
if (gate)
extcnf_ctrl |= E1000_EXTCNF_CTRL_GATE_PHY_CFG;
else
extcnf_ctrl &= ~E1000_EXTCNF_CTRL_GATE_PHY_CFG;
E1000_WRITE_REG(hw, E1000_EXTCNF_CTRL, extcnf_ctrl);
}
/**
* e1000_lan_init_done_ich8lan - Check for PHY config completion
* @hw: pointer to the HW structure
*
* Check the appropriate indication the MAC has finished configuring the
* PHY after a software reset.
**/
static void e1000_lan_init_done_ich8lan(struct e1000_hw *hw)
{
u32 data, loop = E1000_ICH8_LAN_INIT_TIMEOUT;
DEBUGFUNC("e1000_lan_init_done_ich8lan");
/* Wait for basic configuration to complete before proceeding */
do {
data = E1000_READ_REG(hw, E1000_STATUS);
data &= E1000_STATUS_LAN_INIT_DONE;
usec_delay(100);
} while ((!data) && --loop);
/* If basic configuration is still incomplete when the above
* loop count reaches 0, loading the configuration from NVM will
* leave the PHY in a bad state, possibly resulting in no link.
*/
if (loop == 0)
DEBUGOUT("LAN_INIT_DONE not set, increase timeout\n");
/* Clear the Init Done bit for the next init event */
data = E1000_READ_REG(hw, E1000_STATUS);
data &= ~E1000_STATUS_LAN_INIT_DONE;
E1000_WRITE_REG(hw, E1000_STATUS, data);
}
/**
* e1000_post_phy_reset_ich8lan - Perform steps required after a PHY reset
* @hw: pointer to the HW structure
**/
static s32 e1000_post_phy_reset_ich8lan(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u16 reg;
DEBUGFUNC("e1000_post_phy_reset_ich8lan");
if (hw->phy.ops.check_reset_block(hw))
return E1000_SUCCESS;
/* Allow time for h/w to get to quiescent state after reset */
msec_delay(10);
/* Perform any necessary post-reset workarounds */
switch (hw->mac.type) {
case e1000_pchlan:
ret_val = e1000_hv_phy_workarounds_ich8lan(hw);
if (ret_val)
return ret_val;
break;
case e1000_pch2lan:
ret_val = e1000_lv_phy_workarounds_ich8lan(hw);
if (ret_val)
return ret_val;
break;
default:
break;
}
/* Clear the host wakeup bit after lcd reset */
if (hw->mac.type >= e1000_pchlan) {
hw->phy.ops.read_reg(hw, BM_PORT_GEN_CFG, &reg);
reg &= ~BM_WUC_HOST_WU_BIT;
hw->phy.ops.write_reg(hw, BM_PORT_GEN_CFG, reg);
}
/* Configure the LCD with the extended configuration region in NVM */
ret_val = e1000_sw_lcd_config_ich8lan(hw);
if (ret_val)
return ret_val;
/* Configure the LCD with the OEM bits in NVM */
ret_val = e1000_oem_bits_config_ich8lan(hw, TRUE);
if (hw->mac.type == e1000_pch2lan) {
/* Ungate automatic PHY configuration on non-managed 82579 */
if (!(E1000_READ_REG(hw, E1000_FWSM) &
E1000_ICH_FWSM_FW_VALID)) {
msec_delay(10);
e1000_gate_hw_phy_config_ich8lan(hw, FALSE);
}
/* Set EEE LPI Update Timer to 200usec */
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
ret_val = e1000_write_emi_reg_locked(hw,
I82579_LPI_UPDATE_TIMER,
0x1387);
hw->phy.ops.release(hw);
}
return ret_val;
}
/**
* e1000_phy_hw_reset_ich8lan - Performs a PHY reset
* @hw: pointer to the HW structure
*
* Resets the PHY
* This is a function pointer entry point called by drivers
* or other shared routines.
**/
static s32 e1000_phy_hw_reset_ich8lan(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
DEBUGFUNC("e1000_phy_hw_reset_ich8lan");
/* Gate automatic PHY configuration by hardware on non-managed 82579 */
if ((hw->mac.type == e1000_pch2lan) &&
!(E1000_READ_REG(hw, E1000_FWSM) & E1000_ICH_FWSM_FW_VALID))
e1000_gate_hw_phy_config_ich8lan(hw, TRUE);
ret_val = e1000_phy_hw_reset_generic(hw);
if (ret_val)
return ret_val;
return e1000_post_phy_reset_ich8lan(hw);
}
/**
* e1000_set_lplu_state_pchlan - Set Low Power Link Up state
* @hw: pointer to the HW structure
* @active: TRUE to enable LPLU, FALSE to disable
*
* Sets the LPLU state according to the active flag. For PCH, if the OEM
* write bits are disabled in the NVM, writing the LPLU bits in the MAC will
* not set the PHY speed. This function will manually set the LPLU bit and
* restart auto-neg as hw would do. D3 and D0 LPLU will call the same function
* since it configures the same bit.
**/
static s32 e1000_set_lplu_state_pchlan(struct e1000_hw *hw, bool active)
{
s32 ret_val;
u16 oem_reg;
DEBUGFUNC("e1000_set_lplu_state_pchlan");
ret_val = hw->phy.ops.read_reg(hw, HV_OEM_BITS, &oem_reg);
if (ret_val)
return ret_val;
if (active)
oem_reg |= HV_OEM_BITS_LPLU;
else
oem_reg &= ~HV_OEM_BITS_LPLU;
if (!hw->phy.ops.check_reset_block(hw))
oem_reg |= HV_OEM_BITS_RESTART_AN;
return hw->phy.ops.write_reg(hw, HV_OEM_BITS, oem_reg);
}
/**
* e1000_set_d0_lplu_state_ich8lan - Set Low Power Linkup D0 state
* @hw: pointer to the HW structure
* @active: TRUE to enable LPLU, FALSE to disable
*
* Sets the LPLU D0 state according to the active flag. When
* activating LPLU this function also disables smart speed
* and vice versa. LPLU will not be activated unless the
* device's autonegotiation advertisement is limited to 10,
* 10/100, or 10/100/1000 Mbps at all duplexes.
* This is a function pointer entry point only called by
* PHY setup routines.
**/
static s32 e1000_set_d0_lplu_state_ich8lan(struct e1000_hw *hw, bool active)
{
struct e1000_phy_info *phy = &hw->phy;
u32 phy_ctrl;
s32 ret_val = E1000_SUCCESS;
u16 data;
DEBUGFUNC("e1000_set_d0_lplu_state_ich8lan");
if (phy->type == e1000_phy_ife)
return E1000_SUCCESS;
phy_ctrl = E1000_READ_REG(hw, E1000_PHY_CTRL);
if (active) {
phy_ctrl |= E1000_PHY_CTRL_D0A_LPLU;
E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
if (phy->type != e1000_phy_igp_3)
return E1000_SUCCESS;
/* Call gig speed drop workaround on LPLU before accessing
* any PHY registers
*/
if (hw->mac.type == e1000_ich8lan)
e1000_gig_downshift_workaround_ich8lan(hw);
/* When LPLU is enabled, we should disable SmartSpeed */
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
return ret_val;
data &= ~IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
return ret_val;
} else {
phy_ctrl &= ~E1000_PHY_CTRL_D0A_LPLU;
E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
if (phy->type != e1000_phy_igp_3)
return E1000_SUCCESS;
/* LPLU and SmartSpeed are mutually exclusive. LPLU is used
* during Dx states where the power conservation is most
* important. During driver activity we should enable
* SmartSpeed, so performance is maintained.
*/
if (phy->smart_speed == e1000_smart_speed_on) {
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
return ret_val;
data |= IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
return ret_val;
} else if (phy->smart_speed == e1000_smart_speed_off) {
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
return ret_val;
data &= ~IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
return ret_val;
}
}
return E1000_SUCCESS;
}
/**
* e1000_set_d3_lplu_state_ich8lan - Set Low Power Linkup D3 state
* @hw: pointer to the HW structure
* @active: TRUE to enable LPLU, FALSE to disable
*
* Sets the LPLU D3 state according to the active flag. When
* activating LPLU this function also disables smart speed
* and vice versa. LPLU will not be activated unless the
* device's autonegotiation advertisement is limited to 10,
* 10/100, or 10/100/1000 Mbps at all duplexes.
* This is a function pointer entry point only called by
* PHY setup routines.
**/
static s32 e1000_set_d3_lplu_state_ich8lan(struct e1000_hw *hw, bool active)
{
struct e1000_phy_info *phy = &hw->phy;
u32 phy_ctrl;
s32 ret_val = E1000_SUCCESS;
u16 data;
DEBUGFUNC("e1000_set_d3_lplu_state_ich8lan");
phy_ctrl = E1000_READ_REG(hw, E1000_PHY_CTRL);
if (!active) {
phy_ctrl &= ~E1000_PHY_CTRL_NOND0A_LPLU;
E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
if (phy->type != e1000_phy_igp_3)
return E1000_SUCCESS;
/* LPLU and SmartSpeed are mutually exclusive. LPLU is used
* during Dx states where the power conservation is most
* important. During driver activity we should enable
* SmartSpeed, so performance is maintained.
*/
if (phy->smart_speed == e1000_smart_speed_on) {
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
return ret_val;
data |= IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
return ret_val;
} else if (phy->smart_speed == e1000_smart_speed_off) {
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
return ret_val;
data &= ~IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
if (ret_val)
return ret_val;
}
} else if ((phy->autoneg_advertised == E1000_ALL_SPEED_DUPLEX) ||
(phy->autoneg_advertised == E1000_ALL_NOT_GIG) ||
(phy->autoneg_advertised == E1000_ALL_10_SPEED)) {
phy_ctrl |= E1000_PHY_CTRL_NOND0A_LPLU;
E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
if (phy->type != e1000_phy_igp_3)
return E1000_SUCCESS;
/* Call gig speed drop workaround on LPLU before accessing
* any PHY registers
*/
if (hw->mac.type == e1000_ich8lan)
e1000_gig_downshift_workaround_ich8lan(hw);
/* When LPLU is enabled, we should disable SmartSpeed */
ret_val = phy->ops.read_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
&data);
if (ret_val)
return ret_val;
data &= ~IGP01E1000_PSCFR_SMART_SPEED;
ret_val = phy->ops.write_reg(hw,
IGP01E1000_PHY_PORT_CONFIG,
data);
}
return ret_val;
}
/**
* e1000_valid_nvm_bank_detect_ich8lan - finds out the valid bank 0 or 1
* @hw: pointer to the HW structure
* @bank: pointer to the variable that returns the active bank
*
* Reads signature byte from the NVM using the flash access registers.
* Word 0x13 bits 15:14 = 10b indicate a valid signature for that bank.
**/
static s32 e1000_valid_nvm_bank_detect_ich8lan(struct e1000_hw *hw, u32 *bank)
{
u32 eecd;
struct e1000_nvm_info *nvm = &hw->nvm;
u32 bank1_offset = nvm->flash_bank_size * sizeof(u16);
u32 act_offset = E1000_ICH_NVM_SIG_WORD * 2 + 1;
u32 nvm_dword = 0;
u8 sig_byte = 0;
s32 ret_val;
DEBUGFUNC("e1000_valid_nvm_bank_detect_ich8lan");
switch (hw->mac.type) {
case e1000_pch_spt:
bank1_offset = nvm->flash_bank_size;
act_offset = E1000_ICH_NVM_SIG_WORD;
/* set bank to 0 in case flash read fails */
*bank = 0;
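/* Each signature check reads a whole dword; the signature
* byte is the high byte of the 16-bit signature word, i.e.
* bits 15:8 of the dword read.
*/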
/* Check bank 0 */
ret_val = e1000_read_flash_dword_ich8lan(hw, act_offset,
&nvm_dword);
if (ret_val)
return ret_val;
sig_byte = (u8)((nvm_dword & 0xFF00) >> 8);
if ((sig_byte & E1000_ICH_NVM_VALID_SIG_MASK) ==
E1000_ICH_NVM_SIG_VALUE) {
*bank = 0;
return E1000_SUCCESS;
}
/* Check bank 1 */
ret_val = e1000_read_flash_dword_ich8lan(hw, act_offset +
bank1_offset,
&nvm_dword);
if (ret_val)
return ret_val;
sig_byte = (u8)((nvm_dword & 0xFF00) >> 8);
if ((sig_byte & E1000_ICH_NVM_VALID_SIG_MASK) ==
E1000_ICH_NVM_SIG_VALUE) {
*bank = 1;
return E1000_SUCCESS;
}
DEBUGOUT("ERROR: No valid NVM bank present\n");
return -E1000_ERR_NVM;
case e1000_ich8lan:
case e1000_ich9lan:
eecd = E1000_READ_REG(hw, E1000_EECD);
if ((eecd & E1000_EECD_SEC1VAL_VALID_MASK) ==
E1000_EECD_SEC1VAL_VALID_MASK) {
if (eecd & E1000_EECD_SEC1VAL)
*bank = 1;
else
*bank = 0;
return E1000_SUCCESS;
}
DEBUGOUT("Unable to determine valid NVM bank via EEC - reading flash signature\n");
/* fall-thru */
default:
/* set bank to 0 in case flash read fails */
*bank = 0;
/* Check bank 0 */
ret_val = e1000_read_flash_byte_ich8lan(hw, act_offset,
&sig_byte);
if (ret_val)
return ret_val;
if ((sig_byte & E1000_ICH_NVM_VALID_SIG_MASK) ==
E1000_ICH_NVM_SIG_VALUE) {
*bank = 0;
return E1000_SUCCESS;
}
/* Check bank 1 */
ret_val = e1000_read_flash_byte_ich8lan(hw, act_offset +
bank1_offset,
&sig_byte);
if (ret_val)
return ret_val;
if ((sig_byte & E1000_ICH_NVM_VALID_SIG_MASK) ==
E1000_ICH_NVM_SIG_VALUE) {
*bank = 1;
return E1000_SUCCESS;
}
DEBUGOUT("ERROR: No valid NVM bank present\n");
return -E1000_ERR_NVM;
}
}
/**
* e1000_read_nvm_spt - NVM access for SPT
* @hw: pointer to the HW structure
* @offset: The offset (in words) of the word(s) to read.
* @words: Size of data to read in words.
* @data: pointer to the word(s) to read at offset.
*
* Reads a word(s) from the NVM
**/
static s32 e1000_read_nvm_spt(struct e1000_hw *hw, u16 offset, u16 words,
u16 *data)
{
struct e1000_nvm_info *nvm = &hw->nvm;
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 act_offset;
s32 ret_val = E1000_SUCCESS;
u32 bank = 0;
u32 dword = 0;
u16 offset_to_read;
u16 i;
DEBUGFUNC("e1000_read_nvm_spt");
if ((offset >= nvm->word_size) || (words > nvm->word_size - offset) ||
(words == 0)) {
DEBUGOUT("nvm parameter(s) out of bounds\n");
ret_val = -E1000_ERR_NVM;
goto out;
}
nvm->ops.acquire(hw);
ret_val = e1000_valid_nvm_bank_detect_ich8lan(hw, &bank);
if (ret_val != E1000_SUCCESS) {
DEBUGOUT("Could not detect valid bank, assuming bank 0\n");
bank = 0;
}
act_offset = (bank) ? nvm->flash_bank_size : 0;
act_offset += offset;
ret_val = E1000_SUCCESS;
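/* SPT flash is only dword-accessible: each 32-bit read returns
* two adjacent 16-bit NVM words, so odd offsets are rounded down
* and the requested half is selected by the offset's parity.
*/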
for (i = 0; i < words; i += 2) {
if (words - i == 1) {
if (dev_spec->shadow_ram[offset+i].modified) {
data[i] = dev_spec->shadow_ram[offset+i].value;
} else {
offset_to_read = act_offset + i -
((act_offset + i) % 2);
ret_val =
e1000_read_flash_dword_ich8lan(hw,
offset_to_read,
&dword);
if (ret_val)
break;
if ((act_offset + i) % 2 == 0)
data[i] = (u16)(dword & 0xFFFF);
else
data[i] = (u16)((dword >> 16) & 0xFFFF);
}
} else {
offset_to_read = act_offset + i;
if (!(dev_spec->shadow_ram[offset+i].modified) ||
!(dev_spec->shadow_ram[offset+i+1].modified)) {
ret_val =
e1000_read_flash_dword_ich8lan(hw,
offset_to_read,
&dword);
if (ret_val)
break;
}
if (dev_spec->shadow_ram[offset+i].modified)
data[i] = dev_spec->shadow_ram[offset+i].value;
else
data[i] = (u16) (dword & 0xFFFF);
if (dev_spec->shadow_ram[offset+i+1].modified)
data[i+1] =
dev_spec->shadow_ram[offset+i+1].value;
else
data[i+1] = (u16) (dword >> 16 & 0xFFFF);
}
}
nvm->ops.release(hw);
out:
if (ret_val)
DEBUGOUT1("NVM read error: %d\n", ret_val);
return ret_val;
}
/**
* e1000_read_nvm_ich8lan - Read word(s) from the NVM
* @hw: pointer to the HW structure
* @offset: The offset (in words) of the word(s) to read.
* @words: Size of data to read in words
* @data: Pointer to the word(s) to read at offset.
*
* Reads a word(s) from the NVM using the flash access registers.
**/
static s32 e1000_read_nvm_ich8lan(struct e1000_hw *hw, u16 offset, u16 words,
u16 *data)
{
struct e1000_nvm_info *nvm = &hw->nvm;
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 act_offset;
s32 ret_val = E1000_SUCCESS;
u32 bank = 0;
u16 i, word;
DEBUGFUNC("e1000_read_nvm_ich8lan");
if ((offset >= nvm->word_size) || (words > nvm->word_size - offset) ||
(words == 0)) {
DEBUGOUT("nvm parameter(s) out of bounds\n");
ret_val = -E1000_ERR_NVM;
goto out;
}
nvm->ops.acquire(hw);
ret_val = e1000_valid_nvm_bank_detect_ich8lan(hw, &bank);
if (ret_val != E1000_SUCCESS) {
DEBUGOUT("Could not detect valid bank, assuming bank 0\n");
bank = 0;
}
act_offset = (bank) ? nvm->flash_bank_size : 0;
act_offset += offset;
ret_val = E1000_SUCCESS;
for (i = 0; i < words; i++) {
if (dev_spec->shadow_ram[offset+i].modified) {
data[i] = dev_spec->shadow_ram[offset+i].value;
} else {
ret_val = e1000_read_flash_word_ich8lan(hw,
act_offset + i,
&word);
if (ret_val)
break;
data[i] = word;
}
}
nvm->ops.release(hw);
out:
if (ret_val)
DEBUGOUT1("NVM read error: %d\n", ret_val);
return ret_val;
}
/**
* e1000_flash_cycle_init_ich8lan - Initialize flash
* @hw: pointer to the HW structure
*
* This function does initial flash setup so that a new read/write/erase cycle
* can be started.
**/
static s32 e1000_flash_cycle_init_ich8lan(struct e1000_hw *hw)
{
union ich8_hws_flash_status hsfsts;
s32 ret_val = -E1000_ERR_NVM;
DEBUGFUNC("e1000_flash_cycle_init_ich8lan");
hsfsts.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFSTS);
/* Check if the flash descriptor is valid */
if (!hsfsts.hsf_status.fldesvalid) {
DEBUGOUT("Flash descriptor invalid. SW Sequencing must be used.\n");
return -E1000_ERR_NVM;
}
/* Clear FCERR and DAEL in hw status by writing 1 */
hsfsts.hsf_status.flcerr = 1;
hsfsts.hsf_status.dael = 1;
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsfsts.regval & 0xFFFF);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFSTS, hsfsts.regval);
/* Either a hardware SPI cycle-in-progress bit must be
* available to check before starting a new cycle, or the
* FDONE bit must read as 1 after a hardware reset so it can
* be used to tell whether a cycle is in progress or has
* completed.
*/
if (!hsfsts.hsf_status.flcinprog) {
/* There is no cycle running at present,
* so we can start a cycle.
* Begin by setting Flash Cycle Done.
*/
hsfsts.hsf_status.flcdone = 1;
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsfsts.regval & 0xFFFF);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFSTS,
hsfsts.regval);
ret_val = E1000_SUCCESS;
} else {
s32 i;
/* Otherwise poll for some time so the current
* cycle has a chance to end before giving up.
*/
for (i = 0; i < ICH_FLASH_READ_COMMAND_TIMEOUT; i++) {
hsfsts.regval = E1000_READ_FLASH_REG16(hw,
ICH_FLASH_HSFSTS);
if (!hsfsts.hsf_status.flcinprog) {
ret_val = E1000_SUCCESS;
break;
}
usec_delay(1);
}
if (ret_val == E1000_SUCCESS) {
/* The previous cycle has ended, so set the
* Flash Cycle Done bit.
*/
hsfsts.hsf_status.flcdone = 1;
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsfsts.regval & 0xFFFF);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFSTS,
hsfsts.regval);
} else {
DEBUGOUT("Flash controller busy, cannot get access\n");
}
}
return ret_val;
}
/**
* e1000_flash_cycle_ich8lan - Starts flash cycle (read/write/erase)
* @hw: pointer to the HW structure
* @timeout: maximum time to wait for completion
*
* This function starts a flash cycle and waits for its completion.
**/
static s32 e1000_flash_cycle_ich8lan(struct e1000_hw *hw, u32 timeout)
{
union ich8_hws_flash_ctrl hsflctl;
union ich8_hws_flash_status hsfsts;
u32 i = 0;
DEBUGFUNC("e1000_flash_cycle_ich8lan");
/* Start a cycle by writing 1 in Flash Cycle Go in Hw Flash Control */
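/* On SPT the 16-bit flash control fields are accessed through
* the upper half of the 32-bit HSFSTS mapping, hence the 16-bit
* shifts around the read-modify-write below.
*/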
if (hw->mac.type == e1000_pch_spt)
hsflctl.regval = E1000_READ_FLASH_REG(hw, ICH_FLASH_HSFSTS)>>16;
else
hsflctl.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL);
hsflctl.hsf_ctrl.flcgo = 1;
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsflctl.regval << 16);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL, hsflctl.regval);
/* wait till FDONE bit is set to 1 */
do {
hsfsts.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFSTS);
if (hsfsts.hsf_status.flcdone)
break;
usec_delay(1);
} while (i++ < timeout);
if (hsfsts.hsf_status.flcdone && !hsfsts.hsf_status.flcerr)
return E1000_SUCCESS;
return -E1000_ERR_NVM;
}
/**
* e1000_read_flash_dword_ich8lan - Read dword from flash
* @hw: pointer to the HW structure
* @offset: offset to data location
* @data: pointer to the location for storing the data
*
* Reads the flash dword at offset into data. Offset is converted
* to bytes before read.
**/
static s32 e1000_read_flash_dword_ich8lan(struct e1000_hw *hw, u32 offset,
u32 *data)
{
DEBUGFUNC("e1000_read_flash_dword_ich8lan");
if (!data)
return -E1000_ERR_NVM;
/* Must convert word offset into bytes. */
offset <<= 1;
return e1000_read_flash_data32_ich8lan(hw, offset, data);
}
/**
* e1000_read_flash_word_ich8lan - Read word from flash
* @hw: pointer to the HW structure
* @offset: offset to data location
* @data: pointer to the location for storing the data
*
* Reads the flash word at offset into data. Offset is converted
* to bytes before read.
**/
static s32 e1000_read_flash_word_ich8lan(struct e1000_hw *hw, u32 offset,
u16 *data)
{
DEBUGFUNC("e1000_read_flash_word_ich8lan");
if (!data)
return -E1000_ERR_NVM;
/* Must convert offset into bytes. */
offset <<= 1;
return e1000_read_flash_data_ich8lan(hw, offset, 2, data);
}
/**
* e1000_read_flash_byte_ich8lan - Read byte from flash
* @hw: pointer to the HW structure
* @offset: The offset of the byte to read.
* @data: Pointer to a byte to store the value read.
*
* Reads a single byte from the NVM using the flash access registers.
**/
static s32 e1000_read_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset,
u8 *data)
{
s32 ret_val;
u16 word = 0;
/* In SPT, only 32-bit access is supported,
* so this function should not be called.
*/
if (hw->mac.type == e1000_pch_spt)
return -E1000_ERR_NVM;
ret_val = e1000_read_flash_data_ich8lan(hw, offset, 1, &word);
if (ret_val)
return ret_val;
*data = (u8)word;
return E1000_SUCCESS;
}
/**
* e1000_read_flash_data_ich8lan - Read byte or word from NVM
* @hw: pointer to the HW structure
* @offset: The offset (in bytes) of the byte or word to read.
* @size: Size of data to read, 1=byte 2=word
* @data: Pointer to the word to store the value read.
*
* Reads a byte or word from the NVM using the flash access registers.
**/
static s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset,
u8 size, u16 *data)
{
union ich8_hws_flash_status hsfsts;
union ich8_hws_flash_ctrl hsflctl;
u32 flash_linear_addr;
u32 flash_data = 0;
s32 ret_val = -E1000_ERR_NVM;
u8 count = 0;
DEBUGFUNC("e1000_read_flash_data_ich8lan");
if (size < 1 || size > 2 || offset > ICH_FLASH_LINEAR_ADDR_MASK)
return -E1000_ERR_NVM;
flash_linear_addr = ((ICH_FLASH_LINEAR_ADDR_MASK & offset) +
hw->nvm.flash_base_addr);
do {
usec_delay(1);
/* Steps */
ret_val = e1000_flash_cycle_init_ich8lan(hw);
if (ret_val != E1000_SUCCESS)
break;
hsflctl.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL);
/* 0b/1b corresponds to 1 or 2 byte size, respectively. */
hsflctl.hsf_ctrl.fldbcount = size - 1;
hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_READ;
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL, hsflctl.regval);
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FADDR, flash_linear_addr);
ret_val = e1000_flash_cycle_ich8lan(hw,
ICH_FLASH_READ_COMMAND_TIMEOUT);
/* If FCERR is set, clear it and retry the whole
* sequence a few more times; otherwise read out the
* Flash Data0 register, least significant byte first.
*/
if (ret_val == E1000_SUCCESS) {
flash_data = E1000_READ_FLASH_REG(hw, ICH_FLASH_FDATA0);
if (size == 1)
*data = (u8)(flash_data & 0x000000FF);
else if (size == 2)
*data = (u16)(flash_data & 0x0000FFFF);
break;
} else {
/* If we've gotten here, then things are probably
* completely hosed, but if the error condition is
* detected, it won't hurt to give it another try...
* ICH_FLASH_CYCLE_REPEAT_COUNT times.
*/
hsfsts.regval = E1000_READ_FLASH_REG16(hw,
ICH_FLASH_HSFSTS);
if (hsfsts.hsf_status.flcerr) {
/* Repeat for some time before giving up. */
continue;
} else if (!hsfsts.hsf_status.flcdone) {
DEBUGOUT("Timeout error - flash cycle did not complete.\n");
break;
}
}
} while (count++ < ICH_FLASH_CYCLE_REPEAT_COUNT);
return ret_val;
}
/**
* e1000_read_flash_data32_ich8lan - Read dword from NVM
* @hw: pointer to the HW structure
* @offset: The offset (in bytes) of the dword to read.
* @data: Pointer to the dword to store the value read.
*
* Reads a dword from the NVM using the flash access registers.
**/
static s32 e1000_read_flash_data32_ich8lan(struct e1000_hw *hw, u32 offset,
u32 *data)
{
union ich8_hws_flash_status hsfsts;
union ich8_hws_flash_ctrl hsflctl;
u32 flash_linear_addr;
s32 ret_val = -E1000_ERR_NVM;
u8 count = 0;
DEBUGFUNC("e1000_read_flash_data_ich8lan");
if (offset > ICH_FLASH_LINEAR_ADDR_MASK ||
hw->mac.type != e1000_pch_spt)
return -E1000_ERR_NVM;
flash_linear_addr = ((ICH_FLASH_LINEAR_ADDR_MASK & offset) +
hw->nvm.flash_base_addr);
do {
usec_delay(1);
/* Steps */
ret_val = e1000_flash_cycle_init_ich8lan(hw);
if (ret_val != E1000_SUCCESS)
break;
/* In SPT, this register is in LAN memory space, not flash.
* Therefore, only 32-bit access is supported.
*/
hsflctl.regval = E1000_READ_FLASH_REG(hw, ICH_FLASH_HSFSTS)>>16;
/* fldbcount encodes the transfer size minus one; a dword is 3. */
hsflctl.hsf_ctrl.fldbcount = sizeof(u32) - 1;
hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_READ;
/* In SPT, this register is in LAN memory space, not flash.
* Therefore, only 32-bit access is supported.
*/
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
(u32)hsflctl.regval << 16);
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FADDR, flash_linear_addr);
ret_val = e1000_flash_cycle_ich8lan(hw,
ICH_FLASH_READ_COMMAND_TIMEOUT);
/* If FCERR is set, clear it and retry the whole
* sequence a few more times; otherwise read out the
* Flash Data0 register, least significant byte first.
*/
if (ret_val == E1000_SUCCESS) {
*data = E1000_READ_FLASH_REG(hw, ICH_FLASH_FDATA0);
break;
} else {
/* If we've gotten here, then things are probably
* completely hosed, but if the error condition is
* detected, it won't hurt to give it another try...
* ICH_FLASH_CYCLE_REPEAT_COUNT times.
*/
hsfsts.regval = E1000_READ_FLASH_REG16(hw,
ICH_FLASH_HSFSTS);
if (hsfsts.hsf_status.flcerr) {
/* Repeat for some time before giving up. */
continue;
} else if (!hsfsts.hsf_status.flcdone) {
DEBUGOUT("Timeout error - flash cycle did not complete.\n");
break;
}
}
} while (count++ < ICH_FLASH_CYCLE_REPEAT_COUNT);
return ret_val;
}
/**
* e1000_write_nvm_ich8lan - Write word(s) to the NVM
* @hw: pointer to the HW structure
* @offset: The offset (in words) of the word(s) to write.
* @words: Size of data to write in words
* @data: Pointer to the word(s) to write at offset.
*
* Stages the word(s) in the shadow RAM; the flash itself is only
* updated when the NVM checksum is updated.
**/
static s32 e1000_write_nvm_ich8lan(struct e1000_hw *hw, u16 offset, u16 words,
u16 *data)
{
struct e1000_nvm_info *nvm = &hw->nvm;
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u16 i;
DEBUGFUNC("e1000_write_nvm_ich8lan");
if ((offset >= nvm->word_size) || (words > nvm->word_size - offset) ||
(words == 0)) {
DEBUGOUT("nvm parameter(s) out of bounds\n");
return -E1000_ERR_NVM;
}
nvm->ops.acquire(hw);
for (i = 0; i < words; i++) {
dev_spec->shadow_ram[offset+i].modified = TRUE;
dev_spec->shadow_ram[offset+i].value = data[i];
}
nvm->ops.release(hw);
return E1000_SUCCESS;
}
/**
* e1000_update_nvm_checksum_spt - Update the checksum for NVM
* @hw: pointer to the HW structure
*
* The NVM checksum is updated by calling the generic update_nvm_checksum,
* which writes the checksum to the shadow ram. The changes in the shadow
* ram are then committed to the EEPROM by processing each bank at a time
* checking for the modified bit and writing only the pending changes.
* After a successful commit, the shadow ram is cleared and is ready for
* future writes.
**/
static s32 e1000_update_nvm_checksum_spt(struct e1000_hw *hw)
{
struct e1000_nvm_info *nvm = &hw->nvm;
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 i, act_offset, new_bank_offset, old_bank_offset, bank;
s32 ret_val;
u32 dword = 0;
DEBUGFUNC("e1000_update_nvm_checksum_spt");
ret_val = e1000_update_nvm_checksum_generic(hw);
if (ret_val)
goto out;
if (nvm->type != e1000_nvm_flash_sw)
goto out;
nvm->ops.acquire(hw);
/* We're writing to the opposite bank so if we're on bank 1,
* write to bank 0 etc. We also need to erase the segment that
* is going to be written
*/
ret_val = e1000_valid_nvm_bank_detect_ich8lan(hw, &bank);
if (ret_val != E1000_SUCCESS) {
DEBUGOUT("Could not detect valid bank, assuming bank 0\n");
bank = 0;
}
if (bank == 0) {
new_bank_offset = nvm->flash_bank_size;
old_bank_offset = 0;
ret_val = e1000_erase_flash_bank_ich8lan(hw, 1);
if (ret_val)
goto release;
} else {
old_bank_offset = nvm->flash_bank_size;
new_bank_offset = 0;
ret_val = e1000_erase_flash_bank_ich8lan(hw, 0);
if (ret_val)
goto release;
}
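/* Each flash dword covers two shadow-RAM words, so walk the
* shadow RAM two words at a time, splicing any modified values
* into the dword read from the old bank.
*/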
for (i = 0; i < E1000_SHADOW_RAM_WORDS; i += 2) {
/* Determine whether to write the value stored
* in the other NVM bank or a modified value stored
* in the shadow RAM
*/
ret_val = e1000_read_flash_dword_ich8lan(hw,
i + old_bank_offset,
&dword);
if (dev_spec->shadow_ram[i].modified) {
dword &= 0xffff0000;
dword |= (dev_spec->shadow_ram[i].value & 0xffff);
}
if (dev_spec->shadow_ram[i + 1].modified) {
dword &= 0x0000ffff;
dword |= ((dev_spec->shadow_ram[i + 1].value & 0xffff)
<< 16);
}
if (ret_val)
break;
/* If the word is 0x13, then make sure the signature bits
* (15:14) are 11b until the commit has completed.
* This will allow us to write 10b which indicates the
* signature is valid. We want to do this after the write
* has completed so that we don't mark the segment valid
* while the write is still in progress
*/
if (i == E1000_ICH_NVM_SIG_WORD - 1)
dword |= E1000_ICH_NVM_SIG_MASK << 16;
usec_delay(100);
/* Write the data to the new bank. Offset is in words. */
act_offset = i + new_bank_offset;
ret_val = e1000_retry_write_flash_dword_ich8lan(hw, act_offset,
dword);
if (ret_val)
break;
}
/* Don't bother writing the segment valid bits if sector
* programming failed.
*/
if (ret_val) {
DEBUGOUT("Flash commit failed.\n");
goto release;
}
/* Finally, validate the new segment by setting bits 15:14
* to 10b in word 0x13. This can be done without an erase
* since these bits start as 11b and we only need to clear
* bit 14.
*/
/* Offset is in words, but we read a dword. */
act_offset = new_bank_offset + E1000_ICH_NVM_SIG_WORD - 1;
ret_val = e1000_read_flash_dword_ich8lan(hw, act_offset, &dword);
if (ret_val)
goto release;
dword &= 0xBFFFFFFF;
ret_val = e1000_retry_write_flash_dword_ich8lan(hw, act_offset, dword);
if (ret_val)
goto release;
/* And invalidate the previously valid segment by setting
* the high byte of its signature word (0x13) to 0. This can
* be done without an erase because flash erase sets all bits
* to 1's; we can always write 1's to 0's without an erase.
*/
/* Offset is in words, but we read a dword. */
act_offset = old_bank_offset + E1000_ICH_NVM_SIG_WORD - 1;
ret_val = e1000_read_flash_dword_ich8lan(hw, act_offset, &dword);
if (ret_val)
goto release;
dword &= 0x00FFFFFF;
ret_val = e1000_retry_write_flash_dword_ich8lan(hw, act_offset, dword);
if (ret_val)
goto release;
/* Great! Everything worked, we can now clear the cached entries. */
for (i = 0; i < E1000_SHADOW_RAM_WORDS; i++) {
dev_spec->shadow_ram[i].modified = FALSE;
dev_spec->shadow_ram[i].value = 0xFFFF;
}
release:
nvm->ops.release(hw);
/* Reload the EEPROM, or else modifications will not appear
* until after the next adapter reset.
*/
if (!ret_val) {
nvm->ops.reload(hw);
msec_delay(10);
}
out:
if (ret_val)
DEBUGOUT1("NVM update error: %d\n", ret_val);
return ret_val;
}
/**
* e1000_update_nvm_checksum_ich8lan - Update the checksum for NVM
* @hw: pointer to the HW structure
*
* The NVM checksum is updated by calling the generic update_nvm_checksum,
* which writes the checksum to the shadow ram. The changes in the shadow
* ram are then committed to the EEPROM by processing each bank at a time
* checking for the modified bit and writing only the pending changes.
* After a successful commit, the shadow ram is cleared and is ready for
* future writes.
**/
static s32 e1000_update_nvm_checksum_ich8lan(struct e1000_hw *hw)
{
struct e1000_nvm_info *nvm = &hw->nvm;
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 i, act_offset, new_bank_offset, old_bank_offset, bank;
s32 ret_val;
u16 data = 0;
DEBUGFUNC("e1000_update_nvm_checksum_ich8lan");
ret_val = e1000_update_nvm_checksum_generic(hw);
if (ret_val)
goto out;
if (nvm->type != e1000_nvm_flash_sw)
goto out;
nvm->ops.acquire(hw);
/* We're writing to the opposite bank so if we're on bank 1,
* write to bank 0 etc. We also need to erase the segment that
* is going to be written
*/
ret_val = e1000_valid_nvm_bank_detect_ich8lan(hw, &bank);
if (ret_val != E1000_SUCCESS) {
DEBUGOUT("Could not detect valid bank, assuming bank 0\n");
bank = 0;
}
if (bank == 0) {
new_bank_offset = nvm->flash_bank_size;
old_bank_offset = 0;
ret_val = e1000_erase_flash_bank_ich8lan(hw, 1);
if (ret_val)
goto release;
} else {
old_bank_offset = nvm->flash_bank_size;
new_bank_offset = 0;
ret_val = e1000_erase_flash_bank_ich8lan(hw, 0);
if (ret_val)
goto release;
}
for (i = 0; i < E1000_SHADOW_RAM_WORDS; i++) {
if (dev_spec->shadow_ram[i].modified) {
data = dev_spec->shadow_ram[i].value;
} else {
ret_val = e1000_read_flash_word_ich8lan(hw, i +
old_bank_offset,
&data);
if (ret_val)
break;
}
/* If the word is 0x13, then make sure the signature bits
* (15:14) are 11b until the commit has completed.
* This will allow us to write 10b which indicates the
* signature is valid. We want to do this after the write
* has completed so that we don't mark the segment valid
* while the write is still in progress
*/
if (i == E1000_ICH_NVM_SIG_WORD)
data |= E1000_ICH_NVM_SIG_MASK;
/* Convert offset to bytes. */
act_offset = (i + new_bank_offset) << 1;
usec_delay(100);
/* Write the bytes to the new bank. */
ret_val = e1000_retry_write_flash_byte_ich8lan(hw,
act_offset,
(u8)data);
if (ret_val)
break;
usec_delay(100);
ret_val = e1000_retry_write_flash_byte_ich8lan(hw,
act_offset + 1,
(u8)(data >> 8));
if (ret_val)
break;
}
/* Don't bother writing the segment valid bits if sector
* programming failed.
*/
if (ret_val) {
DEBUGOUT("Flash commit failed.\n");
goto release;
}
/* Finally, validate the new segment by setting bits 15:14
* to 10b in word 0x13. This can be done without an erase
* since these bits start as 11b and we only need to clear
* bit 14.
*/
act_offset = new_bank_offset + E1000_ICH_NVM_SIG_WORD;
ret_val = e1000_read_flash_word_ich8lan(hw, act_offset, &data);
if (ret_val)
goto release;
data &= 0xBFFF;
ret_val = e1000_retry_write_flash_byte_ich8lan(hw, act_offset * 2 + 1,
(u8)(data >> 8));
if (ret_val)
goto release;
/* And invalidate the previously valid segment by setting
* the high byte of its signature word (0x13) to 0. This can
* be done without an erase because flash erase sets all bits
* to 1's; we can always write 1's to 0's without an erase.
*/
act_offset = (old_bank_offset + E1000_ICH_NVM_SIG_WORD) * 2 + 1;
ret_val = e1000_retry_write_flash_byte_ich8lan(hw, act_offset, 0);
if (ret_val)
goto release;
/* Great! Everything worked, we can now clear the cached entries. */
for (i = 0; i < E1000_SHADOW_RAM_WORDS; i++) {
dev_spec->shadow_ram[i].modified = FALSE;
dev_spec->shadow_ram[i].value = 0xFFFF;
}
release:
nvm->ops.release(hw);
/* Reload the EEPROM, or else modifications will not appear
* until after the next adapter reset.
*/
if (!ret_val) {
nvm->ops.reload(hw);
msec_delay(10);
}
out:
if (ret_val)
DEBUGOUT1("NVM update error: %d\n", ret_val);
return ret_val;
}
/**
* e1000_validate_nvm_checksum_ich8lan - Validate EEPROM checksum
* @hw: pointer to the HW structure
*
* Check to see if the checksum needs to be fixed by reading bit 6 in word 0x19.
* If the bit is 0, the EEPROM had been modified, but the checksum was not
* calculated, in which case we need to calculate the checksum and set bit 6.
**/
static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
{
s32 ret_val;
u16 data;
u16 word;
u16 valid_csum_mask;
DEBUGFUNC("e1000_validate_nvm_checksum_ich8lan");
/* Read NVM and check Invalid Image CSUM bit. If this bit is 0,
* the checksum needs to be fixed. This bit is an indication that
* the NVM was prepared by OEM software and did not calculate
* the checksum...a likely scenario.
*/
switch (hw->mac.type) {
case e1000_pch_lpt:
case e1000_pch_spt:
word = NVM_COMPAT;
valid_csum_mask = NVM_COMPAT_VALID_CSUM;
break;
default:
word = NVM_FUTURE_INIT_WORD1;
valid_csum_mask = NVM_FUTURE_INIT_WORD1_VALID_CSUM;
break;
}
ret_val = hw->nvm.ops.read(hw, word, 1, &data);
if (ret_val)
return ret_val;
if (!(data & valid_csum_mask)) {
data |= valid_csum_mask;
ret_val = hw->nvm.ops.write(hw, word, 1, &data);
if (ret_val)
return ret_val;
ret_val = hw->nvm.ops.update(hw);
if (ret_val)
return ret_val;
}
return e1000_validate_nvm_checksum_generic(hw);
}
/**
* e1000_write_flash_data_ich8lan - Writes bytes to the NVM
* @hw: pointer to the HW structure
* @offset: The offset (in bytes) of the byte/word to write.
* @size: Size of data to write, 1=byte 2=word
* @data: The byte(s) to write to the NVM.
*
* Writes one/two bytes to the NVM using the flash access registers.
**/
static s32 e1000_write_flash_data_ich8lan(struct e1000_hw *hw, u32 offset,
u8 size, u16 data)
{
union ich8_hws_flash_status hsfsts;
union ich8_hws_flash_ctrl hsflctl;
u32 flash_linear_addr;
u32 flash_data = 0;
s32 ret_val;
u8 count = 0;
DEBUGFUNC("e1000_write_ich8_data");
if (hw->mac.type == e1000_pch_spt) {
if (size != 4 || offset > ICH_FLASH_LINEAR_ADDR_MASK)
return -E1000_ERR_NVM;
} else {
if (size < 1 || size > 2 || offset > ICH_FLASH_LINEAR_ADDR_MASK)
return -E1000_ERR_NVM;
}
flash_linear_addr = ((ICH_FLASH_LINEAR_ADDR_MASK & offset) +
hw->nvm.flash_base_addr);
do {
usec_delay(1);
/* Steps */
ret_val = e1000_flash_cycle_init_ich8lan(hw);
if (ret_val != E1000_SUCCESS)
break;
/* In SPT, this register is in LAN memory space, not
* flash. Therefore, only 32-bit access is supported.
*/
if (hw->mac.type == e1000_pch_spt)
hsflctl.regval =
E1000_READ_FLASH_REG(hw, ICH_FLASH_HSFSTS)>>16;
else
hsflctl.regval =
E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL);
/* fldbcount encodes size - 1: 0b=byte, 1b=word, 11b=dword (SPT). */
hsflctl.hsf_ctrl.fldbcount = size - 1;
hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_WRITE;
/* In SPT, this register is in LAN memory space,
* not flash. Therefore, only 32-bit access is
* supported.
*/
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsflctl.regval << 16);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL,
hsflctl.regval);
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FADDR, flash_linear_addr);
if (size == 1)
flash_data = (u32)data & 0x00FF;
else
flash_data = (u32)data;
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FDATA0, flash_data);
/* If FCERR is set, clear it and retry the whole sequence
* a few more times; otherwise we are done.
*/
ret_val =
e1000_flash_cycle_ich8lan(hw,
ICH_FLASH_WRITE_COMMAND_TIMEOUT);
if (ret_val == E1000_SUCCESS)
break;
/* If we're here, then things are most likely
* completely hosed, but if the error condition
* is detected, it won't hurt to give it another
* try...ICH_FLASH_CYCLE_REPEAT_COUNT times.
*/
hsfsts.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFSTS);
if (hsfsts.hsf_status.flcerr)
/* Repeat for some time before giving up. */
continue;
if (!hsfsts.hsf_status.flcdone) {
DEBUGOUT("Timeout error - flash cycle did not complete.\n");
break;
}
} while (count++ < ICH_FLASH_CYCLE_REPEAT_COUNT);
return ret_val;
}
/**
* e1000_write_flash_data32_ich8lan - Writes 4 bytes to the NVM
* @hw: pointer to the HW structure
* @offset: The offset (in bytes) of the dword to write.
* @data: The 4 bytes to write to the NVM.
*
* Writes a dword to the NVM using the flash access registers.
**/
static s32 e1000_write_flash_data32_ich8lan(struct e1000_hw *hw, u32 offset,
u32 data)
{
union ich8_hws_flash_status hsfsts;
union ich8_hws_flash_ctrl hsflctl;
u32 flash_linear_addr;
s32 ret_val;
u8 count = 0;
DEBUGFUNC("e1000_write_flash_data32_ich8lan");
if (hw->mac.type == e1000_pch_spt) {
if (offset > ICH_FLASH_LINEAR_ADDR_MASK)
return -E1000_ERR_NVM;
}
flash_linear_addr = ((ICH_FLASH_LINEAR_ADDR_MASK & offset) +
hw->nvm.flash_base_addr);
do {
usec_delay(1);
/* Steps */
ret_val = e1000_flash_cycle_init_ich8lan(hw);
if (ret_val != E1000_SUCCESS)
break;
/* In SPT, this register is in LAN memory space, not
* flash. Therefore, only 32-bit access is supported.
*/
if (hw->mac.type == e1000_pch_spt)
hsflctl.regval = E1000_READ_FLASH_REG(hw,
ICH_FLASH_HSFSTS)
>> 16;
else
hsflctl.regval = E1000_READ_FLASH_REG16(hw,
ICH_FLASH_HSFCTL);
hsflctl.hsf_ctrl.fldbcount = sizeof(u32) - 1;
hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_WRITE;
/* In SPT, this register is in LAN memory space,
* not flash. Therefore, only 32-bit access is
* supported.
*/
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsflctl.regval << 16);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL,
hsflctl.regval);
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FADDR, flash_linear_addr);
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FDATA0, data);
/* Check if FCERR is set to 1; if so, clear it and
* retry the whole sequence a few more times, else done.
*/
ret_val = e1000_flash_cycle_ich8lan(hw,
ICH_FLASH_WRITE_COMMAND_TIMEOUT);
if (ret_val == E1000_SUCCESS)
break;
/* If we're here, then things are most likely
* completely hosed, but if the error condition
* is detected, it won't hurt to give it another
* try...ICH_FLASH_CYCLE_REPEAT_COUNT times.
*/
hsfsts.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFSTS);
if (hsfsts.hsf_status.flcerr)
/* Repeat for some time before giving up. */
continue;
if (!hsfsts.hsf_status.flcdone) {
DEBUGOUT("Timeout error - flash cycle did not complete.\n");
break;
}
} while (count++ < ICH_FLASH_CYCLE_REPEAT_COUNT);
return ret_val;
}
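/*
 * A sketch of the SPT register packing assumed by the shifts above: on
 * SPT parts HSFSTS occupies the low 16 bits and HSFCTL the high 16 bits
 * of one 32-bit register in LAN memory space, so the driver reads the
 * dword and shifts right by 16 to get HSFCTL, and shifts left by 16 to
 * write it back. Standalone illustration with hypothetical names:
 */
#include <stdint.h>
static inline uint16_t
spt_extract_hsfctl(uint32_t packed)
{
	return (uint16_t)(packed >> 16);	/* high word = HSFCTL */
}
static inline uint32_t
spt_pack_hsfctl(uint16_t hsfctl)
{
	return (uint32_t)hsfctl << 16;		/* status half written as 0 */
}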
/**
* e1000_write_flash_byte_ich8lan - Write a single byte to NVM
* @hw: pointer to the HW structure
* @offset: The index of the byte to write.
* @data: The byte to write to the NVM.
*
* Writes a single byte to the NVM using the flash access registers.
**/
static s32 e1000_write_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset,
u8 data)
{
u16 word = (u16)data;
DEBUGFUNC("e1000_write_flash_byte_ich8lan");
return e1000_write_flash_data_ich8lan(hw, offset, 1, word);
}
/**
* e1000_retry_write_flash_dword_ich8lan - Writes a dword to NVM
* @hw: pointer to the HW structure
* @offset: The offset (in words) at which to write the dword.
* @dword: The dword to write to the NVM.
*
* Writes a single dword to the NVM using the flash access registers.
* Goes through a retry algorithm before giving up.
**/
static s32 e1000_retry_write_flash_dword_ich8lan(struct e1000_hw *hw,
u32 offset, u32 dword)
{
s32 ret_val;
u16 program_retries;
DEBUGFUNC("e1000_retry_write_flash_dword_ich8lan");
/* Must convert word offset into bytes. */
offset <<= 1;
ret_val = e1000_write_flash_data32_ich8lan(hw, offset, dword);
if (!ret_val)
return ret_val;
for (program_retries = 0; program_retries < 100; program_retries++) {
DEBUGOUT2("Retrying Byte %8.8X at offset %u\n", dword, offset);
usec_delay(100);
ret_val = e1000_write_flash_data32_ich8lan(hw, offset, dword);
if (ret_val == E1000_SUCCESS)
break;
}
if (program_retries == 100)
return -E1000_ERR_NVM;
return E1000_SUCCESS;
}
/**
* e1000_retry_write_flash_byte_ich8lan - Writes a single byte to NVM
* @hw: pointer to the HW structure
* @offset: The offset of the byte to write.
* @byte: The byte to write to the NVM.
*
* Writes a single byte to the NVM using the flash access registers.
* Goes through a retry algorithm before giving up.
**/
static s32 e1000_retry_write_flash_byte_ich8lan(struct e1000_hw *hw,
u32 offset, u8 byte)
{
s32 ret_val;
u16 program_retries;
DEBUGFUNC("e1000_retry_write_flash_byte_ich8lan");
ret_val = e1000_write_flash_byte_ich8lan(hw, offset, byte);
if (!ret_val)
return ret_val;
for (program_retries = 0; program_retries < 100; program_retries++) {
DEBUGOUT2("Retrying Byte %2.2X at offset %u\n", byte, offset);
usec_delay(100);
ret_val = e1000_write_flash_byte_ich8lan(hw, offset, byte);
if (ret_val == E1000_SUCCESS)
break;
}
if (program_retries == 100)
return -E1000_ERR_NVM;
return E1000_SUCCESS;
}
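/*
 * The two retry helpers above share one bounded-retry idiom; a standalone
 * sketch of it, with a hypothetical write_fn callback:
 */
#include <stdint.h>
typedef int32_t (*flash_write_fn)(void *ctx, uint32_t offset);
static int32_t
flash_write_with_retries(flash_write_fn write_fn, void *ctx,
    uint32_t offset, unsigned int max_retries)
{
	int32_t ret = write_fn(ctx, offset);	/* first attempt */
	unsigned int i;

	for (i = 0; ret != 0 && i < max_retries; i++)
		ret = write_fn(ctx, offset);	/* retry on failure */
	return ret;
}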
/**
* e1000_erase_flash_bank_ich8lan - Erase a bank (4k) from NVM
* @hw: pointer to the HW structure
* @bank: 0 for first bank, 1 for second bank, etc.
*
* Erases the bank specified. Each bank is a 4k block. Banks are 0 based.
* bank N is 4096 * N + flash_reg_addr.
**/
static s32 e1000_erase_flash_bank_ich8lan(struct e1000_hw *hw, u32 bank)
{
struct e1000_nvm_info *nvm = &hw->nvm;
union ich8_hws_flash_status hsfsts;
union ich8_hws_flash_ctrl hsflctl;
u32 flash_linear_addr;
/* bank size is in 16bit words - adjust to bytes */
u32 flash_bank_size = nvm->flash_bank_size * 2;
s32 ret_val;
s32 count = 0;
s32 j, iteration, sector_size;
DEBUGFUNC("e1000_erase_flash_bank_ich8lan");
hsfsts.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFSTS);
/* Determine HW Sector size: Read BERASE bits of hw flash status
* register
* 00: The Hw sector is 256 bytes, hence we need to erase 16
* consecutive sectors. The start index for the nth Hw sector
* can be calculated as = bank * 4096 + n * 256
* 01: The Hw sector is 4K bytes, hence we need to erase 1 sector.
* The start index for the nth Hw sector can be calculated
* as = bank * 4096
* 10: The Hw sector is 8K bytes, nth sector = bank * 8192
* (ich9 only, otherwise error condition)
* 11: The Hw sector is 64K bytes, nth sector = bank * 65536
*/
switch (hsfsts.hsf_status.berasesz) {
case 0:
/* Hw sector size 256 */
sector_size = ICH_FLASH_SEG_SIZE_256;
iteration = flash_bank_size / ICH_FLASH_SEG_SIZE_256;
break;
case 1:
sector_size = ICH_FLASH_SEG_SIZE_4K;
iteration = 1;
break;
case 2:
sector_size = ICH_FLASH_SEG_SIZE_8K;
iteration = 1;
break;
case 3:
sector_size = ICH_FLASH_SEG_SIZE_64K;
iteration = 1;
break;
default:
return -E1000_ERR_NVM;
}
/* Start with the base address, then add the sector offset. */
flash_linear_addr = hw->nvm.flash_base_addr;
flash_linear_addr += (bank) ? flash_bank_size : 0;
for (j = 0; j < iteration; j++) {
do {
u32 timeout = ICH_FLASH_ERASE_COMMAND_TIMEOUT;
/* Steps */
ret_val = e1000_flash_cycle_init_ich8lan(hw);
if (ret_val)
return ret_val;
/* Write a value 11 (block Erase) in Flash
* Cycle field in hw flash control
*/
if (hw->mac.type == e1000_pch_spt)
hsflctl.regval =
E1000_READ_FLASH_REG(hw,
ICH_FLASH_HSFSTS)>>16;
else
hsflctl.regval =
E1000_READ_FLASH_REG16(hw,
ICH_FLASH_HSFCTL);
hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_ERASE;
if (hw->mac.type == e1000_pch_spt)
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_HSFSTS,
hsflctl.regval << 16);
else
E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL,
hsflctl.regval);
/* Write the last 24 bits of an index within the
* block into Flash Linear address field in Flash
* Address.
*/
flash_linear_addr += (j * sector_size);
E1000_WRITE_FLASH_REG(hw, ICH_FLASH_FADDR,
flash_linear_addr);
ret_val = e1000_flash_cycle_ich8lan(hw, timeout);
if (ret_val == E1000_SUCCESS)
break;
/* Check if FCERR is set to 1. If so,
* clear it and retry the whole sequence
* a few more times, else done.
*/
hsfsts.regval = E1000_READ_FLASH_REG16(hw,
ICH_FLASH_HSFSTS);
if (hsfsts.hsf_status.flcerr)
/* repeat for some time before giving up */
continue;
else if (!hsfsts.hsf_status.flcdone)
return ret_val;
} while (++count < ICH_FLASH_CYCLE_REPEAT_COUNT);
}
return E1000_SUCCESS;
}
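/*
 * A standalone sketch of the BERASE decoding documented above: map the
 * 2-bit erase-block-size field to a sector size and the number of erase
 * iterations needed to cover a 4 KB bank region. The sizes mirror the
 * ICH_FLASH_SEG_SIZE_* values by meaning only; names are hypothetical.
 */
#include <stdint.h>
static int
berase_to_geometry(uint8_t berasesz, uint32_t bank_size,
    uint32_t *sector_size, uint32_t *iterations)
{
	switch (berasesz) {
	case 0: *sector_size = 256; break;		/* 16 erases per 4 KB */
	case 1: *sector_size = 4 * 1024; break;		/* one erase per bank */
	case 2: *sector_size = 8 * 1024; break;		/* ICH9 only */
	case 3: *sector_size = 64 * 1024; break;
	default: return -1;				/* reserved encoding */
	}
	*iterations = (berasesz == 0) ? bank_size / *sector_size : 1;
	return 0;
}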
/**
* e1000_valid_led_default_ich8lan - Set the default LED settings
* @hw: pointer to the HW structure
* @data: Pointer to the LED settings
*
* Reads the LED default settings from the NVM to data. If the NVM LED
* settings is all 0's or F's, set the LED default to a valid LED default
* setting.
**/
static s32 e1000_valid_led_default_ich8lan(struct e1000_hw *hw, u16 *data)
{
s32 ret_val;
DEBUGFUNC("e1000_valid_led_default_ich8lan");
ret_val = hw->nvm.ops.read(hw, NVM_ID_LED_SETTINGS, 1, data);
if (ret_val) {
DEBUGOUT("NVM Read Error\n");
return ret_val;
}
if (*data == ID_LED_RESERVED_0000 || *data == ID_LED_RESERVED_FFFF)
*data = ID_LED_DEFAULT_ICH8LAN;
return E1000_SUCCESS;
}
/**
* e1000_id_led_init_pchlan - store LED configurations
* @hw: pointer to the HW structure
*
* PCH does not control LEDs via the LEDCTL register, rather it uses
* the PHY LED configuration register.
*
* PCH also does not have an "always on" or "always off" mode which
* complicates the ID feature. Instead of using the "on" mode to indicate
* in ledctl_mode2 the LEDs to use for ID (see e1000_id_led_init_generic()),
* use "link_up" mode. The LEDs will still ID on request if there is no
* link based on logic in e1000_led_[on|off]_pchlan().
**/
static s32 e1000_id_led_init_pchlan(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
s32 ret_val;
const u32 ledctl_on = E1000_LEDCTL_MODE_LINK_UP;
const u32 ledctl_off = E1000_LEDCTL_MODE_LINK_UP | E1000_PHY_LED0_IVRT;
u16 data, i, temp, shift;
DEBUGFUNC("e1000_id_led_init_pchlan");
/* Get default ID LED modes */
ret_val = hw->nvm.ops.valid_led_default(hw, &data);
if (ret_val)
return ret_val;
mac->ledctl_default = E1000_READ_REG(hw, E1000_LEDCTL);
mac->ledctl_mode1 = mac->ledctl_default;
mac->ledctl_mode2 = mac->ledctl_default;
for (i = 0; i < 4; i++) {
temp = (data >> (i << 2)) & E1000_LEDCTL_LED0_MODE_MASK;
shift = (i * 5);
switch (temp) {
case ID_LED_ON1_DEF2:
case ID_LED_ON1_ON2:
case ID_LED_ON1_OFF2:
mac->ledctl_mode1 &= ~(E1000_PHY_LED0_MASK << shift);
mac->ledctl_mode1 |= (ledctl_on << shift);
break;
case ID_LED_OFF1_DEF2:
case ID_LED_OFF1_ON2:
case ID_LED_OFF1_OFF2:
mac->ledctl_mode1 &= ~(E1000_PHY_LED0_MASK << shift);
mac->ledctl_mode1 |= (ledctl_off << shift);
break;
default:
/* Do nothing */
break;
}
switch (temp) {
case ID_LED_DEF1_ON2:
case ID_LED_ON1_ON2:
case ID_LED_OFF1_ON2:
mac->ledctl_mode2 &= ~(E1000_PHY_LED0_MASK << shift);
mac->ledctl_mode2 |= (ledctl_on << shift);
break;
case ID_LED_DEF1_OFF2:
case ID_LED_ON1_OFF2:
case ID_LED_OFF1_OFF2:
mac->ledctl_mode2 &= ~(E1000_PHY_LED0_MASK << shift);
mac->ledctl_mode2 |= (ledctl_off << shift);
break;
default:
/* Do nothing */
break;
}
}
return E1000_SUCCESS;
}
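/*
 * A sketch of the bit layout assumed by the loop above: the NVM word
 * packs one 4-bit ID-LED mode per LED (hence data >> (i << 2)), while
 * the PHY LED configuration register packs one 5-bit field per LED
 * (hence shift = i * 5). Standalone, with hypothetical helper names:
 */
#include <stdint.h>
static inline uint16_t
nvm_id_led_mode(uint16_t nvm_word, unsigned int led)
{
	return (nvm_word >> (led << 2)) & 0xF;	/* 4 bits per LED in NVM */
}
static inline unsigned int
phy_led_field_shift(unsigned int led)
{
	return led * 5;		/* 5 bits per LED in the PHY register */
}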
/**
* e1000_get_bus_info_ich8lan - Get/Set the bus type and width
* @hw: pointer to the HW structure
*
* ICH8 uses the PCI Express bus, but does not contain a PCI Express Capability
- * register, so the the bus width is hard coded.
+ * register, so the bus width is hard coded.
**/
static s32 e1000_get_bus_info_ich8lan(struct e1000_hw *hw)
{
struct e1000_bus_info *bus = &hw->bus;
s32 ret_val;
DEBUGFUNC("e1000_get_bus_info_ich8lan");
ret_val = e1000_get_bus_info_pcie_generic(hw);
/* ICH devices are "PCI Express"-ish. They have
* a configuration space, but do not contain
* PCI Express Capability registers, so bus width
* must be hardcoded.
*/
if (bus->width == e1000_bus_width_unknown)
bus->width = e1000_bus_width_pcie_x1;
return ret_val;
}
/**
* e1000_reset_hw_ich8lan - Reset the hardware
* @hw: pointer to the HW structure
*
* Does a full reset of the hardware which includes a reset of the PHY and
* MAC.
**/
static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw)
{
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u16 kum_cfg;
u32 ctrl, reg;
s32 ret_val;
DEBUGFUNC("e1000_reset_hw_ich8lan");
/* Prevent the PCI-E bus from sticking if there is no TLP connection
* on the last TLP read/write transaction when MAC is reset.
*/
ret_val = e1000_disable_pcie_master_generic(hw);
if (ret_val)
DEBUGOUT("PCI-E Master disable polling has failed.\n");
DEBUGOUT("Masking off all interrupts\n");
E1000_WRITE_REG(hw, E1000_IMC, 0xffffffff);
/* Disable the Transmit and Receive units. Then delay to allow
* any pending transactions to complete before we hit the MAC
* with the global reset.
*/
E1000_WRITE_REG(hw, E1000_RCTL, 0);
E1000_WRITE_REG(hw, E1000_TCTL, E1000_TCTL_PSP);
E1000_WRITE_FLUSH(hw);
msec_delay(10);
/* Workaround for ICH8 bit corruption issue in FIFO memory */
if (hw->mac.type == e1000_ich8lan) {
/* Set Tx and Rx buffer allocation to 8k apiece. */
E1000_WRITE_REG(hw, E1000_PBA, E1000_PBA_8K);
/* Set Packet Buffer Size to 16k. */
E1000_WRITE_REG(hw, E1000_PBS, E1000_PBS_16K);
}
if (hw->mac.type == e1000_pchlan) {
/* Save the NVM K1 bit setting*/
ret_val = e1000_read_nvm(hw, E1000_NVM_K1_CONFIG, 1, &kum_cfg);
if (ret_val)
return ret_val;
if (kum_cfg & E1000_NVM_K1_ENABLE)
dev_spec->nvm_k1_enabled = TRUE;
else
dev_spec->nvm_k1_enabled = FALSE;
}
ctrl = E1000_READ_REG(hw, E1000_CTRL);
if (!hw->phy.ops.check_reset_block(hw)) {
/* Full-chip reset requires MAC and PHY reset at the same
* time to make sure the interface between MAC and the
* external PHY is reset.
*/
ctrl |= E1000_CTRL_PHY_RST;
/* Gate automatic PHY configuration by hardware on
* non-managed 82579
*/
if ((hw->mac.type == e1000_pch2lan) &&
!(E1000_READ_REG(hw, E1000_FWSM) & E1000_ICH_FWSM_FW_VALID))
e1000_gate_hw_phy_config_ich8lan(hw, TRUE);
}
ret_val = e1000_acquire_swflag_ich8lan(hw);
DEBUGOUT("Issuing a global reset to ich8lan\n");
E1000_WRITE_REG(hw, E1000_CTRL, (ctrl | E1000_CTRL_RST));
/* cannot issue a flush here because it hangs the hardware */
msec_delay(20);
/* Set Phy Config Counter to 50msec */
if (hw->mac.type == e1000_pch2lan) {
reg = E1000_READ_REG(hw, E1000_FEXTNVM3);
reg &= ~E1000_FEXTNVM3_PHY_CFG_COUNTER_MASK;
reg |= E1000_FEXTNVM3_PHY_CFG_COUNTER_50MSEC;
E1000_WRITE_REG(hw, E1000_FEXTNVM3, reg);
}
if (!ret_val)
E1000_MUTEX_UNLOCK(&hw->dev_spec.ich8lan.swflag_mutex);
if (ctrl & E1000_CTRL_PHY_RST) {
ret_val = hw->phy.ops.get_cfg_done(hw);
if (ret_val)
return ret_val;
ret_val = e1000_post_phy_reset_ich8lan(hw);
if (ret_val)
return ret_val;
}
/* For PCH, this write will make sure that any noise
* will be detected as a CRC error and be dropped rather than show up
* as a bad packet to the DMA engine.
*/
if (hw->mac.type == e1000_pchlan)
E1000_WRITE_REG(hw, E1000_CRC_OFFSET, 0x65656565);
E1000_WRITE_REG(hw, E1000_IMC, 0xffffffff);
E1000_READ_REG(hw, E1000_ICR);
reg = E1000_READ_REG(hw, E1000_KABGTXD);
reg |= E1000_KABGTXD_BGSQLBIAS;
E1000_WRITE_REG(hw, E1000_KABGTXD, reg);
return E1000_SUCCESS;
}
/**
* e1000_init_hw_ich8lan - Initialize the hardware
* @hw: pointer to the HW structure
*
* Prepares the hardware for transmit and receive by doing the following:
* - initialize hardware bits
* - initialize LED identification
* - setup receive address registers
* - setup flow control
* - setup transmit descriptors
* - clear statistics
**/
static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
u32 ctrl_ext, txdctl, snoop;
s32 ret_val;
u16 i;
DEBUGFUNC("e1000_init_hw_ich8lan");
e1000_initialize_hw_bits_ich8lan(hw);
/* Initialize identification LED */
ret_val = mac->ops.id_led_init(hw);
/* An error is not fatal and we should not stop init due to this */
if (ret_val)
DEBUGOUT("Error initializing identification LED\n");
/* Setup the receive address. */
e1000_init_rx_addrs_generic(hw, mac->rar_entry_count);
/* Zero out the Multicast HASH table */
DEBUGOUT("Zeroing the MTA\n");
for (i = 0; i < mac->mta_reg_count; i++)
E1000_WRITE_REG_ARRAY(hw, E1000_MTA, i, 0);
/* The 82578 Rx buffer will stall if wakeup is enabled in host and
* the ME. Disable wakeup by clearing the host wakeup bit.
* Reset the phy after disabling host wakeup to reset the Rx buffer.
*/
if (hw->phy.type == e1000_phy_82578) {
hw->phy.ops.read_reg(hw, BM_PORT_GEN_CFG, &i);
i &= ~BM_WUC_HOST_WU_BIT;
hw->phy.ops.write_reg(hw, BM_PORT_GEN_CFG, i);
ret_val = e1000_phy_hw_reset_ich8lan(hw);
if (ret_val)
return ret_val;
}
/* Setup link and flow control */
ret_val = mac->ops.setup_link(hw);
/* Set the transmit descriptor write-back policy for both queues */
txdctl = E1000_READ_REG(hw, E1000_TXDCTL(0));
txdctl = ((txdctl & ~E1000_TXDCTL_WTHRESH) |
E1000_TXDCTL_FULL_TX_DESC_WB);
txdctl = ((txdctl & ~E1000_TXDCTL_PTHRESH) |
E1000_TXDCTL_MAX_TX_DESC_PREFETCH);
E1000_WRITE_REG(hw, E1000_TXDCTL(0), txdctl);
txdctl = E1000_READ_REG(hw, E1000_TXDCTL(1));
txdctl = ((txdctl & ~E1000_TXDCTL_WTHRESH) |
E1000_TXDCTL_FULL_TX_DESC_WB);
txdctl = ((txdctl & ~E1000_TXDCTL_PTHRESH) |
E1000_TXDCTL_MAX_TX_DESC_PREFETCH);
E1000_WRITE_REG(hw, E1000_TXDCTL(1), txdctl);
/* ICH8 has opposite polarity of no_snoop bits.
* By default, we should use snoop behavior.
*/
if (mac->type == e1000_ich8lan)
snoop = PCIE_ICH8_SNOOP_ALL;
else
snoop = (u32) ~(PCIE_NO_SNOOP_ALL);
e1000_set_pcie_no_snoop_generic(hw, snoop);
ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
ctrl_ext |= E1000_CTRL_EXT_RO_DIS;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
/* Clear all of the statistics registers (clear on read). It is
* important that we do this after we have tried to establish link
* because the symbol error count will increment wildly if there
* is no link.
*/
e1000_clear_hw_cntrs_ich8lan(hw);
return ret_val;
}
/**
* e1000_initialize_hw_bits_ich8lan - Initialize required hardware bits
* @hw: pointer to the HW structure
*
* Sets/Clears required hardware bits necessary for correctly setting up the
* hardware for transmit and receive.
**/
static void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw)
{
u32 reg;
DEBUGFUNC("e1000_initialize_hw_bits_ich8lan");
/* Extended Device Control */
reg = E1000_READ_REG(hw, E1000_CTRL_EXT);
reg |= (1 << 22);
/* Enable PHY low-power state when MAC is at D3 w/o WoL */
if (hw->mac.type >= e1000_pchlan)
reg |= E1000_CTRL_EXT_PHYPDEN;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Transmit Descriptor Control 0 */
reg = E1000_READ_REG(hw, E1000_TXDCTL(0));
reg |= (1 << 22);
E1000_WRITE_REG(hw, E1000_TXDCTL(0), reg);
/* Transmit Descriptor Control 1 */
reg = E1000_READ_REG(hw, E1000_TXDCTL(1));
reg |= (1 << 22);
E1000_WRITE_REG(hw, E1000_TXDCTL(1), reg);
/* Transmit Arbitration Control 0 */
reg = E1000_READ_REG(hw, E1000_TARC(0));
if (hw->mac.type == e1000_ich8lan)
reg |= (1 << 28) | (1 << 29);
reg |= (1 << 23) | (1 << 24) | (1 << 26) | (1 << 27);
E1000_WRITE_REG(hw, E1000_TARC(0), reg);
/* Transmit Arbitration Control 1 */
reg = E1000_READ_REG(hw, E1000_TARC(1));
if (E1000_READ_REG(hw, E1000_TCTL) & E1000_TCTL_MULR)
reg &= ~(1 << 28);
else
reg |= (1 << 28);
reg |= (1 << 24) | (1 << 26) | (1 << 30);
E1000_WRITE_REG(hw, E1000_TARC(1), reg);
/* Device Status */
if (hw->mac.type == e1000_ich8lan) {
reg = E1000_READ_REG(hw, E1000_STATUS);
reg &= ~(1 << 31);
E1000_WRITE_REG(hw, E1000_STATUS, reg);
}
/* Work around a descriptor data corruption issue during NFSv2
* UDP traffic; just disable the NFS filtering capability.
*/
reg = E1000_READ_REG(hw, E1000_RFCTL);
reg |= (E1000_RFCTL_NFSW_DIS | E1000_RFCTL_NFSR_DIS);
/* Disable IPv6 extension header parsing because some malformed
* IPv6 headers can hang the Rx.
*/
if (hw->mac.type == e1000_ich8lan)
reg |= (E1000_RFCTL_IPV6_EX_DIS | E1000_RFCTL_NEW_IPV6_EXT_DIS);
E1000_WRITE_REG(hw, E1000_RFCTL, reg);
/* Enable ECC on Lynxpoint */
if ((hw->mac.type == e1000_pch_lpt) ||
(hw->mac.type == e1000_pch_spt)) {
reg = E1000_READ_REG(hw, E1000_PBECCSTS);
reg |= E1000_PBECCSTS_ECC_ENABLE;
E1000_WRITE_REG(hw, E1000_PBECCSTS, reg);
reg = E1000_READ_REG(hw, E1000_CTRL);
reg |= E1000_CTRL_MEHE;
E1000_WRITE_REG(hw, E1000_CTRL, reg);
}
return;
}
/**
* e1000_setup_link_ich8lan - Setup flow control and link settings
* @hw: pointer to the HW structure
*
* Determines which flow control settings to use, then configures flow
* control. Calls the appropriate media-specific link configuration
* function. Assuming the adapter has a valid link partner, a valid link
* should be established. Assumes the hardware has previously been reset
* and the transmitter and receiver are not enabled.
**/
static s32 e1000_setup_link_ich8lan(struct e1000_hw *hw)
{
s32 ret_val;
DEBUGFUNC("e1000_setup_link_ich8lan");
if (hw->phy.ops.check_reset_block(hw))
return E1000_SUCCESS;
/* ICH parts do not have a word in the NVM to determine
* the default flow control setting, so we explicitly
* set it to full.
*/
if (hw->fc.requested_mode == e1000_fc_default)
hw->fc.requested_mode = e1000_fc_full;
/* Save off the requested flow control mode for use later. Depending
* on the link partner's capabilities, we may or may not use this mode.
*/
hw->fc.current_mode = hw->fc.requested_mode;
DEBUGOUT1("After fix-ups FlowControl is now = %x\n",
hw->fc.current_mode);
/* Continue to configure the copper link. */
ret_val = hw->mac.ops.setup_physical_interface(hw);
if (ret_val)
return ret_val;
E1000_WRITE_REG(hw, E1000_FCTTV, hw->fc.pause_time);
if ((hw->phy.type == e1000_phy_82578) ||
(hw->phy.type == e1000_phy_82579) ||
(hw->phy.type == e1000_phy_i217) ||
(hw->phy.type == e1000_phy_82577)) {
E1000_WRITE_REG(hw, E1000_FCRTV_PCH, hw->fc.refresh_time);
ret_val = hw->phy.ops.write_reg(hw,
PHY_REG(BM_PORT_CTRL_PAGE, 27),
hw->fc.pause_time);
if (ret_val)
return ret_val;
}
return e1000_set_fc_watermarks_generic(hw);
}
/**
* e1000_setup_copper_link_ich8lan - Configure MAC/PHY interface
* @hw: pointer to the HW structure
*
* Configures the kumeran interface to the PHY to wait the appropriate time
* when polling the PHY, then call the generic setup_copper_link to finish
* configuring the copper link.
**/
static s32 e1000_setup_copper_link_ich8lan(struct e1000_hw *hw)
{
u32 ctrl;
s32 ret_val;
u16 reg_data;
DEBUGFUNC("e1000_setup_copper_link_ich8lan");
ctrl = E1000_READ_REG(hw, E1000_CTRL);
ctrl |= E1000_CTRL_SLU;
ctrl &= ~(E1000_CTRL_FRCSPD | E1000_CTRL_FRCDPX);
E1000_WRITE_REG(hw, E1000_CTRL, ctrl);
/* Set the mac to wait the maximum time between each iteration
* and increase the max iterations when polling the phy;
* this fixes erroneous timeouts at 10Mbps.
*/
ret_val = e1000_write_kmrn_reg_generic(hw, E1000_KMRNCTRLSTA_TIMEOUTS,
0xFFFF);
if (ret_val)
return ret_val;
ret_val = e1000_read_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_INBAND_PARAM,
&reg_data);
if (ret_val)
return ret_val;
reg_data |= 0x3F;
ret_val = e1000_write_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_INBAND_PARAM,
reg_data);
if (ret_val)
return ret_val;
switch (hw->phy.type) {
case e1000_phy_igp_3:
ret_val = e1000_copper_link_setup_igp(hw);
if (ret_val)
return ret_val;
break;
case e1000_phy_bm:
case e1000_phy_82578:
ret_val = e1000_copper_link_setup_m88(hw);
if (ret_val)
return ret_val;
break;
case e1000_phy_82577:
case e1000_phy_82579:
ret_val = e1000_copper_link_setup_82577(hw);
if (ret_val)
return ret_val;
break;
case e1000_phy_ife:
ret_val = hw->phy.ops.read_reg(hw, IFE_PHY_MDIX_CONTROL,
&reg_data);
if (ret_val)
return ret_val;
reg_data &= ~IFE_PMC_AUTO_MDIX;
switch (hw->phy.mdix) {
case 1:
reg_data &= ~IFE_PMC_FORCE_MDIX;
break;
case 2:
reg_data |= IFE_PMC_FORCE_MDIX;
break;
case 0:
default:
reg_data |= IFE_PMC_AUTO_MDIX;
break;
}
ret_val = hw->phy.ops.write_reg(hw, IFE_PHY_MDIX_CONTROL,
reg_data);
if (ret_val)
return ret_val;
break;
default:
break;
}
return e1000_setup_copper_link_generic(hw);
}
/**
* e1000_setup_copper_link_pch_lpt - Configure MAC/PHY interface
* @hw: pointer to the HW structure
*
* Calls the PHY specific link setup function and then calls the
* generic setup_copper_link to finish configuring the link for
* Lynxpoint PCH devices
**/
static s32 e1000_setup_copper_link_pch_lpt(struct e1000_hw *hw)
{
u32 ctrl;
s32 ret_val;
DEBUGFUNC("e1000_setup_copper_link_pch_lpt");
ctrl = E1000_READ_REG(hw, E1000_CTRL);
ctrl |= E1000_CTRL_SLU;
ctrl &= ~(E1000_CTRL_FRCSPD | E1000_CTRL_FRCDPX);
E1000_WRITE_REG(hw, E1000_CTRL, ctrl);
ret_val = e1000_copper_link_setup_82577(hw);
if (ret_val)
return ret_val;
return e1000_setup_copper_link_generic(hw);
}
/**
* e1000_get_link_up_info_ich8lan - Get current link speed and duplex
* @hw: pointer to the HW structure
* @speed: pointer to store current link speed
* @duplex: pointer to store the current link duplex
*
* Calls the generic get_speed_and_duplex to retrieve the current link
* information and then calls the Kumeran lock loss workaround for links at
* gigabit speeds.
**/
static s32 e1000_get_link_up_info_ich8lan(struct e1000_hw *hw, u16 *speed,
u16 *duplex)
{
s32 ret_val;
DEBUGFUNC("e1000_get_link_up_info_ich8lan");
ret_val = e1000_get_speed_and_duplex_copper_generic(hw, speed, duplex);
if (ret_val)
return ret_val;
if ((hw->mac.type == e1000_ich8lan) &&
(hw->phy.type == e1000_phy_igp_3) &&
(*speed == SPEED_1000)) {
ret_val = e1000_kmrn_lock_loss_workaround_ich8lan(hw);
}
return ret_val;
}
/**
* e1000_kmrn_lock_loss_workaround_ich8lan - Kumeran workaround
* @hw: pointer to the HW structure
*
* Work-around for 82566 Kumeran PCS lock loss:
* On link status change (i.e. PCI reset, speed change) and link is up and
* speed is gigabit-
* 0) if workaround is optionally disabled do nothing
* 1) wait 1ms for Kumeran link to come up
* 2) check Kumeran Diagnostic register PCS lock loss bit
* 3) if not set the link is locked (all is good), otherwise...
* 4) reset the PHY
* 5) repeat up to 10 times
* Note: this is only called for IGP3 copper when speed is 1 Gb/s.
**/
static s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw)
{
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 phy_ctrl;
s32 ret_val;
u16 i, data;
bool link;
DEBUGFUNC("e1000_kmrn_lock_loss_workaround_ich8lan");
if (!dev_spec->kmrn_lock_loss_workaround_enabled)
return E1000_SUCCESS;
/* Make sure link is up before proceeding. If not, just return.
* Attempting this while the link is negotiating fouls up link
* stability.
*/
ret_val = e1000_phy_has_link_generic(hw, 1, 0, &link);
if (!link)
return E1000_SUCCESS;
for (i = 0; i < 10; i++) {
/* read once to clear */
ret_val = hw->phy.ops.read_reg(hw, IGP3_KMRN_DIAG, &data);
if (ret_val)
return ret_val;
/* and again to get new status */
ret_val = hw->phy.ops.read_reg(hw, IGP3_KMRN_DIAG, &data);
if (ret_val)
return ret_val;
/* check for PCS lock */
if (!(data & IGP3_KMRN_DIAG_PCS_LOCK_LOSS))
return E1000_SUCCESS;
/* Issue PHY reset */
hw->phy.ops.reset(hw);
msec_delay_irq(5);
}
/* Disable GigE link negotiation */
phy_ctrl = E1000_READ_REG(hw, E1000_PHY_CTRL);
phy_ctrl |= (E1000_PHY_CTRL_GBE_DISABLE |
E1000_PHY_CTRL_NOND0A_GBE_DISABLE);
E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
/* Call gig speed drop workaround on Gig disable before accessing
* any PHY registers
*/
e1000_gig_downshift_workaround_ich8lan(hw);
/* unable to acquire PCS lock */
return -E1000_ERR_PHY;
}
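/*
 * A sketch of the latched-register idiom used in the loop above: the
 * PCS lock-loss diagnostic bit is clear-on-read, so the first read
 * flushes the stale latched value and only the second read reflects
 * current status. Standalone, with a hypothetical read_reg accessor:
 */
#include <stdint.h>
static uint16_t
read_latched_status(uint16_t (*read_reg)(void *ctx), void *ctx)
{
	(void)read_reg(ctx);	/* first read clears the latch */
	return read_reg(ctx);	/* second read is the live status */
}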
/**
* e1000_set_kmrn_lock_loss_workaround_ich8lan - Set Kumeran workaround state
* @hw: pointer to the HW structure
* @state: boolean value used to set the current Kumeran workaround state
*
* If ICH8, set the current Kumeran workaround state (enabled - TRUE
* /disabled - FALSE).
**/
void e1000_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
bool state)
{
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
DEBUGFUNC("e1000_set_kmrn_lock_loss_workaround_ich8lan");
if (hw->mac.type != e1000_ich8lan) {
DEBUGOUT("Workaround applies to ICH8 only.\n");
return;
}
dev_spec->kmrn_lock_loss_workaround_enabled = state;
return;
}
/**
* e1000_igp3_phy_powerdown_workaround_ich8lan - Power down workaround on D3
* @hw: pointer to the HW structure
*
* Workaround for 82566 power-down on D3 entry:
* 1) disable gigabit link
* 2) write VR power-down enable
* 3) read it back
* Continue if successful, else issue LCD reset and repeat
**/
void e1000_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw)
{
u32 reg;
u16 data;
u8 retry = 0;
DEBUGFUNC("e1000_igp3_phy_powerdown_workaround_ich8lan");
if (hw->phy.type != e1000_phy_igp_3)
return;
/* Try the workaround twice (if needed) */
do {
/* Disable link */
reg = E1000_READ_REG(hw, E1000_PHY_CTRL);
reg |= (E1000_PHY_CTRL_GBE_DISABLE |
E1000_PHY_CTRL_NOND0A_GBE_DISABLE);
E1000_WRITE_REG(hw, E1000_PHY_CTRL, reg);
/* Call gig speed drop workaround on Gig disable before
* accessing any PHY registers
*/
if (hw->mac.type == e1000_ich8lan)
e1000_gig_downshift_workaround_ich8lan(hw);
/* Write VR power-down enable */
hw->phy.ops.read_reg(hw, IGP3_VR_CTRL, &data);
data &= ~IGP3_VR_CTRL_DEV_POWERDOWN_MODE_MASK;
hw->phy.ops.write_reg(hw, IGP3_VR_CTRL,
data | IGP3_VR_CTRL_MODE_SHUTDOWN);
/* Read it back and test */
hw->phy.ops.read_reg(hw, IGP3_VR_CTRL, &data);
data &= IGP3_VR_CTRL_DEV_POWERDOWN_MODE_MASK;
if ((data == IGP3_VR_CTRL_MODE_SHUTDOWN) || retry)
break;
/* Issue PHY reset and repeat at most one more time */
reg = E1000_READ_REG(hw, E1000_CTRL);
E1000_WRITE_REG(hw, E1000_CTRL, reg | E1000_CTRL_PHY_RST);
retry++;
} while (retry);
}
/**
* e1000_gig_downshift_workaround_ich8lan - WoL from S5 stops working
* @hw: pointer to the HW structure
*
* Steps to take when dropping from 1Gb/s (e.g., link cable removal (LSC),
* LPLU, Gig disable, MDIC PHY reset):
* 1) Set Kumeran Near-end loopback
* 2) Clear Kumeran Near-end loopback
* Should only be called for ICH8[m] devices with any 1G Phy.
**/
void e1000_gig_downshift_workaround_ich8lan(struct e1000_hw *hw)
{
s32 ret_val;
u16 reg_data;
DEBUGFUNC("e1000_gig_downshift_workaround_ich8lan");
if ((hw->mac.type != e1000_ich8lan) ||
(hw->phy.type == e1000_phy_ife))
return;
ret_val = e1000_read_kmrn_reg_generic(hw, E1000_KMRNCTRLSTA_DIAG_OFFSET,
&reg_data);
if (ret_val)
return;
reg_data |= E1000_KMRNCTRLSTA_DIAG_NELPBK;
ret_val = e1000_write_kmrn_reg_generic(hw,
E1000_KMRNCTRLSTA_DIAG_OFFSET,
reg_data);
if (ret_val)
return;
reg_data &= ~E1000_KMRNCTRLSTA_DIAG_NELPBK;
e1000_write_kmrn_reg_generic(hw, E1000_KMRNCTRLSTA_DIAG_OFFSET,
reg_data);
}
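/*
 * The workaround above is a set-then-clear pulse of the Kumeran near-end
 * loopback bit; a standalone sketch of that pulse idiom, with a
 * hypothetical write_reg accessor:
 */
#include <stdint.h>
static void
pulse_reg_bit(uint16_t reg_val, uint16_t bit,
    void (*write_reg)(void *ctx, uint16_t val), void *ctx)
{
	write_reg(ctx, reg_val | bit);		/* 1) set the bit */
	write_reg(ctx, reg_val & ~bit);		/* 2) clear it again */
}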
/**
* e1000_suspend_workarounds_ich8lan - workarounds needed during S0->Sx
* @hw: pointer to the HW structure
*
* During S0 to Sx transition, it is possible the link remains at gig
* instead of negotiating to a lower speed. Before going to Sx, set
* 'Gig Disable' to force link speed negotiation to a lower speed based on
* the LPLU setting in the NVM or custom setting. For PCH and newer parts,
* the OEM bits PHY register (LED, GbE disable and LPLU configurations) also
* needs to be written.
* Parts that support (and are linked to a partner which supports) EEE in
* 100Mbps should disable LPLU since 100Mbps w/ EEE requires less power
* than 10Mbps w/o EEE.
**/
void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw)
{
struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
u32 phy_ctrl;
s32 ret_val;
DEBUGFUNC("e1000_suspend_workarounds_ich8lan");
phy_ctrl = E1000_READ_REG(hw, E1000_PHY_CTRL);
phy_ctrl |= E1000_PHY_CTRL_GBE_DISABLE;
if (hw->phy.type == e1000_phy_i217) {
u16 phy_reg, device_id = hw->device_id;
if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) ||
(device_id == E1000_DEV_ID_PCH_LPTLP_I218_V) ||
(device_id == E1000_DEV_ID_PCH_I218_LM3) ||
(device_id == E1000_DEV_ID_PCH_I218_V3) ||
(hw->mac.type == e1000_pch_spt)) {
u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6);
E1000_WRITE_REG(hw, E1000_FEXTNVM6,
fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK);
}
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
goto out;
if (!dev_spec->eee_disable) {
u16 eee_advert;
ret_val =
e1000_read_emi_reg_locked(hw,
I217_EEE_ADVERTISEMENT,
&eee_advert);
if (ret_val)
goto release;
/* Disable LPLU if both link partners support 100BaseT
* EEE and 100Full is advertised on both ends of the
* link, and enable Auto Enable LPI since there will
* be no driver to enable LPI while in Sx.
*/
if ((eee_advert & I82579_EEE_100_SUPPORTED) &&
(dev_spec->eee_lp_ability &
I82579_EEE_100_SUPPORTED) &&
(hw->phy.autoneg_advertised & ADVERTISE_100_FULL)) {
phy_ctrl &= ~(E1000_PHY_CTRL_D0A_LPLU |
E1000_PHY_CTRL_NOND0A_LPLU);
/* Set Auto Enable LPI after link up */
hw->phy.ops.read_reg_locked(hw,
I217_LPI_GPIO_CTRL,
&phy_reg);
phy_reg |= I217_LPI_GPIO_CTRL_AUTO_EN_LPI;
hw->phy.ops.write_reg_locked(hw,
I217_LPI_GPIO_CTRL,
phy_reg);
}
}
/* For i217 Intel Rapid Start Technology support,
* when the system is going into Sx and no manageability engine
* is present, the driver must configure proxy to reset only on
* power good. LPI (Low Power Idle) state must also reset only
* on power good, as well as the MTA (Multicast table array).
* The SMBus release must also be disabled on LCD reset.
*/
if (!(E1000_READ_REG(hw, E1000_FWSM) &
E1000_ICH_FWSM_FW_VALID)) {
/* Enable proxy to reset only on power good. */
hw->phy.ops.read_reg_locked(hw, I217_PROXY_CTRL,
&phy_reg);
phy_reg |= I217_PROXY_CTRL_AUTO_DISABLE;
hw->phy.ops.write_reg_locked(hw, I217_PROXY_CTRL,
phy_reg);
/* Set bit enable LPI (EEE) to reset only on
* power good.
*/
hw->phy.ops.read_reg_locked(hw, I217_SxCTRL, &phy_reg);
phy_reg |= I217_SxCTRL_ENABLE_LPI_RESET;
hw->phy.ops.write_reg_locked(hw, I217_SxCTRL, phy_reg);
/* Disable the SMB release on LCD reset. */
hw->phy.ops.read_reg_locked(hw, I217_MEMPWR, &phy_reg);
phy_reg &= ~I217_MEMPWR_DISABLE_SMB_RELEASE;
hw->phy.ops.write_reg_locked(hw, I217_MEMPWR, phy_reg);
}
/* Enable MTA to reset for Intel Rapid Start Technology
* Support
*/
hw->phy.ops.read_reg_locked(hw, I217_CGFREG, &phy_reg);
phy_reg |= I217_CGFREG_ENABLE_MTA_RESET;
hw->phy.ops.write_reg_locked(hw, I217_CGFREG, phy_reg);
release:
hw->phy.ops.release(hw);
}
out:
E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
if (hw->mac.type == e1000_ich8lan)
e1000_gig_downshift_workaround_ich8lan(hw);
if (hw->mac.type >= e1000_pchlan) {
e1000_oem_bits_config_ich8lan(hw, FALSE);
/* Reset PHY to activate OEM bits on 82577/8 */
if (hw->mac.type == e1000_pchlan)
e1000_phy_hw_reset_generic(hw);
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return;
e1000_write_smbus_addr(hw);
hw->phy.ops.release(hw);
}
return;
}
/**
* e1000_resume_workarounds_pchlan - workarounds needed during Sx->S0
* @hw: pointer to the HW structure
*
* During Sx to S0 transitions on non-managed devices or managed devices
* on which PHY resets are not blocked, if the PHY registers cannot be
* accessed properly by the software, toggle the LANPHYPC value to
* power-cycle the PHY.
* On i217, setup Intel Rapid Start Technology.
**/
u32 e1000_resume_workarounds_pchlan(struct e1000_hw *hw)
{
s32 ret_val;
DEBUGFUNC("e1000_resume_workarounds_pchlan");
if (hw->mac.type < e1000_pch2lan)
return E1000_SUCCESS;
ret_val = e1000_init_phy_workarounds_pchlan(hw);
if (ret_val) {
DEBUGOUT1("Failed to init PHY flow ret_val=%d\n", ret_val);
return ret_val;
}
/* For i217 Intel Rapid Start Technology support when the system
* is transitioning from Sx and no manageability engine is present
* configure SMBus to restore on reset, disable proxy, and enable
* the reset on MTA (Multicast table array).
*/
if (hw->phy.type == e1000_phy_i217) {
u16 phy_reg;
ret_val = hw->phy.ops.acquire(hw);
if (ret_val) {
DEBUGOUT("Failed to setup iRST\n");
return ret_val;
}
/* Clear Auto Enable LPI after link up */
hw->phy.ops.read_reg_locked(hw, I217_LPI_GPIO_CTRL, &phy_reg);
phy_reg &= ~I217_LPI_GPIO_CTRL_AUTO_EN_LPI;
hw->phy.ops.write_reg_locked(hw, I217_LPI_GPIO_CTRL, phy_reg);
if (!(E1000_READ_REG(hw, E1000_FWSM) &
E1000_ICH_FWSM_FW_VALID)) {
/* Restore clear on SMB if no manageability engine
* is present
*/
ret_val = hw->phy.ops.read_reg_locked(hw, I217_MEMPWR,
&phy_reg);
if (ret_val)
goto release;
phy_reg |= I217_MEMPWR_DISABLE_SMB_RELEASE;
hw->phy.ops.write_reg_locked(hw, I217_MEMPWR, phy_reg);
/* Disable Proxy */
hw->phy.ops.write_reg_locked(hw, I217_PROXY_CTRL, 0);
}
/* Enable reset on MTA */
ret_val = hw->phy.ops.read_reg_locked(hw, I217_CGFREG,
&phy_reg);
if (ret_val)
goto release;
phy_reg &= ~I217_CGFREG_ENABLE_MTA_RESET;
hw->phy.ops.write_reg_locked(hw, I217_CGFREG, phy_reg);
release:
if (ret_val)
DEBUGOUT1("Error %d in resume workarounds\n", ret_val);
hw->phy.ops.release(hw);
return ret_val;
}
return E1000_SUCCESS;
}
/**
* e1000_cleanup_led_ich8lan - Restore the default LED operation
* @hw: pointer to the HW structure
*
* Return the LED back to the default configuration.
**/
static s32 e1000_cleanup_led_ich8lan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_cleanup_led_ich8lan");
if (hw->phy.type == e1000_phy_ife)
return hw->phy.ops.write_reg(hw, IFE_PHY_SPECIAL_CONTROL_LED,
0);
E1000_WRITE_REG(hw, E1000_LEDCTL, hw->mac.ledctl_default);
return E1000_SUCCESS;
}
/**
* e1000_led_on_ich8lan - Turn LEDs on
* @hw: pointer to the HW structure
*
* Turn on the LEDs.
**/
static s32 e1000_led_on_ich8lan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_led_on_ich8lan");
if (hw->phy.type == e1000_phy_ife)
return hw->phy.ops.write_reg(hw, IFE_PHY_SPECIAL_CONTROL_LED,
(IFE_PSCL_PROBE_MODE | IFE_PSCL_PROBE_LEDS_ON));
E1000_WRITE_REG(hw, E1000_LEDCTL, hw->mac.ledctl_mode2);
return E1000_SUCCESS;
}
/**
* e1000_led_off_ich8lan - Turn LEDs off
* @hw: pointer to the HW structure
*
* Turn off the LEDs.
**/
static s32 e1000_led_off_ich8lan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_led_off_ich8lan");
if (hw->phy.type == e1000_phy_ife)
return hw->phy.ops.write_reg(hw, IFE_PHY_SPECIAL_CONTROL_LED,
(IFE_PSCL_PROBE_MODE | IFE_PSCL_PROBE_LEDS_OFF));
E1000_WRITE_REG(hw, E1000_LEDCTL, hw->mac.ledctl_mode1);
return E1000_SUCCESS;
}
/**
* e1000_setup_led_pchlan - Configures SW controllable LED
* @hw: pointer to the HW structure
*
* This prepares the SW controllable LED for use.
**/
static s32 e1000_setup_led_pchlan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_setup_led_pchlan");
return hw->phy.ops.write_reg(hw, HV_LED_CONFIG,
(u16)hw->mac.ledctl_mode1);
}
/**
* e1000_cleanup_led_pchlan - Restore the default LED operation
* @hw: pointer to the HW structure
*
* Return the LED back to the default configuration.
**/
static s32 e1000_cleanup_led_pchlan(struct e1000_hw *hw)
{
DEBUGFUNC("e1000_cleanup_led_pchlan");
return hw->phy.ops.write_reg(hw, HV_LED_CONFIG,
(u16)hw->mac.ledctl_default);
}
/**
* e1000_led_on_pchlan - Turn LEDs on
* @hw: pointer to the HW structure
*
* Turn on the LEDs.
**/
static s32 e1000_led_on_pchlan(struct e1000_hw *hw)
{
u16 data = (u16)hw->mac.ledctl_mode2;
u32 i, led;
DEBUGFUNC("e1000_led_on_pchlan");
/* If no link, then turn the LED on by setting the invert bit
* for each LED whose mode is "link_up" in ledctl_mode2.
*/
if (!(E1000_READ_REG(hw, E1000_STATUS) & E1000_STATUS_LU)) {
for (i = 0; i < 3; i++) {
led = (data >> (i * 5)) & E1000_PHY_LED0_MASK;
if ((led & E1000_PHY_LED0_MODE_MASK) !=
E1000_LEDCTL_MODE_LINK_UP)
continue;
if (led & E1000_PHY_LED0_IVRT)
data &= ~(E1000_PHY_LED0_IVRT << (i * 5));
else
data |= (E1000_PHY_LED0_IVRT << (i * 5));
}
}
return hw->phy.ops.write_reg(hw, HV_LED_CONFIG, data);
}
/**
* e1000_led_off_pchlan - Turn LEDs off
* @hw: pointer to the HW structure
*
* Turn off the LEDs.
**/
static s32 e1000_led_off_pchlan(struct e1000_hw *hw)
{
u16 data = (u16)hw->mac.ledctl_mode1;
u32 i, led;
DEBUGFUNC("e1000_led_off_pchlan");
/* If no link, then turn the LED off by clearing the invert bit
* for each LED whose mode is "link_up" in ledctl_mode1.
*/
if (!(E1000_READ_REG(hw, E1000_STATUS) & E1000_STATUS_LU)) {
for (i = 0; i < 3; i++) {
led = (data >> (i * 5)) & E1000_PHY_LED0_MASK;
if ((led & E1000_PHY_LED0_MODE_MASK) !=
E1000_LEDCTL_MODE_LINK_UP)
continue;
if (led & E1000_PHY_LED0_IVRT)
data &= ~(E1000_PHY_LED0_IVRT << (i * 5));
else
data |= (E1000_PHY_LED0_IVRT << (i * 5));
}
}
return hw->phy.ops.write_reg(hw, HV_LED_CONFIG, data);
}
/**
* e1000_get_cfg_done_ich8lan - Read config done bit after Full or PHY reset
* @hw: pointer to the HW structure
*
* Read appropriate register for the config done bit for completion status
* and configure the PHY through s/w for EEPROM-less parts.
*
* NOTE: some EEPROM-less silicon will fail trying to read the
* config done bit, so only an error is logged and execution continues.
* If we were to return with error, EEPROM-less silicon would not be
* able to be reset or change link.
**/
static s32 e1000_get_cfg_done_ich8lan(struct e1000_hw *hw)
{
s32 ret_val = E1000_SUCCESS;
u32 bank = 0;
u32 status;
DEBUGFUNC("e1000_get_cfg_done_ich8lan");
e1000_get_cfg_done_generic(hw);
/* Wait for indication from h/w that it has completed basic config */
if (hw->mac.type >= e1000_ich10lan) {
e1000_lan_init_done_ich8lan(hw);
} else {
ret_val = e1000_get_auto_rd_done_generic(hw);
if (ret_val) {
/* When auto config read does not complete, do not
* return with an error. This can happen in situations
* where there is no eeprom and prevents getting link.
*/
DEBUGOUT("Auto Read Done did not complete\n");
ret_val = E1000_SUCCESS;
}
}
/* Clear PHY Reset Asserted bit */
status = E1000_READ_REG(hw, E1000_STATUS);
if (status & E1000_STATUS_PHYRA)
E1000_WRITE_REG(hw, E1000_STATUS, status & ~E1000_STATUS_PHYRA);
else
DEBUGOUT("PHY Reset Asserted not set - needs delay\n");
/* If EEPROM is not marked present, init the IGP 3 PHY manually */
if (hw->mac.type <= e1000_ich9lan) {
if (!(E1000_READ_REG(hw, E1000_EECD) & E1000_EECD_PRES) &&
(hw->phy.type == e1000_phy_igp_3)) {
e1000_phy_init_script_igp3(hw);
}
} else {
if (e1000_valid_nvm_bank_detect_ich8lan(hw, &bank)) {
/* Maybe we should do a basic PHY config */
DEBUGOUT("EEPROM not present\n");
ret_val = -E1000_ERR_CONFIG;
}
}
return ret_val;
}
/**
* e1000_power_down_phy_copper_ich8lan - Remove link during PHY power down
* @hw: pointer to the HW structure
*
* In the case of a PHY power down to save power, to turn off link during a
* driver unload, or when wake on LAN is not enabled, remove the link.
**/
static void e1000_power_down_phy_copper_ich8lan(struct e1000_hw *hw)
{
/* If the management interface is not enabled, then power down */
if (!(hw->mac.ops.check_mng_mode(hw) ||
hw->phy.ops.check_reset_block(hw)))
e1000_power_down_phy_copper(hw);
return;
}
/**
* e1000_clear_hw_cntrs_ich8lan - Clear statistical counters
* @hw: pointer to the HW structure
*
* Clears hardware counters specific to the silicon family and calls
* clear_hw_cntrs_generic to clear all general purpose counters.
**/
static void e1000_clear_hw_cntrs_ich8lan(struct e1000_hw *hw)
{
u16 phy_data;
s32 ret_val;
DEBUGFUNC("e1000_clear_hw_cntrs_ich8lan");
e1000_clear_hw_cntrs_base_generic(hw);
E1000_READ_REG(hw, E1000_ALGNERRC);
E1000_READ_REG(hw, E1000_RXERRC);
E1000_READ_REG(hw, E1000_TNCRS);
E1000_READ_REG(hw, E1000_CEXTERR);
E1000_READ_REG(hw, E1000_TSCTC);
E1000_READ_REG(hw, E1000_TSCTFC);
E1000_READ_REG(hw, E1000_MGTPRC);
E1000_READ_REG(hw, E1000_MGTPDC);
E1000_READ_REG(hw, E1000_MGTPTC);
E1000_READ_REG(hw, E1000_IAC);
E1000_READ_REG(hw, E1000_ICRXOC);
/* Clear PHY statistics registers */
if ((hw->phy.type == e1000_phy_82578) ||
(hw->phy.type == e1000_phy_82579) ||
(hw->phy.type == e1000_phy_i217) ||
(hw->phy.type == e1000_phy_82577)) {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return;
ret_val = hw->phy.ops.set_page(hw,
HV_STATS_PAGE << IGP_PAGE_SHIFT);
if (ret_val)
goto release;
hw->phy.ops.read_reg_page(hw, HV_SCC_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_SCC_LOWER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_ECOL_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_ECOL_LOWER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_MCC_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_MCC_LOWER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_LATECOL_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_LATECOL_LOWER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_COLC_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_COLC_LOWER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_DC_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_DC_LOWER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_TNCRS_UPPER, &phy_data);
hw->phy.ops.read_reg_page(hw, HV_TNCRS_LOWER, &phy_data);
release:
hw->phy.ops.release(hw);
}
}
Index: head/sys/dev/hyperv/storvsc/hv_storvsc_drv_freebsd.c
===================================================================
--- head/sys/dev/hyperv/storvsc/hv_storvsc_drv_freebsd.c (revision 300049)
+++ head/sys/dev/hyperv/storvsc/hv_storvsc_drv_freebsd.c (revision 300050)
@@ -1,2170 +1,2170 @@
/*-
* Copyright (c) 2009-2012,2016 Microsoft Corp.
* Copyright (c) 2012 NetApp Inc.
* Copyright (c) 2012 Citrix Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice unmodified, this list of conditions, and the following
* disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/**
* StorVSC driver for Hyper-V. This driver presents a SCSI HBA interface
* to the Common Access Method (CAM) layer. CAM control blocks (CCBs) are
* converted into VSCSI protocol messages which are delivered to the parent
* partition StorVSP driver over the Hyper-V VMBUS.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/proc.h>
#include <sys/condvar.h>
#include <sys/time.h>
#include <sys/systm.h>
#include <sys/sockio.h>
#include <sys/mbuf.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/kernel.h>
#include <sys/queue.h>
#include <sys/lock.h>
#include <sys/sx.h>
#include <sys/taskqueue.h>
#include <sys/bus.h>
#include <sys/mutex.h>
#include <sys/callout.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/uma.h>
#include <sys/lock.h>
#include <sys/sema.h>
#include <sys/sglist.h>
#include <machine/bus.h>
#include <sys/bus_dma.h>
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_periph.h>
#include <cam/cam_sim.h>
#include <cam/cam_xpt_sim.h>
#include <cam/cam_xpt_internal.h>
#include <cam/cam_debug.h>
#include <cam/scsi/scsi_all.h>
#include <cam/scsi/scsi_message.h>
#include <dev/hyperv/include/hyperv.h>
#include "hv_vstorage.h"
#define STORVSC_RINGBUFFER_SIZE (20*PAGE_SIZE)
#define STORVSC_MAX_LUNS_PER_TARGET (64)
#define STORVSC_MAX_IO_REQUESTS (STORVSC_MAX_LUNS_PER_TARGET * 2)
#define BLKVSC_MAX_IDE_DISKS_PER_TARGET (1)
#define BLKVSC_MAX_IO_REQUESTS STORVSC_MAX_IO_REQUESTS
#define STORVSC_MAX_TARGETS (2)
#define VSTOR_PKT_SIZE (sizeof(struct vstor_packet) - vmscsi_size_delta)
#define HV_ALIGN(x, a) roundup2(x, a)
struct storvsc_softc;
struct hv_sgl_node {
LIST_ENTRY(hv_sgl_node) link;
struct sglist *sgl_data;
};
struct hv_sgl_page_pool{
LIST_HEAD(, hv_sgl_node) in_use_sgl_list;
LIST_HEAD(, hv_sgl_node) free_sgl_list;
boolean_t is_init;
} g_hv_sgl_page_pool;
#define STORVSC_MAX_SG_PAGE_CNT (STORVSC_MAX_IO_REQUESTS * HV_MAX_MULTIPAGE_BUFFER_COUNT)
enum storvsc_request_type {
WRITE_TYPE,
READ_TYPE,
UNKNOWN_TYPE
};
struct hv_storvsc_request {
LIST_ENTRY(hv_storvsc_request) link;
struct vstor_packet vstor_packet;
hv_vmbus_multipage_buffer data_buf;
void *sense_data;
uint8_t sense_info_len;
uint8_t retries;
union ccb *ccb;
struct storvsc_softc *softc;
struct callout callout;
struct sema synch_sema; /* Synchronize the request/response if needed */
struct sglist *bounce_sgl;
unsigned int bounce_sgl_count;
uint64_t not_aligned_seg_bits;
};
struct storvsc_softc {
struct hv_device *hs_dev;
LIST_HEAD(, hv_storvsc_request) hs_free_list;
struct mtx hs_lock;
struct storvsc_driver_props *hs_drv_props;
int hs_unit;
uint32_t hs_frozen;
struct cam_sim *hs_sim;
struct cam_path *hs_path;
uint32_t hs_num_out_reqs;
boolean_t hs_destroy;
boolean_t hs_drain_notify;
struct sema hs_drain_sema;
struct hv_storvsc_request hs_init_req;
struct hv_storvsc_request hs_reset_req;
};
/**
* HyperV storvsc timeout testing cases:
* a. IO returned after first timeout;
* b. IO returned after second timeout and queue freeze;
* c. IO returned while timer handler is running
* The first can be tested by "sg_senddiag -vv /dev/daX",
* and the second and third can be done by
* "sg_wr_mode -v -p 08 -c 0,1a -m 0,ff /dev/daX".
*/
#define HVS_TIMEOUT_TEST 0
/*
* Bus/adapter reset functionality on the Hyper-V host is
* buggy, so it is disabled until it can be further tested.
*/
#define HVS_HOST_RESET 0
struct storvsc_driver_props {
char *drv_name;
char *drv_desc;
uint8_t drv_max_luns_per_target;
uint8_t drv_max_ios_per_target;
uint32_t drv_ringbuffer_size;
};
enum hv_storage_type {
DRIVER_BLKVSC,
DRIVER_STORVSC,
DRIVER_UNKNOWN
};
#define HS_MAX_ADAPTERS 10
#define HV_STORAGE_SUPPORTS_MULTI_CHANNEL 0x1
/* {ba6163d9-04a1-4d29-b605-72e2ffb1dc7f} */
static const hv_guid gStorVscDeviceType={
.data = {0xd9, 0x63, 0x61, 0xba, 0xa1, 0x04, 0x29, 0x4d,
0xb6, 0x05, 0x72, 0xe2, 0xff, 0xb1, 0xdc, 0x7f}
};
/* {32412632-86cb-44a2-9b5c-50d1417354f5} */
static const hv_guid gBlkVscDeviceType={
.data = {0x32, 0x26, 0x41, 0x32, 0xcb, 0x86, 0xa2, 0x44,
0x9b, 0x5c, 0x50, 0xd1, 0x41, 0x73, 0x54, 0xf5}
};
static struct storvsc_driver_props g_drv_props_table[] = {
{"blkvsc", "Hyper-V IDE Storage Interface",
BLKVSC_MAX_IDE_DISKS_PER_TARGET, BLKVSC_MAX_IO_REQUESTS,
STORVSC_RINGBUFFER_SIZE},
{"storvsc", "Hyper-V SCSI Storage Interface",
STORVSC_MAX_LUNS_PER_TARGET, STORVSC_MAX_IO_REQUESTS,
STORVSC_RINGBUFFER_SIZE}
};
/*
* Sense buffer size changed in win8; have a run-time
* variable to track the size we should use.
*/
static int sense_buffer_size = PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE;
/*
* The size of the vmscsi_request has changed in win8. The
* additional size is for the newly added elements in the
* structure. These elements are valid only when we are talking
* to a win8 host.
* Track the correct size we need to apply.
*/
static int vmscsi_size_delta;
/*
* The storage protocol version is determined during the
* initial exchange with the host. It will indicate which
* storage functionality is available in the host.
*/
static int vmstor_proto_version;
struct vmstor_proto {
int proto_version;
int sense_buffer_size;
int vmscsi_size_delta;
};
static const struct vmstor_proto vmstor_proto_list[] = {
{
VMSTOR_PROTOCOL_VERSION_WIN10,
POST_WIN7_STORVSC_SENSE_BUFFER_SIZE,
0
},
{
VMSTOR_PROTOCOL_VERSION_WIN8_1,
POST_WIN7_STORVSC_SENSE_BUFFER_SIZE,
0
},
{
VMSTOR_PROTOCOL_VERSION_WIN8,
POST_WIN7_STORVSC_SENSE_BUFFER_SIZE,
0
},
{
VMSTOR_PROTOCOL_VERSION_WIN7,
PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE,
sizeof(struct vmscsi_win8_extension),
},
{
VMSTOR_PROTOCOL_VERSION_WIN6,
PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE,
sizeof(struct vmscsi_win8_extension),
}
};
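/*
 * A standalone sketch of the table-driven negotiation performed in
 * hv_storvsc_channel_init() below: the list above is ordered newest to
 * oldest, and the first version the host accepts determines the sense
 * buffer size and vmscsi size delta. host_accepts is a hypothetical
 * stand-in for the QUERYPROTOCOLVERSION round trip.
 */
#include <stddef.h>
static const struct vmstor_proto *
negotiate_vmstor_proto(const struct vmstor_proto *list, size_t n,
    int (*host_accepts)(int proto_version))
{
	size_t i;

	for (i = 0; i < n; i++)			/* newest first */
		if (host_accepts(list[i].proto_version))
			return (&list[i]);
	return (NULL);				/* no common version */
}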
/* static functions */
static int storvsc_probe(device_t dev);
static int storvsc_attach(device_t dev);
static int storvsc_detach(device_t dev);
static void storvsc_poll(struct cam_sim * sim);
static void storvsc_action(struct cam_sim * sim, union ccb * ccb);
static int create_storvsc_request(union ccb *ccb, struct hv_storvsc_request *reqp);
static void storvsc_free_request(struct storvsc_softc *sc, struct hv_storvsc_request *reqp);
static enum hv_storage_type storvsc_get_storage_type(device_t dev);
static void hv_storvsc_rescan_target(struct storvsc_softc *sc);
static void hv_storvsc_on_channel_callback(void *context);
static void hv_storvsc_on_iocompletion( struct storvsc_softc *sc,
struct vstor_packet *vstor_packet,
struct hv_storvsc_request *request);
static int hv_storvsc_connect_vsp(struct hv_device *device);
static void storvsc_io_done(struct hv_storvsc_request *reqp);
static void storvsc_copy_sgl_to_bounce_buf(struct sglist *bounce_sgl,
bus_dma_segment_t *orig_sgl,
unsigned int orig_sgl_count,
uint64_t seg_bits);
void storvsc_copy_from_bounce_buf_to_sgl(bus_dma_segment_t *dest_sgl,
unsigned int dest_sgl_count,
struct sglist* src_sgl,
uint64_t seg_bits);
static device_method_t storvsc_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, storvsc_probe),
DEVMETHOD(device_attach, storvsc_attach),
DEVMETHOD(device_detach, storvsc_detach),
DEVMETHOD(device_shutdown, bus_generic_shutdown),
DEVMETHOD_END
};
static driver_t storvsc_driver = {
"storvsc", storvsc_methods, sizeof(struct storvsc_softc),
};
static devclass_t storvsc_devclass;
DRIVER_MODULE(storvsc, vmbus, storvsc_driver, storvsc_devclass, 0, 0);
MODULE_VERSION(storvsc, 1);
MODULE_DEPEND(storvsc, vmbus, 1, 1, 1);
/**
* The host is capable of sending messages to us that are
* completely unsolicited. So, we need to address the race
* condition where we may be in the process of unloading the
* driver when the host may send us an unsolicited message.
* We address this issue by implementing a sequentially
* consistent protocol:
*
- * 1. Channel callback is invoked while holding the the channel lock
+ * 1. Channel callback is invoked while holding the channel lock
* and an unloading driver will reset the channel callback under
* the protection of this channel lock.
*
* 2. To ensure bounded wait time for unloading a driver, we don't
* permit outgoing traffic once the device is marked as being
* destroyed.
*
* 3. Once the device is marked as being destroyed, we only
* permit incoming traffic to properly account for
* packets already sent out.
*/
static inline struct storvsc_softc *
get_stor_device(struct hv_device *device,
boolean_t outbound)
{
struct storvsc_softc *sc;
sc = device_get_softc(device->device);
if (outbound) {
/*
* Here we permit outgoing I/O only
* if the device is not being destroyed.
*/
if (sc->hs_destroy) {
sc = NULL;
}
} else {
/*
* Inbound case: if the device is being destroyed,
* only permit traffic that accounts for
* messages already sent out.
*/
if (sc->hs_destroy && (sc->hs_num_out_reqs == 0)) {
sc = NULL;
}
}
return sc;
}
static void
storvsc_subchan_attach(struct hv_vmbus_channel *new_channel)
{
struct hv_device *device;
struct storvsc_softc *sc;
struct vmstor_chan_props props;
int ret = 0;
device = new_channel->device;
sc = get_stor_device(device, TRUE);
if (sc == NULL)
return;
memset(&props, 0, sizeof(props));
ret = hv_vmbus_channel_open(new_channel,
sc->hs_drv_props->drv_ringbuffer_size,
sc->hs_drv_props->drv_ringbuffer_size,
(void *)&props,
sizeof(struct vmstor_chan_props),
hv_storvsc_on_channel_callback,
new_channel);
return;
}
/**
* @brief Send multi-channel creation request to host
*
* @param device a Hyper-V device pointer
* @param max_chans the max channels supported by vmbus
*/
static void
storvsc_send_multichannel_request(struct hv_device *dev, int max_chans)
{
struct hv_vmbus_channel **subchan;
struct storvsc_softc *sc;
struct hv_storvsc_request *request;
struct vstor_packet *vstor_packet;
int request_channels_cnt = 0;
int ret, i;
/* Get the number of sub-channels that need to be created. */
request_channels_cnt = MIN(max_chans, mp_ncpus);
sc = get_stor_device(dev, TRUE);
if (sc == NULL) {
printf("Storvsc_error: get sc failed while send mutilchannel "
"request\n");
return;
}
request = &sc->hs_init_req;
/* request the host to create multi-channel */
memset(request, 0, sizeof(struct hv_storvsc_request));
sema_init(&request->synch_sema, 0, ("stor_synch_sema"));
vstor_packet = &request->vstor_packet;
vstor_packet->operation = VSTOR_OPERATION_CREATE_MULTI_CHANNELS;
vstor_packet->flags = REQUEST_COMPLETION_FLAG;
vstor_packet->u.multi_channels_cnt = request_channels_cnt;
ret = hv_vmbus_channel_send_packet(
dev->channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
/* wait for 5 seconds */
ret = sema_timedwait(&request->synch_sema, 5 * hz);
if (ret != 0) {
printf("Storvsc_error: create multi-channel timeout, %d\n",
ret);
return;
}
if (vstor_packet->operation != VSTOR_OPERATION_COMPLETEIO ||
vstor_packet->status != 0) {
printf("Storvsc_error: create multi-channel invalid operation "
"(%d) or statue (%u)\n",
vstor_packet->operation, vstor_packet->status);
return;
}
/* Wait for sub-channels setup to complete. */
subchan = vmbus_get_subchan(dev->channel, request_channels_cnt);
/* Attach the sub-channels. */
for (i = 0; i < request_channels_cnt; ++i)
storvsc_subchan_attach(subchan[i]);
/* Release the sub-channels. */
vmbus_rel_subchan(subchan, request_channels_cnt);
if (bootverbose)
printf("Storvsc create multi-channel success!\n");
}
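/*
 * A note on the five-second waits above: FreeBSD's sema_timedwait()
 * takes its timeout in scheduler ticks, and the global `hz` is ticks
 * per second, so `5 * hz` is five seconds regardless of the configured
 * tick rate. A trivial standalone helper (hypothetical name):
 */
static inline int
seconds_to_ticks(int seconds, int ticks_per_second)
{
	return (seconds * ticks_per_second);	/* e.g., 5 * hz */
}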
/**
* @brief initialize channel connection to parent partition
*
* @param dev a Hyper-V device pointer
* @returns 0 on success, non-zero error on failure
*/
static int
hv_storvsc_channel_init(struct hv_device *dev)
{
int ret = 0, i;
struct hv_storvsc_request *request;
struct vstor_packet *vstor_packet;
struct storvsc_softc *sc;
uint16_t max_chans = 0;
boolean_t support_multichannel = FALSE;
max_chans = 0;
support_multichannel = FALSE;
sc = get_stor_device(dev, TRUE);
if (sc == NULL)
return (ENODEV);
request = &sc->hs_init_req;
memset(request, 0, sizeof(struct hv_storvsc_request));
vstor_packet = &request->vstor_packet;
request->softc = sc;
/**
* Initiate the vsc/vsp initialization protocol on the open channel
*/
sema_init(&request->synch_sema, 0, ("stor_synch_sema"));
vstor_packet->operation = VSTOR_OPERATION_BEGININITIALIZATION;
vstor_packet->flags = REQUEST_COMPLETION_FLAG;
ret = hv_vmbus_channel_send_packet(
dev->channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (ret != 0)
goto cleanup;
/* wait 5 seconds */
ret = sema_timedwait(&request->synch_sema, 5 * hz);
if (ret != 0)
goto cleanup;
if (vstor_packet->operation != VSTOR_OPERATION_COMPLETEIO ||
vstor_packet->status != 0) {
goto cleanup;
}
for (i = 0; i < nitems(vmstor_proto_list); i++) {
/* reuse the packet for version range supported */
memset(vstor_packet, 0, sizeof(struct vstor_packet));
vstor_packet->operation = VSTOR_OPERATION_QUERYPROTOCOLVERSION;
vstor_packet->flags = REQUEST_COMPLETION_FLAG;
vstor_packet->u.version.major_minor =
vmstor_proto_list[i].proto_version;
/* revision is only significant for Windows guests */
vstor_packet->u.version.revision = 0;
ret = hv_vmbus_channel_send_packet(
dev->channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (ret != 0)
goto cleanup;
/* wait 5 seconds */
ret = sema_timedwait(&request->synch_sema, 5 * hz);
if (ret)
goto cleanup;
if (vstor_packet->operation != VSTOR_OPERATION_COMPLETEIO) {
ret = EINVAL;
goto cleanup;
}
if (vstor_packet->status == 0) {
vmstor_proto_version =
vmstor_proto_list[i].proto_version;
sense_buffer_size =
vmstor_proto_list[i].sense_buffer_size;
vmscsi_size_delta =
vmstor_proto_list[i].vmscsi_size_delta;
break;
}
}
if (vstor_packet->status != 0) {
ret = EINVAL;
goto cleanup;
}
/**
* Query channel properties
*/
memset(vstor_packet, 0, sizeof(struct vstor_packet));
vstor_packet->operation = VSTOR_OPERATION_QUERYPROPERTIES;
vstor_packet->flags = REQUEST_COMPLETION_FLAG;
ret = hv_vmbus_channel_send_packet(
dev->channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (ret != 0)
goto cleanup;
/* wait 5 seconds */
ret = sema_timedwait(&request->synch_sema, 5 * hz);
if (ret != 0)
goto cleanup;
/* TODO: Check returned version */
if (vstor_packet->operation != VSTOR_OPERATION_COMPLETEIO ||
vstor_packet->status != 0) {
goto cleanup;
}
/* The multi-channel feature is supported on WIN8 and later hosts. */
max_chans = vstor_packet->u.chan_props.max_channel_cnt;
if ((hv_vmbus_protocal_version != HV_VMBUS_VERSION_WIN7) &&
(hv_vmbus_protocal_version != HV_VMBUS_VERSION_WS2008) &&
(vstor_packet->u.chan_props.flags &
HV_STORAGE_SUPPORTS_MULTI_CHANNEL)) {
support_multichannel = TRUE;
}
memset(vstor_packet, 0, sizeof(struct vstor_packet));
vstor_packet->operation = VSTOR_OPERATION_ENDINITIALIZATION;
vstor_packet->flags = REQUEST_COMPLETION_FLAG;
ret = hv_vmbus_channel_send_packet(
dev->channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (ret != 0) {
goto cleanup;
}
/* wait 5 seconds */
ret = sema_timedwait(&request->synch_sema, 5 * hz);
if (ret != 0)
goto cleanup;
if (vstor_packet->operation != VSTOR_OPERATION_COMPLETEIO ||
vstor_packet->status != 0)
goto cleanup;
/*
* If multi-channel is supported, send multichannel create
* request to host.
*/
if (support_multichannel)
storvsc_send_multichannel_request(dev, max_chans);
cleanup:
sema_destroy(&request->synch_sema);
return (ret);
}
/**
 * @brief Open channel connection to parent partition StorVSP driver
*
* Open and initialize channel connection to parent partition StorVSP driver.
*
 * @param dev a Hyper-V device pointer
* @returns 0 on success, non-zero error on failure
*/
static int
hv_storvsc_connect_vsp(struct hv_device *dev)
{
int ret = 0;
struct vmstor_chan_props props;
struct storvsc_softc *sc;
sc = device_get_softc(dev->device);
memset(&props, 0, sizeof(struct vmstor_chan_props));
/*
* Open the channel
*/
ret = hv_vmbus_channel_open(
dev->channel,
sc->hs_drv_props->drv_ringbuffer_size,
sc->hs_drv_props->drv_ringbuffer_size,
(void *)&props,
sizeof(struct vmstor_chan_props),
hv_storvsc_on_channel_callback,
dev->channel);
if (ret != 0) {
return ret;
}
ret = hv_storvsc_channel_init(dev);
return (ret);
}
#if HVS_HOST_RESET
static int
hv_storvsc_host_reset(struct hv_device *dev)
{
int ret = 0;
struct storvsc_softc *sc;
struct hv_storvsc_request *request;
struct vstor_packet *vstor_packet;
sc = get_stor_device(dev, TRUE);
if (sc == NULL) {
return ENODEV;
}
request = &sc->hs_reset_req;
request->softc = sc;
vstor_packet = &request->vstor_packet;
sema_init(&request->synch_sema, 0, "stor synch sema");
vstor_packet->operation = VSTOR_OPERATION_RESETBUS;
vstor_packet->flags = REQUEST_COMPLETION_FLAG;
ret = hv_vmbus_channel_send_packet(dev->channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)&sc->hs_reset_req,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (ret != 0) {
goto cleanup;
}
ret = sema_timedwait(&request->synch_sema, 5 * hz); /* wait up to 5 seconds */
if (ret) {
goto cleanup;
}
/*
* At this point, all outstanding requests in the adapter
 * should have been flushed out and returned to us
*/
cleanup:
sema_destroy(&request->synch_sema);
return (ret);
}
#endif /* HVS_HOST_RESET */
/**
* @brief Function to initiate an I/O request
*
* @param device Hyper-V device pointer
* @param request pointer to a request structure
* @returns 0 on success, non-zero error on failure
*/
static int
hv_storvsc_io_request(struct hv_device *device,
struct hv_storvsc_request *request)
{
struct storvsc_softc *sc;
struct vstor_packet *vstor_packet = &request->vstor_packet;
struct hv_vmbus_channel* outgoing_channel = NULL;
int ret = 0;
sc = get_stor_device(device, TRUE);
if (sc == NULL) {
return ENODEV;
}
vstor_packet->flags |= REQUEST_COMPLETION_FLAG;
vstor_packet->u.vm_srb.length = VSTOR_PKT_SIZE;
vstor_packet->u.vm_srb.sense_info_len = sense_buffer_size;
vstor_packet->u.vm_srb.transfer_len = request->data_buf.length;
vstor_packet->operation = VSTOR_OPERATION_EXECUTESRB;
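/*
 * Pick one of the primary/sub-channels for this I/O (sub-channels
 * were attached in storvsc_send_multichannel_request()) and drop
 * the softc lock across the VMBus send; the completion arrives
 * asynchronously via the channel callback.
 */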
outgoing_channel = vmbus_select_outgoing_channel(device->channel);
mtx_unlock(&request->softc->hs_lock);
if (request->data_buf.length) {
ret = hv_vmbus_channel_send_packet_multipagebuffer(
outgoing_channel,
&request->data_buf,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request);
} else {
ret = hv_vmbus_channel_send_packet(
outgoing_channel,
vstor_packet,
VSTOR_PKT_SIZE,
(uint64_t)(uintptr_t)request,
HV_VMBUS_PACKET_TYPE_DATA_IN_BAND,
HV_VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
}
mtx_lock(&request->softc->hs_lock);
if (ret != 0) {
printf("Unable to send packet %p ret %d", vstor_packet, ret);
} else {
atomic_add_int(&sc->hs_num_out_reqs, 1);
}
return (ret);
}
/**
 * Process a VSTOR_OPERATION_COMPLETEIO reply and ready the result
 * to be completed for upper-layer processing by the CAM layer.
*/
static void
hv_storvsc_on_iocompletion(struct storvsc_softc *sc,
struct vstor_packet *vstor_packet,
struct hv_storvsc_request *request)
{
struct vmscsi_req *vm_srb;
vm_srb = &vstor_packet->u.vm_srb;
if (((vm_srb->scsi_status & 0xFF) == SCSI_STATUS_CHECK_COND) &&
(vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID)) {
/* Autosense data available */
KASSERT(vm_srb->sense_info_len <= request->sense_info_len,
("vm_srb->sense_info_len <= "
"request->sense_info_len"));
memcpy(request->sense_data, vm_srb->u.sense_data,
vm_srb->sense_info_len);
request->sense_info_len = vm_srb->sense_info_len;
}
/* Complete request by passing to the CAM layer */
storvsc_io_done(request);
atomic_subtract_int(&sc->hs_num_out_reqs, 1);
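/*
 * If a drain is in progress (see storvsc_detach()), wake the
 * waiter once the last outstanding request has completed.
 */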
if (sc->hs_drain_notify && (sc->hs_num_out_reqs == 0)) {
sema_post(&sc->hs_drain_sema);
}
}
static void
hv_storvsc_rescan_target(struct storvsc_softc *sc)
{
path_id_t pathid;
target_id_t targetid;
union ccb *ccb;
pathid = cam_sim_path(sc->hs_sim);
targetid = CAM_TARGET_WILDCARD;
/*
* Allocate a CCB and schedule a rescan.
*/
ccb = xpt_alloc_ccb_nowait();
if (ccb == NULL) {
printf("unable to alloc CCB for rescan\n");
return;
}
if (xpt_create_path(&ccb->ccb_h.path, NULL, pathid, targetid,
CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
printf("unable to create path for rescan, pathid: %u,"
"targetid: %u\n", pathid, targetid);
xpt_free_ccb(ccb);
return;
}
if (targetid == CAM_TARGET_WILDCARD)
ccb->ccb_h.func_code = XPT_SCAN_BUS;
else
ccb->ccb_h.func_code = XPT_SCAN_TGT;
xpt_rescan(ccb);
}
static void
hv_storvsc_on_channel_callback(void *context)
{
int ret = 0;
hv_vmbus_channel *channel = (hv_vmbus_channel *)context;
struct hv_device *device = NULL;
struct storvsc_softc *sc;
uint32_t bytes_recvd;
uint64_t request_id;
uint8_t packet[roundup2(sizeof(struct vstor_packet), 8)];
struct hv_storvsc_request *request;
struct vstor_packet *vstor_packet;
device = channel->device;
KASSERT(device, ("device is NULL"));
sc = get_stor_device(device, FALSE);
if (sc == NULL) {
printf("Storvsc_error: get stor device failed.\n");
return;
}
ret = hv_vmbus_channel_recv_packet(
channel,
packet,
roundup2(VSTOR_PKT_SIZE, 8),
&bytes_recvd,
&request_id);
while ((ret == 0) && (bytes_recvd > 0)) {
request = (struct hv_storvsc_request *)(uintptr_t)request_id;
if ((request == &sc->hs_init_req) ||
(request == &sc->hs_reset_req)) {
memcpy(&request->vstor_packet, packet,
sizeof(struct vstor_packet));
sema_post(&request->synch_sema);
} else {
vstor_packet = (struct vstor_packet *)packet;
switch(vstor_packet->operation) {
case VSTOR_OPERATION_COMPLETEIO:
if (request == NULL)
panic("VMBUS: storvsc received a "
"packet with NULL request id in "
"COMPLETEIO operation.");
hv_storvsc_on_iocompletion(sc,
vstor_packet, request);
break;
case VSTOR_OPERATION_REMOVEDEVICE:
printf("VMBUS: storvsc operation %d not "
"implemented.\n", vstor_packet->operation);
/* TODO: implement */
break;
case VSTOR_OPERATION_ENUMERATE_BUS:
hv_storvsc_rescan_target(sc);
break;
default:
break;
}
}
ret = hv_vmbus_channel_recv_packet(
channel,
packet,
roundup2(VSTOR_PKT_SIZE, 8),
&bytes_recvd,
&request_id);
}
}
/**
* @brief StorVSC probe function
*
 * Device probe function. Returns 0 if the input device is a StorVSC
 * device; otherwise ENXIO is returned. If the input device is a
 * BlkVSC (paravirtual IDE) device and this support is disabled in
 * favor of the emulated ATA/IDE device, ENXIO is returned as well.
*
 * @param dev a device
 * @returns 0 on success, ENXIO if not a matching StorVSC device
*/
static int
storvsc_probe(device_t dev)
{
int ata_disk_enable = 0;
int ret = ENXIO;
switch (storvsc_get_storage_type(dev)) {
case DRIVER_BLKVSC:
if(bootverbose)
device_printf(dev, "DRIVER_BLKVSC-Emulated ATA/IDE probe\n");
if (!getenv_int("hw.ata.disk_enable", &ata_disk_enable)) {
if(bootverbose)
device_printf(dev,
"Enlightened ATA/IDE detected\n");
device_set_desc(dev, g_drv_props_table[DRIVER_BLKVSC].drv_desc);
ret = BUS_PROBE_DEFAULT;
} else if(bootverbose)
device_printf(dev, "Emulated ATA/IDE set (hw.ata.disk_enable set)\n");
break;
case DRIVER_STORVSC:
if(bootverbose)
device_printf(dev, "Enlightened SCSI device detected\n");
device_set_desc(dev, g_drv_props_table[DRIVER_STORVSC].drv_desc);
ret = BUS_PROBE_DEFAULT;
break;
default:
ret = ENXIO;
}
return (ret);
}
/**
* @brief StorVSC attach function
*
* Function responsible for allocating per-device structures,
* setting up CAM interfaces and scanning for available LUNs to
* be used for SCSI device peripherals.
*
 * @param dev a device
* @returns 0 on success or an error on failure
*/
static int
storvsc_attach(device_t dev)
{
struct hv_device *hv_dev = vmbus_get_devctx(dev);
enum hv_storage_type stor_type;
struct storvsc_softc *sc;
struct cam_devq *devq;
int ret, i, j;
struct hv_storvsc_request *reqp;
struct root_hold_token *root_mount_token = NULL;
struct hv_sgl_node *sgl_node = NULL;
void *tmp_buff = NULL;
/*
* We need to serialize storvsc attach calls.
*/
root_mount_token = root_mount_hold("storvsc");
sc = device_get_softc(dev);
stor_type = storvsc_get_storage_type(dev);
if (stor_type == DRIVER_UNKNOWN) {
ret = ENODEV;
goto cleanup;
}
/* fill in driver specific properties */
sc->hs_drv_props = &g_drv_props_table[stor_type];
/* fill in device specific properties */
sc->hs_unit = device_get_unit(dev);
sc->hs_dev = hv_dev;
LIST_INIT(&sc->hs_free_list);
mtx_init(&sc->hs_lock, "hvslck", NULL, MTX_DEF);
for (i = 0; i < sc->hs_drv_props->drv_max_ios_per_target; ++i) {
reqp = malloc(sizeof(struct hv_storvsc_request),
M_DEVBUF, M_WAITOK|M_ZERO);
reqp->softc = sc;
LIST_INSERT_HEAD(&sc->hs_free_list, reqp, link);
}
/* create sg-list page pool */
if (FALSE == g_hv_sgl_page_pool.is_init) {
g_hv_sgl_page_pool.is_init = TRUE;
LIST_INIT(&g_hv_sgl_page_pool.in_use_sgl_list);
LIST_INIT(&g_hv_sgl_page_pool.free_sgl_list);
/*
* Pre-create SG list, each SG list with
* HV_MAX_MULTIPAGE_BUFFER_COUNT segments, each
* segment has one page buffer
*/
for (i = 0; i < STORVSC_MAX_IO_REQUESTS; i++) {
sgl_node = malloc(sizeof(struct hv_sgl_node),
M_DEVBUF, M_WAITOK|M_ZERO);
sgl_node->sgl_data =
sglist_alloc(HV_MAX_MULTIPAGE_BUFFER_COUNT,
M_WAITOK|M_ZERO);
for (j = 0; j < HV_MAX_MULTIPAGE_BUFFER_COUNT; j++) {
tmp_buff = malloc(PAGE_SIZE,
M_DEVBUF, M_WAITOK|M_ZERO);
sgl_node->sgl_data->sg_segs[j].ss_paddr =
(vm_paddr_t)tmp_buff;
}
LIST_INSERT_HEAD(&g_hv_sgl_page_pool.free_sgl_list,
sgl_node, link);
}
}
sc->hs_destroy = FALSE;
sc->hs_drain_notify = FALSE;
sema_init(&sc->hs_drain_sema, 0, "Store Drain Sema");
ret = hv_storvsc_connect_vsp(hv_dev);
if (ret != 0) {
goto cleanup;
}
/*
* Create the device queue.
* Hyper-V maps each target to one SCSI HBA
*/
devq = cam_simq_alloc(sc->hs_drv_props->drv_max_ios_per_target);
if (devq == NULL) {
device_printf(dev, "Failed to alloc device queue\n");
ret = ENOMEM;
goto cleanup;
}
sc->hs_sim = cam_sim_alloc(storvsc_action,
storvsc_poll,
sc->hs_drv_props->drv_name,
sc,
sc->hs_unit,
&sc->hs_lock, 1,
sc->hs_drv_props->drv_max_ios_per_target,
devq);
if (sc->hs_sim == NULL) {
device_printf(dev, "Failed to alloc sim\n");
cam_simq_free(devq);
ret = ENOMEM;
goto cleanup;
}
mtx_lock(&sc->hs_lock);
/* bus_id is set to 0, need to get it from VMBUS channel query? */
if (xpt_bus_register(sc->hs_sim, dev, 0) != CAM_SUCCESS) {
cam_sim_free(sc->hs_sim, /*free_devq*/TRUE);
mtx_unlock(&sc->hs_lock);
device_printf(dev, "Unable to register SCSI bus\n");
ret = ENXIO;
goto cleanup;
}
if (xpt_create_path(&sc->hs_path, /*periph*/NULL,
cam_sim_path(sc->hs_sim),
CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
xpt_bus_deregister(cam_sim_path(sc->hs_sim));
cam_sim_free(sc->hs_sim, /*free_devq*/TRUE);
mtx_unlock(&sc->hs_lock);
device_printf(dev, "Unable to create path\n");
ret = ENXIO;
goto cleanup;
}
mtx_unlock(&sc->hs_lock);
root_mount_rel(root_mount_token);
return (0);
cleanup:
root_mount_rel(root_mount_token);
while (!LIST_EMPTY(&sc->hs_free_list)) {
reqp = LIST_FIRST(&sc->hs_free_list);
LIST_REMOVE(reqp, link);
free(reqp, M_DEVBUF);
}
while (!LIST_EMPTY(&g_hv_sgl_page_pool.free_sgl_list)) {
sgl_node = LIST_FIRST(&g_hv_sgl_page_pool.free_sgl_list);
LIST_REMOVE(sgl_node, link);
for (j = 0; j < HV_MAX_MULTIPAGE_BUFFER_COUNT; j++) {
if (NULL !=
(void*)sgl_node->sgl_data->sg_segs[j].ss_paddr) {
free((void*)sgl_node->sgl_data->sg_segs[j].ss_paddr, M_DEVBUF);
}
}
sglist_free(sgl_node->sgl_data);
free(sgl_node, M_DEVBUF);
}
return (ret);
}
/**
* @brief StorVSC device detach function
*
* This function is responsible for safely detaching a
* StorVSC device. This includes waiting for inbound responses
* to complete and freeing associated per-device structures.
*
* @param dev a device
 * @returns 0 on success
*/
static int
storvsc_detach(device_t dev)
{
struct storvsc_softc *sc = device_get_softc(dev);
struct hv_storvsc_request *reqp = NULL;
struct hv_device *hv_device = vmbus_get_devctx(dev);
struct hv_sgl_node *sgl_node = NULL;
int j = 0;
sc->hs_destroy = TRUE;
/*
* At this point, all outbound traffic should be disabled. We
* only allow inbound traffic (responses) to proceed so that
* outstanding requests can be completed.
*/
sc->hs_drain_notify = TRUE;
sema_wait(&sc->hs_drain_sema);
sc->hs_drain_notify = FALSE;
/*
* Since we have already drained, we don't need to busy wait.
* The call to close the channel will reset the callback
* under the protection of the incoming channel lock.
*/
hv_vmbus_channel_close(hv_device->channel);
mtx_lock(&sc->hs_lock);
while (!LIST_EMPTY(&sc->hs_free_list)) {
reqp = LIST_FIRST(&sc->hs_free_list);
LIST_REMOVE(reqp, link);
free(reqp, M_DEVBUF);
}
mtx_unlock(&sc->hs_lock);
while (!LIST_EMPTY(&g_hv_sgl_page_pool.free_sgl_list)) {
sgl_node = LIST_FIRST(&g_hv_sgl_page_pool.free_sgl_list);
LIST_REMOVE(sgl_node, link);
for (j = 0; j < HV_MAX_MULTIPAGE_BUFFER_COUNT; j++){
if (NULL !=
(void*)sgl_node->sgl_data->sg_segs[j].ss_paddr) {
free((void*)sgl_node->sgl_data->sg_segs[j].ss_paddr, M_DEVBUF);
}
}
sglist_free(sgl_node->sgl_data);
free(sgl_node, M_DEVBUF);
}
return (0);
}
#if HVS_TIMEOUT_TEST
/**
* @brief unit test for timed out operations
*
* This function provides unit testing capability to simulate
 * timed out operations. Recompilation with HVS_TIMEOUT_TEST=1
 * is required.
*
* @param reqp pointer to a request structure
* @param opcode SCSI operation being performed
* @param wait if 1, wait for I/O to complete
*/
static void
storvsc_timeout_test(struct hv_storvsc_request *reqp,
uint8_t opcode, int wait)
{
int ret;
union ccb *ccb = reqp->ccb;
struct storvsc_softc *sc = reqp->softc;
if (reqp->vstor_packet.vm_srb.cdb[0] != opcode) {
return;
}
if (wait) {
mtx_lock(&reqp->event.mtx);
}
ret = hv_storvsc_io_request(sc->hs_dev, reqp);
if (ret != 0) {
if (wait) {
mtx_unlock(&reqp->event.mtx);
}
printf("%s: io_request failed with %d.\n",
__func__, ret);
ccb->ccb_h.status = CAM_PROVIDE_FAIL;
mtx_lock(&sc->hs_lock);
storvsc_free_request(sc, reqp);
xpt_done(ccb);
mtx_unlock(&sc->hs_lock);
return;
}
if (wait) {
xpt_print(ccb->ccb_h.path,
"%u: %s: waiting for IO return.\n",
ticks, __func__);
ret = cv_timedwait(&reqp->event.cv, &reqp->event.mtx, 60*hz);
mtx_unlock(&reqp->event.mtx);
xpt_print(ccb->ccb_h.path, "%u: %s: %s.\n",
ticks, __func__, (ret == 0)?
"IO return detected" :
"IO return not detected");
/*
* Now both the timer handler and io done are running
* simultaneously. We want to confirm the io done always
* finishes after the timer handler exits. So reqp used by
* timer handler is not freed or stale. Do busy loop for
* another 1/10 second to make sure io done does
* wait for the timer handler to complete.
*/
DELAY(100*1000);
mtx_lock(&sc->hs_lock);
xpt_print(ccb->ccb_h.path,
"%u: %s: finishing, queue frozen %d, "
"ccb status 0x%x scsi_status 0x%x.\n",
ticks, __func__, sc->hs_frozen,
ccb->ccb_h.status,
ccb->csio.scsi_status);
mtx_unlock(&sc->hs_lock);
}
}
#endif /* HVS_TIMEOUT_TEST */
#ifdef notyet
/**
* @brief timeout handler for requests
*
* This function is called as a result of a callout expiring.
*
* @param arg pointer to a request
*/
static void
storvsc_timeout(void *arg)
{
struct hv_storvsc_request *reqp = arg;
struct storvsc_softc *sc = reqp->softc;
union ccb *ccb = reqp->ccb;
if (reqp->retries == 0) {
mtx_lock(&sc->hs_lock);
xpt_print(ccb->ccb_h.path,
"%u: IO timed out (req=0x%p), wait for another %u secs.\n",
ticks, reqp, ccb->ccb_h.timeout / 1000);
cam_error_print(ccb, CAM_ESF_ALL, CAM_EPF_ALL);
mtx_unlock(&sc->hs_lock);
reqp->retries++;
callout_reset_sbt(&reqp->callout, SBT_1MS * ccb->ccb_h.timeout,
0, storvsc_timeout, reqp, 0);
#if HVS_TIMEOUT_TEST
storvsc_timeout_test(reqp, SEND_DIAGNOSTIC, 0);
#endif
return;
}
mtx_lock(&sc->hs_lock);
xpt_print(ccb->ccb_h.path,
"%u: IO (reqp = 0x%p) did not return for %u seconds, %s.\n",
ticks, reqp, ccb->ccb_h.timeout * (reqp->retries+1) / 1000,
(sc->hs_frozen == 0)?
"freezing the queue" : "the queue is already frozen");
if (sc->hs_frozen == 0) {
sc->hs_frozen = 1;
xpt_freeze_simq(xpt_path_sim(ccb->ccb_h.path), 1);
}
mtx_unlock(&sc->hs_lock);
#if HVS_TIMEOUT_TEST
storvsc_timeout_test(reqp, MODE_SELECT_10, 1);
#endif
}
#endif
/**
* @brief StorVSC device poll function
*
* This function is responsible for servicing requests when
 * interrupts are disabled (i.e., when we are dumping core).
*
* @param sim a pointer to a CAM SCSI interface module
*/
static void
storvsc_poll(struct cam_sim *sim)
{
struct storvsc_softc *sc = cam_sim_softc(sim);
mtx_assert(&sc->hs_lock, MA_OWNED);
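/*
 * Drop the softc lock before polling: completion processing in
 * storvsc_io_done() re-acquires it, and the non-recursive mutex
 * would otherwise deadlock.
 */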
mtx_unlock(&sc->hs_lock);
hv_storvsc_on_channel_callback(sc->hs_dev->channel);
mtx_lock(&sc->hs_lock);
}
/**
* @brief StorVSC device action function
*
* This function is responsible for handling SCSI operations which
* are passed from the CAM layer. The requests are in the form of
* CAM control blocks which indicate the action being performed.
* Not all actions require converting the request to a VSCSI protocol
* message - these actions can be responded to by this driver.
* Requests which are destined for a backend storage device are converted
* to a VSCSI protocol message and sent on the channel connection associated
* with this device.
*
* @param sim pointer to a CAM SCSI interface module
* @param ccb pointer to a CAM control block
*/
static void
storvsc_action(struct cam_sim *sim, union ccb *ccb)
{
struct storvsc_softc *sc = cam_sim_softc(sim);
int res;
mtx_assert(&sc->hs_lock, MA_OWNED);
switch (ccb->ccb_h.func_code) {
case XPT_PATH_INQ: {
struct ccb_pathinq *cpi = &ccb->cpi;
cpi->version_num = 1;
cpi->hba_inquiry = PI_TAG_ABLE|PI_SDTR_ABLE;
cpi->target_sprt = 0;
cpi->hba_misc = PIM_NOBUSRESET;
cpi->hba_eng_cnt = 0;
cpi->max_target = STORVSC_MAX_TARGETS;
cpi->max_lun = sc->hs_drv_props->drv_max_luns_per_target;
cpi->initiator_id = cpi->max_target;
cpi->bus_id = cam_sim_bus(sim);
cpi->base_transfer_speed = 300000;
cpi->transport = XPORT_SAS;
cpi->transport_version = 0;
cpi->protocol = PROTO_SCSI;
cpi->protocol_version = SCSI_REV_SPC2;
strncpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN);
strncpy(cpi->hba_vid, sc->hs_drv_props->drv_name, HBA_IDLEN);
strncpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN);
cpi->unit_number = cam_sim_unit(sim);
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
}
case XPT_GET_TRAN_SETTINGS: {
struct ccb_trans_settings *cts = &ccb->cts;
cts->transport = XPORT_SAS;
cts->transport_version = 0;
cts->protocol = PROTO_SCSI;
cts->protocol_version = SCSI_REV_SPC2;
/* enable tag queuing and disconnected mode */
cts->proto_specific.valid = CTS_SCSI_VALID_TQ;
cts->proto_specific.scsi.valid = CTS_SCSI_VALID_TQ;
cts->proto_specific.scsi.flags = CTS_SCSI_FLAGS_TAG_ENB;
cts->xport_specific.valid = CTS_SPI_VALID_DISC;
cts->xport_specific.spi.flags = CTS_SPI_FLAGS_DISC_ENB;
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
}
case XPT_SET_TRAN_SETTINGS: {
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
}
case XPT_CALC_GEOMETRY:{
cam_calc_geometry(&ccb->ccg, 1);
xpt_done(ccb);
return;
}
case XPT_RESET_BUS:
case XPT_RESET_DEV:{
#if HVS_HOST_RESET
if ((res = hv_storvsc_host_reset(sc->hs_dev)) != 0) {
xpt_print(ccb->ccb_h.path,
"hv_storvsc_host_reset failed with %d\n", res);
ccb->ccb_h.status = CAM_PROVIDE_FAIL;
xpt_done(ccb);
return;
}
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
#else
xpt_print(ccb->ccb_h.path,
"%s reset not supported.\n",
(ccb->ccb_h.func_code == XPT_RESET_BUS)?
"bus" : "dev");
ccb->ccb_h.status = CAM_REQ_INVALID;
xpt_done(ccb);
return;
#endif /* HVS_HOST_RESET */
}
case XPT_SCSI_IO:
case XPT_IMMED_NOTIFY: {
struct hv_storvsc_request *reqp = NULL;
if (ccb->csio.cdb_len == 0) {
panic("cdl_len is 0\n");
}
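/*
 * If no request structures are free, ask CAM to requeue the CCB
 * and freeze the SIM queue; it is released (CAM_RELEASE_SIMQ) in
 * storvsc_io_done() once a request completes.
 */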
if (LIST_EMPTY(&sc->hs_free_list)) {
ccb->ccb_h.status = CAM_REQUEUE_REQ;
if (sc->hs_frozen == 0) {
sc->hs_frozen = 1;
xpt_freeze_simq(sim, /* count*/1);
}
xpt_done(ccb);
return;
}
reqp = LIST_FIRST(&sc->hs_free_list);
LIST_REMOVE(reqp, link);
bzero(reqp, sizeof(struct hv_storvsc_request));
reqp->softc = sc;
ccb->ccb_h.status |= CAM_SIM_QUEUED;
if ((res = create_storvsc_request(ccb, reqp)) != 0) {
ccb->ccb_h.status = CAM_REQ_INVALID;
xpt_done(ccb);
return;
}
#ifdef notyet
if (ccb->ccb_h.timeout != CAM_TIME_INFINITY) {
callout_init(&reqp->callout, 1);
callout_reset_sbt(&reqp->callout,
SBT_1MS * ccb->ccb_h.timeout, 0,
storvsc_timeout, reqp, 0);
#if HVS_TIMEOUT_TEST
cv_init(&reqp->event.cv, "storvsc timeout cv");
mtx_init(&reqp->event.mtx, "storvsc timeout mutex",
NULL, MTX_DEF);
switch (reqp->vstor_packet.vm_srb.cdb[0]) {
case MODE_SELECT_10:
case SEND_DIAGNOSTIC:
/* To have timer send the request. */
return;
default:
break;
}
#endif /* HVS_TIMEOUT_TEST */
}
#endif
if ((res = hv_storvsc_io_request(sc->hs_dev, reqp)) != 0) {
xpt_print(ccb->ccb_h.path,
"hv_storvsc_io_request failed with %d\n", res);
ccb->ccb_h.status = CAM_PROVIDE_FAIL;
storvsc_free_request(sc, reqp);
xpt_done(ccb);
return;
}
return;
}
default:
ccb->ccb_h.status = CAM_REQ_INVALID;
xpt_done(ccb);
return;
}
}
/**
* @brief destroy bounce buffer
*
 * This function is responsible for destroying a Scatter/Gather list
 * that was created by storvsc_create_bounce_buffer().
 *
 * @param sgl the Scatter/Gather list to be destroyed
*
*/
static void
storvsc_destroy_bounce_buffer(struct sglist *sgl)
{
struct hv_sgl_node *sgl_node = NULL;
if (LIST_EMPTY(&g_hv_sgl_page_pool.in_use_sgl_list)) {
printf("storvsc error: not enough in use sgl\n");
return;
}
sgl_node = LIST_FIRST(&g_hv_sgl_page_pool.in_use_sgl_list);
LIST_REMOVE(sgl_node, link);
sgl_node->sgl_data = sgl;
LIST_INSERT_HEAD(&g_hv_sgl_page_pool.free_sgl_list, sgl_node, link);
}
/**
* @brief create bounce buffer
*
 * This function is responsible for creating a Scatter/Gather list
 * that holds several page-sized, page-aligned buffers.
 *
 * @param seg_count number of SG-list segments
 * @param write if WRITE_TYPE, set each segment's used size to 0,
 * otherwise set it to the page size.
 *
 * @returns the SG list, or NULL on failure
*/
static struct sglist *
storvsc_create_bounce_buffer(uint16_t seg_count, int write)
{
int i = 0;
struct sglist *bounce_sgl = NULL;
unsigned int buf_len = ((write == WRITE_TYPE) ? 0 : PAGE_SIZE);
struct hv_sgl_node *sgl_node = NULL;
/* get struct sglist from free_sgl_list */
if (LIST_EMPTY(&g_hv_sgl_page_pool.free_sgl_list)) {
printf("storvsc error: not enough free sgl\n");
return NULL;
}
sgl_node = LIST_FIRST(&g_hv_sgl_page_pool.free_sgl_list);
LIST_REMOVE(sgl_node, link);
bounce_sgl = sgl_node->sgl_data;
LIST_INSERT_HEAD(&g_hv_sgl_page_pool.in_use_sgl_list, sgl_node, link);
bounce_sgl->sg_maxseg = seg_count;
if (write == WRITE_TYPE)
bounce_sgl->sg_nseg = 0;
else
bounce_sgl->sg_nseg = seg_count;
for (i = 0; i < seg_count; i++)
bounce_sgl->sg_segs[i].ss_len = buf_len;
return bounce_sgl;
}
/**
* @brief copy data from SG list to bounce buffer
*
 * This function is responsible for copying data from one SG list's
 * segments to another SG list used as a bounce buffer.
 *
 * @param bounce_sgl the destination SG list
 * @param orig_sgl the segments of the source SG list
 * @param orig_sgl_count the number of source segments
 * @param seg_bits bitmask of the segments that need the bounce
 * buffer; a set bit means the segment is bounced.
*
*/
static void
storvsc_copy_sgl_to_bounce_buf(struct sglist *bounce_sgl,
bus_dma_segment_t *orig_sgl,
unsigned int orig_sgl_count,
uint64_t seg_bits)
{
int src_sgl_idx = 0;
for (src_sgl_idx = 0; src_sgl_idx < orig_sgl_count; src_sgl_idx++) {
if (seg_bits & (1 << src_sgl_idx)) {
memcpy((void*)bounce_sgl->sg_segs[src_sgl_idx].ss_paddr,
(void*)orig_sgl[src_sgl_idx].ds_addr,
orig_sgl[src_sgl_idx].ds_len);
bounce_sgl->sg_segs[src_sgl_idx].ss_len =
orig_sgl[src_sgl_idx].ds_len;
}
}
}
/**
 * @brief copy data from an SG list used as bounce buffer to another SG list
 *
 * This function is responsible for copying data from an SG list used
 * as a bounce buffer back into another SG list's segments.
 *
 * @param dest_sgl the destination SG list's segments
 * @param dest_sgl_count the number of destination SG list segments
 * @param src_sgl the source SG list
 * @param seg_bits bitmask of the source segments that used the bounce buffer
*
*/
void
storvsc_copy_from_bounce_buf_to_sgl(bus_dma_segment_t *dest_sgl,
unsigned int dest_sgl_count,
struct sglist* src_sgl,
uint64_t seg_bits)
{
int sgl_idx = 0;
for (sgl_idx = 0; sgl_idx < dest_sgl_count; sgl_idx++) {
if (seg_bits & (1 << sgl_idx)) {
memcpy((void*)(dest_sgl[sgl_idx].ds_addr),
(void*)(src_sgl->sg_segs[sgl_idx].ss_paddr),
src_sgl->sg_segs[sgl_idx].ss_len);
}
}
}
/**
 * @brief check whether an SG list needs a bounce buffer
 *
 * This function is responsible for checking whether a bounce buffer
 * is needed for the SG list.
 *
 * @param sgl the SG list's segments
 * @param sg_count the number of SG list segments
 * @param bits set to a bitmask of the segments that need the bounce buffer
 *
 * @returns -1 if the SG list does not need a bounce buffer, 0 otherwise
*/
static int
storvsc_check_bounce_buffer_sgl(bus_dma_segment_t *sgl,
unsigned int sg_count,
uint64_t *bits)
{
int i = 0;
int offset = 0;
uint64_t phys_addr = 0;
uint64_t tmp_bits = 0;
boolean_t found_hole = FALSE;
boolean_t pre_aligned = TRUE;
if (sg_count < 2){
return -1;
}
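/*
 * The request's pfn_array describes the buffer as a single starting
 * offset plus a list of whole pages, so every segment after the
 * first must start page-aligned. Scan for "holes": an unaligned
 * segment that is not physically contiguous with its predecessor
 * forces bouncing of the unaligned segments, recorded in *bits.
 */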
*bits = 0;
phys_addr = vtophys(sgl[0].ds_addr);
offset = phys_addr - trunc_page(phys_addr);
if (offset != 0) {
pre_aligned = FALSE;
tmp_bits |= 1;
}
for (i = 1; i < sg_count; i++) {
phys_addr = vtophys(sgl[i].ds_addr);
offset = phys_addr - trunc_page(phys_addr);
if (offset == 0) {
if (FALSE == pre_aligned) {
/*
 * This segment is aligned; if the previous
 * one was not, we have found a hole.
 */
found_hole = TRUE;
}
pre_aligned = TRUE;
} else {
tmp_bits |= 1 << i;
if (!pre_aligned) {
if (phys_addr != vtophys(sgl[i-1].ds_addr +
sgl[i-1].ds_len)) {
/*
 * Check whether this segment is contiguous
 * with the previous one; if not, we have
 * found a hole.
 */
found_hole = TRUE;
}
} else {
found_hole = TRUE;
}
pre_aligned = FALSE;
}
}
if (!found_hole) {
return (-1);
} else {
*bits = tmp_bits;
return 0;
}
}
/**
* @brief Fill in a request structure based on a CAM control block
*
* Fills in a request structure based on the contents of a CAM control
* block. The request structure holds the payload information for
* VSCSI protocol request.
*
 * @param ccb pointer to a CAM control block
* @param reqp pointer to a request structure
*/
static int
create_storvsc_request(union ccb *ccb, struct hv_storvsc_request *reqp)
{
struct ccb_scsiio *csio = &ccb->csio;
uint64_t phys_addr;
uint32_t bytes_to_copy = 0;
uint32_t pfn_num = 0;
uint32_t pfn;
uint64_t not_aligned_seg_bits = 0;
/* refer to struct vmscsi_req for meanings of these two fields */
reqp->vstor_packet.u.vm_srb.port =
cam_sim_unit(xpt_path_sim(ccb->ccb_h.path));
reqp->vstor_packet.u.vm_srb.path_id =
cam_sim_bus(xpt_path_sim(ccb->ccb_h.path));
reqp->vstor_packet.u.vm_srb.target_id = ccb->ccb_h.target_id;
reqp->vstor_packet.u.vm_srb.lun = ccb->ccb_h.target_lun;
reqp->vstor_packet.u.vm_srb.cdb_len = csio->cdb_len;
if(ccb->ccb_h.flags & CAM_CDB_POINTER) {
memcpy(&reqp->vstor_packet.u.vm_srb.u.cdb, csio->cdb_io.cdb_ptr,
csio->cdb_len);
} else {
memcpy(&reqp->vstor_packet.u.vm_srb.u.cdb, csio->cdb_io.cdb_bytes,
csio->cdb_len);
}
switch (ccb->ccb_h.flags & CAM_DIR_MASK) {
case CAM_DIR_OUT:
reqp->vstor_packet.u.vm_srb.data_in = WRITE_TYPE;
break;
case CAM_DIR_IN:
reqp->vstor_packet.u.vm_srb.data_in = READ_TYPE;
break;
case CAM_DIR_NONE:
reqp->vstor_packet.u.vm_srb.data_in = UNKNOWN_TYPE;
break;
default:
reqp->vstor_packet.u.vm_srb.data_in = UNKNOWN_TYPE;
break;
}
reqp->sense_data = &csio->sense_data;
reqp->sense_info_len = csio->sense_len;
reqp->ccb = ccb;
if (0 == csio->dxfer_len) {
return (0);
}
reqp->data_buf.length = csio->dxfer_len;
switch (ccb->ccb_h.flags & CAM_DATA_MASK) {
case CAM_DATA_VADDR:
{
bytes_to_copy = csio->dxfer_len;
phys_addr = vtophys(csio->data_ptr);
reqp->data_buf.offset = phys_addr & PAGE_MASK;
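/*
 * Walk the virtually contiguous buffer one page at a time and
 * record each page's physical frame number; only the first page
 * may start at a non-zero offset.
 */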
while (bytes_to_copy != 0) {
int bytes, page_offset;
phys_addr =
vtophys(&csio->data_ptr[reqp->data_buf.length -
bytes_to_copy]);
pfn = phys_addr >> PAGE_SHIFT;
reqp->data_buf.pfn_array[pfn_num] = pfn;
page_offset = phys_addr & PAGE_MASK;
bytes = min(PAGE_SIZE - page_offset, bytes_to_copy);
bytes_to_copy -= bytes;
pfn_num++;
}
break;
}
case CAM_DATA_SG:
{
int i = 0;
int offset = 0;
int ret;
bus_dma_segment_t *storvsc_sglist =
(bus_dma_segment_t *)ccb->csio.data_ptr;
u_int16_t storvsc_sg_count = ccb->csio.sglist_cnt;
printf("Storvsc: get SG I/O operation, %d\n",
reqp->vstor_packet.u.vm_srb.data_in);
if (storvsc_sg_count > HV_MAX_MULTIPAGE_BUFFER_COUNT){
printf("Storvsc: %d segments is too much, "
"only support %d segments\n",
storvsc_sg_count, HV_MAX_MULTIPAGE_BUFFER_COUNT);
return (EINVAL);
}
/*
 * We currently create our own bounce buffers. Ideally we should use
 * the BUS_DMA(9) framework, but with the current BUS_DMA code there
 * is no callback API to check the page alignment of middle segments
 * before busdma decides whether a bounce buffer is needed for a
 * particular segment. There is a callback,
 * "bus_dma_filter_t *filter", but its parameters are not
 * sufficient for the storvsc driver.
* TODO:
* Add page alignment check in BUS_DMA(9) callback. Once
* this is complete, switch the following code to use
* BUS_DMA(9) for storvsc bounce buffer support.
*/
/* check if we need to create bounce buffer */
ret = storvsc_check_bounce_buffer_sgl(storvsc_sglist,
storvsc_sg_count, &not_aligned_seg_bits);
if (ret != -1) {
reqp->bounce_sgl =
storvsc_create_bounce_buffer(storvsc_sg_count,
reqp->vstor_packet.u.vm_srb.data_in);
if (NULL == reqp->bounce_sgl) {
printf("Storvsc_error: "
"create bounce buffer failed.\n");
return (ENOMEM);
}
reqp->bounce_sgl_count = storvsc_sg_count;
reqp->not_aligned_seg_bits = not_aligned_seg_bits;
/*
 * If it is a write, we need to copy the original
 * data to the bounce buffer.
 */
if (WRITE_TYPE == reqp->vstor_packet.u.vm_srb.data_in) {
storvsc_copy_sgl_to_bounce_buf(
reqp->bounce_sgl,
storvsc_sglist,
storvsc_sg_count,
reqp->not_aligned_seg_bits);
}
/* Translate virtual addresses to physical frame numbers. */
if (reqp->not_aligned_seg_bits & 0x1) {
phys_addr =
vtophys(reqp->bounce_sgl->sg_segs[0].ss_paddr);
} else {
phys_addr =
vtophys(storvsc_sglist[0].ds_addr);
}
reqp->data_buf.offset = phys_addr & PAGE_MASK;
pfn = phys_addr >> PAGE_SHIFT;
reqp->data_buf.pfn_array[0] = pfn;
for (i = 1; i < storvsc_sg_count; i++) {
if (reqp->not_aligned_seg_bits & (1 << i)) {
phys_addr =
vtophys(reqp->bounce_sgl->sg_segs[i].ss_paddr);
} else {
phys_addr =
vtophys(storvsc_sglist[i].ds_addr);
}
pfn = phys_addr >> PAGE_SHIFT;
reqp->data_buf.pfn_array[i] = pfn;
}
} else {
phys_addr = vtophys(storvsc_sglist[0].ds_addr);
reqp->data_buf.offset = phys_addr & PAGE_MASK;
for (i = 0; i < storvsc_sg_count; i++) {
phys_addr = vtophys(storvsc_sglist[i].ds_addr);
pfn = phys_addr >> PAGE_SHIFT;
reqp->data_buf.pfn_array[i] = pfn;
}
/* If the last segment crosses a page boundary, record the extra page. */
offset = phys_addr & PAGE_MASK;
if (offset) {
phys_addr =
vtophys(storvsc_sglist[i-1].ds_addr +
PAGE_SIZE - offset);
pfn = phys_addr >> PAGE_SHIFT;
reqp->data_buf.pfn_array[i] = pfn;
}
reqp->bounce_sgl_count = 0;
}
break;
}
default:
printf("Unknow flags: %d\n", ccb->ccb_h.flags);
return(EINVAL);
}
return(0);
}
/*
 * Modified from scsi_print_inquiry, which is responsible for
 * printing the details of scsi_inquiry_data.
*
* Return 1 if it is valid, 0 otherwise.
*/
static inline int
is_inquiry_valid(const struct scsi_inquiry_data *inq_data)
{
uint8_t type;
char vendor[16], product[48], revision[16];
/*
* Check device type and qualifier
*/
if (!(SID_QUAL_IS_VENDOR_UNIQUE(inq_data) ||
SID_QUAL(inq_data) == SID_QUAL_LU_CONNECTED))
return (0);
type = SID_TYPE(inq_data);
switch (type) {
case T_DIRECT:
case T_SEQUENTIAL:
case T_PRINTER:
case T_PROCESSOR:
case T_WORM:
case T_CDROM:
case T_SCANNER:
case T_OPTICAL:
case T_CHANGER:
case T_COMM:
case T_STORARRAY:
case T_ENCLOSURE:
case T_RBC:
case T_OCRW:
case T_OSD:
case T_ADC:
break;
case T_NODEVICE:
default:
return (0);
}
/*
* Check vendor, product, and revision
*/
cam_strvis(vendor, inq_data->vendor, sizeof(inq_data->vendor),
sizeof(vendor));
cam_strvis(product, inq_data->product, sizeof(inq_data->product),
sizeof(product));
cam_strvis(revision, inq_data->revision, sizeof(inq_data->revision),
sizeof(revision));
if (strlen(vendor) == 0 ||
strlen(product) == 0 ||
strlen(revision) == 0)
return (0);
return (1);
}
/**
* @brief completion function before returning to CAM
*
* I/O process has been completed and the result needs
* to be passed to the CAM layer.
* Free resources related to this request.
*
* @param reqp pointer to a request structure
*/
static void
storvsc_io_done(struct hv_storvsc_request *reqp)
{
union ccb *ccb = reqp->ccb;
struct ccb_scsiio *csio = &ccb->csio;
struct storvsc_softc *sc = reqp->softc;
struct vmscsi_req *vm_srb = &reqp->vstor_packet.u.vm_srb;
bus_dma_segment_t *ori_sglist = NULL;
int ori_sg_count = 0;
/* destroy bounce buffer if it is used */
if (reqp->bounce_sgl_count) {
ori_sglist = (bus_dma_segment_t *)ccb->csio.data_ptr;
ori_sg_count = ccb->csio.sglist_cnt;
/*
 * If it was a READ operation, copy the data back to the
 * original SG list.
 */
if (READ_TYPE == reqp->vstor_packet.u.vm_srb.data_in) {
storvsc_copy_from_bounce_buf_to_sgl(ori_sglist,
ori_sg_count,
reqp->bounce_sgl,
reqp->not_aligned_seg_bits);
}
storvsc_destroy_bounce_buffer(reqp->bounce_sgl);
reqp->bounce_sgl_count = 0;
}
if (reqp->retries > 0) {
mtx_lock(&sc->hs_lock);
#if HVS_TIMEOUT_TEST
xpt_print(ccb->ccb_h.path,
"%u: IO returned after timeout, "
"waking up timer handler if any.\n", ticks);
mtx_lock(&reqp->event.mtx);
cv_signal(&reqp->event.cv);
mtx_unlock(&reqp->event.mtx);
#endif
reqp->retries = 0;
xpt_print(ccb->ccb_h.path,
"%u: IO returned after timeout, "
"stopping timer if any.\n", ticks);
mtx_unlock(&sc->hs_lock);
}
#ifdef notyet
/*
* callout_drain() will wait for the timer handler to finish
* if it is running. So we don't need any lock to synchronize
* between this routine and the timer handler.
* Note that we need to make sure reqp is not freed when timer
* handler is using or will use it.
*/
if (ccb->ccb_h.timeout != CAM_TIME_INFINITY) {
callout_drain(&reqp->callout);
}
#endif
ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
ccb->ccb_h.status &= ~CAM_STATUS_MASK;
if (vm_srb->scsi_status == SCSI_STATUS_OK) {
const struct scsi_generic *cmd;
/*
* Check whether the data for INQUIRY cmd is valid or
* not. Windows 10 and Windows 2016 send all zero
* inquiry data to VM even for unpopulated slots.
*/
cmd = (const struct scsi_generic *)
((ccb->ccb_h.flags & CAM_CDB_POINTER) ?
csio->cdb_io.cdb_ptr : csio->cdb_io.cdb_bytes);
if (cmd->opcode == INQUIRY &&
/*
 * XXX: Temporary workaround for disk hot plug on win2k12r2;
 * we only filter the invalid disk on win10 or 2016 server,
 * so hot plug on win10 and 2016 server still needs
 * to be fixed.
*/
vmstor_proto_version == VMSTOR_PROTOCOL_VERSION_WIN10 &&
is_inquiry_valid(
(const struct scsi_inquiry_data *)csio->data_ptr) == 0) {
ccb->ccb_h.status |= CAM_DEV_NOT_THERE;
if (bootverbose) {
mtx_lock(&sc->hs_lock);
xpt_print(ccb->ccb_h.path,
"storvsc uninstalled device\n");
mtx_unlock(&sc->hs_lock);
}
} else {
ccb->ccb_h.status |= CAM_REQ_CMP;
}
} else {
mtx_lock(&sc->hs_lock);
xpt_print(ccb->ccb_h.path,
"storvsc scsi_status = %d\n",
vm_srb->scsi_status);
mtx_unlock(&sc->hs_lock);
ccb->ccb_h.status |= CAM_SCSI_STATUS_ERROR;
}
ccb->csio.scsi_status = (vm_srb->scsi_status & 0xFF);
ccb->csio.resid = ccb->csio.dxfer_len - vm_srb->transfer_len;
if (reqp->sense_info_len != 0) {
csio->sense_resid = csio->sense_len - reqp->sense_info_len;
ccb->ccb_h.status |= CAM_AUTOSNS_VALID;
}
mtx_lock(&sc->hs_lock);
if (reqp->softc->hs_frozen == 1) {
xpt_print(ccb->ccb_h.path,
"%u: storvsc unfreezing softc 0x%p.\n",
ticks, reqp->softc);
ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
reqp->softc->hs_frozen = 0;
}
storvsc_free_request(sc, reqp);
mtx_unlock(&sc->hs_lock);
xpt_done_direct(ccb);
}
/**
* @brief Free a request structure
*
* Free a request structure by returning it to the free list
*
* @param sc pointer to a softc
* @param reqp pointer to a request structure
*/
static void
storvsc_free_request(struct storvsc_softc *sc, struct hv_storvsc_request *reqp)
{
LIST_INSERT_HEAD(&sc->hs_free_list, reqp, link);
}
/**
* @brief Determine type of storage device from GUID
*
 * Using the type GUID, determine if this is a StorVSC (paravirtual
 * SCSI) or BlkVSC (paravirtual IDE) device.
*
 * @param dev a device
 * @returns the storage type of the device
*/
static enum hv_storage_type
storvsc_get_storage_type(device_t dev)
{
const char *p = vmbus_get_type(dev);
if (!memcmp(p, &gBlkVscDeviceType, sizeof(hv_guid))) {
return DRIVER_BLKVSC;
} else if (!memcmp(p, &gStorVscDeviceType, sizeof(hv_guid))) {
return DRIVER_STORVSC;
}
return (DRIVER_UNKNOWN);
}
Index: head/sys/dev/iwm/if_iwmreg.h
===================================================================
--- head/sys/dev/iwm/if_iwmreg.h (revision 300049)
+++ head/sys/dev/iwm/if_iwmreg.h (revision 300050)
@@ -1,5320 +1,5320 @@
/* $OpenBSD: if_iwmreg.h,v 1.3 2015/02/23 10:25:20 stsp Exp $ */
/* $FreeBSD$ */
/******************************************************************************
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
*
* GPL LICENSE SUMMARY
*
* Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
* USA
*
* The full GNU General Public License is included in this distribution
* in the file called COPYING.
*
* Contact Information:
* Intel Linux Wireless <ilw@linux.intel.com>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
* BSD LICENSE
*
* Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*****************************************************************************/
#ifndef __IF_IWM_REG_H__
#define __IF_IWM_REG_H__
#define le16_to_cpup(_a_) (le16toh(*(const uint16_t *)(_a_)))
#define le32_to_cpup(_a_) (le32toh(*(const uint32_t *)(_a_)))
/*
* BEGIN iwl-csr.h
*/
/*
* CSR (control and status registers)
*
* CSR registers are mapped directly into PCI bus space, and are accessible
* whenever platform supplies power to device, even when device is in
* low power states due to driver-invoked device resets
* (e.g. IWM_CSR_RESET_REG_FLAG_SW_RESET) or uCode-driven power-saving modes.
*
* Use iwl_write32() and iwl_read32() family to access these registers;
* these provide simple PCI bus access, without waking up the MAC.
* Do not use iwl_write_direct32() family for these registers;
* no need to "grab nic access" via IWM_CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ.
* The MAC (uCode processor, etc.) does not need to be powered up for accessing
* the CSR registers.
*
* NOTE: Device does need to be awake in order to read this memory
* via IWM_CSR_EEPROM and IWM_CSR_OTP registers
*/
#define IWM_CSR_HW_IF_CONFIG_REG (0x000) /* hardware interface config */
#define IWM_CSR_INT_COALESCING (0x004) /* accum ints, 32-usec units */
#define IWM_CSR_INT (0x008) /* host interrupt status/ack */
#define IWM_CSR_INT_MASK (0x00c) /* host interrupt enable */
#define IWM_CSR_FH_INT_STATUS (0x010) /* busmaster int status/ack*/
#define IWM_CSR_GPIO_IN (0x018) /* read external chip pins */
#define IWM_CSR_RESET (0x020) /* busmaster enable, NMI, etc*/
#define IWM_CSR_GP_CNTRL (0x024)
/* 2nd byte of IWM_CSR_INT_COALESCING, not accessible via iwl_write32()! */
#define IWM_CSR_INT_PERIODIC_REG (0x005)
/*
* Hardware revision info
* Bit fields:
* 31-16: Reserved
* 15-4: Type of device: see IWM_CSR_HW_REV_TYPE_xxx definitions
* 3-2: Revision step: 0 = A, 1 = B, 2 = C, 3 = D
* 1-0: "Dash" (-) value, as in A-1, etc.
*/
#define IWM_CSR_HW_REV (0x028)
/*
* EEPROM and OTP (one-time-programmable) memory reads
*
* NOTE: Device must be awake, initialized via apm_ops.init(),
* in order to read.
*/
#define IWM_CSR_EEPROM_REG (0x02c)
#define IWM_CSR_EEPROM_GP (0x030)
#define IWM_CSR_OTP_GP_REG (0x034)
#define IWM_CSR_GIO_REG (0x03C)
#define IWM_CSR_GP_UCODE_REG (0x048)
#define IWM_CSR_GP_DRIVER_REG (0x050)
/*
* UCODE-DRIVER GP (general purpose) mailbox registers.
* SET/CLR registers set/clear bit(s) if "1" is written.
*/
#define IWM_CSR_UCODE_DRV_GP1 (0x054)
#define IWM_CSR_UCODE_DRV_GP1_SET (0x058)
#define IWM_CSR_UCODE_DRV_GP1_CLR (0x05c)
#define IWM_CSR_UCODE_DRV_GP2 (0x060)
#define IWM_CSR_LED_REG (0x094)
#define IWM_CSR_DRAM_INT_TBL_REG (0x0A0)
#define IWM_CSR_MAC_SHADOW_REG_CTRL (0x0A8) /* 6000 and up */
/* GIO Chicken Bits (PCI Express bus link power management) */
#define IWM_CSR_GIO_CHICKEN_BITS (0x100)
/* Analog phase-lock-loop configuration */
#define IWM_CSR_ANA_PLL_CFG (0x20c)
/*
* CSR Hardware Revision Workaround Register. Indicates hardware rev;
* "step" determines CCK backoff for txpower calculation. Used for 4965 only.
* See also IWM_CSR_HW_REV register.
* Bit fields:
* 3-2: 0 = A, 1 = B, 2 = C, 3 = D step
* 1-0: "Dash" (-) value, as in C-1, etc.
*/
#define IWM_CSR_HW_REV_WA_REG (0x22C)
#define IWM_CSR_DBG_HPET_MEM_REG (0x240)
#define IWM_CSR_DBG_LINK_PWR_MGMT_REG (0x250)
/* Bits for IWM_CSR_HW_IF_CONFIG_REG */
#define IWM_CSR_HW_IF_CONFIG_REG_MSK_MAC_DASH (0x00000003)
#define IWM_CSR_HW_IF_CONFIG_REG_MSK_MAC_STEP (0x0000000C)
#define IWM_CSR_HW_IF_CONFIG_REG_MSK_BOARD_VER (0x000000C0)
#define IWM_CSR_HW_IF_CONFIG_REG_BIT_MAC_SI (0x00000100)
#define IWM_CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI (0x00000200)
#define IWM_CSR_HW_IF_CONFIG_REG_MSK_PHY_TYPE (0x00000C00)
#define IWM_CSR_HW_IF_CONFIG_REG_MSK_PHY_DASH (0x00003000)
#define IWM_CSR_HW_IF_CONFIG_REG_MSK_PHY_STEP (0x0000C000)
#define IWM_CSR_HW_IF_CONFIG_REG_POS_MAC_DASH (0)
#define IWM_CSR_HW_IF_CONFIG_REG_POS_MAC_STEP (2)
#define IWM_CSR_HW_IF_CONFIG_REG_POS_BOARD_VER (6)
#define IWM_CSR_HW_IF_CONFIG_REG_POS_PHY_TYPE (10)
#define IWM_CSR_HW_IF_CONFIG_REG_POS_PHY_DASH (12)
#define IWM_CSR_HW_IF_CONFIG_REG_POS_PHY_STEP (14)
#define IWM_CSR_HW_IF_CONFIG_REG_BIT_HAP_WAKE_L1A (0x00080000)
#define IWM_CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM (0x00200000)
#define IWM_CSR_HW_IF_CONFIG_REG_BIT_NIC_READY (0x00400000) /* PCI_OWN_SEM */
#define IWM_CSR_HW_IF_CONFIG_REG_BIT_NIC_PREPARE_DONE (0x02000000) /* ME_OWN */
#define IWM_CSR_HW_IF_CONFIG_REG_PREPARE (0x08000000) /* WAKE_ME */
#define IWM_CSR_INT_PERIODIC_DIS (0x00) /* disable periodic int*/
#define IWM_CSR_INT_PERIODIC_ENA (0xFF) /* 255*32 usec ~ 8 msec*/
/* interrupt flags in INTA, set by uCode or hardware (e.g. dma),
* acknowledged (reset) by host writing "1" to flagged bits. */
#define IWM_CSR_INT_BIT_FH_RX (1 << 31) /* Rx DMA, cmd responses, FH_INT[17:16] */
#define IWM_CSR_INT_BIT_HW_ERR (1 << 29) /* DMA hardware error FH_INT[31] */
#define IWM_CSR_INT_BIT_RX_PERIODIC (1 << 28) /* Rx periodic */
#define IWM_CSR_INT_BIT_FH_TX (1 << 27) /* Tx DMA FH_INT[1:0] */
#define IWM_CSR_INT_BIT_SCD (1 << 26) /* TXQ pointer advanced */
#define IWM_CSR_INT_BIT_SW_ERR (1 << 25) /* uCode error */
#define IWM_CSR_INT_BIT_RF_KILL (1 << 7) /* HW RFKILL switch GP_CNTRL[27] toggled */
#define IWM_CSR_INT_BIT_CT_KILL (1 << 6) /* Critical temp (chip too hot) rfkill */
#define IWM_CSR_INT_BIT_SW_RX (1 << 3) /* Rx, command responses */
#define IWM_CSR_INT_BIT_WAKEUP (1 << 1) /* NIC controller waking up (pwr mgmt) */
#define IWM_CSR_INT_BIT_ALIVE (1 << 0) /* uCode interrupts once it initializes */
#define IWM_CSR_INI_SET_MASK (IWM_CSR_INT_BIT_FH_RX | \
IWM_CSR_INT_BIT_HW_ERR | \
IWM_CSR_INT_BIT_FH_TX | \
IWM_CSR_INT_BIT_SW_ERR | \
IWM_CSR_INT_BIT_RF_KILL | \
IWM_CSR_INT_BIT_SW_RX | \
IWM_CSR_INT_BIT_WAKEUP | \
IWM_CSR_INT_BIT_ALIVE | \
IWM_CSR_INT_BIT_RX_PERIODIC)
/* interrupt flags in FH (flow handler) (PCI busmaster DMA) */
#define IWM_CSR_FH_INT_BIT_ERR (1 << 31) /* Error */
#define IWM_CSR_FH_INT_BIT_HI_PRIOR (1 << 30) /* High priority Rx, bypass coalescing */
#define IWM_CSR_FH_INT_BIT_RX_CHNL1 (1 << 17) /* Rx channel 1 */
#define IWM_CSR_FH_INT_BIT_RX_CHNL0 (1 << 16) /* Rx channel 0 */
#define IWM_CSR_FH_INT_BIT_TX_CHNL1 (1 << 1) /* Tx channel 1 */
#define IWM_CSR_FH_INT_BIT_TX_CHNL0 (1 << 0) /* Tx channel 0 */
#define IWM_CSR_FH_INT_RX_MASK (IWM_CSR_FH_INT_BIT_HI_PRIOR | \
IWM_CSR_FH_INT_BIT_RX_CHNL1 | \
IWM_CSR_FH_INT_BIT_RX_CHNL0)
#define IWM_CSR_FH_INT_TX_MASK (IWM_CSR_FH_INT_BIT_TX_CHNL1 | \
IWM_CSR_FH_INT_BIT_TX_CHNL0)
/* GPIO */
#define IWM_CSR_GPIO_IN_BIT_AUX_POWER (0x00000200)
#define IWM_CSR_GPIO_IN_VAL_VAUX_PWR_SRC (0x00000000)
#define IWM_CSR_GPIO_IN_VAL_VMAIN_PWR_SRC (0x00000200)
/* RESET */
#define IWM_CSR_RESET_REG_FLAG_NEVO_RESET (0x00000001)
#define IWM_CSR_RESET_REG_FLAG_FORCE_NMI (0x00000002)
#define IWM_CSR_RESET_REG_FLAG_SW_RESET (0x00000080)
#define IWM_CSR_RESET_REG_FLAG_MASTER_DISABLED (0x00000100)
#define IWM_CSR_RESET_REG_FLAG_STOP_MASTER (0x00000200)
#define IWM_CSR_RESET_LINK_PWR_MGMT_DISABLED (0x80000000)
/*
* GP (general purpose) CONTROL REGISTER
* Bit fields:
* 27: HW_RF_KILL_SW
* Indicates state of (platform's) hardware RF-Kill switch
* 26-24: POWER_SAVE_TYPE
* Indicates current power-saving mode:
* 000 -- No power saving
* 001 -- MAC power-down
* 010 -- PHY (radio) power-down
* 011 -- Error
* 9-6: SYS_CONFIG
* Indicates current system configuration, reflecting pins on chip
* as forced high/low by device circuit board.
* 4: GOING_TO_SLEEP
* Indicates MAC is entering a power-saving sleep power-down.
* Not a good time to access device-internal resources.
* 3: MAC_ACCESS_REQ
* Host sets this to request and maintain MAC wakeup, to allow host
* access to device-internal resources. Host must wait for
* MAC_CLOCK_READY (and !GOING_TO_SLEEP) before accessing non-CSR
* device registers.
* 2: INIT_DONE
* Host sets this to put device into fully operational D0 power mode.
* Host resets this after SW_RESET to put device into low power mode.
* 0: MAC_CLOCK_READY
* Indicates MAC (ucode processor, etc.) is powered up and can run.
* Internal resources are accessible.
* NOTE: This does not indicate that the processor is actually running.
* NOTE: This does not indicate that device has completed
* init or post-power-down restore of internal SRAM memory.
* Use IWM_CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP as indication that
* SRAM is restored and uCode is in normal operation mode.
* Later devices (5xxx/6xxx/1xxx) use non-volatile SRAM, and
* do not need to save/restore it.
* NOTE: After device reset, this bit remains "0" until host sets
* INIT_DONE
*/
#define IWM_CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY (0x00000001)
#define IWM_CSR_GP_CNTRL_REG_FLAG_INIT_DONE (0x00000004)
#define IWM_CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ (0x00000008)
#define IWM_CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP (0x00000010)
#define IWM_CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN (0x00000001)
#define IWM_CSR_GP_CNTRL_REG_MSK_POWER_SAVE_TYPE (0x07000000)
#define IWM_CSR_GP_CNTRL_REG_FLAG_MAC_POWER_SAVE (0x04000000)
#define IWM_CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW (0x08000000)
/* HW REV */
#define IWM_CSR_HW_REV_DASH(_val) (((_val) & 0x0000003) >> 0)
#define IWM_CSR_HW_REV_STEP(_val) (((_val) & 0x000000C) >> 2)
#define IWM_CSR_HW_REV_TYPE_MSK (0x000FFF0)
#define IWM_CSR_HW_REV_TYPE_5300 (0x0000020)
#define IWM_CSR_HW_REV_TYPE_5350 (0x0000030)
#define IWM_CSR_HW_REV_TYPE_5100 (0x0000050)
#define IWM_CSR_HW_REV_TYPE_5150 (0x0000040)
#define IWM_CSR_HW_REV_TYPE_1000 (0x0000060)
#define IWM_CSR_HW_REV_TYPE_6x00 (0x0000070)
#define IWM_CSR_HW_REV_TYPE_6x50 (0x0000080)
#define IWM_CSR_HW_REV_TYPE_6150 (0x0000084)
#define IWM_CSR_HW_REV_TYPE_6x05 (0x00000B0)
#define IWM_CSR_HW_REV_TYPE_6x30 IWM_CSR_HW_REV_TYPE_6x05
#define IWM_CSR_HW_REV_TYPE_6x35 IWM_CSR_HW_REV_TYPE_6x05
#define IWM_CSR_HW_REV_TYPE_2x30 (0x00000C0)
#define IWM_CSR_HW_REV_TYPE_2x00 (0x0000100)
#define IWM_CSR_HW_REV_TYPE_105 (0x0000110)
#define IWM_CSR_HW_REV_TYPE_135 (0x0000120)
#define IWM_CSR_HW_REV_TYPE_NONE (0x00001F0)
/* EEPROM REG */
#define IWM_CSR_EEPROM_REG_READ_VALID_MSK (0x00000001)
#define IWM_CSR_EEPROM_REG_BIT_CMD (0x00000002)
#define IWM_CSR_EEPROM_REG_MSK_ADDR (0x0000FFFC)
#define IWM_CSR_EEPROM_REG_MSK_DATA (0xFFFF0000)
/* EEPROM GP */
#define IWM_CSR_EEPROM_GP_VALID_MSK (0x00000007) /* signature */
#define IWM_CSR_EEPROM_GP_IF_OWNER_MSK (0x00000180)
#define IWM_CSR_EEPROM_GP_BAD_SIGNATURE_BOTH_EEP_AND_OTP (0x00000000)
#define IWM_CSR_EEPROM_GP_BAD_SIG_EEP_GOOD_SIG_OTP (0x00000001)
#define IWM_CSR_EEPROM_GP_GOOD_SIG_EEP_LESS_THAN_4K (0x00000002)
#define IWM_CSR_EEPROM_GP_GOOD_SIG_EEP_MORE_THAN_4K (0x00000004)
/* One-time-programmable memory general purpose reg */
#define IWM_CSR_OTP_GP_REG_DEVICE_SELECT (0x00010000) /* 0 - EEPROM, 1 - OTP */
#define IWM_CSR_OTP_GP_REG_OTP_ACCESS_MODE (0x00020000) /* 0 - absolute, 1 - relative */
#define IWM_CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK (0x00100000) /* bit 20 */
#define IWM_CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK (0x00200000) /* bit 21 */
/* GP REG */
#define IWM_CSR_GP_REG_POWER_SAVE_STATUS_MSK (0x03000000) /* bit 24/25 */
#define IWM_CSR_GP_REG_NO_POWER_SAVE (0x00000000)
#define IWM_CSR_GP_REG_MAC_POWER_SAVE (0x01000000)
#define IWM_CSR_GP_REG_PHY_POWER_SAVE (0x02000000)
#define IWM_CSR_GP_REG_POWER_SAVE_ERROR (0x03000000)
/* CSR GIO */
#define IWM_CSR_GIO_REG_VAL_L0S_ENABLED (0x00000002)
/*
* UCODE-DRIVER GP (general purpose) mailbox register 1
* Host driver and uCode write and/or read this register to communicate with
* each other.
* Bit fields:
* 4: UCODE_DISABLE
* Host sets this to request permanent halt of uCode, same as
* sending CARD_STATE command with "halt" bit set.
* 3: CT_KILL_EXIT
* Host sets this to request exit from CT_KILL state, i.e. host thinks
* device temperature is low enough to continue normal operation.
* 2: CMD_BLOCKED
* Host sets this during RF KILL power-down sequence (HW, SW, CT KILL)
* to release uCode to clear all Tx and command queues, enter
* unassociated mode, and power down.
* NOTE: Some devices also use HBUS_TARG_MBX_C register for this bit.
* 1: SW_BIT_RFKILL
* Host sets this when issuing CARD_STATE command to request
* device sleep.
* 0: MAC_SLEEP
* uCode sets this when preparing a power-saving power-down.
* uCode resets this when power-up is complete and SRAM is sane.
* NOTE: device saves internal SRAM data to host when powering down,
* and must restore this data after powering back up.
* MAC_SLEEP is the best indication that restore is complete.
* Later devices (5xxx/6xxx/1xxx) use non-volatile SRAM, and
* do not need to save/restore it.
*/
#define IWM_CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP (0x00000001)
#define IWM_CSR_UCODE_SW_BIT_RFKILL (0x00000002)
#define IWM_CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED (0x00000004)
#define IWM_CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT (0x00000008)
#define IWM_CSR_UCODE_DRV_GP1_BIT_D3_CFG_COMPLETE (0x00000020)
/* GP Driver */
#define IWM_CSR_GP_DRIVER_REG_BIT_RADIO_SKU_MSK (0x00000003)
#define IWM_CSR_GP_DRIVER_REG_BIT_RADIO_SKU_3x3_HYB (0x00000000)
#define IWM_CSR_GP_DRIVER_REG_BIT_RADIO_SKU_2x2_HYB (0x00000001)
#define IWM_CSR_GP_DRIVER_REG_BIT_RADIO_SKU_2x2_IPA (0x00000002)
#define IWM_CSR_GP_DRIVER_REG_BIT_CALIB_VERSION6 (0x00000004)
#define IWM_CSR_GP_DRIVER_REG_BIT_6050_1x2 (0x00000008)
#define IWM_CSR_GP_DRIVER_REG_BIT_RADIO_IQ_INVER (0x00000080)
/* GIO Chicken Bits (PCI Express bus link power management) */
#define IWM_CSR_GIO_CHICKEN_BITS_REG_BIT_L1A_NO_L0S_RX (0x00800000)
#define IWM_CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER (0x20000000)
/* LED */
#define IWM_CSR_LED_BSM_CTRL_MSK (0xFFFFFFDF)
#define IWM_CSR_LED_REG_TURN_ON (0x60)
#define IWM_CSR_LED_REG_TURN_OFF (0x20)
/* ANA_PLL */
#define IWM_CSR50_ANA_PLL_CFG_VAL (0x00880300)
/* HPET MEM debug */
#define IWM_CSR_DBG_HPET_MEM_REG_VAL (0xFFFF0000)
/* DRAM INT TABLE */
#define IWM_CSR_DRAM_INT_TBL_ENABLE (1 << 31)
#define IWM_CSR_DRAM_INIT_TBL_WRAP_CHECK (1 << 27)
/* SECURE boot registers */
#define IWM_CSR_SECURE_BOOT_CONFIG_ADDR (0x100)
enum iwm_secure_boot_config_reg {
IWM_CSR_SECURE_BOOT_CONFIG_INSPECTOR_BURNED_IN_OTP = 0x00000001,
IWM_CSR_SECURE_BOOT_CONFIG_INSPECTOR_NOT_REQ = 0x00000002,
};
#define IWM_CSR_SECURE_BOOT_CPU1_STATUS_ADDR (0x100)
#define IWM_CSR_SECURE_BOOT_CPU2_STATUS_ADDR (0x100)
enum iwm_secure_boot_status_reg {
IWM_CSR_SECURE_BOOT_CPU_STATUS_VERF_STATUS = 0x00000003,
IWM_CSR_SECURE_BOOT_CPU_STATUS_VERF_COMPLETED = 0x00000002,
IWM_CSR_SECURE_BOOT_CPU_STATUS_VERF_SUCCESS = 0x00000004,
IWM_CSR_SECURE_BOOT_CPU_STATUS_VERF_FAIL = 0x00000008,
IWM_CSR_SECURE_BOOT_CPU_STATUS_SIGN_VERF_FAIL = 0x00000010,
};
#define IWM_CSR_UCODE_LOAD_STATUS_ADDR (0x100)
enum iwm_secure_load_status_reg {
IWM_CSR_CPU_STATUS_LOADING_STARTED = 0x00000001,
IWM_CSR_CPU_STATUS_LOADING_COMPLETED = 0x00000002,
IWM_CSR_CPU_STATUS_NUM_OF_LAST_COMPLETED = 0x000000F8,
IWM_CSR_CPU_STATUS_NUM_OF_LAST_LOADED_BLOCK = 0x0000FF00,
};
#define IWM_CSR_SECURE_INSPECTOR_CODE_ADDR (0x100)
#define IWM_CSR_SECURE_INSPECTOR_DATA_ADDR (0x100)
#define IWM_CSR_SECURE_TIME_OUT (100)
#define IWM_FH_TCSR_0_REG0 (0x1D00)
/*
* HBUS (Host-side Bus)
*
* HBUS registers are mapped directly into PCI bus space, but are used
* to indirectly access device's internal memory or registers that
* may be powered-down.
*
* Use iwl_write_direct32()/iwl_read_direct32() family for these registers;
* host must "grab nic access" via CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ
* to make sure the MAC (uCode processor, etc.) is powered up for accessing
* internal resources.
*
* Do not use iwl_write32()/iwl_read32() family to access these registers;
* these provide only simple PCI bus access, without waking up the MAC.
*/
#define IWM_HBUS_BASE (0x400)
/*
* Registers for accessing device's internal SRAM memory (e.g. SCD SRAM
* structures, error log, event log, verifying uCode load).
* First write to address register, then read from or write to data register
* to complete the job. Once the address register is set up, accesses to
* data registers auto-increment the address by one dword.
* Bit usage for address registers (read or write):
* 0-31: memory address within device
*/
#define IWM_HBUS_TARG_MEM_RADDR (IWM_HBUS_BASE+0x00c)
#define IWM_HBUS_TARG_MEM_WADDR (IWM_HBUS_BASE+0x010)
#define IWM_HBUS_TARG_MEM_WDAT (IWM_HBUS_BASE+0x018)
#define IWM_HBUS_TARG_MEM_RDAT (IWM_HBUS_BASE+0x01c)
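/*
 * Illustrative sketch: dumping "ndwords" dwords of device SRAM via the
 * auto-incrementing data register described above. The iwm_read32()/
 * iwm_write32() MMIO accessors are assumptions for illustration, and
 * "nic access" must already have been grabbed as noted above.
 *
 *	iwm_write32(sc, IWM_HBUS_TARG_MEM_RADDR, sram_addr);
 *	for (i = 0; i < ndwords; i++)
 *		buf[i] = iwm_read32(sc, IWM_HBUS_TARG_MEM_RDAT);
 */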
/* Mailbox C, used as workaround alternative to CSR_UCODE_DRV_GP1 mailbox */
#define IWM_HBUS_TARG_MBX_C (IWM_HBUS_BASE+0x030)
#define IWM_HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED (0x00000004)
/*
* Registers for accessing device's internal peripheral registers
* (e.g. SCD, BSM, etc.). First write to address register,
* then read from or write to data register to complete the job.
* Bit usage for address registers (read or write):
* 0-15: register address (offset) within device
* 24-25: (# bytes - 1) to read or write (e.g. 3 for dword)
*/
#define IWM_HBUS_TARG_PRPH_WADDR (IWM_HBUS_BASE+0x044)
#define IWM_HBUS_TARG_PRPH_RADDR (IWM_HBUS_BASE+0x048)
#define IWM_HBUS_TARG_PRPH_WDAT (IWM_HBUS_BASE+0x04c)
#define IWM_HBUS_TARG_PRPH_RDAT (IWM_HBUS_BASE+0x050)
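/*
 * Illustrative sketch: a peripheral-register dword read. Bits 24-25 of
 * the address register carry (# bytes - 1), so 3 selects a full dword.
 * The iwm_read32()/iwm_write32() MMIO accessors are assumptions for
 * illustration.
 *
 *	iwm_write32(sc, IWM_HBUS_TARG_PRPH_RADDR,
 *	    (addr & 0x000fffff) | (3 << 24));
 *	val = iwm_read32(sc, IWM_HBUS_TARG_PRPH_RDAT);
 */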
/* Used to enable DBGM */
#define IWM_HBUS_TARG_TEST_REG (IWM_HBUS_BASE+0x05c)
/*
* Per-Tx-queue write pointer (index, really!)
* Indicates index to next TFD that driver will fill (1 past latest filled).
* Bit usage:
* 0-7: queue write index
* 11-8: queue selector
*/
#define IWM_HBUS_TARG_WRPTR (IWM_HBUS_BASE+0x060)
/**********************************************************
* CSR values
**********************************************************/
/*
 * Host interrupt timeout value,
 * used when setting the interrupt coalescing timer.
 * CSR_INT_COALESCING is an 8-bit register in units of 32 usec.
 *
 * The default interrupt coalescing timer is 64 x 32 = 2048 usecs.
*/
#define IWM_HOST_INT_TIMEOUT_MAX (0xFF)
#define IWM_HOST_INT_TIMEOUT_DEF (0x40)
#define IWM_HOST_INT_TIMEOUT_MIN (0x0)
#define IWM_HOST_INT_OPER_MODE (1 << 31)
/*****************************************************************************
* 7000/3000 series SHR DTS addresses *
*****************************************************************************/
/* Diode Results Register Structure: */
enum iwm_dtd_diode_reg {
IWM_DTS_DIODE_REG_DIG_VAL = 0x000000FF, /* bits [7:0] */
IWM_DTS_DIODE_REG_VREF_LOW = 0x0000FF00, /* bits [15:8] */
IWM_DTS_DIODE_REG_VREF_HIGH = 0x00FF0000, /* bits [23:16] */
IWM_DTS_DIODE_REG_VREF_ID = 0x03000000, /* bits [25:24] */
IWM_DTS_DIODE_REG_PASS_ONCE = 0x80000000, /* bits [31:31] */
IWM_DTS_DIODE_REG_FLAGS_MSK = 0xFF000000, /* bits [31:24] */
/* Those are the masks INSIDE the flags bit-field: */
IWM_DTS_DIODE_REG_FLAGS_VREFS_ID_POS = 0,
IWM_DTS_DIODE_REG_FLAGS_VREFS_ID = 0x00000003, /* bits [1:0] */
IWM_DTS_DIODE_REG_FLAGS_PASS_ONCE_POS = 7,
IWM_DTS_DIODE_REG_FLAGS_PASS_ONCE = 0x00000080, /* bits [7:7] */
};
/*
* END iwl-csr.h
*/
/*
* BEGIN iwl-fw.h
*/
/**
 * enum iwm_ucode_tlv_flag - ucode API flags
* @IWM_UCODE_TLV_FLAGS_PAN: This is PAN capable microcode; this previously
* was a separate TLV but moved here to save space.
* @IWM_UCODE_TLV_FLAGS_NEWSCAN: new uCode scan behaviour on hidden SSID,
* treats good CRC threshold as a boolean
* @IWM_UCODE_TLV_FLAGS_MFP: This uCode image supports MFP (802.11w).
* @IWM_UCODE_TLV_FLAGS_P2P: This uCode image supports P2P.
* @IWM_UCODE_TLV_FLAGS_DW_BC_TABLE: The SCD byte count table is in DWORDS
* @IWM_UCODE_TLV_FLAGS_UAPSD: This uCode image supports uAPSD
* @IWM_UCODE_TLV_FLAGS_SHORT_BL: 16 entries of black list instead of 64 in scan
* offload profile config command.
* @IWM_UCODE_TLV_FLAGS_RX_ENERGY_API: supports rx signal strength api
* @IWM_UCODE_TLV_FLAGS_TIME_EVENT_API_V2: using the new time event API.
* @IWM_UCODE_TLV_FLAGS_D3_6_IPV6_ADDRS: D3 image supports up to six
* (rather than two) IPv6 addresses
* @IWM_UCODE_TLV_FLAGS_BF_UPDATED: new beacon filtering API
* @IWM_UCODE_TLV_FLAGS_NO_BASIC_SSID: not sending a probe with the SSID element
* from the probe request template.
* @IWM_UCODE_TLV_FLAGS_D3_CONTINUITY_API: modified D3 API to allow keeping
* connection when going back to D0
* @IWM_UCODE_TLV_FLAGS_NEW_NSOFFL_SMALL: new NS offload (small version)
* @IWM_UCODE_TLV_FLAGS_NEW_NSOFFL_LARGE: new NS offload (large version)
* @IWM_UCODE_TLV_FLAGS_SCHED_SCAN: this uCode image supports scheduled scan.
* @IWM_UCODE_TLV_FLAGS_STA_KEY_CMD: new ADD_STA and ADD_STA_KEY command API
* @IWM_UCODE_TLV_FLAGS_DEVICE_PS_CMD: support device wide power command
* containing CAM (Continuous Active Mode) indication.
* @IWM_UCODE_TLV_FLAGS_P2P_PS: P2P client power save is supported (only on a
* single bound interface).
* @IWM_UCODE_TLV_FLAGS_P2P_PS_UAPSD: P2P client supports uAPSD power save
*/
enum iwm_ucode_tlv_flag {
IWM_UCODE_TLV_FLAGS_PAN = (1 << 0),
IWM_UCODE_TLV_FLAGS_NEWSCAN = (1 << 1),
IWM_UCODE_TLV_FLAGS_MFP = (1 << 2),
IWM_UCODE_TLV_FLAGS_P2P = (1 << 3),
IWM_UCODE_TLV_FLAGS_DW_BC_TABLE = (1 << 4),
IWM_UCODE_TLV_FLAGS_NEWBT_COEX = (1 << 5),
IWM_UCODE_TLV_FLAGS_PM_CMD_SUPPORT = (1 << 6),
IWM_UCODE_TLV_FLAGS_SHORT_BL = (1 << 7),
IWM_UCODE_TLV_FLAGS_RX_ENERGY_API = (1 << 8),
IWM_UCODE_TLV_FLAGS_TIME_EVENT_API_V2 = (1 << 9),
IWM_UCODE_TLV_FLAGS_D3_6_IPV6_ADDRS = (1 << 10),
IWM_UCODE_TLV_FLAGS_BF_UPDATED = (1 << 11),
IWM_UCODE_TLV_FLAGS_NO_BASIC_SSID = (1 << 12),
IWM_UCODE_TLV_FLAGS_D3_CONTINUITY_API = (1 << 14),
IWM_UCODE_TLV_FLAGS_NEW_NSOFFL_SMALL = (1 << 15),
IWM_UCODE_TLV_FLAGS_NEW_NSOFFL_LARGE = (1 << 16),
IWM_UCODE_TLV_FLAGS_SCHED_SCAN = (1 << 17),
IWM_UCODE_TLV_FLAGS_STA_KEY_CMD = (1 << 19),
IWM_UCODE_TLV_FLAGS_DEVICE_PS_CMD = (1 << 20),
IWM_UCODE_TLV_FLAGS_P2P_PS = (1 << 21),
IWM_UCODE_TLV_FLAGS_UAPSD_SUPPORT = (1 << 24),
IWM_UCODE_TLV_FLAGS_P2P_PS_UAPSD = (1 << 26),
};
/* The default calibrate table size if not specified by firmware file */
#define IWM_DEFAULT_STANDARD_PHY_CALIBRATE_TBL_SIZE 18
#define IWM_MAX_STANDARD_PHY_CALIBRATE_TBL_SIZE 19
#define IWM_MAX_PHY_CALIBRATE_TBL_SIZE 253
/* The default max probe length if not specified by the firmware file */
#define IWM_DEFAULT_MAX_PROBE_LENGTH 200
/*
 * Enumeration of ucode sections.
* This enumeration is used directly for older firmware (before 16.0).
* For new firmware, there can be up to 4 sections (see below) but the
* first one packaged into the firmware file is the DATA section and
* some debugging code accesses that.
*/
enum iwm_ucode_sec {
IWM_UCODE_SECTION_DATA,
IWM_UCODE_SECTION_INST,
};
/*
* For 16.0 uCode and above, there is no differentiation between sections,
* just an offset to the HW address.
*/
#define IWM_UCODE_SECTION_MAX 6
#define IWM_UCODE_FIRST_SECTION_OF_SECOND_CPU (IWM_UCODE_SECTION_MAX/2)
/* uCode version contains 4 values: Major/Minor/API/Serial */
#define IWM_UCODE_MAJOR(ver) (((ver) & 0xFF000000) >> 24)
#define IWM_UCODE_MINOR(ver) (((ver) & 0x00FF0000) >> 16)
#define IWM_UCODE_API(ver) (((ver) & 0x0000FF00) >> 8)
#define IWM_UCODE_SERIAL(ver) ((ver) & 0x000000FF)
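/*
 * Example: for ver = 0x16E10C01, IWM_UCODE_MAJOR(ver) == 0x16,
 * IWM_UCODE_MINOR(ver) == 0xE1, IWM_UCODE_API(ver) == 0x0C and
 * IWM_UCODE_SERIAL(ver) == 0x01.
 */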
/*
* Calibration control struct.
* Sent as part of the phy configuration command.
* @flow_trigger: bitmap for which calibrations to perform according to
* flow triggers.
* @event_trigger: bitmap for which calibrations to perform according to
* event triggers.
*/
struct iwm_tlv_calib_ctrl {
uint32_t flow_trigger;
uint32_t event_trigger;
} __packed;
enum iwm_fw_phy_cfg {
IWM_FW_PHY_CFG_RADIO_TYPE_POS = 0,
IWM_FW_PHY_CFG_RADIO_TYPE = 0x3 << IWM_FW_PHY_CFG_RADIO_TYPE_POS,
IWM_FW_PHY_CFG_RADIO_STEP_POS = 2,
IWM_FW_PHY_CFG_RADIO_STEP = 0x3 << IWM_FW_PHY_CFG_RADIO_STEP_POS,
IWM_FW_PHY_CFG_RADIO_DASH_POS = 4,
IWM_FW_PHY_CFG_RADIO_DASH = 0x3 << IWM_FW_PHY_CFG_RADIO_DASH_POS,
IWM_FW_PHY_CFG_TX_CHAIN_POS = 16,
IWM_FW_PHY_CFG_TX_CHAIN = 0xf << IWM_FW_PHY_CFG_TX_CHAIN_POS,
IWM_FW_PHY_CFG_RX_CHAIN_POS = 20,
IWM_FW_PHY_CFG_RX_CHAIN = 0xf << IWM_FW_PHY_CFG_RX_CHAIN_POS,
};
#define IWM_UCODE_MAX_CS 1
/**
* struct iwm_fw_cipher_scheme - a cipher scheme supported by FW.
* @cipher: a cipher suite selector
* @flags: cipher scheme flags (currently reserved for a future use)
* @hdr_len: a size of MPDU security header
* @pn_len: a size of PN
* @pn_off: an offset of pn from the beginning of the security header
* @key_idx_off: an offset of key index byte in the security header
* @key_idx_mask: a bit mask of key_idx bits
* @key_idx_shift: bit shift needed to get key_idx
* @mic_len: mic length in bytes
* @hw_cipher: a HW cipher index used in host commands
*/
struct iwm_fw_cipher_scheme {
uint32_t cipher;
uint8_t flags;
uint8_t hdr_len;
uint8_t pn_len;
uint8_t pn_off;
uint8_t key_idx_off;
uint8_t key_idx_mask;
uint8_t key_idx_shift;
uint8_t mic_len;
uint8_t hw_cipher;
} __packed;
/**
* struct iwm_fw_cscheme_list - a cipher scheme list
* @size: a number of entries
* @cs: cipher scheme entries
*/
struct iwm_fw_cscheme_list {
uint8_t size;
struct iwm_fw_cipher_scheme cs[];
} __packed;
/*
* END iwl-fw.h
*/
/*
* BEGIN iwl-fw-file.h
*/
/* v1/v2 uCode file layout */
struct iwm_ucode_header {
uint32_t ver; /* major/minor/API/serial */
union {
struct {
uint32_t inst_size; /* bytes of runtime code */
uint32_t data_size; /* bytes of runtime data */
uint32_t init_size; /* bytes of init code */
uint32_t init_data_size; /* bytes of init data */
uint32_t boot_size; /* bytes of bootstrap code */
uint8_t data[0]; /* in same order as sizes */
} v1;
struct {
uint32_t build; /* build number */
uint32_t inst_size; /* bytes of runtime code */
uint32_t data_size; /* bytes of runtime data */
uint32_t init_size; /* bytes of init code */
uint32_t init_data_size; /* bytes of init data */
uint32_t boot_size; /* bytes of bootstrap code */
uint8_t data[0]; /* in same order as sizes */
} v2;
} u;
};
/*
* new TLV uCode file layout
*
* The new TLV file format contains TLVs, that each specify
* some piece of data.
*/
enum iwm_ucode_tlv_type {
IWM_UCODE_TLV_INVALID = 0, /* unused */
IWM_UCODE_TLV_INST = 1,
IWM_UCODE_TLV_DATA = 2,
IWM_UCODE_TLV_INIT = 3,
IWM_UCODE_TLV_INIT_DATA = 4,
IWM_UCODE_TLV_BOOT = 5,
IWM_UCODE_TLV_PROBE_MAX_LEN = 6, /* a uint32_t value */
IWM_UCODE_TLV_PAN = 7,
IWM_UCODE_TLV_RUNT_EVTLOG_PTR = 8,
IWM_UCODE_TLV_RUNT_EVTLOG_SIZE = 9,
IWM_UCODE_TLV_RUNT_ERRLOG_PTR = 10,
IWM_UCODE_TLV_INIT_EVTLOG_PTR = 11,
IWM_UCODE_TLV_INIT_EVTLOG_SIZE = 12,
IWM_UCODE_TLV_INIT_ERRLOG_PTR = 13,
IWM_UCODE_TLV_ENHANCE_SENS_TBL = 14,
IWM_UCODE_TLV_PHY_CALIBRATION_SIZE = 15,
IWM_UCODE_TLV_WOWLAN_INST = 16,
IWM_UCODE_TLV_WOWLAN_DATA = 17,
IWM_UCODE_TLV_FLAGS = 18,
IWM_UCODE_TLV_SEC_RT = 19,
IWM_UCODE_TLV_SEC_INIT = 20,
IWM_UCODE_TLV_SEC_WOWLAN = 21,
IWM_UCODE_TLV_DEF_CALIB = 22,
IWM_UCODE_TLV_PHY_SKU = 23,
IWM_UCODE_TLV_SECURE_SEC_RT = 24,
IWM_UCODE_TLV_SECURE_SEC_INIT = 25,
IWM_UCODE_TLV_SECURE_SEC_WOWLAN = 26,
IWM_UCODE_TLV_NUM_OF_CPU = 27,
IWM_UCODE_TLV_CSCHEME = 28,
/*
	 * The following two are not in our base tag, but allow
* handling ucode version 9.
*/
IWM_UCODE_TLV_API_CHANGES_SET = 29,
IWM_UCODE_TLV_ENABLED_CAPABILITIES = 30
};
struct iwm_ucode_tlv {
uint32_t type; /* see above */
uint32_t length; /* not including type/length fields */
uint8_t data[0];
};
#define IWM_TLV_UCODE_MAGIC 0x0a4c5749
struct iwm_tlv_ucode_header {
/*
* The TLV style ucode header is distinguished from
	 * the v1/v2 style header by its first four bytes being
	 * zero, since that is an invalid combination of
* major/minor/API/serial versions.
*/
uint32_t zero;
uint32_t magic;
uint8_t human_readable[64];
uint32_t ver; /* major/minor/API/serial */
uint32_t build;
uint64_t ignore;
/*
* The data contained herein has a TLV layout,
* see above for the TLV header and types.
* Note that each TLV is padded to a length
* that is a multiple of 4 for alignment.
*/
uint8_t data[0];
};
/*
* END iwl-fw-file.h
*/
/*
* BEGIN iwl-prph.h
*/
/*
* Registers in this file are internal, not PCI bus memory mapped.
* Driver accesses these via IWM_HBUS_TARG_PRPH_* registers.
*/
#define IWM_PRPH_BASE (0x00000)
#define IWM_PRPH_END (0xFFFFF)
/* APMG (power management) constants */
#define IWM_APMG_BASE (IWM_PRPH_BASE + 0x3000)
#define IWM_APMG_CLK_CTRL_REG (IWM_APMG_BASE + 0x0000)
#define IWM_APMG_CLK_EN_REG (IWM_APMG_BASE + 0x0004)
#define IWM_APMG_CLK_DIS_REG (IWM_APMG_BASE + 0x0008)
#define IWM_APMG_PS_CTRL_REG (IWM_APMG_BASE + 0x000c)
#define IWM_APMG_PCIDEV_STT_REG (IWM_APMG_BASE + 0x0010)
#define IWM_APMG_RFKILL_REG (IWM_APMG_BASE + 0x0014)
#define IWM_APMG_RTC_INT_STT_REG (IWM_APMG_BASE + 0x001c)
#define IWM_APMG_RTC_INT_MSK_REG (IWM_APMG_BASE + 0x0020)
#define IWM_APMG_DIGITAL_SVR_REG (IWM_APMG_BASE + 0x0058)
#define IWM_APMG_ANALOG_SVR_REG (IWM_APMG_BASE + 0x006C)
#define IWM_APMS_CLK_VAL_MRB_FUNC_MODE (0x00000001)
#define IWM_APMG_CLK_VAL_DMA_CLK_RQT (0x00000200)
#define IWM_APMG_CLK_VAL_BSM_CLK_RQT (0x00000800)
#define IWM_APMG_PS_CTRL_EARLY_PWR_OFF_RESET_DIS (0x00400000)
#define IWM_APMG_PS_CTRL_VAL_RESET_REQ (0x04000000)
#define IWM_APMG_PS_CTRL_MSK_PWR_SRC (0x03000000)
#define IWM_APMG_PS_CTRL_VAL_PWR_SRC_VMAIN (0x00000000)
#define IWM_APMG_PS_CTRL_VAL_PWR_SRC_VAUX (0x02000000)
#define IWM_APMG_SVR_VOLTAGE_CONFIG_BIT_MSK (0x000001E0) /* bits 8:5 */
#define IWM_APMG_SVR_DIGITAL_VOLTAGE_1_32 (0x00000060)
#define IWM_APMG_PCIDEV_STT_VAL_L1_ACT_DIS (0x00000800)
#define IWM_APMG_RTC_INT_STT_RFKILL (0x10000000)
/* Device system time */
#define IWM_DEVICE_SYSTEM_TIME_REG 0xA0206C
/* Device NMI register */
#define IWM_DEVICE_SET_NMI_REG 0x00a01c30
/*****************************************************************************
* 7000/3000 series SHR DTS addresses *
*****************************************************************************/
#define IWM_SHR_MISC_WFM_DTS_EN (0x00a10024)
#define IWM_DTSC_CFG_MODE (0x00a10604)
#define IWM_DTSC_VREF_AVG (0x00a10648)
#define IWM_DTSC_VREF5_AVG (0x00a1064c)
#define IWM_DTSC_CFG_MODE_PERIODIC (0x2)
#define IWM_DTSC_PTAT_AVG (0x00a10650)
/**
* Tx Scheduler
*
* The Tx Scheduler selects the next frame to be transmitted, choosing TFDs
* (Transmit Frame Descriptors) from up to 16 circular Tx queues resident in
* host DRAM. It steers each frame's Tx command (which contains the frame
* data) into one of up to 7 prioritized Tx DMA FIFO channels within the
* device. A queue maps to only one (selectable by driver) Tx DMA channel,
* but one DMA channel may take input from several queues.
*
* Tx DMA FIFOs have dedicated purposes.
*
* For 5000 series and up, they are used differently
* (cf. iwl5000_default_queue_to_tx_fifo in iwl-5000.c):
*
* 0 -- EDCA BK (background) frames, lowest priority
* 1 -- EDCA BE (best effort) frames, normal priority
* 2 -- EDCA VI (video) frames, higher priority
* 3 -- EDCA VO (voice) and management frames, highest priority
* 4 -- unused
* 5 -- unused
* 6 -- unused
* 7 -- Commands
*
* Driver should normally map queues 0-6 to Tx DMA/FIFO channels 0-6.
* In addition, driver can map the remaining queues to Tx DMA/FIFO
* channels 0-3 to support 11n aggregation via EDCA DMA channels.
*
* The driver sets up each queue to work in one of two modes:
*
* 1) Scheduler-Ack, in which the scheduler automatically supports a
* block-ack (BA) window of up to 64 TFDs. In this mode, each queue
* contains TFDs for a unique combination of Recipient Address (RA)
* and Traffic Identifier (TID), that is, traffic of a given
* Quality-Of-Service (QOS) priority, destined for a single station.
*
* In scheduler-ack mode, the scheduler keeps track of the Tx status of
* each frame within the BA window, including whether it's been transmitted,
* and whether it's been acknowledged by the receiving station. The device
* automatically processes block-acks received from the receiving STA,
* and reschedules un-acked frames to be retransmitted (successful
* Tx completion may end up being out-of-order).
*
* The driver must maintain the queue's Byte Count table in host DRAM
* for this mode.
* This mode does not support fragmentation.
*
* 2) FIFO (a.k.a. non-Scheduler-ACK), in which each TFD is processed in order.
* The device may automatically retry Tx, but will retry only one frame
* at a time, until receiving ACK from receiving station, or reaching
* retry limit and giving up.
*
* The command queue (#4/#9) must use this mode!
* This mode does not require use of the Byte Count table in host DRAM.
*
* Driver controls scheduler operation via 3 means:
* 1) Scheduler registers
* 2) Shared scheduler data base in internal SRAM
* 3) Shared data in host DRAM
*
* Initialization:
*
* When loading, driver should allocate memory for:
* 1) 16 TFD circular buffers, each with space for (typically) 256 TFDs.
* 2) 16 Byte Count circular buffers in 16 KBytes contiguous memory
* (1024 bytes for each queue).
*
* After receiving "Alive" response from uCode, driver must initialize
* the scheduler (especially for queue #4/#9, the command queue, otherwise
* the driver can't issue commands!):
*/
#define IWM_SCD_MEM_LOWER_BOUND (0x0000)
/**
* Max Tx window size is the max number of contiguous TFDs that the scheduler
* can keep track of at one time when creating block-ack chains of frames.
* Note that "64" matches the number of ack bits in a block-ack packet.
*/
#define IWM_SCD_WIN_SIZE 64
#define IWM_SCD_FRAME_LIMIT 64
#define IWM_SCD_TXFIFO_POS_TID (0)
#define IWM_SCD_TXFIFO_POS_RA (4)
#define IWM_SCD_QUEUE_RA_TID_MAP_RATID_MSK (0x01FF)
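/*
 * Illustrative sketch (hypothetical helper, not used elsewhere in this
 * file): composing an RA/TID map value from a station id and TID using
 * the position macros above.
 */
static inline uint16_t
iwm_scd_make_ra_tid(unsigned int sta_id, unsigned int tid)
{
	return ((sta_id << IWM_SCD_TXFIFO_POS_RA) |
	    (tid << IWM_SCD_TXFIFO_POS_TID)) &
	    IWM_SCD_QUEUE_RA_TID_MAP_RATID_MSK;
}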
/* agn SCD */
#define IWM_SCD_QUEUE_STTS_REG_POS_TXF (0)
#define IWM_SCD_QUEUE_STTS_REG_POS_ACTIVE (3)
#define IWM_SCD_QUEUE_STTS_REG_POS_WSL (4)
#define IWM_SCD_QUEUE_STTS_REG_POS_SCD_ACT_EN (19)
#define IWM_SCD_QUEUE_STTS_REG_MSK (0x017F0000)
#define IWM_SCD_QUEUE_CTX_REG1_CREDIT_POS (8)
#define IWM_SCD_QUEUE_CTX_REG1_CREDIT_MSK (0x00FFFF00)
#define IWM_SCD_QUEUE_CTX_REG1_SUPER_CREDIT_POS (24)
#define IWM_SCD_QUEUE_CTX_REG1_SUPER_CREDIT_MSK (0xFF000000)
#define IWM_SCD_QUEUE_CTX_REG2_WIN_SIZE_POS (0)
#define IWM_SCD_QUEUE_CTX_REG2_WIN_SIZE_MSK (0x0000007F)
#define IWM_SCD_QUEUE_CTX_REG2_FRAME_LIMIT_POS (16)
#define IWM_SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK (0x007F0000)
/* Context Data */
#define IWM_SCD_CONTEXT_MEM_LOWER_BOUND (IWM_SCD_MEM_LOWER_BOUND + 0x600)
#define IWM_SCD_CONTEXT_MEM_UPPER_BOUND (IWM_SCD_MEM_LOWER_BOUND + 0x6A0)
/* Tx status */
#define IWM_SCD_TX_STTS_MEM_LOWER_BOUND (IWM_SCD_MEM_LOWER_BOUND + 0x6A0)
#define IWM_SCD_TX_STTS_MEM_UPPER_BOUND (IWM_SCD_MEM_LOWER_BOUND + 0x7E0)
/* Translation Data */
#define IWM_SCD_TRANS_TBL_MEM_LOWER_BOUND (IWM_SCD_MEM_LOWER_BOUND + 0x7E0)
#define IWM_SCD_TRANS_TBL_MEM_UPPER_BOUND (IWM_SCD_MEM_LOWER_BOUND + 0x808)
#define IWM_SCD_CONTEXT_QUEUE_OFFSET(x)\
(IWM_SCD_CONTEXT_MEM_LOWER_BOUND + ((x) * 8))
#define IWM_SCD_TX_STTS_QUEUE_OFFSET(x)\
(IWM_SCD_TX_STTS_MEM_LOWER_BOUND + ((x) * 16))
#define IWM_SCD_TRANS_TBL_OFFSET_QUEUE(x) \
((IWM_SCD_TRANS_TBL_MEM_LOWER_BOUND + ((x) * 2)) & 0xfffc)
#define IWM_SCD_BASE (IWM_PRPH_BASE + 0xa02c00)
#define IWM_SCD_SRAM_BASE_ADDR (IWM_SCD_BASE + 0x0)
#define IWM_SCD_DRAM_BASE_ADDR (IWM_SCD_BASE + 0x8)
#define IWM_SCD_AIT (IWM_SCD_BASE + 0x0c)
#define IWM_SCD_TXFACT (IWM_SCD_BASE + 0x10)
#define IWM_SCD_ACTIVE (IWM_SCD_BASE + 0x14)
#define IWM_SCD_QUEUECHAIN_SEL (IWM_SCD_BASE + 0xe8)
#define IWM_SCD_CHAINEXT_EN (IWM_SCD_BASE + 0x244)
#define IWM_SCD_AGGR_SEL (IWM_SCD_BASE + 0x248)
#define IWM_SCD_INTERRUPT_MASK (IWM_SCD_BASE + 0x108)
static inline unsigned int IWM_SCD_QUEUE_WRPTR(unsigned int chnl)
{
if (chnl < 20)
return IWM_SCD_BASE + 0x18 + chnl * 4;
return IWM_SCD_BASE + 0x284 + (chnl - 20) * 4;
}
static inline unsigned int IWM_SCD_QUEUE_RDPTR(unsigned int chnl)
{
if (chnl < 20)
return IWM_SCD_BASE + 0x68 + chnl * 4;
return IWM_SCD_BASE + 0x2B4 + (chnl - 20) * 4;
}
static inline unsigned int IWM_SCD_QUEUE_STATUS_BITS(unsigned int chnl)
{
if (chnl < 20)
return IWM_SCD_BASE + 0x10c + chnl * 4;
return IWM_SCD_BASE + 0x384 + (chnl - 20) * 4;
}
/*********************** END TX SCHEDULER *************************************/
/* Oscillator clock */
#define IWM_OSC_CLK (0xa04068)
#define IWM_OSC_CLK_FORCE_CONTROL (0x8)
/*
* END iwl-prph.h
*/
/*
* BEGIN iwl-fh.h
*/
/****************************/
/* Flow Handler Definitions */
/****************************/
/**
* This I/O area is directly read/writable by driver (e.g. Linux uses writel())
* Addresses are offsets from device's PCI hardware base address.
*/
#define IWM_FH_MEM_LOWER_BOUND (0x1000)
#define IWM_FH_MEM_UPPER_BOUND (0x2000)
/**
* Keep-Warm (KW) buffer base address.
*
 * Driver must allocate a 4 KByte buffer that is used for keeping the
 * host DRAM powered on (via dummy accesses to DRAM) to maintain low-latency
 * DRAM access while transmitting or receiving. The dummy accesses prevent the
 * host from entering a power-saving mode that would cause higher DRAM latency,
* and possible data over/under-runs, before all Tx/Rx is complete.
*
* Driver loads IWM_FH_KW_MEM_ADDR_REG with the physical address (bits 35:4)
* of the buffer, which must be 4K aligned. Once this is set up, the device
* automatically invokes keep-warm accesses when normal accesses might not
* be sufficient to maintain fast DRAM response.
*
* Bit fields:
* 31-0: Keep-warm buffer physical base address [35:4], must be 4K aligned
*/
#define IWM_FH_KW_MEM_ADDR_REG (IWM_FH_MEM_LOWER_BOUND + 0x97C)
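/*
 * Illustrative sketch: programming the keep-warm base with bits [35:4]
 * of a 4K-aligned physical address, per the bit-field note above. The
 * iwm_write32() MMIO accessor is an assumption for illustration.
 *
 *	iwm_write32(sc, IWM_FH_KW_MEM_ADDR_REG, (uint32_t)(kw_paddr >> 4));
 */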
/**
* TFD Circular Buffers Base (CBBC) addresses
*
* Device has 16 base pointer registers, one for each of 16 host-DRAM-resident
* circular buffers (CBs/queues) containing Transmit Frame Descriptors (TFDs)
* (see struct iwm_tfd_frame). These 16 pointer registers are offset by 0x04
* bytes from one another. Each TFD circular buffer in DRAM must be 256-byte
* aligned (address bits 0-7 must be 0).
* Later devices have 20 (5000 series) or 30 (higher) queues, but the registers
* for them are in different places.
*
* Bit fields in each pointer register:
* 27-0: TFD CB physical base address [35:8], must be 256-byte aligned
*/
#define IWM_FH_MEM_CBBC_0_15_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0x9D0)
#define IWM_FH_MEM_CBBC_0_15_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xA10)
#define IWM_FH_MEM_CBBC_16_19_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xBF0)
#define IWM_FH_MEM_CBBC_16_19_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xC00)
#define IWM_FH_MEM_CBBC_20_31_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xB20)
#define IWM_FH_MEM_CBBC_20_31_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xB80)
/* Find TFD CB base pointer for given queue */
static inline unsigned int IWM_FH_MEM_CBBC_QUEUE(unsigned int chnl)
{
if (chnl < 16)
return IWM_FH_MEM_CBBC_0_15_LOWER_BOUND + 4 * chnl;
if (chnl < 20)
return IWM_FH_MEM_CBBC_16_19_LOWER_BOUND + 4 * (chnl - 16);
return IWM_FH_MEM_CBBC_20_31_LOWER_BOUND + 4 * (chnl - 20);
}
/**
* Rx SRAM Control and Status Registers (RSCSR)
*
* These registers provide handshake between driver and device for the Rx queue
* (this queue handles *all* command responses, notifications, Rx data, etc.
* sent from uCode to host driver). Unlike Tx, there is only one Rx
* queue, and only one Rx DMA/FIFO channel. Also unlike Tx, which can
* concatenate up to 20 DRAM buffers to form a Tx frame, each Receive Buffer
* Descriptor (RBD) points to only one Rx Buffer (RB); there is a 1:1
* mapping between RBDs and RBs.
*
* Driver must allocate host DRAM memory for the following, and set the
* physical address of each into device registers:
*
* 1) Receive Buffer Descriptor (RBD) circular buffer (CB), typically with 256
* entries (although any power of 2, up to 4096, is selectable by driver).
* Each entry (1 dword) points to a receive buffer (RB) of consistent size
* (typically 4K, although 8K or 16K are also selectable by driver).
* Driver sets up RB size and number of RBDs in the CB via Rx config
* register IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG.
*
* Bit fields within one RBD:
* 27-0: Receive Buffer physical address bits [35:8], 256-byte aligned
*
* Driver sets physical address [35:8] of base of RBD circular buffer
* into IWM_FH_RSCSR_CHNL0_RBDCB_BASE_REG [27:0].
*
* 2) Rx status buffer, 8 bytes, in which uCode indicates which Rx Buffers
* (RBs) have been filled, via a "write pointer", actually the index of
* the RB's corresponding RBD within the circular buffer. Driver sets
* physical address [35:4] into IWM_FH_RSCSR_CHNL0_STTS_WPTR_REG [31:0].
*
* Bit fields in lower dword of Rx status buffer (upper dword not used
 * by driver):
* 31-12: Not used by driver
* 11- 0: Index of last filled Rx buffer descriptor
* (device writes, driver reads this value)
*
* As the driver prepares Receive Buffers (RBs) for device to fill, driver must
* enter pointers to these RBs into contiguous RBD circular buffer entries,
* and update the device's "write" index register,
* IWM_FH_RSCSR_CHNL0_RBDCB_WPTR_REG.
*
* This "write" index corresponds to the *next* RBD that the driver will make
* available, i.e. one RBD past the tail of the ready-to-fill RBDs within
* the circular buffer. This value should initially be 0 (before preparing any
* RBs), should be 8 after preparing the first 8 RBs (for example), and must
* wrap back to 0 at the end of the circular buffer (but don't wrap before
* "read" index has advanced past 1! See below).
* NOTE: DEVICE EXPECTS THE WRITE INDEX TO BE INCREMENTED IN MULTIPLES OF 8.
*
* As the device fills RBs (referenced from contiguous RBDs within the circular
 * buffer), it updates the Rx status buffer in host DRAM, described in 2)
 * above, to tell the driver the index of the latest filled RBD. The driver
 * must read this "read" index from DRAM after receiving an Rx interrupt
 * from the device.
*
* The driver must also internally keep track of a third index, which is the
* next RBD to process. When receiving an Rx interrupt, driver should process
* all filled but unprocessed RBs up to, but not including, the RB
* corresponding to the "read" index. For example, if "read" index becomes "1",
* driver may process the RB pointed to by RBD 0. Depending on volume of
* traffic, there may be many RBs to process.
*
* If read index == write index, device thinks there is no room to put new data.
* Due to this, the maximum number of filled RBs is 255, instead of 256. To
* be safe, make sure that there is a gap of at least 2 RBDs between "write"
* and "read" indexes; that is, make sure that there are no more than 254
* buffers waiting to be filled.
*/
#define IWM_FH_MEM_RSCSR_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xBC0)
#define IWM_FH_MEM_RSCSR_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xC00)
#define IWM_FH_MEM_RSCSR_CHNL0 (IWM_FH_MEM_RSCSR_LOWER_BOUND)
/**
* Physical base address of 8-byte Rx Status buffer.
* Bit fields:
 * 31-0: Rx status buffer physical base address [35:4], must be 16-byte aligned.
*/
#define IWM_FH_RSCSR_CHNL0_STTS_WPTR_REG (IWM_FH_MEM_RSCSR_CHNL0)
/**
* Physical base address of Rx Buffer Descriptor Circular Buffer.
* Bit fields:
 * 27-0: RBD CB physical base address [35:8], must be 256-byte aligned.
*/
#define IWM_FH_RSCSR_CHNL0_RBDCB_BASE_REG (IWM_FH_MEM_RSCSR_CHNL0 + 0x004)
/**
* Rx write pointer (index, really!).
* Bit fields:
* 11-0: Index of driver's most recent prepared-to-be-filled RBD, + 1.
* NOTE: For 256-entry circular buffer, use only bits [7:0].
*/
#define IWM_FH_RSCSR_CHNL0_RBDCB_WPTR_REG (IWM_FH_MEM_RSCSR_CHNL0 + 0x008)
#define IWM_FH_RSCSR_CHNL0_WPTR (IWM_FH_RSCSR_CHNL0_RBDCB_WPTR_REG)
#define IWM_FW_RSCSR_CHNL0_RXDCB_RDPTR_REG (IWM_FH_MEM_RSCSR_CHNL0 + 0x00c)
#define IWM_FH_RSCSR_CHNL0_RDPTR IWM_FW_RSCSR_CHNL0_RXDCB_RDPTR_REG
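/*
 * Illustrative sketch: after preparing new RBs, advance the "write"
 * index rounded down to a multiple of 8, as the device expects (see the
 * NOTE in the RSCSR description above). The iwm_write32() MMIO accessor
 * is an assumption for illustration.
 *
 *	iwm_write32(sc, IWM_FH_RSCSR_CHNL0_WPTR, next_rbd_index & ~7);
 */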
/**
* Rx Config/Status Registers (RCSR)
* Rx Config Reg for channel 0 (only channel used)
*
* Driver must initialize IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG as follows for
* normal operation (see bit fields).
*
* Clearing IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG to 0 turns off Rx DMA.
* Driver should poll IWM_FH_MEM_RSSR_RX_STATUS_REG for
* IWM_FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE (bit 24) before continuing.
*
* Bit fields:
* 31-30: Rx DMA channel enable: '00' off/pause, '01' pause at end of frame,
* '10' operate normally
* 29-24: reserved
* 23-20: # RBDs in circular buffer = 2^value; use "8" for 256 RBDs (normal),
* min "5" for 32 RBDs, max "12" for 4096 RBDs.
* 19-18: reserved
* 17-16: size of each receive buffer; '00' 4K (normal), '01' 8K,
* '10' 12K, '11' 16K.
* 15-14: reserved
* 13-12: IRQ destination; '00' none, '01' host driver (normal operation)
* 11- 4: timeout for closing Rx buffer and interrupting host (units 32 usec)
* typical value 0x10 (about 1/2 msec)
* 3- 0: reserved
*/
#define IWM_FH_MEM_RCSR_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xC00)
#define IWM_FH_MEM_RCSR_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xCC0)
#define IWM_FH_MEM_RCSR_CHNL0 (IWM_FH_MEM_RCSR_LOWER_BOUND)
#define IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG (IWM_FH_MEM_RCSR_CHNL0)
#define IWM_FH_MEM_RCSR_CHNL0_RBDCB_WPTR (IWM_FH_MEM_RCSR_CHNL0 + 0x8)
#define IWM_FH_MEM_RCSR_CHNL0_FLUSH_RB_REQ (IWM_FH_MEM_RCSR_CHNL0 + 0x10)
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_RB_TIMEOUT_MSK (0x00000FF0) /* bits 4-11 */
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_MSK (0x00001000) /* bit 12 */
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_SINGLE_FRAME_MSK (0x00008000) /* bit 15 */
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_RB_SIZE_MSK (0x00030000) /* bits 16-17 */
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_RBDBC_SIZE_MSK (0x00F00000) /* bits 20-23 */
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_DMA_CHNL_EN_MSK (0xC0000000) /* bits 30-31*/
#define IWM_FH_RCSR_RX_CONFIG_RBDCB_SIZE_POS (20)
#define IWM_FH_RCSR_RX_CONFIG_REG_IRQ_RBTH_POS (4)
#define IWM_RX_RB_TIMEOUT (0x11)
#define IWM_FH_RCSR_RX_CONFIG_CHNL_EN_PAUSE_VAL (0x00000000)
#define IWM_FH_RCSR_RX_CONFIG_CHNL_EN_PAUSE_EOF_VAL (0x40000000)
#define IWM_FH_RCSR_RX_CONFIG_CHNL_EN_ENABLE_VAL (0x80000000)
#define IWM_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_4K (0x00000000)
#define IWM_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_8K (0x00010000)
#define IWM_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_12K (0x00020000)
#define IWM_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_16K (0x00030000)
#define IWM_FH_RCSR_CHNL0_RX_IGNORE_RXF_EMPTY (0x00000004)
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_NO_INT_VAL (0x00000000)
#define IWM_FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL (0x00001000)
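/*
 * Illustrative sketch: a plausible channel-0 Rx configuration built
 * from the values above (DMA enabled, host interrupts, 4K RBs, 256 RBDs
 * and the default RB timeout). The iwm_write32() MMIO accessor is an
 * assumption for illustration.
 *
 *	iwm_write32(sc, IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG,
 *	    IWM_FH_RCSR_RX_CONFIG_CHNL_EN_ENABLE_VAL |
 *	    IWM_FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL |
 *	    IWM_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_4K |
 *	    (8 << IWM_FH_RCSR_RX_CONFIG_RBDCB_SIZE_POS) |
 *	    (IWM_RX_RB_TIMEOUT << IWM_FH_RCSR_RX_CONFIG_REG_IRQ_RBTH_POS));
 */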
/**
* Rx Shared Status Registers (RSSR)
*
* After stopping Rx DMA channel (writing 0 to
* IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG), driver must poll
* IWM_FH_MEM_RSSR_RX_STATUS_REG until Rx channel is idle.
*
* Bit fields:
* 24: 1 = Channel 0 is idle
*
* IWM_FH_MEM_RSSR_SHARED_CTRL_REG and IWM_FH_MEM_RSSR_RX_ENABLE_ERR_IRQ2DRV
* contain default values that should not be altered by the driver.
*/
#define IWM_FH_MEM_RSSR_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xC40)
#define IWM_FH_MEM_RSSR_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xD00)
#define IWM_FH_MEM_RSSR_SHARED_CTRL_REG (IWM_FH_MEM_RSSR_LOWER_BOUND)
#define IWM_FH_MEM_RSSR_RX_STATUS_REG (IWM_FH_MEM_RSSR_LOWER_BOUND + 0x004)
#define IWM_FH_MEM_RSSR_RX_ENABLE_ERR_IRQ2DRV\
(IWM_FH_MEM_RSSR_LOWER_BOUND + 0x008)
#define IWM_FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE (0x01000000)
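/*
 * Illustrative sketch: stopping Rx DMA and waiting for idle as
 * described above. The iwm_read32()/iwm_write32() MMIO accessors are
 * assumptions for illustration; a real driver would bound the loop
 * with a timeout.
 *
 *	iwm_write32(sc, IWM_FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
 *	while (!(iwm_read32(sc, IWM_FH_MEM_RSSR_RX_STATUS_REG) &
 *	    IWM_FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE))
 *		continue;
 */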
#define IWM_FH_MEM_TFDIB_REG1_ADDR_BITSHIFT 28
/* TFDB Area - TFDs buffer table */
#define IWM_FH_MEM_TFDIB_DRAM_ADDR_LSB_MSK (0xFFFFFFFF)
#define IWM_FH_TFDIB_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0x900)
#define IWM_FH_TFDIB_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0x958)
#define IWM_FH_TFDIB_CTRL0_REG(_chnl) (IWM_FH_TFDIB_LOWER_BOUND + 0x8 * (_chnl))
#define IWM_FH_TFDIB_CTRL1_REG(_chnl) (IWM_FH_TFDIB_LOWER_BOUND + 0x8 * (_chnl) + 0x4)
/**
* Transmit DMA Channel Control/Status Registers (TCSR)
*
* Device has one configuration register for each of 8 Tx DMA/FIFO channels
* supported in hardware (don't confuse these with the 16 Tx queues in DRAM,
* which feed the DMA/FIFO channels); config regs are separated by 0x20 bytes.
*
* To use a Tx DMA channel, driver must initialize its
* IWM_FH_TCSR_CHNL_TX_CONFIG_REG(chnl) with:
*
* IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE |
 * IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE
*
* All other bits should be 0.
*
* Bit fields:
* 31-30: Tx DMA channel enable: '00' off/pause, '01' pause at end of frame,
* '10' operate normally
* 29- 4: Reserved, set to "0"
* 3: Enable internal DMA requests (1, normal operation), disable (0)
* 2- 0: Reserved, set to "0"
*/
#define IWM_FH_TCSR_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xD00)
#define IWM_FH_TCSR_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xE60)
/* Find Control/Status reg for given Tx DMA/FIFO channel */
#define IWM_FH_TCSR_CHNL_NUM (8)
/* TCSR: tx_config register values */
#define IWM_FH_TCSR_CHNL_TX_CONFIG_REG(_chnl) \
(IWM_FH_TCSR_LOWER_BOUND + 0x20 * (_chnl))
#define IWM_FH_TCSR_CHNL_TX_CREDIT_REG(_chnl) \
(IWM_FH_TCSR_LOWER_BOUND + 0x20 * (_chnl) + 0x4)
#define IWM_FH_TCSR_CHNL_TX_BUF_STS_REG(_chnl) \
(IWM_FH_TCSR_LOWER_BOUND + 0x20 * (_chnl) + 0x8)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_MSG_MODE_TXF (0x00000000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_MSG_MODE_DRV (0x00000001)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_DISABLE (0x00000000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE (0x00000008)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_NOINT (0x00000000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_ENDTFD (0x00100000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_IFTFD (0x00200000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_RTC_NOINT (0x00000000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_RTC_ENDTFD (0x00400000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_RTC_IFTFD (0x00800000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE (0x00000000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE_EOF (0x40000000)
#define IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE (0x80000000)
#define IWM_FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_EMPTY (0x00000000)
#define IWM_FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_WAIT (0x00002000)
#define IWM_FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_VALID (0x00000003)
#define IWM_FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_NUM (20)
#define IWM_FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_IDX (12)
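/*
 * Illustrative sketch: enabling Tx DMA channel "chnl" per the TCSR
 * description above. The iwm_write32() MMIO accessor is an assumption
 * for illustration.
 *
 *	iwm_write32(sc, IWM_FH_TCSR_CHNL_TX_CONFIG_REG(chnl),
 *	    IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE |
 *	    IWM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE);
 */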
/**
* Tx Shared Status Registers (TSSR)
*
* After stopping Tx DMA channel (writing 0 to
* IWM_FH_TCSR_CHNL_TX_CONFIG_REG(chnl)), driver must poll
* IWM_FH_TSSR_TX_STATUS_REG until selected Tx channel is idle
* (channel's buffers empty | no pending requests).
*
* Bit fields:
* 31-24: 1 = Channel buffers empty (channel 7:0)
* 23-16: 1 = No pending requests (channel 7:0)
*/
#define IWM_FH_TSSR_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xEA0)
#define IWM_FH_TSSR_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0xEC0)
#define IWM_FH_TSSR_TX_STATUS_REG (IWM_FH_TSSR_LOWER_BOUND + 0x010)
/**
 * Bit fields for the TSSR (Tx Shared Status & Control) error status register:
 * 31: Indicates an address error when accessing internal memory;
 * uCode/driver must write "1" in order to clear this flag
 * 30: Indicates that the Host did not send the expected number of dwords to
 * the FH; uCode/driver must write "1" in order to clear this flag
 * 16-9: Each status bit is for one channel. Indicates that an (Error) ActDMA
 * command was received from the scheduler while the TRB was already full
 * with a previous command;
 * uCode/driver must write "1" in order to clear this flag
 * 7-0: Each status bit indicates a channel's TxCredit error. When an error
 * bit is set, it indicates that the FH has received a full indication
 * from the RTC TxFIFO and the current value of the TxCredit counter was
 * not equal to zero. This means that the credit mechanism was not
 * synchronized to the TxFIFO status;
 * uCode/driver must write "1" in order to clear this flag
*/
#define IWM_FH_TSSR_TX_ERROR_REG (IWM_FH_TSSR_LOWER_BOUND + 0x018)
#define IWM_FH_TSSR_TX_MSG_CONFIG_REG (IWM_FH_TSSR_LOWER_BOUND + 0x008)
#define IWM_FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE(_chnl) ((1 << (_chnl)) << 16)
/* Tx service channels */
#define IWM_FH_SRVC_CHNL (9)
#define IWM_FH_SRVC_LOWER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0x9C8)
#define IWM_FH_SRVC_UPPER_BOUND (IWM_FH_MEM_LOWER_BOUND + 0x9D0)
#define IWM_FH_SRVC_CHNL_SRAM_ADDR_REG(_chnl) \
(IWM_FH_SRVC_LOWER_BOUND + ((_chnl) - 9) * 0x4)
#define IWM_FH_TX_CHICKEN_BITS_REG (IWM_FH_MEM_LOWER_BOUND + 0xE98)
#define IWM_FH_TX_TRB_REG(_chan) (IWM_FH_MEM_LOWER_BOUND + 0x958 + \
(_chan) * 4)
/* Instruct FH to increment the retry count of a packet when
* it is brought from the memory to TX-FIFO
*/
#define IWM_FH_TX_CHICKEN_BITS_SCD_AUTO_RETRY_EN (0x00000002)
#define IWM_RX_QUEUE_SIZE 256
#define IWM_RX_QUEUE_MASK 255
#define IWM_RX_QUEUE_SIZE_LOG 8
/*
* RX related structures and functions
*/
#define IWM_RX_FREE_BUFFERS 64
#define IWM_RX_LOW_WATERMARK 8
/**
 * struct iwm_rb_status - receive buffer status
* host memory mapped FH registers
* @closed_rb_num [0:11] - Indicates the index of the RB which was closed
* @closed_fr_num [0:11] - Indicates the index of the RX Frame which was closed
* @finished_rb_num [0:11] - Indicates the index of the current RB
* in which the last frame was written to
* @finished_fr_num [0:11] - Indicates the index of the RX Frame
* which was transferred
*/
struct iwm_rb_status {
uint16_t closed_rb_num;
uint16_t closed_fr_num;
uint16_t finished_rb_num;
uint16_t finished_fr_nam;
uint32_t unused;
} __packed;
#define IWM_TFD_QUEUE_SIZE_MAX (256)
#define IWM_TFD_QUEUE_SIZE_BC_DUP (64)
#define IWM_TFD_QUEUE_BC_SIZE (IWM_TFD_QUEUE_SIZE_MAX + \
IWM_TFD_QUEUE_SIZE_BC_DUP)
#define IWM_TX_DMA_MASK DMA_BIT_MASK(36)
#define IWM_NUM_OF_TBS 20
static inline uint8_t iwm_get_dma_hi_addr(bus_addr_t addr)
{
return (sizeof(addr) > sizeof(uint32_t) ? (addr >> 16) >> 16 : 0) & 0xF;
}
/**
 * struct iwm_tfd_tb - transmit buffer descriptor within transmit frame descriptor
 *
 * This structure contains the DMA address and length of a Tx buffer.
 *
 * @lo: low [31:0] portion of the DMA address of the TX buffer;
 * every even-numbered entry is unaligned on a 16-bit boundary
 * @hi_n_len: bits 0-3 are the [35:32] portion of the DMA address,
 * bits 4-15 are the length of the tx buffer
*/
struct iwm_tfd_tb {
uint32_t lo;
uint16_t hi_n_len;
} __packed;
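/*
 * Illustrative sketch (hypothetical helper, not used elsewhere in this
 * file): packing the upper DMA address bits and the buffer length into
 * hi_n_len, using iwm_get_dma_hi_addr() above for bits [35:32]. Byte
 * order of the stored field is ignored here for simplicity.
 */
static inline uint16_t
iwm_tfd_tb_make_hi_n_len(bus_addr_t paddr, uint16_t len)
{
	return iwm_get_dma_hi_addr(paddr) | (len << 4);
}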
/**
* struct iwm_tfd
*
* Transmit Frame Descriptor (TFD)
*
* @ __reserved1[3] reserved
* @ num_tbs 0-4 number of active tbs
* 5 reserved
* 6-7 padding (not used)
* @ tbs[20] transmit frame buffer descriptors
* @ __pad padding
*
* Each Tx queue uses a circular buffer of 256 TFDs stored in host DRAM.
* Both driver and device share these circular buffers, each of which must be
* contiguous 256 TFDs x 128 bytes-per-TFD = 32 KBytes
*
* Driver must indicate the physical address of the base of each
* circular buffer via the IWM_FH_MEM_CBBC_QUEUE registers.
*
* Each TFD contains pointer/size information for up to 20 data buffers
* in host DRAM. These buffers collectively contain the (one) frame described
* by the TFD. Each buffer must be a single contiguous block of memory within
* itself, but buffers may be scattered in host DRAM. Each buffer has max size
 * of (4K - 4). The device concatenates all of a TFD's buffers into a single
* Tx frame, up to 8 KBytes in size.
*
* A maximum of 255 (not 256!) TFDs may be on a queue waiting for Tx.
*/
struct iwm_tfd {
uint8_t __reserved1[3];
uint8_t num_tbs;
struct iwm_tfd_tb tbs[IWM_NUM_OF_TBS];
uint32_t __pad;
} __packed;
/* Keep Warm Size */
#define IWM_KW_SIZE 0x1000 /* 4k */
/* Fixed (non-configurable) rx data from phy */
/**
* struct iwm_agn_schedq_bc_tbl scheduler byte count table
* base physical address provided by IWM_SCD_DRAM_BASE_ADDR
* @tfd_offset 0-12 - tx command byte count
* 12-16 - station index
*/
struct iwm_agn_scd_bc_tbl {
uint16_t tfd_offset[IWM_TFD_QUEUE_BC_SIZE];
} __packed;
/*
* END iwl-fh.h
*/
/*
* BEGIN mvm/fw-api.h
*/
/* maximal number of Tx queues in any platform */
#define IWM_MVM_MAX_QUEUES 20
/* Tx queue numbers */
enum {
IWM_MVM_OFFCHANNEL_QUEUE = 8,
IWM_MVM_CMD_QUEUE = 9,
};
#define IWM_MVM_CMD_FIFO 7
#define IWM_MVM_STATION_COUNT 16
/* commands */
enum {
IWM_MVM_ALIVE = 0x1,
IWM_REPLY_ERROR = 0x2,
IWM_INIT_COMPLETE_NOTIF = 0x4,
/* PHY context commands */
IWM_PHY_CONTEXT_CMD = 0x8,
IWM_DBG_CFG = 0x9,
/* station table */
IWM_ADD_STA_KEY = 0x17,
IWM_ADD_STA = 0x18,
IWM_REMOVE_STA = 0x19,
/* TX */
IWM_TX_CMD = 0x1c,
IWM_TXPATH_FLUSH = 0x1e,
IWM_MGMT_MCAST_KEY = 0x1f,
/* global key */
IWM_WEP_KEY = 0x20,
/* MAC and Binding commands */
IWM_MAC_CONTEXT_CMD = 0x28,
IWM_TIME_EVENT_CMD = 0x29, /* both CMD and response */
IWM_TIME_EVENT_NOTIFICATION = 0x2a,
IWM_BINDING_CONTEXT_CMD = 0x2b,
IWM_TIME_QUOTA_CMD = 0x2c,
IWM_NON_QOS_TX_COUNTER_CMD = 0x2d,
IWM_LQ_CMD = 0x4e,
/* Calibration */
IWM_TEMPERATURE_NOTIFICATION = 0x62,
IWM_CALIBRATION_CFG_CMD = 0x65,
IWM_CALIBRATION_RES_NOTIFICATION = 0x66,
IWM_CALIBRATION_COMPLETE_NOTIFICATION = 0x67,
IWM_RADIO_VERSION_NOTIFICATION = 0x68,
/* Scan offload */
IWM_SCAN_OFFLOAD_REQUEST_CMD = 0x51,
IWM_SCAN_OFFLOAD_ABORT_CMD = 0x52,
IWM_SCAN_OFFLOAD_COMPLETE = 0x6D,
IWM_SCAN_OFFLOAD_UPDATE_PROFILES_CMD = 0x6E,
IWM_SCAN_OFFLOAD_CONFIG_CMD = 0x6f,
IWM_MATCH_FOUND_NOTIFICATION = 0xd9,
/* Phy */
IWM_PHY_CONFIGURATION_CMD = 0x6a,
IWM_CALIB_RES_NOTIF_PHY_DB = 0x6b,
/* IWM_PHY_DB_CMD = 0x6c, */
/* Power - legacy power table command */
IWM_POWER_TABLE_CMD = 0x77,
IWM_PSM_UAPSD_AP_MISBEHAVING_NOTIFICATION = 0x78,
/* Thermal Throttling*/
IWM_REPLY_THERMAL_MNG_BACKOFF = 0x7e,
/* Scanning */
IWM_SCAN_REQUEST_CMD = 0x80,
IWM_SCAN_ABORT_CMD = 0x81,
IWM_SCAN_START_NOTIFICATION = 0x82,
IWM_SCAN_RESULTS_NOTIFICATION = 0x83,
IWM_SCAN_COMPLETE_NOTIFICATION = 0x84,
/* NVM */
IWM_NVM_ACCESS_CMD = 0x88,
IWM_SET_CALIB_DEFAULT_CMD = 0x8e,
IWM_BEACON_NOTIFICATION = 0x90,
IWM_BEACON_TEMPLATE_CMD = 0x91,
IWM_TX_ANT_CONFIGURATION_CMD = 0x98,
IWM_BT_CONFIG = 0x9b,
IWM_STATISTICS_NOTIFICATION = 0x9d,
IWM_REDUCE_TX_POWER_CMD = 0x9f,
/* RF-KILL commands and notifications */
IWM_CARD_STATE_CMD = 0xa0,
IWM_CARD_STATE_NOTIFICATION = 0xa1,
IWM_MISSED_BEACONS_NOTIFICATION = 0xa2,
/* Power - new power table command */
IWM_MAC_PM_POWER_TABLE = 0xa9,
IWM_REPLY_RX_PHY_CMD = 0xc0,
IWM_REPLY_RX_MPDU_CMD = 0xc1,
IWM_BA_NOTIF = 0xc5,
/* BT Coex */
IWM_BT_COEX_PRIO_TABLE = 0xcc,
IWM_BT_COEX_PROT_ENV = 0xcd,
IWM_BT_PROFILE_NOTIFICATION = 0xce,
IWM_BT_COEX_CI = 0x5d,
IWM_REPLY_SF_CFG_CMD = 0xd1,
IWM_REPLY_BEACON_FILTERING_CMD = 0xd2,
IWM_REPLY_DEBUG_CMD = 0xf0,
IWM_DEBUG_LOG_MSG = 0xf7,
IWM_MCAST_FILTER_CMD = 0xd0,
/* D3 commands/notifications */
IWM_D3_CONFIG_CMD = 0xd3,
IWM_PROT_OFFLOAD_CONFIG_CMD = 0xd4,
IWM_OFFLOADS_QUERY_CMD = 0xd5,
IWM_REMOTE_WAKE_CONFIG_CMD = 0xd6,
/* for WoWLAN in particular */
IWM_WOWLAN_PATTERNS = 0xe0,
IWM_WOWLAN_CONFIGURATION = 0xe1,
IWM_WOWLAN_TSC_RSC_PARAM = 0xe2,
IWM_WOWLAN_TKIP_PARAM = 0xe3,
IWM_WOWLAN_KEK_KCK_MATERIAL = 0xe4,
IWM_WOWLAN_GET_STATUSES = 0xe5,
IWM_WOWLAN_TX_POWER_PER_DB = 0xe6,
/* and for NetDetect */
IWM_NET_DETECT_CONFIG_CMD = 0x54,
IWM_NET_DETECT_PROFILES_QUERY_CMD = 0x56,
IWM_NET_DETECT_PROFILES_CMD = 0x57,
IWM_NET_DETECT_HOTSPOTS_CMD = 0x58,
IWM_NET_DETECT_HOTSPOTS_QUERY_CMD = 0x59,
IWM_REPLY_MAX = 0xff,
};
/**
* struct iwm_cmd_response - generic response struct for most commands
* @status: status of the command asked, changes for each one
*/
struct iwm_cmd_response {
uint32_t status;
};
/*
* struct iwm_tx_ant_cfg_cmd
* @valid: valid antenna configuration
*/
struct iwm_tx_ant_cfg_cmd {
uint32_t valid;
} __packed;
/**
* struct iwm_reduce_tx_power_cmd - TX power reduction command
* IWM_REDUCE_TX_POWER_CMD = 0x9f
* @flags: (reserved for future implementation)
* @mac_context_id: id of the mac ctx for which we are reducing TX power.
 * @pwr_restriction: TX power restriction in dBm.
*/
struct iwm_reduce_tx_power_cmd {
uint8_t flags;
uint8_t mac_context_id;
uint16_t pwr_restriction;
} __packed; /* IWM_TX_REDUCED_POWER_API_S_VER_1 */
/*
* Calibration control struct.
* Sent as part of the phy configuration command.
* @flow_trigger: bitmap for which calibrations to perform according to
* flow triggers.
* @event_trigger: bitmap for which calibrations to perform according to
* event triggers.
*/
struct iwm_calib_ctrl {
uint32_t flow_trigger;
uint32_t event_trigger;
} __packed;
/* This enum defines the bitmap of various calibrations to enable in both
* init ucode and runtime ucode through IWM_CALIBRATION_CFG_CMD.
*/
enum iwm_calib_cfg {
IWM_CALIB_CFG_XTAL_IDX = (1 << 0),
IWM_CALIB_CFG_TEMPERATURE_IDX = (1 << 1),
IWM_CALIB_CFG_VOLTAGE_READ_IDX = (1 << 2),
IWM_CALIB_CFG_PAPD_IDX = (1 << 3),
IWM_CALIB_CFG_TX_PWR_IDX = (1 << 4),
IWM_CALIB_CFG_DC_IDX = (1 << 5),
IWM_CALIB_CFG_BB_FILTER_IDX = (1 << 6),
IWM_CALIB_CFG_LO_LEAKAGE_IDX = (1 << 7),
IWM_CALIB_CFG_TX_IQ_IDX = (1 << 8),
IWM_CALIB_CFG_TX_IQ_SKEW_IDX = (1 << 9),
IWM_CALIB_CFG_RX_IQ_IDX = (1 << 10),
IWM_CALIB_CFG_RX_IQ_SKEW_IDX = (1 << 11),
IWM_CALIB_CFG_SENSITIVITY_IDX = (1 << 12),
IWM_CALIB_CFG_CHAIN_NOISE_IDX = (1 << 13),
IWM_CALIB_CFG_DISCONNECTED_ANT_IDX = (1 << 14),
IWM_CALIB_CFG_ANT_COUPLING_IDX = (1 << 15),
IWM_CALIB_CFG_DAC_IDX = (1 << 16),
IWM_CALIB_CFG_ABS_IDX = (1 << 17),
IWM_CALIB_CFG_AGC_IDX = (1 << 18),
};
/*
* Phy configuration command.
*/
struct iwm_phy_cfg_cmd {
uint32_t phy_cfg;
struct iwm_calib_ctrl calib_control;
} __packed;
#define IWM_PHY_CFG_RADIO_TYPE ((1 << 0) | (1 << 1))
#define IWM_PHY_CFG_RADIO_STEP ((1 << 2) | (1 << 3))
#define IWM_PHY_CFG_RADIO_DASH ((1 << 4) | (1 << 5))
#define IWM_PHY_CFG_PRODUCT_NUMBER ((1 << 6) | (1 << 7))
#define IWM_PHY_CFG_TX_CHAIN_A (1 << 8)
#define IWM_PHY_CFG_TX_CHAIN_B (1 << 9)
#define IWM_PHY_CFG_TX_CHAIN_C (1 << 10)
#define IWM_PHY_CFG_RX_CHAIN_A (1 << 12)
#define IWM_PHY_CFG_RX_CHAIN_B (1 << 13)
#define IWM_PHY_CFG_RX_CHAIN_C (1 << 14)
/* Target of the IWM_NVM_ACCESS_CMD */
enum {
IWM_NVM_ACCESS_TARGET_CACHE = 0,
IWM_NVM_ACCESS_TARGET_OTP = 1,
IWM_NVM_ACCESS_TARGET_EEPROM = 2,
};
/* Section types for IWM_NVM_ACCESS_CMD */
enum {
IWM_NVM_SECTION_TYPE_HW = 0,
IWM_NVM_SECTION_TYPE_SW,
IWM_NVM_SECTION_TYPE_PAPD,
IWM_NVM_SECTION_TYPE_BT,
IWM_NVM_SECTION_TYPE_CALIBRATION,
IWM_NVM_SECTION_TYPE_PRODUCTION,
IWM_NVM_SECTION_TYPE_POST_FCS_CALIB,
IWM_NVM_NUM_OF_SECTIONS,
};
/**
 * struct iwm_nvm_access_cmd - Request the device to send an NVM section
* @op_code: 0 - read, 1 - write
* @target: IWM_NVM_ACCESS_TARGET_*
* @type: IWM_NVM_SECTION_TYPE_*
* @offset: offset in bytes into the section
* @length: in bytes, to read/write
 * @data: if write operation, the data to write. On read it is empty
*/
struct iwm_nvm_access_cmd {
uint8_t op_code;
uint8_t target;
uint16_t type;
uint16_t offset;
uint16_t length;
uint8_t data[];
} __packed; /* IWM_NVM_ACCESS_CMD_API_S_VER_2 */
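/*
 * Illustrative sketch: filling the command for a read (op_code 0) of
 * "length" bytes at "offset" in the SW section. Values are shown in
 * host byte order for brevity; the device interface may require
 * little-endian conversion.
 *
 *	struct iwm_nvm_access_cmd cmd = {
 *		.op_code = 0,
 *		.target = IWM_NVM_ACCESS_TARGET_CACHE,
 *		.type = IWM_NVM_SECTION_TYPE_SW,
 *		.offset = offset,
 *		.length = length,
 *	};
 */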
/**
 * struct iwm_nvm_access_resp - response to IWM_NVM_ACCESS_CMD
* @offset: offset in bytes into the section
* @length: in bytes, either how much was written or read
* @type: IWM_NVM_SECTION_TYPE_*
* @status: 0 for success, fail otherwise
* @data: if read operation, the data returned. Empty on write.
*/
struct iwm_nvm_access_resp {
uint16_t offset;
uint16_t length;
uint16_t type;
uint16_t status;
uint8_t data[];
} __packed; /* IWM_NVM_ACCESS_CMD_RESP_API_S_VER_2 */
/* IWM_MVM_ALIVE 0x1 */
/* alive response is_valid values */
#define IWM_ALIVE_RESP_UCODE_OK (1 << 0)
#define IWM_ALIVE_RESP_RFKILL (1 << 1)
/* alive response ver_type values */
enum {
IWM_FW_TYPE_HW = 0,
IWM_FW_TYPE_PROT = 1,
IWM_FW_TYPE_AP = 2,
IWM_FW_TYPE_WOWLAN = 3,
IWM_FW_TYPE_TIMING = 4,
IWM_FW_TYPE_WIPAN = 5
};
/* alive response ver_subtype values */
enum {
IWM_FW_SUBTYPE_FULL_FEATURE = 0,
IWM_FW_SUBTYPE_BOOTSRAP = 1, /* Not valid */
IWM_FW_SUBTYPE_REDUCED = 2,
IWM_FW_SUBTYPE_ALIVE_ONLY = 3,
IWM_FW_SUBTYPE_WOWLAN = 4,
IWM_FW_SUBTYPE_AP_SUBTYPE = 5,
IWM_FW_SUBTYPE_WIPAN = 6,
IWM_FW_SUBTYPE_INITIALIZE = 9
};
#define IWM_ALIVE_STATUS_ERR 0xDEAD
#define IWM_ALIVE_STATUS_OK 0xCAFE
#define IWM_ALIVE_FLG_RFKILL (1 << 0)
struct iwm_mvm_alive_resp {
uint16_t status;
uint16_t flags;
uint8_t ucode_minor;
uint8_t ucode_major;
uint16_t id;
uint8_t api_minor;
uint8_t api_major;
uint8_t ver_subtype;
uint8_t ver_type;
uint8_t mac;
uint8_t opt;
uint16_t reserved2;
uint32_t timestamp;
uint32_t error_event_table_ptr; /* SRAM address for error log */
uint32_t log_event_table_ptr; /* SRAM address for event log */
uint32_t cpu_register_ptr;
uint32_t dbgm_config_ptr;
uint32_t alive_counter_ptr;
uint32_t scd_base_ptr; /* SRAM address for SCD */
} __packed; /* IWM_ALIVE_RES_API_S_VER_1 */
/* Error response/notification */
enum {
IWM_FW_ERR_UNKNOWN_CMD = 0x0,
IWM_FW_ERR_INVALID_CMD_PARAM = 0x1,
IWM_FW_ERR_SERVICE = 0x2,
IWM_FW_ERR_ARC_MEMORY = 0x3,
IWM_FW_ERR_ARC_CODE = 0x4,
IWM_FW_ERR_WATCH_DOG = 0x5,
IWM_FW_ERR_WEP_GRP_KEY_INDX = 0x10,
IWM_FW_ERR_WEP_KEY_SIZE = 0x11,
IWM_FW_ERR_OBSOLETE_FUNC = 0x12,
IWM_FW_ERR_UNEXPECTED = 0xFE,
IWM_FW_ERR_FATAL = 0xFF
};
/**
* struct iwm_error_resp - FW error indication
* ( IWM_REPLY_ERROR = 0x2 )
* @error_type: one of IWM_FW_ERR_*
* @cmd_id: the command ID for which the error occurred
* @bad_cmd_seq_num: sequence number of the erroneous command
* @error_service: which service created the error, applicable only if
* error_type = 2, otherwise 0
* @timestamp: TSF in usecs.
*/
struct iwm_error_resp {
uint32_t error_type;
uint8_t cmd_id;
uint8_t reserved1;
uint16_t bad_cmd_seq_num;
uint32_t error_service;
uint64_t timestamp;
} __packed;
/* Common PHY, MAC and Bindings definitions */
#define IWM_MAX_MACS_IN_BINDING (3)
#define IWM_MAX_BINDINGS (4)
#define IWM_AUX_BINDING_INDEX (3)
#define IWM_MAX_PHYS (4)
/* Used to extract ID and color from the context dword */
#define IWM_FW_CTXT_ID_POS (0)
#define IWM_FW_CTXT_ID_MSK (0xff << IWM_FW_CTXT_ID_POS)
#define IWM_FW_CTXT_COLOR_POS (8)
#define IWM_FW_CTXT_COLOR_MSK (0xff << IWM_FW_CTXT_COLOR_POS)
#define IWM_FW_CTXT_INVALID (0xffffffff)
#define IWM_FW_CMD_ID_AND_COLOR(_id, _color) ((_id << IWM_FW_CTXT_ID_POS) |\
(_color << IWM_FW_CTXT_COLOR_POS))
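/*
 * Example: IWM_FW_CMD_ID_AND_COLOR(5, 1) yields 0x00000105: ID 5 in
 * bits 7-0 and color 1 in bits 15-8.
 */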
/* Possible actions on PHYs, MACs and Bindings */
enum {
IWM_FW_CTXT_ACTION_STUB = 0,
IWM_FW_CTXT_ACTION_ADD,
IWM_FW_CTXT_ACTION_MODIFY,
IWM_FW_CTXT_ACTION_REMOVE,
IWM_FW_CTXT_ACTION_NUM
}; /* COMMON_CONTEXT_ACTION_API_E_VER_1 */
/* Time Events */
/* Time Event types, according to MAC type */
enum iwm_time_event_type {
/* BSS Station Events */
IWM_TE_BSS_STA_AGGRESSIVE_ASSOC,
IWM_TE_BSS_STA_ASSOC,
IWM_TE_BSS_EAP_DHCP_PROT,
IWM_TE_BSS_QUIET_PERIOD,
/* P2P Device Events */
IWM_TE_P2P_DEVICE_DISCOVERABLE,
IWM_TE_P2P_DEVICE_LISTEN,
IWM_TE_P2P_DEVICE_ACTION_SCAN,
IWM_TE_P2P_DEVICE_FULL_SCAN,
/* P2P Client Events */
IWM_TE_P2P_CLIENT_AGGRESSIVE_ASSOC,
IWM_TE_P2P_CLIENT_ASSOC,
IWM_TE_P2P_CLIENT_QUIET_PERIOD,
/* P2P GO Events */
IWM_TE_P2P_GO_ASSOC_PROT,
IWM_TE_P2P_GO_REPETITIVE_NOA,
IWM_TE_P2P_GO_CT_WINDOW,
/* WiDi Sync Events */
IWM_TE_WIDI_TX_SYNC,
IWM_TE_MAX
}; /* IWM_MAC_EVENT_TYPE_API_E_VER_1 */
/* Time event - defines for command API v1 */
/*
* @IWM_TE_V1_FRAG_NONE: fragmentation of the time event is NOT allowed.
* @IWM_TE_V1_FRAG_SINGLE: fragmentation of the time event is allowed, but only
* the first fragment is scheduled.
* @IWM_TE_V1_FRAG_DUAL: fragmentation of the time event is allowed, but only
* the first 2 fragments are scheduled.
* @IWM_TE_V1_FRAG_ENDLESS: fragmentation of the time event is allowed, and any
* number of fragments are valid.
*
 * Other than the constants defined above, specifying a fragmentation value 'x'
* means that the event can be fragmented but only the first 'x' will be
* scheduled.
*/
enum {
IWM_TE_V1_FRAG_NONE = 0,
IWM_TE_V1_FRAG_SINGLE = 1,
IWM_TE_V1_FRAG_DUAL = 2,
IWM_TE_V1_FRAG_ENDLESS = 0xffffffff
};
/* If a Time Event can be fragmented, this is the max number of fragments */
#define IWM_TE_V1_FRAG_MAX_MSK 0x0fffffff
/* Repeat the time event endlessly (until removed) */
#define IWM_TE_V1_REPEAT_ENDLESS 0xffffffff
/* If a Time Event has bounded repetitions, this is the maximal value */
#define IWM_TE_V1_REPEAT_MAX_MSK_V1 0x0fffffff
/* Time Event dependencies: none, on another TE, or in a specific time */
enum {
IWM_TE_V1_INDEPENDENT = 0,
IWM_TE_V1_DEP_OTHER = (1 << 0),
IWM_TE_V1_DEP_TSF = (1 << 1),
IWM_TE_V1_EVENT_SOCIOPATHIC = (1 << 2),
}; /* IWM_MAC_EVENT_DEPENDENCY_POLICY_API_E_VER_2 */
/*
* @IWM_TE_V1_NOTIF_NONE: no notifications
* @IWM_TE_V1_NOTIF_HOST_EVENT_START: request/receive notification on event start
 * @IWM_TE_V1_NOTIF_HOST_EVENT_END: request/receive notification on event end
 * @IWM_TE_V1_NOTIF_INTERNAL_EVENT_START: internal FW use
 * @IWM_TE_V1_NOTIF_INTERNAL_EVENT_END: internal FW use.
 * @IWM_TE_V1_NOTIF_HOST_FRAG_START: request/receive notification on frag start
 * @IWM_TE_V1_NOTIF_HOST_FRAG_END: request/receive notification on frag end
 * @IWM_TE_V1_NOTIF_INTERNAL_FRAG_START: internal FW use.
 * @IWM_TE_V1_NOTIF_INTERNAL_FRAG_END: internal FW use.
 *
 * Supported Time event notifications configuration.
 * A notification (both event and fragment) includes a status indicating whether
* the FW was able to schedule the event or not. For fragment start/end
* notification the status is always success. There is no start/end fragment
* notification for monolithic events.
*/
enum {
IWM_TE_V1_NOTIF_NONE = 0,
IWM_TE_V1_NOTIF_HOST_EVENT_START = (1 << 0),
IWM_TE_V1_NOTIF_HOST_EVENT_END = (1 << 1),
IWM_TE_V1_NOTIF_INTERNAL_EVENT_START = (1 << 2),
IWM_TE_V1_NOTIF_INTERNAL_EVENT_END = (1 << 3),
IWM_TE_V1_NOTIF_HOST_FRAG_START = (1 << 4),
IWM_TE_V1_NOTIF_HOST_FRAG_END = (1 << 5),
IWM_TE_V1_NOTIF_INTERNAL_FRAG_START = (1 << 6),
IWM_TE_V1_NOTIF_INTERNAL_FRAG_END = (1 << 7),
}; /* IWM_MAC_EVENT_ACTION_API_E_VER_2 */
/**
 * struct iwm_time_event_cmd_v1 - configuring Time Events
 * with struct IWM_MAC_TIME_EVENT_DATA_API_S_VER_1 (see also the
 * version 2 variant; which is used is determined by IWM_UCODE_TLV_FLAGS)
* ( IWM_TIME_EVENT_CMD = 0x29 )
* @id_and_color: ID and color of the relevant MAC
* @action: action to perform, one of IWM_FW_CTXT_ACTION_*
* @id: this field has two meanings, depending on the action:
* If the action is ADD, then it means the type of event to add.
* For all other actions it is the unique event ID assigned when the
* event was added by the FW.
* @apply_time: When to start the Time Event (in GP2)
* @max_delay: maximum delay to event's start (apply time), in TU
* @depends_on: the unique ID of the event we depend on (if any)
* @interval: interval between repetitions, in TU
* @interval_reciprocal: 2^32 / interval
* @duration: duration of event in TU
* @repeat: how many repetitions to do, can be IWM_TE_REPEAT_ENDLESS
* @dep_policy: one of IWM_TE_V1_INDEPENDENT, IWM_TE_V1_DEP_OTHER, IWM_TE_V1_DEP_TSF
* and IWM_TE_V1_EVENT_SOCIOPATHIC
* @is_present: 0 or 1, are we present or absent during the Time Event
* @max_frags: maximal number of fragments the Time Event can be divided to
* @notify: notifications using IWM_TE_V1_NOTIF_* (whom to notify when)
*/
struct iwm_time_event_cmd_v1 {
/* COMMON_INDEX_HDR_API_S_VER_1 */
uint32_t id_and_color;
uint32_t action;
uint32_t id;
/* IWM_MAC_TIME_EVENT_DATA_API_S_VER_1 */
uint32_t apply_time;
uint32_t max_delay;
uint32_t dep_policy;
uint32_t depends_on;
uint32_t is_present;
uint32_t max_frags;
uint32_t interval;
uint32_t interval_reciprocal;
uint32_t duration;
uint32_t repeat;
uint32_t notify;
} __packed; /* IWM_MAC_TIME_EVENT_CMD_API_S_VER_1 */
/* Time event - defines for command API v2 */
/*
* @IWM_TE_V2_FRAG_NONE: fragmentation of the time event is NOT allowed.
* @IWM_TE_V2_FRAG_SINGLE: fragmentation of the time event is allowed, but only
* the first fragment is scheduled.
* @IWM_TE_V2_FRAG_DUAL: fragmentation of the time event is allowed, but only
* the first 2 fragments are scheduled.
* @IWM_TE_V2_FRAG_ENDLESS: fragmentation of the time event is allowed, and any
* number of fragments are valid.
*
 * Other than the constants defined above, specifying a fragmentation value 'x'
* means that the event can be fragmented but only the first 'x' will be
* scheduled.
*/
enum {
IWM_TE_V2_FRAG_NONE = 0,
IWM_TE_V2_FRAG_SINGLE = 1,
IWM_TE_V2_FRAG_DUAL = 2,
IWM_TE_V2_FRAG_MAX = 0xfe,
IWM_TE_V2_FRAG_ENDLESS = 0xff
};
/* Repeat the time event endlessly (until removed) */
#define IWM_TE_V2_REPEAT_ENDLESS 0xff
/* If a Time Event has bounded repetitions, this is the maximal value */
#define IWM_TE_V2_REPEAT_MAX 0xfe
#define IWM_TE_V2_PLACEMENT_POS 12
#define IWM_TE_V2_ABSENCE_POS 15
/* Time event policy values (for time event cmd api v2)
* A notification (both event and fragment) includes a status indicating weather
* the FW was able to schedule the event or not. For fragment start/end
* notification the status is always success. There is no start/end fragment
* notification for monolithic events.
*
* @IWM_TE_V2_DEFAULT_POLICY: independent, social, present, unnoticeable
* @IWM_TE_V2_NOTIF_HOST_EVENT_START: request/receive notification on event start
* @IWM_TE_V2_NOTIF_HOST_EVENT_END: request/receive notification on event end
* @IWM_TE_V2_NOTIF_INTERNAL_EVENT_START: internal FW use
* @IWM_TE_V2_NOTIF_INTERNAL_EVENT_END: internal FW use.
* @IWM_TE_V2_NOTIF_HOST_FRAG_START: request/receive notification on frag start
* @IWM_TE_V2_NOTIF_HOST_FRAG_END: request/receive notification on frag end
* @IWM_TE_V2_NOTIF_INTERNAL_FRAG_START: internal FW use.
* @IWM_TE_V2_NOTIF_INTERNAL_FRAG_END: internal FW use.
* @IWM_TE_V2_DEP_OTHER: depends on another time event
* @IWM_TE_V2_DEP_TSF: depends on a specific time
* @IWM_TE_V2_EVENT_SOCIOPATHIC: can't co-exist with other events of the same MAC
* @IWM_TE_V2_ABSENCE: are we present or absent during the Time Event.
*/
enum {
IWM_TE_V2_DEFAULT_POLICY = 0x0,
/* notifications (event start/stop, fragment start/stop) */
IWM_TE_V2_NOTIF_HOST_EVENT_START = (1 << 0),
IWM_TE_V2_NOTIF_HOST_EVENT_END = (1 << 1),
IWM_TE_V2_NOTIF_INTERNAL_EVENT_START = (1 << 2),
IWM_TE_V2_NOTIF_INTERNAL_EVENT_END = (1 << 3),
IWM_TE_V2_NOTIF_HOST_FRAG_START = (1 << 4),
IWM_TE_V2_NOTIF_HOST_FRAG_END = (1 << 5),
IWM_TE_V2_NOTIF_INTERNAL_FRAG_START = (1 << 6),
IWM_TE_V2_NOTIF_INTERNAL_FRAG_END = (1 << 7),
IWM_TE_V2_NOTIF_MSK = 0xff,
/* placement characteristics */
IWM_TE_V2_DEP_OTHER = (1 << IWM_TE_V2_PLACEMENT_POS),
IWM_TE_V2_DEP_TSF = (1 << (IWM_TE_V2_PLACEMENT_POS + 1)),
IWM_TE_V2_EVENT_SOCIOPATHIC = (1 << (IWM_TE_V2_PLACEMENT_POS + 2)),
/* are we present or absent during the Time Event. */
IWM_TE_V2_ABSENCE = (1 << IWM_TE_V2_ABSENCE_POS),
};
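/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * hypothetical): a host-visible time event policy combines notification bits
 * with a placement characteristic. For example, an event that depends on
 * another event and notifies the host on both start and end could use:
 */
static inline uint32_t
iwm_te_v2_example_policy(void)
{
return IWM_TE_V2_NOTIF_HOST_EVENT_START |
IWM_TE_V2_NOTIF_HOST_EVENT_END |
IWM_TE_V2_DEP_OTHER;
}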
/**
* struct iwm_time_event_cmd_api_v2 - configuring Time Events
* using struct IWM_MAC_TIME_EVENT_DATA_API_S_VER_2 (see also
* version 1; which version is used is determined by IWM_UCODE_TLV_FLAGS)
* ( IWM_TIME_EVENT_CMD = 0x29 )
* @id_and_color: ID and color of the relevant MAC
* @action: action to perform, one of IWM_FW_CTXT_ACTION_*
* @id: this field has two meanings, depending on the action:
* If the action is ADD, then it means the type of event to add.
* For all other actions it is the unique event ID assigned when the
* event was added by the FW.
* @apply_time: When to start the Time Event (in GP2)
* @max_delay: maximum delay to event's start (apply time), in TU
* @depends_on: the unique ID of the event we depend on (if any)
* @interval: interval between repetitions, in TU
* @duration: duration of event in TU
* @repeat: how many repetitions to do, can be IWM_TE_REPEAT_ENDLESS
* @max_frags: maximal number of fragments the Time Event can be divided to
* @policy: defines whether uCode shall notify the host or other uCode modules
* on event and/or fragment start and/or end,
* using one of IWM_TE_INDEPENDENT, IWM_TE_DEP_OTHER, IWM_TE_DEP_TSF or
* IWM_TE_EVENT_SOCIOPATHIC,
* together with IWM_TE_ABSENCE and the IWM_TE_NOTIF_* bits
*/
struct iwm_time_event_cmd_v2 {
/* COMMON_INDEX_HDR_API_S_VER_1 */
uint32_t id_and_color;
uint32_t action;
uint32_t id;
/* IWM_MAC_TIME_EVENT_DATA_API_S_VER_2 */
uint32_t apply_time;
uint32_t max_delay;
uint32_t depends_on;
uint32_t interval;
uint32_t duration;
uint8_t repeat;
uint8_t max_frags;
uint16_t policy;
} __packed; /* IWM_MAC_TIME_EVENT_CMD_API_S_VER_2 */
/**
* struct iwm_time_event_resp - response structure to iwm_time_event_cmd
* @status: bit 0 indicates success, all others specify errors
* @id: the Time Event type
* @unique_id: the unique ID assigned (in ADD) or given (others) to the TE
* @id_and_color: ID and color of the relevant MAC
*/
struct iwm_time_event_resp {
uint32_t status;
uint32_t id;
uint32_t unique_id;
uint32_t id_and_color;
} __packed; /* IWM_MAC_TIME_EVENT_RSP_API_S_VER_1 */
/**
* struct iwm_time_event_notif - notifications of time event start/stop
* ( IWM_TIME_EVENT_NOTIFICATION = 0x2a )
* @timestamp: action timestamp in GP2
* @session_id: session's unique id
* @unique_id: unique id of the Time Event itself
* @id_and_color: ID and color of the relevant MAC
* @action: one of IWM_TE_NOTIF_START or IWM_TE_NOTIF_END
* @status: true if scheduled, false otherwise (not executed)
*/
struct iwm_time_event_notif {
uint32_t timestamp;
uint32_t session_id;
uint32_t unique_id;
uint32_t id_and_color;
uint32_t action;
uint32_t status;
} __packed; /* IWM_MAC_TIME_EVENT_NTFY_API_S_VER_1 */
/* Bindings and Time Quota */
/**
* struct iwm_binding_cmd - configuring bindings
* ( IWM_BINDING_CONTEXT_CMD = 0x2b )
* @id_and_color: ID and color of the relevant Binding
* @action: action to perform, one of IWM_FW_CTXT_ACTION_*
* @macs: array of MAC id and colors which belong to the binding
* @phy: PHY id and color which belongs to the binding
*/
struct iwm_binding_cmd {
/* COMMON_INDEX_HDR_API_S_VER_1 */
uint32_t id_and_color;
uint32_t action;
/* IWM_BINDING_DATA_API_S_VER_1 */
uint32_t macs[IWM_MAX_MACS_IN_BINDING];
uint32_t phy;
} __packed; /* IWM_BINDING_CMD_API_S_VER_1 */
/* The maximal number of fragments in the FW's schedule session */
#define IWM_MVM_MAX_QUOTA 128
/**
* struct iwm_time_quota_data - configuration of time quota per binding
* @id_and_color: ID and color of the relevant Binding
* @quota: absolute time quota in TU. The scheduler will try to divide the
* remaining quota (after Time Events) according to this quota.
* @max_duration: max uninterrupted context duration in TU
*/
struct iwm_time_quota_data {
uint32_t id_and_color;
uint32_t quota;
uint32_t max_duration;
} __packed; /* IWM_TIME_QUOTA_DATA_API_S_VER_1 */
/**
* struct iwm_time_quota_cmd - configuration of time quota between bindings
* ( IWM_TIME_QUOTA_CMD = 0x2c )
* @quotas: allocations per binding
*/
struct iwm_time_quota_cmd {
struct iwm_time_quota_data quotas[IWM_MAX_BINDINGS];
} __packed; /* IWM_TIME_QUOTA_ALLOCATION_CMD_API_S_VER_1 */
/* PHY context */
/* Supported bands */
#define IWM_PHY_BAND_5 (0)
#define IWM_PHY_BAND_24 (1)
/* Supported channel width, vary if there is VHT support */
#define IWM_PHY_VHT_CHANNEL_MODE20 (0x0)
#define IWM_PHY_VHT_CHANNEL_MODE40 (0x1)
#define IWM_PHY_VHT_CHANNEL_MODE80 (0x2)
#define IWM_PHY_VHT_CHANNEL_MODE160 (0x3)
/*
* Control channel position:
* For legacy, a set bit means the control channel is the upper one;
* otherwise it is the lower one.
* For VHT - bit-2 marks if the control is lower/upper relative to center-freq,
* bits-1:0 mark the distance from the center freq. For 20MHz, offset is 0.
* center_freq
* |
* 40MHz |_______|_______|
* 80MHz |_______|_______|_______|_______|
* 160MHz |_______|_______|_______|_______|_______|_______|_______|_______|
* code 011 010 001 000 | 100 101 110 111
*/
#define IWM_PHY_VHT_CTRL_POS_1_BELOW (0x0)
#define IWM_PHY_VHT_CTRL_POS_2_BELOW (0x1)
#define IWM_PHY_VHT_CTRL_POS_3_BELOW (0x2)
#define IWM_PHY_VHT_CTRL_POS_4_BELOW (0x3)
#define IWM_PHY_VHT_CTRL_POS_1_ABOVE (0x4)
#define IWM_PHY_VHT_CTRL_POS_2_ABOVE (0x5)
#define IWM_PHY_VHT_CTRL_POS_3_ABOVE (0x6)
#define IWM_PHY_VHT_CTRL_POS_4_ABOVE (0x7)
/*
* @band: IWM_PHY_BAND_*
* @channel: channel number
* @width: PHY_[VHT|LEGACY]_CHANNEL_*
* @ctrl_pos: PHY_[VHT|LEGACY]_CTRL_*
*/
struct iwm_fw_channel_info {
uint8_t band;
uint8_t channel;
uint8_t width;
uint8_t ctrl_pos;
} __packed;
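/*
 * Illustrative sketch, not part of the firmware API (the helper name and the
 * channel number are hypothetical): filling in channel info for a 40MHz
 * channel on the 5GHz band whose control channel sits one 20MHz slot below
 * the center frequency.
 */
static inline void
iwm_example_fill_channel_info(struct iwm_fw_channel_info *ci)
{
ci->band = IWM_PHY_BAND_5;
ci->channel = 38; /* hypothetical center channel */
ci->width = IWM_PHY_VHT_CHANNEL_MODE40;
ci->ctrl_pos = IWM_PHY_VHT_CTRL_POS_1_BELOW;
}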
#define IWM_PHY_RX_CHAIN_DRIVER_FORCE_POS (0)
#define IWM_PHY_RX_CHAIN_DRIVER_FORCE_MSK \
(0x1 << IWM_PHY_RX_CHAIN_DRIVER_FORCE_POS)
#define IWM_PHY_RX_CHAIN_VALID_POS (1)
#define IWM_PHY_RX_CHAIN_VALID_MSK \
(0x7 << IWM_PHY_RX_CHAIN_VALID_POS)
#define IWM_PHY_RX_CHAIN_FORCE_SEL_POS (4)
#define IWM_PHY_RX_CHAIN_FORCE_SEL_MSK \
(0x7 << IWM_PHY_RX_CHAIN_FORCE_SEL_POS)
#define IWM_PHY_RX_CHAIN_FORCE_MIMO_SEL_POS (7)
#define IWM_PHY_RX_CHAIN_FORCE_MIMO_SEL_MSK \
(0x7 << IWM_PHY_RX_CHAIN_FORCE_MIMO_SEL_POS)
#define IWM_PHY_RX_CHAIN_CNT_POS (10)
#define IWM_PHY_RX_CHAIN_CNT_MSK \
(0x3 << IWM_PHY_RX_CHAIN_CNT_POS)
#define IWM_PHY_RX_CHAIN_MIMO_CNT_POS (12)
#define IWM_PHY_RX_CHAIN_MIMO_CNT_MSK \
(0x3 << IWM_PHY_RX_CHAIN_MIMO_CNT_POS)
#define IWM_PHY_RX_CHAIN_MIMO_FORCE_POS (14)
#define IWM_PHY_RX_CHAIN_MIMO_FORCE_MSK \
(0x1 << IWM_PHY_RX_CHAIN_MIMO_FORCE_POS)
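/*
 * Illustrative sketch, not part of the firmware API (the helper name and the
 * chosen values are assumptions): composing an rxchain_info word for a device
 * with chains A and B. The 3-bit antenna mask goes in the VALID field, and
 * the CNT/MIMO_CNT fields carry the number of chains to use.
 */
static inline uint32_t
iwm_example_rxchain_info(void)
{
uint32_t rx_ant = 0x3; /* chains A and B as a 3-bit mask */

return (rx_ant << IWM_PHY_RX_CHAIN_VALID_POS) |
(2 << IWM_PHY_RX_CHAIN_CNT_POS) |
(2 << IWM_PHY_RX_CHAIN_MIMO_CNT_POS);
}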
/* TODO: fix the value, make it depend on firmware at runtime? */
#define IWM_NUM_PHY_CTX 3
/* TODO: complete missing documentation */
/**
* struct iwm_phy_context_cmd - config of the PHY context
* ( IWM_PHY_CONTEXT_CMD = 0x8 )
* @id_and_color: ID and color of the relevant Binding
* @action: action to perform, one of IWM_FW_CTXT_ACTION_*
* @apply_time: 0 means immediate apply and context switch.
* other value means apply new params after X usecs
* @tx_param_color: ???
* @channel_info:
* @txchain_info: ???
* @rxchain_info: ???
* @acquisition_data: ???
* @dsp_cfg_flags: set to 0
*/
struct iwm_phy_context_cmd {
/* COMMON_INDEX_HDR_API_S_VER_1 */
uint32_t id_and_color;
uint32_t action;
/* IWM_PHY_CONTEXT_DATA_API_S_VER_1 */
uint32_t apply_time;
uint32_t tx_param_color;
struct iwm_fw_channel_info ci;
uint32_t txchain_info;
uint32_t rxchain_info;
uint32_t acquisition_data;
uint32_t dsp_cfg_flags;
} __packed; /* IWM_PHY_CONTEXT_CMD_API_VER_1 */
#define IWM_RX_INFO_PHY_CNT 8
#define IWM_RX_INFO_ENERGY_ANT_ABC_IDX 1
#define IWM_RX_INFO_ENERGY_ANT_A_MSK 0x000000ff
#define IWM_RX_INFO_ENERGY_ANT_B_MSK 0x0000ff00
#define IWM_RX_INFO_ENERGY_ANT_C_MSK 0x00ff0000
#define IWM_RX_INFO_ENERGY_ANT_A_POS 0
#define IWM_RX_INFO_ENERGY_ANT_B_POS 8
#define IWM_RX_INFO_ENERGY_ANT_C_POS 16
#define IWM_RX_INFO_AGC_IDX 1
#define IWM_RX_INFO_RSSI_AB_IDX 2
#define IWM_OFDM_AGC_A_MSK 0x0000007f
#define IWM_OFDM_AGC_A_POS 0
#define IWM_OFDM_AGC_B_MSK 0x00003f80
#define IWM_OFDM_AGC_B_POS 7
#define IWM_OFDM_AGC_CODE_MSK 0x3fe00000
#define IWM_OFDM_AGC_CODE_POS 20
#define IWM_OFDM_RSSI_INBAND_A_MSK 0x00ff
#define IWM_OFDM_RSSI_A_POS 0
#define IWM_OFDM_RSSI_ALLBAND_A_MSK 0xff00
#define IWM_OFDM_RSSI_ALLBAND_A_POS 8
#define IWM_OFDM_RSSI_INBAND_B_MSK 0xff0000
#define IWM_OFDM_RSSI_B_POS 16
#define IWM_OFDM_RSSI_ALLBAND_B_MSK 0xff000000
#define IWM_OFDM_RSSI_ALLBAND_B_POS 24
/**
* struct iwm_rx_phy_info - phy info
* (IWM_REPLY_RX_PHY_CMD = 0xc0)
* @non_cfg_phy_cnt: non configurable DSP phy data byte count
* @cfg_phy_cnt: configurable DSP phy data byte count
* @stat_id: configurable DSP phy data set ID
* @reserved1:
* @system_timestamp: GP2 at on air rise
* @timestamp: TSF at on air rise
* @beacon_time_stamp: beacon at on-air rise
* @phy_flags: general phy flags: band, modulation, ...
* @channel: channel number
* @non_cfg_phy_buf: for various implementations of non_cfg_phy
* @rate_n_flags: IWM_RATE_MCS_*
* @byte_count: frame's byte-count
* @frame_time: frame's time on the air, based on byte count and frame rate
* calculation
* @mac_active_msk: what MACs were active when the frame was received
*
* Before each Rx, the device sends this data. It contains PHY information
* about the reception of the packet.
*/
struct iwm_rx_phy_info {
uint8_t non_cfg_phy_cnt;
uint8_t cfg_phy_cnt;
uint8_t stat_id;
uint8_t reserved1;
uint32_t system_timestamp;
uint64_t timestamp;
uint32_t beacon_time_stamp;
uint16_t phy_flags;
#define IWM_PHY_INFO_FLAG_SHPREAMBLE (1 << 2)
uint16_t channel;
uint32_t non_cfg_phy[IWM_RX_INFO_PHY_CNT];
uint8_t rate;
uint8_t rflags;
uint16_t xrflags;
uint32_t byte_count;
uint16_t mac_active_msk;
uint16_t frame_time;
} __packed;
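/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * assumed): extracting the antenna A energy field packed into the
 * non_cfg_phy word at IWM_RX_INFO_ENERGY_ANT_ABC_IDX. The structure arrives
 * little-endian on the wire, hence the le32toh() conversion.
 */
static inline uint8_t
iwm_example_rx_energy_ant_a(const struct iwm_rx_phy_info *info)
{
uint32_t val = le32toh(info->non_cfg_phy[IWM_RX_INFO_ENERGY_ANT_ABC_IDX]);

return (val & IWM_RX_INFO_ENERGY_ANT_A_MSK) >> IWM_RX_INFO_ENERGY_ANT_A_POS;
}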
struct iwm_rx_mpdu_res_start {
uint16_t byte_count;
uint16_t reserved;
} __packed;
/**
* enum iwm_rx_phy_flags - to parse %iwm_rx_phy_info phy_flags
* @IWM_RX_RES_PHY_FLAGS_BAND_24: true if the packet was received on 2.4 band
* @IWM_RX_RES_PHY_FLAGS_MOD_CCK:
* @IWM_RX_RES_PHY_FLAGS_SHORT_PREAMBLE: true if packet's preamble was short
* @IWM_RX_RES_PHY_FLAGS_NARROW_BAND:
* @IWM_RX_RES_PHY_FLAGS_ANTENNA: antenna on which the packet was received
* @IWM_RX_RES_PHY_FLAGS_AGG: set if the packet was part of an A-MPDU
* @IWM_RX_RES_PHY_FLAGS_OFDM_HT: The frame was an HT frame
* @IWM_RX_RES_PHY_FLAGS_OFDM_GF: The frame used GF preamble
* @IWM_RX_RES_PHY_FLAGS_OFDM_VHT: The frame was a VHT frame
*/
enum iwm_rx_phy_flags {
IWM_RX_RES_PHY_FLAGS_BAND_24 = (1 << 0),
IWM_RX_RES_PHY_FLAGS_MOD_CCK = (1 << 1),
IWM_RX_RES_PHY_FLAGS_SHORT_PREAMBLE = (1 << 2),
IWM_RX_RES_PHY_FLAGS_NARROW_BAND = (1 << 3),
IWM_RX_RES_PHY_FLAGS_ANTENNA = (0x7 << 4),
IWM_RX_RES_PHY_FLAGS_ANTENNA_POS = 4,
IWM_RX_RES_PHY_FLAGS_AGG = (1 << 7),
IWM_RX_RES_PHY_FLAGS_OFDM_HT = (1 << 8),
IWM_RX_RES_PHY_FLAGS_OFDM_GF = (1 << 9),
IWM_RX_RES_PHY_FLAGS_OFDM_VHT = (1 << 10),
};
/**
* enum iwm_mvm_rx_status - written by fw for each Rx packet
* @IWM_RX_MPDU_RES_STATUS_CRC_OK: CRC is fine
* @IWM_RX_MPDU_RES_STATUS_OVERRUN_OK: there was no RXE overflow
* @IWM_RX_MPDU_RES_STATUS_SRC_STA_FOUND:
* @IWM_RX_MPDU_RES_STATUS_KEY_VALID:
* @IWM_RX_MPDU_RES_STATUS_KEY_PARAM_OK:
* @IWM_RX_MPDU_RES_STATUS_ICV_OK: ICV is fine, if not, the packet is destroyed
* @IWM_RX_MPDU_RES_STATUS_MIC_OK: used for CCM alg only. TKIP MIC is checked
* in the driver.
* @IWM_RX_MPDU_RES_STATUS_TTAK_OK: TTAK is fine
* @IWM_RX_MPDU_RES_STATUS_MNG_FRAME_REPLAY_ERR: valid for alg = CCM_CMAC or
* alg = CCM only. Checks replay attack for 11w frames. Relevant only if
* %IWM_RX_MPDU_RES_STATUS_ROBUST_MNG_FRAME is set.
* @IWM_RX_MPDU_RES_STATUS_SEC_NO_ENC: this frame is not encrypted
* @IWM_RX_MPDU_RES_STATUS_SEC_WEP_ENC: this frame is encrypted using WEP
* @IWM_RX_MPDU_RES_STATUS_SEC_CCM_ENC: this frame is encrypted using CCM
* @IWM_RX_MPDU_RES_STATUS_SEC_TKIP_ENC: this frame is encrypted using TKIP
* @IWM_RX_MPDU_RES_STATUS_SEC_CCM_CMAC_ENC: this frame is encrypted using CCM_CMAC
* @IWM_RX_MPDU_RES_STATUS_SEC_ENC_ERR: this frame couldn't be decrypted
* @IWM_RX_MPDU_RES_STATUS_SEC_ENC_MSK: bitmask of the encryption algorithm
* @IWM_RX_MPDU_RES_STATUS_DEC_DONE: this frame has been successfully decrypted
* @IWM_RX_MPDU_RES_STATUS_PROTECT_FRAME_BIT_CMP:
* @IWM_RX_MPDU_RES_STATUS_EXT_IV_BIT_CMP:
* @IWM_RX_MPDU_RES_STATUS_KEY_ID_CMP_BIT:
* @IWM_RX_MPDU_RES_STATUS_ROBUST_MNG_FRAME: this frame is an 11w management frame
* @IWM_RX_MPDU_RES_STATUS_HASH_INDEX_MSK:
* @IWM_RX_MPDU_RES_STATUS_STA_ID_MSK:
* @IWM_RX_MPDU_RES_STATUS_RRF_KILL:
* @IWM_RX_MPDU_RES_STATUS_FILTERING_MSK:
* @IWM_RX_MPDU_RES_STATUS2_FILTERING_MSK:
*/
enum iwm_mvm_rx_status {
IWM_RX_MPDU_RES_STATUS_CRC_OK = (1 << 0),
IWM_RX_MPDU_RES_STATUS_OVERRUN_OK = (1 << 1),
IWM_RX_MPDU_RES_STATUS_SRC_STA_FOUND = (1 << 2),
IWM_RX_MPDU_RES_STATUS_KEY_VALID = (1 << 3),
IWM_RX_MPDU_RES_STATUS_KEY_PARAM_OK = (1 << 4),
IWM_RX_MPDU_RES_STATUS_ICV_OK = (1 << 5),
IWM_RX_MPDU_RES_STATUS_MIC_OK = (1 << 6),
IWM_RX_MPDU_RES_STATUS_TTAK_OK = (1 << 7),
IWM_RX_MPDU_RES_STATUS_MNG_FRAME_REPLAY_ERR = (1 << 7),
IWM_RX_MPDU_RES_STATUS_SEC_NO_ENC = (0 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_WEP_ENC = (1 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_CCM_ENC = (2 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_TKIP_ENC = (3 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_EXT_ENC = (4 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_CCM_CMAC_ENC = (6 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_ENC_ERR = (7 << 8),
IWM_RX_MPDU_RES_STATUS_SEC_ENC_MSK = (7 << 8),
IWM_RX_MPDU_RES_STATUS_DEC_DONE = (1 << 11),
IWM_RX_MPDU_RES_STATUS_PROTECT_FRAME_BIT_CMP = (1 << 12),
IWM_RX_MPDU_RES_STATUS_EXT_IV_BIT_CMP = (1 << 13),
IWM_RX_MPDU_RES_STATUS_KEY_ID_CMP_BIT = (1 << 14),
IWM_RX_MPDU_RES_STATUS_ROBUST_MNG_FRAME = (1 << 15),
IWM_RX_MPDU_RES_STATUS_HASH_INDEX_MSK = (0x3F0000),
IWM_RX_MPDU_RES_STATUS_STA_ID_MSK = (0x1f000000),
IWM_RX_MPDU_RES_STATUS_RRF_KILL = (1 << 29),
IWM_RX_MPDU_RES_STATUS_FILTERING_MSK = (0xc00000),
IWM_RX_MPDU_RES_STATUS2_FILTERING_MSK = (0xc0000000),
};
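/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * assumed): the encryption algorithm lives in a 3-bit field, so it must be
 * compared after masking rather than tested bit by bit. For example,
 * checking that an MPDU was CCM-encrypted and decrypted successfully:
 */
static inline int
iwm_example_rx_ccm_decrypted(uint32_t rx_status)
{
return (rx_status & IWM_RX_MPDU_RES_STATUS_SEC_ENC_MSK) ==
IWM_RX_MPDU_RES_STATUS_SEC_CCM_ENC &&
(rx_status & IWM_RX_MPDU_RES_STATUS_DEC_DONE) != 0;
}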
/**
* struct iwm_radio_version_notif - information on the radio version
* ( IWM_RADIO_VERSION_NOTIFICATION = 0x68 )
* @radio_flavor:
* @radio_step:
* @radio_dash:
*/
struct iwm_radio_version_notif {
uint32_t radio_flavor;
uint32_t radio_step;
uint32_t radio_dash;
} __packed; /* IWM_RADIO_VERSION_NOTOFICATION_S_VER_1 */
enum iwm_card_state_flags {
IWM_CARD_ENABLED = 0x00,
IWM_HW_CARD_DISABLED = 0x01,
IWM_SW_CARD_DISABLED = 0x02,
IWM_CT_KILL_CARD_DISABLED = 0x04,
IWM_HALT_CARD_DISABLED = 0x08,
IWM_CARD_DISABLED_MSK = 0x0f,
IWM_CARD_IS_RX_ON = 0x10,
};
/**
* struct iwm_card_state_notif - card state notification
* ( IWM_CARD_STATE_NOTIFICATION = 0xa1 )
* @flags: %iwm_card_state_flags
*/
struct iwm_card_state_notif {
uint32_t flags;
} __packed; /* CARD_STATE_NTFY_API_S_VER_1 */
/**
* struct iwm_missed_beacons_notif - information on missed beacons
* ( IWM_MISSED_BEACONS_NOTIFICATION = 0xa2 )
* @mac_id: interface ID
* @consec_missed_beacons_since_last_rx: number of consecutive missed
* beacons since last RX.
* @consec_missed_beacons: number of consecutive missed beacons
* @num_expected_beacons:
* @num_recvd_beacons:
*/
struct iwm_missed_beacons_notif {
uint32_t mac_id;
uint32_t consec_missed_beacons_since_last_rx;
uint32_t consec_missed_beacons;
uint32_t num_expected_beacons;
uint32_t num_recvd_beacons;
} __packed; /* IWM_MISSED_BEACON_NTFY_API_S_VER_3 */
/**
* struct iwm_set_calib_default_cmd - set default value for calibration.
* ( IWM_SET_CALIB_DEFAULT_CMD = 0x8e )
* @calib_index: the calibration to set value for
* @length: of data
* @data: the value to set for the calibration result
*/
struct iwm_set_calib_default_cmd {
uint16_t calib_index;
uint16_t length;
uint8_t data[0];
} __packed; /* IWM_PHY_CALIB_OVERRIDE_VALUES_S */
#define IWM_MAX_PORT_ID_NUM 2
#define IWM_MAX_MCAST_FILTERING_ADDRESSES 256
/**
* struct iwm_mcast_filter_cmd - configure multicast filter.
* @filter_own: Set 1 to filter out multicast packets sent by station itself
* @port_id: Multicast MAC addresses array specifier. This is a strange way
* of identifying the network interface, adopted in the host-device interface.
* It is used by FW as index in array of addresses. This array has
* IWM_MAX_PORT_ID_NUM members.
* @count: Number of MAC addresses in the array
* @pass_all: Set 1 to pass all multicast packets.
* @bssid: current association BSSID.
* @addr_list: Placeholder for array of MAC addresses.
* IMPORTANT: add padding if necessary to ensure DWORD alignment.
*/
struct iwm_mcast_filter_cmd {
uint8_t filter_own;
uint8_t port_id;
uint8_t count;
uint8_t pass_all;
uint8_t bssid[6];
uint8_t reserved[2];
uint8_t addr_list[0];
} __packed; /* IWM_MCAST_FILTERING_CMD_API_S_VER_1 */
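/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * assumed): sizing a multicast filter command that carries 'count' 6-byte
 * MAC addresses in addr_list, rounded up to DWORD alignment as the comment
 * above requires. roundup() is the <sys/param.h> macro.
 */
static inline size_t
iwm_example_mcast_filter_cmd_size(int count)
{
return roundup(sizeof(struct iwm_mcast_filter_cmd) + count * 6, 4);
}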
struct iwm_mvm_statistics_dbg {
uint32_t burst_check;
uint32_t burst_count;
uint32_t wait_for_silence_timeout_cnt;
uint32_t reserved[3];
} __packed; /* IWM_STATISTICS_DEBUG_API_S_VER_2 */
struct iwm_mvm_statistics_div {
uint32_t tx_on_a;
uint32_t tx_on_b;
uint32_t exec_time;
uint32_t probe_time;
uint32_t rssi_ant;
uint32_t reserved2;
} __packed; /* IWM_STATISTICS_SLOW_DIV_API_S_VER_2 */
struct iwm_mvm_statistics_general_common {
uint32_t temperature; /* radio temperature */
uint32_t temperature_m; /* radio voltage */
struct iwm_mvm_statistics_dbg dbg;
uint32_t sleep_time;
uint32_t slots_out;
uint32_t slots_idle;
uint32_t ttl_timestamp;
struct iwm_mvm_statistics_div div;
uint32_t rx_enable_counter;
/*
* num_of_sos_states:
* count the number of times we have to re-tune
* in order to get out of bad PHY status
*/
uint32_t num_of_sos_states;
} __packed; /* IWM_STATISTICS_GENERAL_API_S_VER_5 */
struct iwm_mvm_statistics_rx_non_phy {
uint32_t bogus_cts; /* CTS received when not expecting CTS */
uint32_t bogus_ack; /* ACK received when not expecting ACK */
uint32_t non_bssid_frames; /* number of frames with BSSID that
* doesn't belong to the STA BSSID */
uint32_t filtered_frames; /* count frames that were dumped in the
* filtering process */
uint32_t non_channel_beacons; /* beacons with our bss id but not on
* our serving channel */
uint32_t channel_beacons; /* beacons with our bss id and in our
* serving channel */
uint32_t num_missed_bcon; /* number of missed beacons */
uint32_t adc_rx_saturation_time; /* count in 0.8us units the time the
* ADC was in saturation */
uint32_t ina_detection_search_time;/* total time (in 0.8us) searched
* for INA */
uint32_t beacon_silence_rssi[3];/* RSSI silence after beacon frame */
uint32_t interference_data_flag; /* flag for interference data
* availability. 1 when data is
* available. */
uint32_t channel_load; /* counts RX Enable time in uSec */
uint32_t dsp_false_alarms; /* DSP false alarm (both OFDM
* and CCK) counter */
uint32_t beacon_rssi_a;
uint32_t beacon_rssi_b;
uint32_t beacon_rssi_c;
uint32_t beacon_energy_a;
uint32_t beacon_energy_b;
uint32_t beacon_energy_c;
uint32_t num_bt_kills;
uint32_t mac_id;
uint32_t directed_data_mpdu;
} __packed; /* IWM_STATISTICS_RX_NON_PHY_API_S_VER_3 */
struct iwm_mvm_statistics_rx_phy {
uint32_t ina_cnt;
uint32_t fina_cnt;
uint32_t plcp_err;
uint32_t crc32_err;
uint32_t overrun_err;
uint32_t early_overrun_err;
uint32_t crc32_good;
uint32_t false_alarm_cnt;
uint32_t fina_sync_err_cnt;
uint32_t sfd_timeout;
uint32_t fina_timeout;
uint32_t unresponded_rts;
uint32_t rxe_frame_limit_overrun;
uint32_t sent_ack_cnt;
uint32_t sent_cts_cnt;
uint32_t sent_ba_rsp_cnt;
uint32_t dsp_self_kill;
uint32_t mh_format_err;
uint32_t re_acq_main_rssi_sum;
uint32_t reserved;
} __packed; /* IWM_STATISTICS_RX_PHY_API_S_VER_2 */
struct iwm_mvm_statistics_rx_ht_phy {
uint32_t plcp_err;
uint32_t overrun_err;
uint32_t early_overrun_err;
uint32_t crc32_good;
uint32_t crc32_err;
uint32_t mh_format_err;
uint32_t agg_crc32_good;
uint32_t agg_mpdu_cnt;
uint32_t agg_cnt;
uint32_t unsupport_mcs;
} __packed; /* IWM_STATISTICS_HT_RX_PHY_API_S_VER_1 */
#define IWM_MAX_CHAINS 3
struct iwm_mvm_statistics_tx_non_phy_agg {
uint32_t ba_timeout;
uint32_t ba_reschedule_frames;
uint32_t scd_query_agg_frame_cnt;
uint32_t scd_query_no_agg;
uint32_t scd_query_agg;
uint32_t scd_query_mismatch;
uint32_t frame_not_ready;
uint32_t underrun;
uint32_t bt_prio_kill;
uint32_t rx_ba_rsp_cnt;
int8_t txpower[IWM_MAX_CHAINS];
int8_t reserved;
uint32_t reserved2;
} __packed; /* IWM_STATISTICS_TX_NON_PHY_AGG_API_S_VER_1 */
struct iwm_mvm_statistics_tx_channel_width {
uint32_t ext_cca_narrow_ch20[1];
uint32_t ext_cca_narrow_ch40[2];
uint32_t ext_cca_narrow_ch80[3];
uint32_t ext_cca_narrow_ch160[4];
uint32_t last_tx_ch_width_indx;
uint32_t rx_detected_per_ch_width[4];
uint32_t success_per_ch_width[4];
uint32_t fail_per_ch_width[4];
}; /* IWM_STATISTICS_TX_CHANNEL_WIDTH_API_S_VER_1 */
struct iwm_mvm_statistics_tx {
uint32_t preamble_cnt;
uint32_t rx_detected_cnt;
uint32_t bt_prio_defer_cnt;
uint32_t bt_prio_kill_cnt;
uint32_t few_bytes_cnt;
uint32_t cts_timeout;
uint32_t ack_timeout;
uint32_t expected_ack_cnt;
uint32_t actual_ack_cnt;
uint32_t dump_msdu_cnt;
uint32_t burst_abort_next_frame_mismatch_cnt;
uint32_t burst_abort_missing_next_frame_cnt;
uint32_t cts_timeout_collision;
uint32_t ack_or_ba_timeout_collision;
struct iwm_mvm_statistics_tx_non_phy_agg agg;
struct iwm_mvm_statistics_tx_channel_width channel_width;
} __packed; /* IWM_STATISTICS_TX_API_S_VER_4 */
struct iwm_mvm_statistics_bt_activity {
uint32_t hi_priority_tx_req_cnt;
uint32_t hi_priority_tx_denied_cnt;
uint32_t lo_priority_tx_req_cnt;
uint32_t lo_priority_tx_denied_cnt;
uint32_t hi_priority_rx_req_cnt;
uint32_t hi_priority_rx_denied_cnt;
uint32_t lo_priority_rx_req_cnt;
uint32_t lo_priority_rx_denied_cnt;
} __packed; /* IWM_STATISTICS_BT_ACTIVITY_API_S_VER_1 */
struct iwm_mvm_statistics_general {
struct iwm_mvm_statistics_general_common common;
uint32_t beacon_filtered;
uint32_t missed_beacons;
int8_t beacon_filter_average_energy;
int8_t beacon_filter_reason;
int8_t beacon_filter_current_energy;
int8_t beacon_filter_reserved;
uint32_t beacon_filter_delta_time;
struct iwm_mvm_statistics_bt_activity bt_activity;
} __packed; /* IWM_STATISTICS_GENERAL_API_S_VER_5 */
struct iwm_mvm_statistics_rx {
struct iwm_mvm_statistics_rx_phy ofdm;
struct iwm_mvm_statistics_rx_phy cck;
struct iwm_mvm_statistics_rx_non_phy general;
struct iwm_mvm_statistics_rx_ht_phy ofdm_ht;
} __packed; /* IWM_STATISTICS_RX_API_S_VER_3 */
/*
* IWM_STATISTICS_NOTIFICATION = 0x9d (notification only, not a command)
*
* By default, uCode issues this notification after receiving a beacon
* while associated. To disable this behavior, set DISABLE_NOTIF flag in the
* IWM_REPLY_STATISTICS_CMD 0x9c, above.
*
* Statistics counters continue to increment beacon after beacon, but are
* cleared when changing channels or when driver issues IWM_REPLY_STATISTICS_CMD
* 0x9c with CLEAR_STATS bit set (see above).
*
* uCode also issues this notification during scans. uCode clears statistics
* appropriately so that each notification contains statistics for only the
* one channel that has just been scanned.
*/
struct iwm_notif_statistics { /* IWM_STATISTICS_NTFY_API_S_VER_8 */
uint32_t flag;
struct iwm_mvm_statistics_rx rx;
struct iwm_mvm_statistics_tx tx;
struct iwm_mvm_statistics_general general;
} __packed;
/***********************************
* Smart Fifo API
***********************************/
/* Smart Fifo state */
enum iwm_sf_state {
IWM_SF_LONG_DELAY_ON = 0, /* should never be called by driver */
IWM_SF_FULL_ON,
IWM_SF_UNINIT,
IWM_SF_INIT_OFF,
IWM_SF_HW_NUM_STATES
};
/* Smart Fifo possible scenario */
enum iwm_sf_scenario {
IWM_SF_SCENARIO_SINGLE_UNICAST,
IWM_SF_SCENARIO_AGG_UNICAST,
IWM_SF_SCENARIO_MULTICAST,
IWM_SF_SCENARIO_BA_RESP,
IWM_SF_SCENARIO_TX_RESP,
IWM_SF_NUM_SCENARIO
};
#define IWM_SF_TRANSIENT_STATES_NUMBER 2 /* IWM_SF_LONG_DELAY_ON and IWM_SF_FULL_ON */
#define IWM_SF_NUM_TIMEOUT_TYPES 2 /* Aging timer and Idle timer */
/* smart FIFO default values */
#define IWM_SF_W_MARK_SISO 4096
#define IWM_SF_W_MARK_MIMO2 8192
#define IWM_SF_W_MARK_MIMO3 6144
#define IWM_SF_W_MARK_LEGACY 4096
#define IWM_SF_W_MARK_SCAN 4096
/* SF Scenarios timers for FULL_ON state (aligned to 32 uSec) */
#define IWM_SF_SINGLE_UNICAST_IDLE_TIMER 320 /* 300 uSec */
#define IWM_SF_SINGLE_UNICAST_AGING_TIMER 2016 /* 2 mSec */
#define IWM_SF_AGG_UNICAST_IDLE_TIMER 320 /* 300 uSec */
#define IWM_SF_AGG_UNICAST_AGING_TIMER 2016 /* 2 mSec */
#define IWM_SF_MCAST_IDLE_TIMER 2016 /* 2 mSec */
#define IWM_SF_MCAST_AGING_TIMER 10016 /* 10 mSec */
#define IWM_SF_BA_IDLE_TIMER 320 /* 300 uSec */
#define IWM_SF_BA_AGING_TIMER 2016 /* 2 mSec */
#define IWM_SF_TX_RE_IDLE_TIMER 320 /* 300 uSec */
#define IWM_SF_TX_RE_AGING_TIMER 2016 /* 2 mSec */
#define IWM_SF_LONG_DELAY_AGING_TIMER 1000000 /* 1 Sec */
/**
* Smart Fifo configuration command.
* @state: smart fifo state, types listed in iwm_sf_state.
* @watermark: Minimum allowed available free space in RXF for transient state.
* @long_delay_timeouts: aging and idle timer values for each scenario
* in long delay state.
* @full_on_timeouts: timer values for each scenario in full on state.
*/
struct iwm_sf_cfg_cmd {
enum iwm_sf_state state;
uint32_t watermark[IWM_SF_TRANSIENT_STATES_NUMBER];
uint32_t long_delay_timeouts[IWM_SF_NUM_SCENARIO][IWM_SF_NUM_TIMEOUT_TYPES];
uint32_t full_on_timeouts[IWM_SF_NUM_SCENARIO][IWM_SF_NUM_TIMEOUT_TYPES];
} __packed; /* IWM_SF_CFG_API_S_VER_2 */
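/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * assumed, and the [aging, idle] ordering of the inner timeout array is an
 * assumption this header does not pin down): programming the FULL_ON
 * timeouts for the single-unicast scenario from the defaults above.
 */
static inline void
iwm_example_sf_full_on(struct iwm_sf_cfg_cmd *cmd)
{
cmd->state = IWM_SF_FULL_ON;
cmd->full_on_timeouts[IWM_SF_SCENARIO_SINGLE_UNICAST][0] =
htole32(IWM_SF_SINGLE_UNICAST_AGING_TIMER);
cmd->full_on_timeouts[IWM_SF_SCENARIO_SINGLE_UNICAST][1] =
htole32(IWM_SF_SINGLE_UNICAST_IDLE_TIMER);
}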
/*
* END mvm/fw-api.h
*/
/*
* BEGIN mvm/fw-api-mac.h
*/
/*
* The first MAC indices (starting from 0)
* are available to the driver, AUX follows
*/
#define IWM_MAC_INDEX_AUX 4
#define IWM_MAC_INDEX_MIN_DRIVER 0
#define IWM_NUM_MAC_INDEX_DRIVER IWM_MAC_INDEX_AUX
enum iwm_ac {
IWM_AC_BK,
IWM_AC_BE,
IWM_AC_VI,
IWM_AC_VO,
IWM_AC_NUM,
};
/**
* enum iwm_mac_protection_flags - MAC context flags
* @IWM_MAC_PROT_FLG_TGG_PROTECT: 11g protection when transmitting OFDM frames,
* this will require CCK RTS/CTS2self.
* RTS/CTS will protect full burst time.
* @IWM_MAC_PROT_FLG_HT_PROT: enable HT protection
* @IWM_MAC_PROT_FLG_FAT_PROT: protect 40 MHz transmissions
* @IWM_MAC_PROT_FLG_SELF_CTS_EN: allow CTS2self
*/
enum iwm_mac_protection_flags {
IWM_MAC_PROT_FLG_TGG_PROTECT = (1 << 3),
IWM_MAC_PROT_FLG_HT_PROT = (1 << 23),
IWM_MAC_PROT_FLG_FAT_PROT = (1 << 24),
IWM_MAC_PROT_FLG_SELF_CTS_EN = (1 << 30),
};
#define IWM_MAC_FLG_SHORT_SLOT (1 << 4)
#define IWM_MAC_FLG_SHORT_PREAMBLE (1 << 5)
/**
* enum iwm_mac_types - Supported MAC types
* @IWM_FW_MAC_TYPE_FIRST: lowest supported MAC type
* @IWM_FW_MAC_TYPE_AUX: Auxiliary MAC (internal)
* @IWM_FW_MAC_TYPE_LISTENER: monitor MAC type (?)
* @IWM_FW_MAC_TYPE_PIBSS: Pseudo-IBSS
* @IWM_FW_MAC_TYPE_IBSS: IBSS
* @IWM_FW_MAC_TYPE_BSS_STA: BSS (managed) station
* @IWM_FW_MAC_TYPE_P2P_DEVICE: P2P Device
* @IWM_FW_MAC_TYPE_P2P_STA: P2P client
* @IWM_FW_MAC_TYPE_GO: P2P GO
* @IWM_FW_MAC_TYPE_TEST: ?
* @IWM_FW_MAC_TYPE_MAX: highest supported MAC type
*/
enum iwm_mac_types {
IWM_FW_MAC_TYPE_FIRST = 1,
IWM_FW_MAC_TYPE_AUX = IWM_FW_MAC_TYPE_FIRST,
IWM_FW_MAC_TYPE_LISTENER,
IWM_FW_MAC_TYPE_PIBSS,
IWM_FW_MAC_TYPE_IBSS,
IWM_FW_MAC_TYPE_BSS_STA,
IWM_FW_MAC_TYPE_P2P_DEVICE,
IWM_FW_MAC_TYPE_P2P_STA,
IWM_FW_MAC_TYPE_GO,
IWM_FW_MAC_TYPE_TEST,
IWM_FW_MAC_TYPE_MAX = IWM_FW_MAC_TYPE_TEST
}; /* IWM_MAC_CONTEXT_TYPE_API_E_VER_1 */
/**
* enum iwm_tsf_id - TSF hw timer ID
* @IWM_TSF_ID_A: use TSF A
* @IWM_TSF_ID_B: use TSF B
* @IWM_TSF_ID_C: use TSF C
* @IWM_TSF_ID_D: use TSF D
* @IWM_NUM_TSF_IDS: number of TSF timers available
*/
enum iwm_tsf_id {
IWM_TSF_ID_A = 0,
IWM_TSF_ID_B = 1,
IWM_TSF_ID_C = 2,
IWM_TSF_ID_D = 3,
IWM_NUM_TSF_IDS = 4,
}; /* IWM_TSF_ID_API_E_VER_1 */
/**
* struct iwm_mac_data_ap - configuration data for AP MAC context
* @beacon_time: beacon transmit time in system time
* @beacon_tsf: beacon transmit time in TSF
* @bi: beacon interval in TU
* @bi_reciprocal: 2^32 / bi
* @dtim_interval: dtim transmit time in TU
* @dtim_reciprocal: 2^32 / dtim_interval
* @mcast_qid: queue ID for multicast traffic
* @beacon_template: beacon template ID
*/
struct iwm_mac_data_ap {
uint32_t beacon_time;
uint64_t beacon_tsf;
uint32_t bi;
uint32_t bi_reciprocal;
uint32_t dtim_interval;
uint32_t dtim_reciprocal;
uint32_t mcast_qid;
uint32_t beacon_template;
} __packed; /* AP_MAC_DATA_API_S_VER_1 */
/**
* struct iwm_mac_data_ibss - configuration data for IBSS MAC context
* @beacon_time: beacon transmit time in system time
* @beacon_tsf: beacon transmit time in TSF
* @bi: beacon interval in TU
* @bi_reciprocal: 2^32 / bi
* @beacon_template: beacon template ID
*/
struct iwm_mac_data_ibss {
uint32_t beacon_time;
uint64_t beacon_tsf;
uint32_t bi;
uint32_t bi_reciprocal;
uint32_t beacon_template;
} __packed; /* IBSS_MAC_DATA_API_S_VER_1 */
/**
* struct iwm_mac_data_sta - configuration data for station MAC context
* @is_assoc: 1 for associated state, 0 otherwise
* @dtim_time: DTIM arrival time in system time
* @dtim_tsf: DTIM arrival time in TSF
* @bi: beacon interval in TU, applicable only when associated
* @bi_reciprocal: 2^32 / bi , applicable only when associated
* @dtim_interval: DTIM interval in TU, applicable only when associated
* @dtim_reciprocal: 2^32 / dtim_interval , applicable only when associated
* @listen_interval: in beacon intervals, applicable only when associated
* @assoc_id: unique ID assigned by the AP during association
*/
struct iwm_mac_data_sta {
uint32_t is_assoc;
uint32_t dtim_time;
uint64_t dtim_tsf;
uint32_t bi;
uint32_t bi_reciprocal;
uint32_t dtim_interval;
uint32_t dtim_reciprocal;
uint32_t listen_interval;
uint32_t assoc_id;
uint32_t assoc_beacon_arrive_time;
} __packed; /* IWM_STA_MAC_DATA_API_S_VER_1 */
/**
* struct iwm_mac_data_go - configuration data for P2P GO MAC context
* @ap: iwm_mac_data_ap struct with most config data
* @ctwin: client traffic window in TU (period after TBTT when GO is present).
* 0 indicates that there is no CT window.
* @opp_ps_enabled: indicates that opportunistic PS is allowed
*/
struct iwm_mac_data_go {
struct iwm_mac_data_ap ap;
uint32_t ctwin;
uint32_t opp_ps_enabled;
} __packed; /* GO_MAC_DATA_API_S_VER_1 */
/**
* struct iwm_mac_data_p2p_sta - configuration data for P2P client MAC context
* @sta: iwm_mac_data_sta struct with most config data
* @ctwin: client traffic window in TU (period after TBTT when GO is present).
* 0 indicates that there is no CT window.
*/
struct iwm_mac_data_p2p_sta {
struct iwm_mac_data_sta sta;
uint32_t ctwin;
} __packed; /* P2P_STA_MAC_DATA_API_S_VER_1 */
/**
* struct iwm_mac_data_pibss - Pseudo IBSS config data
* @stats_interval: interval in TU between statistics notifications to host.
*/
struct iwm_mac_data_pibss {
uint32_t stats_interval;
} __packed; /* PIBSS_MAC_DATA_API_S_VER_1 */
/*
* struct iwm_mac_data_p2p_dev - configuration data for the P2P Device MAC
* context.
* @is_disc_extended: if set to true, P2P Device discoverability is enabled on
* other channels as well. This should be set to true only when the
* device is discoverable and there is an active GO. Note that setting this
* field when not needed will increase the number of interrupts and
* affect platform power, as this setting opens the Rx filters on
* all MACs.
*/
struct iwm_mac_data_p2p_dev {
uint32_t is_disc_extended;
} __packed; /* _P2P_DEV_MAC_DATA_API_S_VER_1 */
/**
* enum iwm_mac_filter_flags - MAC context filter flags
* @IWM_MAC_FILTER_IN_PROMISC: accept all data frames
* @IWM_MAC_FILTER_IN_CONTROL_AND_MGMT: pass all management and
* control frames to the host
* @IWM_MAC_FILTER_ACCEPT_GRP: accept multicast frames
* @IWM_MAC_FILTER_DIS_DECRYPT: don't decrypt unicast frames
* @IWM_MAC_FILTER_DIS_GRP_DECRYPT: don't decrypt multicast frames
* @IWM_MAC_FILTER_IN_BEACON: transfer foreign BSS's beacons to host
* (in station mode when associated)
* @IWM_MAC_FILTER_OUT_BCAST: filter out all broadcast frames
* @IWM_MAC_FILTER_IN_CRC32: extract FCS and append it to frames
* @IWM_MAC_FILTER_IN_PROBE_REQUEST: pass probe requests to host
*/
enum iwm_mac_filter_flags {
IWM_MAC_FILTER_IN_PROMISC = (1 << 0),
IWM_MAC_FILTER_IN_CONTROL_AND_MGMT = (1 << 1),
IWM_MAC_FILTER_ACCEPT_GRP = (1 << 2),
IWM_MAC_FILTER_DIS_DECRYPT = (1 << 3),
IWM_MAC_FILTER_DIS_GRP_DECRYPT = (1 << 4),
IWM_MAC_FILTER_IN_BEACON = (1 << 6),
IWM_MAC_FILTER_OUT_BCAST = (1 << 8),
IWM_MAC_FILTER_IN_CRC32 = (1 << 11),
IWM_MAC_FILTER_IN_PROBE_REQUEST = (1 << 12),
};
/**
* enum iwm_mac_qos_flags - QoS flags
* @IWM_MAC_QOS_FLG_UPDATE_EDCA: ?
* @IWM_MAC_QOS_FLG_TGN: HT is enabled
* @IWM_MAC_QOS_FLG_TXOP_TYPE: ?
*
*/
enum iwm_mac_qos_flags {
IWM_MAC_QOS_FLG_UPDATE_EDCA = (1 << 0),
IWM_MAC_QOS_FLG_TGN = (1 << 1),
IWM_MAC_QOS_FLG_TXOP_TYPE = (1 << 4),
};
/**
* struct iwm_ac_qos - QOS timing params for IWM_MAC_CONTEXT_CMD
* @cw_min: Contention window, start value in numbers of slots.
* Should be a power-of-2, minus 1. Device's default is 0x0f.
* @cw_max: Contention window, max value in numbers of slots.
* Should be a power-of-2, minus 1. Device's default is 0x3f.
* @aifsn: Number of slots in Arbitration Interframe Space (before
* performing random backoff timing prior to Tx). Device default 1.
* @fifos_mask: FIFOs used by this MAC for this AC
* @edca_txop: Length of Tx opportunity, in uSecs. Device default is 0.
*
* One instance of this config struct for each of 4 EDCA access categories
* in struct iwm_qosparam_cmd.
*
* Device will automatically increase contention window to (2*CW) + 1 for each
* transmission retry. Device uses cw_max as a bit mask, ANDed with the new CW
* value, to cap the CW value.
*/
struct iwm_ac_qos {
uint16_t cw_min;
uint16_t cw_max;
uint8_t aifsn;
uint8_t fifos_mask;
uint16_t edca_txop;
} __packed; /* IWM_AC_QOS_API_S_VER_2 */
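/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * assumed): the retry behavior described above, expressed in C.
 * Doubling-plus-one keeps CW a power-of-2-minus-1, and ANDing with cw_max
 * (also a power-of-2-minus-1) caps it without a comparison, e.g.
 * ((2 * 15) + 1) & 0x3f == 31.
 */
static inline uint16_t
iwm_example_next_cw(uint16_t cw, uint16_t cw_max)
{
return ((2 * cw) + 1) & cw_max;
}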
/**
* struct iwm_mac_ctx_cmd - command structure to configure MAC contexts
* ( IWM_MAC_CONTEXT_CMD = 0x28 )
* @id_and_color: ID and color of the MAC
* @action: action to perform, one of IWM_FW_CTXT_ACTION_*
* @mac_type: one of IWM_FW_MAC_TYPE_*
* @tsf_id: TSF HW timer, one of IWM_TSF_ID_*
* @node_addr: MAC address
* @bssid_addr: BSSID
* @cck_rates: basic rates available for CCK
* @ofdm_rates: basic rates available for OFDM
* @protection_flags: combination of IWM_MAC_PROT_FLG_FLAG_*
* @cck_short_preamble: 0x20 for enabling short preamble, 0 otherwise
* @short_slot: 0x10 for enabling short slots, 0 otherwise
* @filter_flags: combination of IWM_MAC_FILTER_*
* @qos_flags: from IWM_MAC_QOS_FLG_*
* @ac: one iwm_mac_qos configuration for each AC
* @mac_specific: one of struct iwm_mac_data_*, according to mac_type
*/
struct iwm_mac_ctx_cmd {
/* COMMON_INDEX_HDR_API_S_VER_1 */
uint32_t id_and_color;
uint32_t action;
/* IWM_MAC_CONTEXT_COMMON_DATA_API_S_VER_1 */
uint32_t mac_type;
uint32_t tsf_id;
uint8_t node_addr[6];
uint16_t reserved_for_node_addr;
uint8_t bssid_addr[6];
uint16_t reserved_for_bssid_addr;
uint32_t cck_rates;
uint32_t ofdm_rates;
uint32_t protection_flags;
uint32_t cck_short_preamble;
uint32_t short_slot;
uint32_t filter_flags;
/* IWM_MAC_QOS_PARAM_API_S_VER_1 */
uint32_t qos_flags;
struct iwm_ac_qos ac[IWM_AC_NUM+1];
/* IWM_MAC_CONTEXT_COMMON_DATA_API_S */
union {
struct iwm_mac_data_ap ap;
struct iwm_mac_data_go go;
struct iwm_mac_data_sta sta;
struct iwm_mac_data_p2p_sta p2p_sta;
struct iwm_mac_data_p2p_dev p2p_dev;
struct iwm_mac_data_pibss pibss;
struct iwm_mac_data_ibss ibss;
};
} __packed; /* IWM_MAC_CONTEXT_CMD_API_S_VER_1 */
static inline uint32_t iwm_mvm_reciprocal(uint32_t v)
{
if (!v)
return 0;
return 0xFFFFFFFF / v;
}
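/*
 * Illustrative usage (an assumption, not from the firmware API docs): the
 * *_reciprocal fields documented above as "2^32 / bi" would be filled in
 * along the lines of:
 *
 *	cmd->bi = htole32(bi);
 *	cmd->bi_reciprocal = htole32(iwm_mvm_reciprocal(bi));
 */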
#define IWM_NONQOS_SEQ_GET 0x1
#define IWM_NONQOS_SEQ_SET 0x2
struct iwm_nonqos_seq_query_cmd {
uint32_t get_set_flag;
uint32_t mac_id_n_color;
uint16_t value;
uint16_t reserved;
} __packed; /* IWM_NON_QOS_TX_COUNTER_GET_SET_API_S_VER_1 */
/*
* END mvm/fw-api-mac.h
*/
/*
* BEGIN mvm/fw-api-power.h
*/
/* Power Management Commands, Responses, Notifications */
/* Radio LP RX Energy Threshold measured in dBm */
#define IWM_POWER_LPRX_RSSI_THRESHOLD 75
#define IWM_POWER_LPRX_RSSI_THRESHOLD_MAX 94
#define IWM_POWER_LPRX_RSSI_THRESHOLD_MIN 30
/**
* enum iwm_scan_flags - masks for power table command flags
* @IWM_POWER_FLAGS_POWER_SAVE_ENA_MSK: '1' Allow saving power by turning off
* receiver and transmitter. '0' - does not allow.
* @IWM_POWER_FLAGS_POWER_MANAGEMENT_ENA_MSK: '0' Driver disables power management,
* '1' Driver enables PM (use rest of parameters)
* @IWM_POWER_FLAGS_SKIP_OVER_DTIM_MSK: '0' PM has to wake up every DTIM,
* '1' PM may sleep over DTIM until the listen interval.
* @IWM_POWER_FLAGS_SNOOZE_ENA_MSK: Enable snoozing only if uAPSD is enabled and all
* access categories are both delivery and trigger enabled.
* @IWM_POWER_FLAGS_BT_SCO_ENA: Enable BT SCO coex only if uAPSD and
* PBW Snoozing enabled
* @IWM_POWER_FLAGS_ADVANCE_PM_ENA_MSK: Advanced PM (uAPSD) enable mask
* @IWM_POWER_FLAGS_LPRX_ENA_MSK: Low Power RX enable.
* @IWM_POWER_FLAGS_AP_UAPSD_MISBEHAVING_ENA_MSK: AP/GO's uAPSD misbehaving
* detection enablement
*/
enum iwm_power_flags {
IWM_POWER_FLAGS_POWER_SAVE_ENA_MSK = (1 << 0),
IWM_POWER_FLAGS_POWER_MANAGEMENT_ENA_MSK = (1 << 1),
IWM_POWER_FLAGS_SKIP_OVER_DTIM_MSK = (1 << 2),
IWM_POWER_FLAGS_SNOOZE_ENA_MSK = (1 << 5),
IWM_POWER_FLAGS_BT_SCO_ENA = (1 << 8),
IWM_POWER_FLAGS_ADVANCE_PM_ENA_MSK = (1 << 9),
IWM_POWER_FLAGS_LPRX_ENA_MSK = (1 << 11),
IWM_POWER_FLAGS_UAPSD_MISBEHAVING_ENA_MSK = (1 << 12),
};
#define IWM_POWER_VEC_SIZE 5
/**
* struct iwm_powertable_cmd - legacy power command. Besides old API support,
* this is also used with the new power API for device-wide power settings.
* IWM_POWER_TABLE_CMD = 0x77 (command, has simple generic response)
*
* @flags: Power table command flags from IWM_POWER_FLAGS_*
* @keep_alive_seconds: Keep alive period in seconds. Default - 25 sec.
* Minimum allowed: 3 * DTIM. Keep alive period must be
* set regardless of power scheme or current power state.
* FW uses this value also when PM is disabled.
* @rx_data_timeout: Minimum time (usec) from last Rx packet for AM to
* PSM transition - legacy PM
* @tx_data_timeout: Minimum time (usec) from last Tx packet for AM to
* PSM transition - legacy PM
* @sleep_interval: not in use
* @skip_dtim_periods: Number of DTIM periods to skip if Skip over DTIM flag
* is set. For example, if it is required to skip over
* one DTIM, this value needs to be set to 2 (DTIM periods).
* @lprx_rssi_threshold: Signal strength up to which LP RX can be enabled.
* Default: 80 dBm
*/
struct iwm_powertable_cmd {
/* PM_POWER_TABLE_CMD_API_S_VER_6 */
uint16_t flags;
uint8_t keep_alive_seconds;
uint8_t debug_flags;
uint32_t rx_data_timeout;
uint32_t tx_data_timeout;
uint32_t sleep_interval[IWM_POWER_VEC_SIZE];
uint32_t skip_dtim_periods;
uint32_t lprx_rssi_threshold;
} __packed;
/**
* enum iwm_device_power_flags - masks for device power command flags
* @IWM_DEVICE_POWER_FLAGS_POWER_SAVE_ENA_MSK: '1' Allow saving power by turning
* off receiver and transmitter. '0' - does not allow. This flag should be
* always set to '1' unless one needs to disable actual power down for debug
* purposes.
* @IWM_DEVICE_POWER_FLAGS_CAM_MSK: '1' CAM (Continuous Active Mode) is set, meaning
* that power management is disabled. '0' Power management is enabled, one
* of power schemes is applied.
*/
enum iwm_device_power_flags {
IWM_DEVICE_POWER_FLAGS_POWER_SAVE_ENA_MSK = (1 << 0),
IWM_DEVICE_POWER_FLAGS_CAM_MSK = (1 << 13),
};
/**
* struct iwm_device_power_cmd - device wide power command.
* IWM_DEVICE_POWER_CMD = 0x77 (command, has simple generic response)
*
* @flags: Power table command flags from IWM_DEVICE_POWER_FLAGS_*
*/
struct iwm_device_power_cmd {
/* PM_POWER_TABLE_CMD_API_S_VER_6 */
uint16_t flags;
uint16_t reserved;
} __packed;
/**
* struct iwm_mac_power_cmd - New power command containing uAPSD support
* IWM_MAC_PM_POWER_TABLE = 0xA9 (command, has simple generic response)
* @id_and_color: MAC context identifier
* @flags: Power table command flags from POWER_FLAGS_*
* @keep_alive_seconds: Keep alive period in seconds. Default - 25 sec.
* Minimum allowed: 3 * DTIM. Keep alive period must be
* set regardless of power scheme or current power state.
* FW uses this value also when PM is disabled.
* @rx_data_timeout: Minimum time (usec) from last Rx packet for AM to
* PSM transition - legacy PM
* @tx_data_timeout: Minimum time (usec) from last Tx packet for AM to
* PSM transition - legacy PM
* @sleep_interval: not in use
* @skip_dtim_periods: Number of DTIM periods to skip if Skip over DTIM flag
* is set. For example, if it is required to skip over
* one DTIM, this value needs to be set to 2 (DTIM periods).
* @rx_data_timeout_uapsd: Minimum time (usec) from last Rx packet for AM to
* PSM transition - uAPSD
* @tx_data_timeout_uapsd: Minimum time (usec) from last Tx packet for AM to
* PSM transition - uAPSD
* @lprx_rssi_threshold: Signal strength up to which LP RX can be enabled.
* Default: 80 dBm
* @num_skip_dtim: Number of DTIMs to skip if Skip over DTIM flag is set
* @snooze_interval: Maximum time between attempts to retrieve buffered data
* from the AP [msec]
* @snooze_window: A window of time in which PBW snoozing ensures that all
* packets are received. It is also the minimum time from last
* received unicast RX packet, before client stops snoozing
* for data. [msec]
* @snooze_step: TBD
* @qndp_tid: TID client shall use for uAPSD QNDP triggers
* @uapsd_ac_flags: Set trigger-enabled and delivery-enabled indication for
* each corresponding AC.
* Use IEEE80211_WMM_IE_STA_QOSINFO_AC* for correct values.
* @uapsd_max_sp: Use IEEE80211_WMM_IE_STA_QOSINFO_SP_* for correct
* values.
* @heavy_tx_thld_packets: TX threshold measured in number of packets
* @heavy_rx_thld_packets: RX threshold measured in number of packets
* @heavy_tx_thld_percentage: TX threshold measured in load's percentage
* @heavy_rx_thld_percentage: RX threshold measured in load's percentage
* @limited_ps_threshold:
*/
struct iwm_mac_power_cmd {
/* CONTEXT_DESC_API_T_VER_1 */
uint32_t id_and_color;
/* CLIENT_PM_POWER_TABLE_S_VER_1 */
uint16_t flags;
uint16_t keep_alive_seconds;
uint32_t rx_data_timeout;
uint32_t tx_data_timeout;
uint32_t rx_data_timeout_uapsd;
uint32_t tx_data_timeout_uapsd;
uint8_t lprx_rssi_threshold;
uint8_t skip_dtim_periods;
uint16_t snooze_interval;
uint16_t snooze_window;
uint8_t snooze_step;
uint8_t qndp_tid;
uint8_t uapsd_ac_flags;
uint8_t uapsd_max_sp;
uint8_t heavy_tx_thld_packets;
uint8_t heavy_rx_thld_packets;
uint8_t heavy_tx_thld_percentage;
uint8_t heavy_rx_thld_percentage;
uint8_t limited_ps_threshold;
uint8_t reserved;
} __packed;
/*
* struct iwm_uapsd_misbehaving_ap_notif - FW sends this notification when the
* associated AP is identified as improperly implementing the uAPSD protocol.
* IWM_PSM_UAPSD_AP_MISBEHAVING_NOTIFICATION = 0x78
* @sta_id: index of station in uCode's station table - associated AP ID in
* this context.
*/
struct iwm_uapsd_misbehaving_ap_notif {
uint32_t sta_id;
uint8_t mac_id;
uint8_t reserved[3];
} __packed;
/**
* struct iwm_beacon_filter_cmd
* IWM_REPLY_BEACON_FILTERING_CMD = 0xd2 (command)
* @id_and_color: MAC context identifier
* @bf_energy_delta: Used for RSSI filtering, if in 'normal' state. Send beacon
* to driver if delta in Energy values calculated for this and last
* passed beacon is greater than this threshold. Zero value means that
* the Energy change is ignored for beacon filtering, and beacon will
* not be forced to be sent to driver regardless of this delta. Typical
* energy delta 5dB.
* @bf_roaming_energy_delta: Used for RSSI filtering, if in 'roaming' state.
* Send beacon to driver if delta in Energy values calculated for this
* and last passed beacon is greater than this threshold. Zero value
* means that the Energy change is ignored for beacon filtering while in
* Roaming state, typical energy delta 1dB.
* @bf_roaming_state: Used for RSSI filtering. If absolute Energy values
* calculated for current beacon is less than the threshold, use
* Roaming Energy Delta Threshold, otherwise use normal Energy Delta
* Threshold. Typical energy threshold is -72dBm.
* @bf_temp_threshold: This threshold determines the type of temperature
* filtering (Slow or Fast) that is selected (units are in Celsius):
* If the current temperature is above this threshold - Fast filter
* will be used; if the current temperature is below this threshold -
* Slow filter will be used.
* @bf_temp_fast_filter: Send Beacon to driver if delta in temperature values
* calculated for this and the last passed beacon is greater than this
* threshold. Zero value means that the temperature change is ignored for
* beacon filtering; beacons will not be forced to be sent to driver
* regardless of whether its temperature has been changed.
* @bf_temp_slow_filter: Send Beacon to driver if delta in temperature values
* calculated for this and the last passed beacon is greater than this
* threshold. Zero value means that the temperature change is ignored for
* beacon filtering; beacons will not be forced to be sent to driver
* regardless of whether its temperature has been changed.
* @bf_enable_beacon_filter: 1, beacon filtering is enabled; 0, disabled.
* @bf_escape_timer: Send beacons to driver if no beacons were passed
* for a specific period of time. Units: Beacons.
* @ba_escape_timer: Fully receive and parse beacon if no beacons were passed
* for a longer period of time than this escape-timeout. Units: Beacons.
* @ba_enable_beacon_abort: 1, beacon abort is enabled; 0, disabled.
*/
struct iwm_beacon_filter_cmd {
uint32_t bf_energy_delta;
uint32_t bf_roaming_energy_delta;
uint32_t bf_roaming_state;
uint32_t bf_temp_threshold;
uint32_t bf_temp_fast_filter;
uint32_t bf_temp_slow_filter;
uint32_t bf_enable_beacon_filter;
uint32_t bf_debug_flag;
uint32_t bf_escape_timer;
uint32_t ba_escape_timer;
uint32_t ba_enable_beacon_abort;
} __packed;
/* Beacon filtering and beacon abort */
#define IWM_BF_ENERGY_DELTA_DEFAULT 5
#define IWM_BF_ENERGY_DELTA_MAX 255
#define IWM_BF_ENERGY_DELTA_MIN 0
#define IWM_BF_ROAMING_ENERGY_DELTA_DEFAULT 1
#define IWM_BF_ROAMING_ENERGY_DELTA_MAX 255
#define IWM_BF_ROAMING_ENERGY_DELTA_MIN 0
#define IWM_BF_ROAMING_STATE_DEFAULT 72
#define IWM_BF_ROAMING_STATE_MAX 255
#define IWM_BF_ROAMING_STATE_MIN 0
#define IWM_BF_TEMP_THRESHOLD_DEFAULT 112
#define IWM_BF_TEMP_THRESHOLD_MAX 255
#define IWM_BF_TEMP_THRESHOLD_MIN 0
#define IWM_BF_TEMP_FAST_FILTER_DEFAULT 1
#define IWM_BF_TEMP_FAST_FILTER_MAX 255
#define IWM_BF_TEMP_FAST_FILTER_MIN 0
#define IWM_BF_TEMP_SLOW_FILTER_DEFAULT 5
#define IWM_BF_TEMP_SLOW_FILTER_MAX 255
#define IWM_BF_TEMP_SLOW_FILTER_MIN 0
#define IWM_BF_ENABLE_BEACON_FILTER_DEFAULT 1
#define IWM_BF_DEBUG_FLAG_DEFAULT 0
#define IWM_BF_ESCAPE_TIMER_DEFAULT 50
#define IWM_BF_ESCAPE_TIMER_MAX 1024
#define IWM_BF_ESCAPE_TIMER_MIN 0
#define IWM_BA_ESCAPE_TIMER_DEFAULT 6
#define IWM_BA_ESCAPE_TIMER_D3 9
#define IWM_BA_ESCAPE_TIMER_MAX 1024
#define IWM_BA_ESCAPE_TIMER_MIN 0
#define IWM_BA_ENABLE_BEACON_ABORT_DEFAULT 1
#define IWM_BF_CMD_CONFIG_DEFAULTS \
.bf_energy_delta = htole32(IWM_BF_ENERGY_DELTA_DEFAULT), \
.bf_roaming_energy_delta = \
htole32(IWM_BF_ROAMING_ENERGY_DELTA_DEFAULT), \
.bf_roaming_state = htole32(IWM_BF_ROAMING_STATE_DEFAULT), \
.bf_temp_threshold = htole32(IWM_BF_TEMP_THRESHOLD_DEFAULT), \
.bf_temp_fast_filter = htole32(IWM_BF_TEMP_FAST_FILTER_DEFAULT), \
.bf_temp_slow_filter = htole32(IWM_BF_TEMP_SLOW_FILTER_DEFAULT), \
.bf_debug_flag = htole32(IWM_BF_DEBUG_FLAG_DEFAULT), \
.bf_escape_timer = htole32(IWM_BF_ESCAPE_TIMER_DEFAULT), \
.ba_escape_timer = htole32(IWM_BA_ESCAPE_TIMER_DEFAULT)
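/*
 * Illustrative usage (an assumption about the intended use of the
 * designated-initializer macro above): start from the defaults, then
 * override individual fields, e.g. to enable filtering:
 *
 *	struct iwm_beacon_filter_cmd bf_cmd = {
 *		IWM_BF_CMD_CONFIG_DEFAULTS,
 *		.bf_enable_beacon_filter = htole32(1),
 *	};
 */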
/*
* END mvm/fw-api-power.h
*/
/*
* BEGIN mvm/fw-api-rs.h
*/
/*
* These serve as indexes into
* struct iwm_rate_info fw_rate_idx_to_plcp[IWM_RATE_COUNT];
* TODO: avoid overlap between legacy and HT rates
*/
enum {
IWM_RATE_1M_INDEX = 0,
IWM_FIRST_CCK_RATE = IWM_RATE_1M_INDEX,
IWM_RATE_2M_INDEX,
IWM_RATE_5M_INDEX,
IWM_RATE_11M_INDEX,
IWM_LAST_CCK_RATE = IWM_RATE_11M_INDEX,
IWM_RATE_6M_INDEX,
IWM_FIRST_OFDM_RATE = IWM_RATE_6M_INDEX,
IWM_RATE_MCS_0_INDEX = IWM_RATE_6M_INDEX,
IWM_FIRST_HT_RATE = IWM_RATE_MCS_0_INDEX,
IWM_FIRST_VHT_RATE = IWM_RATE_MCS_0_INDEX,
IWM_RATE_9M_INDEX,
IWM_RATE_12M_INDEX,
IWM_RATE_MCS_1_INDEX = IWM_RATE_12M_INDEX,
IWM_RATE_18M_INDEX,
IWM_RATE_MCS_2_INDEX = IWM_RATE_18M_INDEX,
IWM_RATE_24M_INDEX,
IWM_RATE_MCS_3_INDEX = IWM_RATE_24M_INDEX,
IWM_RATE_36M_INDEX,
IWM_RATE_MCS_4_INDEX = IWM_RATE_36M_INDEX,
IWM_RATE_48M_INDEX,
IWM_RATE_MCS_5_INDEX = IWM_RATE_48M_INDEX,
IWM_RATE_54M_INDEX,
IWM_RATE_MCS_6_INDEX = IWM_RATE_54M_INDEX,
IWM_LAST_NON_HT_RATE = IWM_RATE_54M_INDEX,
IWM_RATE_60M_INDEX,
IWM_RATE_MCS_7_INDEX = IWM_RATE_60M_INDEX,
IWM_LAST_HT_RATE = IWM_RATE_MCS_7_INDEX,
IWM_RATE_MCS_8_INDEX,
IWM_RATE_MCS_9_INDEX,
IWM_LAST_VHT_RATE = IWM_RATE_MCS_9_INDEX,
IWM_RATE_COUNT_LEGACY = IWM_LAST_NON_HT_RATE + 1,
IWM_RATE_COUNT = IWM_LAST_VHT_RATE + 1,
};
#define IWM_RATE_BIT_MSK(r) (1 << (IWM_RATE_##r##M_INDEX))
/* fw API values for legacy bit rates, both OFDM and CCK */
enum {
IWM_RATE_6M_PLCP = 13,
IWM_RATE_9M_PLCP = 15,
IWM_RATE_12M_PLCP = 5,
IWM_RATE_18M_PLCP = 7,
IWM_RATE_24M_PLCP = 9,
IWM_RATE_36M_PLCP = 11,
IWM_RATE_48M_PLCP = 1,
IWM_RATE_54M_PLCP = 3,
IWM_RATE_1M_PLCP = 10,
IWM_RATE_2M_PLCP = 20,
IWM_RATE_5M_PLCP = 55,
IWM_RATE_11M_PLCP = 110,
IWM_RATE_INVM_PLCP = -1,
};
/*
* rate_n_flags bit fields
*
* The 32-bit value has different layouts in the low 8 bits depending on the
* format. There are three formats, HT, VHT and legacy (11abg, with subformats
* for CCK and OFDM).
*
* High-throughput (HT) rate format
* bit 8 is 1, bit 26 is 0, bit 9 is 0 (OFDM)
* Very High-throughput (VHT) rate format
* bit 8 is 0, bit 26 is 1, bit 9 is 0 (OFDM)
* Legacy OFDM rate format for bits 7:0
* bit 8 is 0, bit 26 is 0, bit 9 is 0 (OFDM)
* Legacy CCK rate format for bits 7:0:
* bit 8 is 0, bit 26 is 0, bit 9 is 1 (CCK)
*/
/* Bit 8: (1) HT format, (0) legacy or VHT format */
#define IWM_RATE_MCS_HT_POS 8
#define IWM_RATE_MCS_HT_MSK (1 << IWM_RATE_MCS_HT_POS)
/* Bit 9: (1) CCK, (0) OFDM. HT (bit 8) must be "0" for this bit to be valid */
#define IWM_RATE_MCS_CCK_POS 9
#define IWM_RATE_MCS_CCK_MSK (1 << IWM_RATE_MCS_CCK_POS)
/* Bit 26: (1) VHT format, (0) legacy format in bits 8:0 */
#define IWM_RATE_MCS_VHT_POS 26
#define IWM_RATE_MCS_VHT_MSK (1 << IWM_RATE_MCS_VHT_POS)
/*
* High-throughput (HT) rate format for bits 7:0
*
* 2-0: MCS rate base
* 0) 6 Mbps
* 1) 12 Mbps
* 2) 18 Mbps
* 3) 24 Mbps
* 4) 36 Mbps
* 5) 48 Mbps
* 6) 54 Mbps
* 7) 60 Mbps
* 4-3: 0) Single stream (SISO)
* 1) Dual stream (MIMO)
* 2) Triple stream (MIMO)
* 5: Value of 0x20 in bits 7:0 indicates 6 Mbps HT40 duplicate data
* (bits 7-6 are zero)
*
* Together the low 5 bits work out to the MCS index because we don't
* support MCSes above 15/23, and 0-7 have one stream, 8-15 have two
* streams and 16-23 have three streams. We could also support MCS 32
* which is the duplicate 20 MHz MCS (bit 5 set, all others zero.)
*/
#define IWM_RATE_HT_MCS_RATE_CODE_MSK 0x7
#define IWM_RATE_HT_MCS_NSS_POS 3
#define IWM_RATE_HT_MCS_NSS_MSK (3 << IWM_RATE_HT_MCS_NSS_POS)
/* Bit 10: (1) Use Green Field preamble */
#define IWM_RATE_HT_MCS_GF_POS 10
#define IWM_RATE_HT_MCS_GF_MSK (1 << IWM_RATE_HT_MCS_GF_POS)
#define IWM_RATE_HT_MCS_INDEX_MSK 0x3f
/*
* Very High-throughput (VHT) rate format for bits 7:0
*
* 3-0: VHT MCS (0-9)
* 5-4: number of streams - 1:
* 0) Single stream (SISO)
* 1) Dual stream (MIMO)
* 2) Triple stream (MIMO)
*/
/* Bit 4-5: (0) SISO, (1) MIMO2 (2) MIMO3 */
#define IWM_RATE_VHT_MCS_RATE_CODE_MSK 0xf
#define IWM_RATE_VHT_MCS_NSS_POS 4
#define IWM_RATE_VHT_MCS_NSS_MSK (3 << IWM_RATE_VHT_MCS_NSS_POS)
/*
* Legacy OFDM rate format for bits 7:0
*
* 3-0: 0xD) 6 Mbps
* 0xF) 9 Mbps
* 0x5) 12 Mbps
* 0x7) 18 Mbps
* 0x9) 24 Mbps
* 0xB) 36 Mbps
* 0x1) 48 Mbps
* 0x3) 54 Mbps
* (bits 7-4 are 0)
*
* Legacy CCK rate format for bits 7:0:
* bit 8 is 0, bit 26 is 0, bit 9 is 1 (CCK):
*
* 6-0: 10) 1 Mbps
* 20) 2 Mbps
* 55) 5.5 Mbps
* 110) 11 Mbps
* (bit 7 is 0)
*/
#define IWM_RATE_LEGACY_RATE_MSK 0xff
/*
* Bit 11-12: (0) 20MHz, (1) 40MHz, (2) 80MHz, (3) 160MHz
* 0 and 1 are valid for HT and VHT, 2 and 3 only for VHT
*/
#define IWM_RATE_MCS_CHAN_WIDTH_POS 11
#define IWM_RATE_MCS_CHAN_WIDTH_MSK (3 << IWM_RATE_MCS_CHAN_WIDTH_POS)
#define IWM_RATE_MCS_CHAN_WIDTH_20 (0 << IWM_RATE_MCS_CHAN_WIDTH_POS)
#define IWM_RATE_MCS_CHAN_WIDTH_40 (1 << IWM_RATE_MCS_CHAN_WIDTH_POS)
#define IWM_RATE_MCS_CHAN_WIDTH_80 (2 << IWM_RATE_MCS_CHAN_WIDTH_POS)
#define IWM_RATE_MCS_CHAN_WIDTH_160 (3 << IWM_RATE_MCS_CHAN_WIDTH_POS)
/* Bit 13: (1) Short guard interval (0.4 usec), (0) normal GI (0.8 usec) */
#define IWM_RATE_MCS_SGI_POS 13
#define IWM_RATE_MCS_SGI_MSK (1 << IWM_RATE_MCS_SGI_POS)
/* Bit 14-16: Antenna selection (1) Ant A, (2) Ant B, (4) Ant C */
#define IWM_RATE_MCS_ANT_POS 14
#define IWM_RATE_MCS_ANT_A_MSK (1 << IWM_RATE_MCS_ANT_POS)
#define IWM_RATE_MCS_ANT_B_MSK (2 << IWM_RATE_MCS_ANT_POS)
#define IWM_RATE_MCS_ANT_C_MSK (4 << IWM_RATE_MCS_ANT_POS)
#define IWM_RATE_MCS_ANT_AB_MSK (IWM_RATE_MCS_ANT_A_MSK | \
IWM_RATE_MCS_ANT_B_MSK)
#define IWM_RATE_MCS_ANT_ABC_MSK (IWM_RATE_MCS_ANT_AB_MSK | \
IWM_RATE_MCS_ANT_C_MSK)
#define IWM_RATE_MCS_ANT_MSK IWM_RATE_MCS_ANT_ABC_MSK
#define IWM_RATE_MCS_ANT_NUM 3
/* Bit 17-18: (0) SS, (1) SS*2 */
#define IWM_RATE_MCS_STBC_POS 17
#define IWM_RATE_MCS_STBC_MSK (1 << IWM_RATE_MCS_STBC_POS)
/* Bit 19: (0) Beamforming is off, (1) Beamforming is on */
#define IWM_RATE_MCS_BF_POS 19
#define IWM_RATE_MCS_BF_MSK (1 << IWM_RATE_MCS_BF_POS)
/* Bit 20: (0) ZLF is off, (1) ZLF is on */
#define IWM_RATE_MCS_ZLF_POS 20
#define IWM_RATE_MCS_ZLF_MSK (1 << IWM_RATE_MCS_ZLF_POS)
/* Bit 24-25: (0) 20MHz (no dup), (1) 2x20MHz, (2) 4x20MHz, (3) 8x20MHz */
#define IWM_RATE_MCS_DUP_POS 24
#define IWM_RATE_MCS_DUP_MSK (3 << IWM_RATE_MCS_DUP_POS)
/* Bit 27: (1) LDPC enabled, (0) LDPC disabled */
#define IWM_RATE_MCS_LDPC_POS 27
#define IWM_RATE_MCS_LDPC_MSK (1 << IWM_RATE_MCS_LDPC_POS)
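/*
 * Illustrative sketch, not part of the firmware API (the helper name is
 * assumed): composing a rate_n_flags value for HT MCS 7 at 40MHz with a
 * short guard interval on antenna A. The MCS index occupies the low bits
 * and the HT format bit selects the HT layout described above.
 */
static inline uint32_t
iwm_example_rate_n_flags_ht_mcs7(void)
{
return IWM_RATE_MCS_HT_MSK |
(7 & IWM_RATE_HT_MCS_INDEX_MSK) |
IWM_RATE_MCS_CHAN_WIDTH_40 |
IWM_RATE_MCS_SGI_MSK |
IWM_RATE_MCS_ANT_A_MSK;
}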
/* Link Quality definitions */
/* # entries in rate scale table to support Tx retries */
#define IWM_LQ_MAX_RETRY_NUM 16
/* Link quality command flags bit fields */
/* Bit 0: (0) Don't use RTS (1) Use RTS */
#define IWM_LQ_FLAG_USE_RTS_POS 0
#define IWM_LQ_FLAG_USE_RTS_MSK (1 << IWM_LQ_FLAG_USE_RTS_POS)
/* Bit 1-3: LQ command color. Used to match responses to LQ commands */
#define IWM_LQ_FLAG_COLOR_POS 1
#define IWM_LQ_FLAG_COLOR_MSK (7 << IWM_LQ_FLAG_COLOR_POS)
/* Bit 4-5: Tx RTS BW Signalling
* (0) No RTS BW signalling
* (1) Static BW signalling
* (2) Dynamic BW signalling
*/
#define IWM_LQ_FLAG_RTS_BW_SIG_POS 4
#define IWM_LQ_FLAG_RTS_BW_SIG_NONE (0 << IWM_LQ_FLAG_RTS_BW_SIG_POS)
#define IWM_LQ_FLAG_RTS_BW_SIG_STATIC (1 << IWM_LQ_FLAG_RTS_BW_SIG_POS)
#define IWM_LQ_FLAG_RTS_BW_SIG_DYNAMIC (2 << IWM_LQ_FLAG_RTS_BW_SIG_POS)
/* Bit 6: (0) No dynamic BW selection (1) Allow dynamic BW selection
* Dynamic BW selection allows Tx with narrower BW than requested in rates
*/
#define IWM_LQ_FLAG_DYNAMIC_BW_POS 6
#define IWM_LQ_FLAG_DYNAMIC_BW_MSK (1 << IWM_LQ_FLAG_DYNAMIC_BW_POS)
/**
* struct iwm_lq_cmd - link quality command
* @sta_id: station to update
* @control: not used
* @flags: combination of IWM_LQ_FLAG_*
* @mimo_delim: the first SISO index in rs_table, which separates MIMO
* and SISO rates
* @single_stream_ant_msk: best antenna for SISO (can be dual in CDD).
* Should be ANT_[ABC]
* @dual_stream_ant_msk: best antennas for MIMO, combination of ANT_[ABC]
* @initial_rate_index: first index from rs_table per AC category
* @agg_time_limit: aggregation max time threshold in usec/100, meaning
* value of 100 is one usec. Range is 100 to 8000
* @agg_disable_start_th: try-count threshold for starting aggregation.
* If a frame has higher try-count, it should not be selected for
* starting an aggregation sequence.
* @agg_frame_cnt_limit: max frame count in an aggregation.
* 0: no limit
* 1: no aggregation (one frame per aggregation)
* 2 - 0x3f: maximal number of frames (up to 3f == 63)
* @rs_table: array of rates for each TX try, each is rate_n_flags,
* meaning it is a combination of IWM_RATE_MCS_* and IWM_RATE_*_PLCP
* @bf_params: beam forming params, currently not used
*/
struct iwm_lq_cmd {
uint8_t sta_id;
uint8_t reserved1;
uint16_t control;
/* LINK_QUAL_GENERAL_PARAMS_API_S_VER_1 */
uint8_t flags;
uint8_t mimo_delim;
uint8_t single_stream_ant_msk;
uint8_t dual_stream_ant_msk;
uint8_t initial_rate_index[IWM_AC_NUM];
/* LINK_QUAL_AGG_PARAMS_API_S_VER_1 */
uint16_t agg_time_limit;
uint8_t agg_disable_start_th;
uint8_t agg_frame_cnt_limit;
uint32_t reserved2;
uint32_t rs_table[IWM_LQ_MAX_RETRY_NUM];
uint32_t bf_params;
}; /* LINK_QUALITY_CMD_API_S_VER_1 */
/*
* END mvm/fw-api-rs.h
*/
/*
* BEGIN mvm/fw-api-tx.h
*/
/**
* enum iwm_tx_flags - bitmasks for tx_flags in TX command
* @IWM_TX_CMD_FLG_PROT_REQUIRE: use RTS or CTS-to-self to protect the frame
* @IWM_TX_CMD_FLG_ACK: expect ACK from receiving station
* @IWM_TX_CMD_FLG_STA_RATE: use RS table with initial index from the TX command.
* Otherwise, use rate_n_flags from the TX command
* @IWM_TX_CMD_FLG_BA: this frame is a block ack
* @IWM_TX_CMD_FLG_BAR: this frame is a BA request, immediate BAR is expected
* Must set IWM_TX_CMD_FLG_ACK with this flag.
* @IWM_TX_CMD_FLG_TXOP_PROT: protect frame with full TXOP protection
* @IWM_TX_CMD_FLG_VHT_NDPA: mark frame is NDPA for VHT beamformer sequence
* @IWM_TX_CMD_FLG_HT_NDPA: mark frame is NDPA for HT beamformer sequence
* @IWM_TX_CMD_FLG_CSI_FDBK2HOST: mark to send feedback to host (only if good CRC)
* @IWM_TX_CMD_FLG_BT_DIS: disable BT priority for this frame
* @IWM_TX_CMD_FLG_SEQ_CTL: set if FW should override the sequence control.
* Should be set for mgmt, non-QOS data, mcast, bcast and in scan command
* @IWM_TX_CMD_FLG_MORE_FRAG: this frame is non-last MPDU
* @IWM_TX_CMD_FLG_NEXT_FRAME: this frame includes information of the next frame
* @IWM_TX_CMD_FLG_TSF: FW should calculate and insert TSF in the frame
* Should be set for beacons and probe responses
* @IWM_TX_CMD_FLG_CALIB: activate PA TX power calibrations
* @IWM_TX_CMD_FLG_KEEP_SEQ_CTL: if seq_ctl is set, don't increase inner seq count
* @IWM_TX_CMD_FLG_AGG_START: allow this frame to start aggregation
* @IWM_TX_CMD_FLG_MH_PAD: driver inserted 2 byte padding after MAC header.
* Should be set for 26/30 length MAC headers
* @IWM_TX_CMD_FLG_RESP_TO_DRV: zero this if the response should go only to FW
* @IWM_TX_CMD_FLG_CCMP_AGG: this frame uses CCMP for aggregation acceleration
* @IWM_TX_CMD_FLG_TKIP_MIC_DONE: FW already performed TKIP MIC calculation
* @IWM_TX_CMD_FLG_DUR: disable duration overwriting used in PS-Poll Assoc-id
* @IWM_TX_CMD_FLG_FW_DROP: FW should mark frame to be dropped
* @IWM_TX_CMD_FLG_EXEC_PAPD: execute PAPD
* @IWM_TX_CMD_FLG_PAPD_TYPE: 0 for reference power, 1 for nominal power
* @IWM_TX_CMD_FLG_HCCA_CHUNK: mark start of TSPEC chunk
*/
enum iwm_tx_flags {
IWM_TX_CMD_FLG_PROT_REQUIRE = (1 << 0),
IWM_TX_CMD_FLG_ACK = (1 << 3),
IWM_TX_CMD_FLG_STA_RATE = (1 << 4),
IWM_TX_CMD_FLG_BA = (1 << 5),
IWM_TX_CMD_FLG_BAR = (1 << 6),
IWM_TX_CMD_FLG_TXOP_PROT = (1 << 7),
IWM_TX_CMD_FLG_VHT_NDPA = (1 << 8),
IWM_TX_CMD_FLG_HT_NDPA = (1 << 9),
IWM_TX_CMD_FLG_CSI_FDBK2HOST = (1 << 10),
IWM_TX_CMD_FLG_BT_DIS = (1 << 12),
IWM_TX_CMD_FLG_SEQ_CTL = (1 << 13),
IWM_TX_CMD_FLG_MORE_FRAG = (1 << 14),
IWM_TX_CMD_FLG_NEXT_FRAME = (1 << 15),
IWM_TX_CMD_FLG_TSF = (1 << 16),
IWM_TX_CMD_FLG_CALIB = (1 << 17),
IWM_TX_CMD_FLG_KEEP_SEQ_CTL = (1 << 18),
IWM_TX_CMD_FLG_AGG_START = (1 << 19),
IWM_TX_CMD_FLG_MH_PAD = (1 << 20),
IWM_TX_CMD_FLG_RESP_TO_DRV = (1 << 21),
IWM_TX_CMD_FLG_CCMP_AGG = (1 << 22),
IWM_TX_CMD_FLG_TKIP_MIC_DONE = (1 << 23),
IWM_TX_CMD_FLG_DUR = (1 << 25),
IWM_TX_CMD_FLG_FW_DROP = (1 << 26),
IWM_TX_CMD_FLG_EXEC_PAPD = (1 << 27),
IWM_TX_CMD_FLG_PAPD_TYPE = (1 << 28),
IWM_TX_CMD_FLG_HCCA_CHUNK = (1 << 31)
}; /* IWM_TX_FLAGS_BITS_API_S_VER_1 */
/*
* TX command security control
*/
#define IWM_TX_CMD_SEC_WEP 0x01
#define IWM_TX_CMD_SEC_CCM 0x02
#define IWM_TX_CMD_SEC_TKIP 0x03
#define IWM_TX_CMD_SEC_EXT 0x04
#define IWM_TX_CMD_SEC_MSK 0x07
#define IWM_TX_CMD_SEC_WEP_KEY_IDX_POS 6
#define IWM_TX_CMD_SEC_WEP_KEY_IDX_MSK 0xc0
#define IWM_TX_CMD_SEC_KEY128 0x08
/* TODO: how are these values OK with only a 16-bit variable??? */
/*
* TX command next frame info
*
* bits 0:2 - security control (IWM_TX_CMD_SEC_*)
* bit 3 - immediate ACK required
* bit 4 - rate is taken from STA table
* bit 5 - frame belongs to BA stream
* bit 6 - immediate BA response expected
* bit 7 - unused
* bits 8:15 - Station ID
* bits 16:31 - rate
*/
#define IWM_TX_CMD_NEXT_FRAME_ACK_MSK (0x8)
#define IWM_TX_CMD_NEXT_FRAME_STA_RATE_MSK (0x10)
#define IWM_TX_CMD_NEXT_FRAME_BA_MSK (0x20)
#define IWM_TX_CMD_NEXT_FRAME_IMM_BA_RSP_MSK (0x40)
#define IWM_TX_CMD_NEXT_FRAME_FLAGS_MSK (0xf8)
#define IWM_TX_CMD_NEXT_FRAME_STA_ID_MSK (0xff00)
#define IWM_TX_CMD_NEXT_FRAME_STA_ID_POS (8)
#define IWM_TX_CMD_NEXT_FRAME_RATE_MSK (0xffff0000)
#define IWM_TX_CMD_NEXT_FRAME_RATE_POS (16)
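/*
 * Illustrative sketch (not part of the original header): decoding the
 * 32-bit next-frame word laid out in the comment above. The helper
 * names are hypothetical.
 */
static inline uint8_t
iwm_example_next_frame_sta_id(uint32_t nf)
{
	return (nf & IWM_TX_CMD_NEXT_FRAME_STA_ID_MSK) >>
	    IWM_TX_CMD_NEXT_FRAME_STA_ID_POS;
}

static inline uint16_t
iwm_example_next_frame_rate(uint32_t nf)
{
	return (nf & IWM_TX_CMD_NEXT_FRAME_RATE_MSK) >>
	    IWM_TX_CMD_NEXT_FRAME_RATE_POS;
}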
/*
* TX command Frame life time in us - to be written in pm_frame_timeout
*/
#define IWM_TX_CMD_LIFE_TIME_INFINITE 0xFFFFFFFF
#define IWM_TX_CMD_LIFE_TIME_DEFAULT 2000000 /* 2000 ms*/
#define IWM_TX_CMD_LIFE_TIME_PROBE_RESP 40000 /* 40 ms */
#define IWM_TX_CMD_LIFE_TIME_EXPIRED_FRAME 0
/*
* TID for non QoS frames - to be written in tid_tspec
*/
#define IWM_TID_NON_QOS IWM_MAX_TID_COUNT
/*
* Limits on the retransmissions - to be written in {data,rts}_retry_limit
*/
#define IWM_DEFAULT_TX_RETRY 15
#define IWM_MGMT_DFAULT_RETRY_LIMIT 3
#define IWM_RTS_DFAULT_RETRY_LIMIT 60
#define IWM_BAR_DFAULT_RETRY_LIMIT 60
#define IWM_LOW_RETRY_LIMIT 7
/* TODO: complete documentation for try_cnt and btkill_cnt */
/**
* struct iwm_tx_cmd - TX command struct to FW
* ( IWM_TX_CMD = 0x1c )
* @len: in bytes of the payload, see below for details
* @next_frame_len: same as len, but for next frame (0 if not applicable)
* Used for fragmentation and bursting, but not in 11n aggregation.
* @tx_flags: combination of IWM_TX_CMD_FLG_*
* @rate_n_flags: rate for *all* Tx attempts, if IWM_TX_CMD_FLG_STA_RATE_MSK is
* cleared. Combination of IWM_RATE_MCS_*
* @sta_id: index of destination station in FW station table
* @sec_ctl: security control, IWM_TX_CMD_SEC_*
- * @initial_rate_index: index into the the rate table for initial TX attempt.
+ * @initial_rate_index: index into the rate table for initial TX attempt.
* Applied if IWM_TX_CMD_FLG_STA_RATE_MSK is set, normally 0 for data frames.
* @key: security key
* @next_frame_flags: IWM_TX_CMD_SEC_* and IWM_TX_CMD_NEXT_FRAME_*
* @life_time: frame life time (usecs??)
* @dram_lsb_ptr: Physical address of scratch area in the command (try_cnt +
* btkill_cnt + reserved), first 32 bits. "0" disables usage.
* @dram_msb_ptr: upper bits of the scratch physical address
* @rts_retry_limit: max attempts for RTS
* @data_retry_limit: max attempts to send the data packet
* @tid_tspec: TID/tspec
* @pm_frame_timeout: PM TX frame timeout
* @driver_txop: duration of EDCA TXOP, in 32-usec units. Set this if not
* specified by HCCA protocol
*
* The byte count (both len and next_frame_len) includes MAC header
* (24/26/30/32 bytes)
* + 2 bytes pad if 26/30 header size
* + 8 byte IV for CCM or TKIP (not used for WEP)
* + Data payload
* + 8-byte MIC (not used for CCM/WEP)
* It does not include post-MAC padding, i.e.,
* MIC (CCM) 8 bytes, ICV (WEP/TKIP/CKIP) 4 bytes, CRC 4 bytes.
* Range of len: 14-2342 bytes.
*
* After the struct fields the MAC header is placed, plus any padding,
* and then the actual payload.
*/
struct iwm_tx_cmd {
uint16_t len;
uint16_t next_frame_len;
uint32_t tx_flags;
struct {
uint8_t try_cnt;
uint8_t btkill_cnt;
uint16_t reserved;
} scratch; /* DRAM_SCRATCH_API_U_VER_1 */
uint32_t rate_n_flags;
uint8_t sta_id;
uint8_t sec_ctl;
uint8_t initial_rate_index;
uint8_t reserved2;
uint8_t key[16];
uint16_t next_frame_flags;
uint16_t reserved3;
uint32_t life_time;
uint32_t dram_lsb_ptr;
uint8_t dram_msb_ptr;
uint8_t rts_retry_limit;
uint8_t data_retry_limit;
uint8_t tid_tspec;
uint16_t pm_frame_timeout;
uint16_t driver_txop;
uint8_t payload[0];
struct ieee80211_frame hdr[0];
} __packed; /* IWM_TX_CMD_API_S_VER_3 */
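/*
 * Illustrative sketch (not part of the original header): computing the
 * byte count for iwm_tx_cmd.len under the rules in the comment above,
 * assuming a CCM-protected frame. The helper name is hypothetical.
 */
static inline uint16_t
iwm_example_tx_cmd_len_ccm(uint16_t hdrlen, uint16_t payload_len)
{
	uint16_t len = hdrlen + payload_len;

	if (hdrlen == 26 || hdrlen == 30)
		len += 2;	/* pad after 26/30-byte MAC header */
	len += 8;		/* IV for CCM or TKIP (none for WEP) */
	return len;		/* post-MAC MIC/ICV/CRC are not counted */
}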
/*
* TX response related data
*/
/*
* enum iwm_tx_status - status that is returned by the fw after attempts to Tx
* @IWM_TX_STATUS_SUCCESS:
* @IWM_TX_STATUS_DIRECT_DONE:
* @IWM_TX_STATUS_POSTPONE_DELAY:
* @IWM_TX_STATUS_POSTPONE_FEW_BYTES:
* @IWM_TX_STATUS_POSTPONE_BT_PRIO:
* @IWM_TX_STATUS_POSTPONE_QUIET_PERIOD:
* @IWM_TX_STATUS_POSTPONE_CALC_TTAK:
* @IWM_TX_STATUS_FAIL_INTERNAL_CROSSED_RETRY:
* @IWM_TX_STATUS_FAIL_SHORT_LIMIT:
* @IWM_TX_STATUS_FAIL_LONG_LIMIT:
* @IWM_TX_STATUS_FAIL_UNDERRUN:
* @IWM_TX_STATUS_FAIL_DRAIN_FLOW:
* @IWM_TX_STATUS_FAIL_RFKILL_FLUSH:
* @IWM_TX_STATUS_FAIL_LIFE_EXPIRE:
* @IWM_TX_STATUS_FAIL_DEST_PS:
* @IWM_TX_STATUS_FAIL_HOST_ABORTED:
* @IWM_TX_STATUS_FAIL_BT_RETRY:
* @IWM_TX_STATUS_FAIL_STA_INVALID:
* @IWM_TX_STATUS_FAIL_FRAG_DROPPED:
* @IWM_TX_STATUS_FAIL_TID_DISABLE:
* @IWM_TX_STATUS_FAIL_FIFO_FLUSHED:
* @IWM_TX_STATUS_FAIL_SMALL_CF_POLL:
* @IWM_TX_STATUS_FAIL_FW_DROP:
* @IWM_TX_STATUS_FAIL_STA_COLOR_MISMATCH: mismatch between color of Tx cmd and
* STA table
* @IWM_TX_STATUS_INTERNAL_ABORT:
* @IWM_TX_MODE_MSK:
* @IWM_TX_MODE_NO_BURST:
* @IWM_TX_MODE_IN_BURST_SEQ:
* @IWM_TX_MODE_FIRST_IN_BURST:
* @IWM_TX_QUEUE_NUM_MSK:
*
* Valid only if frame_count == 1
* TODO: complete documentation
*/
enum iwm_tx_status {
IWM_TX_STATUS_MSK = 0x000000ff,
IWM_TX_STATUS_SUCCESS = 0x01,
IWM_TX_STATUS_DIRECT_DONE = 0x02,
/* postpone TX */
IWM_TX_STATUS_POSTPONE_DELAY = 0x40,
IWM_TX_STATUS_POSTPONE_FEW_BYTES = 0x41,
IWM_TX_STATUS_POSTPONE_BT_PRIO = 0x42,
IWM_TX_STATUS_POSTPONE_QUIET_PERIOD = 0x43,
IWM_TX_STATUS_POSTPONE_CALC_TTAK = 0x44,
/* abort TX */
IWM_TX_STATUS_FAIL_INTERNAL_CROSSED_RETRY = 0x81,
IWM_TX_STATUS_FAIL_SHORT_LIMIT = 0x82,
IWM_TX_STATUS_FAIL_LONG_LIMIT = 0x83,
IWM_TX_STATUS_FAIL_UNDERRUN = 0x84,
IWM_TX_STATUS_FAIL_DRAIN_FLOW = 0x85,
IWM_TX_STATUS_FAIL_RFKILL_FLUSH = 0x86,
IWM_TX_STATUS_FAIL_LIFE_EXPIRE = 0x87,
IWM_TX_STATUS_FAIL_DEST_PS = 0x88,
IWM_TX_STATUS_FAIL_HOST_ABORTED = 0x89,
IWM_TX_STATUS_FAIL_BT_RETRY = 0x8a,
IWM_TX_STATUS_FAIL_STA_INVALID = 0x8b,
IWM_TX_STATUS_FAIL_FRAG_DROPPED = 0x8c,
IWM_TX_STATUS_FAIL_TID_DISABLE = 0x8d,
IWM_TX_STATUS_FAIL_FIFO_FLUSHED = 0x8e,
IWM_TX_STATUS_FAIL_SMALL_CF_POLL = 0x8f,
IWM_TX_STATUS_FAIL_FW_DROP = 0x90,
IWM_TX_STATUS_FAIL_STA_COLOR_MISMATCH = 0x91,
IWM_TX_STATUS_INTERNAL_ABORT = 0x92,
IWM_TX_MODE_MSK = 0x00000f00,
IWM_TX_MODE_NO_BURST = 0x00000000,
IWM_TX_MODE_IN_BURST_SEQ = 0x00000100,
IWM_TX_MODE_FIRST_IN_BURST = 0x00000200,
IWM_TX_QUEUE_NUM_MSK = 0x0001f000,
IWM_TX_NARROW_BW_MSK = 0x00060000,
IWM_TX_NARROW_BW_1DIV2 = 0x00020000,
IWM_TX_NARROW_BW_1DIV4 = 0x00040000,
IWM_TX_NARROW_BW_1DIV8 = 0x00060000,
};
/*
* enum iwm_tx_agg_status - TX aggregation status
* @IWM_AGG_TX_STATE_STATUS_MSK:
* @IWM_AGG_TX_STATE_TRANSMITTED:
* @IWM_AGG_TX_STATE_UNDERRUN:
* @IWM_AGG_TX_STATE_BT_PRIO:
* @IWM_AGG_TX_STATE_FEW_BYTES:
* @IWM_AGG_TX_STATE_ABORT:
* @IWM_AGG_TX_STATE_LAST_SENT_TTL:
* @IWM_AGG_TX_STATE_LAST_SENT_TRY_CNT:
* @IWM_AGG_TX_STATE_LAST_SENT_BT_KILL:
* @IWM_AGG_TX_STATE_SCD_QUERY:
* @IWM_AGG_TX_STATE_TEST_BAD_CRC32:
* @IWM_AGG_TX_STATE_RESPONSE:
* @IWM_AGG_TX_STATE_DUMP_TX:
* @IWM_AGG_TX_STATE_DELAY_TX:
* @IWM_AGG_TX_STATE_TRY_CNT_MSK: Retry count for 1st frame in aggregation (retries
* occur if tx failed for this frame when it was a member of a previous
* aggregation block). If rate scaling is used, retry count indicates the
* rate table entry used for all frames in the new agg.
* @IWM_AGG_TX_STATE_SEQ_NUM_MSK: Command ID and sequence number of Tx command for
* this frame
*
* TODO: complete documentation
*/
enum iwm_tx_agg_status {
IWM_AGG_TX_STATE_STATUS_MSK = 0x00fff,
IWM_AGG_TX_STATE_TRANSMITTED = 0x000,
IWM_AGG_TX_STATE_UNDERRUN = 0x001,
IWM_AGG_TX_STATE_BT_PRIO = 0x002,
IWM_AGG_TX_STATE_FEW_BYTES = 0x004,
IWM_AGG_TX_STATE_ABORT = 0x008,
IWM_AGG_TX_STATE_LAST_SENT_TTL = 0x010,
IWM_AGG_TX_STATE_LAST_SENT_TRY_CNT = 0x020,
IWM_AGG_TX_STATE_LAST_SENT_BT_KILL = 0x040,
IWM_AGG_TX_STATE_SCD_QUERY = 0x080,
IWM_AGG_TX_STATE_TEST_BAD_CRC32 = 0x0100,
IWM_AGG_TX_STATE_RESPONSE = 0x1ff,
IWM_AGG_TX_STATE_DUMP_TX = 0x200,
IWM_AGG_TX_STATE_DELAY_TX = 0x400,
IWM_AGG_TX_STATE_TRY_CNT_POS = 12,
IWM_AGG_TX_STATE_TRY_CNT_MSK = 0xf << IWM_AGG_TX_STATE_TRY_CNT_POS,
};
#define IWM_AGG_TX_STATE_LAST_SENT_MSK (IWM_AGG_TX_STATE_LAST_SENT_TTL| \
IWM_AGG_TX_STATE_LAST_SENT_TRY_CNT| \
IWM_AGG_TX_STATE_LAST_SENT_BT_KILL)
/*
* The mask below describes a status where we are absolutely sure that the MPDU
* wasn't sent. For BA/Underrun we cannot be that sure. All we know is that we've
* written the bytes to the TXE, but we know nothing about what the DSP did.
*/
#define IWM_AGG_TX_STAT_FRAME_NOT_SENT (IWM_AGG_TX_STATE_FEW_BYTES | \
IWM_AGG_TX_STATE_ABORT | \
IWM_AGG_TX_STATE_SCD_QUERY)
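/*
 * Illustrative sketch (not part of the original header): classifying a
 * per-frame aggregation status word with the masks above. The helper
 * names are hypothetical.
 */
static inline int
iwm_example_agg_frame_not_sent(uint16_t status)
{
	return (status & IWM_AGG_TX_STAT_FRAME_NOT_SENT) != 0;
}

static inline int
iwm_example_agg_try_cnt(uint16_t status)
{
	return (status & IWM_AGG_TX_STATE_TRY_CNT_MSK) >>
	    IWM_AGG_TX_STATE_TRY_CNT_POS;
}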
/*
* IWM_REPLY_TX = 0x1c (response)
*
* This response may be in one of two slightly different formats, indicated
* by the frame_count field:
*
* 1) No aggregation (frame_count == 1). This reports Tx results for a single
* frame. Multiple attempts, at various bit rates, may have been made for
* this frame.
*
* 2) Aggregation (frame_count > 1). This reports Tx results for two or more
* frames that used block-acknowledge. All frames were transmitted at the
* same rate. Rate scaling may have been used if the first frame in this new
* agg block failed in previous agg block(s).
*
* Note that, for aggregation, ACK (block-ack) status is not delivered
* here; block-ack has not been received by the time the device records
* this status.
* This status relates to reasons the tx might have been blocked or aborted
* within the device, rather than whether it was received successfully by
* the destination station.
*/
/**
* struct iwm_agg_tx_status - per packet TX aggregation status
* @status: enum iwm_tx_agg_status
* @sequence: Sequence # for this frame's Tx cmd (not SSN!)
*/
struct iwm_agg_tx_status {
uint16_t status;
uint16_t sequence;
} __packed;
/*
* definitions for initial rate index field
* bits [3:0] initial rate index
* bits [6:4] rate table color, used for the initial rate
* bit-7 invalid rate indication
*/
#define IWM_TX_RES_INIT_RATE_INDEX_MSK 0x0f
#define IWM_TX_RES_RATE_TABLE_COLOR_MSK 0x70
#define IWM_TX_RES_INV_RATE_INDEX_MSK 0x80
#define IWM_MVM_TX_RES_GET_TID(_ra_tid) ((_ra_tid) & 0x0f)
#define IWM_MVM_TX_RES_GET_RA(_ra_tid) ((_ra_tid) >> 4)
/**
* struct iwm_mvm_tx_resp - notifies that fw is TXing a packet
* ( IWM_REPLY_TX = 0x1c )
* @frame_count: 1 no aggregation, >1 aggregation
* @bt_kill_count: num of times blocked by bluetooth (unused for agg)
* @failure_rts: num of failures due to unsuccessful RTS
* @failure_frame: num failures due to no ACK (unused for agg)
* @initial_rate: for non-agg: rate of the successful Tx. For agg: rate of the
* Tx of all the batch. IWM_RATE_MCS_*
* @wireless_media_time: for non-agg: RTS + CTS + frame tx attempts time + ACK.
* for agg: RTS + CTS + aggregation tx time + block-ack time.
* in usec.
* @pa_status: tx power info
* @pa_integ_res_a: tx power info
* @pa_integ_res_b: tx power info
* @pa_integ_res_c: tx power info
* @measurement_req_id: tx power info
* @tfd_info: TFD information set by the FH
* @seq_ctl: sequence control from the Tx cmd
* @byte_cnt: byte count from the Tx cmd
* @tlc_info: TLC rate info
* @ra_tid: bits [3:0] = ra, bits [7:4] = tid
* @frame_ctrl: frame control
* @status: for non-agg: frame status IWM_TX_STATUS_*
* for agg: status of 1st frame, IWM_AGG_TX_STATE_*; other frame status fields
* follow this one, up to frame_count.
*
* After the array of statuses comes the SSN of the SCD. Look at
* %iwm_mvm_get_scd_ssn for more details.
*/
struct iwm_mvm_tx_resp {
uint8_t frame_count;
uint8_t bt_kill_count;
uint8_t failure_rts;
uint8_t failure_frame;
uint32_t initial_rate;
uint16_t wireless_media_time;
uint8_t pa_status;
uint8_t pa_integ_res_a[3];
uint8_t pa_integ_res_b[3];
uint8_t pa_integ_res_c[3];
uint16_t measurement_req_id;
uint16_t reserved;
uint32_t tfd_info;
uint16_t seq_ctl;
uint16_t byte_cnt;
uint8_t tlc_info;
uint8_t ra_tid;
uint16_t frame_ctrl;
struct iwm_agg_tx_status status;
} __packed; /* IWM_TX_RSP_API_S_VER_3 */
/**
* struct iwm_mvm_ba_notif - notifies about reception of BA
* ( IWM_BA_NOTIF = 0xc5 )
* @sta_addr_lo32: lower 32 bits of the MAC address
* @sta_addr_hi16: upper 16 bits of the MAC address
* @sta_id: Index of recipient (BA-sending) station in fw's station table
* @tid: tid of the session
* @seq_ctl:
* @bitmap: the bitmap of the BA notification as seen in the air
* @scd_flow: the tx queue this BA relates to
* @scd_ssn: the index of the last contiguously sent packet
* @txed: number of Txed frames in this batch
* @txed_2_done: number of Acked frames in this batch
*/
struct iwm_mvm_ba_notif {
uint32_t sta_addr_lo32;
uint16_t sta_addr_hi16;
uint16_t reserved;
uint8_t sta_id;
uint8_t tid;
uint16_t seq_ctl;
uint64_t bitmap;
uint16_t scd_flow;
uint16_t scd_ssn;
uint8_t txed;
uint8_t txed_2_done;
uint16_t reserved1;
} __packed;
/*
* struct iwm_mac_beacon_cmd - beacon template command
* @tx: the tx commands associated with the beacon frame
* @template_id: currently equal to the mac context id of the corresponding
* mac.
* @tim_idx: the offset of the tim IE in the beacon
* @tim_size: the length of the tim IE
* @frame: the template of the beacon frame
*/
struct iwm_mac_beacon_cmd {
struct iwm_tx_cmd tx;
uint32_t template_id;
uint32_t tim_idx;
uint32_t tim_size;
struct ieee80211_frame frame[0];
} __packed;
struct iwm_beacon_notif {
struct iwm_mvm_tx_resp beacon_notify_hdr;
uint64_t tsf;
uint32_t ibss_mgr_status;
} __packed;
/**
* enum iwm_dump_control - dump (flush) control flags
- * @IWM_DUMP_TX_FIFO_FLUSH: Dump MSDUs until the the FIFO is empty
+ * @IWM_DUMP_TX_FIFO_FLUSH: Dump MSDUs until the FIFO is empty
* and the TFD queues are empty.
*/
enum iwm_dump_control {
IWM_DUMP_TX_FIFO_FLUSH = (1 << 1),
};
/**
* struct iwm_tx_path_flush_cmd -- queue/FIFO flush command
* @queues_ctl: bitmap of queues to flush
* @flush_ctl: control flags
* @reserved: reserved
*/
struct iwm_tx_path_flush_cmd {
uint32_t queues_ctl;
uint16_t flush_ctl;
uint16_t reserved;
} __packed; /* IWM_TX_PATH_FLUSH_CMD_API_S_VER_1 */
/**
* iwm_mvm_get_scd_ssn - returns the SSN of the SCD
* @tx_resp: the Tx response from the fw (agg or non-agg)
*
* When the fw sends an AMPDU, it fetches the MPDUs one after the other. Since
* it can't know that everything will go well until the end of the AMPDU, it
* can't know in advance the number of MPDUs that will be sent in the current
* batch. This is why it writes the agg Tx response while it fetches the MPDUs.
* Hence, it can't know in advance what the SSN of the SCD will be at the end
* of the batch. This is why the SSN of the SCD is written at the end of the
* whole struct at a variable offset. This function knows how to cope with the
* variable offset and returns the SSN of the SCD.
*/
static inline uint32_t iwm_mvm_get_scd_ssn(struct iwm_mvm_tx_resp *tx_resp)
{
return le32_to_cpup((uint32_t *)&tx_resp->status +
tx_resp->frame_count) & 0xfff;
}
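/*
 * Illustrative sketch (not part of the original header): pulling the
 * fields of interest out of a Tx response with the accessor above and
 * the IWM_MVM_TX_RES_GET_* macros. The helper name is hypothetical.
 */
static inline void
iwm_example_parse_tx_resp(struct iwm_mvm_tx_resp *tx_resp,
    uint32_t *ssn, int *ra, int *tid)
{
	*ssn = iwm_mvm_get_scd_ssn(tx_resp);
	*ra = IWM_MVM_TX_RES_GET_RA(tx_resp->ra_tid);
	*tid = IWM_MVM_TX_RES_GET_TID(tx_resp->ra_tid);
}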
/*
* END mvm/fw-api-tx.h
*/
/*
* BEGIN mvm/fw-api-scan.h
*/
/* Scan Commands, Responses, Notifications */
/* Masks for iwm_scan_channel.type flags */
#define IWM_SCAN_CHANNEL_TYPE_ACTIVE (1 << 0)
#define IWM_SCAN_CHANNEL_NARROW_BAND (1 << 22)
/* Max number of IEs for direct SSID scans in a command */
#define IWM_PROBE_OPTION_MAX 20
/**
* struct iwm_scan_channel - entry in IWM_REPLY_SCAN_CMD channel table
* @channel: band is selected by iwm_scan_cmd "flags" field
* @tx_gain: gain for analog radio
* @dsp_atten: gain for DSP
* @active_dwell: dwell time for active scan in TU, typically 5-50
* @passive_dwell: dwell time for passive scan in TU, typically 20-500
* @type: type is broken down to these bits:
* bit 0: 0 = passive, 1 = active
* bits 1-20: SSID direct bit map. If any of these bits is set then
* the corresponding SSID IE is transmitted in probe request
* (bit i adds IE in position i to the probe request)
* bit 22: channel width, 0 = regular, 1 = TGj narrow channel
*
* @iteration_count:
* @iteration_interval:
* This struct is used once for each channel in the scan list.
* Each channel can independently select:
* 1) SSID for directed active scans
* 2) Txpower setting (for rate specified within Tx command)
* 3) How long to stay on-channel (behavior may be modified by quiet_time,
* quiet_plcp_th, good_CRC_th)
*
* To avoid uCode errors, make sure the following are true (see comments
* under struct iwm_scan_cmd about max_out_time and quiet_time):
* 1) If using passive_dwell (i.e. passive_dwell != 0):
* active_dwell <= passive_dwell (< max_out_time if max_out_time != 0)
* 2) quiet_time <= active_dwell
* 3) If restricting off-channel time (i.e. max_out_time !=0):
* passive_dwell < max_out_time
* active_dwell < max_out_time
*/
struct iwm_scan_channel {
uint32_t type;
uint16_t channel;
uint16_t iteration_count;
uint32_t iteration_interval;
uint16_t active_dwell;
uint16_t passive_dwell;
} __packed; /* IWM_SCAN_CHANNEL_CONTROL_API_S_VER_1 */
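/*
 * Illustrative sketch (not part of the original header): checking the
 * uCode dwell-time constraints listed in the comment above, assuming
 * host byte order. The helper name is hypothetical.
 */
static inline int
iwm_example_scan_channel_ok(const struct iwm_scan_channel *c,
    uint16_t quiet_time, uint32_t max_out_time)
{
	if (c->passive_dwell != 0 && c->active_dwell > c->passive_dwell)
		return 0;
	if (quiet_time > c->active_dwell)
		return 0;
	if (max_out_time != 0 &&
	    (c->passive_dwell >= max_out_time ||
	     c->active_dwell >= max_out_time))
		return 0;
	return 1;
}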
/**
* struct iwm_ssid_ie - directed scan network information element
*
* Up to 20 of these may appear in IWM_REPLY_SCAN_CMD,
* selected by "type" bit field in struct iwm_scan_channel;
* each channel may select different ssids from among the 20 entries.
* SSID IEs get transmitted in reverse order of entry.
*/
struct iwm_ssid_ie {
uint8_t id;
uint8_t len;
uint8_t ssid[IEEE80211_NWID_LEN];
} __packed; /* IWM_SCAN_DIRECT_SSID_IE_API_S_VER_1 */
/**
* iwm_scan_flags - masks for scan command flags
*@IWM_SCAN_FLAGS_PERIODIC_SCAN:
*@IWM_SCAN_FLAGS_P2P_PUBLIC_ACTION_FRAME_TX:
*@IWM_SCAN_FLAGS_DELAYED_SCAN_LOWBAND:
*@IWM_SCAN_FLAGS_DELAYED_SCAN_HIGHBAND:
*@IWM_SCAN_FLAGS_FRAGMENTED_SCAN:
*@IWM_SCAN_FLAGS_PASSIVE2ACTIVE: use active scan on channels that were active
* in the past hour, even if they are marked as passive.
*/
enum iwm_scan_flags {
IWM_SCAN_FLAGS_PERIODIC_SCAN = (1 << 0),
IWM_SCAN_FLAGS_P2P_PUBLIC_ACTION_FRAME_TX = (1 << 1),
IWM_SCAN_FLAGS_DELAYED_SCAN_LOWBAND = (1 << 2),
IWM_SCAN_FLAGS_DELAYED_SCAN_HIGHBAND = (1 << 3),
IWM_SCAN_FLAGS_FRAGMENTED_SCAN = (1 << 4),
IWM_SCAN_FLAGS_PASSIVE2ACTIVE = (1 << 5),
};
/**
* enum iwm_scan_type - Scan types for scan command
* @IWM_SCAN_TYPE_FORCED:
* @IWM_SCAN_TYPE_BACKGROUND:
* @IWM_SCAN_TYPE_OS:
* @IWM_SCAN_TYPE_ROAMING:
* @IWM_SCAN_TYPE_ACTION:
* @IWM_SCAN_TYPE_DISCOVERY:
* @IWM_SCAN_TYPE_DISCOVERY_FORCED:
*/
enum iwm_scan_type {
IWM_SCAN_TYPE_FORCED = 0,
IWM_SCAN_TYPE_BACKGROUND = 1,
IWM_SCAN_TYPE_OS = 2,
IWM_SCAN_TYPE_ROAMING = 3,
IWM_SCAN_TYPE_ACTION = 4,
IWM_SCAN_TYPE_DISCOVERY = 5,
IWM_SCAN_TYPE_DISCOVERY_FORCED = 6,
}; /* IWM_SCAN_ACTIVITY_TYPE_E_VER_1 */
/* Maximal number of channels to scan */
#define IWM_MAX_NUM_SCAN_CHANNELS 0x24
/**
* struct iwm_scan_cmd - scan request command
* ( IWM_SCAN_REQUEST_CMD = 0x80 )
* @len: command length in bytes
* @scan_flags: scan flags from IWM_SCAN_FLAGS_*
* @channel_count: num of channels in channel list (1 - IWM_MAX_NUM_SCAN_CHANNELS)
* @quiet_time: in msecs, dwell this time for active scan on quiet channels
* @quiet_plcp_th: quiet PLCP threshold (channel is quiet if fewer than
* this number of packets were received, typically 1)
* @passive2active: whether auto switching from passive to active during scan is allowed
* @rxchain_sel_flags: RXON_RX_CHAIN_*
* @max_out_time: in usecs, max out of serving channel time
* @suspend_time: how long to pause scan when returning to service channel:
* bits 0-19: beacon interval in usecs (suspend before executing)
* bits 20-23: reserved
* bits 24-31: number of beacons (suspend between channels)
* @rxon_flags: RXON_FLG_*
* @filter_flags: RXON_FILTER_*
* @tx_cmd: for active scans (zero for passive), w/o payload,
* no RS so specify TX rate
* @direct_scan: direct scan SSIDs
* @type: one of IWM_SCAN_TYPE_*
* @repeats: how many times to repeat the scan
*/
struct iwm_scan_cmd {
uint16_t len;
uint8_t scan_flags;
uint8_t channel_count;
uint16_t quiet_time;
uint16_t quiet_plcp_th;
uint16_t passive2active;
uint16_t rxchain_sel_flags;
uint32_t max_out_time;
uint32_t suspend_time;
/* IWM_RX_ON_FLAGS_API_S_VER_1 */
uint32_t rxon_flags;
uint32_t filter_flags;
struct iwm_tx_cmd tx_cmd;
struct iwm_ssid_ie direct_scan[IWM_PROBE_OPTION_MAX];
uint32_t type;
uint32_t repeats;
/*
* Probe request frame, followed by channel list.
*
* Size of probe request frame is specified by byte count in tx_cmd.
* Channel list follows immediately after probe request frame.
* Number of channels in list is specified by channel_count.
* Each channel in list is of type:
*
* struct iwm_scan_channel channels[0];
*
* NOTE: Only one band of channels can be scanned per pass. You
* must not mix 2.4GHz channels and 5.2GHz channels, and you must wait
* for one scan to complete (i.e. receive IWM_SCAN_COMPLETE_NOTIFICATION)
* before requesting another scan.
*/
uint8_t data[0];
} __packed; /* IWM_SCAN_REQUEST_FIXED_PART_API_S_VER_5 */
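/*
 * Illustrative sketch (not part of the original header): the total size
 * of a scan request, following the layout described in the comment
 * inside the struct above. The helper name is hypothetical.
 */
static inline size_t
iwm_example_scan_cmd_size(size_t probe_req_len, int channel_count)
{
	return sizeof(struct iwm_scan_cmd) + probe_req_len +
	    channel_count * sizeof(struct iwm_scan_channel);
}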
/* Response to scan request contains only status with one of these values */
#define IWM_SCAN_RESPONSE_OK 0x1
#define IWM_SCAN_RESPONSE_ERROR 0x2
/*
* IWM_SCAN_ABORT_CMD = 0x81
* When scan abort is requested, the command has no fields except the common
* header. The response contains only a status with one of these values.
*/
#define IWM_SCAN_ABORT_POSSIBLE 0x1
#define IWM_SCAN_ABORT_IGNORED 0x2 /* no pending scans */
/* TODO: complete documentation */
#define IWM_SCAN_OWNER_STATUS 0x1
#define IWM_MEASURE_OWNER_STATUS 0x2
/**
* struct iwm_scan_start_notif - notifies start of scan in the device
* ( IWM_SCAN_START_NOTIFICATION = 0x82 )
* @tsf_low: TSF timer (lower half) in usecs
* @tsf_high: TSF timer (higher half) in usecs
* @beacon_timer: structured as follows:
* bits 0:19 - beacon interval in usecs
* bits 20:23 - reserved (0)
* bits 24:31 - number of beacons
* @channel: which channel is scanned
* @band: 0 for 5.2 GHz, 1 for 2.4 GHz
* @status: one of *_OWNER_STATUS
*/
struct iwm_scan_start_notif {
uint32_t tsf_low;
uint32_t tsf_high;
uint32_t beacon_timer;
uint8_t channel;
uint8_t band;
uint8_t reserved[2];
uint32_t status;
} __packed; /* IWM_SCAN_START_NTF_API_S_VER_1 */
/* scan results probe_status first bit indicates success */
#define IWM_SCAN_PROBE_STATUS_OK 0
#define IWM_SCAN_PROBE_STATUS_TX_FAILED (1 << 0)
/* error statuses combined with TX_FAILED */
#define IWM_SCAN_PROBE_STATUS_FAIL_TTL (1 << 1)
#define IWM_SCAN_PROBE_STATUS_FAIL_BT (1 << 2)
/* How many statistics are gathered for each channel */
#define IWM_SCAN_RESULTS_STATISTICS 1
/**
* enum iwm_scan_complete_status - status codes for scan complete notifications
* @IWM_SCAN_COMP_STATUS_OK: scan completed successfully
* @IWM_SCAN_COMP_STATUS_ABORT: scan was aborted by user
* @IWM_SCAN_COMP_STATUS_ERR_SLEEP: sending null sleep packet failed
* @IWM_SCAN_COMP_STATUS_ERR_CHAN_TIMEOUT: timeout before channel is ready
* @IWM_SCAN_COMP_STATUS_ERR_PROBE: sending probe request failed
* @IWM_SCAN_COMP_STATUS_ERR_WAKEUP: sending null wakeup packet failed
* @IWM_SCAN_COMP_STATUS_ERR_ANTENNAS: invalid antennas chosen at scan command
* @IWM_SCAN_COMP_STATUS_ERR_INTERNAL: internal error caused scan abort
* @IWM_SCAN_COMP_STATUS_ERR_COEX: medium was lost to WiMax
* @IWM_SCAN_COMP_STATUS_P2P_ACTION_OK: P2P public action frame TX was successful
* (not an error!)
* @IWM_SCAN_COMP_STATUS_ITERATION_END: indicates the end of one repetition the driver
* asked for
* @IWM_SCAN_COMP_STATUS_ERR_ALLOC_TE: scan could not allocate time events
*/
enum iwm_scan_complete_status {
IWM_SCAN_COMP_STATUS_OK = 0x1,
IWM_SCAN_COMP_STATUS_ABORT = 0x2,
IWM_SCAN_COMP_STATUS_ERR_SLEEP = 0x3,
IWM_SCAN_COMP_STATUS_ERR_CHAN_TIMEOUT = 0x4,
IWM_SCAN_COMP_STATUS_ERR_PROBE = 0x5,
IWM_SCAN_COMP_STATUS_ERR_WAKEUP = 0x6,
IWM_SCAN_COMP_STATUS_ERR_ANTENNAS = 0x7,
IWM_SCAN_COMP_STATUS_ERR_INTERNAL = 0x8,
IWM_SCAN_COMP_STATUS_ERR_COEX = 0x9,
IWM_SCAN_COMP_STATUS_P2P_ACTION_OK = 0xA,
IWM_SCAN_COMP_STATUS_ITERATION_END = 0x0B,
IWM_SCAN_COMP_STATUS_ERR_ALLOC_TE = 0x0C,
};
/**
* struct iwm_scan_results_notif - scan results for one channel
* ( IWM_SCAN_RESULTS_NOTIFICATION = 0x83 )
* @channel: which channel the results are from
* @band: 0 for 5.2 GHz, 1 for 2.4 GHz
* @probe_status: IWM_SCAN_PROBE_STATUS_*, indicates success of probe request
* @num_probe_not_sent: # of requests that weren't sent due to lack of time
* @duration: duration spent in channel, in usecs
* @statistics: statistics gathered for this channel
*/
struct iwm_scan_results_notif {
uint8_t channel;
uint8_t band;
uint8_t probe_status;
uint8_t num_probe_not_sent;
uint32_t duration;
uint32_t statistics[IWM_SCAN_RESULTS_STATISTICS];
} __packed; /* IWM_SCAN_RESULT_NTF_API_S_VER_2 */
/**
* struct iwm_scan_complete_notif - notifies end of scanning (all channels)
* ( IWM_SCAN_COMPLETE_NOTIFICATION = 0x84 )
* @scanned_channels: number of channels scanned (and number of valid results)
* @status: one of IWM_SCAN_COMP_STATUS_*
* @bt_status: BT on/off status
* @last_channel: last channel that was scanned
* @tsf_low: TSF timer (lower half) in usecs
* @tsf_high: TSF timer (higher half) in usecs
* @results: all scan results, only "scanned_channels" of them are valid
*/
struct iwm_scan_complete_notif {
uint8_t scanned_channels;
uint8_t status;
uint8_t bt_status;
uint8_t last_channel;
uint32_t tsf_low;
uint32_t tsf_high;
struct iwm_scan_results_notif results[IWM_MAX_NUM_SCAN_CHANNELS];
} __packed; /* IWM_SCAN_COMPLETE_NTF_API_S_VER_2 */
/* scan offload */
#define IWM_MAX_SCAN_CHANNELS 40
#define IWM_SCAN_MAX_BLACKLIST_LEN 64
#define IWM_SCAN_SHORT_BLACKLIST_LEN 16
#define IWM_SCAN_MAX_PROFILES 11
#define IWM_SCAN_OFFLOAD_PROBE_REQ_SIZE 512
/* Default watchdog (in MS) for scheduled scan iteration */
#define IWM_SCHED_SCAN_WATCHDOG cpu_to_le16(15000)
#define IWM_GOOD_CRC_TH_DEFAULT cpu_to_le16(1)
#define IWM_CAN_ABORT_STATUS 1
#define IWM_FULL_SCAN_MULTIPLIER 5
#define IWM_FAST_SCHED_SCAN_ITERATIONS 3
enum iwm_scan_framework_client {
IWM_SCAN_CLIENT_SCHED_SCAN = (1 << 0),
IWM_SCAN_CLIENT_NETDETECT = (1 << 1),
IWM_SCAN_CLIENT_ASSET_TRACKING = (1 << 2),
};
/**
* struct iwm_scan_offload_cmd - IWM_SCAN_REQUEST_FIXED_PART_API_S_VER_6
* @scan_flags: see enum iwm_scan_flags
* @channel_count: channels in channel list
* @quiet_time: dwell time, in milliseconds, on a quiet channel
* @quiet_plcp_th: quiet channel num of packets threshold
* @good_CRC_th: passive to active promotion threshold
* @rx_chain: RXON rx chain.
* @max_out_time: max usecs to be out of the associated channel
* @suspend_time: pause scan this long when returning to service channel
* @flags: RXON flags
* @filter_flags: RXON filter
* @tx_cmd: tx command for active scan; for 2GHz and for 5GHz.
* @direct_scan: list of SSIDs for directed active scan
* @scan_type: see enum iwm_scan_type.
* @rep_count: repetition count for each scheduled scan iteration.
*/
struct iwm_scan_offload_cmd {
uint16_t len;
uint8_t scan_flags;
uint8_t channel_count;
uint16_t quiet_time;
uint16_t quiet_plcp_th;
uint16_t good_CRC_th;
uint16_t rx_chain;
uint32_t max_out_time;
uint32_t suspend_time;
/* IWM_RX_ON_FLAGS_API_S_VER_1 */
uint32_t flags;
uint32_t filter_flags;
struct iwm_tx_cmd tx_cmd[2];
/* IWM_SCAN_DIRECT_SSID_IE_API_S_VER_1 */
struct iwm_ssid_ie direct_scan[IWM_PROBE_OPTION_MAX];
uint32_t scan_type;
uint32_t rep_count;
} __packed;
enum iwm_scan_offload_channel_flags {
IWM_SCAN_OFFLOAD_CHANNEL_ACTIVE = (1 << 0),
IWM_SCAN_OFFLOAD_CHANNEL_NARROW = (1 << 22),
IWM_SCAN_OFFLOAD_CHANNEL_FULL = (1 << 24),
IWM_SCAN_OFFLOAD_CHANNEL_PARTIAL = (1 << 25),
};
/**
* iwm_scan_channel_cfg - IWM_SCAN_CHANNEL_CFG_S
* @type: bitmap - see enum iwm_scan_offload_channel_flags.
* 0: passive (0) or active (1) scan.
* 1-20: directed scan to i'th ssid.
* 22: channel width configuration - 1 for narrow.
* 24: full scan.
* 25: partial scan.
* @channel_number: channel number 1-13 etc.
* @iter_count: repetition count for the channel.
* @iter_interval: interval between two iterations on one channel.
* @dwell_time: entry 0 - active scan, entry 1 - passive scan.
*/
struct iwm_scan_channel_cfg {
uint32_t type[IWM_MAX_SCAN_CHANNELS];
uint16_t channel_number[IWM_MAX_SCAN_CHANNELS];
uint16_t iter_count[IWM_MAX_SCAN_CHANNELS];
uint32_t iter_interval[IWM_MAX_SCAN_CHANNELS];
uint8_t dwell_time[IWM_MAX_SCAN_CHANNELS][2];
} __packed;
/**
* iwm_scan_offload_cfg - IWM_SCAN_OFFLOAD_CONFIG_API_S
* @scan_cmd: scan command fixed part
* @channel_cfg: scan channel configuration
* @data: probe request frames (one per band)
*/
struct iwm_scan_offload_cfg {
struct iwm_scan_offload_cmd scan_cmd;
struct iwm_scan_channel_cfg channel_cfg;
uint8_t data[0];
} __packed;
/**
* iwm_scan_offload_blacklist - IWM_SCAN_OFFLOAD_BLACKLIST_S
* @ssid: MAC address to filter out
* @reported_rssi: AP rssi reported to the host
* @client_bitmap: clients ignore this entry - enum scan_framework_client
*/
struct iwm_scan_offload_blacklist {
uint8_t ssid[IEEE80211_ADDR_LEN];
uint8_t reported_rssi;
uint8_t client_bitmap;
} __packed;
enum iwm_scan_offload_network_type {
IWM_NETWORK_TYPE_BSS = 1,
IWM_NETWORK_TYPE_IBSS = 2,
IWM_NETWORK_TYPE_ANY = 3,
};
enum iwm_scan_offload_band_selection {
IWM_SCAN_OFFLOAD_SELECT_2_4 = 0x4,
IWM_SCAN_OFFLOAD_SELECT_5_2 = 0x8,
IWM_SCAN_OFFLOAD_SELECT_ANY = 0xc,
};
/**
* iwm_scan_offload_profile - IWM_SCAN_OFFLOAD_PROFILE_S
* @ssid_index: index to ssid list in fixed part
* @unicast_cipher: encryption algorithm to match - bitmap
* @auth_alg: authentication algorithm to match - bitmap
* @network_type: enum iwm_scan_offload_network_type
* @band_selection: enum iwm_scan_offload_band_selection
* @client_bitmap: clients waiting for match - enum scan_framework_client
*/
struct iwm_scan_offload_profile {
uint8_t ssid_index;
uint8_t unicast_cipher;
uint8_t auth_alg;
uint8_t network_type;
uint8_t band_selection;
uint8_t client_bitmap;
uint8_t reserved[2];
} __packed;
/**
* iwm_scan_offload_profile_cfg - IWM_SCAN_OFFLOAD_PROFILES_CFG_API_S_VER_1
* @blacklist: AP list to filter out of scan results
* @profiles: profiles to search for match
* @blacklist_len: length of blacklist
* @num_profiles: num of profiles in the list
* @match_notify: clients waiting for match found notification
* @pass_match: clients waiting for the results
* @active_clients: active clients bitmap - enum scan_framework_client
* @any_beacon_notify: clients waiting for match notification without match
*/
struct iwm_scan_offload_profile_cfg {
struct iwm_scan_offload_profile profiles[IWM_SCAN_MAX_PROFILES];
uint8_t blacklist_len;
uint8_t num_profiles;
uint8_t match_notify;
uint8_t pass_match;
uint8_t active_clients;
uint8_t any_beacon_notify;
uint8_t reserved[2];
} __packed;
/**
* iwm_scan_offload_schedule - schedule of scan offload
* @delay: delay between iterations, in seconds.
* @iterations: num of scan iterations
* @full_scan_mul: number of partial scans before each full scan
*/
struct iwm_scan_offload_schedule {
uint16_t delay;
uint8_t iterations;
uint8_t full_scan_mul;
} __packed;
/*
* iwm_scan_offload_flags
*
* IWM_SCAN_OFFLOAD_FLAG_PASS_ALL: pass all results - no filtering.
* IWM_SCAN_OFFLOAD_FLAG_CACHED_CHANNEL: add cached channels to partial scan.
* IWM_SCAN_OFFLOAD_FLAG_ENERGY_SCAN: use energy based scan before partial scan
* on A band.
*/
enum iwm_scan_offload_flags {
IWM_SCAN_OFFLOAD_FLAG_PASS_ALL = (1 << 0),
IWM_SCAN_OFFLOAD_FLAG_CACHED_CHANNEL = (1 << 2),
IWM_SCAN_OFFLOAD_FLAG_ENERGY_SCAN = (1 << 3),
};
/**
* iwm_scan_offload_req - scan offload request command
* @flags: bitmap - enum iwm_scan_offload_flags.
* @watchdog: maximum scan duration in TU.
* @delay: delay in seconds before first iteration.
* @schedule_line: scan offload schedule, for fast and regular scan.
*/
struct iwm_scan_offload_req {
uint16_t flags;
uint16_t watchdog;
uint16_t delay;
uint16_t reserved;
struct iwm_scan_offload_schedule schedule_line[2];
} __packed;
enum iwm_scan_offload_compleate_status {
IWM_SCAN_OFFLOAD_COMPLETED = 1,
IWM_SCAN_OFFLOAD_ABORTED = 2,
};
/**
* iwm_scan_offload_complete - IWM_SCAN_OFFLOAD_COMPLETE_NTF_API_S_VER_1
* @last_schedule_line: last schedule line executed (fast or regular)
* @last_schedule_iteration: last scan iteration executed before scan abort
* @status: enum iwm_scan_offload_compleate_status
*/
struct iwm_scan_offload_complete {
uint8_t last_schedule_line;
uint8_t last_schedule_iteration;
uint8_t status;
uint8_t reserved;
} __packed;
/**
* iwm_sched_scan_results - IWM_SCAN_OFFLOAD_MATCH_FOUND_NTF_API_S_VER_1
* @ssid_bitmap: SSID indexes found in this iteration
* @client_bitmap: clients that are active and wait for this notification
*/
struct iwm_sched_scan_results {
uint16_t ssid_bitmap;
uint8_t client_bitmap;
uint8_t reserved;
};
/*
* END mvm/fw-api-scan.h
*/
/*
* BEGIN mvm/fw-api-sta.h
*/
/**
* enum iwm_sta_flags - flags for the ADD_STA host command
* @IWM_STA_FLG_REDUCED_TX_PWR_CTRL:
* @IWM_STA_FLG_REDUCED_TX_PWR_DATA:
* @IWM_STA_FLG_FLG_ANT_MSK: Antenna selection
* @IWM_STA_FLG_PS: set if STA is in Power Save
* @IWM_STA_FLG_INVALID: set if STA is invalid
* @IWM_STA_FLG_DLP_EN: Direct Link Protocol is enabled
* @IWM_STA_FLG_SET_ALL_KEYS: the current key applies to all key IDs
* @IWM_STA_FLG_DRAIN_FLOW: drain flow
* @IWM_STA_FLG_PAN: STA is for PAN interface
* @IWM_STA_FLG_CLASS_AUTH:
* @IWM_STA_FLG_CLASS_ASSOC:
* @IWM_STA_FLG_CLASS_MIMO_PROT:
* @IWM_STA_FLG_MAX_AGG_SIZE_MSK: maximal size for A-MPDU
* @IWM_STA_FLG_AGG_MPDU_DENS_MSK: maximal MPDU density for Tx aggregation
* @IWM_STA_FLG_FAT_EN_MSK: support for channel width (for Tx). This flag is
* initialised by driver and can be updated by fw upon reception of
* action frames that can change the channel width. When cleared the fw
* will send all the frames in 20MHz even when FAT channel is requested.
* @IWM_STA_FLG_MIMO_EN_MSK: support for MIMO. This flag is initialised by the
* driver and can be updated by fw upon reception of action frames.
* @IWM_STA_FLG_MFP_EN: Management Frame Protection
*/
enum iwm_sta_flags {
IWM_STA_FLG_REDUCED_TX_PWR_CTRL = (1 << 3),
IWM_STA_FLG_REDUCED_TX_PWR_DATA = (1 << 6),
IWM_STA_FLG_FLG_ANT_A = (1 << 4),
IWM_STA_FLG_FLG_ANT_B = (2 << 4),
IWM_STA_FLG_FLG_ANT_MSK = (IWM_STA_FLG_FLG_ANT_A |
IWM_STA_FLG_FLG_ANT_B),
IWM_STA_FLG_PS = (1 << 8),
IWM_STA_FLG_DRAIN_FLOW = (1 << 12),
IWM_STA_FLG_PAN = (1 << 13),
IWM_STA_FLG_CLASS_AUTH = (1 << 14),
IWM_STA_FLG_CLASS_ASSOC = (1 << 15),
IWM_STA_FLG_RTS_MIMO_PROT = (1 << 17),
IWM_STA_FLG_MAX_AGG_SIZE_SHIFT = 19,
IWM_STA_FLG_MAX_AGG_SIZE_8K = (0 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_16K = (1 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_32K = (2 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_64K = (3 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_128K = (4 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_256K = (5 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_512K = (6 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_1024K = (7 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_MAX_AGG_SIZE_MSK = (7 << IWM_STA_FLG_MAX_AGG_SIZE_SHIFT),
IWM_STA_FLG_AGG_MPDU_DENS_SHIFT = 23,
IWM_STA_FLG_AGG_MPDU_DENS_2US = (4 << IWM_STA_FLG_AGG_MPDU_DENS_SHIFT),
IWM_STA_FLG_AGG_MPDU_DENS_4US = (5 << IWM_STA_FLG_AGG_MPDU_DENS_SHIFT),
IWM_STA_FLG_AGG_MPDU_DENS_8US = (6 << IWM_STA_FLG_AGG_MPDU_DENS_SHIFT),
IWM_STA_FLG_AGG_MPDU_DENS_16US = (7 << IWM_STA_FLG_AGG_MPDU_DENS_SHIFT),
IWM_STA_FLG_AGG_MPDU_DENS_MSK = (7 << IWM_STA_FLG_AGG_MPDU_DENS_SHIFT),
IWM_STA_FLG_FAT_EN_20MHZ = (0 << 26),
IWM_STA_FLG_FAT_EN_40MHZ = (1 << 26),
IWM_STA_FLG_FAT_EN_80MHZ = (2 << 26),
IWM_STA_FLG_FAT_EN_160MHZ = (3 << 26),
IWM_STA_FLG_FAT_EN_MSK = (3 << 26),
IWM_STA_FLG_MIMO_EN_SISO = (0 << 28),
IWM_STA_FLG_MIMO_EN_MIMO2 = (1 << 28),
IWM_STA_FLG_MIMO_EN_MIMO3 = (2 << 28),
IWM_STA_FLG_MIMO_EN_MSK = (3 << 28),
};
/**
* enum iwm_sta_key_flag - key flags for the ADD_STA host command
* @IWM_STA_KEY_FLG_NO_ENC: no encryption
* @IWM_STA_KEY_FLG_WEP: WEP encryption algorithm
* @IWM_STA_KEY_FLG_CCM: CCMP encryption algorithm
* @IWM_STA_KEY_FLG_TKIP: TKIP encryption algorithm
* @IWM_STA_KEY_FLG_EXT: extended cipher algorithm (depends on the FW support)
* @IWM_STA_KEY_FLG_CMAC: CMAC encryption algorithm
* @IWM_STA_KEY_FLG_ENC_UNKNOWN: unknown encryption algorithm
* @IWM_STA_KEY_FLG_EN_MSK: mask for encryption algorithm value
* @IWM_STA_KEY_FLG_WEP_KEY_MAP: wep is either a group key (0 - legacy WEP) or from
* station info array (1 - n 1X mode)
* @IWM_STA_KEY_FLG_KEYID_MSK: the index of the key
* @IWM_STA_KEY_NOT_VALID: key is invalid
* @IWM_STA_KEY_FLG_WEP_13BYTES: set for 13 bytes WEP key
* @IWM_STA_KEY_MULTICAST: set for multicast key
* @IWM_STA_KEY_MFP: key is used for Management Frame Protection
*/
enum iwm_sta_key_flag {
IWM_STA_KEY_FLG_NO_ENC = (0 << 0),
IWM_STA_KEY_FLG_WEP = (1 << 0),
IWM_STA_KEY_FLG_CCM = (2 << 0),
IWM_STA_KEY_FLG_TKIP = (3 << 0),
IWM_STA_KEY_FLG_EXT = (4 << 0),
IWM_STA_KEY_FLG_CMAC = (6 << 0),
IWM_STA_KEY_FLG_ENC_UNKNOWN = (7 << 0),
IWM_STA_KEY_FLG_EN_MSK = (7 << 0),
IWM_STA_KEY_FLG_WEP_KEY_MAP = (1 << 3),
IWM_STA_KEY_FLG_KEYID_POS = 8,
IWM_STA_KEY_FLG_KEYID_MSK = (3 << IWM_STA_KEY_FLG_KEYID_POS),
IWM_STA_KEY_NOT_VALID = (1 << 11),
IWM_STA_KEY_FLG_WEP_13BYTES = (1 << 12),
IWM_STA_KEY_MULTICAST = (1 << 14),
IWM_STA_KEY_MFP = (1 << 15),
};
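/*
 * Illustrative sketch (not part of the original header): building a
 * key_flags value for a CCMP pairwise key in key slot 1 from the flags
 * above. The helper name is hypothetical.
 */
static inline uint16_t
iwm_example_ccm_key_flags(void)
{
	return IWM_STA_KEY_FLG_CCM |
	    (1 << IWM_STA_KEY_FLG_KEYID_POS);
}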
/**
* enum iwm_sta_modify_flag - indicate to the fw what flags are being changed
* @IWM_STA_MODIFY_KEY: this command modifies %key
* @IWM_STA_MODIFY_TID_DISABLE_TX: this command modifies %tid_disable_tx
* @IWM_STA_MODIFY_TX_RATE: unused
* @IWM_STA_MODIFY_ADD_BA_TID: this command modifies %add_immediate_ba_tid
* @IWM_STA_MODIFY_REMOVE_BA_TID: this command modifies %remove_immediate_ba_tid
* @IWM_STA_MODIFY_SLEEPING_STA_TX_COUNT: this command modifies %sleep_tx_count
* @IWM_STA_MODIFY_PROT_TH:
* @IWM_STA_MODIFY_QUEUES: modify the queues used by this station
*/
enum iwm_sta_modify_flag {
IWM_STA_MODIFY_KEY = (1 << 0),
IWM_STA_MODIFY_TID_DISABLE_TX = (1 << 1),
IWM_STA_MODIFY_TX_RATE = (1 << 2),
IWM_STA_MODIFY_ADD_BA_TID = (1 << 3),
IWM_STA_MODIFY_REMOVE_BA_TID = (1 << 4),
IWM_STA_MODIFY_SLEEPING_STA_TX_COUNT = (1 << 5),
IWM_STA_MODIFY_PROT_TH = (1 << 6),
IWM_STA_MODIFY_QUEUES = (1 << 7),
};
#define IWM_STA_MODE_MODIFY 1
/**
* enum iwm_sta_sleep_flag - type of sleep of the station
* @IWM_STA_SLEEP_STATE_AWAKE:
* @IWM_STA_SLEEP_STATE_PS_POLL:
* @IWM_STA_SLEEP_STATE_UAPSD:
*/
enum iwm_sta_sleep_flag {
IWM_STA_SLEEP_STATE_AWAKE = 0,
IWM_STA_SLEEP_STATE_PS_POLL = (1 << 0),
IWM_STA_SLEEP_STATE_UAPSD = (1 << 1),
};
/* STA ID and color bits definitions */
#define IWM_STA_ID_SEED (0x0f)
#define IWM_STA_ID_POS (0)
#define IWM_STA_ID_MSK (IWM_STA_ID_SEED << IWM_STA_ID_POS)
#define IWM_STA_COLOR_SEED (0x7)
#define IWM_STA_COLOR_POS (4)
#define IWM_STA_COLOR_MSK (IWM_STA_COLOR_SEED << IWM_STA_COLOR_POS)
#define IWM_STA_ID_N_COLOR_GET_COLOR(id_n_color) \
(((id_n_color) & IWM_STA_COLOR_MSK) >> IWM_STA_COLOR_POS)
#define IWM_STA_ID_N_COLOR_GET_ID(id_n_color) \
(((id_n_color) & IWM_STA_ID_MSK) >> IWM_STA_ID_POS)
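/*
 * Illustrative sketch (not part of the original header): composing an
 * id_n_color word that the two GET macros above can take apart again.
 * The helper name is hypothetical.
 */
static inline uint32_t
iwm_example_id_n_color(uint32_t id, uint32_t color)
{
	return ((id << IWM_STA_ID_POS) & IWM_STA_ID_MSK) |
	    ((color << IWM_STA_COLOR_POS) & IWM_STA_COLOR_MSK);
}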
#define IWM_STA_KEY_MAX_NUM (16)
#define IWM_STA_KEY_IDX_INVALID (0xff)
#define IWM_STA_KEY_MAX_DATA_KEY_NUM (4)
#define IWM_MAX_GLOBAL_KEYS (4)
#define IWM_STA_KEY_LEN_WEP40 (5)
#define IWM_STA_KEY_LEN_WEP104 (13)
/**
* struct iwm_mvm_keyinfo - key information
* @key_flags: type %iwm_sta_key_flag
* @tkip_rx_tsc_byte2: TSC[2] for key mix ph1 detection
* @tkip_rx_ttak: 10-byte unicast TKIP TTAK for Rx
* @key_offset: key offset in the fw's key table
* @key: 16-byte unicast decryption key
* @tx_secur_seq_cnt: initial RSC / PN needed for replay check
* @hw_tkip_mic_rx_key: byte: MIC Rx Key - used for TKIP only
* @hw_tkip_mic_tx_key: byte: MIC Tx Key - used for TKIP only
*/
struct iwm_mvm_keyinfo {
uint16_t key_flags;
uint8_t tkip_rx_tsc_byte2;
uint8_t reserved1;
uint16_t tkip_rx_ttak[5];
uint8_t key_offset;
uint8_t reserved2;
uint8_t key[16];
uint64_t tx_secur_seq_cnt;
uint64_t hw_tkip_mic_rx_key;
uint64_t hw_tkip_mic_tx_key;
} __packed;
/**
* struct iwm_mvm_add_sta_cmd_v5 - Add/modify a station in the fw's sta table.
* ( IWM_REPLY_ADD_STA = 0x18 )
* @add_modify: 1: modify existing, 0: add new station
* @unicast_tx_key_id: unicast tx key id. Relevant only when unicast key sent
* @multicast_tx_key_id: multicast tx key id. Relevant only when multicast key
* sent
* @mac_id_n_color: the Mac context this station belongs to
* @addr[IEEE80211_ADDR_LEN]: station's MAC address
* @sta_id: index of station in uCode's station table
* @modify_mask: IWM_STA_MODIFY_*, selects which parameters to modify vs. leave
* alone. 1 - modify, 0 - don't change.
* @key: look at %iwm_mvm_keyinfo
* @station_flags: look at %iwm_sta_flags
* @station_flags_msk: what of %station_flags have changed
* @tid_disable_tx: is tid BIT(tid) enabled for Tx. Clear BIT(x) to enable
* AMPDU for tid x. Set %IWM_STA_MODIFY_TID_DISABLE_TX to change this field.
* @add_immediate_ba_tid: tid for which to add block-ack support (Rx)
* Set %IWM_STA_MODIFY_ADD_BA_TID to use this field, and also set
* add_immediate_ba_ssn.
* @remove_immediate_ba_tid: tid for which to remove block-ack support (Rx)
* Set %IWM_STA_MODIFY_REMOVE_BA_TID to use this field
* @add_immediate_ba_ssn: ssn for the Rx block-ack session. Used together with
* add_immediate_ba_tid.
* @sleep_tx_count: number of packets to transmit to station even though it is
* asleep. Used to synchronise PS-poll and u-APSD responses while ucode
* keeps track of STA sleep state.
* @sleep_state_flags: Look at %iwm_sta_sleep_flag.
* @assoc_id: assoc_id to be sent in VHT PLCP (9-bit), for grp use 0, for AP
* mac-addr.
* @beamform_flags: beam forming controls
* @tfd_queue_msk: tfd queues used by this station
*
* The device contains an internal table of per-station information, with info
* on security keys, aggregation parameters, and Tx rates for initial Tx
* attempt and any retries (set by IWM_REPLY_TX_LINK_QUALITY_CMD).
*
* ADD_STA sets up the table entry for one station, either creating a new
* entry, or modifying a pre-existing one.
*/
struct iwm_mvm_add_sta_cmd_v5 {
uint8_t add_modify;
uint8_t unicast_tx_key_id;
uint8_t multicast_tx_key_id;
uint8_t reserved1;
uint32_t mac_id_n_color;
uint8_t addr[IEEE80211_ADDR_LEN];
uint16_t reserved2;
uint8_t sta_id;
uint8_t modify_mask;
uint16_t reserved3;
struct iwm_mvm_keyinfo key;
uint32_t station_flags;
uint32_t station_flags_msk;
uint16_t tid_disable_tx;
uint16_t reserved4;
uint8_t add_immediate_ba_tid;
uint8_t remove_immediate_ba_tid;
uint16_t add_immediate_ba_ssn;
uint16_t sleep_tx_count;
uint16_t sleep_state_flags;
uint16_t assoc_id;
uint16_t beamform_flags;
uint32_t tfd_queue_msk;
} __packed; /* IWM_ADD_STA_CMD_API_S_VER_5 */
/**
* struct iwm_mvm_add_sta_cmd_v6 - Add / modify a station
* VER_6 of this command is quite similar to VER_5 except for the
* exclusion of all fields related to security key installation.
*/
struct iwm_mvm_add_sta_cmd_v6 {
uint8_t add_modify;
uint8_t reserved1;
uint16_t tid_disable_tx;
uint32_t mac_id_n_color;
uint8_t addr[IEEE80211_ADDR_LEN]; /* _STA_ID_MODIFY_INFO_API_S_VER_1 */
uint16_t reserved2;
uint8_t sta_id;
uint8_t modify_mask;
uint16_t reserved3;
uint32_t station_flags;
uint32_t station_flags_msk;
uint8_t add_immediate_ba_tid;
uint8_t remove_immediate_ba_tid;
uint16_t add_immediate_ba_ssn;
uint16_t sleep_tx_count;
uint16_t sleep_state_flags;
uint16_t assoc_id;
uint16_t beamform_flags;
uint32_t tfd_queue_msk;
} __packed; /* IWM_ADD_STA_CMD_API_S_VER_6 */
/**
* struct iwm_mvm_add_sta_key_cmd - add/modify sta key
* ( IWM_REPLY_ADD_STA_KEY = 0x17 )
* @sta_id: index of station in uCode's station table
* @key_offset: key offset in key storage
* @key_flags: type %iwm_sta_key_flag
* @key: key material data
* @key2: key material data
* @rx_secur_seq_cnt: RX security sequence counter for the key
* @tkip_rx_tsc_byte2: TSC[2] for key mix ph1 detection
* @tkip_rx_ttak: 10-byte unicast TKIP TTAK for Rx
*/
struct iwm_mvm_add_sta_key_cmd {
uint8_t sta_id;
uint8_t key_offset;
uint16_t key_flags;
uint8_t key[16];
uint8_t key2[16];
uint8_t rx_secur_seq_cnt[16];
uint8_t tkip_rx_tsc_byte2;
uint8_t reserved;
uint16_t tkip_rx_ttak[5];
} __packed; /* IWM_ADD_MODIFY_STA_KEY_API_S_VER_1 */
/**
* enum iwm_mvm_add_sta_rsp_status - status in the response to ADD_STA command
* @IWM_ADD_STA_SUCCESS: operation was executed successfully
* @IWM_ADD_STA_STATIONS_OVERLOAD: no room left in the fw's station table
* @IWM_ADD_STA_IMMEDIATE_BA_FAILURE: can't add Rx block ack session
* @IWM_ADD_STA_MODIFY_NON_EXISTING_STA: driver requested to modify a station
* that doesn't exist.
*/
enum iwm_mvm_add_sta_rsp_status {
IWM_ADD_STA_SUCCESS = 0x1,
IWM_ADD_STA_STATIONS_OVERLOAD = 0x2,
IWM_ADD_STA_IMMEDIATE_BA_FAILURE = 0x4,
IWM_ADD_STA_MODIFY_NON_EXISTING_STA = 0x8,
};
/**
* struct iwm_mvm_rm_sta_cmd - Remove a station from the fw's station table
* ( IWM_REMOVE_STA = 0x19 )
* @sta_id: the station id of the station to be removed
*/
struct iwm_mvm_rm_sta_cmd {
uint8_t sta_id;
uint8_t reserved[3];
} __packed; /* IWM_REMOVE_STA_CMD_API_S_VER_2 */
/**
* struct iwm_mvm_mgmt_mcast_key_cmd
* ( IWM_MGMT_MCAST_KEY = 0x1f )
* @ctrl_flags: %iwm_sta_key_flag
* @IGTK:
* @K1: IGTK master key
* @K2: IGTK sub key
* @sta_id: station ID that supports IGTK
* @key_id:
* @receive_seq_cnt: initial RSC/PN needed for replay check
*/
struct iwm_mvm_mgmt_mcast_key_cmd {
uint32_t ctrl_flags;
uint8_t IGTK[16];
uint8_t K1[16];
uint8_t K2[16];
uint32_t key_id;
uint32_t sta_id;
uint64_t receive_seq_cnt;
} __packed; /* SEC_MGMT_MULTICAST_KEY_CMD_API_S_VER_1 */
struct iwm_mvm_wep_key {
uint8_t key_index;
uint8_t key_offset;
uint16_t reserved1;
uint8_t key_size;
uint8_t reserved2[3];
uint8_t key[16];
} __packed;
struct iwm_mvm_wep_key_cmd {
uint32_t mac_id_n_color;
uint8_t num_keys;
uint8_t decryption_type;
uint8_t flags;
uint8_t reserved;
struct iwm_mvm_wep_key wep_key[0];
} __packed; /* SEC_CURR_WEP_KEY_CMD_API_S_VER_2 */
/*
* END mvm/fw-api-sta.h
*/
/*
* Some cherry-picked definitions
*/
#define IWM_FRAME_LIMIT 64
struct iwm_cmd_header {
uint8_t code;
uint8_t flags;
uint8_t idx;
uint8_t qid;
} __packed;
enum iwm_power_scheme {
IWM_POWER_SCHEME_CAM = 1,
IWM_POWER_SCHEME_BPS,
IWM_POWER_SCHEME_LP
};
#define IWM_DEF_CMD_PAYLOAD_SIZE 320
#define IWM_CMD_FAILED_MSK 0x40
struct iwm_device_cmd {
struct iwm_cmd_header hdr;
uint8_t data[IWM_DEF_CMD_PAYLOAD_SIZE];
} __packed;
struct iwm_rx_packet {
/*
* The first 4 bytes of the RX frame header contain both the RX frame
* size and some flags.
* Bit fields:
* 31: flag flush RB request
* 30: flag ignore TC (terminal counter) request
* 29: flag fast IRQ request
* 28-14: Reserved
* 13-00: RX frame size
*/
uint32_t len_n_flags;
struct iwm_cmd_header hdr;
uint8_t data[];
} __packed;
#define IWM_FH_RSCSR_FRAME_SIZE_MSK 0x00003fff
static inline uint32_t
iwm_rx_packet_len(const struct iwm_rx_packet *pkt)
{
return le32toh(pkt->len_n_flags) & IWM_FH_RSCSR_FRAME_SIZE_MSK;
}
static inline uint32_t
iwm_rx_packet_payload_len(const struct iwm_rx_packet *pkt)
{
return iwm_rx_packet_len(pkt) - sizeof(pkt->hdr);
}
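/*
 * Illustrative sketch (not part of the original header): bounds-checking
 * an incoming RX packet with the accessors above before touching its
 * payload. The helper name is hypothetical; buflen is the size of the
 * receive buffer holding the packet.
 */
static inline int
iwm_example_rx_packet_ok(const struct iwm_rx_packet *pkt, size_t buflen)
{
	uint32_t len = iwm_rx_packet_len(pkt);

	return len >= sizeof(pkt->hdr) &&
	    sizeof(pkt->len_n_flags) + len <= buflen;
}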
#define IWM_MIN_DBM -100
#define IWM_MAX_DBM -33 /* realistic guess */
#define IWM_READ(sc, reg) \
bus_space_read_4((sc)->sc_st, (sc)->sc_sh, (reg))
#define IWM_WRITE(sc, reg, val) \
bus_space_write_4((sc)->sc_st, (sc)->sc_sh, (reg), (val))
#define IWM_WRITE_1(sc, reg, val) \
bus_space_write_1((sc)->sc_st, (sc)->sc_sh, (reg), (val))
#define IWM_SETBITS(sc, reg, mask) \
IWM_WRITE(sc, reg, IWM_READ(sc, reg) | (mask))
#define IWM_CLRBITS(sc, reg, mask) \
IWM_WRITE(sc, reg, IWM_READ(sc, reg) & ~(mask))
#define IWM_BARRIER_WRITE(sc) \
bus_space_barrier((sc)->sc_st, (sc)->sc_sh, 0, (sc)->sc_sz, \
BUS_SPACE_BARRIER_WRITE)
#define IWM_BARRIER_READ_WRITE(sc) \
bus_space_barrier((sc)->sc_st, (sc)->sc_sh, 0, (sc)->sc_sz, \
BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE)
#define IWM_FW_VALID_TX_ANT(sc) \
((sc->sc_fw_phy_config & IWM_FW_PHY_CFG_TX_CHAIN) \
>> IWM_FW_PHY_CFG_TX_CHAIN_POS)
#define IWM_FW_VALID_RX_ANT(sc) \
((sc->sc_fw_phy_config & IWM_FW_PHY_CFG_RX_CHAIN) \
>> IWM_FW_PHY_CFG_RX_CHAIN_POS)
#endif /* __IF_IWM_REG_H__ */
Index: head/sys/dev/netmap/netmap.c
===================================================================
--- head/sys/dev/netmap/netmap.c (revision 300049)
+++ head/sys/dev/netmap/netmap.c (revision 300050)
@@ -1,3166 +1,3166 @@
/*
* Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* $FreeBSD$
*
* This module supports memory mapped access to network devices,
* see netmap(4).
*
* The module uses a large, memory pool allocated by the kernel
* and accessible as mmapped memory by multiple userspace threads/processes.
* The memory pool contains packet buffers and "netmap rings",
* i.e. user-accessible copies of the interface's queues.
*
* Access to the network card works like this:
* 1. a process/thread issues one or more open() on /dev/netmap, to create
* a select()able file descriptor on which events are reported.
* 2. on each descriptor, the process issues an ioctl() to identify
* the interface that should report events to the file descriptor.
* 3. on each descriptor, the process issues an mmap() request to
* map the shared memory region within the process' address space.
* The list of interesting queues is indicated by a location in
* the shared memory region.
* 4. using the functions in the netmap(4) userspace API, a process
* can look up the occupation state of a queue, access memory buffers,
* and retrieve received packets or enqueue packets to transmit.
* 5. using some ioctl()s the process can synchronize the userspace view
* of the queue with the actual status in the kernel. This includes both
* receiving the notification of new packets, and transmitting new
* packets on the output interface.
* 6. select() or poll() can be used to wait for events on individual
* transmit or receive queues (or all queues for a given interface).
*
SYNCHRONIZATION (USER)
The netmap rings and data structures may be shared among multiple
user threads or even independent processes.
Any synchronization among those threads/processes is delegated
to the threads themselves. Only one thread at a time can be in
a system call on the same netmap ring. The OS does not enforce
this and only guarantees against system crashes in case of
invalid usage.
LOCKING (INTERNAL)
Within the kernel, access to the netmap rings is protected as follows:
- a spinlock on each ring, to handle producer/consumer races on
RX rings attached to the host stack (against multiple host
threads writing from the host stack to the same ring),
and on 'destination' rings attached to a VALE switch
(i.e. RX rings in VALE ports, and TX rings in NIC/host ports),
protecting multiple active senders for the same destination
- an atomic variable to guarantee that there is at most one
instance of *_*xsync() on the ring at any time.
For rings connected to user file
descriptors, an atomic_test_and_set() protects this, and the
lock on the ring is not actually used.
For NIC RX rings connected to a VALE switch, an atomic_test_and_set()
is also used to prevent multiple executions (the driver might indeed
already guarantee this).
For NIC TX rings connected to a VALE switch, the lock arbitrates
access to the queue (both when allocating buffers and when pushing
them out).
- *xsync() should be protected against initializations of the card.
On FreeBSD most devices have the reset routine protected by
a RING lock (ixgbe, igb, em) or core lock (re). lem is missing
the RING protection on rx_reset(), this should be added.
On linux there is an external lock on the tx path, which probably
also arbitrates access to the reset routine. XXX to be revised
- a per-interface core_lock protecting access from the host stack
while interfaces may be detached from netmap mode.
XXX there should be no need for this lock if we detach the interfaces
only while they are down.
--- VALE SWITCH ---
NMG_LOCK() serializes all modifications to switches and ports.
A switch cannot be deleted until all ports are gone.
For each switch, an SX lock (RWlock on linux) protects
deletion of ports. When configuring or deleting a new port, the
lock is acquired in exclusive mode (after holding NMG_LOCK).
When forwarding, the lock is acquired in shared mode (without NMG_LOCK).
The lock is held throughout the entire forwarding cycle,
during which the thread may incur in a page fault.
Hence it is important that sleepable shared locks are used.
On the rx ring, the per-port lock is grabbed initially to reserve
a number of slot in the ring, then the lock is released,
packets are copied from source to destination, and then
the lock is acquired again and the receive ring is updated.
(A similar thing is done on the tx ring for NIC and host stack
ports attached to the switch)
*/
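/*
 * A minimal userspace sketch of steps 1-6 above (error handling
 * omitted; the nm_open() helper in netmap_user.h wraps most of this):
 *
 *	struct nmreq nmr = { .nr_version = NETMAP_API };
 *	strlcpy(nmr.nr_name, "em0", sizeof(nmr.nr_name));
 *	int fd = open("/dev/netmap", O_RDWR);			// step 1
 *	ioctl(fd, NIOCREGIF, &nmr);				// step 2
 *	void *mem = mmap(NULL, nmr.nr_memsize,			// step 3
 *	    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 *	struct netmap_if *nifp = NETMAP_IF(mem, nmr.nr_offset);
 *	struct netmap_ring *txr = NETMAP_TXRING(nifp, 0);	// step 4
 *	// ... fill slots between txr->head and txr->tail ...
 *	ioctl(fd, NIOCTXSYNC, NULL);				// step 5
 *	// or poll()/select() on fd, waiting for POLLOUT	// step 6
 */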
/* --- internals ----
*
* Roadmap to the code that implements the above.
*
 * > 1. a process/thread issues one or more open() on /dev/netmap, to create
 * > a select()able file descriptor on which events are reported.
*
* Internally, we allocate a netmap_priv_d structure, that will be
* initialized on ioctl(NIOCREGIF).
*
* os-specific:
* FreeBSD: netmap_open (netmap_freebsd.c). The priv is
* per-thread.
* linux: linux_netmap_open (netmap_linux.c). The priv is
* per-open.
*
* > 2. on each descriptor, the process issues an ioctl() to identify
* > the interface that should report events to the file descriptor.
*
* Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0.
* Most important things happen in netmap_get_na() and
* netmap_do_regif(), called from there. Additional details can be
* found in the comments above those functions.
*
* In all cases, this action creates/takes-a-reference-to a
* netmap_*_adapter describing the port, and allocates a netmap_if
* and all necessary netmap rings, filling them with netmap buffers.
*
* In this phase, the sync callbacks for each ring are set (these are used
* in steps 5 and 6 below). The callbacks depend on the type of adapter.
* The adapter creation/initialization code puts them in the
* netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they
* are copied from there to the netmap_kring's during netmap_do_regif(), by
* the nm_krings_create() callback. All the nm_krings_create callbacks
* actually call netmap_krings_create() to perform this and the other
* common stuff. netmap_krings_create() also takes care of the host rings,
* if needed, by setting their sync callbacks appropriately.
*
* Additional actions depend on the kind of netmap_adapter that has been
* registered:
*
* - netmap_hw_adapter: [netmap.c]
* This is a system netdev/ifp with native netmap support.
* The ifp is detached from the host stack by redirecting:
* - transmissions (from the network stack) to netmap_transmit()
* - receive notifications to the nm_notify() callback for
* this adapter. The callback is normally netmap_notify(), unless
* the ifp is attached to a bridge using bwrap, in which case it
* is netmap_bwrap_intr_notify().
*
* - netmap_generic_adapter: [netmap_generic.c]
* A system netdev/ifp without native netmap support.
*
 * (the decision about native/non-native support is taken in
* netmap_get_hw_na(), called by netmap_get_na())
*
* - netmap_vp_adapter [netmap_vale.c]
* Returned by netmap_get_bdg_na().
* This is a persistent or ephemeral VALE port. Ephemeral ports
* are created on the fly if they don't already exist, and are
* always attached to a bridge.
 * Persistent VALE ports must be created separately, and
 * then attached like normal NICs. The NIOCREGIF we are examining
 * will find them only if they had previously been created and
* attached (see VALE_CTL below).
*
* - netmap_pipe_adapter [netmap_pipe.c]
* Returned by netmap_get_pipe_na().
* Both pipe ends are created, if they didn't already exist.
*
* - netmap_monitor_adapter [netmap_monitor.c]
* Returned by netmap_get_monitor_na().
* If successful, the nm_sync callbacks of the monitored adapter
* will be intercepted by the returned monitor.
*
* - netmap_bwrap_adapter [netmap_vale.c]
* Cannot be obtained in this way, see VALE_CTL below
*
*
* os-specific:
* linux: we first go through linux_netmap_ioctl() to
* adapt the FreeBSD interface to the linux one.
*
*
* > 3. on each descriptor, the process issues an mmap() request to
* > map the shared memory region within the process' address space.
* > The list of interesting queues is indicated by a location in
* > the shared memory region.
*
* os-specific:
* FreeBSD: netmap_mmap_single (netmap_freebsd.c).
* linux: linux_netmap_mmap (netmap_linux.c).
*
* > 4. using the functions in the netmap(4) userspace API, a process
* > can look up the occupation state of a queue, access memory buffers,
* > and retrieve received packets or enqueue packets to transmit.
*
* these actions do not involve the kernel.
*
* > 5. using some ioctl()s the process can synchronize the userspace view
* > of the queue with the actual status in the kernel. This includes both
* > receiving the notification of new packets, and transmitting new
* > packets on the output interface.
*
* These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
* cases. They invoke the nm_sync callbacks on the netmap_kring
* structures, as initialized in step 2 and maybe later modified
* by a monitor. Monitors, however, will always call the original
* callback before doing anything else.
*
*
* > 6. select() or poll() can be used to wait for events on individual
* > transmit or receive queues (or all queues for a given interface).
*
* Implemented in netmap_poll(). This will call the same nm_sync()
* callbacks as in step 5 above.
*
* os-specific:
* linux: we first go through linux_netmap_poll() to adapt
* the FreeBSD interface to the linux one.
*
*
* ---- VALE_CTL -----
*
* VALE switches are controlled by issuing a NIOCREGIF with a non-null
* nr_cmd in the nmreq structure. These subcommands are handled by
* netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created
* and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF
* subcommands, respectively.
*
* Any network interface known to the system (including a persistent VALE
* port) can be attached to a VALE switch by issuing the
* NETMAP_BDG_ATTACH subcommand. After the attachment, persistent VALE ports
* look exactly like ephemeral VALE ports (as created in step 2 above). The
* attachment of other interfaces, instead, requires the creation of a
* netmap_bwrap_adapter. Moreover, the attached interface must be put in
* netmap mode. This may require the creation of a netmap_generic_adapter if
* we have no native support for the interface, or if generic adapters have
* been forced by sysctl.
*
* Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(),
* called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach()
* callback. In the case of the bwrap, the callback creates the
* netmap_bwrap_adapter. The initialization of the bwrap is then
* completed by calling netmap_do_regif() on it, in the nm_bdg_ctl()
* callback (netmap_bwrap_bdg_ctl in netmap_vale.c).
* A generic adapter for the wrapped ifp will be created if needed, when
* netmap_get_bdg_na() calls netmap_get_hw_na().
*
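 * A hedged sketch of this control path, e.g. to attach em0 to the
 * switch vale0:
 *
 *	struct nmreq nmr = { .nr_version = NETMAP_API };
 *	strlcpy(nmr.nr_name, "vale0:em0", sizeof(nmr.nr_name));
 *	nmr.nr_cmd = NETMAP_BDG_ATTACH;
 *	ioctl(fd, NIOCREGIF, &nmr);	// dispatched to netmap_bdg_ctl()
 *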
*
* ---- DATAPATHS -----
*
* -= SYSTEM DEVICE WITH NATIVE SUPPORT =-
*
* na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach()
*
* - tx from netmap userspace:
* concurrently:
* 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == DEVICE_netmap_txsync()
* 2) device interrupt handler
* na->nm_notify() == netmap_notify()
* - rx from netmap userspace:
* concurrently:
* 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == DEVICE_netmap_rxsync()
* 2) device interrupt handler
* na->nm_notify() == netmap_notify()
* - rx from host stack
* concurrently:
* 1) host stack
* netmap_transmit()
* na->nm_notify == netmap_notify()
* 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_rxsync_from_host_compat
* netmap_rxsync_from_host(na, NULL, NULL)
* - tx to host stack
* ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_txsync_to_host_compat
* netmap_txsync_to_host(na)
* NM_SEND_UP()
* FreeBSD: na->if_input() == ?? XXX
* linux: netif_rx() with NM_MAGIC_PRIORITY_RX
*
*
*
* -= SYSTEM DEVICE WITH GENERIC SUPPORT =-
*
* na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach()
*
* - tx from netmap userspace:
* concurrently:
* 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == generic_netmap_txsync()
* linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX
* generic_ndo_start_xmit()
* orig. dev. start_xmit
* FreeBSD: na->if_transmit() == orig. dev if_transmit
* 2) generic_mbuf_destructor()
* na->nm_notify() == netmap_notify()
* - rx from netmap userspace:
* 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == generic_netmap_rxsync()
* mbq_safe_dequeue()
* 2) device driver
* generic_rx_handler()
* mbq_safe_enqueue()
* na->nm_notify() == netmap_notify()
* - rx from host stack:
* concurrently:
* 1) host stack
* linux: generic_ndo_start_xmit()
* netmap_transmit()
* FreeBSD: ifp->if_input() == netmap_transmit
* both:
* na->nm_notify() == netmap_notify()
* 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_rxsync_from_host_compat
* netmap_rxsync_from_host(na, NULL, NULL)
* - tx to host stack:
* ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_txsync_to_host_compat
* netmap_txsync_to_host(na)
* NM_SEND_UP()
* FreeBSD: na->if_input() == ??? XXX
* linux: netif_rx() with NM_MAGIC_PRIORITY_RX
*
*
* -= VALE =-
*
* INCOMING:
*
* - VALE ports:
* ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_vp_txsync()
*
* - system device with native support:
* from cable:
* interrupt
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
* kring->nm_sync() == DEVICE_netmap_rxsync()
* netmap_vp_txsync()
* kring->nm_sync() == DEVICE_netmap_rxsync()
* from host stack:
* netmap_transmit()
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
* kring->nm_sync() == netmap_rxsync_from_host_compat()
* netmap_vp_txsync()
*
* - system device with generic support:
* from device driver:
* generic_rx_handler()
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
* kring->nm_sync() == generic_netmap_rxsync()
* netmap_vp_txsync()
* kring->nm_sync() == generic_netmap_rxsync()
* from host stack:
* netmap_transmit()
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
* kring->nm_sync() == netmap_rxsync_from_host_compat()
* netmap_vp_txsync()
*
* (all cases) --> nm_bdg_flush()
* dest_na->nm_notify() == (see below)
*
* OUTGOING:
*
* - VALE ports:
* concurrently:
 * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_vp_rxsync()
* 2) from nm_bdg_flush()
* na->nm_notify() == netmap_notify()
*
* - system device with native support:
* to cable:
* na->nm_notify() == netmap_bwrap_notify()
* netmap_vp_rxsync()
* kring->nm_sync() == DEVICE_netmap_txsync()
* netmap_vp_rxsync()
* to host stack:
* netmap_vp_rxsync()
* kring->nm_sync() == netmap_txsync_to_host_compat
* netmap_vp_rxsync_locked()
*
* - system device with generic adapter:
* to device driver:
* na->nm_notify() == netmap_bwrap_notify()
* netmap_vp_rxsync()
* kring->nm_sync() == generic_netmap_txsync()
* netmap_vp_rxsync()
* to host stack:
* netmap_vp_rxsync()
* kring->nm_sync() == netmap_txsync_to_host_compat
* netmap_vp_rxsync()
*
*/
/*
* OS-specific code that is used only within this file.
* Other OS-specific code that must be accessed by drivers
* is present in netmap_kern.h
*/
#if defined(__FreeBSD__)
#include <sys/cdefs.h> /* prerequisite */
#include <sys/types.h>
#include <sys/errno.h>
#include <sys/param.h> /* defines used in kernel.h */
#include <sys/kernel.h> /* types used in module initialization */
#include <sys/conf.h> /* cdevsw struct, UID, GID */
#include <sys/filio.h> /* FIONBIO */
#include <sys/sockio.h>
#include <sys/socketvar.h> /* struct socket */
#include <sys/malloc.h>
#include <sys/poll.h>
#include <sys/rwlock.h>
#include <sys/socket.h> /* sockaddrs */
#include <sys/selinfo.h>
#include <sys/sysctl.h>
#include <sys/jail.h>
#include <net/vnet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/bpf.h> /* BIOCIMMEDIATE */
#include <machine/bus.h> /* bus_dmamap_* */
#include <sys/endian.h>
#include <sys/refcount.h>
/* reduce conditional code */
// linux API, used for the knlist in FreeBSD
/* use a private mutex for the knlist */
#define init_waitqueue_head(x) do { \
struct mtx *m = &(x)->m; \
mtx_init(m, "nm_kn_lock", NULL, MTX_DEF); \
knlist_init_mtx(&(x)->si.si_note, m); \
} while (0)
#define OS_selrecord(a, b) selrecord(a, &((b)->si))
#define OS_selwakeup(a, b) freebsd_selwakeup(a, b)
#elif defined(linux)
#include "bsd_glue.h"
#elif defined(__APPLE__)
#warning OSX support is only partial
#include "osx_glue.h"
#else
#error Unsupported platform
#endif /* unsupported */
/*
* common headers
*/
#include <net/netmap.h>
#include <dev/netmap/netmap_kern.h>
#include <dev/netmap/netmap_mem2.h>
MALLOC_DEFINE(M_NETMAP, "netmap", "Network memory map");
/* user-controlled variables */
int netmap_verbose;
static int netmap_no_timestamp; /* don't timestamp on rxsync */
SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
int netmap_mitigate = 1;
SYSCTL_INT(_dev_netmap, OID_AUTO, mitigate, CTLFLAG_RW, &netmap_mitigate, 0, "");
int netmap_no_pendintr = 1;
SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr,
CTLFLAG_RW, &netmap_no_pendintr, 0, "Always look for new received packets.");
int netmap_txsync_retry = 2;
SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
&netmap_txsync_retry, 0 , "Number of txsync loops in bridge's flush.");
int netmap_adaptive_io = 0;
SYSCTL_INT(_dev_netmap, OID_AUTO, adaptive_io, CTLFLAG_RW,
&netmap_adaptive_io, 0 , "Adaptive I/O on paravirt");
int netmap_flags = 0; /* debug flags */
int netmap_fwd = 0; /* force transparent mode */
/*
* netmap_admode selects the netmap mode to use.
* Invalid values are reset to NETMAP_ADMODE_BEST
*/
enum { NETMAP_ADMODE_BEST = 0, /* use native, fallback to generic */
NETMAP_ADMODE_NATIVE, /* either native or none */
NETMAP_ADMODE_GENERIC, /* force generic */
NETMAP_ADMODE_LAST };
static int netmap_admode = NETMAP_ADMODE_BEST;
int netmap_generic_mit = 100*1000; /* Generic mitigation interval in nanoseconds. */
int netmap_generic_ringsize = 1024; /* Generic ringsize. */
int netmap_generic_rings = 1; /* number of queues in generic. */
SYSCTL_INT(_dev_netmap, OID_AUTO, flags, CTLFLAG_RW, &netmap_flags, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW, &netmap_generic_ringsize, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW, &netmap_generic_rings, 0 , "");
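/*
 * All the knobs above can be inspected and tuned at runtime, e.g.
 * (the value below is only an example):
 *	sysctl dev.netmap.generic_ringsize=4096
 */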
NMG_LOCK_T netmap_global_lock;
int netmap_use_count = 0; /* number of active netmap instances */
/*
* mark the ring as stopped, and run through the locks
* to make sure other users get to see it.
*/
static void
netmap_disable_ring(struct netmap_kring *kr)
{
	kr->nkr_stopped = 1;
	/*
	 * Pass through the reference count and the queue lock, so that
	 * any thread currently inside a sync routine or holding the
	 * lock is guaranteed to see nkr_stopped before we return.
	 */
	nm_kr_get(kr);
	mtx_lock(&kr->q_lock);
	mtx_unlock(&kr->q_lock);
	nm_kr_put(kr);
}
/* stop or enable a single ring */
void
netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
{
if (stopped)
netmap_disable_ring(NMR(na, t) + ring_id);
else
NMR(na, t)[ring_id].nkr_stopped = 0;
}
/* stop or enable all the rings of na */
void
netmap_set_all_rings(struct netmap_adapter *na, int stopped)
{
int i;
enum txrx t;
if (!nm_netmap_on(na))
return;
for_rx_tx(t) {
for (i = 0; i < netmap_real_rings(na, t); i++) {
netmap_set_ring(na, i, t, stopped);
}
}
}
/*
* Convenience function used in drivers. Waits for current txsync()s/rxsync()s
* to finish and prevents any new one from starting. Call this before turning
 * netmap mode off, or before removing the hardware rings (e.g., on module
 * unload). As a rule of thumb for linux drivers, this should be placed near
* each napi_disable().
*/
void
netmap_disable_all_rings(struct ifnet *ifp)
{
netmap_set_all_rings(NA(ifp), 1 /* stopped */);
}
/*
* Convenience function used in drivers. Re-enables rxsync and txsync on the
 * adapter's rings. In linux drivers, this should be placed near each
* napi_enable().
*/
void
netmap_enable_all_rings(struct ifnet *ifp)
{
netmap_set_all_rings(NA(ifp), 0 /* enabled */);
}
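/*
 * A hedged sketch of the intended call sites in a driver reset path
 * (DEVICE_reinit_hw() is a placeholder, not a real function):
 *
 *	netmap_disable_all_rings(ifp);	// quiesce netmap users
 *	DEVICE_reinit_hw(sc);		// driver-specific reinitialization
 *	netmap_enable_all_rings(ifp);	// resume syncs and notifications
 */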
/*
* generic bound_checking function
*/
u_int
nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg)
{
u_int oldv = *v;
const char *op = NULL;
if (dflt < lo)
dflt = lo;
if (dflt > hi)
dflt = hi;
if (oldv < lo) {
*v = dflt;
op = "Bump";
} else if (oldv > hi) {
*v = hi;
op = "Clamp";
}
if (op && msg)
printf("%s %s to %d (was %d)\n", op, msg, *v, oldv);
return *v;
}
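/*
 * Example use of nm_bound_var(), clamping a user-supplied ring size
 * into a sane range (the bounds below are illustrative only):
 *
 *	nm_bound_var(&netmap_generic_ringsize, 1024, 64, 16384,
 *	    "generic_ringsize");
 */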
/*
 * packet-dump function, using a user-supplied or static buffer.
 * The destination buffer must be at least 30+4*len bytes.
*/
const char *
nm_dump_buf(char *p, int len, int lim, char *dst)
{
static char _dst[8192];
int i, j, i0;
static char hex[] ="0123456789abcdef";
char *o; /* output position */
#define P_HI(x) hex[((x) & 0xf0)>>4]
#define P_LO(x) hex[((x) & 0xf)]
#define P_C(x) ((x) >= 0x20 && (x) <= 0x7e ? (x) : '.')
if (!dst)
dst = _dst;
if (lim <= 0 || lim > len)
lim = len;
o = dst;
sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim);
o += strlen(o);
/* hexdump routine */
for (i = 0; i < lim; ) {
sprintf(o, "%5d: ", i);
o += strlen(o);
memset(o, ' ', 48);
i0 = i;
for (j=0; j < 16 && i < lim; i++, j++) {
o[j*3] = P_HI(p[i]);
o[j*3+1] = P_LO(p[i]);
}
i = i0;
for (j=0; j < 16 && i < lim; i++, j++)
o[j + 48] = P_C(p[i]);
o[j+48] = '\n';
o += j+49;
}
*o = '\0';
#undef P_HI
#undef P_LO
#undef P_C
return dst;
}
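/*
 * Typical use, relying on the static buffer (dst == NULL), as done
 * in netmap_rxsync_from_host() below:
 *
 *	D("%s", nm_dump_buf(NMB(na, slot), slot->len, 128, NULL));
 */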
/*
* Fetch configuration from the device, to cope with dynamic
* reconfigurations after loading the module.
*/
/* call with NMG_LOCK held */
int
netmap_update_config(struct netmap_adapter *na)
{
u_int txr, txd, rxr, rxd;
txr = txd = rxr = rxd = 0;
if (na->nm_config == NULL ||
na->nm_config(na, &txr, &txd, &rxr, &rxd))
{
/* take whatever we had at init time */
txr = na->num_tx_rings;
txd = na->num_tx_desc;
rxr = na->num_rx_rings;
rxd = na->num_rx_desc;
}
if (na->num_tx_rings == txr && na->num_tx_desc == txd &&
na->num_rx_rings == rxr && na->num_rx_desc == rxd)
return 0; /* nothing changed */
if (netmap_verbose || na->active_fds > 0) {
D("stored config %s: txring %d x %d, rxring %d x %d",
na->name,
na->num_tx_rings, na->num_tx_desc,
na->num_rx_rings, na->num_rx_desc);
D("new config %s: txring %d x %d, rxring %d x %d",
na->name, txr, txd, rxr, rxd);
}
if (na->active_fds == 0) {
D("configuration changed (but fine)");
na->num_tx_rings = txr;
na->num_tx_desc = txd;
na->num_rx_rings = rxr;
na->num_rx_desc = rxd;
return 0;
}
D("configuration changed while active, this is bad...");
return 1;
}
static void netmap_txsync_to_host(struct netmap_adapter *na);
static int netmap_rxsync_from_host(struct netmap_adapter *na, struct thread *td, void *pwait);
/* kring->nm_sync callback for the host tx ring */
static int
netmap_txsync_to_host_compat(struct netmap_kring *kring, int flags)
{
(void)flags; /* unused */
netmap_txsync_to_host(kring->na);
return 0;
}
/* kring->nm_sync callback for the host rx ring */
static int
netmap_rxsync_from_host_compat(struct netmap_kring *kring, int flags)
{
(void)flags; /* unused */
netmap_rxsync_from_host(kring->na, NULL, NULL);
return 0;
}
/* create the krings array and initialize the fields common to all adapters.
* The array layout is this:
*
* +----------+
* na->tx_rings ----->| | \
 * | | } na->num_tx_rings
* | | /
* +----------+
* | | host tx kring
* na->rx_rings ----> +----------+
* | | \
* | | } na->num_rx_rings
* | | /
* +----------+
* | | host rx kring
* +----------+
* na->tailroom ----->| | \
* | | } tailroom bytes
* | | /
* +----------+
*
* Note: for compatibility, host krings are created even when not needed.
* The tailroom space is currently used by vale ports for allocating leases.
*/
/* call with NMG_LOCK held */
int
netmap_krings_create(struct netmap_adapter *na, u_int tailroom)
{
u_int i, len, ndesc;
struct netmap_kring *kring;
u_int n[NR_TXRX];
enum txrx t;
/* account for the (possibly fake) host rings */
n[NR_TX] = na->num_tx_rings + 1;
n[NR_RX] = na->num_rx_rings + 1;
len = (n[NR_TX] + n[NR_RX]) * sizeof(struct netmap_kring) + tailroom;
na->tx_rings = malloc((size_t)len, M_DEVBUF, M_NOWAIT | M_ZERO);
if (na->tx_rings == NULL) {
D("Cannot allocate krings");
return ENOMEM;
}
na->rx_rings = na->tx_rings + n[NR_TX];
/*
	 * All fields in krings are 0 except the ones initialized below,
	 * but it is better to be explicit on the important kring fields.
*/
for_rx_tx(t) {
ndesc = nma_get_ndesc(na, t);
for (i = 0; i < n[t]; i++) {
kring = &NMR(na, t)[i];
bzero(kring, sizeof(*kring));
kring->na = na;
kring->ring_id = i;
kring->tx = t;
kring->nkr_num_slots = ndesc;
if (i < nma_get_nrings(na, t)) {
kring->nm_sync = (t == NR_TX ? na->nm_txsync : na->nm_rxsync);
			} else if (i == nma_get_nrings(na, t)) { /* host ring */
kring->nm_sync = (t == NR_TX ?
netmap_txsync_to_host_compat :
netmap_rxsync_from_host_compat);
}
kring->nm_notify = na->nm_notify;
kring->rhead = kring->rcur = kring->nr_hwcur = 0;
/*
* IMPORTANT: Always keep one slot empty.
*/
kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0);
snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name,
nm_txrx2str(t), i);
ND("ktx %s h %d c %d t %d",
kring->name, kring->rhead, kring->rcur, kring->rtail);
mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF);
init_waitqueue_head(&kring->si);
}
init_waitqueue_head(&na->si[t]);
}
na->tailroom = na->rx_rings + n[NR_RX];
return 0;
}
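/*
 * With the layout above, the host krings are simply the extra entries
 * at the end of each array: na->tx_rings[na->num_tx_rings] is the host
 * tx kring and na->rx_rings[na->num_rx_rings] the host rx kring (see
 * e.g. netmap_txsync_to_host() and netmap_rxsync_from_host() below).
 */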
#ifdef __FreeBSD__
static void
netmap_knlist_destroy(NM_SELINFO_T *si)
{
/* XXX kqueue(9) needed; these will mirror knlist_init. */
knlist_delete(&si->si.si_note, curthread, 0 /* not locked */ );
knlist_destroy(&si->si.si_note);
/* now we don't need the mutex anymore */
mtx_destroy(&si->m);
}
#endif /* __FreeBSD__ */
/* undo the actions performed by netmap_krings_create */
/* call with NMG_LOCK held */
void
netmap_krings_delete(struct netmap_adapter *na)
{
struct netmap_kring *kring = na->tx_rings;
enum txrx t;
for_rx_tx(t)
netmap_knlist_destroy(&na->si[t]);
/* we rely on the krings layout described above */
for ( ; kring != na->tailroom; kring++) {
mtx_destroy(&kring->q_lock);
netmap_knlist_destroy(&kring->si);
}
free(na->tx_rings, M_DEVBUF);
na->tx_rings = na->rx_rings = na->tailroom = NULL;
}
/*
* Destructor for NIC ports. They also have an mbuf queue
* on the rings connected to the host so we need to purge
* them first.
*/
/* call with NMG_LOCK held */
static void
netmap_hw_krings_delete(struct netmap_adapter *na)
{
struct mbq *q = &na->rx_rings[na->num_rx_rings].rx_queue;
ND("destroy sw mbq with len %d", mbq_len(q));
mbq_purge(q);
mbq_safe_destroy(q);
netmap_krings_delete(na);
}
/*
* Undo everything that was done in netmap_do_regif(). In particular,
* call nm_register(ifp,0) to stop netmap mode on the interface and
* revert to normal operation.
*/
/* call with NMG_LOCK held */
static void netmap_unset_ringid(struct netmap_priv_d *);
static void netmap_rel_exclusive(struct netmap_priv_d *);
static void
netmap_do_unregif(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
NMG_LOCK_ASSERT();
na->active_fds--;
/* release exclusive use if it was requested on regif */
netmap_rel_exclusive(priv);
if (na->active_fds <= 0) { /* last instance */
if (netmap_verbose)
D("deleting last instance for %s", na->name);
#ifdef WITH_MONITOR
/* walk through all the rings and tell any monitor
* that the port is going to exit netmap mode
*/
netmap_monitor_stop(na);
#endif
/*
* (TO CHECK) This function is only called
* when the last reference to this file descriptor goes
* away. This means we cannot have any pending poll()
* or interrupt routine operating on the structure.
* XXX The file may be closed in a thread while
* another thread is using it.
* Linux keeps the file opened until the last reference
* by any outstanding ioctl/poll or mmap is gone.
* FreeBSD does not track mmap()s (but we do) and
* wakes up any sleeping poll(). Need to check what
* happens if the close() occurs while a concurrent
* syscall is running.
*/
na->nm_register(na, 0); /* off, clear flags */
/* Wake up any sleeping threads. netmap_poll will
* then return POLLERR
* XXX The wake up now must happen during *_down(), when
* we order all activities to stop. -gl
*/
/* delete rings and buffers */
netmap_mem_rings_delete(na);
na->nm_krings_delete(na);
}
	/* possibly decrement counter of tx_si/rx_si users */
netmap_unset_ringid(priv);
/* delete the nifp */
netmap_mem_if_delete(na, priv->np_nifp);
/* drop the allocator */
netmap_mem_deref(na->nm_mem, na);
/* mark the priv as unregistered */
priv->np_na = NULL;
priv->np_nifp = NULL;
}
/* call with NMG_LOCK held */
static __inline int
nm_si_user(struct netmap_priv_d *priv, enum txrx t)
{
return (priv->np_na != NULL &&
(priv->np_qlast[t] - priv->np_qfirst[t] > 1));
}
/*
* Destructor of the netmap_priv_d, called when the fd is closed
 * Action: undo all the things done by NIOCREGIF.
* On FreeBSD we need to track whether there are active mmap()s,
* and we use np_active_mmaps for that. On linux, the field is always 0.
* Return: 1 if we can free priv, 0 otherwise.
*
*/
/* call with NMG_LOCK held */
int
netmap_dtor_locked(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
/* number of active references to this fd */
if (--priv->np_refs > 0) {
return 0;
}
netmap_use_count--;
if (!na) {
return 1; //XXX is it correct?
}
netmap_do_unregif(priv);
netmap_adapter_put(na);
return 1;
}
/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
struct netmap_priv_d *priv = data;
int last_instance;
NMG_LOCK();
last_instance = netmap_dtor_locked(priv);
NMG_UNLOCK();
if (last_instance) {
bzero(priv, sizeof(*priv)); /* for safety */
free(priv, M_DEVBUF);
}
}
/*
* Handlers for synchronization of the queues from/to the host.
* Netmap has two operating modes:
* - in the default mode, the rings connected to the host stack are
* just another ring pair managed by userspace;
* - in transparent mode (XXX to be defined) incoming packets
* (from the host or the NIC) are marked as NS_FORWARD upon
* arrival, and the user application has a chance to reset the
* flag for packets that should be dropped.
* On the RXSYNC or poll(), packets in RX rings between
 * kring->nr_hwcur and ring->cur with NS_FORWARD still set are moved
* to the other side.
* The transfer NIC --> host is relatively easy, just encapsulate
* into mbufs and we are done. The host --> NIC side is slightly
* harder because there might not be room in the tx ring so it
* might take a while before releasing the buffer.
*/
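/*
 * For example, an application running in transparent mode can leave
 * NS_FORWARD set in slot->flags before advancing ring->head; on the
 * next sync those packets are moved to the other side by
 * netmap_grab_packets() or netmap_sw_to_nic() below.
 */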
/*
 * pass a chain of buffers to the host stack as coming from 'dst'.
* We do not need to lock because the queue is private.
*/
static void
netmap_send_up(struct ifnet *dst, struct mbq *q)
{
struct mbuf *m;
/* send packets up, outside the lock */
while ((m = mbq_dequeue(q)) != NULL) {
if (netmap_verbose & NM_VERB_HOST)
D("sending up pkt %p size %d", m, MBUF_LEN(m));
NM_SEND_UP(dst, m);
}
mbq_destroy(q);
}
/*
* put a copy of the buffers marked NS_FORWARD into an mbuf chain.
* Take packets from hwcur to ring->head marked NS_FORWARD (or forced)
* and pass them up. Drop remaining packets in the unlikely event
* of an mbuf shortage.
*/
static void
netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
{
u_int const lim = kring->nkr_num_slots - 1;
u_int const head = kring->rhead;
u_int n;
struct netmap_adapter *na = kring->na;
for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) {
struct mbuf *m;
struct netmap_slot *slot = &kring->ring->slot[n];
if ((slot->flags & NS_FORWARD) == 0 && !force)
continue;
if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) {
RD(5, "bad pkt at %d len %d", n, slot->len);
continue;
}
slot->flags &= ~NS_FORWARD; // XXX needed ?
/* XXX TODO: adapt to the case of a multisegment packet */
m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL);
if (m == NULL)
break;
mbq_enqueue(q, m);
}
}
/*
* Send to the NIC rings packets marked NS_FORWARD between
* kring->nr_hwcur and kring->rhead
 * Called under kring->rx_queue.lock on the sw rx ring.
*/
static u_int
netmap_sw_to_nic(struct netmap_adapter *na)
{
struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
struct netmap_slot *rxslot = kring->ring->slot;
u_int i, rxcur = kring->nr_hwcur;
u_int const head = kring->rhead;
u_int const src_lim = kring->nkr_num_slots - 1;
u_int sent = 0;
/* scan rings to find space, then fill as much as possible */
for (i = 0; i < na->num_tx_rings; i++) {
struct netmap_kring *kdst = &na->tx_rings[i];
struct netmap_ring *rdst = kdst->ring;
u_int const dst_lim = kdst->nkr_num_slots - 1;
/* XXX do we trust ring or kring->rcur,rtail ? */
for (; rxcur != head && !nm_ring_empty(rdst);
rxcur = nm_next(rxcur, src_lim) ) {
struct netmap_slot *src, *dst, tmp;
u_int dst_cur = rdst->cur;
src = &rxslot[rxcur];
if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd)
continue;
sent++;
dst = &rdst->slot[dst_cur];
tmp = *src;
src->buf_idx = dst->buf_idx;
src->flags = NS_BUF_CHANGED;
dst->buf_idx = tmp.buf_idx;
dst->len = tmp.len;
dst->flags = NS_BUF_CHANGED;
rdst->cur = nm_next(dst_cur, dst_lim);
}
/* if (sent) XXX txsync ? */
}
return sent;
}
/*
* netmap_txsync_to_host() passes packets up. We are called from a
* system call in user process context, and the only contention
* can be among multiple user threads erroneously calling
* this routine concurrently.
*/
static void
netmap_txsync_to_host(struct netmap_adapter *na)
{
struct netmap_kring *kring = &na->tx_rings[na->num_tx_rings];
u_int const lim = kring->nkr_num_slots - 1;
u_int const head = kring->rhead;
struct mbq q;
/* Take packets from hwcur to head and pass them up.
	 * force head = cur, since netmap_grab_packets() stops at head.
* In case of no buffers we give up. At the end of the loop,
* the queue is drained in all cases.
*/
mbq_init(&q);
netmap_grab_packets(kring, &q, 1 /* force */);
ND("have %d pkts in queue", mbq_len(&q));
kring->nr_hwcur = head;
kring->nr_hwtail = head + lim;
if (kring->nr_hwtail > lim)
kring->nr_hwtail -= lim + 1;
netmap_send_up(na->ifp, &q);
}
/*
* rxsync backend for packets coming from the host stack.
* They have been put in kring->rx_queue by netmap_transmit().
* We protect access to the kring using kring->rx_queue.lock
*
* This routine also does the selrecord if called from the poll handler
* (we know because td != NULL).
*
* NOTE: on linux, selrecord() is defined as a macro and uses pwait
* as an additional hidden argument.
 * returns the number of packets delivered to tx queues in
 * transparent mode, or a negative value on error
*/
static int
netmap_rxsync_from_host(struct netmap_adapter *na, struct thread *td, void *pwait)
{
struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
struct netmap_ring *ring = kring->ring;
u_int nm_i, n;
u_int const lim = kring->nkr_num_slots - 1;
u_int const head = kring->rhead;
int ret = 0;
struct mbq *q = &kring->rx_queue, fq;
(void)pwait; /* disable unused warnings */
(void)td;
mbq_init(&fq); /* fq holds packets to be freed */
mbq_lock(q);
/* First part: import newly received packets */
n = mbq_len(q);
if (n) { /* grab packets from the queue */
struct mbuf *m;
uint32_t stop_i;
nm_i = kring->nr_hwtail;
stop_i = nm_prev(nm_i, lim);
while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) {
int len = MBUF_LEN(m);
struct netmap_slot *slot = &ring->slot[nm_i];
m_copydata(m, 0, len, NMB(na, slot));
ND("nm %d len %d", nm_i, len);
if (netmap_verbose)
D("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL));
slot->len = len;
slot->flags = kring->nkr_slot_flags;
nm_i = nm_next(nm_i, lim);
mbq_enqueue(&fq, m);
}
kring->nr_hwtail = nm_i;
}
/*
* Second part: skip past packets that userspace has released.
*/
nm_i = kring->nr_hwcur;
if (nm_i != head) { /* something was released */
if (netmap_fwd || kring->ring->flags & NR_FORWARD)
ret = netmap_sw_to_nic(na);
kring->nr_hwcur = head;
}
/* access copies of cur,tail in the kring */
if (kring->rcur == kring->rtail && td) /* no bufs available */
OS_selrecord(td, &kring->si);
mbq_unlock(q);
mbq_purge(&fq);
mbq_destroy(&fq);
return ret;
}
/* Get a netmap adapter for the port.
*
* If it is possible to satisfy the request, return 0
* with *na containing the netmap adapter found.
* Otherwise return an error code, with *na containing NULL.
*
* When the port is attached to a bridge, we always return
* EBUSY.
* Otherwise, if the port is already bound to a file descriptor,
* then we unconditionally return the existing adapter into *na.
* In all the other cases, we return (into *na) either native,
* generic or NULL, according to the following table:
*
* native_support
* active_fds dev.netmap.admode YES NO
* -------------------------------------------------------
* >0 * NA(ifp) NA(ifp)
*
* 0 NETMAP_ADMODE_BEST NATIVE GENERIC
* 0 NETMAP_ADMODE_NATIVE NATIVE NULL
* 0 NETMAP_ADMODE_GENERIC GENERIC GENERIC
*
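 * For example, with dev.netmap.admode == NETMAP_ADMODE_NATIVE and an
 * interface without native support, the lookup below fails with
 * EOPNOTSUPP, while NETMAP_ADMODE_BEST silently falls back to a
 * generic adapter.
 *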
*/
int
netmap_get_hw_na(struct ifnet *ifp, struct netmap_adapter **na)
{
/* generic support */
int i = netmap_admode; /* Take a snapshot. */
struct netmap_adapter *prev_na;
#ifdef WITH_GENERIC
struct netmap_generic_adapter *gna;
int error = 0;
#endif
*na = NULL; /* default */
/* reset in case of invalid value */
if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST)
i = netmap_admode = NETMAP_ADMODE_BEST;
if (NETMAP_CAPABLE(ifp)) {
prev_na = NA(ifp);
/* If an adapter already exists, return it if
* there are active file descriptors or if
* netmap is not forced to use generic
* adapters.
*/
if (NETMAP_OWNED_BY_ANY(prev_na)
|| i != NETMAP_ADMODE_GENERIC
|| prev_na->na_flags & NAF_FORCE_NATIVE
#ifdef WITH_PIPES
/* ugly, but we cannot allow an adapter switch
* if some pipe is referring to this one
*/
|| prev_na->na_next_pipe > 0
#endif
) {
*na = prev_na;
return 0;
}
}
/* If there isn't native support and netmap is not allowed
* to use generic adapters, we cannot satisfy the request.
*/
if (!NETMAP_CAPABLE(ifp) && i == NETMAP_ADMODE_NATIVE)
return EOPNOTSUPP;
#ifdef WITH_GENERIC
/* Otherwise, create a generic adapter and return it,
* saving the previously used netmap adapter, if any.
*
* Note that here 'prev_na', if not NULL, MUST be a
* native adapter, and CANNOT be a generic one. This is
* true because generic adapters are created on demand, and
* destroyed when not used anymore. Therefore, if the adapter
* currently attached to an interface 'ifp' is generic, it
* must be that
* (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))).
* Consequently, if NA(ifp) is generic, we will enter one of
* the branches above. This ensures that we never override
* a generic adapter with another generic adapter.
*/
prev_na = NA(ifp);
error = generic_netmap_attach(ifp);
if (error)
return error;
*na = NA(ifp);
gna = (struct netmap_generic_adapter*)NA(ifp);
gna->prev = prev_na; /* save old na */
if (prev_na != NULL) {
ifunit_ref(ifp->if_xname);
// XXX add a refcount ?
netmap_adapter_get(prev_na);
}
ND("Created generic NA %p (prev %p)", gna, gna->prev);
return 0;
#else /* !WITH_GENERIC */
return EOPNOTSUPP;
#endif
}
/*
* MUST BE CALLED UNDER NMG_LOCK()
*
* Get a refcounted reference to a netmap adapter attached
* to the interface specified by nmr.
* This is always called in the execution of an ioctl().
*
* Return ENXIO if the interface specified by the request does
* not exist, ENOTSUP if netmap is not supported by the interface,
* EBUSY if the interface is already attached to a bridge,
* EINVAL if parameters are invalid, ENOMEM if needed resources
* could not be allocated.
* If successful, hold a reference to the netmap adapter.
*
* No reference is kept on the real interface, which may then
* disappear at any time.
*/
int
netmap_get_na(struct nmreq *nmr, struct netmap_adapter **na, int create)
{
struct ifnet *ifp = NULL;
int error = 0;
struct netmap_adapter *ret = NULL;
*na = NULL; /* default return value */
NMG_LOCK_ASSERT();
/* we cascade through all possible types of netmap adapter.
* All netmap_get_*_na() functions return an error and an na,
* with the following combinations:
*
* error na
* 0 NULL type doesn't match
* !0 NULL type matches, but na creation/lookup failed
* 0 !NULL type matches and na created/found
* !0 !NULL impossible
*/
/* try to see if this is a monitor port */
error = netmap_get_monitor_na(nmr, na, create);
if (error || *na != NULL)
return error;
/* try to see if this is a pipe port */
error = netmap_get_pipe_na(nmr, na, create);
if (error || *na != NULL)
return error;
/* try to see if this is a bridge port */
error = netmap_get_bdg_na(nmr, na, create);
if (error)
return error;
if (*na != NULL) /* valid match in netmap_get_bdg_na() */
goto out;
/*
* This must be a hardware na, lookup the name in the system.
* Note that by hardware we actually mean "it shows up in ifconfig".
* This may still be a tap, a veth/epair, or even a
* persistent VALE port.
*/
ifp = ifunit_ref(nmr->nr_name);
if (ifp == NULL) {
return ENXIO;
}
error = netmap_get_hw_na(ifp, &ret);
if (error)
goto out;
*na = ret;
netmap_adapter_get(ret);
out:
if (error && ret != NULL)
netmap_adapter_put(ret);
if (ifp)
if_rele(ifp); /* allow live unloading of drivers modules */
return error;
}
/*
* validate parameters on entry for *_txsync()
 * Returns ring->head if ok, or something >= kring->nkr_num_slots
* in case of error.
*
* rhead, rcur and rtail=hwtail are stored from previous round.
* hwcur is the next packet to send to the ring.
*
* We want
* hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail
*
* hwcur, rhead, rtail and hwtail are reliable
*/
static u_int
nm_txsync_prologue(struct netmap_kring *kring)
{
#define NM_ASSERT(t) if (t) { D("fail " #t); goto error; }
struct netmap_ring *ring = kring->ring;
u_int head = ring->head; /* read only once */
u_int cur = ring->cur; /* read only once */
u_int n = kring->nkr_num_slots;
ND(5, "%s kcur %d ktail %d head %d cur %d tail %d",
kring->name,
kring->nr_hwcur, kring->nr_hwtail,
ring->head, ring->cur, ring->tail);
#if 1 /* kernel sanity checks; but we can trust the kring. */
if (kring->nr_hwcur >= n || kring->rhead >= n ||
kring->rtail >= n || kring->nr_hwtail >= n)
goto error;
#endif /* kernel sanity checks */
/*
	 * user sanity checks. We only use 'cur'.
* A, B, ... are possible positions for cur:
*
* 0 A cur B tail C n-1
* 0 D tail E cur F n-1
*
* B, F, D are valid. A, C, E are wrong
*/
if (kring->rtail >= kring->rhead) {
/* want rhead <= head <= rtail */
NM_ASSERT(head < kring->rhead || head > kring->rtail);
/* and also head <= cur <= rtail */
NM_ASSERT(cur < head || cur > kring->rtail);
} else { /* here rtail < rhead */
/* we need head outside rtail .. rhead */
NM_ASSERT(head > kring->rtail && head < kring->rhead);
/* two cases now: head <= rtail or head >= rhead */
if (head <= kring->rtail) {
/* want head <= cur <= rtail */
NM_ASSERT(cur < head || cur > kring->rtail);
} else { /* head >= rhead */
/* cur must be outside rtail..head */
NM_ASSERT(cur > kring->rtail && cur < head);
}
}
if (ring->tail != kring->rtail) {
RD(5, "tail overwritten was %d need %d",
ring->tail, kring->rtail);
ring->tail = kring->rtail;
}
kring->rhead = head;
kring->rcur = cur;
return head;
error:
RD(5, "%s kring error: head %d cur %d tail %d rhead %d rcur %d rtail %d hwcur %d hwtail %d",
kring->name,
head, cur, ring->tail,
kring->rhead, kring->rcur, kring->rtail,
kring->nr_hwcur, kring->nr_hwtail);
return n;
#undef NM_ASSERT
}
/*
* validate parameters on entry for *_rxsync()
* Returns ring->head if ok, kring->nkr_num_slots on error.
*
* For a valid configuration,
* hwcur <= head <= cur <= tail <= hwtail
*
* We only consider head and cur.
* hwcur and hwtail are reliable.
*
*/
static u_int
nm_rxsync_prologue(struct netmap_kring *kring)
{
struct netmap_ring *ring = kring->ring;
uint32_t const n = kring->nkr_num_slots;
uint32_t head, cur;
ND(5,"%s kc %d kt %d h %d c %d t %d",
kring->name,
kring->nr_hwcur, kring->nr_hwtail,
ring->head, ring->cur, ring->tail);
/*
* Before storing the new values, we should check they do not
* move backwards. However:
* - head is not an issue because the previous value is hwcur;
* - cur could in principle go back, however it does not matter
* because we are processing a brand new rxsync()
*/
cur = kring->rcur = ring->cur; /* read only once */
head = kring->rhead = ring->head; /* read only once */
#if 1 /* kernel sanity checks */
if (kring->nr_hwcur >= n || kring->nr_hwtail >= n)
goto error;
#endif /* kernel sanity checks */
/* user sanity checks */
if (kring->nr_hwtail >= kring->nr_hwcur) {
/* want hwcur <= rhead <= hwtail */
if (head < kring->nr_hwcur || head > kring->nr_hwtail)
goto error;
/* and also rhead <= rcur <= hwtail */
if (cur < head || cur > kring->nr_hwtail)
goto error;
} else {
/* we need rhead outside hwtail..hwcur */
if (head < kring->nr_hwcur && head > kring->nr_hwtail)
goto error;
/* two cases now: head <= hwtail or head >= hwcur */
if (head <= kring->nr_hwtail) {
/* want head <= cur <= hwtail */
if (cur < head || cur > kring->nr_hwtail)
goto error;
} else {
/* cur must be outside hwtail..head */
if (cur < head && cur > kring->nr_hwtail)
goto error;
}
}
if (ring->tail != kring->rtail) {
RD(5, "%s tail overwritten was %d need %d",
kring->name,
ring->tail, kring->rtail);
ring->tail = kring->rtail;
}
return head;
error:
RD(5, "kring error: hwcur %d rcur %d hwtail %d head %d cur %d tail %d",
kring->nr_hwcur,
kring->rcur, kring->nr_hwtail,
kring->rhead, kring->rcur, ring->tail);
return n;
}
/*
* Error routine called when txsync/rxsync detects an error.
 * Can't do much more than resetting head = cur = hwcur, tail = hwtail
* Return 1 on reinit.
*
* This routine is only called by the upper half of the kernel.
* It only reads hwcur (which is changed only by the upper half, too)
* and hwtail (which may be changed by the lower half, but only on
* a tx ring and only to increase it, so any error will be recovered
* on the next call). For the above, we don't strictly need to call
* it under lock.
*/
int
netmap_ring_reinit(struct netmap_kring *kring)
{
struct netmap_ring *ring = kring->ring;
u_int i, lim = kring->nkr_num_slots - 1;
int errors = 0;
// XXX KASSERT nm_kr_tryget
RD(10, "called for %s", kring->name);
// XXX probably wrong to trust userspace
kring->rhead = ring->head;
kring->rcur = ring->cur;
kring->rtail = ring->tail;
if (ring->cur > lim)
errors++;
if (ring->head > lim)
errors++;
if (ring->tail > lim)
errors++;
for (i = 0; i <= lim; i++) {
u_int idx = ring->slot[i].buf_idx;
u_int len = ring->slot[i].len;
if (idx < 2 || idx >= kring->na->na_lut.objtotal) {
RD(5, "bad index at slot %d idx %d len %d ", i, idx, len);
ring->slot[i].buf_idx = 0;
ring->slot[i].len = 0;
} else if (len > NETMAP_BUF_SIZE(kring->na)) {
ring->slot[i].len = 0;
RD(5, "bad len at slot %d idx %d len %d", i, idx, len);
}
}
if (errors) {
RD(10, "total %d errors", errors);
RD(10, "%s reinit, cur %d -> %d tail %d -> %d",
kring->name,
ring->cur, kring->nr_hwcur,
ring->tail, kring->nr_hwtail);
ring->head = kring->rhead = kring->nr_hwcur;
ring->cur = kring->rcur = kring->nr_hwcur;
ring->tail = kring->rtail = kring->nr_hwtail;
}
return (errors ? 1 : 0);
}
/* interpret the ringid and flags fields of an nmreq, by translating them
* into a pair of intervals of ring indices:
*
 * [priv->np_qfirst[NR_TX], priv->np_qlast[NR_TX]) and
 * [priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX])
*
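 * For example, NR_REG_ONE_NIC with ring index 2 yields [2, 3) for both
 * directions, while NR_REG_SW selects only the host rings, i.e.
 * [nrings, nrings + 1).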
*/
int
netmap_interp_ringid(struct netmap_priv_d *priv, uint16_t ringid, uint32_t flags)
{
struct netmap_adapter *na = priv->np_na;
u_int j, i = ringid & NETMAP_RING_MASK;
u_int reg = flags & NR_REG_MASK;
enum txrx t;
if (reg == NR_REG_DEFAULT) {
/* convert from old ringid to flags */
if (ringid & NETMAP_SW_RING) {
reg = NR_REG_SW;
} else if (ringid & NETMAP_HW_RING) {
reg = NR_REG_ONE_NIC;
} else {
reg = NR_REG_ALL_NIC;
}
D("deprecated API, old ringid 0x%x -> ringid %x reg %d", ringid, i, reg);
}
switch (reg) {
case NR_REG_ALL_NIC:
case NR_REG_PIPE_MASTER:
case NR_REG_PIPE_SLAVE:
for_rx_tx(t) {
priv->np_qfirst[t] = 0;
priv->np_qlast[t] = nma_get_nrings(na, t);
}
ND("%s %d %d", "ALL/PIPE",
priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX]);
break;
case NR_REG_SW:
case NR_REG_NIC_SW:
if (!(na->na_flags & NAF_HOST_RINGS)) {
D("host rings not supported");
return EINVAL;
}
for_rx_tx(t) {
priv->np_qfirst[t] = (reg == NR_REG_SW ?
nma_get_nrings(na, t) : 0);
priv->np_qlast[t] = nma_get_nrings(na, t) + 1;
}
ND("%s %d %d", reg == NR_REG_SW ? "SW" : "NIC+SW",
priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX]);
break;
case NR_REG_ONE_NIC:
if (i >= na->num_tx_rings && i >= na->num_rx_rings) {
D("invalid ring id %d", i);
return EINVAL;
}
for_rx_tx(t) {
/* if not enough rings, use the first one */
j = i;
if (j >= nma_get_nrings(na, t))
j = 0;
priv->np_qfirst[t] = j;
priv->np_qlast[t] = j + 1;
}
break;
default:
D("invalid regif type %d", reg);
return EINVAL;
}
priv->np_flags = (flags & ~NR_REG_MASK) | reg;
if (netmap_verbose) {
D("%s: tx [%d,%d) rx [%d,%d) id %d",
na->name,
priv->np_qfirst[NR_TX],
priv->np_qlast[NR_TX],
priv->np_qfirst[NR_RX],
priv->np_qlast[NR_RX],
i);
}
return 0;
}
/*
* Set the ring ID. For devices with a single queue, a request
* for all rings is the same as a single ring.
*/
static int
netmap_set_ringid(struct netmap_priv_d *priv, uint16_t ringid, uint32_t flags)
{
struct netmap_adapter *na = priv->np_na;
int error;
enum txrx t;
error = netmap_interp_ringid(priv, ringid, flags);
if (error) {
return error;
}
priv->np_txpoll = (ringid & NETMAP_NO_TX_POLL) ? 0 : 1;
/* optimization: count the users registered for more than
* one ring, which are the ones sleeping on the global queue.
* The default netmap_notify() callback will then
* avoid signaling the global queue if nobody is using it
*/
for_rx_tx(t) {
if (nm_si_user(priv, t))
na->si_users[t]++;
}
return 0;
}
static void
netmap_unset_ringid(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
enum txrx t;
for_rx_tx(t) {
if (nm_si_user(priv, t))
na->si_users[t]--;
priv->np_qfirst[t] = priv->np_qlast[t] = 0;
}
priv->np_flags = 0;
priv->np_txpoll = 0;
}
/* check that the rings we want to bind are not exclusively owned by a previous
* bind. If exclusive ownership has been requested, we also mark the rings.
*/
static int
netmap_get_exclusive(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
u_int i;
struct netmap_kring *kring;
int excl = (priv->np_flags & NR_EXCLUSIVE);
enum txrx t;
ND("%s: grabbing tx [%d, %d) rx [%d, %d)",
na->name,
priv->np_qfirst[NR_TX],
priv->np_qlast[NR_TX],
priv->np_qfirst[NR_RX],
priv->np_qlast[NR_RX]);
	/* first round: check that none of the requested rings
	 * is already exclusively owned, and that we are not
	 * requesting exclusive ownership of rings already in use
*/
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = &NMR(na, t)[i];
if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
(kring->users && excl))
{
ND("ring %s busy", kring->name);
return EBUSY;
}
}
}
	/* second round: increment usage count and possibly
* mark as exclusive
*/
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = &NMR(na, t)[i];
kring->users++;
if (excl)
kring->nr_kflags |= NKR_EXCLUSIVE;
}
}
return 0;
}
/* undo netmap_get_exclusive() */
static void
netmap_rel_exclusive(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
u_int i;
struct netmap_kring *kring;
int excl = (priv->np_flags & NR_EXCLUSIVE);
enum txrx t;
ND("%s: releasing tx [%d, %d) rx [%d, %d)",
na->name,
priv->np_qfirst[NR_TX],
priv->np_qlast[NR_TX],
priv->np_qfirst[NR_RX],
		priv->np_qlast[NR_RX]);
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = &NMR(na, t)[i];
if (excl)
kring->nr_kflags &= ~NKR_EXCLUSIVE;
kring->users--;
}
}
}
/*
 * possibly move the interface to netmap mode.
 * On success (return value 0) priv->np_nifp points to the new
 * netmap_if; on failure an errno value is returned.
 * This must be called with NMG_LOCK held.
*
* The following na callbacks are called in the process:
*
* na->nm_config() [by netmap_update_config]
* (get current number and size of rings)
*
* We have a generic one for linux (netmap_linux_config).
* The bwrap has to override this, since it has to forward
* the request to the wrapped adapter (netmap_bwrap_config).
*
*
* na->nm_krings_create()
* (create and init the krings array)
*
* One of the following:
*
* * netmap_hw_krings_create, (hw ports)
* creates the standard layout for the krings
* and adds the mbq (used for the host rings).
*
* * netmap_vp_krings_create (VALE ports)
* add leases and scratchpads
*
* * netmap_pipe_krings_create (pipes)
* create the krings and rings of both ends and
* cross-link them
*
* * netmap_monitor_krings_create (monitors)
* avoid allocating the mbq
*
* * netmap_bwrap_krings_create (bwraps)
 * create the bwrap krings array,
* the krings array of the wrapped adapter, and
* (if needed) the fake array for the host adapter
*
* na->nm_register(, 1)
* (put the adapter in netmap mode)
*
* This may be one of the following:
* (XXX these should be either all *_register or all *_reg 2014-03-15)
*
* * netmap_hw_register (hw ports)
* checks that the ifp is still there, then calls
* the hardware specific callback;
*
* * netmap_vp_reg (VALE ports)
* If the port is connected to a bridge,
* set the NAF_NETMAP_ON flag under the
* bridge write lock.
*
* * netmap_pipe_reg (pipes)
* inform the other pipe end that it is no
* longer responsible for the lifetime of this
* pipe end
*
* * netmap_monitor_reg (monitors)
* intercept the sync callbacks of the monitored
* rings
*
* * netmap_bwrap_register (bwraps)
* cross-link the bwrap and hwna rings,
* forward the request to the hwna, override
* the hwna notify callback (to get the frames
* coming from outside go through the bridge).
*
*
*/
int
netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na,
uint16_t ringid, uint32_t flags)
{
struct netmap_if *nifp = NULL;
int error;
NMG_LOCK_ASSERT();
/* ring configuration may have changed, fetch from the card */
netmap_update_config(na);
priv->np_na = na; /* store the reference */
error = netmap_set_ringid(priv, ringid, flags);
if (error)
goto err;
error = netmap_mem_finalize(na->nm_mem, na);
if (error)
goto err;
if (na->active_fds == 0) {
/*
* If this is the first registration of the adapter,
* also create the netmap rings and their in-kernel view,
* the netmap krings.
*/
/*
* Depending on the adapter, this may also create
* the netmap rings themselves
*/
error = na->nm_krings_create(na);
if (error)
goto err_drop_mem;
/* create all missing netmap rings */
error = netmap_mem_rings_create(na);
if (error)
goto err_del_krings;
}
/* now the kring must exist and we can check whether some
* previous bind has exclusive ownership on them
*/
error = netmap_get_exclusive(priv);
if (error)
goto err_del_rings;
/* in all cases, create a new netmap if */
nifp = netmap_mem_if_new(na);
if (nifp == NULL) {
error = ENOMEM;
goto err_rel_excl;
}
na->active_fds++;
if (!nm_netmap_on(na)) {
/* Netmap not active, set the card in netmap mode
* and make it use the shared buffers.
*/
/* cache the allocator info in the na */
netmap_mem_get_lut(na->nm_mem, &na->na_lut);
ND("%p->na_lut == %p", na, na->na_lut.lut);
error = na->nm_register(na, 1); /* mode on */
if (error)
goto err_del_if;
}
/*
* advertise that the interface is ready by setting np_nifp.
* The barrier is needed because readers (poll, *SYNC and mmap)
* check for priv->np_nifp != NULL without locking
*/
mb(); /* make sure previous writes are visible to all CPUs */
priv->np_nifp = nifp;
return 0;
err_del_if:
memset(&na->na_lut, 0, sizeof(na->na_lut));
na->active_fds--;
netmap_mem_if_delete(na, nifp);
err_rel_excl:
netmap_rel_exclusive(priv);
err_del_rings:
if (na->active_fds == 0)
netmap_mem_rings_delete(na);
err_del_krings:
if (na->active_fds == 0)
na->nm_krings_delete(na);
err_drop_mem:
netmap_mem_deref(na->nm_mem, na);
err:
priv->np_na = NULL;
return error;
}
/*
* update kring and ring at the end of txsync.
*/
static inline void
nm_txsync_finalize(struct netmap_kring *kring)
{
/* update ring tail to what the kernel knows */
kring->ring->tail = kring->rtail = kring->nr_hwtail;
/* note, head/rhead/hwcur might be behind cur/rcur
* if no carrier
*/
ND(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d",
kring->name, kring->nr_hwcur, kring->nr_hwtail,
kring->rhead, kring->rcur, kring->rtail);
}
/*
* update kring and ring at the end of rxsync
*/
static inline void
nm_rxsync_finalize(struct netmap_kring *kring)
{
/* tell userspace that there might be new packets */
	ND("head %d cur %d tail %d -> %d", kring->ring->head,
		kring->ring->cur, kring->ring->tail, kring->nr_hwtail);
kring->ring->tail = kring->rtail = kring->nr_hwtail;
/* make a copy of the state for next round */
kring->rhead = kring->ring->head;
kring->rcur = kring->ring->cur;
}
/*
* ioctl(2) support for the "netmap" device.
*
* Following a list of accepted commands:
* - NIOCGINFO
* - SIOCGIFADDR just for convenience
* - NIOCREGIF
* - NIOCTXSYNC
* - NIOCRXSYNC
*
* Return 0 on success, errno otherwise.
*/
int
netmap_ioctl(struct cdev *dev, u_long cmd, caddr_t data,
int fflag, struct thread *td)
{
struct netmap_priv_d *priv = NULL;
struct nmreq *nmr = (struct nmreq *) data;
struct netmap_adapter *na = NULL;
int error;
u_int i, qfirst, qlast;
struct netmap_if *nifp;
struct netmap_kring *krings;
enum txrx t;
(void)dev; /* UNUSED */
(void)fflag; /* UNUSED */
if (cmd == NIOCGINFO || cmd == NIOCREGIF) {
/* truncate name */
nmr->nr_name[sizeof(nmr->nr_name) - 1] = '\0';
if (nmr->nr_version != NETMAP_API) {
D("API mismatch for %s got %d need %d",
nmr->nr_name,
nmr->nr_version, NETMAP_API);
nmr->nr_version = NETMAP_API;
}
if (nmr->nr_version < NETMAP_MIN_API ||
nmr->nr_version > NETMAP_MAX_API) {
return EINVAL;
}
}
CURVNET_SET(TD_TO_VNET(td));
error = devfs_get_cdevpriv((void **)&priv);
if (error) {
CURVNET_RESTORE();
/* XXX ENOENT should be impossible, since the priv
* is now created in the open */
return (error == ENOENT ? ENXIO : error);
}
switch (cmd) {
case NIOCGINFO: /* return capabilities etc */
if (nmr->nr_cmd == NETMAP_BDG_LIST) {
error = netmap_bdg_ctl(nmr, NULL);
break;
}
NMG_LOCK();
do {
/* memsize is always valid */
struct netmap_mem_d *nmd = &nm_mem;
u_int memflags;
if (nmr->nr_name[0] != '\0') {
/* get a refcount */
error = netmap_get_na(nmr, &na, 1 /* create */);
if (error)
break;
nmd = na->nm_mem; /* get memory allocator */
}
error = netmap_mem_get_info(nmd, &nmr->nr_memsize, &memflags,
&nmr->nr_arg2);
if (error)
break;
if (na == NULL) /* only memory info */
break;
nmr->nr_offset = 0;
nmr->nr_rx_slots = nmr->nr_tx_slots = 0;
netmap_update_config(na);
nmr->nr_rx_rings = na->num_rx_rings;
nmr->nr_tx_rings = na->num_tx_rings;
nmr->nr_rx_slots = na->num_rx_desc;
nmr->nr_tx_slots = na->num_tx_desc;
netmap_adapter_put(na);
} while (0);
NMG_UNLOCK();
break;
case NIOCREGIF:
/* possibly attach/detach NIC and VALE switch */
i = nmr->nr_cmd;
if (i == NETMAP_BDG_ATTACH || i == NETMAP_BDG_DETACH
|| i == NETMAP_BDG_VNET_HDR
|| i == NETMAP_BDG_NEWIF
|| i == NETMAP_BDG_DELIF) {
error = netmap_bdg_ctl(nmr, NULL);
break;
} else if (i != 0) {
D("nr_cmd must be 0 not %d", i);
error = EINVAL;
break;
}
/* protect access to priv from concurrent NIOCREGIF */
NMG_LOCK();
do {
u_int memflags;
if (priv->np_nifp != NULL) { /* thread already registered */
error = EBUSY;
break;
}
/* find the interface and a reference */
error = netmap_get_na(nmr, &na, 1 /* create */); /* keep reference */
if (error)
break;
if (NETMAP_OWNED_BY_KERN(na)) {
netmap_adapter_put(na);
error = EBUSY;
break;
}
error = netmap_do_regif(priv, na, nmr->nr_ringid, nmr->nr_flags);
if (error) { /* reg. failed, release priv and ref */
netmap_adapter_put(na);
break;
}
nifp = priv->np_nifp;
priv->np_td = td; // XXX kqueue, debugging only
/* return the offset of the netmap_if object */
nmr->nr_rx_rings = na->num_rx_rings;
nmr->nr_tx_rings = na->num_tx_rings;
nmr->nr_rx_slots = na->num_rx_desc;
nmr->nr_tx_slots = na->num_tx_desc;
error = netmap_mem_get_info(na->nm_mem, &nmr->nr_memsize, &memflags,
&nmr->nr_arg2);
if (error) {
netmap_do_unregif(priv);
netmap_adapter_put(na);
break;
}
if (memflags & NETMAP_MEM_PRIVATE) {
*(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM;
}
for_rx_tx(t) {
priv->np_si[t] = nm_si_user(priv, t) ?
&na->si[t] : &NMR(na, t)[priv->np_qfirst[t]].si;
}
if (nmr->nr_arg3) {
D("requested %d extra buffers", nmr->nr_arg3);
nmr->nr_arg3 = netmap_extra_alloc(na,
&nifp->ni_bufs_head, nmr->nr_arg3);
D("got %d extra buffers", nmr->nr_arg3);
}
nmr->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp);
} while (0);
NMG_UNLOCK();
break;
case NIOCTXSYNC:
case NIOCRXSYNC:
nifp = priv->np_nifp;
if (nifp == NULL) {
error = ENXIO;
break;
}
mb(); /* make sure following reads are not from cache */
na = priv->np_na; /* we have a reference */
if (na == NULL) {
D("Internal error: nifp != NULL && na == NULL");
error = ENXIO;
break;
}
if (!nm_netmap_on(na)) {
error = ENXIO;
break;
}
t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX);
krings = NMR(na, t);
qfirst = priv->np_qfirst[t];
qlast = priv->np_qlast[t];
for (i = qfirst; i < qlast; i++) {
struct netmap_kring *kring = krings + i;
if (nm_kr_tryget(kring)) {
error = EBUSY;
goto out;
}
if (cmd == NIOCTXSYNC) {
if (netmap_verbose & NM_VERB_TXSYNC)
D("pre txsync ring %d cur %d hwcur %d",
i, kring->ring->cur,
kring->nr_hwcur);
if (nm_txsync_prologue(kring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
} else if (kring->nm_sync(kring, NAF_FORCE_RECLAIM) == 0) {
nm_txsync_finalize(kring);
}
if (netmap_verbose & NM_VERB_TXSYNC)
D("post txsync ring %d cur %d hwcur %d",
i, kring->ring->cur,
kring->nr_hwcur);
} else {
if (nm_rxsync_prologue(kring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
} else if (kring->nm_sync(kring, NAF_FORCE_READ) == 0) {
nm_rxsync_finalize(kring);
}
microtime(&na->rx_rings[i].ring->ts);
}
nm_kr_put(kring);
}
break;
#ifdef WITH_VALE
case NIOCCONFIG:
error = netmap_bdg_config(nmr);
break;
#endif
#ifdef __FreeBSD__
case FIONBIO:
case FIOASYNC:
ND("FIONBIO/FIOASYNC are no-ops");
break;
case BIOCIMMEDIATE:
case BIOCGHDRCMPLT:
case BIOCSHDRCMPLT:
case BIOCSSEESENT:
D("ignore BIOCIMMEDIATE/BIOCSHDRCMPLT/BIOCSHDRCMPLT/BIOCSSEESENT");
break;
default: /* allow device-specific ioctls */
{
struct ifnet *ifp = ifunit_ref(nmr->nr_name);
if (ifp == NULL) {
error = ENXIO;
} else {
struct socket so;
bzero(&so, sizeof(so));
so.so_vnet = ifp->if_vnet;
// so->so_proto not null.
error = ifioctl(&so, cmd, data, td);
if_rele(ifp);
}
break;
}
#else /* linux */
default:
error = EOPNOTSUPP;
#endif /* linux */
}
out:
CURVNET_RESTORE();
return (error);
}
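/*
 * A minimal userspace sketch (illustrative only, not part of this file):
 * how a client might exercise the ioctls handled above. The interface
 * name handling and error paths are assumptions for illustration.
 */
#if 0
#include <sys/ioctl.h>
#include <fcntl.h>
#include <string.h>
#include <net/netmap_user.h>
static int
example_netmap_open(const char *ifname) /* hypothetical helper */
{
struct nmreq req;
int fd;
fd = open("/dev/netmap", O_RDWR);
if (fd < 0)
return (-1);
memset(&req, 0, sizeof(req));
req.nr_version = NETMAP_API; /* checked by NIOCGINFO/NIOCREGIF above */
strncpy(req.nr_name, ifname, sizeof(req.nr_name) - 1);
if (ioctl(fd, NIOCREGIF, &req) < 0) /* bind fd to the interface */
return (-1);
(void)ioctl(fd, NIOCTXSYNC, NULL); /* flush pending transmissions */
return (fd);
}
#endif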
/*
* select(2) and poll(2) handlers for the "netmap" device.
*
* Can be called for one or more queues.
* Return the event mask corresponding to ready events.
* If there are no ready events, do a selrecord on either individual
* selinfo or on the global one.
* Device-dependent parts (locking and sync of tx/rx rings)
* are done through callbacks.
*
* On linux, arguments are really pwait, the poll table, and 'td' is struct file *
* The first one is remapped to pwait, as selrecord() uses the name as a
* hidden argument.
*/
int
netmap_poll(struct cdev *dev, int events, struct thread *td)
{
struct netmap_priv_d *priv = NULL;
struct netmap_adapter *na;
struct netmap_kring *kring;
u_int i, check_all_tx, check_all_rx, want[NR_TXRX], revents = 0;
#define want_tx want[NR_TX]
#define want_rx want[NR_RX]
struct mbq q; /* packets from hw queues to host stack */
void *pwait = dev; /* linux compatibility */
int is_kevent = 0;
enum txrx t;
/*
* In order to avoid nested locks, we need to "double check"
* txsync and rxsync if we decide to do a selrecord().
* retry_tx (and retry_rx, later) prevent looping forever.
*/
int retry_tx = 1, retry_rx = 1;
(void)pwait;
mbq_init(&q);
/*
* XXX kevent has curthread->tp_fop == NULL,
* so devfs_get_cdevpriv() fails. We circumvent this by passing
* priv as the first argument, which is also useful to avoid
* the selrecord() calls, which are not necessary in that case.
*/
if (devfs_get_cdevpriv((void **)&priv) != 0) {
is_kevent = 1;
if (netmap_verbose)
D("called from kevent");
priv = (struct netmap_priv_d *)dev;
}
if (priv == NULL)
return POLLERR;
if (priv->np_nifp == NULL) {
D("No if registered");
return POLLERR;
}
mb(); /* make sure following reads are not from cache */
na = priv->np_na;
if (!nm_netmap_on(na))
return POLLERR;
if (netmap_verbose & 0x8000)
D("device %s events 0x%x", na->name, events);
want_tx = events & (POLLOUT | POLLWRNORM);
want_rx = events & (POLLIN | POLLRDNORM);
/*
* check_all_{tx|rx} are set if the card has more than one queue AND
* the file descriptor is bound to all of them. If so, we sleep on
* the "global" selinfo, otherwise we sleep on individual selinfo
* (FreeBSD only allows two selinfo's per file descriptor).
* The interrupt routine in the driver wakes one or the other
* (or both) depending on which clients are active.
*
* rxsync() is only called if we run out of buffers on a POLLIN.
* txsync() is called if we run out of buffers on POLLOUT, or
* there are pending packets to send. The latter can be disabled
* by passing NETMAP_NO_TX_POLL in the NIOCREGIF call.
*/
check_all_tx = nm_si_user(priv, NR_TX);
check_all_rx = nm_si_user(priv, NR_RX);
/*
* We start with a lock-free round, which is cheap if we have
* slots available. If this fails, then lock and call the sync
* routines.
*/
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; want[t] && i < priv->np_qlast[t]; i++) {
kring = &NMR(na, t)[i];
/* XXX compare ring->cur and kring->tail */
if (!nm_ring_empty(kring->ring)) {
revents |= want[t];
want[t] = 0; /* also breaks the loop */
}
}
}
/*
* If we want to push packets out (priv->np_txpoll) or
* want_tx is still set, we must issue txsync calls
* (on all rings, to prevent the tx rings from stalling).
* XXX should also check cur != hwcur on the tx rings.
* Fortunately, normal tx mode has np_txpoll set.
*/
if (priv->np_txpoll || want_tx) {
/*
* The first round checks if anyone is ready; if not,
* do a selrecord and another round to handle races.
* want_tx goes to 0 if any space is found, and is
* used to skip rings with no pending transmissions.
*/
flush_tx:
for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) {
int found = 0;
kring = &na->tx_rings[i];
if (!want_tx && kring->ring->cur == kring->nr_hwcur)
continue;
/* only one thread does txsync */
if (nm_kr_tryget(kring)) {
/* either busy or stopped
* XXX if the ring is stopped, sleeping would
* be better. In current code, however, we only
* stop the rings for brief intervals (2014-03-14)
*/
if (netmap_verbose)
RD(2, "%p lost race on txring %d, ok",
priv, i);
continue;
}
if (nm_txsync_prologue(kring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
revents |= POLLERR;
} else {
if (kring->nm_sync(kring, 0))
revents |= POLLERR;
else
nm_txsync_finalize(kring);
}
/*
* If we found new slots, notify potential
* listeners on the same ring.
* Since we just did a txsync, look at the copies
* of cur,tail in the kring.
*/
found = kring->rcur != kring->rtail;
nm_kr_put(kring);
if (found) { /* notify other listeners */
revents |= want_tx;
want_tx = 0;
kring->nm_notify(kring, 0);
}
}
if (want_tx && retry_tx && !is_kevent) {
OS_selrecord(td, check_all_tx ?
&na->si[NR_TX] : &na->tx_rings[priv->np_qfirst[NR_TX]].si);
retry_tx = 0;
goto flush_tx;
}
}
/*
* If want_rx is still set scan receive rings.
* Do it on all rings because otherwise we starve.
*/
if (want_rx) {
int send_down = 0; /* transparent mode */
/* two rounds here for race avoidance */
do_retry_rx:
for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) {
int found = 0;
kring = &na->rx_rings[i];
if (nm_kr_tryget(kring)) {
if (netmap_verbose)
RD(2, "%p lost race on rxring %d, ok",
priv, i);
continue;
}
if (nm_rxsync_prologue(kring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
revents |= POLLERR;
}
/* now we can use kring->rcur, rtail */
/*
* transparent mode support: collect packets
* from the rxring(s).
* XXX NR_FORWARD should only be read on
* physical or NIC ports
*/
if (netmap_fwd || kring->ring->flags & NR_FORWARD) {
ND(10, "forwarding some buffers up %d to %d",
kring->nr_hwcur, kring->ring->cur);
netmap_grab_packets(kring, &q, netmap_fwd);
}
if (kring->nm_sync(kring, 0))
revents |= POLLERR;
else
nm_rxsync_finalize(kring);
if (netmap_no_timestamp == 0 ||
kring->ring->flags & NR_TIMESTAMP) {
microtime(&kring->ring->ts);
}
found = kring->rcur != kring->rtail;
nm_kr_put(kring);
if (found) {
revents |= want_rx;
retry_rx = 0;
kring->nm_notify(kring, 0);
}
}
/* transparent mode XXX only during first pass ? */
if (na->na_flags & NAF_HOST_RINGS) {
kring = &na->rx_rings[na->num_rx_rings];
if (check_all_rx
&& (netmap_fwd || kring->ring->flags & NR_FORWARD)) {
/* XXX fix to use kring fields */
if (nm_ring_empty(kring->ring))
send_down = netmap_rxsync_from_host(na, td, dev);
if (!nm_ring_empty(kring->ring))
revents |= want_rx;
}
}
if (retry_rx && !is_kevent)
OS_selrecord(td, check_all_rx ?
&na->si[NR_RX] : &na->rx_rings[priv->np_qfirst[NR_RX]].si);
if (send_down > 0 || retry_rx) {
retry_rx = 0;
if (send_down)
goto flush_tx; /* and retry_rx */
else
goto do_retry_rx;
}
}
/*
* Transparent mode: marked bufs on rx rings between
* kring->nr_hwcur and ring->head
* are passed to the other endpoint.
*
* In this mode we also scan the sw rxring, which in
* turn passes packets up.
*
* XXX Transparent mode at the moment requires binding all
* rings to a single file descriptor.
*/
if (q.head && na->ifp != NULL)
netmap_send_up(na->ifp, &q);
return (revents);
#undef want_tx
#undef want_rx
}
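/*
 * A minimal userspace sketch (illustrative only): a poll loop over a
 * netmap fd bound with NIOCREGIF, matching the handler above. The ring
 * processing details are deliberately omitted.
 */
#if 0
#include <poll.h>
static void
example_poll_loop(int fd) /* hypothetical helper */
{
struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLOUT };
for (;;) {
if (poll(&pfd, 1, 1000) <= 0) /* 1s timeout */
continue;
if (pfd.revents & POLLIN) {
/* rx rings ready: consume slots, advance head/cur */
}
if (pfd.revents & POLLOUT) {
/* tx rings have room: fill slots, then poll again */
}
}
}
#endif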
/*-------------------- driver support routines -------------------*/
static int netmap_hw_krings_create(struct netmap_adapter *);
/* default notify callback */
static int
netmap_notify(struct netmap_kring *kring, int flags)
{
struct netmap_adapter *na = kring->na;
enum txrx t = kring->tx;
OS_selwakeup(&kring->si, PI_NET);
/* optimization: avoid a wake up on the global
* queue if nobody has registered for more
* than one ring
*/
if (na->si_users[t] > 0)
OS_selwakeup(&na->si[t], PI_NET);
return 0;
}
/* called by all routines that create netmap_adapters.
* Attach na to the ifp (if any) and provide defaults
* for optional callbacks. Defaults assume that we
* are creating a hardware netmap_adapter.
*/
int
netmap_attach_common(struct netmap_adapter *na)
{
struct ifnet *ifp = na->ifp;
if (na->num_tx_rings == 0 || na->num_rx_rings == 0) {
D("%s: invalid rings tx %d rx %d",
na->name, na->num_tx_rings, na->num_rx_rings);
return EINVAL;
}
/* ifp is NULL for virtual adapters (bwrap, non-persistent VALE ports,
* pipes, monitors). For bwrap we actually have a non-null ifp for
* use by the external modules, but that is set after this
* function has been called.
* XXX this is ugly, maybe split this function in two (2014-03-14)
*/
if (ifp != NULL) {
WNA(ifp) = na;
/* the following is only needed for na that use the host port.
* XXX do we have something similar for linux ?
*/
#ifdef __FreeBSD__
na->if_input = ifp->if_input; /* for netmap_send_up */
#endif /* __FreeBSD__ */
NETMAP_SET_CAPABLE(ifp);
}
if (na->nm_krings_create == NULL) {
/* we assume that we have been called by a driver,
* since other port types all provide their own
* nm_krings_create
*/
na->nm_krings_create = netmap_hw_krings_create;
na->nm_krings_delete = netmap_hw_krings_delete;
}
if (na->nm_notify == NULL)
na->nm_notify = netmap_notify;
na->active_fds = 0;
if (na->nm_mem == NULL)
/* use the global allocator */
na->nm_mem = &nm_mem;
netmap_mem_get(na->nm_mem);
#ifdef WITH_VALE
if (na->nm_bdg_attach == NULL)
/* no special nm_bdg_attach callback. On VALE
* attach, we need to interpose a bwrap
*/
na->nm_bdg_attach = netmap_bwrap_attach;
#endif
return 0;
}
/* standard cleanup, called by all destructors */
void
netmap_detach_common(struct netmap_adapter *na)
{
if (na->ifp != NULL)
WNA(na->ifp) = NULL; /* XXX do we need this? */
if (na->tx_rings) { /* XXX should not happen */
D("freeing leftover tx_rings");
na->nm_krings_delete(na);
}
netmap_pipe_dealloc(na);
if (na->nm_mem)
netmap_mem_put(na->nm_mem);
bzero(na, sizeof(*na));
free(na, M_DEVBUF);
}
/* Wrapper for the register callback provided by hardware drivers.
- * na->ifp == NULL means the the driver module has been
+ * na->ifp == NULL means the driver module has been
* unloaded, so we cannot call into it.
* Note that module unloading, in our patched linux drivers,
* happens under NMG_LOCK and after having stopped all the
* nic rings (see netmap_detach). This provides sufficient
* protection for the other driver-provided callbacks
* (i.e., nm_config and nm_*xsync), which therefore don't need
* to be wrapped.
*/
static int
netmap_hw_register(struct netmap_adapter *na, int onoff)
{
struct netmap_hw_adapter *hwna =
(struct netmap_hw_adapter*)na;
if (na->ifp == NULL)
return onoff ? ENXIO : 0;
return hwna->nm_hw_register(na, onoff);
}
/*
* Initialize a ``netmap_adapter`` object created by driver on attach.
* We allocate a block of memory with room for a struct netmap_adapter
* plus two sets of N+2 struct netmap_kring (where N is the number
* of hardware rings):
* krings 0..N-1 are for the hardware queues.
* kring N is for the host stack queue
* kring N+1 is only used for the selinfo for all queues. // XXX still true ?
* Return 0 on success, ENOMEM otherwise.
*/
int
netmap_attach(struct netmap_adapter *arg)
{
struct netmap_hw_adapter *hwna = NULL;
// XXX when is arg == NULL ?
struct ifnet *ifp = arg ? arg->ifp : NULL;
if (arg == NULL || ifp == NULL)
goto fail;
hwna = malloc(sizeof(*hwna), M_DEVBUF, M_NOWAIT | M_ZERO);
if (hwna == NULL)
goto fail;
hwna->up = *arg;
hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE;
strncpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name));
hwna->nm_hw_register = hwna->up.nm_register;
hwna->up.nm_register = netmap_hw_register;
if (netmap_attach_common(&hwna->up)) {
free(hwna, M_DEVBUF);
goto fail;
}
netmap_adapter_get(&hwna->up);
#ifdef linux
if (ifp->netdev_ops) {
/* prepare a clone of the netdev ops */
#ifndef NETMAP_LINUX_HAVE_NETDEV_OPS
hwna->nm_ndo.ndo_start_xmit = ifp->netdev_ops;
#else
hwna->nm_ndo = *ifp->netdev_ops;
#endif
}
hwna->nm_ndo.ndo_start_xmit = linux_netmap_start_xmit;
if (ifp->ethtool_ops) {
hwna->nm_eto = *ifp->ethtool_ops;
}
hwna->nm_eto.set_ringparam = linux_netmap_set_ringparam;
#ifdef NETMAP_LINUX_HAVE_SET_CHANNELS
hwna->nm_eto.set_channels = linux_netmap_set_channels;
#endif
if (arg->nm_config == NULL) {
hwna->up.nm_config = netmap_linux_config;
}
#endif /* linux */
if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n",
hwna->up.num_tx_rings, hwna->up.num_tx_desc,
hwna->up.num_rx_rings, hwna->up.num_rx_desc);
return 0;
fail:
D("fail, arg %p ifp %p na %p", arg, ifp, hwna);
if (ifp)
netmap_detach(ifp);
return (hwna ? EINVAL : ENOMEM);
}
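/*
 * Driver-side sketch (illustrative only): how a NIC driver might fill
 * a netmap_adapter and call netmap_attach() from its attach routine.
 * The "mydrv" softc and callbacks are hypothetical names.
 */
#if 0
static int
mydrv_netmap_attach(struct mydrv_softc *sc)
{
struct netmap_adapter na;
bzero(&na, sizeof(na));
na.ifp = sc->ifp;
na.num_tx_rings = na.num_rx_rings = 1;
na.num_tx_desc = na.num_rx_desc = 1024;
na.nm_register = mydrv_netmap_reg; /* hypothetical callbacks */
na.nm_txsync = mydrv_netmap_txsync;
na.nm_rxsync = mydrv_netmap_rxsync;
return (netmap_attach(&na));
}
#endif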
void
NM_DBG(netmap_adapter_get)(struct netmap_adapter *na)
{
if (!na) {
return;
}
refcount_acquire(&na->na_refcount);
}
/* returns 1 iff the netmap_adapter is destroyed */
int
NM_DBG(netmap_adapter_put)(struct netmap_adapter *na)
{
if (!na)
return 1;
if (!refcount_release(&na->na_refcount))
return 0;
if (na->nm_dtor)
na->nm_dtor(na);
netmap_detach_common(na);
return 1;
}
/* nm_krings_create callback for all hardware native adapters */
int
netmap_hw_krings_create(struct netmap_adapter *na)
{
int ret = netmap_krings_create(na, 0);
if (ret == 0) {
/* initialize the mbq for the sw rx ring */
mbq_safe_init(&na->rx_rings[na->num_rx_rings].rx_queue);
ND("initialized sw rx queue %d", na->num_rx_rings);
}
return ret;
}
/*
* Called on module unload by the netmap-enabled drivers
*/
void
netmap_detach(struct ifnet *ifp)
{
struct netmap_adapter *na = NA(ifp);
int skip;
if (!na)
return;
skip = 0;
NMG_LOCK();
netmap_disable_all_rings(ifp);
na->ifp = NULL;
na->na_flags &= ~NAF_NETMAP_ON;
/*
* if the netmap adapter is not native, somebody
* changed it, so we cannot release it here.
* The NULL na->ifp will notify the new owner that
* the driver is gone.
*/
if (na->na_flags & NAF_NATIVE) {
skip = netmap_adapter_put(na);
}
/* give them a chance to notice */
if (skip == 0)
netmap_enable_all_rings(ifp);
NMG_UNLOCK();
}
/*
* Intercept packets from the network stack and pass them
* to netmap as incoming packets on the 'software' ring.
*
* We only store packets in a bounded mbq and then copy them
* in the relevant rxsync routine.
*
* We rely on the OS to make sure that the ifp and na do not go
* away (typically the caller checks for IFF_DRV_RUNNING or the like).
* In nm_register() or whenever there is a reinitialization,
* we make sure to make the mode change visible here.
*/
int
netmap_transmit(struct ifnet *ifp, struct mbuf *m)
{
struct netmap_adapter *na = NA(ifp);
struct netmap_kring *kring;
u_int len = MBUF_LEN(m);
u_int error = ENOBUFS;
struct mbq *q;
int space;
kring = &na->rx_rings[na->num_rx_rings];
// XXX [Linux] we do not need this lock
// if we follow the down/configure/up protocol -gl
// mtx_lock(&na->core_lock);
if (!nm_netmap_on(na)) {
D("%s not in netmap mode anymore", na->name);
error = ENXIO;
goto done;
}
q = &kring->rx_queue;
// XXX reconsider long packets if we handle fragments
if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */
D("%s from_host, drop packet size %d > %d", na->name,
len, NETMAP_BUF_SIZE(na));
goto done;
}
/* protect against rxsync_from_host(), netmap_sw_to_nic()
* and maybe other instances of netmap_transmit (the latter
* not possible on Linux).
* Also avoid overflowing the queue.
*/
mbq_lock(q);
space = kring->nr_hwtail - kring->nr_hwcur;
if (space < 0)
space += kring->nkr_num_slots;
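/* e.g. nkr_num_slots 1024, hwcur 1000, hwtail 100:
 * space = 100 - 1000 = -900, then -900 + 1024 = 124 free slots
 */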
if (space + mbq_len(q) >= kring->nkr_num_slots - 1) { // XXX
RD(10, "%s full hwcur %d hwtail %d qlen %d len %d m %p",
na->name, kring->nr_hwcur, kring->nr_hwtail, mbq_len(q),
len, m);
} else {
mbq_enqueue(q, m);
ND(10, "%s %d bufs in queue len %d m %p",
na->name, mbq_len(q), len, m);
/* notify outside the lock */
m = NULL;
error = 0;
}
mbq_unlock(q);
done:
if (m)
m_freem(m);
/* unconditionally wake up listeners */
kring->nm_notify(kring, 0);
/* this is normally netmap_notify(), but for nics
* connected to a bridge it is netmap_bwrap_intr_notify(),
* which possibly forwards the frames through the switch
*/
return (error);
}
/*
* netmap_reset() is called by the driver routines when reinitializing
* a ring. The driver is in charge of locking to protect the kring.
* If native netmap mode is not set just return NULL.
*/
struct netmap_slot *
netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n,
u_int new_cur)
{
struct netmap_kring *kring;
int new_hwofs, lim;
if (!nm_native_on(na)) {
ND("interface not in native netmap mode");
return NULL; /* nothing to reinitialize */
}
/* XXX note- in the new scheme, we are not guaranteed to be
* under lock (e.g. when called on a device reset).
* In this case, we should set a flag and not trust the
* values too much. In practice: TODO
* - set a RESET flag somewhere in the kring
* - do the processing in a conservative way
* - let the *sync() fixup at the end.
*/
if (tx == NR_TX) {
if (n >= na->num_tx_rings)
return NULL;
kring = na->tx_rings + n;
// XXX check whether we should use hwcur or rcur
new_hwofs = kring->nr_hwcur - new_cur;
} else {
if (n >= na->num_rx_rings)
return NULL;
kring = na->rx_rings + n;
new_hwofs = kring->nr_hwtail - new_cur;
}
lim = kring->nkr_num_slots - 1;
if (new_hwofs > lim)
new_hwofs -= lim + 1;
/* Always set the new offset value and realign the ring. */
if (netmap_verbose)
D("%s %s%d hwofs %d -> %d, hwtail %d -> %d",
na->name,
tx == NR_TX ? "TX" : "RX", n,
kring->nkr_hwofs, new_hwofs,
kring->nr_hwtail,
tx == NR_TX ? lim : kring->nr_hwtail);
kring->nkr_hwofs = new_hwofs;
if (tx == NR_TX) {
kring->nr_hwtail = kring->nr_hwcur + lim;
if (kring->nr_hwtail > lim)
kring->nr_hwtail -= lim + 1;
}
#if 0 // def linux
/* XXX check that the mappings are correct */
/* need ring_nr, adapter->pdev, direction */
buffer_info->dma = dma_map_single(&pdev->dev, addr, adapter->rx_buffer_len, DMA_FROM_DEVICE);
if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) {
D("error mapping rx netmap buffer %d", i);
// XXX fix error handling
}
#endif /* linux */
/*
* Wakeup on the individual and global selwait
* We do the wakeup here, but the ring is not yet reconfigured.
* However, we are under lock so there are no races.
*/
kring->nm_notify(kring, 0);
return kring->ring->slot;
}
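/*
 * Driver-side sketch (illustrative only): a ring re-initialization path
 * calling netmap_reset(), as drivers do from their init routines. The
 * "mydrv" names are hypothetical.
 */
#if 0
static void
mydrv_init_rx_ring(struct mydrv_softc *sc, u_int ring_nr)
{
struct netmap_slot *slot;
slot = netmap_reset(NA(sc->ifp), NR_RX, ring_nr, 0);
if (slot == NULL)
return; /* not in native netmap mode, use regular buffers */
/* program the NIC descriptors from the netmap slots */
}
#endif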
/*
* Dispatch rx/tx interrupts to the netmap rings.
*
* "work_done" is non-null on the RX path, NULL for the TX path.
* We rely on the OS to make sure that there is only one active
* instance per queue, and that there is appropriate locking.
*
* The 'notify' routine depends on what the ring is attached to.
* - for a netmap file descriptor, do a selwakeup on the individual
* waitqueue, plus one on the global one if needed
* (see netmap_notify)
* - for a nic connected to a switch, call the proper forwarding routine
* (see netmap_bwrap_intr_notify)
*/
void
netmap_common_irq(struct ifnet *ifp, u_int q, u_int *work_done)
{
struct netmap_adapter *na = NA(ifp);
struct netmap_kring *kring;
enum txrx t = (work_done ? NR_RX : NR_TX);
q &= NETMAP_RING_MASK;
if (netmap_verbose) {
RD(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
}
if (q >= nma_get_nrings(na, t))
return; // not a physical queue
kring = NMR(na, t) + q;
if (t == NR_RX) {
kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ?
*work_done = 1; /* do not fire napi again */
}
kring->nm_notify(kring, 0);
}
/*
* Default functions to handle rx/tx interrupts from a physical device.
* "work_done" is non-null on the RX path, NULL for the TX path.
*
* If the card is not in netmap mode, simply return 0,
* so that the caller proceeds with regular processing.
* Otherwise call netmap_common_irq() and return 1.
*
* If the card is connected to a netmap file descriptor,
* do a selwakeup on the individual queue, plus one on the global one
* if needed (multiqueue card _and_ there are multiqueue listeners),
* and return 1.
*
* Finally, if called on rx from an interface connected to a switch,
* it calls the proper forwarding routine and returns 1.
*/
int
netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done)
{
struct netmap_adapter *na = NA(ifp);
/*
* XXX emulated netmap mode sets NAF_SKIP_INTR so
* we still use the regular driver even though the previous
* check fails. It is unclear whether we should use
* nm_native_on() here.
*/
if (!nm_netmap_on(na))
return 0;
if (na->na_flags & NAF_SKIP_INTR) {
ND("use regular interrupt");
return 0;
}
netmap_common_irq(ifp, q, work_done);
return 1;
}
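/*
 * Driver-side sketch (illustrative only): an rx interrupt handler
 * handing off to netmap before the regular mbuf path. The "mydrv"
 * names are hypothetical.
 */
#if 0
static void
mydrv_rxeof(struct mydrv_softc *sc, u_int queue)
{
u_int work_done;
if (netmap_rx_irq(sc->ifp, queue, &work_done))
return; /* queue is in netmap mode, skip the mbuf path */
/* ... regular rx processing ... */
}
#endif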
/*
* Module loader and unloader
*
* netmap_init() creates the /dev/netmap device and initializes
* all global variables. Returns 0 on success, errno on failure
* (but there is no chance)
*
* netmap_fini() destroys everything.
*/
static struct cdev *netmap_dev; /* /dev/netmap character device. */
extern struct cdevsw netmap_cdevsw;
void
netmap_fini(void)
{
netmap_uninit_bridges();
if (netmap_dev)
destroy_dev(netmap_dev);
netmap_mem_fini();
NMG_LOCK_DESTROY();
printf("netmap: unloaded module.\n");
}
int
netmap_init(void)
{
int error;
NMG_LOCK_INIT();
error = netmap_mem_init();
if (error != 0)
goto fail;
/*
* MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls
* when the module is compiled in.
* XXX could use make_dev_credv() to get error number
*/
netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD,
&netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600,
"netmap");
if (!netmap_dev)
goto fail;
error = netmap_init_bridges();
if (error)
goto fail;
#ifdef __FreeBSD__
nm_vi_init_index();
#endif
printf("netmap: loaded module\n");
return (0);
fail:
netmap_fini();
return (EINVAL); /* may be incorrect */
}
Index: head/sys/dev/ow/ow.c
===================================================================
--- head/sys/dev/ow/ow.c (revision 300049)
+++ head/sys/dev/ow/ow.c (revision 300050)
@@ -1,641 +1,641 @@
/*-
* Copyright (c) 2015 M. Warner Losh <imp@freebsd.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice unmodified, this list of conditions, and the following
* disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/errno.h>
#include <sys/libkern.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <dev/ow/ow.h>
#include <dev/ow/owll.h>
#include <dev/ow/own.h>
/*
* lldev - link level device
* ndev - network / transport device (this module)
* pdev - presentation device (children of this module)
*/
typedef int ow_enum_fn(device_t, device_t);
typedef int ow_found_fn(device_t, romid_t);
struct ow_softc
{
device_t dev; /* Newbus driver back pointer */
struct mtx mtx; /* bus mutex */
device_t owner; /* bus owner, if != NULL */
};
struct ow_devinfo
{
romid_t romid;
};
static int ow_acquire_bus(device_t ndev, device_t pdev, int how);
static void ow_release_bus(device_t ndev, device_t pdev);
#define OW_LOCK(_sc) mtx_lock(&(_sc)->mtx)
#define OW_UNLOCK(_sc) mtx_unlock(&(_sc)->mtx)
#define OW_LOCK_DESTROY(_sc) mtx_destroy(&_sc->mtx)
#define OW_ASSERT_LOCKED(_sc) mtx_assert(&_sc->mtx, MA_OWNED)
#define OW_ASSERT_UNLOCKED(_sc) mtx_assert(&_sc->mtx, MA_NOTOWNED)
static MALLOC_DEFINE(M_OW, "ow", "Housekeeping data for 1wire bus");
static struct ow_timing timing_regular = {
.t_slot = 60, /* 60 to 120 */
.t_low0 = 60, /* really 60 to 120 */
.t_low1 = 1, /* really 1 to 15 */
.t_release = 45, /* <= 45us */
.t_rec = 1, /* at least 1us */
.t_rdv = 15, /* 15us */
.t_rstl = 480, /* 480us or more */
.t_rsth = 480, /* 480us or more */
.t_pdl = 60, /* 60us to 240us */
.t_pdh = 60, /* 15us to 60us */
.t_lowr = 1, /* 1us */
};
/* NB: Untested */
static struct ow_timing timing_overdrive = {
.t_slot = 11, /* 6us to 16us */
.t_low0 = 6, /* really 6 to 16 */
.t_low1 = 1, /* really 1 to 2 */
.t_release = 4, /* <= 4us */
.t_rec = 1, /* at least 1us */
.t_rdv = 2, /* 2us */
.t_rstl = 48, /* 48us to 80us */
.t_rsth = 48, /* 48us or more */
.t_pdl = 8, /* 8us to 24us */
.t_pdh = 2, /* 2us to 6us */
.t_lowr = 1, /* 1us */
};
static void
ow_send_byte(device_t lldev, struct ow_timing *t, uint8_t byte)
{
int i;
for (i = 0; i < 8; i++)
if (byte & (1 << i))
OWLL_WRITE_ONE(lldev, t);
else
OWLL_WRITE_ZERO(lldev, t);
}
static void
ow_read_byte(device_t lldev, struct ow_timing *t, uint8_t *bytep)
{
int i;
uint8_t byte = 0;
int bit;
for (i = 0; i < 8; i++) {
OWLL_READ_DATA(lldev, t, &bit);
byte |= bit << i;
}
*bytep = byte;
}
static int
ow_send_command(device_t ndev, device_t pdev, struct ow_cmd *cmd)
{
int present, i, bit, tries;
device_t lldev;
struct ow_timing *t;
lldev = device_get_parent(ndev);
/*
* Retry the reset a couple of times before giving up.
*/
tries = 4;
do {
OWLL_RESET_AND_PRESENCE(lldev, &timing_regular, &present);
if (present == 1)
device_printf(ndev, "Reset said no device on bus?.\n");
} while (present == 1 && tries-- > 0);
if (present == 1) {
device_printf(ndev, "Reset said the device wasn't there.\n");
return ENOENT; /* No devices acked the RESET */
}
if (present == -1) {
device_printf(ndev, "Reset discovered bus wired wrong.\n");
return ENOENT;
}
for (i = 0; i < cmd->rom_len; i++)
ow_send_byte(lldev, &timing_regular, cmd->rom_cmd[i]);
for (i = 0; i < cmd->rom_read_len; i++)
ow_read_byte(lldev, &timing_regular, cmd->rom_read + i);
if (cmd->xpt_len) {
/*
* Per AN937, the reset pulse and ROM level are always
* done with the regular timings. Certain ROM commands
* put the device into overdrive mode for the remainder
* of the data transfer, which is why we have to pass the
* timings here. Commands that need to be handled like this
* are expected to be flagged by the client.
*/
t = (cmd->flags & OW_FLAG_OVERDRIVE) ?
&timing_overdrive : &timing_regular;
for (i = 0; i < cmd->xpt_len; i++)
ow_send_byte(lldev, t, cmd->xpt_cmd[i]);
if (cmd->flags & OW_FLAG_READ_BIT) {
memset(cmd->xpt_read, 0, (cmd->xpt_read_len + 7) / 8);
for (i = 0; i < cmd->xpt_read_len; i++) {
OWLL_READ_DATA(lldev, t, &bit);
cmd->xpt_read[i / 8] |= bit << (i % 8);
}
} else {
for (i = 0; i < cmd->xpt_read_len; i++)
ow_read_byte(lldev, t, cmd->xpt_read + i);
}
}
return 0;
}
static int
ow_search_rom(device_t lldev, device_t dev)
{
struct ow_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.rom_cmd[0] = SEARCH_ROM;
cmd.rom_len = 1;
return ow_send_command(lldev, dev, &cmd);
}
#if 0
static int
ow_alarm_search(device_t lldev, device_t dev)
{
struct ow_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.rom_cmd[0] = ALARM_SEARCH;
cmd.rom_len = 1;
return ow_send_command(lldev, dev, &cmd);
}
#endif
static int
ow_add_child(device_t dev, romid_t romid)
{
struct ow_devinfo *di;
device_t child;
di = malloc(sizeof(*di), M_OW, M_WAITOK);
di->romid = romid;
child = device_add_child(dev, NULL, -1);
if (child == NULL) {
free(di, M_OW);
return ENOMEM;
}
device_set_ivars(child, di);
return (0);
}
static device_t
ow_child_by_romid(device_t dev, romid_t romid)
{
device_t *children, retval, child;
int nkid, i;
struct ow_devinfo *di;
if (device_get_children(dev, &children, &nkid) != 0)
return (NULL);
retval = NULL;
for (i = 0; i < nkid; i++) {
child = children[i];
di = device_get_ivars(child);
if (di->romid == romid) {
retval = child;
break;
}
}
free(children, M_TEMP);
return (retval);
}
/*
* CRC generator table -- taken from AN937 DOW CRC LOOKUP FUNCTION Table 2
*/
const uint8_t ow_crc_table[] = {
0, 94, 188, 226, 97, 63, 221, 131, 194, 156, 126, 32, 163, 253, 31, 65,
157, 195, 33, 127, 252, 162, 64, 30, 95, 1, 227, 189, 62, 96, 130, 220,
35, 125, 159, 193, 66, 28, 254, 160, 225, 191, 93, 3, 128, 222, 60, 98,
190, 224, 2, 92, 223, 129, 99, 61, 124, 34, 192, 158, 29, 67, 161, 255,
70, 24, 250, 164, 39, 121, 155, 197, 132, 218, 56, 102, 229, 187, 89, 7,
219, 133, 103, 57, 186, 228, 6, 88, 25, 71, 165, 251, 120, 38, 196, 154,
101, 59, 217, 135, 4, 90, 184, 230, 167, 249, 27, 69, 198, 152, 122, 36,
248, 166, 68, 26, 153, 199, 37, 123, 58, 100, 134, 216, 91, 5, 231, 185,
140, 210, 48, 110, 237, 179, 81, 15, 78, 16, 242, 172, 47, 113, 147, 205,
17, 79, 173, 243, 112, 46, 204, 146, 211, 141, 111, 49, 178, 236, 14, 80,
175, 241, 19, 77, 206, 144, 114, 44, 109, 51, 209, 143, 12, 82, 176, 238,
50, 108, 142, 208, 83, 13, 239, 177, 240, 174, 76, 18, 145, 207, 45, 115,
202, 148, 118, 40, 171, 245, 23, 73, 8, 86, 180, 234, 105, 55, 213, 139,
87, 9, 235, 181, 54, 104, 138, 212, 149, 203, 41, 119, 244, 170, 72, 22,
233, 183, 85, 11, 136, 214, 52, 106, 43, 117, 151, 201, 74, 20, 246, 168,
116, 42, 200, 150, 21, 75, 169, 247, 182, 232, 10, 84, 215, 137, 107, 53
};
/*
* Converted from DO_CRC page 131 AN937
*/
static uint8_t
ow_crc(device_t ndev, device_t pdev, uint8_t *buffer, size_t len)
{
uint8_t crc = 0;
int i;
for (i = 0; i < len; i++)
crc = ow_crc_table[crc ^ buffer[i]];
return crc;
}
static int
ow_check_crc(romid_t romid)
{
return ow_crc(NULL, NULL, (uint8_t *)&romid, sizeof(romid)) == 0;
}
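/*
 * The top byte of a romid is the CRC of the low 7 bytes, so running the
 * table-driven CRC over all 8 bytes of a valid romid yields 0.
 * Illustrative check (the romid value below is made up):
 */
#if 0
static void
example_crc_check(void)
{
romid_t id = 0x3600000031b95528ULL; /* hypothetical romid */
if (!ow_check_crc(id))
printf("bad romid\n");
}
#endif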
static int
ow_device_found(device_t dev, romid_t romid)
{
/* XXX Move this up into enumerate? */
/*
* All valid ROM IDs have a valid CRC. Check that first.
*/
if (!ow_check_crc(romid)) {
device_printf(dev, "Device romid %8D failed CRC.\n",
&romid, ":");
return EINVAL;
}
/*
* If we've seen this child before, don't add a new one for it.
*/
if (ow_child_by_romid(dev, romid) != NULL)
return 0;
return ow_add_child(dev, romid);
}
static int
ow_enumerate(device_t dev, ow_enum_fn *enumfp, ow_found_fn *foundfp)
{
device_t lldev = device_get_parent(dev);
int first, second, i, dir, prior, last, err, retries;
uint64_t probed, last_mask;
int sanity = 10;
prior = -1;
last_mask = 0;
retries = 0;
last = -2;
err = ow_acquire_bus(dev, dev, OWN_DONTWAIT);
if (err != 0)
return err;
while (last != -1) {
if (sanity-- < 0) {
printf("Reached the sanity limit\n");
return EIO;
}
again:
probed = 0;
last = -1;
/*
* See AN937 section 5.II.C.3 for the algorithm (though a bit
* poorly stated). The search command forces each device to
* send ROM ID bits one at a time (first the bit, then the
- * complement) the the master (us) sends back a bit. If the
+ * complement) the master (us) sends back a bit. If the
* device's bit doesn't match what we send back, that device
* stops sending bits back. So each time through we remember
* where we made the last decision (always 0). If there's a
* conflict there this time (and there will be in the absence
* of a hardware failure) we go with 1. This way, we prune the
* devices on the bus and wind up with a unique ROM. We know
* we're done when we detect no new conflicts. The same
* algorithm is used for devices in alarm state as well.
*
* In addition, experience has shown that sometimes devices
* stop responding in the middle of enumeration, so try this
* step again a few times when that happens. It is unclear if
* this is due to a noisy electrical environment or some odd
* timing issue.
*/
/*
* The enumeration command should be sent successfully; if not,
* we have big issues on the bus, so punt. Lower layers report
* any unusual errors, so we don't need to here.
*/
err = enumfp(dev, dev);
if (err != 0) {
ow_release_bus(dev, dev);
return (err);
}
for (i = 0; i < 64; i++) {
OWLL_READ_DATA(lldev, &timing_regular, &first);
OWLL_READ_DATA(lldev, &timing_regular, &second);
switch (first | second << 1) {
case 0: /* Conflict */
if (i < prior)
dir = (last_mask >> i) & 1;
else
dir = i == prior;
if (dir == 0)
last = i;
break;
case 1: /* 1 then 0 -> 1 for all */
dir = 1;
break;
case 2: /* 0 then 1 -> 0 for all */
dir = 0;
break;
case 3:
/*
* No device responded. This is unexpected, but
* experience has shown that on some platforms
* we miss a timing window, or otherwise have
* an issue. Start this step over. Since we've
* not updated prior yet, we can just jump to
* the top of the loop for a re-do of this step.
*/
printf("oops, starting over\n");
if (++retries > 5) {
ow_release_bus(dev, dev);
return (EIO);
}
goto again;
default: /* NOTREACHED */
__unreachable();
}
if (dir) {
OWLL_WRITE_ONE(lldev, &timing_regular);
probed |= 1ull << i;
} else {
OWLL_WRITE_ZERO(lldev, &timing_regular);
}
}
retries = 0;
foundfp(dev, probed);
last_mask = probed;
prior = last;
}
ow_release_bus(dev, dev);
return (0);
}
static int
ow_probe(device_t dev)
{
device_set_desc(dev, "1 Wire Bus");
return (BUS_PROBE_GENERIC);
}
static int
ow_attach(device_t ndev)
{
struct ow_softc *sc;
/*
* Find all the devices on the bus. We don't probe / attach them in the
* enumeration phase. We do this because we want to allow the probe /
* attach routines of the child drivers to have as full an access to the
* bus as possible. While we reset things before the next step of the
* search (so it would likely be OK to allow access by the clients to
* the bus), it is more conservative to find them all first, and then
* attach the devices. This also allows the child devices to have
* more knowledge of the bus. We also ignore errors from the enumeration
* because they might happen after we've found a few devices.
*/
sc = device_get_softc(ndev);
sc->dev = ndev;
mtx_init(&sc->mtx, device_get_nameunit(sc->dev), "ow", MTX_DEF);
ow_enumerate(ndev, ow_search_rom, ow_device_found);
return bus_generic_attach(ndev);
}
static int
ow_detach(device_t ndev)
{
device_t *children, child;
int nkid, i;
struct ow_devinfo *di;
struct ow_softc *sc;
sc = device_get_softc(ndev);
/*
* Detach all the children first. This blocks until any threads
* have stopped, etc.
*/
bus_generic_detach(ndev);
/*
* We delete all the children, and free up the ivars
*/
if (device_get_children(ndev, &children, &nkid) != 0)
return ENOMEM;
for (i = 0; i < nkid; i++) {
child = children[i];
di = device_get_ivars(child);
free(di, M_OW);
device_delete_child(ndev, child);
}
free(children, M_TEMP);
OW_LOCK_DESTROY(sc);
return 0;
}
/*
* Not sure this is really needed. I'm having trouble figuring out what
* location means in the context of the one wire bus.
*/
static int
ow_child_location_str(device_t dev, device_t child, char *buf,
size_t buflen)
{
*buf = '\0';
return (0);
}
static int
ow_child_pnpinfo_str(device_t dev, device_t child, char *buf,
size_t buflen)
{
struct ow_devinfo *di;
di = device_get_ivars(child);
snprintf(buf, buflen, "romid=%8D", &di->romid, ":");
return (0);
}
static int
ow_read_ivar(device_t dev, device_t child, int which, uintptr_t *result)
{
struct ow_devinfo *di;
romid_t **ptr;
di = device_get_ivars(child);
switch (which) {
case OW_IVAR_FAMILY:
*result = di->romid & 0xff;
break;
case OW_IVAR_ROMID:
ptr = (romid_t **)result;
*ptr = &di->romid;
break;
default:
return EINVAL;
}
return 0;
}
static int
ow_write_ivar(device_t dev, device_t child, int which, uintptr_t value)
{
return EINVAL;
}
static int
ow_print_child(device_t ndev, device_t pdev)
{
int retval = 0;
struct ow_devinfo *di;
di = device_get_ivars(pdev);
retval += bus_print_child_header(ndev, pdev);
retval += printf(" romid %8D", &di->romid, ":");
retval += bus_print_child_footer(ndev, pdev);
return retval;
}
static void
ow_probe_nomatch(device_t ndev, device_t pdev)
{
struct ow_devinfo *di;
di = device_get_ivars(pdev);
device_printf(ndev, "romid %8D: no driver\n", &di->romid, ":");
}
static int
ow_acquire_bus(device_t ndev, device_t pdev, int how)
{
struct ow_softc *sc;
sc = device_get_softc(ndev);
OW_ASSERT_UNLOCKED(sc);
OW_LOCK(sc);
if (sc->owner != NULL) {
if (sc->owner == pdev)
panic("%s: %s recursively acquiring the bus.\n",
device_get_nameunit(ndev),
device_get_nameunit(pdev));
if (how == OWN_DONTWAIT) {
OW_UNLOCK(sc);
return EWOULDBLOCK;
}
while (sc->owner != NULL)
mtx_sleep(sc, &sc->mtx, 0, "owbuswait", 0);
}
sc->owner = pdev;
OW_UNLOCK(sc);
return 0;
}
static void
ow_release_bus(device_t ndev, device_t pdev)
{
struct ow_softc *sc;
sc = device_get_softc(ndev);
OW_ASSERT_UNLOCKED(sc);
OW_LOCK(sc);
if (sc->owner == NULL)
panic("%s: %s releasing unowned bus.", device_get_nameunit(ndev),
device_get_nameunit(pdev));
if (sc->owner != pdev)
panic("%s: %s don't own the bus. %s does. game over.",
device_get_nameunit(ndev), device_get_nameunit(pdev),
device_get_nameunit(sc->owner));
sc->owner = NULL;
wakeup(sc);
OW_UNLOCK(sc);
}
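/*
 * Sketch of how a child (presentation layer) driver might use the bus
 * interface exported below (illustrative only; READ_ROM and the error
 * handling are assumptions):
 */
#if 0
static int
example_child_io(device_t pdev)
{
struct ow_cmd cmd;
device_t ndev = device_get_parent(pdev);
int err;
err = OWN_ACQUIRE_BUS(ndev, pdev, OWN_DONTWAIT);
if (err != 0)
return (err);
memset(&cmd, 0, sizeof(cmd));
cmd.rom_cmd[0] = READ_ROM; /* single-device ROM read */
cmd.rom_len = 1;
cmd.rom_read_len = sizeof(romid_t);
err = OWN_SEND_COMMAND(ndev, pdev, &cmd);
OWN_RELEASE_BUS(ndev, pdev);
return (err);
}
#endif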
devclass_t ow_devclass;
static device_method_t ow_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, ow_probe),
DEVMETHOD(device_attach, ow_attach),
DEVMETHOD(device_detach, ow_detach),
/* Bus interface */
DEVMETHOD(bus_child_pnpinfo_str, ow_child_pnpinfo_str),
DEVMETHOD(bus_child_location_str, ow_child_location_str),
DEVMETHOD(bus_read_ivar, ow_read_ivar),
DEVMETHOD(bus_write_ivar, ow_write_ivar),
DEVMETHOD(bus_print_child, ow_print_child),
DEVMETHOD(bus_probe_nomatch, ow_probe_nomatch),
/* One Wire Network/Transport layer interface */
DEVMETHOD(own_send_command, ow_send_command),
DEVMETHOD(own_acquire_bus, ow_acquire_bus),
DEVMETHOD(own_release_bus, ow_release_bus),
DEVMETHOD(own_crc, ow_crc),
{ 0, 0 }
};
static driver_t ow_driver = {
"ow",
ow_methods,
sizeof(struct ow_softc),
};
DRIVER_MODULE(ow, owc, ow_driver, ow_devclass, 0, 0);
MODULE_VERSION(ow, 1);
Index: head/sys/dev/pms/RefTisa/sallsdk/spc/mpi.c
===================================================================
--- head/sys/dev/pms/RefTisa/sallsdk/spc/mpi.c (revision 300049)
+++ head/sys/dev/pms/RefTisa/sallsdk/spc/mpi.c (revision 300050)
@@ -1,980 +1,980 @@
/*******************************************************************************
**
*Copyright (c) 2014 PMC-Sierra, Inc. All rights reserved.
*
*Redistribution and use in source and binary forms, with or without modification, are permitted provided
*that the following conditions are met:
*1. Redistributions of source code must retain the above copyright notice, this list of conditions and the
*following disclaimer.
*2. Redistributions in binary form must reproduce the above copyright notice,
*this list of conditions and the following disclaimer in the documentation and/or other materials provided
*with the distribution.
*
*THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED
*WARRANTIES,INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
*FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
*FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
*NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
*BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
*LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
*SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE
********************************************************************************/
/*******************************************************************************/
/*! \file mpi.c
* \brief This file is the MPI library implementing the MPI functions
*
* The file implements the MPI Library functions.
*
*/
/*******************************************************************************/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/pms/config.h>
#include <dev/pms/RefTisa/sallsdk/spc/saglobal.h>
#ifdef SA_ENABLE_TRACE_FUNCTIONS
#ifdef siTraceFileID
#undef siTraceFileID
#endif
#define siTraceFileID 'A'
#endif
#ifdef LOOPBACK_MPI
extern int loopback;
#endif
/*******************************************************************************/
/*******************************************************************************/
/*******************************************************************************/
/* FUNCTIONS */
/*******************************************************************************/
/*******************************************************************************/
/** \fn void mpiRequirementsGet(mpiConfig_t* config, mpiMemReq_t* memoryRequirement)
* \brief Retrieves the MPI layer resource requirements
* \param config MPI configuration for the Host MPI Message Unit
* \param memoryRequirement Returned data structure as defined by mpiMemReq_t
* that holds the different chunks of memory that are required
*
* The mpiRequirementsGet() function is used to determine the resource requirements
* for the SPC device interface
*
* Return: None
*/
/*******************************************************************************/
void mpiRequirementsGet(mpiConfig_t* config, mpiMemReq_t* memoryRequirement)
{
bit32 qIdx, numq;
mpiMemReq_t* memoryMap;
SA_DBG2(("Entering function:mpiRequirementsGet\n"));
SA_ASSERT((NULL != config), "config argument cannot be null");
memoryMap = memoryRequirement;
memoryMap->count = 0;
/* MPI Memory region 0 for MSGU(AAP1) Event Log for fw */
memoryMap->region[memoryMap->count].numElements = 1;
memoryMap->region[memoryMap->count].elementSize = sizeof(bit8) * config->mainConfig.eventLogSize;
memoryMap->region[memoryMap->count].totalLength = sizeof(bit8) * config->mainConfig.eventLogSize;
memoryMap->region[memoryMap->count].alignment = 32;
memoryMap->region[memoryMap->count].type = AGSA_DMA_MEM;
SA_DBG2(("mpiRequirementsGet:eventLogSize region[%d] 0x%X\n",memoryMap->count,memoryMap->region[memoryMap->count].totalLength ));
memoryMap->count++;
SA_DBG2(("mpiRequirementsGet:eventLogSize region[%d] 0x%X\n",memoryMap->count,memoryMap->region[memoryMap->count].totalLength ));
/* MPI Memory region 1 for IOP Event Log for fw */
memoryMap->region[memoryMap->count].numElements = 1;
memoryMap->region[memoryMap->count].elementSize = sizeof(bit8) * config->mainConfig.IOPeventLogSize;
memoryMap->region[memoryMap->count].totalLength = sizeof(bit8) * config->mainConfig.IOPeventLogSize;
memoryMap->region[memoryMap->count].alignment = 32;
memoryMap->region[memoryMap->count].type = AGSA_DMA_MEM;
SA_DBG2(("mpiRequirementsGet:IOPeventLogSize region[%d] 0x%X\n",memoryMap->count,memoryMap->region[memoryMap->count].totalLength ));
memoryMap->count++;
/* MPI Memory region 2 for consumer Index of inbound queues */
memoryMap->region[memoryMap->count].numElements = 1;
memoryMap->region[memoryMap->count].elementSize = sizeof(bit32) * config->numInboundQueues;
memoryMap->region[memoryMap->count].totalLength = sizeof(bit32) * config->numInboundQueues;
memoryMap->region[memoryMap->count].alignment = 4;
memoryMap->region[memoryMap->count].type = AGSA_DMA_MEM;
SA_DBG2(("mpiRequirementsGet:numInboundQueues region[%d] 0x%X\n",memoryMap->count,memoryMap->region[memoryMap->count].totalLength ));
memoryMap->count++;
/* MPI Memory region 3 for producer Index of outbound queues */
memoryMap->region[memoryMap->count].numElements = 1;
memoryMap->region[memoryMap->count].elementSize = sizeof(bit32) * config->numOutboundQueues;
memoryMap->region[memoryMap->count].totalLength = sizeof(bit32) * config->numOutboundQueues;
memoryMap->region[memoryMap->count].alignment = 4;
memoryMap->region[memoryMap->count].type = AGSA_DMA_MEM;
SA_DBG2(("mpiRequirementsGet:numOutboundQueues region[%d] 0x%X\n",memoryMap->count,memoryMap->region[memoryMap->count].totalLength ));
memoryMap->count++;
/* MPI Memory regions 4, ... for the inbound queues - depends on configuration */
numq = 0;
for(qIdx = 0; qIdx < config->numInboundQueues; qIdx++)
{
if(0 != config->inboundQueues[qIdx].numElements)
{
bit32 memSize = config->inboundQueues[qIdx].numElements * config->inboundQueues[qIdx].elementSize;
bit32 remainder = memSize & 127;
/* Calculate the size of this queue padded to 128 bytes */
if (remainder > 0)
{
memSize += (128 - remainder);
}
if (numq == 0)
{
memoryMap->region[memoryMap->count].numElements = 1;
memoryMap->region[memoryMap->count].elementSize = memSize;
memoryMap->region[memoryMap->count].totalLength = memSize;
memoryMap->region[memoryMap->count].alignment = 128;
memoryMap->region[memoryMap->count].type = AGSA_CACHED_DMA_MEM;
}
else
{
memoryMap->region[memoryMap->count].elementSize += memSize;
memoryMap->region[memoryMap->count].totalLength += memSize;
}
numq++;
if ((0 == ((qIdx + 1) % MAX_QUEUE_EACH_MEM)) ||
(qIdx == (bit32)(config->numInboundQueues - 1)))
{
SA_DBG2(("mpiRequirementsGet: (inboundQueues) memoryMap->region[%d].elementSize = %d\n",
memoryMap->count, memoryMap->region[memoryMap->count].elementSize));
SA_DBG2(("mpiRequirementsGet: (inboundQueues) memoryMap->region[%d].numElements = %d\n",
memoryMap->count, memoryMap->region[memoryMap->count].numElements));
memoryMap->count++;
numq = 0;
}
}
}
/* MPI Memory regions for the outbound queues - depends on configuration */
numq = 0;
for(qIdx = 0; qIdx < config->numOutboundQueues; qIdx++)
{
if(0 != config->outboundQueues[qIdx].numElements)
{
bit32 memSize = config->outboundQueues[qIdx].numElements * config->outboundQueues[qIdx].elementSize;
bit32 remainder = memSize & 127;
/* Calculate the size of this queue padded to 128 bytes */
if (remainder > 0)
{
memSize += (128 - remainder);
}
if (numq == 0)
{
memoryMap->region[memoryMap->count].numElements = 1;
memoryMap->region[memoryMap->count].elementSize = memSize;
memoryMap->region[memoryMap->count].totalLength = memSize;
memoryMap->region[memoryMap->count].alignment = 128;
memoryMap->region[memoryMap->count].type = AGSA_CACHED_DMA_MEM;
}
else
{
memoryMap->region[memoryMap->count].elementSize += memSize;
memoryMap->region[memoryMap->count].totalLength += memSize;
}
numq++;
if ((0 == ((qIdx + 1) % MAX_QUEUE_EACH_MEM)) ||
(qIdx == (bit32)(config->numOutboundQueues - 1)))
{
SA_DBG2(("mpiRequirementsGet: (outboundQueues) memoryMap->region[%d].elementSize = %d\n",
memoryMap->count, memoryMap->region[memoryMap->count].elementSize));
SA_DBG2(("mpiRequirementsGet: (outboundQueues) memoryMap->region[%d].numElements = %d\n",
memoryMap->count, memoryMap->region[memoryMap->count].numElements));
memoryMap->count++;
numq = 0;
}
}
}
}
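/*******************************************************************************/
/* Illustrative caller sketch (not part of this file): filling a minimal
 * mpiConfig_t and retrieving the memory map. The sizes below are made-up
 * example values, not recommended settings.
 */
/*******************************************************************************/
#if 0
static void exampleRequirements(void)
{
  mpiConfig_t config = {0};
  mpiMemReq_t req;
  bit32 i;
  config.mainConfig.eventLogSize = 512;
  config.mainConfig.IOPeventLogSize = 512;
  config.numInboundQueues = 1;
  config.numOutboundQueues = 1;
  config.inboundQueues[0].numElements = 256;
  config.inboundQueues[0].elementSize = 64;
  config.outboundQueues[0].numElements = 256;
  config.outboundQueues[0].elementSize = 64;
  mpiRequirementsGet(&config, &req);
  for (i = 0; i < req.count; i++)
  {
    /* allocate req.region[i].totalLength bytes aligned to
       req.region[i].alignment for the caller's DMA pool */
  }
}
#endif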
/*******************************************************************************/
/** \fn mpiMsgFreeGet(mpiICQueue_t *circularQ, bit16 messageSize, void** messagePtr)
* \brief Retrieves a free message buffer from an inbound queue
* \param circularQ Pointer to an inbound circular queue
* \param messageSize Requested message size in bytes - only support 64 bytes/element
* \param messagePtr Pointer to the free message buffer payload (not including message header) or NULL if no free message buffers are available
*
* This function is used to retrieve a free message buffer for the given inbound queue of at least
* messageSize bytes.
* The caller can use the returned buffer to construct the message and then call mpiMsgProduce()
* to deliver the message to the device message unit or mpiMsgInvalidate() if the message buffer
* is not going to be used
*
* Return:
* AGSA_RC_SUCCESS if messagePtr contains a valid message buffer pointer
* AGSA_RC_FAILURE if messageSize is larger than the elementSize of the queue
* AGSA_RC_BUSY if there are no free message buffers (queue full)
*/
/*******************************************************************************/
GLOBAL FORCEINLINE
bit32
mpiMsgFreeGet(
mpiICQueue_t *circularQ,
bit16 messageSize,
void** messagePtr
)
{
bit32 offset;
agsaRoot_t *agRoot=circularQ->agRoot;
mpiMsgHeader_t *msgHeader;
bit8 bcCount = 1; /* only support single buffer */
SA_DBG4(("Entering function:mpiMsgFreeGet\n"));
SA_ASSERT(NULL != circularQ, "circularQ cannot be null");
SA_ASSERT(NULL != messagePtr, "messagePtr argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue is 0");
/* Check if the requested message size can be allocated in this queue */
if(messageSize > circularQ->elementSize)
{
SA_DBG1(("mpiMsgFreeGet: Message Size (%d) is larger than Q element size (%d)\n",messageSize,circularQ->elementSize));
return AGSA_RC_FAILURE;
}
/* Fetch the current consumer index */
OSSA_READ_LE_32(circularQ->agRoot, &circularQ->consumerIdx, circularQ->ciPointer, 0);
/* if inbound queue is full, return busy */
/* This queue full logic may only work for bc == 1 ( == ) */
/* ( pi + bc ) % size > ci does not fully work for bc > 1 */
/* To do - support bc > 1 case and wrap around case */
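/* e.g. numElements = 256, producerIdx = 255, consumerIdx = 0: */
/* (255 + 1) % 256 == 0 == consumerIdx, so the queue is reported full; */
/* one slot is always left empty to distinguish full from empty */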
if (((circularQ->producerIdx + bcCount) % circularQ->numElements) == circularQ->consumerIdx)
{
*messagePtr = NULL;
smTrace(hpDBG_VERY_LOUD,"Za", (((circularQ->producerIdx & 0xFFF) << 16) | (circularQ->consumerIdx & 0xFFF) ));
/* TP:Za IQ PI CI */
ossaHwRegRead(agRoot, MSGU_HOST_SCRATCH_PAD_0);
SA_DBG1(("mpiMsgFreeGet: %d + %d == %d AGSA_RC_BUSY\n",circularQ->producerIdx,bcCount,circularQ->consumerIdx));
return AGSA_RC_BUSY;
}
smTrace(hpDBG_VERY_LOUD,"Zb", (((circularQ->producerIdx & 0xFFF) << 16) | (circularQ->consumerIdx & 0xFFF) ));
/* TP:Zb IQ PI CI */
/* get memory IOMB buffer address */
offset = circularQ->producerIdx * circularQ->elementSize;
/* increment to next bcCount element */
circularQ->producerIdx = (circularQ->producerIdx + bcCount) % circularQ->numElements;
/* Adds that distance to the base of the region virtual address plus the message header size*/
msgHeader = (mpiMsgHeader_t*) (((bit8 *)(circularQ->memoryRegion.virtPtr)) + offset);
SA_DBG3(("mpiMsgFreeGet: msgHeader = %p Offset = 0x%x\n", (void *)msgHeader, offset));
/* Sets the message buffer in "allocated" state */
/* bc always is 1 for inbound queue */
/* temporarily store it in the native endian format, when the rest of the */
/* header is filled, this would be converted to Little Endian */
msgHeader->Header = (1<<24);
*messagePtr = ((bit8*)msgHeader) + sizeof(mpiMsgHeader_t);
return AGSA_RC_SUCCESS;
}
#ifdef LOOPBACK_MPI
GLOBAL bit32 mpiMsgFreeGetOQ(mpiOCQueue_t *circularQ, bit16 messageSize, void** messagePtr)
{
bit32 offset;
mpiMsgHeader_t *msgHeader;
bit8 bcCount = 1; /* only support single buffer */
SA_DBG4(("Entering function:mpiMsgFreeGet\n"));
SA_ASSERT(NULL != circularQ, "circularQ cannot be null");
SA_ASSERT(NULL != messagePtr, "messagePtr argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue is 0");
/* Check if the requested message size can be allocated in this queue */
if(messageSize > circularQ->elementSize)
{
SA_DBG1(("mpiMsgFreeGet: Message Size is not fit in\n"));
return AGSA_RC_FAILURE;
}
/* Fetch the current consumer index */
//OSSA_READ_LE_32(circularQ->agRoot, &circularQ->consumerIdx, circularQ->ciPointer, 0);
/* if inbound queue is full, return busy */
/* This queue full logic may only work for bc == 1 ( == ) */
/* ( pi + bc ) % size > ci does not fully work for bc > 1 */
/* To do - support bc > 1 case and wrap around case */
if (((circularQ->producerIdx + bcCount) % circularQ->numElements) == circularQ->consumerIdx)
{
*messagePtr = NULL;
return AGSA_RC_BUSY;
}
/* get memory IOMB buffer address */
offset = circularQ->producerIdx * circularQ->elementSize;
/* increment to next bcCount element */
circularQ->producerIdx = (circularQ->producerIdx + bcCount) % circularQ->numElements;
/* Adds that distance to the base of the region virtual address plus the message header size*/
msgHeader = (mpiMsgHeader_t*) (((bit8 *)(circularQ->memoryRegion.virtPtr)) + offset);
SA_DBG3(("mpiMsgFreeGet: msgHeader = %p Offset = 0x%x\n", (void *)msgHeader, offset));
/* Sets the message buffer in "allocated" state */
/* bc always is 1 for inbound queue */
/* temporarily store it in the native endian format, when the rest of the */
/* header is filled, this would be converted to Little Endian */
msgHeader->Header = (1<<24);
*messagePtr = ((bit8*)msgHeader) + sizeof(mpiMsgHeader_t);
return AGSA_RC_SUCCESS;
}
#endif
/*******************************************************************************/
/** \fn mpiMsgProduce(mpiICQueue_t *circularQ, void *messagePtr, mpiMsgCategory_t category, bit16 opCode, bit8 responseQueue)
* \brief Add an IOMB header, then send to an inbound queue and update the producer index
* \param circularQ Pointer to an inbound queue
* \param messagePtr Pointer to the message buffer payload (not including message header))
* \param category Message category (ETHERNET, FC, SAS-SATA, SCSI)
* \param opCode Message operation code
* \param responseQueue If the message requires a response, this parameter indicates the outbound queue for the response
*
* This function is used to submit a message buffer, previously obtained from an mpiMsgFreeGet()
* function call, to the given inbound queue
*
* Return:
* AGSA_RC_SUCCESS if the message has been posted successfully
*/
/*******************************************************************************/
#ifdef FAST_IO_TEST
GLOBAL bit32 mpiMsgPrepare(
mpiICQueue_t *circularQ,
void *messagePtr,
mpiMsgCategory_t category,
bit16 opCode,
bit8 responseQueue,
bit8 hiPriority
)
{
mpiMsgHeader_t *msgHeader;
bit32 bc;
bit32 Header = 0;
bit32 hpriority = 0;
SA_DBG4(("Entering function:mpiMsgProduce\n"));
SA_ASSERT(NULL != circularQ, "circularQ argument cannot be null");
SA_ASSERT(NULL != messagePtr, "messagePtr argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue"
" is 0");
SA_ASSERT(MPI_MAX_OUTBOUND_QUEUES > responseQueue, "oQueue ID is wrong");
/* Obtains the address of the entire message buffer, including the header */
msgHeader = (mpiMsgHeader_t*)(((bit8*)messagePtr) - sizeof(mpiMsgHeader_t));
/* Read the BC from the header; it's stored in native endian format when
the message is initially allocated */
bc = (((msgHeader->Header) >> SHIFT24) & BC_MASK);
SA_DBG6(("mpiMsgProduce: msgHeader bc %d\n", bc));
if (circularQ->priority)
hpriority = 1;
/* Checks the message is in "allocated" state */
SA_ASSERT(0 != bc, "The message buffer is not in \"allocated\" state "
"(bc == 0)");
Header = ((V_BIT << SHIFT31) | (hpriority << SHIFT30) |
((bc & BC_MASK) << SHIFT24) |
((responseQueue & OBID_MASK) << SHIFT16) |
((category & CAT_MASK) << SHIFT12 ) | (opCode & OPCODE_MASK));
/* pre flush the IOMB cache line */
ossaCachePreFlush(circularQ->agRoot,
(void *)circularQ->memoryRegion.appHandle,
(void *)msgHeader, circularQ->elementSize * bc);
OSSA_WRITE_LE_32(circularQ->agRoot, msgHeader, OSSA_OFFSET_OF(mpiMsgHeader_t,
Header), Header);
/* flush the IOMB cache line */
ossaCacheFlush(circularQ->agRoot, (void *)circularQ->memoryRegion.appHandle,
(void *)msgHeader, circularQ->elementSize * bc);
MPI_DEBUG_TRACE( circularQ->qNumber,
((circularQ->producerIdx << 16 ) | circularQ->consumerIdx),
MPI_DEBUG_TRACE_IBQ,
(void *)msgHeader,
circularQ->elementSize);
ossaLogIomb(circularQ->agRoot,
circularQ->qNumber,
TRUE,
(void *)msgHeader,
circularQ->elementSize);
return AGSA_RC_SUCCESS;
} /* mpiMsgPrepare */
GLOBAL bit32 mpiMsgProduce(
mpiICQueue_t *circularQ,
void *messagePtr,
mpiMsgCategory_t category,
bit16 opCode,
bit8 responseQueue,
bit8 hiPriority
)
{
bit32 ret;
ret = mpiMsgPrepare(circularQ, messagePtr, category, opCode, responseQueue,
hiPriority);
if (ret == AGSA_RC_SUCCESS)
{
/* update PI of inbound queue */
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->PIPCIBar,
circularQ->PIPCIOffset,
circularQ->producerIdx);
}
return ret;
}
GLOBAL void mpiIBQMsgSend(mpiICQueue_t *circularQ)
{
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->PIPCIBar,
circularQ->PIPCIOffset,
circularQ->producerIdx);
}
#else /* FAST_IO_TEST */
GLOBAL FORCEINLINE
bit32
mpiMsgProduce(
mpiICQueue_t *circularQ,
void *messagePtr,
mpiMsgCategory_t category,
bit16 opCode,
bit8 responseQueue,
bit8 hiPriority
)
{
mpiMsgHeader_t *msgHeader;
bit32 bc;
bit32 Header = 0;
bit32 hpriority = 0;
#ifdef SA_FW_TEST_BUNCH_STARTS
#define Need_agRootDefined 1
#endif /* SA_FW_TEST_BUNCH_STARTS */
#ifdef SA_ENABLE_TRACE_FUNCTIONS
bit32 i;
#define Need_agRootDefined 1
#endif /* SA_ENABLE_TRACE_FUNCTIONS */
#ifdef MPI_DEBUG_TRACE_ENABLE
#define Need_agRootDefined 1
#endif /* MPI_DEBUG_TRACE_ENABLE */
#ifdef Need_agRootDefined
agsaRoot_t *agRoot=circularQ->agRoot;
#ifdef SA_FW_TEST_BUNCH_STARTS
agsaLLRoot_t *saRoot = agNULL;
saRoot = agRoot->sdkData;
#endif /* SA_FW_TEST_BUNCH_STARTS */
#undef Need_agRootDefined
#endif /* Need_agRootDefined */
SA_DBG4(("Entering function:mpiMsgProduce\n"));
SA_ASSERT(NULL != circularQ, "circularQ argument cannot be null");
SA_ASSERT(NULL != messagePtr, "messagePtr argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue is 0");
SA_ASSERT(MPI_MAX_OUTBOUND_QUEUES > responseQueue, "oQueue ID is wrong");
/* REB Start extra trace */
smTraceFuncEnter(hpDBG_VERY_LOUD,"22");
/* REB End extra trace */
/* Obtains the address of the entire message buffer, including the header */
msgHeader = (mpiMsgHeader_t*)(((bit8*)messagePtr) - sizeof(mpiMsgHeader_t));
/* Read the BC from the header; it is stored in native endian format when the message was initially allocated */
bc = (((msgHeader->Header) >> SHIFT24) & BC_MASK);
SA_DBG6(("mpiMsgProduce: msgHeader bc %d\n", bc));
if (circularQ->priority)
{
hpriority = 1;
}
/* Checks the message is in "allocated" state */
SA_ASSERT(0 != bc, "The message buffer is not in \"allocated\" state (bc == 0)");
Header = ((V_BIT << SHIFT31) |
(hpriority << SHIFT30) |
((bc & BC_MASK) << SHIFT24) |
((responseQueue & OBID_MASK) << SHIFT16) |
((category & CAT_MASK) << SHIFT12 ) |
(opCode & OPCODE_MASK));
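/*
* Resulting IOMB header layout, as implied by the shifts and masks above
* (field widths assume the usual SALLSDK mask definitions):
* bit 31 V (valid), bit 30 high priority, bits 28:24 buffer count (BC),
* bits 23:16 outbound queue ID for the response, bits 15:12 category,
* bits 11:0 opcode.
*/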
/* pre flush the cache line */
ossaCachePreFlush(circularQ->agRoot, (void *)circularQ->memoryRegion.appHandle, (void *)msgHeader, circularQ->elementSize * bc);
OSSA_WRITE_LE_32(circularQ->agRoot, msgHeader, OSSA_OFFSET_OF(mpiMsgHeader_t, Header), Header);
/* flush the cache line for IOMB */
ossaCacheFlush(circularQ->agRoot, (void *)circularQ->memoryRegion.appHandle, (void *)msgHeader, circularQ->elementSize * bc);
MPI_DEBUG_TRACE( circularQ->qNumber,
((circularQ->producerIdx << 16 ) | circularQ->consumerIdx),
MPI_DEBUG_TRACE_IBQ,
(void *)msgHeader,
circularQ->elementSize);
ossaLogIomb(circularQ->agRoot,
circularQ->qNumber,
TRUE,
(void *)msgHeader,
circularQ->elementSize);
#if defined(SALLSDK_DEBUG)
MPI_IBQ_IOMB_LOG(circularQ->qNumber, (void *)msgHeader, circularQ->elementSize);
#endif /* SALLSDK_DEBUG */
/* REB Start extra trace */
#ifdef SA_ENABLE_TRACE_FUNCTIONS
smTrace(hpDBG_IOMB,"M1",circularQ->qNumber);
/* TP:M1 circularQ->qNumber */
for (i=0; i<((bit32)bc*(circularQ->elementSize/4)); i++)
{
/* The -sizeof(mpiMsgHeader_t) accounts for the IOMB header that precedes the messagePtr payload */
smTrace(hpDBG_IOMB,"MD",*( ((bit32 *)((bit8 *)messagePtr - sizeof(mpiMsgHeader_t))) + i));
/* TP:MD Inbound IOMB Dword */
}
#endif /* SA_ENABLE_TRACE_FUNCTIONS */
/* update PI of inbound queue */
#ifdef SA_FW_TEST_BUNCH_STARTS
if(saRoot->BunchStarts_Enable)
{
if (circularQ->BunchStarts_QPending == 0)
{
// store tick value for 1st deferred IO only
circularQ->BunchStarts_QPendingTick = saRoot->timeTick;
}
// update queue's pending count
circularQ->BunchStarts_QPending++;
// update global pending count
saRoot->BunchStarts_Pending++;
SA_DBG1(("mpiMsgProduce: BunchStarts - Global Pending %d\n", saRoot->BunchStarts_Pending));
SA_DBG1(("mpiMsgProduce: BunchStarts - QPending %d, Q-%d\n", circularQ->BunchStarts_QPending, circularQ->qNumber));
smTraceFuncExit(hpDBG_VERY_LOUD, 'a', "22");
return AGSA_RC_SUCCESS;
}
saRoot->BunchStarts_Pending = 0;
circularQ->BunchStarts_QPending = 0;
#endif /* SA_FW_TEST_BUNCH_STARTS */
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->PIPCIBar,
circularQ->PIPCIOffset,
circularQ->producerIdx);
smTraceFuncExit(hpDBG_VERY_LOUD, 'b', "22");
return AGSA_RC_SUCCESS;
} /* mpiMsgProduce */
#endif /* FAST_IO_TEST */
#ifdef SA_FW_TEST_BUNCH_STARTS
void mpiMsgProduceBunch( agsaLLRoot_t *saRoot)
{
mpiICQueue_t *circularQ;
bit32 inq;
for(inq=0; ((inq < saRoot->QueueConfig.numInboundQueues) && saRoot->BunchStarts_Pending); inq++)
{
circularQ= &saRoot->inboundQueue[inq];
/* If any pending IOs present then either process if BunchStarts_Threshold
* IO limit reached or if the timer has popped
*/
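/*
* Worked example (editor's addition, values illustrative): with
* BunchStarts_Threshold = 8 and BunchStarts_TimeoutTicks = 1, the PI
* doorbell below is rung once eight IOMBs are pending on this queue, or
* one tick after the first deferred IOMB, whichever comes first.
*/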
if (circularQ->BunchStarts_QPending &&
((circularQ->BunchStarts_QPending >= saRoot->BunchStarts_Threshold) ||
((saRoot->timeTick - circularQ->BunchStarts_QPendingTick) >= saRoot->BunchStarts_TimeoutTicks))
)
{
if(circularQ->qNumber != inq)
{
SA_DBG1(("mpiMsgProduceBunch:circularQ->qNumber(%d) != inq(%d)\n",circularQ->qNumber, inq));
}
SA_DBG1(("mpiMsgProduceBunch: IQ=%d, PI=%d\n", inq, circularQ->producerIdx));
SA_DBG1(("mpiMsgProduceBunch: Qpending=%d, TotPending=%d\n", circularQ->BunchStarts_QPending, saRoot->BunchStarts_Pending));
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->PIPCIBar,
circularQ->PIPCIOffset,
circularQ->producerIdx);
// update global pending count
saRoot->BunchStarts_Pending -= circularQ->BunchStarts_QPending;
// clear current queue's pending count after processing
circularQ->BunchStarts_QPending = 0;
circularQ->BunchStarts_QPendingTick = saRoot->timeTick;
}
}
}
#endif /* SA_FW_TEST_BUNCH_STARTS */
/*******************************************************************************/
/** \fn mpiMsgConsume(mpiOCQueue_t *circularQ, void *messagePtr1,
* mpiMsgCategory_t * pCategory, bit16 * pOpCode, bit8 * pBC)
* \brief Get a received message
* \param circularQ Pointer to a outbound queue
* \param messagePtr1 Pointer to the returned message buffer or NULL if no valid message
* \param pCategory Pointer to Message category (ETHERNET, FC, SAS-SATA, SCSI)
* \param pOpCode Pointer to Message operation code
* \param pBC Pointer to buffer count
*
* Consume a received message from the specified outbound queue
*
* Return:
* AGSA_RC_SUCCESS if the message has been retrieved successfully
* AGSA_RC_BUSY if the circular queue is empty
*/
/*******************************************************************************/
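/*
* Illustrative usage sketch (editor's addition, not part of the driver):
* the interrupt path typically drains an outbound queue by looping on
* mpiMsgConsume() until it returns AGSA_RC_BUSY (queue empty), handing
* each element back with mpiMsgFreeSet().  The dispatch step is a
* placeholder.
*/
#if 0 /* sketch only, never compiled */
static void exampleDrainOutboundQueue(mpiOCQueue_t *obq)
{
void *payload;
mpiMsgCategory_t category;
bit16 opCode;
bit8 bc;
while (mpiMsgConsume(obq, &payload, &category, &opCode, &bc) == AGSA_RC_SUCCESS)
{
/* ... dispatch on opCode/category ... */
mpiMsgFreeSet(obq, payload, bc); /* advance CI past the bc elements */
}
}
#endif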
GLOBAL FORCEINLINE
bit32
mpiMsgConsume(
mpiOCQueue_t *circularQ,
void ** messagePtr1,
mpiMsgCategory_t *pCategory,
bit16 *pOpCode,
bit8 *pBC
)
{
mpiMsgHeader_t *msgHeader;
bit32 msgHeader_tmp;
SA_ASSERT(NULL != circularQ, "circularQ argument cannot be null");
SA_ASSERT(NULL != messagePtr1, "messagePtr1 argument cannot be null");
SA_ASSERT(NULL != pCategory, "pCategory argument cannot be null");
SA_ASSERT(NULL != pOpCode, "pOpCode argument cannot be null");
SA_ASSERT(NULL != pBC, "pBC argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue is 0");
do
{
/* If there are not-yet-delivered messages ... */
if(circularQ->producerIdx != circularQ->consumerIdx)
{
/* Get the pointer to the circular queue buffer element */
msgHeader = (mpiMsgHeader_t*) ((bit8 *)(circularQ->memoryRegion.virtPtr) + circularQ->consumerIdx * circularQ->elementSize);
#ifdef LOOPBACK_MPI
if (!loopback)
#endif
/* invalidate the cache line of IOMB */
ossaCacheInvalidate(circularQ->agRoot, (void *)circularQ->memoryRegion.appHandle, (void *)msgHeader, circularQ->elementSize);
/* read header */
OSSA_READ_LE_32(circularQ->agRoot, &msgHeader_tmp, msgHeader, 0);
SA_DBG4(("mpiMsgConsume: process an IOMB, header=0x%x\n", msgHeader_tmp));
SA_ASSERT(0 != (msgHeader_tmp & HEADER_BC_MASK), "The bc field in the header is 0");
#ifdef TEST
/* for debugging */
if (0 == (msgHeader_tmp & HEADER_BC_MASK))
{
SA_DBG1(("mpiMsgConsume: CI=%d PI=%d msgHeader=%p\n", circularQ->consumerIdx, circularQ->producerIdx, (void *)msgHeader));
circularQ->consumerIdx = (circularQ->consumerIdx + 1) % circularQ->numElements;
/* update the CI of outbound queue - skip this blank IOMB, for test only */
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->CIPCIBar,
circularQ->CIPCIOffset,
circularQ->consumerIdx);
return AGSA_RC_FAILURE;
}
#endif
/* get message pointer of valid entry */
if (0 != (msgHeader_tmp & HEADER_V_MASK))
{
SA_ASSERT(circularQ->consumerIdx <= circularQ->numElements, "Multi-buffer messages cannot wrap around");
if (OPC_OUB_SKIP_ENTRY != (msgHeader_tmp & OPCODE_MASK))
{
/* ... return the message payload */
*messagePtr1 = ((bit8*)msgHeader) + sizeof(mpiMsgHeader_t);
*pCategory = (mpiMsgCategory_t)(msgHeader_tmp >> SHIFT12) & CAT_MASK;
*pOpCode = (bit16)(msgHeader_tmp & OPCODE_MASK);
*pBC = (bit8)((msgHeader_tmp >> SHIFT24) & BC_MASK);
/* invalidate the cache line for IOMB */
#ifdef LOOPBACK_MPI
if (!loopback)
#endif
ossaCacheInvalidate(circularQ->agRoot, (void *)circularQ->memoryRegion.appHandle, (void *)msgHeader, (*pBC - 1) * circularQ->elementSize);
#if defined(SALLSDK_DEBUG)
SA_DBG3(("mpiMsgConsume: CI=%d PI=%d msgHeader=%p\n", circularQ->consumerIdx, circularQ->producerIdx, (void *)msgHeader));
MPI_OBQ_IOMB_LOG(circularQ->qNumber, (void *)msgHeader, circularQ->elementSize);
#endif
return AGSA_RC_SUCCESS;
}
else
{
SA_DBG3(("mpiMsgConsume: SKIP_ENTRIES_IOMB BC=%d\n", (msgHeader_tmp >> SHIFT24) & BC_MASK));
/* Update consumerIdx and skip this entry */
circularQ->consumerIdx = (circularQ->consumerIdx + ((msgHeader_tmp >> SHIFT24) & BC_MASK)) % circularQ->numElements;
/* clean header to 0 */
msgHeader_tmp = 0;
/*ossaSingleThreadedEnter(agRoot, LL_IOREQ_OBQ_LOCK);*/
OSSA_WRITE_LE_32(circularQ->agRoot, msgHeader, OSSA_OFFSET_OF(mpiMsgHeader_t, Header), msgHeader_tmp);
/* update the CI of outbound queue */
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->CIPCIBar,
circularQ->CIPCIOffset,
circularQ->consumerIdx);
/* Update the producer index */
OSSA_READ_LE_32(circularQ->agRoot, &circularQ->producerIdx, circularQ->piPointer, 0);
/*ossaSingleThreadedLeave(agRoot, LL_IOREQ_OBQ_LOCK); */
}
}
else
{
/* V bit is not set */
#if defined(SALLSDK_DEBUG)
agsaRoot_t *agRoot=circularQ->agRoot;
SA_DBG1(("mpiMsgConsume: V bit not set, PI=%d CI=%d msgHeader=%p\n", circularQ->producerIdx, circularQ->consumerIdx,(void *)msgHeader));
SA_DBG1(("mpiMsgConsume: V bit not set, 0x%08X Q=%d \n", msgHeader_tmp, circularQ->qNumber));
MPI_DEBUG_TRACE(MPI_DEBUG_TRACE_QNUM_ERROR + circularQ->qNumber,
((circularQ->producerIdx << 16 ) | circularQ->consumerIdx),
MPI_DEBUG_TRACE_OBQ,
(void *)(((bit8*)msgHeader) - sizeof(mpiMsgHeader_t)),
circularQ->elementSize);
circularQ->consumerIdx = (circularQ->consumerIdx + 1) % circularQ->numElements; /* skip this entry, advancing CI with wraparound */
OSSA_WRITE_LE_32(circularQ->agRoot, msgHeader, OSSA_OFFSET_OF(mpiMsgHeader_t, Header), msgHeader_tmp);
ossaHwRegWriteExt(agRoot,
circularQ->CIPCIBar,
circularQ->CIPCIOffset,
circularQ->consumerIdx);
MPI_OBQ_IOMB_LOG(circularQ->qNumber, (void *)msgHeader, circularQ->elementSize);
#endif
SA_DBG1(("mpiMsgConsume: V bit is not set!!!!! HW CI=%d\n", ossaHwRegReadExt(circularQ->agRoot, circularQ->CIPCIBar, circularQ->CIPCIOffset) ));
SA_ASSERT(0, "V bit is not set");
return AGSA_RC_FAILURE;
}
}
else
{
/* Update the producer index from SPC */
OSSA_READ_LE_32(circularQ->agRoot, &circularQ->producerIdx, circularQ->piPointer, 0);
}
} while(circularQ->producerIdx != circularQ->consumerIdx); /* while not-yet-delivered messages remain */
#ifdef TEST
SA_DBG4(("mpiMsgConsume: Outbound queue is empty.\n"));
#endif
/* report empty */
return AGSA_RC_BUSY;
}
/*******************************************************************************/
/** \fn mpiMsgFreeSet(mpiOCQueue_t *circularQ, void *messagePtr1, bit8 bc)
* \brief Returns a received message to the outbound queue
* \param circularQ Pointer to an outbound queue
* \param messagePtr1 Pointer to the returned message buffer to free
* \param bc Buffer count, i.e. the number of queue elements to free (greater than 1 for multi-element messages)
*
- * Returns consumed and processed message to the the specified outbounf queue
+ * Returns consumed and processed message to the specified outbound queue
*
* Return:
* AGSA_RC_SUCCESS if the message has been returned successfully
*/
/*******************************************************************************/
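/*
* Note (editor's addition): the consumer index below advances with
* wraparound, e.g. with numElements = 256, consumerIdx = 255 and bc = 2,
* the new consumerIdx is (255 + 2) % 256 = 1.
*/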
GLOBAL FORCEINLINE
bit32
mpiMsgFreeSet(
mpiOCQueue_t *circularQ,
void *messagePtr1,
bit8 bc
)
{
mpiMsgHeader_t *msgHeader;
SA_DBG4(("Entering function:mpiMsgFreeSet\n"));
SA_ASSERT(NULL != circularQ, "circularQ argument cannot be null");
SA_ASSERT(NULL != messagePtr1, "messagePtr1 argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue is 0");
/* Obtains the address of the entire message buffer, including the header */
msgHeader = (mpiMsgHeader_t*)(((bit8*)messagePtr1) - sizeof(mpiMsgHeader_t));
if ( ((mpiMsgHeader_t*)((bit8*)circularQ->memoryRegion.virtPtr + circularQ->consumerIdx * circularQ->elementSize)) != msgHeader)
{
/* The IOMB that CI points to does not match the message header - this should never happen */
SA_DBG1(("mpiMsgFreeSet: Wrong CI, Q %d ConsumeIdx = %d msgHeader 0x%08x\n",circularQ->qNumber, circularQ->consumerIdx ,msgHeader->Header));
SA_DBG1(("mpiMsgFreeSet: msgHeader %p != %p\n", msgHeader,((mpiMsgHeader_t*)((bit8*)circularQ->memoryRegion.virtPtr + circularQ->consumerIdx * circularQ->elementSize))));
#ifdef LOOPBACK_MPI
if (!loopback)
#endif
/* Update the producer index from SPC */
OSSA_READ_LE_32(circularQ->agRoot, &circularQ->producerIdx, circularQ->piPointer, 0);
#if defined(SALLSDK_DEBUG)
SA_DBG3(("mpiMsgFreeSet: ProducerIdx = %d\n", circularQ->producerIdx));
#endif
return AGSA_RC_SUCCESS;
}
/* ... free the circular queue buffer elements associated with the message ... */
/*... by incrementing the consumer index (with wraparound) */
circularQ->consumerIdx = (circularQ->consumerIdx + bc) % circularQ->numElements;
/* Invalidates this circular queue buffer element */
msgHeader->Header &= ~HEADER_V_MASK; /* Clear Valid bit to indicate IOMB consumed by host */
SA_ASSERT(circularQ->consumerIdx <= circularQ->numElements, "Multi-buffer messages cannot wrap around");
/* update the CI of outbound queue */
#ifdef LOOPBACK_MPI
if (!loopback)
#endif
{
ossaHwRegWriteExt(circularQ->agRoot,
circularQ->CIPCIBar,
circularQ->CIPCIOffset,
circularQ->consumerIdx);
/* Update the producer index from SPC */
OSSA_READ_LE_32(circularQ->agRoot, &circularQ->producerIdx, circularQ->piPointer, 0);
}
#if defined(SALLSDK_DEBUG)
SA_DBG5(("mpiMsgFreeSet: CI=%d PI=%d\n", circularQ->consumerIdx, circularQ->producerIdx));
#endif
return AGSA_RC_SUCCESS;
}
#ifdef TEST
GLOBAL bit32 mpiRotateQnumber(agsaRoot_t *agRoot)
{
agsaLLRoot_t *saRoot = (agsaLLRoot_t *) (agRoot->sdkData);
bit32 denom;
bit32 ret = 0;
/* inbound queue number */
saRoot->IBQnumber++;
denom = saRoot->QueueConfig.numInboundQueues;
if (saRoot->IBQnumber % denom == 0) /* % Qnumber*/
{
saRoot->IBQnumber = 0;
}
SA_DBG3(("mpiRotateQnumber: IBQnumber %d\n", saRoot->IBQnumber));
/* outbound queue number */
saRoot->OBQnumber++;
denom = saRoot->QueueConfig.numOutboundQueues;
if (saRoot->OBQnumber % denom == 0) /* % Qnumber*/
{
saRoot->OBQnumber = 0;
}
SA_DBG3(("mpiRotateQnumber: OBQnumber %d\n", saRoot->OBQnumber));
ret = (saRoot->OBQnumber << SHIFT16) | saRoot->IBQnumber;
return ret;
}
#endif
#ifdef LOOPBACK_MPI
GLOBAL bit32 mpiMsgProduceOQ(
mpiOCQueue_t *circularQ,
void *messagePtr,
mpiMsgCategory_t category,
bit16 opCode,
bit8 responseQueue,
bit8 hiPriority
)
{
mpiMsgHeader_t *msgHeader;
bit32 bc;
bit32 Header = 0;
bit32 hpriority = 0;
SA_DBG4(("Entering function:mpiMsgProduceOQ\n"));
SA_ASSERT(NULL != circularQ, "circularQ argument cannot be null");
SA_ASSERT(NULL != messagePtr, "messagePtr argument cannot be null");
SA_ASSERT(0 != circularQ->numElements, "The number of elements in this queue"
" is 0");
SA_ASSERT(MPI_MAX_OUTBOUND_QUEUES > responseQueue, "oQueue ID is wrong");
/* REB Start extra trace */
smTraceFuncEnter(hpDBG_VERY_LOUD, "2I");
/* REB End extra trace */
/* Obtains the address of the entire message buffer, including the header */
msgHeader = (mpiMsgHeader_t*)(((bit8*)messagePtr) - sizeof(mpiMsgHeader_t));
/* Read the BC from the header; it is stored in native endian format when
the message was initially allocated */
SA_DBG4(("mpiMsgProduceOQ: msgHeader %p opcode %d pi/ci %d / %d\n", msgHeader, opCode, circularQ->producerIdx, circularQ->consumerIdx));
bc = (((msgHeader->Header) >> SHIFT24) & BC_MASK);
SA_DBG6(("mpiMsgProduceOQ: msgHeader bc %d\n", bc));
if (circularQ->priority)
hpriority = 1;
/* Checks the message is in "allocated" state */
SA_ASSERT(0 != bc, "The message buffer is not in \"allocated\" state "
"(bc == 0)");
Header = ((V_BIT << SHIFT31) | (hpriority << SHIFT30) |
((bc & BC_MASK) << SHIFT24) |
((responseQueue & OBID_MASK) << SHIFT16) |
((category & CAT_MASK) << SHIFT12 ) | (opCode & OPCODE_MASK));
/* pre flush the IOMB cache line */
//ossaCachePreFlush(circularQ->agRoot,
// (void *)circularQ->memoryRegion.appHandle,
// (void *)msgHeader, circularQ->elementSize * bc);
OSSA_WRITE_LE_32(circularQ->agRoot, msgHeader, OSSA_OFFSET_OF(mpiMsgHeader_t,
Header), Header);
/* flush the IOMB cache line */
//ossaCacheFlush(circularQ->agRoot, (void *)circularQ->memoryRegion.appHandle,
// (void *)msgHeader, circularQ->elementSize * bc);
MPI_DEBUG_TRACE( circularQ->qNumber,
((circularQ->producerIdx << 16 ) | circularQ->consumerIdx),
MPI_DEBUG_TRACE_OBQ,
(void *)msgHeader,
circularQ->elementSize);
ossaLogIomb(circularQ->agRoot,
circularQ->qNumber,
TRUE,
(void *)msgHeader,
circularQ->elementSize);
smTraceFuncExit(hpDBG_VERY_LOUD, 'a', "2I");
return AGSA_RC_SUCCESS;
} /* mpiMsgProduceOQ */
#endif
Index: head/sys/dev/pms/RefTisa/sat/src/smsat.c
===================================================================
--- head/sys/dev/pms/RefTisa/sat/src/smsat.c (revision 300049)
+++ head/sys/dev/pms/RefTisa/sat/src/smsat.c (revision 300050)
@@ -1,20820 +1,20820 @@
/*******************************************************************************
*Copyright (c) 2014 PMC-Sierra, Inc. All rights reserved.
*
*Redistribution and use in source and binary forms, with or without modification, are permitted provided
*that the following conditions are met:
*1. Redistributions of source code must retain the above copyright notice, this list of conditions and the
*following disclaimer.
*2. Redistributions in binary form must reproduce the above copyright notice,
*this list of conditions and the following disclaimer in the documentation and/or other materials provided
*with the distribution.
*
*THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED
*WARRANTIES,INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
*FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
*FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
*NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
*BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
*LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
*SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE
********************************************************************************/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/pms/config.h>
#include <dev/pms/freebsd/driver/common/osenv.h>
#include <dev/pms/freebsd/driver/common/ostypes.h>
#include <dev/pms/freebsd/driver/common/osdebug.h>
#include <dev/pms/RefTisa/tisa/api/titypes.h>
#include <dev/pms/RefTisa/sallsdk/api/sa.h>
#include <dev/pms/RefTisa/sallsdk/api/saapi.h>
#include <dev/pms/RefTisa/sallsdk/api/saosapi.h>
#include <dev/pms/RefTisa/sat/api/sm.h>
#include <dev/pms/RefTisa/sat/api/smapi.h>
#include <dev/pms/RefTisa/sat/api/tdsmapi.h>
#include <dev/pms/RefTisa/sat/src/smdefs.h>
#include <dev/pms/RefTisa/sat/src/smproto.h>
#include <dev/pms/RefTisa/sat/src/smtypes.h>
/* start smapi defined APIs */
osGLOBAL bit32
smRegisterDevice(
smRoot_t *smRoot,
agsaDevHandle_t *agDevHandle,
smDeviceHandle_t *smDeviceHandle,
agsaDevHandle_t *agExpDevHandle,
bit32 phyID,
bit32 DeviceType
)
{
smDeviceData_t *oneDeviceData = agNULL;
SM_DBG2(("smRegisterDevice: start\n"));
if (smDeviceHandle == agNULL)
{
SM_DBG1(("smRegisterDevice: smDeviceHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
if (agDevHandle == agNULL)
{
SM_DBG1(("smRegisterDevice: agDevHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
oneDeviceData = smAddToSharedcontext(smRoot, agDevHandle, smDeviceHandle, agExpDevHandle, phyID);
if (oneDeviceData != agNULL)
{
oneDeviceData->satDeviceType = DeviceType;
return SM_RC_SUCCESS;
}
else
{
return SM_RC_FAILURE;
}
}
osGLOBAL bit32
smDeregisterDevice(
smRoot_t *smRoot,
agsaDevHandle_t *agDevHandle,
smDeviceHandle_t *smDeviceHandle
)
{
bit32 status = SM_RC_FAILURE;
SM_DBG2(("smDeregisterDevice: start\n"));
if (smDeviceHandle == agNULL)
{
SM_DBG1(("smDeregisterDevice: smDeviceHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
if (agDevHandle == agNULL)
{
SM_DBG1(("smDeregisterDevice: agDevHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
status = smRemoveFromSharedcontext(smRoot, agDevHandle, smDeviceHandle);
return status;
}
osGLOBAL bit32
smIOAbort(
smRoot_t *smRoot,
smIORequest_t *tasktag
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
agsaRoot_t *agRoot;
smIORequestBody_t *smIORequestBody = agNULL;
smIORequestBody_t *smIONewRequestBody = agNULL;
agsaIORequest_t *agIORequest = agNULL; /* IO to be aborted */
bit32 status = SM_RC_FAILURE;
agsaIORequest_t *agAbortIORequest; /* abort IO itself */
smIORequestBody_t *smAbortIORequestBody;
#if 1
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
void *osMemHandle;
#endif
smSatIOContext_t *satIOContext;
smSatInternalIo_t *satIntIo;
smSatIOContext_t *satAbortIOContext;
SM_DBG1(("smIOAbort: start\n"));
SM_DBG2(("smIOAbort: tasktag %p\n", tasktag));
/*
alloc smIORequestBody for abort itself
call saSATAAbort()
*/
agRoot = smAllShared->agRoot;
smIORequestBody = (smIORequestBody_t *)tasktag->smData;
if (smIORequestBody == agNULL)
{
SM_DBG1(("smIOAbort: smIORequestBody is NULL!!!\n"));
return SM_RC_FAILURE;
}
/* needs to distinguish internally generated or externally generated */
satIOContext = &(smIORequestBody->transport.SATA.satIOContext);
satIntIo = satIOContext->satIntIoContext;
if (satIntIo == agNULL)
{
SM_DBG2(("smIOAbort: External, OS generated\n"));
agIORequest = &(smIORequestBody->agIORequest);
}
else
{
SM_DBG2(("smIOAbort: Internal, SM generated\n"));
smIONewRequestBody = (smIORequestBody_t *)satIntIo->satIntRequestBody;
agIORequest = &(smIONewRequestBody->agIORequest);
}
/*
allocate smAbortIORequestBody for abort request itself
*/
#if 1
/* allocating agIORequest for abort itself */
memAllocStatus = tdsmAllocMemory(
smRoot,
&osMemHandle,
(void **)&smAbortIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(smIORequestBody_t),
agTRUE
);
if (memAllocStatus != SM_RC_SUCCESS)
{
/* let os process IO */
SM_DBG1(("smIOAbort: tdsmAllocMemory failed...!!!\n"));
return SM_RC_FAILURE;
}
if (smAbortIORequestBody == agNULL)
{
/* let os process IO */
SM_DBG1(("smIOAbort: tdsmAllocMemory returned NULL smAbortIORequestBody!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smAbortIORequestBody);
/* setup task management structure */
smAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
satAbortIOContext = &(smAbortIORequestBody->transport.SATA.satIOContext);
satAbortIOContext->smRequestBody = smAbortIORequestBody;
smAbortIORequestBody->smDevHandle = smIORequestBody->smDevHandle;
/* initialize agIORequest */
agAbortIORequest = &(smAbortIORequestBody->agIORequest);
agAbortIORequest->osData = (void *) smAbortIORequestBody;
agAbortIORequest->sdkData = agNULL; /* LL takes care of this */
/* remember IO to be aborted */
smAbortIORequestBody->smIOToBeAbortedRequest = tasktag;
status = saSATAAbort(agRoot, agAbortIORequest, 0, agNULL, 0, agIORequest, smaSATAAbortCB);
SM_DBG2(("smIOAbort: return status=0x%x\n", status));
#endif /* 1 */
if (status == AGSA_RC_SUCCESS)
{
return SM_RC_SUCCESS;
}
else
{
SM_DBG1(("smIOAbort: failed to call saSATAAbort, status=%d!!!\n", status));
tdsmFreeMemory(smRoot,
smAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(smIORequestBody_t)
);
return SM_RC_FAILURE;
}
}
osGLOBAL bit32
smIOAbortAll(
smRoot_t *smRoot,
smDeviceHandle_t *smDeviceHandle
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
agsaRoot_t *agRoot;
bit32 status = SM_RC_FAILURE;
agsaIORequest_t *agAbortIORequest;
smIORequestBody_t *smAbortIORequestBody;
smSatIOContext_t *satAbortIOContext;
smDeviceData_t *oneDeviceData = agNULL;
agsaDevHandle_t *agDevHandle;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
void *osMemHandle;
SM_DBG2(("smIOAbortAll: start\n"));
agRoot = smAllShared->agRoot;
if (smDeviceHandle == agNULL)
{
SM_DBG1(("smIOAbortAll: smDeviceHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
if (oneDeviceData == agNULL)
{
SM_DBG1(("smIOAbortAll: oneDeviceData is NULL!!!\n"));
return SM_RC_FAILURE;
}
if (oneDeviceData->valid == agFALSE)
{
SM_DBG1(("smIOAbortAll: oneDeviceData is not valid, did %d !!!\n", oneDeviceData->id));
return SM_RC_FAILURE;
}
agDevHandle = oneDeviceData->agDevHandle;
if (agDevHandle == agNULL)
{
SM_DBG1(("smIOAbortAll: agDevHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
/*
smAbortIORequestBody = smDequeueIO(smRoot);
if (smAbortIORequestBody == agNULL)
{
SM_DBG1(("smIOAbortAll: empty freeIOList!!!\n"));
return SM_RC_FAILURE;
}
*/
/* allocating agIORequest for abort itself */
memAllocStatus = tdsmAllocMemory(
smRoot,
&osMemHandle,
(void **)&smAbortIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(smIORequestBody_t),
agTRUE
);
if (memAllocStatus != SM_RC_SUCCESS)
{
/* let os process IO */
SM_DBG1(("smIOAbortAll: tdsmAllocMemory failed...!!!\n"));
return SM_RC_FAILURE;
}
if (smAbortIORequestBody == agNULL)
{
/* let os process IO */
SM_DBG1(("smIOAbortAll: tdsmAllocMemory returned NULL smAbortIORequestBody!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smAbortIORequestBody);
/* setup task management structure */
smAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
satAbortIOContext = &(smAbortIORequestBody->transport.SATA.satIOContext);
satAbortIOContext->smRequestBody = smAbortIORequestBody;
smAbortIORequestBody->smDevHandle = smDeviceHandle;
/* initialize agIORequest */
agAbortIORequest = &(smAbortIORequestBody->agIORequest);
agAbortIORequest->osData = (void *) smAbortIORequestBody;
agAbortIORequest->sdkData = agNULL; /* LL takes care of this */
oneDeviceData->OSAbortAll = agTRUE;
/* abort all */
status = saSATAAbort(agRoot, agAbortIORequest, tdsmRotateQnumber(smRoot, smDeviceHandle), agDevHandle, 1, agNULL, smaSATAAbortCB);
if (status != AGSA_RC_SUCCESS)
{
SM_DBG1(("smIOAbortAll: failed to call saSATAAbort, status=%d!!!\n", status));
tdsmFreeMemory(smRoot,
smAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(smIORequestBody_t)
);
}
return status;
}
osGLOBAL bit32
smSuperIOStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smSuperScsiInitiatorRequest_t *smSCSIRequest,
bit32 AddrHi,
bit32 AddrLo,
bit32 interruptContext
)
{
smDeviceData_t *oneDeviceData = agNULL;
smIORequestBody_t *smIORequestBody = agNULL;
smSatIOContext_t *satIOContext = agNULL;
bit32 status = SM_RC_FAILURE;
SM_DBG2(("smSuperIOStart: start\n"));
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
if (oneDeviceData == agNULL)
{
SM_DBG1(("smSuperIOStart: oneDeviceData is NULL!!!\n"));
return SM_RC_FAILURE;
}
if (oneDeviceData->valid == agFALSE)
{
SM_DBG1(("smSuperIOStart: oneDeviceData is not valid, did %d !!!\n", oneDeviceData->id));
return SM_RC_FAILURE;
}
smIORequestBody = (smIORequestBody_t*)smIORequest->smData;//smDequeueIO(smRoot);
if (smIORequestBody == agNULL)
{
SM_DBG1(("smSuperIOStart: smIORequestBody is NULL!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smIORequestBody);
SM_DBG3(("smSuperIOStart: io ID %d!!!\n", smIORequestBody->id ));
oneDeviceData->sasAddressHi = AddrHi;
oneDeviceData->sasAddressLo = AddrLo;
smIORequestBody->smIORequest = smIORequest;
smIORequestBody->smDevHandle = smDeviceHandle;
satIOContext = &(smIORequestBody->transport.SATA.satIOContext);
/*
* Need to initialize all the fields within satIOContext except
* reqType and satCompleteCB which will be set later in SM.
*/
smIORequestBody->transport.SATA.smSenseData.senseData = agNULL;
smIORequestBody->transport.SATA.smSenseData.senseLen = 0;
satIOContext->pSatDevData = oneDeviceData;
satIOContext->pFis =
&smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev;
satIOContext->pScsiCmnd = &smSCSIRequest->scsiCmnd;
satIOContext->pSense = &smIORequestBody->transport.SATA.sensePayload;
satIOContext->pSmSenseData = &smIORequestBody->transport.SATA.smSenseData;
satIOContext->pSmSenseData->senseData = satIOContext->pSense;
/* satIOContext->pSense = (scsiRspSense_t *)satIOContext->pSmSenseData->senseData; */
satIOContext->smRequestBody = smIORequestBody;
satIOContext->interruptContext = interruptContext;
satIOContext->psmDeviceHandle = smDeviceHandle;
satIOContext->smScsiXchg = smSCSIRequest;
satIOContext->superIOFlag = agTRUE;
// satIOContext->superIOFlag = agFALSE;
satIOContext->satIntIoContext = agNULL;
satIOContext->satOrgIOContext = agNULL;
/* satIOContext->tiIORequest = tiIORequest; */
/* save context if we need to abort later */
/*smIORequest->smData = smIORequestBody;*/
/* the following are used only for internal IO */
satIOContext->currentLBA = 0;
satIOContext->OrgTL = 0;
status = smsatIOStart(smRoot, smIORequest, smDeviceHandle, (smScsiInitiatorRequest_t *)smSCSIRequest, satIOContext);
return status;
}
/*
osGLOBAL bit32
tiINIIOStart(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
void *tiRequestBody,
bit32 interruptContext
)
GLOBAL bit32 satIOStart(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
smSatIOContext_t *satIOContext
)
smIOStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smSCSIRequest,
smIORequestBody_t *smRequestBody,
bit32 interruptContext
)
*/
FORCEINLINE bit32
smIOStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smSCSIRequest,
bit32 interruptContext
)
{
smDeviceData_t *oneDeviceData = agNULL;
smIORequestBody_t *smIORequestBody = agNULL;
smSatIOContext_t *satIOContext = agNULL;
bit32 status = SM_RC_FAILURE;
SM_DBG2(("smIOStart: start\n"));
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
if (oneDeviceData == agNULL)
{
SM_DBG1(("smIOStart: oneDeviceData is NULL!!!\n"));
return SM_RC_FAILURE;
}
if (oneDeviceData->valid == agFALSE)
{
SM_DBG1(("smIOStart: oneDeviceData is not valid, did %d !!!\n", oneDeviceData->id));
return SM_RC_FAILURE;
}
smIORequestBody = (smIORequestBody_t*)smIORequest->smData;//smDequeueIO(smRoot);
if (smIORequestBody == agNULL)
{
SM_DBG1(("smIOStart: smIORequestBody is NULL!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smIORequestBody);
SM_DBG3(("smIOStart: io ID %d!!!\n", smIORequestBody->id ));
smIORequestBody->smIORequest = smIORequest;
smIORequestBody->smDevHandle = smDeviceHandle;
satIOContext = &(smIORequestBody->transport.SATA.satIOContext);
/*
* Need to initialize all the fields within satIOContext except
* reqType and satCompleteCB which will be set later in SM.
*/
smIORequestBody->transport.SATA.smSenseData.senseData = agNULL;
smIORequestBody->transport.SATA.smSenseData.senseLen = 0;
satIOContext->pSatDevData = oneDeviceData;
satIOContext->pFis =
&smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev;
satIOContext->pScsiCmnd = &smSCSIRequest->scsiCmnd;
satIOContext->pSense = &smIORequestBody->transport.SATA.sensePayload;
satIOContext->pSmSenseData = &smIORequestBody->transport.SATA.smSenseData;
satIOContext->pSmSenseData->senseData = satIOContext->pSense;
/* satIOContext->pSense = (scsiRspSense_t *)satIOContext->pSmSenseData->senseData; */
satIOContext->smRequestBody = smIORequestBody;
satIOContext->interruptContext = interruptContext;
satIOContext->psmDeviceHandle = smDeviceHandle;
satIOContext->smScsiXchg = smSCSIRequest;
satIOContext->superIOFlag = agFALSE;
satIOContext->satIntIoContext = agNULL;
satIOContext->satOrgIOContext = agNULL;
satIOContext->currentLBA = 0;
satIOContext->OrgTL = 0;
status = smsatIOStart(smRoot, smIORequest, smDeviceHandle, smSCSIRequest, satIOContext);
return status;
}
osGLOBAL bit32
smTaskManagement(
smRoot_t *smRoot,
smDeviceHandle_t *smDeviceHandle,
bit32 task,
smLUN_t *lun,
smIORequest_t *taskTag, /* io to be aborted */
smIORequest_t *currentTaskTag /* task management */
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
agsaRoot_t *agRoot = smAllShared->agRoot;
smDeviceData_t *oneDeviceData = agNULL;
smIORequestBody_t *smIORequestBody = agNULL;
bit32 status;
agsaContext_t *agContext = agNULL;
smSatIOContext_t *satIOContext;
SM_DBG1(("smTaskManagement: start\n"));
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
if (task == SM_LOGICAL_UNIT_RESET || task == SM_TARGET_WARM_RESET || task == SM_ABORT_TASK)
{
if (task == AG_LOGICAL_UNIT_RESET)
{
if ( (lun->lun[0] | lun->lun[1] | lun->lun[2] | lun->lun[3] |
lun->lun[4] | lun->lun[5] | lun->lun[6] | lun->lun[7] ) != 0 )
{
SM_DBG1(("smTaskManagement: *** REJECT *** LUN not zero, did %d!!!\n",
oneDeviceData->id));
return SM_RC_FAILURE;
}
}
oneDeviceData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
oneDeviceData->satAbortAfterReset = agFALSE;
saSetDeviceState(agRoot,
agNULL,
tdsmRotateQnumber(smRoot, smDeviceHandle),
oneDeviceData->agDevHandle,
SA_DS_IN_RECOVERY
);
if (oneDeviceData->directlyAttached == agFALSE)
{
/* expander attached */
SM_DBG1(("smTaskManagement: LUN reset or device reset expander attached!!!\n"));
status = smPhyControlSend(smRoot,
oneDeviceData,
SMP_PHY_CONTROL_HARD_RESET,
currentTaskTag,
tdsmRotateQnumber(smRoot, smDeviceHandle)
);
return status;
}
else
{
SM_DBG1(("smTaskManagement: LUN reset or device reset directly attached\n"));
smIORequestBody = (smIORequestBody_t*)currentTaskTag->smData;//smDequeueIO(smRoot);
if (smIORequestBody == agNULL)
{
SM_DBG1(("smTaskManagement: smIORequestBody is NULL!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smIORequestBody);
satIOContext = &(smIORequestBody->transport.SATA.satIOContext);
satIOContext->smRequestBody = smIORequestBody;
smIORequestBody->smDevHandle = smDeviceHandle;
agContext = &(oneDeviceData->agDeviceResetContext);
agContext->osData = currentTaskTag;
status = saLocalPhyControl(agRoot,
agContext,
tdsmRotateQnumber(smRoot, smDeviceHandle) &0xFFFF,
oneDeviceData->phyID,
AGSA_PHY_HARD_RESET,
smLocalPhyControlCB
);
if ( status == AGSA_RC_SUCCESS)
{
return SM_RC_SUCCESS;
}
else if (status == AGSA_RC_BUSY)
{
return SM_RC_BUSY;
}
else if (status == AGSA_RC_FAILURE)
{
return SM_RC_FAILURE;
}
else
{
SM_DBG1(("smTaskManagement: unknown status %d\n",status));
return SM_RC_FAILURE;
}
}
}
else
{
/* smsatsmTaskManagement() which is satTM() */
smIORequestBody = (smIORequestBody_t*)currentTaskTag->smData;//smDequeueIO(smRoot);
if (smIORequestBody == agNULL)
{
SM_DBG1(("smTaskManagement: smIORequestBody is NULL!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smIORequestBody);
/*currentTaskTag->smData = smIORequestBody;*/
status = smsatTaskManagement(smRoot,
smDeviceHandle,
task,
lun,
taskTag,
currentTaskTag,
smIORequestBody
);
return status;
}
return SM_RC_SUCCESS;
}
/********************************************************* end smapi defined APIS */
/* counterpart is
smEnqueueIO(smRoot_t *smRoot,
smSatIOContext_t *satIOContext)
*/
osGLOBAL smIORequestBody_t *
smDequeueIO(smRoot_t *smRoot)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
smIORequestBody_t *smIORequestBody = agNULL;
smList_t *IOListList;
SM_DBG2(("smDequeueIO: start\n"));
tdsmSingleThreadedEnter(smRoot, SM_EXTERNAL_IO_LOCK);
if (SMLIST_EMPTY(&(smAllShared->freeIOList)))
{
SM_DBG1(("smDequeueIO: empty freeIOList!!!\n"));
tdsmSingleThreadedLeave(smRoot, SM_EXTERNAL_IO_LOCK);
return agNULL;
}
SMLIST_DEQUEUE_FROM_HEAD(&IOListList, &(smAllShared->freeIOList));
smIORequestBody = SMLIST_OBJECT_BASE(smIORequestBody_t, satIoBodyLink, IOListList);
SMLIST_DEQUEUE_THIS(&(smIORequestBody->satIoBodyLink));
SMLIST_ENQUEUE_AT_TAIL(&(smIORequestBody->satIoBodyLink), &(smAllShared->mainIOList));
tdsmSingleThreadedLeave(smRoot, SM_EXTERNAL_IO_LOCK);
if (smIORequestBody->InUse == agTRUE)
{
SM_DBG1(("smDequeueIO: wrong. already in USE ID %d!!!!\n", smIORequestBody->id));
}
smIOReInit(smRoot, smIORequestBody);
SM_DBG2(("smDequeueIO: io ID %d!\n", smIORequestBody->id));
/* debugging */
if (smIORequestBody->satIoBodyLink.flink == agNULL)
{
SM_DBG1(("smDequeueIO: io ID %d, flink is NULL!!!\n", smIORequestBody->id));
}
if (smIORequestBody->satIoBodyLink.blink == agNULL)
{
SM_DBG1(("smDequeueIO: io ID %d, blink is NULL!!!\n", smIORequestBody->id));
}
return smIORequestBody;
}
//start here
//compare with ossaSATAAbortCB()
//qqq1
osGLOBAL void
smsatAbort(
smRoot_t *smRoot,
agsaRoot_t *agRoot,
smSatIOContext_t *satIOContext
)
{
smIORequestBody_t *smIORequestBody = agNULL; /* abort itself */
smIORequestBody_t *smToBeAbortedIORequestBody; /* io to be aborted */
agsaIORequest_t *agToBeAbortedIORequest; /* io to be aborted */
agsaIORequest_t *agAbortIORequest; /* abort io itself */
smSatIOContext_t *satAbortIOContext;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
void *osMemHandle;
SM_DBG2(("smsatAbort: start\n"));
if (satIOContext == agNULL)
{
SM_DBG1(("smsatAbort: satIOContext is NULL, wrong!!!\n"));
return;
}
smToBeAbortedIORequestBody = (smIORequestBody_t *)satIOContext->smRequestBody;
agToBeAbortedIORequest = (agsaIORequest_t *)&(smToBeAbortedIORequestBody->agIORequest);
/*
smIORequestBody = smDequeueIO(smRoot);
if (smIORequestBody == agNULL)
{
SM_DBG1(("smsatAbort: empty freeIOList!!!\n"));
return;
}
*/
/* allocating agIORequest for abort itself */
memAllocStatus = tdsmAllocMemory(
smRoot,
&osMemHandle,
(void **)&smIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(smIORequestBody_t),
agTRUE
);
if (memAllocStatus != SM_RC_SUCCESS)
{
/* let os process IO */
SM_DBG1(("smsatAbort: tdsmAllocMemory failed...\n"));
return;
}
if (smIORequestBody == agNULL)
{
/* let os process IO */
SM_DBG1(("smsatAbort: tdsmAllocMemory returned NULL smIORequestBody\n"));
return;
}
smIOReInit(smRoot, smIORequestBody);
smIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
smIORequestBody->smDevHandle = smToBeAbortedIORequestBody->smDevHandle;
/* initialize agIORequest */
satAbortIOContext = &(smIORequestBody->transport.SATA.satIOContext);
satAbortIOContext->smRequestBody = smIORequestBody;
agAbortIORequest = &(smIORequestBody->agIORequest);
agAbortIORequest->osData = (void *) smIORequestBody;
agAbortIORequest->sdkData = agNULL; /* LL takes care of this */
/*
* Issue abort
*/
saSATAAbort( agRoot, agAbortIORequest, 0, agNULL, 0, agToBeAbortedIORequest, smaSATAAbortCB);
SM_DBG1(("satAbort: end!!!\n"));
return;
}
osGLOBAL bit32
smsatStartCheckPowerMode(
smRoot_t *smRoot,
smIORequest_t *currentTaskTag,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smSatInternalIo_t *satIntIo = agNULL;
smDeviceData_t *oneDeviceData = agNULL;
smSatIOContext_t *satNewIOContext;
bit32 status;
SM_DBG1(("smsatStartCheckPowerMode: start\n"));
oneDeviceData = satIOContext->pSatDevData;
SM_DBG6(("smsatStartCheckPowerMode: before alloc\n"));
/* allocate a fis for setting the SRST bit in device control */
satIntIo = smsatAllocIntIoResource( smRoot,
currentTaskTag,
oneDeviceData,
0,
satIntIo);
SM_DBG6(("smsatStartCheckPowerMode: before after\n"));
if (satIntIo == agNULL)
{
SM_DBG1(("smsatStartCheckPowerMode: can't alloacate!!!\n"));
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
satNewIOContext = smsatPrepareNewIO(satIntIo,
currentTaskTag,
oneDeviceData,
agNULL,
satIOContext);
SM_DBG6(("smsatStartCheckPowerMode: TD satIOContext %p \n", satIOContext));
SM_DBG6(("smsatStartCheckPowerMode: SM satNewIOContext %p \n", satNewIOContext));
SM_DBG6(("smsatStartCheckPowerMode: TD smScsiXchg %p \n", satIOContext->smScsiXchg));
SM_DBG6(("smsatStartCheckPowerMode: SM smScsiXchg %p \n", satNewIOContext->smScsiXchg));
SM_DBG2(("smsatStartCheckPowerMode: satNewIOContext %p \n", satNewIOContext));
status = smsatCheckPowerMode(smRoot,
&satIntIo->satIntSmIORequest, /* New smIORequest */
smDeviceHandle,
satNewIOContext->smScsiXchg, /* New smScsiInitiatorRequest_t *smScsiRequest, */
satNewIOContext);
if (status != SM_RC_SUCCESS)
{
SM_DBG1(("smsatStartCheckPowerMode: failed in sending!!!\n"));
smsatFreeIntIoResource( smRoot,
oneDeviceData,
satIntIo);
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
SM_DBG6(("smsatStartCheckPowerMode: end\n"));
return status;
}
osGLOBAL bit32
smsatStartResetDevice(
smRoot_t *smRoot,
smIORequest_t *currentTaskTag,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smSatInternalIo_t *satIntIo = agNULL;
smDeviceData_t *oneDeviceData = agNULL;
smSatIOContext_t *satNewIOContext;
bit32 status;
SM_DBG1(("smsatStartResetDevice: start\n"));
oneDeviceData = satIOContext->pSatDevData;
SM_DBG6(("smsatStartResetDevice: before alloc\n"));
/* allocate a fis for setting the SRST bit in device control */
satIntIo = smsatAllocIntIoResource( smRoot,
currentTaskTag,
oneDeviceData,
0,
satIntIo);
SM_DBG6(("smsatStartResetDevice: before after\n"));
if (satIntIo == agNULL)
{
SM_DBG1(("smsatStartResetDevice: can't alloacate!!!\n"));
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
satNewIOContext = smsatPrepareNewIO(satIntIo,
currentTaskTag,
oneDeviceData,
agNULL,
satIOContext);
SM_DBG6(("smsatStartResetDevice: TD satIOContext %p \n", satIOContext));
SM_DBG6(("smsatStartResetDevice: SM satNewIOContext %p \n", satNewIOContext));
SM_DBG6(("smsatStartResetDevice: TD smScsiXchg %p \n", satIOContext->smScsiXchg));
SM_DBG6(("smsatStartResetDevice: SM smScsiXchg %p \n", satNewIOContext->smScsiXchg));
SM_DBG6(("smsatStartResetDevice: satNewIOContext %p \n", satNewIOContext));
if (oneDeviceData->satDeviceType == SATA_ATAPI_DEVICE)
{
/*if ATAPI device, send DEVICE RESET command to ATAPI device*/
status = smsatDeviceReset(smRoot,
&satIntIo->satIntSmIORequest, /* New smIORequest */
smDeviceHandle,
satNewIOContext->smScsiXchg, /* New smScsiInitiatorRequest_t *smScsiRequest, NULL */
satNewIOContext);
}
else
{
status = smsatResetDevice(smRoot,
&satIntIo->satIntSmIORequest, /* New smIORequest */
smDeviceHandle,
satNewIOContext->smScsiXchg, /* New smScsiInitiatorRequest_t *smScsiRequest, NULL */
satNewIOContext);
}
if (status != SM_RC_SUCCESS)
{
SM_DBG1(("smsatStartResetDevice: failed in sending!!!\n"));
smsatFreeIntIoResource( smRoot,
oneDeviceData,
satIntIo);
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
SM_DBG6(("smsatStartResetDevice: end\n"));
return status;
}
osGLOBAL bit32
smsatTmAbortTask(
smRoot_t *smRoot,
smIORequest_t *currentTaskTag, /* task management */
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest, /* NULL */
smSatIOContext_t *satIOContext, /* task management */
smIORequest_t *taskTag) /* io to be aborted */
{
smDeviceData_t *oneDeviceData = agNULL;
smSatIOContext_t *satTempIOContext = agNULL;
smList_t *elementHdr;
bit32 found = agFALSE;
smIORequestBody_t *smIORequestBody = agNULL;
smIORequest_t *smIOReq = agNULL;
bit32 status;
SM_DBG1(("smsatTmAbortTask: start\n"));
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
/*
* Check that the only pending I/O matches taskTag. If not, return failure.
*/
tdsmSingleThreadedEnter(smRoot, SM_EXTERNAL_IO_LOCK);
elementHdr = oneDeviceData->satIoLinkList.flink;
while (elementHdr != &oneDeviceData->satIoLinkList)
{
satTempIOContext = SMLIST_OBJECT_BASE( smSatIOContext_t,
satIoContextLink,
elementHdr );
if ( satTempIOContext != agNULL)
{
smIORequestBody = (smIORequestBody_t *) satTempIOContext->smRequestBody;
smIOReq = smIORequestBody->smIORequest;
}
elementHdr = elementHdr->flink; /* for the next while loop */
/*
* Check if the tag matches
*/
if ( smIOReq == taskTag)
{
found = agTRUE;
satIOContext->satToBeAbortedIOContext = satTempIOContext;
SM_DBG1(("smsatTmAbortTask: found matching tag.\n"));
break;
} /* if matching tag */
} /* while loop */
tdsmSingleThreadedLeave(smRoot, SM_EXTERNAL_IO_LOCK);
if (found == agFALSE )
{
SM_DBG1(("smsatTmAbortTask: *** REJECT *** no match!!!\n"));
/*smEnqueueIO(smRoot, satIOContext);*/
/* clean up TD layer's smIORequestBody */
if (smIORequestBody)
{
if (smIORequestBody->IOType.InitiatorTMIO.osMemHandle != agNULL)
{
tdsmFreeMemory(
smRoot,
smIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(smIORequestBody_t)
);
}
}
else
{
SM_DBG1(("smsatTmAbortTask: smIORequestBody is NULL!!!\n"));
}
return SM_RC_FAILURE;
}
if (satTempIOContext == agNULL)
{
SM_DBG1(("smsatTmAbortTask: satTempIOContext is NULL!!!\n"));
return SM_RC_FAILURE;
}
/*
* Save smIORequest, will be returned at device reset completion to return
* the TM completion.
*/
oneDeviceData->satTmTaskTag = currentTaskTag;
/*
* Set flag to indicate device in recovery mode.
*/
oneDeviceData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
/*
* Issue SATA device reset or check power mode. Set flag to automatically abort
* at the completion of SATA device reset.
* SAT r09 p25
*/
oneDeviceData->satAbortAfterReset = agTRUE;
if ( (satTempIOContext->reqType == AGSA_SATA_PROTOCOL_FPDMA_WRITE) ||
(satTempIOContext->reqType == AGSA_SATA_PROTOCOL_FPDMA_READ)
)
{
SM_DBG1(("smsatTmAbortTask: calling satStartCheckPowerMode!!!\n"));
/* send check power mode */
status = smsatStartCheckPowerMode(
smRoot,
currentTaskTag, /* currentTaskTag */
smDeviceHandle,
smScsiRequest, /* NULL */
satIOContext
);
}
else
{
SM_DBG1(("smsatTmAbortTask: calling satStartResetDevice!!!\n"));
/* send AGSA_SATA_PROTOCOL_SRST_ASSERT */
status = smsatStartResetDevice(
smRoot,
currentTaskTag, /* currentTaskTag */
smDeviceHandle,
smScsiRequest, /* NULL */
satIOContext
);
}
return status;
}
/* satTM() */
osGLOBAL bit32
smsatTaskManagement(
smRoot_t *smRoot,
smDeviceHandle_t *smDeviceHandle,
bit32 task,
smLUN_t *lun,
smIORequest_t *taskTag, /* io to be aborted */
smIORequest_t *currentTaskTag, /* task management */
smIORequestBody_t *smIORequestBody
)
{
smSatIOContext_t *satIOContext = agNULL;
smDeviceData_t *oneDeviceData = agNULL;
bit32 status;
SM_DBG1(("smsatTaskManagement: start\n"));
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
satIOContext = &(smIORequestBody->transport.SATA.satIOContext);
satIOContext->pSatDevData = oneDeviceData;
satIOContext->pFis =
&(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext->smRequestBody = smIORequestBody;
satIOContext->psmDeviceHandle = smDeviceHandle;
satIOContext->satIntIoContext = agNULL;
satIOContext->satOrgIOContext = agNULL;
/* the following are used only for internal IO */
satIOContext->currentLBA = 0;
satIOContext->OrgTL = 0;
/* saving task in satIOContext */
satIOContext->TMF = task;
satIOContext->satToBeAbortedIOContext = agNULL;
if (task == AG_ABORT_TASK)
{
status = smsatTmAbortTask( smRoot,
currentTaskTag,
smDeviceHandle,
agNULL,
satIOContext,
taskTag);
return status;
}
else
{
SM_DBG1(("smsatTaskManagement: UNSUPPORTED TM task=0x%x!!!\n", task ));
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
return SM_RC_SUCCESS;
}
osGLOBAL bit32
smPhyControlSend(
smRoot_t *smRoot,
smDeviceData_t *oneDeviceData, /* sata disk itself */
bit8 phyOp,
smIORequest_t *CurrentTaskTag,
bit32 queueNumber
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
agsaRoot_t *agRoot = smAllShared->agRoot;
agsaDevHandle_t *agExpDevHandle;
smpReqPhyControl_t smpPhyControlReq;
void *osMemHandle;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
bit32 expectedRspLen = 0;
smSMPRequestBody_t *smSMPRequestBody;
agsaSASRequestBody_t *agSASRequestBody;
agsaSMPFrame_t *agSMPFrame;
agsaIORequest_t *agIORequest;
// agsaDevHandle_t *agDevHandle;
smSMPFrameHeader_t smSMPFrameHeader;
bit32 status;
bit8 *pSmpBody; /* smp payload itself w/o first 4 bytes(header) */
bit32 smpBodySize; /* smp payload size w/o first 4 bytes(header) */
bit32 agRequestType;
SM_DBG2(("smPhyControlSend: start\n"));
agExpDevHandle = oneDeviceData->agExpDevHandle;
if (agExpDevHandle == agNULL)
{
SM_DBG1(("smPhyControlSend: agExpDevHandle is NULL!!!\n"));
return SM_RC_FAILURE;
}
SM_DBG5(("smPhyControlSend: phyID %d\n", oneDeviceData->phyID));
sm_memset(&smpPhyControlReq, 0, sizeof(smpReqPhyControl_t));
/* fill in SMP payload */
smpPhyControlReq.phyIdentifier = (bit8)oneDeviceData->phyID;
smpPhyControlReq.phyOperation = phyOp;
/* allocate smp and send it */
memAllocStatus = tdsmAllocMemory(
smRoot,
&osMemHandle,
(void **)&smSMPRequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(smSMPRequestBody_t),
agTRUE
);
if (memAllocStatus != SM_RC_SUCCESS)
{
SM_DBG1(("smPhyControlSend: tdsmAllocMemory failed...!!!\n"));
return SM_RC_FAILURE;
}
if (smSMPRequestBody == agNULL)
{
SM_DBG1(("smPhyControlSend: tdsmAllocMemory returned NULL smSMPRequestBody!!!\n"));
return SM_RC_FAILURE;
}
/* saves mem handle for freeing later */
smSMPRequestBody->osMemHandle = osMemHandle;
/* saves oneDeviceData */
smSMPRequestBody->smDeviceData = oneDeviceData; /* sata disk */
/* saves oneDeviceData */
smSMPRequestBody->smDevHandle = oneDeviceData->smDevHandle;
// agDevHandle = oneDeviceData->agDevHandle;
/* save the callback function */
smSMPRequestBody->SMPCompletionFunc = smSMPCompleted; /* in satcb.c */
/* for simulate warm target reset */
smSMPRequestBody->CurrentTaskTag = CurrentTaskTag;
if (CurrentTaskTag != agNULL)
{
CurrentTaskTag->smData = smSMPRequestBody;
}
/* initializes the number of SMP retries */
smSMPRequestBody->retries = 0;
#ifdef TD_INTERNAL_DEBUG /* debugging */
SM_DBG4(("smPhyControlSend: SMPRequestbody %p\n", smSMPRequestBody));
SM_DBG4(("smPhyControlSend: callback fn %p\n", smSMPRequestBody->SMPCompletionFunc));
#endif
agIORequest = &(smSMPRequestBody->agIORequest);
agIORequest->osData = (void *) smSMPRequestBody;
agIORequest->sdkData = agNULL; /* SALL takes care of this */
agSASRequestBody = &(smSMPRequestBody->agSASRequestBody);
agSMPFrame = &(agSASRequestBody->smpFrame);
SM_DBG3(("smPhyControlSend: agIORequest %p\n", agIORequest));
SM_DBG3(("smPhyControlSend: SMPRequestbody %p\n", smSMPRequestBody));
expectedRspLen = 4;
pSmpBody = (bit8 *)&smpPhyControlReq;
smpBodySize = sizeof(smpReqPhyControl_t);
agRequestType = AGSA_SMP_INIT_REQ;
if (SMIsSPC(agRoot))
{
if ( (smpBodySize + 4) <= SMP_DIRECT_PAYLOAD_LIMIT) /* 48 */
{
SM_DBG3(("smPhyControlSend: DIRECT smp payload\n"));
sm_memset(&smSMPFrameHeader, 0, sizeof(smSMPFrameHeader_t));
sm_memset(smSMPRequestBody->smpPayload, 0, SMP_DIRECT_PAYLOAD_LIMIT);
/* SMP header */
smSMPFrameHeader.smpFrameType = SMP_REQUEST; /* SMP request */
smSMPFrameHeader.smpFunction = (bit8)SMP_PHY_CONTROL;
smSMPFrameHeader.smpFunctionResult = 0;
smSMPFrameHeader.smpReserved = 0;
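/*
* Note (editor's addition): the four header bytes copied below map, in
* order, to the frame type, function, function result, and reserved
* fields set above in smSMPFrameHeader_t.
*/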
sm_memcpy(smSMPRequestBody->smpPayload, &smSMPFrameHeader, 4);
sm_memcpy((smSMPRequestBody->smpPayload)+4, pSmpBody, smpBodySize);
/* direct SMP payload, e.g. REPORT_GENERAL, DISCOVER, etc. */
agSMPFrame->outFrameBuf = smSMPRequestBody->smpPayload;
agSMPFrame->outFrameLen = smpBodySize + 4; /* without the trailing 4-byte CRC */
/* to specify DIRECT SMP response */
agSMPFrame->inFrameLen = 0;
/* temporary solution for T2D Combo*/
#if defined (INITIATOR_DRIVER) && defined (TARGET_DRIVER)
/* force SMP response to be direct */
agSMPFrame->expectedRespLen = 0;
#else
agSMPFrame->expectedRespLen = expectedRspLen;
#endif
// smhexdump("smPhyControlSend", (bit8*)agSMPFrame->outFrameBuf, agSMPFrame->outFrameLen);
// smhexdump("smPhyControlSend new", (bit8*)smSMPRequestBody->smpPayload, agSMPFrame->outFrameLen);
// smhexdump("smPhyControlSend - smSMPRequestBody", (bit8*)smSMPRequestBody, sizeof(smSMPRequestBody_t));
}
else
{
SM_DBG1(("smPhyControlSend: INDIRECT smp payload, not supported!!!\n"));
tdsmFreeMemory(
smRoot,
osMemHandle,
sizeof(smSMPRequestBody_t)
);
return SM_RC_FAILURE;
}
}
else /* SPCv controller */
{
/* only direct mode for both request and response */
SM_DBG3(("smPhyControlSend: DIRECT smp payload\n"));
agSMPFrame->flag = 0;
sm_memset(&smSMPFrameHeader, 0, sizeof(smSMPFrameHeader_t));
sm_memset(smSMPRequestBody->smpPayload, 0, SMP_DIRECT_PAYLOAD_LIMIT);
/* SMP header */
smSMPFrameHeader.smpFrameType = SMP_REQUEST; /* SMP request */
smSMPFrameHeader.smpFunction = (bit8)SMP_PHY_CONTROL;
smSMPFrameHeader.smpFunctionResult = 0;
smSMPFrameHeader.smpReserved = 0;
sm_memcpy(smSMPRequestBody->smpPayload, &smSMPFrameHeader, 4);
sm_memcpy((smSMPRequestBody->smpPayload)+4, pSmpBody, smpBodySize);
/* direct SMP payload, e.g. REPORT_GENERAL, DISCOVER, etc. */
agSMPFrame->outFrameBuf = smSMPRequestBody->smpPayload;
agSMPFrame->outFrameLen = smpBodySize + 4; /* without the trailing 4-byte CRC */
/* to specify DIRECT SMP response */
agSMPFrame->inFrameLen = 0;
/* temporary solution for T2D Combo*/
#if defined (INITIATOR_DRIVER) && defined (TARGET_DRIVER)
/* force SMP response to be direct */
agSMPFrame->expectedRespLen = 0;
#else
agSMPFrame->expectedRespLen = expectedRspLen;
#endif
// smhexdump("smPhyControlSend", (bit8*)agSMPFrame->outFrameBuf, agSMPFrame->outFrameLen);
// smhexdump("smPhyControlSend new", (bit8*)smSMPRequestBody->smpPayload, agSMPFrame->outFrameLen);
// smhexdump("smPhyControlSend - smSMPRequestBody", (bit8*)smSMPRequestBody, sizeof(smSMPRequestBody_t));
}
status = saSMPStart(
agRoot,
agIORequest,
queueNumber,
agExpDevHandle,
agRequestType,
agSASRequestBody,
&smSMPCompletedCB
);
if (status == AGSA_RC_SUCCESS)
{
return SM_RC_SUCCESS;
}
else if (status == AGSA_RC_BUSY)
{
SM_DBG1(("smPhyControlSend: saSMPStart is busy!!!\n"));
tdsmFreeMemory(
smRoot,
osMemHandle,
sizeof(smSMPRequestBody_t)
);
return SM_RC_BUSY;
}
else /* AGSA_RC_FAILURE */
{
SM_DBG1(("smPhyControlSend: saSMPStart is failed. status %d!!!\n", status));
tdsmFreeMemory(
smRoot,
osMemHandle,
sizeof(smSMPRequestBody_t)
);
return SM_RC_FAILURE;
}
}
/* free IO which are internally completed within SM
counterpart is
osGLOBAL smIORequestBody_t *
smDequeueIO(smRoot_t *smRoot)
*/
osGLOBAL void
smEnqueueIO(
smRoot_t *smRoot,
smSatIOContext_t *satIOContext
)
{
smIntRoot_t *smIntRoot = agNULL;
smIntContext_t *smAllShared = agNULL;
smIORequestBody_t *smIORequestBody;
SM_DBG3(("smEnqueueIO: start\n"));
smIORequestBody = (smIORequestBody_t *)satIOContext->smRequestBody;
smIntRoot = (smIntRoot_t *)smRoot->smData;
smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
/* enque back to smAllShared->freeIOList */
if (satIOContext->satIntIoContext == agNULL)
{
SM_DBG2(("smEnqueueIO: external command!!!, io ID %d!!!\n", smIORequestBody->id));
/* debugging only */
if (smIORequestBody->satIoBodyLink.flink == agNULL)
{
SM_DBG1(("smEnqueueIO: external command!!!, io ID %d, flink is NULL!!!\n", smIORequestBody->id));
}
if (smIORequestBody->satIoBodyLink.blink == agNULL)
{
SM_DBG1(("smEnqueueIO: external command!!!, io ID %d, blink is NULL!!!\n", smIORequestBody->id));
}
}
else
{
SM_DBG2(("smEnqueueIO: internal command!!!, io ID %d!!!\n", smIORequestBody->id));
/* debugging only */
if (smIORequestBody->satIoBodyLink.flink == agNULL)
{
SM_DBG1(("smEnqueueIO: internal command!!!, io ID %d, flink is NULL!!!\n", smIORequestBody->id));
}
if (smIORequestBody->satIoBodyLink.blink == agNULL)
{
SM_DBG1(("smEnqueueIO: internal command!!!, io ID %d, blink is NULL!!!\n", smIORequestBody->id));
}
}
if (smIORequestBody->smIORequest == agNULL)
{
SM_DBG1(("smEnqueueIO: smIORequest is NULL, io ID %d!!!\n", smIORequestBody->id));
}
if (smIORequestBody->InUse == agTRUE)
{
smIORequestBody->InUse = agFALSE;
tdsmSingleThreadedEnter(smRoot, SM_EXTERNAL_IO_LOCK);
SMLIST_DEQUEUE_THIS(&(smIORequestBody->satIoBodyLink));
SMLIST_ENQUEUE_AT_TAIL(&(smIORequestBody->satIoBodyLink), &(smAllShared->freeIOList));
tdsmSingleThreadedLeave(smRoot, SM_EXTERNAL_IO_LOCK);
}
else
{
SM_DBG2(("smEnqueueIO: check!!!, io ID %d!!!\n", smIORequestBody->id));
}
return;
}
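/*
 * Usage note: smEnqueueIO() is the give-back half of the external IO pool;
 * smDequeueIO() (its counterpart, see the comment above) pulls a request
 * body off smAllShared->freeIOList. The InUse flag guards against a double
 * enqueue, and both list operations run under SM_EXTERNAL_IO_LOCK so the
 * free list can be manipulated from any completion context.
 */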
FORCEINLINE void
smsatFreeIntIoResource(
smRoot_t *smRoot,
smDeviceData_t *satDevData,
smSatInternalIo_t *satIntIo
)
{
SM_DBG3(("smsatFreeIntIoResource: start\n"));
if (satIntIo == agNULL)
{
SM_DBG2(("smsatFreeIntIoResource: allowed call\n"));
return;
}
/* set the original smIORequest to agNULL for an internally generated ATA command */
satIntIo->satOrgSmIORequest = agNULL;
/*
 * Free DMA memory if previously allocated
 */
if (satIntIo->satIntSmScsiXchg.scsiCmnd.expDataLength != 0)
{
SM_DBG3(("smsatFreeIntIoResource: DMA len %d\n", satIntIo->satIntDmaMem.totalLength));
SM_DBG3(("smsatFreeIntIoResource: pointer %p\n", satIntIo->satIntDmaMem.osHandle));
tdsmFreeMemory( smRoot,
satIntIo->satIntDmaMem.osHandle,
satIntIo->satIntDmaMem.totalLength);
satIntIo->satIntSmScsiXchg.scsiCmnd.expDataLength = 0;
}
if (satIntIo->satIntReqBodyMem.totalLength != 0)
{
SM_DBG3(("smsatFreeIntIoResource: req body len %d\n", satIntIo->satIntReqBodyMem.totalLength));
/*
* Free mem allocated for Req body
*/
tdsmFreeMemory( smRoot,
satIntIo->satIntReqBodyMem.osHandle,
satIntIo->satIntReqBodyMem.totalLength);
satIntIo->satIntReqBodyMem.totalLength = 0;
}
SM_DBG3(("smsatFreeIntIoResource: satDevData %p satIntIo id %d\n", satDevData, satIntIo->id));
/*
* Return satIntIo to the free list
*/
tdsmSingleThreadedEnter(smRoot, SM_INTERNAL_IO_LOCK);
SMLIST_DEQUEUE_THIS (&(satIntIo->satIntIoLink));
SMLIST_ENQUEUE_AT_TAIL (&(satIntIo->satIntIoLink), &(satDevData->satFreeIntIoLinkList));
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
return;
}
osGLOBAL smSatInternalIo_t *
smsatAllocIntIoResource(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceData_t *satDevData,
bit32 dmaAllocLength,
smSatInternalIo_t *satIntIo)
{
smList_t *smList = agNULL;
bit32 memAllocStatus;
SM_DBG3(("smsatAllocIntIoResource: start\n"));
SM_DBG3(("smsatAllocIntIoResource: satIntIo %p\n", satIntIo));
if (satDevData == agNULL)
{
SM_DBG1(("smsatAllocIntIoResource: ***** ASSERT satDevData is null!!!\n"));
return agNULL;
}
tdsmSingleThreadedEnter(smRoot, SM_INTERNAL_IO_LOCK);
if (!SMLIST_EMPTY(&(satDevData->satFreeIntIoLinkList)))
{
SMLIST_DEQUEUE_FROM_HEAD(&smList, &(satDevData->satFreeIntIoLinkList));
}
else
{
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
SM_DBG1(("smsatAllocIntIoResource() no more internal free link!!!\n"));
return agNULL;
}
if (smList == agNULL)
{
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
SM_DBG1(("smsatAllocIntIoResource() FAIL to alloc satIntIo!!!\n"));
return agNULL;
}
satIntIo = SMLIST_OBJECT_BASE( smSatInternalIo_t, satIntIoLink, smList);
SM_DBG3(("smsatAllocIntIoResource: satDevData %p satIntIo id %d\n", satDevData, satIntIo->id));
/* Put in active list */
SMLIST_DEQUEUE_THIS (&(satIntIo->satIntIoLink));
SMLIST_ENQUEUE_AT_TAIL (&(satIntIo->satIntIoLink), &(satDevData->satActiveIntIoLinkList));
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
#ifdef REMOVED
/* Put in active list */
tdsmSingleThreadedEnter(smRoot, SM_INTERNAL_IO_LOCK);
SMLIST_DEQUEUE_THIS (smList);
SMLIST_ENQUEUE_AT_TAIL (smList, &(satDevData->satActiveIntIoLinkList));
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
satIntIo = SMLIST_OBJECT_BASE( smSatInternalIo_t, satIntIoLink, smList);
SM_DBG3(("smsatAllocIntIoResource: satDevData %p satIntIo id %d\n", satDevData, satIntIo->id));
#endif
/*
typedef struct
{
smList_t satIntIoLink;
smIORequest_t satIntSmIORequest;
void *satIntRequestBody;
smScsiInitiatorRequest_t satIntSmScsiXchg;
smMem_t satIntDmaMem;
smMem_t satIntReqBodyMem;
bit32 satIntFlag;
} smSatInternalIo_t;
*/
/*
* Allocate mem for Request Body
*/
satIntIo->satIntReqBodyMem.totalLength = sizeof(smIORequestBody_t);
memAllocStatus = tdsmAllocMemory( smRoot,
&satIntIo->satIntReqBodyMem.osHandle,
(void **)&satIntIo->satIntRequestBody,
&satIntIo->satIntReqBodyMem.physAddrUpper,
&satIntIo->satIntReqBodyMem.physAddrLower,
8,
satIntIo->satIntReqBodyMem.totalLength,
agTRUE );
if (memAllocStatus != SM_RC_SUCCESS)
{
SM_DBG1(("smsatAllocIntIoResource() FAIL to alloc mem for Req Body!!!\n"));
/*
* Return satIntIo to the free list
*/
tdsmSingleThreadedEnter(smRoot, SM_INTERNAL_IO_LOCK);
SMLIST_DEQUEUE_THIS (&satIntIo->satIntIoLink);
SMLIST_ENQUEUE_AT_HEAD(&satIntIo->satIntIoLink, &satDevData->satFreeIntIoLinkList);
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
return agNULL;
}
/*
* Allocate DMA memory if required
*/
if (dmaAllocLength != 0)
{
satIntIo->satIntDmaMem.totalLength = dmaAllocLength;
memAllocStatus = tdsmAllocMemory( smRoot,
&satIntIo->satIntDmaMem.osHandle,
(void **)&satIntIo->satIntDmaMem.virtPtr,
&satIntIo->satIntDmaMem.physAddrUpper,
&satIntIo->satIntDmaMem.physAddrLower,
8,
satIntIo->satIntDmaMem.totalLength,
agFALSE);
SM_DBG3(("smsatAllocIntIoResource: len %d \n", satIntIo->satIntDmaMem.totalLength));
SM_DBG3(("smsatAllocIntIoResource: pointer %p \n", satIntIo->satIntDmaMem.osHandle));
if (memAllocStatus != SM_RC_SUCCESS)
{
SM_DBG1(("smsatAllocIntIoResource() FAIL to alloc mem for DMA mem!!!\n"));
/*
* Return satIntIo to the free list
*/
tdsmSingleThreadedEnter(smRoot, SM_INTERNAL_IO_LOCK);
SMLIST_DEQUEUE_THIS (&satIntIo->satIntIoLink);
SMLIST_ENQUEUE_AT_HEAD(&satIntIo->satIntIoLink, &satDevData->satFreeIntIoLinkList);
tdsmSingleThreadedLeave(smRoot, SM_INTERNAL_IO_LOCK);
/*
* Free mem allocated for Req body
*/
tdsmFreeMemory( smRoot,
satIntIo->satIntReqBodyMem.osHandle,
satIntIo->satIntReqBodyMem.totalLength);
return agNULL;
}
}
/*
typedef struct
{
smList_t satIntIoLink;
smIORequest_t satIntSmIORequest;
void *satIntRequestBody;
smScsiInitiatorRequest_t satIntSmScsiXchg;
smMem_t satIntDmaMem;
smMem_t satIntReqBodyMem;
bit32 satIntFlag;
} smSatInternalIo_t;
*/
/*
* Initialize satIntSmIORequest field
*/
satIntIo->satIntSmIORequest.tdData = agNULL; /* Not used for internal SAT I/O */
satIntIo->satIntSmIORequest.smData = satIntIo->satIntRequestBody;
/*
* saves the original smIOrequest
*/
satIntIo->satOrgSmIORequest = smIORequest;
/*
typedef struct tiIniScsiCmnd
{
tiLUN_t lun;
bit32 expDataLength;
bit32 taskAttribute;
bit32 crn;
bit8 cdb[16];
} tiIniScsiCmnd_t;
typedef struct tiScsiInitiatorExchange
{
void *sglVirtualAddr;
tiIniScsiCmnd_t scsiCmnd;
tiSgl_t agSgl1;
tiSgl_t agSgl2;
tiDataDirection_t dataDirection;
} tiScsiInitiatorRequest_t;
*/
/*
* Initialize satIntSmScsiXchg. Since the internal SAT request is NOT
* originated from SCSI request, only the following fields are initialized:
* - sglVirtualAddr if DMA transfer is involved
* - agSgl1 if DMA transfer is involved
* - expDataLength in scsiCmnd since this field is read by smsataLLIOStart()
*/
if (dmaAllocLength != 0)
{
satIntIo->satIntSmScsiXchg.sglVirtualAddr = satIntIo->satIntDmaMem.virtPtr;
OSSA_WRITE_LE_32(agNULL, &satIntIo->satIntSmScsiXchg.smSgl1.len, 0,
satIntIo->satIntDmaMem.totalLength);
satIntIo->satIntSmScsiXchg.smSgl1.lower = satIntIo->satIntDmaMem.physAddrLower;
satIntIo->satIntSmScsiXchg.smSgl1.upper = satIntIo->satIntDmaMem.physAddrUpper;
satIntIo->satIntSmScsiXchg.smSgl1.type = tiSgl;
satIntIo->satIntSmScsiXchg.scsiCmnd.expDataLength = satIntIo->satIntDmaMem.totalLength;
}
else
{
satIntIo->satIntSmScsiXchg.sglVirtualAddr = agNULL;
satIntIo->satIntSmScsiXchg.smSgl1.len = 0;
satIntIo->satIntSmScsiXchg.smSgl1.lower = 0;
satIntIo->satIntSmScsiXchg.smSgl1.upper = 0;
satIntIo->satIntSmScsiXchg.smSgl1.type = tiSgl;
satIntIo->satIntSmScsiXchg.scsiCmnd.expDataLength = 0;
}
SM_DBG5(("smsatAllocIntIoResource: satIntIo->satIntSmScsiXchg.agSgl1.len %d\n", satIntIo->satIntSmScsiXchg.smSgl1.len));
SM_DBG5(("smsatAllocIntIoResource: satIntIo->satIntSmScsiXchg.agSgl1.upper %d\n", satIntIo->satIntSmScsiXchg.smSgl1.upper));
SM_DBG5(("smsatAllocIntIoResource: satIntIo->satIntSmScsiXchg.agSgl1.lower %d\n", satIntIo->satIntSmScsiXchg.smSgl1.lower));
SM_DBG5(("smsatAllocIntIoResource: satIntIo->satIntSmScsiXchg.agSgl1.type %d\n", satIntIo->satIntSmScsiXchg.smSgl1.type));
SM_DBG5(("smsatAllocIntIoResource: return satIntIo %p\n", satIntIo));
return satIntIo;
}
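/*
 * Allocation/release pairing: smsatAllocIntIoResource() moves a free
 * smSatInternalIo_t onto satActiveIntIoLinkList and allocates the request
 * body plus an optional DMA buffer; smsatFreeIntIoResource() releases both
 * and returns the entry to satFreeIntIoLinkList. Callers in this file
 * follow the same shape, e.g. smsatIDSubStart() below:
 *
 *   satIntIo = smsatAllocIntIoResource(smRoot, smIORequest, satDevData,
 *                                      sizeof(agsaSATAIdentifyData_t),
 *                                      satIntIo);
 *   if (satIntIo == agNULL)
 *     return SM_RC_FAILURE;
 *   ...
 *   if (status != SM_RC_SUCCESS)
 *     smsatFreeIntIoResource(smRoot, satDevData, satIntIo);
 */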
osGLOBAL smDeviceData_t *
smAddToSharedcontext(
smRoot_t *smRoot,
agsaDevHandle_t *agDevHandle,
smDeviceHandle_t *smDeviceHandle,
agsaDevHandle_t *agExpDevHandle,
bit32 phyID
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
smDeviceData_t *oneDeviceData = agNULL;
smList_t *DeviceListList;
bit32 new_device = agTRUE;
SM_DBG2(("smAddToSharedcontext: start\n"));
/* find a device's existence */
DeviceListList = smAllShared->MainDeviceList.flink;
while (DeviceListList != &(smAllShared->MainDeviceList))
{
oneDeviceData = SMLIST_OBJECT_BASE(smDeviceData_t, MainLink, DeviceListList);
if (oneDeviceData == agNULL)
{
SM_DBG1(("smAddToSharedcontext: oneDeviceData is NULL!!!\n"));
return agNULL;
}
if (oneDeviceData->agDevHandle == agDevHandle)
{
SM_DBG2(("smAddToSharedcontext: did %d\n", oneDeviceData->id));
new_device = agFALSE;
break;
}
DeviceListList = DeviceListList->flink;
}
/* new device */
if (new_device == agTRUE)
{
SM_DBG2(("smAddToSharedcontext: new device\n"));
tdsmSingleThreadedEnter(smRoot, SM_DEVICE_LOCK);
if (SMLIST_EMPTY(&(smAllShared->FreeDeviceList)))
{
tdsmSingleThreadedLeave(smRoot, SM_DEVICE_LOCK);
SM_DBG1(("smAddToSharedcontext: empty DeviceData FreeLink!!!\n"));
smDeviceHandle->smData = agNULL;
return agNULL;
}
SMLIST_DEQUEUE_FROM_HEAD(&DeviceListList, &(smAllShared->FreeDeviceList));
tdsmSingleThreadedLeave(smRoot, SM_DEVICE_LOCK);
oneDeviceData = SMLIST_OBJECT_BASE(smDeviceData_t, FreeLink, DeviceListList);
oneDeviceData->smRoot = smRoot;
oneDeviceData->agDevHandle = agDevHandle;
oneDeviceData->valid = agTRUE;
smDeviceHandle->smData = oneDeviceData;
oneDeviceData->smDevHandle = smDeviceHandle;
if (agExpDevHandle == agNULL)
{
oneDeviceData->directlyAttached = agTRUE;
}
else
{
oneDeviceData->directlyAttached = agFALSE;
}
oneDeviceData->agExpDevHandle = agExpDevHandle;
oneDeviceData->phyID = phyID;
oneDeviceData->satPendingIO = 0;
oneDeviceData->satPendingNCQIO = 0;
oneDeviceData->satPendingNONNCQIO = 0;
/* add the devicedata to the portcontext */
tdsmSingleThreadedEnter(smRoot, SM_DEVICE_LOCK);
SMLIST_ENQUEUE_AT_TAIL(&(oneDeviceData->MainLink), &(smAllShared->MainDeviceList));
tdsmSingleThreadedLeave(smRoot, SM_DEVICE_LOCK);
SM_DBG2(("smAddToSharedcontext: new case did %d\n", oneDeviceData->id));
}
else
{
SM_DBG2(("smAddToSharedcontext: old device\n"));
oneDeviceData->smRoot = smRoot;
oneDeviceData->agDevHandle = agDevHandle;
oneDeviceData->valid = agTRUE;
smDeviceHandle->smData = oneDeviceData;
oneDeviceData->smDevHandle = smDeviceHandle;
if (agExpDevHandle == agNULL)
{
oneDeviceData->directlyAttached = agTRUE;
}
else
{
oneDeviceData->directlyAttached = agFALSE;
}
oneDeviceData->agExpDevHandle = agExpDevHandle;
oneDeviceData->phyID = phyID;
oneDeviceData->satPendingIO = 0;
oneDeviceData->satPendingNCQIO = 0;
oneDeviceData->satPendingNONNCQIO = 0;
SM_DBG2(("smAddToSharedcontext: old case did %d\n", oneDeviceData->id));
}
return oneDeviceData;
}
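/*
 * Note: smAddToSharedcontext() is idempotent with respect to agDevHandle.
 * Registering a handle that is already on MainDeviceList takes the "old
 * device" branch and reuses the existing smDeviceData_t instead of
 * consuming another FreeDeviceList entry; both branches leave the device
 * with the same freshly reset pending-IO counters.
 */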
osGLOBAL bit32
smRemoveFromSharedcontext(
smRoot_t *smRoot,
agsaDevHandle_t *agDevHandle,
smDeviceHandle_t *smDeviceHandle
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
smDeviceData_t *oneDeviceData = agNULL;
SM_DBG2(("smRemoveFromSharedcontext: start\n"));
/* smDeviceHandle->smData is intentionally not cleared here, due to device removal and completion ordering */
/* smDeviceHandle->smData = agNULL; */
/* find oneDeviceData from MainLink */
oneDeviceData = smFindInSharedcontext(smRoot, agDevHandle);
if (oneDeviceData == agNULL)
{
return SM_RC_FAILURE;
}
else
{
if (oneDeviceData->valid == agTRUE)
{
smDeviceDataReInit(smRoot, oneDeviceData);
tdsmSingleThreadedEnter(smRoot, SM_DEVICE_LOCK);
SMLIST_DEQUEUE_THIS(&(oneDeviceData->MainLink));
SMLIST_ENQUEUE_AT_TAIL(&(oneDeviceData->FreeLink), &(smAllShared->FreeDeviceList));
tdsmSingleThreadedLeave(smRoot, SM_DEVICE_LOCK);
return SM_RC_SUCCESS;
}
else
{
SM_DBG1(("smRemoveFromSharedcontext: did %d bad case!!!\n", oneDeviceData->id));
return SM_RC_FAILURE;
}
}
}
osGLOBAL smDeviceData_t *
smFindInSharedcontext(
smRoot_t *smRoot,
agsaDevHandle_t *agDevHandle
)
{
smIntRoot_t *smIntRoot = (smIntRoot_t *)smRoot->smData;
smIntContext_t *smAllShared = (smIntContext_t *)&smIntRoot->smAllShared;
smDeviceData_t *oneDeviceData = agNULL;
smList_t *DeviceListList;
SM_DBG2(("smFindInSharedcontext: start\n"));
tdsmSingleThreadedEnter(smRoot, SM_DEVICE_LOCK);
if (SMLIST_EMPTY(&(smAllShared->MainDeviceList)))
{
SM_DBG1(("smFindInSharedcontext: empty MainDeviceList!!!\n"));
tdsmSingleThreadedLeave(smRoot, SM_DEVICE_LOCK);
return agNULL;
}
else
{
tdsmSingleThreadedLeave(smRoot, SM_DEVICE_LOCK);
}
DeviceListList = smAllShared->MainDeviceList.flink;
while (DeviceListList != &(smAllShared->MainDeviceList))
{
oneDeviceData = SMLIST_OBJECT_BASE(smDeviceData_t, MainLink, DeviceListList);
if (oneDeviceData == agNULL)
{
SM_DBG1(("smFindInSharedcontext: oneDeviceData is NULL!!!\n"));
return agNULL;
}
if ((oneDeviceData->agDevHandle == agDevHandle) &&
(oneDeviceData->valid == agTRUE)
)
{
SM_DBG2(("smFindInSharedcontext: found, did %d\n", oneDeviceData->id));
return oneDeviceData;
}
DeviceListList = DeviceListList->flink;
}
SM_DBG2(("smFindInSharedcontext: not found\n"));
return agNULL;
}
osGLOBAL smSatIOContext_t *
smsatPrepareNewIO(
smSatInternalIo_t *satNewIntIo,
smIORequest_t *smOrgIORequest,
smDeviceData_t *satDevData,
smIniScsiCmnd_t *scsiCmnd,
smSatIOContext_t *satOrgIOContext
)
{
smSatIOContext_t *satNewIOContext;
smIORequestBody_t *smNewIORequestBody;
SM_DBG3(("smsatPrepareNewIO: start\n"));
/* the one to be used; good 8/2/07 */
satNewIntIo->satOrgSmIORequest = smOrgIORequest; /* this is already done in
smsatAllocIntIoResource() */
smNewIORequestBody = (smIORequestBody_t *)satNewIntIo->satIntRequestBody;
satNewIOContext = &(smNewIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(smNewIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satNewIntIo->satIntSmScsiXchg.scsiCmnd);
if (scsiCmnd != agNULL)
{
/* copies only the CDB, not the SCSI command fields such as LBA and number of blocks */
sm_memcpy(satNewIOContext->pScsiCmnd->cdb, scsiCmnd->cdb, 16);
}
satNewIOContext->pSense = &(smNewIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pSmSenseData = &(smNewIORequestBody->transport.SATA.smSenseData);
satNewIOContext->pSmSenseData->senseData = satNewIOContext->pSense;
satNewIOContext->smRequestBody = satNewIntIo->satIntRequestBody;
satNewIOContext->interruptContext = satOrgIOContext->interruptContext; /* inherit the interrupt context from the originating IO */
satNewIOContext->satIntIoContext = satNewIntIo;
satNewIOContext->psmDeviceHandle = satOrgIOContext->psmDeviceHandle;
satNewIOContext->satOrgIOContext = satOrgIOContext;
/* save smScsiXchg; used only by writesame10() */
satNewIOContext->smScsiXchg = satOrgIOContext->smScsiXchg;
return satNewIOContext;
}
osGLOBAL void
smsatSetDevInfo(
smDeviceData_t *oneDeviceData,
agsaSATAIdentifyData_t *SATAIdData
)
{
SM_DBG3(("smsatSetDevInfo: start\n"));
oneDeviceData->satDriveState = SAT_DEV_STATE_NORMAL;
oneDeviceData->satFormatState = agFALSE;
oneDeviceData->satDeviceFaultState = agFALSE;
oneDeviceData->satTmTaskTag = agNULL;
oneDeviceData->satAbortAfterReset = agFALSE;
oneDeviceData->satAbortCalled = agFALSE;
oneDeviceData->satSectorDone = 0;
/* Queue depth, Word 75 */
oneDeviceData->satNCQMaxIO = SATAIdData->queueDepth + 1;
SM_DBG3(("smsatSetDevInfo: max queue depth %d\n",oneDeviceData->satNCQMaxIO));
/* Support NCQ, if Word 76 bit 8 is set */
if (SATAIdData->sataCapabilities & 0x100)
{
SM_DBG3(("smsatSetDevInfo: device supports NCQ\n"));
oneDeviceData->satNCQ = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no NCQ\n"));
oneDeviceData->satNCQ = agFALSE;
}
/* Support 48 bit addressing, if Word 83 bit 10 and Word 86 bit 10 are set */
if ((SATAIdData->commandSetSupported1 & 0x400) &&
(SATAIdData->commandSetFeatureEnabled1 & 0x400) )
{
SM_DBG3(("smsatSetDevInfo: support 48 bit addressing\n"));
oneDeviceData->sat48BitSupport = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: NO 48 bit addressing\n"));
oneDeviceData->sat48BitSupport = agFALSE;
}
/* Support SMART Self Test, word84 bit 1 */
if (SATAIdData->commandSetFeatureSupportedExt & 0x02)
{
SM_DBG3(("smsatSetDevInfo: SMART self-test supported \n"));
oneDeviceData->satSMARTSelfTest = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no SMART self-test suppored\n"));
oneDeviceData->satSMARTSelfTest = agFALSE;
}
/* Support SMART feature set, word82 bit 0 */
if (SATAIdData->commandSetSupported & 0x01)
{
SM_DBG3(("smsatSetDevInfo: SMART feature set supported \n"));
oneDeviceData->satSMARTFeatureSet = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no SMART feature set suppored\n"));
oneDeviceData->satSMARTFeatureSet = agFALSE;
}
/* Support SMART enabled, word85 bit 0 */
if (SATAIdData->commandSetFeatureEnabled & 0x01)
{
SM_DBG3(("smsatSetDevInfo: SMART enabled \n"));
oneDeviceData->satSMARTEnabled = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no SMART enabled\n"));
oneDeviceData->satSMARTEnabled = agFALSE;
}
oneDeviceData->satVerifyState = 0;
/* Removable Media feature set support, word82 bit 2 */
if (SATAIdData->commandSetSupported & 0x4)
{
SM_DBG3(("smsatSetDevInfo: Removable Media supported \n"));
oneDeviceData->satRemovableMedia = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no Removable Media suppored\n"));
oneDeviceData->satRemovableMedia = agFALSE;
}
/* Removable Media feature set enabled, word 85, bit 2 */
if (SATAIdData->commandSetFeatureEnabled & 0x4)
{
SM_DBG3(("smsatSetDevInfo: Removable Media enabled\n"));
oneDeviceData->satRemovableMediaEnabled = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no Removable Media enabled\n"));
oneDeviceData->satRemovableMediaEnabled = agFALSE;
}
/* DMA Support, word49 bit8 */
if (SATAIdData->dma_lba_iod_ios_stimer & 0x100)
{
SM_DBG3(("smsatSetDevInfo: DMA supported \n"));
oneDeviceData->satDMASupport = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no DMA suppored\n"));
oneDeviceData->satDMASupport = agFALSE;
}
/* Support DMADIR, if Word 62 bit 8 is set */
if (SATAIdData->word62_74[0] & 0x8000)
{
SM_DBG3(("satSetDevInfo: DMADIR enabled\n"));
oneDeviceData->satDMADIRSupport = agTRUE;
}
else
{
SM_DBG3(("satSetDevInfo: DMADIR disabled\n"));
oneDeviceData->satDMADIRSupport = agFALSE;
}
/* DMA Enabled, word88 bit0-6, bit8-14*/
/* 0x7F7F = 0111 1111 0111 1111*/
if (SATAIdData->ultraDMAModes & 0x7F7F)
{
SM_DBG3(("smsatSetDevInfo: DMA enabled \n"));
oneDeviceData->satDMAEnabled = agTRUE;
if (SATAIdData->ultraDMAModes & 0x40)
{
oneDeviceData->satUltraDMAMode = 6;
}
else if (SATAIdData->ultraDMAModes & 0x20)
{
oneDeviceData->satUltraDMAMode = 5;
}
else if (SATAIdData->ultraDMAModes & 0x10)
{
oneDeviceData->satUltraDMAMode = 4;
}
else if (SATAIdData->ultraDMAModes & 0x08)
{
oneDeviceData->satUltraDMAMode = 3;
}
else if (SATAIdData->ultraDMAModes & 0x04)
{
oneDeviceData->satUltraDMAMode = 2;
}
else if (SATAIdData->ultraDMAModes & 0x01)
{
oneDeviceData->satUltraDMAMode = 1;
}
}
else
{
SM_DBG3(("smsatSetDevInfo: no DMA enabled\n"));
oneDeviceData->satDMAEnabled = agFALSE;
oneDeviceData->satUltraDMAMode = 0;
}
/*
setting satMaxUserAddrSectors: maximum user-addressable sectors,
words 60 - 61; should be at most 0x0FFFFFFF
*/
oneDeviceData->satMaxUserAddrSectors
= (SATAIdData->numOfUserAddressableSectorsHi << (8*2) )
+ SATAIdData->numOfUserAddressableSectorsLo;
SM_DBG3(("smsatSetDevInfo: MaxUserAddrSectors 0x%x decimal %d\n", oneDeviceData->satMaxUserAddrSectors, oneDeviceData->satMaxUserAddrSectors));
/* Read Look-ahead is supported */
if (SATAIdData->commandSetSupported & 0x40)
{
SM_DBG3(("smsatSetDevInfo: Read Look-ahead is supported\n"));
oneDeviceData->satReadLookAheadSupport= agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: Read Look-ahead is not supported\n"));
oneDeviceData->satReadLookAheadSupport= agFALSE;
}
/* Volatile Write Cache is supported */
if (SATAIdData->commandSetSupported & 0x20)
{
SM_DBG3(("smsatSetDevInfo: Volatile Write Cache is supported\n"));
oneDeviceData->satVolatileWriteCacheSupport = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: Volatile Write Cache is not supported\n"));
oneDeviceData->satVolatileWriteCacheSupport = agFALSE;
}
/* write cache enabled for caching mode page SAT Table 67 p69, word85 bit5 */
if (SATAIdData->commandSetFeatureEnabled & 0x20)
{
SM_DBG3(("smsatSetDevInfo: write cache enabled\n"));
oneDeviceData->satWriteCacheEnabled = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no write cache enabled\n"));
oneDeviceData->satWriteCacheEnabled = agFALSE;
}
/* look ahead enabled for caching mode page SAT Table 67 p69, word85 bit6 */
if (SATAIdData->commandSetFeatureEnabled & 0x40)
{
SM_DBG3(("smsatSetDevInfo: look ahead enabled\n"));
oneDeviceData->satLookAheadEnabled = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no look ahead enabled\n"));
oneDeviceData->satLookAheadEnabled = agFALSE;
}
/* Support WWN, if Word 87 bit 8 is set */
if (SATAIdData->commandSetFeatureDefault & 0x100)
{
SM_DBG3(("smsatSetDevInfo: device supports WWN\n"));
oneDeviceData->satWWNSupport = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no WWN\n"));
oneDeviceData->satWWNSupport = agFALSE;
}
/* Support DMA Setup Auto-Activate, if Word 78 bit 2 is set */
if (SATAIdData->sataFeaturesSupported & 0x4)
{
SM_DBG3(("smsatSetDevInfo: device supports DMA Setup Auto-Activate\n"));
oneDeviceData->satDMASetupAA = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no DMA Setup Auto-Activate\n"));
oneDeviceData->satDMASetupAA = agFALSE;
}
/* Support NCQ Queue Management Command, if Word 77 bit 5 is set */
if (SATAIdData->word77 & 0x10)
{
SM_DBG3(("smsatSetDevInfo: device supports NCQ Queue Management Command\n"));
oneDeviceData->satNCQQMgntCmd = agTRUE;
}
else
{
SM_DBG3(("smsatSetDevInfo: no NCQ Queue Management Command\n"));
oneDeviceData->satNCQQMgntCmd = agFALSE;
}
return;
}
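/*
 * Worked example of the IDENTIFY word decoding above (values are
 * illustrative only): a drive reporting queueDepth = 31 (word 75) and
 * sataCapabilities = 0x0100 (word 76 bit 8) yields satNCQMaxIO = 31 + 1
 * = 32 and satNCQ = agTRUE; ultraDMAModes = 0x007F (word 88) sets
 * satDMAEnabled = agTRUE and selects satUltraDMAMode = 6, because bit 6
 * (0x40) is the highest mode bit tested.
 */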
osGLOBAL void
smsatInquiryStandard(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData,
smIniScsiCmnd_t *scsiCmnd
)
{
smLUN_t *pLun;
pLun = &scsiCmnd->lun;
/*
Assumption: Basic Task Management is supported
-> BQUE 1 and CMDQUE 0, SPC-4, Table96, p147
*/
/*
See SPC-4, 6.4.2, p 143
and SAT revision 8, 8.1.2, p 28
*/
SM_DBG5(("smsatInquiryStandard: start\n"));
if (pInquiry == agNULL)
{
SM_DBG1(("smsatInquiryStandard: pInquiry is NULL, wrong\n"));
return;
}
else
{
SM_DBG5(("smsatInquiryStandard: pInquiry is NOT NULL\n"));
}
/*
 * Reject all LUNs other than LUN 0.
 */
if ( ((pLun->lun[0] | pLun->lun[1] | pLun->lun[2] | pLun->lun[3] |
pLun->lun[4] | pLun->lun[5] | pLun->lun[6] | pLun->lun[7] ) != 0) )
{
/* SAT Spec Table 8, p27, footnote 'a' */
pInquiry[0] = 0x7F;
}
else
{
pInquiry[0] = 0x00;
}
if (pSATAIdData->rm_ataDevice & ATA_REMOVABLE_MEDIA_DEVICE_MASK )
{
pInquiry[1] = 0x80;
}
else
{
pInquiry[1] = 0x00;
}
pInquiry[2] = 0x05; /* SPC-3 */
pInquiry[3] = 0x12; /* set HiSup 1; resp data format set to 2 */
pInquiry[4] = 0x1F; /* 35 - 4 = 31; Additional length */
pInquiry[5] = 0x00;
/* The following two are for task management. SAT Rev8, p20 */
if (pSATAIdData->sataCapabilities & 0x100)
{
/* NCQ supported; multiple outstanding SCSI IO are supported */
pInquiry[6] = 0x00; /* BQUE bit is not set */
pInquiry[7] = 0x02; /* CMDQUE bit is set */
}
else
{
pInquiry[6] = 0x80; /* BQUE bit is set */
pInquiry[7] = 0x00; /* CMDQUE bit is not set */
}
/*
* Vendor ID.
*/
sm_strncpy((char*)&pInquiry[8], AG_SAT_VENDOR_ID_STRING, 8); /* 8 bytes */
/*
* Product ID
*/
/* when flipped by LL */
pInquiry[16] = pSATAIdData->modelNumber[1];
pInquiry[17] = pSATAIdData->modelNumber[0];
pInquiry[18] = pSATAIdData->modelNumber[3];
pInquiry[19] = pSATAIdData->modelNumber[2];
pInquiry[20] = pSATAIdData->modelNumber[5];
pInquiry[21] = pSATAIdData->modelNumber[4];
pInquiry[22] = pSATAIdData->modelNumber[7];
pInquiry[23] = pSATAIdData->modelNumber[6];
pInquiry[24] = pSATAIdData->modelNumber[9];
pInquiry[25] = pSATAIdData->modelNumber[8];
pInquiry[26] = pSATAIdData->modelNumber[11];
pInquiry[27] = pSATAIdData->modelNumber[10];
pInquiry[28] = pSATAIdData->modelNumber[13];
pInquiry[29] = pSATAIdData->modelNumber[12];
pInquiry[30] = pSATAIdData->modelNumber[15];
pInquiry[31] = pSATAIdData->modelNumber[14];
/* when flipped */
/*
* Product Revision level.
*/
/*
* If the IDENTIFY DEVICE data received in words 25 and 26 from the ATA
* device are ASCII spaces (20h), do this translation.
*/
if ( (pSATAIdData->firmwareVersion[4] == 0x20 ) &&
(pSATAIdData->firmwareVersion[5] == 0x20 ) &&
(pSATAIdData->firmwareVersion[6] == 0x20 ) &&
(pSATAIdData->firmwareVersion[7] == 0x20 )
)
{
pInquiry[32] = pSATAIdData->firmwareVersion[1];
pInquiry[33] = pSATAIdData->firmwareVersion[0];
pInquiry[34] = pSATAIdData->firmwareVersion[3];
pInquiry[35] = pSATAIdData->firmwareVersion[2];
}
else
{
pInquiry[32] = pSATAIdData->firmwareVersion[5];
pInquiry[33] = pSATAIdData->firmwareVersion[4];
pInquiry[34] = pSATAIdData->firmwareVersion[7];
pInquiry[35] = pSATAIdData->firmwareVersion[6];
}
#ifdef REMOVED
/*
* Product ID
*/
/* when flipped by LL */
pInquiry[16] = pSATAIdData->modelNumber[0];
pInquiry[17] = pSATAIdData->modelNumber[1];
pInquiry[18] = pSATAIdData->modelNumber[2];
pInquiry[19] = pSATAIdData->modelNumber[3];
pInquiry[20] = pSATAIdData->modelNumber[4];
pInquiry[21] = pSATAIdData->modelNumber[5];
pInquiry[22] = pSATAIdData->modelNumber[6];
pInquiry[23] = pSATAIdData->modelNumber[7];
pInquiry[24] = pSATAIdData->modelNumber[8];
pInquiry[25] = pSATAIdData->modelNumber[9];
pInquiry[26] = pSATAIdData->modelNumber[10];
pInquiry[27] = pSATAIdData->modelNumber[11];
pInquiry[28] = pSATAIdData->modelNumber[12];
pInquiry[29] = pSATAIdData->modelNumber[13];
pInquiry[30] = pSATAIdData->modelNumber[14];
pInquiry[31] = pSATAIdData->modelNumber[15];
/* when flipped */
/*
* Product Revision level.
*/
/*
* If the IDENTIFY DEVICE data received in words 25 and 26 from the ATA
* device are ASCII spaces (20h), do this translation.
*/
if ( (pSATAIdData->firmwareVersion[4] == 0x20 ) &&
(pSATAIdData->firmwareVersion[5] == 0x20 ) &&
(pSATAIdData->firmwareVersion[6] == 0x20 ) &&
(pSATAIdData->firmwareVersion[7] == 0x20 )
)
{
pInquiry[32] = pSATAIdData->firmwareVersion[0];
pInquiry[33] = pSATAIdData->firmwareVersion[1];
pInquiry[34] = pSATAIdData->firmwareVersion[2];
pInquiry[35] = pSATAIdData->firmwareVersion[3];
}
else
{
pInquiry[32] = pSATAIdData->firmwareVersion[4];
pInquiry[33] = pSATAIdData->firmwareVersion[5];
pInquiry[34] = pSATAIdData->firmwareVersion[6];
pInquiry[35] = pSATAIdData->firmwareVersion[7];
}
#endif
SM_DBG5(("smsatInquiryStandard: end\n"));
return;
}
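/*
 * The pairwise index swap above ([1],[0],[3],[2], ...) compensates for the
 * lower layer delivering the ATA IDENTIFY string fields as 16-bit words
 * with their two ASCII characters already exchanged ("when flipped by
 * LL"); swapping each byte pair restores the readable order. The #ifdef
 * REMOVED block preserves the straight copy used when the data is not
 * flipped.
 */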
osGLOBAL void
smsatInquiryPage0(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData
)
{
SM_DBG5(("smsatInquiryPage0: start\n"));
/*
See SPC-4, 7.6.9, p 345
and SAT revision 8, 10.3.2, p 77
*/
pInquiry[0] = 0x00;
pInquiry[1] = 0x00; /* page code */
pInquiry[2] = 0x00; /* reserved */
pInquiry[3] = 8 - 3; /* last index (in this case, 8) - 3; page length */
/* supported vpd page list */
pInquiry[4] = 0x00; /* page 0x00 supported */
pInquiry[5] = 0x80; /* page 0x80 supported */
pInquiry[6] = 0x83; /* page 0x83 supported */
pInquiry[7] = 0x89; /* page 0x89 supported */
pInquiry[8] = 0xB1; /* page 0xB1 supported */
return;
}
osGLOBAL void
smsatInquiryPage83(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData,
smDeviceData_t *oneDeviceData
)
{
satSimpleSATAIdentifyData_t *pSimpleData;
/*
 * When translating the fields, in some cases using the simple form of SATA
 * Identify Device Data is easier, so we define it here.
 * Both pSimpleData and pSATAIdData point to the same data.
 */
pSimpleData = ( satSimpleSATAIdentifyData_t *)pSATAIdData;
SM_DBG5(("smsatInquiryPage83: start\n"));
pInquiry[0] = 0x00;
pInquiry[1] = 0x83; /* page code */
pInquiry[2] = 0; /* Reserved */
/*
* If the ATA device returns word 87 bit 8 set to one in its IDENTIFY DEVICE
* data indicating that it supports the WORLD WIDE NAME field
* (i.e., words 108-111), the SATL shall include an identification descriptor
* containing a logical unit name.
*/
if ( oneDeviceData->satWWNSupport)
{
#ifndef PMC_FREEBSD
/* Fill in SAT Rev8 Table85 */
/*
* Logical unit name derived from the world wide name.
*/
pInquiry[3] = 12; /* 15-3; page length, no additional ID descriptor assumed */
/*
* Identifier descriptor
*/
pInquiry[4] = 0x01; /* Code set: binary codes */
pInquiry[5] = 0x03; /* Identifier type : NAA */
pInquiry[6] = 0x00; /* Reserved */
pInquiry[7] = 0x08; /* Identifier length */
/* Bit 4-7 NAA field, bit 0-3 MSB of IEEE Company ID */
pInquiry[8] = (bit8)((pSATAIdData->namingAuthority) >> 8);
pInquiry[9] = (bit8)((pSATAIdData->namingAuthority) & 0xFF); /* IEEE Company ID */
pInquiry[10] = (bit8)((pSATAIdData->namingAuthority1) >> 8); /* IEEE Company ID */
/* Bit 4-7 LSB of IEEE Company ID, bit 0-3 MSB of Vendor Specific ID */
pInquiry[11] = (bit8)((pSATAIdData->namingAuthority1) & 0xFF);
pInquiry[12] = (bit8)((pSATAIdData->uniqueID_bit16_31) >> 8); /* Vendor Specific ID */
pInquiry[13] = (bit8)((pSATAIdData->uniqueID_bit16_31) & 0xFF); /* Vendor Specific ID */
pInquiry[14] = (bit8)((pSATAIdData->uniqueID_bit0_15) >> 8); /* Vendor Specific ID */
pInquiry[15] = (bit8)((pSATAIdData->uniqueID_bit0_15) & 0xFF); /* Vendor Specific ID */
#else
/* For FreeBSD */
/* Fill in SAT Rev8 Table85 */
/*
* Logical unit name derived from the world wide name.
*/
pInquiry[3] = 24; /* 27-3; page length, second ID descriptor included */
/*
* Identifier descriptor
*/
pInquiry[4] = 0x01; /* Code set: binary codes; this is proto_codeset in FreeBSD */
pInquiry[5] = 0x03; /* Identifier type : NAA ; this is id_type in FreeBSD*/
pInquiry[6] = 0x00; /* Reserved */
pInquiry[7] = 0x08; /* Identifier length */
/* Bit 4-7 NAA field, bit 0-3 MSB of IEEE Company ID */
pInquiry[8] = (bit8)((pSATAIdData->namingAuthority) >> 8);
pInquiry[9] = (bit8)((pSATAIdData->namingAuthority) & 0xFF); /* IEEE Company ID */
pInquiry[10] = (bit8)((pSATAIdData->namingAuthority1) >> 8); /* IEEE Company ID */
/* Bit 4-7 LSB of IEEE Company ID, bit 0-3 MSB of Vendor Specific ID */
pInquiry[11] = (bit8)((pSATAIdData->namingAuthority1) & 0xFF);
pInquiry[12] = (bit8)((pSATAIdData->uniqueID_bit16_31) >> 8); /* Vendor Specific ID */
pInquiry[13] = (bit8)((pSATAIdData->uniqueID_bit16_31) & 0xFF); /* Vendor Specific ID */
pInquiry[14] = (bit8)((pSATAIdData->uniqueID_bit0_15) >> 8); /* Vendor Specific ID */
pInquiry[15] = (bit8)((pSATAIdData->uniqueID_bit0_15) & 0xFF); /* Vendor Specific ID */
pInquiry[16] = 0x61; /* Code set: binary codes; this is proto_codeset in FreeBSD; SCSI_PROTO_SAS and SVPD_ID_CODESET_BINARY */
pInquiry[17] = 0x93; /* Identifier type : NAA ; this is id_type in FreeBSD; PIV set, ASSOCIATION is 01b and NAA (3h) */
pInquiry[18] = 0x00; /* Reserved */
pInquiry[19] = 0x08; /* Identifier length */
SM_DBG5(("smsatInquiryPage83: sasAddressHi 0x%08x\n", oneDeviceData->sasAddressHi));
SM_DBG5(("smsatInquiryPage83: sasAddressLo 0x%08x\n", oneDeviceData->sasAddressLo));
/* SAS address of SATA */
pInquiry[20] = ((oneDeviceData->sasAddressHi) & 0xFF000000 ) >> 24;
pInquiry[21] = ((oneDeviceData->sasAddressHi) & 0xFF0000 ) >> 16;
pInquiry[22] = ((oneDeviceData->sasAddressHi) & 0xFF00 ) >> 8;
pInquiry[23] = (oneDeviceData->sasAddressHi) & 0xFF;
pInquiry[24] = ((oneDeviceData->sasAddressLo) & 0xFF000000 ) >> 24;
pInquiry[25] = ((oneDeviceData->sasAddressLo) & 0xFF0000 ) >> 16;
pInquiry[26] = ((oneDeviceData->sasAddressLo) & 0xFF00 ) >> 8;
pInquiry[27] = (oneDeviceData->sasAddressLo) & 0xFF;
#endif
}
else
{
#ifndef PMC_FREEBSD
/* Fill in SAT Rev8 Table86 */
/*
* Logical unit name derived from the model number and serial number.
*/
pInquiry[3] = 72; /* 75 - 3; page length */
/*
* Identifier descriptor
*/
pInquiry[4] = 0x02; /* Code set: ASCII codes */
pInquiry[5] = 0x01; /* Identifier type : T10 vendor ID based */
pInquiry[6] = 0x00; /* Reserved */
pInquiry[7] = 0x44; /* 0x44, 68 Identifier length */
/* Bytes 8 to 15 are the vendor ID string 'ATA '. */
sm_strncpy((char *)&pInquiry[8], AG_SAT_VENDOR_ID_STRING, 8);
/*
 * Bytes 16 to 75 are the vendor-specific ID
 */
pInquiry[16] = (bit8)((pSimpleData->word[27]) >> 8);
pInquiry[17] = (bit8)((pSimpleData->word[27]) & 0x00ff);
pInquiry[18] = (bit8)((pSimpleData->word[28]) >> 8);
pInquiry[19] = (bit8)((pSimpleData->word[28]) & 0x00ff);
pInquiry[20] = (bit8)((pSimpleData->word[29]) >> 8);
pInquiry[21] = (bit8)((pSimpleData->word[29]) & 0x00ff);
pInquiry[22] = (bit8)((pSimpleData->word[30]) >> 8);
pInquiry[23] = (bit8)((pSimpleData->word[30]) & 0x00ff);
pInquiry[24] = (bit8)((pSimpleData->word[31]) >> 8);
pInquiry[25] = (bit8)((pSimpleData->word[31]) & 0x00ff);
pInquiry[26] = (bit8)((pSimpleData->word[32]) >> 8);
pInquiry[27] = (bit8)((pSimpleData->word[32]) & 0x00ff);
pInquiry[28] = (bit8)((pSimpleData->word[33]) >> 8);
pInquiry[29] = (bit8)((pSimpleData->word[33]) & 0x00ff);
pInquiry[30] = (bit8)((pSimpleData->word[34]) >> 8);
pInquiry[31] = (bit8)((pSimpleData->word[34]) & 0x00ff);
pInquiry[32] = (bit8)((pSimpleData->word[35]) >> 8);
pInquiry[33] = (bit8)((pSimpleData->word[35]) & 0x00ff);
pInquiry[34] = (bit8)((pSimpleData->word[36]) >> 8);
pInquiry[35] = (bit8)((pSimpleData->word[36]) & 0x00ff);
pInquiry[36] = (bit8)((pSimpleData->word[37]) >> 8);
pInquiry[37] = (bit8)((pSimpleData->word[37]) & 0x00ff);
pInquiry[38] = (bit8)((pSimpleData->word[38]) >> 8);
pInquiry[39] = (bit8)((pSimpleData->word[38]) & 0x00ff);
pInquiry[40] = (bit8)((pSimpleData->word[39]) >> 8);
pInquiry[41] = (bit8)((pSimpleData->word[39]) & 0x00ff);
pInquiry[42] = (bit8)((pSimpleData->word[40]) >> 8);
pInquiry[43] = (bit8)((pSimpleData->word[40]) & 0x00ff);
pInquiry[44] = (bit8)((pSimpleData->word[41]) >> 8);
pInquiry[45] = (bit8)((pSimpleData->word[41]) & 0x00ff);
pInquiry[46] = (bit8)((pSimpleData->word[42]) >> 8);
pInquiry[47] = (bit8)((pSimpleData->word[42]) & 0x00ff);
pInquiry[48] = (bit8)((pSimpleData->word[43]) >> 8);
pInquiry[49] = (bit8)((pSimpleData->word[43]) & 0x00ff);
pInquiry[50] = (bit8)((pSimpleData->word[44]) >> 8);
pInquiry[51] = (bit8)((pSimpleData->word[44]) & 0x00ff);
pInquiry[52] = (bit8)((pSimpleData->word[45]) >> 8);
pInquiry[53] = (bit8)((pSimpleData->word[45]) & 0x00ff);
pInquiry[54] = (bit8)((pSimpleData->word[46]) >> 8);
pInquiry[55] = (bit8)((pSimpleData->word[46]) & 0x00ff);
pInquiry[56] = (bit8)((pSimpleData->word[10]) >> 8);
pInquiry[57] = (bit8)((pSimpleData->word[10]) & 0x00ff);
pInquiry[58] = (bit8)((pSimpleData->word[11]) >> 8);
pInquiry[59] = (bit8)((pSimpleData->word[11]) & 0x00ff);
pInquiry[60] = (bit8)((pSimpleData->word[12]) >> 8);
pInquiry[61] = (bit8)((pSimpleData->word[12]) & 0x00ff);
pInquiry[62] = (bit8)((pSimpleData->word[13]) >> 8);
pInquiry[63] = (bit8)((pSimpleData->word[13]) & 0x00ff);
pInquiry[64] = (bit8)((pSimpleData->word[14]) >> 8);
pInquiry[65] = (bit8)((pSimpleData->word[14]) & 0x00ff);
pInquiry[66] = (bit8)((pSimpleData->word[15]) >> 8);
pInquiry[67] = (bit8)((pSimpleData->word[15]) & 0x00ff);
pInquiry[68] = (bit8)((pSimpleData->word[16]) >> 8);
pInquiry[69] = (bit8)((pSimpleData->word[16]) & 0x00ff);
pInquiry[70] = (bit8)((pSimpleData->word[17]) >> 8);
pInquiry[71] = (bit8)((pSimpleData->word[17]) & 0x00ff);
pInquiry[72] = (bit8)((pSimpleData->word[18]) >> 8);
pInquiry[73] = (bit8)((pSimpleData->word[18]) & 0x00ff);
pInquiry[74] = (bit8)((pSimpleData->word[19]) >> 8);
pInquiry[75] = (bit8)((pSimpleData->word[19]) & 0x00ff);
#else
/* for the FreeBSD */
/* Fill in SAT Rev8 Table86 */
/*
* Logical unit name derived from the model number and serial number.
*/
pInquiry[3] = 84; /* 87 - 3; page length */
/*
* Identifier descriptor
*/
pInquiry[4] = 0x02; /* Code set: ASCII codes */
pInquiry[5] = 0x01; /* Identifier type : T10 vendor ID based */
pInquiry[6] = 0x00; /* Reserved */
pInquiry[7] = 0x44; /* 0x44, 68 Identifier length */
/* Bytes 8 to 15 are the vendor ID string 'ATA '. */
sm_strncpy((char *)&pInquiry[8], AG_SAT_VENDOR_ID_STRING, 8);
/*
 * Bytes 16 to 75 are the vendor-specific ID
 */
pInquiry[16] = (bit8)((pSimpleData->word[27]) >> 8);
pInquiry[17] = (bit8)((pSimpleData->word[27]) & 0x00ff);
pInquiry[18] = (bit8)((pSimpleData->word[28]) >> 8);
pInquiry[19] = (bit8)((pSimpleData->word[28]) & 0x00ff);
pInquiry[20] = (bit8)((pSimpleData->word[29]) >> 8);
pInquiry[21] = (bit8)((pSimpleData->word[29]) & 0x00ff);
pInquiry[22] = (bit8)((pSimpleData->word[30]) >> 8);
pInquiry[23] = (bit8)((pSimpleData->word[30]) & 0x00ff);
pInquiry[24] = (bit8)((pSimpleData->word[31]) >> 8);
pInquiry[25] = (bit8)((pSimpleData->word[31]) & 0x00ff);
pInquiry[26] = (bit8)((pSimpleData->word[32]) >> 8);
pInquiry[27] = (bit8)((pSimpleData->word[32]) & 0x00ff);
pInquiry[28] = (bit8)((pSimpleData->word[33]) >> 8);
pInquiry[29] = (bit8)((pSimpleData->word[33]) & 0x00ff);
pInquiry[30] = (bit8)((pSimpleData->word[34]) >> 8);
pInquiry[31] = (bit8)((pSimpleData->word[34]) & 0x00ff);
pInquiry[32] = (bit8)((pSimpleData->word[35]) >> 8);
pInquiry[33] = (bit8)((pSimpleData->word[35]) & 0x00ff);
pInquiry[34] = (bit8)((pSimpleData->word[36]) >> 8);
pInquiry[35] = (bit8)((pSimpleData->word[36]) & 0x00ff);
pInquiry[36] = (bit8)((pSimpleData->word[37]) >> 8);
pInquiry[37] = (bit8)((pSimpleData->word[37]) & 0x00ff);
pInquiry[38] = (bit8)((pSimpleData->word[38]) >> 8);
pInquiry[39] = (bit8)((pSimpleData->word[38]) & 0x00ff);
pInquiry[40] = (bit8)((pSimpleData->word[39]) >> 8);
pInquiry[41] = (bit8)((pSimpleData->word[39]) & 0x00ff);
pInquiry[42] = (bit8)((pSimpleData->word[40]) >> 8);
pInquiry[43] = (bit8)((pSimpleData->word[40]) & 0x00ff);
pInquiry[44] = (bit8)((pSimpleData->word[41]) >> 8);
pInquiry[45] = (bit8)((pSimpleData->word[41]) & 0x00ff);
pInquiry[46] = (bit8)((pSimpleData->word[42]) >> 8);
pInquiry[47] = (bit8)((pSimpleData->word[42]) & 0x00ff);
pInquiry[48] = (bit8)((pSimpleData->word[43]) >> 8);
pInquiry[49] = (bit8)((pSimpleData->word[43]) & 0x00ff);
pInquiry[50] = (bit8)((pSimpleData->word[44]) >> 8);
pInquiry[51] = (bit8)((pSimpleData->word[44]) & 0x00ff);
pInquiry[52] = (bit8)((pSimpleData->word[45]) >> 8);
pInquiry[53] = (bit8)((pSimpleData->word[45]) & 0x00ff);
pInquiry[54] = (bit8)((pSimpleData->word[46]) >> 8);
pInquiry[55] = (bit8)((pSimpleData->word[46]) & 0x00ff);
pInquiry[56] = (bit8)((pSimpleData->word[10]) >> 8);
pInquiry[57] = (bit8)((pSimpleData->word[10]) & 0x00ff);
pInquiry[58] = (bit8)((pSimpleData->word[11]) >> 8);
pInquiry[59] = (bit8)((pSimpleData->word[11]) & 0x00ff);
pInquiry[60] = (bit8)((pSimpleData->word[12]) >> 8);
pInquiry[61] = (bit8)((pSimpleData->word[12]) & 0x00ff);
pInquiry[62] = (bit8)((pSimpleData->word[13]) >> 8);
pInquiry[63] = (bit8)((pSimpleData->word[13]) & 0x00ff);
pInquiry[64] = (bit8)((pSimpleData->word[14]) >> 8);
pInquiry[65] = (bit8)((pSimpleData->word[14]) & 0x00ff);
pInquiry[66] = (bit8)((pSimpleData->word[15]) >> 8);
pInquiry[67] = (bit8)((pSimpleData->word[15]) & 0x00ff);
pInquiry[68] = (bit8)((pSimpleData->word[16]) >> 8);
pInquiry[69] = (bit8)((pSimpleData->word[16]) & 0x00ff);
pInquiry[70] = (bit8)((pSimpleData->word[17]) >> 8);
pInquiry[71] = (bit8)((pSimpleData->word[17]) & 0x00ff);
pInquiry[72] = (bit8)((pSimpleData->word[18]) >> 8);
pInquiry[73] = (bit8)((pSimpleData->word[18]) & 0x00ff);
pInquiry[74] = (bit8)((pSimpleData->word[19]) >> 8);
pInquiry[75] = (bit8)((pSimpleData->word[19]) & 0x00ff);
pInquiry[76] = 0x61; /* Code set: binary codes; this is proto_codeset in FreeBSD; SCSI_PROTO_SAS and SVPD_ID_CODESET_BINARY */
pInquiry[77] = 0x93; /* Identifier type : NAA ; this is id_type in FreeBSD; PIV set, ASSOCIATION is 01b and NAA (3h) */
pInquiry[78] = 0x00; /* Reserved */
pInquiry[79] = 0x08; /* Identifier length */
SM_DBG5(("smsatInquiryPage83: NO WWN sasAddressHi 0x%08x\n", oneDeviceData->sasAddressHi));
SM_DBG5(("smsatInquiryPage83: No WWN sasAddressLo 0x%08x\n", oneDeviceData->sasAddressLo));
/* SAS address of SATA */
pInquiry[80] = ((oneDeviceData->sasAddressHi) & 0xFF000000 ) >> 24;
pInquiry[81] = ((oneDeviceData->sasAddressHi) & 0xFF0000 ) >> 16;
pInquiry[82] = ((oneDeviceData->sasAddressHi) & 0xFF00 ) >> 8;
pInquiry[83] = (oneDeviceData->sasAddressHi) & 0xFF;
pInquiry[84] = ((oneDeviceData->sasAddressLo) & 0xFF000000 ) >> 24;
pInquiry[85] = ((oneDeviceData->sasAddressLo) & 0xFF0000 ) >> 16;
pInquiry[86] = ((oneDeviceData->sasAddressLo) & 0xFF00 ) >> 8;
pInquiry[87] = (oneDeviceData->sasAddressLo) & 0xFF;
#endif
}
return;
}
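/*
 * VPD page 0x83 layout summary for the WWN-capable case (SAT Rev 8,
 * Table 85): bytes 0..3 carry the page header (page code 0x83 and page
 * length), bytes 4..7 the descriptor header (code set 1h = binary,
 * identifier type 3h = NAA), and bytes 8..15 the 8-byte NAA identifier
 * assembled from IDENTIFY words 108-111. The PMC_FREEBSD build appends a
 * second descriptor carrying the SAS address of the SATA device so the
 * OS also sees a target-port identifier.
 */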
osGLOBAL void
smsatInquiryPage89(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData,
smDeviceData_t *oneDeviceData,
bit32 len
)
{
/*
SAT revision 8, 10.3.5, p 83
*/
satSimpleSATAIdentifyData_t *pSimpleData;
/*
 * When translating the fields, in some cases using the simple form of SATA
 * Identify Device Data is easier, so we define it here.
 * Both pSimpleData and pSATAIdData point to the same data.
 */
pSimpleData = ( satSimpleSATAIdentifyData_t *)pSATAIdData;
SM_DBG5(("smsatInquiryPage89: start\n"));
pInquiry[0] = 0x00; /* Peripheral Qualifier and Peripheral Device Type */
pInquiry[1] = 0x89; /* page code */
/* Page length 0x238 */
pInquiry[2] = 0x02;
pInquiry[3] = 0x38;
pInquiry[4] = 0x0; /* reserved */
pInquiry[5] = 0x0; /* reserved */
pInquiry[6] = 0x0; /* reserved */
pInquiry[7] = 0x0; /* reserved */
/* SAT Vendor Identification */
sm_strncpy((char*)&pInquiry[8], "PMC-SIERRA", 8); /* 8 bytes */
/* SAT Product Identification */
sm_strncpy((char*)&pInquiry[16], "Tachyon-SPC ", 16); /* 16 bytes */
/* SAT Product Revision Level */
sm_strncpy((char*)&pInquiry[32], "01", 4); /* 4 bytes */
/* Signature, SAT revision8, Table88, p85 */
pInquiry[36] = 0x34; /* FIS type */
if (oneDeviceData->satDeviceType == SATA_ATA_DEVICE)
{
/* interrupt bit assumed to be 0 */
pInquiry[37] = (bit8)((oneDeviceData->satPMField) >> (4 * 7)); /* first four bits of PM field */
}
else
{
/* interrupt bit assumed to be 1 */
pInquiry[37] = (bit8)(0x40 + (bit8)(((oneDeviceData->satPMField) >> (4 * 7)))); /* first four bits of PM field */
}
pInquiry[38] = 0;
pInquiry[39] = 0;
if (oneDeviceData->satDeviceType == SATA_ATA_DEVICE)
{
pInquiry[40] = 0x01; /* LBA Low */
pInquiry[41] = 0x00; /* LBA Mid */
pInquiry[42] = 0x00; /* LBA High */
pInquiry[43] = 0x00; /* Device */
pInquiry[44] = 0x00; /* LBA Low Exp */
pInquiry[45] = 0x00; /* LBA Mid Exp */
pInquiry[46] = 0x00; /* LBA High Exp */
pInquiry[47] = 0x00; /* Reserved */
pInquiry[48] = 0x01; /* Sector Count */
pInquiry[49] = 0x00; /* Sector Count Exp */
}
else
{
pInquiry[40] = 0x01; /* LBA Low */
pInquiry[41] = 0x00; /* LBA Mid */
pInquiry[42] = 0x00; /* LBA High */
pInquiry[43] = 0x00; /* Device */
pInquiry[44] = 0x00; /* LBA Low Exp */
pInquiry[45] = 0x00; /* LBA Mid Exp */
pInquiry[46] = 0x00; /* LBA High Exp */
pInquiry[47] = 0x00; /* Reserved */
pInquiry[48] = 0x01; /* Sector Count */
pInquiry[49] = 0x00; /* Sector Count Exp */
}
/* Reserved */
pInquiry[50] = 0x00;
pInquiry[51] = 0x00;
pInquiry[52] = 0x00;
pInquiry[53] = 0x00;
pInquiry[54] = 0x00;
pInquiry[55] = 0x00;
/* Command Code */
if (oneDeviceData->satDeviceType == SATA_ATA_DEVICE)
{
pInquiry[56] = 0xEC; /* IDENTIFY DEVICE */
}
else
{
pInquiry[56] = 0xA1; /* IDENTIFY PACKET DEVICE */
}
/* Reserved */
pInquiry[57] = 0x0;
pInquiry[58] = 0x0;
pInquiry[59] = 0x0;
/* check the length; len is assumed to be at least 60 */
if (len < SATA_PAGE89_INQUIRY_SIZE)
{
/* Identify Device */
sm_memcpy(&pInquiry[60], pSimpleData, MIN((len - 60), sizeof(satSimpleSATAIdentifyData_t)));
}
else
{
/* Identify Device */
sm_memcpy(&pInquiry[60], pSimpleData, sizeof(satSimpleSATAIdentifyData_t));
}
return;
}
osGLOBAL void
smsatInquiryPage80(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData
)
{
SM_DBG5(("smsatInquiryPage89: start\n"));
/*
See SPC-4, 7.6.9, p 345
and SAT revision 8, 10.3.3, p 77
*/
pInquiry[0] = 0x00;
pInquiry[1] = 0x80; /* page code */
pInquiry[2] = 0x00; /* reserved */
pInquiry[3] = 0x14; /* page length */
/* product serial number */
pInquiry[4] = pSATAIdData->serialNumber[1];
pInquiry[5] = pSATAIdData->serialNumber[0];
pInquiry[6] = pSATAIdData->serialNumber[3];
pInquiry[7] = pSATAIdData->serialNumber[2];
pInquiry[8] = pSATAIdData->serialNumber[5];
pInquiry[9] = pSATAIdData->serialNumber[4];
pInquiry[10] = pSATAIdData->serialNumber[7];
pInquiry[11] = pSATAIdData->serialNumber[6];
pInquiry[12] = pSATAIdData->serialNumber[9];
pInquiry[13] = pSATAIdData->serialNumber[8];
pInquiry[14] = pSATAIdData->serialNumber[11];
pInquiry[15] = pSATAIdData->serialNumber[10];
pInquiry[16] = pSATAIdData->serialNumber[13];
pInquiry[17] = pSATAIdData->serialNumber[12];
pInquiry[18] = pSATAIdData->serialNumber[15];
pInquiry[19] = pSATAIdData->serialNumber[14];
pInquiry[20] = pSATAIdData->serialNumber[17];
pInquiry[21] = pSATAIdData->serialNumber[16];
pInquiry[22] = pSATAIdData->serialNumber[19];
pInquiry[23] = pSATAIdData->serialNumber[18];
return;
}
osGLOBAL void
smsatInquiryPageB1(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData
)
{
bit32 i;
satSimpleSATAIdentifyData_t *pSimpleData;
SM_DBG5(("smsatInquiryPageB1: start\n"));
pSimpleData = ( satSimpleSATAIdentifyData_t *)pSATAIdData;
/*
See SBC-3, revision31, Table193, p273
and SAT-3 revision 3, 10.3.6, p141
*/
pInquiry[0] = 0x00; /* Peripheral Qualifier and Peripheral Device Type */
pInquiry[1] = 0xB1; /* page code */
/* page length */
pInquiry[2] = 0x0;
pInquiry[3] = 0x3C;
/* medium rotation rate */
pInquiry[4] = (bit8) ((pSimpleData->word[217]) >> 8);
pInquiry[5] = (bit8) ((pSimpleData->word[217]) & 0xFF);
/* reserved */
pInquiry[6] = 0x0;
/* nominal form factor bits 3:0 */
pInquiry[7] = (bit8) ((pSimpleData->word[168]) & 0xF);
/* reserved */
for (i=8;i<64;i++)
{
pInquiry[i] = 0x0;
}
return;
}
osGLOBAL void
smsatDefaultTranslation(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smSatIOContext_t *satIOContext,
smScsiRspSense_t *pSense,
bit8 ataStatus,
bit8 ataError,
bit32 interruptContext
)
{
SM_DBG5(("smsatDefaultTranslation: start\n"));
/*
* Check for device fault case
*/
if ( ataStatus & DF_ATA_STATUS_MASK )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
interruptContext );
return;
}
/*
 * If the status error bit is set, the error register needs to be checked
 */
if ( ataStatus & ERR_ATA_STATUS_MASK )
{
if ( ataError & NM_ATA_ERROR_MASK )
{
SM_DBG1(("smsatDefaultTranslation: NM_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NOT_READY,
0,
SCSI_SNSCODE_MEDIUM_NOT_PRESENT,
satIOContext);
}
else if (ataError & UNC_ATA_ERROR_MASK)
{
SM_DBG1(("smsatDefaultTranslation: UNC_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_MEDIUM_ERROR,
0,
SCSI_SNSCODE_UNRECOVERED_READ_ERROR,
satIOContext);
}
else if (ataError & IDNF_ATA_ERROR_MASK)
{
SM_DBG1(("smsatDefaultTranslation: IDNF_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_MEDIUM_ERROR,
0,
SCSI_SNSCODE_RECORD_NOT_FOUND,
satIOContext);
}
else if (ataError & MC_ATA_ERROR_MASK)
{
SM_DBG1(("smsatDefaultTranslation: MC_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_UNIT_ATTENTION,
0,
SCSI_SNSCODE_NOT_READY_TO_READY_CHANGE,
satIOContext);
}
else if (ataError & MCR_ATA_ERROR_MASK)
{
SM_DBG1(("smsatDefaultTranslation: MCR_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_UNIT_ATTENTION,
0,
SCSI_SNSCODE_OPERATOR_MEDIUM_REMOVAL_REQUEST,
satIOContext);
}
else if (ataError & ICRC_ATA_ERROR_MASK)
{
SM_DBG1(("smsatDefaultTranslation: ICRC_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_INFORMATION_UNIT_CRC_ERROR,
satIOContext);
}
else if (ataError & ABRT_ATA_ERROR_MASK)
{
SM_DBG1(("smsatDefaultTranslation: ABRT_ATA_ERROR ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
}
else
{
SM_DBG1(("smsatDefaultTranslation: **** UNEXPECTED ATA_ERROR **** ataError= 0x%x, smIORequest=%p!!!\n",
ataError, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
}
/* Send the completion response now */
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
interruptContext );
return;
}
else /* (ataStatus & ERR_ATA_STATUS_MASK ) is false */
{
/* This case should never happen */
SM_DBG1(("smsatDefaultTranslation: *** UNEXPECTED ATA status 0x%x *** smIORequest=%p!!!\n",
ataStatus, smIORequest));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
interruptContext );
return;
}
return;
}
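/*
 * Summary of the ATA error -> SCSI sense mapping applied above:
 *
 *   NM   -> NOT READY       / MEDIUM NOT PRESENT
 *   UNC  -> MEDIUM ERROR    / UNRECOVERED READ ERROR
 *   IDNF -> MEDIUM ERROR    / RECORD NOT FOUND
 *   MC   -> UNIT ATTENTION  / NOT READY TO READY CHANGE
 *   MCR  -> UNIT ATTENTION  / OPERATOR MEDIUM REMOVAL REQUEST
 *   ICRC -> ABORTED COMMAND / INFORMATION UNIT CRC ERROR
 *   ABRT -> ABORTED COMMAND / NO ADDITIONAL INFO
 *
 * Anything else, including a set device-fault (DF) bit, is reported as
 * HARDWARE ERROR / INTERNAL TARGET FAILURE.
 */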
osGLOBAL bit32
smIDStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle
)
{
smDeviceData_t *oneDeviceData = agNULL;
smIORequestBody_t *smIORequestBody = agNULL;
smSatIOContext_t *satIOContext = agNULL;
bit32 status = SM_RC_FAILURE;
SM_DBG2(("smIDStart: start, smIORequest %p\n", smIORequest));
oneDeviceData = (smDeviceData_t *)smDeviceHandle->smData;
if (oneDeviceData == agNULL)
{
SM_DBG1(("smIDStart: oneDeviceData is NULL!!!\n"));
return SM_RC_FAILURE;
}
if (oneDeviceData->valid == agFALSE)
{
SM_DBG1(("smIDStart: oneDeviceData is not valid, did %d !!!\n", oneDeviceData->id));
return SM_RC_FAILURE;
}
smIORequestBody = (smIORequestBody_t*)smIORequest->smData;//smDequeueIO(smRoot);
if (smIORequestBody == agNULL)
{
SM_DBG1(("smIDStart: smIORequestBody is NULL!!!\n"));
return SM_RC_FAILURE;
}
smIOReInit(smRoot, smIORequestBody);
SM_DBG3(("smIDStart: io ID %d!!!\n", smIORequestBody->id ));
smIORequestBody->smIORequest = smIORequest;
smIORequestBody->smDevHandle = smDeviceHandle;
satIOContext = &(smIORequestBody->transport.SATA.satIOContext);
/* setting up satIOContext */
satIOContext->pSatDevData = oneDeviceData;
satIOContext->pFis = &(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext->smRequestBody = smIORequestBody;
satIOContext->psmDeviceHandle = smDeviceHandle;
satIOContext->smScsiXchg = agNULL;
/*smIORequest->smData = smIORequestBody;*/
SM_DBG3(("smIDStart: smIORequestBody %p smIORequestBody->smIORequest %p!!!\n", smIORequestBody, smIORequestBody->smIORequest));
SM_DBG1(("smIDStart: did %d\n", oneDeviceData->id));
status = smsatIDSubStart( smRoot,
smIORequest,
smDeviceHandle,
agNULL,
satIOContext);
if (status != SM_RC_SUCCESS)
{
SM_DBG1(("smIDStart: smsatIDSubStart failure %d!!!\n", status));
/*smEnqueueIO(smRoot, satIOContext);*/
}
SM_DBG2(("smIDStart: exit\n"));
return status;
}
/*
  SM-generated IO; allocates its resources via smsatAllocIntIoResource()
*/
osGLOBAL bit32
smsatIDSubStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smSCSIRequest, /* agNULL */
smSatIOContext_t *satIOContext
)
{
smSatInternalIo_t *satIntIo = agNULL;
smDeviceData_t *satDevData = agNULL;
smIORequestBody_t *smIORequestBody;
smSatIOContext_t *satNewIOContext;
bit32 status;
SM_DBG2(("smsatIDSubStart: start\n"));
satDevData = satIOContext->pSatDevData;
/* allocate identify device command */
satIntIo = smsatAllocIntIoResource( smRoot,
smIORequest,
satDevData,
sizeof(agsaSATAIdentifyData_t), /* 512; size of identify device data */
satIntIo);
if (satIntIo == agNULL)
{
SM_DBG1(("smsatIDSubStart: can't alloacate!!!\n"));
return SM_RC_FAILURE;
}
satIOContext->satIntIoContext = satIntIo;
/* fill in fields */
/* the one that worked and the same; 5/21/07 */
satIntIo->satOrgSmIORequest = smIORequest; /* changed */
smIORequestBody = satIntIo->satIntRequestBody;
satNewIOContext = &(smIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satIntIo->satIntSmScsiXchg.scsiCmnd);
satNewIOContext->pSense = &(smIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pSmSenseData = &(smIORequestBody->transport.SATA.smSenseData);
satNewIOContext->smRequestBody = satIntIo->satIntRequestBody; /* key fix */
// satNewIOContext->interruptContext = tiInterruptContext;
satNewIOContext->satIntIoContext = satIntIo;
satNewIOContext->psmDeviceHandle = smDeviceHandle;
satNewIOContext->satOrgIOContext = satIOContext; /* changed */
/* this is valid only for TD-layer-generated IO (not triggered by the OS at all) */
satNewIOContext->smScsiXchg = &(satIntIo->satIntSmScsiXchg);
SM_DBG6(("smsatIDSubStart: SM satIOContext %p \n", satIOContext));
SM_DBG6(("smsatIDSubStart: SM satNewIOContext %p \n", satNewIOContext));
SM_DBG6(("smsatIDSubStart: SM tiScsiXchg %p \n", satIOContext->smScsiXchg));
SM_DBG6(("smsatIDSubStart: SM tiScsiXchg %p \n", satNewIOContext->smScsiXchg));
SM_DBG3(("smsatIDSubStart: satNewIOContext %p smIORequestBody %p\n", satNewIOContext, smIORequestBody));
status = smsatIDStart(smRoot,
&satIntIo->satIntSmIORequest, /* New smIORequest */
smDeviceHandle,
satNewIOContext->smScsiXchg, /* New smScsiInitiatorRequest_t *smScsiRequest, */
satNewIOContext);
if (status != SM_RC_SUCCESS)
{
SM_DBG1(("smsatIDSubStart: failed in sending %d!!!\n", status));
smsatFreeIntIoResource( smRoot,
satDevData,
satIntIo);
return SM_RC_FAILURE;
}
SM_DBG2(("smsatIDSubStart: end\n"));
return status;
}
osGLOBAL bit32
smsatIDStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smSCSIRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
#ifdef SM_INTERNAL_DEBUG
smIORequestBody_t *smIORequestBody;
smSatInternalIo_t *satIntIoContext;
#endif
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG2(("smsatIDStart: start\n"));
#ifdef SM_INTERNAL_DEBUG
satIntIoContext = satIOContext->satIntIoContext;
smIORequestBody = satIntIoContext->satIntRequestBody;
#endif
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
SM_DBG2(("smsatIDStart: IDENTIFY_PACKET_DEVICE\n"));
fis->h.command = SAT_IDENTIFY_PACKET_DEVICE; /* 0xA1 */
}
else
{
SM_DBG2(("smsatIDStart: IDENTIFY_DEVICE\n"));
fis->h.command = SAT_IDENTIFY_DEVICE; /* 0xEC */
}
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatIDStartCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef SM_INTERNAL_DEBUG
smhexdump("smsatIDStart", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
smhexdump("smsatIDStart LL", (bit8 *)&(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
SM_DBG2(("smsatIDStart: end status %d\n", status));
return status;
}
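/*
 * The request built above is a host-to-device Register FIS (type 0x27)
 * with the C bit set; the command byte selects IDENTIFY DEVICE (0xEC) or
 * IDENTIFY PACKET DEVICE (0xA1) and all LBA/count fields stay zero. It is
 * issued as AGSA_SATA_PROTOCOL_PIO_READ and completes through
 * smsatIDStartCB, which receives the 512-byte identify data in the DMA
 * buffer allocated by smsatIDSubStart().
 */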
osGLOBAL FORCEINLINE bit32
smsatIOStart(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smSCSIRequest,
smSatIOContext_t *satIOContext
)
{
smDeviceData_t *pSatDevData = satIOContext->pSatDevData;
smScsiRspSense_t *pSense = satIOContext->pSense;
smIniScsiCmnd_t *scsiCmnd = &smSCSIRequest->scsiCmnd;
smLUN_t *pLun = &scsiCmnd->lun;
smSatInternalIo_t *pSatIntIo = agNULL;
bit32 status = SM_RC_FAILURE;
SM_DBG2(("smsatIOStart: start\n"));
/*
 * Reject all LUNs other than LUN 0.
 */
if ( ((pLun->lun[0] | pLun->lun[1] | pLun->lun[2] | pLun->lun[3] |
pLun->lun[4] | pLun->lun[5] | pLun->lun[6] | pLun->lun[7] ) != 0) &&
(scsiCmnd->cdb[0] != SCSIOPC_INQUIRY)
)
{
SM_DBG1(("smsatIOStart: *** REJECT *** LUN not zero, cdb[0]=0x%x did %d !!!\n",
scsiCmnd->cdb[0], pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_NOT_SUPPORTED,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG2(("smsatIOStart: satPendingIO %d satNCQMaxIO %d\n",pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
/* this may happen after tiCOMReset, until the OS sends INQUIRY */
if (pSatDevData->IDDeviceValid == agFALSE && (scsiCmnd->cdb[0] != SCSIOPC_INQUIRY))
{
SM_DBG1(("smsatIOStart: invalid identify device data did %d !!!\n", pSatDevData->id));
SM_DBG1(("smsatIOStart: satPendingIO %d satNCQMaxIO %d\n", pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
SM_DBG1(("smsatIOStart: satPendingNCQIO %d satPendingNONNCQIO %d\n", pSatDevData->satPendingNCQIO, pSatDevData->satPendingNONNCQIO));
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_NODEVICE;
}
/*
* Check if we need to return BUSY, i.e. recovery in progress
*/
if (pSatDevData->satDriveState == SAT_DEV_STATE_IN_RECOVERY)
{
SM_DBG1(("smsatIOStart: IN RECOVERY STATE cdb[0]=0x%x did=%d !!!\n",
scsiCmnd->cdb[0], pSatDevData->id));
SM_DBG2(("smsatIOStart: device %p satPendingIO %d satNCQMaxIO %d\n", pSatDevData, pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
SM_DBG2(("smsatIOStart: device %p satPendingNCQIO %d satPendingNONNCQIO %d\n",pSatDevData, pSatDevData->satPendingNCQIO, pSatDevData->satPendingNONNCQIO));
/*smEnqueueIO(smRoot, satIOContext);*/
// return SM_RC_FAILURE;
return SM_RC_DEVICE_BUSY;
}
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
if (scsiCmnd->cdb[0] == SCSIOPC_REPORT_LUN)
{
return smsatReportLun(smRoot, smIORequest, smDeviceHandle, smSCSIRequest, satIOContext);
}
else
{
return smsatPacket(smRoot, smIORequest, smDeviceHandle, smSCSIRequest, satIOContext);
}
}
else
{
/* Parse CDB */
switch(scsiCmnd->cdb[0])
{
case SCSIOPC_READ_10:
status = smsatRead10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_10:
status = smsatWrite10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_6:
status = smsatRead6( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_12:
SM_DBG5(("smsatIOStart: SCSIOPC_READ_12\n"));
status = smsatRead12( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_16:
status = smsatRead16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_6:
status = smsatWrite6( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_12:
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_12 \n"));
status = smsatWrite12( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_16:
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_16 \n"));
status = smsatWrite16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_VERIFY_10:
status = smsatVerify10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_VERIFY_12:
SM_DBG5(("smsatIOStart: SCSIOPC_VERIFY_12\n"));
status = smsatVerify12( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_VERIFY_16:
SM_DBG5(("smsatIOStart: SCSIOPC_VERIFY_16\n"));
status = smsatVerify16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_TEST_UNIT_READY:
status = smsatTestUnitReady( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_INQUIRY:
status = smsatInquiry( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_REQUEST_SENSE:
status = smsatRequestSense( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_MODE_SENSE_6:
status = smsatModeSense6( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_MODE_SENSE_10:
status = smsatModeSense10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_CAPACITY_10:
status = smsatReadCapacity10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_CAPACITY_16:
status = smsatReadCapacity16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_REPORT_LUN:
status = smsatReportLun( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_FORMAT_UNIT:
SM_DBG5(("smsatIOStart: SCSIOPC_FORMAT_UNIT\n"));
status = smsatFormatUnit( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_SEND_DIAGNOSTIC:
SM_DBG5(("smsatIOStart: SCSIOPC_SEND_DIAGNOSTIC\n"));
status = smsatSendDiagnostic( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_START_STOP_UNIT:
SM_DBG5(("smsatIOStart: SCSIOPC_START_STOP_UNIT\n"));
status = smsatStartStopUnit( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_SAME_10:
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_SAME_10\n"));
status = smsatWriteSame10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_SAME_16: /* not supported due to the transfer length (sector count) */
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_SAME_16\n"));
status = smsatWriteSame16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_LOG_SENSE:
SM_DBG5(("smsatIOStart: SCSIOPC_LOG_SENSE\n"));
status = smsatLogSense( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_MODE_SELECT_6:
SM_DBG5(("smsatIOStart: SCSIOPC_MODE_SELECT_6\n"));
status = smsatModeSelect6( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_MODE_SELECT_10:
SM_DBG5(("smsatIOStart: SCSIOPC_MODE_SELECT_10\n"));
status = smsatModeSelect10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_SYNCHRONIZE_CACHE_10: /* error-return behavior is an open question; shares its CB with
satSynchronizeCache16 */
SM_DBG5(("smsatIOStart: SCSIOPC_SYNCHRONIZE_CACHE_10\n"));
status = smsatSynchronizeCache10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_SYNCHRONIZE_CACHE_16: /* error-return behavior is an open question; shares its CB with
satSynchronizeCache16 */
SM_DBG5(("smsatIOStart: SCSIOPC_SYNCHRONIZE_CACHE_16\n"));
status = smsatSynchronizeCache16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_AND_VERIFY_10: /* handles both single and multiple (chained) writes */
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_AND_VERIFY_10\n"));
status = smsatWriteAndVerify10( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_AND_VERIFY_12:
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_AND_VERIFY_12\n"));
status = smsatWriteAndVerify12( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_AND_VERIFY_16:
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_AND_VERIFY_16\n"));
status = smsatWriteAndVerify16( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_MEDIA_SERIAL_NUMBER:
SM_DBG5(("smsatIOStart: SCSIOPC_READ_MEDIA_SERIAL_NUMBER\n"));
status = smsatReadMediaSerialNumber( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_READ_BUFFER:
SM_DBG5(("smsatIOStart: SCSIOPC_READ_BUFFER\n"));
status = smsatReadBuffer( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_WRITE_BUFFER:
SM_DBG5(("smsatIOStart: SCSIOPC_WRITE_BUFFER\n"));
status = smsatWriteBuffer( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_REASSIGN_BLOCKS:
SM_DBG5(("smsatIOStart: SCSIOPC_REASSIGN_BLOCKS\n"));
status = smsatReassignBlocks( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
case SCSIOPC_ATA_PASS_THROUGH12: /* fall through */
case SCSIOPC_ATA_PASS_THROUGH16:
SM_DBG5(("smsatIOStart: SCSIOPC_ATA_PASS_THROUGH\n"));
status = smsatPassthrough( smRoot,
smIORequest,
smDeviceHandle,
smSCSIRequest,
satIOContext);
break;
default:
/* SCSI command not implemented; set up error response */
SM_DBG1(("smsatIOStart: unsupported SCSI cdb[0]=0x%x did=%d !!!\n",
scsiCmnd->cdb[0], pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
status = SM_RC_SUCCESS;
break;
} /* end switch */
}
if (status == SM_RC_BUSY || status == SM_RC_DEVICE_BUSY)
{
SM_DBG1(("smsatIOStart: BUSY did %d!!!\n", pSatDevData->id));
SM_DBG2(("smsatIOStart: LL is busy or target queue is full\n"));
SM_DBG2(("smsatIOStart: device %p satPendingIO %d satNCQMaxIO %d\n",pSatDevData, pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
SM_DBG2(("smsatIOStart: device %p satPendingNCQIO %d satPendingNONNCQIO %d\n",pSatDevData, pSatDevData->satPendingNCQIO, pSatDevData->satPendingNONNCQIO));
pSatIntIo = satIOContext->satIntIoContext;
/*smEnqueueIO(smRoot, satIOContext);*/
/* free internal structure */
smsatFreeIntIoResource( smRoot,
pSatDevData,
pSatIntIo);
}
return status;
}
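/*****************************************************************************/
/*! \brief Builds fixed-format SCSI sense data (SPC-4).
*
* This function zeroes the sense buffer, then fills in the response code
* (0x70, current error), sense key, information field, and additional
* sense code/qualifier. For ATA PASS-THROUGH information available, the
* command-specific information bytes carry the returned ATA registers.
*
* \param pSense: Pointer to the sense data buffer.
* \param SnsKey: SCSI sense key.
* \param SnsInfo: SCSI sense information field.
* \param SnsCode: additional sense code (15:8) and qualifier (7:0).
* \param satIOContext: Pointer to the SAT IO Context; may be agNULL.
*/
/*****************************************************************************/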
osGLOBAL void
smsatSetSensePayload(
smScsiRspSense_t *pSense,
bit8 SnsKey,
bit32 SnsInfo,
bit16 SnsCode,
smSatIOContext_t *satIOContext)
{
/* for fixed format sense data, SPC-4, p37 */
bit32 i;
bit32 senseLength;
bit8 tmp = 0;
SM_DBG2(("smsatSetSensePayload: start\n"));
senseLength = sizeof(smScsiRspSense_t);
/* zero out the data area */
for (i=0;i< senseLength;i++)
{
((bit8*)pSense)[i] = 0;
}
/*
* SCSI Sense Data part of response data
*/
pSense->snsRespCode = 0x70; /* 0xC0 == vendor specific */
/* 0x70 == standard current error */
pSense->senseKey = SnsKey;
/*
* Put sense info in scsi order format
*/
pSense->info[0] = (bit8)((SnsInfo >> 24) & 0xff);
pSense->info[1] = (bit8)((SnsInfo >> 16) & 0xff);
pSense->info[2] = (bit8)((SnsInfo >> 8) & 0xff);
pSense->info[3] = (bit8)((SnsInfo) & 0xff);
pSense->addSenseLen = 11; /* fixed size of sense data = 18 */
pSense->addSenseCode = (bit8)((SnsCode >> 8) & 0xFF);
pSense->senseQual = (bit8)(SnsCode & 0xFF);
/*
* Set pointer in scsi status
*/
switch(SnsKey)
{
/*
* set illegal request sense key specific error in cdb, no bit pointer
*/
case SCSI_SNSKEY_ILLEGAL_REQUEST:
pSense->skeySpecific[0] = 0xC8;
break;
default:
break;
}
/* setting sense data length */
if (satIOContext != agNULL)
{
satIOContext->pSmSenseData->senseLen = 18;
}
else
{
SM_DBG1(("smsatSetSensePayload: satIOContext is NULL!!!\n"));
}
/* Only for SCSI_SNSCODE_ATA_PASS_THROUGH_INFORMATION_AVAILABLE;
guard satIOContext since it is dereferenced below and may be agNULL */
if ((satIOContext != agNULL) && (SnsCode == SCSI_SNSCODE_ATA_PASS_THROUGH_INFORMATION_AVAILABLE))
{
/* filling in COMMAND-SPECIFIC INFORMATION */
tmp = satIOContext->extend << 7 | satIOContext->Sector_Cnt_Upper_Nonzero << 6 | satIOContext->LBA_Upper_Nonzero << 5;
SM_DBG3(("smsatSetSensePayload: extend 0x%x Sector_Cnt_Upper_Nonzero 0x%x LBA_Upper_Nonzero 0x%x\n",
satIOContext->extend, satIOContext->Sector_Cnt_Upper_Nonzero, satIOContext->LBA_Upper_Nonzero));
SM_DBG3(("smsatSetSensePayload: tmp 0x%x\n", tmp));
pSense->cmdSpecific[0] = tmp;
pSense->cmdSpecific[1] = satIOContext->LBAHigh07;
pSense->cmdSpecific[2] = satIOContext->LBAMid07;
pSense->cmdSpecific[3] = satIOContext->LBALow07;
// smhexdump("smsatSetSensePayload: cmdSpecific",(bit8 *)pSense->cmdSpecific, 4);
// smhexdump("smsatSetSensePayload: info",(bit8 *)pSense->info, 4);
}
return;
}
/*****************************************************************************
*! \brief smsatDecodeSATADeviceType
*
* This routine decodes the SATA signature to determine the device type.
*
* \param pSignature: Pointer to the 5-byte device signature
*
*
* \return:
* SATA_ATA_DEVICE, SATA_ATAPI_DEVICE, SATA_PM_DEVICE, SATA_SEMB_DEVICE,
* SATA_SEMB_WO_SEP_DEVICE, or UNKNOWN_DEVICE if the signature is not recognized
*
*****************************************************************************/
/*
Signature references: ATA p65; PM p65; SATA II p79, p80
*/
GLOBAL bit32
smsatDecodeSATADeviceType(
bit8 *pSignature
)
{
bit32 deviceType = UNKNOWN_DEVICE;
if ( (pSignature)[0] == 0x01 && (pSignature)[1] == 0x01
&& (pSignature)[2] == 0x00 && (pSignature)[3] == 0x00
&& (pSignature)[4] == 0xA0 ) /* this is the signature of a Hitachi SATA HDD*/
{
deviceType = SATA_ATA_DEVICE;
}
else if ( (pSignature)[0] == 0x01 && (pSignature)[1] == 0x01
&& (pSignature)[2] == 0x00 && (pSignature)[3] == 0x00
&& (pSignature)[4] == 0x00 )
{
deviceType = SATA_ATA_DEVICE;
}
else if ( (pSignature)[0] == 0x01 && (pSignature)[1] == 0x01
&& (pSignature)[2] == 0x14 && (pSignature)[3] == 0xEB
&& ( (pSignature)[4] == 0x00 || (pSignature)[4] == 0x10) )
{
deviceType = SATA_ATAPI_DEVICE;
}
else if ( (pSignature)[0] == 0x01 && (pSignature)[1] == 0x01
&& (pSignature)[2] == 0x69 && (pSignature)[3] == 0x96
&& (pSignature)[4] == 0x00 )
{
deviceType = SATA_PM_DEVICE;
}
else if ( (pSignature)[0] == 0x01 && (pSignature)[1] == 0x01
&& (pSignature)[2] == 0x3C && (pSignature)[3] == 0xC3
&& (pSignature)[4] == 0x00 )
{
deviceType = SATA_SEMB_DEVICE;
}
else if ( (pSignature)[0] == 0xFF && (pSignature)[1] == 0xFF
&& (pSignature)[2] == 0xFF && (pSignature)[3] == 0xFF
&& (pSignature)[4] == 0xFF )
{
deviceType = SATA_SEMB_WO_SEP_DEVICE;
}
return deviceType;
}
/*****************************************************************************/
/*! \brief SAT implementation for ATAPI Packet Command.
*
* SAT implementation for the ATAPI PACKET command; builds the FIS and sends
* the request to the LL layer.
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e smIOSuccess: I/O request successfully initiated.
* - \e smIOBusy: No resources available, try again later.
* - \e smIONoDevice: Invalid device handle.
* - \e smIOError: Other errors.
*/
/*****************************************************************************/
osGLOBAL bit32
smsatPacket(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
smDeviceData_t *pSatDevData;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG3(("smsatPacket: start, SCSI CDB is 0x%X %X %X %X %X %X %X %X %X %X %X %X\n",
scsiCmnd->cdb[0],scsiCmnd->cdb[1],scsiCmnd->cdb[2],scsiCmnd->cdb[3],
scsiCmnd->cdb[4],scsiCmnd->cdb[5],scsiCmnd->cdb[6],scsiCmnd->cdb[7],
scsiCmnd->cdb[8],scsiCmnd->cdb[9],scsiCmnd->cdb[10],scsiCmnd->cdb[11]));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set to 1 */
fis->h.command = SAT_PACKET; /* 0xA0 */
if (pSatDevData->satDMADIRSupport) /* DMADIR enabled*/
{
fis->h.features = (smScsiRequest->dataDirection == smDirectionIn)? 0x04 : 0; /* 1 for D2H, 0 for H2D */
}
else
{
fis->h.features = 0; /* FIS reserve */
}
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/*DMA transfer mode*/
fis->h.features |= 0x01;
}
else
{
/*PIO transfer mode*/
fis->h.features |= 0x0;
}
/* Byte count low and byte count high */
if ( scsiCmnd->expDataLength > 0xFFFF )
{
fis->d.lbaMid = 0xFF; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xFF; /* FIS LBA (23:16) */
}
else
{
fis->d.lbaMid = (bit8)scsiCmnd->expDataLength; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)(scsiCmnd->expDataLength>>8); /* FIS LBA (23:16) */
}
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.device = 0; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
satIOContext->ATACmd = SAT_PACKET;
if (smScsiRequest->dataDirection == smDirectionIn)
{
agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
}
else
{
agRequestType = AGSA_SATA_PROTOCOL_H2D_PKT;
}
satIOContext->satCompleteCB = &smsatPacketCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG3(("smsatPacket: return\n"));
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for smsatSetFeaturesPIO.
*
* This function creates a SET FEATURES FIS and sends the request to the LL layer.
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e smIOSuccess: I/O request successfully initiated.
* - \e smIOBusy: No resources available, try again later.
* - \e smIONoDevice: Invalid device handle.
* - \e smIOError: Other errors.
*/
/*****************************************************************************/
osGLOBAL bit32
smsatSetFeaturesPIO(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status = SM_RC_FAILURE;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG2(("smsatSetFeaturesPIO: start\n"));
/*
* Send the Set Features command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x03; /* set transfer mode */
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
fis->d.sectorCount = 0x0C; /* enable PIO transfer mode */
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSetFeaturesPIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG2(("smsatSetFeaturesPIO: return\n"));
/* debugging code */
if (smIORequest->tdData == smIORequest->smData)
{
SM_DBG1(("smsatSetFeaturesPIO: incorrect smIORequest\n"));
}
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI REQUEST SENSE to ATAPI device.
*
* SAT implementation for SCSI REQUEST SENSE.
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e smIOSuccess: I/O request successfully initiated.
* - \e smIOBusy: No resources available, try again later.
* - \e smIONoDevice: Invalid device handle.
* - \e smIOError: Other errors.
*/
/*****************************************************************************/
osGLOBAL bit32
smsatRequestSenseForATAPI(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
smDeviceData_t *pSatDevData;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
scsiCmnd->cdb[0] = SCSIOPC_REQUEST_SENSE;
scsiCmnd->cdb[1] = 0;
scsiCmnd->cdb[2] = 0;
scsiCmnd->cdb[3] = 0;
scsiCmnd->cdb[4] = (bit8)scsiCmnd->expDataLength;
scsiCmnd->cdb[5] = 0;
SM_DBG3(("smsatRequestSenseForATAPI: start, SCSI CDB is 0x%X %X %X %X %X %X %X %X %X %X %X %X\n",
scsiCmnd->cdb[0],scsiCmnd->cdb[1],scsiCmnd->cdb[2],scsiCmnd->cdb[3],
scsiCmnd->cdb[4],scsiCmnd->cdb[5],scsiCmnd->cdb[6],scsiCmnd->cdb[7],
scsiCmnd->cdb[8],scsiCmnd->cdb[9],scsiCmnd->cdb[10],scsiCmnd->cdb[11]));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set to 1 */
fis->h.command = SAT_PACKET; /* 0xA0 */
if (pSatDevData->satDMADIRSupport) /* DMADIR enabled*/
{
fis->h.features = (smScsiRequest->dataDirection == smDirectionIn)? 0x04 : 0; /* 1 for D2H, 0 for H2D */
}
else
{
fis->h.features = 0; /* FIS reserve */
}
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
fis->h.features |= 0x01;
}
else
{
fis->h.features |= 0x0;
}
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = (bit8)scsiCmnd->expDataLength; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)(scsiCmnd->expDataLength>>8); /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
satIOContext->ATACmd = SAT_PACKET;
agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
satIOContext->satCompleteCB = &smsatRequestSenseForATAPICB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG3(("smsatRequestSenseForATAPI: return\n"));
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for smsatDeviceReset.
*
* This function creates DEVICE RESET fis and sends the request to LL layer
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e smIOSuccess: I/O request successfully initiated.
* - \e smIOBusy: No resources available, try again later.
* - \e smIONoDevice: Invalid device handle.
* - \e smIOError: Other errors.
*/
/*****************************************************************************/
osGLOBAL bit32
smsatDeviceReset(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG3(("smsatDeviceReset: start\n"));
/*
* Send the DEVICE RESET command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_DEVICE_RESET; /* 0x08 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DEV_RESET;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatDeviceResetCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG3(("smsatDeviceReset: return\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for smsatExecuteDeviceDiagnostic.
*
* This function creates Execute Device Diagnostic fis and sends the request to LL layer
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e smIOSuccess: I/O request successfully initiated.
* - \e smIOBusy: No resources available, try again later.
* - \e smIONoDevice: Invalid device handle.
* - \e smIOError: Other errors.
*/
/*****************************************************************************/
osGLOBAL bit32
smsatExecuteDeviceDiagnostic(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG3(("smsatExecuteDeviceDiagnostic: start\n"));
/*
* Send the Execute Device Diagnostic command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_EXECUTE_DEVICE_DIAGNOSTIC; /* 0x90 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatExecuteDeviceDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG3(("smsatExecuteDeviceDiagnostic: return\n"));
return status;
}
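/*****************************************************************************/
/*! \brief Builds deferred-format SCSI sense data.
*
* Currently a placeholder: it only logs entry and returns without filling
* in the sense buffer.
*/
/*****************************************************************************/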
osGLOBAL void
smsatSetDeferredSensePayload(
smScsiRspSense_t *pSense,
bit8 SnsKey,
bit32 SnsInfo,
bit16 SnsCode,
smSatIOContext_t *satIOContext
)
{
SM_DBG2(("smsatSetDeferredSensePayload: start\n"));
return;
}
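/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ(6).
*
* This function translates SCSI READ(6) into READ SECTORS, READ DMA,
* READ SECTORS EXT, READ DMA EXT, or READ FPDMA QUEUED according to the
* device's DMA, 48-bit addressing, and NCQ capabilities, then sends the
* FIS to the LL layer.
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return: SM_RC_SUCCESS or the status returned by smsataLLIOStart
*/
/*****************************************************************************/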
GLOBAL bit32
smsatRead6(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit16 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG2(("smsatRead6: start\n"));
/* no FUA checking since read6 */
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead6: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* cdb6: compute LBA and transfer length */
lba = (((scsiCmnd->cdb[1]) & 0x1f) << (8*2))
+ (scsiCmnd->cdb[2] << 8) + scsiCmnd->cdb[3];
tl = scsiCmnd->cdb[4];
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When neither 48-bit addressing nor NCQ is supported, return check condition
if the LBA is beyond (2^28 - 1)
*/
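/*
case numbering used below (following SAT Table 34):
case 1 = PIO (READ SECTORS), case 2 = DMA (READ DMA),
case 3 = DMA EXT (READ DMA EXT), case 4 = PIO EXT (READ SECTORS EXT),
case 5 = NCQ (READ FPDMA QUEUED)
*/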
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead6: return LBA out of range!!!\n"));
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
SM_DBG5(("smsatRead6: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
}
else
{
/* case 1 */
/* READ SECTORS for easier implementation */
SM_DBG5(("smsatRead6: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT only */
SM_DBG5(("smsatRead6: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
}
else
{
/* case 4 */
/* READ SECTORS EXT for easier implementation */
SM_DBG5(("smsatRead6: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
/* sanity check */
SM_DBG1(("smsatRead6: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG5(("smsatRead6: case 5\n"));
/* 48-bit FPDMA addressing is supported; use the READ FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->h.features = 0; /* FIS sector count (7:0) */
fis->d.featuresExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->h.features = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
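/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ(10).
*
* This function translates SCSI READ(10) into READ SECTORS, READ DMA,
* READ SECTORS EXT, READ DMA EXT, or READ FPDMA QUEUED according to the
* device's capabilities. Transfers that exceed one ATA command are
* chained via smsatSplitSGL and completed by smsatChainedDataIOCB.
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return: SM_RC_SUCCESS or the status returned by smsataLLIOStart
*/
/*****************************************************************************/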
osGLOBAL FORCEINLINE bit32
smsatRead10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smDeviceData_t *pSatDevData = satIOContext->pSatDevData;
smScsiRspSense_t *pSense = satIOContext->pSense;
smIniScsiCmnd_t *scsiCmnd = &smScsiRequest->scsiCmnd;
agsaFisRegHostToDevice_t *fis = satIOContext->pFis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
SM_DBG2(("smsatRead10: start\n"));
SM_DBG2(("smsatRead10: pSatDevData did=%d\n", pSatDevData->id));
// smhexdump("smsatRead10", (bit8 *)scsiCmnd->cdb, 10);
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead10: return FUA_NV!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/*
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
*/
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = 0;
TL[5] = 0;
TL[6] = scsiCmnd->cdb[7];
TL[7] = scsiCmnd->cdb[8]; /* LSB */
/* cdb10: compute LBA and transfer length */
lba = (scsiCmnd->cdb[2] << 24) + (scsiCmnd->cdb[3] << 16)
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
SM_DBG5(("smsatRead10: lba %d functioned lba %d\n", lba, smsatComputeCDB10LBA(satIOContext)));
SM_DBG5(("smsatRead10: lba 0x%x functioned lba 0x%x\n", lba, smsatComputeCDB10LBA(satIOContext)));
SM_DBG5(("smsatRead10: tl %d functioned tl %d\n", tl, smsatComputeCDB10TL(satIOContext)));
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When neither 48-bit addressing nor NCQ is supported, return check condition
if the LBA is beyond (2^28 - 1)
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatRead10: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatRead10: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatRead10: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatRead10: case 5\n"));
/* 48-bit FPDMA addressing is supported; use the READ FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
satIOContext->ATACmd = SAT_READ_FPDMA_QUEUED;
}
else if (pSatDevData->sat48BitSupport == agTRUE) /* case 3 and 4 */
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT */
SM_DBG5(("smsatRead10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA_EXT;
}
else
{
/* case 4 */
/* READ MULTIPLE EXT or READ SECTOR(S) EXT or READ VERIFY SECTOR(S) EXT*/
/* READ SECTORS EXT for easier implementation */
SM_DBG5(("smsatRead10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ10_FUA_MASK)
{
/* for now, no support for FUA */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS_EXT;
}
}
else/* case 1 and 2 */
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
/* if the transfer length does not fit in one ATA command, split it across multiple ATA commands */
SM_DBG5(("smsatRead10: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA;
}
else
{
/* case 1 */
/* READ MULTIPLE or READ SECTOR(S) */
/* READ SECTORS for easier implementation */
/* if the transfer length does not fit in one ATA command, split it across multiple ATA commands */
SM_DBG5(("smsatRead10: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS;
}
}
// smhexdump("satRead10 final fis", (bit8 *)fis, sizeof(agsaFisRegHostToDevice_t));
/* save the current LBA and the original TL */
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
compute the number of loops and remainder for tl:
per-command limit is 0x100 sectors if not EXT, 0xFFFF if EXT
*/
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0x100);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
/* Initialize CB for SATA completion.
*/
if (LoopNum == 1)
{
SM_DBG5(("smsatRead10: NON CHAINED data\n"));
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
}
else
{
SM_DBG2(("smsatRead10: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
fis->d.sectorCount = 0x0;
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
NON_BIT48_ADDRESS_TL_LIMIT*SATA_SECTOR_SIZE, /* 0x100 * 0x200 */
(satIOContext->OrgTL)*SATA_SECTOR_SIZE,
agTRUE);
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
BIT48_ADDRESS_TL_LIMIT*SATA_SECTOR_SIZE, /* 0xFFFF * 0x200 */
(satIOContext->OrgTL)*SATA_SECTOR_SIZE,
agTRUE);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
BIT48_ADDRESS_TL_LIMIT*SATA_SECTOR_SIZE, /* 0xFFFF * 0x200 */
(satIOContext->OrgTL)*SATA_SECTOR_SIZE,
agTRUE);
}
/* chained data */
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatRead10: return\n"));
return (status);
}
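/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ(12).
*
* This function translates SCSI READ(12) into READ SECTORS, READ DMA,
* READ SECTORS EXT, READ DMA EXT, or READ FPDMA QUEUED according to the
* device's capabilities. Transfers that exceed one ATA command are
* chained and completed by smsatChainedDataIOCB.
*
* \param smRoot: Pointer to TISA initiator driver/port instance.
* \param smIORequest: Pointer to TISA I/O request context for this I/O.
* \param smDeviceHandle: Pointer to TISA device handle for this I/O.
* \param smScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return: SM_RC_SUCCESS or the status returned by smsataLLIOStart
*/
/*****************************************************************************/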
osGLOBAL bit32
smsatRead12(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatRead12: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead12: return FUA_NV!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead12: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0; /* MSB */
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[6];
TL[5] = scsiCmnd->cdb[7];
TL[6] = scsiCmnd->cdb[8];
TL[7] = scsiCmnd->cdb[9]; /* LSB */
lba = smsatComputeCDB12LBA(satIOContext);
tl = smsatComputeCDB12TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When neither 48-bit addressing nor NCQ is supported, return check condition
if the LBA is beyond (2^28 - 1)
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatRead12: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatRead12: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
/* if the transfer length does not fit in one ATA command,
split it across multiple ATA commands */
SM_DBG5(("smsatRead12: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA;
}
else
{
/* case 1 */
/* READ MULTIPLE or READ SECTOR(S) */
/* READ SECTORS for easier implementation */
/* if the transfer length does not fit in one ATA command, split it across multiple ATA commands */
SM_DBG5(("smsatRead12: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT */
SM_DBG5(("smsatRead12: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA_EXT;
}
else
{
/* case 4 */
/* READ MULTIPLE EXT or READ SECTOR(S) EXT or READ VERIFY SECTOR(S) EXT*/
/* READ SECTORS EXT for easier implementation */
SM_DBG5(("smsatRead12: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ12_FUA_MASK)
{
/* for now, no support for FUA */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatRead12: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatRead12: case 5\n"));
/* 48-bit FPDMA addressing is supported; use the READ FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->h.features = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ12_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
satIOContext->ATACmd = SAT_READ_FPDMA_QUEUED;
}
/* save the current LBA and the original TL */
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
compute the number of loops and remainder for tl:
per-command limit is 0xFF sectors if not EXT, 0xFFFF if EXT
*/
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatRead12: NON CHAINED data\n"));
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
}
else
{
SM_DBG1(("smsatRead12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_READ_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* chained data */
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatRead12: return\n"));
return (status);
}
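/*
 * A minimal sketch of the loop computation assumed above; this is only
 * an illustration of ceil(tl / limit), not the driver's actual
 * smsatComputeLoopNum(), which is defined elsewhere in this layer.
 */
static bit32 smsatComputeLoopNumSketch(bit32 tl, bit32 limit)
{
  bit32 LoopNum;
  if (tl == 0)
  {
    LoopNum = 1;              /* a zero count still issues one command */
  }
  else if ((tl % limit) == 0)
  {
    LoopNum = tl / limit;     /* exact multiple of the limit */
  }
  else
  {
    LoopNum = tl / limit + 1; /* the remainder needs one extra command */
  }
  return LoopNum;
}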
osGLOBAL bit32
smsatRead16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
// bit32 limitExtChk = agFALSE; /* lba limit check for bit48 addressing check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatRead16: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead16: return FUA_NV!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRead16: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
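/*
 * LBA[] and TL[] are 8-byte big-endian staging buffers: READ(16) carries
 * a 64-bit LBA in cdb[2..9] and a 32-bit transfer length in cdb[10..13].
 * The explicit byte-by-byte fill (rather than memcpy) lets
 * smsatCheckLimit() below treat both as full-width values.
 */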
lba = smsatComputeCDB16LBA(satIOContext);
tl = smsatComputeCDB16TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatRead16: return LBA out of range, not EXT!!!\n"));
/*smEnqueueIO(smRoot, satIOContext);*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
// rangeChk = smsatAddNComparebit64(LBA, TL);
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatRead16: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
/* if the transfer length does not fit in one ATA command, it is split across multiple commands */
SM_DBG5(("smsatRead16: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA;
}
else
{
/* case 1 */
/* READ MULTIPLE or READ SECTOR(S) */
/* READ SECTORS for easier implementation */
/* if the transfer length does not fit in one ATA command, it is split across multiple commands */
SM_DBG5(("smsatRead16: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT */
SM_DBG5(("smsatRead16: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA_EXT;
}
else
{
/* case 4 */
/* READ MULTIPLE EXT or READ SECTOR(S) EXT or READ VERIFY SECTOR(S) EXT*/
/* READ SECTORS EXT for easier implementation */
SM_DBG5(("smsatRead16: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ16_FUA_MASK)
{
/* for now, no support for FUA */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatRead16: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatRead16: case 5\n"));
/* Support 48-bit FPDMA addressing, use READ FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->h.features = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ16_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
satIOContext->ATACmd = SAT_READ_FPDMA_QUEUED;
}
/* saves the current LBA and original TL */
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatRead16: NON CHAINED data\n"));
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
}
else
{
SM_DBG1(("smsatRead16: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_READ_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* chained data */
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatRead16: return\n"));
return (status);
}
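/*
 * Hedged sketch of the range check used above: smsatCheckLimit() is
 * presumed to treat LBA[] and TL[] as big-endian 64-bit values and to
 * flag requests whose end exceeds the ATA addressing limit (2^28 for
 * non-EXT, 2^48 for EXT commands). The real helper also checks the disk
 * capacity, which this illustration omits; a 64-bit "bit64" typedef is
 * assumed here.
 */
static bit32 smsatCheckLimitSketch(bit8 *LBA, bit8 *TL, bit32 isExt)
{
  bit64 lba = 0, tl = 0, limit;
  bit32 i;
  for (i = 0; i < 8; i++)
  {
    lba = (lba << 8) | LBA[i]; /* MSB first */
    tl  = (tl  << 8) | TL[i];
  }
  limit = isExt ? ((bit64)1 << 48) : ((bit64)1 << 28);
  return (lba + tl > limit) ? agTRUE : agFALSE; /* agTRUE = out of range */
}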
osGLOBAL bit32
smsatWrite6(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit16 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWrite6: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite6: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* cdb6; computing LBA and transfer length */
lba = (((scsiCmnd->cdb[1]) & 0x1f) << (8*2))
+ (scsiCmnd->cdb[2] << 8) + scsiCmnd->cdb[3];
tl = scsiCmnd->cdb[4];
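/*
 * WRITE(6) packs a 21-bit LBA into the CDB: bits 20:16 come from the low
 * five bits of cdb[1], bits 15:8 from cdb[2], and bits 7:0 from cdb[3];
 * e.g. cdb[1..3] = 0x01 0x02 0x03 yields lba = 0x010203. cdb[4] is the
 * one-byte transfer length, where 0 means 256 blocks.
 */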
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite6: return LBA out of range!!!\n"));
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
SM_DBG5(("smsatWrite6: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
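/*
 * Note on the "temporary fix" above: in WRITE(6) a transfer length of 0
 * means 256 blocks, while a 28-bit ATA sector count of 0xFF transfers
 * only 255 sectors, so this cap under-transfers by one; case 1 below
 * shares the cap, whereas the EXT and FPDMA paths encode the full 0x100.
 */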
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 1 */
/* WRITE SECTORS for easier implementation */
SM_DBG5(("smsatWrite6: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT only */
SM_DBG5(("smsatWrite6: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWrite6: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
/* sanity check */
SM_DBG5(("smsatWrite6: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG5(("smsatWrite6: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->h.features = 0; /* FIS sector count (7:0) */
fis->d.featuresExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->h.features = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL FORCEINLINE bit32
smsatWrite10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smDeviceData_t *pSatDevData = satIOContext->pSatDevData;
smScsiRspSense_t *pSense = satIOContext->pSense;
smIniScsiCmnd_t *scsiCmnd = &smScsiRequest->scsiCmnd;
agsaFisRegHostToDevice_t *fis = satIOContext->pFis;
bit32 status = SM_RC_FAILURE;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
bit8 LBA[8];
bit8 TL[8];
SM_DBG2(("smsatWrite10: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite10: return FUA_NV!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/*
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
*/
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = 0;
TL[5] = 0;
TL[6] = scsiCmnd->cdb[7];
TL[7] = scsiCmnd->cdb[8]; /* LSB */
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (24)) + (scsiCmnd->cdb[3] << (16))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
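/*
 * READ/WRITE(10) carry a 32-bit big-endian LBA in cdb[2..5] and a 16-bit
 * transfer length in cdb[7..8]; the shifts above assemble them directly,
 * and the debug lines below cross-check against the helper functions.
 */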
SM_DBG5(("smsatWrite10: lba %d computed lba %d\n", lba, smsatComputeCDB10LBA(satIOContext)));
SM_DBG5(("smsatWrite10: tl %d computed tl %d\n", tl, smsatComputeCDB10TL(satIOContext)));
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWrite10: return LBA out of range, not EXT!!!\n"));
SM_DBG1(("smsatWrite10: cdb 0x%x 0x%x 0x%x 0x%x!!!\n",scsiCmnd->cdb[2], scsiCmnd->cdb[3],
scsiCmnd->cdb[4], scsiCmnd->cdb[5]));
SM_DBG1(("smsatWrite10: lba 0x%x SAT_TR_LBA_LIMIT 0x%x!!!\n", lba, SAT_TR_LBA_LIMIT));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWrite10: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatWrite10: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatWrite10: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
/* case 3 and 4 */
else if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatWrite10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWrite10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
else /* case 1 and 2 */
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* the transfer length may not fit in a single command; handled by chaining below */
SM_DBG5(("smsatWrite10: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* the transfer length may not fit in a single command; handled by chaining below */
SM_DBG5(("smsatWrite10: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
// smhexdump("satWrite10 final fis", (bit8 *)fis, sizeof(agsaFisRegHostToDevice_t));
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0x100 in case not EXT (this write path uses the full 256-sector count)
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0x100);
}
else
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatWrite10: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
}
else
{
SM_DBG2(("smsatWrite10: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0x0;
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
NON_BIT48_ADDRESS_TL_LIMIT*SATA_SECTOR_SIZE, /* 0x100 * 0x200 */
(satIOContext->OrgTL)*SATA_SECTOR_SIZE,
agTRUE);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
BIT48_ADDRESS_TL_LIMIT*SATA_SECTOR_SIZE, /* 0xFFFF * 0x200 */
(satIOContext->OrgTL)*SATA_SECTOR_SIZE,
agTRUE);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
BIT48_ADDRESS_TL_LIMIT*SATA_SECTOR_SIZE, /* 0xFFFF * 0x200 */
(satIOContext->OrgTL)*SATA_SECTOR_SIZE,
agTRUE);
}
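/*
 * smsatSplitSGL() presumably pre-partitions the scatter/gather list so
 * that each chained command covers at most the per-command byte limit
 * (sector limit * SATA_SECTOR_SIZE); the chained completion callback set
 * below then re-issues commands until OrgTL sectors have been written.
 */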
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatWrite12(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWrite12: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite12: return FUA_NV!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite12: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0; /* MSB */
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[6];
TL[5] = scsiCmnd->cdb[7];
TL[6] = scsiCmnd->cdb[8];
TL[7] = scsiCmnd->cdb[9]; /* LSB */
lba = smsatComputeCDB12LBA(satIOContext);
tl = smsatComputeCDB12TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
/*smEnqueueIO(smRoot, satIOContext);*/
if (AllChk)
{
SM_DBG1(("smsatWrite12: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWrite12: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* if the transfer length does not fit in one ATA command, it is split across multiple commands */
SM_DBG5(("smsatWrite12: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* if the transfer length does not fit in one ATA command, it is split across multiple commands */
SM_DBG5(("smsatWrite12: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatWrite12: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWrite12: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG5(("smsatWrite12: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatWrite12: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE12_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatWrite12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
}
else
{
SM_DBG1(("smsatWrite12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
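/*
 * Every command-build case above starts from the same register host-to-
 * device FIS header. A sketch of that shared setup, factored into a
 * helper purely for illustration (the driver writes the fields inline):
 */
static void smsatSetFisHeaderSketch(agsaFisRegHostToDevice_t *fis, bit8 command)
{
  fis->h.fisType  = 0x27; /* register host-to-device FIS */
  fis->h.c_pmPort = 0x80; /* C bit set: this is a command, not control */
  fis->h.command  = command;
  fis->h.features = 0;    /* reserved for non-NCQ commands */
}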
osGLOBAL bit32
smsatWrite16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWrite16: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite16: return FUA_NV!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWrite16: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
lba = smsatComputeCDB16LBA(satIOContext);
tl = smsatComputeCDB16TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWrite16: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWrite16: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* if the transfer length does not fit in one ATA command, it is split across multiple commands */
SM_DBG5(("smsatWrite16: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* if the transfer length does not fit in one ATA command, it is split across multiple commands */
SM_DBG5(("smsatWrite16: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatWrite16: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWrite16: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG5(("smsatWrite16: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatWrite16: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE16_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatWrite16: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedDataIOCB;
}
else
{
SM_DBG1(("smsatWrite16: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
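/*
 * The "case 1..5" labels used throughout these routines appear to follow
 * the SAT specification mapping referenced above (Table 34, 9.1, p 46):
 * case 1 = PIO READ/WRITE SECTORS, case 2 = READ/WRITE DMA, case 3 =
 * READ/WRITE DMA EXT, case 4 = PIO SECTORS EXT, case 5 = NCQ (FPDMA
 * QUEUED).
 */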
osGLOBAL bit32
smsatVerify10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
For simple implementation,
no byte comparison supported as of 4/5/06
*/
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG5(("smsatVerify10: start\n"));
/* checking BYTCHK */
if (scsiCmnd->cdb[1] & SCSI_VERIFY_BYTCHK_MASK)
{
/*
should do the byte check
but not supported in this version
*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatVerify10: no byte checking!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatVerify10: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = 0;
TL[5] = 0;
TL[6] = scsiCmnd->cdb[7];
TL[7] = scsiCmnd->cdb[8]; /* LSB */
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatVerify10: return LBA out of range, not EXT!!!\n"));
SM_DBG1(("smsatVerify10: cdb 0x%x 0x%x 0x%x 0x%x!!!\n",scsiCmnd->cdb[2], scsiCmnd->cdb[3],
scsiCmnd->cdb[4], scsiCmnd->cdb[5]));
SM_DBG1(("smsatVerify10: lba 0x%x SAT_TR_LBA_LIMIT 0x%x!!!\n", lba, SAT_TR_LBA_LIMIT));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatVerify10: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
SM_DBG5(("smsatVerify10: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
SM_DBG5(("smsatVerify10: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
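/*
 * Note on the two FIS layouts above (standard ATA register mapping):
 * the 48-bit (EXT) command places the upper LBA bytes in the
 * lbaLowExp/lbaMidExp/lbaHighExp registers, while the 28-bit command
 * carries LBA bits 27:24 in the low nibble of the device register.
 * For example, lba = 0x0ABCDEF0 gives device = 0x4A, lbaHigh = 0xBC,
 * lbaMid = 0xDE, lbaLow = 0xF0.
 */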
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_VERIFY_SECTORS_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
SM_DBG1(("smsatVerify10: error case 1!!!\n"));
LoopNum = 1;
}
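/*
 * Sketch of the chaining math above (an assumption; the real
 * smsatComputeLoopNum is defined elsewhere in this driver): it
 * returns how many chained ATA commands are needed for a transfer
 * length tl when each command can carry at most 'limit' sectors:
 *
 *   bit32 n = tl / limit;
 *   if ((tl % limit) != 0 || n == 0)
 *   {
 *     n++;          // partial final command, or tl <= limit
 *   }
 *   return n;
 *
 * e.g. tl = 0x200 with limit 0xFF yields 3 chained commands
 * (255 + 255 + 2 sectors).
 */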
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatVerify10: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedVerifyCB;
}
else
{
SM_DBG1(("smsatVerify10: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
SM_DBG1(("smsatVerify10: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatVerify12(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
For simple implementation,
no byte comparison supported as of 4/5/06
*/
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG5(("smsatVerify12: start\n"));
/* checking BYTCHK */
if (scsiCmnd->cdb[1] & SCSI_VERIFY_BYTCHK_MASK)
{
/*
should do the byte check
but not supported in this version
*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatVerify12: no byte checking!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatVerify12: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0; /* MSB */
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[6];
TL[5] = scsiCmnd->cdb[7];
TL[6] = scsiCmnd->cdb[8];
TL[7] = scsiCmnd->cdb[9]; /* LSB */
lba = smsatComputeCDB12LBA(satIOContext);
tl = smsatComputeCDB12TL(satIOContext);
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatVerify12: return LBA out of range, not EXT!!!\n"));
SM_DBG1(("smsatVerify12: cdb 0x%x 0x%x 0x%x 0x%x!!!\n",scsiCmnd->cdb[2], scsiCmnd->cdb[3],
scsiCmnd->cdb[4], scsiCmnd->cdb[5]));
SM_DBG1(("smsatVerify12: lba 0x%x SAT_TR_LBA_LIMIT 0x%x!!!\n", lba, SAT_TR_LBA_LIMIT));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatVerify12: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
SM_DBG5(("smsatVerify12: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
SM_DBG5(("smsatVerify12: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_VERIFY_SECTORS_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
SM_DBG1(("smsatVerify12: error case 1!!!\n"));
LoopNum = 1;
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatVerify12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedVerifyCB;
}
else
{
SM_DBG1(("smsatVerify12: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
SM_DBG1(("smsatVerify12: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatVerify16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
For simple implementation,
no byte comparison supported as of 4/5/06
*/
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG5(("smsatVerify16: start\n"));
/* checking BYTCHK */
if (scsiCmnd->cdb[1] & SCSI_VERIFY_BYTCHK_MASK)
{
/*
should do the byte check
but not supported in this version
*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatVerify16: no byte checking!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatVerify16: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
lba = smsatComputeCDB16LBA(satIOContext);
tl = smsatComputeCDB16TL(satIOContext);
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatVerify16: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatVerify16: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
SM_DBG5(("smsatVerify16: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
SM_DBG5(("smsatVerify16: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_VERIFY_SECTORS_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
SM_DBG1(("smsatVerify16: error case 1!!!\n"));
LoopNum = 1;
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatVerify16: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedVerifyCB;
}
else
{
SM_DBG1(("smsatVerify16: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
SM_DBG1(("smsatVerify16: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatTestUnitReady(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatTestUnitReady: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatTestUnitReady: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* SAT revision 8, 8.11.2, p42*/
if (pSatDevData->satStopState == agTRUE)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NOT_READY,
0,
SCSI_SNSCODE_LOGICAL_UNIT_NOT_READY_INITIALIZING_COMMAND_REQUIRED,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatTestUnitReady: stop state!!!\n"));
return SM_RC_SUCCESS;
}
/*
* Check if format is in progress
*/
if (pSatDevData->satDriveState == SAT_DEV_STATE_FORMAT_IN_PROGRESS)
{
SM_DBG1(("smsatTestUnitReady: FORMAT_IN_PROGRESS!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NOT_READY,
0,
SCSI_SNSCODE_LOGICAL_UNIT_NOT_READY_FORMAT_IN_PROGRESS,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatTestUnitReady: format in progress!!!\n"));
return SM_RC_SUCCESS;
}
/*
check previously issued ATA command
*/
if (pSatDevData->satPendingIO != 0)
{
if (pSatDevData->satDeviceFaultState == agTRUE)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_LOGICAL_UNIT_FAILURE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatTestUnitReady: previous command ended in error!!!\n"));
return SM_RC_SUCCESS;
}
}
/*
check removable media feature set
*/
if(pSatDevData->satRemovableMedia && pSatDevData->satRemovableMediaEnabled)
{
SM_DBG5(("smsatTestUnitReady: sending get media status cmnd\n"));
/* send GET MEDIA STATUS command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_GET_MEDIA_STATUS; /* 0xDA */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatTestUnitReadyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
/*
step 6) in SAT p42
send ATA CHECK POWER MODE
*/
SM_DBG5(("smsatTestUnitReady: sending check power mode cmnd\n"));
status = smsatTestUnitReady_1( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatTestUnitReady_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
sends SAT_CHECK_POWER_MODE as a part of TEST UNIT READY
internally generated - no directly corresponding SCSI command
called from satIOCompleted as a part of smsatTestUnitReady(); SAT revision 8, 8.11.2, p42
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG5(("smsatTestUnitReady_1: start\n"));
/*
* Send the ATA CHECK POWER MODE command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_CHECK_POWER_MODE; /* 0xE5 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatTestUnitReadyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatTestUnitReady_1: return\n"));
return status;
}
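/*
 * CHECK POWER MODE returns the power state in the sector count field
 * of the D2H register FIS (0x00 standby, 0x80 idle, 0xFF active or
 * idle, per ATA8-ACS); smsatTestUnitReadyCB translates that result
 * into the final TEST UNIT READY status.
 */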
osGLOBAL bit32
smsatInquiry(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
CMDDT bit is obsolete in SPC-3 and this is assumed in SAT revision 8
*/
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
smDeviceData_t *pSatDevData;
bit32 status;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
SM_DBG5(("smsatInquiry: start\n"));
SM_DBG5(("smsatInquiry: pSatDevData did %d\n", pSatDevData->id));
//smhexdump("smsatInquiry", (bit8 *)scsiCmnd->cdb, 6);
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatInquiry: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking EVPD and Allocation Length */
/* SPC-4 spec 6.4 p141 */
/* EVPD bit == 0 && PAGE CODE != 0 */
if ( !(scsiCmnd->cdb[1] & SCSI_EVPD_MASK) &&
(scsiCmnd->cdb[2] != 0)
)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatInquiry: return EVPD and PAGE CODE!!!\n"));
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatInquiry: allocation length 0x%x %d\n", ((scsiCmnd->cdb[3]) << 8) + scsiCmnd->cdb[4], ((scsiCmnd->cdb[3]) << 8) + scsiCmnd->cdb[4]));
/* convert OS IO to TD internal IO */
if ( pSatDevData->IDDeviceValid == agFALSE)
{
status = smsatStartIDDev(
smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext
);
SM_DBG6(("smsatInquiry: end status %d\n", status));
return status;
}
else
{
SM_DBG6(("smsatInquiry: calling satInquiryIntCB\n"));
smsatInquiryIntCB(
smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext
);
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_SUCCESS;
}
}
osGLOBAL bit32
smsatStartIDDev(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smSatInternalIo_t *satIntIo = agNULL;
smDeviceData_t *satDevData = agNULL;
smIORequestBody_t *smIORequestBody;
smSatIOContext_t *satNewIOContext;
bit32 status;
SM_DBG5(("smsatStartIDDev: start\n"));
satDevData = satIOContext->pSatDevData;
SM_DBG6(("smsatStartIDDev: before alloc\n"));
/* allocate identify device command */
satIntIo = smsatAllocIntIoResource( smRoot,
smIORequest,
satDevData,
sizeof(agsaSATAIdentifyData_t), /* 512; size of identify device data */
satIntIo);
SM_DBG6(("smsatStartIDDev: before after\n"));
if (satIntIo == agNULL)
{
SM_DBG1(("smsatStartIDDev: can't alloacate!!!\n"));
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
satIntIo->satOrgSmIORequest = smIORequest; /* changed */
smIORequestBody = satIntIo->satIntRequestBody;
satNewIOContext = &(smIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satIntIo->satIntSmScsiXchg.scsiCmnd);
satNewIOContext->pSense = &(smIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pSmSenseData = &(smIORequestBody->transport.SATA.smSenseData);
satNewIOContext->smRequestBody = satIntIo->satIntRequestBody; /* key fix */
satNewIOContext->interruptContext = tiInterruptContext;
satNewIOContext->satIntIoContext = satIntIo;
satNewIOContext->psmDeviceHandle = agNULL;
satNewIOContext->satOrgIOContext = satIOContext; /* changed */
/* this is valid only for TD layer generated (not triggered by OS at all) IO */
satNewIOContext->smScsiXchg = &(satIntIo->satIntSmScsiXchg);
SM_DBG6(("smsatStartIDDev: OS satIOContext %p \n", satIOContext));
SM_DBG6(("smsatStartIDDev: TD satNewIOContext %p \n", satNewIOContext));
SM_DBG6(("smsatStartIDDev: OS tiScsiXchg %p \n", satIOContext->smScsiXchg));
SM_DBG6(("smsatStartIDDev: TD tiScsiXchg %p \n", satNewIOContext->smScsiXchg));
SM_DBG1(("smsatStartIDDev: satNewIOContext %p smIORequestBody %p!!!\n", satNewIOContext, smIORequestBody));
status = smsatSendIDDev( smRoot,
&satIntIo->satIntSmIORequest, /* New smIORequest */
smDeviceHandle,
satNewIOContext->smScsiXchg, /* New smScsiInitiatorRequest_t *smScsiRequest */
satNewIOContext);
if (status != SM_RC_SUCCESS)
{
SM_DBG1(("smsatStartIDDev: failed in sending!!!\n"));
smsatFreeIntIoResource( smRoot,
satDevData,
satIntIo);
/*smEnqueueIO(smRoot, satIOContext);*/
return SM_RC_FAILURE;
}
SM_DBG6(("smsatStartIDDev: end\n"));
return status;
}
osGLOBAL bit32
smsatSendIDDev(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
#ifdef SM_INTERNAL_DEBUG
smIORequestBody_t *smIORequestBody;
smSatInternalIo_t *satIntIoContext;
#endif
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG6(("smsatSendIDDev: start\n"));
SM_DBG6(("smsatSendIDDev: did %d\n", pSatDevData->id));
#ifdef SM_INTERNAL_DEBUG
satIntIoContext = satIOContext->satIntIoContext;
smIORequestBody = satIntIoContext->satIntRequestBody;
#endif
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
fis->h.command = SAT_IDENTIFY_PACKET_DEVICE; /* 0xA1 */
else
fis->h.command = SAT_IDENTIFY_DEVICE; /* 0xEC */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
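/* IDENTIFY (PACKET) DEVICE is a PIO data-in command that returns one
 * 512-byte block of identify data, hence the PIO_READ protocol here
 * and the sizeof(agsaSATAIdentifyData_t) allocation in
 * smsatStartIDDev().
 */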
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatInquiryCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef SM_INTERNAL_DEBUG
smhexdump("smsatSendIDDev", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
smhexdump("smsatSendIDDev LL", (bit8 *)&(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG6(("smsatSendIDDev: end status %d\n", status));
return status;
}
osGLOBAL bit32
smsatRequestSense(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
SAT Rev 8, p38, Table 25
sending SMART RETURN STATUS
Checking the SMART Threshold Exceeded Condition is done in smsatRequestSenseCB()
Only fixed format sense data is supported. In other words, we don't
support the DESC bit being set in REQUEST SENSE
*/
bit32 status;
bit32 agRequestType;
smScsiRspSense_t *pSense;
smDeviceData_t *pSatDevData;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
smIORequestBody_t *smIORequestBody;
smSatInternalIo_t *satIntIo = agNULL;
smSatIOContext_t *satIOContext2;
bit8 *pDataBuffer = agNULL;
bit32 allocationLen = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pDataBuffer = (bit8 *) smScsiRequest->sglVirtualAddr;
allocationLen = scsiCmnd->cdb[4];
allocationLen = MIN(allocationLen, scsiCmnd->expDataLength);
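/*
 * MIN is assumed to be the usual two-argument minimum macro defined
 * elsewhere in this driver, i.e. something like
 *
 *   #define MIN(a,b) (((a) < (b)) ? (a) : (b))
 *
 * so allocationLen can never exceed the buffer length the OS layer
 * actually provided (expDataLength).
 */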
SM_DBG5(("smsatRequestSense: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
sm_memcpy(pDataBuffer, pSense, MIN(SENSE_DATA_LENGTH, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRequestSense: return control!!!\n"));
return SM_RC_SUCCESS;
}
/*
Only fixed format sense data is supported. In other words, we don't
support the DESC bit being set in REQUEST SENSE
*/
if ( scsiCmnd->cdb[1] & ATA_REMOVABLE_MEDIA_DEVICE_MASK )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
sm_memcpy(pDataBuffer, pSense, MIN(SENSE_DATA_LENGTH, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatRequestSense: DESC bit is set, which we don't support!!!\n"));
return SM_RC_SUCCESS;
}
if (pSatDevData->satSMARTEnabled == agTRUE)
{
/* sends SMART RETURN STATUS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_RETURN_STATUS; /* FIS features */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
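/*
 * lbaMid = 0x4F and lbaHigh = 0xC2 above are the fixed signature
 * values that ATA requires in the LBA Mid/High registers for all
 * SMART subcommands; the drive rejects SMART commands without them.
 */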
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatRequestSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG4(("smsatRequestSense: if return, status %d\n", status));
return (status);
}
else
{
/* allocate iocontext for transmitting SAT_CHECK_POWER_MODE,
then call smsatRequestSense_1 */
SM_DBG4(("smsatRequestSense: before satIntIo %p\n", satIntIo));
/* allocate iocontext */
satIntIo = smsatAllocIntIoResource( smRoot,
smIORequest, /* original request */
pSatDevData,
smScsiRequest->scsiCmnd.expDataLength,
satIntIo);
SM_DBG4(("smsatRequestSense: after satIntIo %p\n", satIntIo));
if (satIntIo == agNULL)
{
/* failed to allocate an internal IO for SAT_CHECK_POWER_MODE */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_HARDWARE_IMPENDING_FAILURE,
satIOContext);
sm_memcpy(pDataBuffer, pSense, MIN(SENSE_DATA_LENGTH, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
SM_DBG1(("smsatRequestSense: else fail 1!!!\n"));
return SM_RC_SUCCESS;
} /* end of memory allocation failure */
/*
* Need to initialize all the fields within satIOContext except
* reqType and satCompleteCB which will be set depending on cmd.
*/
if (satIntIo == agNULL)
{
SM_DBG4(("smsatRequestSense: satIntIo is NULL\n"));
}
else
{
SM_DBG4(("smsatRequestSense: satIntIo is NOT NULL\n"));
}
/* tie this internal IO to the original request */
satIntIo->satOrgSmIORequest = smIORequest;
smIORequestBody = (smIORequestBody_t *)satIntIo->satIntRequestBody;
satIOContext2 = &(smIORequestBody->transport.SATA.satIOContext);
satIOContext2->pSatDevData = pSatDevData;
satIOContext2->pFis = &(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext2->pScsiCmnd = &(satIntIo->satIntSmScsiXchg.scsiCmnd);
satIOContext2->pSense = &(smIORequestBody->transport.SATA.sensePayload);
satIOContext2->pSmSenseData = &(smIORequestBody->transport.SATA.smSenseData);
satIOContext2->pSmSenseData->senseData = satIOContext2->pSense;
satIOContext2->smRequestBody = satIntIo->satIntRequestBody;
satIOContext2->interruptContext = satIOContext->interruptContext;
satIOContext2->satIntIoContext = satIntIo;
satIOContext2->psmDeviceHandle = smDeviceHandle;
satIOContext2->satOrgIOContext = satIOContext;
SM_DBG4(("smsatRequestSense: satIntIo->satIntSmScsiXchg.agSgl1.len %d\n", satIntIo->satIntSmScsiXchg.smSgl1.len));
SM_DBG4(("smsatRequestSense: satIntIo->satIntSmScsiXchg.agSgl1.upper %d\n", satIntIo->satIntSmScsiXchg.smSgl1.upper));
SM_DBG4(("smsatRequestSense: satIntIo->satIntSmScsiXchg.agSgl1.lower %d\n", satIntIo->satIntSmScsiXchg.smSgl1.lower));
SM_DBG4(("smsatRequestSense: satIntIo->satIntSmScsiXchg.agSgl1.type %d\n", satIntIo->satIntSmScsiXchg.smSgl1.type));
status = smsatRequestSense_1( smRoot,
&(satIntIo->satIntSmIORequest),
smDeviceHandle,
&(satIntIo->satIntSmScsiXchg),
satIOContext2);
if (status != SM_RC_SUCCESS)
{
smsatFreeIntIoResource( smRoot,
pSatDevData,
satIntIo);
/* failed while sending SAT_CHECK_POWER_MODE */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_HARDWARE_IMPENDING_FAILURE,
satIOContext);
sm_memcpy(pDataBuffer, pSense, MIN(SENSE_DATA_LENGTH, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
agNULL,
satIOContext->interruptContext );
SM_DBG1(("smsatRequestSense: else fail 2!!!\n"));
return SM_RC_SUCCESS;
}
SM_DBG4(("smsatRequestSense: else return success\n"));
return SM_RC_SUCCESS;
}
}
osGLOBAL bit32
smsatRequestSense_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
sends SAT_CHECK_POWER_MODE
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG5(("smsatRequestSense_1: start\n"));
/*
* Send the ATA CHECK POWER MODE command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_CHECK_POWER_MODE; /* 0xE5 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatRequestSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
SM_DBG4(("smsatRequestSense_1: smSgl1.len %d\n", smScsiRequest->smSgl1.len));
SM_DBG4(("smsatRequestSense_1: smSgl1.upper %d\n", smScsiRequest->smSgl1.upper));
SM_DBG4(("smsatRequestSense_1: smSgl1.lower %d\n", smScsiRequest->smSgl1.lower));
SM_DBG4(("smsatRequestSense_1: smSgl1.type %d\n", smScsiRequest->smSgl1.type));
// smhexdump("smsatRequestSense_1", (bit8 *)fis, sizeof(agsaFisRegHostToDevice_t));
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
osGLOBAL bit32
smsatModeSense6(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
bit32 allocationLen;
smIniScsiCmnd_t *scsiCmnd;
bit32 pageSupported;
bit8 page;
bit8 *pModeSense; /* Mode Sense data buffer */
smDeviceData_t *pSatDevData;
bit8 PC;
bit8 AllPages[MODE_SENSE6_RETURN_ALL_PAGES_LEN];
bit8 Control[MODE_SENSE6_CONTROL_PAGE_LEN];
bit8 RWErrorRecovery[MODE_SENSE6_READ_WRITE_ERROR_RECOVERY_PAGE_LEN];
bit8 Caching[MODE_SENSE6_CACHING_LEN];
bit8 InfoExceptionCtrl[MODE_SENSE6_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN];
bit8 lenRead = 0;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pModeSense = (bit8 *) smScsiRequest->sglVirtualAddr;
pSatDevData = satIOContext->pSatDevData;
//smhexdump("smsatModeSense6", (bit8 *)scsiCmnd->cdb, 6);
SM_DBG5(("smsatModeSense6: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSense6: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking PC(Page Control)
SAT revision 8, 8.5.3 p33 and 10.1.2, p66
*/
PC = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE6_PC_MASK);
if (PC != 0)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSense6: return due to PC value pc 0x%x!!!\n", PC >> 6));
return SM_RC_SUCCESS;
}
/* reading PAGE CODE */
page = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE6_PAGE_CODE_MASK);
SM_DBG5(("smsatModeSense6: page=0x%x\n", page));
allocationLen = scsiCmnd->cdb[4];
allocationLen = MIN(allocationLen, scsiCmnd->expDataLength);
/*
Based on page code value, returns a corresponding mode page
note: no support for subpage
*/
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES:
case MODESENSE_CONTROL_PAGE: /* control */
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
case MODESENSE_CACHING: /* caching */
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
pageSupported = agTRUE;
break;
case MODESENSE_VENDOR_SPECIFIC_PAGE: /* vendor specific */
default:
pageSupported = agFALSE;
break;
}
if (pageSupported == agFALSE)
{
SM_DBG1(("smsatModeSense6 *** ERROR *** not supported page 0x%x did %d!!!\n",
page, pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES:
lenRead = (bit8)MIN(allocationLen, MODE_SENSE6_RETURN_ALL_PAGES_LEN);
break;
case MODESENSE_CONTROL_PAGE: /* control */
lenRead = (bit8)MIN(allocationLen, MODE_SENSE6_CONTROL_PAGE_LEN);
break;
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
lenRead = (bit8)MIN(allocationLen, MODE_SENSE6_READ_WRITE_ERROR_RECOVERY_PAGE_LEN);
break;
case MODESENSE_CACHING: /* caching */
lenRead = (bit8)MIN(allocationLen, MODE_SENSE6_CACHING_LEN);
break;
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
lenRead = (bit8)MIN(allocationLen, MODE_SENSE6_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN);
break;
default:
SM_DBG1(("smsatModeSense6: default error page %d!!!\n", page));
break;
}
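/*
 * Each branch below builds the MODE SENSE(6) response in a local
 * buffer: a 4-byte mode parameter header (mode data length, medium
 * type, device-specific parameter, block descriptor length), an
 * 8-byte direct-access block descriptor, then the mode page(s), and
 * copies out at most lenRead bytes.
 */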
if (page == MODESENSE_RETURN_ALL_PAGES)
{
SM_DBG5(("smsatModeSense6: MODESENSE_RETURN_ALL_PAGES\n"));
AllPages[0] = (bit8)(lenRead - 1);
AllPages[1] = 0x00; /* default medium type (currently mounted medium type) */
AllPages[2] = 0x00; /* no write-protect, no support for DPO-FUA */
AllPages[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
AllPages[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
AllPages[5] = 0x00; /* unspecified */
AllPages[6] = 0x00; /* unspecified */
AllPages[7] = 0x00; /* unspecified */
/* reserved */
AllPages[8] = 0x00; /* reserved */
/* Block size */
AllPages[9] = 0x00;
AllPages[10] = 0x02; /* Block size is always 512 bytes */
AllPages[11] = 0x00;
/* MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE */
AllPages[12] = 0x01; /* page code */
AllPages[13] = 0x0A; /* page length */
AllPages[14] = 0x40; /* ARRE is set */
AllPages[15] = 0x00;
AllPages[16] = 0x00;
AllPages[17] = 0x00;
AllPages[18] = 0x00;
AllPages[19] = 0x00;
AllPages[20] = 0x00;
AllPages[21] = 0x00;
AllPages[22] = 0x00;
AllPages[23] = 0x00;
/* MODESENSE_CACHING */
AllPages[24] = 0x08; /* page code */
AllPages[25] = 0x12; /* page length */
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
AllPages[26] = 0x04;/* WCE bit is set */
}
else
{
AllPages[26] = 0x00;/* WCE bit is NOT set */
}
AllPages[27] = 0x00;
AllPages[28] = 0x00;
AllPages[29] = 0x00;
AllPages[30] = 0x00;
AllPages[31] = 0x00;
AllPages[32] = 0x00;
AllPages[33] = 0x00;
AllPages[34] = 0x00;
AllPages[35] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
AllPages[36] = 0x00;/* DRA bit is NOT set */
}
else
{
AllPages[36] = 0x20;/* DRA bit is set */
}
AllPages[37] = 0x00;
AllPages[38] = 0x00;
AllPages[39] = 0x00;
AllPages[40] = 0x00;
AllPages[41] = 0x00;
AllPages[42] = 0x00;
AllPages[43] = 0x00;
/* MODESENSE_CONTROL_PAGE */
AllPages[44] = 0x0A; /* page code */
AllPages[45] = 0x0A; /* page length */
AllPages[46] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
AllPages[47] = 0x12; /* Queue Algorithm modifier 1b and QErr 01b*/
}
else
{
AllPages[47] = 0x02; /* Queue Algorithm modifier 0b and QErr 01b */
}
AllPages[48] = 0x00;
AllPages[49] = 0x00;
AllPages[50] = 0x00; /* obsolete */
AllPages[51] = 0x00; /* obsolete */
AllPages[52] = 0xFF; /* Busy Timeout Period */
AllPages[53] = 0xFF; /* Busy Timeout Period */
AllPages[54] = 0x00; /* we don't support non-000b value for the self-test code */
AllPages[55] = 0x00; /* we don't support non-000b value for the self-test code */
/* MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE */
AllPages[56] = 0x1C; /* page code */
AllPages[57] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
AllPages[58] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
AllPages[58] = 0x08;/* DEXCPT bit is set */
}
AllPages[59] = 0x00; /* We don't support MRIE */
AllPages[60] = 0x00; /* Interval timer vendor-specific */
AllPages[61] = 0x00;
AllPages[62] = 0x00;
AllPages[63] = 0x00;
AllPages[64] = 0x00; /* REPORT-COUNT */
AllPages[65] = 0x00;
AllPages[66] = 0x00;
AllPages[67] = 0x00;
sm_memcpy(pModeSense, &AllPages, lenRead);
}
else if (page == MODESENSE_CONTROL_PAGE)
{
SM_DBG5(("smsatModeSense6: MODESENSE_CONTROL_PAGE\n"));
Control[0] = MODE_SENSE6_CONTROL_PAGE_LEN - 1;
Control[1] = 0x00; /* default medium type (currently mounted medium type) */
Control[2] = 0x00; /* no write-protect, no support for DPO-FUA */
Control[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
Control[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Control[5] = 0x00; /* unspecified */
Control[6] = 0x00; /* unspecified */
Control[7] = 0x00; /* unspecified */
/* reserved */
Control[8] = 0x00; /* reserved */
/* Block size */
Control[9] = 0x00;
Control[10] = 0x02; /* Block size is always 512 bytes */
Control[11] = 0x00;
/*
* Fill-up control mode page, SAT, Table 65
*/
Control[12] = 0x0A; /* page code */
Control[13] = 0x0A; /* page length */
Control[14] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
Control[15] = 0x12; /* Queue Algorithm modifier 1b and QErr 01b*/
}
else
{
Control[15] = 0x02; /* Queue Algorithm modifier 0b and QErr 01b */
}
Control[16] = 0x00;
Control[17] = 0x00;
Control[18] = 0x00; /* obsolete */
Control[19] = 0x00; /* obsolete */
Control[20] = 0xFF; /* Busy Timeout Period */
Control[21] = 0xFF; /* Busy Timeout Period */
Control[22] = 0x00; /* we don't support non-000b value for the self-test code */
Control[23] = 0x00; /* we don't support non-000b value for the self-test code */
sm_memcpy(pModeSense, &Control, lenRead);
}
else if (page == MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE)
{
SM_DBG5(("smsatModeSense6: MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE\n"));
RWErrorRecovery[0] = MODE_SENSE6_READ_WRITE_ERROR_RECOVERY_PAGE_LEN - 1;
RWErrorRecovery[1] = 0x00; /* default medium type (currently mounted medium type) */
RWErrorRecovery[2] = 0x00; /* no write-protect, no support for DPO-FUA */
RWErrorRecovery[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
RWErrorRecovery[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
RWErrorRecovery[5] = 0x00; /* unspecified */
RWErrorRecovery[6] = 0x00; /* unspecified */
RWErrorRecovery[7] = 0x00; /* unspecified */
/* reserved */
RWErrorRecovery[8] = 0x00; /* reserved */
/* Block size */
RWErrorRecovery[9] = 0x00;
RWErrorRecovery[10] = 0x02; /* Block size is always 512 bytes */
RWErrorRecovery[11] = 0x00;
/*
* Fill-up Read-Write Error Recovery mode page, SAT, Table 66
*/
RWErrorRecovery[12] = 0x01; /* page code */
RWErrorRecovery[13] = 0x0A; /* page length */
RWErrorRecovery[14] = 0x40; /* ARRE is set */
RWErrorRecovery[15] = 0x00;
RWErrorRecovery[16] = 0x00;
RWErrorRecovery[17] = 0x00;
RWErrorRecovery[18] = 0x00;
RWErrorRecovery[19] = 0x00;
RWErrorRecovery[20] = 0x00;
RWErrorRecovery[21] = 0x00;
RWErrorRecovery[22] = 0x00;
RWErrorRecovery[23] = 0x00;
sm_memcpy(pModeSense, &RWErrorRecovery, lenRead);
}
else if (page == MODESENSE_CACHING)
{
SM_DBG5(("smsatModeSense6: MODESENSE_CACHING\n"));
/* special case */
if (allocationLen == 4 && page == MODESENSE_CACHING)
{
SM_DBG5(("smsatModeSense6: linux 2.6.8.24 support\n"));
Caching[0] = 0x20 - 1; /* 32 - 1 */
Caching[1] = 0x00; /* default medium type (currently mounted medium type) */
Caching[2] = 0x00; /* no write-protect, no support for DPO-FUA */
Caching[3] = 0x08; /* block descriptor length */
sm_memcpy(pModeSense, &Caching, 4);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
Caching[0] = MODE_SENSE6_CACHING_LEN - 1;
Caching[1] = 0x00; /* default medium type (currently mounted medium type) */
Caching[2] = 0x00; /* no write-protect, no support for DPO-FUA */
Caching[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
Caching[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Caching[5] = 0x00; /* unspecified */
Caching[6] = 0x00; /* unspecified */
Caching[7] = 0x00; /* unspecified */
/* reserved */
Caching[8] = 0x00; /* reserved */
/* Block size */
Caching[9] = 0x00;
Caching[10] = 0x02; /* Block size is always 512 bytes */
Caching[11] = 0x00;
/*
* Fill-up Caching mode page, SAT, Table 67
*/
/* length 20 */
Caching[12] = 0x08; /* page code */
Caching[13] = 0x12; /* page length */
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
Caching[14] = 0x04;/* WCE bit is set */
}
else
{
Caching[14] = 0x00;/* WCE bit is NOT set */
}
Caching[15] = 0x00;
Caching[16] = 0x00;
Caching[17] = 0x00;
Caching[18] = 0x00;
Caching[19] = 0x00;
Caching[20] = 0x00;
Caching[21] = 0x00;
Caching[22] = 0x00;
Caching[23] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
Caching[24] = 0x00;/* DRA bit is NOT set */
}
else
{
Caching[24] = 0x20;/* DRA bit is set */
}
Caching[25] = 0x00;
Caching[26] = 0x00;
Caching[27] = 0x00;
Caching[28] = 0x00;
Caching[29] = 0x00;
Caching[30] = 0x00;
Caching[31] = 0x00;
sm_memcpy(pModeSense, &Caching, lenRead);
}
else if (page == MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE)
{
SM_DBG5(("smsatModeSense6: MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE\n"));
InfoExceptionCtrl[0] = MODE_SENSE6_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN - 1;
InfoExceptionCtrl[1] = 0x00; /* default medium type (currently mounted medium type) */
InfoExceptionCtrl[2] = 0x00; /* no write-protect, no support for DPO-FUA */
InfoExceptionCtrl[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
InfoExceptionCtrl[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
InfoExceptionCtrl[5] = 0x00; /* unspecified */
InfoExceptionCtrl[6] = 0x00; /* unspecified */
InfoExceptionCtrl[7] = 0x00; /* unspecified */
/* reserved */
InfoExceptionCtrl[8] = 0x00; /* reserved */
/* Block size */
InfoExceptionCtrl[9] = 0x00;
InfoExceptionCtrl[10] = 0x02; /* Block size is always 512 bytes */
InfoExceptionCtrl[11] = 0x00;
/*
* Fill-up informational-exceptions control mode page, SAT, Table 68
*/
InfoExceptionCtrl[12] = 0x1C; /* page code */
InfoExceptionCtrl[13] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
InfoExceptionCtrl[14] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
InfoExceptionCtrl[14] = 0x08;/* DEXCPT bit is set */
}
InfoExceptionCtrl[15] = 0x00; /* We don't support MRIE */
InfoExceptionCtrl[16] = 0x00; /* Interval timer vendor-specific */
InfoExceptionCtrl[17] = 0x00;
InfoExceptionCtrl[18] = 0x00;
InfoExceptionCtrl[19] = 0x00;
InfoExceptionCtrl[20] = 0x00; /* REPORT-COUNT */
InfoExceptionCtrl[21] = 0x00;
InfoExceptionCtrl[22] = 0x00;
InfoExceptionCtrl[23] = 0x00;
sm_memcpy(pModeSense, &InfoExceptionCtrl, lenRead);
}
else
{
/* Error */
SM_DBG1(("smsatModeSense6: Error page %d!!!\n", page));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/* there can be only underrun not overrun in error case */
if (allocationLen > lenRead)
{
SM_DBG6(("smsatModeSense6 reporting underrun lenRead=0x%x allocationLen=0x%x\n", lenRead, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOUnderRun,
allocationLen - lenRead,
agNULL,
satIOContext->interruptContext );
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return SM_RC_SUCCESS;
}
osGLOBAL bit32
smsatModeSense10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
bit32 allocationLen;
smIniScsiCmnd_t *scsiCmnd;
bit32 pageSupported;
bit8 page;
bit8 *pModeSense; /* Mode Sense data buffer */
smDeviceData_t *pSatDevData;
bit8 PC; /* page control */
bit8 LLBAA; /* Long LBA Accepted */
bit32 index;
bit8 AllPages[MODE_SENSE10_RETURN_ALL_PAGES_LLBAA_LEN];
bit8 Control[MODE_SENSE10_CONTROL_PAGE_LLBAA_LEN];
bit8 RWErrorRecovery[MODE_SENSE10_READ_WRITE_ERROR_RECOVERY_PAGE_LLBAA_LEN];
bit8 Caching[MODE_SENSE10_CACHING_LLBAA_LEN];
bit8 InfoExceptionCtrl[MODE_SENSE10_INFORMATION_EXCEPTION_CONTROL_PAGE_LLBAA_LEN];
bit8 lenRead = 0;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pModeSense = (bit8 *) smScsiRequest->sglVirtualAddr;
pSatDevData = satIOContext->pSatDevData;
SM_DBG5(("smsatModeSense10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSense10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking PC(Page Control)
SAT revision 8, 8.5.3 p33 and 10.1.2, p66
*/
PC = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE10_PC_MASK);
if (PC != 0)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSense10: return due to PC value pc 0x%x!!!\n", PC));
return SM_RC_SUCCESS;
}
/* finding LLBAA bit */
LLBAA = (bit8)((scsiCmnd->cdb[1]) & SCSI_MODE_SENSE10_LLBAA_MASK);
/* reading PAGE CODE */
page = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE10_PAGE_CODE_MASK);
SM_DBG5(("smsatModeSense10: page=0x%x, did %d\n", page, pSatDevData->id));
allocationLen = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
allocationLen = MIN(allocationLen, scsiCmnd->expDataLength);
/*
Based on page code value, returns a corresponding mode page
note: no support for subpage
*/
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES: /* return all pages */
case MODESENSE_CONTROL_PAGE: /* control */
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
case MODESENSE_CACHING: /* caching */
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
pageSupported = agTRUE;
break;
case MODESENSE_VENDOR_SPECIFIC_PAGE: /* vendor specific */
default:
pageSupported = agFALSE;
break;
}
if (pageSupported == agFALSE)
{
SM_DBG1(("smsatModeSense10 *** ERROR *** not supported page 0x%x did %d!!!\n", page, pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES:
if (LLBAA)
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_RETURN_ALL_PAGES_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_RETURN_ALL_PAGES_LEN);
}
break;
case MODESENSE_CONTROL_PAGE: /* control */
if (LLBAA)
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_CONTROL_PAGE_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_CONTROL_PAGE_LEN);
}
break;
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
if (LLBAA)
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_READ_WRITE_ERROR_RECOVERY_PAGE_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_READ_WRITE_ERROR_RECOVERY_PAGE_LEN);
}
break;
case MODESENSE_CACHING: /* caching */
if (LLBAA)
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_CACHING_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_CACHING_LEN);
}
break;
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
if (LLBAA)
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_INFORMATION_EXCEPTION_CONTROL_PAGE_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(allocationLen, MODE_SENSE10_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN);
}
break;
default:
SM_DBG1(("smsatModeSense10: default error page %d!!!\n", page));
break;
}
if (page == MODESENSE_RETURN_ALL_PAGES)
{
SM_DBG5(("smsatModeSense10: MODESENSE_RETURN_ALL_PAGES\n"));
AllPages[0] = 0;
AllPages[1] = (bit8)(lenRead - 2);
AllPages[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
AllPages[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
AllPages[4] = 0x00; /* reserved and LONGLBA */
AllPages[4] = (bit8)(AllPages[4] | 0x1); /* LONGLBA is set */
}
else
{
AllPages[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
AllPages[5] = 0x00; /* reserved */
AllPages[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
AllPages[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
AllPages[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
AllPages[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
AllPages[9] = 0x00; /* unspecified */
AllPages[10] = 0x00; /* unspecified */
AllPages[11] = 0x00; /* unspecified */
AllPages[12] = 0x00; /* unspecified */
AllPages[13] = 0x00; /* unspecified */
AllPages[14] = 0x00; /* unspecified */
AllPages[15] = 0x00; /* unspecified */
/* reserved */
AllPages[16] = 0x00; /* reserved */
AllPages[17] = 0x00; /* reserved */
AllPages[18] = 0x00; /* reserved */
AllPages[19] = 0x00; /* reserved */
/* Block size */
AllPages[20] = 0x00;
AllPages[21] = 0x00;
AllPages[22] = 0x02; /* Block size is always 512 bytes */
AllPages[23] = 0x00;
}
else
{
/* density code */
AllPages[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
AllPages[9] = 0x00; /* unspecified */
AllPages[10] = 0x00; /* unspecified */
AllPages[11] = 0x00; /* unspecified */
/* reserved */
AllPages[12] = 0x00; /* reserved */
/* Block size */
AllPages[13] = 0x00;
AllPages[14] = 0x02; /* Block size is always 512 bytes */
AllPages[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/* MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE */
AllPages[index+0] = 0x01; /* page code */
AllPages[index+1] = 0x0A; /* page length */
AllPages[index+2] = 0x40; /* ARRE is set */
AllPages[index+3] = 0x00;
AllPages[index+4] = 0x00;
AllPages[index+5] = 0x00;
AllPages[index+6] = 0x00;
AllPages[index+7] = 0x00;
AllPages[index+8] = 0x00;
AllPages[index+9] = 0x00;
AllPages[index+10] = 0x00;
AllPages[index+11] = 0x00;
/* MODESENSE_CACHING */
/*
* Fill-up Caching mode page, SAT, Table 67
*/
/* length 20 */
AllPages[index+12] = 0x08; /* page code */
AllPages[index+13] = 0x12; /* page length */
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
AllPages[index+14] = 0x04;/* WCE bit is set */
}
else
{
AllPages[index+14] = 0x00;/* WCE bit is NOT set */
}
AllPages[index+15] = 0x00;
AllPages[index+16] = 0x00;
AllPages[index+17] = 0x00;
AllPages[index+18] = 0x00;
AllPages[index+19] = 0x00;
AllPages[index+20] = 0x00;
AllPages[index+21] = 0x00;
AllPages[index+22] = 0x00;
AllPages[index+23] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
AllPages[index+24] = 0x00;/* DRA bit is NOT set */
}
else
{
AllPages[index+24] = 0x20;/* DRA bit is set */
}
AllPages[index+25] = 0x00;
AllPages[index+26] = 0x00;
AllPages[index+27] = 0x00;
AllPages[index+28] = 0x00;
AllPages[index+29] = 0x00;
AllPages[index+30] = 0x00;
AllPages[index+31] = 0x00;
/* MODESENSE_CONTROL_PAGE */
/*
* Fill-up control mode page, SAT, Table 65
*/
AllPages[index+32] = 0x0A; /* page code */
AllPages[index+33] = 0x0A; /* page length */
AllPages[index+34] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
AllPages[index+35] = 0x12; /* Queue Algorithm modifier 1b and QErr 01b */
}
else
{
AllPages[index+35] = 0x02; /* Queue Algorithm modifier 0b and QErr 01b */
}
AllPages[index+36] = 0x00;
AllPages[index+37] = 0x00;
AllPages[index+38] = 0x00; /* obsolete */
AllPages[index+39] = 0x00; /* obsolete */
AllPages[index+40] = 0xFF; /* Busy Timeout Period */
AllPages[index+41] = 0xFF; /* Busy Timeout Period */
AllPages[index+42] = 0x00; /* we don't support non-000b value for the self-test code */
AllPages[index+43] = 0x00; /* we don't support non-000b value for the self-test code */
/* MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE */
/*
* Fill-up informational-exceptions control mode page, SAT, Table 68
*/
AllPages[index+44] = 0x1C; /* page code */
AllPages[index+45] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
AllPages[index+46] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
AllPages[index+46] = 0x08;/* DEXCPT bit is set */
}
AllPages[index+47] = 0x00; /* We don't support MRIE */
AllPages[index+48] = 0x00; /* Interval timer vendor-specific */
AllPages[index+49] = 0x00;
AllPages[index+50] = 0x00;
AllPages[index+51] = 0x00;
AllPages[index+52] = 0x00; /* REPORT-COUNT */
AllPages[index+53] = 0x00;
AllPages[index+54] = 0x00;
AllPages[index+55] = 0x00;
sm_memcpy(pModeSense, &AllPages, lenRead);
}
else if (page == MODESENSE_CONTROL_PAGE)
{
SM_DBG5(("smsatModeSense10: MODESENSE_CONTROL_PAGE\n"));
Control[0] = 0;
Control[1] = (bit8)(lenRead - 2);
Control[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
Control[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
Control[4] = 0x00; /* reserved and LONGLBA */
Control[4] = (bit8)(Control[4] | 0x1); /* LONGLBA is set */
}
else
{
Control[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
Control[5] = 0x00; /* reserved */
Control[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
Control[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
Control[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
Control[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Control[9] = 0x00; /* unspecified */
Control[10] = 0x00; /* unspecified */
Control[11] = 0x00; /* unspecified */
Control[12] = 0x00; /* unspecified */
Control[13] = 0x00; /* unspecified */
Control[14] = 0x00; /* unspecified */
Control[15] = 0x00; /* unspecified */
/* reserved */
Control[16] = 0x00; /* reserved */
Control[17] = 0x00; /* reserved */
Control[18] = 0x00; /* reserved */
Control[19] = 0x00; /* reserved */
/* Block size */
Control[20] = 0x00;
Control[21] = 0x00;
Control[22] = 0x02; /* Block size is always 512 bytes */
Control[23] = 0x00;
}
else
{
/* density code */
Control[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Control[9] = 0x00; /* unspecified */
Control[10] = 0x00; /* unspecified */
Control[11] = 0x00; /* unspecified */
/* reserved */
Control[12] = 0x00; /* reserved */
/* Block size */
Control[13] = 0x00;
Control[14] = 0x02; /* Block size is always 512 bytes */
Control[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up control mode page, SAT, Table 65
*/
Control[index+0] = 0x0A; /* page code */
Control[index+1] = 0x0A; /* page length */
Control[index+2] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
Control[index+3] = 0x12; /* Queue Algorithm modifier 1b and QErr 01b */
}
else
{
Control[index+3] = 0x02; /* Queue Algorithm modifier 0b and QErr 01b */
}
Control[index+4] = 0x00;
Control[index+5] = 0x00;
Control[index+6] = 0x00; /* obsolete */
Control[index+7] = 0x00; /* obsolete */
Control[index+8] = 0xFF; /* Busy Timeout Period */
Control[index+9] = 0xFF; /* Busy Timeout Period */
Control[index+10] = 0x00; /* we don't support non-000b value for the self-test code */
Control[index+11] = 0x00; /* we don't support non-000b value for the self-test code */
sm_memcpy(pModeSense, &Control, lenRead);
}
else if (page == MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE)
{
SM_DBG5(("smsatModeSense10: MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE\n"));
RWErrorRecovery[0] = 0;
RWErrorRecovery[1] = (bit8)(lenRead - 2);
RWErrorRecovery[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
RWErrorRecovery[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
RWErrorRecovery[4] = 0x00; /* reserved and LONGLBA */
RWErrorRecovery[4] = (bit8)(RWErrorRecovery[4] | 0x1); /* LONGLBA is set */
}
else
{
RWErrorRecovery[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
RWErrorRecovery[5] = 0x00; /* reserved */
RWErrorRecovery[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
RWErrorRecovery[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
RWErrorRecovery[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
RWErrorRecovery[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
RWErrorRecovery[9] = 0x00; /* unspecified */
RWErrorRecovery[10] = 0x00; /* unspecified */
RWErrorRecovery[11] = 0x00; /* unspecified */
RWErrorRecovery[12] = 0x00; /* unspecified */
RWErrorRecovery[13] = 0x00; /* unspecified */
RWErrorRecovery[14] = 0x00; /* unspecified */
RWErrorRecovery[15] = 0x00; /* unspecified */
/* reserved */
RWErrorRecovery[16] = 0x00; /* reserved */
RWErrorRecovery[17] = 0x00; /* reserved */
RWErrorRecovery[18] = 0x00; /* reserved */
RWErrorRecovery[19] = 0x00; /* reserved */
/* Block size */
RWErrorRecovery[20] = 0x00;
RWErrorRecovery[21] = 0x00;
RWErrorRecovery[22] = 0x02; /* Block size is always 512 bytes */
RWErrorRecovery[23] = 0x00;
}
else
{
/* density code */
RWErrorRecovery[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
RWErrorRecovery[9] = 0x00; /* unspecified */
RWErrorRecovery[10] = 0x00; /* unspecified */
RWErrorRecovery[11] = 0x00; /* unspecified */
/* reserved */
RWErrorRecovery[12] = 0x00; /* reserved */
/* Block size */
RWErrorRecovery[13] = 0x00;
RWErrorRecovery[14] = 0x02; /* Block size is always 512 bytes */
RWErrorRecovery[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up Read-Write Error Recovery mode page, SAT, Table 66
*/
RWErrorRecovery[index+0] = 0x01; /* page code */
RWErrorRecovery[index+1] = 0x0A; /* page length */
RWErrorRecovery[index+2] = 0x40; /* ARRE is set */
RWErrorRecovery[index+3] = 0x00;
RWErrorRecovery[index+4] = 0x00;
RWErrorRecovery[index+5] = 0x00;
RWErrorRecovery[index+6] = 0x00;
RWErrorRecovery[index+7] = 0x00;
RWErrorRecovery[index+8] = 0x00;
RWErrorRecovery[index+9] = 0x00;
RWErrorRecovery[index+10] = 0x00;
RWErrorRecovery[index+11] = 0x00;
sm_memcpy(pModeSense, &RWErrorRecovery, lenRead);
}
else if (page == MODESENSE_CACHING)
{
SM_DBG5(("smsatModeSense10: MODESENSE_CACHING\n"));
Caching[0] = 0;
Caching[1] = (bit8)(lenRead - 2);
Caching[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
Caching[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
Caching[4] = 0x00; /* reserved and LONGLBA */
Caching[4] = (bit8)(Caching[4] | 0x1); /* LONGLBA is set */
}
else
{
Caching[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
Caching[5] = 0x00; /* reserved */
Caching[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
Caching[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
Caching[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
Caching[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Caching[9] = 0x00; /* unspecified */
Caching[10] = 0x00; /* unspecified */
Caching[11] = 0x00; /* unspecified */
Caching[12] = 0x00; /* unspecified */
Caching[13] = 0x00; /* unspecified */
Caching[14] = 0x00; /* unspecified */
Caching[15] = 0x00; /* unspecified */
/* reserved */
Caching[16] = 0x00; /* reserved */
Caching[17] = 0x00; /* reserved */
Caching[18] = 0x00; /* reserved */
Caching[19] = 0x00; /* reserved */
/* Block size */
Caching[20] = 0x00;
Caching[21] = 0x00;
Caching[22] = 0x02; /* Block size is always 512 bytes */
Caching[23] = 0x00;
}
else
{
/* density code */
Caching[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Caching[9] = 0x00; /* unspecified */
Caching[10] = 0x00; /* unspecified */
Caching[11] = 0x00; /* unspecified */
/* reserved */
Caching[12] = 0x00; /* reserved */
/* Block size */
Caching[13] = 0x00;
Caching[14] = 0x02; /* Block size is always 512 bytes */
Caching[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up Caching mode page, SAT, Table 67
*/
/* length 20 */
Caching[index+0] = 0x08; /* page code */
Caching[index+1] = 0x12; /* page length */
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
Caching[index+2] = 0x04;/* WCE bit is set */
}
else
{
Caching[index+2] = 0x00;/* WCE bit is NOT set */
}
Caching[index+3] = 0x00;
Caching[index+4] = 0x00;
Caching[index+5] = 0x00;
Caching[index+6] = 0x00;
Caching[index+7] = 0x00;
Caching[index+8] = 0x00;
Caching[index+9] = 0x00;
Caching[index+10] = 0x00;
Caching[index+11] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
Caching[index+12] = 0x00;/* DRA bit is NOT set */
}
else
{
Caching[index+12] = 0x20;/* DRA bit is set */
}
Caching[index+13] = 0x00;
Caching[index+14] = 0x00;
Caching[index+15] = 0x00;
Caching[index+16] = 0x00;
Caching[index+17] = 0x00;
Caching[index+18] = 0x00;
Caching[index+19] = 0x00;
sm_memcpy(pModeSense, &Caching, lenRead);
}
else if (page == MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE)
{
SM_DBG5(("smsatModeSense10: MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE\n"));
InfoExceptionCtrl[0] = 0;
InfoExceptionCtrl[1] = (bit8)(lenRead - 2);
InfoExceptionCtrl[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
InfoExceptionCtrl[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
InfoExceptionCtrl[4] = 0x00; /* reserved and LONGLBA */
InfoExceptionCtrl[4] = (bit8)(InfoExceptionCtrl[4] | 0x1); /* LONGLBA is set */
}
else
{
InfoExceptionCtrl[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
InfoExceptionCtrl[5] = 0x00; /* reserved */
InfoExceptionCtrl[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
InfoExceptionCtrl[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
InfoExceptionCtrl[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
InfoExceptionCtrl[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
InfoExceptionCtrl[9] = 0x00; /* unspecified */
InfoExceptionCtrl[10] = 0x00; /* unspecified */
InfoExceptionCtrl[11] = 0x00; /* unspecified */
InfoExceptionCtrl[12] = 0x00; /* unspecified */
InfoExceptionCtrl[13] = 0x00; /* unspecified */
InfoExceptionCtrl[14] = 0x00; /* unspecified */
InfoExceptionCtrl[15] = 0x00; /* unspecified */
/* reserved */
InfoExceptionCtrl[16] = 0x00; /* reserved */
InfoExceptionCtrl[17] = 0x00; /* reserved */
InfoExceptionCtrl[18] = 0x00; /* reserved */
InfoExceptionCtrl[19] = 0x00; /* reserved */
/* Block size */
InfoExceptionCtrl[20] = 0x00;
InfoExceptionCtrl[21] = 0x00;
InfoExceptionCtrl[22] = 0x02; /* Block size is always 512 bytes */
InfoExceptionCtrl[23] = 0x00;
}
else
{
/* density code */
InfoExceptionCtrl[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
InfoExceptionCtrl[9] = 0x00; /* unspecified */
InfoExceptionCtrl[10] = 0x00; /* unspecified */
InfoExceptionCtrl[11] = 0x00; /* unspecified */
/* reserved */
InfoExceptionCtrl[12] = 0x00; /* reserved */
/* Block size */
InfoExceptionCtrl[13] = 0x00;
InfoExceptionCtrl[14] = 0x02; /* Block size is always 512 bytes */
InfoExceptionCtrl[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up informational-exceptions control mode page, SAT, Table 68
*/
InfoExceptionCtrl[index+0] = 0x1C; /* page code */
InfoExceptionCtrl[index+1] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
InfoExceptionCtrl[index+2] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
InfoExceptionCtrl[index+2] = 0x08;/* DEXCPT bit is set */
}
InfoExceptionCtrl[index+3] = 0x00; /* We don't support MRIE */
InfoExceptionCtrl[index+4] = 0x00; /* Interval timer vendor-specific */
InfoExceptionCtrl[index+5] = 0x00;
InfoExceptionCtrl[index+6] = 0x00;
InfoExceptionCtrl[index+7] = 0x00;
InfoExceptionCtrl[index+8] = 0x00; /* REPORT-COUNT */
InfoExceptionCtrl[index+9] = 0x00;
InfoExceptionCtrl[index+10] = 0x00;
InfoExceptionCtrl[index+11] = 0x00;
sm_memcpy(pModeSense, &InfoExceptionCtrl, lenRead);
}
else
{
/* Error */
SM_DBG1(("smsatModeSense10: Error page %d!!!\n", page));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
if (allocationLen > lenRead)
{
SM_DBG1(("smsatModeSense10: reporting underrun lenRead=0x%x allocationLen=0x%x smIORequest=%p\n", lenRead, allocationLen, smIORequest));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOUnderRun,
allocationLen - lenRead,
agNULL,
satIOContext->interruptContext );
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return SM_RC_SUCCESS;
}
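/*
READ CAPACITY(10) reports the address of the LAST logical block, so the
code below subtracts one from the Identify-reported sector count; when
the capacity does not fit in the 4-byte field it returns 0xFFFFFFFF so
the initiator retries with READ CAPACITY(16). A hedged sketch of the
conversion (field names as used in this file):
  lastLba   = ((maxLBA16_31 << 16) | maxLBA0_15) - 1;
  buf[0..3] = lastLba, serialized big-endian
  buf[4..7] = block length in bytes, normally 0x00000200 (512)
*/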
osGLOBAL bit32
smsatReadCapacity10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
bit8 dataBuffer[8] = {0};
bit32 allocationLen;
bit8 *pVirtAddr = agNULL;
smDeviceData_t *pSatDevData;
agsaSATAIdentifyData_t *pSATAIdData;
bit32 lastLba;
bit32 word117_118;
bit32 word117;
bit32 word118;
pSense = satIOContext->pSense;
pVirtAddr = (bit8 *) smScsiRequest->sglVirtualAddr;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
pSATAIdData = &pSatDevData->satIdentifyData;
allocationLen = scsiCmnd->expDataLength;
SM_DBG5(("smsatReadCapacity10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadCapacity10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/*
* If Logical block address is not set to zero, return error
*/
if ((scsiCmnd->cdb[2] || scsiCmnd->cdb[3] || scsiCmnd->cdb[4] || scsiCmnd->cdb[5]))
{
SM_DBG1(("smsatReadCapacity10: *** ERROR *** logical address non zero, did %d!!!\n",
pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/*
* If PMI bit is not zero, return error
*/
if ( ((scsiCmnd->cdb[8]) & SCSI_READ_CAPACITY10_PMI_MASK) != 0 )
{
SM_DBG1(("smsatReadCapacity10: *** ERROR *** PMI is not zero, did %d\n",
pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/*
filling in Read Capacity parameter data
the saved Identify Device data has already been byte-swapped
See ATA spec p125 and p136 and SBC spec p54
*/
/*
* If 48-bit addressing is supported, set capacity information from Identify
* Device Word 100-103.
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/*
* Setting RETURNED LOGICAL BLOCK ADDRESS in READ CAPACITY(10) response data:
* SBC-2 specifies that if the capacity exceeded the 4-byte RETURNED LOGICAL
- * BLOCK ADDRESS in READ CAPACITY(10) parameter data, the the RETURNED LOGICAL
+ * BLOCK ADDRESS in READ CAPACITY(10) parameter data, the RETURNED LOGICAL
* BLOCK ADDRESS should be set to 0xFFFFFFFF so the application client would
* then issue a READ CAPACITY(16) command.
*/
/* ATA Identify Device information word 100 - 103 */
if ( (pSATAIdData->maxLBA32_47 != 0 ) || (pSATAIdData->maxLBA48_63 != 0))
{
dataBuffer[0] = 0xFF; /* MSB number of block */
dataBuffer[1] = 0xFF;
dataBuffer[2] = 0xFF;
dataBuffer[3] = 0xFF; /* LSB number of block */
SM_DBG1(("smsatReadCapacity10: returns 0xFFFFFFFF!!!\n"));
}
else /* fit the READ CAPACITY(10) 4-byte response length */
{
lastLba = (((pSATAIdData->maxLBA16_31) << 16) ) |
(pSATAIdData->maxLBA0_15);
lastLba = lastLba - 1; /* LBA starts from zero */
/*
for testing
lastLba = lastLba - (512*10) - 1;
*/
dataBuffer[0] = (bit8)((lastLba >> 24) & 0xFF); /* MSB */
dataBuffer[1] = (bit8)((lastLba >> 16) & 0xFF);
dataBuffer[2] = (bit8)((lastLba >> 8) & 0xFF);
dataBuffer[3] = (bit8)((lastLba ) & 0xFF); /* LSB */
SM_DBG3(("smsatReadCapacity10: lastLba is 0x%x %d\n", lastLba, lastLba));
SM_DBG3(("smsatReadCapacity10: LBA 0 is 0x%x %d\n", dataBuffer[0], dataBuffer[0]));
SM_DBG3(("smsatReadCapacity10: LBA 1 is 0x%x %d\n", dataBuffer[1], dataBuffer[1]));
SM_DBG3(("smsatReadCapacity10: LBA 2 is 0x%x %d\n", dataBuffer[2], dataBuffer[2]));
SM_DBG3(("smsatReadCapacity10: LBA 3 is 0x%x %d\n", dataBuffer[3], dataBuffer[3]));
}
}
/*
* For 28-bit addressing, set capacity information from Identify
* Device Word 60-61.
*/
else
{
/* ATA Identify Device information word 60 - 61 */
lastLba = (((pSATAIdData->numOfUserAddressableSectorsHi) << 16) ) |
(pSATAIdData->numOfUserAddressableSectorsLo);
lastLba = lastLba - 1; /* LBA starts from zero */
dataBuffer[0] = (bit8)((lastLba >> 24) & 0xFF); /* MSB */
dataBuffer[1] = (bit8)((lastLba >> 16) & 0xFF);
dataBuffer[2] = (bit8)((lastLba >> 8) & 0xFF);
dataBuffer[3] = (bit8)((lastLba ) & 0xFF); /* LSB */
}
/* SAT Rev 8d */
if (((pSATAIdData->word104_107[2]) & 0x1000) == 0)
{
SM_DBG5(("smsatReadCapacity10: Default Block Length is 512\n"));
/*
* Set the block size, fixed at 512 bytes.
*/
dataBuffer[4] = 0x00; /* MSB block size in bytes */
dataBuffer[5] = 0x00;
dataBuffer[6] = 0x02;
dataBuffer[7] = 0x00; /* LSB block size in bytes */
}
else
{
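/* Identify Device words 117-118 give the logical sector size in 16-bit
words (valid when word 106 bit 12 is set, tested above); multiplying
by two converts the value to bytes before it is stored big-endian */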
word118 = pSATAIdData->word112_126[6];
word117 = pSATAIdData->word112_126[5];
word117_118 = (word118 << 16) + word117;
word117_118 = word117_118 * 2;
dataBuffer[4] = (bit8)((word117_118 >> 24) & 0xFF); /* MSB block size in bytes */
dataBuffer[5] = (bit8)((word117_118 >> 16) & 0xFF);
dataBuffer[6] = (bit8)((word117_118 >> 8) & 0xFF);
dataBuffer[7] = (bit8)(word117_118 & 0xFF); /* LSB block size in bytes */
SM_DBG1(("smsatReadCapacity10: Nondefault word118 %d 0x%x !!!\n", word118, word118));
SM_DBG1(("smsatReadCapacity10: Nondefault word117 %d 0x%x !!!\n", word117, word117));
SM_DBG1(("smsatReadCapacity10: Nondefault Block Length is %d 0x%x !!!\n",word117_118, word117_118));
}
/* fill in MAX LBA, which is used in satSendDiagnostic_1() */
pSatDevData->satMaxLBA[0] = 0; /* MSB */
pSatDevData->satMaxLBA[1] = 0;
pSatDevData->satMaxLBA[2] = 0;
pSatDevData->satMaxLBA[3] = 0;
pSatDevData->satMaxLBA[4] = dataBuffer[0];
pSatDevData->satMaxLBA[5] = dataBuffer[1];
pSatDevData->satMaxLBA[6] = dataBuffer[2];
pSatDevData->satMaxLBA[7] = dataBuffer[3]; /* LSB */
SM_DBG4(("smsatReadCapacity10: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x , did %d\n",
dataBuffer[0], dataBuffer[1], dataBuffer[2], dataBuffer[3],
dataBuffer[4], dataBuffer[5], dataBuffer[6], dataBuffer[7],
pSatDevData->id));
sm_memcpy(pVirtAddr, dataBuffer, MIN(allocationLen, 8));
/*
* Send the completion response now.
*/
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
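/*
READ CAPACITY(16) carries the full 64-bit RETURNED LOGICAL BLOCK ADDRESS.
The Identify data keeps the 48-bit maximum LBA in 16-bit fields, which
the code below serializes big-endian (a sketch, using this file's names):
  buf[0..3]  = maxLBA48_63, maxLBA32_47                 high doubleword
  buf[4..7]  = ((maxLBA16_31 << 16) | maxLBA0_15) - 1   low doubleword
  buf[8..11] = 0x00000200                               fixed 512-byte blocks
Note the decrement is applied to the low doubleword only, which assumes
the low 32 bits of the sector count are non-zero.
*/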
osGLOBAL bit32
smsatReadCapacity16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
bit8 dataBuffer[32] = {0};
bit8 *pVirtAddr = agNULL;
smDeviceData_t *pSatDevData;
agsaSATAIdentifyData_t *pSATAIdData;
bit32 lastLbaLo;
bit32 allocationLen;
bit32 readCapacityLen = 32;
bit32 i = 0;
pSense = satIOContext->pSense;
pVirtAddr = (bit8 *) smScsiRequest->sglVirtualAddr;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
pSATAIdData = &pSatDevData->satIdentifyData;
SM_DBG5(("smsatReadCapacity16: start\n"));
/* Find the buffer size allocated by Initiator */
allocationLen = (((bit32)scsiCmnd->cdb[10]) << 24) |
(((bit32)scsiCmnd->cdb[11]) << 16) |
(((bit32)scsiCmnd->cdb[12]) << 8 ) |
(((bit32)scsiCmnd->cdb[13]) );
allocationLen = MIN(allocationLen, scsiCmnd->expDataLength);
#ifdef REMOVED
if (allocationLen < readCapacityLen)
{
SM_DBG1(("smsatReadCapacity16: *** ERROR *** insufficient len=0x%x readCapacityLen=0x%x!!!\n", allocationLen, readCapacityLen));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
#endif
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadCapacity16: return control!!!\n"));
return SM_RC_SUCCESS;
}
/*
* If Logical block address is not set to zero, return error
*/
if ((scsiCmnd->cdb[2] || scsiCmnd->cdb[3] || scsiCmnd->cdb[4] || scsiCmnd->cdb[5]) ||
(scsiCmnd->cdb[6] || scsiCmnd->cdb[7] || scsiCmnd->cdb[8] || scsiCmnd->cdb[9]) )
{
SM_DBG1(("smsatReadCapacity16: *** ERROR *** logical address non zero, did %d\n",
pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/*
* If PMI bit is not zero, return error
*/
if ( ((scsiCmnd->cdb[14]) & SCSI_READ_CAPACITY16_PMI_MASK) != 0 )
{
SM_DBG1(("smsatReadCapacity16: *** ERROR *** PMI is not zero, did %d\n",
pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/*
filling in Read Capacity parameter data
*/
/*
* If 48-bit addressing is supported, set capacity information from Identify
* Device Word 100-103.
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
dataBuffer[0] = (bit8)(((pSATAIdData->maxLBA48_63) >> 8) & 0xff); /* MSB */
dataBuffer[1] = (bit8)((pSATAIdData->maxLBA48_63) & 0xff);
dataBuffer[2] = (bit8)(((pSATAIdData->maxLBA32_47) >> 8) & 0xff);
dataBuffer[3] = (bit8)((pSATAIdData->maxLBA32_47) & 0xff);
lastLbaLo = (((pSATAIdData->maxLBA16_31) << 16) ) | (pSATAIdData->maxLBA0_15);
lastLbaLo = lastLbaLo - 1; /* LBA starts from zero */
dataBuffer[4] = (bit8)((lastLbaLo >> 24) & 0xFF);
dataBuffer[5] = (bit8)((lastLbaLo >> 16) & 0xFF);
dataBuffer[6] = (bit8)((lastLbaLo >> 8) & 0xFF);
dataBuffer[7] = (bit8)((lastLbaLo ) & 0xFF); /* LSB */
}
/*
* For 28-bit addressing, set capacity information from Identify
* Device Word 60-61.
*/
else
{
dataBuffer[0] = 0; /* MSB */
dataBuffer[1] = 0;
dataBuffer[2] = 0;
dataBuffer[3] = 0;
lastLbaLo = (((pSATAIdData->numOfUserAddressableSectorsHi) << 16) ) |
(pSATAIdData->numOfUserAddressableSectorsLo);
lastLbaLo = lastLbaLo - 1; /* LBA starts from zero */
dataBuffer[4] = (bit8)((lastLbaLo >> 24) & 0xFF);
dataBuffer[5] = (bit8)((lastLbaLo >> 16) & 0xFF);
dataBuffer[6] = (bit8)((lastLbaLo >> 8) & 0xFF);
dataBuffer[7] = (bit8)((lastLbaLo ) & 0xFF); /* LSB */
}
/*
* Set the block size, fixed at 512 bytes.
*/
dataBuffer[8] = 0x00; /* MSB block size in bytes */
dataBuffer[9] = 0x00;
dataBuffer[10] = 0x02;
dataBuffer[11] = 0x00; /* LSB block size in bytes */
/* fill in MAX LBA, which is used in satSendDiagnostic_1() */
pSatDevData->satMaxLBA[0] = dataBuffer[0]; /* MSB */
pSatDevData->satMaxLBA[1] = dataBuffer[1];
pSatDevData->satMaxLBA[2] = dataBuffer[2];
pSatDevData->satMaxLBA[3] = dataBuffer[3];
pSatDevData->satMaxLBA[4] = dataBuffer[4];
pSatDevData->satMaxLBA[5] = dataBuffer[5];
pSatDevData->satMaxLBA[6] = dataBuffer[6];
pSatDevData->satMaxLBA[7] = dataBuffer[7]; /* LSB */
SM_DBG5(("smsatReadCapacity16: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x , did %d\n",
dataBuffer[0], dataBuffer[1], dataBuffer[2], dataBuffer[3],
dataBuffer[4], dataBuffer[5], dataBuffer[6], dataBuffer[7],
dataBuffer[8], dataBuffer[9], dataBuffer[10], dataBuffer[11],
pSatDevData->id));
if (allocationLen > 0xC) /* 0xc = 12 */
{
for(i=12;i<=31;i++)
{
dataBuffer[i] = 0x00;
}
}
sm_memcpy(pVirtAddr, dataBuffer, MIN(allocationLen, readCapacityLen));
/*
* Send the completion response now.
*/
if (allocationLen > readCapacityLen)
{
/* underrun */
SM_DBG1(("smsatReadCapacity16: reporting underrun readCapacityLen=0x%x allocationLen=0x%x !!!\n", readCapacityLen, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOUnderRun,
allocationLen - readCapacityLen,
agNULL,
satIOContext->interruptContext );
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return SM_RC_SUCCESS;
}
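/*
REPORT LUNS parameter data is an 8-byte header (a 4-byte big-endian LUN
LIST LENGTH plus 4 reserved bytes) followed by one 8-byte entry per
logical unit. A SATA device behind this translation exposes a single
LUN 0 with the peripheral device addressing method, i.e. an all-zero
entry, which is exactly what smsatReportLun builds below.
*/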
osGLOBAL bit32
smsatReportLun(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
bit8 dataBuffer[16] = {0};
bit32 allocationLen;
bit32 reportLunLen;
smScsiReportLun_t *pReportLun;
smIniScsiCmnd_t *scsiCmnd;
#ifdef TD_DEBUG_ENABLE
smDeviceData_t *pSatDevData;
#endif
pSense = satIOContext->pSense;
pReportLun = (smScsiReportLun_t *) dataBuffer;
scsiCmnd = &smScsiRequest->scsiCmnd;
#ifdef TD_DEBUG_ENABLE
pSatDevData = satIOContext->pSatDevData;
#endif
SM_DBG5(("smsatReportLun: start\n"));
// smhexdump("smsatReportLun: cdb", (bit8 *)scsiCmnd, 16);
/* Find the buffer size allocated by Initiator */
allocationLen = (((bit32)scsiCmnd->cdb[6]) << 24) |
(((bit32)scsiCmnd->cdb[7]) << 16) |
(((bit32)scsiCmnd->cdb[8]) << 8 ) |
(((bit32)scsiCmnd->cdb[9]) );
allocationLen = MIN(allocationLen, scsiCmnd->expDataLength);
reportLunLen = 16; /* 8 byte header and 8 bytes of LUN0 */
if (allocationLen < reportLunLen)
{
SM_DBG1(("smsatReportLun: *** ERROR *** insufficient len=0x%x did %d\n",
reportLunLen, pSatDevData->id));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/* Set length to one entry */
pReportLun->len[0] = 0;
pReportLun->len[1] = 0;
pReportLun->len[2] = 0;
pReportLun->len[3] = sizeof (tiLUN_t);
pReportLun->reserved = 0;
/* Set to LUN 0:
* - address method to 0x00: Peripheral device addressing method,
* - bus identifier to 0
*/
pReportLun->lunList[0].lun[0] = 0;
pReportLun->lunList[0].lun[1] = 0;
pReportLun->lunList[0].lun[2] = 0;
pReportLun->lunList[0].lun[3] = 0;
pReportLun->lunList[0].lun[4] = 0;
pReportLun->lunList[0].lun[5] = 0;
pReportLun->lunList[0].lun[6] = 0;
pReportLun->lunList[0].lun[7] = 0;
sm_memcpy(smScsiRequest->sglVirtualAddr, dataBuffer, MIN(allocationLen, reportLunLen));
if (allocationLen > reportLunLen)
{
/* underrun */
SM_DBG1(("smsatReportLun: reporting underrun reportLunLen=0x%x allocationLen=0x%x !!!\n", reportLunLen, allocationLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOUnderRun,
allocationLen - reportLunLen,
agNULL,
satIOContext->interruptContext );
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return SM_RC_SUCCESS;
}
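/*
FORMAT UNIT maps to no ATA command in SAT, so the translation below is
pure validation. Fields examined (as read by this implementation):
  cdb[1]  FMTDATA, CMPLIST and DEFECT LIST FORMAT
  cdb[2]  LONGLIST (selects the short vs long parameter header)
  cdb[7]  defect-list header flags IMMED, FOV, DCRT and IP
Every accepted combination completes immediately with GOOD status.
*/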
osGLOBAL bit32
smsatFormatUnit(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
note: we don't support media certification or the IP bit in this version.
satDevData->satFormatState stays agFALSE since SAT does not actually send
any ATA command
*/
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
bit32 index = 0;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
SM_DBG5(("smsatFormatUnit: start\n"));
/*
checking opcode
1. FMTDATA bit == 0(no defect list header)
2. FMTDATA bit == 1 and DCRT bit == 1(defect list header is provided
with DCRT bit set)
*/
if ( ((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK) == 0) ||
((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK))
)
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
SM_DBG1(("smsatFormatUnit: return opcode!!!\n"));
return SM_RC_SUCCESS;
}
/*
checking DEFECT LIST FORMAT and defect list length
*/
if ( (((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_DEFECT_LIST_FORMAT_MASK) == 0x00) ||
((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_DEFECT_LIST_FORMAT_MASK) == 0x06)) )
{
/* short parameter header */
if ((scsiCmnd->cdb[2] & SCSI_FORMAT_UNIT_LONGLIST_MASK) == 0x00)
{
index = 8;
}
/* long parameter header */
if ((scsiCmnd->cdb[2] & SCSI_FORMAT_UNIT_LONGLIST_MASK) == 0x01)
{
index = 10;
}
/* defect list length */
if ((scsiCmnd->cdb[index] != 0) || (scsiCmnd->cdb[index+1] != 0))
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatFormatUnit: return defect list format!!!\n"));
return SM_RC_SUCCESS;
}
}
if ( (scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK) &&
(scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_CMPLIST_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatFormatUnit: return cmplist!!!\n"));
return SM_RC_SUCCESS;
}
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatFormatUnit: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* defect list header field, if it exists, SAT rev8, Table 37, p48 */
if (scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK)
{
/* case 1,2,3 */
/* IMMED 1; FOV 0; FOV 1, DCRT 1, IP 0 */
if ( (scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) ||
( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK)) ||
( (scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK))
)
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
SM_DBG5(("smsatFormatUnit: return defect list case 1\n"));
return SM_RC_SUCCESS;
}
/* case 4,5,6 */
/*
1. IMMED 0, FOV 1, DCRT 0, IP 0
2. IMMED 0, FOV 1, DCRT 0, IP 1
3. IMMED 0, FOV 1, DCRT 1, IP 1
*/
if ( ( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK) )
||
( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK) )
||
( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK) )
)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG5(("smsatFormatUnit: return defect list case 2\n"));
return SM_RC_SUCCESS;
}
}
/*
* Send the completion response now.
*/
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
SM_DBG5(("smsatFormatUnit: return last\n"));
return SM_RC_SUCCESS;
}
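/*
SEND DIAGNOSTIC translation summary (inferred from the cases handled
below): the default self-test is emulated with READ VERIFY SECTOR(S)
(EXT) when no SMART self-test is available, or with SMART EXECUTE
OFF-LINE IMMEDIATE when it is. Explicit SELF-TEST CODE values select
the SMART subcommand through the LBA-low byte of the FIS:
  0x01 background short      0x02 background extended
  0x7F abort background      0x81 captive short
  0x82 captive extended
*/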
osGLOBAL bit32
smsatSendDiagnostic(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 parmLen;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatSendDiagnostic: start\n"));
/* reset satVerifyState */
pSatDevData->satVerifyState = 0;
/* no pending diagnostic in background */
pSatDevData->satBGPendingDiag = agFALSE;
/* table 27, 8.10 p39 SAT Rev8 */
/*
1. checking PF == 1
2. checking DEVOFFL == 1
3. checking UNITOFFL == 1
4. checking PARAMETER LIST LENGTH != 0
*/
if ( (scsiCmnd->cdb[1] & SCSI_PF_MASK) ||
(scsiCmnd->cdb[1] & SCSI_DEVOFFL_MASK) ||
(scsiCmnd->cdb[1] & SCSI_UNITOFFL_MASK) ||
( (scsiCmnd->cdb[3] != 0) || (scsiCmnd->cdb[4] != 0) )
)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSendDiagnostic: return PF, DEVOFFL, UNITOFFL, PARAM LIST!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSendDiagnostic: return control!!!\n"));
return SM_RC_SUCCESS;
}
parmLen = (scsiCmnd->cdb[3] << 8) + scsiCmnd->cdb[4];
/* checking SELFTEST bit*/
/* table 29, 8.10.3, p41 SAT Rev8 */
/* case 1 */
if ( !(scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agFALSE)
)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSendDiagnostic: return Table 29 case 1!!!\n"));
return SM_RC_SUCCESS;
}
/* case 2 */
if ( !(scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agFALSE)
)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_ATA_DEVICE_FEATURE_NOT_ENABLED,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG5(("smsatSendDiagnostic: return Table 29 case 2\n"));
return SM_RC_SUCCESS;
}
/*
case 3
see SELF TEST CODE later
*/
/* case 4 */
/*
sends three ATA verify commands
*/
if ( ((scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agFALSE))
||
((scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agFALSE))
)
{
/*
sector count 1, LBA 0
sector count 1, LBA MAX
sector count 1, LBA random
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: return Table 29 case 4\n"));
return (status);
}
/* case 5 */
if ( (scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agTRUE)
)
{
/* sends SMART EXECUTE OFF-LINE IMMEDIATE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE; /* FIS features NA */
fis->d.lbaLow = 0x81; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: return Table 29 case 5\n"));
return (status);
}
/* SAT rev8 Table29 p41 case 3*/
/* checking SELF TEST CODE*/
if ( !(scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agTRUE)
)
{
/* SAT rev8 Table28 p40 */
/* finding self-test code */
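/* SELF-TEST CODE values (cdb[1] bits 7:5): 0 default self-test,
1 background short, 2 background extended, 4 abort background,
5 foreground short, 6 foreground extended; 3 and 7 are reserved */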
switch ((scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_TEST_CODE_MASK) >> 5)
{
case 1:
pSatDevData->satBGPendingDiag = agTRUE;
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
/* sends SMART EXECUTE OFF-LINE IMMEDIATE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE; /* FIS features NA */
fis->d.lbaLow = 0x01; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: return Table 28 case 1\n"));
return (status);
case 2:
pSatDevData->satBGPendingDiag = agTRUE;
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
/* issuing SMART EXECUTE OFF-LINE IMMEDIATE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE; /* FIS features NA */
fis->d.lbaLow = 0x02; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: return Table 28 case 2\n"));
return (status);
case 4:
if (parmLen != 0)
{
/* check condition */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSendDiagnostic: case 4, non zero ParmLen %d!!!\n", parmLen));
return SM_RC_SUCCESS;
}
if (pSatDevData->satBGPendingDiag == agTRUE)
{
/* sends SMART EXECUTE OFF-LINE IMMEDIATE abort */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE; /* FIS features NA */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: send SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE case 3\n"));
SM_DBG5(("smsatSendDiagnostic: Table 28 case 4\n"));
return (status);
}
else
{
/* check condition */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSendDiagnostic: case 4, no pending diagnostic in background!!!\n"));
SM_DBG5(("smsatSendDiagnostic: Table 28 case 4\n"));
return SM_RC_SUCCESS;
}
break;
case 5:
/* issuing SMART EXECUTE OFF-LINE IMMEDIATE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE; /* FIS features NA */
fis->d.lbaLow = 0x81; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: return Table 28 case 5\n"));
return (status);
case 6:
/* issuing SMART EXECUTE OFF-LINE IMMEDIATE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE; /* FIS features NA */
fis->d.lbaLow = 0x82; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatSendDiagnostic: return Table 28 case 6\n"));
return (status);
case 0:
case 3: /* fall through */
case 7: /* fall through */
default:
break;
}/* switch */
/* returns the results of default self-testing, which is good */
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
SM_DBG5(("smsatSendDiagnostic: return Table 28 case 0,3,7 and default\n"));
return SM_RC_SUCCESS;
}
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
SM_DBG5(("smsatSendDiagnostic: return last\n"));
return SM_RC_SUCCESS;
}
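/*
START STOP UNIT, SAT rev 8 Table 48: the START/LOEJ combinations handled
below translate as
  START=0 LOEJ=0  ->  FLUSH CACHE (EXT), i.e. a safe stop
  START=1 LOEJ=0  ->  READ VERIFY SECTOR(S) (EXT) to confirm readiness
  START=0 LOEJ=1  ->  media eject, only when removable media is enabled
and a set IMMED bit completes the SCSI command with GOOD status before
any ATA command is issued.
*/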
osGLOBAL bit32
smsatStartStopUnit(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatStartStopUnit: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatStartStopUnit: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* Spec p55, Table 48 checking START and LOEJ bit */
/* case 1 */
if ( !(scsiCmnd->cdb[4] & SCSI_START_MASK) && !(scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) )
{
if ( (scsiCmnd->cdb[1] & SCSI_IMMED_MASK) )
{
/* IMMED bit, SAT rev 8, 9.11.2.1 p54 */
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
SM_DBG5(("smsatStartStopUnit: return table48 case 1-1\n"));
return SM_RC_SUCCESS;
}
/* sends FLUSH CACHE or FLUSH CACHE EXT */
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* FLUSH CACHE EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE_EXT; /* 0xEA */
fis->h.features = 0; /* FIS reserve */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
else
{
/* FLUSH CACHE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE; /* 0xE7 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatStartStopUnit: return table48 case 1\n"));
return (status);
}
/* case 2 */
else if ( (scsiCmnd->cdb[4] & SCSI_START_MASK) && !(scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) )
{
/* IMMED bit, SAT rev 8, 9.11.2.1, p 54 */
if ( (scsiCmnd->cdb[1] & SCSI_IMMED_MASK) )
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
SM_DBG5(("smsatStartStopUnit: return table48 case 2 1\n"));
return SM_RC_SUCCESS;
}
/*
sends READ_VERIFY_SECTORS(_EXT)
sector count 1, any LBA between zero and the maximum
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x01; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x00; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x00; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0x00; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0x00; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0x00; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0x01; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x00; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x00; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatStartStopUnit: return table48 case 2 2\n"));
return status;
}
/* case 3 */
else if ( !(scsiCmnd->cdb[4] & SCSI_START_MASK) && (scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) )
{
if(pSatDevData->satRemovableMedia && pSatDevData->satRemovableMediaEnabled)
{
/* support for removable media */
/* IMMED bit, SAT rev 8, 9.11.2.1, p 54 */
if ( (scsiCmnd->cdb[1] & SCSI_IMMED_MASK) )
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
SM_DBG5(("smsatStartStopUnit: return table48 case 3 1\n"));
return SM_RC_SUCCESS;
}
/*
sends MEDIA EJECT
*/
/* Media Eject fis */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_MEDIA_EJECT; /* 0xED */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
/* sector count zero */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
/* no support for removable media */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG5(("smsatStartStopUnit: return Table 29 case 3 2\n"));
return SM_RC_SUCCESS;
}
}
/* case 4 */
else /* ( (scsiCmnd->cdb[4] & SCSI_START_MASK) && (scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) ) */
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG5(("smsatStartStopUnit: return Table 29 case 4\n"));
return SM_RC_SUCCESS;
}
}
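/*
smsatWriteSame10: SAT-layer translation of SCSI WRITE SAME(10).
Only the combination with LBDATA and PBDATA both clear is translated to
an ATA write (WRITE DMA EXT, WRITE SECTOR(S) EXT, or WRITE FPDMA QUEUED,
issued one sector at a time, with the remainder presumably driven from
smsatWriteSame10CB); the LBDATA-only combination is currently accepted
without starting an ATA command, and the remaining combinations return
CHECK CONDITION per SAT rev 8, Table 62.
*/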
osGLOBAL bit32
smsatWriteSame10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWriteSame10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteSame10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking LBDATA and PBDATA */
/* case 1 */
if ( !(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
!(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK))
{
SM_DBG5(("smsatWriteSame10: case 1\n"));
/* spec 9.26.2, Table 62, p64, case 1*/
/*
normal case
just like write in 9.17.1
*/
if ( pSatDevData->sat48BitSupport != agTRUE )
{
/*
writeSame10 but no support for 48 bit addressing
-> problem in transfer length. Therefore, return check condition
*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteSame10: return internal checking!!!\n"));
return SM_RC_SUCCESS;
}
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
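/*
Worked example with hypothetical CDB values: cdb[2..5] = 00 01 00 00
gives lba = 0x00010000 (65536), and cdb[7..8] = 00 10 gives tl = 16.
*/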
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b (footnote)
When neither 48-bit addressing nor NCQ is supported, return check
condition if the LBA is beyond (2^28 - 1)
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1) /* SAT_TR_LBA_LIMIT is 2^28, 0x10000000 */
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteSame10: return LBA out of range!!!\n"));
return SM_RC_SUCCESS;
}
}
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA */
/* can't fit the transfer length since WRITE DMA has 1 byte for sector count */
SM_DBG1(("smsatWriteSame10: case 1-2 !!! error due to writesame10!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS is chosen for easier implementation */
/* can't fit the transfer length since WRITE SECTOR(S) has 1 byte for sector count */
SM_DBG1(("smsatWriteSame10: case 1-1 !!! error due to writesame10!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
} /* end of case 1 and 2 */
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
/* WRITE DMA EXT is chosen since WRITE SAME does not have FUA bit */
SM_DBG5(("smsatWriteSame10: case 1-3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* error check
ATA spec, p125, 6.17.29:
pSatDevData->satMaxUserAddrSectors is expected to be at most 0x0FFFFFFF,
so the largest transfer expressible here is 0x0FFFFFFF - 1 sectors
*/
if (pSatDevData->satMaxUserAddrSectors > 0x0FFFFFFF)
{
SM_DBG1(("smsatWriteSame10: case 3 !!! warning can't fit sectors!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT is chosen for easier implementation */
SM_DBG5(("smsatWriteSame10: case 1-4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* error check
ATA spec, p125, 6.17.29:
pSatDevData->satMaxUserAddrSectors is expected to be at most 0x0FFFFFFF,
so the largest transfer expressible here is 0x0FFFFFFF - 1 sectors
*/
if (pSatDevData->satMaxUserAddrSectors > 0x0FFFFFFF)
{
SM_DBG1(("smsatWriteSame10: case 4 !!! warning can't fit sectors!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatWriteSame10: case 1-5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG5(("smsatWriteSame10: case 1-5\n"));
/* Supports 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
if (tl == 0)
{
/* error check
ATA spec, p125, 6.17.29:
pSatDevData->satMaxUserAddrSectors is expected to be at most 0x0FFFFFFF,
so the largest transfer expressible here is 0x0FFFFFFF - 1 sectors
*/
if (pSatDevData->satMaxUserAddrSectors > 0x0FFFFFFF)
{
SM_DBG1(("smsatWriteSame10: case 4 !!! warning can't fit sectors!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* one sector at a time */
fis->h.features = 1; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* NO FUA bit in the WRITE SAME 10 */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
} /* end of case 1 */
else if ( !(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK))
{
/* spec 9.26.2, Table 62, p64, case 2*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG5(("smsatWriteSame10: return Table 62 case 2\n"));
return SM_RC_SUCCESS;
}
else if ( (scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
!(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK))
{
SM_DBG5(("smsatWriteSame10: Table 62 case 3\n"));
}
else /* ( (scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK)) */
{
/* spec 9.26.2, Table 62, p64, case 4*/
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG5(("smsatWriteSame10: return Table 62 case 4\n"));
return SM_RC_SUCCESS;
}
return SM_RC_SUCCESS;
}
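/*
smsatWriteSame16: WRITE SAME(16) is not translated; the command is
completed unconditionally with CHECK CONDITION and NO SENSE.
*/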
osGLOBAL bit32
smsatWriteSame16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
pSense = satIOContext->pSense;
SM_DBG5(("smsatWriteSame16: start\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest, /* == &satIntIo->satOrgSmIORequest */
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteSame16: return internal checking!!!\n"));
return SM_RC_SUCCESS;
}
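/*
smsatLogSense: SAT-layer translation of SCSI LOG SENSE (SAT rev 8,
Table 11). Header-only requests (allocation length 4) are synthesized
locally. Otherwise the supported-pages list is built from the SMART
capability flags, the self-test results page is backed by READ LOG EXT
or SMART READ LOG (via smsatLogSenseAllocate), and the informational
exceptions page is backed by SMART RETURN STATUS. Unsupported page codes
return CHECK CONDITION.
*/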
osGLOBAL bit32
smsatLogSense(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit32 flag = 0;
bit16 AllocLen = 0; /* allocation length */
bit8 AllLogPages[8];
bit16 lenRead = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pLogPage = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatLogSense: start\n"));
sm_memset(&AllLogPages, 0, 8);
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatLogSense: return control!!!\n"));
return SM_RC_SUCCESS;
}
AllocLen = ((scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8]);
AllocLen = MIN(AllocLen, scsiCmnd->expDataLength);
/* checking PC (Page Control) */
/* nothing */
/* special cases */
if (AllocLen == 4)
{
SM_DBG1(("smsatLogSense: AllocLen is 4!!!\n"));
switch (scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK)
{
case LOGSENSE_SUPPORTED_LOG_PAGES:
SM_DBG5(("smsatLogSense: case LOGSENSE_SUPPORTED_LOG_PAGES\n"));
if (pSatDevData->satSMARTFeatureSet == agTRUE)
{
/* add informational exception log */
flag = 1;
if (pSatDevData->satSMARTSelfTest == agTRUE)
{
/* add Self-Test results log page */
flag = 2;
}
}
else
{
/* only supported, no informational exception log, no Self-Test results log page */
flag = 0;
}
lenRead = 4;
AllLogPages[0] = LOGSENSE_SUPPORTED_LOG_PAGES; /* page code */
AllLogPages[1] = 0; /* reserved */
switch (flag)
{
case 0:
/* only supported */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 1; /* page length */
break;
case 1:
/* supported and informational exception log */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 2; /* page length */
break;
case 2:
/* supported, informational exception, and self-test results log pages */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 3; /* page length */
break;
default:
SM_DBG1(("smsatLogSense: error unallowed flag value %d!!!\n", flag));
break;
}
sm_memcpy(pLogPage, &AllLogPages, lenRead);
break;
case LOGSENSE_SELFTEST_RESULTS_PAGE:
SM_DBG5(("smsatLogSense: case LOGSENSE_SUPPORTED_LOG_PAGES\n"));
lenRead = 4;
AllLogPages[0] = LOGSENSE_SELFTEST_RESULTS_PAGE; /* page code */
AllLogPages[1] = 0; /* reserved */
/* page length = SELFTEST_RESULTS_LOG_PAGE_LENGTH - 1 - 3 = 400 = 0x190 */
AllLogPages[2] = 0x01;
AllLogPages[3] = 0x90; /* page length */
sm_memcpy(pLogPage, &AllLogPages, lenRead);
break;
case LOGSENSE_INFORMATION_EXCEPTIONS_PAGE:
SM_DBG5(("smsatLogSense: case LOGSENSE_SUPPORTED_LOG_PAGES\n"));
lenRead = 4;
AllLogPages[0] = LOGSENSE_INFORMATION_EXCEPTIONS_PAGE; /* page code */
AllLogPages[1] = 0; /* reserved */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = INFORMATION_EXCEPTIONS_LOG_PAGE_LENGTH - 1 - 3; /* page length */
sm_memcpy(pLogPage, &AllLogPages, lenRead);
break;
default:
SM_DBG1(("smsatLogSense: default Page Code 0x%x!!!\n", scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
} /* if */
/* SAT rev8 Table 11 p30*/
/* checking Page Code */
switch (scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK)
{
case LOGSENSE_SUPPORTED_LOG_PAGES:
SM_DBG5(("smsatLogSense: case 1\n"));
if (pSatDevData->satSMARTFeatureSet == agTRUE)
{
/* add informational exception log */
flag = 1;
if (pSatDevData->satSMARTSelfTest == agTRUE)
{
/* add Self-Test results log page */
flag = 2;
}
}
else
{
/* only supported, no informational exception log, no Self-Test results log page */
flag = 0;
}
AllLogPages[0] = 0; /* page code */
AllLogPages[1] = 0; /* reserved */
switch (flag)
{
case 0:
/* only supported */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 1; /* page length */
AllLogPages[4] = 0x00; /* supported page list */
lenRead = (bit8)(MIN(AllocLen, 5));
break;
case 1:
/* supported and informational exception log */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 2; /* page length */
AllLogPages[4] = 0x00; /* supported page list */
AllLogPages[5] = 0x10; /* supported page list */
lenRead = (bit8)(MIN(AllocLen, 6));
break;
case 2:
/* supported, informational exception, and self-test results log pages */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 3; /* page length */
AllLogPages[4] = 0x00; /* supported page list */
AllLogPages[5] = 0x10; /* supported page list */
AllLogPages[6] = 0x2F; /* supported page list */
lenRead = (bit8)(MIN(AllocLen, 7));
break;
default:
SM_DBG1(("smsatLogSense: error unallowed flag value %d!!!\n", flag));
break;
}
sm_memcpy(pLogPage, &AllLogPages, lenRead);
/* comparing allocation length to Log Page byte size */
/* SPC-4, 4.3.4.6, p28 */
if (AllocLen > lenRead )
{
SM_DBG1(("smsatLogSense: reporting underrun lenRead=0x%x AllocLen=0x%x!!!\n", lenRead, AllocLen));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOUnderRun,
AllocLen - lenRead,
agNULL,
satIOContext->interruptContext );
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
break;
case LOGSENSE_SELFTEST_RESULTS_PAGE:
SM_DBG5(("smsatLogSense: case 2\n"));
/* checking SMART self-test */
if (pSatDevData->satSMARTSelfTest == agFALSE)
{
SM_DBG5(("smsatLogSense: case 2 no SMART Self Test\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
}
else
{
/* if satSMARTEnabled is false, send SMART_ENABLE_OPERATIONS */
if (pSatDevData->satSMARTEnabled == agFALSE)
{
SM_DBG5(("smsatLogSense: case 2 calling satSMARTEnable\n"));
status = smsatLogSenseAllocate(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
0,
LOG_SENSE_0
);
return status;
}
else
{
/* SAT Rev 8, 10.2.4 p74 */
if ( pSatDevData->sat48BitSupport == agTRUE )
{
SM_DBG5(("smsatLogSense: case 2-1 sends READ LOG EXT\n"));
status = smsatLogSenseAllocate(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
512,
LOG_SENSE_1
);
return status;
}
else
{
SM_DBG5(("smsatLogSense: case 2-2 sends SMART READ LOG\n"));
status = smsatLogSenseAllocate(smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext,
512,
LOG_SENSE_2
);
return status;
}
}
}
break;
case LOGSENSE_INFORMATION_EXCEPTIONS_PAGE:
SM_DBG5(("smsatLogSense: case 3\n"));
/* checking SMART feature set */
if (pSatDevData->satSMARTFeatureSet == agFALSE)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
}
else
{
/* checking SMART feature enabled */
if (pSatDevData->satSMARTEnabled == agFALSE)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_ATA_DEVICE_FEATURE_NOT_ENABLED,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
}
else
{
/* SAT Rev 8, 10.2.3 p72 */
SM_DBG5(("smsatLogSense: case 3 sends SMART RETURN STATUS\n"));
/* sends SMART RETURN STATUS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_RETURN_STATUS;/* FIS features */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
break;
default:
SM_DBG1(("smsatLogSense: default Page Code 0x%x!!!\n", scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
break;
} /* end switch */
return SM_RC_SUCCESS;
}
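/*
smsatLogSenseAllocate: allocates an internal IO chained to the original
LOG SENSE request and dispatches it by 'flag': LOG_SENSE_0 first enables
SMART (smsatSMARTEnable), LOG_SENSE_1 issues READ LOG EXT
(smsatLogSense_2), and anything else issues SMART READ LOG
(smsatLogSense_3). Failures complete the original request with
smIOFailed / smDetailOtherError.
*/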
osGLOBAL bit32
smsatLogSenseAllocate(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smSCSIRequest,
smSatIOContext_t *satIOContext,
bit32 payloadSize,
bit32 flag
)
{
smDeviceData_t *pSatDevData;
smIORequestBody_t *smIORequestBody;
smSatInternalIo_t *satIntIo = agNULL;
smSatIOContext_t *satIOContext2;
bit32 status;
SM_DBG5(("smsatLogSenseAllocate: start\n"));
pSatDevData = satIOContext->pSatDevData;
/* create internal satIOContext */
satIntIo = smsatAllocIntIoResource( smRoot,
smIORequest, /* original request */
pSatDevData,
payloadSize,
satIntIo);
if (satIntIo == agNULL)
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOFailed,
smDetailOtherError,
agNULL,
satIOContext->interruptContext );
SM_DBG1(("smsatLogSenseAllocate: fail in allocation!!!\n"));
return SM_RC_SUCCESS;
} /* end of memory allocation failure */
satIntIo->satOrgSmIORequest = smIORequest;
smIORequestBody = (smIORequestBody_t *)satIntIo->satIntRequestBody;
satIOContext2 = &(smIORequestBody->transport.SATA.satIOContext);
satIOContext2->pSatDevData = pSatDevData;
satIOContext2->pFis = &(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext2->pScsiCmnd = &(satIntIo->satIntSmScsiXchg.scsiCmnd);
satIOContext2->pSense = &(smIORequestBody->transport.SATA.sensePayload);
satIOContext2->pSmSenseData = &(smIORequestBody->transport.SATA.smSenseData);
satIOContext2->pSmSenseData->senseData = satIOContext2->pSense;
satIOContext2->smRequestBody = satIntIo->satIntRequestBody;
satIOContext2->interruptContext = satIOContext->interruptContext;
satIOContext2->satIntIoContext = satIntIo;
satIOContext2->psmDeviceHandle = smDeviceHandle;
satIOContext2->satOrgIOContext = satIOContext;
if (flag == LOG_SENSE_0)
{
/* SAT_SMART_ENABLE_OPERATIONS */
status = smsatSMARTEnable( smRoot,
&(satIntIo->satIntSmIORequest),
smDeviceHandle,
&(satIntIo->satIntSmScsiXchg),
satIOContext2);
}
else if (flag == LOG_SENSE_1)
{
/* SAT_READ_LOG_EXT */
status = smsatLogSense_2( smRoot,
&(satIntIo->satIntSmIORequest),
smDeviceHandle,
&(satIntIo->satIntSmScsiXchg),
satIOContext2);
}
else
{
/* SAT_SMART_READ_LOG */
/* SAT_READ_LOG_EXT */
status = smsatLogSense_3( smRoot,
&(satIntIo->satIntSmIORequest),
smDeviceHandle,
&(satIntIo->satIntSmScsiXchg),
satIOContext2);
}
if (status != SM_RC_SUCCESS)
{
smsatFreeIntIoResource( smRoot,
pSatDevData,
satIntIo);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOFailed,
smDetailOtherError,
agNULL,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
return SM_RC_SUCCESS;
}
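/*
smsatSMARTEnable: issues SMART ENABLE OPERATIONS as a non-data command;
the completion callback (smsatSMARTEnableCB) is expected to continue the
original LOG SENSE once SMART is enabled.
*/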
osGLOBAL bit32
smsatSMARTEnable(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG5(("smsatSMARTEnable: start\n"));
/*
* Send the SAT_SMART_ENABLE_OPERATIONS command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_ENABLE_OPERATIONS;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0x4F;
fis->d.lbaHigh = 0xC2;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSMARTEnableCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
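/*
smsatLogSense_2: builds a READ LOG EXT FIS (log address 0x07 in LBA low,
one sector, PIO read) to fetch the self-test log on 48-bit capable
drives; completion is handled by smsatLogSenseCB.
*/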
osGLOBAL bit32
smsatLogSense_2(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG5(("smsatLogSense_2: start\n"));
/* sends READ LOG EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_LOG_EXT; /* 0x2F */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x07; /* 0x07 */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* one sector */
fis->d.sectorCountExp = 0x00; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
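/*
smsatLogSense_3: builds a SMART READ LOG FIS (command 0xB0 with features
SAT_SMART_READ_LOG, log address 0x06, LBA mid/high signature 0x4F/0xC2,
one sector, PIO read) for drives without 48-bit support; completion is
handled by smsatLogSenseCB.
*/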
osGLOBAL bit32
smsatLogSense_3(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG5(("smsatLogSense_3: start\n"));
/* sends SMART READ LOG */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_READ_LOG; /* 0xd5 */
fis->d.lbaLow = 0x06; /* 0x06 */
fis->d.lbaMid = 0x4F; /* 0x4f */
fis->d.lbaHigh = 0xC2; /* 0xc2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* one sector */
fis->d.sectorCountExp = 0x00; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
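/*
smsatModeSelect6: SAT-layer translation of SCSI MODE SELECT(6). After
the CONTROL byte, PF bit, and mode parameter header are validated, the
caching page maps to ATA SET FEATURES (enable or disable write cache)
and the informational exceptions page maps to SMART ENABLE/DISABLE
OPERATIONS; the control and read-write error recovery pages are only
checked against the fixed values this layer reports, with no ATA command
issued.
*/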
osGLOBAL bit32
smsatModeSelect6(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit32 StartingIndex = 0;
bit8 PageCode = 0;
bit32 chkCnd = agFALSE;
bit32 parameterListLen = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pLogPage = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatModeSelect6: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSelect6: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking PF bit */
if ( !(scsiCmnd->cdb[1] & SCSI_MODE_SELECT6_PF_MASK))
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSelect6: PF bit check!!!\n"));
return SM_RC_SUCCESS;
}
parameterListLen = scsiCmnd->cdb[4];
parameterListLen = MIN(parameterListLen, scsiCmnd->expDataLength);
if ((0 == parameterListLen) || (agNULL == pLogPage))
{
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
/* checking Block Descriptor Length on Mode parameter header(6)*/
if (pLogPage[3] == 8)
{
/* mode parameter block descriptor exists */
PageCode = (bit8)(pLogPage[12] & 0x3F); /* page code and index is 4 + 8 */
StartingIndex = 12;
}
else if (pLogPage[3] == 0)
{
/* mode parameter block descriptor does not exist */
PageCode = (bit8)(pLogPage[4] & 0x3F); /* page code and index is 4 + 0 */
StartingIndex = 4;
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
else
{
SM_DBG1(("smsatModeSelect6: return mode parameter block descriptor 0x%x!!!\n", pLogPage[3]));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
switch (PageCode) /* page code */
{
case MODESELECT_CONTROL_PAGE:
SM_DBG1(("smsatModeSelect6: Control mode page!!!\n"));
if ( pLogPage[StartingIndex+1] != 0x0A ||
pLogPage[StartingIndex+2] != 0x02 ||
(pSatDevData->satNCQ == agTRUE && pLogPage[StartingIndex+3] != 0x12) ||
(pSatDevData->satNCQ == agFALSE && pLogPage[StartingIndex+3] != 0x02) ||
(pLogPage[StartingIndex+4] & BIT3_MASK) != 0x00 || /* SWP bit */
(pLogPage[StartingIndex+4] & BIT4_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+4] & BIT5_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+5] & BIT0_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT1_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT2_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT6_MASK) != 0x00 || /* TAS bit */
pLogPage[StartingIndex+8] != 0xFF ||
pLogPage[StartingIndex+9] != 0xFF ||
pLogPage[StartingIndex+10] != 0x00 ||
pLogPage[StartingIndex+11] != 0x00
)
{
chkCnd = agTRUE;
}
if (chkCnd == agTRUE)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSelect6: unexpected values!!!\n"));
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return SM_RC_SUCCESS;
break;
case MODESELECT_READ_WRITE_ERROR_RECOVERY_PAGE:
SM_DBG1(("smsatModeSelect6: Read-Write Error Recovery mode page!!!\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_AWRE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_RC_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_EER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_PER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_DTE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_DCR_MASK) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11])
)
{
SM_DBG5(("smsatModeSelect6: return check condition\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
SM_DBG5(("smsatModeSelect6: return GOOD \n"));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
break;
case MODESELECT_CACHING:
/* SAT rev8 Table67, p69*/
SM_DBG5(("smsatModeSelect6: Caching mode page\n"));
if ( (pLogPage[StartingIndex + 2] & 0xFB) || /* 1111 1011 */
(pLogPage[StartingIndex + 3]) ||
(pLogPage[StartingIndex + 4]) ||
(pLogPage[StartingIndex + 5]) ||
(pLogPage[StartingIndex + 6]) ||
(pLogPage[StartingIndex + 7]) ||
(pLogPage[StartingIndex + 8]) ||
(pLogPage[StartingIndex + 9]) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11]) ||
(pLogPage[StartingIndex + 12] & 0xC1) || /* 1100 0001 */
(pLogPage[StartingIndex + 13]) ||
(pLogPage[StartingIndex + 14]) ||
(pLogPage[StartingIndex + 15])
)
{
SM_DBG1(("smsatModeSelect6: return check condition!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/* sends ATA SET FEATURES based on WCE bit */
if ( !(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_WCE_MASK) )
{
SM_DBG5(("smsatModeSelect6: disable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x82; /* disable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG5(("smsatModeSelect6: enable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x02; /* enable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
break;
case MODESELECT_INFORMATION_EXCEPTION_CONTROL_PAGE:
SM_DBG5(("smsatModeSelect6: Informational Exception Control mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_PERF_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_TEST_MASK)
)
{
SM_DBG1(("smsatModeSelect6: return check condition!!! \n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/* sends either ATA SMART ENABLE/DISABLE OPERATIONS based on DEXCPT bit */
if ( !(pLogPage[StartingIndex + 2] & 0x08) )
{
SM_DBG5(("smsatModeSelect6: enable information exceptions reporting\n"));
/* sends SMART ENABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_ENABLE_OPERATIONS; /* enable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG5(("smsatModeSelect6: disable information exceptions reporting\n"));
/* sends SMART DISABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_DISABLE_OPERATIONS; /* disable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
break;
default:
SM_DBG1(("smsatModeSelect6: Error unknown page code 0x%x!!!\n", pLogPage[12]));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
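/*
smsatModeSelect10: MODE SELECT(10) variant of the translation above.
The mode parameter header is 8 bytes and the block descriptor may be 8
or 16 bytes (LONGLBA), so StartingIndex becomes 8, 16, or 24 before the
same page-code dispatch is applied.
*/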
osGLOBAL bit32
smsatModeSelect10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit16 BlkDescLen = 0; /* Block Descriptor Length */
bit32 StartingIndex = 0;
bit8 PageCode = 0;
bit32 chkCnd = agFALSE;
bit32 parameterListLen = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pLogPage = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatModeSelect10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSelect10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking PF bit */
if ( !(scsiCmnd->cdb[1] & SCSI_MODE_SELECT10_PF_MASK))
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSelect10: PF bit check!!!\n"));
return SM_RC_SUCCESS;
}
parameterListLen = ((scsiCmnd->cdb[7]) << 8) + scsiCmnd->cdb[8];
parameterListLen = MIN(parameterListLen, scsiCmnd->expDataLength);
if ((0 == parameterListLen) || (agNULL == pLogPage))
{
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
BlkDescLen = (bit16)((pLogPage[6] << 8) + pLogPage[7]);
/* checking Block Descriptor Length on Mode parameter header(10) and LONGLBA bit*/
if ( (BlkDescLen == 8) && !(pLogPage[4] & SCSI_MODE_SELECT10_LONGLBA_MASK) )
{
/* mode parameter block descriptor exists and length is 8 byte */
PageCode = (bit8)(pLogPage[16] & 0x3F); /* page code and index is 8 + 8 */
StartingIndex = 16;
}
else if ( (BlkDescLen == 16) && (pLogPage[4] & SCSI_MODE_SELECT10_LONGLBA_MASK) )
{
/* mode parameter block descriptor exists and length is 16 byte */
PageCode = (bit8)(pLogPage[24] & 0x3F); /* page code and index is 8 + 16 */
StartingIndex = 24;
}
else if (BlkDescLen == 0)
{
PageCode = (bit8)(pLogPage[8] & 0x3F); /* page code and index is 8 + 0 */
StartingIndex = 8;
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
else
{
SM_DBG1(("smsatModeSelect10: return mode parameter block descriptor 0x%x!!!\n", BlkDescLen));
/* no more than one mode parameter block descriptor shall be supported */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
/*
for debugging only
*/
if (StartingIndex == 8)
{
smhexdump("startingindex 8", (bit8 *)pLogPage, 8);
}
else if(StartingIndex == 16)
{
if (PageCode == MODESELECT_CACHING)
{
smhexdump("startingindex 16", (bit8 *)pLogPage, 16+20);
}
else
{
smhexdump("startingindex 16", (bit8 *)pLogPage, 16+12);
}
}
else
{
if (PageCode == MODESELECT_CACHING)
{
smhexdump("startingindex 24", (bit8 *)pLogPage, 24+20);
}
else
{
smhexdump("startingindex 24", (bit8 *)pLogPage, 24+12);
}
}
switch (PageCode) /* page code */
{
case MODESELECT_CONTROL_PAGE:
SM_DBG5(("smsatModeSelect10: Control mode page\n"));
/*
compare pLogPage to expected value (SAT Table 65, p67)
If not match, return check condition
*/
if ( pLogPage[StartingIndex+1] != 0x0A ||
pLogPage[StartingIndex+2] != 0x02 ||
(pSatDevData->satNCQ == agTRUE && pLogPage[StartingIndex+3] != 0x12) ||
(pSatDevData->satNCQ == agFALSE && pLogPage[StartingIndex+3] != 0x02) ||
(pLogPage[StartingIndex+4] & BIT3_MASK) != 0x00 || /* SWP bit */
(pLogPage[StartingIndex+4] & BIT4_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+4] & BIT5_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+5] & BIT0_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT1_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT2_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT6_MASK) != 0x00 || /* TAS bit */
pLogPage[StartingIndex+8] != 0xFF ||
pLogPage[StartingIndex+9] != 0xFF ||
pLogPage[StartingIndex+10] != 0x00 ||
pLogPage[StartingIndex+11] != 0x00
)
{
chkCnd = agTRUE;
}
if (chkCnd == agTRUE)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatModeSelect10: unexpected values!!!\n"));
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return SM_RC_SUCCESS;
break;
case MODESELECT_READ_WRITE_ERROR_RECOVERY_PAGE:
SM_DBG5(("smsatModeSelect10: Read-Write Error Recovery mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_AWRE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_RC_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_EER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_PER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_DTE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_DCR_MASK) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11])
)
{
SM_DBG1(("smsatModeSelect10: return check condition!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
SM_DBG2(("smsatModeSelect10: return GOOD \n"));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
break;
case MODESELECT_CACHING:
/* SAT rev8 Table67, p69*/
SM_DBG5(("smsatModeSelect10: Caching mode page\n"));
if ( (pLogPage[StartingIndex + 2] & 0xFB) || /* 1111 1011 */
(pLogPage[StartingIndex + 3]) ||
(pLogPage[StartingIndex + 4]) ||
(pLogPage[StartingIndex + 5]) ||
(pLogPage[StartingIndex + 6]) ||
(pLogPage[StartingIndex + 7]) ||
(pLogPage[StartingIndex + 8]) ||
(pLogPage[StartingIndex + 9]) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11]) ||
(pLogPage[StartingIndex + 12] & 0xC1) || /* 1100 0001 */
(pLogPage[StartingIndex + 13]) ||
(pLogPage[StartingIndex + 14]) ||
(pLogPage[StartingIndex + 15])
)
{
SM_DBG1(("smsatModeSelect10: return check condition!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/* sends ATA SET FEATURES based on WCE bit */
if ( !(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_WCE_MASK) )
{
SM_DBG5(("smsatModeSelect10: disable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x82; /* disable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG5(("smsatModeSelect10: enable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x02; /* enable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
break;
case MODESELECT_INFORMATION_EXCEPTION_CONTROL_PAGE:
SM_DBG5(("smsatModeSelect10: Informational Exception Control mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_PERF_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_TEST_MASK)
)
{
SM_DBG1(("smsatModeSelect10: return check condition!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/* sends either ATA SMART ENABLE/DISABLE OPERATIONS based on DEXCPT bit */
if ( !(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_DEXCPT_MASK) )
{
SM_DBG5(("smsatModeSelect10: enable information exceptions reporting\n"));
/* sends SMART ENABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_ENABLE_OPERATIONS; /* enable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG5(("smsatModeSelect10: disable information exceptions reporting\n"));
/* sends SMART DISABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_DISABLE_OPERATIONS; /* disable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
break;
default:
SM_DBG1(("smsatModeSelect10: Error unknown page code 0x%x!!!\n", pLogPage[12]));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
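/*
smsatSynchronizeCache10 translates SCSI SYNCHRONIZE CACHE (10) into ATA
FLUSH CACHE EXT (0xEA) on 48-bit capable drives or FLUSH CACHE (0xE7)
otherwise. When the IMMED bit is set, GOOD status is returned to the
initiator before the flush is sent to the drive.
*/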
osGLOBAL bit32
smsatSynchronizeCache10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatSynchronizeCache10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSynchronizeCache10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking IMMED bit */
if (scsiCmnd->cdb[1] & SCSI_SYNC_CACHE_IMMED_MASK)
{
SM_DBG1(("smsatSynchronizeCache10: GOOD status due to IMMED bit!!!\n"));
/* return GOOD status first here */
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
/* sends FLUSH CACHE or FLUSH CACHE EXT */
if (pSatDevData->sat48BitSupport == agTRUE)
{
SM_DBG5(("smsatSynchronizeCache10: sends FLUSH CACHE EXT\n"));
/* FLUSH CACHE EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE_EXT; /* 0xEA */
fis->h.features = 0; /* FIS reserve */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
else
{
SM_DBG5(("smsatSynchronizeCache10: sends FLUSH CACHE\n"));
/* FLUSH CACHE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE; /* 0xE7 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSynchronizeCache10n16CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
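/*
smsatSynchronizeCache16 is the 16-byte CDB variant of the translation
above; only the CONTROL byte offset differs (cdb[15] instead of cdb[9]).
The FIS construction is identical.
*/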
osGLOBAL bit32
smsatSynchronizeCache16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatSynchronizeCache10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatSynchronizeCache10: return control!!!\n"));
return SM_RC_SUCCESS;
}
/* checking IMMED bit */
if (scsiCmnd->cdb[1] & SCSI_SYNC_CACHE_IMMED_MASK)
{
SM_DBG1(("smsatSynchronizeCache10: GOOD status due to IMMED bit!!!\n"));
/* return GOOD status first here */
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
/* sends FLUSH CACHE or FLUSH CACHE EXT */
if (pSatDevData->sat48BitSupport == agTRUE)
{
SM_DBG5(("smsatSynchronizeCache10: sends FLUSH CACHE EXT\n"));
/* FLUSH CACHE EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE_EXT; /* 0xEA */
fis->h.features = 0; /* FIS reserve */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
else
{
SM_DBG5(("smsatSynchronizeCache10: sends FLUSH CACHE\n"));
/* FLUSH CACHE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE; /* 0xE7 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSynchronizeCache10n16CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
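/*
smsatWriteAndVerify10 issues the write half of SCSI WRITE AND VERIFY (10)
as one of five ATA cases selected from the device capabilities: WRITE
SECTOR(S) (PIO), WRITE DMA, their EXT forms on 48-bit drives, or WRITE
FPDMA QUEUED on NCQ drives. The verify half is presumably driven by the
WriteNVerify completion callbacks registered below.
*/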
osGLOBAL bit32
smsatWriteAndVerify10(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
combination of write10 and verify10
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWriteAndVerify10: start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteAndVerify10: BYTCHK bit checking!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteAndVerify10: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = 0;
TL[5] = 0;
TL[6] = scsiCmnd->cdb[7];
TL[7] = scsiCmnd->cdb[8]; /* LSB */
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
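/*
Worked example of the big-endian decode above: cdb[2..5] = 00 01 00 00
and cdb[7..8] = 00 10 yield lba = 0x00010000 (65536) and tl = 0x0010
(16 blocks).
*/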
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWriteAndVerify10: return LBA out of range!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWriteAndVerify10: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA */
/* in case we can't fit the transfer length, we loop */
SM_DBG5(("smsatWriteAndVerify10: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* in case we can't fit the transfer length, we loop */
SM_DBG5(("smsatWriteAndVerify10: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatWriteAndVerify10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWriteAndVerify10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatWriteAndVerify10: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG5(("smsatWriteAndVerify10: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
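/*
Assuming smsatComputeLoopNum(tl, limit) returns ceil(tl / limit), a
non-EXT transfer of tl = 0x1FE sectors against the 0xFF per-command
limit gives LoopNum = 2, so the I/O is chained below.
*/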
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatWriteAndVerify10: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedWriteNVerifyCB;
}
else
{
SM_DBG1(("smsatWriteAndVerify10: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
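/*
smsatWriteAndVerify12 follows the same five-case translation as the
10-byte variant; only the CDB offsets change (LBA in cdb[2..5],
transfer length in cdb[6..9], CONTROL in cdb[11]).
*/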
osGLOBAL bit32
smsatWriteAndVerify12(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
combination of write12 and verify12
temp: since write12 is not supported (due to internal checking), no support
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWriteAndVerify12: start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteAndVerify12: BYTCHK bit checking!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteAndVerify12: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = 0; /* MSB */
LBA[1] = 0;
LBA[2] = 0;
LBA[3] = 0;
LBA[4] = scsiCmnd->cdb[2];
LBA[5] = scsiCmnd->cdb[3];
LBA[6] = scsiCmnd->cdb[4];
LBA[7] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0; /* MSB */
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[6];
TL[5] = scsiCmnd->cdb[7];
TL[6] = scsiCmnd->cdb[8];
TL[7] = scsiCmnd->cdb[9]; /* LSB */
lba = smsatComputeCDB12LBA(satIOContext);
tl = smsatComputeCDB12TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
/*smEnqueueIO(smRoot, satIOContext);*/
SM_DBG1(("smsatWriteAndVerify12: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWriteAndVerify12: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA */
/* in case we can't fit the transfer length, we loop */
SM_DBG5(("smsatWriteAndVerify12: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* in case we can't fit the transfer length, we loop */
SM_DBG5(("smsatWriteAndVerify12: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatWriteAndVerify12: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWriteAndVerify12: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatWriteAndVerify12: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatWriteAndVerify12: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE12_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
// satIOContext->OrgLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
satIOContext->LoopNum2 = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatWriteAndVerify12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedWriteNVerifyCB;
}
else
{
SM_DBG1(("smsatWriteAndVerify12: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
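/*
smsatWriteAndVerify16 handles the 16-byte CDB, whose 8-byte LBA can
exceed the 6-byte ATA LBA; smsatCheckLimit() rejects out-of-range
requests with LOGICAL BLOCK ADDRESS OUT OF RANGE before any FIS is
built.
*/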
osGLOBAL bit32
smsatWriteAndVerify16(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
combination of write16 and verify16
since write16 has an 8-byte LBA but ATA LBAs are at most 6 bytes, no support
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 AllChk = agFALSE; /* lba, lba+tl check against ATA limit and Disk capacity */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatWriteAndVerify16: start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteAndVerify16: BYTCHK bit checking!!!\n"));
return SM_RC_SUCCESS;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteAndVerify16: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
lba = smsatComputeCDB16LBA(satIOContext);
tl = smsatComputeCDB16TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
AllChk = smsatCheckLimit(LBA, TL, agFALSE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWriteAndVerify16: return LBA out of range, not EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
AllChk = smsatCheckLimit(LBA, TL, agTRUE, pSatDevData);
if (AllChk)
{
SM_DBG1(("smsatWriteAndVerify16: return LBA out of range, EXT!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
/* case 1 and 2 */
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA */
/* in case we can't fit the transfer length, we loop */
SM_DBG5(("smsatWriteAndVerify16: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* in case we can't fit the transfer length, we loop */
SM_DBG5(("smsatWriteAndVerify16: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatWriteAndVerify16: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatWriteAndVerify16: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG1(("smsatWriteAndVerify16: case 5 !!! error NCQ but 28 bit address support!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("smsatWriteAndVerify16: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE16_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatWriteAndVerify16: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedWriteNVerifyCB;
}
else
{
SM_DBG1(("smsatWriteAndVerify16: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
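/*
smsatReadMediaSerialNumber services SCSI READ MEDIA SERIAL NUMBER from
IDENTIFY DEVICE data: word 87 bit 2 marks a valid media serial number
in words 176-205 (60 bytes). A 4-byte allocation length returns only
the length header; with valid cached IDENTIFY data the serial number is
copied directly, and when the serial number is not valid a one-sector
READ SECTOR(S) is issued and completion is handled in
smsatReadMediaSerialNumberCB.
*/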
osGLOBAL bit32
smsatReadMediaSerialNumber(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
agsaSATAIdentifyData_t *pSATAIdData;
bit8 *pSerialNumber;
bit8 MediaSerialNumber[64] = {0};
bit32 allocationLen = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pSATAIdData = &(pSatDevData->satIdentifyData);
pSerialNumber = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatReadMediaSerialNumber: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadMediaSerialNumber: return control!!!\n"));
return SM_RC_SUCCESS;
}
allocationLen = (((bit32)scsiCmnd->cdb[6]) << 24) |
(((bit32)scsiCmnd->cdb[7]) << 16) |
(((bit32)scsiCmnd->cdb[8]) << 8 ) |
(((bit32)scsiCmnd->cdb[9]));
allocationLen = MIN(allocationLen, scsiCmnd->expDataLength);
if (allocationLen == 4)
{
if (pSATAIdData->commandSetFeatureDefault & 0x4)
{
SM_DBG1(("smsatReadMediaSerialNumber: Media serial number returning only length!!!\n"));
/* SPC-3 6.16 p192; filling in length */
MediaSerialNumber[0] = 0;
MediaSerialNumber[1] = 0;
MediaSerialNumber[2] = 0;
MediaSerialNumber[3] = 0x3C;
}
else
{
/* 1 sector - 4 bytes = 512 - 4 = 0x1FC, to avoid underflow */
MediaSerialNumber[0] = 0;
MediaSerialNumber[1] = 0;
MediaSerialNumber[2] = 0x1;
MediaSerialNumber[3] = 0xfc;
}
sm_memcpy(pSerialNumber, MediaSerialNumber, 4);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
if ( pSatDevData->IDDeviceValid == agTRUE)
{
if (pSATAIdData->commandSetFeatureDefault & 0x4)
{
/* word87 bit2 Media serial number is valid */
/* read word 176 to 205; length is 2*30 = 60 = 0x3C*/
#ifdef LOG_ENABLE
smhexdump("ID smsatReadMediaSerialNumber", (bit8*)pSATAIdData->currentMediaSerialNumber, 2*30);
#endif
/* SPC-3 6.16 p192; filling in length */
MediaSerialNumber[0] = 0;
MediaSerialNumber[1] = 0;
MediaSerialNumber[2] = 0;
MediaSerialNumber[3] = 0x3C;
sm_memcpy(&MediaSerialNumber[4], (void *)pSATAIdData->currentMediaSerialNumber, 60);
#ifdef LOG_ENABLE
smhexdump("smsatReadMediaSerialNumber", (bit8*)MediaSerialNumber, 2*30 + 4);
#endif
sm_memcpy(pSerialNumber, MediaSerialNumber, MIN(allocationLen, 64));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
else
{
/* word87 bit2 Media serial number is NOT valid */
SM_DBG1(("smsatReadMediaSerialNumber: Media serial number is NOT valid!!!\n"));
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* READ SECTOR(S) EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
else
{
/* READ SECTOR(S) */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
satIOContext->satCompleteCB = &smsatReadMediaSerialNumberCB;
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
else
{
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOFailed,
smDetailOtherError,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
}
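/*
smsatReadBuffer supports data mode (0x02) only for bufferID 0, offset 0
and a 512-byte allocation, which maps to ATA READ BUFFER (0xE4);
descriptor mode (0x03) returns a fixed 4-byte descriptor for bufferID 0.
All other combinations and modes are rejected.
*/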
osGLOBAL bit32
smsatReadBuffer(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status = SM_RC_SUCCESS;
bit32 agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 bufferOffset;
bit32 tl;
bit8 mode;
bit8 bufferID;
bit8 *pBuff;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pBuff = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatReadBuffer: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadBuffer: return control!!!\n"));
return SM_RC_SUCCESS;
}
bufferOffset = (scsiCmnd->cdb[3] << (8*2)) + (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[6] << (8*2)) + (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
mode = (bit8)(scsiCmnd->cdb[1] & SCSI_READ_BUFFER_MODE_MASK);
bufferID = scsiCmnd->cdb[2];
if (mode == READ_BUFFER_DATA_MODE) /* 2 */
{
if (bufferID == 0 && bufferOffset == 0 && tl == 512)
{
/* send ATA READ BUFFER */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_BUFFER; /* 0xE4 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->satCompleteCB = &smsatReadBufferCB;
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
if (bufferID == 0 && bufferOffset == 0 && tl != 512)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadBuffer: allocation length is not 512; it is %d!!!\n", tl));
return SM_RC_SUCCESS;
}
if (bufferID == 0 && bufferOffset != 0)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadBuffer: buffer offset is not 0; it is %d!!!\n", bufferOffset));
return SM_RC_SUCCESS;
}
/* all other cases unsupported */
SM_DBG1(("smsatReadBuffer: unsupported case 1!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else if (mode == READ_BUFFER_DESCRIPTOR_MODE) /* 3 */
{
if (tl < READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN) /* 4 */
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReadBuffer: tl < 4; tl is %d!!!\n", tl));
return SM_RC_SUCCESS;
}
if (bufferID == 0)
{
/* SPC-4, 6.15.5, p189; SAT-2 Rev00, 8.7.2.3, p41*/
pBuff[0] = 0xFF;
pBuff[1] = 0x00;
pBuff[2] = 0x02;
pBuff[3] = 0x00;
if (READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN < tl)
{
/* underrun */
SM_DBG1(("smsatReadBuffer: underrun tl %d data %d!!!\n", tl, READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN));
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOUnderRun,
tl - READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN,
agNULL,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
}
else
{
/* We don't support other than bufferID 0 */
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
else
{
/* We don't support any other mode */
SM_DBG1(("smsatReadBuffer: unsupported mode %d!!!\n", mode));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
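/*
smsatWriteBuffer mirrors the read path: data mode (0x02) with bufferID 0,
offset 0 and a 512-byte parameter list would map to ATA WRITE BUFFER
(0xE8), but that path is compiled out (NOT_YET) and the command currently
completes with GOOD status. Download-microcode save mode (0x05) is not
yet supported.
*/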
osGLOBAL bit32
smsatWriteBuffer(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
#ifdef NOT_YET
bit32 agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
#endif
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
#ifdef NOT_YET
agsaFisRegHostToDevice_t *fis;
#endif
bit32 bufferOffset;
bit32 parmLen;
bit8 mode;
bit8 bufferID;
bit8 *pBuff;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
#ifdef NOT_YET
fis = satIOContext->pFis;
#endif
pBuff = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatWriteBuffer: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteBuffer: return control!!!\n"));
return SM_RC_SUCCESS;
}
bufferOffset = (scsiCmnd->cdb[3] << (8*2)) + (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
parmLen = (scsiCmnd->cdb[6] << (8*2)) + (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
mode = (bit8)(scsiCmnd->cdb[1] & SCSI_READ_BUFFER_MODE_MASK);
bufferID = scsiCmnd->cdb[2];
/* for debugging only */
smhexdump("smsatWriteBuffer pBuff", (bit8 *)pBuff, 24);
if (mode == WRITE_BUFFER_DATA_MODE) /* 2 */
{
if (bufferID == 0 && bufferOffset == 0 && parmLen == 512)
{
SM_DBG1(("smsatWriteBuffer: sending ATA WRITE BUFFER!!!\n"));
/* send ATA WRITE BUFFER */
#ifdef NOT_YET
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_BUFFER; /* 0xE8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->satCompleteCB = &smsatWriteBufferCB;
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
#endif
/* temp */
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return SM_RC_SUCCESS;
}
if ( (bufferID == 0 && bufferOffset != 0) ||
(bufferID == 0 && parmLen != 512)
)
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatWriteBuffer: wrong buffer offset %d or parameter length parmLen %d!!!\n", bufferOffset, parmLen));
return SM_RC_SUCCESS;
}
/* all other cases unsupported */
SM_DBG1(("smsatWriteBuffer: unsupported case 1!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else if (mode == WRITE_BUFFER_DL_MICROCODE_SAVE_MODE) /* 5 */
{
/* temporary */
SM_DBG1(("smsatWriteBuffer: not yet supported mode %d!!!\n", mode));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
else
{
/* We don't support any other mode */
SM_DBG1(("smsatWriteBuffer: unsupported mode %d!!!\n", mode));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
}
osGLOBAL bit32
smsatReassignBlocks(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
assumes all LBA fits in ATA command; no boundary condition is checked here yet
*/
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8                      *pParmList;    /* REASSIGN BLOCKS parameter list buffer */
bit8 LongLBA;
bit8 LongList;
bit32 defectListLen;
bit8 LBA[8];
bit32 startingIndex;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pParmList = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatReassignBlocks: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatReassignBlocks: return control!!!\n"));
return SM_RC_SUCCESS;
}
sm_memset(satIOContext->LBA, 0, 8);
satIOContext->ParmIndex = 0;
satIOContext->ParmLen = 0;
LongList = (bit8)(scsiCmnd->cdb[1] & SCSI_REASSIGN_BLOCKS_LONGLIST_MASK);
LongLBA = (bit8)(scsiCmnd->cdb[1] & SCSI_REASSIGN_BLOCKS_LONGLBA_MASK);
sm_memset(LBA, 0, sizeof(LBA));
if (LongList == 0)
{
defectListLen = (pParmList[2] << 8) + pParmList[3];
}
else
{
defectListLen = (pParmList[0] << (8*3)) + (pParmList[1] << (8*2))
+ (pParmList[2] << 8) + pParmList[3];
}
/* SBC 5.16.2, p61*/
satIOContext->ParmLen = defectListLen + 4 /* header size */;
startingIndex = 4;
if (LongLBA == 0)
{
LBA[4] = pParmList[startingIndex]; /* MSB */
LBA[5] = pParmList[startingIndex+1];
LBA[6] = pParmList[startingIndex+2];
LBA[7] = pParmList[startingIndex+3]; /* LSB */
startingIndex = startingIndex + 4;
}
else
{
LBA[0] = pParmList[startingIndex]; /* MSB */
LBA[1] = pParmList[startingIndex+1];
LBA[2] = pParmList[startingIndex+2];
LBA[3] = pParmList[startingIndex+3];
LBA[4] = pParmList[startingIndex+4];
LBA[5] = pParmList[startingIndex+5];
LBA[6] = pParmList[startingIndex+6];
LBA[7] = pParmList[startingIndex+7]; /* LSB */
startingIndex = startingIndex + 8;
}
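/*
 * LBA[] now holds the first defect descriptor from the parameter list,
 * MSB first: short (4-byte) descriptors fill LBA[4..7], long (8-byte)
 * descriptors fill LBA[0..7].  ParmIndex and ParmLen are saved so the
 * completion callback can presumably step through the remaining
 * descriptors.
 */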
smhexdump("smsatReassignBlocks Parameter list", (bit8 *)pParmList, 4 + defectListLen);
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
/* DEV and LBA 27:24 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
sm_memcpy(satIOContext->LBA, LBA, 8);
satIOContext->ParmIndex = startingIndex;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatReassignBlocksCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
osGLOBAL bit32
smsatRead_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
Assumption: error check on lba and tl has been done in satRead*()
lba = lba + tl;
*/
bit32 status;
smSatIOContext_t *satOrgIOContext = agNULL;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
SM_DBG2(("smsatRead_1: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
scsiCmnd = satOrgIOContext->pScsiCmnd;
sm_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_DMA:
case SAT_READ_SECTORS:
  DenomTL = 0x100;
  break;
case SAT_READ_DMA_EXT:
case SAT_READ_SECTORS_EXT:
case SAT_READ_FPDMA_QUEUED:
  DenomTL = 0xFFFF;
  break;
default:
SM_DBG1(("smsatRead_1: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
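/*
 * Chained I/O: the original transfer is issued as a series of ATA
 * commands of at most DenomTL sectors each (0x100 for the 28-bit
 * commands, 0xFFFF for the EXT/FPDMA forms).  currentLBA advances by
 * DenomTL per chained command and Remainder is the sector count used
 * by the final command.
 */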
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3));
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF);
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_DMA:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (LBA[0] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0x0; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
break;
case SAT_READ_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (LBA[0] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0x0; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
break;
case SAT_READ_DMA_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
break;
case SAT_READ_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
break;
case SAT_READ_FPDMA_QUEUED:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->h.features = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.featuresExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->h.features = 0xFF; /* FIS sector count (7:0) */
fis->d.featuresExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
break;
default:
SM_DBG1(("smsatRead_1: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
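/*
 * Re-split the original SGL at the per-command byte limit: 28-bit
 * commands move at most 0x100 sectors of 0x200 bytes each, 48-bit
 * commands at most 0xFFFF sectors (see the limits passed below).
 */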
if (satOrgIOContext->ATACmd == SAT_READ_DMA || satOrgIOContext->ATACmd == SAT_READ_SECTORS)
{
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
(smScsiInitiatorRequest_t *)satOrgIOContext->smScsiXchg,
satOrgIOContext,
NON_BIT48_ADDRESS_TL_LIMIT * SATA_SECTOR_SIZE, /* 0x100 * 0x200*/
(satOrgIOContext->OrgTL) * SATA_SECTOR_SIZE,
agFALSE);
}
else
{
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
(smScsiInitiatorRequest_t *)satOrgIOContext->smScsiXchg,
satOrgIOContext,
BIT48_ADDRESS_TL_LIMIT * SATA_SECTOR_SIZE, /* 0xFFFF * 0x200*/
(satOrgIOContext->OrgTL) * SATA_SECTOR_SIZE,
agFALSE);
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
(smScsiInitiatorRequest_t *)satOrgIOContext->smScsiXchg, /* not smScsiRequest */
satIOContext);
SM_DBG5(("smsatRead_1: return\n"));
return (status);
}
osGLOBAL bit32
smsatWrite_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
Assumption: error check on lba and tl has been done in satWrite*()
lba = lba + tl;
*/
bit32 status;
smSatIOContext_t *satOrgIOContext = agNULL;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
SM_DBG2(("smsatWrite_1: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
scsiCmnd = satOrgIOContext->pScsiCmnd;
sm_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
case SAT_WRITE_SECTORS:
  DenomTL = 0x100;
  break;
case SAT_WRITE_DMA_EXT:
case SAT_WRITE_DMA_FUA_EXT:
case SAT_WRITE_SECTORS_EXT:
case SAT_WRITE_FPDMA_QUEUED:
  DenomTL = 0xFFFF;
  break;
default:
SM_DBG1(("smsatWrite_1: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
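/* Same chaining scheme as smsatRead_1: at most DenomTL sectors per
 * command, with Remainder used on the last chunk. */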
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3));
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF);
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0x0; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0x0; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_DMA_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_FPDMA_QUEUED:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->h.features = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.featuresExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->h.features = 0xFF; /* FIS sector count (7:0) */
fis->d.featuresExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
break;
default:
SM_DBG1(("smsatWrite_1: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &smsatChainedDataIOCB;
if (satOrgIOContext->ATACmd == SAT_WRITE_DMA || satOrgIOContext->ATACmd == SAT_WRITE_SECTORS)
{
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
(smScsiInitiatorRequest_t *)satOrgIOContext->smScsiXchg,
satOrgIOContext,
NON_BIT48_ADDRESS_TL_LIMIT * SATA_SECTOR_SIZE, /* 0x100 * 0x200*/
(satOrgIOContext->OrgTL) * SATA_SECTOR_SIZE,
agFALSE);
}
else
{
smsatSplitSGL(smRoot,
smIORequest,
smDeviceHandle,
(smScsiInitiatorRequest_t *)satOrgIOContext->smScsiXchg,
satOrgIOContext,
BIT48_ADDRESS_TL_LIMIT * SATA_SECTOR_SIZE, /* 0xFFFF * 0x200*/
(satOrgIOContext->OrgTL) * SATA_SECTOR_SIZE,
agFALSE);
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
(smScsiInitiatorRequest_t *)satOrgIOContext->smScsiXchg, /* not smScsiRequest */
satIOContext);
SM_DBG5(("smsatWrite_1: return\n"));
return (status);
}
osGLOBAL bit32
smsatPassthrough(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
smScsiRspSense_t *pSense;
smIniScsiCmnd_t *scsiCmnd;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType;
smAtaPassThroughHdr_t ataPassThroughHdr;
pSense = satIOContext->pSense;
scsiCmnd = &smScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG1(("smsatPassthrough: START!!!\n"));
osti_memset(&ataPassThroughHdr, 0 , sizeof(smAtaPassThroughHdr_t));
ataPassThroughHdr.opc = scsiCmnd->cdb[0];
ataPassThroughHdr.mulCount = scsiCmnd->cdb[1] >> 5;
ataPassThroughHdr.proto = (scsiCmnd->cdb[1] >> 1) & 0x0F;
ataPassThroughHdr.extend = scsiCmnd->cdb[1] & 1;
ataPassThroughHdr.offline = scsiCmnd->cdb[2] >> 6;
ataPassThroughHdr.ckCond = (scsiCmnd->cdb[2] >> 5) & 1;
ataPassThroughHdr.tType = (scsiCmnd->cdb[2] >> 4) & 1;
ataPassThroughHdr.tDir = (scsiCmnd->cdb[2] >> 3) & 1;
ataPassThroughHdr.byteBlock = (scsiCmnd->cdb[2] >> 2) & 1;
ataPassThroughHdr.tlength = scsiCmnd->cdb[2] & 0x3;
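/*
 * SAT ATA PASS-THROUGH CDB decode: cdb[1] carries MULTIPLE_COUNT (7:5),
 * PROTOCOL (4:1) and EXTEND (0); cdb[2] carries OFF_LINE (7:6), CK_COND
 * (5), T_TYPE (4), T_DIR (3), BYT_BLOK (2) and T_LENGTH (1:0).
 */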
switch(ataPassThroughHdr.proto)
{
case 0:
case 9:
  agRequestType = AGSA_SATA_PROTOCOL_DEV_RESET;   /* device reset */
  break;
case 1:
  agRequestType = AGSA_SATA_PROTOCOL_SRST_ASSERT; /* software reset */
  break;
case 3:
  agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;    /* non-data mode */
  break;
case 4:
  agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;    /* PIO data-in */
  break;
case 5:
  agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;   /* PIO data-out */
  break;
case 6:
  agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;   /* DMA read and write */
  break;
case 8:
  agRequestType = AGSA_SATA_ATAP_EXECDEVDIAG;     /* execute device diagnostic */
  break;
case 12:
  agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE; /* FPDMA read and write */
  break;
default:
  agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;    /* default: non-data mode */
  break;
}
if((ataPassThroughHdr.tlength == 0) && (agRequestType != AGSA_SATA_PROTOCOL_NON_DATA))
{
SM_DBG1(("smsatPassthrough SCSI_SNSCODE_INVALID_FIELD_IN_CDB\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
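/* cdb[0] == 0xA1 is ATA PASS-THROUGH(12); cdb[0] == 0x85 is ATA PASS-THROUGH(16). */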
if(scsiCmnd->cdb[0] == 0xA1)
{
SM_DBG1(("smsatPassthrough A1h: COMMAND: %x FEATURE: %x \n",scsiCmnd->cdb[9],scsiCmnd->cdb[3]));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.features = scsiCmnd->cdb[3];
fis->d.sectorCount = scsiCmnd->cdb[4]; /* 0x01 FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* Reading LBA FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[6];
fis->d.lbaHigh = scsiCmnd->cdb[7];
fis->d.device = scsiCmnd->cdb[8];
fis->h.command = scsiCmnd->cdb[9];
fis->d.featuresExp = 0;
fis->d.sectorCountExp = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
/* Initialize CB for SATA completion*/
satIOContext->satCompleteCB = &smsatPassthroughCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType;
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else if(scsiCmnd->cdb[0] == 0x85)
{
SM_DBG1(("smsatPassthrough 85h: COMMAND: %x FEATURE: %x \n",scsiCmnd->cdb[14],scsiCmnd->cdb[4]));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
if(1 == ataPassThroughHdr.extend)
{
fis->d.featuresExp = scsiCmnd->cdb[3];
fis->d.sectorCountExp = scsiCmnd->cdb[5];
fis->d.lbaMidExp = scsiCmnd->cdb[9];
fis->d.lbaHighExp = scsiCmnd->cdb[11];
fis->d.lbaLowExp = scsiCmnd->cdb[7];
}
fis->h.features = scsiCmnd->cdb[4];
fis->d.sectorCount = scsiCmnd->cdb[6];
fis->d.lbaLow = scsiCmnd->cdb[8];
fis->d.lbaMid = scsiCmnd->cdb[10];
fis->d.lbaHigh = scsiCmnd->cdb[12];
fis->d.device = scsiCmnd->cdb[13];
fis->h.command = scsiCmnd->cdb[14];
fis->d.reserved4 = 0;
fis->d.control = 0;
fis->d.reserved5 = 0;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatPassthroughCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType;
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG1(("smsatPassthrough : INVALD PASSTHROUGH!!!\n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
SM_DBG1(("smsatPassthrough : return control!!!\n"));
return SM_RC_SUCCESS;
}
}
osGLOBAL bit32
smsatNonChainedWriteNVerify_Verify(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
smDeviceData_t *pSatDevData;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatNonChainedWriteNVerify_Verify: start\n"));
if (pSatDevData->sat48BitSupport == agTRUE)
{
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedWriteNVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG1(("smsatNonChainedWriteNVerify_Verify: return status %d!!!\n", status));
return (status);
}
else
{
/* can't fit in SAT_READ_VERIFY_SECTORS because of Sector Count and LBA */
SM_DBG1(("smsatNonChainedWriteNVerify_Verify: can't fit in SAT_READ_VERIFY_SECTORS!!!\n"));
return SM_RC_FAILURE;
}
}
osGLOBAL bit32
smsatChainedWriteNVerify_Start_Verify(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
deal with transfer length; others have been handled previously at this point;
no LBA check; no range check;
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
smDeviceData_t *pSatDevData;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
SM_DBG5(("smsatChainedWriteNVerify_Start_Verify: start\n"));
sm_memset(LBA, 0, sizeof(LBA));
sm_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = scsiCmnd->cdb[6];   /* MSB */
TL[1] = scsiCmnd->cdb[7];
TL[2] = scsiCmnd->cdb[8];
TL[3] = scsiCmnd->cdb[9];   /* LSB; CDB12 transfer length is cdb[6..9] */
lba = smsatComputeCDB12LBA(satIOContext);
tl = smsatComputeCDB12TL(satIOContext);
if (pSatDevData->sat48BitSupport == agTRUE)
{
SM_DBG5(("smsatChainedWriteNVerify_Start_Verify: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
SM_DBG5(("smsatChainedWriteNVerify_Start_Verify: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
 * Compute the number of chained commands and the remainder for tl:
 * the per-command sector limit is 0xFF for the non-EXT command and
 * 0xFFFF for the EXT command.
 */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = smsatComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = smsatComputeLoopNum(tl, 0xFFFF);
}
else
{
SM_DBG1(("smsatChainedWriteNVerify_Start_Verify: error case 1!!!\n"));
LoopNum = 1;
}
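/*
 * Presumably smsatComputeLoopNum(tl, limit) returns ceil(tl / limit):
 * e.g. tl = 0x150 with limit 0xFF gives LoopNum = 2, and the final
 * chained command carries the remainder, 0x150 % 0xFF = 0x51 sectors.
 */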
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
SM_DBG5(("smsatChainedWriteNVerify_Start_Verify: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatNonChainedWriteNVerifyCB;
}
else
{
SM_DBG1(("smsatChainedWriteNVerify_Start_Verify: CHAINED data!!!\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
SM_DBG1(("smsatChainedWriteNVerify_Start_Verify: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatChainedWriteNVerify_Write(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
Assumption: error check on lba and tl has been done in satWrite*()
lba = lba + tl;
*/
bit32 status;
smSatIOContext_t *satOrgIOContext = agNULL;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
SM_DBG1(("smsatChainedWriteNVerify_Write: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
scsiCmnd = satOrgIOContext->pScsiCmnd;
sm_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
case SAT_WRITE_SECTORS:
  DenomTL = 0xFF;
  break;
case SAT_WRITE_DMA_EXT:
case SAT_WRITE_DMA_FUA_EXT:
case SAT_WRITE_SECTORS_EXT:
case SAT_WRITE_FPDMA_QUEUED:
  DenomTL = 0xFFFF;
  break;
default:
SM_DBG1(("satChainedWriteNVerify_Write: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0x00FF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0x0000FF00) >> 8);
LBA[3] = (bit8)(lba & 0x000000FF);              /* LSB */
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_DMA_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_FPDMA_QUEUED:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->h.features = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.featuresExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->h.features = 0xFF; /* FIS sector count (7:0) */
fis->d.featuresExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
break;
default:
SM_DBG1(("satChainedWriteNVerify_Write: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &smsatChainedWriteNVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("satChainedWriteNVerify_Write: return\n"));
return (status);
}
osGLOBAL bit32
smsatChainedWriteNVerify_Verify(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
smSatIOContext_t *satOrgIOContext = agNULL;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
SM_DBG2(("smsatChainedWriteNVerify_Verify: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
sm_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
DenomTL = 0xFF;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
default:
SM_DBG1(("smsatChainedWriteNVerify_Verify: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0x00FF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0x0000FF00) >> 8);
LBA[3] = (bit8)(lba & 0x000000FF);              /* LSB */
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT; /* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
default:
SM_DBG1(("smsatChainedWriteNVerify_Verify: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return SM_RC_FAILURE;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &smsatChainedWriteNVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatChainedWriteNVerify_Verify: return\n"));
return (status);
}
osGLOBAL bit32
smsatChainedVerify(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
smSatIOContext_t *satOrgIOContext = agNULL;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
SM_DBG2(("smsatChainedVerify: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
sm_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
DenomTL = 0xFF;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
default:
SM_DBG1(("satChainedVerify: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return tiError;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0x00FF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0x0000FF00) >> 8);
LBA[3] = (bit8)(lba & 0x000000FF);              /* LSB */
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT; /* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
default:
SM_DBG1(("satChainedVerify: error incorrect ata command 0x%x!!!\n", satIOContext->ATACmd));
return tiError;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &smsatChainedVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("satChainedVerify: return\n"));
return (status);
}
osGLOBAL bit32
smsatWriteSame10_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext,
bit32 lba
)
{
/*
sends SAT_WRITE_DMA_EXT
*/
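/*
 * Writes a single sector at the given lba; smsatWriteSame10CB presumably
 * re-invokes this helper with the next LBA until the whole WRITE SAME
 * range has been covered (note the "one sector at a time" setup below).
 */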
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 lba1, lba2 ,lba3, lba4;
SM_DBG5(("smsatWriteSame10_1: start\n"));
fis = satIOContext->pFis;
/* MSB */
lba1 = (bit8)((lba & 0xFF000000) >> (8*3));
lba2 = (bit8)((lba & 0x00FF0000) >> (8*2));
lba3 = (bit8)((lba & 0x0000FF00) >> (8*1));
/* LSB */
lba4 = (bit8)(lba & 0x000000FF);
/* SAT_WRITE_DMA_EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = lba4; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = lba3; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = lba2; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = lba1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatWriteSame10_1 return status %d\n", status));
return status;
}
osGLOBAL bit32
smsatWriteSame10_2(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext,
bit32 lba
)
{
/*
sends SAT_WRITE_SECTORS_EXT
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 lba1, lba2 ,lba3, lba4;
SM_DBG5(("smsatWriteSame10_2: start\n"));
fis = satIOContext->pFis;
/* MSB */
lba1 = (bit8)((lba & 0xFF000000) >> (8*3));
lba2 = (bit8)((lba & 0x00FF0000) >> (8*2));
lba3 = (bit8)((lba & 0x0000FF00) >> (8*1));
/* LSB */
lba4 = (bit8)(lba & 0x000000FF);
/* SAT_WRITE_SECTORS_EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = lba4; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = lba3; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = lba2; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = lba1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatWriteSame10_2 return status %d\n", status));
return status;
}
osGLOBAL bit32
smsatWriteSame10_3(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext,
bit32 lba
)
{
/*
sends SAT_WRITE_FPDMA_QUEUED
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 lba1, lba2 ,lba3, lba4;
SM_DBG5(("smsatWriteSame10_3: start\n"));
fis = satIOContext->pFis;
/* MSB */
lba1 = (bit8)((lba & 0xFF000000) >> (8*3));
lba2 = (bit8)((lba & 0x00FF0000) >> (8*2));
lba3 = (bit8)((lba & 0x0000FF00) >> (8*1));
/* LSB */
lba4 = (bit8)(lba & 0x000000FF);
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
/* one sector at a time */
fis->h.features = 1; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = lba4; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = lba3; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = lba2; /* FIS LBA (23:16) */
/* NO FUA bit in the WRITE SAME 10 */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = lba1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatWriteSame10_3 return status %d\n", status));
return status;
}
osGLOBAL bit32
smsatStartStopUnit_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
SAT Rev 8, Table 48, 9.11.3 p55
sends STANDBY
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
SM_DBG5(("smsatStartStopUnit_1: start\n"));
fis = satIOContext->pFis;
/* STANDBY */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_STANDBY; /* 0xE2 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* 0 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatStartStopUnit_1 return status %d\n", status));
return status;
}
osGLOBAL bit32
smsatSendDiagnostic_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
SAT Rev9, Table29, p41
send 2nd SAT_READ_VERIFY_SECTORS(_EXT)
*/
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
SM_DBG5(("smsatSendDiagnostic_1: start\n"));
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/*
sector count 1, LBA MAX
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = pSatDevData->satMaxLBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = pSatDevData->satMaxLBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = pSatDevData->satMaxLBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = pSatDevData->satMaxLBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = pSatDevData->satMaxLBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = pSatDevData->satMaxLBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = pSatDevData->satMaxLBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = pSatDevData->satMaxLBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = pSatDevData->satMaxLBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = (bit8)((0x4 << 4) | (pSatDevData->satMaxLBA[4] & 0xF));
/* DEV and LBA 27:24 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
osGLOBAL bit32
smsatSendDiagnostic_2(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
SAT Rev9, Table29, p41
send 3rd SAT_READ_VERIFY_SECTORS(_EXT)
*/
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
SM_DBG5(("smsatSendDiagnostic_2: start\n"));
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/*
sector count 1, LBA Random
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
osGLOBAL bit32
smsatModeSelect6n10_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/* sends ATA SET FEATURES, enabling or disabling read look-ahead based on the DRA bit */
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* mode parameter data buffer */
bit32 StartingIndex = 0;
fis = satIOContext->pFis;
pLogPage = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatModeSelect6n10_1: start\n"));
if (pLogPage[3] == 8)
{
/* mode parameter block descriptor exists */
StartingIndex = 12;
}
else
{
/* mode parameter block descriptor does not exist */
StartingIndex = 4;
}
/* sends ATA SET FEATURES based on DRA bit */
if ( !(pLogPage[StartingIndex + 12] & SCSI_MODE_SELECT6_DRA_MASK) )
{
SM_DBG5(("smsatModeSelect6n10_1: enable read look-ahead feature\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0xAA; /* enable read look-ahead */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG5(("smsatModeSelect6n10_1: disable read look-ahead feature\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x55; /* disable read look-ahead */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
osGLOBAL bit32
smsatLogSense_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG5(("smsatLogSense_1: start\n"));
/* SAT Rev 8, 10.2.4 p74 */
if ( pSatDevData->sat48BitSupport == agTRUE )
{
SM_DBG5(("smsatLogSense_1: case 2-1 sends READ LOG EXT\n"));
/* sends READ LOG EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_LOG_EXT; /* 0x2F */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x07; /* log address 0x07 */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* one sector */
fis->d.sectorCountExp = 0x00; /* sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
else
{
SM_DBG5(("smsatLogSense_1: case 2-2 sends SMART READ LOG\n"));
/* sends SMART READ LOG */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART; /* 0xB0 */
fis->h.features = SAT_SMART_READ_LOG; /* 0xd5 */
fis->d.lbaLow = 0x06; /* log address 0x06 */
fis->d.lbaMid = 0x4F; /* 0x4f, SMART signature */
fis->d.lbaHigh = 0xC2; /* 0xc2, SMART signature */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* */
fis->d.sectorCountExp = 0x00; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return status;
}
}
osGLOBAL bit32
smsatReassignBlocks_2(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext,
bit8 *LBA
)
{
/*
assumes the LBA fits in the ATA command; boundary conditions are not checked here yet
tiScsiRequest is TD-generated for writing
*/
bit32 status;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smScsiRspSense_t *pSense;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG5(("smsatReassignBlocks_2: start\n"));
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* can't fit the transfer length */
SM_DBG5(("smsatReassignBlocks_2: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* can't fit the transfer length */
SM_DBG5(("smsatReassignBlocks_2: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
SM_DBG5(("smsatReassignBlocks_2: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
SM_DBG5(("smsatReassignBlocks_2: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
SM_DBG5(("smsatReassignBlocks_2: case 5 !!! error NCQ but 28 bit address support \n"));
smsatSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_WRITE_ERROR_AUTO_REALLOCATION_FAILED,
satIOContext);
/*smEnqueueIO(smRoot, satIOContext);*/
tdsmIOCompletedCB( smRoot,
smIORequest,
smIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pSmSenseData,
satIOContext->interruptContext );
return SM_RC_SUCCESS;
}
SM_DBG6(("satWrite10: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = 1; /* FIS sector count (7:0) */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
/* Check FUA bit */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->satCompleteCB = &smsatReassignBlocksCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
/* not the original, should be the TD generated one */
smScsiRequest,
satIOContext);
return (status);
}
osGLOBAL bit32
smsatReassignBlocks_1(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext,
smSatIOContext_t *satOrgIOContext
)
{
/*
assumes the LBA fits in the ATA command; boundary conditions are not checked here yet
tiScsiRequest is OS-generated; needed for accessing the parameter list
*/
bit32 agRequestType;
smDeviceData_t *pSatDevData;
smIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pParmList; /* parameter list data buffer */
bit8 LongLBA;
bit8 LBA[8];
bit32 startingIndex;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &smScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pParmList = (bit8 *) smScsiRequest->sglVirtualAddr;
SM_DBG5(("smsatReassignBlocks_1: start\n"));
LongLBA = (bit8)(scsiCmnd->cdb[1] & SCSI_REASSIGN_BLOCKS_LONGLBA_MASK);
sm_memset(LBA, 0, sizeof(LBA));
startingIndex = satOrgIOContext->ParmIndex;
if (LongLBA == 0)
{
LBA[4] = pParmList[startingIndex];
LBA[5] = pParmList[startingIndex+1];
LBA[6] = pParmList[startingIndex+2];
LBA[7] = pParmList[startingIndex+3];
startingIndex = startingIndex + 4;
}
else
{
LBA[0] = pParmList[startingIndex];
LBA[1] = pParmList[startingIndex+1];
LBA[2] = pParmList[startingIndex+2];
LBA[3] = pParmList[startingIndex+3];
LBA[4] = pParmList[startingIndex+4];
LBA[5] = pParmList[startingIndex+5];
LBA[6] = pParmList[startingIndex+6];
LBA[7] = pParmList[startingIndex+7];
startingIndex = startingIndex + 8;
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
/* DEV and LBA 27:24 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
sm_memcpy(satOrgIOContext->LBA, LBA, 8);
satOrgIOContext->ParmIndex = startingIndex;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatReassignBlocksCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
return SM_RC_SUCCESS;
}
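/*
 * Parameter-list layout consumed above (per SBC REASSIGN BLOCKS; noted
 * here for reference): a 4-byte header followed by defect descriptors of
 * 4 bytes (LongLBA == 0) or 8 bytes (LongLBA == 1), each a big-endian
 * LBA, which is why ParmIndex advances by 4 or 8 per descriptor.
 */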
osGLOBAL bit32
smsatSendReadLogExt(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG1(("smsatSendReadLogExt: start\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_LOG_EXT; /* 0x2F */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x10; /* log address 0x10 (NCQ command error log) */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* DEV is ignored in SATA */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* one sector */
fis->d.sectorCountExp = 0x00; /* sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatReadLogExtCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG1(("smsatSendReadLogExt: end status %d!!!\n", status));
return (status);
}
osGLOBAL bit32
smsatCheckPowerMode(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
/*
sends SAT_CHECK_POWER_MODE as part of ABORT task management for NCQ commands
internally generated - no directly corresponding SCSI command
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG1(("smsatCheckPowerMode: start\n"));
/*
* Send the ATA CHECK POWER MODE command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_CHECK_POWER_MODE; /* 0xE5 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatCheckPowerModeCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG5(("smsatCheckPowerMode: return\n"));
return status;
}
osGLOBAL bit32
smsatResetDevice(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest, /* NULL */
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
smIORequestBody_t *smIORequestBody;
smSatInternalIo_t *satIntIoContext;
#endif
fis = satIOContext->pFis;
SM_DBG1(("smsatResetDevice: start\n"));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
smIORequestBody = satIntIoContext->satIntRequestBody;
#endif
SM_DBG5(("smsatResetDevice: satIOContext %p smIORequestBody %p\n", satIOContext, smIORequestBody));
/* any fis should work */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0; /* C Bit is not set */
fis->h.command = 0; /* any command */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0x4; /* SRST bit is set */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_SRST_ASSERT;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatResetDeviceCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef SM_INTERNAL_DEBUG
smhexdump("smsatResetDevice", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
smhexdump("smsatResetDevice LL", (bit8 *)&(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG6(("smsatResetDevice: end status %d\n", status));
return status;
}
osGLOBAL bit32
smsatDeResetDevice(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
smIORequestBody_t *smIORequestBody;
smSatInternalIo_t *satIntIoContext;
#endif
fis = satIOContext->pFis;
SM_DBG1(("smsatDeResetDevice: start\n"));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
smIORequestBody = satIntIoContext->satIntRequestBody;
#endif
SM_DBG5(("smsatDeResetDevice: satIOContext %p smIORequestBody %p\n", satIOContext, smIORequestBody));
/* any fis should work */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0; /* C Bit is not set */
fis->h.command = 0; /* any command */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* SRST bit is not set */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_SRST_DEASSERT;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatDeResetDeviceCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef SM_INTERNAL_DEBUG
smhexdump("smsatDeResetDevice", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
smhexdump("smsatDeResetDevice LL", (bit8 *)&(smIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
SM_DBG6(("smsatDeResetDevice: end status %d\n", status));
return status;
}
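/*
 * Soft-reset sequence sketch: smsatResetDevice() above sends a register
 * host-to-device FIS with the C bit clear and control bit 2 (SRST) set,
 * and smsatDeResetDevice() follows with SRST cleared; the pair performs
 * the ATA software reset handshake over SATA.
 */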
/* set feature for auto activate */
osGLOBAL bit32
smsatSetFeaturesAA(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status = SM_RC_FAILURE;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG2(("smsatSetFeaturesAA: start\n"));
/*
* Send the Set Features command.
* See SATA II 1.0a spec
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x10; /* enable SATA feature */
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0x02; /* DMA Setup FIS Auto-Activate */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSetFeaturesAACB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
/* debugging code */
if (smIORequest->tdData == smIORequest->smData)
{
SM_DBG1(("smsatSetFeaturesAA: incorrect smIORequest\n"));
}
SM_DBG2(("smsatSetFeatures: return\n"));
return status;
}
/* set feature for DMA transfer mode*/
osGLOBAL bit32
smsatSetFeaturesDMA(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status = SM_RC_FAILURE;
bit32 agRequestType;
smDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
SM_DBG2(("smsatSetFeaturesDMA: start\n"));
/*
* Send the Set Features command.
* See SATA II 1.0a spec
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x03; /* set transfer mode */
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0x40 | (bit8)pSatDevData->satUltraDMAMode; /* enable Ultra DMA mode */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSetFeaturesDMACB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
/* debugging code */
if (smIORequest->tdData == smIORequest->smData)
{
SM_DBG1(("smsatSetFeaturesDMA: incorrect smIORequest\n"));
}
SM_DBG2(("smsatSetFeaturesDMA: return\n"));
return status;
}
/* set feature for Read Look Ahead*/
osGLOBAL bit32
smsatSetFeaturesReadLookAhead(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status = SM_RC_FAILURE;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG2(("smsatSetFeaturesReadLookAhead: start\n"));
/*
* Send the Set Features command.
* See SATA II 1.0a spec
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0xAA; /* Enable read look-ahead feature */
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSetFeaturesReadLookAheadCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
/* debugging code */
if (smIORequest->tdData == smIORequest->smData)
{
SM_DBG1(("smsatSetFeaturesReadLookAhead: incorrect smIORequest\n"));
}
SM_DBG2(("smsatSetFeaturesReadLookAhead: return\n"));
return status;
}
/* set feature for Volatile Write Cache*/
osGLOBAL bit32
smsatSetFeaturesVolatileWriteCache(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext
)
{
bit32 status = SM_RC_FAILURE;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
SM_DBG2(("smsatSetFeaturesVolatileWriteCache: start\n"));
/*
* Send the Set Features command.
* See SATA II 1.0a spec
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x02; /* Enable Volatile Write Cache feature */
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &smsatSetFeaturesVolatileWriteCacheCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = smsataLLIOStart( smRoot,
smIORequest,
smDeviceHandle,
smScsiRequest,
satIOContext);
/* debugging code */
if (smIORequest->tdData == smIORequest->smData)
{
SM_DBG1(("smsatSetFeaturesVolatileWriteCache: incorrect smIORequest\n"));
}
SM_DBG2(("smsatSetFeaturesVolatileWriteCache: return\n"));
return status;
}
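/*
 * SET FEATURES subcommand summary for the helpers above (values as used
 * in this file; semantics from ATA8-ACS / SATA II, listed for reference):
 *   features = 0x02 / 0x82   enable / disable volatile write cache
 *   features = 0x03          set transfer mode, count = 0x40 | UDMA mode
 *   features = 0xAA / 0x55   enable / disable read look-ahead
 *   features = 0x10          enable SATA feature, count = 0x02 selects
 *                            DMA Setup FIS auto-activate
 */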
/******************************** start of utils ***********************************************************/
osGLOBAL FORCEINLINE void
smsatBitSet(smRoot_t *smRoot, bit8 *data, bit32 index)
{
data[index>>3] |= (1 << (index&7));
}
osGLOBAL FORCEINLINE void
smsatBitClear(smRoot_t *smRoot, bit8 *data, bit32 index)
{
data[index>>3] &= ~(1 << (index&7));
}
osGLOBAL FORCEINLINE BOOLEAN
smsatBitTest(smRoot_t *smRoot, bit8 *data, bit32 index)
{
return ( (BOOLEAN)((data[index>>3] & (1 << (index&7)) ) ? 1: 0));
}
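/*
 * Usage sketch for the bitmap helpers above. The guard macro
 * SM_UTIL_EXAMPLES and the function below are illustrative only and are
 * not part of the driver: they mark tag 5 busy, test it, then free it.
 */
#ifdef SM_UTIL_EXAMPLES
static void
smsatBitmapExample(smRoot_t *smRoot)
{
  bit32 bitmap = 0;                             /* all 32 tags free */
  smsatBitSet(smRoot, (bit8 *)&bitmap, 5);      /* mark tag 5 used */
  if (smsatBitTest(smRoot, (bit8 *)&bitmap, 5))
  {
    smsatBitClear(smRoot, (bit8 *)&bitmap, 5);  /* release tag 5 */
  }
}
#endif /* SM_UTIL_EXAMPLES */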
FORCEINLINE bit32
smsatTagAlloc(
smRoot_t *smRoot,
smDeviceData_t *pSatDevData,
bit8 *pTag
)
{
bit32 retCode = agFALSE;
bit32 i;
tdsmSingleThreadedEnter(smRoot, SM_NCQ_TAG_LOCK);
#ifdef CCFLAG_OPTIMIZE_SAT_LOCK
if (tdsmBitScanForward(smRoot, &i, ~(pSatDevData->freeSATAFDMATagBitmap)))
{
smsatBitSet(smRoot, (bit8*)&pSatDevData->freeSATAFDMATagBitmap, i);
*pTag = (bit8)i;
retCode = agTRUE;
}
#else
for ( i = 0; i < pSatDevData->satNCQMaxIO; i ++ )
{
if ( 0 == smsatBitTest(smRoot, (bit8 *)&pSatDevData->freeSATAFDMATagBitmap, i) )
{
smsatBitSet(smRoot, (bit8*)&pSatDevData->freeSATAFDMATagBitmap, i);
*pTag = (bit8) i;
retCode = agTRUE;
break;
}
}
#endif
tdsmSingleThreadedLeave(smRoot, SM_NCQ_TAG_LOCK);
return retCode;
}
FORCEINLINE bit32
smsatTagRelease(
smRoot_t *smRoot,
smDeviceData_t *pSatDevData,
bit8 tag
)
{
bit32 retCode = agFALSE;
if ( tag < pSatDevData->satNCQMaxIO )
{
tdsmSingleThreadedEnter(smRoot, SM_NCQ_TAG_LOCK);
smsatBitClear(smRoot, (bit8 *)&pSatDevData->freeSATAFDMATagBitmap, (bit32)tag);
tdsmSingleThreadedLeave(smRoot, SM_NCQ_TAG_LOCK);
/*tdsmInterlockedAnd(smRoot, (volatile LONG *)(&pSatDevData->freeSATAFDMATagBitmap), ~(1 << (tag&31)));*/
retCode = agTRUE;
}
else
{
SM_DBG1(("smsatTagRelease: tag %d >= satNCQMaxIO %d!!!!\n", tag, pSatDevData->satNCQMaxIO));
}
return retCode;
}
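/*
 * NCQ tag life-cycle sketch (the caller below is hypothetical; real
 * callers live in the NCQ read/write paths): allocate a free FPDMA tag
 * before building a READ/WRITE FPDMA QUEUED FIS, release it on completion.
 */
#ifdef SM_UTIL_EXAMPLES
static void
smsatTagExample(smRoot_t *smRoot, smDeviceData_t *pSatDevData)
{
  bit8 tag;
  if (smsatTagAlloc(smRoot, pSatDevData, &tag) == agTRUE)
  {
    /* ... issue the queued command carrying 'tag' ... */
    smsatTagRelease(smRoot, pSatDevData, tag);
  }
}
#endif /* SM_UTIL_EXAMPLES */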
osGLOBAL bit32
smsatComputeCDB10LBA(smSatIOContext_t *satIOContext)
{
smIniScsiCmnd_t *scsiCmnd;
smScsiInitiatorRequest_t *smScsiRequest;
bit32 lba = 0;
SM_DBG5(("smsatComputeCDB10LBA: start\n"));
smScsiRequest = satIOContext->smScsiXchg;
scsiCmnd = &(smScsiRequest->scsiCmnd);
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
return lba;
}
osGLOBAL bit32
smsatComputeCDB10TL(smSatIOContext_t *satIOContext)
{
smIniScsiCmnd_t *scsiCmnd;
smScsiInitiatorRequest_t *smScsiRequest;
bit32 tl = 0;
SM_DBG5(("smsatComputeCDB10TL: start\n"));
smScsiRequest = satIOContext->smScsiXchg;
scsiCmnd = &(smScsiRequest->scsiCmnd);
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
return tl;
}
osGLOBAL bit32
smsatComputeCDB12LBA(smSatIOContext_t *satIOContext)
{
smIniScsiCmnd_t *scsiCmnd;
smScsiInitiatorRequest_t *smScsiRequest;
bit32 lba = 0;
SM_DBG5(("smsatComputeCDB12LBA: start\n"));
smScsiRequest = satIOContext->smScsiXchg;
scsiCmnd = &(smScsiRequest->scsiCmnd);
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
return lba;
}
osGLOBAL bit32
smsatComputeCDB12TL(smSatIOContext_t *satIOContext)
{
smIniScsiCmnd_t *scsiCmnd;
smScsiInitiatorRequest_t *smScsiRequest;
bit32 tl = 0;
SM_DBG5(("smsatComputeCDB12TL: start\n"));
smScsiRequest = satIOContext->smScsiXchg;
scsiCmnd = &(smScsiRequest->scsiCmnd);
tl = (scsiCmnd->cdb[6] << (8*3)) + (scsiCmnd->cdb[7] << (8*2))
+ (scsiCmnd->cdb[8] << 8) + scsiCmnd->cdb[9];
return tl;
}
/*
CDB16 has a 64-bit LBA,
but it has to be less than (2^28 - 1),
so using the last four bytes to compute the LBA is OK
*/
osGLOBAL bit32
smsatComputeCDB16LBA(smSatIOContext_t *satIOContext)
{
smIniScsiCmnd_t *scsiCmnd;
smScsiInitiatorRequest_t *smScsiRequest;
bit32 lba = 0;
SM_DBG5(("smsatComputeCDB16LBA: start\n"));
smScsiRequest = satIOContext->smScsiXchg;
scsiCmnd = &(smScsiRequest->scsiCmnd);
lba = (scsiCmnd->cdb[6] << (8*3)) + (scsiCmnd->cdb[7] << (8*2))
+ (scsiCmnd->cdb[8] << 8) + scsiCmnd->cdb[9];
return lba;
}
osGLOBAL bit32
smsatComputeCDB16TL(smSatIOContext_t *satIOContext)
{
smIniScsiCmnd_t *scsiCmnd;
smScsiInitiatorRequest_t *smScsiRequest;
bit32 tl = 0;
SM_DBG5(("smsatComputeCDB16TL: start\n"));
smScsiRequest = satIOContext->smScsiXchg;
scsiCmnd = &(smScsiRequest->scsiCmnd);
tl = (scsiCmnd->cdb[10] << (8*3)) + (scsiCmnd->cdb[11] << (8*2))
+ (scsiCmnd->cdb[12] << 8) + scsiCmnd->cdb[13];
return tl;
}
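/*
 * Worked example for the CDB decode helpers above (illustrative values):
 * a READ(10) CDB of 28 00 00 12 34 56 00 00 08 00 yields
 *   LBA = (0x00 << 24) + (0x12 << 16) + (0x34 << 8) + 0x56 = 0x00123456
 *   TL  = (0x00 << 8) + 0x08 = 8 sectors
 */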
/*
(tl, denom)
tl can be up to 32 bits because CDB16 has a 32-bit tl
Therefore, it is fine to call with
either (tl, 0xFF) or (tl, 0xFFFF)
*/
osGLOBAL FORCEINLINE bit32
smsatComputeLoopNum(bit32 a, bit32 b)
{
bit32 LoopNum = 0;
SM_DBG5(("smsatComputeLoopNum: start\n"));
if (a < b || a == 0)
{
LoopNum = 1;
}
else
{
if ((a % b) == 0)
{
LoopNum = a/b;
}
else
{
LoopNum = a/b + 1;
}
}
return LoopNum;
}
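/*
 * With the ceiling semantics above: smsatComputeLoopNum(0x1FE, 0xFF) == 2
 * (exact multiple), smsatComputeLoopNum(0x200, 0xFF) == 3 (remainder
 * rounds up), and a == 0 or a < b yields 1.
 */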
/*
Generic function for checking
the LBA itself, LBA+TL against SAT_TR_LBA_LIMIT or SAT_EXT_TR_LBA_LIMIT,
and LBA+TL against the Read Capacity limit
flag: false - no 48BitSupport; true - 48BitSupport
returns TRUE when over the limit
*/
osGLOBAL FORCEINLINE bit32
smsatCheckLimit(bit8 *lba, bit8 *tl, int flag, smDeviceData_t *pSatDevData)
{
bit32 lbaCheck = agFALSE;
int i;
bit8 limit[8];
bit32 rangeCheck = agFALSE;
bit16 ans[8]; // 0 MSB, 7 LSB
bit8 final_ans[9]; // 0 MSB, 8 LSB
bit8 Bit28max[8];
bit8 Bit48max[8];
bit32 ReadCapCheck = agFALSE;
bit32 ret;
bit8 final_satMaxLBA[9];
bit8 oneTL[8];
bit8 temp_satMaxLBA[8]; // 0 MSB, 7 LSB
/*
check LBA
*/
if (flag == agFALSE)
{
/* limit is 0xFFFFFFF = 2^28 - 1 */
limit[0] = 0x0; /* MSB */
limit[1] = 0x0;
limit[2] = 0x0;
limit[3] = 0x0;
limit[4] = 0xF;
limit[5] = 0xFF;
limit[6] = 0xFF;
limit[7] = 0xFF; /* LSB */
}
else
{
/* limit is 0xFFFFFFFFFFFF = 2^48 - 1 */
limit[0] = 0x0; /* MSB */
limit[1] = 0x0;
limit[2] = 0xFF;
limit[3] = 0xFF;
limit[4] = 0xFF;
limit[5] = 0xFF;
limit[6] = 0xFF;
limit[7] = 0xFF; /* LSB */
}
//compare lba to limit
for(i=0;i<8;i++)
{
if (lba[i] > limit[i])
{
SM_DBG1(("smsatCheckLimit: LBA check True at %d\n", i));
lbaCheck = agTRUE;
break;
}
else if (lba[i] < limit[i])
{
SM_DBG5(("smsatCheckLimit: LBA check False at %d\n", i));
lbaCheck = agFALSE;
break;
}
else
{
continue;
}
}
if (lbaCheck == agTRUE)
{
SM_DBG1(("smsatCheckLimit: return LBA check True\n"));
return agTRUE;
}
/*
check LBA+TL < SAT_TR_LBA_LIMIT or SAT_EXT_TR_LBA_LIMIT
*/
sm_memset(ans, 0, sizeof(ans));
sm_memset(final_ans, 0, sizeof(final_ans));
// adding from LSB to MSB
for(i=7;i>=0;i--)
{
ans[i] = (bit16)(lba[i] + tl[i]);
if (i != 7)
{
ans[i] = (bit16)(ans[i] + ((ans[i+1] & 0xFF00) >> 8));
}
}
/*
filling in the final answer
*/
final_ans[0] = (bit8)(((ans[0] & 0xFF00) >> 8));
for(i=1;i<=8;i++)
{
final_ans[i] = (bit8)(ans[i-1] & 0xFF);
}
if (flag == agFALSE)
{
sm_memset(Bit28max, 0, sizeof(Bit28max));
Bit28max[4] = 0x10; // max = 0x10000000 = 2^28
//compare final_ans to max
if (final_ans[0] != 0 || final_ans[1] != 0 || final_ans[2] != 0
|| final_ans[3] != 0 || final_ans[4] != 0)
{
SM_DBG1(("smsatCheckLimit: before 28Bit addressing TRUE\n"));
rangeCheck = agTRUE;
}
else
{
for(i=5;i<=8;i++)
{
if (final_ans[i] > Bit28max[i-1])
{
SM_DBG1(("smsatCheckLimit: 28Bit addressing TRUE at %d\n", i));
rangeCheck = agTRUE;
break;
}
else if (final_ans[i] < Bit28max[i-1])
{
SM_DBG5(("smsatCheckLimit: 28Bit addressing FALSE at %d\n", i));
rangeCheck = agFALSE;
break;
}
else
{
continue;
}
}
}
}
else
{
sm_memset(Bit48max, 0, sizeof(Bit48max));
Bit48max[1] = 0x1; // max = 0x1000000000000 = 2^48
//compare final_ans to max
if (final_ans[0] != 0 || final_ans[1] != 0)
{
SM_DBG1(("smsatCheckLimit: before 48Bit addressing TRUE\n"));
rangeCheck = agTRUE;
}
else
{
for(i=2;i<=8;i++)
{
if (final_ans[i] > Bit48max[i-1])
{
SM_DBG1(("smsatCheckLimit: 48Bit addressing TRUE at %d\n", i));
rangeCheck = agTRUE;
break;
}
else if (final_ans[i] < Bit48max[i-1])
{
SM_DBG5(("smsatCheckLimit: 48Bit addressing FALSE at %d\n", i));
rangeCheck = agFALSE;
break;
}
else
{
continue;
}
}
}
}
if (rangeCheck == agTRUE)
{
SM_DBG1(("smsatCheckLimit: return rangeCheck True\n"));
return agTRUE;
}
/*
LBA+TL < Read Capacity Limit
*/
sm_memset(temp_satMaxLBA, 0, sizeof(temp_satMaxLBA));
sm_memset(oneTL, 0, sizeof(oneTL));
sm_memset(final_satMaxLBA, 0, sizeof(final_satMaxLBA));
sm_memset(ans, 0, sizeof(ans));
sm_memcpy(&temp_satMaxLBA, &pSatDevData->satMaxLBA, sizeof(temp_satMaxLBA));
oneTL[7] = 1;
// adding temp_satMaxLBA to oneTL
for(i=7;i>=0;i--)
{
ans[i] = (bit16)(temp_satMaxLBA[i] + oneTL[i]);
if (i != 7)
{
ans[i] = (bit16)(ans[i] + ((ans[i+1] & 0xFF00) >> 8));
}
}
/*
filling in the final answer
*/
final_satMaxLBA[0] = (bit8)(((ans[0] & 0xFF00) >> 8));
for(i=1;i<=8;i++)
{
final_satMaxLBA[i] = (bit8)(ans[i-1] & 0xFF);
}
if ( pSatDevData->ReadCapacity == 10)
{
for (i=0;i<=8;i++)
{
if (final_ans[i] > final_satMaxLBA[i])
{
SM_DBG1(("smsatCheckLimit: Read Capacity 10 TRUE at %d\n", i));
ReadCapCheck = agTRUE;
break;
}
else if (final_ans[i] < final_satMaxLBA[i])
{
SM_DBG5(("smsatCheckLimit: Read Capacity 10 FALSE at %d\n", i));
ReadCapCheck = agFALSE;
break;
}
else
{
continue;
}
}
if ( ReadCapCheck)
{
SM_DBG1(("smsatCheckLimit: after Read Capacity 10 TRUE\n"));
}
else
{
SM_DBG5(("smsatCheckLimit: after Read Capacity 10 FALSE\n"));
}
}
else if ( pSatDevData->ReadCapacity == 16)
{
for (i=0;i<=8;i++)
{
if (final_ans[i] > final_satMaxLBA[i])
{
SM_DBG1(("smsatCheckLimit: Read Capacity 16 TRUE at %d\n", i));
ReadCapCheck = agTRUE;
break;
}
else if (final_ans[i] < final_satMaxLBA[i])
{
SM_DBG5(("smsatCheckLimit: Read Capacity 16 FALSE at %d\n", i));
ReadCapCheck = agFALSE;
break;
}
else
{
continue;
}
}
if ( ReadCapCheck)
{
SM_DBG1(("smsatCheckLimit: after Read Capacity 16 TRUE\n"));
}
else
{
SM_DBG5(("smsatCheckLimit: after Read Capacity 16 FALSE\n"));
}
}
else
{
SM_DBG5(("smsatCheckLimit: unknown pSatDevData->ReadCapacity %d\n", pSatDevData->ReadCapacity));
}
if (ReadCapCheck == agTRUE)
{
SM_DBG1(("smsatCheckLimit: return ReadCapCheck True\n"));
return agTRUE;
}
ret = (lbaCheck | rangeCheck | ReadCapCheck);
if (ret == agTRUE)
{
SM_DBG1(("smsatCheckLimit: final check TRUE\n"));
}
else
{
SM_DBG5(("smsatCheckLimit: final check FALSE\n"));
}
return ret;
}
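/*
 * Worked example (illustrative values): in 28-bit mode (flag == agFALSE)
 * an lba of 0x0FFFFFF0 with tl = 0x20 sums to 0x10000010, which exceeds
 * the 0x10000000 (2^28) boundary, so the range check above returns
 * agTRUE; the same lba with tl = 0x10 sums to exactly 2^28 and passes,
 * since LBA + TL addresses one past the last sector.
 */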
osGLOBAL void
smsatPrintSgl(
smRoot_t *smRoot,
agsaEsgl_t *agEsgl,
bit32 idx
)
{
bit32 i=0;
#ifdef TD_DEBUG_ENABLE
agsaSgl_t *agSgl;
#endif
for (i=0;i<idx;i++)
{
#ifdef TD_DEBUG_ENABLE
agSgl = &(agEsgl->descriptor[i]);
#endif
SM_DBG3(("smsatPrintSgl: agSgl %d upperAddr 0x%08x lowerAddr 0x%08x len 0x%08x ext 0x%08x\n",
i, agSgl->sgUpper, agSgl->sgLower, agSgl->len, agSgl->extReserved));
}
return;
}
osGLOBAL void
smsatSplitSGL(
smRoot_t *smRoot,
smIORequest_t *smIORequest,
smDeviceHandle_t *smDeviceHandle,
smScsiInitiatorRequest_t *smScsiRequest,
smSatIOContext_t *satIOContext,
bit32 split, /* in sectors, depending on IO value */
bit32 tl, /* in sectors */
bit32 flag
)
{
agsaSgl_t *agSgl;
agsaEsgl_t *agEsgl;
bit32 i=0;
smIniScsiCmnd_t *scsiCmnd;
bit32 totalLen=0; /* in bytes */
bit32 splitLen=0; /* in bytes */
bit32 splitDiffByte = 0; /* in bytes */
bit32 splitDiffExtra = 0; /* in bytes */
bit32 splitIdx = 0;
bit32 UpperAddr, LowerAddr;
bit32 tmpLowerAddr;
void *sglVirtualAddr;
void *sglSplitVirtualAddr;
scsiCmnd = &smScsiRequest->scsiCmnd;
SM_DBG3(("smsatSplitSGL: start\n"));
if (smScsiRequest->smSgl1.type == 0x80000000) /* esgl */
{
if (flag == agFALSE)
{
SM_DBG3(("smsatSplitSGL: Not first time\n"));
SM_DBG3(("smsatSplitSGL: UpperAddr 0x%08x LowerAddr 0x%08x\n", satIOContext->UpperAddr, satIOContext->LowerAddr));
SM_DBG3(("smsatSplitSGL: SplitIdx %d AdjustBytes 0x%08x\n", satIOContext->SplitIdx, satIOContext->AdjustBytes));
sglVirtualAddr = smScsiRequest->sglVirtualAddr;
agEsgl = (agsaEsgl_t *)smScsiRequest->sglVirtualAddr;
sglSplitVirtualAddr = &(agEsgl->descriptor[satIOContext->SplitIdx]);
agEsgl = (agsaEsgl_t *)sglSplitVirtualAddr;
if (agEsgl == agNULL)
{
SM_DBG1(("smsatSplitSGL: error!\n"));
return;
}
/* first sgl adjustment */
agSgl = &(agEsgl->descriptor[0]);
agSgl->sgUpper = satIOContext->UpperAddr;
agSgl->sgLower = satIOContext->LowerAddr;
agSgl->len = satIOContext->AdjustBytes;
sm_memcpy(sglVirtualAddr, sglSplitVirtualAddr, (satIOContext->EsglLen) * sizeof(agsaSgl_t));
agEsgl = (agsaEsgl_t *)smScsiRequest->sglVirtualAddr;
smsatPrintSgl(smRoot, (agsaEsgl_t *)sglVirtualAddr, satIOContext->EsglLen);
}
else
{
/* first time */
SM_DBG3(("smsatSplitSGL: first time\n"));
satIOContext->EsglLen = smScsiRequest->smSgl1.len;
agEsgl = (agsaEsgl_t *)smScsiRequest->sglVirtualAddr;
if (agEsgl == agNULL)
{
return;
}
smsatPrintSgl(smRoot, agEsgl, satIOContext->EsglLen);
}
if (tl > split)
{
/* split */
SM_DBG3(("smsatSplitSGL: split case\n"));
i = 0;
while (1)
{
agSgl = &(agEsgl->descriptor[i]);
splitLen = splitLen + agSgl->len;
if (splitLen >= split)
{
splitDiffExtra = splitLen - split;
splitDiffByte = agSgl->len - splitDiffExtra;
splitIdx = i;
break;
}
i++;
}
SM_DBG3(("smsatSplitSGL: splitIdx %d\n", splitIdx));
SM_DBG3(("smsatSplitSGL: splitDiffByte 0x%8x\n", splitDiffByte));
SM_DBG3(("smsatSplitSGL: splitDiffExtra 0x%8x \n", splitDiffExtra));
agSgl = &(agEsgl->descriptor[splitIdx]);
UpperAddr = agSgl->sgUpper;
LowerAddr = agSgl->sgLower;
tmpLowerAddr = LowerAddr + splitDiffByte;
if (tmpLowerAddr < LowerAddr)
{
UpperAddr = UpperAddr + 1;
}
SM_DBG3(("smsatSplitSGL: UpperAddr 0x%08x tmpLowerAddr 0x%08x\n", UpperAddr, tmpLowerAddr));
agSgl->len = splitDiffByte;
/* Esgl len adjustment */
smScsiRequest->smSgl1.len = splitIdx;
/* expected data length adjustment */
scsiCmnd->expDataLength = 0x20000;
/* remember for the next round */
satIOContext->UpperAddr = UpperAddr;
satIOContext->LowerAddr = tmpLowerAddr;
satIOContext->SplitIdx = splitIdx;
satIOContext->AdjustBytes = splitDiffExtra;
satIOContext->EsglLen = satIOContext->EsglLen - smScsiRequest->smSgl1.len;
satIOContext->OrgTL = satIOContext->OrgTL - 0x100;
// smsatPrintSgl(smRoot, agEsgl, satIOContext->EsglLen);
}
else
{
/* no split */
SM_DBG3(("smsatSplitSGL: no split case\n"));
/* Esgl len adjustment */
smScsiRequest->smSgl1.len = satIOContext->EsglLen;
for (i=0;i< smScsiRequest->smSgl1.len;i++)
{
agSgl = &(agEsgl->descriptor[i]);
totalLen = totalLen + (agSgl->len);
}
/* expected data length adjustment */
scsiCmnd->expDataLength = totalLen;
// smsatPrintSgl(smRoot, agEsgl, satIOContext->EsglLen);
}
}
else
{
SM_DBG1(("not exntened esgl\n"));
}
return;
}
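/*
 * Split walk sketch (illustrative numbers): with split = 0x20000 bytes
 * and descriptors of len 0x18000 and 0x10000, the loop above stops at
 * index 1 with splitLen = 0x28000, so splitDiffExtra = 0x8000 is carried
 * to the next round and splitDiffByte = 0x8000 stays in this round's
 * last descriptor.
 */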
/******************************** end of utils ***********************************************************/
Index: head/sys/dev/pms/RefTisa/tisa/sassata/sata/host/sat.c
===================================================================
--- head/sys/dev/pms/RefTisa/tisa/sassata/sata/host/sat.c (revision 300049)
+++ head/sys/dev/pms/RefTisa/tisa/sassata/sata/host/sat.c (revision 300050)
@@ -1,23309 +1,23309 @@
/*******************************************************************************
*Copyright (c) 2014 PMC-Sierra, Inc. All rights reserved.
*
*Redistribution and use in source and binary forms, with or without modification, are permitted provided
*that the following conditions are met:
*1. Redistributions of source code must retain the above copyright notice, this list of conditions and the
*following disclaimer.
*2. Redistributions in binary form must reproduce the above copyright notice,
*this list of conditions and the following disclaimer in the documentation and/or other materials provided
*with the distribution.
*
*THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED
*WARRANTIES,INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
*FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
*FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
*NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
*BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
*LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
*SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE
********************************************************************************/
/*****************************************************************************/
/** \file
*
* The file implementing SCSI/ATA Translation (SAT).
* The routines in this file are independent from HW LL API.
*
*/
/*****************************************************************************/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <dev/pms/config.h>
#include <dev/pms/freebsd/driver/common/osenv.h>
#include <dev/pms/freebsd/driver/common/ostypes.h>
#include <dev/pms/freebsd/driver/common/osdebug.h>
#ifdef SATA_ENABLE
#include <dev/pms/RefTisa/sallsdk/api/sa.h>
#include <dev/pms/RefTisa/sallsdk/api/saapi.h>
#include <dev/pms/RefTisa/sallsdk/api/saosapi.h>
#include <dev/pms/RefTisa/tisa/api/titypes.h>
#include <dev/pms/RefTisa/tisa/api/ostiapi.h>
#include <dev/pms/RefTisa/tisa/api/tiapi.h>
#include <dev/pms/RefTisa/tisa/api/tiglobal.h>
#ifdef FDS_SM
#include <dev/pms/RefTisa/sat/api/sm.h>
#include <dev/pms/RefTisa/sat/api/smapi.h>
#include <dev/pms/RefTisa/sat/api/tdsmapi.h>
#endif
#ifdef FDS_DM
#include <dev/pms/RefTisa/discovery/api/dm.h>
#include <dev/pms/RefTisa/discovery/api/dmapi.h>
#include <dev/pms/RefTisa/discovery/api/tddmapi.h>
#endif
#include <dev/pms/RefTisa/tisa/sassata/sas/common/tdtypes.h>
#include <dev/pms/freebsd/driver/common/osstring.h>
#include <dev/pms/RefTisa/tisa/sassata/common/tdutil.h>
#ifdef INITIATOR_DRIVER
#include <dev/pms/RefTisa/tisa/sassata/sas/ini/itdtypes.h>
#include <dev/pms/RefTisa/tisa/sassata/sas/ini/itddefs.h>
#include <dev/pms/RefTisa/tisa/sassata/sas/ini/itdglobl.h>
#endif
#ifdef TARGET_DRIVER
#include <dev/pms/RefTisa/tisa/sassata/sas/tgt/ttdglobl.h>
#include <dev/pms/RefTisa/tisa/sassata/sas/tgt/ttdxchg.h>
#include <dev/pms/RefTisa/tisa/sassata/sas/tgt/ttdtypes.h>
#endif
#include <dev/pms/RefTisa/tisa/sassata/common/tdsatypes.h>
#include <dev/pms/RefTisa/tisa/sassata/common/tdproto.h>
#include <dev/pms/RefTisa/tisa/sassata/sata/host/sat.h>
#include <dev/pms/RefTisa/tisa/sassata/sata/host/satproto.h>
/*****************************************************************************
*! \brief satIOStart
*
* This routine is called to initiate a new SCSI request to SATL.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
* \e tiIONoDevice: Invalid device handle.
* \e tiError: Other errors that prevent the I/O request to be started.
*
*
*****************************************************************************/
GLOBAL bit32 satIOStart(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
bit32 retVal = tiSuccess;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
tiLUN_t *pLun;
satInternalIo_t *pSatIntIo;
#ifdef TD_DEBUG_ENABLE
tdsaDeviceData_t *oneDeviceData;
#endif
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pLun = &scsiCmnd->lun;
/*
* Reject all LUNs other than LUN 0.
*/
if ( ((pLun->lun[0] | pLun->lun[1] | pLun->lun[2] | pLun->lun[3] |
pLun->lun[4] | pLun->lun[5] | pLun->lun[6] | pLun->lun[7] ) != 0) &&
(scsiCmnd->cdb[0] != SCSIOPC_INQUIRY)
)
{
TI_DBG1(("satIOStart: *** REJECT *** LUN not zero, cdb[0]=0x%x tiIORequest=%p tiDeviceHandle=%p\n",
scsiCmnd->cdb[0], tiIORequest, tiDeviceHandle));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_NOT_SUPPORTED,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
retVal = tiSuccess;
goto ext;
}
TI_DBG6(("satIOStart: satPendingIO %d satNCQMaxIO %d\n",pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
/* this may happen after tiCOMReset until OS sends inquiry */
if (pSatDevData->IDDeviceValid == agFALSE && (scsiCmnd->cdb[0] != SCSIOPC_INQUIRY))
{
#ifdef TD_DEBUG_ENABLE
oneDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
#endif
TI_DBG1(("satIOStart: invalid identify device data did %d\n", oneDeviceData->id));
retVal = tiIONoDevice;
goto ext;
}
/*
* Check if we need to return BUSY, i.e. recovery in progress
*/
if (pSatDevData->satDriveState == SAT_DEV_STATE_IN_RECOVERY)
{
#ifdef TD_DEBUG_ENABLE
oneDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
#endif
TI_DBG1(("satIOStart: IN RECOVERY STATE cdb[0]=0x%x tiIORequest=%p tiDeviceHandle=%p\n",
scsiCmnd->cdb[0], tiIORequest, tiDeviceHandle));
TI_DBG1(("satIOStart: IN RECOVERY STATE did %d\n", oneDeviceData->id));
TI_DBG1(("satIOStart: device %p satPendingIO %d satNCQMaxIO %d\n",pSatDevData, pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
TI_DBG1(("satIOStart: device %p satPendingNCQIO %d satPendingNONNCQIO %d\n",pSatDevData, pSatDevData->satPendingNCQIO, pSatDevData->satPendingNONNCQIO));
retVal = tiError;
goto ext;
// return tiBusy;
}
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
if (scsiCmnd->cdb[0] == SCSIOPC_REPORT_LUN)
{
return satReportLun(tiRoot, tiIORequest, tiDeviceHandle, tiScsiRequest, satIOContext);
}
else
{
return satPacket(tiRoot, tiIORequest, tiDeviceHandle, tiScsiRequest, satIOContext);
}
}
else /* pSatDevData->satDeviceType != SATA_ATAPI_DEVICE */
{
/* Parse CDB */
switch(scsiCmnd->cdb[0])
{
case SCSIOPC_READ_6:
retVal = satRead6( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_10:
retVal = satRead10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_12:
TI_DBG5(("satIOStart: SCSIOPC_READ_12\n"));
retVal = satRead12( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_16:
retVal = satRead16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_6:
retVal = satWrite6( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_10:
retVal = satWrite10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_12:
TI_DBG5(("satIOStart: SCSIOPC_WRITE_12 \n"));
retVal = satWrite12( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_16:
TI_DBG5(("satIOStart: SCSIOPC_WRITE_16\n"));
retVal = satWrite16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_VERIFY_10:
retVal = satVerify10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_VERIFY_12:
TI_DBG5(("satIOStart: SCSIOPC_VERIFY_12\n"));
retVal = satVerify12( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_VERIFY_16:
TI_DBG5(("satIOStart: SCSIOPC_VERIFY_16\n"));
retVal = satVerify16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_TEST_UNIT_READY:
retVal = satTestUnitReady( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_INQUIRY:
retVal = satInquiry( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_REQUEST_SENSE:
retVal = satRequestSense( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_MODE_SENSE_6:
retVal = satModeSense6( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_MODE_SENSE_10:
retVal = satModeSense10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_CAPACITY_10:
retVal = satReadCapacity10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_CAPACITY_16:
retVal = satReadCapacity16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_REPORT_LUN:
retVal = satReportLun( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_FORMAT_UNIT:
TI_DBG5(("satIOStart: SCSIOPC_FORMAT_UNIT\n"));
retVal = satFormatUnit( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_SEND_DIAGNOSTIC: /* Table 28, p40 */
TI_DBG5(("satIOStart: SCSIOPC_SEND_DIAGNOSTIC\n"));
retVal = satSendDiagnostic( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_START_STOP_UNIT:
TI_DBG5(("satIOStart: SCSIOPC_START_STOP_UNIT\n"));
retVal = satStartStopUnit( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_SAME_10: /* sector and LBA; SAT p64 case 3; accesses the payload and is
very inefficient now */
TI_DBG5(("satIOStart: SCSIOPC_WRITE_SAME_10\n"));
retVal = satWriteSame10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_SAME_16: /* not supported due to transfer length (sector count) */
TI_DBG5(("satIOStart: SCSIOPC_WRITE_SAME_16\n"));
retVal = satWriteSame16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_LOG_SENSE: /* SCT and log parameter(informational exceptions) */
TI_DBG5(("satIOStart: SCSIOPC_LOG_SENSE\n"));
retVal = satLogSense( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_MODE_SELECT_6: /* mode layout and allocation length check */
TI_DBG5(("satIOStart: SCSIOPC_MODE_SELECT_6\n"));
retVal = satModeSelect6( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_MODE_SELECT_10: /* mode layout and AlloLen check and sharing CB with satModeSelect6*/
TI_DBG5(("satIOStart: SCSIOPC_MODE_SELECT_10\n"));
retVal = satModeSelect10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_SYNCHRONIZE_CACHE_10: /* on error what to return, sharing CB with
satSynchronizeCache16 */
TI_DBG5(("satIOStart: SCSIOPC_SYNCHRONIZE_CACHE_10\n"));
retVal = satSynchronizeCache10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_SYNCHRONIZE_CACHE_16:/* on error what to return, sharing CB with
satSynchronizeCache16 */
TI_DBG5(("satIOStart: SCSIOPC_SYNCHRONIZE_CACHE_16\n"));
retVal = satSynchronizeCache16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_AND_VERIFY_10: /* single write and multiple writes */
TI_DBG5(("satIOStart: SCSIOPC_WRITE_AND_VERIFY_10\n"));
retVal = satWriteAndVerify10( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_AND_VERIFY_12:
TI_DBG5(("satIOStart: SCSIOPC_WRITE_AND_VERIFY_12\n"));
retVal = satWriteAndVerify12( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_AND_VERIFY_16:
TI_DBG5(("satIOStart: SCSIOPC_WRITE_AND_VERIFY_16\n"));
retVal = satWriteAndVerify16( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_MEDIA_SERIAL_NUMBER:
TI_DBG5(("satIOStart: SCSIOPC_READ_MEDIA_SERIAL_NUMBER\n"));
retVal = satReadMediaSerialNumber( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_READ_BUFFER:
TI_DBG5(("satIOStart: SCSIOPC_READ_BUFFER\n"));
retVal = satReadBuffer( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_WRITE_BUFFER:
TI_DBG5(("satIOStart: SCSIOPC_WRITE_BUFFER\n"));
retVal = satWriteBuffer( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
case SCSIOPC_REASSIGN_BLOCKS:
TI_DBG5(("satIOStart: SCSIOPC_REASSIGN_BLOCKS\n"));
retVal = satReassignBlocks( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
break;
default:
/* Not implemented SCSI cmd, set up error response */
TI_DBG1(("satIOStart: unsupported SCSI cdb[0]=0x%x tiIORequest=%p tiDeviceHandle=%p\n",
scsiCmnd->cdb[0], tiIORequest, tiDeviceHandle));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
retVal = tiSuccess;
break;
} /* end switch */
}
if (retVal == tiBusy)
{
#ifdef TD_DEBUG_ENABLE
oneDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
#endif
TI_DBG1(("satIOStart: BUSY did %d\n", oneDeviceData->id));
TI_DBG3(("satIOStart: LL is busy or target queue is full\n"));
TI_DBG3(("satIOStart: device %p satPendingIO %d satNCQMaxIO %d\n",pSatDevData, pSatDevData->satPendingIO, pSatDevData->satNCQMaxIO ));
TI_DBG3(("satIOStart: device %p satPendingNCQIO %d satPendingNONNCQIO %d\n",pSatDevData, pSatDevData->satPendingNCQIO, pSatDevData->satPendingNONNCQIO));
pSatIntIo = satIOContext->satIntIoContext;
/* internal structure free */
satFreeIntIoResource( tiRoot,
pSatDevData,
pSatIntIo);
}
ext:
return retVal;
}
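/*
 * Illustrative usage sketch (not part of the driver): a caller is expected
 * to treat tiBusy from satIOStart() as transient and re-queue the request,
 * since the internal IO resource has already been freed on that path.
 * osExampleRequeueIO() is a hypothetical OS-layer helper named only for
 * this example.
 */
#if 0
bit32 status = satIOStart(tiRoot, tiIORequest, tiDeviceHandle,
                          tiScsiRequest, satIOContext);
if (status == tiBusy)
{
  /* LL layer or target queue is full; safe to retry later */
  osExampleRequeueIO(tiIORequest);
}
#endif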
/*****************************************************************************/
/*! \brief Set up the SCSI Sense response.
*
* This function is used to set up the Sense Data payload for
* CHECK CONDITION status.
*
* \param pSense: Pointer to the scsiRspSense_t sense data structure.
* \param SnsKey: SCSI Sense Key.
* \param SnsInfo: SCSI Sense Info.
* \param SnsCode: SCSI Sense Code.
* \param satIOContext: Pointer to the SAT IO Context.
*
* \return: None
*/
/*****************************************************************************/
void satSetSensePayload( scsiRspSense_t *pSense,
bit8 SnsKey,
bit32 SnsInfo,
bit16 SnsCode,
satIOContext_t *satIOContext
)
{
/* for fixed format sense data, SPC-4, p37 */
bit32 i;
bit32 senseLength;
TI_DBG5(("satSetSensePayload: start\n"));
senseLength = sizeof(scsiRspSense_t);
/* zero out the data area */
for (i=0;i< senseLength;i++)
{
((bit8*)pSense)[i] = 0;
}
/*
* SCSI Sense Data part of response data
*/
pSense->snsRespCode = 0x70; /* 0xC0 == vendor specific */
/* 0x70 == standard current error */
pSense->senseKey = SnsKey;
/*
* Put sense info in scsi order format
*/
pSense->info[0] = (bit8)((SnsInfo >> 24) & 0xff);
pSense->info[1] = (bit8)((SnsInfo >> 16) & 0xff);
pSense->info[2] = (bit8)((SnsInfo >> 8) & 0xff);
pSense->info[3] = (bit8)((SnsInfo) & 0xff);
pSense->addSenseLen = 11; /* fixed size of sense data = 18 */
pSense->addSenseCode = (bit8)((SnsCode >> 8) & 0xFF);
pSense->senseQual = (bit8)(SnsCode & 0xFF);
/*
* Set pointer in scsi status
*/
switch(SnsKey)
{
/*
* set illegal request sense key specific error in cdb, no bit pointer
*/
case SCSI_SNSKEY_ILLEGAL_REQUEST:
pSense->skeySpecific[0] = 0xC8;
break;
default:
break;
}
/* setting sense data length */
if (satIOContext != agNULL)
{
satIOContext->pTiSenseData->senseLen = 18;
}
else
{
TI_DBG1(("satSetSensePayload: satIOContext is NULL\n"));
}
}
/*****************************************************************************/
/*! \brief Set up the deferred SCSI Sense response.
*
* This function is used to set up the deferred-error Sense Data payload for
* CHECK CONDITION status.
*
* \param pSense: Pointer to the scsiRspSense_t sense data structure.
* \param SnsKey: SCSI Sense Key.
* \param SnsInfo: SCSI Sense Info.
* \param SnsCode: SCSI Sense Code.
* \param satIOContext: Pointer to the SAT IO Context.
*
* \return: None
*/
/*****************************************************************************/
void satSetDeferredSensePayload( scsiRspSense_t *pSense,
bit8 SnsKey,
bit32 SnsInfo,
bit16 SnsCode,
satIOContext_t *satIOContext
)
{
/* for fixed format sense data, SPC-4, p37 */
bit32 i;
bit32 senseLength;
senseLength = sizeof(scsiRspSense_t);
/* zero out the data area */
for (i=0;i< senseLength;i++)
{
((bit8*)pSense)[i] = 0;
}
/*
* SCSI Sense Data part of response data
*/
pSense->snsRespCode = 0x71; /* 0xC0 == vendor specific */
/* 0x71 == standard deferred error */
pSense->senseKey = SnsKey;
/*
* Put sense info in scsi order format
*/
pSense->info[0] = (bit8)((SnsInfo >> 24) & 0xff);
pSense->info[1] = (bit8)((SnsInfo >> 16) & 0xff);
pSense->info[2] = (bit8)((SnsInfo >> 8) & 0xff);
pSense->info[3] = (bit8)((SnsInfo) & 0xff);
pSense->addSenseLen = 11; /* fixed size of sense data = 18 */
pSense->addSenseCode = (bit8)((SnsCode >> 8) & 0xFF);
pSense->senseQual = (bit8)(SnsCode & 0xFF);
/*
* Set pointer in scsi status
*/
switch(SnsKey)
{
/*
* set illegal request sense key specific error in cdb, no bit pointer
*/
case SCSI_SNSKEY_ILLEGAL_REQUEST:
pSense->skeySpecific[0] = 0xC8;
break;
default:
break;
}
/* setting sense data length */
if (satIOContext != agNULL)
{
satIOContext->pTiSenseData->senseLen = 18;
}
else
{
TI_DBG1(("satSetDeferredSensePayload: satIOContext is NULL\n"));
}
}
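/*
 * Usage note (illustrative sketch, not driver code): the 16-bit SnsCode
 * argument packs the Additional Sense Code in its high byte and the
 * Additional Sense Code Qualifier in its low byte. The example below
 * assumes SCSI_SNSCODE_INVALID_COMMAND encodes ASC 0x20 / ASCQ 0x00.
 */
#if 0
satSetSensePayload(pSense,
                   SCSI_SNSKEY_ILLEGAL_REQUEST,  /* sense key 0x05 */
                   0,                            /* INFORMATION field unused */
                   SCSI_SNSCODE_INVALID_COMMAND, /* ASC 0x20, ASCQ 0x00 */
                   satIOContext);
/* satSetDeferredSensePayload() builds the same 18-byte fixed-format
   payload, differing only in response code 0x71 (deferred) vs 0x70. */
#endif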
/*****************************************************************************/
/*! \brief SAT implementation for ATAPI Packet Command.
*
* SAT implementation for the ATAPI PACKET command; builds the FIS and sends the request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satPacket(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
satDeviceData_t *pSatDevData;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG3(("satPacket: start, SCSI CDB is 0x%X %X %X %X %X %X %X %X %X %X %X %X\n",
scsiCmnd->cdb[0],scsiCmnd->cdb[1],scsiCmnd->cdb[2],scsiCmnd->cdb[3],
scsiCmnd->cdb[4],scsiCmnd->cdb[5],scsiCmnd->cdb[6],scsiCmnd->cdb[7],
scsiCmnd->cdb[8],scsiCmnd->cdb[9],scsiCmnd->cdb[10],scsiCmnd->cdb[11]));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set 1*/
fis->h.command = SAT_PACKET; /* 0xA0 */
if (pSatDevData->satDMADIRSupport) /* DMADIR enabled*/
{
fis->h.features = (tiScsiRequest->dataDirection == tiDirectionIn)? 0x04 : 0; /* 1 for D2H, 0 for H2D */
}
else
{
fis->h.features = 0; /* FIS reserve */
}
/* Byte count low and byte count high */
if ( scsiCmnd->expDataLength > 0xFFFF )
{
fis->d.lbaMid = 0xFF; /* FIS LBA (7 :0 ) */
fis->d.lbaHigh = 0xFF; /* FIS LBA (15:8 ) */
}
else
{
fis->d.lbaMid = (bit8)scsiCmnd->expDataLength; /* FIS LBA (7 :0 ) */
fis->d.lbaHigh = (bit8)(scsiCmnd->expDataLength>>8); /* FIS LBA (15:8 ) */
}
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.device = 0; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
satIOContext->ATACmd = SAT_PACKET;
if (tiScsiRequest->dataDirection == tiDirectionIn)
{
agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
}
else
{
agRequestType = AGSA_SATA_PROTOCOL_H2D_PKT;
}
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/*DMA transfer mode*/
fis->h.features |= 0x01;
}
else
{
/*PIO transfer mode*/
fis->h.features |= 0x0;
}
satIOContext->satCompleteCB = &satPacketCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satPacket: return\n"));
return (status);
}
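/*
 * Sketch of the ATAPI byte-count encoding used by satPacket() above
 * (field meanings assumed from the SATA/ATAPI register mapping): the
 * host's per-DRQ byte-count limit travels in LBA Mid (7:0) and LBA High
 * (15:8), clamped to 0xFFFF for larger transfers.
 */
#if 0
bit32 byteCount = scsiCmnd->expDataLength;
if (byteCount > 0xFFFF)
{
  byteCount = 0xFFFF;                             /* clamp, as satPacket() does */
}
fis->d.lbaMid  = (bit8)(byteCount & 0xFF);        /* byte count (7:0)  */
fis->d.lbaHigh = (bit8)((byteCount >> 8) & 0xFF); /* byte count (15:8) */
#endif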
/*****************************************************************************/
/*! \brief SAT implementation for satSetFeatures.
*
* This function creates SetFeatures fis and sends the request to LL layer
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSetFeatures(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
bit8 bIsDMAMode
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
TI_DBG3(("satSetFeatures: start\n"));
/*
* Send the Set Features command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x03; /* set transfer mode */
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
if (bIsDMAMode)
{
fis->d.sectorCount = 0x45;
/*satIOContext->satCompleteCB = &satSetFeaturesDMACB;*/
}
else
{
fis->d.sectorCount = 0x0C;
/*satIOContext->satCompleteCB = &satSetFeaturesPIOCB;*/
}
satIOContext->satCompleteCB = &satSetFeaturesCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSetFeatures: return\n"));
return status;
}
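/*
 * Background note on the sector-count values above (ATA SET FEATURES
 * subcommand 0x03, "set transfer mode"; encoding recalled from the ATA
 * spec and offered as a hedged reference): Sector Count selects the mode,
 * so 0x45 requests Ultra DMA mode 5 and 0x0C requests PIO flow-control
 * mode 4, matching the bIsDMAMode branches. The macros below are
 * hypothetical, for illustration only.
 */
#if 0
#define ATA_XFER_PIO_FC(n) ((bit8)(0x08 | (n)))  /* PIO flow control mode n */
#define ATA_XFER_MWDMA(n)  ((bit8)(0x20 | (n)))  /* Multiword DMA mode n    */
#define ATA_XFER_UDMA(n)   ((bit8)(0x40 | (n)))  /* Ultra DMA mode n        */
/* 0x45 == ATA_XFER_UDMA(5), 0x0C == ATA_XFER_PIO_FC(4) */
#endif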
/*****************************************************************************/
/*! \brief SAT implementation for SCSI REQUEST SENSE to ATAPI device.
*
* SAT implementation for SCSI REQUEST SENSE.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRequestSenseForATAPI(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
satDeviceData_t *pSatDevData;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
scsiCmnd->cdb[0] = SCSIOPC_REQUEST_SENSE;
scsiCmnd->cdb[1] = 0;
scsiCmnd->cdb[2] = 0;
scsiCmnd->cdb[3] = 0;
scsiCmnd->cdb[4] = SENSE_DATA_LENGTH;
scsiCmnd->cdb[5] = 0;
TI_DBG3(("satRequestSenseForATAPI: start, SCSI CDB is 0x%X %X %X %X %X %X %X %X %X %X %X %X\n",
scsiCmnd->cdb[0],scsiCmnd->cdb[1],scsiCmnd->cdb[2],scsiCmnd->cdb[3],
scsiCmnd->cdb[4],scsiCmnd->cdb[5],scsiCmnd->cdb[6],scsiCmnd->cdb[7],
scsiCmnd->cdb[8],scsiCmnd->cdb[9],scsiCmnd->cdb[10],scsiCmnd->cdb[11]));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set 1*/
fis->h.command = SAT_PACKET; /* 0xA0 */
if (pSatDevData->satDMADIRSupport) /* DMADIR enabled*/
{
fis->h.features = (tiScsiRequest->dataDirection == tiDirectionIn)? 0x04 : 0; /* 1 for D2H, 0 for H2D */
}
else
{
fis->h.features = 0; /* FIS reserve */
}
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x20; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = (bit32)(scsiCmnd->cdb[0]|(scsiCmnd->cdb[1]<<8)|(scsiCmnd->cdb[2]<<16)|(scsiCmnd->cdb[3]<<24));
satIOContext->ATACmd = SAT_PACKET;
agRequestType = AGSA_SATA_PROTOCOL_D2H_PKT;
//if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
fis->h.features |= 0x01;
}
else
{
fis->h.features |= 0x0;
}
}
satIOContext->satCompleteCB = &satRequestSenseForATAPICB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satRequestSenseForATAPI: return\n"));
return (status);
}
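/*
 * Note (illustrative): satRequestSenseForATAPI() builds its own 6-byte
 * REQUEST SENSE CDB asking for SENSE_DATA_LENGTH bytes, while lbaMid/lbaHigh
 * (0x00/0x20) advertise the per-DRQ byte-count limit of the PACKET transfer.
 * The reconstruction below is a sketch of how that limit reads back.
 */
#if 0
bit32 drqLimit = ((bit32)fis->d.lbaHigh << 8) | fis->d.lbaMid; /* == 0x2000 */
#endif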
/*****************************************************************************/
/*! \brief SAT implementation for satDeviceReset.
*
* This function creates DEVICE RESET fis and sends the request to LL layer
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satDeviceReset(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
TI_DBG3(("satDeviceReset: start\n"));
/*
* Send the Device Reset command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_DEVICE_RESET; /* 0x08 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DEV_RESET;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satDeviceResetCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG3(("satDeviceReset: return\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for satExecuteDeviceDiagnostic.
*
* This function creates Execute Device Diagnostic fis and sends the request to LL layer
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satExecuteDeviceDiagnostic(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
TI_DBG3(("satExecuteDeviceDiagnostic: start\n"));
/*
* Send the Execute Device Diagnostic command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_EXECUTE_DEVICE_DIAGNOSTIC; /* 0x90 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satExecuteDeviceDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satExecuteDeviceDiagnostic: return\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ10.
*
* SAT implementation for SCSI READ10; builds and sends the FIS request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRead10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satRead10: start\n"));
TI_DBG5(("satRead10: pSatDevData=%p\n", pSatDevData));
// tdhexdump("satRead10", (bit8 *)scsiCmnd->cdb, 10);
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRead10: return FUA_NV\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRead10: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = scsiCmnd->cdb[7]; /* MSB */
TL[3] = scsiCmnd->cdb[8]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
TI_DBG5(("satRead10: lba %d functioned lba %d\n", lba, satComputeCDB10LBA(satIOContext)));
TI_DBG5(("satRead10: lba 0x%x functioned lba 0x%x\n", lba, satComputeCDB10LBA(satIOContext)));
TI_DBG5(("satRead10: tl %d functioned tl %d\n", tl, satComputeCDB10TL(satIOContext)));
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
TI_DBG1(("satRead10: return LBA out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satRead10: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
/* in case that we can't fit the transfer length,
we need to make it fit by sending multiple ATA cmnds */
TI_DBG5(("satRead10: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA;
}
else
{
/* case 1 */
/* READ MULTIPLE or READ SECTOR(S) */
/* READ SECTORS for easier implementation */
/* in case that we can't fit the transfer length,
we need to make it fit by sending multiple ATA cmnds */
TI_DBG5(("satRead10: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT */
TI_DBG5(("satRead10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA_EXT;
}
else
{
/* case 4 */
/* READ MULTIPLE EXT or READ SECTOR(S) EXT or READ VERIFY SECTOR(S) EXT*/
/* READ SECTORS EXT for easier implementation */
TI_DBG5(("satRead10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ10_FUA_MASK)
{
/* for now, no support for FUA */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satRead10: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satRead10: case 5\n"));
/* Support 48-bit FPDMA addressing, use READ FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
satIOContext->ATACmd = SAT_READ_FPDMA_QUEUED;
}
// tdhexdump("satRead10 final fis", (bit8 *)fis, sizeof(agsaFisRegHostToDevice_t));
/* saves the current LBA and original TL */
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
/* Initialize CB for SATA completion.
*/
if (LoopNum == 1)
{
TI_DBG5(("satRead10: NON CHAINED data\n"));
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
}
else
{
TI_DBG1(("satRead10: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_READ_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* chained data */
satIOContext->satCompleteCB = &satChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satRead10: return\n"));
return (status);
}
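/*
 * A minimal sketch of the CDB10 decode that satRead10() performs inline
 * (the driver's satComputeCDB10LBA()/satComputeCDB10TL() helpers are the
 * authoritative versions; names below are illustrative):
 */
#if 0
bit32 lba10 = ((bit32)cdb[2] << 24) | ((bit32)cdb[3] << 16) |
              ((bit32)cdb[4] << 8)  |  (bit32)cdb[5];    /* big-endian LBA */
bit32 tl10  = ((bit32)cdb[7] << 8)  |  (bit32)cdb[8];    /* transfer length */
#endif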
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satRead_1.
*
* SAT implementation helper satRead_1.
* Subfunction of satRead10: issues the next ATA command of a chained READ.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/*
* as a part of loop for read10
*/
GLOBAL bit32 satRead_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
Assumption: error check on lba and tl has been done in satRead*()
lba = lba + tl;
*/
bit32 status;
satIOContext_t *satOrgIOContext = agNULL;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
TI_DBG2(("satRead_1: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
scsiCmnd = satOrgIOContext->pScsiCmnd;
osti_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_DMA:
DenomTL = 0xFF;
break;
case SAT_READ_SECTORS:
DenomTL = 0xFF;
break;
case SAT_READ_DMA_EXT:
DenomTL = 0xFFFF;
break;
case SAT_READ_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
case SAT_READ_FPDMA_QUEUED:
DenomTL = 0xFFFF;
break;
default:
TI_DBG1(("satRead_1: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3));
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF);
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_DMA:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (LBA[0] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
break;
case SAT_READ_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (LBA[0] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
break;
case SAT_READ_DMA_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
break;
case SAT_READ_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
break;
case SAT_READ_FPDMA_QUEUED:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->h.features = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.featuresExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->h.features = 0xFF; /* FIS sector count (7:0) */
fis->d.featuresExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
break;
default:
TI_DBG1(("satRead_1: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &satChainedDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satRead_1: return\n"));
return (status);
}
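/*
 * Sketch of the chaining arithmetic shared by satRead10() and satRead_1()
 * (assumed semantics of satComputeLoopNum(): ceiling division; the real
 * helper may differ in edge cases such as tl % denom == 0):
 */
#if 0
bit32 satExampleLoopNum(bit32 tl, bit32 denom)  /* hypothetical name */
{
  /* number of ATA commands needed when each can carry at most denom sectors */
  return (tl + denom - 1) / denom;
}
/* satRead_1() advances currentLBA by denom per iteration and programs
   OrgTL % denom as the sector count of the final (LoopNum == 1) command. */
#endif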
/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ12.
*
* SAT implementation for SCSI READ12; builds and sends the FIS request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRead12(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satRead12: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRead12: return FUA_NV\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satRead12: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = scsiCmnd->cdb[6]; /* MSB */
TL[1] = scsiCmnd->cdb[7];
TL[2] = scsiCmnd->cdb[8];
TL[3] = scsiCmnd->cdb[9]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
lba = satComputeCDB12LBA(satIOContext);
tl = satComputeCDB12TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
TI_DBG1(("satRead12: return LBA out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satRead12: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
/* in case that we can't fit the transfer length,
we need to make it fit by sending multiple ATA cmnds */
TI_DBG5(("satRead12: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA;
}
else
{
/* case 1 */
/* READ MULTIPLE or READ SECTOR(S) */
/* READ SECTORS for easier implementation */
/* if the transfer length does not fit, split it across multiple ATA cmnds */
TI_DBG5(("satRead12: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT */
TI_DBG5(("satRead12: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA_EXT;
}
else
{
/* case 4 */
/* READ MULTIPLE EXT or READ SECTOR(S) EXT or READ VERIFY SECTOR(S) EXT*/
/* READ SECTORS EXT for easier implementation */
TI_DBG5(("satRead12: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ12_FUA_MASK)
{
/* for now, no support for FUA */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satRead12: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satRead12: case 5\n"));
/* Support 48-bit FPDMA addressing, use READ FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->h.features = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ12_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
satIOContext->ATACmd = SAT_READ_FPDMA_QUEUED;
}
/* saves the current LBA and original TL */
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satRead12: NON CHAINED data\n"));
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
}
else
{
TI_DBG1(("satRead12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_READ_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* chained data */
satIOContext->satCompleteCB = &satChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satRead12: return\n"));
return (status);
}
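/*
 * Sketch of the lba/tl range test that satAddNComparebit32() provides above
 * (assumed semantics: returns agTRUE when LBA + TL, taken as big-endian byte
 * arrays, exceeds the 28-bit SAT_TR_LBA_LIMIT; the byte-array form in the
 * driver avoids wide arithmetic, which this sketch uses only for brevity):
 */
#if 0
bit32 satExampleRangeChk32(const bit8 LBA[4], const bit8 TL[4])  /* hypothetical */
{
  bit32 lba = ((bit32)LBA[0] << 24) | ((bit32)LBA[1] << 16) |
              ((bit32)LBA[2] << 8)  |  (bit32)LBA[3];
  bit32 tl  = ((bit32)TL[0] << 24)  | ((bit32)TL[1] << 16) |
              ((bit32)TL[2] << 8)   |  (bit32)TL[3];
  /* 64-bit arithmetic assumed available here purely for illustration */
  return (((unsigned long long)lba + tl) > SAT_TR_LBA_LIMIT) ? agTRUE : agFALSE;
}
#endif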
/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ16.
*
* SAT implementation for SCSI READ16; builds and sends the FIS request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRead16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 rangeChk = agFALSE; /* lba and tl range check */
bit32 limitChk = agFALSE; /* lba upper-limit (28-bit) check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satRead16: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRead16: return FUA_NV\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRead16: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
rangeChk = satAddNComparebit64(LBA, TL);
limitChk = satCompareLBALimitbit(LBA);
lba = satComputeCDB16LBA(satIOContext);
tl = satComputeCDB16TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (limitChk)
{
TI_DBG1(("satRead16: return LBA out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satRead16: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
/* in case that we can't fit the transfer length,
we need to make it fit by sending multiple ATA cmnds */
TI_DBG5(("satRead16: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA;
}
else
{
/* case 1 */
/* READ MULTIPLE or READ SECTOR(S) */
/* READ SECTORS for easier implementation */
/* if the transfer length does not fit, split it across multiple ATA cmnds */
TI_DBG5(("satRead16: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device =
(bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF)); /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT */
TI_DBG5(("satRead16: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satIOContext->ATACmd = SAT_READ_DMA_EXT;
}
else
{
/* case 4 */
/* READ MULTIPLE EXT or READ SECTOR(S) EXT or READ VERIFY SECTOR(S) EXT*/
/* READ SECTORS EXT for easier implementation */
TI_DBG5(("satRead16: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ16_FUA_MASK)
{
/* for now, no support for FUA */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->ATACmd = SAT_READ_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satRead16: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satRead16: case 5\n"));
/* Support 48-bit FPDMA addressing, use READ FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->h.features = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_READ16_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
satIOContext->ATACmd = SAT_READ_FPDMA_QUEUED;
}
/* saves the current LBA and original TL */
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
compute the number of loops and the remainder for tl;
the per-command sector count limit is 0xFF for non-EXT commands
and 0xFFFF for EXT commands
*/
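/*
illustrative sketch, assuming satComputeLoopNum(tl, limit) rounds up,
i.e. LoopNum = (tl + limit - 1) / limit:
non-EXT: tl = 0x1FE with limit 0xFF gives LoopNum = 2
EXT: tl = 0x20000 with limit 0xFFFF gives LoopNum = 3 (0xFFFF + 0xFFFF + 2)
*/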
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_READ_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satRead16: NON CHAINED data\n"));
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
}
else
{
TI_DBG1(("satRead16: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_SECTORS || fis->h.command == SAT_READ_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_SECTORS_EXT || fis->h.command == SAT_READ_DMA_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_READ_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* chained data */
satIOContext->satCompleteCB = &satChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satRead16: return\n"));
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI READ6.
*
* SAT implementation for SCSI READ6 and send FIS request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRead6(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit16 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satRead6: start\n"));
/* READ(6) has no FUA bit, so no FUA check is needed */
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
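/* neither ACA (NACA = 1) nor linked commands are supported here, so either
bit set in the CONTROL byte is rejected with ILLEGAL REQUEST /
INVALID FIELD IN CDB */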
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satRead6: return control\n"));
return tiSuccess;
}
/* cdb6; computing LBA and transfer length */
lba = (((scsiCmnd->cdb[1]) & 0x1f) << (8*2))
+ (scsiCmnd->cdb[2] << 8) + scsiCmnd->cdb[3];
tl = scsiCmnd->cdb[4];
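/*
worked example of the READ(6) CDB decode above: cdb[] = { 0x08, 0x01, 0x02,
0x03, 0x10, ... } gives lba = 0x010203 and tl = 0x10; per SBC, tl == 0 in
READ(6)/WRITE(6) means 256 blocks, which is why tl == 0 is special-cased below
*/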
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
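/*
sketch of the check below, assuming SAT_TR_LBA_LIMIT == 2^28 (the first
LBA that a 28-bit command cannot address): any lba above 0x0FFFFFFF is
rejected because neither 48-bit commands nor NCQ are available to reach it
*/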
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRead6: return LBA out of range\n"));
return tiSuccess;
}
}
/* case 1 and 2 */
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* READ DMA*/
TI_DBG5(("satRead6: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA; /* 0xC8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
}
else
{
/* case 1 */
/* READ SECTORS for easier implementation */
TI_DBG5(("satRead6: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* READ DMA EXT only */
TI_DBG5(("satRead6: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_DMA_EXT; /* 0x25 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_READ;
}
else
{
/* case 4 */
/* READ SECTORS EXT for easier implementation */
TI_DBG5(("satRead6: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* READ FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
/* sanity check */
TI_DBG5(("satRead6: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG5(("satRead6: case 5\n"));
/* Support 48-bit FPDMA addressing, use READ FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_FPDMA_QUEUED; /* 0x60 */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->h.features = 0; /* FIS sector count (7:0) */
fis->d.featuresExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->h.features = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_READ;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI WRITE16.
*
* SAT implementation for SCSI WRITE16 and send FIS request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWrite16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 rangeChk = agFALSE; /* lba and tl range check */
bit32 limitChk = agFALSE; /* lba limit check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWrite16: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite16: return FUA_NV\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite16: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
rangeChk = satAddNComparebit64(LBA, TL);
limitChk = satCompareLBALimitbit(LBA);
lba = satComputeCDB16LBA(satIOContext);
tl = satComputeCDB16TL(satIOContext);
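/*
assumed helper semantics: satAddNComparebit64() returns agTRUE when the
big-endian sum LBA + TL crosses the 28-bit boundary (lba + tl >
SAT_TR_LBA_LIMIT) and satCompareLBALimitbit() returns agTRUE when LBA alone
is already beyond it; rangeChk and limitChk drive the checks below
*/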
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (limitChk)
{
TI_DBG1(("satWrite16: return LBA out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satWrite16: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* In case that we can't fit the transfer length, we loop */
TI_DBG5(("satWrite16: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
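/*
device register packing for 28-bit commands: bit 6 (0x40) selects LBA mode
and the low nibble carries LBA bits 27:24; e.g. cdb[6] = 0x0A gives
device = 0x4A
*/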
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* In case that we can't fit the transfer length, we loop */
TI_DBG5(("satWrite16: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWrite16: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWrite16: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWrite16: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satWrite16: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE16_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
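/*
note on the WRITE FPDMA QUEUED FIS above: for NCQ the sector count travels
in the FEATURES registers (features = count 7:0, featuresExp = count 15:8),
the NCQ tag occupies sectorCount bits 7:3 (filled in by the LL layer), and
FUA is bit 7 of the device register (0xC0 set, 0x40 clear)
*/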
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
compute the number of loops and the remainder for tl;
the per-command sector count limit is 0xFF for non-EXT commands
and 0xFFFF for EXT commands
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satWrite16: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
}
else
{
TI_DBG1(("satWrite16: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI WRITE12.
*
* SAT implementation for SCSI WRITE12 and send FIS request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWrite12(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWrite12: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite12: return FUA_NV\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite12: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = scsiCmnd->cdb[6]; /* MSB */
TL[1] = scsiCmnd->cdb[7];
TL[2] = scsiCmnd->cdb[8];
TL[3] = scsiCmnd->cdb[9]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
lba = satComputeCDB12LBA(satIOContext);
tl = satComputeCDB12TL(satIOContext);
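/*
worked example of the WRITE(12) decode above: cdb[2..5] = 00 01 02 03 and
cdb[6..9] = 00 00 00 20 give lba = 0x010203 and tl = 0x20, matching the
big-endian order of the LBA[] and TL[] arrays
*/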
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite12: return LBA out of range, not EXT\n"));
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satWrite12: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* In case that we can't fit the transfer length, we loop */
TI_DBG5(("satWrite12: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* In case that we can't fit the transfer length, we loop */
TI_DBG5(("satWrite12: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWrite12: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWrite12: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWrite12: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satWrite12: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE12_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
compute the number of loops and the remainder for tl;
the per-command sector count limit is 0xFF for non-EXT commands
and 0xFFFF for EXT commands
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satWrite12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
}
else
{
TI_DBG1(("satWrite12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI WRITE10.
*
* SAT implementation for SCSI WRITE10 and send FIS request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWrite10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWrite10: start\n"));
/* checking FUA_NV */
if (scsiCmnd->cdb[1] & SCSI_FUA_NV_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite10: return FUA_NV\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite10: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = scsiCmnd->cdb[7]; /* MSB */
TL[3] = scsiCmnd->cdb[8]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
TI_DBG5(("satWrite10: lba %d functioned lba %d\n", lba, satComputeCDB10LBA(satIOContext)));
TI_DBG5(("satWrite10: tl %d functioned tl %d\n", tl, satComputeCDB10TL(satIOContext)));
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite10: return LBA out of range, not EXT\n"));
TI_DBG1(("satWrite10: cdb 0x%x 0x%x 0x%x 0x%x\n",scsiCmnd->cdb[2], scsiCmnd->cdb[3],
scsiCmnd->cdb[4], scsiCmnd->cdb[5]));
TI_DBG1(("satWrite10: lba 0x%x SAT_TR_LBA_LIMIT 0x%x\n", lba, SAT_TR_LBA_LIMIT));
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satWrite10: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* the transfer length may not fit in one command; chained below if needed */
TI_DBG5(("satWrite10: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* the transfer length may not fit in one command; chained below if needed */
TI_DBG5(("satWrite10: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWrite10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWrite10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWrite10: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satWrite10: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
// tdhexdump("satWrite10 final fis", (bit8 *)fis, sizeof(agsaFisRegHostToDevice_t));
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
compute the number of loops and the remainder for tl;
the per-command sector count limit is 0xFF for non-EXT commands
and 0xFFFF for EXT commands
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satWrite10: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
}
else
{
TI_DBG1(("satWrite10: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedDataIOCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for chained SCSI WRITE (satWrite_1).
*
* SAT implementation for SCSI WRITE10 and send FIS request to LL layer.
* This is used when WRITE10 is divided into multiple ATA commands
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWrite_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
Assumption: error checking on lba and tl has been done in satWrite*();
the current LBA advances by one chunk (DenomTL sectors) per pass
*/
bit32 status;
satIOContext_t *satOrgIOContext = agNULL;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
TI_DBG2(("satWrite_1: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
scsiCmnd = satOrgIOContext->pScsiCmnd;
osti_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
DenomTL = 0xFF;
break;
case SAT_WRITE_SECTORS:
DenomTL = 0xFF;
break;
case SAT_WRITE_DMA_EXT:
DenomTL = 0xFFFF;
break;
case SAT_WRITE_DMA_FUA_EXT:
DenomTL = 0xFFFF;
break;
case SAT_WRITE_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
case SAT_WRITE_FPDMA_QUEUED:
DenomTL = 0xFFFF;
break;
default:
TI_DBG1(("satWrite_1: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
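/*
chunking sketch: every non-final pass transfers DenomTL sectors and the
final pass transfers Remainder sectors; e.g. OrgTL = 0x150 with
DenomTL = 0xFF gives one pass of 0xFF sectors and a final pass of
0x51 (= 0x150 % 0xFF) sectors
*/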
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF); /* LSB */
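/*
example of the byte split above: lba = 0x0A1B2C3D yields LBA[0] = 0x0A,
LBA[1] = 0x1B, LBA[2] = 0x2C, LBA[3] = 0x3D (MSB first)
*/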
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_DMA_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_FPDMA_QUEUED:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->h.features = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.featuresExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->h.features = 0xFF; /* FIS sector count (7:0) */
fis->d.featuresExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
break;
default:
TI_DBG1(("satWrite_1: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &satChainedDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satWrite_1: return\n"));
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI WRITE6.
*
* SAT implementation for SCSI WRITE6 and send FIS request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWrite6(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit16 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWrite6: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite6: return control\n"));
return tiSuccess;
}
/* cdb6; computing LBA and transfer length */
lba = (((scsiCmnd->cdb[1]) & 0x1f) << (8*2))
+ (scsiCmnd->cdb[2] << 8) + scsiCmnd->cdb[3];
tl = scsiCmnd->cdb[4];
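/*
 * Illustrative decoding: for CDB bytes 0x0A 0x01 0x23 0x45 0x08 0x00,
 * lba = ((0x01 & 0x1f) << 16) + (0x23 << 8) + 0x45 = 0x012345 and tl = 8.
 * A transfer length of zero in a (6)-byte CDB means 256 blocks; the
 * non-EXT cases below approximate this with 0xFF as a temporary fix.
 */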
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWrite6: return LBA out of range\n"));
return tiSuccess;
}
}
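/*
 * The FIS below is built in precedence order: a 28-bit write first
 * (cases 1 and 2), overwritten by the 48-bit EXT variants when the
 * device supports them (cases 3 and 4), and finally by WRITE FPDMA
 * QUEUED on NCQ-capable devices (case 5).
 */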
/* case 1 and 2 */
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
TI_DBG5(("satWrite6: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 1 */
/* WRITE SECTORS for easier implementation */
TI_DBG5(("satWrite6: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (tl == 0)
{
/* temporary fix */
fis->d.sectorCount = 0xff; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT only */
TI_DBG5(("satWrite6: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWrite6: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
/* sanity check */
TI_DBG5(("satWrite6: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG5(("satWrite6: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->d.lbaLow = scsiCmnd->cdb[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = (bit8)((scsiCmnd->cdb[1]) & 0x1f); /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
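/*
 * In an NCQ (FPDMA) command the sector count travels in the FEATURES
 * field pair rather than SECTOR COUNT; SECTOR COUNT bits (7:3) carry
 * the queue tag, which the LL layer fills in below.
 */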
if (tl == 0)
{
/* sector count is 256, 0x100*/
fis->h.features = 0; /* FIS sector count (7:0) */
fis->d.featuresExp = 0x01; /* FIS sector count (15:8) */
}
else
{
fis->h.features = scsiCmnd->cdb[4]; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI TEST UNIT READY.
*
* SAT implementation for SCSI TUR and send FIS request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satTestUnitReady(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG6(("satTestUnitReady: entry tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satTestUnitReady: return control\n"));
return tiSuccess;
}
/* SAT revision 8, 8.11.2, p42*/
if (pSatDevData->satStopState == agTRUE)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_NOT_READY,
0,
SCSI_SNSCODE_LOGICAL_UNIT_NOT_READY_INITIALIZING_COMMAND_REQUIRED,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satTestUnitReady: stop state\n"));
return tiSuccess;
}
/*
* Check if format is in progress
*/
if (pSatDevData->satDriveState == SAT_DEV_STATE_FORMAT_IN_PROGRESS)
{
TI_DBG1(("satTestUnitReady() FORMAT_IN_PROGRESS tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_NOT_READY,
0,
SCSI_SNSCODE_LOGICAL_UNIT_NOT_READY_FORMAT_IN_PROGRESS,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satTestUnitReady: format in progress\n"));
return tiSuccess;
}
/*
check previously issued ATA command
*/
if (pSatDevData->satPendingIO != 0)
{
if (pSatDevData->satDeviceFaultState == agTRUE)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_LOGICAL_UNIT_FAILURE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satTestUnitReady: previous command ended in error\n"));
return tiSuccess;
}
}
/*
check removable media feature set
*/
if(pSatDevData->satRemovableMedia && pSatDevData->satRemovableMediaEnabled)
{
TI_DBG5(("satTestUnitReady: sending get media status cmnd\n"));
/* send GET MEDIA STATUS command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_GET_MEDIA_STATUS; /* 0xDA */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satTestUnitReadyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*
step 6) in SAT p42,
send ATA CHECK POWER MODE
*/
TI_DBG5(("satTestUnitReady: sending check power mode cmnd\n"));
status = satTestUnitReady_1( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satTestUnitReady_1.
*
* SAT implementation for SCSI satTestUnitReady_1.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satTestUnitReady_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
sends SAT_CHECK_POWER_MODE as a part of TEST UNIT READY
internally generated - no directly corresponding SCSI command
called in satIOCompleted as a part of satTestUnitReady(), SAT, revision 8, 8.11.2, p42
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
TI_DBG5(("satTestUnitReady_1: start\n"));
/*
* Send the ATA CHECK POWER MODE command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_CHECK_POWER_MODE; /* 0xE5 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satTestUnitReadyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satTestUnitReady_1: return\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReportLun.
*
* SAT implementation for SCSI satReportLun. Only LUN0 is reported.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satReportLun(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
scsiRspSense_t *pSense;
bit32 allocationLen;
bit32 reportLunLen;
scsiReportLun_t *pReportLun;
tiIniScsiCmnd_t *scsiCmnd;
TI_DBG5(("satReportLun entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
pReportLun = (scsiReportLun_t *) tiScsiRequest->sglVirtualAddr;
scsiCmnd = &tiScsiRequest->scsiCmnd;
// tdhexdump("satReportLun cdb", (bit8 *)scsiCmnd, 16);
/* Find the buffer size allocated by Initiator */
allocationLen = (((bit32)scsiCmnd->cdb[6]) << 24) |
(((bit32)scsiCmnd->cdb[7]) << 16) |
(((bit32)scsiCmnd->cdb[8]) << 8 ) |
(((bit32)scsiCmnd->cdb[9]) );
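/* Illustrative decoding: CDB bytes 6..9 of 00 00 00 10 give
 * allocationLen = 16 (big-endian). */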
reportLunLen = 16; /* 8 byte header and 8 bytes of LUN0 */
if (allocationLen < reportLunLen)
{
TI_DBG1(("satReportLun *** ERROR *** insufficient len=0x%x tiDeviceHandle=%p tiIORequest=%p\n",
reportLunLen, tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/* Set length to one entry */
pReportLun->len[0] = 0;
pReportLun->len[1] = 0;
pReportLun->len[2] = 0;
pReportLun->len[3] = sizeof (tiLUN_t);
pReportLun->reserved = 0;
/* Set to LUN 0:
* - address method to 0x00: Peripheral device addressing method,
* - bus identifier to 0
*/
pReportLun->lunList[0].lun[0] = 0;
pReportLun->lunList[0].lun[1] = 0;
pReportLun->lunList[0].lun[2] = 0;
pReportLun->lunList[0].lun[3] = 0;
pReportLun->lunList[0].lun[4] = 0;
pReportLun->lunList[0].lun[5] = 0;
pReportLun->lunList[0].lun[6] = 0;
pReportLun->lunList[0].lun[7] = 0;
if (allocationLen > reportLunLen)
{
/* underrun */
TI_DBG1(("satReportLun reporting underrun reportLunLen=0x%x allocationLen=0x%x \n", reportLunLen, allocationLen));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOUnderRun,
allocationLen - reportLunLen,
agNULL,
satIOContext->interruptContext );
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI REQUEST SENSE.
*
* SAT implementation for SCSI REQUEST SENSE.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRequestSense(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
SAT Rev 8 p38, Table25
sending SMART RETURN STATUS
Checking SMART Threshold Exceeded Condition is done in satRequestSenseCB()
Only fixed format sense data is supported. In other words, we don't support the DESC bit
being set in Request Sense
*/
bit32 status;
bit32 agRequestType;
scsiRspSense_t *pSense;
satDeviceData_t *pSatDevData;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
tdIORequestBody_t *tdIORequestBody;
satInternalIo_t *satIntIo = agNULL;
satIOContext_t *satIOContext2;
TI_DBG4(("satRequestSense entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = (scsiRspSense_t *) tiScsiRequest->sglVirtualAddr;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG4(("satRequestSense: pSatDevData=%p\n", pSatDevData));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRequestSense: return control\n"));
return tiSuccess;
}
/*
Only fixed format sense data is supported. In other words, we don't support the DESC bit
being set in Request Sense
*/
if ( scsiCmnd->cdb[1] & ATA_REMOVABLE_MEDIA_DEVICE_MASK )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satRequestSense: DESC bit is set, which we don't support\n"));
return tiSuccess;
}
if (pSatDevData->satSMARTEnabled == agTRUE)
{
/* sends SMART RETURN STATUS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_RETURN_STATUS; /* 0xB0 */
fis->h.features = 0xDA; /* FIS features */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satRequestSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG4(("satRequestSense: if return, status %d\n", status));
return (status);
}
else
{
/* allocate an iocontext for transmitting SAT_CHECK_POWER_MODE,
then call satRequestSense_1 */
TI_DBG4(("satRequestSense: before satIntIo %p\n", satIntIo));
/* allocate iocontext */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest, /* original request */
pSatDevData,
tiScsiRequest->scsiCmnd.expDataLength,
satIntIo);
TI_DBG4(("satRequestSense: after satIntIo %p\n", satIntIo));
if (satIntIo == agNULL)
{
/* memory allocation failure */
satFreeIntIoResource( tiRoot,
pSatDevData,
satIntIo);
/* failed during sending SMART RETURN STATUS */
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_HARDWARE_IMPENDING_FAILURE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
TI_DBG4(("satRequestSense: else fail 1\n"));
return tiSuccess;
} /* end of memory allocation failure */
/*
* Need to initialize all the fields within satIOContext except
* reqType and satCompleteCB which will be set depending on cmd.
*/
if (satIntIo == agNULL)
{
TI_DBG4(("satRequestSense: satIntIo is NULL\n"));
}
else
{
TI_DBG4(("satRequestSense: satIntIo is NOT NULL\n"));
}
/* remember the original TI IO request so the completion path can find it */
satIntIo->satOrgTiIORequest = tiIORequest;
tdIORequestBody = (tdIORequestBody_t *)satIntIo->satIntRequestBody;
satIOContext2 = &(tdIORequestBody->transport.SATA.satIOContext);
satIOContext2->pSatDevData = pSatDevData;
satIOContext2->pFis = &(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext2->pScsiCmnd = &(satIntIo->satIntTiScsiXchg.scsiCmnd);
satIOContext2->pSense = &(tdIORequestBody->transport.SATA.sensePayload);
satIOContext2->pTiSenseData = &(tdIORequestBody->transport.SATA.tiSenseData);
satIOContext2->pTiSenseData->senseData = satIOContext2->pSense;
satIOContext2->tiRequestBody = satIntIo->satIntRequestBody;
satIOContext2->interruptContext = satIOContext->interruptContext;
satIOContext2->satIntIoContext = satIntIo;
satIOContext2->ptiDeviceHandle = tiDeviceHandle;
satIOContext2->satOrgIOContext = satIOContext;
TI_DBG4(("satRequestSense: satIntIo->satIntTiScsiXchg.agSgl1.len %d\n", satIntIo->satIntTiScsiXchg.agSgl1.len));
TI_DBG4(("satRequestSense: satIntIo->satIntTiScsiXchg.agSgl1.upper %d\n", satIntIo->satIntTiScsiXchg.agSgl1.upper));
TI_DBG4(("satRequestSense: satIntIo->satIntTiScsiXchg.agSgl1.lower %d\n", satIntIo->satIntTiScsiXchg.agSgl1.lower));
TI_DBG4(("satRequestSense: satIntIo->satIntTiScsiXchg.agSgl1.type %d\n", satIntIo->satIntTiScsiXchg.agSgl1.type));
status = satRequestSense_1( tiRoot,
&(satIntIo->satIntTiIORequest),
tiDeviceHandle,
&(satIntIo->satIntTiScsiXchg),
satIOContext2);
if (status != tiSuccess)
{
satFreeIntIoResource( tiRoot,
pSatDevData,
satIntIo);
/* failed during sending SMART RETURN STATUS */
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_HARDWARE_IMPENDING_FAILURE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
agNULL,
satIOContext->interruptContext );
TI_DBG1(("satRequestSense: else fail 2\n"));
return tiSuccess;
}
TI_DBG4(("satRequestSense: else return success\n"));
return tiSuccess;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI REQUEST SENSE.
*
* SAT implementation for SCSI REQUEST SENSE.
* Sub function of satRequestSense
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRequestSense_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
sends SAT_CHECK_POWER_MODE
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
TI_DBG4(("satRequestSense_1 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/*
* Send the ATA CHECK POWER MODE command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_CHECK_POWER_MODE; /* 0xE5 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satRequestSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
TI_DBG4(("satRequestSense_1: agSgl1.len %d\n", tiScsiRequest->agSgl1.len));
TI_DBG4(("satRequestSense_1: agSgl1.upper %d\n", tiScsiRequest->agSgl1.upper));
TI_DBG4(("satRequestSense_1: agSgl1.lower %d\n", tiScsiRequest->agSgl1.lower));
TI_DBG4(("satRequestSense_1: agSgl1.type %d\n", tiScsiRequest->agSgl1.type));
// tdhexdump("satRequestSense_1", (bit8 *)fis, sizeof(agsaFisRegHostToDevice_t));
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY.
*
* SAT implementation for SCSI INQUIRY.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satInquiry(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
CMDDT bit is obsolete in SPC-3 and this is assumed in SAT revision 8
*/
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
satDeviceData_t *pSatDevData;
bit32 status;
TI_DBG5(("satInquiry: start\n"));
TI_DBG5(("satInquiry entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
TI_DBG5(("satInquiry: pSatDevData=%p\n", pSatDevData));
//tdhexdump("satInquiry", (bit8 *)scsiCmnd->cdb, 6);
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satInquiry: return control\n"));
return tiSuccess;
}
/* checking EVPD and Allocation Length */
/* SPC-4 spec 6.4 p141 */
/* EVPD bit == 0 && PAGE CODE != 0 */
if ( !(scsiCmnd->cdb[1] & SCSI_EVPD_MASK) &&
(scsiCmnd->cdb[2] != 0)
)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satInquiry: return EVPD and PAGE CODE\n"));
return tiSuccess;
}
TI_DBG6(("satInquiry: allocation length 0x%x %d\n", ((scsiCmnd->cdb[3]) << 8) + scsiCmnd->cdb[4], ((scsiCmnd->cdb[3]) << 8) + scsiCmnd->cdb[4]));
/* convert OS IO to TD internal IO */
if ( pSatDevData->IDDeviceValid == agFALSE)
{
status = satStartIDDev(
tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
TI_DBG6(("satInquiry: end status %d\n", status));
return status;
}
else
{
TI_DBG6(("satInquiry: calling satInquiryIntCB\n"));
satInquiryIntCB(
tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
return tiSuccess;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReadCapacity10.
*
* SAT implementation for SCSI satReadCapacity10.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satReadCapacity10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
bit8 *pVirtAddr;
satDeviceData_t *pSatDevData;
agsaSATAIdentifyData_t *pSATAIdData;
bit32 lastLba;
bit32 word117_118;
bit32 word117;
bit32 word118;
TI_DBG5(("satReadCapacity10: start: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
pVirtAddr = (bit8 *) tiScsiRequest->sglVirtualAddr;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
pSATAIdData = &pSatDevData->satIdentifyData;
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadCapacity10: return control\n"));
return tiSuccess;
}
/*
* If logical block address is not set to zero, return error
*/
if ((scsiCmnd->cdb[2] || scsiCmnd->cdb[3] || scsiCmnd->cdb[4] || scsiCmnd->cdb[5]))
{
TI_DBG1(("satReadCapacity10 *** ERROR *** logical address non zero, tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/*
* If PMI bit is not zero, return error
*/
if ( ((scsiCmnd->cdb[8]) & SCSI_READ_CAPACITY10_PMI_MASK) != 0 )
{
TI_DBG1(("satReadCapacity10 *** ERROR *** PMI is not zero, tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/*
filling in Read Capacity parameter data;
the saved identify device data has already been byte-swapped
See ATA spec p125 and p136 and SBC spec p54
*/
/*
* If 48-bit addressing is supported, set capacity information from Identify
* Device Word 100-103.
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/*
* Setting RETURNED LOGICAL BLOCK ADDRESS in READ CAPACITY(10) response data:
* SBC-2 specifies that if the capacity exceeded the 4-byte RETURNED LOGICAL
- * BLOCK ADDRESS in READ CAPACITY(10) parameter data, the the RETURNED LOGICAL
+ * BLOCK ADDRESS in READ CAPACITY(10) parameter data, the RETURNED LOGICAL
* BLOCK ADDRESS should be set to 0xFFFFFFFF so the application client would
* then issue a READ CAPACITY(16) command.
*/
/* ATA Identify Device information word 100 - 103 */
if ( (pSATAIdData->maxLBA32_47 != 0 ) || (pSATAIdData->maxLBA48_63 != 0))
{
pVirtAddr[0] = 0xFF; /* MSB number of block */
pVirtAddr[1] = 0xFF;
pVirtAddr[2] = 0xFF;
pVirtAddr[3] = 0xFF; /* LSB number of block */
TI_DBG1(("satReadCapacity10: returns 0xFFFFFFFF\n"));
}
else /* fits within the READ CAPACITY(10) 4-byte response length */
{
lastLba = (((pSATAIdData->maxLBA16_31) << 16) ) |
(pSATAIdData->maxLBA0_15);
lastLba = lastLba - 1; /* LBA starts from zero */
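/*
 * Example: an identify whose 48-bit max-LBA words 100-103 report
 * 0x10000 sectors yields a last LBA of 0xFFFF here (LBA is zero-based).
 */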
/*
for testing
lastLba = lastLba - (512*10) - 1;
*/
pVirtAddr[0] = (bit8)((lastLba >> 24) & 0xFF); /* MSB */
pVirtAddr[1] = (bit8)((lastLba >> 16) & 0xFF);
pVirtAddr[2] = (bit8)((lastLba >> 8) & 0xFF);
pVirtAddr[3] = (bit8)((lastLba ) & 0xFF); /* LSB */
TI_DBG3(("satReadCapacity10: lastLba is 0x%x %d\n", lastLba, lastLba));
TI_DBG3(("satReadCapacity10: LBA 0 is 0x%x %d\n", pVirtAddr[0], pVirtAddr[0]));
TI_DBG3(("satReadCapacity10: LBA 1 is 0x%x %d\n", pVirtAddr[1], pVirtAddr[1]));
TI_DBG3(("satReadCapacity10: LBA 2 is 0x%x %d\n", pVirtAddr[2], pVirtAddr[2]));
TI_DBG3(("satReadCapacity10: LBA 3 is 0x%x %d\n", pVirtAddr[3], pVirtAddr[3]));
}
}
/*
* For 28-bit addressing, set capacity information from Identify
* Device Word 60-61.
*/
else
{
/* ATA Identify Device information word 60 - 61 */
lastLba = (((pSATAIdData->numOfUserAddressableSectorsHi) << 16) ) |
(pSATAIdData->numOfUserAddressableSectorsLo);
lastLba = lastLba - 1; /* LBA starts from zero */
pVirtAddr[0] = (bit8)((lastLba >> 24) & 0xFF); /* MSB */
pVirtAddr[1] = (bit8)((lastLba >> 16) & 0xFF);
pVirtAddr[2] = (bit8)((lastLba >> 8) & 0xFF);
pVirtAddr[3] = (bit8)((lastLba ) & 0xFF); /* LSB */
}
/* SAT Rev 8d */
if (((pSATAIdData->word104_107[2]) & 0x1000) == 0)
{
TI_DBG5(("satReadCapacity10: Default Block Length is 512\n"));
/*
* Set the block size, fixed at 512 bytes.
*/
pVirtAddr[4] = 0x00; /* MSB block size in bytes */
pVirtAddr[5] = 0x00;
pVirtAddr[6] = 0x02;
pVirtAddr[7] = 0x00; /* LSB block size in bytes */
}
else
{
word118 = pSATAIdData->word112_126[6];
word117 = pSATAIdData->word112_126[5];
word117_118 = (word118 << 16) + word117;
word117_118 = word117_118 * 2;
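/*
 * Words 117-118 report the logical sector size in 16-bit words, so
 * doubling converts it to bytes; e.g. 0x100 words becomes 512 bytes.
 */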
pVirtAddr[4] = (bit8)((word117_118 >> 24) & 0xFF); /* MSB block size in bytes */
pVirtAddr[5] = (bit8)((word117_118 >> 16) & 0xFF);
pVirtAddr[6] = (bit8)((word117_118 >> 8) & 0xFF);
pVirtAddr[7] = (bit8)(word117_118 & 0xFF); /* LSB block size in bytes */
TI_DBG1(("satReadCapacity10: Nondefault word118 %d 0x%x \n", word118, word118));
TI_DBG1(("satReadCapacity10: Nondefault word117 %d 0x%x \n", word117, word117));
TI_DBG1(("satReadCapacity10: Nondefault Block Length is %d 0x%x \n",word117_118, word117_118));
}
/* fill in MAX LBA, which is used in satSendDiagnostic_1() */
pSatDevData->satMaxLBA[0] = 0; /* MSB */
pSatDevData->satMaxLBA[1] = 0;
pSatDevData->satMaxLBA[2] = 0;
pSatDevData->satMaxLBA[3] = 0;
pSatDevData->satMaxLBA[4] = pVirtAddr[0];
pSatDevData->satMaxLBA[5] = pVirtAddr[1];
pSatDevData->satMaxLBA[6] = pVirtAddr[2];
pSatDevData->satMaxLBA[7] = pVirtAddr[3]; /* LSB */
TI_DBG4(("satReadCapacity10 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x , tiDeviceHandle=%p tiIORequest=%p\n",
pVirtAddr[0], pVirtAddr[1], pVirtAddr[2], pVirtAddr[3],
pVirtAddr[4], pVirtAddr[5], pVirtAddr[6], pVirtAddr[7],
tiDeviceHandle, tiIORequest));
/*
* Send the completion response now.
*/
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReadCapacity16.
*
* SAT implementation for SCSI satReadCapacity16.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satReadCapacity16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
bit8 *pVirtAddr;
satDeviceData_t *pSatDevData;
agsaSATAIdentifyData_t *pSATAIdData;
bit32 lastLbaLo;
bit32 allocationLen;
bit32 readCapacityLen = 32;
bit32 i = 0;
TI_DBG5(("satReadCapacity16 start: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
pVirtAddr = (bit8 *) tiScsiRequest->sglVirtualAddr;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
pSATAIdData = &pSatDevData->satIdentifyData;
/* Find the buffer size allocated by Initiator */
allocationLen = (((bit32)scsiCmnd->cdb[10]) << 24) |
(((bit32)scsiCmnd->cdb[11]) << 16) |
(((bit32)scsiCmnd->cdb[12]) << 8 ) |
(((bit32)scsiCmnd->cdb[13]) );
if (allocationLen < readCapacityLen)
{
TI_DBG1(("satReadCapacity16 *** ERROR *** insufficient len=0x%x readCapacityLen=0x%x\n", allocationLen, readCapacityLen));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadCapacity16: return control\n"));
return tiSuccess;
}
/*
* If logical block address is not set to zero, return error
*/
if ((scsiCmnd->cdb[2] || scsiCmnd->cdb[3] || scsiCmnd->cdb[4] || scsiCmnd->cdb[5]) ||
(scsiCmnd->cdb[6] || scsiCmnd->cdb[7] || scsiCmnd->cdb[8] || scsiCmnd->cdb[9]) )
{
TI_DBG1(("satReadCapacity16 *** ERROR *** logical address non zero, tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/*
* If PMI bit is not zero, return error
*/
if ( ((scsiCmnd->cdb[14]) & SCSI_READ_CAPACITY16_PMI_MASK) != 0 )
{
TI_DBG1(("satReadCapacity16 *** ERROR *** PMI is not zero, tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/*
filling in Read Capacity parameter data
*/
/*
* If 48-bit addressing is supported, set capacity information from Identify
* Device Word 100-103.
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
pVirtAddr[0] = (bit8)(((pSATAIdData->maxLBA48_63) >> 8) & 0xff); /* MSB */
pVirtAddr[1] = (bit8)((pSATAIdData->maxLBA48_63) & 0xff);
pVirtAddr[2] = (bit8)(((pSATAIdData->maxLBA32_47) >> 8) & 0xff);
pVirtAddr[3] = (bit8)((pSATAIdData->maxLBA32_47) & 0xff);
lastLbaLo = (((pSATAIdData->maxLBA16_31) << 16) ) | (pSATAIdData->maxLBA0_15);
lastLbaLo = lastLbaLo - 1; /* LBA starts from zero */
pVirtAddr[4] = (bit8)((lastLbaLo >> 24) & 0xFF);
pVirtAddr[5] = (bit8)((lastLbaLo >> 16) & 0xFF);
pVirtAddr[6] = (bit8)((lastLbaLo >> 8) & 0xFF);
pVirtAddr[7] = (bit8)((lastLbaLo ) & 0xFF); /* LSB */
}
/*
* For 28-bit addressing, set capacity information from Identify
* Device Word 60-61.
*/
else
{
pVirtAddr[0] = 0; /* MSB */
pVirtAddr[1] = 0;
pVirtAddr[2] = 0;
pVirtAddr[3] = 0;
lastLbaLo = (((pSATAIdData->numOfUserAddressableSectorsHi) << 16) ) |
(pSATAIdData->numOfUserAddressableSectorsLo);
lastLbaLo = lastLbaLo - 1; /* LBA starts from zero */
pVirtAddr[4] = (bit8)((lastLbaLo >> 24) & 0xFF);
pVirtAddr[5] = (bit8)((lastLbaLo >> 16) & 0xFF);
pVirtAddr[6] = (bit8)((lastLbaLo >> 8) & 0xFF);
pVirtAddr[7] = (bit8)((lastLbaLo ) & 0xFF); /* LSB */
}
/*
* Set the block size, fixed at 512 bytes.
*/
pVirtAddr[8] = 0x00; /* MSB block size in bytes */
pVirtAddr[9] = 0x00;
pVirtAddr[10] = 0x02;
pVirtAddr[11] = 0x00; /* LSB block size in bytes */
/* fill in MAX LBA, which is used in satSendDiagnostic_1() */
pSatDevData->satMaxLBA[0] = pVirtAddr[0]; /* MSB */
pSatDevData->satMaxLBA[1] = pVirtAddr[1];
pSatDevData->satMaxLBA[2] = pVirtAddr[2];
pSatDevData->satMaxLBA[3] = pVirtAddr[3];
pSatDevData->satMaxLBA[4] = pVirtAddr[4];
pSatDevData->satMaxLBA[5] = pVirtAddr[5];
pSatDevData->satMaxLBA[6] = pVirtAddr[6];
pSatDevData->satMaxLBA[7] = pVirtAddr[7]; /* LSB */
TI_DBG5(("satReadCapacity16 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x , tiDeviceHandle=%p tiIORequest=%p\n",
pVirtAddr[0], pVirtAddr[1], pVirtAddr[2], pVirtAddr[3],
pVirtAddr[4], pVirtAddr[5], pVirtAddr[6], pVirtAddr[7],
pVirtAddr[8], pVirtAddr[9], pVirtAddr[10], pVirtAddr[11],
tiDeviceHandle, tiIORequest));
for(i=12;i<=31;i++)
{
pVirtAddr[i] = 0x00;
}
/*
* Send the completion response now.
*/
if (allocationLen > readCapacityLen)
{
/* underrun */
TI_DBG1(("satReadCapacity16 reporting underrun readCapacityLen=0x%x allocationLen=0x%x \n", readCapacityLen, allocationLen));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOUnderRun,
allocationLen - readCapacityLen,
agNULL,
satIOContext->interruptContext );
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI MODE SENSE (6).
*
* SAT implementation for SCSI MODE SENSE (6).
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satModeSense6(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
scsiRspSense_t *pSense;
bit32 requestLen;
tiIniScsiCmnd_t *scsiCmnd;
bit32 pageSupported;
bit8 page;
bit8 *pModeSense; /* Mode Sense data buffer */
satDeviceData_t *pSatDevData;
bit8 PC;
bit8 AllPages[MODE_SENSE6_RETURN_ALL_PAGES_LEN];
bit8 Control[MODE_SENSE6_CONTROL_PAGE_LEN];
bit8 RWErrorRecovery[MODE_SENSE6_READ_WRITE_ERROR_RECOVERY_PAGE_LEN];
bit8 Caching[MODE_SENSE6_CACHING_LEN];
bit8 InfoExceptionCtrl[MODE_SENSE6_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN];
bit8 lenRead = 0;
TI_DBG5(("satModeSense6 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pModeSense = (bit8 *) tiScsiRequest->sglVirtualAddr;
pSatDevData = satIOContext->pSatDevData;
//tdhexdump("satModeSense6", (bit8 *)scsiCmnd->cdb, 6);
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satModeSense6: return control\n"));
return tiSuccess;
}
/* checking PC (Page Control)
SAT revision 8, 8.5.3 p33 and 10.1.2, p66
*/
PC = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE6_PC_MASK);
if (PC != 0)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satModeSense6: return due to PC value pc 0x%x\n", PC >> 6));
return tiSuccess;
}
/* reading PAGE CODE */
page = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE6_PAGE_CODE_MASK);
TI_DBG5(("satModeSense6: page=0x%x, tiDeviceHandle=%p tiIORequest=%p\n",
page, tiDeviceHandle, tiIORequest));
requestLen = scsiCmnd->cdb[4];
/*
Based on the page code value, return the corresponding mode page
note: no support for subpages
*/
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES:
case MODESENSE_CONTROL_PAGE: /* control */
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
case MODESENSE_CACHING: /* caching */
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
pageSupported = agTRUE;
break;
case MODESENSE_VENDOR_SPECIFIC_PAGE: /* vendor specific */
default:
pageSupported = agFALSE;
break;
}
if (pageSupported == agFALSE)
{
TI_DBG1(("satModeSense6 *** ERROR *** not supported page 0x%x tiDeviceHandle=%p tiIORequest=%p\n",
page, tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
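/*
 * Truncate the response to the smaller of the CDB allocation length
 * and the size of the requested mode page.
 */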
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES:
lenRead = (bit8)MIN(requestLen, MODE_SENSE6_RETURN_ALL_PAGES_LEN);
break;
case MODESENSE_CONTROL_PAGE: /* control */
lenRead = (bit8)MIN(requestLen, MODE_SENSE6_CONTROL_PAGE_LEN);
break;
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
lenRead = (bit8)MIN(requestLen, MODE_SENSE6_READ_WRITE_ERROR_RECOVERY_PAGE_LEN);
break;
case MODESENSE_CACHING: /* caching */
lenRead = (bit8)MIN(requestLen, MODE_SENSE6_CACHING_LEN);
break;
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
lenRead = (bit8)MIN(requestLen, MODE_SENSE6_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN);
break;
default:
TI_DBG1(("satModeSense6: default error page %d\n", page));
break;
}
if (page == MODESENSE_RETURN_ALL_PAGES)
{
TI_DBG5(("satModeSense6: MODESENSE_RETURN_ALL_PAGES\n"));
AllPages[0] = (bit8)(lenRead - 1);
AllPages[1] = 0x00; /* default medium type (currently mounted medium type) */
AllPages[2] = 0x00; /* no write-protect, no support for DPO-FUA */
AllPages[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
AllPages[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
AllPages[5] = 0x00; /* unspecified */
AllPages[6] = 0x00; /* unspecified */
AllPages[7] = 0x00; /* unspecified */
/* reserved */
AllPages[8] = 0x00; /* reserved */
/* Block size */
AllPages[9] = 0x00;
AllPages[10] = 0x02; /* Block size is always 512 bytes */
AllPages[11] = 0x00;
/* MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE */
AllPages[12] = 0x01; /* page code */
AllPages[13] = 0x0A; /* page length */
AllPages[14] = 0x40; /* ARRE is set */
AllPages[15] = 0x00;
AllPages[16] = 0x00;
AllPages[17] = 0x00;
AllPages[18] = 0x00;
AllPages[19] = 0x00;
AllPages[20] = 0x00;
AllPages[21] = 0x00;
AllPages[22] = 0x00;
AllPages[23] = 0x00;
/* MODESENSE_CACHING */
AllPages[24] = 0x08; /* page code */
AllPages[25] = 0x12; /* page length */
#ifdef NOT_YET
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
AllPages[26] = 0x04;/* WCE bit is set */
}
else
{
AllPages[26] = 0x00;/* WCE bit is NOT set */
}
#endif
AllPages[26] = 0x00;/* WCE bit is NOT set */
AllPages[27] = 0x00;
AllPages[28] = 0x00;
AllPages[29] = 0x00;
AllPages[30] = 0x00;
AllPages[31] = 0x00;
AllPages[32] = 0x00;
AllPages[33] = 0x00;
AllPages[34] = 0x00;
AllPages[35] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
AllPages[36] = 0x00;/* DRA bit is NOT set */
}
else
{
AllPages[36] = 0x20;/* DRA bit is set */
}
AllPages[37] = 0x00;
AllPages[38] = 0x00;
AllPages[39] = 0x00;
AllPages[40] = 0x00;
AllPages[41] = 0x00;
AllPages[42] = 0x00;
AllPages[43] = 0x00;
/* MODESENSE_CONTROL_PAGE */
AllPages[44] = 0x0A; /* page code */
AllPages[45] = 0x0A; /* page length */
AllPages[46] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
AllPages[47] = 0x12; /* Queue Algorithm Modifier 1b and QErr 01b */
}
else
{
AllPages[47] = 0x02; /* Queue Algorithm Modifier 0b and QErr 01b */
}
AllPages[48] = 0x00;
AllPages[49] = 0x00;
AllPages[50] = 0x00; /* obsolete */
AllPages[51] = 0x00; /* obsolete */
AllPages[52] = 0xFF; /* Busy Timeout Period */
AllPages[53] = 0xFF; /* Busy Timeout Period */
AllPages[54] = 0x00; /* we don't support non-000b value for the self-test code */
AllPages[55] = 0x00; /* we don't support non-000b value for the self-test code */
/* MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE */
AllPages[56] = 0x1C; /* page code */
AllPages[57] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
AllPages[58] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
AllPages[58] = 0x08;/* DEXCPT bit is set */
}
AllPages[59] = 0x00; /* We don't support MRIE */
AllPages[60] = 0x00; /* Interval timer vendor-specific */
AllPages[61] = 0x00;
AllPages[62] = 0x00;
AllPages[63] = 0x00;
AllPages[64] = 0x00; /* REPORT-COUNT */
AllPages[65] = 0x00;
AllPages[66] = 0x00;
AllPages[67] = 0x00;
osti_memcpy(pModeSense, &AllPages, lenRead);
}
else if (page == MODESENSE_CONTROL_PAGE)
{
TI_DBG5(("satModeSense6: MODESENSE_CONTROL_PAGE\n"));
Control[0] = MODE_SENSE6_CONTROL_PAGE_LEN - 1;
Control[1] = 0x00; /* default medium type (currently mounted medium type) */
Control[2] = 0x00; /* no write-protect, no support for DPO-FUA */
Control[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
Control[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Control[5] = 0x00; /* unspecified */
Control[6] = 0x00; /* unspecified */
Control[7] = 0x00; /* unspecified */
/* reserved */
Control[8] = 0x00; /* reserved */
/* Block size */
Control[9] = 0x00;
Control[10] = 0x02; /* Block size is always 512 bytes */
Control[11] = 0x00;
/*
* Fill-up control mode page, SAT, Table 65
*/
Control[12] = 0x0A; /* page code */
Control[13] = 0x0A; /* page length */
Control[14] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
Control[15] = 0x12; /* Queue Algorithm Modifier 1b and QErr 01b */
}
else
{
Control[15] = 0x02; /* Queue Algorithm Modifier 0b and QErr 01b */
}
Control[16] = 0x00;
Control[17] = 0x00;
Control[18] = 0x00; /* obsolete */
Control[19] = 0x00; /* obsolete */
Control[20] = 0xFF; /* Busy Timeout Period */
Control[21] = 0xFF; /* Busy Timeout Period */
Control[22] = 0x00; /* we don't support non-000b value for the self-test code */
Control[23] = 0x00; /* we don't support non-000b value for the self-test code */
osti_memcpy(pModeSense, &Control, lenRead);
}
else if (page == MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE)
{
TI_DBG5(("satModeSense6: MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE\n"));
RWErrorRecovery[0] = MODE_SENSE6_READ_WRITE_ERROR_RECOVERY_PAGE_LEN - 1;
RWErrorRecovery[1] = 0x00; /* default medium type (currently mounted medium type) */
RWErrorRecovery[2] = 0x00; /* no write-protect, no support for DPO-FUA */
RWErrorRecovery[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
RWErrorRecovery[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
RWErrorRecovery[5] = 0x00; /* unspecified */
RWErrorRecovery[6] = 0x00; /* unspecified */
RWErrorRecovery[7] = 0x00; /* unspecified */
/* reserved */
RWErrorRecovery[8] = 0x00; /* reserved */
/* Block size */
RWErrorRecovery[9] = 0x00;
RWErrorRecovery[10] = 0x02; /* Block size is always 512 bytes */
RWErrorRecovery[11] = 0x00;
/*
* Fill-up Read-Write Error Recovery mode page, SAT, Table 66
*/
RWErrorRecovery[12] = 0x01; /* page code */
RWErrorRecovery[13] = 0x0A; /* page length */
RWErrorRecovery[14] = 0x40; /* ARRE is set */
RWErrorRecovery[15] = 0x00;
RWErrorRecovery[16] = 0x00;
RWErrorRecovery[17] = 0x00;
RWErrorRecovery[18] = 0x00;
RWErrorRecovery[19] = 0x00;
RWErrorRecovery[20] = 0x00;
RWErrorRecovery[21] = 0x00;
RWErrorRecovery[22] = 0x00;
RWErrorRecovery[23] = 0x00;
osti_memcpy(pModeSense, &RWErrorRecovery, lenRead);
}
else if (page == MODESENSE_CACHING)
{
TI_DBG5(("satModeSense6: MODESENSE_CACHING\n"));
/* special case */
if (requestLen == 4 && page == MODESENSE_CACHING)
{
TI_DBG5(("satModeSense6: linux 2.6.8.24 support\n"));
pModeSense[0] = 0x20 - 1; /* 32 - 1 */
pModeSense[1] = 0x00; /* default medium type (currently mounted medium type) */
pModeSense[2] = 0x00; /* no write-protect, no support for DPO-FUA */
pModeSense[3] = 0x08; /* block descriptor length */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
Caching[0] = MODE_SENSE6_CACHING_LEN - 1;
Caching[1] = 0x00; /* default medium type (currently mounted medium type) */
Caching[2] = 0x00; /* no write-protect, no support for DPO-FUA */
Caching[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
Caching[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Caching[5] = 0x00; /* unspecified */
Caching[6] = 0x00; /* unspecified */
Caching[7] = 0x00; /* unspecified */
/* reserved */
Caching[8] = 0x00; /* reserved */
/* Block size */
Caching[9] = 0x00;
Caching[10] = 0x02; /* Block size is always 512 bytes */
Caching[11] = 0x00;
/*
* Fill-up Caching mode page, SAT, Table 67
*/
/* length 20 */
Caching[12] = 0x08; /* page code */
Caching[13] = 0x12; /* page length */
#ifdef NOT_YET
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
Caching[14] = 0x04;/* WCE bit is set */
}
else
{
Caching[14] = 0x00;/* WCE bit is NOT set */
}
#endif
Caching[14] = 0x00;/* WCE bit is NOT set */
Caching[15] = 0x00;
Caching[16] = 0x00;
Caching[17] = 0x00;
Caching[18] = 0x00;
Caching[19] = 0x00;
Caching[20] = 0x00;
Caching[21] = 0x00;
Caching[22] = 0x00;
Caching[23] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
Caching[24] = 0x00;/* DRA bit is NOT set */
}
else
{
Caching[24] = 0x20;/* DRA bit is set */
}
Caching[25] = 0x00;
Caching[26] = 0x00;
Caching[27] = 0x00;
Caching[28] = 0x00;
Caching[29] = 0x00;
Caching[30] = 0x00;
Caching[31] = 0x00;
osti_memcpy(pModeSense, &Caching, lenRead);
}
else if (page == MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE)
{
TI_DBG5(("satModeSense6: MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE\n"));
InfoExceptionCtrl[0] = MODE_SENSE6_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN - 1;
InfoExceptionCtrl[1] = 0x00; /* default medium type (currently mounted medium type) */
InfoExceptionCtrl[2] = 0x00; /* no write-protect, no support for DPO-FUA */
InfoExceptionCtrl[3] = 0x08; /* block descriptor length */
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
/* density code */
InfoExceptionCtrl[4] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
InfoExceptionCtrl[5] = 0x00; /* unspecified */
InfoExceptionCtrl[6] = 0x00; /* unspecified */
InfoExceptionCtrl[7] = 0x00; /* unspecified */
/* reserved */
InfoExceptionCtrl[8] = 0x00; /* reserved */
/* Block size */
InfoExceptionCtrl[9] = 0x00;
InfoExceptionCtrl[10] = 0x02; /* Block size is always 512 bytes */
InfoExceptionCtrl[11] = 0x00;
/*
* Fill-up informational-exceptions control mode page, SAT, Table 68
*/
InfoExceptionCtrl[12] = 0x1C; /* page code */
InfoExceptionCtrl[13] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
InfoExceptionCtrl[14] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
InfoExceptionCtrl[14] = 0x08;/* DEXCPT bit is set */
}
InfoExceptionCtrl[15] = 0x00; /* We don't support MRIE */
InfoExceptionCtrl[16] = 0x00; /* Interval timer vendor-specific */
InfoExceptionCtrl[17] = 0x00;
InfoExceptionCtrl[18] = 0x00;
InfoExceptionCtrl[19] = 0x00;
InfoExceptionCtrl[20] = 0x00; /* REPORT-COUNT */
InfoExceptionCtrl[21] = 0x00;
InfoExceptionCtrl[22] = 0x00;
InfoExceptionCtrl[23] = 0x00;
osti_memcpy(pModeSense, &InfoExceptionCtrl, lenRead);
}
else
{
/* Error */
TI_DBG1(("satModeSense6: Error page %d\n", page));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/* error cases returned above; from here only an underrun (never an overrun) is possible */
if (requestLen > lenRead)
{
TI_DBG6(("satModeSense6 reporting underrun lenRead=0x%x requestLen=0x%x tiIORequest=%p\n", lenRead, requestLen, tiIORequest));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOUnderRun,
requestLen - lenRead,
agNULL,
satIOContext->interruptContext );
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return tiSuccess;
}
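/*
 * Illustrative sketch (not part of the driver): every page branch in
 * satModeSense6() above begins with the same 4-byte mode parameter header
 * followed by the same 8-byte short block descriptor (SAT, Table 19).
 * The hypothetical helper below collects that shared 12-byte prefix in one
 * place; the function name and the SAT_DOC_SKETCH guard are assumptions
 * made for illustration only.
 */
#ifdef SAT_DOC_SKETCH
static void satSketchModeSense6Prefix(bit8 *buf, bit8 totalLen)
{
buf[0] = (bit8)(totalLen - 1); /* MODE DATA LENGTH excludes itself */
buf[1] = 0x00; /* medium type: default (currently mounted) */
buf[2] = 0x00; /* no write-protect, no DPO/FUA support */
buf[3] = 0x08; /* block descriptor length: 8 */
buf[4] = 0x04; /* density code: reserved for direct-access */
buf[5] = 0x00; /* number of blocks: unspecified */
buf[6] = 0x00;
buf[7] = 0x00;
buf[8] = 0x00; /* reserved */
buf[9] = 0x00; /* block size 0x000200 = 512 bytes */
buf[10] = 0x02;
buf[11] = 0x00;
}
#endif /* SAT_DOC_SKETCH */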
/*****************************************************************************/
/*! \brief SAT implementation for SCSI MODE SENSE (10).
*
* SAT implementation for SCSI MODE SENSE (10).
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satModeSense10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
scsiRspSense_t *pSense;
bit32 requestLen;
tiIniScsiCmnd_t *scsiCmnd;
bit32 pageSupported;
bit8 page;
bit8 *pModeSense; /* Mode Sense data buffer */
satDeviceData_t *pSatDevData;
bit8 PC; /* page control */
bit8 LLBAA; /* Long LBA Accepted */
bit32 index;
bit8 AllPages[MODE_SENSE10_RETURN_ALL_PAGES_LLBAA_LEN];
bit8 Control[MODE_SENSE10_CONTROL_PAGE_LLBAA_LEN];
bit8 RWErrorRecovery[MODE_SENSE10_READ_WRITE_ERROR_RECOVERY_PAGE_LLBAA_LEN];
bit8 Caching[MODE_SENSE10_CACHING_LLBAA_LEN];
bit8 InfoExceptionCtrl[MODE_SENSE10_INFORMATION_EXCEPTION_CONTROL_PAGE_LLBAA_LEN];
bit8 lenRead = 0;
TI_DBG5(("satModeSense10 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pModeSense = (bit8 *) tiScsiRequest->sglVirtualAddr;
pSatDevData = satIOContext->pSatDevData;
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satModeSense10: return control\n"));
return tiSuccess;
}
/* checking PC(Page Control)
SAT revision 8, 8.5.3 p33 and 10.1.2, p66
*/
PC = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE10_PC_MASK);
if (PC != 0)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satModeSense10: return due to PC value pc 0x%x\n", PC));
return tiSuccess;
}
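/*
 * Only PC = 00b (current values) is supported; saved, changeable, or
 * default page control requests (cdb[2] bits 7:6 nonzero) are rejected
 * above with ILLEGAL REQUEST / INVALID FIELD IN CDB.
 */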
/* finding LLBAA bit */
LLBAA = (bit8)((scsiCmnd->cdb[1]) & SCSI_MODE_SENSE10_LLBAA_MASK);
/* reading PAGE CODE */
page = (bit8)((scsiCmnd->cdb[2]) & SCSI_MODE_SENSE10_PAGE_CODE_MASK);
TI_DBG5(("satModeSense10: page=0x%x, tiDeviceHandle=%p tiIORequest=%p\n",
page, tiDeviceHandle, tiIORequest));
requestLen = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
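/*
 * Illustrative note: the MODE SENSE (10) ALLOCATION LENGTH is a big-endian
 * 16-bit field split across cdb[7] (MSB) and cdb[8] (LSB); e.g.
 * cdb[7] = 0x00 and cdb[8] = 0x24 yield requestLen = (0x00 << 8) + 0x24 = 36.
 */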
/*
Based on page code value, returns a corresponding mode page
note: no support for subpage
*/
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES: /* return all pages */
case MODESENSE_CONTROL_PAGE: /* control */
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
case MODESENSE_CACHING: /* caching */
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
pageSupported = agTRUE;
break;
case MODESENSE_VENDOR_SPECIFIC_PAGE: /* vendor specific */
default:
pageSupported = agFALSE;
break;
}
if (pageSupported == agFALSE)
{
TI_DBG1(("satModeSense10 *** ERROR *** not supported page 0x%x tiDeviceHandle=%p tiIORequest=%p\n",
page, tiDeviceHandle, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
switch(page)
{
case MODESENSE_RETURN_ALL_PAGES:
if (LLBAA)
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_RETURN_ALL_PAGES_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_RETURN_ALL_PAGES_LEN);
}
break;
case MODESENSE_CONTROL_PAGE: /* control */
if (LLBAA)
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_CONTROL_PAGE_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_CONTROL_PAGE_LEN);
}
break;
case MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE: /* Read-Write Error Recovery */
if (LLBAA)
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_READ_WRITE_ERROR_RECOVERY_PAGE_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_READ_WRITE_ERROR_RECOVERY_PAGE_LEN);
}
break;
case MODESENSE_CACHING: /* caching */
if (LLBAA)
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_CACHING_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_CACHING_LEN);
}
break;
case MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE: /* informational exceptions control*/
if (LLBAA)
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_INFORMATION_EXCEPTION_CONTROL_PAGE_LLBAA_LEN);
}
else
{
lenRead = (bit8)MIN(requestLen, MODE_SENSE10_INFORMATION_EXCEPTION_CONTROL_PAGE_LEN);
}
break;
default:
TI_DBG1(("satModeSense10: default error page %d\n", page));
break;
}
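/*
 * lenRead is the smaller of the ALLOCATION LENGTH and the size of the
 * selected page (with or without the LONGLBA block descriptor); any
 * shortfall against requestLen is reported as an underrun at the end of
 * this routine.
 */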
if (page == MODESENSE_RETURN_ALL_PAGES)
{
TI_DBG5(("satModeSense10: MODESENSE_RETURN_ALL_PAGES\n"));
AllPages[0] = 0;
AllPages[1] = (bit8)(lenRead - 2);
AllPages[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
AllPages[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
AllPages[4] = 0x00; /* reserved and LONGLBA */
AllPages[4] = (bit8)(AllPages[4] | 0x1); /* LONGLBA is set */
}
else
{
AllPages[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
AllPages[5] = 0x00; /* reserved */
AllPages[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
AllPages[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
AllPages[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
AllPages[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
AllPages[9] = 0x00; /* unspecified */
AllPages[10] = 0x00; /* unspecified */
AllPages[11] = 0x00; /* unspecified */
AllPages[12] = 0x00; /* unspecified */
AllPages[13] = 0x00; /* unspecified */
AllPages[14] = 0x00; /* unspecified */
AllPages[15] = 0x00; /* unspecified */
/* reserved */
AllPages[16] = 0x00; /* reserved */
AllPages[17] = 0x00; /* reserved */
AllPages[18] = 0x00; /* reserved */
AllPages[19] = 0x00; /* reserved */
/* Block size */
AllPages[20] = 0x00;
AllPages[21] = 0x00;
AllPages[22] = 0x02; /* Block size is always 512 bytes */
AllPages[23] = 0x00;
}
else
{
/* density code */
AllPages[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
AllPages[9] = 0x00; /* unspecified */
AllPages[10] = 0x00; /* unspecified */
AllPages[11] = 0x00; /* unspecified */
/* reserved */
AllPages[12] = 0x00; /* reserved */
/* Block size */
AllPages[13] = 0x00;
AllPages[14] = 0x02; /* Block size is always 512 bytes */
AllPages[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/* MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE */
AllPages[index+0] = 0x01; /* page code */
AllPages[index+1] = 0x0A; /* page length */
AllPages[index+2] = 0x40; /* ARRE is set */
AllPages[index+3] = 0x00;
AllPages[index+4] = 0x00;
AllPages[index+5] = 0x00;
AllPages[index+6] = 0x00;
AllPages[index+7] = 0x00;
AllPages[index+8] = 0x00;
AllPages[index+9] = 0x00;
AllPages[index+10] = 0x00;
AllPages[index+11] = 0x00;
/* MODESENSE_CACHING */
/*
* Fill-up Caching mode page, SAT, Table 67
*/
/* length 20 */
AllPages[index+12] = 0x08; /* page code */
AllPages[index+13] = 0x12; /* page length */
#ifdef NOT_YET
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
AllPages[index+14] = 0x04;/* WCE bit is set */
}
else
{
AllPages[index+14] = 0x00;/* WCE bit is NOT set */
}
#endif
AllPages[index+14] = 0x00;/* WCE bit is NOT set */
AllPages[index+15] = 0x00;
AllPages[index+16] = 0x00;
AllPages[index+17] = 0x00;
AllPages[index+18] = 0x00;
AllPages[index+19] = 0x00;
AllPages[index+20] = 0x00;
AllPages[index+21] = 0x00;
AllPages[index+22] = 0x00;
AllPages[index+23] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
AllPages[index+24] = 0x00;/* DRA bit is NOT set */
}
else
{
AllPages[index+24] = 0x20;/* DRA bit is set */
}
AllPages[index+25] = 0x00;
AllPages[index+26] = 0x00;
AllPages[index+27] = 0x00;
AllPages[index+28] = 0x00;
AllPages[index+29] = 0x00;
AllPages[index+30] = 0x00;
AllPages[index+31] = 0x00;
/* MODESENSE_CONTROL_PAGE */
/*
* Fill-up control mode page, SAT, Table 65
*/
AllPages[index+32] = 0x0A; /* page code */
AllPages[index+33] = 0x0A; /* page length */
AllPages[index+34] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
AllPages[index+35] = 0x12; /* Queue Algorithm Modifier 1b and QErr 01b */
}
else
{
AllPages[index+35] = 0x02; /* Queue Algorithm Modifier 0b and QErr 01b */
}
AllPages[index+36] = 0x00;
AllPages[index+37] = 0x00;
AllPages[index+38] = 0x00; /* obsolete */
AllPages[index+39] = 0x00; /* obsolete */
AllPages[index+40] = 0xFF; /* Busy Timeout Period */
AllPages[index+41] = 0xFF; /* Busy Timeout Period */
AllPages[index+42] = 0x00; /* we don't support non-000b value for the self-test code */
AllPages[index+43] = 0x00; /* we don't support non-000b value for the self-test code */
/* MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE */
/*
* Fill-up informational-exceptions control mode page, SAT, Table 68
*/
AllPages[index+44] = 0x1C; /* page code */
AllPages[index+45] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
AllPages[index+46] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
AllPages[index+46] = 0x08;/* DEXCPT bit is set */
}
AllPages[index+47] = 0x00; /* We don't support MRIE */
AllPages[index+48] = 0x00; /* Interval timer vendor-specific */
AllPages[index+49] = 0x00;
AllPages[index+50] = 0x00;
AllPages[index+51] = 0x00;
AllPages[index+52] = 0x00; /* REPORT-COUNT */
AllPages[index+53] = 0x00;
AllPages[index+54] = 0x00;
AllPages[index+55] = 0x00;
osti_memcpy(pModeSense, &AllPages, lenRead);
}
else if (page == MODESENSE_CONTROL_PAGE)
{
TI_DBG5(("satModeSense10: MODESENSE_CONTROL_PAGE\n"));
Control[0] = 0;
Control[1] = (bit8)(lenRead - 2);
Control[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
Control[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
Control[4] = 0x00; /* reserved and LONGLBA */
Control[4] = (bit8)(Control[4] | 0x1); /* LONGLBA is set */
}
else
{
Control[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
Control[5] = 0x00; /* reserved */
Control[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
Control[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
Control[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
Control[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Control[9] = 0x00; /* unspecified */
Control[10] = 0x00; /* unspecified */
Control[11] = 0x00; /* unspecified */
Control[12] = 0x00; /* unspecified */
Control[13] = 0x00; /* unspecified */
Control[14] = 0x00; /* unspecified */
Control[15] = 0x00; /* unspecified */
/* reserved */
Control[16] = 0x00; /* reserved */
Control[17] = 0x00; /* reserved */
Control[18] = 0x00; /* reserved */
Control[19] = 0x00; /* reserved */
/* Block size */
Control[20] = 0x00;
Control[21] = 0x00;
Control[22] = 0x02; /* Block size is always 512 bytes */
Control[23] = 0x00;
}
else
{
/* density code */
Control[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Control[9] = 0x00; /* unspecified */
Control[10] = 0x00; /* unspecified */
Control[11] = 0x00; /* unspecified */
/* reserved */
Control[12] = 0x00; /* reserved */
/* Block size */
Control[13] = 0x00;
Control[14] = 0x02; /* Block size is always 512 bytes */
Control[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up control mode page, SAT, Table 65
*/
Control[index+0] = 0x0A; /* page code */
Control[index+1] = 0x0A; /* page length */
Control[index+2] = 0x02; /* only GLTSD bit is set */
if (pSatDevData->satNCQ == agTRUE)
{
Control[index+3] = 0x12; /* Queue Algorithm Modifier 1b and QErr 01b */
}
else
{
Control[index+3] = 0x02; /* Queue Algorithm Modifier 0b and QErr 01b */
}
Control[index+4] = 0x00;
Control[index+5] = 0x00;
Control[index+6] = 0x00; /* obsolete */
Control[index+7] = 0x00; /* obsolete */
Control[index+8] = 0xFF; /* Busy Timeout Period */
Control[index+9] = 0xFF; /* Busy Timeout Period */
Control[index+10] = 0x00; /* we don't support non-000b value for the self-test code */
Control[index+11] = 0x00; /* we don't support non-000b value for the self-test code */
osti_memcpy(pModeSense, &Control, lenRead);
}
else if (page == MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE)
{
TI_DBG5(("satModeSense10: MODESENSE_READ_WRITE_ERROR_RECOVERY_PAGE\n"));
RWErrorRecovery[0] = 0;
RWErrorRecovery[1] = (bit8)(lenRead - 2);
RWErrorRecovery[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
RWErrorRecovery[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
RWErrorRecovery[4] = 0x00; /* reserved and LONGLBA */
RWErrorRecovery[4] = (bit8)(RWErrorRecovery[4] | 0x1); /* LONGLBA is set */
}
else
{
RWErrorRecovery[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
RWErrorRecovery[5] = 0x00; /* reserved */
RWErrorRecovery[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
RWErrorRecovery[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
RWErrorRecovery[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
RWErrorRecovery[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
RWErrorRecovery[9] = 0x00; /* unspecified */
RWErrorRecovery[10] = 0x00; /* unspecified */
RWErrorRecovery[11] = 0x00; /* unspecified */
RWErrorRecovery[12] = 0x00; /* unspecified */
RWErrorRecovery[13] = 0x00; /* unspecified */
RWErrorRecovery[14] = 0x00; /* unspecified */
RWErrorRecovery[15] = 0x00; /* unspecified */
/* reserved */
RWErrorRecovery[16] = 0x00; /* reserved */
RWErrorRecovery[17] = 0x00; /* reserved */
RWErrorRecovery[18] = 0x00; /* reserved */
RWErrorRecovery[19] = 0x00; /* reserved */
/* Block size */
RWErrorRecovery[20] = 0x00;
RWErrorRecovery[21] = 0x00;
RWErrorRecovery[22] = 0x02; /* Block size is always 512 bytes */
RWErrorRecovery[23] = 0x00;
}
else
{
/* density code */
RWErrorRecovery[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
RWErrorRecovery[9] = 0x00; /* unspecified */
RWErrorRecovery[10] = 0x00; /* unspecified */
RWErrorRecovery[11] = 0x00; /* unspecified */
/* reserved */
RWErrorRecovery[12] = 0x00; /* reserved */
/* Block size */
RWErrorRecovery[13] = 0x00;
RWErrorRecovery[14] = 0x02; /* Block size is always 512 bytes */
RWErrorRecovery[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up Read-Write Error Recovery mode page, SAT, Table 66
*/
RWErrorRecovery[index+0] = 0x01; /* page code */
RWErrorRecovery[index+1] = 0x0A; /* page length */
RWErrorRecovery[index+2] = 0x40; /* ARRE is set */
RWErrorRecovery[index+3] = 0x00;
RWErrorRecovery[index+4] = 0x00;
RWErrorRecovery[index+5] = 0x00;
RWErrorRecovery[index+6] = 0x00;
RWErrorRecovery[index+7] = 0x00;
RWErrorRecovery[index+8] = 0x00;
RWErrorRecovery[index+9] = 0x00;
RWErrorRecovery[index+10] = 0x00;
RWErrorRecovery[index+11] = 0x00;
osti_memcpy(pModeSense, &RWErrorRecovery, lenRead);
}
else if (page == MODESENSE_CACHING)
{
TI_DBG5(("satModeSense10: MODESENSE_CACHING\n"));
Caching[0] = 0;
Caching[1] = (bit8)(lenRead - 2);
Caching[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
Caching[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
Caching[4] = 0x00; /* reserved and LONGLBA */
Caching[4] = (bit8)(Caching[4] | 0x1); /* LONGLBA is set */
}
else
{
Caching[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
Caching[5] = 0x00; /* reserved */
Caching[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
Caching[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
Caching[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
Caching[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Caching[9] = 0x00; /* unspecified */
Caching[10] = 0x00; /* unspecified */
Caching[11] = 0x00; /* unspecified */
Caching[12] = 0x00; /* unspecified */
Caching[13] = 0x00; /* unspecified */
Caching[14] = 0x00; /* unspecified */
Caching[15] = 0x00; /* unspecified */
/* reserved */
Caching[16] = 0x00; /* reserved */
Caching[17] = 0x00; /* reserved */
Caching[18] = 0x00; /* reserved */
Caching[19] = 0x00; /* reserved */
/* Block size */
Caching[20] = 0x00;
Caching[21] = 0x00;
Caching[22] = 0x02; /* Block size is always 512 bytes */
Caching[23] = 0x00;
}
else
{
/* density code */
Caching[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
Caching[9] = 0x00; /* unspecified */
Caching[10] = 0x00; /* unspecified */
Caching[11] = 0x00; /* unspecified */
/* reserved */
Caching[12] = 0x00; /* reserved */
/* Block size */
Caching[13] = 0x00;
Caching[14] = 0x02; /* Block size is always 512 bytes */
Caching[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up Caching mode page, SAT, Table 67
*/
/* length 20 */
Caching[index+0] = 0x08; /* page code */
Caching[index+1] = 0x12; /* page length */
#ifdef NOT_YET
if (pSatDevData->satWriteCacheEnabled == agTRUE)
{
Caching[index+2] = 0x04;/* WCE bit is set */
}
else
{
Caching[index+2] = 0x00;/* WCE bit is NOT set */
}
#endif
Caching[index+2] = 0x00;/* WCE bit is NOT set */
Caching[index+3] = 0x00;
Caching[index+4] = 0x00;
Caching[index+5] = 0x00;
Caching[index+6] = 0x00;
Caching[index+7] = 0x00;
Caching[index+8] = 0x00;
Caching[index+9] = 0x00;
Caching[index+10] = 0x00;
Caching[index+11] = 0x00;
if (pSatDevData->satLookAheadEnabled == agTRUE)
{
Caching[index+12] = 0x00;/* DRA bit is NOT set */
}
else
{
Caching[index+12] = 0x20;/* DRA bit is set */
}
Caching[index+13] = 0x00;
Caching[index+14] = 0x00;
Caching[index+15] = 0x00;
Caching[index+16] = 0x00;
Caching[index+17] = 0x00;
Caching[index+18] = 0x00;
Caching[index+19] = 0x00;
osti_memcpy(pModeSense, &Caching, lenRead);
}
else if (page == MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE)
{
TI_DBG5(("satModeSense10: MODESENSE_INFORMATION_EXCEPTION_CONTROL_PAGE\n"));
InfoExceptionCtrl[0] = 0;
InfoExceptionCtrl[1] = (bit8)(lenRead - 2);
InfoExceptionCtrl[2] = 0x00; /* medium type: default medium type (currently mounted medium type) */
InfoExceptionCtrl[3] = 0x00; /* device-specific param: no write-protect, no support for DPO-FUA */
if (LLBAA)
{
InfoExceptionCtrl[4] = 0x00; /* reserved and LONGLBA */
InfoExceptionCtrl[4] = (bit8)(InfoExceptionCtrl[4] | 0x1); /* LONGLBA is set */
}
else
{
InfoExceptionCtrl[4] = 0x00; /* reserved and LONGLBA: LONGLBA is not set */
}
InfoExceptionCtrl[5] = 0x00; /* reserved */
InfoExceptionCtrl[6] = 0x00; /* block descriptor length */
if (LLBAA)
{
InfoExceptionCtrl[7] = 0x10; /* block descriptor length: LONGLBA is set. So, length is 16 */
}
else
{
InfoExceptionCtrl[7] = 0x08; /* block descriptor length: LONGLBA is NOT set. So, length is 8 */
}
/*
* Fill-up direct-access device block-descriptor, SAT, Table 19
*/
if (LLBAA)
{
/* density code */
InfoExceptionCtrl[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
InfoExceptionCtrl[9] = 0x00; /* unspecified */
InfoExceptionCtrl[10] = 0x00; /* unspecified */
InfoExceptionCtrl[11] = 0x00; /* unspecified */
InfoExceptionCtrl[12] = 0x00; /* unspecified */
InfoExceptionCtrl[13] = 0x00; /* unspecified */
InfoExceptionCtrl[14] = 0x00; /* unspecified */
InfoExceptionCtrl[15] = 0x00; /* unspecified */
/* reserved */
InfoExceptionCtrl[16] = 0x00; /* reserved */
InfoExceptionCtrl[17] = 0x00; /* reserved */
InfoExceptionCtrl[18] = 0x00; /* reserved */
InfoExceptionCtrl[19] = 0x00; /* reserved */
/* Block size */
InfoExceptionCtrl[20] = 0x00;
InfoExceptionCtrl[21] = 0x00;
InfoExceptionCtrl[22] = 0x02; /* Block size is always 512 bytes */
InfoExceptionCtrl[23] = 0x00;
}
else
{
/* density code */
InfoExceptionCtrl[8] = 0x04; /* density-code : reserved for direct-access */
/* number of blocks */
InfoExceptionCtrl[9] = 0x00; /* unspecified */
InfoExceptionCtrl[10] = 0x00; /* unspecified */
InfoExceptionCtrl[11] = 0x00; /* unspecified */
/* reserved */
InfoExceptionCtrl[12] = 0x00; /* reserved */
/* Block size */
InfoExceptionCtrl[13] = 0x00;
InfoExceptionCtrl[14] = 0x02; /* Block size is always 512 bytes */
InfoExceptionCtrl[15] = 0x00;
}
if (LLBAA)
{
index = 24;
}
else
{
index = 16;
}
/*
* Fill-up informational-exceptions control mode page, SAT, Table 68
*/
InfoExceptionCtrl[index+0] = 0x1C; /* page code */
InfoExceptionCtrl[index+1] = 0x0A; /* page length */
if (pSatDevData->satSMARTEnabled == agTRUE)
{
InfoExceptionCtrl[index+2] = 0x00;/* DEXCPT bit is NOT set */
}
else
{
InfoExceptionCtrl[index+2] = 0x08;/* DEXCPT bit is set */
}
InfoExceptionCtrl[index+3] = 0x00; /* We don't support MRIE */
InfoExceptionCtrl[index+4] = 0x00; /* Interval timer vendor-specific */
InfoExceptionCtrl[index+5] = 0x00;
InfoExceptionCtrl[index+6] = 0x00;
InfoExceptionCtrl[index+7] = 0x00;
InfoExceptionCtrl[index+8] = 0x00; /* REPORT-COUNT */
InfoExceptionCtrl[index+9] = 0x00;
InfoExceptionCtrl[index+10] = 0x00;
InfoExceptionCtrl[index+11] = 0x00;
osti_memcpy(pModeSense, &InfoExceptionCtrl, lenRead);
}
else
{
/* Error */
TI_DBG1(("satModeSense10: Error page %d\n", page));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (requestLen > lenRead)
{
TI_DBG1(("satModeSense10 reporting underrun lenRead=0x%x requestLen=0x%x tiIORequest=%p\n", lenRead, requestLen, tiIORequest));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOUnderRun,
requestLen - lenRead,
agNULL,
satIOContext->interruptContext );
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return tiSuccess;
}
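/*
 * Illustrative sketch (not part of the driver): the MODE SENSE (10) data
 * built above uses an 8-byte mode parameter header followed by either a
 * 16-byte LONGLBA block descriptor (LLBAA set) or the 8-byte short form,
 * so the mode page itself starts at offset 24 or 16 respectively.  The
 * hypothetical helper below recomputes that offset; the name and the
 * SAT_DOC_SKETCH guard are illustrative assumptions only.
 */
#ifdef SAT_DOC_SKETCH
static bit32 satSketchModeSense10PageOffset(bit8 LLBAA)
{
/* 8-byte header plus a 16- or 8-byte block descriptor */
return LLBAA ? (8 + 16) : (8 + 8);
}
#endif /* SAT_DOC_SKETCH */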
/*****************************************************************************/
/*! \brief SAT implementation for SCSI VERIFY (10).
*
* SAT implementation for SCSI VERIFY (10).
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satVerify10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
For simple implementation,
no byte comparison supported as of 4/5/06
*/
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
TI_DBG5(("satVerify10 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/* checking BYTCHK */
if (scsiCmnd->cdb[1] & SCSI_VERIFY_BYTCHK_MASK)
{
/*
should do the byte check
but not supported in this version
*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satVerify10: no byte checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satVerify10: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = scsiCmnd->cdb[7]; /* MSB */
TL[3] = scsiCmnd->cdb[8]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
/* cdb10: computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satVerify10: return LBA out of range, not EXT\n"));
TI_DBG1(("satVerify10: cdb 0x%x 0x%x 0x%x 0x%x\n",scsiCmnd->cdb[2], scsiCmnd->cdb[3],
scsiCmnd->cdb[4], scsiCmnd->cdb[5]));
TI_DBG1(("satVerify10: lba 0x%x SAT_TR_LBA_LIMIT 0x%x\n", lba, SAT_TR_LBA_LIMIT));
return tiSuccess;
}
if (rangeChk) /* lba + tl > SAT_TR_LBA_LIMIT */
{
TI_DBG1(("satVerify10: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
TI_DBG5(("satVerify10: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
TI_DBG5(("satVerify10: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
TI_DBG1(("satVerify10: error case 1!!!\n"));
LoopNum = 1;
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satVerify10: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedVerifyCB;
}
else
{
TI_DBG1(("satVerify10: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
TI_DBG1(("satVerify10: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
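/*
 * Illustrative sketch (not part of the driver): when the requested transfer
 * length does not fit in the ATA sector-count field (8 bits, or 16 bits for
 * the EXT command), satVerify10/12/16 issue LoopNum chained READ VERIFY
 * SECTORS commands.  Assuming satComputeLoopNum() is a plain ceiling
 * division (an assumption; its definition is elsewhere), the chaining
 * count works out as below.
 */
#ifdef SAT_DOC_SKETCH
static bit32 satSketchLoopNum(bit32 tl, bit32 denom)
{
/* e.g. tl = 0x1FE sectors with denom = 0xFF gives 2 chained commands */
return (tl + denom - 1) / denom;
}
#endif /* SAT_DOC_SKETCH */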
GLOBAL bit32 satChainedVerify(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
satIOContext_t *satOrgIOContext = agNULL;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
TI_DBG2(("satChainedVerify: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
osti_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
DenomTL = 0xFF;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
default:
TI_DBG1(("satChainedVerify: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF); /* LSB */
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT; /* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
default:
TI_DBG1(("satChainedVerify: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &satChainedVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satChainedVerify: return\n"));
return (status);
}
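/*
 * Illustrative note: the LBA[] split above is a big-endian byte
 * decomposition of a 32-bit LBA.  For lba = 0x01020304:
 * (lba & 0xFF000000) >> 24 = 0x01 (MSB), (lba & 0xFF0000) >> 16 = 0x02,
 * (lba & 0xFF00) >> 8 = 0x03, and (lba & 0xFF) = 0x04 (LSB).
 */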
/*****************************************************************************/
/*! \brief SAT implementation for SCSI VERIFY (12).
*
* SAT implementation for SCSI VERIFY (12).
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satVerify12(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
For simple implementation,
no byte comparison supported as of 4/5/06
*/
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
TI_DBG5(("satVerify12 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/* checking BYTCHK */
if (scsiCmnd->cdb[1] & SCSI_VERIFY_BYTCHK_MASK)
{
/*
should do the byte check
but not supported in this version
*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satVerify12: no byte checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satVerify12: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = scsiCmnd->cdb[6]; /* MSB */
TL[1] = scsiCmnd->cdb[7];
TL[2] = scsiCmnd->cdb[8];
TL[3] = scsiCmnd->cdb[9]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
lba = satComputeCDB12LBA(satIOContext);
tl = satComputeCDB12TL(satIOContext);
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satVerify12: return LBA out of range, not EXT\n"));
TI_DBG1(("satVerify12: cdb 0x%x 0x%x 0x%x 0x%x\n",scsiCmnd->cdb[2], scsiCmnd->cdb[3],
scsiCmnd->cdb[4], scsiCmnd->cdb[5]));
TI_DBG1(("satVerify12: lba 0x%x SAT_TR_LBA_LIMIT 0x%x\n", lba, SAT_TR_LBA_LIMIT));
return tiSuccess;
}
if (rangeChk) /* lba + tl > SAT_TR_LBA_LIMIT */
{
TI_DBG1(("satVerify12: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
TI_DBG5(("satVerify12: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
TI_DBG5(("satVerify12: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
TI_DBG1(("satVerify12: error case 1!!!\n"));
LoopNum = 1;
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satVerify12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedVerifyCB;
}
else
{
TI_DBG1(("satVerify12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
TI_DBG1(("satVerify10: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
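/*
 * Illustrative note: VERIFY (12) carries a 32-bit LBA in cdb[2..5] and a
 * 32-bit VERIFICATION LENGTH in cdb[6..9], both big-endian; e.g.
 * cdb[6..9] = 00 00 01 00 requests 256 blocks.
 */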
/*****************************************************************************/
/*! \brief SAT implementation for SCSI VERIFY (16).
*
* SAT implementation for SCSI VERIFY (16).
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satVerify16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
For simple implementation,
no byte comparison supported as of 4/5/06
*/
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 rangeChk = agFALSE; /* lba and tl range check */
bit32 limitChk = agFALSE; /* lba limit check */
TI_DBG5(("satVerify16 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/* checking BYTCHK */
if (scsiCmnd->cdb[1] & SCSI_VERIFY_BYTCHK_MASK)
{
/*
should do the byte check
but not supported in this version
*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satVerify16: no byte checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satVerify16: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
rangeChk = satAddNComparebit64(LBA, TL);
limitChk = satCompareLBALimitbit(LBA);
lba = satComputeCDB16LBA(satIOContext);
tl = satComputeCDB16TL(satIOContext);
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (limitChk)
{
TI_DBG1(("satVerify16: return LBA out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (rangeChk) /* lba + tl > SAT_TR_LBA_LIMIT */
{
TI_DBG1(("satVerify16: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
TI_DBG5(("satVerify16: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
TI_DBG5(("satVerify12: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_SECTORS_EXT, SAT_READ_DMA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
TI_DBG1(("satVerify12: error case 1!!!\n"));
LoopNum = 1;
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satVerify12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedVerifyCB;
}
else
{
TI_DBG1(("satVerify12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
TI_DBG1(("satVerify10: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
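/*
 * Illustrative sketch (not part of the driver): satAddNComparebit64() is
 * used above as "does LBA + TL exceed the non-EXT addressing limit?".
 * Assuming it adds two big-endian byte arrays with carry (an assumption;
 * its definition is elsewhere), the core arithmetic looks like this:
 */
#ifdef SAT_DOC_SKETCH
static bit32 satSketchAddBigEndian(bit8 *sum, const bit8 *a, const bit8 *b, int len)
{
int i;
bit32 carry = 0;
for (i = len - 1; i >= 0; i--) /* LSB is at the highest index */
{
bit32 t = (bit32)a[i] + (bit32)b[i] + carry;
sum[i] = (bit8)(t & 0xFF);
carry = t >> 8;
}
return carry; /* nonzero means the sum did not fit in len bytes */
}
#endif /* SAT_DOC_SKETCH */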
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satFormatUnit.
*
* SAT implementation for SCSI satFormatUnit.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satFormatUnit(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
note: we don't support media certification or the IP bit in this version;
satDevData->satFormatState will be agFALSE since SAT does not actually send
any ATA command
*/
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
bit32 index = 0;
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
TI_DBG5(("satFormatUnit:start\n"));
/*
checking opcode
1. FMTDATA bit == 0(no defect list header)
2. FMTDATA bit == 1 and DCRT bit == 1 (defect list header is provided
with DCRT bit set)
*/
if ( ((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK) == 0) ||
((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK))
)
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
TI_DBG2(("satFormatUnit: return opcode\n"));
return tiSuccess;
}
/*
checking DEFECT LIST FORMAT and defect list length
*/
if ( (((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_DEFECT_LIST_FORMAT_MASK) == 0x00) ||
((scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_DEFECT_LIST_FORMAT_MASK) == 0x06)) )
{
/* short parameter header */
if ((scsiCmnd->cdb[2] & SCSI_FORMAT_UNIT_LONGLIST_MASK) == 0x00)
{
index = 8;
}
/* long parameter header */
if ((scsiCmnd->cdb[2] & SCSI_FORMAT_UNIT_LONGLIST_MASK) == 0x01)
{
index = 10;
}
/* defect list length */
if ((scsiCmnd->cdb[index] != 0) || (scsiCmnd->cdb[index+1] != 0))
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satFormatUnit: return defect list format\n"));
return tiSuccess;
}
}
/* FMTDATA == 1 && CMPLIST == 1*/
if ( (scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK) &&
(scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_CMPLIST_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satFormatUnit: return cmplist\n"));
return tiSuccess;
}
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satFormatUnit: return control\n"));
return tiSuccess;
}
/* defect list header field, if it exists, SAT rev8, Table 37, p48 */
if (scsiCmnd->cdb[1] & SCSI_FORMAT_UNIT_FMTDATA_MASK)
{
/* case 1,2,3 */
/* IMMED 1; FOV 0; FOV 1, DCRT 1, IP 0 */
if ( (scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) ||
( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK)) ||
( (scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK))
)
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
TI_DBG5(("satFormatUnit: return defect list case 1\n"));
return tiSuccess;
}
/* case 4,5,6 */
/*
1. IMMED 0, FOV 1, DCRT 0, IP 0
2. IMMED 0, FOV 1, DCRT 0, IP 1
3. IMMED 0, FOV 1, DCRT 1, IP 1
*/
if ( ( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK) )
||
( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
!(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK) )
||
( !(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IMMED_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_FOV_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_DCRT_MASK) &&
(scsiCmnd->cdb[7] & SCSI_FORMAT_UNIT_IP_MASK) )
)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG5(("satFormatUnit: return defect list case 2\n"));
return tiSuccess;
}
}
/*
* Send the completion response now.
*/
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
TI_DBG5(("satFormatUnit: return last\n"));
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satSendDiagnostic.
*
* SAT implementation for SCSI satSendDiagnostic.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSendDiagnostic(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 parmLen;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satSendDiagnostic:start\n"));
/* reset satVerifyState */
pSatDevData->satVerifyState = 0;
/* no pending diagnostic in background */
pSatDevData->satBGPendingDiag = agFALSE;
/* table 27, 8.10 p39 SAT Rev8 */
/*
1. checking PF == 1
2. checking DEVOFFL == 1
3. checking UNITOFFL == 1
4. checking PARAMETER LIST LENGTH != 0
*/
if ( (scsiCmnd->cdb[1] & SCSI_PF_MASK) ||
(scsiCmnd->cdb[1] & SCSI_DEVOFFL_MASK) ||
(scsiCmnd->cdb[1] & SCSI_UNITOFFL_MASK) ||
( (scsiCmnd->cdb[3] != 0) || (scsiCmnd->cdb[4] != 0) )
)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satSendDiagnostic: return PF, DEVOFFL, UNITOFFL, PARAM LIST\n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satSendDiagnostic: return control\n"));
return tiSuccess;
}
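  /* PARAMETER LIST LENGTH: CDB bytes 3 (MSB) and 4 (LSB) */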
parmLen = (scsiCmnd->cdb[3] << 8) + scsiCmnd->cdb[4];
/* checking SELFTEST bit*/
/* table 29, 8.10.3, p41 SAT Rev8 */
/* case 1 */
if ( !(scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agFALSE)
)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satSendDiagnostic: return Table 29 case 1\n"));
return tiSuccess;
}
/* case 2 */
if ( !(scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agFALSE)
)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_ATA_DEVICE_FEATURE_NOT_ENABLED,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG5(("satSendDiagnostic: return Table 29 case 2\n"));
return tiSuccess;
}
/*
case 3
see SELF TEST CODE later
*/
/* case 4 */
/*
sends three ATA verify commands
*/
if ( ((scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agFALSE))
||
((scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agFALSE))
)
{
/*
sector count 1, LBA 0
sector count 1, LBA MAX
sector count 1, LBA random
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: return Table 29 case 4\n"));
return (status);
}
/* case 5 */
if ( (scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agTRUE)
)
{
/* sends SMART EXECUTE OFF-LINE IMMEDIATE */
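    /* per ATA, LBA Mid 0x4F / LBA High 0xC2 form the mandatory SMART command
       signature; features 0xD4 selects EXECUTE OFF-LINE IMMEDIATE and LBA Low
       0x81 selects the short self-test in captive mode */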
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE;/* 0xB0 */
    fis->h.features = 0xD4; /* SMART EXECUTE OFF-LINE IMMEDIATE subcommand */
fis->d.lbaLow = 0x81; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
    fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: return Table 29 case 5\n"));
return (status);
}
/* SAT rev8 Table29 p41 case 3*/
/* checking SELF TEST CODE*/
if ( !(scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_SELFTEST_MASK) &&
(pSatDevData->satSMARTSelfTest == agTRUE) &&
(pSatDevData->satSMARTEnabled == agTRUE)
)
{
/* SAT rev8 Table28 p40 */
/* finding self-test code */
switch ((scsiCmnd->cdb[1] & SCSI_SEND_DIAGNOSTIC_TEST_CODE_MASK) >> 5)
{
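    /*
      SELF-TEST CODE (CDB byte 1, bits 7:5) per SPC:
        0 default self-test, 1 background short, 2 background extended,
        4 abort background self-test, 5 foreground short, 6 foreground extended;
        3 and 7 are reserved
    */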
case 1:
pSatDevData->satBGPendingDiag = agTRUE;
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
/* sends SMART EXECUTE OFF-LINE IMMEDIATE */
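      /* LBA Low 0x01 selects the short self-test in off-line (background) mode */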
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
      fis->h.command = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE;/* 0xB0 */
      fis->h.features = 0xD4; /* SMART EXECUTE OFF-LINE IMMEDIATE subcommand */
fis->d.lbaLow = 0x01; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: return Table 28 case 1\n"));
return (status);
case 2:
pSatDevData->satBGPendingDiag = agTRUE;
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
/* issuing SMART EXECUTE OFF-LINE IMMEDIATE */
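      /* LBA Low 0x02 selects the extended self-test in off-line (background) mode */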
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
      fis->h.command = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE;/* 0xB0 */
      fis->h.features = 0xD4; /* SMART EXECUTE OFF-LINE IMMEDIATE subcommand */
fis->d.lbaLow = 0x02; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: return Table 28 case 2\n"));
return (status);
case 4:
/* For simplicity, no abort is supported
Returns good status
need a flag in device data for previously sent background Send Diagnostic
*/
if (parmLen != 0)
{
/* check condition */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satSendDiagnostic: case 4, non zero ParmLen %d\n", parmLen));
return tiSuccess;
}
if (pSatDevData->satBGPendingDiag == agTRUE)
{
/* sends SMART EXECUTE OFF-LINE IMMEDIATE abort */
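        /* LBA Low 0x7F aborts the off-line self-test routine */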
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
        fis->h.command = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE;/* 0xB0 */
        fis->h.features = 0xD4; /* SMART EXECUTE OFF-LINE IMMEDIATE subcommand */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
        fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: send SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE case 3\n"));
TI_DBG5(("satSendDiagnostic: Table 28 case 4\n"));
return (status);
}
else
{
/* check condition */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satSendDiagnostic: case 4, no pending diagnostic in background\n"));
TI_DBG5(("satSendDiagnostic: Table 28 case 4\n"));
return tiSuccess;
}
break;
case 5:
/* issuing SMART EXECUTE OFF-LINE IMMEDIATE */
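      /* LBA Low 0x81 selects the short self-test in captive mode */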
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
      fis->h.command = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE;/* 0xB0 */
      fis->h.features = 0xD4; /* SMART EXECUTE OFF-LINE IMMEDIATE subcommand */
fis->d.lbaLow = 0x81; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: return Table 28 case 5\n"));
return (status);
case 6:
/* issuing SMART EXECUTE OFF-LINE IMMEDIATE */
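      /* LBA Low 0x82 selects the extended self-test in captive mode */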
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
      fis->h.command = SAT_SMART_EXEUTE_OFF_LINE_IMMEDIATE;/* 0xB0 */
      fis->h.features = 0xD4; /* SMART EXECUTE OFF-LINE IMMEDIATE subcommand */
fis->d.lbaLow = 0x82; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satSendDiagnostic: return Table 28 case 6\n"));
return (status);
case 0:
case 3: /* fall through */
case 7: /* fall through */
default:
break;
}/* switch */
    /* the default self-test completes immediately with GOOD status */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
TI_DBG5(("satSendDiagnostic: return Table 28 case 0,3,7 and default\n"));
return tiSuccess;
}
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
TI_DBG5(("satSendDiagnostic: return last\n"));
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satSendDiagnostic_1.
*
* SAT implementation for SCSI satSendDiagnostic_1.
* Sub function of satSendDiagnostic.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSendDiagnostic_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
SAT Rev9, Table29, p41
send 2nd SAT_READ_VERIFY_SECTORS(_EXT)
*/
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
TI_DBG5(("satSendDiagnostic_1 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/*
sector count 1, LBA MAX
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = pSatDevData->satMaxLBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = pSatDevData->satMaxLBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = pSatDevData->satMaxLBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = pSatDevData->satMaxLBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = pSatDevData->satMaxLBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = pSatDevData->satMaxLBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = pSatDevData->satMaxLBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = pSatDevData->satMaxLBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = pSatDevData->satMaxLBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = (bit8)((0x4 << 4) | (pSatDevData->satMaxLBA[4] & 0xF));
/* DEV and LBA 27:24 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satSendDiagnostic_2.
*
* SAT implementation for SCSI satSendDiagnostic_2.
* Sub function of satSendDiagnostic.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSendDiagnostic_2(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
SAT Rev9, Table29, p41
send 3rd SAT_READ_VERIFY_SECTORS(_EXT)
*/
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
TI_DBG5(("satSendDiagnostic_2 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
/*
sector count 1, LBA Random
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSendDiagnosticCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satStartStopUnit.
*
* SAT implementation for SCSI satStartStopUnit.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satStartStopUnit(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satStartStopUnit:start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satStartStopUnit: return control\n"));
return tiSuccess;
}
/* Spec p55, Table 48 checking START and LOEJ bit */
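  /*
    START/LOEJ translation summary (SAT rev 8, Table 48), as implemented below:
      START=0 LOEJ=0 -> FLUSH CACHE(_EXT); satStartStopUnit_1 then sends STANDBY
      START=1 LOEJ=0 -> READ VERIFY SECTOR(S)(_EXT), sector count 1
      START=0 LOEJ=1 -> MEDIA EJECT when removable media is supported and enabled
      START=1 LOEJ=1 -> CHECK CONDITION, ILLEGAL REQUEST / INVALID FIELD IN CDB
  */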
/* case 1 */
if ( !(scsiCmnd->cdb[4] & SCSI_START_MASK) && !(scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) )
{
if ( (scsiCmnd->cdb[1] & SCSI_IMMED_MASK) )
{
/* immed bit , SAT rev 8, 9.11.2.1 p 54*/
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
TI_DBG5(("satStartStopUnit: return table48 case 1-1\n"));
return tiSuccess;
}
/* sends FLUSH CACHE or FLUSH CACHE EXT */
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* FLUSH CACHE EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE_EXT; /* 0xEA */
fis->h.features = 0; /* FIS reserve */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
else
{
/* FLUSH CACHE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE; /* 0xE7 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satStartStopUnit: return table48 case 1\n"));
return (status);
}
/* case 2 */
else if ( (scsiCmnd->cdb[4] & SCSI_START_MASK) && !(scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) )
{
/* immed bit , SAT rev 8, 9.11.2.1 p 54*/
if ( (scsiCmnd->cdb[1] & SCSI_IMMED_MASK) )
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
TI_DBG5(("satStartStopUnit: return table48 case 2 1\n"));
return tiSuccess;
}
/*
sends READ_VERIFY_SECTORS(_EXT)
sector count 1, any LBA between zero to Maximum
*/
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x01; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x00; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x00; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0x00; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0x00; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0x00; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0x01; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x00; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x00; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satStartStopUnit: return table48 case 2 2\n"));
return status;
}
/* case 3 */
else if ( !(scsiCmnd->cdb[4] & SCSI_START_MASK) && (scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) )
{
if(pSatDevData->satRemovableMedia && pSatDevData->satRemovableMediaEnabled)
{
/* support for removal media */
/* immed bit , SAT rev 8, 9.11.2.1 p 54*/
if ( (scsiCmnd->cdb[1] & SCSI_IMMED_MASK) )
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext );
TI_DBG5(("satStartStopUnit: return table48 case 3 1\n"));
return tiSuccess;
}
/*
sends MEDIA EJECT
*/
/* Media Eject fis */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_MEDIA_EJECT; /* 0xED */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
/* sector count zero */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
      fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
/* no support for removal media */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
      TI_DBG5(("satStartStopUnit: return table48 case 3 2\n"));
return tiSuccess;
}
}
/* case 4 */
else /* ( (scsiCmnd->cdb[4] & SCSI_START_MASK) && (scsiCmnd->cdb[4] & SCSI_LOEJ_MASK) ) */
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
    TI_DBG5(("satStartStopUnit: return table48 case 4\n"));
return tiSuccess;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satStartStopUnit_1.
*
* SAT implementation for SCSI satStartStopUnit_1.
 * Sub function of satStartStopUnit.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satStartStopUnit_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
SAT Rev 8, Table 48, 9.11.3 p55
sends STANDBY
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
TI_DBG5(("satStartStopUnit_1 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
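  /* per ATA, STANDBY (0xE2) with sector count 0 puts the device in Standby
     mode with the standby timer disabled */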
/* STANDBY */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_STANDBY; /* 0xE2 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0; /* 0 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satStartStopUnitCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satStartStopUnit_1 return status %d\n", status));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satRead10_2.
*
 * SAT implementation for SCSI satRead10_2.
 * Sub function of satRead10.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satRead10_2(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
  /*
    an externally generated ATA command; there is a corresponding SCSI command.
    Called by satStartStopUnit() or possibly satRead10().
  */
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
  TI_DBG5(("satRead10_2: start\n"));
/* specifying ReadVerifySectors has no chain */
pSatDevData->satVerifyState = 0xFFFFFFFF;
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x00; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0xF1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0x5F; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0xFF; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x4E; /* 01001110 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0x7F; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0x00; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = 0x4E; /* 01001110 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonDataIOCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
  TI_DBG5(("satRead10_2: return last\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteSame10.
*
* SAT implementation for SCSI satWriteSame10.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteSame10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWriteSame10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteSame10: return control\n"));
return tiSuccess;
}
/* checking LBDATA and PBDATA */
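  /*
    WRITE SAME(10) LBDATA/PBDATA handling (spec 9.26.2, Table 62), as coded below:
      LBDATA=0 PBDATA=0 -> case 1: normal write of the replicated block
      LBDATA=0 PBDATA=1 -> case 2: CHECK CONDITION, INVALID FIELD IN CDB
      LBDATA=1 PBDATA=0 -> case 3: logged only; no ATA command is issued here
      LBDATA=1 PBDATA=1 -> case 4: CHECK CONDITION, INVALID FIELD IN CDB
  */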
/* case 1 */
if ( !(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
!(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK))
{
TI_DBG5(("satWriteSame10: case 1\n"));
/* spec 9.26.2, Table 62, p64, case 1*/
/*
normal case
just like write in 9.17.1
*/
if ( pSatDevData->sat48BitSupport != agTRUE )
{
/*
writeSame10 but no support for 48 bit addressing
-> problem in transfer length. Therefore, return check condition
*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteSame10: return internal checking\n"));
return tiSuccess;
}
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b (footnote)
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1) /* SAT_TR_LBA_LIMIT is 2^28, 0x10000000 */
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteSame10: return LBA out of range\n"));
return tiSuccess;
}
}
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA */
/* can't fit the transfer length since WRITE DMA has 1 byte for sector count */
TI_DBG5(("satWriteSame10: case 1-2 !!! error due to writeSame10\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
      /* WRITE SECTORS is chosen for easier implementation */
      /* can't fit the transfer length since WRITE SECTOR(S) has only 1 byte for the sector count */
TI_DBG5(("satWriteSame10: case 1-1 !!! error due to writesame10\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
} /* end of case 1 and 2 */
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
/* WRITE DMA EXT is chosen since WRITE SAME does not have FUA bit */
TI_DBG5(("satWriteSame10: case 1-3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* error check
ATA spec, p125, 6.17.29
pSatDevData->satMaxUserAddrSectors should be 0x0FFFFFFF
and allowed value is 0x0FFFFFFF - 1
*/
if (pSatDevData->satMaxUserAddrSectors > 0x0FFFFFFF)
{
TI_DBG5(("satWriteSame10: case 3 !!! warning can't fit sectors\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
      /* WRITE SECTORS EXT is chosen for easier implementation */
TI_DBG5(("satWriteSame10: case 1-4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (tl == 0)
{
/* error check
ATA spec, p125, 6.17.29
pSatDevData->satMaxUserAddrSectors should be 0x0FFFFFFF
and allowed value is 0x0FFFFFFF - 1
*/
if (pSatDevData->satMaxUserAddrSectors > 0x0FFFFFFF)
{
TI_DBG5(("satWriteSame10: case 4 !!! warning can't fit sectors\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
      TI_DBG5(("satWriteSame10: case 1-5 !!! error: NCQ supported but only 28-bit addressing\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG5(("satWriteSame10: case 1-5\n"));
    /* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
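    /* in WRITE FPDMA QUEUED the sector count is carried in the FEATURES
       fields; SECTOR COUNT bits 7:3 carry the NCQ tag, filled in by the
       LL layer */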
if (tl == 0)
{
/* error check
ATA spec, p125, 6.17.29
pSatDevData->satMaxUserAddrSectors should be 0x0FFFFFFF
and allowed value is 0x0FFFFFFF - 1
*/
if (pSatDevData->satMaxUserAddrSectors > 0x0FFFFFFF)
{
        TI_DBG5(("satWriteSame10: case 1-5 !!! warning can't fit sectors\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* one sector at a time */
fis->h.features = 1; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* NO FUA bit in the WRITE SAME 10 */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
} /* end of case 1 */
else if ( !(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK))
{
/* spec 9.26.2, Table 62, p64, case 2*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG5(("satWriteSame10: return Table 62 case 2\n"));
return tiSuccess;
}
else if ( (scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
!(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK))
{
TI_DBG5(("satWriteSame10: Table 62 case 3\n"));
}
else /* ( (scsiCmnd->cdb[1] & SCSI_WRITE_SAME_LBDATA_MASK) &&
(scsiCmnd->cdb[1] & SCSI_WRITE_SAME_PBDATA_MASK)) */
{
/* spec 9.26.2, Table 62, p64, case 4*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG5(("satWriteSame10: return Table 62 case 4\n"));
return tiSuccess;
}
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteSame10_1.
*
 * SAT implementation for SCSI WRITESAME10; sends a FIS request to the LL layer.
 * This is used when WRITESAME10 is divided into multiple ATA commands.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
* \param lba: LBA
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteSame10_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
bit32 lba
)
{
/*
sends SAT_WRITE_DMA_EXT
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 lba1, lba2 ,lba3, lba4;
TI_DBG5(("satWriteSame10_1 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/* MSB */
lba1 = (bit8)((lba & 0xFF000000) >> (8*3));
lba2 = (bit8)((lba & 0x00FF0000) >> (8*2));
lba3 = (bit8)((lba & 0x0000FF00) >> (8*1));
/* LSB */
lba4 = (bit8)(lba & 0x000000FF);
/* SAT_WRITE_DMA_EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = lba4; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = lba3; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = lba2; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = lba1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satWriteSame10_1 return status %d\n", status));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteSame10_2.
*
 * SAT implementation for SCSI WRITESAME10; sends a FIS request to the LL layer.
 * This is used when WRITESAME10 is divided into multiple ATA commands.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
* \param lba: LBA
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteSame10_2(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
bit32 lba
)
{
/*
sends SAT_WRITE_SECTORS_EXT
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 lba1, lba2 ,lba3, lba4;
TI_DBG5(("satWriteSame10_2 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/* MSB */
lba1 = (bit8)((lba & 0xFF000000) >> (8*3));
lba2 = (bit8)((lba & 0x00FF0000) >> (8*2));
lba3 = (bit8)((lba & 0x0000FF00) >> (8*1));
/* LSB */
lba4 = (bit8)(lba & 0x000000FF);
/* SAT_WRITE_SECTORS_EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = lba4; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = lba3; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = lba2; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = lba1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
/* one sector at a time */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satWriteSame10_2 return status %d\n", status));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteSame10_3.
*
 * SAT implementation for SCSI WRITESAME10; sends a FIS request to the LL layer.
 * This is used when WRITESAME10 is divided into multiple ATA commands.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
 * \param satIOContext: Pointer to the SAT I/O context.
* \param lba: LBA
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteSame10_3(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
bit32 lba
)
{
/*
sends SAT_WRITE_FPDMA_QUEUED
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 lba1, lba2 ,lba3, lba4;
TI_DBG5(("satWriteSame10_3 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/* MSB */
lba1 = (bit8)((lba & 0xFF000000) >> (8*3));
lba2 = (bit8)((lba & 0x00FF0000) >> (8*2));
lba3 = (bit8)((lba & 0x0000FF00) >> (8*1));
/* LSB */
lba4 = (bit8)(lba & 0x000000FF);
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
/* one sector at a time */
fis->h.features = 1; /* FIS sector count (7:0) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
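/*
 * In WRITE FPDMA QUEUED the sector count is carried in the
 * features/featuresExp fields (one sector here), while the
 * sectorCount field below carries the NCQ tag filled in by the
 * LL layer.
 */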
fis->d.lbaLow = lba4; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = lba3; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = lba2; /* FIS LBA (23:16) */
/* NO FUA bit in the WRITE SAME 10 */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = lba1; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satWriteSame10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satWriteSame10_2 return status %d\n", status));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteSame16.
*
* SAT implementation for SCSI satWriteSame16.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteSame16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
scsiRspSense_t *pSense;
pSense = satIOContext->pSense;
TI_DBG5(("satWriteSame16:start\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest, /* == &satIntIo->satOrgTiIORequest */
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG5(("satWriteSame16: return internal checking\n"));
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satLogSense_1.
*
* Part of SAT implementation for SCSI satLogSense.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satLogSense_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
TI_DBG5(("satLogSense_1: start\n"));
/* SAT Rev 8, 10.2.4 p74 */
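/*
 * Devices with 48-bit support are sent READ LOG EXT for log
 * address 0x07 (the extended self-test log); others fall back to
 * SMART READ LOG for log address 0x06 (the SMART self-test log).
 */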
if ( pSatDevData->sat48BitSupport == agTRUE )
{
TI_DBG5(("satLogSense_1: case 2-1 sends READ LOG EXT\n"));
/* sends READ LOG EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_LOG_EXT; /* 0x2F */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x07; /* 0x07 */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* sector count = 1 */
fis->d.sectorCountExp = 0x00; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
TI_DBG5(("satLogSense_1: case 2-2 sends SMART READ LOG\n"));
/* sends SMART READ LOG */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_READ_LOG; /* 0xB0 */
fis->h.features = 0xD5; /* SMART READ LOG */
fis->d.lbaLow = 0x06; /* 0x06 */
fis->d.lbaMid = 0x4F; /* 0x4f */
fis->d.lbaHigh = 0xC2; /* 0xc2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* */
fis->d.sectorCountExp = 0x00; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satSMARTEnable.
*
* Part of SAT implementation for SCSI satLogSense.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSMARTEnable(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
TI_DBG4(("satSMARTEnable entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/*
* Send the SAT_SMART_ENABLE_OPERATIONS command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_ENABLE_OPERATIONS; /* 0xB0 */
fis->h.features = 0xD8;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0x4F;
fis->d.lbaHigh = 0xC2;
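/*
 * lbaMid/lbaHigh 0x4F/0xC2 form the fixed signature required by
 * ATA SMART commands; features 0xD8 selects ENABLE OPERATIONS.
 */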
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSMARTEnableCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satLogSense_3.
*
* Part of SAT implementation for SCSI satLogSense.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satLogSense_3(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
TI_DBG4(("satLogSense_3 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/* sends SMART READ LOG */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_READ_LOG; /* 0xB0 */
fis->h.features = 0xD5; /* 0xd5 */
fis->d.lbaLow = 0x06; /* 0x06 */
fis->d.lbaMid = 0x4F; /* 0x4f */
fis->d.lbaHigh = 0xC2; /* 0xc2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* sector count = 1 */
fis->d.sectorCountExp = 0x00; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satLogSense_2.
*
* Part of SAT implementation for SCSI satLogSense.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satLogSense_2(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
TI_DBG4(("satLogSense_2 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis = satIOContext->pFis;
/* sends READ LOG EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_LOG_EXT; /* 0x2F */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x07; /* 0x07 */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* sector count = 1 */
fis->d.sectorCountExp = 0x00; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satLogSenseAllocate.
*
* Part of SAT implementation for SCSI satLogSense.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
* \param payloadSize: size of payload to be allocated.
* \param flag: flag value
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
* \note
* - flag values: LOG_SENSE_0, LOG_SENSE_1, LOG_SENSE_2
*/
/*****************************************************************************/
GLOBAL bit32 satLogSenseAllocate(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
bit32 payloadSize,
bit32 flag
)
{
satDeviceData_t *pSatDevData;
tdIORequestBody_t *tdIORequestBody;
satInternalIo_t *satIntIo = agNULL;
satIOContext_t *satIOContext2;
bit32 status;
TI_DBG4(("satLogSense_2 entry: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
pSatDevData = satIOContext->pSatDevData;
/* create internal satIOContext */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest, /* original request */
pSatDevData,
payloadSize,
satIntIo);
if (satIntIo == agNULL)
{
/* memory allocation failure */
satFreeIntIoResource( tiRoot,
pSatDevData,
satIntIo);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOFailed,
tiDetailOtherError,
agNULL,
satIOContext->interruptContext );
TI_DBG4(("satLogSense_2: fail in allocation\n"));
return tiSuccess;
} /* end of memory allocation failure */
satIntIo->satOrgTiIORequest = tiIORequest;
tdIORequestBody = (tdIORequestBody_t *)satIntIo->satIntRequestBody;
satIOContext2 = &(tdIORequestBody->transport.SATA.satIOContext);
satIOContext2->pSatDevData = pSatDevData;
satIOContext2->pFis = &(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext2->pScsiCmnd = &(satIntIo->satIntTiScsiXchg.scsiCmnd);
satIOContext2->pSense = &(tdIORequestBody->transport.SATA.sensePayload);
satIOContext2->pTiSenseData = &(tdIORequestBody->transport.SATA.tiSenseData);
satIOContext2->pTiSenseData->senseData = satIOContext2->pSense;
satIOContext2->tiRequestBody = satIntIo->satIntRequestBody;
satIOContext2->interruptContext = satIOContext->interruptContext;
satIOContext2->satIntIoContext = satIntIo;
satIOContext2->ptiDeviceHandle = tiDeviceHandle;
satIOContext2->satOrgIOContext = satIOContext;
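/*
 * Dispatch on flag: LOG_SENSE_0 issues SMART ENABLE OPERATIONS,
 * LOG_SENSE_1 issues READ LOG EXT and LOG_SENSE_2 issues SMART
 * READ LOG, each as an internal child I/O of the original request.
 */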
if (flag == LOG_SENSE_0)
{
/* SAT_SMART_ENABLE_OPERATIONS */
status = satSMARTEnable( tiRoot,
&(satIntIo->satIntTiIORequest),
tiDeviceHandle,
&(satIntIo->satIntTiScsiXchg),
satIOContext2);
}
else if (flag == LOG_SENSE_1)
{
/* SAT_READ_LOG_EXT */
status = satLogSense_2( tiRoot,
&(satIntIo->satIntTiIORequest),
tiDeviceHandle,
&(satIntIo->satIntTiScsiXchg),
satIOContext2);
}
else
{
/* SAT_SMART_READ_LOG */
/* SAT_READ_LOG_EXT */
status = satLogSense_3( tiRoot,
&(satIntIo->satIntTiIORequest),
tiDeviceHandle,
&(satIntIo->satIntTiScsiXchg),
satIOContext2);
}
if (status != tiSuccess)
{
satFreeIntIoResource( tiRoot,
pSatDevData,
satIntIo);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOFailed,
tiDetailOtherError,
agNULL,
satIOContext->interruptContext );
return tiSuccess;
}
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satLogSense.
*
* SAT implementation for SCSI satLogSense.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satLogSense(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit32 flag = 0;
bit16 AllocLen = 0; /* allocation length */
bit8 AllLogPages[8];
bit16 lenRead = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pLogPage = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG5(("satLogSense: start\n"));
osti_memset(&AllLogPages, 0, 8);
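/*
 * AllLogPages is built in SPC log page format: byte 0 is the page
 * code, byte 1 is reserved and bytes 2-3 hold the big-endian page
 * length, followed by the page data.
 */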
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satLogSense: return control\n"));
return tiSuccess;
}
AllocLen = (bit16)((scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8]);
/* checking PC (Page Control) */
/* nothing */
/* special cases */
if (AllocLen == 4)
{
TI_DBG1(("satLogSense: AllocLen is 4\n"));
switch (scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK)
{
case LOGSENSE_SUPPORTED_LOG_PAGES:
TI_DBG5(("satLogSense: case LOGSENSE_SUPPORTED_LOG_PAGES\n"));
/* SAT Rev 8, 10.2.5 p76 */
if (pSatDevData->satSMARTFeatureSet == agTRUE)
{
/* add informational exception log */
flag = 1;
if (pSatDevData->satSMARTSelfTest == agTRUE)
{
/* add Self-Test results log page */
flag = 2;
}
}
else
{
/* only supported, no informational exception log, no Self-Test results log page */
flag = 0;
}
lenRead = 4;
AllLogPages[0] = LOGSENSE_SUPPORTED_LOG_PAGES; /* page code */
AllLogPages[1] = 0; /* reserved */
switch (flag)
{
case 0:
/* only supported */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 1; /* page length */
break;
case 1:
/* supported and informational exception log */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 2; /* page length */
break;
case 2:
/* supported, informational exception and self-test results log */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 3; /* page length */
break;
default:
TI_DBG1(("satLogSense: error unallowed flag value %d\n", flag));
break;
}
osti_memcpy(pLogPage, &AllLogPages, lenRead);
break;
case LOGSENSE_SELFTEST_RESULTS_PAGE:
TI_DBG5(("satLogSense: case LOGSENSE_SUPPORTED_LOG_PAGES\n"));
lenRead = 4;
AllLogPages[0] = LOGSENSE_SELFTEST_RESULTS_PAGE; /* page code */
AllLogPages[1] = 0; /* reserved */
/* page length = SELFTEST_RESULTS_LOG_PAGE_LENGTH - 1 - 3 = 400 = 0x190 */
AllLogPages[2] = 0x01;
AllLogPages[3] = 0x90; /* page length */
osti_memcpy(pLogPage, &AllLogPages, lenRead);
break;
case LOGSENSE_INFORMATION_EXCEPTIONS_PAGE:
TI_DBG5(("satLogSense: case LOGSENSE_SUPPORTED_LOG_PAGES\n"));
lenRead = 4;
AllLogPages[0] = LOGSENSE_INFORMATION_EXCEPTIONS_PAGE; /* page code */
AllLogPages[1] = 0; /* reserved */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = INFORMATION_EXCEPTIONS_LOG_PAGE_LENGTH - 1 - 3; /* page length */
osti_memcpy(pLogPage, &AllLogPages, lenRead);
break;
default:
TI_DBG1(("satLogSense: default Page Code 0x%x\n", scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
} /* if */
/* SAT rev8 Table 11 p30*/
/* checking Page Code */
switch (scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK)
{
case LOGSENSE_SUPPORTED_LOG_PAGES:
TI_DBG5(("satLogSense: case 1\n"));
/* SAT Rev 8, 10.2.5 p76 */
if (pSatDevData->satSMARTFeatureSet == agTRUE)
{
/* add informational exception log */
flag = 1;
if (pSatDevData->satSMARTSelfTest == agTRUE)
{
/* add Self-Test results log page */
flag = 2;
}
}
else
{
/* only supported, no informational exception log, no Self-Test results log page */
flag = 0;
}
AllLogPages[0] = 0; /* page code */
AllLogPages[1] = 0; /* reserved */
switch (flag)
{
case 0:
/* only supported */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 1; /* page length */
AllLogPages[4] = 0x00; /* supported page list */
lenRead = (bit8)(MIN(AllocLen, 5));
break;
case 1:
/* supported and informational exception log */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 2; /* page length */
AllLogPages[4] = 0x00; /* supported page list */
AllLogPages[5] = 0x10; /* supported page list */
lenRead = (bit8)(MIN(AllocLen, 6));
break;
case 2:
/* supported, informational exception and self-test results log */
AllLogPages[2] = 0; /* page length */
AllLogPages[3] = 3; /* page length */
AllLogPages[4] = 0x00; /* supported page list */
AllLogPages[5] = 0x10; /* supported page list */
AllLogPages[6] = 0x2F; /* supported page list */
lenRead = (bit8)(MIN(AllocLen, 7));
break;
default:
TI_DBG1(("satLogSense: error unallowed flag value %d\n", flag));
break;
}
osti_memcpy(pLogPage, &AllLogPages, lenRead);
/* comparing allocation length to Log Page byte size */
/* SPC-4, 4.3.4.6, p28 */
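/*
 * When the allocation length exceeds the bytes returned, report
 * the residual (AllocLen - lenRead) as an underrun rather than an
 * error.
 */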
if (AllocLen > lenRead )
{
TI_DBG1(("satLogSense reporting underrun lenRead=0x%x AllocLen=0x%x tiIORequest=%p\n", lenRead, AllocLen, tiIORequest));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOUnderRun,
AllocLen - lenRead,
agNULL,
satIOContext->interruptContext );
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
break;
case LOGSENSE_SELFTEST_RESULTS_PAGE:
TI_DBG5(("satLogSense: case 2\n"));
/* checking SMART self-test */
if (pSatDevData->satSMARTSelfTest == agFALSE)
{
TI_DBG5(("satLogSense: case 2 no SMART Self Test\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
}
else
{
/* if satSMARTEnabled is false, send SMART_ENABLE_OPERATIONS */
if (pSatDevData->satSMARTEnabled == agFALSE)
{
TI_DBG5(("satLogSense: case 2 calling satSMARTEnable\n"));
status = satLogSenseAllocate(tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext,
0,
LOG_SENSE_0
);
return status;
}
else
{
/* SAT Rev 8, 10.2.4 p74 */
if ( pSatDevData->sat48BitSupport == agTRUE )
{
TI_DBG5(("satLogSense: case 2-1 sends READ LOG EXT\n"));
status = satLogSenseAllocate(tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext,
512,
LOG_SENSE_1
);
return status;
}
else
{
TI_DBG5(("satLogSense: case 2-2 sends SMART READ LOG\n"));
status = satLogSenseAllocate(tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext,
512,
LOG_SENSE_2
);
return status;
}
}
}
break;
case LOGSENSE_INFORMATION_EXCEPTIONS_PAGE:
TI_DBG5(("satLogSense: case 3\n"));
/* checking SMART feature set */
if (pSatDevData->satSMARTFeatureSet == agFALSE)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
}
else
{
/* checking SMART feature enabled */
if (pSatDevData->satSMARTEnabled == agFALSE)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_ATA_DEVICE_FEATURE_NOT_ENABLED,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
}
else
{
/* SAT Rev 8, 10.2.3 p72 */
TI_DBG5(("satLogSense: case 3 sends SMART RETURN STATUS\n"));
/* sends SMART RETURN STATUS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_RETURN_STATUS;/* 0xB0 */
fis->h.features = 0xDA; /* FIS features */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0x4F; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0xC2; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satLogSenseCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
break;
default:
TI_DBG1(("satLogSense: default Page Code 0x%x\n", scsiCmnd->cdb[2] & SCSI_LOG_SENSE_PAGE_CODE_MASK));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
break;
} /* end switch */
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satModeSelect6.
*
* SAT implementation for SCSI satModeSelect6.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satModeSelect6(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit32 StartingIndex = 0;
bit8 PageCode = 0;
bit32 chkCnd = agFALSE;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pLogPage = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG5(("satModeSelect6: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satModeSelect6: return control\n"));
return tiSuccess;
}
/* checking PF bit */
if ( !(scsiCmnd->cdb[1] & SCSI_MODE_SELECT6_PF_MASK))
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satModeSelect6: PF bit check \n"));
return tiSuccess;
}
/* checking Block Descriptor Length on Mode parameter header(6)*/
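/*
 * Mode parameter header(6) is 4 bytes and byte 3 holds the block
 * descriptor length, so with one 8-byte block descriptor the mode
 * page starts at offset 12 and with none it starts at offset 4.
 */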
if (pLogPage[3] == 8)
{
/* mode parameter block descriptor exists */
PageCode = (bit8)(pLogPage[12] & 0x3F); /* page code and index is 4 + 8 */
StartingIndex = 12;
}
else if (pLogPage[3] == 0)
{
/* mode parameter block descriptor does not exist */
PageCode = (bit8)(pLogPage[4] & 0x3F); /* page code and index is 4 + 0 */
StartingIndex = 4;
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
else
{
TI_DBG1(("satModeSelect6: return mode parameter block descriptor 0x%x\n", pLogPage[3]));
/* no more than one mode parameter block descriptor shall be supported */
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
switch (PageCode) /* page code */
{
case MODESELECT_CONTROL_PAGE:
TI_DBG1(("satModeSelect6: Control mode page\n"));
/*
compare pLogPage to expected value (SAT Table 65, p67)
If not match, return check condition
*/
if ( pLogPage[StartingIndex+1] != 0x0A ||
pLogPage[StartingIndex+2] != 0x02 ||
(pSatDevData->satNCQ == agTRUE && pLogPage[StartingIndex+3] != 0x12) ||
(pSatDevData->satNCQ == agFALSE && pLogPage[StartingIndex+3] != 0x02) ||
(pLogPage[StartingIndex+4] & BIT3_MASK) != 0x00 || /* SWP bit */
(pLogPage[StartingIndex+4] & BIT4_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+4] & BIT5_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+5] & BIT0_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT1_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT2_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT6_MASK) != 0x00 || /* TAS bit */
pLogPage[StartingIndex+8] != 0xFF ||
pLogPage[StartingIndex+9] != 0xFF ||
pLogPage[StartingIndex+10] != 0x00 ||
pLogPage[StartingIndex+11] != 0x00
)
{
chkCnd = agTRUE;
}
if (chkCnd == agTRUE)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satModeSelect10: unexpected values\n"));
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return tiSuccess;
break;
case MODESELECT_READ_WRITE_ERROR_RECOVERY_PAGE:
TI_DBG1(("satModeSelect6: Read-Write Error Recovery mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_AWRE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_RC_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_EER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_PER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_DTE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_DCR_MASK) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11])
)
{
TI_DBG5(("satModeSelect6: return check condition \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
TI_DBG5(("satModeSelect6: return GOOD \n"));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
break;
case MODESELECT_CACHING:
/* SAT rev8 Table67, p69*/
TI_DBG5(("satModeSelect6: Caching mode page\n"));
if ( (pLogPage[StartingIndex + 2] & 0xFB) || /* 1111 1011 */
(pLogPage[StartingIndex + 3]) ||
(pLogPage[StartingIndex + 4]) ||
(pLogPage[StartingIndex + 5]) ||
(pLogPage[StartingIndex + 6]) ||
(pLogPage[StartingIndex + 7]) ||
(pLogPage[StartingIndex + 8]) ||
(pLogPage[StartingIndex + 9]) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11]) ||
(pLogPage[StartingIndex + 12] & 0xC1) || /* 1100 0001 */
(pLogPage[StartingIndex + 13]) ||
(pLogPage[StartingIndex + 14]) ||
(pLogPage[StartingIndex + 15])
)
{
TI_DBG1(("satModeSelect6: return check condition \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
/* sends ATA SET FEATURES based on WCE bit */
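/*
 * SET FEATURES subcommand 0x02 enables the write cache and 0x82
 * disables it, so WCE == 0 maps to the disable subcommand.
 */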
if ( !(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_WCE_MASK) )
{
TI_DBG5(("satModeSelect6: disable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x82; /* disable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
TI_DBG5(("satModeSelect6: enable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x02; /* enable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
break;
case MODESELECT_INFORMATION_EXCEPTION_CONTROL_PAGE:
TI_DBG5(("satModeSelect6: Informational Exception Control mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_PERF_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT6_TEST_MASK)
)
{
TI_DBG1(("satModeSelect6: return check condition \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
/* sends either ATA SMART ENABLE/DISABLE OPERATIONS based on DEXCPT bit */
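/*
 * Bit 0x08 of this byte is the DEXCPT bit: DEXCPT == 0 requests
 * exception reporting and maps to SMART ENABLE OPERATIONS
 * (features 0xD8); DEXCPT == 1 maps to SMART DISABLE OPERATIONS
 * (features 0xD9).
 */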
if ( !(pLogPage[StartingIndex + 2] & 0x08) )
{
TI_DBG5(("satModeSelect6: enable information exceptions reporting\n"));
/* sends SMART ENABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_ENABLE_OPERATIONS; /* 0xB0 */
fis->h.features = 0xD8; /* enable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
TI_DBG5(("satModeSelect6: disable information exceptions reporting\n"));
/* sends SMART DISABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_DISABLE_OPERATIONS; /* 0xB0 */
fis->h.features = 0xD9; /* disable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
break;
default:
TI_DBG1(("satModeSelect6: Error unknown page code 0x%x\n", pLogPage[12]));
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satModeSelect6n10_1.
*
 * This function is part of the implementation of ModeSelect6 and ModeSelect10.
 * It is used when ModeSelect6 or ModeSelect10 is converted into multiple
 * ATA commands.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satModeSelect6n10_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/* sends either ATA SET FEATURES based on DRA bit */
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit32 StartingIndex = 0;
fis = satIOContext->pFis;
pLogPage = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG5(("satModeSelect6_1: start\n"));
/* checking Block Descriptor Length on Mode parameter header(6)*/
if (pLogPage[3] == 8)
{
/* mode parameter block descriptor exists */
StartingIndex = 12;
}
else
{
/* mode parameter block descriptor does not exist */
StartingIndex = 4;
}
/* sends ATA SET FEATURES based on DRA bit */
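/*
 * DRA == 0 requests read look-ahead and maps to SET FEATURES
 * subcommand 0xAA (enable); DRA == 1 maps to 0x55 (disable).
 */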
if ( !(pLogPage[StartingIndex + 12] & SCSI_MODE_SELECT6_DRA_MASK) )
{
TI_DBG5(("satModeSelect6_1: enable read look-ahead feature\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0xAA; /* enable read look-ahead */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
TI_DBG5(("satModeSelect6_1: disable read look-ahead feature\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x55; /* disable read look-ahead */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satModeSelect10.
*
* SAT implementation for SCSI satModeSelect10.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satModeSelect10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pLogPage; /* Log Page data buffer */
bit16 BlkDescLen = 0; /* Block Descriptor Length */
bit32 StartingIndex = 0;
bit8 PageCode = 0;
bit32 chkCnd = agFALSE;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pLogPage = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG5(("satModeSelect10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satModeSelect10: return control\n"));
return tiSuccess;
}
/* checking PF bit */
if ( !(scsiCmnd->cdb[1] & SCSI_MODE_SELECT10_PF_MASK))
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satModeSelect10: PF bit check \n"));
return tiSuccess;
}
BlkDescLen = (bit16)((pLogPage[6] << 8) + pLogPage[7]);
/* checking Block Descriptor Length on Mode parameter header(10) and LONGLBA bit*/
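/*
 * Mode parameter header(10) is 8 bytes; bytes 6-7 hold the block
 * descriptor length and the LONGLBA bit in byte 4 selects 16-byte
 * descriptors, so the mode page starts at offset 8, 16 or 24.
 */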
if ( (BlkDescLen == 8) && !(pLogPage[4] & SCSI_MODE_SELECT10_LONGLBA_MASK) )
{
/* mode parameter block descriptor exists and length is 8 byte */
PageCode = (bit8)(pLogPage[16] & 0x3F); /* page code and index is 8 + 8 */
StartingIndex = 16;
}
else if ( (BlkDescLen == 16) && (pLogPage[4] & SCSI_MODE_SELECT10_LONGLBA_MASK) )
{
/* mode parameter block descriptor exists and length is 16 byte */
PageCode = (bit8)(pLogPage[24] & 0x3F); /* page code and index is 8 + 16 */
StartingIndex = 24;
}
else if (BlkDescLen == 0)
{
/*
mode parameter block descriptor does not exist
*/
PageCode = (bit8)(pLogPage[8] & 0x3F); /* page code and index is 8 + 0 */
StartingIndex = 8;
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
else
{
TI_DBG1(("satModeSelect10: return mode parameter block descriptor 0x%x\n", BlkDescLen));
/* no more than one mode parameter block descriptor shall be supported */
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
/*
for debugging only
*/
if (StartingIndex == 8)
{
tdhexdump("startingindex 8", (bit8 *)pLogPage, 8);
}
else if(StartingIndex == 16)
{
if (PageCode == MODESELECT_CACHING)
{
tdhexdump("startingindex 16", (bit8 *)pLogPage, 16+20);
}
else
{
tdhexdump("startingindex 16", (bit8 *)pLogPage, 16+12);
}
}
else
{
if (PageCode == MODESELECT_CACHING)
{
tdhexdump("startingindex 24", (bit8 *)pLogPage, 24+20);
}
else
{
tdhexdump("startingindex 24", (bit8 *)pLogPage, 24+12);
}
}
switch (PageCode) /* page code */
{
case MODESELECT_CONTROL_PAGE:
TI_DBG5(("satModeSelect10: Control mode page\n"));
/*
compare pLogPage to expected value (SAT Table 65, p67)
If not match, return check condition
*/
if ( pLogPage[StartingIndex+1] != 0x0A ||
pLogPage[StartingIndex+2] != 0x02 ||
(pSatDevData->satNCQ == agTRUE && pLogPage[StartingIndex+3] != 0x12) ||
(pSatDevData->satNCQ == agFALSE && pLogPage[StartingIndex+3] != 0x02) ||
(pLogPage[StartingIndex+4] & BIT3_MASK) != 0x00 || /* SWP bit */
(pLogPage[StartingIndex+4] & BIT4_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+4] & BIT5_MASK) != 0x00 || /* UA_INTLCK_CTRL */
(pLogPage[StartingIndex+5] & BIT0_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT1_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT2_MASK) != 0x00 || /* AUTOLOAD MODE */
(pLogPage[StartingIndex+5] & BIT6_MASK) != 0x00 || /* TAS bit */
pLogPage[StartingIndex+8] != 0xFF ||
pLogPage[StartingIndex+9] != 0xFF ||
pLogPage[StartingIndex+10] != 0x00 ||
pLogPage[StartingIndex+11] != 0x00
)
{
chkCnd = agTRUE;
}
if (chkCnd == agTRUE)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satModeSelect10: unexpected values\n"));
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
return tiSuccess;
break;
case MODESELECT_READ_WRITE_ERROR_RECOVERY_PAGE:
TI_DBG5(("satModeSelect10: Read-Write Error Recovery mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_AWRE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_RC_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_EER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_PER_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_DTE_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_DCR_MASK) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11])
)
{
TI_DBG1(("satModeSelect10: return check condition \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
TI_DBG2(("satModeSelect10: return GOOD \n"));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
break;
case MODESELECT_CACHING:
/* SAT rev8 Table67, p69*/
TI_DBG5(("satModeSelect10: Caching mode page\n"));
if ( (pLogPage[StartingIndex + 2] & 0xFB) || /* 1111 1011 */
(pLogPage[StartingIndex + 3]) ||
(pLogPage[StartingIndex + 4]) ||
(pLogPage[StartingIndex + 5]) ||
(pLogPage[StartingIndex + 6]) ||
(pLogPage[StartingIndex + 7]) ||
(pLogPage[StartingIndex + 8]) ||
(pLogPage[StartingIndex + 9]) ||
(pLogPage[StartingIndex + 10]) ||
(pLogPage[StartingIndex + 11]) ||
(pLogPage[StartingIndex + 12] & 0xC1) || /* 1100 0001 */
(pLogPage[StartingIndex + 13]) ||
(pLogPage[StartingIndex + 14]) ||
(pLogPage[StartingIndex + 15])
)
{
TI_DBG1(("satModeSelect10: return check condition \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
/* sends ATA SET FEATURES based on WCE bit */
if ( !(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_WCE_MASK) )
{
TI_DBG5(("satModeSelect10: disable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x82; /* disable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
TI_DBG5(("satModeSelect10: enable write cache\n"));
/* sends SET FEATURES */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SET_FEATURES; /* 0xEF */
fis->h.features = 0x02; /* enable write cache */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
break;
case MODESELECT_INFORMATION_EXCEPTION_CONTROL_PAGE:
TI_DBG5(("satModeSelect10: Informational Exception Control mode page\n"));
if ( (pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_PERF_MASK) ||
(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_TEST_MASK)
)
{
TI_DBG1(("satModeSelect10: return check condition \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_PARAMETER_LIST,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
/* sends either ATA SMART ENABLE/DISABLE OPERATIONS based on DEXCPT bit */
if ( !(pLogPage[StartingIndex + 2] & SCSI_MODE_SELECT10_DEXCPT_MASK) )
{
TI_DBG5(("satModeSelect10: enable information exceptions reporting\n"));
/* sends SMART ENABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_ENABLE_OPERATIONS; /* 0xB0 */
fis->h.features = 0xD8; /* enable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
else
{
TI_DBG5(("satModeSelect10: disable information exceptions reporting\n"));
/* sends SMART DISABLE OPERATIONS */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_SMART_DISABLE_OPERATIONS; /* 0xB0 */
fis->h.features = 0xD9; /* disable */
fis->d.lbaLow = 0; /* */
fis->d.lbaMid = 0x4F; /* 0x4F */
fis->d.lbaHigh = 0xC2; /* 0xC2 */
fis->d.device = 0; /* */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* */
fis->d.sectorCount = 0; /* */
fis->d.sectorCountExp = 0; /* */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satModeSelect6n10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
break;
default:
TI_DBG1(("satModeSelect10: Error unknown page code 0x%x\n", pLogPage[12]));
satSetSensePayload( pSense,
SCSI_SNSKEY_NO_SENSE,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satSynchronizeCache10.
*
* SAT implementation for SCSI satSynchronizeCache10.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSynchronizeCache10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satSynchronizeCache10: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
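/* Neither NACA nor LINK is supported by this translation, so either bit
 * being set is rejected with ILLEGAL REQUEST / INVALID FIELD IN CDB.
 */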
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satSynchronizeCache10: return control\n"));
return tiSuccess;
}
/* checking IMMED bit */
if (scsiCmnd->cdb[1] & SCSI_SYNC_CACHE_IMMED_MASK)
{
TI_DBG1(("satSynchronizeCache10: GOOD status due to IMMED bit\n"));
/* return GOOD status first here */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
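/* Note: when IMMED is set, GOOD status has already been returned to the
 * initiator above; the FLUSH CACHE (EXT) command is still built and
 * issued below.
 */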
/* sends FLUSH CACHE or FLUSH CACHE EXT */
if (pSatDevData->sat48BitSupport == agTRUE)
{
TI_DBG5(("satSynchronizeCache10: sends FLUSH CACHE EXT\n"));
/* FLUSH CACHE EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE_EXT; /* 0xEA */
fis->h.features = 0; /* FIS reserve */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
else
{
TI_DBG5(("satSynchronizeCache10: sends FLUSH CACHE\n"));
/* FLUSH CACHE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE; /* 0xE7 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSynchronizeCache10n16CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satSynchronizeCache16.
*
* SAT implementation for SCSI satSynchronizeCache16.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSynchronizeCache16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satSynchronizeCache16: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satSynchronizeCache16: return control\n"));
return tiSuccess;
}
/* checking IMMED bit */
if (scsiCmnd->cdb[1] & SCSI_SYNC_CACHE_IMMED_MASK)
{
TI_DBG1(("satSynchronizeCache16: GOOD status due to IMMED bit\n"));
/* return GOOD status first here */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
}
/* sends FLUSH CACHE or FLUSH CACHE EXT */
if (pSatDevData->sat48BitSupport == agTRUE)
{
TI_DBG5(("satSynchronizeCache16: sends FLUSH CACHE EXT\n"));
/* FLUSH CACHE EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE_EXT; /* 0xEA */
fis->h.features = 0; /* FIS reserve */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
else
{
TI_DBG5(("satSynchronizeCache16: sends FLUSH CACHE\n"));
/* FLUSH CACHE */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_FLUSH_CACHE; /* 0xE7 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.device = 0; /* FIS DEV is discarded in SATA */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved4 = 0;
fis->d.reserved5 = 0;
}
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satSynchronizeCache10n16CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteAndVerify10.
*
* SAT implementation for SCSI satWriteAndVerify10.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteAndVerify10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
combination of write10 and verify10
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWriteAndVerify10: start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify10: BYTCHK bit checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify10: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = scsiCmnd->cdb[7]; /* MSB */
TL[3] = scsiCmnd->cdb[8]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
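/* Worked example (illustrative values): cdb[2..5] = {0x00,0x01,0x00,0x00}
 * yields lba = 0x00010000 (65536); cdb[7..8] = {0x01,0x00} yields
 * tl = 0x0100 (256 sectors).
 */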
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify10: return LBA out of range\n"));
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satWrite10: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* in case we can't fit the transfer length, we loop */
TI_DBG5(("satWriteAndVerify10: case 2 !!!\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* in case we can't fit the transfer length, we loop */
TI_DBG5(("satWriteAndVerify10: case 1 !!!\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWriteAndVerify10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWriteAndVerify10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWriteAndVerify10: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG5(("satWriteAndVerify10: case 5\n"));
/* Supports 48-bit FPDMA addressing; use the WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
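/* Device register: 0x40 sets the LBA-mode bit (bit 6); 0xC0 additionally
 * sets bit 7, which carries FUA for WRITE FPDMA QUEUED.
 */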
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
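/* Illustration, assuming satComputeLoopNum(tl, limit) returns the number
 * of chained commands needed (roughly ceil(tl / limit)): tl = 0x1FE with
 * the 8-bit limit 0xFF gives 2 loops; tl <= 0xFF gives 1 loop.
 */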
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satWriteAndVerify10: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedWriteNVerifyCB;
}
else
{
TI_DBG1(("satWriteAndVerify10: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
}
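/* For the chained path each intermediate command transfers the per-command
 * maximum programmed above; the final chunk's remainder is computed in
 * satChainedWriteNVerify_Write() (see below) as OrgTL % DenomTL.
 */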
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
#ifdef REMOVED
GLOBAL bit32 satWriteAndVerify10(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
combination of write10 and verify10
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWriteAndVerify10: start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify10: BYTCHK bit checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satWriteAndVerify10: return control\n"));
return tiSuccess;
}
/* let's do write10 */
if ( pSatDevData->sat48BitSupport != agTRUE )
{
/*
writeandverify10 but no support for 48 bit addressing -> problem in transfer
length(sector count)
*/
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify10: return internal checking\n"));
return tiSuccess;
}
/* cdb10; computing LBA and transfer length */
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify10: return LBA out of range\n"));
return tiSuccess;
}
}
/* case 1 and 2 */
if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* can't fit the transfer length */
TI_DBG5(("satWriteAndVerify10: case 2 !!!\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (0x4 << 4) | (scsiCmnd->cdb[2] & 0xF);
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* can't fit the transfer length */
TI_DBG5(("satWriteAndVerify10: case 1 !!!\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (0x4 << 4) | (scsiCmnd->cdb[2] & 0xF);
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWriteAndVerify10: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWriteAndVerify10: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWriteAndVerify10: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG5(("satWriteAndVerify10: case 5\n"));
/* Supports 48-bit FPDMA addressing; use the WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satWriteAndVerify10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
#endif /* REMOVED */
#ifdef REMOVED
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteAndVerify10_1.
*
* SAT implementation for SCSI satWriteAndVerify10_1.
* Sub function of satWriteAndVerify10
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteAndVerify10_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWriteAndVerify10_1: start\n"));
if (pSatDevData->sat48BitSupport == agTRUE)
{
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satWriteAndVerify10CB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG1(("satWriteAndVerify10_1: return status %d\n", status));
return (status);
}
else
{
/* can't fit in SAT_READ_VERIFY_SECTORS because of Sector Count and LBA */
TI_DBG1(("satWriteAndVerify10_1: can't fit in SAT_READ_VERIFY_SECTORS\n"));
return tiError;
}
return tiSuccess;
}
#endif /* REMOVED */
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteAndVerify12.
*
* SAT implementation for SCSI satWriteAndVerify12.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteAndVerify12(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
combination of write12 and verify12
temp: since write12 is not supported (due to internal checking), this is
not supported either
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
bit32 rangeChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWriteAndVerify12: start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify12: BYTCHK bit checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satWriteAndVerify12: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = scsiCmnd->cdb[6]; /* MSB */
TL[1] = scsiCmnd->cdb[7];
TL[2] = scsiCmnd->cdb[8];
TL[3] = scsiCmnd->cdb[9]; /* LSB */
rangeChk = satAddNComparebit32(LBA, TL);
lba = satComputeCDB12LBA(satIOContext);
tl = satComputeCDB12TL(satIOContext);
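/* CDB12 carries a full 32-bit transfer length in cdb[6..9] (big-endian);
 * e.g. cdb[6..9] = {0x00,0x01,0x00,0x00} is tl = 0x00010000 sectors,
 * assuming satComputeCDB12TL() decodes that field.
 */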
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (lba > SAT_TR_LBA_LIMIT - 1)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify12: return LBA out of range, not EXT\n"));
return tiSuccess;
}
if (rangeChk) // if (lba + tl > SAT_TR_LBA_LIMIT)
{
TI_DBG1(("satWriteAndVerify12: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) // if (lba + tl <= SAT_TR_LBA_LIMIT)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* In case that we can't fit the transfer length, we loop */
TI_DBG5(("satWriteAndVerify12: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* In case that we can't fit the transfer length, we loop */
TI_DBG5(("satWriteAndVerify12: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWriteAndVerify12: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWriteAndVerify12: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWriteAndVerify12: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satWriteAndVerify12: case 5\n"));
/* Supports 48-bit FPDMA addressing; use the WRITE FPDMA QUEUED command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[9]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE12_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[8]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
// satIOContext->OrgLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
satIOContext->LoopNum2 = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satWriteAndVerify12: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedWriteNVerifyCB;
}
else
{
TI_DBG1(("satWriteAndVerify12: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
GLOBAL bit32 satNonChainedWriteNVerify_Verify(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satNonChainedWriteNVerify_Verify: start\n"));
if (pSatDevData->sat48BitSupport == agTRUE)
{
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedWriteNVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG1(("satNonChainedWriteNVerify_Verify: return status %d\n", status));
return (status);
}
else
{
/* can't fit in SAT_READ_VERIFY_SECTORS because of Sector Count and LBA */
TI_DBG1(("satNonChainedWriteNVerify_Verify: can't fit in SAT_READ_VERIFY_SECTORS\n"));
return tiError;
}
}
GLOBAL bit32 satChainedWriteNVerify_Write(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
Assumption: error checks on lba and tl have been done in satWrite*();
the current LBA is advanced chunk by chunk (lba = lba + tl).
*/
bit32 status;
satIOContext_t *satOrgIOContext = agNULL;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
TI_DBG1(("satChainedWriteNVerify_Write: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
scsiCmnd = satOrgIOContext->pScsiCmnd;
osti_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
DenomTL = 0xFF;
break;
case SAT_WRITE_SECTORS:
DenomTL = 0xFF;
break;
case SAT_WRITE_DMA_EXT:
DenomTL = 0xFFFF;
break;
case SAT_WRITE_DMA_FUA_EXT:
DenomTL = 0xFFFF;
break;
case SAT_WRITE_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
case SAT_WRITE_FPDMA_QUEUED:
DenomTL = 0xFFFF;
break;
default:
TI_DBG1(("satChainedWriteNVerify_Write: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
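/* DenomTL is the per-command sector-count ceiling: an 8-bit count (0xFF)
 * for 28-bit commands, a 16-bit count (0xFFFF) for EXT/FPDMA commands.
 */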
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF); /* LSB */
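/* e.g. lba = 0x01020304 -> LBA[0] = 0x01 (MSB), LBA[1] = 0x02,
 * LBA[2] = 0x03, LBA[3] = 0x04 (LSB).
 */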
switch (satOrgIOContext->ATACmd)
{
case SAT_WRITE_DMA:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_DMA_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
break;
case SAT_WRITE_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
break;
case SAT_WRITE_FPDMA_QUEUED:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE10_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->h.features = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.featuresExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->h.features = 0xFF; /* FIS sector count (7:0) */
fis->d.featuresExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
break;
default:
TI_DBG1(("satChainedWriteNVerify_Write: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satChainedWriteNVerify_Write: return\n"));
return (status);
}
/*
similar to write12 and verify10;
the verify phase started here is effectively a verify12
*/
GLOBAL bit32 satChainedWriteNVerify_Start_Verify(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
deal with transfer length; others have been handled previously at this point;
no LBA check; no range check;
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satDeviceData_t *pSatDevData;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[4];
bit8 TL[4];
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satChainedWriteNVerify_Start_Verify: start\n"));
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5]; /* LSB */
TL[0] = scsiCmnd->cdb[6]; /* MSB */
TL[1] = scsiCmnd->cdb[7];
TL[2] = scsiCmnd->cdb[8];
TL[3] = scsiCmnd->cdb[9]; /* LSB */
lba = satComputeCDB12LBA(satIOContext);
tl = satComputeCDB12TL(satIOContext);
if (pSatDevData->sat48BitSupport == agTRUE)
{
TI_DBG5(("satChainedWriteNVerify_Start_Verify: SAT_READ_VERIFY_SECTORS_EXT\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set 01000000 */
fis->d.lbaLowExp = scsiCmnd->cdb[2]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[7]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS_EXT;
}
else
{
TI_DBG5(("satChainedWriteNVerify_Start_Verify: SAT_READ_VERIFY_SECTORS\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[5]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[4]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[3]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[2] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[8]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
satIOContext->ATACmd = SAT_READ_VERIFY_SECTORS;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
/* SAT_READ_VERIFY_SECTORS_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
TI_DBG1(("satChainedWriteNVerify_Start_Verify: error case 1!!!\n"));
LoopNum = 1;
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satChainedWriteNVerify_Start_Verify: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedWriteNVerifyCB;
}
else
{
TI_DBG1(("satChainedWriteNVerify_Start_Verify: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_READ_VERIFY_SECTORS)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_READ_VERIFY_SECTORS_EXT)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
TI_DBG1(("satChainedWriteNVerify_Start_Verify: error case 2!!!\n"));
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
GLOBAL bit32 satChainedWriteNVerify_Verify(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
satIOContext_t *satOrgIOContext = agNULL;
agsaFisRegHostToDevice_t *fis;
bit32 agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
bit32 lba = 0;
bit32 DenomTL = 0xFF;
bit32 Remainder = 0;
bit8 LBA[4]; /* 0 MSB, 3 LSB */
TI_DBG2(("satChainedWriteNVerify_Verify: start\n"));
fis = satIOContext->pFis;
satOrgIOContext = satIOContext->satOrgIOContext;
osti_memset(LBA,0, sizeof(LBA));
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
DenomTL = 0xFF;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
DenomTL = 0xFFFF;
break;
default:
TI_DBG1(("satChainedWriteNVerify_Verify: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
Remainder = satOrgIOContext->OrgTL % DenomTL;
satOrgIOContext->currentLBA = satOrgIOContext->currentLBA + DenomTL;
lba = satOrgIOContext->currentLBA;
LBA[0] = (bit8)((lba & 0xFF000000) >> (8 * 3)); /* MSB */
LBA[1] = (bit8)((lba & 0xFF0000) >> (8 * 2));
LBA[2] = (bit8)((lba & 0xFF00) >> 8);
LBA[3] = (bit8)(lba & 0xFF); /* LSB */
switch (satOrgIOContext->ATACmd)
{
case SAT_READ_VERIFY_SECTORS:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS; /* 0x40 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[0] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)Remainder; /* FIS sector count (7:0) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
}
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
case SAT_READ_VERIFY_SECTORS_EXT:
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT; /* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[3]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[2]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[1]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[0]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
if (satOrgIOContext->LoopNum == 1)
{
/* last loop */
fis->d.sectorCount = (bit8)(Remainder & 0xFF); /* FIS sector count (7:0) */
fis->d.sectorCountExp = (bit8)((Remainder & 0xFF00) >> 8); /* FIS sector count (15:8) */
}
else
{
fis->d.sectorCount = 0xFF; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0xFF; /* FIS sector count (15:8) */
}
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
break;
default:
TI_DBG1(("satChainedWriteNVerify_Verify: error incorrect ata command 0x%x\n", satIOContext->ATACmd));
return tiError;
break;
}
/* Initialize CB for SATA completion.
*/
/* chained data */
satIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satChainedWriteNVerify_Verify: return\n"));
return (status);
}
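/*
 * Illustrative sketch, not part of the original driver: the per-iteration
 * update in satChainedWriteNVerify_Verify() splits the running 32-bit LBA
 * into big-endian bytes and, for the 28-bit READ VERIFY SECTORS case, folds
 * LBA bits 27:24 into the low nibble of the FIS device register together
 * with the LBA-mode bit (0x40). Hypothetical helper names:
 */
#if 0
static void exampleSplitLba32(bit32 lba, bit8 out[4])
{
  out[0] = (bit8)((lba >> 24) & 0xFF); /* MSB */
  out[1] = (bit8)((lba >> 16) & 0xFF);
  out[2] = (bit8)((lba >> 8) & 0xFF);
  out[3] = (bit8)(lba & 0xFF);         /* LSB */
}
static bit8 exampleDeviceReg28(bit8 lbaMsbByte)
{
  /* 0x40 selects LBA mode; the low nibble carries LBA bits 27:24 */
  return (bit8)((0x4 << 4) | (lbaMsbByte & 0xF));
}
#endif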
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteAndVerify16.
*
* SAT implementation for SCSI satWriteAndVerify16.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satWriteAndVerify16(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
combination of WRITE(16) and VERIFY(16);
WRITE(16) carries an 8-byte LBA, but ATA LBA is at most 6 bytes, so larger LBAs are not supported
*/
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 lba = 0;
bit32 tl = 0;
bit32 LoopNum = 1;
bit8 LBA[8];
bit8 TL[8];
bit32 rangeChk = agFALSE; /* lba and tl range check */
bit32 limitChk = agFALSE; /* lba and tl range check */
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
TI_DBG5(("satWriteAndVerify16:start\n"));
/* checking BYTCHK bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE_N_VERIFY_BYTCHK_MASK)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteAndVerify16: BYTCHK bit checking \n"));
return tiSuccess;
}
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[15] & SCSI_NACA_MASK) || (scsiCmnd->cdb[15] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG2(("satWriteAndVerify16: return control\n"));
return tiSuccess;
}
osti_memset(LBA, 0, sizeof(LBA));
osti_memset(TL, 0, sizeof(TL));
/* do not use memcpy due to indexing in LBA and TL */
LBA[0] = scsiCmnd->cdb[2]; /* MSB */
LBA[1] = scsiCmnd->cdb[3];
LBA[2] = scsiCmnd->cdb[4];
LBA[3] = scsiCmnd->cdb[5];
LBA[4] = scsiCmnd->cdb[6];
LBA[5] = scsiCmnd->cdb[7];
LBA[6] = scsiCmnd->cdb[8];
LBA[7] = scsiCmnd->cdb[9]; /* LSB */
TL[0] = 0;
TL[1] = 0;
TL[2] = 0;
TL[3] = 0;
TL[4] = scsiCmnd->cdb[10]; /* MSB */
TL[5] = scsiCmnd->cdb[11];
TL[6] = scsiCmnd->cdb[12];
TL[7] = scsiCmnd->cdb[13]; /* LSB */
rangeChk = satAddNComparebit64(LBA, TL);
limitChk = satCompareLBALimitbit(LBA);
lba = satComputeCDB16LBA(satIOContext);
tl = satComputeCDB16TL(satIOContext);
/* Table 34, 9.1, p 46 */
/*
note: As of 2/10/2006, no support for DMA QUEUED
*/
/*
Table 34, 9.1, p 46, b
When no 48-bit addressing support or NCQ, if LBA is beyond (2^28 - 1),
return check condition
*/
if (pSatDevData->satNCQ != agTRUE &&
pSatDevData->sat48BitSupport != agTRUE
)
{
if (limitChk)
{
TI_DBG1(("satWriteAndVerify16: return LBA out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
if (rangeChk) /* lba + tl > SAT_TR_LBA_LIMIT */
{
TI_DBG1(("satWriteAndVerify16: return LBA+TL out of range, not EXT\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
/* case 1 and 2 */
if (!rangeChk) /* lba + tl <= SAT_TR_LBA_LIMIT */
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* If the transfer length does not fit in one command, we loop */
TI_DBG5(("satWriteAndVerify16: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* If the transfer length does not fit in one command, we loop */
TI_DBG5(("satWriteAndVerify16: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (scsiCmnd->cdb[6] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satWriteAndVerify16: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satWriteAndVerify16: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.sectorCountExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satWriteAndVerify16: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satWriteAndVerify16: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = scsiCmnd->cdb[13]; /* FIS sector count (7:0) */
fis->d.lbaLow = scsiCmnd->cdb[9]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = scsiCmnd->cdb[8]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = scsiCmnd->cdb[7]; /* FIS LBA (23:16) */
/* Check FUA bit */
if (scsiCmnd->cdb[1] & SCSI_WRITE16_FUA_MASK)
fis->d.device = 0xC0; /* FIS FUA set */
else
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = scsiCmnd->cdb[6]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = scsiCmnd->cdb[5]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = scsiCmnd->cdb[4]; /* FIS LBA (47:40) */
fis->d.featuresExp = scsiCmnd->cdb[12]; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->currentLBA = lba;
satIOContext->OrgTL = tl;
/*
computing number of loop and remainder for tl
0xFF in case not ext
0xFFFF in case EXT
*/
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
LoopNum = satComputeLoopNum(tl, 0xFF);
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
/* SAT_WRITE_SECTORS_EXT, SAT_WRITE_DMA_EXT, SAT_WRITE_DMA_FUA_EXT */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
LoopNum = satComputeLoopNum(tl, 0xFFFF);
}
satIOContext->LoopNum = LoopNum;
if (LoopNum == 1)
{
TI_DBG5(("satWriteAndVerify16: NON CHAINED data\n"));
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satNonChainedWriteNVerifyCB;
}
else
{
TI_DBG1(("satWriteAndVerify16: CHAINED data\n"));
/* re-setting tl */
if (fis->h.command == SAT_WRITE_SECTORS || fis->h.command == SAT_WRITE_DMA)
{
fis->d.sectorCount = 0xFF;
}
else if (fis->h.command == SAT_WRITE_SECTORS_EXT ||
fis->h.command == SAT_WRITE_DMA_EXT ||
fis->h.command == SAT_WRITE_DMA_FUA_EXT
)
{
fis->d.sectorCount = 0xFF;
fis->d.sectorCountExp = 0xFF;
}
else
{
/* SAT_WRITE_FPDMA_QUEUED */
fis->h.features = 0xFF;
fis->d.featuresExp = 0xFF;
}
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
}
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return (status);
}
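/*
 * Illustrative sketch, not part of the original driver: satWriteAndVerify16()
 * above selects the ATA opcode from the device capability flags, with NCQ
 * taking priority over 48-bit support, which in turn overrides the 28-bit
 * DMA/PIO split. A condensed form of that decision, with a hypothetical
 * helper name:
 */
#if 0
static bit8 exampleSelectWriteOpcode(satDeviceData_t *dev)
{
  if (dev->satNCQ == agTRUE)
  {
    /* the full code also rejects NCQ without 48-bit support */
    return SAT_WRITE_FPDMA_QUEUED;      /* case 5 */
  }
  if (dev->sat48BitSupport == agTRUE)
  {
    if (dev->satDMASupport == agTRUE && dev->satDMAEnabled == agTRUE)
    {
      return SAT_WRITE_DMA_EXT;         /* case 3 */
    }
    return SAT_WRITE_SECTORS_EXT;       /* case 4 */
  }
  if (dev->satDMASupport == agTRUE && dev->satDMAEnabled == agTRUE)
  {
    return SAT_WRITE_DMA;               /* case 2 */
  }
  return SAT_WRITE_SECTORS;             /* case 1 */
}
#endif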
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReadMediaSerialNumber.
*
* SAT implementation for SCSI Read Media Serial Number.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satReadMediaSerialNumber(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
agsaSATAIdentifyData_t *pSATAIdData;
bit8 *pSerialNumber;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pSATAIdData = &(pSatDevData->satIdentifyData);
pSerialNumber = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG1(("satReadMediaSerialNumber: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[11] & SCSI_NACA_MASK) || (scsiCmnd->cdb[11] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadMediaSerialNumber: return control\n"));
return tiSuccess;
}
if (tiScsiRequest->scsiCmnd.expDataLength == 4)
{
if (pSATAIdData->commandSetFeatureDefault & 0x4)
{
TI_DBG1(("satReadMediaSerialNumber: Media serial number returning only length\n"));
/* SPC-3 6.16 p192; filling in length */
pSerialNumber[0] = 0;
pSerialNumber[1] = 0;
pSerialNumber[2] = 0;
pSerialNumber[3] = 0x3C;
}
else
{
/* one sector minus the 4-byte header: 512 - 4 = 0x1FC, to avoid underflow */
pSerialNumber[0] = 0;
pSerialNumber[1] = 0;
pSerialNumber[2] = 0x1;
pSerialNumber[3] = 0xfc;
}
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
if ( pSatDevData->IDDeviceValid == agTRUE)
{
if (pSATAIdData->commandSetFeatureDefault & 0x4)
{
/* word87 bit2 Media serial number is valid */
/* read word 176 to 205; length is 2*30 = 60 = 0x3C*/
tdhexdump("ID satReadMediaSerialNumber", (bit8*)pSATAIdData->currentMediaSerialNumber, 2*30);
/* SPC-3 6.16 p192; filling in length */
pSerialNumber[0] = 0;
pSerialNumber[1] = 0;
pSerialNumber[2] = 0;
pSerialNumber[3] = 0x3C;
osti_memcpy(&pSerialNumber[4], (void *)pSATAIdData->currentMediaSerialNumber, 60);
tdhexdump("satReadMediaSerialNumber", (bit8*)pSerialNumber, 2*30 + 4);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
else
{
/* word87 bit2 Media serial number is NOT valid */
TI_DBG1(("satReadMediaSerialNumber: Media serial number is NOT valid \n"));
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* READ SECTOR(S) EXT */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS_EXT; /* 0x24 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = 0; /* FIS LBA (31:24) */
fis->d.lbaMidExp = 0; /* FIS LBA (39:32) */
fis->d.lbaHighExp = 0; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
else
{
/* READ SECTOR(S) */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_SECTORS; /* 0x20 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
}
satIOContext->satCompleteCB = &satReadMediaSerialNumberCB;
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
}
else
{
/* temporary failure */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOFailed,
tiDetailOtherError,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
}
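/*
 * Illustrative sketch, not part of the original driver: the 4-byte header
 * built above is the big-endian serial number length field from SPC-3 6.16;
 * 0x3C (60 bytes) covers ATA IDENTIFY DEVICE words 176-205. Hypothetical
 * helper name:
 */
#if 0
static void exampleFillSerialLenHeader(bit8 *buf, bit32 serialLen)
{
  buf[0] = (bit8)((serialLen >> 24) & 0xFF);
  buf[1] = (bit8)((serialLen >> 16) & 0xFF);
  buf[2] = (bit8)((serialLen >> 8) & 0xFF);
  buf[3] = (bit8)(serialLen & 0xFF); /* e.g. 0x3C for a 60-byte serial */
}
#endif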
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReadBuffer.
*
* SAT implementation for SCSI Read Buffer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/* SAT-2, Revision 00*/
GLOBAL bit32 satReadBuffer(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status = tiSuccess;
bit32 agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit32 bufferOffset;
bit32 tl;
bit8 mode;
bit8 bufferID;
bit8 *pBuff;
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pBuff = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG2(("satReadBuffer: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadBuffer: return control\n"));
return tiSuccess;
}
bufferOffset = (scsiCmnd->cdb[3] << (8*2)) + (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
tl = (scsiCmnd->cdb[6] << (8*2)) + (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
mode = (bit8)(scsiCmnd->cdb[1] & SCSI_READ_BUFFER_MODE_MASK);
bufferID = scsiCmnd->cdb[2];
if (mode == READ_BUFFER_DATA_MODE) /* 2 */
{
if (bufferID == 0 && bufferOffset == 0 && tl == 512)
{
/* send ATA READ BUFFER */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_BUFFER; /* 0xE4 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
satIOContext->satCompleteCB = &satReadBufferCB;
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
if (bufferID == 0 && bufferOffset == 0 && tl != 512)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadBuffer: allocation length is not 512; it is %d\n", tl));
return tiSuccess;
}
if (bufferID == 0 && bufferOffset != 0)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadBuffer: buffer offset is not 0; it is %d\n", bufferOffset));
return tiSuccess;
}
/* all other cases unsupported */
TI_DBG1(("satReadBuffer: unsupported case 1\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else if (mode == READ_BUFFER_DESCRIPTOR_MODE) /* 3 */
{
if (tl < READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN) /* 4 */
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReadBuffer: tl < 4; tl is %d\n", tl));
return tiSuccess;
}
if (bufferID == 0)
{
/* SPC-4, 6.15.5, p189; SAT-2 Rev00, 8.7.2.3, p41*/
pBuff[0] = 0xFF;
pBuff[1] = 0x00;
pBuff[2] = 0x02;
pBuff[3] = 0x00;
if (READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN < tl)
{
/* underrun */
TI_DBG1(("satReadBuffer: underrun tl %d data %d\n", tl, READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN));
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOUnderRun,
tl - READ_BUFFER_DESCRIPTOR_MODE_DATA_LEN,
agNULL,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
}
else
{
/* We don't support other than bufferID 0 */
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
else
{
/* We don't support any other mode */
TI_DBG1(("satReadBuffer: unsupported mode %d\n", mode));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
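/*
 * Illustrative sketch, not part of the original driver: the descriptor mode
 * response above follows SPC-4 6.15.5 — byte 0 is the offset boundary (0xFF
 * meaning only a zero buffer offset is accepted) and bytes 1..3 are the
 * big-endian buffer capacity (0x000200 = 512 bytes, one ATA sector).
 * Hypothetical helper name:
 */
#if 0
static void exampleFillReadBufferDescriptor(bit8 *buf)
{
  buf[0] = 0xFF; /* offset boundary: non-zero offsets unsupported */
  buf[1] = 0x00; /* buffer capacity (23:16) */
  buf[2] = 0x02; /* buffer capacity (15:8) */
  buf[3] = 0x00; /* buffer capacity (7:0): 0x200 = 512 */
}
#endif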
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satWriteBuffer.
*
* SAT implementation for SCSI Write Buffer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/* SAT-2, Revision 00*/
GLOBAL bit32 satWriteBuffer(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
#ifdef NOT_YET
bit32 agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
#endif
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
bit32 bufferOffset;
bit32 parmLen;
bit8 mode;
bit8 bufferID;
bit8 *pBuff;
pSense = satIOContext->pSense;
scsiCmnd = &tiScsiRequest->scsiCmnd;
pBuff = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG2(("satWriteBuffer: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[9] & SCSI_NACA_MASK) || (scsiCmnd->cdb[9] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteBuffer: return control\n"));
return tiSuccess;
}
bufferOffset = (scsiCmnd->cdb[3] << (8*2)) + (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
parmLen = (scsiCmnd->cdb[6] << (8*2)) + (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
mode = (bit8)(scsiCmnd->cdb[1] & SCSI_READ_BUFFER_MODE_MASK);
bufferID = scsiCmnd->cdb[2];
/* for debugging only */
tdhexdump("satWriteBuffer pBuff", (bit8 *)pBuff, 24);
if (mode == WRITE_BUFFER_DATA_MODE) /* 2 */
{
if (bufferID == 0 && bufferOffset == 0 && parmLen == 512)
{
TI_DBG1(("satWriteBuffer: sending ATA WRITE BUFFER\n"));
/* send ATA WRITE BUFFER */
#ifdef NOT_YET
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_BUFFER; /* 0xE8 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA (27:24) and FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->satCompleteCB = &satWriteBufferCB;
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
#endif
/* temp */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_GOOD,
agNULL,
satIOContext->interruptContext);
return tiSuccess;
}
if ( (bufferID == 0 && bufferOffset != 0) ||
(bufferID == 0 && parmLen != 512)
)
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satWriteBuffer: wrong buffer offset %d or parameter length parmLen %d\n", bufferOffset, parmLen));
return tiSuccess;
}
/* all other cases unsupported */
TI_DBG1(("satWriteBuffer: unsupported case 1\n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else if (mode == WRITE_BUFFER_DL_MICROCODE_SAVE_MODE) /* 5 */
{
TI_DBG1(("satWriteBuffer: not yet supported mode %d\n", mode));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
else
{
/* We don't support any other mode */
TI_DBG1(("satWriteBuffer: unsupported mode %d\n", mode));
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_COMMAND,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
}
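/*
 * Illustrative sketch, not part of the original driver: in data mode (2)
 * satWriteBuffer() above accepts only buffer ID 0, buffer offset 0 and a
 * parameter list of exactly one 512-byte sector, mirroring the fixed-size
 * payload of ATA WRITE BUFFER. A condensed predicate with a hypothetical
 * name:
 */
#if 0
static bit32 exampleWriteBufferArgsOk(bit8 bufferID,
                                      bit32 bufferOffset,
                                      bit32 parmLen)
{
  if (bufferID == 0 && bufferOffset == 0 && parmLen == 512)
  {
    return agTRUE;
  }
  return agFALSE;
}
#endif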
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReassignBlocks.
*
* SAT implementation for SCSI Reassign Blocks.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satReassignBlocks(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
/*
assumes the entire LBA fits in the ATA command; boundary conditions are not checked here yet
*/
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pParmList; /* Log Page data buffer */
bit8 LongLBA;
bit8 LongList;
bit32 defectListLen;
bit8 LBA[8];
bit32 startingIndex;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pParmList = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG5(("satReassignBlocks: start\n"));
/* checking CONTROL */
/* NACA == 1 or LINK == 1*/
if ( (scsiCmnd->cdb[5] & SCSI_NACA_MASK) || (scsiCmnd->cdb[5] & SCSI_LINK_MASK) )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_ILLEGAL_REQUEST,
0,
SCSI_SNSCODE_INVALID_FIELD_IN_CDB,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
TI_DBG1(("satReassignBlocks: return control\n"));
return tiSuccess;
}
osti_memset(satIOContext->LBA, 0, 8);
satIOContext->ParmIndex = 0;
satIOContext->ParmLen = 0;
LongList = (bit8)(scsiCmnd->cdb[1] & SCSI_REASSIGN_BLOCKS_LONGLIST_MASK);
LongLBA = (bit8)(scsiCmnd->cdb[1] & SCSI_REASSIGN_BLOCKS_LONGLBA_MASK);
osti_memset(LBA, 0, sizeof(LBA));
if (LongList == 0)
{
defectListLen = (pParmList[2] << 8) + pParmList[3];
}
else
{
defectListLen = (pParmList[0] << (8*3)) + (pParmList[1] << (8*2))
+ (pParmList[2] << 8) + pParmList[3];
}
/* SBC 5.16.2, p61*/
satIOContext->ParmLen = defectListLen + 4 /* header size */;
startingIndex = 4;
if (LongLBA == 0)
{
LBA[4] = pParmList[startingIndex]; /* MSB */
LBA[5] = pParmList[startingIndex+1];
LBA[6] = pParmList[startingIndex+2];
LBA[7] = pParmList[startingIndex+3]; /* LSB */
startingIndex = startingIndex + 4;
}
else
{
LBA[0] = pParmList[startingIndex]; /* MSB */
LBA[1] = pParmList[startingIndex+1];
LBA[2] = pParmList[startingIndex+2];
LBA[3] = pParmList[startingIndex+3];
LBA[4] = pParmList[startingIndex+4];
LBA[5] = pParmList[startingIndex+5];
LBA[6] = pParmList[startingIndex+6];
LBA[7] = pParmList[startingIndex+7]; /* LSB */
startingIndex = startingIndex + 8;
}
tdhexdump("satReassignBlocks Parameter list", (bit8 *)pParmList, 4 + defectListLen);
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
/* DEV and LBA 27:24 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
osti_memcpy(satIOContext->LBA, LBA, 8);
satIOContext->ParmIndex = startingIndex;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satReassignBlocksCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
return status;
}
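/*
 * Illustrative sketch, not part of the original driver: the REASSIGN BLOCKS
 * parameter list parsed above is a 4-byte header followed by defect
 * descriptors of 4 bytes each (LONGLBA clear) or 8 bytes each (LONGLBA set).
 * A loop that visits every descriptor, with hypothetical names:
 */
#if 0
static void exampleWalkDefectList(bit8 *parmList,
                                  bit32 defectListLen,
                                  bit8 longLBA)
{
  bit32 entrySize = (longLBA == 0) ? 4 : 8;
  bit32 idx;
  /* descriptors start right after the 4-byte header */
  for (idx = 4; idx + entrySize <= 4 + defectListLen; idx += entrySize)
  {
    bit8 *lbaBytes = &parmList[idx]; /* big-endian LBA bytes */
    /* hand lbaBytes to the per-LBA verify/write sequence here */
    (void)lbaBytes;
  }
}
#endif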
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReassignBlocks_1.
*
 * SAT implementation for SCSI Reassign Blocks. This is a helper function for
 * satReassignBlocks and satReassignBlocksCB. It sends the ATA verify command.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/* next LBA; sends READ VERIFY SECTOR(S); updates LBA and ParmIndex */
GLOBAL bit32 satReassignBlocks_1(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
satIOContext_t *satOrgIOContext
)
{
/*
assumes the entire LBA fits in the ATA command; boundary conditions are not checked here yet
tiScsiRequest is OS generated; needed for accessing the parameter list
*/
bit32 agRequestType;
satDeviceData_t *pSatDevData;
tiIniScsiCmnd_t *scsiCmnd;
agsaFisRegHostToDevice_t *fis;
bit8 *pParmList; /* Log Page data buffer */
bit8 LongLBA;
bit8 LBA[8];
bit32 startingIndex;
pSatDevData = satIOContext->pSatDevData;
scsiCmnd = &tiScsiRequest->scsiCmnd;
fis = satIOContext->pFis;
pParmList = (bit8 *) tiScsiRequest->sglVirtualAddr;
TI_DBG5(("satReassignBlocks_1: start\n"));
LongLBA = (bit8)(scsiCmnd->cdb[1] & SCSI_REASSIGN_BLOCKS_LONGLBA_MASK);
osti_memset(LBA, 0, sizeof(LBA));
startingIndex = satOrgIOContext->ParmIndex;
if (LongLBA == 0)
{
LBA[4] = pParmList[startingIndex];
LBA[5] = pParmList[startingIndex+1];
LBA[6] = pParmList[startingIndex+2];
LBA[7] = pParmList[startingIndex+3];
startingIndex = startingIndex + 4;
}
else
{
LBA[0] = pParmList[startingIndex];
LBA[1] = pParmList[startingIndex+1];
LBA[2] = pParmList[startingIndex+2];
LBA[3] = pParmList[startingIndex+3];
LBA[4] = pParmList[startingIndex+4];
LBA[5] = pParmList[startingIndex+5];
LBA[6] = pParmList[startingIndex+6];
LBA[7] = pParmList[startingIndex+7];
startingIndex = startingIndex + 8;
}
if (pSatDevData->sat48BitSupport == agTRUE)
{
/* sends READ VERIFY SECTOR(S) EXT*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS_EXT;/* 0x42 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.device = 0x40; /* 01000000 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
else
{
/* READ VERIFY SECTOR(S)*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_VERIFY_SECTORS;/* 0x40 */
fis->h.features = 0; /* FIS features NA */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
/* DEV and LBA 27:24 */
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
}
osti_memcpy(satOrgIOContext->LBA, LBA, 8);
satOrgIOContext->ParmIndex = startingIndex;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satReassignBlocksCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext );
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satReassignBlocks_2.
*
 * SAT implementation for SCSI Reassign Blocks. This is a helper function for
 * satReassignBlocks and satReassignBlocksCB. It sends the ATA write command.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
* \param LBA: Pointer to the LBA to be processed
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/* current LBA; sends WRITE */
GLOBAL bit32 satReassignBlocks_2(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
bit8 *LBA
)
{
/*
assumes the entire LBA fits in the ATA command; boundary conditions are not checked here yet
tiScsiRequest is TD generated for writing
*/
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
agsaFisRegHostToDevice_t *fis;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 2 */
/* WRITE DMA*/
/* single-sector write of the LBA being reassigned */
TI_DBG5(("satReassignBlocks_2: case 2\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_DMA; /* 0xCA */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_DMA;
}
else
{
/* case 1 */
/* WRITE MULTIPLE or WRITE SECTOR(S) */
/* WRITE SECTORS for easier implementation */
/* single-sector write of the LBA being reassigned */
TI_DBG5(("satReassignBlocks_2: case 1\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C bit is set */
fis->h.command = SAT_WRITE_SECTORS; /* 0x30 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
/* FIS LBA mode set LBA (27:24) */
fis->d.device = (bit8)((0x4 << 4) | (LBA[4] & 0xF));
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS;
}
/* case 3 and 4 */
if (pSatDevData->sat48BitSupport == agTRUE)
{
if (pSatDevData->satDMASupport == agTRUE && pSatDevData->satDMAEnabled == agTRUE)
{
/* case 3 */
/* WRITE DMA EXT or WRITE DMA FUA EXT */
TI_DBG5(("satReassignBlocks_2: case 3\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
/* SAT_WRITE_DMA_FUA_EXT is optional and we don't support it */
fis->h.command = SAT_WRITE_DMA_EXT; /* 0x35 */
satIOContext->ATACmd = SAT_WRITE_DMA_EXT;
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_DMA_WRITE;
}
else
{
/* case 4 */
/* WRITE MULTIPLE EXT or WRITE MULTIPLE FUA EXT or WRITE SECTOR(S) EXT */
/* WRITE SECTORS EXT for easier implementation */
TI_DBG5(("satReassignBlocks_2: case 4\n"));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_SECTORS_EXT; /* 0x34 */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
fis->d.device = 0x40; /* FIS LBA mode set */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 1; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0; /* FIS sector count (15:8) */
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_WRITE;
satIOContext->ATACmd = SAT_WRITE_SECTORS_EXT;
}
}
/* case 5 */
if (pSatDevData->satNCQ == agTRUE)
{
/* WRITE FPDMA QUEUED */
if (pSatDevData->sat48BitSupport != agTRUE)
{
TI_DBG5(("satReassignBlocks_2: case 5 !!! error NCQ but 28 bit address support \n"));
satSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_WRITE_ERROR_AUTO_REALLOCATION_FAILED,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
satIOContext->interruptContext );
return tiSuccess;
}
TI_DBG6(("satWrite10: case 5\n"));
/* Support 48-bit FPDMA addressing, use WRITE FPDMA QUEUE command */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_WRITE_FPDMA_QUEUED; /* 0x61 */
fis->h.features = 1; /* FIS sector count (7:0) */
fis->d.lbaLow = LBA[7]; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = LBA[6]; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = LBA[5]; /* FIS LBA (23:16) */
/* Check FUA bit */
fis->d.device = 0x40; /* FIS FUA clear */
fis->d.lbaLowExp = LBA[4]; /* FIS LBA (31:24) */
fis->d.lbaMidExp = LBA[3]; /* FIS LBA (39:32) */
fis->d.lbaHighExp = LBA[2]; /* FIS LBA (47:40) */
fis->d.featuresExp = 0; /* FIS sector count (15:8) */
fis->d.sectorCount = 0; /* Tag (7:3) set by LL layer */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_FPDMA_WRITE;
satIOContext->ATACmd = SAT_WRITE_FPDMA_QUEUED;
}
satIOContext->satCompleteCB = &satReassignBlocksCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
/* not the original, should be the TD generated one */
tiScsiRequest,
satIOContext);
return (status);
}
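/*
 * Illustrative sketch, not part of the original driver: in the NCQ path
 * above the FIS register layout differs from non-queued writes — the sector
 * count travels in features/featuresExp, while sectorCount bits 7:3 carry
 * the queue tag filled in by the LL layer. Hypothetical helper name:
 */
#if 0
static void exampleFillFpdmaCount(agsaFisRegHostToDevice_t *fis, bit32 count)
{
  fis->h.features = (bit8)(count & 0xFF);           /* sector count (7:0) */
  fis->d.featuresExp = (bit8)((count >> 8) & 0xFF); /* sector count (15:8) */
  fis->d.sectorCount = 0;                           /* tag (7:3), set by LL */
}
#endif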
/*****************************************************************************/
/*! \brief SAT implementation for SCSI satPrepareNewIO.
*
 * This function fills in the fields of an internal IO generated by the TD layer.
* This is mostly used in the callback functions.
*
* \param satNewIntIo: Pointer to the internal IO structure.
* \param tiOrgIORequest: Pointer to the original tiIOrequest sent by OS layer
* \param satDevData: Pointer to the device data.
* \param scsiCmnd: Pointer to SCSI command.
* \param satOrgIOContext: Pointer to the original SAT IO Context
*
* \return
* - \e Pointer to the new SAT IO Context
*/
/*****************************************************************************/
GLOBAL satIOContext_t *satPrepareNewIO(
satInternalIo_t *satNewIntIo,
tiIORequest_t *tiOrgIORequest,
satDeviceData_t *satDevData,
tiIniScsiCmnd_t *scsiCmnd,
satIOContext_t *satOrgIOContext
)
{
satIOContext_t *satNewIOContext;
tdIORequestBody_t *tdNewIORequestBody;
TI_DBG2(("satPrepareNewIO: start\n"));
/* the one to be used; good 8/2/07 */
satNewIntIo->satOrgTiIORequest = tiOrgIORequest; /* this is already done in
satAllocIntIoResource() */
tdNewIORequestBody = (tdIORequestBody_t *)satNewIntIo->satIntRequestBody;
satNewIOContext = &(tdNewIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(tdNewIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satNewIntIo->satIntTiScsiXchg.scsiCmnd);
if (scsiCmnd != agNULL)
{
/* copies only the CDB, not the parsed LBA and number-of-blocks fields of the SCSI command */
osti_memcpy(satNewIOContext->pScsiCmnd->cdb, scsiCmnd->cdb, 16);
}
satNewIOContext->pSense = &(tdNewIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pTiSenseData = &(tdNewIORequestBody->transport.SATA.tiSenseData);
satNewIOContext->pTiSenseData->senseData = satNewIOContext->pSense;
satNewIOContext->tiRequestBody = satNewIntIo->satIntRequestBody;
satNewIOContext->interruptContext = satOrgIOContext->interruptContext; /* inherit interrupt context from the original IO */
satNewIOContext->satIntIoContext = satNewIntIo;
satNewIOContext->ptiDeviceHandle = satOrgIOContext->ptiDeviceHandle;
satNewIOContext->satOrgIOContext = satOrgIOContext;
/* saves tiScsiXchg; only for writesame10() */
satNewIOContext->tiScsiXchg = satOrgIOContext->tiScsiXchg;
return satNewIOContext;
}
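/*
 * Illustrative sketch, not part of the original driver: a callback issuing
 * a follow-up internal command would typically obtain the internal IO (the
 * satAllocIntIoResource() pattern referenced above is assumed), let
 * satPrepareNewIO() wire the new context to the original one, then build
 * the FIS and start the IO. Hypothetical fragment:
 */
#if 0
satNewIOContext = satPrepareNewIO(satNewIntIo,
                                  tiOrgIORequest,
                                  satDevData,
                                  scsiCmnd, /* may be agNULL */
                                  satOrgIOContext);
satNewIOContext->satCompleteCB = &satChainedWriteNVerifyCB;
/* build the next FIS in satNewIOContext->pFis, then call sataLLIOStart() */
#endif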
/*****************************************************************************
*! \brief satIOAbort
*
 * This routine is called to initiate an I/O abort to SATL.
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param taskTag: Pointer to TISA I/O request context/tag to be aborted.
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
 * \e tiError: Other errors that prevent the I/O request from being started.
*
*
*****************************************************************************/
GLOBAL bit32 satIOAbort(
tiRoot_t *tiRoot,
tiIORequest_t *taskTag )
{
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
agsaRoot_t *agRoot;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *tdIONewRequestBody;
agsaIORequest_t *agIORequest;
bit32 status;
agsaIORequest_t *agAbortIORequest;
tdIORequestBody_t *tdAbortIORequestBody;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
void *osMemHandle;
satIOContext_t *satIOContext;
satInternalIo_t *satIntIo;
TI_DBG2(("satIOAbort: start\n"));
agRoot = &(tdsaAllShared->agRootNonInt);
tdIORequestBody = (tdIORequestBody_t *)taskTag->tdData;
/* distinguish between internally (TD) and externally (OS) generated requests */
satIOContext = &(tdIORequestBody->transport.SATA.satIOContext);
satIntIo = satIOContext->satIntIoContext;
if (satIntIo == agNULL)
{
TI_DBG1(("satIOAbort: External, OS generated\n"));
agIORequest = &(tdIORequestBody->agIORequest);
}
else
{
TI_DBG1(("satIOAbort: Internal, TD generated\n"));
tdIONewRequestBody = (tdIORequestBody_t *)satIntIo->satIntRequestBody;
agIORequest = &(tdIONewRequestBody->agIORequest);
}
/* allocating agIORequest for abort itself */
memAllocStatus = ostiAllocMemory(
tiRoot,
&osMemHandle,
(void **)&tdAbortIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(tdIORequestBody_t),
agTRUE
);
if (memAllocStatus != tiSuccess)
{
/* let os process IO */
TI_DBG1(("satIOAbort: ostiAllocMemory failed...\n"));
return tiError;
}
if (tdAbortIORequestBody == agNULL)
{
/* let os process IO */
TI_DBG1(("satIOAbort: ostiAllocMemory returned NULL tdAbortIORequestBody\n"));
return tiError;
}
/* setup task management structure */
tdAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
tdAbortIORequestBody->tiDevHandle = tdIORequestBody->tiDevHandle;
/* initialize agIORequest */
agAbortIORequest = &(tdAbortIORequestBody->agIORequest);
agAbortIORequest->osData = (void *) tdAbortIORequestBody;
agAbortIORequest->sdkData = agNULL; /* LL takes care of this */
/* remember IO to be aborted */
tdAbortIORequestBody->tiIOToBeAbortedRequest = taskTag;
status = saSATAAbort( agRoot, agAbortIORequest, 0, agNULL, 0, agIORequest, agNULL );
TI_DBG5(("satIOAbort: return status=0x%x\n", status));
if (status == AGSA_RC_SUCCESS)
return tiSuccess;
else
return tiError;
}
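/*
 * Illustrative sketch, not part of the original driver: the osData field set
 * above is the usual round-trip pointer — the LL layer hands the same
 * agsaIORequest_t back when the abort completes, and the completion side
 * recovers the request body by casting osData before freeing it.
 * Hypothetical completion-side fragment:
 */
#if 0
tdIORequestBody_t *abortBody = (tdIORequestBody_t *)agAbortIORequest->osData;
ostiFreeMemory(tiRoot,
               abortBody->IOType.InitiatorTMIO.osMemHandle,
               sizeof(tdIORequestBody_t));
#endif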
/*****************************************************************************
*! \brief satTM
*
* This routine is called to initiate a TM request to SATL.
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param task: SAM-3 task management request.
* \param lun: Pointer to LUN.
* \param taskTag: Pointer to the associated task where the TM
* command is to be applied.
* \param currentTaskTag: Pointer to tag/context for this TM request.
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
* \e tiIONoDevice: Invalid device handle.
 * \e tiError: Other errors that prevent the I/O request from being started.
*
*
*****************************************************************************/
/* save task in satIOContext */
osGLOBAL bit32 satTM(
tiRoot_t *tiRoot,
tiDeviceHandle_t *tiDeviceHandle,
bit32 task,
tiLUN_t *lun,
tiIORequest_t *taskTag,
tiIORequest_t *currentTaskTag,
tdIORequestBody_t *tiRequestBody,
bit32 NotifyOS
)
{
tdIORequestBody_t *tdIORequestBody = agNULL;
satIOContext_t *satIOContext = agNULL;
tdsaDeviceData_t *oneDeviceData = agNULL;
bit32 status;
TI_DBG3(("satTM: tiDeviceHandle=%p task=0x%x\n", tiDeviceHandle, task ));
/* set up satIOContext fields, etc. */
oneDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
tdIORequestBody = (tdIORequestBody_t *)tiRequestBody;
satIOContext = &(tdIORequestBody->transport.SATA.satIOContext);
satIOContext->pSatDevData = &oneDeviceData->satDevData;
satIOContext->pFis =
&tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev;
satIOContext->tiRequestBody = tiRequestBody;
satIOContext->ptiDeviceHandle = tiDeviceHandle;
satIOContext->satIntIoContext = agNULL;
satIOContext->satOrgIOContext = agNULL;
/* the following are used only for internal IO */
satIOContext->currentLBA = 0;
satIOContext->OrgTL = 0;
/* saving task in satIOContext */
satIOContext->TMF = task;
satIOContext->satToBeAbortedIOContext = agNULL;
if (NotifyOS == agTRUE)
{
satIOContext->NotifyOS = agTRUE;
}
else
{
satIOContext->NotifyOS = agFALSE;
}
/*
 * Our SAT supports RESET LUN and partially supports ABORT TASK (only if there
 * is no more than one I/O pending on the drive).
*/
if (task == AG_LOGICAL_UNIT_RESET)
{
status = satTmResetLUN( tiRoot,
currentTaskTag,
tiDeviceHandle,
agNULL,
satIOContext,
lun);
return status;
}
#ifdef TO_BE_REMOVED
else if (task == AG_TARGET_WARM_RESET)
{
status = satTmWarmReset( tiRoot,
currentTaskTag,
tiDeviceHandle,
agNULL,
satIOContext);
return status;
}
#endif
else if (task == AG_ABORT_TASK)
{
status = satTmAbortTask( tiRoot,
currentTaskTag,
tiDeviceHandle,
agNULL,
satIOContext,
taskTag);
return status;
}
else if (task == TD_INTERNAL_TM_RESET)
{
status = satTDInternalTmReset( tiRoot,
currentTaskTag,
tiDeviceHandle,
agNULL,
satIOContext);
return status;
}
else
{
TI_DBG1(("satTM: tiDeviceHandle=%p UNSUPPORTED TM task=0x%x\n",
tiDeviceHandle, task ));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tiRequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return tiError;
}
}
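/*
* Illustrative sketch only, not part of the driver: a TD-layer caller is
* expected to supply a tdIORequestBody_t whose
* IOType.InitiatorTMIO.osMemHandle came from ostiAllocMemory(), e.g.
*
*   status = satTM(tiRoot, tiDeviceHandle, AG_LOGICAL_UNIT_RESET,
*                  &lun, agNULL, currentTaskTag, tdIORequestBody, agTRUE);
*
* (hypothetical variable names), since the unsupported-task path above
* frees that handle via ostiFreeMemory().
*/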
/*****************************************************************************
*! \brief satTmResetLUN
*
* This routine is called to initiate a TM RESET LUN request to SATL.
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param lun: Pointer to LUN.
* \param currentTaskTag: Pointer to tag/context for this TM request.
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
* \e tiIONoDevice: Invalid device handle.
* \e tiError: Other errors that prevent the I/O request from being started.
*
*
*****************************************************************************/
osGLOBAL bit32 satTmResetLUN(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* current task tag */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext,
tiLUN_t *lun)
{
tdsaDeviceData_t *tdsaDeviceData;
satDeviceData_t *satDevData;
tdsaDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
satDevData = &tdsaDeviceData->satDevData;
TI_DBG1(("satTmResetLUN: tiDeviceHandle=%p.\n", tiDeviceHandle ));
/*
* Only support LUN 0
*/
if ( (lun->lun[0] | lun->lun[1] | lun->lun[2] | lun->lun[3] |
lun->lun[4] | lun->lun[5] | lun->lun[6] | lun->lun[7] ) != 0 )
{
TI_DBG1(("satTmResetLUN: *** REJECT *** LUN not zero, tiDeviceHandle=%p\n",
tiDeviceHandle));
return tiError;
}
/*
* Check if there is other TM request pending
*/
if (satDevData->satTmTaskTag != agNULL)
{
TI_DBG1(("satTmResetLUN: *** REJECT *** other TM pending, tiDeviceHandle=%p\n",
tiDeviceHandle));
return tiError;
}
/*
* Save tiIORequest; it will be returned at device reset completion to report
* the TM completion.
*/
satDevData->satTmTaskTag = tiIORequest;
/*
* Set flag to indicate device in recovery mode.
*/
satDevData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
/*
* Issue SATA device reset. Set flag to indicate NOT to automatically abort
* at the completion of SATA device reset.
*/
satDevData->satAbortAfterReset = agFALSE;
/* SAT rev8 6.3.6 p22 */
satStartResetDevice(
tiRoot,
tiIORequest, /* currentTaskTag */
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
return tiSuccess;
}
/*****************************************************************************
*! \brief satTmWarmReset
*
* This routine is called to initiate a TM warm RESET request to SATL.
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param currentTaskTag: Pointer to tag/context for this TM request.
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
* \e tiIONoDevice: Invalid device handle.
* \e tiError: Other errors that prevent the I/O request from being started.
*
*
*****************************************************************************/
osGLOBAL bit32 satTmWarmReset(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* current task tag */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
tdsaDeviceData_t *tdsaDeviceData;
satDeviceData_t *satDevData;
tdsaDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
satDevData = &tdsaDeviceData->satDevData;
TI_DBG1(("satTmWarmReset: tiDeviceHandle=%p.\n", tiDeviceHandle ));
/*
* Check if there is other TM request pending
*/
if (satDevData->satTmTaskTag != agNULL)
{
TI_DBG1(("satTmWarmReset: *** REJECT *** other TM pending, tiDeviceHandle=%p\n",
tiDeviceHandle));
return tiError;
}
/*
* Save tiIORequest; it will be returned at device reset completion to report
* the TM completion.
*/
satDevData->satTmTaskTag = tiIORequest;
/*
* Set flag to indicate device in recovery mode.
*/
satDevData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
/*
* Issue SATA device reset. Set flag to indicate NOT to automatically abort
* at the completion of SATA device reset.
*/
satDevData->satAbortAfterReset = agFALSE;
/* SAT rev8 6.3.6 p22 */
satStartResetDevice(
tiRoot,
tiIORequest, /* currentTaskTag */
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
return tiSuccess;
}
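/*****************************************************************************
*! \brief satTDInternalTmReset
*
* This routine initiates a TD-internal TM device reset to SATL. It follows
* the same sequence as satTmWarmReset() but is issued by the TD layer itself
* (task TD_INTERNAL_TM_RESET) rather than by an OS-initiated TM request.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to tag/context for this TM request.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI request (agNULL here).
* \param satIOContext: Pointer to the SAT IO Context.
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiError: Another TM request is pending on this device.
*
*****************************************************************************/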
osGLOBAL bit32 satTDInternalTmReset(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* current task tag */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
tdsaDeviceData_t *tdsaDeviceData;
satDeviceData_t *satDevData;
tdsaDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
satDevData = &tdsaDeviceData->satDevData;
TI_DBG1(("satTmWarmReset: tiDeviceHandle=%p.\n", tiDeviceHandle ));
/*
* Check if there is other TM request pending
*/
if (satDevData->satTmTaskTag != agNULL)
{
TI_DBG1(("satTmWarmReset: *** REJECT *** other TM pending, tiDeviceHandle=%p\n",
tiDeviceHandle));
return tiError;
}
/*
* Save tiIORequest; it will be returned at device reset completion to report
* the TM completion.
*/
satDevData->satTmTaskTag = tiIORequest;
/*
* Set flag to indicate device in recovery mode.
*/
satDevData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
/*
* Issue SATA device reset. Set flag to indicate NOT to automatically abort
* at the completion of SATA device reset.
*/
satDevData->satAbortAfterReset = agFALSE;
/* SAT rev8 6.3.6 p22 */
satStartResetDevice(
tiRoot,
tiIORequest, /* currentTaskTag */
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
return tiSuccess;
}
/*****************************************************************************
*! \brief satTmAbortTask
*
* This routine is called to initiate a TM ABORT TASK request to SATL.
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param taskTag: Pointer to the associated task where the TM
* command is to be applied.
* \param currentTaskTag: Pointer to tag/context for this TM request.
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
* \e tiIONoDevice: Invalid device handle.
* \e tiError: Other errors that prevent the I/O request from being started.
*
*
*****************************************************************************/
osGLOBAL bit32 satTmAbortTask(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* current task tag */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest, /* NULL */
satIOContext_t *satIOContext,
tiIORequest_t *taskTag)
{
tdsaDeviceData_t *tdsaDeviceData;
satDeviceData_t *satDevData;
satIOContext_t *satTempIOContext = agNULL;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *TMtdIORequestBody;
tdList_t *elementHdr;
bit32 found = agFALSE;
tiIORequest_t *tiIOReq;
tdsaDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
satDevData = &tdsaDeviceData->satDevData;
TMtdIORequestBody = (tdIORequestBody_t *)tiIORequest->tdData;
TI_DBG1(("satTmAbortTask: tiDeviceHandle=%p taskTag=%p.\n", tiDeviceHandle, taskTag ));
/*
* Check if there is other TM request pending
*/
if (satDevData->satTmTaskTag != agNULL)
{
TI_DBG1(("satTmAbortTask: REJECT other TM pending, tiDeviceHandle=%p\n",
tiDeviceHandle));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
TMtdIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return tiError;
}
#ifdef REMOVED
/*
* Check if there is only one I/O pending.
*/
if (satDevData->satPendingIO > 0)
{
TI_DBG1(("satTmAbortTask: REJECT num pending I/O, tiDeviceHandle=%p, satPendingIO=0x%x\n",
tiDeviceHandle, satDevData->satPendingIO));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
TMtdIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return tiError;
}
#endif
/*
* Find the pending I/O that matches taskTag; if none matches, return tiError.
*/
elementHdr = satDevData->satIoLinkList.flink;
while (elementHdr != &satDevData->satIoLinkList)
{
satTempIOContext = TDLIST_OBJECT_BASE( satIOContext_t,
satIoContextLink,
elementHdr );
tdIORequestBody = (tdIORequestBody_t *) satTempIOContext->tiRequestBody;
tiIOReq = tdIORequestBody->tiIORequest;
elementHdr = elementHdr->flink; /* for the next while loop */
/*
* Check if the tag matches
*/
if ( tiIOReq == taskTag)
{
found = agTRUE;
satIOContext->satToBeAbortedIOContext = satTempIOContext;
TI_DBG1(("satTmAbortTask: found matching tag.\n"));
break;
} /* if matching tag */
} /* while loop */
if (found == agFALSE )
{
TI_DBG1(("satTmAbortTask: *** REJECT *** no match, tiDeviceHandle=%p\n",
tiDeviceHandle ));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
TMtdIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return tiError;
}
/*
* Save tiIORequest; it will be returned at device reset completion to report
* the TM completion.
*/
satDevData->satTmTaskTag = tiIORequest;
/*
* Set flag to indicate device in recovery mode.
*/
satDevData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
/*
* Issue SATA device reset or check power mode. Set flag to automatically abort
* at the completion of SATA device reset.
* SAT r09 p25
*/
satDevData->satAbortAfterReset = agTRUE;
if ( (satTempIOContext->reqType == AGSA_SATA_PROTOCOL_FPDMA_WRITE) ||
(satTempIOContext->reqType == AGSA_SATA_PROTOCOL_FPDMA_READ)
)
{
TI_DBG1(("satTmAbortTask: calling satStartCheckPowerMode\n"));
/* send check power mode */
satStartCheckPowerMode(
tiRoot,
tiIORequest, /* currentTaskTag */
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
}
else
{
TI_DBG1(("satTmAbortTask: calling satStartResetDevice\n"));
/* send AGSA_SATA_PROTOCOL_SRST_ASSERT */
satStartResetDevice(
tiRoot,
tiIORequest, /* currentTaskTag */
tiDeviceHandle,
tiScsiRequest,
satIOContext
);
}
return tiSuccess;
}
/*****************************************************************************
*! \brief osSatResetCB
*
* This routine is called to notify the completion of SATA device reset
* which was initiated previously through the call to sataLLReset().
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param resetStatus: Reset status either tiSuccess or tiError.
* \param respFis: Pointer to the Register Device-To-Host FIS
* received from the device.
*
* \return: None
*
*****************************************************************************/
osGLOBAL void osSatResetCB(
tiRoot_t *tiRoot,
tiDeviceHandle_t *tiDeviceHandle,
bit32 resetStatus,
void *respFis)
{
agsaRoot_t *agRoot;
tdsaDeviceData_t *tdsaDeviceData;
satDeviceData_t *satDevData;
satIOContext_t *satIOContext;
tdIORequestBody_t *tdIORequestBodyTmp;
tdList_t *elementHdr;
agsaIORequest_t *agAbortIORequest;
tdIORequestBody_t *tdAbortIORequestBody;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
void *osMemHandle;
tdsaDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
agRoot = tdsaDeviceData->agRoot;
satDevData = &tdsaDeviceData->satDevData;
TI_DBG5(("osSatResetCB: tiDeviceHandle=%p resetStatus=0x%x\n",
tiDeviceHandle, resetStatus ));
/* We may need to examine the FIS to determine the device operating condition */
/*
* Check if need to abort all pending I/Os
*/
if ( satDevData->satAbortAfterReset == agTRUE )
{
/*
* Issue abort to LL layer to all other pending I/Os for the same SATA drive
*/
elementHdr = satDevData->satIoLinkList.flink;
while (elementHdr != &satDevData->satIoLinkList)
{
satIOContext = TDLIST_OBJECT_BASE( satIOContext_t,
satIoContextLink,
elementHdr );
tdIORequestBodyTmp = (tdIORequestBody_t *)satIOContext->tiRequestBody;
/*
* Issue abort
*/
TI_DBG5(("osSatResetCB: issuing ABORT tiDeviceHandle=%p agIORequest=%p\n",
tiDeviceHandle, &tdIORequestBodyTmp->agIORequest ));
/* allocating agIORequest for abort itself */
memAllocStatus = ostiAllocMemory(
tiRoot,
&osMemHandle,
(void **)&tdAbortIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(tdIORequestBody_t),
agTRUE
);
if (memAllocStatus != tiSuccess)
{
/* let os process IO */
TI_DBG1(("osSatResetCB: ostiAllocMemory failed...\n"));
return;
}
if (tdAbortIORequestBody == agNULL)
{
/* let os process IO */
TI_DBG1(("osSatResetCB: ostiAllocMemory returned NULL tdAbortIORequestBody\n"));
return;
}
/* setup task management structure */
tdAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
tdAbortIORequestBody->tiDevHandle = tiDeviceHandle;
/* initialize agIORequest */
agAbortIORequest = &(tdAbortIORequestBody->agIORequest);
agAbortIORequest->osData = (void *) tdAbortIORequestBody;
agAbortIORequest->sdkData = agNULL; /* LL takes care of this */
saSATAAbort( agRoot, agAbortIORequest, 0, agNULL, 0, &(tdIORequestBodyTmp->agIORequest), agNULL );
elementHdr = elementHdr->flink; /* for the next while loop */
} /* while */
/* Reset flag */
satDevData->satAbortAfterReset = agFALSE;
}
/*
* Check if the device reset is the result of a TM request.
*/
if ( satDevData->satTmTaskTag != agNULL )
{
TI_DBG5(("osSatResetCB: calling TM completion tiDeviceHandle=%p satTmTaskTag=%p\n",
tiDeviceHandle, satDevData->satTmTaskTag ));
ostiInitiatorEvent( tiRoot,
agNULL, /* portalContext not used */
tiDeviceHandle,
tiIntrEventTypeTaskManagement,
tiTMOK,
satDevData->satTmTaskTag);
/*
* Reset flag
*/
satDevData->satTmTaskTag = agNULL;
}
}
/*****************************************************************************
*! \brief osSatIOCompleted
*
* This routine is a callback for SATA completion that required FIS status
* translation to SCSI status.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param agFirstDword: Pointer to the first DWORD (header) of the response FIS.
* \param respFisLen: Length of the response FIS to read.
* \param agFrameHandle: Handle to the received frame.
* \param satIOContext: Pointer to SAT context.
* \param interruptContext: Interrupt context
*
* \return: None
*
*****************************************************************************/
osGLOBAL void osSatIOCompleted(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
agsaFisHeader_t *agFirstDword,
bit32 respFisLen,
agsaFrameHandle_t agFrameHandle,
satIOContext_t *satIOContext,
bit32 interruptContext)
{
satDeviceData_t *pSatDevData;
scsiRspSense_t *pSense;
#ifdef TD_DEBUG_ENABLE
tiIniScsiCmnd_t *pScsiCmnd;
#endif
agsaFisRegHostToDevice_t *hostToDevFis = agNULL;
bit32 ataStatus = 0;
bit32 ataError;
satInternalIo_t *satIntIo = agNULL;
bit32 status;
tiDeviceHandle_t *tiDeviceHandle;
satIOContext_t *satIOContext2;
tdIORequestBody_t *tdIORequestBody;
agsaFisRegD2HHeader_t *statDevToHostFisHeader = agNULL;
agsaFisSetDevBitsHeader_t *statSetDevBitFisHeader = agNULL;
tiIORequest_t tiIORequestTMP;
pSense = satIOContext->pSense;
pSatDevData = satIOContext->pSatDevData;
#ifdef TD_DEBUG_ENABLE
pScsiCmnd = satIOContext->pScsiCmnd;
#endif
hostToDevFis = satIOContext->pFis;
tiDeviceHandle = &((tdsaDeviceData_t *)(pSatDevData->satSaDeviceData))->tiDeviceHandle;
/*
* Find out the type of response FIS:
* Set Device Bit FIS or Reg Device To Host FIS.
*/
/* First assume it is Reg Device to Host FIS */
statDevToHostFisHeader = (agsaFisRegD2HHeader_t *)&(agFirstDword->D2H);
ataStatus = statDevToHostFisHeader->status; /* ATA Status register */
ataError = statDevToHostFisHeader->error; /* ATA Error register */
/* for debugging */
TI_DBG1(("osSatIOCompleted: H to D command 0x%x\n", hostToDevFis->h.command));
TI_DBG1(("osSatIOCompleted: D to H fistype 0x%x\n", statDevToHostFisHeader->fisType));
if (statDevToHostFisHeader->fisType == SET_DEV_BITS_FIS)
{
/* It is Set Device Bits FIS */
statSetDevBitFisHeader = (agsaFisSetDevBitsHeader_t *)&(agFirstDword->D2H);
/* Get ATA Status register */
ataStatus = (statSetDevBitFisHeader->statusHi_Lo & 0x70); /* bits 4,5,6 */
ataStatus = ataStatus | (statSetDevBitFisHeader->statusHi_Lo & 0x07); /* bits 0,1,2 */
/* ATA Error register */
ataError = statSetDevBitFisHeader->error;
statDevToHostFisHeader = agNULL;
}
else if (statDevToHostFisHeader->fisType != REG_DEV_TO_HOST_FIS)
{
TI_DBG1(("osSatIOCompleted: *** UNEXPECTED RESP FIS TYPE 0x%x *** tiIORequest=%p\n",
statDevToHostFisHeader->fisType, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
interruptContext );
return;
}
if ( ataStatus & DF_ATA_STATUS_MASK )
{
pSatDevData->satDeviceFaultState = agTRUE;
}
else
{
pSatDevData->satDeviceFaultState = agFALSE;
}
TI_DBG5(("osSatIOCompleted: tiIORequest=%p CDB=0x%x ATA CMD =0x%x\n",
tiIORequest, pScsiCmnd->cdb[0], hostToDevFis->h.command));
/*
* Decide which ATA command needs translation
*/
switch(hostToDevFis->h.command)
{
case SAT_READ_FPDMA_QUEUED:
case SAT_WRITE_FPDMA_QUEUED:
/************************************************************************
*
* !!!! See Section 13.5.2.4 of SATA 2.5 specs. !!!!
* !!!! If the NCQ error ends up here, it means that the device sent !!!!
* !!!! Set Device Bit FIS (which has SActive register) instead of !!!!
* !!!! Register Device To Host FIS (which does not have SActive !!!!
* !!!! register). The callback ossaSATAEvent() deals with the case !!!!
* !!!! where Register Device To Host FIS was sent by the device. !!!!
*
* For NCQ we need to issue READ LOG EXT command with log page 10h
* to get the error and to allow other I/Os to continue.
*
* Here is the basic flow or sequence of error recovery; note that due
* to the SATA HW assist that we have, this sequence is slightly different
* from the one described in SATA 2.5:
*
* 1. Set the SATA device flag to indicate the error condition and return
* busy for all new requests.
* return tiSuccess;
* 2. Because the HW/LL layer received the Set Device Bits FIS, it can get the
* tag or I/O context for the NCQ request; SATL translates the ATA error
* to SCSI status and returns the original NCQ I/O with the appropriate
* SCSI status.
*
* 3. Prepare READ LOG EXT page 10h command. Set flag to indicate that
* the failed I/O has been returned to the OS Layer. Send command.
*
* 4. When the device receives the READ LOG EXT page 10h request, all other
* pending I/Os are implicitly aborted. No completion (aborted) status
* will be sent to the host for these aborted commands.
*
* 5. SATL receives the completion for READ LOG EXT command in
* satReadLogExtCB(). Steps 6,7,8,9 below are steps 1,2,3,4 in
* satReadLogExtCB().
*
* 6. Check the flag that indicates whether the failed I/O has been returned
* to the OS Layer. If not, search the I/O contexts in the device data
* looking for a matching tag, then return the completion of the failed
* NCQ command with the appropriate/translated SCSI status.
*
* 7. Issue abort to LL layer to all other pending I/Os for the same SATA
* drive.
*
* 8. Free resource allocated for the internally generated READ LOG EXT.
*
* 9. At the completion of abort, in the context of ossaSATACompleted(),
* return the I/O with error status to the OS-App Specific layer.
* When all I/O aborts are completed, clear SATA device flag to
* indicate that it is ready to process new requests.
*
***********************************************************************/
TI_DBG1(("osSatIOCompleted: NCQ ERROR tiIORequest=%p ataStatus=0x%x ataError=0x%x\n",
tiIORequest, ataStatus, ataError ));
/* Set flag to indicate we are in recovery */
pSatDevData->satDriveState = SAT_DEV_STATE_IN_RECOVERY;
/* Return the failed NCQ I/O to the OS-App Specific layer */
osSatDefaultTranslation( tiRoot,
tiIORequest,
satIOContext,
pSense,
(bit8)ataStatus,
(bit8)ataError,
interruptContext );
/*
* Allocate resource for READ LOG EXT page 10h
*/
satIntIo = satAllocIntIoResource( tiRoot,
&(tiIORequestTMP), /* anything but NULL */
pSatDevData,
sizeof (satReadLogExtPage10h_t),
satIntIo);
if (satIntIo == agNULL)
{
TI_DBG1(("osSatIOCompleted: can't send RLE due to resource lack\n"));
/* Abort I/O after completion of device reset */
pSatDevData->satAbortAfterReset = agTRUE;
#ifdef NOT_YET
/* needs further investigation */
/* no report to OS layer */
satSubTM(tiRoot,
tiDeviceHandle,
TD_INTERNAL_TM_RESET,
agNULL,
agNULL,
agNULL,
agFALSE);
#endif
TI_DBG1(("osSatIOCompleted: calling saSATADeviceReset 1\n"));
return;
}
/*
* Set flag to indicate that the failed I/O has been returned to the
* OS-App specific Layer.
*/
satIntIo->satIntFlag = AG_SAT_INT_IO_FLAG_ORG_IO_COMPLETED;
/* compare to satPrepareNewIO() */
/* Send READ LOG EXT page 10h command */
/*
* Need to initialize all the fields within satIOContext except
* reqType and satCompleteCB which will be set depending on cmd.
*/
tdIORequestBody = (tdIORequestBody_t *)satIntIo->satIntRequestBody;
satIOContext2 = &(tdIORequestBody->transport.SATA.satIOContext);
satIOContext2->pSatDevData = pSatDevData;
satIOContext2->pFis = &(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext2->pScsiCmnd = &(satIntIo->satIntTiScsiXchg.scsiCmnd);
satIOContext2->pSense = &(tdIORequestBody->transport.SATA.sensePayload);
satIOContext2->pTiSenseData = &(tdIORequestBody->transport.SATA.tiSenseData);
satIOContext2->pTiSenseData->senseData = satIOContext2->pSense;
satIOContext2->tiRequestBody = satIntIo->satIntRequestBody;
satIOContext2->interruptContext = interruptContext;
satIOContext2->satIntIoContext = satIntIo;
satIOContext2->ptiDeviceHandle = tiDeviceHandle;
satIOContext2->satOrgIOContext = agNULL;
satIOContext2->tiScsiXchg = agNULL;
status = satSendReadLogExt( tiRoot,
&satIntIo->satIntTiIORequest,
tiDeviceHandle,
&satIntIo->satIntTiScsiXchg,
satIOContext2);
if (status != tiSuccess)
{
TI_DBG1(("osSatIOCompleted: can't send RLE due to LL api failure\n"));
satFreeIntIoResource( tiRoot,
pSatDevData,
satIntIo);
/* Abort I/O after completion of device reset */
pSatDevData->satAbortAfterReset = agTRUE;
#ifdef NOT_YET
/* needs further investigation */
/* no report to OS layer */
satSubTM(tiRoot,
tiDeviceHandle,
TD_INTERNAL_TM_RESET,
agNULL,
agNULL,
agNULL,
agFALSE);
#endif
TI_DBG1(("osSatIOCompleted: calling saSATADeviceReset 2\n"));
return;
}
break;
case SAT_READ_DMA_EXT:
/* fall through */
/* Use default status/error translation */
case SAT_READ_DMA:
/* fall through */
/* Use default status/error translation */
default:
osSatDefaultTranslation( tiRoot,
tiIORequest,
satIOContext,
pSense,
(bit8)ataStatus,
(bit8)ataError,
interruptContext );
break;
} /* end switch */
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI STANDARD INQUIRY.
*
* SAT implementation for SCSI STANDARD INQUIRY.
*
* \param pInquiry: Pointer to Inquiry Data buffer.
* \param pSATAIdData: Pointer to ATA IDENTIFY DEVICE data.
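* \param scsiCmnd: Pointer to the SCSI INQUIRY command.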
*
* \return None.
*/
/*****************************************************************************/
GLOBAL void satInquiryStandard(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData,
tiIniScsiCmnd_t *scsiCmnd
)
{
tiLUN_t *pLun;
pLun = &scsiCmnd->lun;
/*
Assumption: Basic Task Management is supported
-> BQUE 1 and CMDQUE 0, SPC-4, Table96, p147
*/
/*
See SPC-4, 6.4.2, p 143
and SAT revision 8, 8.1.2, p 28
*/
TI_DBG5(("satInquiryStandard: start\n"));
if (pInquiry == agNULL)
{
TI_DBG1(("satInquiryStandard: pInquiry is NULL, wrong\n"));
return;
}
else
{
TI_DBG5(("satInquiryStandard: pInquiry is NOT NULL\n"));
}
/*
* Reject all other LUN other than LUN 0.
*/
if ( ((pLun->lun[0] | pLun->lun[1] | pLun->lun[2] | pLun->lun[3] |
pLun->lun[4] | pLun->lun[5] | pLun->lun[6] | pLun->lun[7] ) != 0) )
{
/* SAT Spec Table 8, p27, footnote 'a' */
pInquiry[0] = 0x7F;
}
else
{
pInquiry[0] = 0x00;
}
if (pSATAIdData->rm_ataDevice & ATA_REMOVABLE_MEDIA_DEVICE_MASK )
{
pInquiry[1] = 0x80;
}
else
{
pInquiry[1] = 0x00;
}
pInquiry[2] = 0x05; /* SPC-3 */
pInquiry[3] = 0x12; /* set HiSup 1; resp data format set to 2 */
pInquiry[4] = 0x1F; /* 35 - 4 = 31; Additional length */
pInquiry[5] = 0x00;
/* The following two are for task management. SAT Rev8, p20 */
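/* sataCapabilities is IDENTIFY DEVICE word 76; bit 8 set means the drive supports NCQ */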
if (pSATAIdData->sataCapabilities & 0x100)
{
/* NCQ supported; multiple outstanding SCSI IO are supported */
pInquiry[6] = 0x00; /* BQUE bit is not set */
pInquiry[7] = 0x02; /* CMDQUE bit is set */
}
else
{
pInquiry[6] = 0x80; /* BQUE bit is set */
pInquiry[7] = 0x00; /* CMDQUE bit is not set */
}
/*
* Vendor ID.
*/
osti_strncpy((char*)&pInquiry[8], AG_SAT_VENDOR_ID_STRING,8); /* 8 bytes */
/*
* Product ID
*/
/* when flipped by LL */
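/*
* ATA IDENTIFY DEVICE strings are packed two ASCII characters per 16-bit
* word; after the LL layer byte-swaps each word, the pairs below are
* swapped back so the model number reads in the correct byte order.
*/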
pInquiry[16] = pSATAIdData->modelNumber[1];
pInquiry[17] = pSATAIdData->modelNumber[0];
pInquiry[18] = pSATAIdData->modelNumber[3];
pInquiry[19] = pSATAIdData->modelNumber[2];
pInquiry[20] = pSATAIdData->modelNumber[5];
pInquiry[21] = pSATAIdData->modelNumber[4];
pInquiry[22] = pSATAIdData->modelNumber[7];
pInquiry[23] = pSATAIdData->modelNumber[6];
pInquiry[24] = pSATAIdData->modelNumber[9];
pInquiry[25] = pSATAIdData->modelNumber[8];
pInquiry[26] = pSATAIdData->modelNumber[11];
pInquiry[27] = pSATAIdData->modelNumber[10];
pInquiry[28] = pSATAIdData->modelNumber[13];
pInquiry[29] = pSATAIdData->modelNumber[12];
pInquiry[30] = pSATAIdData->modelNumber[15];
pInquiry[31] = pSATAIdData->modelNumber[14];
/* when flipped */
/*
* Product Revision level.
*/
/*
* If the IDENTIFY DEVICE data received in words 25 and 26 from the ATA
* device are ASCII spaces (20h), do this translation.
*/
if ( (pSATAIdData->firmwareVersion[4] == 0x20 ) &&
(pSATAIdData->firmwareVersion[5] == 0x00 ) &&
(pSATAIdData->firmwareVersion[6] == 0x20 ) &&
(pSATAIdData->firmwareVersion[7] == 0x00 )
)
{
pInquiry[32] = pSATAIdData->firmwareVersion[1];
pInquiry[33] = pSATAIdData->firmwareVersion[0];
pInquiry[34] = pSATAIdData->firmwareVersion[3];
pInquiry[35] = pSATAIdData->firmwareVersion[2];
}
else
{
pInquiry[32] = pSATAIdData->firmwareVersion[5];
pInquiry[33] = pSATAIdData->firmwareVersion[4];
pInquiry[34] = pSATAIdData->firmwareVersion[7];
pInquiry[35] = pSATAIdData->firmwareVersion[6];
}
#ifdef REMOVED
/*
* Product ID
*/
/* when flipped by LL */
pInquiry[16] = pSATAIdData->modelNumber[0];
pInquiry[17] = pSATAIdData->modelNumber[1];
pInquiry[18] = pSATAIdData->modelNumber[2];
pInquiry[19] = pSATAIdData->modelNumber[3];
pInquiry[20] = pSATAIdData->modelNumber[4];
pInquiry[21] = pSATAIdData->modelNumber[5];
pInquiry[22] = pSATAIdData->modelNumber[6];
pInquiry[23] = pSATAIdData->modelNumber[7];
pInquiry[24] = pSATAIdData->modelNumber[8];
pInquiry[25] = pSATAIdData->modelNumber[9];
pInquiry[26] = pSATAIdData->modelNumber[10];
pInquiry[27] = pSATAIdData->modelNumber[11];
pInquiry[28] = pSATAIdData->modelNumber[12];
pInquiry[29] = pSATAIdData->modelNumber[13];
pInquiry[30] = pSATAIdData->modelNumber[14];
pInquiry[31] = pSATAIdData->modelNumber[15];
/* when flipped */
/*
* Product Revision level.
*/
/*
* If the IDENTIFY DEVICE data received in words 25 and 26 from the ATA
* device are ASCII spaces (20h), do this translation.
*/
if ( (pSATAIdData->firmwareVersion[4] == 0x20 ) &&
(pSATAIdData->firmwareVersion[5] == 0x00 ) &&
(pSATAIdData->firmwareVersion[6] == 0x20 ) &&
(pSATAIdData->firmwareVersion[7] == 0x00 )
)
{
pInquiry[32] = pSATAIdData->firmwareVersion[0];
pInquiry[33] = pSATAIdData->firmwareVersion[1];
pInquiry[34] = pSATAIdData->firmwareVersion[2];
pInquiry[35] = pSATAIdData->firmwareVersion[3];
}
else
{
pInquiry[32] = pSATAIdData->firmwareVersion[4];
pInquiry[33] = pSATAIdData->firmwareVersion[5];
pInquiry[34] = pSATAIdData->firmwareVersion[6];
pInquiry[35] = pSATAIdData->firmwareVersion[7];
}
#endif
TI_DBG5(("satInquiryStandard: end\n"));
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY page 0.
*
* SAT implementation for SCSI INQUIRY page 0.
*
* \param pInquiry: Pointer to Inquiry Data buffer.
* \param pSATAIdData: Pointer to ATA IDENTIFY DEVICE data.
*
* \return None.
*/
/*****************************************************************************/
GLOBAL void satInquiryPage0(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData)
{
TI_DBG5(("satInquiryPage0: entry\n"));
/*
See SPC-4, 7.6.9, p 345
and SAT revision 8, 10.3.2, p 77
*/
pInquiry[0] = 0x00;
pInquiry[1] = 0x00; /* page code */
pInquiry[2] = 0x00; /* reserved */
pInquiry[3] = 7 - 3; /* last index (in this case, 7) - 3; page length */
/* supported vpd page list */
pInquiry[4] = 0x00; /* page 0x00 supported */
pInquiry[5] = 0x80; /* page 0x80 supported */
pInquiry[6] = 0x83; /* page 0x83 supported */
pInquiry[7] = 0x89; /* page 0x89 supported */
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY page 83.
*
* SAT implementation for SCSI INQUIRY page 83.
*
* \param pInquiry: Pointer to Inquiry Data buffer.
* \param pSATAIdData: Pointer to ATA IDENTIFY DEVICE data.
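* \param pSatDevData: Pointer to internal device data structure.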
*
* \return None.
*/
/*****************************************************************************/
GLOBAL void satInquiryPage83(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData,
satDeviceData_t *pSatDevData)
{
satSimpleSATAIdentifyData_t *pSimpleData;
/*
* When translating the fields, in some cases using the simple form of SATA
* Identify Device Data is easier. So we define it here.
* Both pSimpleData and pSATAIdData point to the same data.
*/
pSimpleData = ( satSimpleSATAIdentifyData_t *)pSATAIdData;
TI_DBG5(("satInquiryPage83: entry\n"));
pInquiry[0] = 0x00;
pInquiry[1] = 0x83; /* page code */
pInquiry[2] = 0; /* Reserved */
/*
* If the ATA device returns word 87 bit 8 set to one in its IDENTIFY DEVICE
* data indicating that it supports the WORLD WIDE NAME field
* (i.e., words 108-111), the SATL shall include an identification descriptor
* containing a logical unit name.
*/
if ( pSatDevData->satWWNSupport)
{
/* Fill in SAT Rev8 Table85 */
/*
* Logical unit name derived from the world wide name.
*/
pInquiry[3] = 12; /* 15 - 3; page length, no additional ID descriptor assumed */
/*
* Identifier descriptor
*/
pInquiry[4] = 0x01; /* Code set: binary codes */
pInquiry[5] = 0x03; /* Identifier type : NAA */
pInquiry[6] = 0x00; /* Reserved */
pInquiry[7] = 0x08; /* Identifier length */
/* Bit 4-7 NAA field, bit 0-3 MSB of IEEE Company ID */
pInquiry[8] = (bit8)((pSATAIdData->namingAuthority) >> 8);
pInquiry[9] = (bit8)((pSATAIdData->namingAuthority) & 0xFF); /* IEEE Company ID */
pInquiry[10] = (bit8)((pSATAIdData->namingAuthority1) >> 8); /* IEEE Company ID */
/* Bit 4-7 LSB of IEEE Company ID, bit 0-3 MSB of Vendor Specific ID */
pInquiry[11] = (bit8)((pSATAIdData->namingAuthority1) & 0xFF);
pInquiry[12] = (bit8)((pSATAIdData->uniqueID_bit16_31) >> 8); /* Vendor Specific ID */
pInquiry[13] = (bit8)((pSATAIdData->uniqueID_bit16_31) & 0xFF); /* Vendor Specific ID */
pInquiry[14] = (bit8)((pSATAIdData->uniqueID_bit0_15) >> 8); /* Vendor Specific ID */
pInquiry[15] = (bit8)((pSATAIdData->uniqueID_bit0_15) & 0xFF); /* Vendor Specific ID */
}
else
{
/* Fill in SAT Rev8 Table86 */
/*
* Logical unit name derived from the model number and serial number.
*/
pInquiry[3] = 72; /* 75 - 3; page length */
/*
* Identifier descriptor
*/
pInquiry[4] = 0x02; /* Code set: ASCII codes */
pInquiry[5] = 0x01; /* Identifier type : T10 vendor ID based */
pInquiry[6] = 0x00; /* Reserved */
pInquiry[7] = 0x44; /* 0x44, 68 Identifier length */
/* Bytes 8 to 15 are the vendor id string 'ATA '. */
osti_strncpy((char *)&pInquiry[8], AG_SAT_VENDOR_ID_STRING, 8);
/*
* Bytes 16 to 75 are the vendor specific id
*/
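/*
* The T10 vendor specific identifier is built from IDENTIFY DEVICE
* words 27-46 (model number) followed by words 10-19 (serial number).
*/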
pInquiry[16] = (bit8)((pSimpleData->word[27]) >> 8);
pInquiry[17] = (bit8)((pSimpleData->word[27]) & 0x00ff);
pInquiry[18] = (bit8)((pSimpleData->word[28]) >> 8);
pInquiry[19] = (bit8)((pSimpleData->word[28]) & 0x00ff);
pInquiry[20] = (bit8)((pSimpleData->word[29]) >> 8);
pInquiry[21] = (bit8)((pSimpleData->word[29]) & 0x00ff);
pInquiry[22] = (bit8)((pSimpleData->word[30]) >> 8);
pInquiry[23] = (bit8)((pSimpleData->word[30]) & 0x00ff);
pInquiry[24] = (bit8)((pSimpleData->word[31]) >> 8);
pInquiry[25] = (bit8)((pSimpleData->word[31]) & 0x00ff);
pInquiry[26] = (bit8)((pSimpleData->word[32]) >> 8);
pInquiry[27] = (bit8)((pSimpleData->word[32]) & 0x00ff);
pInquiry[28] = (bit8)((pSimpleData->word[33]) >> 8);
pInquiry[29] = (bit8)((pSimpleData->word[33]) & 0x00ff);
pInquiry[30] = (bit8)((pSimpleData->word[34]) >> 8);
pInquiry[31] = (bit8)((pSimpleData->word[34]) & 0x00ff);
pInquiry[32] = (bit8)((pSimpleData->word[35]) >> 8);
pInquiry[33] = (bit8)((pSimpleData->word[35]) & 0x00ff);
pInquiry[34] = (bit8)((pSimpleData->word[36]) >> 8);
pInquiry[35] = (bit8)((pSimpleData->word[36]) & 0x00ff);
pInquiry[36] = (bit8)((pSimpleData->word[37]) >> 8);
pInquiry[37] = (bit8)((pSimpleData->word[37]) & 0x00ff);
pInquiry[38] = (bit8)((pSimpleData->word[38]) >> 8);
pInquiry[39] = (bit8)((pSimpleData->word[38]) & 0x00ff);
pInquiry[40] = (bit8)((pSimpleData->word[39]) >> 8);
pInquiry[41] = (bit8)((pSimpleData->word[39]) & 0x00ff);
pInquiry[42] = (bit8)((pSimpleData->word[40]) >> 8);
pInquiry[43] = (bit8)((pSimpleData->word[40]) & 0x00ff);
pInquiry[44] = (bit8)((pSimpleData->word[41]) >> 8);
pInquiry[45] = (bit8)((pSimpleData->word[41]) & 0x00ff);
pInquiry[46] = (bit8)((pSimpleData->word[42]) >> 8);
pInquiry[47] = (bit8)((pSimpleData->word[42]) & 0x00ff);
pInquiry[48] = (bit8)((pSimpleData->word[43]) >> 8);
pInquiry[49] = (bit8)((pSimpleData->word[43]) & 0x00ff);
pInquiry[50] = (bit8)((pSimpleData->word[44]) >> 8);
pInquiry[51] = (bit8)((pSimpleData->word[44]) & 0x00ff);
pInquiry[52] = (bit8)((pSimpleData->word[45]) >> 8);
pInquiry[53] = (bit8)((pSimpleData->word[45]) & 0x00ff);
pInquiry[54] = (bit8)((pSimpleData->word[46]) >> 8);
pInquiry[55] = (bit8)((pSimpleData->word[46]) & 0x00ff);
pInquiry[56] = (bit8)((pSimpleData->word[10]) >> 8);
pInquiry[57] = (bit8)((pSimpleData->word[10]) & 0x00ff);
pInquiry[58] = (bit8)((pSimpleData->word[11]) >> 8);
pInquiry[59] = (bit8)((pSimpleData->word[11]) & 0x00ff);
pInquiry[60] = (bit8)((pSimpleData->word[12]) >> 8);
pInquiry[61] = (bit8)((pSimpleData->word[12]) & 0x00ff);
pInquiry[62] = (bit8)((pSimpleData->word[13]) >> 8);
pInquiry[63] = (bit8)((pSimpleData->word[13]) & 0x00ff);
pInquiry[64] = (bit8)((pSimpleData->word[14]) >> 8);
pInquiry[65] = (bit8)((pSimpleData->word[14]) & 0x00ff);
pInquiry[66] = (bit8)((pSimpleData->word[15]) >> 8);
pInquiry[67] = (bit8)((pSimpleData->word[15]) & 0x00ff);
pInquiry[68] = (bit8)((pSimpleData->word[16]) >> 8);
pInquiry[69] = (bit8)((pSimpleData->word[16]) & 0x00ff);
pInquiry[70] = (bit8)((pSimpleData->word[17]) >> 8);
pInquiry[71] = (bit8)((pSimpleData->word[17]) & 0x00ff);
pInquiry[72] = (bit8)((pSimpleData->word[18]) >> 8);
pInquiry[73] = (bit8)((pSimpleData->word[18]) & 0x00ff);
pInquiry[74] = (bit8)((pSimpleData->word[19]) >> 8);
pInquiry[75] = (bit8)((pSimpleData->word[19]) & 0x00ff);
}
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY page 89.
*
* SAT implementation for SCSI INQUIRY page 89.
*
* \param pInquiry: Pointer to Inquiry Data buffer.
* \param pSATAIdData: Pointer to ATA IDENTIFY DEVICE data.
* \param pSatDevData Pointer to internal device data structure
*
* \return None.
*/
/*****************************************************************************/
GLOBAL void satInquiryPage89(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData,
satDeviceData_t *pSatDevData)
{
/*
SAT revision 8, 10.3.5, p 83
*/
satSimpleSATAIdentifyData_t *pSimpleData;
/*
* When translating the fields, in some cases using the simple form of SATA
* Identify Device Data is easier. So we define it here.
* Both pSimpleData and pSATAIdData point to the same data.
*/
pSimpleData = ( satSimpleSATAIdentifyData_t *)pSATAIdData;
TI_DBG5(("satInquiryPage89: start\n"));
pInquiry[0] = 0x00; /* Peripheral Qualifier and Peripheral Device Type */
pInquiry[1] = 0x89; /* page code */
/* Page length 0x238 */
pInquiry[2] = 0x02;
pInquiry[3] = 0x38;
pInquiry[4] = 0x0; /* reserved */
pInquiry[5] = 0x0; /* reserved */
pInquiry[6] = 0x0; /* reserved */
pInquiry[7] = 0x0; /* reserved */
/* SAT Vendor Identification */
osti_strncpy((char*)&pInquiry[8], "PMC-SIERRA", 8); /* 8 bytes */
/* SAT Product Identification */
osti_strncpy((char*)&pInquiry[16], "Tachyon-SPC ", 16); /* 16 bytes */
/* SAT Product Revision Level */
osti_strncpy((char*)&pInquiry[32], "01", 4); /* 4 bytes */
/* Signature, SAT revision8, Table88, p85 */
pInquiry[36] = 0x34; /* FIS type */
if (pSatDevData->satDeviceType == SATA_ATA_DEVICE)
{
/* interrupt assumed to be 0 */
pInquiry[37] = (bit8)((pSatDevData->satPMField) >> (4 * 7)); /* first four bits of PM field */
}
else
{
/* interrupt assumed to be 1 */
pInquiry[37] = (bit8)(0x40 + (bit8)(((pSatDevData->satPMField) >> (4 * 7)))); /* first four bits of PM field */
}
pInquiry[38] = 0;
pInquiry[39] = 0;
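/* Signature registers; note that both branches below currently program identical values */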
if (pSatDevData->satDeviceType == SATA_ATA_DEVICE)
{
pInquiry[40] = 0x01; /* LBA Low */
pInquiry[41] = 0x00; /* LBA Mid */
pInquiry[42] = 0x00; /* LBA High */
pInquiry[43] = 0x00; /* Device */
pInquiry[44] = 0x00; /* LBA Low Exp */
pInquiry[45] = 0x00; /* LBA Mid Exp */
pInquiry[46] = 0x00; /* LBA High Exp */
pInquiry[47] = 0x00; /* Reserved */
pInquiry[48] = 0x01; /* Sector Count */
pInquiry[49] = 0x00; /* Sector Count Exp */
}
else
{
pInquiry[40] = 0x01; /* LBA Low */
pInquiry[41] = 0x00; /* LBA Mid */
pInquiry[42] = 0x00; /* LBA High */
pInquiry[43] = 0x00; /* Device */
pInquiry[44] = 0x00; /* LBA Low Exp */
pInquiry[45] = 0x00; /* LBA Mid Exp */
pInquiry[46] = 0x00; /* LBA High Exp */
pInquiry[47] = 0x00; /* Reserved */
pInquiry[48] = 0x01; /* Sector Count */
pInquiry[49] = 0x00; /* Sector Count Exp */
}
/* Reserved */
pInquiry[50] = 0x00;
pInquiry[51] = 0x00;
pInquiry[52] = 0x00;
pInquiry[53] = 0x00;
pInquiry[54] = 0x00;
pInquiry[55] = 0x00;
/* Command Code */
if (pSatDevData->satDeviceType == SATA_ATA_DEVICE)
{
pInquiry[56] = 0xEC; /* IDENTIFY DEVICE */
}
else
{
pInquiry[56] = 0xA1; /* IDENTIFY PACKET DEVICE */
}
/* Reserved */
pInquiry[57] = 0x0;
pInquiry[58] = 0x0;
pInquiry[59] = 0x0;
/* Identify Device */
osti_memcpy(&pInquiry[60], pSimpleData, sizeof(satSimpleSATAIdentifyData_t));
return;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY page 80.
*
* SAT implementation for SCSI INQUIRY page 80.
*
* \param pInquiry: Pointer to Inquiry Data buffer.
* \param pSATAIdData: Pointer to ATA IDENTIFY DEVICE data.
*
* \return None.
*/
/*****************************************************************************/
GLOBAL void satInquiryPage80(
bit8 *pInquiry,
agsaSATAIdentifyData_t *pSATAIdData)
{
TI_DBG5(("satInquiryPage80: entry\n"));
/*
See SPC-4, 7.6.9, p 345
and SAT revision 8, 10.3.3, p 77
*/
pInquiry[0] = 0x00;
pInquiry[1] = 0x80; /* page code */
pInquiry[2] = 0x00; /* reserved */
pInquiry[3] = 0x14; /* page length */
/* product serial number (IDENTIFY words 10-19), byte-swapped within each 16-bit word */
pInquiry[4] = pSATAIdData->serialNumber[1];
pInquiry[5] = pSATAIdData->serialNumber[0];
pInquiry[6] = pSATAIdData->serialNumber[3];
pInquiry[7] = pSATAIdData->serialNumber[2];
pInquiry[8] = pSATAIdData->serialNumber[5];
pInquiry[9] = pSATAIdData->serialNumber[4];
pInquiry[10] = pSATAIdData->serialNumber[7];
pInquiry[11] = pSATAIdData->serialNumber[6];
pInquiry[12] = pSATAIdData->serialNumber[9];
pInquiry[13] = pSATAIdData->serialNumber[8];
pInquiry[14] = pSATAIdData->serialNumber[11];
pInquiry[15] = pSATAIdData->serialNumber[10];
pInquiry[16] = pSATAIdData->serialNumber[13];
pInquiry[17] = pSATAIdData->serialNumber[12];
pInquiry[18] = pSATAIdData->serialNumber[15];
pInquiry[19] = pSATAIdData->serialNumber[14];
pInquiry[20] = pSATAIdData->serialNumber[17];
pInquiry[21] = pSATAIdData->serialNumber[16];
pInquiry[22] = pSATAIdData->serialNumber[19];
pInquiry[23] = pSATAIdData->serialNumber[18];
}
/*****************************************************************************/
/*! \brief Send READ LOG EXT ATA PAGE 10h command to sata drive.
*
* Send READ LOG EXT ATA command PAGE 10h request to LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSendReadLogExt(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
TI_DBG1(("satSendReadLogExt: tiDeviceHandle=%p tiIORequest=%p\n",
tiDeviceHandle, tiIORequest));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_READ_LOG_EXT; /* 0x2F */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0x10; /* Page number */
fis->d.lbaMid = 0; /* */
fis->d.lbaHigh = 0; /* */
fis->d.device = 0; /* DEV is ignored in SATA */
fis->d.lbaLowExp = 0; /* */
fis->d.lbaMidExp = 0; /* */
fis->d.lbaHighExp = 0; /* */
fis->d.featuresExp = 0; /* FIS reserve */
fis->d.sectorCount = 0x01; /* read one sector */
fis->d.sectorCountExp = 0x00;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satReadLogExtCB;
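/* satReadLogExtCB() carries out the remainder of the NCQ error recovery sequence described in osSatIOCompleted() */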
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG1(("satSendReadLogExt: end status %d\n", status));
return (status);
}
/*****************************************************************************/
/*! \brief SAT default ATA status and ATA error translation to SCSI.
*
* SAT default ATA status and ATA error translation to SCSI.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param satIOContext: Pointer to the SAT IO Context
* \param pSense: Pointer to scsiRspSense_t
* \param ataStatus: ATA status register
* \param ataError: ATA error register
* \param interruptContext: Interrupt context
*
* \return None
*/
/*****************************************************************************/
GLOBAL void osSatDefaultTranslation(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
satIOContext_t *satIOContext,
scsiRspSense_t *pSense,
bit8 ataStatus,
bit8 ataError,
bit32 interruptContext )
{
/*
* Check for device fault case
*/
if ( ataStatus & DF_ATA_STATUS_MASK )
{
satSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
interruptContext );
return;
}
/*
* If the status error bit is set, we need to check the error register
*/
if ( ataStatus & ERR_ATA_STATUS_MASK )
{
if ( ataError & NM_ATA_ERROR_MASK )
{
TI_DBG1(("osSatDefaultTranslation: NM_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_NOT_READY,
0,
SCSI_SNSCODE_MEDIUM_NOT_PRESENT,
satIOContext);
}
else if (ataError & UNC_ATA_ERROR_MASK)
{
TI_DBG1(("osSatDefaultTranslation: UNC_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_MEDIUM_ERROR,
0,
SCSI_SNSCODE_UNRECOVERED_READ_ERROR,
satIOContext);
}
else if (ataError & IDNF_ATA_ERROR_MASK)
{
TI_DBG1(("osSatDefaultTranslation: IDNF_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_MEDIUM_ERROR,
0,
SCSI_SNSCODE_RECORD_NOT_FOUND,
satIOContext);
}
else if (ataError & MC_ATA_ERROR_MASK)
{
TI_DBG1(("osSatDefaultTranslation: MC_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_UNIT_ATTENTION,
0,
SCSI_SNSCODE_NOT_READY_TO_READY_CHANGE,
satIOContext);
}
else if (ataError & MCR_ATA_ERROR_MASK)
{
TI_DBG1(("osSatDefaultTranslation: MCR_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_UNIT_ATTENTION,
0,
SCSI_SNSCODE_OPERATOR_MEDIUM_REMOVAL_REQUEST,
satIOContext);
}
else if (ataError & ICRC_ATA_ERROR_MASK)
{
TI_DBG1(("osSatDefaultTranslation: ICRC_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_INFORMATION_UNIT_CRC_ERROR,
satIOContext);
}
else if (ataError & ABRT_ATA_ERROR_MASK)
{
TI_DBG1(("osSatDefaultTranslation: ABRT_ATA_ERROR ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_ABORTED_COMMAND,
0,
SCSI_SNSCODE_NO_ADDITIONAL_INFO,
satIOContext);
}
else
{
TI_DBG1(("osSatDefaultTranslation: **** UNEXPECTED ATA_ERROR **** ataError= 0x%x, tiIORequest=%p\n",
ataError, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
}
/* Send the completion response now */
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
interruptContext );
return;
}
else /* (ataStatus & ERR_ATA_STATUS_MASK ) is false */
{
/* This case should never happen */
TI_DBG1(("osSatDefaultTranslation: *** UNEXPECTED ATA status 0x%x *** tiIORequest=%p\n",
ataStatus, tiIORequest));
satSetSensePayload( pSense,
SCSI_SNSKEY_HARDWARE_ERROR,
0,
SCSI_SNSCODE_INTERNAL_TARGET_FAILURE,
satIOContext);
ostiInitiatorIOCompleted( tiRoot,
tiIORequest,
tiIOSuccess,
SCSI_STAT_CHECK_CONDITION,
satIOContext->pTiSenseData,
interruptContext );
return;
}
}
/*****************************************************************************/
/*! \brief Allocate resource for SAT internally generated I/O.
*
* Allocate resource for SAT internally generated I/O.
*
* \param tiRoot: Pointer to TISA driver/port instance.
* \param tiIORequest: Pointer to the original TISA I/O request.
* \param satDevData: Pointer to SAT specific device data.
* \param dmaAllocLength: Length in bytes of the DMA memory to allocate, up to
* one page.
* \param satIntIo: Pointer (output) to context for SAT internally
* generated I/O that is allocated by this routine.
*
* \return Pointer to the allocated satInternalIo_t context on success, or
* agNULL on failure.
*/
/*****************************************************************************/
GLOBAL satInternalIo_t * satAllocIntIoResource(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
satDeviceData_t *satDevData,
bit32 dmaAllocLength,
satInternalIo_t *satIntIo)
{
tdList_t *tdList = agNULL;
bit32 memAllocStatus;
TI_DBG1(("satAllocIntIoResource: start\n"));
TI_DBG6(("satAllocIntIoResource: satIntIo %p\n", satIntIo));
if (satDevData == agNULL)
{
TI_DBG1(("satAllocIntIoResource: ***** ASSERT satDevData is null\n"));
return agNULL;
}
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
if (!TDLIST_EMPTY(&(satDevData->satFreeIntIoLinkList)))
{
TDLIST_DEQUEUE_FROM_HEAD(&tdList, &(satDevData->satFreeIntIoLinkList));
}
else
{
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
TI_DBG1(("satAllocIntIoResource() no more internal free link.\n"));
return agNULL;
}
if (tdList == agNULL)
{
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
TI_DBG1(("satAllocIntIoResource() FAIL to alloc satIntIo.\n"));
return agNULL;
}
satIntIo = TDLIST_OBJECT_BASE( satInternalIo_t, satIntIoLink, tdList);
TI_DBG6(("satAllocIntIoResource: satDevData %p satIntIo id %d\n", satDevData, satIntIo->id));
/* Put in active list */
TDLIST_DEQUEUE_THIS (&(satIntIo->satIntIoLink));
TDLIST_ENQUEUE_AT_TAIL (&(satIntIo->satIntIoLink), &(satDevData->satActiveIntIoLinkList));
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
#ifdef REMOVED
/* Put in active list */
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
TDLIST_DEQUEUE_THIS (tdList);
TDLIST_ENQUEUE_AT_TAIL (tdList, &(satDevData->satActiveIntIoLinkList));
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
satIntIo = TDLIST_OBJECT_BASE( satInternalIo_t, satIntIoLink, tdList);
TI_DBG6(("satAllocIntIoResource: satDevData %p satIntIo id %d\n", satDevData, satIntIo->id));
#endif
/*
typedef struct
{
tdList_t satIntIoLink;
tiIORequest_t satIntTiIORequest;
void *satIntRequestBody;
tiScsiInitiatorRequest_t satIntTiScsiXchg;
tiMem_t satIntDmaMem;
tiMem_t satIntReqBodyMem;
bit32 satIntFlag;
} satInternalIo_t;
*/
/*
* Allocate mem for Request Body
*/
satIntIo->satIntReqBodyMem.totalLength = sizeof(tdIORequestBody_t);
memAllocStatus = ostiAllocMemory( tiRoot,
&satIntIo->satIntReqBodyMem.osHandle,
(void **)&satIntIo->satIntRequestBody,
&satIntIo->satIntReqBodyMem.physAddrUpper,
&satIntIo->satIntReqBodyMem.physAddrLower,
8,
satIntIo->satIntReqBodyMem.totalLength,
agTRUE );
if (memAllocStatus != tiSuccess)
{
TI_DBG1(("satAllocIntIoResource() FAIL to alloc mem for Req Body.\n"));
/*
* Return satIntIo to the free list
*/
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
TDLIST_DEQUEUE_THIS (&satIntIo->satIntIoLink);
TDLIST_ENQUEUE_AT_HEAD(&satIntIo->satIntIoLink, &satDevData->satFreeIntIoLinkList);
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
return agNULL;
}
/*
* Allocate DMA memory if required
*/
if (dmaAllocLength != 0)
{
satIntIo->satIntDmaMem.totalLength = dmaAllocLength;
memAllocStatus = ostiAllocMemory( tiRoot,
&satIntIo->satIntDmaMem.osHandle,
(void **)&satIntIo->satIntDmaMem.virtPtr,
&satIntIo->satIntDmaMem.physAddrUpper,
&satIntIo->satIntDmaMem.physAddrLower,
8,
satIntIo->satIntDmaMem.totalLength,
agFALSE);
TI_DBG6(("satAllocIntIoResource: len %d \n", satIntIo->satIntDmaMem.totalLength));
TI_DBG6(("satAllocIntIoResource: pointer %p \n", satIntIo->satIntDmaMem.osHandle));
if (memAllocStatus != tiSuccess)
{
TI_DBG1(("satAllocIntIoResource() FAIL to alloc mem for DMA mem.\n"));
/*
* Return satIntIo to the free list
*/
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
TDLIST_DEQUEUE_THIS (&satIntIo->satIntIoLink);
TDLIST_ENQUEUE_AT_HEAD(&satIntIo->satIntIoLink, &satDevData->satFreeIntIoLinkList);
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
/*
* Free mem allocated for Req body
*/
ostiFreeMemory( tiRoot,
satIntIo->satIntReqBodyMem.osHandle,
satIntIo->satIntReqBodyMem.totalLength);
return agNULL;
}
}
/*
typedef struct
{
tdList_t satIntIoLink;
tiIORequest_t satIntTiIORequest;
void *satIntRequestBody;
tiScsiInitiatorRequest_t satIntTiScsiXchg;
tiMem_t satIntDmaMem;
tiMem_t satIntReqBodyMem;
bit32 satIntFlag;
} satInternalIo_t;
*/
/*
* Initialize satIntTiIORequest field
*/
satIntIo->satIntTiIORequest.osData = agNULL; /* Not used for internal SAT I/O */
satIntIo->satIntTiIORequest.tdData = satIntIo->satIntRequestBody;
/*
* saves the original tiIOrequest
*/
satIntIo->satOrgTiIORequest = tiIORequest;
/*
typedef struct tiIniScsiCmnd
{
tiLUN_t lun;
bit32 expDataLength;
bit32 taskAttribute;
bit32 crn;
bit8 cdb[16];
} tiIniScsiCmnd_t;
typedef struct tiScsiInitiatorExchange
{
void *sglVirtualAddr;
tiIniScsiCmnd_t scsiCmnd;
tiSgl_t agSgl1;
tiSgl_t agSgl2;
tiDataDirection_t dataDirection;
} tiScsiInitiatorRequest_t;
*/
/*
* Initialize satIntTiScsiXchg. Since the internal SAT request is NOT
* originated from SCSI request, only the following fields are initialized:
* - sglVirtualAddr if DMA transfer is involved
* - agSgl1 if DMA transfer is involved
* - expDataLength in scsiCmnd since this field is read by sataLLIOStart()
*/
if (dmaAllocLength != 0)
{
satIntIo->satIntTiScsiXchg.sglVirtualAddr = satIntIo->satIntDmaMem.virtPtr;
OSSA_WRITE_LE_32(agNULL, &satIntIo->satIntTiScsiXchg.agSgl1.len, 0,
satIntIo->satIntDmaMem.totalLength);
satIntIo->satIntTiScsiXchg.agSgl1.lower = satIntIo->satIntDmaMem.physAddrLower;
satIntIo->satIntTiScsiXchg.agSgl1.upper = satIntIo->satIntDmaMem.physAddrUpper;
satIntIo->satIntTiScsiXchg.agSgl1.type = tiSgl;
satIntIo->satIntTiScsiXchg.scsiCmnd.expDataLength = satIntIo->satIntDmaMem.totalLength;
}
else
{
satIntIo->satIntTiScsiXchg.sglVirtualAddr = agNULL;
satIntIo->satIntTiScsiXchg.agSgl1.len = 0;
satIntIo->satIntTiScsiXchg.agSgl1.lower = 0;
satIntIo->satIntTiScsiXchg.agSgl1.upper = 0;
satIntIo->satIntTiScsiXchg.agSgl1.type = tiSgl;
satIntIo->satIntTiScsiXchg.scsiCmnd.expDataLength = 0;
}
TI_DBG5(("satAllocIntIoResource: satIntIo->satIntTiScsiXchg.agSgl1.len %d\n", satIntIo->satIntTiScsiXchg.agSgl1.len));
TI_DBG5(("satAllocIntIoResource: satIntIo->satIntTiScsiXchg.agSgl1.upper %d\n", satIntIo->satIntTiScsiXchg.agSgl1.upper));
TI_DBG5(("satAllocIntIoResource: satIntIo->satIntTiScsiXchg.agSgl1.lower %d\n", satIntIo->satIntTiScsiXchg.agSgl1.lower));
TI_DBG5(("satAllocIntIoResource: satIntIo->satIntTiScsiXchg.agSgl1.type %d\n", satIntIo->satIntTiScsiXchg.agSgl1.type));
TI_DBG5(("satAllocIntIoResource: return satIntIo %p\n", satIntIo));
return satIntIo;
}
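/*
* Typical usage in this file (see osSatIOCompleted): allocate with
* satAllocIntIoResource(), issue the internal command, and on any failure
* release the context with satFreeIntIoResource() before returning.
*/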
/*****************************************************************************/
/*! \brief Free resource for SAT internally generated I/O.
*
* Free resource for SAT internally generated I/O that was previously
* allocated in satAllocIntIoResource().
*
* \param tiRoot: Pointer to TISA driver/port instance.
* \param satDevData: Pointer to SAT specific device data.
* \param satIntIo: Pointer to context for SAT internal I/O that was
* previously allocated in satAllocIntIoResource().
*
* \return None
*/
/*****************************************************************************/
GLOBAL void satFreeIntIoResource(
tiRoot_t *tiRoot,
satDeviceData_t *satDevData,
satInternalIo_t *satIntIo)
{
TI_DBG6(("satFreeIntIoResource: start\n"));
if (satIntIo == agNULL)
{
TI_DBG6(("satFreeIntIoResource: allowed call\n"));
return;
}
/* sets the original tiIOrequest to agNULL for internally generated ATA cmnd */
satIntIo->satOrgTiIORequest = agNULL;
/*
* Free DMA memory if previously allocated
*/
if (satIntIo->satIntTiScsiXchg.scsiCmnd.expDataLength != 0)
{
TI_DBG1(("satFreeIntIoResource: DMA len %d\n", satIntIo->satIntDmaMem.totalLength));
TI_DBG6(("satFreeIntIoResource: pointer %p\n", satIntIo->satIntDmaMem.osHandle));
ostiFreeMemory( tiRoot,
satIntIo->satIntDmaMem.osHandle,
satIntIo->satIntDmaMem.totalLength);
satIntIo->satIntTiScsiXchg.scsiCmnd.expDataLength = 0;
}
if (satIntIo->satIntReqBodyMem.totalLength != 0)
{
TI_DBG1(("satFreeIntIoResource: req body len %d\n", satIntIo->satIntReqBodyMem.totalLength));
/*
* Free mem allocated for Req body
*/
ostiFreeMemory( tiRoot,
satIntIo->satIntReqBodyMem.osHandle,
satIntIo->satIntReqBodyMem.totalLength);
satIntIo->satIntReqBodyMem.totalLength = 0;
}
TI_DBG6(("satFreeIntIoResource: satDevData %p satIntIo id %d\n", satDevData, satIntIo->id));
/*
* Return satIntIo to the free list
*/
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
TDLIST_DEQUEUE_THIS (&(satIntIo->satIntIoLink));
TDLIST_ENQUEUE_AT_TAIL (&(satIntIo->satIntIoLink), &(satDevData->satFreeIntIoLinkList));
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY.
*
* SAT implementation for SCSI INQUIRY.
* This function sends ATA Identify Device data command for SCSI INQUIRY
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satSendIDDev(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
satInternalIo_t *satIntIoContext;
tdsaDeviceData_t *oneDeviceData;
tdIORequestBody_t *tdIORequestBody;
#endif
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
TI_DBG5(("satSendIDDev: start\n"));
#ifdef TD_DEBUG_ENABLE
oneDeviceData = (tdsaDeviceData_t *)tiDeviceHandle->tdData;
#endif
TI_DBG5(("satSendIDDev: did %d\n", oneDeviceData->id));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
tdIORequestBody = satIntIoContext->satIntRequestBody;
#endif
TI_DBG5(("satSendIDDev: satIOContext %p tdIORequestBody %p\n", satIOContext, tdIORequestBody));
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
fis->h.command = SAT_IDENTIFY_PACKET_DEVICE; /* 0x40 */
else
fis->h.command = SAT_IDENTIFY_DEVICE; /* 0xEC */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satInquiryCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef TD_INTERNAL_DEBUG
tdhexdump("satSendIDDev", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
tdhexdump("satSendIDDev LL", (bit8 *)&(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG6(("satSendIDDev: end status %d\n", status));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for SCSI INQUIRY.
*
* SAT implementation for SCSI INQUIRY.
* This function prepares a TD layer internal resource to send an ATA
* IDENTIFY DEVICE data command for SCSI INQUIRY.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/* prerequisite: tdsaDeviceData and agdevhandle must exist; in other words, the LL layer has
already discovered the device */
/*
converts an OS-generated IO into a TD-generated IO because the SGLs differ
*/
GLOBAL bit32 satStartIDDev(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
satInternalIo_t *satIntIo = agNULL;
satDeviceData_t *satDevData = agNULL;
tdIORequestBody_t *tdIORequestBody;
satIOContext_t *satNewIOContext;
bit32 status;
TI_DBG6(("satStartIDDev: start\n"));
satDevData = satIOContext->pSatDevData;
TI_DBG6(("satStartIDDev: before alloc\n"));
/* allocate identify device command */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest,
satDevData,
sizeof(agsaSATAIdentifyData_t), /* 512; size of identify device data */
satIntIo);
TI_DBG6(("satStartIDDev: before after\n"));
if (satIntIo == agNULL)
{
TI_DBG1(("satStartIDDev: can't alloacate\n"));
#if 0
ostiInitiatorIOCompleted (
tiRoot,
tiIORequest,
tiIOFailed,
tiDetailOtherError,
agNULL,
satIOContext->interruptContext
);
#endif
return tiError;
}
/* fill in fields */
satIntIo->satOrgTiIORequest = tiIORequest; /* changed */
tdIORequestBody = satIntIo->satIntRequestBody;
satNewIOContext = &(tdIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satIntIo->satIntTiScsiXchg.scsiCmnd);
satNewIOContext->pSense = &(tdIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pTiSenseData = &(tdIORequestBody->transport.SATA.tiSenseData);
satNewIOContext->tiRequestBody = satIntIo->satIntRequestBody; /* key fix */
satNewIOContext->interruptContext = tiInterruptContext;
satNewIOContext->satIntIoContext = satIntIo;
satNewIOContext->ptiDeviceHandle = agNULL;
satNewIOContext->satOrgIOContext = satIOContext; /* changed */
/* this is valid only for TD layer generated (not triggered by OS at all) IO */
satNewIOContext->tiScsiXchg = &(satIntIo->satIntTiScsiXchg);
TI_DBG6(("satStartIDDev: OS satIOContext %p \n", satIOContext));
TI_DBG6(("satStartIDDev: TD satNewIOContext %p \n", satNewIOContext));
TI_DBG6(("satStartIDDev: OS tiScsiXchg %p \n", satIOContext->tiScsiXchg));
TI_DBG6(("satStartIDDev: TD tiScsiXchg %p \n", satNewIOContext->tiScsiXchg));
TI_DBG1(("satStartIDDev: satNewIOContext %p tdIORequestBody %p\n", satNewIOContext, tdIORequestBody));
status = satSendIDDev( tiRoot,
&satIntIo->satIntTiIORequest, /* New tiIORequest */
tiDeviceHandle,
satNewIOContext->tiScsiXchg, /* New tiScsiInitiatorRequest_t *tiScsiRequest, */
satNewIOContext);
if (status != tiSuccess)
{
TI_DBG1(("satStartIDDev: failed in sending\n"));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
#if 0
ostiInitiatorIOCompleted (
tiRoot,
tiIORequest,
tiIOFailed,
tiDetailOtherError,
agNULL,
satIOContext->interruptContext
);
#endif
return tiError;
}
TI_DBG6(("satStartIDDev: end\n"));
return status;
}
/*****************************************************************************/
/*! \brief satComputeCDB10LBA.
*
* This function computes the LBA of a CDB10.
*
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return
* - \e LBA
*/
/*****************************************************************************/
bit32 satComputeCDB10LBA(satIOContext_t *satIOContext)
{
tiIniScsiCmnd_t *scsiCmnd;
tiScsiInitiatorRequest_t *tiScsiRequest;
bit32 lba = 0;
TI_DBG5(("satComputeCDB10LBA: start\n"));
tiScsiRequest = satIOContext->tiScsiXchg;
scsiCmnd = &(tiScsiRequest->scsiCmnd);
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
return lba;
}
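/*
Worked example (illustrative values, not taken from the driver): a READ(10)
with cdb[2..5] = 0x00 0x12 0x34 0x56 yields
lba = (0x00 << 24) + (0x12 << 16) + (0x34 << 8) + 0x56 = 0x00123456.
*/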
/*****************************************************************************/
/*! \brief satComputeCDB10TL.
*
* This function computes the transfer length of a CDB10.
*
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return
* - \e TL
*/
/*****************************************************************************/
bit32 satComputeCDB10TL(satIOContext_t *satIOContext)
{
tiIniScsiCmnd_t *scsiCmnd;
tiScsiInitiatorRequest_t *tiScsiRequest;
bit32 tl = 0;
TI_DBG5(("satComputeCDB10TL: start\n"));
tiScsiRequest = satIOContext->tiScsiXchg;
scsiCmnd = &(tiScsiRequest->scsiCmnd);
tl = (scsiCmnd->cdb[7] << 8) + scsiCmnd->cdb[8];
return tl;
}
/*****************************************************************************/
/*! \brief satComputeCDB12LBA.
*
* This function computes the LBA of a CDB12.
*
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return
* - \e LBA
*/
/*****************************************************************************/
bit32 satComputeCDB12LBA(satIOContext_t *satIOContext)
{
tiIniScsiCmnd_t *scsiCmnd;
tiScsiInitiatorRequest_t *tiScsiRequest;
bit32 lba = 0;
TI_DBG5(("satComputeCDB10LBA: start\n"));
tiScsiRequest = satIOContext->tiScsiXchg;
scsiCmnd = &(tiScsiRequest->scsiCmnd);
lba = (scsiCmnd->cdb[2] << (8*3)) + (scsiCmnd->cdb[3] << (8*2))
+ (scsiCmnd->cdb[4] << 8) + scsiCmnd->cdb[5];
return lba;
}
/*****************************************************************************/
/*! \brief satComputeCDB12TL.
*
* This function computes the transfer length of a CDB12.
*
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return
* - \e TL
*/
/*****************************************************************************/
bit32 satComputeCDB12TL(satIOContext_t *satIOContext)
{
tiIniScsiCmnd_t *scsiCmnd;
tiScsiInitiatorRequest_t *tiScsiRequest;
bit32 tl = 0;
TI_DBG5(("satComputeCDB10TL: start\n"));
tiScsiRequest = satIOContext->tiScsiXchg;
scsiCmnd = &(tiScsiRequest->scsiCmnd);
tl = (scsiCmnd->cdb[6] << (8*3)) + (scsiCmnd->cdb[7] << (8*2))
+ (scsiCmnd->cdb[8] << 8) + scsiCmnd->cdb[9];
return tl;
}
/*****************************************************************************/
/*! \brief satComputeCDB16LBA.
*
* This function computes the LBA of a CDB16.
*
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return
* - \e LBA
*/
/*****************************************************************************/
/*
CDB16 has a bit64 LBA,
but it must be less than (2^28 - 1) here,
so using the last four LBA bytes to compute the LBA is OK.
*/
bit32 satComputeCDB16LBA(satIOContext_t *satIOContext)
{
tiIniScsiCmnd_t *scsiCmnd;
tiScsiInitiatorRequest_t *tiScsiRequest;
bit32 lba = 0;
TI_DBG5(("satComputeCDB10LBA: start\n"));
tiScsiRequest = satIOContext->tiScsiXchg;
scsiCmnd = &(tiScsiRequest->scsiCmnd);
lba = (scsiCmnd->cdb[6] << (8*3)) + (scsiCmnd->cdb[7] << (8*2))
+ (scsiCmnd->cdb[8] << 8) + scsiCmnd->cdb[9];
return lba;
}
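/*
Worked example (illustrative values): in a 16-byte CDB the LBA occupies
cdb[2..9]; because the LBA is assumed to be below 2^28 here, cdb[2..5]
are all zero and only cdb[6..9] contribute, e.g. cdb[6..9] =
0x00 0x12 0x34 0x56 gives lba = 0x00123456, matching the CDB10 case above.
*/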
/*****************************************************************************/
/*! \brief satComputeCDB16TL.
*
* This function computes the transfer length of a CDB16.
*
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return
* - \e TL
*/
/*****************************************************************************/
bit32 satComputeCDB16TL(satIOContext_t *satIOContext)
{
tiIniScsiCmnd_t *scsiCmnd;
tiScsiInitiatorRequest_t *tiScsiRequest;
bit32 tl = 0;
TI_DBG5(("satComputeCDB10TL: start\n"));
tiScsiRequest = satIOContext->tiScsiXchg;
scsiCmnd = &(tiScsiRequest->scsiCmnd);
tl = (scsiCmnd->cdb[10] << (8*3)) + (scsiCmnd->cdb[11] << (8*2))
+ (scsiCmnd->cdb[12] << 8) + scsiCmnd->cdb[13];
return tl;
}
/*****************************************************************************/
/*! \brief satComputeLoopNum.
*
* This function computes the number of iterations needed to cover a
* transfer length with a given chunk size (i.e. ceil(a/b), with a
* minimum of one).
*
* \param a: the numerator (transfer length)
* \param b: the denominator (chunk size); must be non-zero
*
* \return
* - \e number of iterations
*/
/*****************************************************************************/
/*
called as (tl, denom);
tl can be up to bit32 because CDB16 has a bit32 tl, so this is fine.
denom is either 0xFF or 0xFFFF.
*/
bit32 satComputeLoopNum(bit32 a, bit32 b)
{
bit32 quo = 0, rem = 0;
bit32 LoopNum = 0;
TI_DBG5(("satComputeLoopNum: start\n"));
quo = a/b;
if (quo == 0)
{
LoopNum = 1;
}
else
{
rem = a % b;
if (rem == 0)
{
LoopNum = quo;
}
else
{
LoopNum = quo + 1;
}
}
return LoopNum;
}
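/*
Worked example (illustrative values): satComputeLoopNum(600, 0xFF)
computes quo = 2 and rem = 90, so it returns 3; a 600-sector transfer
needs three commands of at most 255 sectors. satComputeLoopNum(100, 0xFF)
returns 1 via the quo == 0 branch.
*/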
/*****************************************************************************/
/*! \brief satAddNComparebit64.
*
* This function adds two 8-byte big-endian values (lba and tl) and
* checks whether the sum exceeds the limit.
*
* \param a: lba
* \param b: tl
*
* \return
* - \e TRUE if (lba + tl > SAT_TR_LBA_LIMIT)
* - \e FALSE otherwise
* \note: a and b must be the same length
*/
/*****************************************************************************/
/*
input: bit8 a[8], bit8 b[8] (lba, tl); both must be the same length
if (lba + tl > SAT_TR_LBA_LIMIT)
then returns true
else returns false
(LBA,TL)
*/
bit32 satAddNComparebit64(bit8 *a, bit8 *b)
{
bit16 ans[8]; // index 0 is MSB, index 7 is LSB
bit8 final_ans[9]; // index 0 is MSB, index 8 is LSB
bit8 max[9];
int i;
TI_DBG5(("satAddNComparebit64: start\n"));
osti_memset(ans, 0, sizeof(ans));
osti_memset(final_ans, 0, sizeof(final_ans));
osti_memset(max, 0, sizeof(max));
max[0] = 0x1; //max = 0x1 0000 0000 0000 0000
// adding from LSB to MSB
for(i=7;i>=0;i--)
{
ans[i] = (bit16)(a[i] + b[i]);
if (i != 7)
{
ans[i] = (bit16)(ans[i] + ((ans[i+1] & 0xFF00) >> 8));
}
}
/*
filling in the final answer
*/
final_ans[0] = (bit8)(((ans[0] & 0xFF00) >> 8));
final_ans[1] = (bit8)(ans[0] & 0xFF);
for(i=2;i<=8;i++)
{
final_ans[i] = (bit8)(ans[i-1] & 0xFF);
}
//compare final_ans to max
for(i=0;i<=8;i++)
{
if (final_ans[i] > max[i])
{
TI_DBG5(("satAddNComparebit64: yes at %d\n", i));
return agTRUE;
}
else if (final_ans[i] < max[i])
{
TI_DBG5(("satAddNComparebit64: no at %d\n", i));
return agFALSE;
}
else
{
continue;
}
}
return agFALSE;
}
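/*
Worked example of the carry handling above (illustrative values): with
a = 00 00 00 00 00 00 00 FF and b = 00 00 00 00 00 00 00 01,
ans[7] = 0x0100 and the loop folds its high byte into ans[6], so
final_ans = 00 00 00 00 00 00 00 01 00; final_ans[0] is less than
max[0], so the comparison loop returns agFALSE at i = 0.
*/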
/*****************************************************************************/
/*! \brief satAddNComparebit32.
*
* This function adds two 4-byte big-endian values (lba and tl) and
* checks whether the sum exceeds the limit.
*
* \param a: lba
* \param b: tl
*
* \return
* - \e TRUE if (lba + tl > SAT_TR_LBA_LIMIT)
* - \e FALSE otherwise
* \note: a and b must be the same length
*/
/*****************************************************************************/
/*
input: bit8 a[4], bit8 b[4] (lba, tl); both must be the same length
if (lba + tl > SAT_TR_LBA_LIMIT)
then returns true
else returns false
(LBA,TL)
*/
bit32 satAddNComparebit32(bit8 *a, bit8 *b)
{
bit16 ans[4]; // index 0 is MSB, index 3 is LSB
bit8 final_ans[5]; // index 0 is MSB, index 4 is LSB
bit8 max[4];
int i;
TI_DBG5(("satAddNComparebit32: start\n"));
osti_memset(ans, 0, sizeof(ans));
osti_memset(final_ans, 0, sizeof(final_ans));
osti_memset(max, 0, sizeof(max));
max[0] = 0x10; // max =0x1000 0000
// adding from LSB to MSB
for(i=3;i>=0;i--)
{
ans[i] = (bit16)(a[i] + b[i]);
if (i != 3)
{
ans[i] = (bit16)(ans[i] + ((ans[i+1] & 0xFF00) >> 8));
}
}
/*
filling in the final answer
*/
final_ans[0] = (bit8)(((ans[0] & 0xFF00) >> 8));
final_ans[1] = (bit8)(ans[0] & 0xFF);
for(i=2;i<=4;i++)
{
final_ans[i] = (bit8)(ans[i-1] & 0xFF);
}
//compare final_ans to max
if (final_ans[0] != 0)
{
TI_DBG5(("satAddNComparebit32: yes bigger and out of range\n"));
return agTRUE;
}
for(i=1;i<=4;i++)
{
if (final_ans[i] > max[i-1])
{
TI_DBG5(("satAddNComparebit32: yes at %d\n", i));
return agTRUE;
}
else if (final_ans[i] < max[i-1])
{
TI_DBG5(("satAddNComparebit32: no at %d\n", i));
return agFALSE;
}
else
{
continue;
}
}
return agFALSE;
}
/*****************************************************************************/
/*! \brief satCompareLBALimitbit.
*
* This function compares an 8-byte big-endian LBA against
* SAT_TR_LBA_LIMIT - 1.
*
* \param lba: lba
*
* \return
* - \e TRUE if (lba > SAT_TR_LBA_LIMIT - 1)
* - \e FALSE otherwise
* \note: lba must be 8 bytes long
*/
/*****************************************************************************/
/*
input: bit8 lba[8]
if (lba > SAT_TR_LBA_LIMIT - 1)
then returns true
else returns false
(LBA,TL)
*/
bit32 satCompareLBALimitbit(bit8 *lba)
{
bit32 i;
bit8 limit[8];
/* limit is 0xF FF FF FF = 2^28 - 1 */
limit[0] = 0x0; /* MSB */
limit[1] = 0x0;
limit[2] = 0x0;
limit[3] = 0x0;
limit[4] = 0xF;
limit[5] = 0xFF;
limit[6] = 0xFF;
limit[7] = 0xFF; /* LSB */
//compare lba to limit
for(i=0;i<8;i++)
{
if (lba[i] > limit[i])
{
TI_DBG5(("satCompareLBALimitbit64: yes at %d\n", i));
return agTRUE;
}
else if (lba[i] < limit[i])
{
TI_DBG5(("satCompareLBALimitbit64: no at %d\n", i));
return agFALSE;
}
else
{
continue;
}
}
return agFALSE;
}
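/*
Worked example (illustrative values): lba = 00 00 00 00 10 00 00 00
(2^28) exceeds the limit in byte 4 (0x10 > 0x0F), so agTRUE is returned;
an LBA of 0x0FFFFFFF or less never exceeds the limit in any byte and
returns agFALSE.
*/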
/*****************************************************************************
*! \brief
* Purpose: bitwise set
*
* Parameters:
* data - input output buffer
* index - bit to set
*
* Return:
* none
*
*****************************************************************************/
GLOBAL void
satBitSet(bit8 *data, bit32 index)
{
data[index/8] |= (1 << (index%8));
}
/*****************************************************************************
*! \brief
* Purpose: bitwise clear
*
* Parameters:
* data - input output buffer
* index - bit to clear
*
* Return:
* none
*
*****************************************************************************/
GLOBAL void
satBitClear(bit8 *data, bit32 index)
{
data[index/8] &= ~(1 << (index%8));
}
/*****************************************************************************
*! \brief
* Purpose: bitwise test
*
* Parameters:
* data - input output buffer
* index - bit to test
*
* Return:
* 0 - not set
* 1 - set
*
*****************************************************************************/
GLOBAL agBOOLEAN
satBitTest(bit8 *data, bit32 index)
{
return ( (BOOLEAN)((data[index/8] & (1 << (index%8)) ) ? 1: 0));
}
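#if 0
/*
Usage sketch for the three bit helpers above (illustrative only; this
function is not part of the driver and is never called): maintain a
32-entry tag bitmap in a single bit32 word, as satTagAlloc() below does
with freeSATAFDMATagBitmap.
*/
static void satBitOpsExample(void)
{
bit32 bitmap = 0;
satBitSet((bit8 *)&bitmap, 5); /* mark tag 5 busy */
if (satBitTest((bit8 *)&bitmap, 5)) /* evaluates to 1 */
{
satBitClear((bit8 *)&bitmap, 5); /* free tag 5 again */
}
}
#endif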
/******************************************************************************/
/*! \brief allocate an available SATA tag
*
* allocate an available SATA tag
*
* \param tiRoot Pointer to TISA initiator driver/port instance.
* \param pSatDevData
* \param pTag
*
* \return -Success or fail-
*/
/*******************************************************************************/
GLOBAL bit32 satTagAlloc(
tiRoot_t *tiRoot,
satDeviceData_t *pSatDevData,
bit8 *pTag
)
{
bit32 retCode = agFALSE;
bit32 i;
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
for ( i = 0; i < pSatDevData->satNCQMaxIO; i ++ )
{
if ( 0 == satBitTest((bit8 *)&pSatDevData->freeSATAFDMATagBitmap, i) )
{
satBitSet((bit8*)&pSatDevData->freeSATAFDMATagBitmap, i);
*pTag = (bit8) i;
retCode = agTRUE;
break;
}
}
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
return retCode;
}
/******************************************************************************/
/*! \brief release a SATA tag
*
* release an allocated SATA tag
*
* \param tiRoot Pointer to TISA initiator driver/port instance.
* \param pSatDevData
* \param Tag
*
* \return -Success or fail-
*/
/*******************************************************************************/
GLOBAL bit32 satTagRelease(
tiRoot_t *tiRoot,
satDeviceData_t *pSatDevData,
bit8 tag
)
{
bit32 retCode = agFALSE;
tdsaSingleThreadedEnter(tiRoot, TD_SATA_LOCK);
if ( tag < pSatDevData->satNCQMaxIO )
{
satBitClear( (bit8 *)&pSatDevData->freeSATAFDMATagBitmap, (bit32)tag);
retCode = agTRUE;
}
tdsaSingleThreadedLeave(tiRoot, TD_SATA_LOCK);
return retCode;
}
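#if 0
/*
Usage sketch for satTagAlloc()/satTagRelease() (illustrative only; this
function is not part of the driver): allocate an NCQ tag under the SATA
lock, use it for an FPDMA command, then release it on completion. tiRoot
and pSatDevData are assumed to come from the caller's context.
*/
static void satTagUsageExample(tiRoot_t *tiRoot, satDeviceData_t *pSatDevData)
{
bit8 tag;
if (satTagAlloc(tiRoot, pSatDevData, &tag) == agTRUE)
{
/* ... build and start an FPDMA command carrying "tag" ... */
satTagRelease(tiRoot, pSatDevData, tag);
}
}
#endif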
/*****************************************************************************
*! \brief satSubTM
*
* This routine is called to initiate a TM request to SATL.
* This routine is independent of HW/LL API.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param task: SAM-3 task management request.
* \param lun: Pointer to LUN.
* \param taskTag: Pointer to the associated task where the TM
* command is to be applied.
* \param currentTaskTag: Pointer to tag/context for this TM request.
* \param NotifyOS flag determines whether notify OS layer or not
*
* \return:
*
* \e tiSuccess: I/O request successfully initiated.
* \e tiBusy: No resources available, try again later.
* \e tiIONoDevice: Invalid device handle.
* \e tiError: Other errors that prevent the I/O request from being started.
*
* \note:
* This function is triggered bottom up. Not yet in use.
*****************************************************************************/
/* called for bottom up */
osGLOBAL bit32 satSubTM(
tiRoot_t *tiRoot,
tiDeviceHandle_t *tiDeviceHandle,
bit32 task,
tiLUN_t *lun,
tiIORequest_t *taskTag,
tiIORequest_t *currentTaskTag,
bit32 NotifyOS
)
{
void *osMemHandle;
tdIORequestBody_t *TMtdIORequestBody;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
agsaIORequest_t *agIORequest = agNULL;
TI_DBG6(("satSubTM: start\n"));
/* allocate a tdIORequestBody and pass it to satTM() */
memAllocStatus = ostiAllocMemory(
tiRoot,
&osMemHandle,
(void **)&TMtdIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(tdIORequestBody_t),
agTRUE
);
if (memAllocStatus != tiSuccess)
{
TI_DBG1(("satSubTM: ostiAllocMemory failed... \n"));
return tiError;
}
if (TMtdIORequestBody == agNULL)
{
TI_DBG1(("satSubTM: ostiAllocMemory returned NULL TMIORequestBody\n"));
return tiError;
}
/* setup task management structure */
TMtdIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
TMtdIORequestBody->IOType.InitiatorTMIO.CurrentTaskTag = agNULL;
TMtdIORequestBody->IOType.InitiatorTMIO.TaskTag = agNULL;
/* initialize tiDevhandle */
TMtdIORequestBody->tiDevHandle = tiDeviceHandle;
/* initialize tiIORequest */
TMtdIORequestBody->tiIORequest = agNULL;
/* initialize agIORequest */
agIORequest = &(TMtdIORequestBody->agIORequest);
agIORequest->osData = (void *) TMtdIORequestBody;
agIORequest->sdkData = agNULL; /* SA takes care of this */
satTM(tiRoot,
tiDeviceHandle,
task, /* TD_INTERNAL_TM_RESET */
agNULL,
agNULL,
agNULL,
TMtdIORequestBody,
agFALSE);
return tiSuccess;
}
/*****************************************************************************/
/*! \brief SAT implementation for satStartResetDevice.
*
* SAT implementation for sending SRST and sending the FIS request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
* \note : triggered by OS layer or bottom up
*/
/*****************************************************************************/
/* OS triggered or bottom up */
GLOBAL bit32
satStartResetDevice(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* currentTaskTag */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest, /* should be NULL */
satIOContext_t *satIOContext
)
{
satInternalIo_t *satIntIo = agNULL;
satDeviceData_t *satDevData = agNULL;
satIOContext_t *satNewIOContext;
bit32 status;
tiIORequest_t *currentTaskTag = agNULL;
TI_DBG1(("satStartResetDevice: start\n"));
currentTaskTag = tiIORequest;
satDevData = satIOContext->pSatDevData;
TI_DBG6(("satStartResetDevice: before alloc\n"));
/* allocate any FIS for setting the SRST bit in device control */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest,
satDevData,
0,
satIntIo);
TI_DBG6(("satStartResetDevice: before after\n"));
if (satIntIo == agNULL)
{
TI_DBG1(("satStartResetDevice: can't alloacate\n"));
if (satIOContext->NotifyOS)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
currentTaskTag );
}
return tiError;
}
satNewIOContext = satPrepareNewIO(satIntIo,
tiIORequest,
satDevData,
agNULL,
satIOContext);
TI_DBG6(("satStartResetDevice: OS satIOContext %p \n", satIOContext));
TI_DBG6(("satStartResetDevice: TD satNewIOContext %p \n", satNewIOContext));
TI_DBG6(("satStartResetDevice: OS tiScsiXchg %p \n", satIOContext->tiScsiXchg));
TI_DBG6(("satStartResetDevice: TD tiScsiXchg %p \n", satNewIOContext->tiScsiXchg));
TI_DBG6(("satStartResetDevice: satNewIOContext %p \n", satNewIOContext));
if (satDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
status = satDeviceReset(tiRoot,
&satIntIo->satIntTiIORequest, /* New tiIORequest */
tiDeviceHandle,
satNewIOContext->tiScsiXchg, /* New tiScsiInitiatorRequest_t *tiScsiRequest, */
satNewIOContext);
}
else
{
status = satResetDevice(tiRoot,
&satIntIo->satIntTiIORequest, /* New tiIORequest */
tiDeviceHandle,
satNewIOContext->tiScsiXchg, /* New tiScsiInitiatorRequest_t *tiScsiRequest, */
satNewIOContext);
}
if (status != tiSuccess)
{
TI_DBG1(("satStartResetDevice: failed in sending\n"));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
if (satIOContext->NotifyOS)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
currentTaskTag );
}
return tiError;
}
TI_DBG6(("satStartResetDevice: end\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for satResetDevice.
*
* SAT implementation for building an SRST FIS and sending the request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
/*
create any fis and set SRST bit in device control
*/
GLOBAL bit32
satResetDevice(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
tdIORequestBody_t *tdIORequestBody;
satInternalIo_t *satIntIoContext;
#endif
fis = satIOContext->pFis;
TI_DBG2(("satResetDevice: start\n"));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
tdIORequestBody = satIntIoContext->satIntRequestBody;
TI_DBG5(("satResetDevice: satIOContext %p tdIORequestBody %p\n", satIOContext, tdIORequestBody));
#endif
/* any fis should work */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0; /* C Bit is not set */
fis->h.command = 0; /* any command */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0x4; /* SRST bit is set */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_SRST_ASSERT;
satIOContext->satCompleteCB = &satResetDeviceCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef TD_INTERNAL_DEBUG
tdhexdump("satResetDevice", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
tdhexdump("satResetDevice LL", (bit8 *)&(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG6(("satResetDevice: end status %d\n", status));
return status;
}
/*****************************************************************************
*! \brief satResetDeviceCB
*
* This routine is a callback function called from ossaSATACompleted().
* This CB routine deals with SRST assert completion and then sends the SRST deassert.
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param agIORequest: Pointer to the LL I/O request context for this I/O.
* \param agIOStatus: Status of completed I/O.
* \param agFirstDword:Pointer to the four bytes of FIS.
* \param agIOInfoLen: Length in bytes of overrun/underrun residual or FIS
* length.
* \param agFrameHandle: Additional info based on status.
* \param ioContext: Pointer to satIOContext_t.
*
* \return: none
*
*****************************************************************************/
GLOBAL void satResetDeviceCB(
agsaRoot_t *agRoot,
agsaIORequest_t *agIORequest,
bit32 agIOStatus,
agsaFisHeader_t *agFirstDword,
bit32 agIOInfoLen,
agsaFrameHandle_t agFrameHandle,
void *ioContext
)
{
/* callback for satResetDevice */
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *tdOrgIORequestBody;
satIOContext_t *satIOContext;
satIOContext_t *satOrgIOContext;
satIOContext_t *satNewIOContext;
satInternalIo_t *satIntIo;
satInternalIo_t *satNewIntIo = agNULL;
satDeviceData_t *satDevData;
tiIORequest_t *tiOrgIORequest;
#ifdef TD_DEBUG_ENABLE
bit32 ataStatus = 0;
bit32 ataError;
agsaFisPioSetupHeader_t *satPIOSetupHeader = agNULL;
#endif
bit32 status;
TI_DBG1(("satResetDeviceCB: start\n"));
TI_DBG6(("satResetDeviceCB: agIORequest=%p agIOStatus=0x%x agIOInfoLen %d\n", agIORequest, agIOStatus, agIOInfoLen));
tdIORequestBody = (tdIORequestBody_t *)agIORequest->osData;
satIOContext = (satIOContext_t *) ioContext;
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
if (satIntIo == agNULL)
{
TI_DBG6(("satResetDeviceCB: External, OS generated\n"));
satOrgIOContext = satIOContext;
tiOrgIORequest = tdIORequestBody->tiIORequest;
}
else
{
TI_DBG6(("satResetDeviceCB: Internal, TD generated\n"));
satOrgIOContext = satIOContext->satOrgIOContext;
if (satOrgIOContext == agNULL)
{
TI_DBG6(("satResetDeviceCB: satOrgIOContext is NULL, wrong\n"));
return;
}
else
{
TI_DBG6(("satResetDeviceCB: satOrgIOContext is NOT NULL\n"));
}
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
tiOrgIORequest = (tiIORequest_t *)tdOrgIORequestBody->tiIORequest;
}
tdIORequestBody->ioCompleted = agTRUE;
tdIORequestBody->ioStarted = agFALSE;
if (agFirstDword == agNULL && agIOStatus != OSSA_IO_SUCCESS)
{
TI_DBG1(("satResetDeviceCB: wrong. agFirstDword is NULL when error, status %d\n", agIOStatus));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
if (agIOStatus == OSSA_IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_ZONE_VIOLATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BREAK ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BAD_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_WRONG_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_UNKNOWN_ERROR ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY
)
{
TI_DBG1(("satResetDeviceCB: OSSA_IO_OPEN_CNX_ERROR\n"));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
if (agIOStatus != OSSA_IO_SUCCESS)
{
#ifdef TD_DEBUG_ENABLE
/* only agsaFisPioSetup_t is expected */
satPIOSetupHeader = (agsaFisPioSetupHeader_t *)&(agFirstDword->PioSetup);
ataStatus = satPIOSetupHeader->status; /* ATA Status register */
ataError = satPIOSetupHeader->error; /* ATA Error register */
#endif
TI_DBG1(("satResetDeviceCB: ataStatus 0x%x ataError 0x%x\n", ataStatus, ataError));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
/* success */
satNewIntIo = satAllocIntIoResource( tiRoot,
tiOrgIORequest,
satDevData,
0,
satNewIntIo);
if (satNewIntIo == agNULL)
{
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
/* memory allocation failure */
satFreeIntIoResource( tiRoot,
satDevData,
satNewIntIo);
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
TI_DBG1(("satResetDeviceCB: momory allocation fails\n"));
return;
} /* end of memory allocation failure */
/*
* Need to initialize all the fields within satIOContext
*/
satNewIOContext = satPrepareNewIO(
satNewIntIo,
tiOrgIORequest,
satDevData,
agNULL,
satOrgIOContext
);
/* send AGSA_SATA_PROTOCOL_SRST_DEASSERT */
status = satDeResetDevice(tiRoot,
tiOrgIORequest,
satOrgIOContext->ptiDeviceHandle,
agNULL,
satNewIOContext
);
if (status != tiSuccess)
{
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
/* sending AGSA_SATA_PROTOCOL_SRST_DEASSERT fails */
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satNewIntIo);
return;
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
TI_DBG5(("satResetDeviceCB: device %p pending IO %d\n", satDevData, satDevData->satPendingIO));
TI_DBG6(("satResetDeviceCB: end\n"));
return;
}
/*****************************************************************************/
/*! \brief SAT implementation for satDeResetDevice.
*
* SAT implementation for building an SRST deassert FIS and sending the request to the LL layer.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satDeResetDevice(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
tdIORequestBody_t *tdIORequestBody;
satInternalIo_t *satIntIoContext;
#endif
fis = satIOContext->pFis;
TI_DBG6(("satDeResetDevice: start\n"));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
tdIORequestBody = satIntIoContext->satIntRequestBody;
TI_DBG5(("satDeResetDevice: satIOContext %p tdIORequestBody %p\n", satIOContext, tdIORequestBody));
#endif
/* any fis should work */
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0; /* C Bit is not set */
fis->h.command = 0; /* any command */
fis->h.features = 0; /* FIS reserve */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* SRST bit is not set */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_SRST_DEASSERT;
satIOContext->satCompleteCB = &satDeResetDeviceCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef TD_INTERNAL_DEBUG
tdhexdump("satDeResetDevice", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
tdhexdump("satDeResetDevice LL", (bit8 *)&(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG6(("satDeResetDevice: end status %d\n", status));
return status;
}
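/*
Sequence sketch (as implemented in this file): satStartResetDevice()
sends the SRST assert FIS via satResetDevice(); its completion callback
satResetDeviceCB() then sends this SRST deassert FIS, and
satDeResetDeviceCB() finishes the task management, aborting the target
IO for ABORT TASK and notifying the OS layer when NotifyOS is set.
*/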
/*****************************************************************************
*! \brief satDeResetDeviceCB
*
* This routine is a callback function called from ossaSATACompleted().
* This CB routine deals with SRST deassert completion.
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param agIORequest: Pointer to the LL I/O request context for this I/O.
* \param agIOStatus: Status of completed I/O.
* \param agFirstDword:Pointer to the four bytes of FIS.
* \param agIOInfoLen: Length in bytes of overrun/underrun residual or FIS
* length.
* \param agFrameHandle: Additional info based on status.
* \param ioContext: Pointer to satIOContext_t.
*
* \return: none
*
*****************************************************************************/
GLOBAL void satDeResetDeviceCB(
agsaRoot_t *agRoot,
agsaIORequest_t *agIORequest,
bit32 agIOStatus,
agsaFisHeader_t *agFirstDword,
bit32 agIOInfoLen,
agsaFrameHandle_t agFrameHandle,
void *ioContext
)
{
/* callback for satDeResetDevice */
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *tdOrgIORequestBody = agNULL;
satIOContext_t *satIOContext;
satIOContext_t *satOrgIOContext;
satInternalIo_t *satIntIo;
satDeviceData_t *satDevData;
tiIORequest_t *tiOrgIORequest;
#ifdef TD_DEBUG_ENABLE
bit32 ataStatus = 0;
bit32 ataError;
agsaFisPioSetupHeader_t *satPIOSetupHeader = agNULL;
#endif
bit32 report = agFALSE;
bit32 AbortTM = agFALSE;
TI_DBG1(("satDeResetDeviceCB: start\n"));
TI_DBG6(("satDeResetDeviceCB: agIORequest=%p agIOStatus=0x%x agIOInfoLen %d\n", agIORequest, agIOStatus, agIOInfoLen));
tdIORequestBody = (tdIORequestBody_t *)agIORequest->osData;
satIOContext = (satIOContext_t *) ioContext;
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
if (satIntIo == agNULL)
{
TI_DBG6(("satDeResetDeviceCB: External, OS generated\n"));
satOrgIOContext = satIOContext;
tiOrgIORequest = tdIORequestBody->tiIORequest;
}
else
{
TI_DBG6(("satDeResetDeviceCB: Internal, TD generated\n"));
satOrgIOContext = satIOContext->satOrgIOContext;
if (satOrgIOContext == agNULL)
{
TI_DBG6(("satDeResetDeviceCB: satOrgIOContext is NULL, wrong\n"));
return;
}
else
{
TI_DBG6(("satDeResetDeviceCB: satOrgIOContext is NOT NULL\n"));
}
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
tiOrgIORequest = (tiIORequest_t *)tdOrgIORequestBody->tiIORequest;
}
tdIORequestBody->ioCompleted = agTRUE;
tdIORequestBody->ioStarted = agFALSE;
if (agFirstDword == agNULL && agIOStatus != OSSA_IO_SUCCESS)
{
TI_DBG1(("satDeResetDeviceCB: wrong. agFirstDword is NULL when error, status %d\n", agIOStatus));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
if (agIOStatus == OSSA_IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_ZONE_VIOLATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BREAK ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BAD_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_WRONG_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_UNKNOWN_ERROR ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY
)
{
TI_DBG1(("satDeResetDeviceCB: OSSA_IO_OPEN_CNX_ERROR\n"));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
if (agIOStatus != OSSA_IO_SUCCESS)
{
#ifdef TD_DEBUG_ENABLE
/* only agsaFisPioSetup_t is expected */
satPIOSetupHeader = (agsaFisPioSetupHeader_t *)&(agFirstDword->PioSetup);
ataStatus = satPIOSetupHeader->status; /* ATA Status register */
ataError = satPIOSetupHeader->error; /* ATA Error register */
#endif
TI_DBG1(("satDeResetDeviceCB: ataStatus 0x%x ataError 0x%x\n", ataStatus, ataError));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
/* success */
TI_DBG1(("satDeResetDeviceCB: success \n"));
TI_DBG1(("satDeResetDeviceCB: TMF %d\n", satOrgIOContext->TMF));
if (satOrgIOContext->TMF == AG_ABORT_TASK)
{
AbortTM = agTRUE;
}
if (satOrgIOContext->NotifyOS == agTRUE)
{
report = agTRUE;
}
if (AbortTM == agTRUE)
{
TI_DBG1(("satDeResetDeviceCB: calling satAbort\n"));
satAbort(agRoot, satOrgIOContext->satToBeAbortedIOContext);
}
satDevData->satTmTaskTag = agNULL;
satDevData->satDriveState = SAT_DEV_STATE_NORMAL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
TI_DBG1(("satDeResetDeviceCB: satPendingIO %d satNCQMaxIO %d\n", satDevData->satPendingIO, satDevData->satNCQMaxIO ));
TI_DBG1(("satDeResetDeviceCB: satPendingNCQIO %d satPendingNONNCQIO %d\n", satDevData->satPendingNCQIO, satDevData->satPendingNONNCQIO));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
if (tdOrgIORequestBody != agNULL)
{
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
}
else
{
TI_DBG1(("satDeResetDeviceCB: tdOrgIORequestBody is NULL, wrong\n"));
}
if (report)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMOK,
tiOrgIORequest );
}
TI_DBG5(("satDeResetDeviceCB: device %p pending IO %d\n", satDevData, satDevData->satPendingIO));
TI_DBG6(("satDeResetDeviceCB: end\n"));
return;
}
/*****************************************************************************/
/*! \brief SAT implementation for satStartCheckPowerMode.
*
* SAT implementation of abort task management for a non-NCQ SATA disk.
* This function sends CHECK POWER MODE
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satStartCheckPowerMode(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest, /* NULL */
satIOContext_t *satIOContext
)
{
satInternalIo_t *satIntIo = agNULL;
satDeviceData_t *satDevData = agNULL;
satIOContext_t *satNewIOContext;
bit32 status;
tiIORequest_t *currentTaskTag = agNULL;
TI_DBG6(("satStartCheckPowerMode: start\n"));
currentTaskTag = tiIORequest;
satDevData = satIOContext->pSatDevData;
TI_DBG6(("satStartCheckPowerMode: before alloc\n"));
/* allocate any FIS for the CHECK POWER MODE command */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest,
satDevData,
0,
satIntIo);
TI_DBG6(("satStartCheckPowerMode: before after\n"));
if (satIntIo == agNULL)
{
TI_DBG1(("satStartCheckPowerMode: can't alloacate\n"));
if (satIOContext->NotifyOS)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
currentTaskTag );
}
return tiError;
}
satNewIOContext = satPrepareNewIO(satIntIo,
tiIORequest,
satDevData,
agNULL,
satIOContext);
TI_DBG6(("satStartCheckPowerMode: OS satIOContext %p \n", satIOContext));
TI_DBG6(("satStartCheckPowerMode: TD satNewIOContext %p \n", satNewIOContext));
TI_DBG6(("satStartCheckPowerMode: OS tiScsiXchg %p \n", satIOContext->tiScsiXchg));
TI_DBG6(("satStartCheckPowerMode: TD tiScsiXchg %p \n", satNewIOContext->tiScsiXchg));
TI_DBG1(("satStartCheckPowerMode: satNewIOContext %p \n", satNewIOContext));
status = satCheckPowerMode(tiRoot,
&satIntIo->satIntTiIORequest, /* New tiIORequest */
tiDeviceHandle,
satNewIOContext->tiScsiXchg, /* New tiScsiInitiatorRequest_t *tiScsiRequest, */
satNewIOContext);
if (status != tiSuccess)
{
TI_DBG1(("satStartCheckPowerMode: failed in sending\n"));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
if (satIOContext->NotifyOS)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
currentTaskTag );
}
return tiError;
}
TI_DBG6(("satStartCheckPowerMode: end\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for satCheckPowerMode.
*
* This function creates CHECK POWER MODE fis and sends the request to LL layer
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satCheckPowerMode(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
/*
sends SAT_CHECK_POWER_MODE as a part of ABORT TASK MANAGEMENT for NCQ commands
internally generated - no directly corresponding scsi
*/
bit32 status;
bit32 agRequestType;
agsaFisRegHostToDevice_t *fis;
fis = satIOContext->pFis;
TI_DBG5(("satCheckPowerMode: start\n"));
/*
* Send the ATA CHECK POWER MODE command.
*/
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
fis->h.command = SAT_CHECK_POWER_MODE; /* 0xE5 */
fis->h.features = 0;
fis->d.lbaLow = 0;
fis->d.lbaMid = 0;
fis->d.lbaHigh = 0;
fis->d.device = 0;
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0;
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_NON_DATA;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satCheckPowerModeCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG5(("satCheckPowerMode: return\n"));
return status;
}
/*****************************************************************************
*! \brief satCheckPowerModeCB
*
* This routine is a callback function called from ossaSATACompleted().
* This CB routine deals with CHECK POWER MODE completion as abort task
* management.
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param agIORequest: Pointer to the LL I/O request context for this I/O.
* \param agIOStatus: Status of completed I/O.
* \param agFirstDword:Pointer to the four bytes of FIS.
* \param agIOInfoLen: Length in bytes of overrun/underrun residual or FIS
* length.
* \param agFrameHandle: Additional info based on status.
* \param ioContext: Pointer to satIOContext_t.
*
* \return: none
*
*****************************************************************************/
GLOBAL void satCheckPowerModeCB(
agsaRoot_t *agRoot,
agsaIORequest_t *agIORequest,
bit32 agIOStatus,
agsaFisHeader_t *agFirstDword,
bit32 agIOInfoLen,
agsaFrameHandle_t agFrameHandle,
void *ioContext
)
{
/* callback for satCheckPowerMode */
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *tdOrgIORequestBody = agNULL;
satIOContext_t *satIOContext;
satIOContext_t *satOrgIOContext;
satInternalIo_t *satIntIo;
satDeviceData_t *satDevData;
tiIORequest_t *tiOrgIORequest;
#ifdef TD_DEBUG_ENABLE
bit32 ataStatus = 0;
bit32 ataError;
agsaFisPioSetupHeader_t *satPIOSetupHeader = agNULL;
#endif
bit32 report = agFALSE;
bit32 AbortTM = agFALSE;
TI_DBG1(("satCheckPowerModeCB: start\n"));
TI_DBG1(("satCheckPowerModeCB: agIORequest=%p agIOStatus=0x%x agIOInfoLen %d\n", agIORequest, agIOStatus, agIOInfoLen));
tdIORequestBody = (tdIORequestBody_t *)agIORequest->osData;
satIOContext = (satIOContext_t *) ioContext;
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
if (satIntIo == agNULL)
{
TI_DBG6(("satCheckPowerModeCB: External, OS generated\n"));
satOrgIOContext = satIOContext;
tiOrgIORequest = tdIORequestBody->tiIORequest;
}
else
{
TI_DBG6(("satCheckPowerModeCB: Internal, TD generated\n"));
satOrgIOContext = satIOContext->satOrgIOContext;
if (satOrgIOContext == agNULL)
{
TI_DBG6(("satCheckPowerModeCB: satOrgIOContext is NULL, wrong\n"));
return;
}
else
{
TI_DBG6(("satCheckPowerModeCB: satOrgIOContext is NOT NULL\n"));
}
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
tiOrgIORequest = (tiIORequest_t *)tdOrgIORequestBody->tiIORequest;
}
tdIORequestBody->ioCompleted = agTRUE;
tdIORequestBody->ioStarted = agFALSE;
if (agFirstDword == agNULL && agIOStatus != OSSA_IO_SUCCESS)
{
TI_DBG1(("satCheckPowerModeCB: wrong. agFirstDword is NULL when error, status %d\n", agIOStatus));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
if (agIOStatus == OSSA_IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_ZONE_VIOLATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BREAK ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BAD_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_WRONG_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_UNKNOWN_ERROR ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY
)
{
TI_DBG1(("satCheckPowerModeCB: OSSA_IO_OPEN_CNX_ERROR\n"));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
if (agIOStatus != OSSA_IO_SUCCESS)
{
#ifdef TD_DEBUG_ENABLE
/* only agsaFisPioSetup_t is expected */
satPIOSetupHeader = (agsaFisPioSetupHeader_t *)&(agFirstDword->PioSetup);
ataStatus = satPIOSetupHeader->status; /* ATA Status register */
ataError = satPIOSetupHeader->error; /* ATA Error register */
#endif
TI_DBG1(("satCheckPowerModeCB: ataStatus 0x%x ataError 0x%x\n", ataStatus, ataError));
if (satOrgIOContext->NotifyOS == agTRUE)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMFailed,
tiOrgIORequest );
}
satDevData->satTmTaskTag = agNULL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return;
}
/* success */
TI_DBG1(("satCheckPowerModeCB: success\n"));
TI_DBG1(("satCheckPowerModeCB: TMF %d\n", satOrgIOContext->TMF));
if (satOrgIOContext->TMF == AG_ABORT_TASK)
{
AbortTM = agTRUE;
}
if (satOrgIOContext->NotifyOS == agTRUE)
{
report = agTRUE;
}
if (AbortTM == agTRUE)
{
TI_DBG1(("satCheckPowerModeCB: calling satAbort\n"));
satAbort(agRoot, satOrgIOContext->satToBeAbortedIOContext);
}
satDevData->satTmTaskTag = agNULL;
satDevData->satDriveState = SAT_DEV_STATE_NORMAL;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
TI_DBG1(("satCheckPowerModeCB: satPendingIO %d satNCQMaxIO %d\n", satDevData->satPendingIO, satDevData->satNCQMaxIO ));
TI_DBG1(("satCheckPowerModeCB: satPendingNCQIO %d satPendingNONNCQIO %d\n", satDevData->satPendingNCQIO, satDevData->satPendingNONNCQIO));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
if (tdOrgIORequestBody != agNULL)
{
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
}
else
{
TI_DBG1(("satCheckPowerModeCB: tdOrgIORequestBody is NULL, wrong\n"));
}
if (report)
{
ostiInitiatorEvent( tiRoot,
NULL,
NULL,
tiIntrEventTypeTaskManagement,
tiTMOK,
tiOrgIORequest );
}
TI_DBG5(("satCheckPowerModeCB: device %p pending IO %d\n", satDevData, satDevData->satPendingIO));
TI_DBG2(("satCheckPowerModeCB: end\n"));
return;
}
/*****************************************************************************/
/*! \brief SAT implementation for satAddSATAStartIDDev.
*
* This function sends an ATA IDENTIFY DEVICE data command to determine
* the uniqueness of the device.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satAddSATAStartIDDev(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest, // NULL
satIOContext_t *satIOContext
)
{
satInternalIo_t *satIntIo = agNULL;
satDeviceData_t *satDevData = agNULL;
tdIORequestBody_t *tdIORequestBody;
satIOContext_t *satNewIOContext;
bit32 status;
TI_DBG2(("satAddSATAStartIDDev: start\n"));
satDevData = satIOContext->pSatDevData;
TI_DBG2(("satAddSATAStartIDDev: before alloc\n"));
/* allocate identify device command */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest,
satDevData,
sizeof(agsaSATAIdentifyData_t), /* 512; size of identify device data */
satIntIo);
TI_DBG2(("satAddSATAStartIDDev: after alloc\n"));
if (satIntIo == agNULL)
{
TI_DBG1(("satAddSATAStartIDDev: can't alloacate\n"));
return tiError;
}
/* fill in fields */
satIntIo->satOrgTiIORequest = tiIORequest; /* changed */
tdIORequestBody = satIntIo->satIntRequestBody;
satNewIOContext = &(tdIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satIntIo->satIntTiScsiXchg.scsiCmnd);
satNewIOContext->pSense = &(tdIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pTiSenseData = &(tdIORequestBody->transport.SATA.tiSenseData);
satNewIOContext->tiRequestBody = satIntIo->satIntRequestBody; /* key fix */
satNewIOContext->interruptContext = tiInterruptContext;
satNewIOContext->satIntIoContext = satIntIo;
satNewIOContext->ptiDeviceHandle = agNULL;
satNewIOContext->satOrgIOContext = satIOContext; /* changed */
/* this is valid only for TD layer generated (not triggered by OS at all) IO */
satNewIOContext->tiScsiXchg = &(satIntIo->satIntTiScsiXchg);
TI_DBG6(("satAddSATAStartIDDev: OS satIOContext %p \n", satIOContext));
TI_DBG6(("satAddSATAStartIDDev: TD satNewIOContext %p \n", satNewIOContext));
TI_DBG6(("satAddSATAStartIDDev: OS tiScsiXchg %p \n", satIOContext->tiScsiXchg));
TI_DBG6(("satAddSATAStartIDDev: TD tiScsiXchg %p \n", satNewIOContext->tiScsiXchg));
TI_DBG2(("satAddSATAStartIDDev: satNewIOContext %p tdIORequestBody %p\n", satNewIOContext, tdIORequestBody));
status = satAddSATASendIDDev( tiRoot,
&satIntIo->satIntTiIORequest, /* New tiIORequest */
tiDeviceHandle,
satNewIOContext->tiScsiXchg, /* New tiScsiInitiatorRequest_t *tiScsiRequest, */
satNewIOContext);
if (status != tiSuccess)
{
TI_DBG1(("satAddSATAStartIDDev: failed in sending\n"));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return tiError;
}
TI_DBG6(("satAddSATAStartIDDev: end\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for satAddSATASendIDDev.
*
* This function creates identify device data fis and send it to LL
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext_t: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32 satAddSATASendIDDev(
tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
tdIORequestBody_t *tdIORequestBody;
satInternalIo_t *satIntIoContext;
#endif
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
TI_DBG2(("satAddSATASendIDDev: start\n"));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
tdIORequestBody = satIntIoContext->satIntRequestBody;
TI_DBG5(("satAddSATASendIDDev: satIOContext %p tdIORequestBody %p\n", satIOContext, tdIORequestBody));
#endif
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
fis->h.command = SAT_IDENTIFY_PACKET_DEVICE; /* 0xA1 */
else
fis->h.command = SAT_IDENTIFY_DEVICE; /* 0xEC */
fis->h.features = 0; /* FIS features (reserved) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
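/*
 * The fields above form a Register Host-to-Device FIS (fisType 0x27) with the
 * C bit set, i.e. a new command. IDENTIFY (PACKET) DEVICE returns a single
 * 512-byte block of identify data, which is why the PIO read protocol is
 * selected below.
 */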
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &satAddSATAIDDevCB;
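/* the LL layer invokes satAddSATAIDDevCB() asynchronously once the identify
   data transfer completes */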
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef TD_INTERNAL_DEBUG
tdhexdump("satAddSATASendIDDev", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
tdhexdump("satAddSATASendIDDev LL", (bit8 *)&(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG2(("satAddSATASendIDDev: end status %d\n", status));
return status;
}
/*****************************************************************************
*! \brief satAddSATAIDDevCB
*
* This routine is a callback function for satAddSATASendIDDev()
* Using the identify device data, this function determines whether the device
* is new or already known. If new, it is added to the device list.
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param agIORequest: Pointer to the LL I/O request context for this I/O.
* \param agIOStatus: Status of completed I/O.
* \param agFirstDword: Pointer to the first four bytes of the FIS.
* \param agIOInfoLen: Length in bytes of overrun/underrun residual or FIS
* length.
* \param agParam: Additional info based on status.
* \param ioContext: Pointer to satIOContext_t.
*
* \return: none
*
*****************************************************************************/
void satAddSATAIDDevCB(
agsaRoot_t *agRoot,
agsaIORequest_t *agIORequest,
bit32 agIOStatus,
agsaFisHeader_t *agFirstDword,
bit32 agIOInfoLen,
void *agParam,
void *ioContext
)
{
/*
Handles the completion of SAT_IDENTIFY_DEVICE issued in the process of Inquiry
*/
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *tdOrgIORequestBody;
satIOContext_t *satIOContext;
satIOContext_t *satOrgIOContext;
satIOContext_t *satNewIOContext;
satInternalIo_t *satIntIo;
satInternalIo_t *satNewIntIo = agNULL;
satDeviceData_t *satDevData;
tiIORequest_t *tiOrgIORequest = agNULL;
agsaSATAIdentifyData_t *pSATAIdData;
bit16 *tmpptr, tmpptr_tmp;
bit32 x;
tdsaDeviceData_t *NewOneDeviceData = agNULL;
tdsaDeviceData_t *oneDeviceData = agNULL;
tdList_t *DeviceListList;
int new_device = agTRUE;
bit8 PhyID;
void *sglVirtualAddr;
bit32 retry_status;
agsaContext_t *agContext;
tdsaPortContext_t *onePortContext;
bit32 status = 0;
TI_DBG2(("satAddSATAIDDevCB: start\n"));
TI_DBG6(("satAddSATAIDDevCB: agIORequest=%p agIOStatus=0x%x agIOInfoLen %d\n", agIORequest, agIOStatus, agIOInfoLen));
tdIORequestBody = (tdIORequestBody_t *)agIORequest->osData;
satIOContext = (satIOContext_t *) ioContext;
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
NewOneDeviceData = (tdsaDeviceData_t *)tdIORequestBody->tiDevHandle->tdData;
TI_DBG2(("satAddSATAIDDevCB: NewOneDeviceData %p did %d\n", NewOneDeviceData, NewOneDeviceData->id));
PhyID = NewOneDeviceData->phyID;
TI_DBG2(("satAddSATAIDDevCB: phyID %d\n", PhyID));
agContext = &(NewOneDeviceData->agDeviceResetContext);
agContext->osData = agNULL;
if (satIntIo == agNULL)
{
TI_DBG1(("satAddSATAIDDevCB: External, OS generated\n"));
TI_DBG1(("satAddSATAIDDevCB: Not possible case\n"));
satOrgIOContext = satIOContext;
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
tdsaAbortAll(tiRoot, agRoot, NewOneDeviceData);
/* put onedevicedata back to free list */
osti_memset(&(NewOneDeviceData->satDevData.satIdentifyData), 0xFF, sizeof(agsaSATAIdentifyData_t));
TDLIST_DEQUEUE_THIS(&(NewOneDeviceData->MainLink));
TDLIST_ENQUEUE_AT_TAIL(&(NewOneDeviceData->FreeLink), &(tdsaAllShared->FreeDeviceList));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
/* notifying link up */
ostiPortEvent (
tiRoot,
tiPortLinkUp,
tiSuccess,
(void *)tdsaAllShared->Ports[PhyID].tiPortalContext
);
#ifdef INITIATOR_DRIVER
/* triggers discovery */
ostiPortEvent(
tiRoot,
tiPortDiscoveryReady,
tiSuccess,
(void *) tdsaAllShared->Ports[PhyID].tiPortalContext
);
#endif
return;
}
else
{
TI_DBG1(("satAddSATAIDDevCB: Internal, TD generated\n"));
satOrgIOContext = satIOContext->satOrgIOContext;
if (satOrgIOContext == agNULL)
{
TI_DBG6(("satAddSATAIDDevCB: satOrgIOContext is NULL\n"));
return;
}
else
{
TI_DBG6(("satAddSATAIDDevCB: satOrgIOContext is NOT NULL\n"));
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
sglVirtualAddr = satIntIo->satIntTiScsiXchg.sglVirtualAddr;
}
}
tiOrgIORequest = tdIORequestBody->tiIORequest;
tdIORequestBody->ioCompleted = agTRUE;
tdIORequestBody->ioStarted = agFALSE;
TI_DBG2(("satAddSATAIDDevCB: satOrgIOContext->pid %d\n", satOrgIOContext->pid));
/* protect against double completion for old port */
if (satOrgIOContext->pid != tdsaAllShared->Ports[PhyID].portContext->id)
{
TI_DBG2(("satAddSATAIDDevCB: incorrect pid\n"));
TI_DBG2(("satAddSATAIDDevCB: satOrgIOContext->pid %d\n", satOrgIOContext->pid));
TI_DBG2(("satAddSATAIDDevCB: tiPortalContext pid %d\n", tdsaAllShared->Ports[PhyID].portContext->id));
tdsaAbortAll(tiRoot, agRoot, NewOneDeviceData);
/* put onedevicedata back to free list */
osti_memset(&(NewOneDeviceData->satDevData.satIdentifyData), 0xFF, sizeof(agsaSATAIdentifyData_t));
TDLIST_DEQUEUE_THIS(&(NewOneDeviceData->MainLink));
TDLIST_ENQUEUE_AT_TAIL(&(NewOneDeviceData->FreeLink), &(tdsaAllShared->FreeDeviceList));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
/* no notification to OS layer */
return;
}
/* completion after portcontext is invalidated */
onePortContext = NewOneDeviceData->tdPortContext;
if (onePortContext != agNULL)
{
if (onePortContext->valid == agFALSE)
{
TI_DBG1(("satAddSATAIDDevCB: portcontext is invalid\n"));
TI_DBG1(("satAddSATAIDDevCB: onePortContext->id pid %d\n", onePortContext->id));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
/* no notification to OS layer */
return;
}
}
else
{
TI_DBG1(("satAddSATAIDDevCB: onePortContext is NULL!!!\n"));
return;
}
if (agFirstDword == agNULL && agIOStatus != OSSA_IO_SUCCESS)
{
TI_DBG1(("satAddSATAIDDevCB: wrong. agFirstDword is NULL when error, status %d\n", agIOStatus));
if (tdsaAllShared->ResetInDiscovery != 0 && satDevData->ID_Retries < SATA_ID_DEVICE_DATA_RETRIES)
{
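/* roll back the pending-IO accounting for the failed attempt; the retry
   issued through sataLLIOStart() below is counted again */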
satDevData->satPendingNONNCQIO--;
satDevData->satPendingIO--;
retry_status = sataLLIOStart(tiRoot,
&satIntIo->satIntTiIORequest,
&(NewOneDeviceData->tiDeviceHandle),
satIOContext->tiScsiXchg,
satIOContext);
if (retry_status != tiSuccess)
{
/* simply give up */
satDevData->ID_Retries = 0;
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
return;
}
satDevData->ID_Retries++;
tdIORequestBody->ioCompleted = agFALSE;
tdIORequestBody->ioStarted = agTRUE;
return;
}
else
{
if (tdsaAllShared->ResetInDiscovery == 0)
{
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
}
else /* ResetInDiscovery is on */
{
/* RESET only once after ID retries */
if (satDevData->NumOfIDRetries <= 0)
{
satDevData->NumOfIDRetries++;
satDevData->ID_Retries = 0;
satAddSATAIDDevCBReset(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
/* send link reset */
saLocalPhyControl(agRoot,
agContext,
tdsaRotateQnumber(tiRoot, NewOneDeviceData),
PhyID,
AGSA_PHY_HARD_RESET,
agNULL);
}
else
{
satDevData->ID_Retries = 0;
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
}
}
return;
}
}
if (agIOStatus == OSSA_IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_ZONE_VIOLATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BREAK ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BAD_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_WRONG_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_UNKNOWN_ERROR ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY
)
{
TI_DBG1(("satAddSATAIDDevCB: OSSA_IO_OPEN_CNX_ERROR\n"));
if (tdsaAllShared->ResetInDiscovery != 0 && satDevData->ID_Retries < SATA_ID_DEVICE_DATA_RETRIES)
{
satDevData->satPendingNONNCQIO--;
satDevData->satPendingIO--;
retry_status = sataLLIOStart(tiRoot,
&satIntIo->satIntTiIORequest,
&(NewOneDeviceData->tiDeviceHandle),
satIOContext->tiScsiXchg,
satIOContext);
if (retry_status != tiSuccess)
{
/* simply give up */
satDevData->ID_Retries = 0;
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
return;
}
satDevData->ID_Retries++;
tdIORequestBody->ioCompleted = agFALSE;
tdIORequestBody->ioStarted = agTRUE;
return;
}
else
{
if (tdsaAllShared->ResetInDiscovery == 0)
{
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
}
else /* ResetInDiscovery is on */
{
/* RESET only once after ID retries */
if (satDevData->NumOfIDRetries <= 0)
{
satDevData->NumOfIDRetries++;
satDevData->ID_Retries = 0;
satAddSATAIDDevCBReset(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
/* send link reset */
saLocalPhyControl(agRoot,
agContext,
tdsaRotateQnumber(tiRoot, NewOneDeviceData),
PhyID,
AGSA_PHY_HARD_RESET,
agNULL);
}
else
{
satDevData->ID_Retries = 0;
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
}
}
return;
}
}
if ( agIOStatus != OSSA_IO_SUCCESS ||
(agIOStatus == OSSA_IO_SUCCESS && agFirstDword != agNULL && agIOInfoLen != 0)
)
{
if (tdsaAllShared->ResetInDiscovery != 0 && satDevData->ID_Retries < SATA_ID_DEVICE_DATA_RETRIES)
{
satIOContext->pSatDevData->satPendingNONNCQIO--;
satIOContext->pSatDevData->satPendingIO--;
retry_status = sataLLIOStart(tiRoot,
&satIntIo->satIntTiIORequest,
&(NewOneDeviceData->tiDeviceHandle),
satIOContext->tiScsiXchg,
satIOContext);
if (retry_status != tiSuccess)
{
/* simply give up */
satDevData->ID_Retries = 0;
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
return;
}
satDevData->ID_Retries++;
tdIORequestBody->ioCompleted = agFALSE;
tdIORequestBody->ioStarted = agTRUE;
return;
}
else
{
if (tdsaAllShared->ResetInDiscovery == 0)
{
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
}
else /* ResetInDiscovery is on */
{
/* RESET only once after ID retries */
if (satDevData->NumOfIDRetries <= 0)
{
satDevData->NumOfIDRetries++;
satDevData->ID_Retries = 0;
satAddSATAIDDevCBReset(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
/* send link reset */
saLocalPhyControl(agRoot,
agContext,
tdsaRotateQnumber(tiRoot, NewOneDeviceData),
PhyID,
AGSA_PHY_HARD_RESET,
agNULL);
}
else
{
satDevData->ID_Retries = 0;
satAddSATAIDDevCBCleanup(agRoot, NewOneDeviceData, satIOContext, tdOrgIORequestBody);
}
}
return;
}
}
/* success */
TI_DBG2(("satAddSATAIDDevCB: Success\n"));
/* Convert to host endian */
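/*
 * identify device data is an array of 256 little-endian 16-bit words; swap
 * each word in place so the string and capability fields read correctly
 * regardless of host byte order
 */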
tmpptr = (bit16*)sglVirtualAddr;
//tdhexdump("satAddSATAIDDevCB before", (bit8 *)sglVirtualAddr, sizeof(agsaSATAIdentifyData_t));
for (x=0; x < sizeof(agsaSATAIdentifyData_t)/sizeof(bit16); x++)
{
OSSA_READ_LE_16(AGROOT, &tmpptr_tmp, tmpptr, 0);
*tmpptr = tmpptr_tmp;
tmpptr++;
/* print tmpptr_tmp here for debugging purposes */
}
pSATAIdData = (agsaSATAIdentifyData_t *)sglVirtualAddr;
//tdhexdump("satAddSATAIDDevCB after", (bit8 *)pSATAIdData, sizeof(agsaSATAIdentifyData_t));
TI_DBG5(("satAddSATAIDDevCB: OS satOrgIOContext %p \n", satOrgIOContext));
TI_DBG5(("satAddSATAIDDevCB: TD satIOContext %p \n", satIOContext));
TI_DBG5(("satAddSATAIDDevCB: OS tiScsiXchg %p \n", satOrgIOContext->tiScsiXchg));
TI_DBG5(("satAddSATAIDDevCB: TD tiScsiXchg %p \n", satIOContext->tiScsiXchg));
/* compare identify device data to the existing list */
DeviceListList = tdsaAllShared->MainDeviceList.flink;
while (DeviceListList != &(tdsaAllShared->MainDeviceList))
{
oneDeviceData = TDLIST_OBJECT_BASE(tdsaDeviceData_t, MainLink, DeviceListList);
TI_DBG1(("satAddSATAIDDevCB: LOOP oneDeviceData %p did %d\n", oneDeviceData, oneDeviceData->id));
//tdhexdump("satAddSATAIDDevCB LOOP", (bit8 *)&oneDeviceData->satDevData.satIdentifyData, sizeof(agsaSATAIdentifyData_t));
/* the identify device data carries no single truly unique ID for a SATA device,
so compare the serial number, firmware version, and model number instead
*/
if ( oneDeviceData->DeviceType == TD_SATA_DEVICE &&
(osti_memcmp (oneDeviceData->satDevData.satIdentifyData.serialNumber,
pSATAIdData->serialNumber,
20) == 0) &&
(osti_memcmp (oneDeviceData->satDevData.satIdentifyData.firmwareVersion,
pSATAIdData->firmwareVersion,
8) == 0) &&
(osti_memcmp (oneDeviceData->satDevData.satIdentifyData.modelNumber,
pSATAIdData->modelNumber,
40) == 0)
)
{
TI_DBG2(("satAddSATAIDDevCB: did %d\n", oneDeviceData->id));
new_device = agFALSE;
break;
}
DeviceListList = DeviceListList->flink;
}
if (new_device == agFALSE)
{
TI_DBG2(("satAddSATAIDDevCB: old device data\n"));
oneDeviceData->valid = agTRUE;
oneDeviceData->valid2 = agTRUE;
/* save data field from new device data */
oneDeviceData->agRoot = agRoot;
oneDeviceData->agDevHandle = NewOneDeviceData->agDevHandle;
oneDeviceData->agDevHandle->osData = oneDeviceData; /* TD layer */
oneDeviceData->tdPortContext = NewOneDeviceData->tdPortContext;
oneDeviceData->phyID = NewOneDeviceData->phyID;
/*
one SATA directly attached device per phy;
Therefore, deregister then register
*/
saDeregisterDeviceHandle(agRoot, agNULL, NewOneDeviceData->agDevHandle, 0);
if (oneDeviceData->registered == agFALSE)
{
TI_DBG2(("satAddSATAIDDevCB: re-registering old device data\n"));
/* already has old information; just register it again */
saRegisterNewDevice( /* satAddSATAIDDevCB */
agRoot,
&oneDeviceData->agContext,
tdsaRotateQnumber(tiRoot, oneDeviceData),
&oneDeviceData->agDeviceInfo,
oneDeviceData->tdPortContext->agPortContext,
0
);
}
// tdsaAbortAll(tiRoot, agRoot, NewOneDeviceData);
/* put onedevicedata back to free list */
osti_memset(&(NewOneDeviceData->satDevData.satIdentifyData), 0xFF, sizeof(agsaSATAIdentifyData_t));
TDLIST_DEQUEUE_THIS(&(NewOneDeviceData->MainLink));
TDLIST_ENQUEUE_AT_TAIL(&(NewOneDeviceData->FreeLink), &(tdsaAllShared->FreeDeviceList));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
if (satDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
/* send the Set Features ATA command to the ATAPI device to enable the PIO and DMA transfer modes */
satNewIntIo = satAllocIntIoResource( tiRoot,
tiOrgIORequest,
satDevData,
0,
satNewIntIo);
if (satNewIntIo == agNULL)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: momory allocation fails\n"));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
} /* end memory allocation */
satNewIOContext = satPrepareNewIO(satNewIntIo,
tiOrgIORequest,
satDevData,
agNULL,
satOrgIOContext
);
/* enable PIO mode, then enable Ultra DMA mode in the satSetFeaturesCB callback function */
status = satSetFeatures(tiRoot,
&satNewIntIo->satIntTiIORequest,
satNewIOContext->ptiDeviceHandle,
&satNewIntIo->satIntTiScsiXchg, /* original from the OS layer */
satNewIOContext,
agFALSE);
if (status != tiSuccess)
{
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
}
}
else
{
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
TI_DBG2(("satAddSATAIDDevCB: pid %d\n", tdsaAllShared->Ports[PhyID].portContext->id));
/* notifying link up */
ostiPortEvent(
tiRoot,
tiPortLinkUp,
tiSuccess,
(void *)tdsaAllShared->Ports[PhyID].tiPortalContext
);
#ifdef INITIATOR_DRIVER
/* triggers discovery */
ostiPortEvent(
tiRoot,
tiPortDiscoveryReady,
tiSuccess,
(void *) tdsaAllShared->Ports[PhyID].tiPortalContext
);
#endif
}
return;
}
TI_DBG2(("satAddSATAIDDevCB: new device data\n"));
/* copy ID Dev data to satDevData */
satDevData->satIdentifyData = *pSATAIdData;
satDevData->IDDeviceValid = agTRUE;
#ifdef TD_INTERNAL_DEBUG
tdhexdump("satAddSATAIDDevCB ID Dev data",(bit8 *)pSATAIdData, sizeof(agsaSATAIdentifyData_t));
tdhexdump("satAddSATAIDDevCB Device ID Dev data",(bit8 *)&satDevData->satIdentifyData, sizeof(agsaSATAIdentifyData_t));
#endif
/* set satDevData fields from IdentifyData */
satSetDevInfo(satDevData,pSATAIdData);
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
if (satDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
/* send the Set Features ATA command to the ATAPI device to enable the PIO and DMA transfer modes */
satNewIntIo = satAllocIntIoResource( tiRoot,
tiOrgIORequest,
satDevData,
0,
satNewIntIo);
if (satNewIntIo == agNULL)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: momory allocation fails\n"));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
} /* end memory allocation */
satNewIOContext = satPrepareNewIO(satNewIntIo,
tiOrgIORequest,
satDevData,
agNULL,
satOrgIOContext
);
/* enable PIO mode, then enable Ultra DMA mode in the satSetFeaturesCB callback function */
status = satSetFeatures(tiRoot,
&satNewIntIo->satIntTiIORequest,
satNewIOContext->ptiDeviceHandle,
&satNewIntIo->satIntTiScsiXchg, /* original from the OS layer */
satNewIOContext,
agFALSE);
if (status != tiSuccess)
{
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
}
}
else
{
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
TI_DBG2(("satAddSATAIDDevCB: pid %d\n", tdsaAllShared->Ports[PhyID].portContext->id));
/* notifying link up */
ostiPortEvent (
tiRoot,
tiPortLinkUp,
tiSuccess,
(void *)tdsaAllShared->Ports[PhyID].tiPortalContext
);
#ifdef INITIATOR_DRIVER
/* triggers discovery */
ostiPortEvent(
tiRoot,
tiPortDiscoveryReady,
tiSuccess,
(void *) tdsaAllShared->Ports[PhyID].tiPortalContext
);
#endif
}
TI_DBG2(("satAddSATAIDDevCB: end\n"));
return;
}
/*****************************************************************************
*! \brief satAddSATAIDDevCBReset
*
* This routine frees internal IO resources after a failed identify device data command, prior to a link reset
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param oneDeviceData: Pointer to the device data.
* \param satIOContext: Pointer to satIOContext_t.
* \param tdIORequestBody: Pointer to the request body
*
* \return: none
*
*****************************************************************************/
void satAddSATAIDDevCBReset(
agsaRoot_t *agRoot,
tdsaDeviceData_t *oneDeviceData,
satIOContext_t *satIOContext,
tdIORequestBody_t *tdIORequestBody
)
{
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
satInternalIo_t *satIntIo;
satDeviceData_t *satDevData;
TI_DBG2(("satAddSATAIDDevCBReset: start\n"));
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
}
/*****************************************************************************
*! \brief satAddSATAIDDevCBCleanup
*
* This routine cleans up IOs for failed Identify device data
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param oneDeviceData: Pointer to the device data.
* \param satIOContext: Pointer to satIOContext_t.
* \param tdIORequestBody: Pointer to the request body
*
* \return: none
*
*****************************************************************************/
void satAddSATAIDDevCBCleanup(
agsaRoot_t *agRoot,
tdsaDeviceData_t *oneDeviceData,
satIOContext_t *satIOContext,
tdIORequestBody_t *tdIORequestBody
)
{
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
satInternalIo_t *satIntIo;
satDeviceData_t *satDevData;
bit8 PhyID;
TI_DBG2(("satAddSATAIDDevCBCleanup: start\n"));
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
PhyID = oneDeviceData->phyID;
tdsaAbortAll(tiRoot, agRoot, oneDeviceData);
/* put onedevicedata back to free list */
osti_memset(&(oneDeviceData->satDevData.satIdentifyData), 0xFF, sizeof(agsaSATAIdentifyData_t));
TDLIST_DEQUEUE_THIS(&(oneDeviceData->MainLink));
TDLIST_ENQUEUE_AT_TAIL(&(oneDeviceData->FreeLink), &(tdsaAllShared->FreeDeviceList));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
/* notifying link up */
ostiPortEvent (
tiRoot,
tiPortLinkUp,
tiSuccess,
(void *)tdsaAllShared->Ports[PhyID].tiPortalContext
);
#ifdef INITIATOR_DRIVER
/* triggers discovery */
ostiPortEvent(
tiRoot,
tiPortDiscoveryReady,
tiSuccess,
(void *) tdsaAllShared->Ports[PhyID].tiPortalContext
);
#endif
return;
}
/*****************************************************************************/
/*! \brief SAT implementation for tdsaDiscoveryStartIDDev.
*
* This function sends identify device data to SATA device in discovery
*
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param oneDeviceData : Pointer to the device data.
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32
tdsaDiscoveryStartIDDev(tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* agNULL */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest, /* agNULL */
tdsaDeviceData_t *oneDeviceData
)
{
void *osMemHandle;
tdIORequestBody_t *tdIORequestBody;
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
agsaIORequest_t *agIORequest = agNULL; /* identify device data itself */
satIOContext_t *satIOContext = agNULL;
bit32 status;
/* allocate a tdIORequestBody, then call
tdsaDiscoveryIntStartIDDev(tiRoot, agNULL, tiDeviceHandle, satIOContext)
*/
TI_DBG3(("tdsaDiscoveryStartIDDev: start\n"));
TI_DBG3(("tdsaDiscoveryStartIDDev: did %d\n", oneDeviceData->id));
/* allocate tdIORequestBody and set it up for the identify device data IO */
memAllocStatus = ostiAllocMemory(
tiRoot,
&osMemHandle,
(void **)&tdIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(tdIORequestBody_t),
agTRUE
);
if (memAllocStatus != tiSuccess)
{
TI_DBG1(("tdsaDiscoveryStartIDDev: ostiAllocMemory failed... loc 1\n"));
return tiError;
}
if (tdIORequestBody == agNULL)
{
TI_DBG1(("tdsaDiscoveryStartIDDev: ostiAllocMemory returned NULL tdIORequestBody loc 2\n"));
return tiError;
}
/* setup identify device data IO structure */
tdIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
tdIORequestBody->IOType.InitiatorTMIO.CurrentTaskTag = agNULL;
tdIORequestBody->IOType.InitiatorTMIO.TaskTag = agNULL;
/* initialize tiDevhandle */
tdIORequestBody->tiDevHandle = &(oneDeviceData->tiDeviceHandle);
tdIORequestBody->tiDevHandle->tdData = oneDeviceData;
/* initialize tiIORequest */
tdIORequestBody->tiIORequest = agNULL;
/* initialize agIORequest */
agIORequest = &(tdIORequestBody->agIORequest);
agIORequest->osData = (void *) tdIORequestBody;
agIORequest->sdkData = agNULL; /* SA takes care of this */
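/* completion callbacks recover the request body from agIORequest->osData */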
/* set up satIOContext */
satIOContext = &(tdIORequestBody->transport.SATA.satIOContext);
satIOContext->pSatDevData = &(oneDeviceData->satDevData);
satIOContext->pFis =
&(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satIOContext->tiRequestBody = tdIORequestBody;
satIOContext->ptiDeviceHandle = &(oneDeviceData->tiDeviceHandle);
satIOContext->tiScsiXchg = agNULL;
satIOContext->satIntIoContext = agNULL;
satIOContext->satOrgIOContext = agNULL;
/* the following fields are used only for internal IO */
satIOContext->currentLBA = 0;
satIOContext->OrgTL = 0;
satIOContext->satToBeAbortedIOContext = agNULL;
satIOContext->NotifyOS = agFALSE;
/* saving port ID just in case of full discovery to full discovery transition */
satIOContext->pid = oneDeviceData->tdPortContext->id;
osti_memset(&(oneDeviceData->satDevData.satIdentifyData), 0x0, sizeof(agsaSATAIdentifyData_t));
status = tdsaDiscoveryIntStartIDDev(tiRoot,
tiIORequest, /* agNULL */
tiDeviceHandle, /* &(oneDeviceData->tiDeviceHandle)*/
agNULL,
satIOContext
);
if (status != tiSuccess)
{
TI_DBG1(("tdsaDiscoveryStartIDDev: failed in sending %d\n", status));
ostiFreeMemory(tiRoot, osMemHandle, sizeof(tdIORequestBody_t));
}
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for tdsaDiscoveryIntStartIDDev.
*
* This function sends an identify device data command to a SATA device.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32
tdsaDiscoveryIntStartIDDev(tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest, /* agNULL */
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest, /* agNULL */
satIOContext_t *satIOContext
)
{
satInternalIo_t *satIntIo = agNULL;
satDeviceData_t *satDevData = agNULL;
tdIORequestBody_t *tdIORequestBody;
satIOContext_t *satNewIOContext;
bit32 status;
TI_DBG3(("tdsaDiscoveryIntStartIDDev: start\n"));
satDevData = satIOContext->pSatDevData;
/* allocate identify device command */
satIntIo = satAllocIntIoResource( tiRoot,
tiIORequest,
satDevData,
sizeof(agsaSATAIdentifyData_t), /* 512; size of identify device data */
satIntIo);
if (satIntIo == agNULL)
{
TI_DBG2(("tdsaDiscoveryIntStartIDDev: can't alloacate\n"));
return tiError;
}
/* fill in fields */
/* save the originating tiIORequest; verified working (5/21/07) */
satIntIo->satOrgTiIORequest = tiIORequest; /* changed */
tdIORequestBody = satIntIo->satIntRequestBody;
satNewIOContext = &(tdIORequestBody->transport.SATA.satIOContext);
satNewIOContext->pSatDevData = satDevData;
satNewIOContext->pFis = &(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev);
satNewIOContext->pScsiCmnd = &(satIntIo->satIntTiScsiXchg.scsiCmnd);
satNewIOContext->pSense = &(tdIORequestBody->transport.SATA.sensePayload);
satNewIOContext->pTiSenseData = &(tdIORequestBody->transport.SATA.tiSenseData);
satNewIOContext->tiRequestBody = satIntIo->satIntRequestBody; /* key fix */
satNewIOContext->interruptContext = tiInterruptContext;
satNewIOContext->satIntIoContext = satIntIo;
satNewIOContext->ptiDeviceHandle = agNULL;
satNewIOContext->satOrgIOContext = satIOContext; /* changed */
/* this is valid only for TD layer generated (not triggered by OS at all) IO */
satNewIOContext->tiScsiXchg = &(satIntIo->satIntTiScsiXchg);
TI_DBG6(("tdsaDiscoveryIntStartIDDev: OS satIOContext %p \n", satIOContext));
TI_DBG6(("tdsaDiscoveryIntStartIDDev: TD satNewIOContext %p \n", satNewIOContext));
TI_DBG6(("tdsaDiscoveryIntStartIDDev: OS tiScsiXchg %p \n", satIOContext->tiScsiXchg));
TI_DBG6(("tdsaDiscoveryIntStartIDDev: TD tiScsiXchg %p \n", satNewIOContext->tiScsiXchg));
TI_DBG3(("tdsaDiscoveryIntStartIDDev: satNewIOContext %p tdIORequestBody %p\n", satNewIOContext, tdIORequestBody));
status = tdsaDiscoverySendIDDev(tiRoot,
&satIntIo->satIntTiIORequest, /* New tiIORequest */
tiDeviceHandle,
satNewIOContext->tiScsiXchg, /* New tiScsiInitiatorRequest_t *tiScsiRequest, */
satNewIOContext);
if (status != tiSuccess)
{
TI_DBG1(("tdsaDiscoveryIntStartIDDev: failed in sending %d\n", status));
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
return tiError;
}
TI_DBG6(("tdsaDiscoveryIntStartIDDev: end\n"));
return status;
}
/*****************************************************************************/
/*! \brief SAT implementation for tdsaDiscoverySendIDDev.
*
* This function prepares an identify device data FIS and sends it to the SATA device.
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param tiIORequest: Pointer to TISA I/O request context for this I/O.
* \param tiDeviceHandle: Pointer to TISA device handle for this I/O.
* \param tiScsiRequest: Pointer to TISA SCSI I/O request and SGL list.
* \param satIOContext: Pointer to the SAT IO Context
*
* \return If command is started successfully
* - \e tiSuccess: I/O request successfully initiated.
* - \e tiBusy: No resources available, try again later.
* - \e tiIONoDevice: Invalid device handle.
* - \e tiError: Other errors.
*/
/*****************************************************************************/
GLOBAL bit32
tdsaDiscoverySendIDDev(tiRoot_t *tiRoot,
tiIORequest_t *tiIORequest,
tiDeviceHandle_t *tiDeviceHandle,
tiScsiInitiatorRequest_t *tiScsiRequest,
satIOContext_t *satIOContext
)
{
bit32 status;
bit32 agRequestType;
satDeviceData_t *pSatDevData;
agsaFisRegHostToDevice_t *fis;
#ifdef TD_DEBUG_ENABLE
tdIORequestBody_t *tdIORequestBody;
satInternalIo_t *satIntIoContext;
#endif
pSatDevData = satIOContext->pSatDevData;
fis = satIOContext->pFis;
TI_DBG3(("tdsaDiscoverySendIDDev: start\n"));
#ifdef TD_DEBUG_ENABLE
satIntIoContext = satIOContext->satIntIoContext;
tdIORequestBody = satIntIoContext->satIntRequestBody;
TI_DBG5(("tdsaDiscoverySendIDDev: satIOContext %p tdIORequestBody %p\n", satIOContext, tdIORequestBody));
#endif
fis->h.fisType = 0x27; /* Reg host to device */
fis->h.c_pmPort = 0x80; /* C Bit is set */
if (pSatDevData->satDeviceType == SATA_ATAPI_DEVICE)
fis->h.command = SAT_IDENTIFY_PACKET_DEVICE; /* 0xA1 */
else
fis->h.command = SAT_IDENTIFY_DEVICE; /* 0xEC */
fis->h.features = 0; /* FIS features (reserved) */
fis->d.lbaLow = 0; /* FIS LBA (7 :0 ) */
fis->d.lbaMid = 0; /* FIS LBA (15:8 ) */
fis->d.lbaHigh = 0; /* FIS LBA (23:16) */
fis->d.device = 0; /* FIS LBA mode */
fis->d.lbaLowExp = 0;
fis->d.lbaMidExp = 0;
fis->d.lbaHighExp = 0;
fis->d.featuresExp = 0;
fis->d.sectorCount = 0; /* FIS sector count (7:0) */
fis->d.sectorCountExp = 0;
fis->d.reserved4 = 0;
fis->d.control = 0; /* FIS HOB bit clear */
fis->d.reserved5 = 0;
agRequestType = AGSA_SATA_PROTOCOL_PIO_READ;
/* Initialize CB for SATA completion.
*/
satIOContext->satCompleteCB = &tdsaDiscoveryStartIDDevCB;
/*
* Prepare SGL and send FIS to LL layer.
*/
satIOContext->reqType = agRequestType; /* Save it */
#ifdef TD_INTERNAL_DEBUG
tdhexdump("tdsaDiscoverySendIDDev", (bit8 *)satIOContext->pFis, sizeof(agsaFisRegHostToDevice_t));
#ifdef TD_DEBUG_ENABLE
tdhexdump("tdsaDiscoverySendIDDev LL", (bit8 *)&(tdIORequestBody->transport.SATA.agSATARequestBody.fis.fisRegHostToDev), sizeof(agsaFisRegHostToDevice_t));
#endif
#endif
status = sataLLIOStart( tiRoot,
tiIORequest,
tiDeviceHandle,
tiScsiRequest,
satIOContext);
TI_DBG3(("tdsaDiscoverySendIDDev: end status %d\n", status));
return status;
}
/*****************************************************************************
*! \brief tdsaDiscoveryStartIDDevCB
*
* This routine is a callback function for tdsaDiscoverySendIDDev()
* Using the identify device data, this function determines whether the device
* is new or already known. If new, it is added to the device list. This is
* done as a part of discovery.
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param agIORequest: Pointer to the LL I/O request context for this I/O.
* \param agIOStatus: Status of completed I/O.
* \param agFirstDword: Pointer to the first four bytes of the FIS.
* \param agIOInfoLen: Length in bytes of overrun/underrun residual or FIS
* length.
* \param agParam: Additional info based on status.
* \param ioContext: Pointer to satIOContext_t.
*
* \return: none
*
*****************************************************************************/
void tdsaDiscoveryStartIDDevCB(
agsaRoot_t *agRoot,
agsaIORequest_t *agIORequest,
bit32 agIOStatus,
agsaFisHeader_t *agFirstDword,
bit32 agIOInfoLen,
void *agParam,
void *ioContext
)
{
/*
In the process of SAT_IDENTIFY_DEVICE during discovery
*/
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdsaRoot_t *tdsaRoot = (tdsaRoot_t *) tiRoot->tdData;
tdsaContext_t *tdsaAllShared = (tdsaContext_t *)&tdsaRoot->tdsaAllShared;
tdIORequestBody_t *tdIORequestBody;
tdIORequestBody_t *tdOrgIORequestBody;
satIOContext_t *satIOContext;
satIOContext_t *satOrgIOContext;
satIOContext_t *satNewIOContext;
satInternalIo_t *satIntIo;
satInternalIo_t *satNewIntIo = agNULL;
satDeviceData_t *satDevData;
tiIORequest_t *tiOrgIORequest = agNULL;
#ifdef TD_DEBUG_ENABLE
bit32 ataStatus = 0;
bit32 ataError;
agsaFisPioSetupHeader_t *satPIOSetupHeader = agNULL;
#endif
agsaSATAIdentifyData_t *pSATAIdData;
bit16 *tmpptr, tmpptr_tmp;
bit32 x;
tdsaDeviceData_t *oneDeviceData = agNULL;
void *sglVirtualAddr;
tdsaPortContext_t *onePortContext = agNULL;
tiPortalContext_t *tiPortalContext = agNULL;
bit32 retry_status;
TI_DBG3(("tdsaDiscoveryStartIDDevCB: start\n"));
tdIORequestBody = (tdIORequestBody_t *)agIORequest->osData;
satIOContext = (satIOContext_t *) ioContext;
satIntIo = satIOContext->satIntIoContext;
satDevData = satIOContext->pSatDevData;
oneDeviceData = (tdsaDeviceData_t *)tdIORequestBody->tiDevHandle->tdData;
TI_DBG3(("tdsaDiscoveryStartIDDevCB: did %d\n", oneDeviceData->id));
onePortContext = oneDeviceData->tdPortContext;
if (onePortContext == agNULL)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: onePortContext is NULL\n"));
return;
}
tiPortalContext= onePortContext->tiPortalContext;
satDevData->IDDeviceValid = agFALSE;
if (satIntIo == agNULL)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: External, OS generated\n"));
TI_DBG1(("tdsaDiscoveryStartIDDevCB: Not possible case\n"));
satOrgIOContext = satIOContext;
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
}
else
{
TI_DBG3(("tdsaDiscoveryStartIDDevCB: Internal, TD generated\n"));
satOrgIOContext = satIOContext->satOrgIOContext;
if (satOrgIOContext == agNULL)
{
TI_DBG6(("tdsaDiscoveryStartIDDevCB: satOrgIOContext is NULL\n"));
return;
}
else
{
TI_DBG6(("tdsaDiscoveryStartIDDevCB: satOrgIOContext is NOT NULL\n"));
tdOrgIORequestBody = (tdIORequestBody_t *)satOrgIOContext->tiRequestBody;
sglVirtualAddr = satIntIo->satIntTiScsiXchg.sglVirtualAddr;
}
}
tiOrgIORequest = tdIORequestBody->tiIORequest;
tdIORequestBody->ioCompleted = agTRUE;
tdIORequestBody->ioStarted = agFALSE;
TI_DBG3(("tdsaDiscoveryStartIDDevCB: satOrgIOContext->pid %d\n", satOrgIOContext->pid));
/* protect against double completion for old port */
if (satOrgIOContext->pid != oneDeviceData->tdPortContext->id)
{
TI_DBG3(("tdsaDiscoveryStartIDDevCB: incorrect pid\n"));
TI_DBG3(("tdsaDiscoveryStartIDDevCB: satOrgIOContext->pid %d\n", satOrgIOContext->pid));
TI_DBG3(("tdsaDiscoveryStartIDDevCB: tiPortalContext pid %d\n", oneDeviceData->tdPortContext->id));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
}
/* completion after portcontext is invalidated */
if (onePortContext != agNULL)
{
if (onePortContext->valid == agFALSE)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: portcontext is invalid\n"));
TI_DBG1(("tdsaDiscoveryStartIDDevCB: onePortContext->id pid %d\n", onePortContext->id));
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
/* no notification to OS layer */
return;
}
}
if (agFirstDword == agNULL && agIOStatus != OSSA_IO_SUCCESS)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: agFirstDword is NULL when error, status %d\n", agIOStatus));
TI_DBG1(("tdsaDiscoveryStartIDDevCB: did %d\n", oneDeviceData->id));
if (tdsaAllShared->ResetInDiscovery != 0 && satDevData->ID_Retries < SATA_ID_DEVICE_DATA_RETRIES)
{
satIOContext->pSatDevData->satPendingNONNCQIO--;
satIOContext->pSatDevData->satPendingIO--;
retry_status = sataLLIOStart(tiRoot,
&satIntIo->satIntTiIORequest,
&(oneDeviceData->tiDeviceHandle),
satIOContext->tiScsiXchg,
satIOContext);
if (retry_status != tiSuccess)
{
/* simply give up */
satDevData->ID_Retries = 0;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
}
satDevData->ID_Retries++;
tdIORequestBody->ioCompleted = agFALSE;
tdIORequestBody->ioStarted = agTRUE;
return;
}
else
{
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
if (tdsaAllShared->ResetInDiscovery != 0)
{
/* ResetInDiscovery is on */
if (satDevData->NumOfIDRetries <= 0)
{
satDevData->NumOfIDRetries++;
satDevData->ID_Retries = 0;
/* send link reset */
tdsaPhyControlSend(tiRoot,
oneDeviceData,
SMP_PHY_CONTROL_HARD_RESET,
agNULL,
tdsaRotateQnumber(tiRoot, oneDeviceData)
);
}
}
return;
}
}
if (agIOStatus == OSSA_IO_ABORTED ||
agIOStatus == OSSA_IO_UNDERFLOW ||
agIOStatus == OSSA_IO_XFER_ERROR_BREAK ||
agIOStatus == OSSA_IO_XFER_ERROR_PHY_NOT_READY ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BREAK ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_BAD_DESTINATION ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_WRONG_DESTINATION ||
agIOStatus == OSSA_IO_XFER_ERROR_NAK_RECEIVED ||
agIOStatus == OSSA_IO_XFER_ERROR_DMA ||
agIOStatus == OSSA_IO_XFER_ERROR_SATA_LINK_TIMEOUT ||
agIOStatus == OSSA_IO_XFER_ERROR_REJECTED_NCQ_MODE ||
agIOStatus == OSSA_IO_XFER_OPEN_RETRY_TIMEOUT ||
agIOStatus == OSSA_IO_NO_DEVICE ||
agIOStatus == OSSA_IO_OPEN_CNX_ERROR_ZONE_VIOLATION ||
agIOStatus == OSSA_IO_PORT_IN_RESET ||
agIOStatus == OSSA_IO_DS_NON_OPERATIONAL ||
agIOStatus == OSSA_IO_DS_IN_RECOVERY ||
agIOStatus == OSSA_IO_DS_IN_ERROR
)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: OSSA_IO_OPEN_CNX_ERROR 0x%x\n", agIOStatus));
if (tdsaAllShared->ResetInDiscovery != 0 && satDevData->ID_Retries < SATA_ID_DEVICE_DATA_RETRIES)
{
satIOContext->pSatDevData->satPendingNONNCQIO--;
satIOContext->pSatDevData->satPendingIO--;
retry_status = sataLLIOStart(tiRoot,
&satIntIo->satIntTiIORequest,
&(oneDeviceData->tiDeviceHandle),
satIOContext->tiScsiXchg,
satIOContext);
if (retry_status != tiSuccess)
{
/* simply give up */
satDevData->ID_Retries = 0;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
}
satDevData->ID_Retries++;
tdIORequestBody->ioCompleted = agFALSE;
tdIORequestBody->ioStarted = agTRUE;
return;
}
else
{
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
if (tdsaAllShared->ResetInDiscovery != 0)
{
/* ResetInDiscovery is on */
if (satDevData->NumOfIDRetries <= 0)
{
satDevData->NumOfIDRetries++;
satDevData->ID_Retries = 0;
/* send link reset */
tdsaPhyControlSend(tiRoot,
oneDeviceData,
SMP_PHY_CONTROL_HARD_RESET,
agNULL,
tdsaRotateQnumber(tiRoot, oneDeviceData)
);
}
}
return;
}
}
if ( agIOStatus != OSSA_IO_SUCCESS ||
(agIOStatus == OSSA_IO_SUCCESS && agFirstDword != agNULL && agIOInfoLen != 0)
)
{
#ifdef TD_DEBUG_ENABLE
/* only agsaFisPioSetup_t is expected */
satPIOSetupHeader = (agsaFisPioSetupHeader_t *)&(agFirstDword->PioSetup);
ataStatus = satPIOSetupHeader->status; /* ATA Status register */
ataError = satPIOSetupHeader->error; /* ATA Error register */
TI_DBG1(("tdsaDiscoveryStartIDDevCB: ataStatus 0x%x ataError 0x%x\n", ataStatus, ataError));
#endif
if (tdsaAllShared->ResetInDiscovery != 0 && satDevData->ID_Retries < SATA_ID_DEVICE_DATA_RETRIES)
{
satIOContext->pSatDevData->satPendingNONNCQIO--;
satIOContext->pSatDevData->satPendingIO--;
retry_status = sataLLIOStart(tiRoot,
&satIntIo->satIntTiIORequest,
&(oneDeviceData->tiDeviceHandle),
satIOContext->tiScsiXchg,
satIOContext);
if (retry_status != tiSuccess)
{
/* simply give up */
satDevData->ID_Retries = 0;
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
}
satDevData->ID_Retries++;
tdIORequestBody->ioCompleted = agFALSE;
tdIORequestBody->ioStarted = agTRUE;
return;
}
else
{
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
if (tdsaAllShared->ResetInDiscovery != 0)
{
/* ResetInDiscovery is on */
if (satDevData->NumOfIDRetries <= 0)
{
satDevData->NumOfIDRetries++;
satDevData->ID_Retries = 0;
/* send link reset */
tdsaPhyControlSend(tiRoot,
oneDeviceData,
SMP_PHY_CONTROL_HARD_RESET,
agNULL,
tdsaRotateQnumber(tiRoot, oneDeviceData)
);
}
}
return;
}
}
/* success */
TI_DBG3(("tdsaDiscoveryStartIDDevCB: Success\n"));
TI_DBG3(("tdsaDiscoveryStartIDDevCB: Success did %d\n", oneDeviceData->id));
/* Convert to host endian */
tmpptr = (bit16*)sglVirtualAddr;
for (x=0; x < sizeof(agsaSATAIdentifyData_t)/sizeof(bit16); x++)
{
OSSA_READ_LE_16(AGROOT, &tmpptr_tmp, tmpptr, 0);
*tmpptr = tmpptr_tmp;
tmpptr++;
}
pSATAIdData = (agsaSATAIdentifyData_t *)sglVirtualAddr;
//tdhexdump("satAddSATAIDDevCB before", (bit8 *)pSATAIdData, sizeof(agsaSATAIdentifyData_t));
TI_DBG5(("tdsaDiscoveryStartIDDevCB: OS satOrgIOContext %p \n", satOrgIOContext));
TI_DBG5(("tdsaDiscoveryStartIDDevCB: TD satIOContext %p \n", satIOContext));
TI_DBG5(("tdsaDiscoveryStartIDDevCB: OS tiScsiXchg %p \n", satOrgIOContext->tiScsiXchg));
TI_DBG5(("tdsaDiscoveryStartIDDevCB: TD tiScsiXchg %p \n", satIOContext->tiScsiXchg));
/* copy ID Dev data to satDevData */
satDevData->satIdentifyData = *pSATAIdData;
satDevData->IDDeviceValid = agTRUE;
#ifdef TD_INTERNAL_DEBUG
tdhexdump("tdsaDiscoveryStartIDDevCB ID Dev data",(bit8 *)pSATAIdData, sizeof(agsaSATAIdentifyData_t));
tdhexdump("tdsaDiscoveryStartIDDevCB Device ID Dev data",(bit8 *)&satDevData->satIdentifyData, sizeof(agsaSATAIdentifyData_t));
#endif
/* set satDevData fields from IdentifyData */
satSetDevInfo(satDevData,pSATAIdData);
satDecrementPendingIO(tiRoot, tdsaAllShared, satIOContext);
satFreeIntIoResource( tiRoot,
satDevData,
satIntIo);
if (satDevData->satDeviceType == SATA_ATAPI_DEVICE)
{
/* send the Set Features ATA command to the ATAPI device to enable the PIO and DMA transfer modes */
satNewIntIo = satAllocIntIoResource( tiRoot,
tiOrgIORequest,
satDevData,
0,
satNewIntIo);
if (satNewIntIo == agNULL)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: momory allocation fails\n"));
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
return;
} /* end memory allocation */
satNewIOContext = satPrepareNewIO(satNewIntIo,
tiOrgIORequest,
satDevData,
agNULL,
satOrgIOContext
);
/* enable PIO mode, then enable Ultra DMA mode in the satSetFeaturesCB callback function */
retry_status = satSetFeatures(tiRoot,
&satNewIntIo->satIntTiIORequest,
satNewIOContext->ptiDeviceHandle,
&satNewIntIo->satIntTiScsiXchg, /* original from the OS layer */
satNewIOContext,
agFALSE);
if (retry_status != tiSuccess)
{
satFreeIntIoResource(tiRoot, satDevData, satIntIo);
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
}
}
else
{
/* clean up TD layer's IORequestBody */
ostiFreeMemory(
tiRoot,
tdOrgIORequestBody->IOType.InitiatorTMIO.osMemHandle,
sizeof(tdIORequestBody_t)
);
if (onePortContext != agNULL)
{
if (onePortContext->DiscoveryState == ITD_DSTATE_COMPLETED)
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: ID completed after discovery is done; tiDeviceArrival\n"));
/* in case registration is finished after discovery is finished */
ostiInitiatorEvent(
tiRoot,
tiPortalContext,
agNULL,
tiIntrEventTypeDeviceChange,
tiDeviceArrival,
agNULL
);
}
}
else
{
TI_DBG1(("tdsaDiscoveryStartIDDevCB: onePortContext is NULL, wrong\n"));
}
}
TI_DBG3(("tdsaDiscoveryStartIDDevCB: end\n"));
return;
}
/*****************************************************************************
*! \brief satAbort
*
* This routine does local abort for outstanding FIS.
*
* \param agRoot: Handles for this instance of SAS/SATA hardware
* \param satIOContext: Pointer to satIOContext_t.
*
* \return: none
*
*****************************************************************************/
GLOBAL void satAbort(agsaRoot_t *agRoot,
satIOContext_t *satIOContext)
{
tdsaRootOsData_t *osData = (tdsaRootOsData_t *)agRoot->osData;
tiRoot_t *tiRoot = (tiRoot_t *)osData->tiRoot;
tdIORequestBody_t *tdIORequestBody; /* io to be aborted */
tdIORequestBody_t *tdAbortIORequestBody; /* abort io itself */
agsaIORequest_t *agToBeAbortedIORequest; /* io to be aborted */
agsaIORequest_t *agAbortIORequest; /* abort io itself */
bit32 PhysUpper32;
bit32 PhysLower32;
bit32 memAllocStatus;
void *osMemHandle;
TI_DBG1(("satAbort: start\n"));
if (satIOContext == agNULL)
{
TI_DBG1(("satAbort: satIOContext is NULL, wrong\n"));
return;
}
tdIORequestBody = (tdIORequestBody_t *)satIOContext->tiRequestBody;
agToBeAbortedIORequest = (agsaIORequest_t *)&(tdIORequestBody->agIORequest);
/* allocating agIORequest for abort itself */
memAllocStatus = ostiAllocMemory(
tiRoot,
&osMemHandle,
(void **)&tdAbortIORequestBody,
&PhysUpper32,
&PhysLower32,
8,
sizeof(tdIORequestBody_t),
agTRUE
);
if (memAllocStatus != tiSuccess)
{
/* let os process IO */
TI_DBG1(("satAbort: ostiAllocMemory failed...\n"));
return;
}
if (tdAbortIORequestBody == agNULL)
{
/* let os process IO */
TI_DBG1(("satAbort: ostiAllocMemory returned NULL tdAbortIORequestBody\n"));
return;
}
/* setup task management structure */
tdAbortIORequestBody->IOType.InitiatorTMIO.osMemHandle = osMemHandle;
tdAbortIORequestBody->tiDevHandle = tdIORequestBody->tiDevHandle;
/* initialize agIORequest */
agAbortIORequest = &(tdAbortIORequestBody->agIORequest);
agAbortIORequest->osData = (void *) tdAbortIORequestBody;
agAbortIORequest->sdkData = agNULL; /* LL takes care of this */
/*
* Issue abort
*/
saSATAAbort( agRoot, agAbortIORequest, 0, agNULL, 0, agToBeAbortedIORequest, agNULL );
TI_DBG1(("satAbort: end\n"));
return;
}
/*****************************************************************************
*! \brief satSATADeviceReset
*
* This routine is called to reset all phys of the port to which a device belongs
*
* \param tiRoot: Pointer to TISA initiator driver/port instance.
* \param oneDeviceData: Pointer to the device data.
* \param flag: reset flag
*
* \return:
*
* none
*
*****************************************************************************/
osGLOBAL void
satSATADeviceReset( tiRoot_t *tiRoot,
tdsaDeviceData_t *oneDeviceData,
bit32 flag)
{
agsaRoot_t *agRoot;
tdsaPortContext_t *onePortContext;
bit32 i;
TI_DBG1(("satSATADeviceReset: start\n"));
agRoot = oneDeviceData->agRoot;
onePortContext = oneDeviceData->tdPortContext;
if (agRoot == agNULL)
{
TI_DBG1(("satSATADeviceReset: Error!!! agRoot is NULL\n"));
return;
}
if (onePortContext == agNULL)
{
TI_DBG1(("satSATADeviceReset: Error!!! onePortContext is NULL\n"));
return;
}
for(i=0;i<TD_MAX_NUM_PHYS;i++)
{
if (onePortContext->PhyIDList[i] == agTRUE)
{
saLocalPhyControl(agRoot, agNULL, tdsaRotateQnumber(tiRoot, agNULL), i, flag, agNULL);
}
}
return;
}
#endif /* #ifdef SATA_ENABLE */
Index: head/sys/dev/random/fortuna.c
===================================================================
--- head/sys/dev/random/fortuna.c (revision 300049)
+++ head/sys/dev/random/fortuna.c (revision 300050)
@@ -1,422 +1,422 @@
/*-
* Copyright (c) 2013-2015 Mark R V Murray
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer
* in this position and unchanged.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
/*
* This implementation of Fortuna is based on the descriptions found in
* ISBN 978-0-470-47424-2 "Cryptography Engineering" by Ferguson, Schneier
* and Kohno ("FS&K").
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/limits.h>
#ifdef _KERNEL
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/random.h>
#include <sys/sdt.h>
#include <sys/sysctl.h>
#include <sys/systm.h>
#include <machine/cpu.h>
#include <crypto/rijndael/rijndael-api-fst.h>
#include <crypto/sha2/sha256.h>
#include <dev/random/hash.h>
#include <dev/random/randomdev.h>
#include <dev/random/random_harvestq.h>
#include <dev/random/uint128.h>
#include <dev/random/fortuna.h>
#else /* !_KERNEL */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <threads.h>
#include "unit_test.h"
#include <crypto/rijndael/rijndael-api-fst.h>
#include <crypto/sha2/sha256.h>
#include <dev/random/hash.h>
#include <dev/random/randomdev.h>
#include <dev/random/uint128.h>
#include <dev/random/fortuna.h>
#endif /* _KERNEL */
/* Defined in FS&K */
#define RANDOM_FORTUNA_NPOOLS 32 /* The number of accumulation pools */
#define RANDOM_FORTUNA_DEFPOOLSIZE 64 /* The default pool size/length for a (re)seed */
#define RANDOM_FORTUNA_MAX_READ (1 << 20) /* Max bytes in a single read */
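/*
 * Per FS&K, pool P_i contributes to a reseed only when ReseedCnt is divisible
 * by 2^i, so the 32 pools above cover reseed cadences from every reseed (P_0)
 * up to every 2^31st reseed (P_31).
 */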
/*
* The allowable range of RANDOM_FORTUNA_DEFPOOLSIZE. The default value is above.
* Making RANDOM_FORTUNA_DEFPOOLSIZE too large will mean a long time between reseeds,
* while making it too small may compromise initial security but gives faster reseeds.
*/
#define RANDOM_FORTUNA_MINPOOLSIZE 16
#define RANDOM_FORTUNA_MAXPOOLSIZE UINT_MAX
CTASSERT(RANDOM_FORTUNA_MINPOOLSIZE <= RANDOM_FORTUNA_DEFPOOLSIZE);
CTASSERT(RANDOM_FORTUNA_DEFPOOLSIZE <= RANDOM_FORTUNA_MAXPOOLSIZE);
/* This algorithm (and code) presumes that RANDOM_KEYSIZE is twice as large as RANDOM_BLOCKSIZE */
CTASSERT(RANDOM_BLOCKSIZE == sizeof(uint128_t));
CTASSERT(RANDOM_KEYSIZE == 2*RANDOM_BLOCKSIZE);
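/* i.e. a 16-byte (128-bit) block implies a 32-byte (256-bit) key */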
/* Probes for dtrace(1) */
SDT_PROVIDER_DECLARE(random);
SDT_PROVIDER_DEFINE(random);
SDT_PROBE_DEFINE2(random, fortuna, event_processor, debug, "u_int", "struct fs_pool *");
/*
* This is the beastie that needs protecting. It contains all of the
* state that we are excited about. Exactly one is instantiated.
*/
static struct fortuna_state {
struct fs_pool { /* P_i */
u_int fsp_length; /* Only the first one is used by Fortuna */
struct randomdev_hash fsp_hash;
} fs_pool[RANDOM_FORTUNA_NPOOLS];
u_int fs_reseedcount; /* ReseedCnt */
uint128_t fs_counter; /* C */
struct randomdev_key fs_key; /* K */
u_int fs_minpoolsize; /* Extras */
/* Extras for the OS */
#ifdef _KERNEL
/* For use when 'pacing' the reseeds */
sbintime_t fs_lasttime;
#endif
/* Reseed lock */
mtx_t fs_mtx;
} fortuna_state;
#ifdef _KERNEL
static struct sysctl_ctx_list random_clist;
RANDOM_CHECK_UINT(fs_minpoolsize, RANDOM_FORTUNA_MINPOOLSIZE, RANDOM_FORTUNA_MAXPOOLSIZE);
#else
static uint8_t zero_region[RANDOM_ZERO_BLOCKSIZE];
#endif
static void random_fortuna_pre_read(void);
static void random_fortuna_read(uint8_t *, u_int);
static bool random_fortuna_seeded(void);
static void random_fortuna_process_event(struct harvest_event *);
static void random_fortuna_init_alg(void *);
static void random_fortuna_deinit_alg(void *);
static void random_fortuna_reseed_internal(uint32_t *entropy_data, u_int blockcount);
struct random_algorithm random_alg_context = {
.ra_ident = "Fortuna",
.ra_init_alg = random_fortuna_init_alg,
.ra_deinit_alg = random_fortuna_deinit_alg,
.ra_pre_read = random_fortuna_pre_read,
.ra_read = random_fortuna_read,
.ra_seeded = random_fortuna_seeded,
.ra_event_processor = random_fortuna_process_event,
.ra_poolcount = RANDOM_FORTUNA_NPOOLS,
};
/* ARGSUSED */
static void
random_fortuna_init_alg(void *unused __unused)
{
int i;
#ifdef _KERNEL
struct sysctl_oid *random_fortuna_o;
#endif
RANDOM_RESEED_INIT_LOCK();
/*
 * Fortuna parameters. Do not adjust these unless you have
 * a very good clue about what they do!
*/
fortuna_state.fs_minpoolsize = RANDOM_FORTUNA_DEFPOOLSIZE;
#ifdef _KERNEL
fortuna_state.fs_lasttime = 0;
random_fortuna_o = SYSCTL_ADD_NODE(&random_clist,
SYSCTL_STATIC_CHILDREN(_kern_random),
OID_AUTO, "fortuna", CTLFLAG_RW, 0,
"Fortuna Parameters");
SYSCTL_ADD_PROC(&random_clist,
SYSCTL_CHILDREN(random_fortuna_o), OID_AUTO,
"minpoolsize", CTLTYPE_UINT | CTLFLAG_RWTUN,
&fortuna_state.fs_minpoolsize, RANDOM_FORTUNA_DEFPOOLSIZE,
random_check_uint_fs_minpoolsize, "IU",
"Minimum pool size necessary to cause a reseed");
KASSERT(fortuna_state.fs_minpoolsize > 0, ("random: Fortuna threshold must be > 0 at startup"));
#endif
/*-
* FS&K - InitializePRNG()
* - P_i = \epsilon
* - ReseedCNT = 0
*/
for (i = 0; i < RANDOM_FORTUNA_NPOOLS; i++) {
randomdev_hash_init(&fortuna_state.fs_pool[i].fsp_hash);
fortuna_state.fs_pool[i].fsp_length = 0;
}
fortuna_state.fs_reseedcount = 0;
/*-
* FS&K - InitializeGenerator()
* - C = 0
* - K = 0
*/
fortuna_state.fs_counter = UINT128_ZERO;
explicit_bzero(&fortuna_state.fs_key, sizeof(fortuna_state.fs_key));
}
/* ARGSUSED */
static void
random_fortuna_deinit_alg(void *unused __unused)
{
RANDOM_RESEED_DEINIT_LOCK();
explicit_bzero(&fortuna_state, sizeof(fortuna_state));
#ifdef _KERNEL
sysctl_ctx_free(&random_clist);
#endif
}
/*-
* FS&K - AddRandomEvent()
* Process a single stochastic event off the harvest queue
*/
static void
random_fortuna_process_event(struct harvest_event *event)
{
u_int pl;
RANDOM_RESEED_LOCK();
/*-
* FS&K - P_i = P_i|<harvested stuff>
* Accumulate the event into the appropriate pool
* where each event carries the destination information.
*
* The hash_init() and hash_finish() calls are done in
* random_fortuna_pre_read().
*
* We must be locked against pool state modification which can happen
 * during accumulation/reseeding and reading/regenerating.
*/
pl = event->he_destination % RANDOM_FORTUNA_NPOOLS;
randomdev_hash_iterate(&fortuna_state.fs_pool[pl].fsp_hash, event, sizeof(*event));
/*-
- * Don't wrap the length. Doing the the hard way so as not to wrap at MAXUINT.
+ * Don't wrap the length. Doing this the hard way so as not to wrap at MAXUINT.
* This is a "saturating" add.
* XXX: FIX!!: We don't actually need lengths for anything but fs_pool[0],
* but it's been useful debugging to see them all.
*/
if (RANDOM_FORTUNA_MAXPOOLSIZE - fortuna_state.fs_pool[pl].fsp_length > event->he_size)
fortuna_state.fs_pool[pl].fsp_length += event->he_size;
else
fortuna_state.fs_pool[pl].fsp_length = RANDOM_FORTUNA_MAXPOOLSIZE;
explicit_bzero(event, sizeof(*event));
RANDOM_RESEED_UNLOCK();
}
/*-
* FS&K - Reseed()
* This introduces new key material into the output generator.
* Additionally it increments the output generator's counter
* variable C. When C > 0, the output generator is seeded and
* will deliver output.
* The entropy_data buffer passed is a very specific size; the
* product of RANDOM_FORTUNA_NPOOLS and RANDOM_KEYSIZE.
*/
static void
random_fortuna_reseed_internal(uint32_t *entropy_data, u_int blockcount)
{
struct randomdev_hash context;
uint8_t hash[RANDOM_KEYSIZE];
RANDOM_RESEED_ASSERT_LOCK_OWNED();
/*-
* FS&K - K = Hd(K|s) where Hd(m) is H(H(0^512|m))
* - C = C + 1
*/
randomdev_hash_init(&context);
randomdev_hash_iterate(&context, zero_region, RANDOM_ZERO_BLOCKSIZE);
randomdev_hash_iterate(&context, &fortuna_state.fs_key, sizeof(fortuna_state.fs_key));
randomdev_hash_iterate(&context, entropy_data, RANDOM_KEYSIZE*blockcount);
randomdev_hash_finish(&context, hash);
randomdev_hash_init(&context);
randomdev_hash_iterate(&context, hash, RANDOM_KEYSIZE);
randomdev_hash_finish(&context, hash);
randomdev_encrypt_init(&fortuna_state.fs_key, hash);
explicit_bzero(hash, sizeof(hash));
/* Unblock the device if this is the first time we are reseeding. */
if (uint128_is_zero(fortuna_state.fs_counter))
randomdev_unblock();
uint128_increment(&fortuna_state.fs_counter);
}
/*-
* FS&K - GenerateBlocks()
* Generate a number of complete blocks of random output.
*/
static __inline void
random_fortuna_genblocks(uint8_t *buf, u_int blockcount)
{
u_int i;
RANDOM_RESEED_ASSERT_LOCK_OWNED();
for (i = 0; i < blockcount; i++) {
/*-
* FS&K - r = r|E(K,C)
* - C = C + 1
*/
randomdev_encrypt(&fortuna_state.fs_key, &fortuna_state.fs_counter, buf, RANDOM_BLOCKSIZE);
buf += RANDOM_BLOCKSIZE;
uint128_increment(&fortuna_state.fs_counter);
}
}
/*-
* FS&K - PseudoRandomData()
* This generates no more than 2^20 bytes of data, and cleans up its
* internal state when finished. It is assumed that a whole number of
* blocks are available for writing; any excess generated will be
* ignored.
*/
static __inline void
random_fortuna_genrandom(uint8_t *buf, u_int bytecount)
{
static uint8_t temp[RANDOM_BLOCKSIZE*(RANDOM_KEYS_PER_BLOCK)];
u_int blockcount;
RANDOM_RESEED_ASSERT_LOCK_OWNED();
/*-
 * FS&K - assert(n < 2^20 (== 1 MB))
* - r = first-n-bytes(GenerateBlocks(ceil(n/16)))
* - K = GenerateBlocks(2)
*/
KASSERT((bytecount <= RANDOM_FORTUNA_MAX_READ), ("invalid single read request to Fortuna of %d bytes", bytecount));
blockcount = howmany(bytecount, RANDOM_BLOCKSIZE);
random_fortuna_genblocks(buf, blockcount);
random_fortuna_genblocks(temp, RANDOM_KEYS_PER_BLOCK);
randomdev_encrypt_init(&fortuna_state.fs_key, temp);
explicit_bzero(temp, sizeof(temp));
}
/*-
* FS&K - RandomData() (Part 1)
* Used to return processed entropy from the PRNG. There is a pre_read
* required to be present (but it can be a stub) in order to allow
 * specific actions at the beginning of the read.
*/
void
random_fortuna_pre_read(void)
{
#ifdef _KERNEL
sbintime_t now;
#endif
struct randomdev_hash context;
uint32_t s[RANDOM_FORTUNA_NPOOLS*RANDOM_KEYSIZE_WORDS];
uint8_t temp[RANDOM_KEYSIZE];
u_int i;
KASSERT(fortuna_state.fs_minpoolsize > 0, ("random: Fortuna threshold must be > 0"));
#ifdef _KERNEL
/* FS&K - Use 'getsbinuptime()' to prevent reseed-spamming. */
now = getsbinuptime();
#endif
RANDOM_RESEED_LOCK();
if (fortuna_state.fs_pool[0].fsp_length >= fortuna_state.fs_minpoolsize
#ifdef _KERNEL
/* FS&K - Use 'getsbinuptime()' to prevent reseed-spamming. */
&& (now - fortuna_state.fs_lasttime > hz/10)
#endif
) {
#ifdef _KERNEL
fortuna_state.fs_lasttime = now;
#endif
/* FS&K - ReseedCNT = ReseedCNT + 1 */
fortuna_state.fs_reseedcount++;
/* s = \epsilon at start */
for (i = 0; i < RANDOM_FORTUNA_NPOOLS; i++) {
/* FS&K - if Divides(ReseedCnt, 2^i) ... */
if ((fortuna_state.fs_reseedcount % (1 << i)) == 0) {
/*-
* FS&K - temp = (P_i)
* - P_i = \epsilon
* - s = s|H(temp)
*/
randomdev_hash_finish(&fortuna_state.fs_pool[i].fsp_hash, temp);
randomdev_hash_init(&fortuna_state.fs_pool[i].fsp_hash);
fortuna_state.fs_pool[i].fsp_length = 0;
randomdev_hash_init(&context);
randomdev_hash_iterate(&context, temp, RANDOM_KEYSIZE);
randomdev_hash_finish(&context, s + i*RANDOM_KEYSIZE_WORDS);
} else
break;
}
SDT_PROBE2(random, fortuna, event_processor, debug, fortuna_state.fs_reseedcount, fortuna_state.fs_pool);
/* FS&K */
random_fortuna_reseed_internal(s, i < RANDOM_FORTUNA_NPOOLS ? i + 1 : RANDOM_FORTUNA_NPOOLS);
/* Clean up and secure */
explicit_bzero(s, sizeof(s));
explicit_bzero(temp, sizeof(temp));
explicit_bzero(&context, sizeof(context));
}
RANDOM_RESEED_UNLOCK();
}
/*-
* FS&K - RandomData() (Part 2)
* Main read from Fortuna, continued. May be called multiple times after
* the random_fortuna_pre_read() above.
* The supplied buf MUST be a multiple of RANDOM_BLOCKSIZE in size.
* Lots of code presumes this for efficiency, both here and in other
* routines. You are NOT allowed to break this!
*/
void
random_fortuna_read(uint8_t *buf, u_int bytecount)
{
KASSERT((bytecount % RANDOM_BLOCKSIZE) == 0, ("%s(): bytecount (= %d) must be a multiple of %d", __func__, bytecount, RANDOM_BLOCKSIZE ));
RANDOM_RESEED_LOCK();
random_fortuna_genrandom(buf, bytecount);
RANDOM_RESEED_UNLOCK();
}
bool
random_fortuna_seeded(void)
{
return (!uint128_is_zero(fortuna_state.fs_counter));
}
Index: head/sys/dev/sfxge/common/ef10_ev.c
===================================================================
--- head/sys/dev/sfxge/common/ef10_ev.c (revision 300049)
+++ head/sys/dev/sfxge/common/ef10_ev.c (revision 300050)
@@ -1,987 +1,987 @@
/*-
* Copyright (c) 2012-2015 Solarflare Communications Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
* EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* The views and conclusions contained in the software and documentation are
* those of the authors and should not be interpreted as representing official
* policies, either expressed or implied, of the FreeBSD Project.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "efx.h"
#include "efx_impl.h"
#if EFSYS_OPT_MON_STATS
#include "mcdi_mon.h"
#endif
#if EFSYS_OPT_HUNTINGTON || EFSYS_OPT_MEDFORD
#if EFSYS_OPT_QSTATS
#define EFX_EV_QSTAT_INCR(_eep, _stat) \
do { \
(_eep)->ee_stat[_stat]++; \
_NOTE(CONSTANTCONDITION) \
} while (B_FALSE)
#else
#define EFX_EV_QSTAT_INCR(_eep, _stat)
#endif
static __checkReturn boolean_t
ef10_ev_rx(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg);
static __checkReturn boolean_t
ef10_ev_tx(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg);
static __checkReturn boolean_t
ef10_ev_driver(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg);
static __checkReturn boolean_t
ef10_ev_drv_gen(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg);
static __checkReturn boolean_t
ef10_ev_mcdi(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg);
static __checkReturn efx_rc_t
efx_mcdi_init_evq(
__in efx_nic_t *enp,
__in unsigned int instance,
__in efsys_mem_t *esmp,
__in size_t nevs,
__in uint32_t irq)
{
efx_mcdi_req_t req;
uint8_t payload[
MAX(MC_CMD_INIT_EVQ_IN_LEN(EFX_EVQ_NBUFS(EFX_EVQ_MAXNEVS)),
MC_CMD_INIT_EVQ_OUT_LEN)];
efx_qword_t *dma_addr;
uint64_t addr;
int npages;
int i;
int supports_rx_batching;
efx_rc_t rc;
npages = EFX_EVQ_NBUFS(nevs);
if (MC_CMD_INIT_EVQ_IN_LEN(npages) > MC_CMD_INIT_EVQ_IN_LENMAX) {
rc = EINVAL;
goto fail1;
}
(void) memset(payload, 0, sizeof (payload));
req.emr_cmd = MC_CMD_INIT_EVQ;
req.emr_in_buf = payload;
req.emr_in_length = MC_CMD_INIT_EVQ_IN_LEN(npages);
req.emr_out_buf = payload;
req.emr_out_length = MC_CMD_INIT_EVQ_OUT_LEN;
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_SIZE, nevs);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_INSTANCE, instance);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_IRQ_NUM, irq);
/*
* On Huntington RX and TX event batching can only be requested
* together (even if the datapath firmware doesn't actually support RX
* batching).
* Cut through is incompatible with RX batching and so enabling cut
* through disables RX batching (but it does not affect TX batching).
*
* So always enable RX and TX event batching, and enable cut through
* if RX event batching isn't supported (i.e. on low latency firmware).
*/
supports_rx_batching = enp->en_nic_cfg.enc_rx_batching_enabled ? 1 : 0;
MCDI_IN_POPULATE_DWORD_6(req, INIT_EVQ_IN_FLAGS,
INIT_EVQ_IN_FLAG_INTERRUPTING, 1,
INIT_EVQ_IN_FLAG_RPTR_DOS, 0,
INIT_EVQ_IN_FLAG_INT_ARMD, 0,
INIT_EVQ_IN_FLAG_CUT_THRU, !supports_rx_batching,
INIT_EVQ_IN_FLAG_RX_MERGE, 1,
INIT_EVQ_IN_FLAG_TX_MERGE, 1);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_TMR_MODE,
MC_CMD_INIT_EVQ_IN_TMR_MODE_DIS);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_TMR_LOAD, 0);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_TMR_RELOAD, 0);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_COUNT_MODE,
MC_CMD_INIT_EVQ_IN_COUNT_MODE_DIS);
MCDI_IN_SET_DWORD(req, INIT_EVQ_IN_COUNT_THRSHLD, 0);
dma_addr = MCDI_IN2(req, efx_qword_t, INIT_EVQ_IN_DMA_ADDR);
addr = EFSYS_MEM_ADDR(esmp);
for (i = 0; i < npages; i++) {
EFX_POPULATE_QWORD_2(*dma_addr,
EFX_DWORD_1, (uint32_t)(addr >> 32),
EFX_DWORD_0, (uint32_t)(addr & 0xffffffff));
dma_addr++;
addr += EFX_BUF_SIZE;
}
efx_mcdi_execute(enp, &req);
if (req.emr_rc != 0) {
rc = req.emr_rc;
goto fail2;
}
if (req.emr_out_length_used < MC_CMD_INIT_EVQ_OUT_LEN) {
rc = EMSGSIZE;
goto fail3;
}
/* NOTE: ignore the returned IRQ param as firmware does not set it. */
return (0);
fail3:
EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
EFSYS_PROBE1(fail1, efx_rc_t, rc);
return (rc);
}
static __checkReturn efx_rc_t
efx_mcdi_fini_evq(
__in efx_nic_t *enp,
__in uint32_t instance)
{
efx_mcdi_req_t req;
uint8_t payload[MAX(MC_CMD_FINI_EVQ_IN_LEN,
MC_CMD_FINI_EVQ_OUT_LEN)];
efx_rc_t rc;
(void) memset(payload, 0, sizeof (payload));
req.emr_cmd = MC_CMD_FINI_EVQ;
req.emr_in_buf = payload;
req.emr_in_length = MC_CMD_FINI_EVQ_IN_LEN;
req.emr_out_buf = payload;
req.emr_out_length = MC_CMD_FINI_EVQ_OUT_LEN;
MCDI_IN_SET_DWORD(req, FINI_EVQ_IN_INSTANCE, instance);
efx_mcdi_execute_quiet(enp, &req);
if (req.emr_rc != 0) {
rc = req.emr_rc;
goto fail1;
}
return (0);
fail1:
EFSYS_PROBE1(fail1, efx_rc_t, rc);
return (rc);
}
__checkReturn efx_rc_t
ef10_ev_init(
__in efx_nic_t *enp)
{
_NOTE(ARGUNUSED(enp))
return (0);
}
void
ef10_ev_fini(
__in efx_nic_t *enp)
{
_NOTE(ARGUNUSED(enp))
}
__checkReturn efx_rc_t
ef10_ev_qcreate(
__in efx_nic_t *enp,
__in unsigned int index,
__in efsys_mem_t *esmp,
__in size_t n,
__in uint32_t id,
__in efx_evq_t *eep)
{
efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
uint32_t irq;
efx_rc_t rc;
_NOTE(ARGUNUSED(id)) /* buftbl id managed by MC */
EFX_STATIC_ASSERT(ISP2(EFX_EVQ_MAXNEVS));
EFX_STATIC_ASSERT(ISP2(EFX_EVQ_MINNEVS));
if (!ISP2(n) || (n < EFX_EVQ_MINNEVS) || (n > EFX_EVQ_MAXNEVS)) {
rc = EINVAL;
goto fail1;
}
if (index >= encp->enc_evq_limit) {
rc = EINVAL;
goto fail2;
}
/* Set up the handler table */
eep->ee_rx = ef10_ev_rx;
eep->ee_tx = ef10_ev_tx;
eep->ee_driver = ef10_ev_driver;
eep->ee_drv_gen = ef10_ev_drv_gen;
eep->ee_mcdi = ef10_ev_mcdi;
/* Set up the event queue */
irq = index; /* INIT_EVQ expects function-relative vector number */
if ((rc = efx_mcdi_init_evq(enp, index, esmp, n, irq)) != 0)
goto fail3;
return (0);
fail3:
EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
EFSYS_PROBE1(fail1, efx_rc_t, rc);
return (rc);
}
void
ef10_ev_qdestroy(
__in efx_evq_t *eep)
{
efx_nic_t *enp = eep->ee_enp;
EFSYS_ASSERT(enp->en_family == EFX_FAMILY_HUNTINGTON ||
enp->en_family == EFX_FAMILY_MEDFORD);
(void) efx_mcdi_fini_evq(eep->ee_enp, eep->ee_index);
}
__checkReturn efx_rc_t
ef10_ev_qprime(
__in efx_evq_t *eep,
__in unsigned int count)
{
efx_nic_t *enp = eep->ee_enp;
uint32_t rptr;
efx_dword_t dword;
rptr = count & eep->ee_mask;
if (enp->en_nic_cfg.enc_bug35388_workaround) {
EFX_STATIC_ASSERT(EFX_EVQ_MINNEVS >
(1 << ERF_DD_EVQ_IND_RPTR_WIDTH));
EFX_STATIC_ASSERT(EFX_EVQ_MAXNEVS <
(1 << 2 * ERF_DD_EVQ_IND_RPTR_WIDTH));
EFX_POPULATE_DWORD_2(dword,
ERF_DD_EVQ_IND_RPTR_FLAGS,
EFE_DD_EVQ_IND_RPTR_FLAGS_HIGH,
ERF_DD_EVQ_IND_RPTR,
(rptr >> ERF_DD_EVQ_IND_RPTR_WIDTH));
EFX_BAR_TBL_WRITED(enp, ER_DD_EVQ_INDIRECT, eep->ee_index,
&dword, B_FALSE);
EFX_POPULATE_DWORD_2(dword,
ERF_DD_EVQ_IND_RPTR_FLAGS,
EFE_DD_EVQ_IND_RPTR_FLAGS_LOW,
ERF_DD_EVQ_IND_RPTR,
rptr & ((1 << ERF_DD_EVQ_IND_RPTR_WIDTH) - 1));
EFX_BAR_TBL_WRITED(enp, ER_DD_EVQ_INDIRECT, eep->ee_index,
&dword, B_FALSE);
} else {
EFX_POPULATE_DWORD_1(dword, ERF_DZ_EVQ_RPTR, rptr);
EFX_BAR_TBL_WRITED(enp, ER_DZ_EVQ_RPTR_REG, eep->ee_index,
&dword, B_FALSE);
}
return (0);
}
static __checkReturn efx_rc_t
efx_mcdi_driver_event(
__in efx_nic_t *enp,
__in uint32_t evq,
__in efx_qword_t data)
{
efx_mcdi_req_t req;
uint8_t payload[MAX(MC_CMD_DRIVER_EVENT_IN_LEN,
MC_CMD_DRIVER_EVENT_OUT_LEN)];
efx_rc_t rc;
req.emr_cmd = MC_CMD_DRIVER_EVENT;
req.emr_in_buf = payload;
req.emr_in_length = MC_CMD_DRIVER_EVENT_IN_LEN;
req.emr_out_buf = payload;
req.emr_out_length = MC_CMD_DRIVER_EVENT_OUT_LEN;
MCDI_IN_SET_DWORD(req, DRIVER_EVENT_IN_EVQ, evq);
MCDI_IN_SET_DWORD(req, DRIVER_EVENT_IN_DATA_LO,
EFX_QWORD_FIELD(data, EFX_DWORD_0));
MCDI_IN_SET_DWORD(req, DRIVER_EVENT_IN_DATA_HI,
EFX_QWORD_FIELD(data, EFX_DWORD_1));
efx_mcdi_execute(enp, &req);
if (req.emr_rc != 0) {
rc = req.emr_rc;
goto fail1;
}
return (0);
fail1:
EFSYS_PROBE1(fail1, efx_rc_t, rc);
return (rc);
}
void
ef10_ev_qpost(
__in efx_evq_t *eep,
__in uint16_t data)
{
efx_nic_t *enp = eep->ee_enp;
efx_qword_t event;
EFX_POPULATE_QWORD_3(event,
ESF_DZ_DRV_CODE, ESE_DZ_EV_CODE_DRV_GEN_EV,
ESF_DZ_DRV_SUB_CODE, 0,
ESF_DZ_DRV_SUB_DATA_DW0, (uint32_t)data);
(void) efx_mcdi_driver_event(enp, eep->ee_index, event);
}
__checkReturn efx_rc_t
ef10_ev_qmoderate(
__in efx_evq_t *eep,
__in unsigned int us)
{
efx_nic_t *enp = eep->ee_enp;
efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
efx_dword_t dword;
uint32_t timer_val, mode;
efx_rc_t rc;
if (us > encp->enc_evq_timer_max_us) {
rc = EINVAL;
goto fail1;
}
/* If the value is zero then disable the timer */
if (us == 0) {
timer_val = 0;
mode = FFE_CZ_TIMER_MODE_DIS;
} else {
/* Calculate the timer value in quanta */
timer_val = us * 1000 / encp->enc_evq_timer_quantum_ns;
/* Moderation value is base 0 so we need to deduct 1 */
if (timer_val > 0)
timer_val--;
mode = FFE_CZ_TIMER_MODE_INT_HLDOFF;
}
if (encp->enc_bug35388_workaround) {
EFX_POPULATE_DWORD_3(dword,
ERF_DD_EVQ_IND_TIMER_FLAGS,
EFE_DD_EVQ_IND_TIMER_FLAGS,
ERF_DD_EVQ_IND_TIMER_MODE, mode,
ERF_DD_EVQ_IND_TIMER_VAL, timer_val);
EFX_BAR_TBL_WRITED(enp, ER_DD_EVQ_INDIRECT,
eep->ee_index, &dword, 0);
} else {
EFX_POPULATE_DWORD_2(dword,
ERF_DZ_TC_TIMER_MODE, mode,
ERF_DZ_TC_TIMER_VAL, timer_val);
EFX_BAR_TBL_WRITED(enp, ER_DZ_EVQ_TMR_REG,
eep->ee_index, &dword, 0);
}
return (0);
fail1:
EFSYS_PROBE1(fail1, efx_rc_t, rc);
return (rc);
}
#if EFSYS_OPT_QSTATS
void
ef10_ev_qstats_update(
__in efx_evq_t *eep,
__inout_ecount(EV_NQSTATS) efsys_stat_t *stat)
{
unsigned int id;
for (id = 0; id < EV_NQSTATS; id++) {
efsys_stat_t *essp = &stat[id];
EFSYS_STAT_INCR(essp, eep->ee_stat[id]);
eep->ee_stat[id] = 0;
}
}
#endif /* EFSYS_OPT_QSTATS */
static __checkReturn boolean_t
ef10_ev_rx(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg)
{
efx_nic_t *enp = eep->ee_enp;
uint32_t size;
uint32_t label;
uint32_t mac_class;
uint32_t eth_tag_class;
uint32_t l3_class;
uint32_t l4_class;
uint32_t next_read_lbits;
uint16_t flags;
boolean_t cont;
boolean_t should_abort;
efx_evq_rxq_state_t *eersp;
unsigned int desc_count;
unsigned int last_used_id;
EFX_EV_QSTAT_INCR(eep, EV_RX);
/* Discard events after RXQ/TXQ errors */
if (enp->en_reset_flags & (EFX_RESET_RXQ_ERR | EFX_RESET_TXQ_ERR))
return (B_FALSE);
/* Basic packet information */
size = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_BYTES);
next_read_lbits = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_DSC_PTR_LBITS);
label = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_QLABEL);
eth_tag_class = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_ETH_TAG_CLASS);
mac_class = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_MAC_CLASS);
l3_class = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_L3_CLASS);
l4_class = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_L4_CLASS);
cont = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_CONT);
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_DROP_EVENT) != 0) {
/* Drop this event */
return (B_FALSE);
}
flags = 0;
if (cont != 0) {
/*
* This may be part of a scattered frame, or it may be a
* truncated frame if scatter is disabled on this RXQ.
* Overlength frames can be received if e.g. a VF is configured
* for 1500 MTU but connected to a port set to 9000 MTU
* (see bug56567).
* FIXME: There is not yet any driver that supports scatter on
* Huntington. Scatter support is required for OSX.
*/
flags |= EFX_PKT_CONT;
}
if (mac_class == ESE_DZ_MAC_CLASS_UCAST)
flags |= EFX_PKT_UNICAST;
/* Increment the count of descriptors read */
eersp = &eep->ee_rxq_state[label];
desc_count = (next_read_lbits - eersp->eers_rx_read_ptr) &
EFX_MASK32(ESF_DZ_RX_DSC_PTR_LBITS);
eersp->eers_rx_read_ptr += desc_count;
/*
 * FIXME: add error checking to make sure this is a batched event.
* This could also be an aborted scatter, see Bug36629.
*/
if (desc_count > 1) {
EFX_EV_QSTAT_INCR(eep, EV_RX_BATCH);
flags |= EFX_PKT_PREFIX_LEN;
}
- /* Calculate the index of the the last descriptor consumed */
+ /* Calculate the index of the last descriptor consumed */
last_used_id = (eersp->eers_rx_read_ptr - 1) & eersp->eers_rx_mask;
/* Check for errors that invalidate checksum and L3/L4 fields */
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_ECC_ERR) != 0) {
/* RX frame truncated (error flag is misnamed) */
EFX_EV_QSTAT_INCR(eep, EV_RX_FRM_TRUNC);
flags |= EFX_DISCARD;
goto deliver;
}
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_ECRC_ERR) != 0) {
/* Bad Ethernet frame CRC */
EFX_EV_QSTAT_INCR(eep, EV_RX_ETH_CRC_ERR);
flags |= EFX_DISCARD;
goto deliver;
}
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_PARSE_INCOMPLETE)) {
/*
* Hardware parse failed, due to malformed headers
* or headers that are too long for the parser.
* Headers and checksums must be validated by the host.
*/
// TODO: EFX_EV_QSTAT_INCR(eep, EV_RX_PARSE_INCOMPLETE);
goto deliver;
}
if ((eth_tag_class == ESE_DZ_ETH_TAG_CLASS_VLAN1) ||
(eth_tag_class == ESE_DZ_ETH_TAG_CLASS_VLAN2)) {
flags |= EFX_PKT_VLAN_TAGGED;
}
switch (l3_class) {
case ESE_DZ_L3_CLASS_IP4:
case ESE_DZ_L3_CLASS_IP4_FRAG:
flags |= EFX_PKT_IPV4;
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_IPCKSUM_ERR)) {
EFX_EV_QSTAT_INCR(eep, EV_RX_IPV4_HDR_CHKSUM_ERR);
} else {
flags |= EFX_CKSUM_IPV4;
}
if (l4_class == ESE_DZ_L4_CLASS_TCP) {
EFX_EV_QSTAT_INCR(eep, EV_RX_TCP_IPV4);
flags |= EFX_PKT_TCP;
} else if (l4_class == ESE_DZ_L4_CLASS_UDP) {
EFX_EV_QSTAT_INCR(eep, EV_RX_UDP_IPV4);
flags |= EFX_PKT_UDP;
} else {
EFX_EV_QSTAT_INCR(eep, EV_RX_OTHER_IPV4);
}
break;
case ESE_DZ_L3_CLASS_IP6:
case ESE_DZ_L3_CLASS_IP6_FRAG:
flags |= EFX_PKT_IPV6;
if (l4_class == ESE_DZ_L4_CLASS_TCP) {
EFX_EV_QSTAT_INCR(eep, EV_RX_TCP_IPV6);
flags |= EFX_PKT_TCP;
} else if (l4_class == ESE_DZ_L4_CLASS_UDP) {
EFX_EV_QSTAT_INCR(eep, EV_RX_UDP_IPV6);
flags |= EFX_PKT_UDP;
} else {
EFX_EV_QSTAT_INCR(eep, EV_RX_OTHER_IPV6);
}
break;
default:
EFX_EV_QSTAT_INCR(eep, EV_RX_NON_IP);
break;
}
if (flags & (EFX_PKT_TCP | EFX_PKT_UDP)) {
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_TCPUDP_CKSUM_ERR)) {
EFX_EV_QSTAT_INCR(eep, EV_RX_TCP_UDP_CHKSUM_ERR);
} else {
flags |= EFX_CKSUM_TCPUDP;
}
}
deliver:
/* If we're not discarding the packet then it is ok */
if (~flags & EFX_DISCARD)
EFX_EV_QSTAT_INCR(eep, EV_RX_OK);
EFSYS_ASSERT(eecp->eec_rx != NULL);
should_abort = eecp->eec_rx(arg, label, last_used_id, size, flags);
return (should_abort);
}
static __checkReturn boolean_t
ef10_ev_tx(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg)
{
efx_nic_t *enp = eep->ee_enp;
uint32_t id;
uint32_t label;
boolean_t should_abort;
EFX_EV_QSTAT_INCR(eep, EV_TX);
/* Discard events after RXQ/TXQ errors */
if (enp->en_reset_flags & (EFX_RESET_RXQ_ERR | EFX_RESET_TXQ_ERR))
return (B_FALSE);
if (EFX_QWORD_FIELD(*eqp, ESF_DZ_TX_DROP_EVENT) != 0) {
/* Drop this event */
return (B_FALSE);
}
/* Per-packet TX completion (was per-descriptor for Falcon/Siena) */
id = EFX_QWORD_FIELD(*eqp, ESF_DZ_TX_DESCR_INDX);
label = EFX_QWORD_FIELD(*eqp, ESF_DZ_TX_QLABEL);
EFSYS_PROBE2(tx_complete, uint32_t, label, uint32_t, id);
EFSYS_ASSERT(eecp->eec_tx != NULL);
should_abort = eecp->eec_tx(arg, label, id);
return (should_abort);
}
static __checkReturn boolean_t
ef10_ev_driver(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg)
{
unsigned int code;
boolean_t should_abort;
EFX_EV_QSTAT_INCR(eep, EV_DRIVER);
should_abort = B_FALSE;
code = EFX_QWORD_FIELD(*eqp, ESF_DZ_DRV_SUB_CODE);
switch (code) {
case ESE_DZ_DRV_TIMER_EV: {
uint32_t id;
id = EFX_QWORD_FIELD(*eqp, ESF_DZ_DRV_TMR_ID);
EFSYS_ASSERT(eecp->eec_timer != NULL);
should_abort = eecp->eec_timer(arg, id);
break;
}
case ESE_DZ_DRV_WAKE_UP_EV: {
uint32_t id;
id = EFX_QWORD_FIELD(*eqp, ESF_DZ_DRV_EVQ_ID);
EFSYS_ASSERT(eecp->eec_wake_up != NULL);
should_abort = eecp->eec_wake_up(arg, id);
break;
}
case ESE_DZ_DRV_START_UP_EV:
EFSYS_ASSERT(eecp->eec_initialized != NULL);
should_abort = eecp->eec_initialized(arg);
break;
default:
EFSYS_PROBE3(bad_event, unsigned int, eep->ee_index,
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_1),
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_0));
break;
}
return (should_abort);
}
static __checkReturn boolean_t
ef10_ev_drv_gen(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg)
{
uint32_t data;
boolean_t should_abort;
EFX_EV_QSTAT_INCR(eep, EV_DRV_GEN);
should_abort = B_FALSE;
data = EFX_QWORD_FIELD(*eqp, ESF_DZ_DRV_SUB_DATA_DW0);
if (data >= ((uint32_t)1 << 16)) {
EFSYS_PROBE3(bad_event, unsigned int, eep->ee_index,
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_1),
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_0));
return (B_TRUE);
}
EFSYS_ASSERT(eecp->eec_software != NULL);
should_abort = eecp->eec_software(arg, (uint16_t)data);
return (should_abort);
}
static __checkReturn boolean_t
ef10_ev_mcdi(
__in efx_evq_t *eep,
__in efx_qword_t *eqp,
__in const efx_ev_callbacks_t *eecp,
__in_opt void *arg)
{
efx_nic_t *enp = eep->ee_enp;
unsigned code;
boolean_t should_abort = B_FALSE;
EFX_EV_QSTAT_INCR(eep, EV_MCDI_RESPONSE);
code = EFX_QWORD_FIELD(*eqp, MCDI_EVENT_CODE);
switch (code) {
case MCDI_EVENT_CODE_BADSSERT:
efx_mcdi_ev_death(enp, EINTR);
break;
case MCDI_EVENT_CODE_CMDDONE:
efx_mcdi_ev_cpl(enp,
MCDI_EV_FIELD(eqp, CMDDONE_SEQ),
MCDI_EV_FIELD(eqp, CMDDONE_DATALEN),
MCDI_EV_FIELD(eqp, CMDDONE_ERRNO));
break;
#if EFSYS_OPT_MCDI_PROXY_AUTH
case MCDI_EVENT_CODE_PROXY_RESPONSE:
/*
* This event notifies a function that an authorization request
* has been processed. If the request was authorized then the
* function can now re-send the original MCDI request.
* See SF-113652-SW "SR-IOV Proxied Network Access Control".
*/
efx_mcdi_ev_proxy_response(enp,
MCDI_EV_FIELD(eqp, PROXY_RESPONSE_HANDLE),
MCDI_EV_FIELD(eqp, PROXY_RESPONSE_RC));
break;
#endif /* EFSYS_OPT_MCDI_PROXY_AUTH */
case MCDI_EVENT_CODE_LINKCHANGE: {
efx_link_mode_t link_mode;
ef10_phy_link_ev(enp, eqp, &link_mode);
should_abort = eecp->eec_link_change(arg, link_mode);
break;
}
case MCDI_EVENT_CODE_SENSOREVT: {
#if EFSYS_OPT_MON_STATS
efx_mon_stat_t id;
efx_mon_stat_value_t value;
efx_rc_t rc;
/* Decode monitor stat for MCDI sensor (if supported) */
if ((rc = mcdi_mon_ev(enp, eqp, &id, &value)) == 0) {
/* Report monitor stat change */
should_abort = eecp->eec_monitor(arg, id, value);
} else if (rc == ENOTSUP) {
should_abort = eecp->eec_exception(arg,
EFX_EXCEPTION_UNKNOWN_SENSOREVT,
MCDI_EV_FIELD(eqp, DATA));
} else {
EFSYS_ASSERT(rc == ENODEV); /* Wrong port */
}
#endif
break;
}
case MCDI_EVENT_CODE_SCHEDERR:
/* Informational only */
break;
case MCDI_EVENT_CODE_REBOOT:
/* Falcon/Siena only (should not be seen with Huntington). */
efx_mcdi_ev_death(enp, EIO);
break;
case MCDI_EVENT_CODE_MC_REBOOT:
/* MC_REBOOT event is used for Huntington (EF10) and later. */
efx_mcdi_ev_death(enp, EIO);
break;
case MCDI_EVENT_CODE_MAC_STATS_DMA:
#if EFSYS_OPT_MAC_STATS
if (eecp->eec_mac_stats != NULL) {
eecp->eec_mac_stats(arg,
MCDI_EV_FIELD(eqp, MAC_STATS_DMA_GENERATION));
}
#endif
break;
case MCDI_EVENT_CODE_FWALERT: {
uint32_t reason = MCDI_EV_FIELD(eqp, FWALERT_REASON);
if (reason == MCDI_EVENT_FWALERT_REASON_SRAM_ACCESS)
should_abort = eecp->eec_exception(arg,
EFX_EXCEPTION_FWALERT_SRAM,
MCDI_EV_FIELD(eqp, FWALERT_DATA));
else
should_abort = eecp->eec_exception(arg,
EFX_EXCEPTION_UNKNOWN_FWALERT,
MCDI_EV_FIELD(eqp, DATA));
break;
}
case MCDI_EVENT_CODE_TX_ERR: {
/*
* After a TXQ error is detected, firmware sends a TX_ERR event.
* This may be followed by TX completions (which we discard),
* and then finally by a TX_FLUSH event. Firmware destroys the
* TXQ automatically after sending the TX_FLUSH event.
*/
enp->en_reset_flags |= EFX_RESET_TXQ_ERR;
EFSYS_PROBE2(tx_descq_err,
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_1),
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_0));
/* Inform the driver that a reset is required. */
eecp->eec_exception(arg, EFX_EXCEPTION_TX_ERROR,
MCDI_EV_FIELD(eqp, TX_ERR_DATA));
break;
}
case MCDI_EVENT_CODE_TX_FLUSH: {
uint32_t txq_index = MCDI_EV_FIELD(eqp, TX_FLUSH_TXQ);
/*
* EF10 firmware sends two TX_FLUSH events: one to the txq's
* event queue, and one to evq 0 (with TX_FLUSH_TO_DRIVER set).
* We want to wait for all completions, so ignore the events
* with TX_FLUSH_TO_DRIVER.
*/
if (MCDI_EV_FIELD(eqp, TX_FLUSH_TO_DRIVER) != 0) {
should_abort = B_FALSE;
break;
}
EFX_EV_QSTAT_INCR(eep, EV_DRIVER_TX_DESCQ_FLS_DONE);
EFSYS_PROBE1(tx_descq_fls_done, uint32_t, txq_index);
EFSYS_ASSERT(eecp->eec_txq_flush_done != NULL);
should_abort = eecp->eec_txq_flush_done(arg, txq_index);
break;
}
case MCDI_EVENT_CODE_RX_ERR: {
/*
* After an RXQ error is detected, firmware sends an RX_ERR
* event. This may be followed by RX events (which we discard),
* and then finally by an RX_FLUSH event. Firmware destroys the
* RXQ automatically after sending the RX_FLUSH event.
*/
enp->en_reset_flags |= EFX_RESET_RXQ_ERR;
EFSYS_PROBE2(rx_descq_err,
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_1),
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_0));
/* Inform the driver that a reset is required. */
eecp->eec_exception(arg, EFX_EXCEPTION_RX_ERROR,
MCDI_EV_FIELD(eqp, RX_ERR_DATA));
break;
}
case MCDI_EVENT_CODE_RX_FLUSH: {
uint32_t rxq_index = MCDI_EV_FIELD(eqp, RX_FLUSH_RXQ);
/*
* EF10 firmware sends two RX_FLUSH events: one to the rxq's
* event queue, and one to evq 0 (with RX_FLUSH_TO_DRIVER set).
* We want to wait for all completions, so ignore the events
* with RX_FLUSH_TO_DRIVER.
*/
if (MCDI_EV_FIELD(eqp, RX_FLUSH_TO_DRIVER) != 0) {
should_abort = B_FALSE;
break;
}
EFX_EV_QSTAT_INCR(eep, EV_DRIVER_RX_DESCQ_FLS_DONE);
EFSYS_PROBE1(rx_descq_fls_done, uint32_t, rxq_index);
EFSYS_ASSERT(eecp->eec_rxq_flush_done != NULL);
should_abort = eecp->eec_rxq_flush_done(arg, rxq_index);
break;
}
default:
EFSYS_PROBE3(bad_event, unsigned int, eep->ee_index,
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_1),
uint32_t, EFX_QWORD_FIELD(*eqp, EFX_DWORD_0));
break;
}
return (should_abort);
}
void
ef10_ev_rxlabel_init(
__in efx_evq_t *eep,
__in efx_rxq_t *erp,
__in unsigned int label)
{
efx_evq_rxq_state_t *eersp;
EFSYS_ASSERT3U(label, <, EFX_ARRAY_SIZE(eep->ee_rxq_state));
eersp = &eep->ee_rxq_state[label];
EFSYS_ASSERT3U(eersp->eers_rx_mask, ==, 0);
eersp->eers_rx_read_ptr = 0;
eersp->eers_rx_mask = erp->er_mask;
}
void
ef10_ev_rxlabel_fini(
__in efx_evq_t *eep,
__in unsigned int label)
{
efx_evq_rxq_state_t *eersp;
EFSYS_ASSERT3U(label, <, EFX_ARRAY_SIZE(eep->ee_rxq_state));
eersp = &eep->ee_rxq_state[label];
EFSYS_ASSERT3U(eersp->eers_rx_mask, !=, 0);
eersp->eers_rx_read_ptr = 0;
eersp->eers_rx_mask = 0;
}
#endif /* EFSYS_OPT_HUNTINGTON || EFSYS_OPT_MEDFORD */
Index: head/sys/mips/conf/DIR-825C1.hints
===================================================================
--- head/sys/mips/conf/DIR-825C1.hints (revision 300049)
+++ head/sys/mips/conf/DIR-825C1.hints (revision 300050)
@@ -1,154 +1,154 @@
# $FreeBSD$
# mdiobus0 on arge0
hint.argemdio.0.at="nexus0"
hint.argemdio.0.maddr=0x19000000
hint.argemdio.0.msize=0x1000
hint.argemdio.0.order=0
-# 0x1ffe0004 is the the "unit MAC".
+# 0x1ffe0004 is the "unit MAC".
# 0x1ffe0018 is the second "MAC".
# Right now this doesn't have any option for more than one
# "unit MACs", so:
# ath0: unit MAC
# ath1: unit MAC + 1
# arge0: unit MAC + 2
# arge1: leave as default; not used.
hint.ar71xx.0.eeprom_mac_addr=0x1ffe0004
hint.ar71xx.0.eeprom_mac_isascii=1
hint.ar71xx_mac_map.0.devid=ath
hint.ar71xx_mac_map.0.unitid=0
hint.ar71xx_mac_map.0.offset=0
hint.ar71xx_mac_map.0.is_local=0
hint.ar71xx_mac_map.1.devid=ath
hint.ar71xx_mac_map.1.unitid=1
hint.ar71xx_mac_map.1.offset=1
hint.ar71xx_mac_map.1.is_local=0
hint.ar71xx_mac_map.2.devid=arge
hint.ar71xx_mac_map.2.unitid=0
hint.ar71xx_mac_map.2.offset=2
hint.ar71xx_mac_map.2.is_local=0
# DIR-825C1 GMAC configuration
# + AR934X_ETH_CFG_RGMII_GMAC0 (1 << 0)
# Onboard AR9344 10/100 switch is not wired up
hint.ar934x_gmac.0.gmac_cfg=0x1
# GMAC0 here - connected to an AR8327
hint.arswitch.0.at="mdio0"
hint.arswitch.0.is_7240=0
hint.arswitch.0.is_9340=0 # not the internal switch!
hint.arswitch.0.numphys=5
hint.arswitch.0.phy4cpu=0
hint.arswitch.0.is_rgmii=1
hint.arswitch.0.is_gmii=0
# Other AR8327 configuration parameters
# AR8327_PAD_MAC_RGMII
hint.arswitch.0.pad.0.mode=6
hint.arswitch.0.pad.0.txclk_delay_en=1
hint.arswitch.0.pad.0.rxclk_delay_en=1
# AR8327_CLK_DELAY_SEL1
hint.arswitch.0.pad.0.txclk_delay_sel=1
# AR8327_CLK_DELAY_SEL2
hint.arswitch.0.pad.0.rxclk_delay_sel=2
# XXX there's no LED management just yet!
hint.arswitch.0.led.ctrl0=0x00000000
hint.arswitch.0.led.ctrl1=0xc737c737
hint.arswitch.0.led.ctrl2=0x00000000
hint.arswitch.0.led.ctrl3=0x00c30c00
hint.arswitch.0.led.open_drain=1
# force_link=1 is required for the rest of the parameters
# to be configured.
hint.arswitch.0.port.0.force_link=1
hint.arswitch.0.port.0.speed=1000
hint.arswitch.0.port.0.duplex=1
hint.arswitch.0.port.0.txpause=1
hint.arswitch.0.port.0.rxpause=1
# XXX OpenWRT DB120 BSP doesn't have media/duplex set?
hint.arge.0.phymask=0x0
hint.arge.0.media=1000
hint.arge.0.fduplex=1
hint.arge.0.miimode=3 # RGMII
hint.arge.0.pll_1000=0x06000000
# ath0: Where the ART is - last 64k in the flash
hint.ath.0.eepromaddr=0x1fff0000
hint.ath.0.eepromsize=16384
# ath1: it's different; it's a PCIe attached device, so
# we instead need to teach the PCIe bridge code about it
# (ie, the 'early pci fixup' stuff that programs the PCIe
# host registers on the NIC) and then we teach ath where
# to find it.
# ath1 hint - pcie slot 0
hint.pcib.0.bus.0.0.0.ath_fixup_addr=0x1fff4000
hint.pcib.0.bus.0.0.0.ath_fixup_size=16384
# ath1 - eeprom comes from here
hint.ath.1.eeprom_firmware="pcib.0.bus.0.0.0.eeprom_firmware"
# flash layout:
# m25p80 spi0.0: mx25l12805d (16384 Kbytes)
#
# uBoot firmware variables:
# bootargs=console=ttyS0,115200 root=31:02 rootfstype=jffs2 init=/sbin/init
# mtdparts=ath-nor0:256k(u-boot),64k(u-boot-env),6336k(rootfs),1408k(uImage),64k(mib0),64k(ART)
# 64KiB u-boot
hint.map.0.at="flash/spi0"
hint.map.0.start=0x00000000
hint.map.0.end=0x00010000
hint.map.0.name="u-boot"
hint.map.0.readonly=1
# 64KiB u-boot-env
hint.map.1.at="flash/spi0"
hint.map.1.start=0x00010000
hint.map.1.end=0x00020000
hint.map.1.name="u-boot-env"
hint.map.1.readonly=1
# 1344KiB kernel
hint.map.2.at="flash/spi0"
hint.map.2.start=0x00020000
hint.map.2.end="search:0x00020000:0x10000:.!/bin/sh"
hint.map.2.name="kernel"
hint.map.2.readonly=1
# 14592KiB rootfs
hint.map.3.at="flash/spi0"
hint.map.3.start="search:0x00020000:0x10000:.!/bin/sh"
hint.map.3.end=0x00fb0000
hint.map.3.name="rootfs"
hint.map.3.readonly=1
# 192KiB lang -- remapped to cfg
hint.map.4.at="flash/spi0"
hint.map.4.start=0x00fb0000
hint.map.4.end=0x00fe0000
hint.map.4.name="cfg"
hint.map.4.readonly=0
# 64KiB mac
hint.map.5.at="flash/spi0"
hint.map.5.start=0x00fe0000
hint.map.5.end=0x00ff0000
hint.map.5.name="mac"
hint.map.5.readonly=1
# 64KiB art
hint.map.6.at="flash/spi0"
hint.map.6.start=0x00ff0000
hint.map.6.end=0x01000000
hint.map.6.name="art"
hint.map.6.readonly=1
Index: head/sys/net/altq/altq_cbq.c
===================================================================
--- head/sys/net/altq/altq_cbq.c (revision 300049)
+++ head/sys/net/altq/altq_cbq.c (revision 300050)
@@ -1,1169 +1,1169 @@
/*-
* Copyright (c) Sun Microsystems, Inc. 1993-1998 All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the SMCC Technology
* Development Group at Sun Microsystems, Inc.
*
* 4. The name of the Sun Microsystems, Inc nor may not be used to endorse or
* promote products derived from this software without specific prior
* written permission.
*
* SUN MICROSYSTEMS DOES NOT CLAIM MERCHANTABILITY OF THIS SOFTWARE OR THE
* SUITABILITY OF THIS SOFTWARE FOR ANY PARTICULAR PURPOSE. The software is
* provided "as is" without express or implied warranty of any kind.
*
* These notices must be retained in any copies of any part of this software.
*
* $KAME: altq_cbq.c,v 1.19 2003/09/17 14:23:25 kjc Exp $
* $FreeBSD$
*/
#include "opt_altq.h"
#include "opt_inet.h"
#include "opt_inet6.h"
#ifdef ALTQ_CBQ /* cbq is enabled by ALTQ_CBQ option in opt_altq.h */
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/errno.h>
#include <sys/time.h>
#ifdef ALTQ3_COMPAT
#include <sys/uio.h>
#include <sys/kernel.h>
#endif
#include <net/if.h>
#include <net/if_var.h>
#include <netinet/in.h>
#include <netpfil/pf/pf.h>
#include <netpfil/pf/pf_altq.h>
#include <netpfil/pf/pf_mtag.h>
#include <net/altq/altq.h>
#include <net/altq/altq_cbq.h>
#ifdef ALTQ3_COMPAT
#include <net/altq/altq_conf.h>
#endif
#ifdef ALTQ3_COMPAT
/*
* Local Data structures.
*/
static cbq_state_t *cbq_list = NULL;
#endif
/*
* Forward Declarations.
*/
static int cbq_class_destroy(cbq_state_t *, struct rm_class *);
static struct rm_class *clh_to_clp(cbq_state_t *, u_int32_t);
static int cbq_clear_interface(cbq_state_t *);
static int cbq_request(struct ifaltq *, int, void *);
static int cbq_enqueue(struct ifaltq *, struct mbuf *,
struct altq_pktattr *);
static struct mbuf *cbq_dequeue(struct ifaltq *, int);
static void cbqrestart(struct ifaltq *);
static void get_class_stats(class_stats_t *, struct rm_class *);
static void cbq_purge(cbq_state_t *);
#ifdef ALTQ3_COMPAT
static int cbq_add_class(struct cbq_add_class *);
static int cbq_delete_class(struct cbq_delete_class *);
static int cbq_modify_class(struct cbq_modify_class *);
static int cbq_class_create(cbq_state_t *, struct cbq_add_class *,
struct rm_class *, struct rm_class *);
static int cbq_clear_hierarchy(struct cbq_interface *);
static int cbq_set_enable(struct cbq_interface *, int);
static int cbq_ifattach(struct cbq_interface *);
static int cbq_ifdetach(struct cbq_interface *);
static int cbq_getstats(struct cbq_getstats *);
static int cbq_add_filter(struct cbq_add_filter *);
static int cbq_delete_filter(struct cbq_delete_filter *);
#endif /* ALTQ3_COMPAT */
/*
* int
* cbq_class_destroy(cbq_mod_state_t *, struct rm_class *) - This
* function destroys a given traffic class. Before destroying
* the class, all traffic for that class is released.
*/
static int
cbq_class_destroy(cbq_state_t *cbqp, struct rm_class *cl)
{
int i;
/* delete the class */
rmc_delete_class(&cbqp->ifnp, cl);
/*
* free the class handle
*/
for (i = 0; i < CBQ_MAX_CLASSES; i++)
if (cbqp->cbq_class_tbl[i] == cl)
cbqp->cbq_class_tbl[i] = NULL;
if (cl == cbqp->ifnp.root_)
cbqp->ifnp.root_ = NULL;
if (cl == cbqp->ifnp.default_)
cbqp->ifnp.default_ = NULL;
#ifdef ALTQ3_COMPAT
if (cl == cbqp->ifnp.ctl_)
cbqp->ifnp.ctl_ = NULL;
#endif
return (0);
}
/* convert class handle to class pointer */
static struct rm_class *
clh_to_clp(cbq_state_t *cbqp, u_int32_t chandle)
{
int i;
struct rm_class *cl;
if (chandle == 0)
return (NULL);
/*
* first, try optimistically the slot matching the lower bits of
* the handle. if it fails, do the linear table search.
*/
i = chandle % CBQ_MAX_CLASSES;
if ((cl = cbqp->cbq_class_tbl[i]) != NULL &&
cl->stats_.handle == chandle)
return (cl);
for (i = 0; i < CBQ_MAX_CLASSES; i++)
if ((cl = cbqp->cbq_class_tbl[i]) != NULL &&
cl->stats_.handle == chandle)
return (cl);
return (NULL);
}
static int
cbq_clear_interface(cbq_state_t *cbqp)
{
int again, i;
struct rm_class *cl;
#ifdef ALTQ3_CLFIER_COMPAT
/* free the filters for this interface */
acc_discard_filters(&cbqp->cbq_classifier, NULL, 1);
#endif
/* clear out the classes now */
do {
again = 0;
for (i = 0; i < CBQ_MAX_CLASSES; i++) {
if ((cl = cbqp->cbq_class_tbl[i]) != NULL) {
if (is_a_parent_class(cl))
again++;
else {
cbq_class_destroy(cbqp, cl);
cbqp->cbq_class_tbl[i] = NULL;
if (cl == cbqp->ifnp.root_)
cbqp->ifnp.root_ = NULL;
if (cl == cbqp->ifnp.default_)
cbqp->ifnp.default_ = NULL;
#ifdef ALTQ3_COMPAT
if (cl == cbqp->ifnp.ctl_)
cbqp->ifnp.ctl_ = NULL;
#endif
}
}
}
} while (again);
return (0);
}
static int
cbq_request(struct ifaltq *ifq, int req, void *arg)
{
cbq_state_t *cbqp = (cbq_state_t *)ifq->altq_disc;
IFQ_LOCK_ASSERT(ifq);
switch (req) {
case ALTRQ_PURGE:
cbq_purge(cbqp);
break;
}
return (0);
}
/* copy the stats info in rm_class to class_states_t */
static void
get_class_stats(class_stats_t *statsp, struct rm_class *cl)
{
statsp->xmit_cnt = cl->stats_.xmit_cnt;
statsp->drop_cnt = cl->stats_.drop_cnt;
statsp->over = cl->stats_.over;
statsp->borrows = cl->stats_.borrows;
statsp->overactions = cl->stats_.overactions;
statsp->delays = cl->stats_.delays;
statsp->depth = cl->depth_;
statsp->priority = cl->pri_;
statsp->maxidle = cl->maxidle_;
statsp->minidle = cl->minidle_;
statsp->offtime = cl->offtime_;
statsp->qmax = qlimit(cl->q_);
statsp->ns_per_byte = cl->ns_per_byte_;
statsp->wrr_allot = cl->w_allotment_;
statsp->qcnt = qlen(cl->q_);
statsp->avgidle = cl->avgidle_;
statsp->qtype = qtype(cl->q_);
#ifdef ALTQ_RED
if (q_is_red(cl->q_))
red_getstats(cl->red_, &statsp->red[0]);
#endif
#ifdef ALTQ_RIO
if (q_is_rio(cl->q_))
rio_getstats((rio_t *)cl->red_, &statsp->red[0]);
#endif
#ifdef ALTQ_CODEL
if (q_is_codel(cl->q_))
codel_getstats(cl->codel_, &statsp->codel);
#endif
}
int
cbq_pfattach(struct pf_altq *a)
{
struct ifnet *ifp;
int s, error;
if ((ifp = ifunit(a->ifname)) == NULL || a->altq_disc == NULL)
return (EINVAL);
s = splnet();
error = altq_attach(&ifp->if_snd, ALTQT_CBQ, a->altq_disc,
cbq_enqueue, cbq_dequeue, cbq_request, NULL, NULL);
splx(s);
return (error);
}
int
cbq_add_altq(struct pf_altq *a)
{
cbq_state_t *cbqp;
struct ifnet *ifp;
if ((ifp = ifunit(a->ifname)) == NULL)
return (EINVAL);
if (!ALTQ_IS_READY(&ifp->if_snd))
return (ENODEV);
/* allocate and initialize cbq_state_t */
cbqp = malloc(sizeof(cbq_state_t), M_DEVBUF, M_NOWAIT | M_ZERO);
if (cbqp == NULL)
return (ENOMEM);
CALLOUT_INIT(&cbqp->cbq_callout);
cbqp->cbq_qlen = 0;
cbqp->ifnp.ifq_ = &ifp->if_snd; /* keep the ifq */
/* keep the state in pf_altq */
a->altq_disc = cbqp;
return (0);
}
int
cbq_remove_altq(struct pf_altq *a)
{
cbq_state_t *cbqp;
if ((cbqp = a->altq_disc) == NULL)
return (EINVAL);
a->altq_disc = NULL;
cbq_clear_interface(cbqp);
if (cbqp->ifnp.default_)
cbq_class_destroy(cbqp, cbqp->ifnp.default_);
if (cbqp->ifnp.root_)
cbq_class_destroy(cbqp, cbqp->ifnp.root_);
/* deallocate cbq_state_t */
free(cbqp, M_DEVBUF);
return (0);
}
int
cbq_add_queue(struct pf_altq *a)
{
struct rm_class *borrow, *parent;
cbq_state_t *cbqp;
struct rm_class *cl;
struct cbq_opts *opts;
int i;
if ((cbqp = a->altq_disc) == NULL)
return (EINVAL);
if (a->qid == 0)
return (EINVAL);
/*
* find a free slot in the class table. if the slot matching
* the lower bits of qid is free, use this slot. otherwise,
* use the first free slot.
*/
i = a->qid % CBQ_MAX_CLASSES;
if (cbqp->cbq_class_tbl[i] != NULL) {
for (i = 0; i < CBQ_MAX_CLASSES; i++)
if (cbqp->cbq_class_tbl[i] == NULL)
break;
if (i == CBQ_MAX_CLASSES)
return (EINVAL);
}
opts = &a->pq_u.cbq_opts;
/* check parameters */
if (a->priority >= CBQ_MAXPRI)
return (EINVAL);
/* Get pointers to parent and borrow classes. */
parent = clh_to_clp(cbqp, a->parent_qid);
if (opts->flags & CBQCLF_BORROW)
borrow = parent;
else
borrow = NULL;
/*
* A class must borrow from its parent or it cannot
* borrow at all. Hence, borrow can be null.
*/
if (parent == NULL && (opts->flags & CBQCLF_ROOTCLASS) == 0) {
printf("cbq_add_queue: no parent class!\n");
return (EINVAL);
}
if ((borrow != parent) && (borrow != NULL)) {
printf("cbq_add_class: borrow class != parent\n");
return (EINVAL);
}
/*
* check parameters
*/
switch (opts->flags & CBQCLF_CLASSMASK) {
case CBQCLF_ROOTCLASS:
if (parent != NULL)
return (EINVAL);
if (cbqp->ifnp.root_)
return (EINVAL);
break;
case CBQCLF_DEFCLASS:
if (cbqp->ifnp.default_)
return (EINVAL);
break;
case 0:
if (a->qid == 0)
return (EINVAL);
break;
default:
/* more than two flags bits set */
return (EINVAL);
}
/*
* create a class. if this is a root class, initialize the
* interface.
*/
if ((opts->flags & CBQCLF_CLASSMASK) == CBQCLF_ROOTCLASS) {
rmc_init(cbqp->ifnp.ifq_, &cbqp->ifnp, opts->ns_per_byte,
cbqrestart, a->qlimit, RM_MAXQUEUED,
opts->maxidle, opts->minidle, opts->offtime,
opts->flags);
cl = cbqp->ifnp.root_;
} else {
cl = rmc_newclass(a->priority,
&cbqp->ifnp, opts->ns_per_byte,
rmc_delay_action, a->qlimit, parent, borrow,
opts->maxidle, opts->minidle, opts->offtime,
opts->pktsize, opts->flags);
}
if (cl == NULL)
return (ENOMEM);
/* return handle to user space. */
cl->stats_.handle = a->qid;
cl->stats_.depth = cl->depth_;
/* save the allocated class */
cbqp->cbq_class_tbl[i] = cl;
if ((opts->flags & CBQCLF_CLASSMASK) == CBQCLF_DEFCLASS)
cbqp->ifnp.default_ = cl;
return (0);
}
int
cbq_remove_queue(struct pf_altq *a)
{
struct rm_class *cl;
cbq_state_t *cbqp;
int i;
if ((cbqp = a->altq_disc) == NULL)
return (EINVAL);
if ((cl = clh_to_clp(cbqp, a->qid)) == NULL)
return (EINVAL);
/* if we are a parent class, then return an error. */
if (is_a_parent_class(cl))
return (EINVAL);
/* delete the class */
rmc_delete_class(&cbqp->ifnp, cl);
/*
* free the class handle
*/
for (i = 0; i < CBQ_MAX_CLASSES; i++)
if (cbqp->cbq_class_tbl[i] == cl) {
cbqp->cbq_class_tbl[i] = NULL;
if (cl == cbqp->ifnp.root_)
cbqp->ifnp.root_ = NULL;
if (cl == cbqp->ifnp.default_)
cbqp->ifnp.default_ = NULL;
break;
}
return (0);
}
int
cbq_getqstats(struct pf_altq *a, void *ubuf, int *nbytes)
{
cbq_state_t *cbqp;
struct rm_class *cl;
class_stats_t stats;
int error = 0;
if ((cbqp = altq_lookup(a->ifname, ALTQT_CBQ)) == NULL)
return (EBADF);
if ((cl = clh_to_clp(cbqp, a->qid)) == NULL)
return (EINVAL);
if (*nbytes < sizeof(stats))
return (EINVAL);
get_class_stats(&stats, cl);
if ((error = copyout((caddr_t)&stats, ubuf, sizeof(stats))) != 0)
return (error);
*nbytes = sizeof(stats);
return (0);
}
/*
* int
* cbq_enqueue(struct ifaltq *ifq, struct mbuf *m, struct altq_pktattr *pattr)
* - Queue data packets.
*
* cbq_enqueue is set to ifp->if_altqenqueue and called by an upper
* layer (e.g. ether_output). cbq_enqueue queues the given packet
* to the cbq, then invokes the driver's start routine.
*
* Assumptions: called in splimp
* Returns: 0 if the queueing is successful.
* ENOBUFS if a packet drop occurred as a result of
* the queueing.
*/
static int
cbq_enqueue(struct ifaltq *ifq, struct mbuf *m, struct altq_pktattr *pktattr)
{
cbq_state_t *cbqp = (cbq_state_t *)ifq->altq_disc;
struct rm_class *cl;
struct pf_mtag *t;
int len;
IFQ_LOCK_ASSERT(ifq);
/* grab class set by classifier */
if ((m->m_flags & M_PKTHDR) == 0) {
/* should not happen */
printf("altq: packet for %s does not have pkthdr\n",
ifq->altq_ifp->if_xname);
m_freem(m);
return (ENOBUFS);
}
cl = NULL;
if ((t = pf_find_mtag(m)) != NULL)
cl = clh_to_clp(cbqp, t->qid);
#ifdef ALTQ3_COMPAT
else if ((ifq->altq_flags & ALTQF_CLASSIFY) && pktattr != NULL)
cl = pktattr->pattr_class;
#endif
if (cl == NULL) {
cl = cbqp->ifnp.default_;
if (cl == NULL) {
m_freem(m);
return (ENOBUFS);
}
}
#ifdef ALTQ3_COMPAT
if (pktattr != NULL)
cl->pktattr_ = pktattr; /* save proto hdr used by ECN */
else
#endif
cl->pktattr_ = NULL;
len = m_pktlen(m);
if (rmc_queue_packet(cl, m) != 0) {
/* drop occurred. some mbuf was freed in rmc_queue_packet. */
PKTCNTR_ADD(&cl->stats_.drop_cnt, len);
return (ENOBUFS);
}
/* successfully queued. */
++cbqp->cbq_qlen;
IFQ_INC_LEN(ifq);
return (0);
}
static struct mbuf *
cbq_dequeue(struct ifaltq *ifq, int op)
{
cbq_state_t *cbqp = (cbq_state_t *)ifq->altq_disc;
struct mbuf *m;
IFQ_LOCK_ASSERT(ifq);
m = rmc_dequeue_next(&cbqp->ifnp, op);
if (m && op == ALTDQ_REMOVE) {
--cbqp->cbq_qlen; /* decrement # of packets in cbq */
IFQ_DEC_LEN(ifq);
/* Update the class. */
rmc_update_class_util(&cbqp->ifnp);
}
return (m);
}
/*
* void
* cbqrestart(queue_t *) - Restart sending of data.
* called from rmc_restart in splimp via timeout after waking up
* a suspended class.
* Returns: NONE
*/
static void
cbqrestart(struct ifaltq *ifq)
{
cbq_state_t *cbqp;
struct ifnet *ifp;
IFQ_LOCK_ASSERT(ifq);
if (!ALTQ_IS_ENABLED(ifq))
/* cbq must have been detached */
return;
if ((cbqp = (cbq_state_t *)ifq->altq_disc) == NULL)
/* should not happen */
return;
ifp = ifq->altq_ifp;
if (ifp->if_start &&
cbqp->cbq_qlen > 0 && (ifp->if_drv_flags & IFF_DRV_OACTIVE) == 0) {
IFQ_UNLOCK(ifq);
(*ifp->if_start)(ifp);
IFQ_LOCK(ifq);
}
}
static void cbq_purge(cbq_state_t *cbqp)
{
struct rm_class *cl;
int i;
for (i = 0; i < CBQ_MAX_CLASSES; i++)
if ((cl = cbqp->cbq_class_tbl[i]) != NULL)
rmc_dropall(cl);
if (ALTQ_IS_ENABLED(cbqp->ifnp.ifq_))
cbqp->ifnp.ifq_->ifq_len = 0;
}
#ifdef ALTQ3_COMPAT
static int
cbq_add_class(acp)
struct cbq_add_class *acp;
{
char *ifacename;
struct rm_class *borrow, *parent;
cbq_state_t *cbqp;
ifacename = acp->cbq_iface.cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
/* check parameters */
if (acp->cbq_class.priority >= CBQ_MAXPRI ||
acp->cbq_class.maxq > CBQ_MAXQSIZE)
return (EINVAL);
/* Get pointers to parent and borrow classes. */
parent = clh_to_clp(cbqp, acp->cbq_class.parent_class_handle);
borrow = clh_to_clp(cbqp, acp->cbq_class.borrow_class_handle);
/*
* A class must borrow from its parent or it cannot
* borrow at all. Hence, borrow can be null.
*/
if (parent == NULL && (acp->cbq_class.flags & CBQCLF_ROOTCLASS) == 0) {
printf("cbq_add_class: no parent class!\n");
return (EINVAL);
}
if ((borrow != parent) && (borrow != NULL)) {
printf("cbq_add_class: borrow class != parent\n");
return (EINVAL);
}
return cbq_class_create(cbqp, acp, parent, borrow);
}
static int
cbq_delete_class(dcp)
struct cbq_delete_class *dcp;
{
char *ifacename;
struct rm_class *cl;
cbq_state_t *cbqp;
ifacename = dcp->cbq_iface.cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
if ((cl = clh_to_clp(cbqp, dcp->cbq_class_handle)) == NULL)
return (EINVAL);
/* if we are a parent class, then return an error. */
if (is_a_parent_class(cl))
return (EINVAL);
/* if a filter has a reference to this class delete the filter */
acc_discard_filters(&cbqp->cbq_classifier, cl, 0);
return cbq_class_destroy(cbqp, cl);
}
static int
cbq_modify_class(acp)
struct cbq_modify_class *acp;
{
char *ifacename;
struct rm_class *cl;
cbq_state_t *cbqp;
ifacename = acp->cbq_iface.cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
/* Get pointer to this class */
if ((cl = clh_to_clp(cbqp, acp->cbq_class_handle)) == NULL)
return (EINVAL);
if (rmc_modclass(cl, acp->cbq_class.nano_sec_per_byte,
acp->cbq_class.maxq, acp->cbq_class.maxidle,
acp->cbq_class.minidle, acp->cbq_class.offtime,
acp->cbq_class.pktsize) < 0)
return (EINVAL);
return (0);
}
/*
* struct rm_class *
* cbq_class_create(cbq_mod_state_t *cbqp, struct cbq_add_class *acp,
* struct rm_class *parent, struct rm_class *borrow)
*
* This function creates a new traffic class in the CBQ class hierarchy from
* the given parameters. The class created is either the root, default,
- * or a new dynamic class. If CBQ is not initilaized, the the root class
+ * or a new dynamic class. If CBQ is not initilaized, the root class
* will be created.
*/
static int
cbq_class_create(cbqp, acp, parent, borrow)
cbq_state_t *cbqp;
struct cbq_add_class *acp;
struct rm_class *parent, *borrow;
{
struct rm_class *cl;
cbq_class_spec_t *spec = &acp->cbq_class;
u_int32_t chandle;
int i;
/*
* allocate class handle
*/
for (i = 1; i < CBQ_MAX_CLASSES; i++)
if (cbqp->cbq_class_tbl[i] == NULL)
break;
if (i == CBQ_MAX_CLASSES)
return (EINVAL);
chandle = i; /* use the slot number as class handle */
/*
* create a class. if this is a root class, initialize the
* interface.
*/
if ((spec->flags & CBQCLF_CLASSMASK) == CBQCLF_ROOTCLASS) {
rmc_init(cbqp->ifnp.ifq_, &cbqp->ifnp, spec->nano_sec_per_byte,
cbqrestart, spec->maxq, RM_MAXQUEUED,
spec->maxidle, spec->minidle, spec->offtime,
spec->flags);
cl = cbqp->ifnp.root_;
} else {
cl = rmc_newclass(spec->priority,
&cbqp->ifnp, spec->nano_sec_per_byte,
rmc_delay_action, spec->maxq, parent, borrow,
spec->maxidle, spec->minidle, spec->offtime,
spec->pktsize, spec->flags);
}
if (cl == NULL)
return (ENOMEM);
/* return handle to user space. */
acp->cbq_class_handle = chandle;
cl->stats_.handle = chandle;
cl->stats_.depth = cl->depth_;
/* save the allocated class */
cbqp->cbq_class_tbl[i] = cl;
if ((spec->flags & CBQCLF_CLASSMASK) == CBQCLF_DEFCLASS)
cbqp->ifnp.default_ = cl;
if ((spec->flags & CBQCLF_CLASSMASK) == CBQCLF_CTLCLASS)
cbqp->ifnp.ctl_ = cl;
return (0);
}
static int
cbq_add_filter(afp)
struct cbq_add_filter *afp;
{
char *ifacename;
cbq_state_t *cbqp;
struct rm_class *cl;
ifacename = afp->cbq_iface.cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
/* Get the pointer to class. */
if ((cl = clh_to_clp(cbqp, afp->cbq_class_handle)) == NULL)
return (EINVAL);
return acc_add_filter(&cbqp->cbq_classifier, &afp->cbq_filter,
cl, &afp->cbq_filter_handle);
}
static int
cbq_delete_filter(dfp)
struct cbq_delete_filter *dfp;
{
char *ifacename;
cbq_state_t *cbqp;
ifacename = dfp->cbq_iface.cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
return acc_delete_filter(&cbqp->cbq_classifier,
dfp->cbq_filter_handle);
}
/*
* cbq_clear_hierarchy deletes all classes and their filters on the
* given interface.
*/
static int
cbq_clear_hierarchy(ifacep)
struct cbq_interface *ifacep;
{
char *ifacename;
cbq_state_t *cbqp;
ifacename = ifacep->cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
return cbq_clear_interface(cbqp);
}
/*
* static int
* cbq_set_enable(struct cbq_enable *ep) - this function processes the
* ioctl request to enable class based queueing. It searches the list
* of interfaces for the specified interface and then enables CBQ on
* that interface.
*
* Returns: 0, for no error.
* EBADF, for specified interface not found.
*/
static int
cbq_set_enable(ep, enable)
struct cbq_interface *ep;
int enable;
{
int error = 0;
cbq_state_t *cbqp;
char *ifacename;
ifacename = ep->cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
switch (enable) {
case ENABLE:
if (cbqp->ifnp.root_ == NULL || cbqp->ifnp.default_ == NULL ||
cbqp->ifnp.ctl_ == NULL) {
if (cbqp->ifnp.root_ == NULL)
printf("No Root Class for %s\n", ifacename);
if (cbqp->ifnp.default_ == NULL)
printf("No Default Class for %s\n", ifacename);
if (cbqp->ifnp.ctl_ == NULL)
printf("No Control Class for %s\n", ifacename);
error = EINVAL;
} else if ((error = altq_enable(cbqp->ifnp.ifq_)) == 0) {
cbqp->cbq_qlen = 0;
}
break;
case DISABLE:
error = altq_disable(cbqp->ifnp.ifq_);
break;
}
return (error);
}
static int
cbq_getstats(gsp)
struct cbq_getstats *gsp;
{
char *ifacename;
int i, n, nclasses;
cbq_state_t *cbqp;
struct rm_class *cl;
class_stats_t stats, *usp;
int error = 0;
ifacename = gsp->iface.cbq_ifacename;
nclasses = gsp->nclasses;
usp = gsp->stats;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
if (nclasses <= 0)
return (EINVAL);
for (n = 0, i = 0; n < nclasses && i < CBQ_MAX_CLASSES; n++, i++) {
while ((cl = cbqp->cbq_class_tbl[i]) == NULL)
if (++i >= CBQ_MAX_CLASSES)
goto out;
get_class_stats(&stats, cl);
stats.handle = cl->stats_.handle;
if ((error = copyout((caddr_t)&stats, (caddr_t)usp++,
sizeof(stats))) != 0)
return (error);
}
out:
gsp->nclasses = n;
return (error);
}
static int
cbq_ifattach(ifacep)
struct cbq_interface *ifacep;
{
int error = 0;
char *ifacename;
cbq_state_t *new_cbqp;
struct ifnet *ifp;
ifacename = ifacep->cbq_ifacename;
if ((ifp = ifunit(ifacename)) == NULL)
return (ENXIO);
if (!ALTQ_IS_READY(&ifp->if_snd))
return (ENXIO);
/* allocate and initialize cbq_state_t */
new_cbqp = malloc(sizeof(cbq_state_t), M_DEVBUF, M_WAITOK);
if (new_cbqp == NULL)
return (ENOMEM);
bzero(new_cbqp, sizeof(cbq_state_t));
CALLOUT_INIT(&new_cbqp->cbq_callout);
new_cbqp->cbq_qlen = 0;
new_cbqp->ifnp.ifq_ = &ifp->if_snd; /* keep the ifq */
/*
* set CBQ to this ifnet structure.
*/
error = altq_attach(&ifp->if_snd, ALTQT_CBQ, new_cbqp,
cbq_enqueue, cbq_dequeue, cbq_request,
&new_cbqp->cbq_classifier, acc_classify);
if (error) {
free(new_cbqp, M_DEVBUF);
return (error);
}
/* prepend to the list of cbq_state_t's. */
new_cbqp->cbq_next = cbq_list;
cbq_list = new_cbqp;
return (0);
}
static int
cbq_ifdetach(ifacep)
struct cbq_interface *ifacep;
{
char *ifacename;
cbq_state_t *cbqp;
ifacename = ifacep->cbq_ifacename;
if ((cbqp = altq_lookup(ifacename, ALTQT_CBQ)) == NULL)
return (EBADF);
(void)cbq_set_enable(ifacep, DISABLE);
cbq_clear_interface(cbqp);
/* remove CBQ from the ifnet structure. */
(void)altq_detach(cbqp->ifnp.ifq_);
/* remove from the list of cbq_state_t's. */
if (cbq_list == cbqp)
cbq_list = cbqp->cbq_next;
else {
cbq_state_t *cp;
for (cp = cbq_list; cp != NULL; cp = cp->cbq_next)
if (cp->cbq_next == cbqp) {
cp->cbq_next = cbqp->cbq_next;
break;
}
ASSERT(cp != NULL);
}
/* deallocate cbq_state_t */
free(cbqp, M_DEVBUF);
return (0);
}
/*
* cbq device interface
*/
altqdev_decl(cbq);
int
cbqopen(dev, flag, fmt, p)
dev_t dev;
int flag, fmt;
#if (__FreeBSD_version > 500000)
struct thread *p;
#else
struct proc *p;
#endif
{
return (0);
}
int
cbqclose(dev, flag, fmt, p)
dev_t dev;
int flag, fmt;
#if (__FreeBSD_version > 500000)
struct thread *p;
#else
struct proc *p;
#endif
{
struct ifnet *ifp;
struct cbq_interface iface;
int err, error = 0;
while (cbq_list) {
ifp = cbq_list->ifnp.ifq_->altq_ifp;
sprintf(iface.cbq_ifacename, "%s", ifp->if_xname);
err = cbq_ifdetach(&iface);
if (err != 0 && error == 0)
error = err;
}
return (error);
}
int
cbqioctl(dev, cmd, addr, flag, p)
dev_t dev;
ioctlcmd_t cmd;
caddr_t addr;
int flag;
#if (__FreeBSD_version > 500000)
struct thread *p;
#else
struct proc *p;
#endif
{
int error = 0;
/* check cmd for superuser only */
switch (cmd) {
case CBQ_GETSTATS:
/* currently the only command that an ordinary user can call */
break;
default:
#if (__FreeBSD_version > 700000)
error = priv_check(p, PRIV_ALTQ_MANAGE);
#elif (__FreeBSD_version > 400000)
error = suser(p);
#else
error = suser(p->p_ucred, &p->p_acflag);
#endif
if (error)
return (error);
break;
}
switch (cmd) {
case CBQ_ENABLE:
error = cbq_set_enable((struct cbq_interface *)addr, ENABLE);
break;
case CBQ_DISABLE:
error = cbq_set_enable((struct cbq_interface *)addr, DISABLE);
break;
case CBQ_ADD_FILTER:
error = cbq_add_filter((struct cbq_add_filter *)addr);
break;
case CBQ_DEL_FILTER:
error = cbq_delete_filter((struct cbq_delete_filter *)addr);
break;
case CBQ_ADD_CLASS:
error = cbq_add_class((struct cbq_add_class *)addr);
break;
case CBQ_DEL_CLASS:
error = cbq_delete_class((struct cbq_delete_class *)addr);
break;
case CBQ_MODIFY_CLASS:
error = cbq_modify_class((struct cbq_modify_class *)addr);
break;
case CBQ_CLEAR_HIERARCHY:
error = cbq_clear_hierarchy((struct cbq_interface *)addr);
break;
case CBQ_IF_ATTACH:
error = cbq_ifattach((struct cbq_interface *)addr);
break;
case CBQ_IF_DETACH:
error = cbq_ifdetach((struct cbq_interface *)addr);
break;
case CBQ_GETSTATS:
error = cbq_getstats((struct cbq_getstats *)addr);
break;
default:
error = EINVAL;
break;
}
return error;
}
#if 0
/* for debug */
static void cbq_class_dump(int);
static void cbq_class_dump(i)
int i;
{
struct rm_class *cl;
rm_class_stats_t *s;
struct _class_queue_ *q;
if (cbq_list == NULL) {
printf("cbq_class_dump: no cbq_state found\n");
return;
}
cl = cbq_list->cbq_class_tbl[i];
printf("class %d cl=%p\n", i, cl);
if (cl != NULL) {
s = &cl->stats_;
q = cl->q_;
printf("pri=%d, depth=%d, maxrate=%d, allotment=%d\n",
cl->pri_, cl->depth_, cl->maxrate_, cl->allotment_);
printf("w_allotment=%d, bytes_alloc=%d, avgidle=%d, maxidle=%d\n",
cl->w_allotment_, cl->bytes_alloc_, cl->avgidle_,
cl->maxidle_);
printf("minidle=%d, offtime=%d, sleeping=%d, leaf=%d\n",
cl->minidle_, cl->offtime_, cl->sleeping_, cl->leaf_);
printf("handle=%d, depth=%d, packets=%d, bytes=%d\n",
s->handle, s->depth,
(int)s->xmit_cnt.packets, (int)s->xmit_cnt.bytes);
printf("over=%d\n, borrows=%d, drops=%d, overactions=%d, delays=%d\n",
s->over, s->borrows, (int)s->drop_cnt.packets,
s->overactions, s->delays);
printf("tail=%p, head=%p, qlen=%d, qlim=%d, qthresh=%d,qtype=%d\n",
q->tail_, q->head_, q->qlen_, q->qlim_,
q->qthresh_, q->qtype_);
}
}
#endif /* 0 */
#ifdef KLD_MODULE
static struct altqsw cbq_sw =
{"cbq", cbqopen, cbqclose, cbqioctl};
ALTQ_MODULE(altq_cbq, ALTQT_CBQ, &cbq_sw);
MODULE_DEPEND(altq_cbq, altq_red, 1, 1, 1);
MODULE_DEPEND(altq_cbq, altq_rio, 1, 1, 1);
#endif /* KLD_MODULE */
#endif /* ALTQ3_COMPAT */
#endif /* ALTQ_CBQ */
Index: head/sys/ofed/drivers/infiniband/debug/memtrack.c
===================================================================
--- head/sys/ofed/drivers/infiniband/debug/memtrack.c (revision 300049)
+++ head/sys/ofed/drivers/infiniband/debug/memtrack.c (revision 300050)
@@ -1,958 +1,958 @@
/*
This software is available to you under a choice of one of two
licenses. You may choose to be licensed under the terms of the GNU
General Public License (GPL) Version 2, available at
<http://www.fsf.org/copyleft/gpl.html>, or the OpenIB.org BSD
license, available in the LICENSE.TXT file accompanying this
software. These details are also available at
<http://openib.org/license.html>.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
*/
#define C_MEMTRACK_C
#ifdef kmalloc
#undef kmalloc
#endif
#ifdef kmemdup
#undef kmemdup
#endif
#ifdef kfree
#undef kfree
#endif
#ifdef vmalloc
#undef vmalloc
#endif
#ifdef vzalloc
#undef vzalloc
#endif
#ifdef vzalloc_node
#undef vzalloc_node
#endif
#ifdef vfree
#undef vfree
#endif
#ifdef kmem_cache_alloc
#undef kmem_cache_alloc
#endif
#ifdef kmem_cache_free
#undef kmem_cache_free
#endif
#ifdef ioremap
#undef ioremap
#endif
#ifdef io_mapping_create_wc
#undef io_mapping_create_wc
#endif
#ifdef io_mapping_free
#undef io_mapping_free
#endif
#ifdef ioremap_nocache
#undef ioremap_nocache
#endif
#ifdef iounmap
#undef iounmap
#endif
#ifdef alloc_pages
#undef alloc_pages
#endif
#ifdef free_pages
#undef free_pages
#endif
#ifdef get_page
#undef get_page
#endif
#ifdef put_page
#undef put_page
#endif
#ifdef create_workqueue
#undef create_workqueue
#endif
#ifdef create_rt_workqueue
#undef create_rt_workqueue
#endif
#ifdef create_freezeable_workqueue
#undef create_freezeable_workqueue
#endif
#ifdef create_singlethread_workqueue
#undef create_singlethread_workqueue
#endif
#ifdef destroy_workqueue
#undef destroy_workqueue
#endif
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
#include <asm/uaccess.h>
#include <linux/proc_fs.h>
#include <linux/random.h>
#include "memtrack.h"
#include <linux/moduleparam.h>
MODULE_AUTHOR("Mellanox Technologies LTD.");
MODULE_DESCRIPTION("Memory allocations tracking");
MODULE_LICENSE("GPL");
#define MEMTRACK_HASH_SZ ((1<<15)-19) /* prime: http://www.utm.edu/research/primes/lists/2small/0bit.html */
#define MAX_FILENAME_LEN 31
#define memtrack_spin_lock(spl, flags) spin_lock_irqsave(spl, flags)
#define memtrack_spin_unlock(spl, flags) spin_unlock_irqrestore(spl, flags)
/* if a bit is set then the corresponding allocation is tracked.
bit0 corresponds to MEMTRACK_KMALLOC, bit1 corresponds to MEMTRACK_VMALLOC etc. */
static unsigned long track_mask = -1; /* effectively everything */
module_param(track_mask, ulong, 0444);
MODULE_PARM_DESC(track_mask, "bitmask defining what is tracked");
/* if a bit is set then the corresponding allocation is strictly tracked.
That is, before inserting, the whole range is checked for overlap against
any of the allocations already in the database */
static unsigned long strict_track_mask = 0; /* no strict tracking */
module_param(strict_track_mask, ulong, 0444);
MODULE_PARM_DESC(strict_track_mask, "bitmask which allocation requires strict tracking");
/* Sets the frequency of allocation failure injection;
if set to 0, all allocations should succeed */
static unsigned int inject_freq = 0;
module_param(inject_freq, uint, 0644);
MODULE_PARM_DESC(inject_freq, "Error injection frequency, default is 0 (disabled)");
static int random_mem = 1;
module_param(random_mem, uint, 0644);
MODULE_PARM_DESC(random_mem, "When set, randomize allocated memory, default is 1 (enabled)");
struct memtrack_meminfo_t {
unsigned long addr;
unsigned long size;
unsigned long line_num;
unsigned long dev;
unsigned long addr2;
int direction;
struct memtrack_meminfo_t *next;
struct list_head list; /* used to link all items from a certain type together */
char filename[MAX_FILENAME_LEN + 1]; /* putting the char array last is better for struct. packing */
char ext_info[32];
};
static struct kmem_cache *meminfo_cache;
struct tracked_obj_desc_t {
struct memtrack_meminfo_t *mem_hash[MEMTRACK_HASH_SZ];
spinlock_t hash_lock;
unsigned long count; /* size of memory tracked (*malloc) or number of objects tracked */
struct list_head tracked_objs_head; /* head of list of all objects */
int strict_track; /* if 1 then for each object inserted check if it overlaps any of the objects already in the list */
};
static struct tracked_obj_desc_t *tracked_objs_arr[MEMTRACK_NUM_OF_MEMTYPES];
static const char *rsc_names[MEMTRACK_NUM_OF_MEMTYPES] = {
"kmalloc",
"vmalloc",
"kmem_cache_alloc",
"io_remap",
"create_workqueue",
"alloc_pages",
"ib_dma_map_single",
"ib_dma_map_page",
"ib_dma_map_sg"
};
static const char *rsc_free_names[MEMTRACK_NUM_OF_MEMTYPES] = {
"kfree",
"vfree",
"kmem_cache_free",
"io_unmap",
"destory_workqueue",
"free_pages",
"ib_dma_unmap_single",
"ib_dma_unmap_page",
"ib_dma_unmap_sg"
};
static inline const char *memtype_alloc_str(enum memtrack_memtype_t memtype)
{
switch (memtype) {
case MEMTRACK_KMALLOC:
case MEMTRACK_VMALLOC:
case MEMTRACK_KMEM_OBJ:
case MEMTRACK_IOREMAP:
case MEMTRACK_WORK_QUEUE:
case MEMTRACK_PAGE_ALLOC:
case MEMTRACK_DMA_MAP_SINGLE:
case MEMTRACK_DMA_MAP_PAGE:
case MEMTRACK_DMA_MAP_SG:
return rsc_names[memtype];
default:
return "(Unknown allocation type)";
}
}
static inline const char *memtype_free_str(enum memtrack_memtype_t memtype)
{
switch (memtype) {
case MEMTRACK_KMALLOC:
case MEMTRACK_VMALLOC:
case MEMTRACK_KMEM_OBJ:
case MEMTRACK_IOREMAP:
case MEMTRACK_WORK_QUEUE:
case MEMTRACK_PAGE_ALLOC:
case MEMTRACK_DMA_MAP_SINGLE:
case MEMTRACK_DMA_MAP_PAGE:
case MEMTRACK_DMA_MAP_SG:
return rsc_free_names[memtype];
default:
return "(Unknown allocation type)";
}
}
/*
* overlap_a_b
*/
static inline int overlap_a_b(unsigned long a_start, unsigned long a_end,
unsigned long b_start, unsigned long b_end)
{
if ((b_start > a_end) || (a_start > b_end))
return 0;
return 1;
}
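/*
 * For example (editor-added, illustrative values): the ranges
 * [0x1000, 0x1fff] and [0x1800, 0x27ff] overlap, so
 * overlap_a_b(0x1000, 0x1fff, 0x1800, 0x27ff) returns 1; for the disjoint
 * ranges [0x1000, 0x1fff] and [0x3000, 0x3fff] it returns 0.
 */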
/*
* check_overlap
*/
static void check_overlap(enum memtrack_memtype_t memtype,
struct memtrack_meminfo_t *mem_info_p,
struct tracked_obj_desc_t *obj_desc_p)
{
struct list_head *pos, *next;
struct memtrack_meminfo_t *cur;
unsigned long start_a, end_a, start_b, end_b;
start_a = mem_info_p->addr;
end_a = mem_info_p->addr + mem_info_p->size - 1;
list_for_each_safe(pos, next, &obj_desc_p->tracked_objs_head) {
cur = list_entry(pos, struct memtrack_meminfo_t, list);
start_b = cur->addr;
end_b = cur->addr + cur->size - 1;
if (overlap_a_b(start_a, end_a, start_b, end_b))
printk(KERN_ERR "%s overlaps! new_start=0x%lx, new_end=0x%lx, item_start=0x%lx, item_end=0x%lx\n",
memtype_alloc_str(memtype), mem_info_p->addr,
mem_info_p->addr + mem_info_p->size - 1, cur->addr,
cur->addr + cur->size - 1);
}
}
/* Invoke on memory allocation */
void memtrack_alloc(enum memtrack_memtype_t memtype, unsigned long dev,
unsigned long addr, unsigned long size, unsigned long addr2,
int direction, const char *filename,
const unsigned long line_num, int alloc_flags)
{
unsigned long hash_val;
struct memtrack_meminfo_t *cur_mem_info_p, *new_mem_info_p;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
if (memtype >= MEMTRACK_NUM_OF_MEMTYPES) {
printk(KERN_ERR "%s: Invalid memory type (%d)\n", __func__, memtype);
return;
}
if (!tracked_objs_arr[memtype]) {
/* object is not tracked */
return;
}
obj_desc_p = tracked_objs_arr[memtype];
hash_val = addr % MEMTRACK_HASH_SZ;
new_mem_info_p = (struct memtrack_meminfo_t *)kmem_cache_alloc(meminfo_cache, alloc_flags);
if (new_mem_info_p == NULL) {
printk(KERN_ERR "%s: Failed allocating kmem_cache item for new mem_info. "
"Lost tracking on allocation at %s:%lu...\n", __func__,
filename, line_num);
return;
}
/* save allocation properties */
new_mem_info_p->addr = addr;
new_mem_info_p->size = size;
new_mem_info_p->dev = dev;
new_mem_info_p->addr2 = addr2;
new_mem_info_p->direction = direction;
new_mem_info_p->line_num = line_num;
*new_mem_info_p->ext_info = '\0';
/* Make sure that we print the path tail if the given filename is longer
* than MAX_FILENAME_LEN (otherwise we would see only the path head in the
* printout, and not the name of the actual file).
*/
if (strlen(filename) > MAX_FILENAME_LEN)
strncpy(new_mem_info_p->filename, filename + strlen(filename) - MAX_FILENAME_LEN, MAX_FILENAME_LEN);
else
strncpy(new_mem_info_p->filename, filename, MAX_FILENAME_LEN);
new_mem_info_p->filename[MAX_FILENAME_LEN] = 0; /* NULL terminate anyway */
memtrack_spin_lock(&obj_desc_p->hash_lock, flags);
if ((memtype != MEMTRACK_DMA_MAP_SINGLE) && (memtype != MEMTRACK_DMA_MAP_PAGE) &&
(memtype != MEMTRACK_DMA_MAP_SG)) {
/* make sure given memory location is not already allocated */
cur_mem_info_p = obj_desc_p->mem_hash[hash_val];
while (cur_mem_info_p != NULL) {
if ((cur_mem_info_p->addr == addr) && (cur_mem_info_p->dev == dev)) {
/* Found given address in the database */
printk(KERN_ERR "mtl rsc inconsistency: %s: %s::%lu: %s @ addr=0x%lX which is already known from %s:%lu\n",
__func__, filename, line_num,
memtype_alloc_str(memtype), addr,
cur_mem_info_p->filename,
cur_mem_info_p->line_num);
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
kmem_cache_free(meminfo_cache, new_mem_info_p);
return;
}
cur_mem_info_p = cur_mem_info_p->next;
}
}
/* not found - we can put in the hash bucket */
/* link as first */
new_mem_info_p->next = obj_desc_p->mem_hash[hash_val];
obj_desc_p->mem_hash[hash_val] = new_mem_info_p;
if (obj_desc_p->strict_track)
check_overlap(memtype, new_mem_info_p, obj_desc_p);
obj_desc_p->count += size;
list_add(&new_mem_info_p->list, &obj_desc_p->tracked_objs_head);
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return;
}
EXPORT_SYMBOL(memtrack_alloc);
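/*
 * Illustrative sketch (editor-added, hypothetical macro): the #undef block at
 * the top of this file exists because a companion header redefines the kernel
 * allocators to route through memtrack. A wrapper in that spirit could look
 * like the following; the exact upstream macro is not shown in this file, so
 * treat the dev/addr2/direction arguments (0 here) and the
 * is_non_trackable_alloc_func() filtering as assumptions.
 */
#if 0
#define kmalloc(sz, flgs) ({ \
void *__addr = kmalloc(sz, flgs); /* inner name is not re-expanded by cpp */ \
if (__addr && !is_non_trackable_alloc_func(__func__)) \
memtrack_alloc(MEMTRACK_KMALLOC, 0UL, (unsigned long)__addr, \
(sz), 0UL, 0, __FILE__, __LINE__, (flgs)); \
__addr; \
})
#endif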
/* Invoke on memory free */
void memtrack_free(enum memtrack_memtype_t memtype, unsigned long dev,
unsigned long addr, unsigned long size, int direction,
const char *filename, const unsigned long line_num)
{
unsigned long hash_val;
struct memtrack_meminfo_t *cur_mem_info_p, *prev_mem_info_p;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
if (memtype >= MEMTRACK_NUM_OF_MEMTYPES) {
printk(KERN_ERR "%s: Invalid memory type (%d)\n", __func__, memtype);
return;
}
if (!tracked_objs_arr[memtype]) {
/* object is not tracked */
return;
}
obj_desc_p = tracked_objs_arr[memtype];
hash_val = addr % MEMTRACK_HASH_SZ;
memtrack_spin_lock(&obj_desc_p->hash_lock, flags);
/* find mem_info of given memory location */
prev_mem_info_p = NULL;
cur_mem_info_p = obj_desc_p->mem_hash[hash_val];
while (cur_mem_info_p != NULL) {
if ((cur_mem_info_p->addr == addr) && (cur_mem_info_p->dev == dev)) {
/* Found given address in the database */
if ((memtype == MEMTRACK_DMA_MAP_SINGLE) || (memtype == MEMTRACK_DMA_MAP_PAGE) ||
(memtype == MEMTRACK_DMA_MAP_SG)) {
if (direction != cur_mem_info_p->direction)
printk(KERN_ERR "mtl rsc inconsistency: %s: %s::%lu: %s bad direction for addr 0x%lX: alloc:0x%x, free:0x%x (allocated in %s::%lu)\n",
__func__, filename, line_num, memtype_free_str(memtype), addr, cur_mem_info_p->direction, direction,
cur_mem_info_p->filename, cur_mem_info_p->line_num);
if (size != cur_mem_info_p->size)
printk(KERN_ERR "mtl rsc inconsistency: %s: %s::%lu: %s bad size for addr 0x%lX: size:%lu, free:%lu (allocated in %s::%lu)\n",
__func__, filename, line_num, memtype_free_str(memtype), addr, cur_mem_info_p->size, size,
cur_mem_info_p->filename, cur_mem_info_p->line_num);
}
/* Remove from the bucket/list */
if (prev_mem_info_p == NULL)
obj_desc_p->mem_hash[hash_val] = cur_mem_info_p->next; /* removing first */
else
prev_mem_info_p->next = cur_mem_info_p->next; /* "crossover" */
list_del(&cur_mem_info_p->list);
obj_desc_p->count -= cur_mem_info_p->size;
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
kmem_cache_free(meminfo_cache, cur_mem_info_p);
return;
}
prev_mem_info_p = cur_mem_info_p;
cur_mem_info_p = cur_mem_info_p->next;
}
/* not found */
printk(KERN_ERR "mtl rsc inconsistency: %s: %s::%lu: %s for unknown address=0x%lX, device=0x%lX\n",
__func__, filename, line_num, memtype_free_str(memtype), addr, dev);
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return;
}
EXPORT_SYMBOL(memtrack_free);
/*
* This function recognizes allocations which
* may be released by the kernel (e.g. skb) and
* are therefore not trackable by memtrack.
* The allocations are recognized by the name
* of their calling function.
*/
int is_non_trackable_alloc_func(const char *func_name)
{
static const char * const str_str_arr[] = {
/* functions containing these strings are considered non-trackable */
"skb",
};
static const char * const str_str_excep_arr[] = {
/* functions that are exceptions to the str_str_arr table */
"ipoib_cm_skb_too_long"
};
static const char * const str_cmp_arr[] = {
/* functions that allocate SKBs */
"mlx4_en_alloc_frags",
"mlx4_en_alloc_frag",
"mlx4_en_init_allocator",
"mlx4_en_free_frag",
"mlx4_en_free_rx_desc",
"mlx4_en_destroy_allocator",
"mlx4_en_complete_rx_desc",
/* vnic skb functions */
"free_single_frag",
"vnic_alloc_rx_skb",
"vnic_rx_skb",
"vnic_alloc_frag",
"vnic_empty_rx_entry",
"vnic_init_allocator",
"vnic_destroy_allocator",
"sdp_post_recv",
"sdp_rx_ring_purge",
"sdp_post_srcavail",
"sk_stream_alloc_page",
"update_send_head",
"sdp_bcopy_get",
"sdp_destroy_resources",
/* function that allocates memory for the RDMA device context */
"ib_alloc_device"
};
size_t str_str_arr_size = sizeof(str_str_arr)/sizeof(char *);
size_t str_str_excep_size = sizeof(str_str_excep_arr)/sizeof(char *);
size_t str_cmp_arr_size = sizeof(str_cmp_arr)/sizeof(char *);
int i, j;
for (i = 0; i < str_str_arr_size; ++i)
if (strstr(func_name, str_str_arr[i])) {
for (j = 0; j < str_str_excep_size; ++j)
if (!strcmp(func_name, str_str_excep_arr[j]))
return 0;
return 1;
}
for (i = 0; i < str_cmp_arr_size; ++i)
if (!strcmp(func_name, str_cmp_arr[i]))
return 1;
return 0;
}
EXPORT_SYMBOL(is_non_trackable_alloc_func);
/*
* In some cases we need to free a memory
* we defined as "non trackable" (see
* is_non_trackable_alloc_func).
* This function recognizes such releases
* by the name of their calling function.
*/
int is_non_trackable_free_func(const char *func_name)
{
static const char * const str_cmp_arr[] = {
/* function that deallocates memory for the RDMA device context */
"ib_dealloc_device"
};
size_t str_cmp_arr_size = sizeof(str_cmp_arr)/sizeof(char *);
int i;
for (i = 0; i < str_cmp_arr_size; ++i)
if (!strcmp(func_name, str_cmp_arr[i]))
return 1;
return 0;
}
EXPORT_SYMBOL(is_non_trackable_free_func);
/* WA - this function confirms whether
- the the function name is
+ the function name is
'__ib_umem_release' or 'ib_umem_get'.
If so, we won't track the
memory there because the kernel
was the one who allocated it.
Return value:
1 - if the function name matches, else 0 */
int is_umem_put_page(const char *func_name)
{
const char func_str[18] = "__ib_umem_release";
/* In case of error flow put_page is called as part of ib_umem_get */
const char func_str1[12] = "ib_umem_get";
return ((strstr(func_name, func_str) != NULL) ||
(strstr(func_name, func_str1) != NULL)) ? 1 : 0;
}
EXPORT_SYMBOL(is_umem_put_page);
/* Check page order size.
When freeing a page allocation, this checks whether
we are trying to free the same size
we asked to allocate. */
int memtrack_check_size(enum memtrack_memtype_t memtype, unsigned long addr,
unsigned long size, const char *filename,
const unsigned long line_num)
{
unsigned long hash_val;
struct memtrack_meminfo_t *cur_mem_info_p;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
int ret = 0;
if (memtype >= MEMTRACK_NUM_OF_MEMTYPES) {
printk(KERN_ERR "%s: Invalid memory type (%d)\n", __func__, memtype);
return 1;
}
if (!tracked_objs_arr[memtype]) {
/* object is not tracked */
return 1;
}
obj_desc_p = tracked_objs_arr[memtype];
hash_val = addr % MEMTRACK_HASH_SZ;
memtrack_spin_lock(&obj_desc_p->hash_lock, flags);
/* find mem_info of given memory location */
cur_mem_info_p = obj_desc_p->mem_hash[hash_val];
while (cur_mem_info_p != NULL) {
if (cur_mem_info_p->addr == addr) {
/* Found given address in the database - check size */
if (cur_mem_info_p->size != size) {
printk(KERN_ERR "mtl size inconsistency: %s: %s::%lu: try to %s at address=0x%lX with size %lu while was created with size %lu\n",
__func__, filename, line_num, memtype_free_str(memtype),
addr, size, cur_mem_info_p->size);
snprintf(cur_mem_info_p->ext_info, sizeof(cur_mem_info_p->ext_info),
"invalid free size %lu\n", size);
ret = 1;
}
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return ret;
}
cur_mem_info_p = cur_mem_info_p->next;
}
/* not found - this function gives no indication here;
it only checks that the size/order is correct.
The 'free' function will flag the inconsistency. */
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return 1;
}
EXPORT_SYMBOL(memtrack_check_size);
/* Search the current database for a specific addr.
It will print an error msg if we get an unexpected result.
Return value: 0 - if addr exists, else 1 */
int memtrack_is_new_addr(enum memtrack_memtype_t memtype, unsigned long addr, int expect_exist,
const char *filename, const unsigned long line_num)
{
unsigned long hash_val;
struct memtrack_meminfo_t *cur_mem_info_p;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
if (memtype >= MEMTRACK_NUM_OF_MEMTYPES) {
printk(KERN_ERR "%s: Invalid memory type (%d)\n", __func__, memtype);
return 1;
}
if (!tracked_objs_arr[memtype]) {
/* object is not tracked */
return 0;
}
obj_desc_p = tracked_objs_arr[memtype];
hash_val = addr % MEMTRACK_HASH_SZ;
memtrack_spin_lock(&obj_desc_p->hash_lock, flags);
/* find mem_info of given memory location */
cur_mem_info_p = obj_desc_p->mem_hash[hash_val];
while (cur_mem_info_p != NULL) {
if (cur_mem_info_p->addr == addr) {
/* Found given address in the database - exiting */
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return 0;
}
cur_mem_info_p = cur_mem_info_p->next;
}
/* not found */
if (expect_exist)
printk(KERN_ERR "mtl rsc inconsistency: %s: %s::%lu: %s for unknown address=0x%lX\n",
__func__, filename, line_num, memtype_free_str(memtype), addr);
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return 1;
}
EXPORT_SYMBOL(memtrack_is_new_addr);
/* Return current page reference counter */
int memtrack_get_page_ref_count(unsigned long addr)
{
unsigned long hash_val;
struct memtrack_meminfo_t *cur_mem_info_p;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
/* This function is called only for page allocation */
enum memtrack_memtype_t memtype = MEMTRACK_PAGE_ALLOC;
int ref_count = 0;
if (!tracked_objs_arr[memtype]) {
/* object is not tracked */
return ref_count;
}
obj_desc_p = tracked_objs_arr[memtype];
hash_val = addr % MEMTRACK_HASH_SZ;
memtrack_spin_lock(&obj_desc_p->hash_lock, flags);
/* find mem_info of given memory location */
cur_mem_info_p = obj_desc_p->mem_hash[hash_val];
while (cur_mem_info_p != NULL) {
if (cur_mem_info_p->addr == addr) {
/* Found given address in the database - check ref-count */
struct page *page = (struct page *)(cur_mem_info_p->addr);
ref_count = atomic_read(&page->_count);
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return ref_count;
}
cur_mem_info_p = cur_mem_info_p->next;
}
/* not found */
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
return ref_count;
}
EXPORT_SYMBOL(memtrack_get_page_ref_count);
/* Report current allocations status (for all memory types) */
static void memtrack_report(void)
{
enum memtrack_memtype_t memtype;
unsigned long cur_bucket;
struct memtrack_meminfo_t *cur_mem_info_p;
int serial = 1;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
unsigned long detected_leaks = 0;
printk(KERN_INFO "%s: Currently known allocations:\n", __func__);
for (memtype = 0; memtype < MEMTRACK_NUM_OF_MEMTYPES; memtype++) {
if (tracked_objs_arr[memtype]) {
printk(KERN_INFO "%d) %s:\n", serial, memtype_alloc_str(memtype));
obj_desc_p = tracked_objs_arr[memtype];
/* Scan all buckets to find existing allocations */
/* TBD: this may be optimized by holding a linked list of all hash items */
for (cur_bucket = 0; cur_bucket < MEMTRACK_HASH_SZ; cur_bucket++) {
memtrack_spin_lock(&obj_desc_p->hash_lock, flags); /* protect per bucket/list */
cur_mem_info_p = obj_desc_p->mem_hash[cur_bucket];
while (cur_mem_info_p != NULL) { /* scan bucket */
printk(KERN_INFO "%s::%lu: %s(%lu)==%lX dev=%lX %s\n",
cur_mem_info_p->filename,
cur_mem_info_p->line_num,
memtype_alloc_str(memtype),
cur_mem_info_p->size,
cur_mem_info_p->addr,
cur_mem_info_p->dev,
cur_mem_info_p->ext_info);
cur_mem_info_p = cur_mem_info_p->next;
++detected_leaks;
} /* while cur_mem_info_p */
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
} /* for cur_bucket */
serial++;
}
} /* for memtype */
printk(KERN_INFO "%s: Summary: %lu leak(s) detected\n", __func__, detected_leaks);
}
static struct proc_dir_entry *memtrack_tree;
static enum memtrack_memtype_t get_rsc_by_name(const char *name)
{
enum memtrack_memtype_t i;
for (i = 0; i < MEMTRACK_NUM_OF_MEMTYPES; ++i) {
if (strcmp(name, rsc_names[i]) == 0)
return i;
}
return i;
}
static ssize_t memtrack_read(struct file *filp,
char __user *buf,
size_t size,
loff_t *offset)
{
unsigned long cur, flags;
loff_t pos = *offset;
static char kbuf[20];
static int file_len;
int _read, to_ret, left;
const char *fname;
enum memtrack_memtype_t memtype;
if (pos < 0)
return -EINVAL;
fname = filp->f_dentry->d_name.name;
memtype = get_rsc_by_name(fname);
if (memtype >= MEMTRACK_NUM_OF_MEMTYPES) {
printk(KERN_ERR "invalid file name\n");
return -EINVAL;
}
if (pos == 0) {
memtrack_spin_lock(&tracked_objs_arr[memtype]->hash_lock, flags);
cur = tracked_objs_arr[memtype]->count;
memtrack_spin_unlock(&tracked_objs_arr[memtype]->hash_lock, flags);
_read = sprintf(kbuf, "%lu\n", cur);
if (_read < 0)
return _read;
else
file_len = _read;
}
left = file_len - pos;
to_ret = (left < size) ? left : size;
if (copy_to_user(buf, kbuf+pos, to_ret))
return -EFAULT;
else {
*offset = pos + to_ret;
return to_ret;
}
}
static const struct file_operations memtrack_proc_fops = {
.read = memtrack_read,
};
static const char *memtrack_proc_entry_name = "mt_memtrack";
static int create_procfs_tree(void)
{
struct proc_dir_entry *dir_ent;
struct proc_dir_entry *proc_ent;
int i, j;
unsigned long bit_mask;
dir_ent = proc_mkdir(memtrack_proc_entry_name, NULL);
if (!dir_ent)
return -1;
memtrack_tree = dir_ent;
for (i = 0, bit_mask = 1; i < MEMTRACK_NUM_OF_MEMTYPES; ++i, bit_mask <<= 1) {
if (bit_mask & track_mask) {
proc_ent = create_proc_entry(rsc_names[i], S_IRUGO, memtrack_tree);
if (!proc_ent)
goto undo_create_root;
proc_ent->proc_fops = &memtrack_proc_fops;
}
}
goto exit_ok;
undo_create_root:
for (j = 0, bit_mask = 1; j < i; ++j, bit_mask <<= 1) {
if (bit_mask & track_mask)
remove_proc_entry(rsc_names[j], memtrack_tree);
}
remove_proc_entry(memtrack_proc_entry_name, NULL);
return -1;
exit_ok:
return 0;
}
static void destroy_procfs_tree(void)
{
int i;
unsigned long bit_mask;
for (i = 0, bit_mask = 1; i < MEMTRACK_NUM_OF_MEMTYPES; ++i, bit_mask <<= 1) {
if (bit_mask & track_mask)
remove_proc_entry(rsc_names[i], memtrack_tree);
}
remove_proc_entry(memtrack_proc_entry_name, NULL);
}
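/*
 * Usage note (editor-added): with the default track_mask, create_procfs_tree()
 * above exposes one read-only file per tracked resource type under
 * /proc/mt_memtrack/, named after rsc_names[]. Reading one of them, e.g.
 * "cat /proc/mt_memtrack/kmalloc", returns the tracker's current count field:
 * the bytes tracked for the *malloc types, or the number of objects tracked
 * for the others.
 */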
int memtrack_inject_error(void)
{
int val = 0;
if (inject_freq) {
if (!(random32() % inject_freq))
val = 1;
}
return val;
}
EXPORT_SYMBOL(memtrack_inject_error);
int memtrack_randomize_mem(void)
{
return random_mem;
}
EXPORT_SYMBOL(memtrack_randomize_mem);
/* module entry points */
int init_module(void)
{
enum memtrack_memtype_t i;
int j;
unsigned long bit_mask;
/* create a cache for the memtrack_meminfo_t structures */
meminfo_cache = kmem_cache_create("memtrack_meminfo_t",
sizeof(struct memtrack_meminfo_t), 0,
SLAB_HWCACHE_ALIGN, NULL);
if (!meminfo_cache) {
printk(KERN_ERR "memtrack::%s: failed to allocate meminfo cache\n", __func__);
return -1;
}
/* initialize array of descriptors */
memset(tracked_objs_arr, 0, sizeof(tracked_objs_arr));
/* create a tracking object descriptor for all required objects */
for (i = 0, bit_mask = 1; i < MEMTRACK_NUM_OF_MEMTYPES; ++i, bit_mask <<= 1) {
if (bit_mask & track_mask) {
tracked_objs_arr[i] = vmalloc(sizeof(struct tracked_obj_desc_t));
if (!tracked_objs_arr[i]) {
printk(KERN_ERR "memtrack: failed to allocate tracking object\n");
goto undo_cache_create;
}
memset(tracked_objs_arr[i], 0, sizeof(struct tracked_obj_desc_t));
spin_lock_init(&tracked_objs_arr[i]->hash_lock);
INIT_LIST_HEAD(&tracked_objs_arr[i]->tracked_objs_head);
if (bit_mask & strict_track_mask)
tracked_objs_arr[i]->strict_track = 1;
else
tracked_objs_arr[i]->strict_track = 0;
}
}
if (create_procfs_tree()) {
printk(KERN_ERR "%s: create_procfs_tree() failed\n", __FILE__);
goto undo_cache_create;
}
printk(KERN_INFO "memtrack::%s done.\n", __func__);
return 0;
undo_cache_create:
for (j = 0; j < i; ++j) {
if (tracked_objs_arr[j])
vfree(tracked_objs_arr[j]);
}
#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
if (kmem_cache_destroy(meminfo_cache) != 0)
printk(KERN_ERR "Failed on kmem_cache_destroy!\n");
#else
kmem_cache_destroy(meminfo_cache);
#endif
return -1;
}
void cleanup_module(void)
{
enum memtrack_memtype_t memtype;
unsigned long cur_bucket;
struct memtrack_meminfo_t *cur_mem_info_p, *next_mem_info_p;
struct tracked_obj_desc_t *obj_desc_p;
unsigned long flags;
memtrack_report();
destroy_procfs_tree();
/* clean up any hash table left-overs */
for (memtype = 0; memtype < MEMTRACK_NUM_OF_MEMTYPES; memtype++) {
/* Scan all buckets to find existing allocations */
/* TBD: this may be optimized by holding a linked list of all hash items */
if (tracked_objs_arr[memtype]) {
obj_desc_p = tracked_objs_arr[memtype];
for (cur_bucket = 0; cur_bucket < MEMTRACK_HASH_SZ; cur_bucket++) {
memtrack_spin_lock(&obj_desc_p->hash_lock, flags); /* protect per bucket/list */
cur_mem_info_p = obj_desc_p->mem_hash[cur_bucket];
while (cur_mem_info_p != NULL) { /* scan bucket */
next_mem_info_p = cur_mem_info_p->next; /* save "next" pointer before the "free" */
kmem_cache_free(meminfo_cache, cur_mem_info_p);
cur_mem_info_p = next_mem_info_p;
} /* while cur_mem_info_p */
memtrack_spin_unlock(&obj_desc_p->hash_lock, flags);
} /* for cur_bucket */
vfree(obj_desc_p);
}
} /* for memtype */
#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
if (kmem_cache_destroy(meminfo_cache) != 0)
printk(KERN_ERR "memtrack::cleanup_module: Failed on kmem_cache_destroy!\n");
#else
kmem_cache_destroy(meminfo_cache);
#endif
printk(KERN_INFO "memtrack::cleanup_module done.\n");
}
Index: head/sys/ofed/drivers/infiniband/debug/memtrack.h
===================================================================
--- head/sys/ofed/drivers/infiniband/debug/memtrack.h (revision 300049)
+++ head/sys/ofed/drivers/infiniband/debug/memtrack.h (revision 300050)
@@ -1,106 +1,106 @@
/*
This software is available to you under a choice of one of two
licenses. You may choose to be licensed under the terms of the GNU
General Public License (GPL) Version 2, available at
<http://www.fsf.org/copyleft/gpl.html>, or the OpenIB.org BSD
license, available in the LICENSE.TXT file accompanying this
software. These details are also available at
<http://openib.org/license.html>.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
*/
#ifndef H_MEMTRACK_H
#define H_MEMTRACK_H
enum memtrack_memtype_t {
MEMTRACK_KMALLOC,
MEMTRACK_VMALLOC,
MEMTRACK_KMEM_OBJ,
MEMTRACK_IOREMAP, /* IO-RE/UN-MAP */
MEMTRACK_WORK_QUEUE, /* Handle work-queue create & destroy */
MEMTRACK_PAGE_ALLOC, /* Handle page allocation and free */
MEMTRACK_DMA_MAP_SINGLE,/* Handle ib_dma_single map and unmap */
MEMTRACK_DMA_MAP_PAGE, /* Handle ib_dma_page map and unmap */
MEMTRACK_DMA_MAP_SG, /* Handle ib_dma_sg map and unmap with and without attributes */
MEMTRACK_NUM_OF_MEMTYPES
};
/* Invoke on memory allocation */
void memtrack_alloc(enum memtrack_memtype_t memtype, unsigned long dev,
unsigned long addr, unsigned long size, unsigned long addr2,
int direction, const char *filename,
const unsigned long line_num, int alloc_flags);
/* Invoke on memory free */
void memtrack_free(enum memtrack_memtype_t memtype, unsigned long dev,
unsigned long addr, unsigned long size, int direction,
const char *filename, const unsigned long line_num);
/*
* This function recognizes allocations which
* may be released by the kernel (e.g. skb & vnic) and
* are therefore not trackable by memtrack.
* The allocations are recognized by the name
* of their calling function.
*/
int is_non_trackable_alloc_func(const char *func_name);
/*
* In some cases we need to free a memory
* we defined as "non trackable" (see
* is_non_trackable_alloc_func).
* This function recognizes such releases
* by the name of their calling function.
*/
int is_non_trackable_free_func(const char *func_name);
/* WA - this function confirms whether
- the the function name is
+ the function name is
'__ib_umem_release' or 'ib_umem_get'.
If so, we won't track the
memory there because the kernel
was the one who allocated it.
Return value:
1 - if the function name matches, else 0 */
int is_umem_put_page(const char *func_name);
/* Check page order size.
When freeing a page allocation, this checks whether
we are trying to free the same number of pages
we asked to allocate (in log2(order)).
If an error is found it will print
an error msg */
int memtrack_check_size(enum memtrack_memtype_t memtype, unsigned long addr,
unsigned long size, const char *filename,
const unsigned long line_num);
/* Search the current database for a specific addr.
If it is not found, an error msg is printed.
Return value: 0 - if addr exists, else 1 */
int memtrack_is_new_addr(enum memtrack_memtype_t memtype, unsigned long addr, int expect_exist,
const char *filename, const unsigned long line_num);
/* Return current page reference counter */
int memtrack_get_page_ref_count(unsigned long addr);
/* Report current allocations status (for all memory types) */
/* we do not export this function since it is used by cleanup_module only */
/* void memtrack_report(void); */
/* Allow support of error injections */
int memtrack_inject_error(void);
/* randomize allocated memory */
int memtrack_randomize_mem(void);
#endif
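/*
 * Illustrative sketch (editor-added, hypothetical wrapper): how a free-side
 * hook might combine the checking API declared above for a plain
 * kmalloc/kfree pair. The dev/size/direction arguments (0 here) are
 * assumptions, as is the wrapper name.
 */
#if 0
static inline void
tracked_kfree(void *addr)
{
/* expect_exist = 1: report if the address is not in the database */
if (memtrack_is_new_addr(MEMTRACK_KMALLOC, (unsigned long)addr, 1,
__FILE__, __LINE__) == 0)
memtrack_free(MEMTRACK_KMALLOC, 0UL, (unsigned long)addr,
0UL, 0, __FILE__, __LINE__);
kfree(addr);
}
#endif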
Index: head/sys/ofed/drivers/net/mlx4/main.c
===================================================================
--- head/sys/ofed/drivers/net/mlx4/main.c (revision 300049)
+++ head/sys/ofed/drivers/net/mlx4/main.c (revision 300050)
@@ -1,3881 +1,3881 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005, 2006, 2007, 2008, 2014 Mellanox Technologies. All rights reserved.
* Copyright (c) 2006, 2007 Cisco Systems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/kmod.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/io-mapping.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/string.h>
#include <linux/fs.h>
#include <linux/mlx4/device.h>
#include <linux/mlx4/doorbell.h>
#include "mlx4.h"
#include "fw.h"
#include "icm.h"
#include "mlx4_stats.h"
/* Mellanox ConnectX HCA low-level driver */
struct workqueue_struct *mlx4_wq;
#ifdef CONFIG_MLX4_DEBUG
int mlx4_debug_level = 0;
module_param_named(debug_level, mlx4_debug_level, int, 0644);
MODULE_PARM_DESC(debug_level, "Enable debug tracing if > 0");
#endif /* CONFIG_MLX4_DEBUG */
#ifdef CONFIG_PCI_MSI
static int msi_x = 1;
module_param(msi_x, int, 0444);
MODULE_PARM_DESC(msi_x, "0 - don't use MSI-X, 1 - use MSI-X, >1 - limit number of MSI-X irqs to msi_x (non-SRIOV only)");
#else /* CONFIG_PCI_MSI */
#define msi_x (0)
#endif /* CONFIG_PCI_MSI */
static int enable_sys_tune = 0;
module_param(enable_sys_tune, int, 0444);
MODULE_PARM_DESC(enable_sys_tune, "Tune the cpu's for better performance (default 0)");
int mlx4_blck_lb = 1;
module_param_named(block_loopback, mlx4_blck_lb, int, 0644);
MODULE_PARM_DESC(block_loopback, "Block multicast loopback packets if > 0 "
"(default: 1)");
enum {
DEFAULT_DOMAIN = 0,
BDF_STR_SIZE = 8, /* bb:dd.f- */
DBDF_STR_SIZE = 13 /* mmmm:bb:dd.f- */
};
enum {
NUM_VFS,
PROBE_VF,
PORT_TYPE_ARRAY
};
enum {
VALID_DATA,
INVALID_DATA,
INVALID_STR
};
struct param_data {
int id;
struct mlx4_dbdf2val_lst dbdf2val;
};
static struct param_data num_vfs = {
.id = NUM_VFS,
.dbdf2val = {
.name = "num_vfs param",
.num_vals = 1,
.def_val = {0},
.range = {0, MLX4_MAX_NUM_VF}
}
};
module_param_string(num_vfs, num_vfs.dbdf2val.str,
sizeof(num_vfs.dbdf2val.str), 0444);
MODULE_PARM_DESC(num_vfs,
"Either single value (e.g. '5') to define uniform num_vfs value for all devices functions\n"
"\t\tor a string to map device function numbers to their num_vfs values (e.g. '0000:04:00.0-5,002b:1c:0b.a-15').\n"
"\t\tHexadecimal digits for the device function (e.g. 002b:1c:0b.a) and decimal for num_vfs value (e.g. 15).");
static struct param_data probe_vf = {
.id = PROBE_VF,
.dbdf2val = {
.name = "probe_vf param",
.num_vals = 1,
.def_val = {0},
.range = {0, MLX4_MAX_NUM_VF}
}
};
module_param_string(probe_vf, probe_vf.dbdf2val.str,
sizeof(probe_vf.dbdf2val.str), 0444);
MODULE_PARM_DESC(probe_vf,
"Either single value (e.g. '3') to define uniform number of VFs to probe by the pf driver for all devices functions\n"
"\t\tor a string to map device function numbers to their probe_vf values (e.g. '0000:04:00.0-3,002b:1c:0b.a-13').\n"
"\t\tHexadecimal digits for the device function (e.g. 002b:1c:0b.a) and decimal for probe_vf value (e.g. 13).");
int mlx4_log_num_mgm_entry_size = MLX4_DEFAULT_MGM_LOG_ENTRY_SIZE;
module_param_named(log_num_mgm_entry_size,
mlx4_log_num_mgm_entry_size, int, 0444);
MODULE_PARM_DESC(log_num_mgm_entry_size, "log mgm size, that defines the num"
" of qp per mcg, for example:"
" 10 gives 248.range: 7 <="
" log_num_mgm_entry_size <= 12."
" To activate device managed"
" flow steering when available, set to -1");
static int high_rate_steer;
module_param(high_rate_steer, int, 0444);
MODULE_PARM_DESC(high_rate_steer, "Enable steering mode for higher packet rate"
" (default off)");
static int fast_drop;
module_param_named(fast_drop, fast_drop, int, 0444);
MODULE_PARM_DESC(fast_drop,
"Enable fast packet drop when no receive WQEs are posted");
int mlx4_enable_64b_cqe_eqe = 1;
module_param_named(enable_64b_cqe_eqe, mlx4_enable_64b_cqe_eqe, int, 0644);
MODULE_PARM_DESC(enable_64b_cqe_eqe,
- "Enable 64 byte CQEs/EQEs when the the FW supports this if non-zero (default: 1)");
+ "Enable 64 byte CQEs/EQEs when the FW supports this if non-zero (default: 1)");
#define HCA_GLOBAL_CAP_MASK 0
#define PF_CONTEXT_BEHAVIOUR_MASK MLX4_FUNC_CAP_64B_EQE_CQE
static char mlx4_version[] __devinitdata =
DRV_NAME ": Mellanox ConnectX VPI driver v"
DRV_VERSION " (" DRV_RELDATE ")\n";
static int log_num_mac = 7;
module_param_named(log_num_mac, log_num_mac, int, 0444);
MODULE_PARM_DESC(log_num_mac, "Log2 max number of MACs per ETH port (1-7)");
static int log_num_vlan;
module_param_named(log_num_vlan, log_num_vlan, int, 0444);
MODULE_PARM_DESC(log_num_vlan,
"(Obsolete) Log2 max number of VLANs per ETH port (0-7)");
/* Log2 max number of VLANs per ETH port (0-7) */
#define MLX4_LOG_NUM_VLANS 7
int log_mtts_per_seg = ilog2(1);
module_param_named(log_mtts_per_seg, log_mtts_per_seg, int, 0444);
MODULE_PARM_DESC(log_mtts_per_seg, "Log2 number of MTT entries per segment "
"(0-7) (default: 0)");
static struct param_data port_type_array = {
.id = PORT_TYPE_ARRAY,
.dbdf2val = {
.name = "port_type_array param",
.num_vals = 2,
.def_val = {MLX4_PORT_TYPE_ETH, MLX4_PORT_TYPE_ETH},
.range = {MLX4_PORT_TYPE_IB, MLX4_PORT_TYPE_NA}
}
};
module_param_string(port_type_array, port_type_array.dbdf2val.str,
sizeof(port_type_array.dbdf2val.str), 0444);
MODULE_PARM_DESC(port_type_array,
"Either pair of values (e.g. '1,2') to define uniform port1/port2 types configuration for all devices functions\n"
"\t\tor a string to map device function numbers to their pair of port types values (e.g. '0000:04:00.0-1;2,002b:1c:0b.a-1;1').\n"
"\t\tValid port types: 1-ib, 2-eth, 3-auto, 4-N/A\n"
"\t\tIn case that only one port is available use the N/A port type for port2 (e.g '1,4').");
struct mlx4_port_config {
struct list_head list;
enum mlx4_port_type port_type[MLX4_MAX_PORTS + 1];
struct pci_dev *pdev;
};
#define MLX4_LOG_NUM_MTT 20
/* We limit this to 30 because of a bitmap issue that uses int and not uint.
See mlx4_buddy_init -> bitmap_zero, which takes an int.
*/
#define MLX4_MAX_LOG_NUM_MTT 30
static struct mlx4_profile mod_param_profile = {
.num_qp = 19,
.num_srq = 16,
.rdmarc_per_qp = 4,
.num_cq = 16,
.num_mcg = 13,
.num_mpt = 19,
.num_mtt_segs = 0, /* max(20, 2*MTTs for host memory)) */
};
module_param_named(log_num_qp, mod_param_profile.num_qp, int, 0444);
MODULE_PARM_DESC(log_num_qp, "log maximum number of QPs per HCA (default: 19)");
module_param_named(log_num_srq, mod_param_profile.num_srq, int, 0444);
MODULE_PARM_DESC(log_num_srq, "log maximum number of SRQs per HCA "
"(default: 16)");
module_param_named(log_rdmarc_per_qp, mod_param_profile.rdmarc_per_qp, int,
0444);
MODULE_PARM_DESC(log_rdmarc_per_qp, "log number of RDMARC buffers per QP "
"(default: 4)");
module_param_named(log_num_cq, mod_param_profile.num_cq, int, 0444);
MODULE_PARM_DESC(log_num_cq, "log maximum number of CQs per HCA (default: 16)");
module_param_named(log_num_mcg, mod_param_profile.num_mcg, int, 0444);
MODULE_PARM_DESC(log_num_mcg, "log maximum number of multicast groups per HCA "
"(default: 13)");
module_param_named(log_num_mpt, mod_param_profile.num_mpt, int, 0444);
MODULE_PARM_DESC(log_num_mpt,
"log maximum number of memory protection table entries per "
"HCA (default: 19)");
module_param_named(log_num_mtt, mod_param_profile.num_mtt_segs, int, 0444);
MODULE_PARM_DESC(log_num_mtt,
"log maximum number of memory translation table segments per "
"HCA (default: max(20, 2*MTTs for register all of the host memory limited to 30))");
enum {
MLX4_IF_STATE_BASIC,
MLX4_IF_STATE_EXTENDED
};
static inline u64 dbdf_to_u64(int domain, int bus, int dev, int fn)
{
return (domain << 20) | (bus << 12) | (dev << 4) | fn;
}
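/*
 * Worked example (editor-added, illustrative): the device function
 * "0000:04:00.0" packs as dbdf_to_u64(0, 0x04, 0x00, 0) =
 * (0 << 20) | (4 << 12) | (0 << 4) | 0 = 0x4000.
 */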
static inline void pr_bdf_err(const char *dbdf, const char *pname)
{
pr_warn("mlx4_core: '%s' is not valid bdf in '%s'\n", dbdf, pname);
}
static inline void pr_val_err(const char *dbdf, const char *pname,
const char *val)
{
pr_warn("mlx4_core: value '%s' of bdf '%s' in '%s' is not valid\n"
, val, dbdf, pname);
}
static inline void pr_out_of_range_bdf(const char *dbdf, int val,
struct mlx4_dbdf2val_lst *dbdf2val)
{
pr_warn("mlx4_core: value %d in bdf '%s' of '%s' is out of its valid range (%d,%d)\n"
, val, dbdf, dbdf2val->name , dbdf2val->range.min,
dbdf2val->range.max);
}
static inline void pr_out_of_range(struct mlx4_dbdf2val_lst *dbdf2val)
{
pr_warn("mlx4_core: value of '%s' is out of its valid range (%d,%d)\n"
, dbdf2val->name , dbdf2val->range.min, dbdf2val->range.max);
}
static inline int is_in_range(int val, struct mlx4_range *r)
{
return (val >= r->min && val <= r->max);
}
static int update_defaults(struct param_data *pdata)
{
long int val[MLX4_MAX_BDF_VALS];
int ret;
char *t, *p = pdata->dbdf2val.str;
char sval[32];
int val_len;
if (!strlen(p) || strchr(p, ':') || strchr(p, '.') || strchr(p, ';'))
return INVALID_STR;
switch (pdata->id) {
case PORT_TYPE_ARRAY:
t = strchr(p, ',');
if (!t || t == p || (t - p) > sizeof(sval))
return INVALID_STR;
val_len = t - p;
strncpy(sval, p, val_len);
sval[val_len] = 0;
ret = kstrtol(sval, 0, &val[0]);
if (ret == -EINVAL)
return INVALID_STR;
if (ret || !is_in_range(val[0], &pdata->dbdf2val.range)) {
pr_out_of_range(&pdata->dbdf2val);
return INVALID_DATA;
}
ret = kstrtol(t + 1, 0, &val[1]);
if (ret == -EINVAL)
return INVALID_STR;
if (ret || !is_in_range(val[1], &pdata->dbdf2val.range)) {
pr_out_of_range(&pdata->dbdf2val);
return INVALID_DATA;
}
pdata->dbdf2val.tbl[0].val[0] = val[0];
pdata->dbdf2val.tbl[0].val[1] = val[1];
break;
case NUM_VFS:
case PROBE_VF:
ret = kstrtol(p, 0, &val[0]);
if (ret == -EINVAL)
return INVALID_STR;
if (ret || !is_in_range(val[0], &pdata->dbdf2val.range)) {
pr_out_of_range(&pdata->dbdf2val);
return INVALID_DATA;
}
pdata->dbdf2val.tbl[0].val[0] = val[0];
break;
}
pdata->dbdf2val.tbl[1].dbdf = MLX4_ENDOF_TBL;
return VALID_DATA;
}
int mlx4_fill_dbdf2val_tbl(struct mlx4_dbdf2val_lst *dbdf2val_lst)
{
int domain, bus, dev, fn;
u64 dbdf;
char *p, *t, *v;
char tmp[32];
char sbdf[32];
char sep = ',';
int j, k, str_size, i = 1;
int prfx_size;
p = dbdf2val_lst->str;
for (j = 0; j < dbdf2val_lst->num_vals; j++)
dbdf2val_lst->tbl[0].val[j] = dbdf2val_lst->def_val[j];
dbdf2val_lst->tbl[1].dbdf = MLX4_ENDOF_TBL;
str_size = strlen(dbdf2val_lst->str);
if (str_size == 0)
return 0;
while (strlen(p)) {
prfx_size = BDF_STR_SIZE;
sbdf[prfx_size] = 0;
strncpy(sbdf, p, prfx_size);
domain = DEFAULT_DOMAIN;
if (sscanf(sbdf, "%02x:%02x.%x-", &bus, &dev, &fn) != 3) {
prfx_size = DBDF_STR_SIZE;
sbdf[prfx_size] = 0;
strncpy(sbdf, p, prfx_size);
if (sscanf(sbdf, "%04x:%02x:%02x.%x-", &domain, &bus,
&dev, &fn) != 4) {
pr_bdf_err(sbdf, dbdf2val_lst->name);
goto err;
}
sprintf(tmp, "%04x:%02x:%02x.%x-", domain, bus, dev,
fn);
} else {
sprintf(tmp, "%02x:%02x.%x-", bus, dev, fn);
}
if (strnicmp(sbdf, tmp, sizeof(tmp))) {
pr_bdf_err(sbdf, dbdf2val_lst->name);
goto err;
}
dbdf = dbdf_to_u64(domain, bus, dev, fn);
for (j = 1; j < i; j++)
if (dbdf2val_lst->tbl[j].dbdf == dbdf) {
pr_warn("mlx4_core: in '%s', %s appears multiple times\n"
, dbdf2val_lst->name, sbdf);
goto err;
}
if (i >= MLX4_DEVS_TBL_SIZE) {
pr_warn("mlx4_core: Too many devices in '%s'\n"
, dbdf2val_lst->name);
goto err;
}
p += prfx_size;
t = strchr(p, sep);
t = t ? t : p + strlen(p);
if (p >= t) {
pr_val_err(sbdf, dbdf2val_lst->name, "");
goto err;
}
for (k = 0; k < dbdf2val_lst->num_vals; k++) {
char sval[32];
long int val;
int ret, val_len;
char vsep = ';';
v = (k == dbdf2val_lst->num_vals - 1) ? t : strchr(p, vsep);
if (!v || v > t || v == p || (v - p) > sizeof(sval)) {
pr_val_err(sbdf, dbdf2val_lst->name, p);
goto err;
}
val_len = v - p;
strncpy(sval, p, val_len);
sval[val_len] = 0;
ret = kstrtol(sval, 0, &val);
if (ret) {
if (strchr(p, vsep))
pr_warn("mlx4_core: too many vals in bdf '%s' of '%s'\n"
, sbdf, dbdf2val_lst->name);
else
pr_val_err(sbdf, dbdf2val_lst->name,
sval);
goto err;
}
if (!is_in_range(val, &dbdf2val_lst->range)) {
pr_out_of_range_bdf(sbdf, val, dbdf2val_lst);
goto err;
}
dbdf2val_lst->tbl[i].val[k] = val;
p = v;
if (p[0] == vsep)
p++;
}
dbdf2val_lst->tbl[i].dbdf = dbdf;
if (strlen(p)) {
if (p[0] != sep) {
pr_warn("mlx4_core: expect separator '%c' before '%s' in '%s'\n"
, sep, p, dbdf2val_lst->name);
goto err;
}
p++;
}
i++;
if (i < MLX4_DEVS_TBL_SIZE)
dbdf2val_lst->tbl[i].dbdf = MLX4_ENDOF_TBL;
}
return 0;
err:
dbdf2val_lst->tbl[1].dbdf = MLX4_ENDOF_TBL;
pr_warn("mlx4_core: The value of '%s' is incorrect. The value is discarded!\n"
, dbdf2val_lst->name);
return -EINVAL;
}
EXPORT_SYMBOL(mlx4_fill_dbdf2val_tbl);
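/*
 * Worked example (editor-added, illustrative): with the module parameter
 * num_vfs="0000:04:00.0-5,0000:04:00.1-2", mlx4_fill_dbdf2val_tbl() keeps the
 * default in tbl[0].val[0], fills tbl[1] = {dbdf 0x4000, val 5} and
 * tbl[2] = {dbdf 0x4001, val 2}, and terminates the table with
 * MLX4_ENDOF_TBL. mlx4_get_val() below then returns 5 for the device at
 * 0000:04:00.0 and the default for any device not listed.
 */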
int mlx4_get_val(struct mlx4_dbdf2val *tbl, struct pci_dev *pdev, int idx,
int *val)
{
u64 dbdf;
int i = 1;
*val = tbl[0].val[idx];
if (!pdev)
return -EINVAL;
dbdf = dbdf_to_u64(pci_get_domain(pdev->dev.bsddev), pci_get_bus(pdev->dev.bsddev),
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
while ((i < MLX4_DEVS_TBL_SIZE) && (tbl[i].dbdf != MLX4_ENDOF_TBL)) {
if (tbl[i].dbdf == dbdf) {
*val = tbl[i].val[idx];
return 0;
}
i++;
}
return 0;
}
EXPORT_SYMBOL(mlx4_get_val);
static void process_mod_param_profile(struct mlx4_profile *profile)
{
vm_size_t hwphyssz;
hwphyssz = 0;
TUNABLE_ULONG_FETCH("hw.realmem", (u_long *) &hwphyssz);
profile->num_qp = 1 << mod_param_profile.num_qp;
profile->num_srq = 1 << mod_param_profile.num_srq;
profile->rdmarc_per_qp = 1 << mod_param_profile.rdmarc_per_qp;
profile->num_cq = 1 << mod_param_profile.num_cq;
profile->num_mcg = 1 << mod_param_profile.num_mcg;
profile->num_mpt = 1 << mod_param_profile.num_mpt;
/*
* We want to scale the number of MTTs with the size of the
* system memory, since it makes sense to register a lot of
* memory on a system with a lot of memory. As a heuristic,
* make sure we have enough MTTs to register twice the system
* memory (with PAGE_SIZE entries).
*
* This number has to be a power of two and fit into 32 bits
* due to device limitations. We cap this at 2^30 because of a bitmap
* limitation (int instead of uint; see mlx4_buddy_init -> bitmap_zero).
* That limits us to 4TB of memory registration per HCA with
* 4KB pages, which is probably OK for the next few months.
*/
if (mod_param_profile.num_mtt_segs)
profile->num_mtt_segs = 1 << mod_param_profile.num_mtt_segs;
else {
profile->num_mtt_segs =
roundup_pow_of_two(max_t(unsigned,
1 << (MLX4_LOG_NUM_MTT - log_mtts_per_seg),
min(1UL <<
(MLX4_MAX_LOG_NUM_MTT -
log_mtts_per_seg),
(hwphyssz << 1)
>> log_mtts_per_seg)));
/* set the actual value, so it will be reflected to the user
using the sysfs */
mod_param_profile.num_mtt_segs = ilog2(profile->num_mtt_segs);
}
}
int mlx4_check_port_params(struct mlx4_dev *dev,
enum mlx4_port_type *port_type)
{
int i;
for (i = 0; i < dev->caps.num_ports - 1; i++) {
if (port_type[i] != port_type[i + 1]) {
if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_DPDP)) {
mlx4_err(dev, "Only same port types supported "
"on this HCA, aborting.\n");
return -EINVAL;
}
}
}
for (i = 0; i < dev->caps.num_ports; i++) {
if (!(port_type[i] & dev->caps.supported_type[i+1])) {
mlx4_err(dev, "Requested port type for port %d is not "
"supported on this HCA\n", i + 1);
return -EINVAL;
}
}
return 0;
}
static void mlx4_set_port_mask(struct mlx4_dev *dev)
{
int i;
for (i = 1; i <= dev->caps.num_ports; ++i)
dev->caps.port_mask[i] = dev->caps.port_type[i];
}
static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
{
int err;
int i;
err = mlx4_QUERY_DEV_CAP(dev, dev_cap);
if (err) {
mlx4_err(dev, "QUERY_DEV_CAP command failed, aborting.\n");
return err;
}
if (dev_cap->min_page_sz > PAGE_SIZE) {
mlx4_err(dev, "HCA minimum page size of %d bigger than "
"kernel PAGE_SIZE of %d, aborting.\n",
dev_cap->min_page_sz, (int)PAGE_SIZE);
return -ENODEV;
}
if (dev_cap->num_ports > MLX4_MAX_PORTS) {
mlx4_err(dev, "HCA has %d ports, but we only support %d, "
"aborting.\n",
dev_cap->num_ports, MLX4_MAX_PORTS);
return -ENODEV;
}
if (dev_cap->uar_size > pci_resource_len(dev->pdev, 2)) {
mlx4_err(dev, "HCA reported UAR size of 0x%x bigger than "
"PCI resource 2 size of 0x%llx, aborting.\n",
dev_cap->uar_size,
(unsigned long long) pci_resource_len(dev->pdev, 2));
return -ENODEV;
}
dev->caps.num_ports = dev_cap->num_ports;
dev->phys_caps.num_phys_eqs = MLX4_MAX_EQ_NUM;
for (i = 1; i <= dev->caps.num_ports; ++i) {
dev->caps.vl_cap[i] = dev_cap->max_vl[i];
dev->caps.ib_mtu_cap[i] = dev_cap->ib_mtu[i];
dev->phys_caps.gid_phys_table_len[i] = dev_cap->max_gids[i];
dev->phys_caps.pkey_phys_table_len[i] = dev_cap->max_pkeys[i];
/* set gid and pkey table operating lengths by default
* to non-sriov values */
dev->caps.gid_table_len[i] = dev_cap->max_gids[i];
dev->caps.pkey_table_len[i] = dev_cap->max_pkeys[i];
dev->caps.port_width_cap[i] = dev_cap->max_port_width[i];
dev->caps.eth_mtu_cap[i] = dev_cap->eth_mtu[i];
dev->caps.def_mac[i] = dev_cap->def_mac[i];
dev->caps.supported_type[i] = dev_cap->supported_port_types[i];
dev->caps.suggested_type[i] = dev_cap->suggested_type[i];
dev->caps.default_sense[i] = dev_cap->default_sense[i];
dev->caps.trans_type[i] = dev_cap->trans_type[i];
dev->caps.vendor_oui[i] = dev_cap->vendor_oui[i];
dev->caps.wavelength[i] = dev_cap->wavelength[i];
dev->caps.trans_code[i] = dev_cap->trans_code[i];
}
dev->caps.uar_page_size = PAGE_SIZE;
dev->caps.num_uars = dev_cap->uar_size / PAGE_SIZE;
dev->caps.local_ca_ack_delay = dev_cap->local_ca_ack_delay;
dev->caps.bf_reg_size = dev_cap->bf_reg_size;
dev->caps.bf_regs_per_page = dev_cap->bf_regs_per_page;
dev->caps.max_sq_sg = dev_cap->max_sq_sg;
dev->caps.max_rq_sg = dev_cap->max_rq_sg;
dev->caps.max_wqes = dev_cap->max_qp_sz;
dev->caps.max_qp_init_rdma = dev_cap->max_requester_per_qp;
dev->caps.max_srq_wqes = dev_cap->max_srq_sz;
dev->caps.max_srq_sge = dev_cap->max_rq_sg - 1;
dev->caps.reserved_srqs = dev_cap->reserved_srqs;
dev->caps.max_sq_desc_sz = dev_cap->max_sq_desc_sz;
dev->caps.max_rq_desc_sz = dev_cap->max_rq_desc_sz;
/*
* Subtract 1 from the limit because we need to allocate a
* spare CQE to enable resizing the CQ
*/
dev->caps.max_cqes = dev_cap->max_cq_sz - 1;
dev->caps.reserved_cqs = dev_cap->reserved_cqs;
dev->caps.reserved_eqs = dev_cap->reserved_eqs;
dev->caps.reserved_mtts = dev_cap->reserved_mtts;
dev->caps.reserved_mrws = dev_cap->reserved_mrws;
/* The first 128 UARs are used for EQ doorbells */
dev->caps.reserved_uars = max_t(int, 128, dev_cap->reserved_uars);
dev->caps.reserved_pds = dev_cap->reserved_pds;
dev->caps.reserved_xrcds = (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) ?
dev_cap->reserved_xrcds : 0;
dev->caps.max_xrcds = (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) ?
dev_cap->max_xrcds : 0;
dev->caps.mtt_entry_sz = dev_cap->mtt_entry_sz;
dev->caps.max_msg_sz = dev_cap->max_msg_sz;
dev->caps.page_size_cap = ~(u32) (dev_cap->min_page_sz - 1);
dev->caps.flags = dev_cap->flags;
dev->caps.flags2 = dev_cap->flags2;
dev->caps.bmme_flags = dev_cap->bmme_flags;
dev->caps.reserved_lkey = dev_cap->reserved_lkey;
dev->caps.stat_rate_support = dev_cap->stat_rate_support;
dev->caps.cq_timestamp = dev_cap->timestamp_support;
dev->caps.max_gso_sz = dev_cap->max_gso_sz;
dev->caps.max_rss_tbl_sz = dev_cap->max_rss_tbl_sz;
/* Sense port always allowed on supported devices for ConnectX-1 and -2 */
if (mlx4_priv(dev)->pci_dev_data & MLX4_PCI_DEV_FORCE_SENSE_PORT)
dev->caps.flags |= MLX4_DEV_CAP_FLAG_SENSE_SUPPORT;
/* Don't do sense port on multifunction devices (for now at least) */
if (mlx4_is_mfunc(dev))
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_SENSE_SUPPORT;
dev->caps.log_num_macs = log_num_mac;
dev->caps.log_num_vlans = MLX4_LOG_NUM_VLANS;
dev->caps.fast_drop = fast_drop ?
!!(dev->caps.flags & MLX4_DEV_CAP_FLAG_FAST_DROP) :
0;
for (i = 1; i <= dev->caps.num_ports; ++i) {
dev->caps.port_type[i] = MLX4_PORT_TYPE_NONE;
if (dev->caps.supported_type[i]) {
/* if only ETH is supported - assign ETH */
if (dev->caps.supported_type[i] == MLX4_PORT_TYPE_ETH)
dev->caps.port_type[i] = MLX4_PORT_TYPE_ETH;
/* if only IB is supported, assign IB */
else if (dev->caps.supported_type[i] ==
MLX4_PORT_TYPE_IB)
dev->caps.port_type[i] = MLX4_PORT_TYPE_IB;
else {
/*
* if IB and ETH are supported, we set the port
* type according to user selection of port type;
* if there is no user selection, take the FW hint
*/
int pta;
mlx4_get_val(port_type_array.dbdf2val.tbl,
pci_physfn(dev->pdev), i - 1,
&pta);
if (pta == MLX4_PORT_TYPE_NONE) {
dev->caps.port_type[i] = dev->caps.suggested_type[i] ?
MLX4_PORT_TYPE_ETH : MLX4_PORT_TYPE_IB;
} else if (pta == MLX4_PORT_TYPE_NA) {
mlx4_err(dev, "Port %d is valid port. "
"It is not allowed to configure its type to N/A(%d)\n",
i, MLX4_PORT_TYPE_NA);
return -EINVAL;
} else {
dev->caps.port_type[i] = pta;
}
}
}
/*
* Link sensing is allowed on the port if 3 conditions are true:
* 1. Both protocols are supported on the port.
* 2. Different types are supported on the port.
* 3. FW declared that it supports link sensing.
*/
mlx4_priv(dev)->sense.sense_allowed[i] =
((dev->caps.supported_type[i] == MLX4_PORT_TYPE_AUTO) &&
(dev->caps.flags & MLX4_DEV_CAP_FLAG_DPDP) &&
(dev->caps.flags & MLX4_DEV_CAP_FLAG_SENSE_SUPPORT));
/* Disable auto sense to support default Eth ports */
mlx4_priv(dev)->sense.sense_allowed[i] = 0;
/*
* If "default_sense" bit is set, we move the port to "AUTO" mode
* and perform sense_port FW command to try and set the correct
* port type from beginning
*/
if (mlx4_priv(dev)->sense.sense_allowed[i] && dev->caps.default_sense[i]) {
enum mlx4_port_type sensed_port = MLX4_PORT_TYPE_NONE;
dev->caps.possible_type[i] = MLX4_PORT_TYPE_AUTO;
mlx4_SENSE_PORT(dev, i, &sensed_port);
if (sensed_port != MLX4_PORT_TYPE_NONE)
dev->caps.port_type[i] = sensed_port;
} else {
dev->caps.possible_type[i] = dev->caps.port_type[i];
}
if (dev->caps.log_num_macs > dev_cap->log_max_macs[i]) {
dev->caps.log_num_macs = dev_cap->log_max_macs[i];
mlx4_warn(dev, "Requested number of MACs is too much "
"for port %d, reducing to %d.\n",
i, 1 << dev->caps.log_num_macs);
}
if (dev->caps.log_num_vlans > dev_cap->log_max_vlans[i]) {
dev->caps.log_num_vlans = dev_cap->log_max_vlans[i];
mlx4_warn(dev, "Requested number of VLANs is too much "
"for port %d, reducing to %d.\n",
i, 1 << dev->caps.log_num_vlans);
}
}
dev->caps.max_basic_counters = dev_cap->max_basic_counters;
dev->caps.max_extended_counters = dev_cap->max_extended_counters;
/* support extended counters if available */
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_COUNTERS_EXT)
dev->caps.max_counters = dev->caps.max_extended_counters;
else
dev->caps.max_counters = dev->caps.max_basic_counters;
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW] = dev_cap->reserved_qps;
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_ETH_ADDR] =
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FC_ADDR] =
(1 << dev->caps.log_num_macs) *
(1 << dev->caps.log_num_vlans) *
dev->caps.num_ports;
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FC_EXCH] = MLX4_NUM_FEXCH;
dev->caps.reserved_qps = dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW] +
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_ETH_ADDR] +
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FC_ADDR] +
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FC_EXCH];
dev->caps.sync_qp = dev_cap->sync_qp;
if (dev->pdev->device == 0x1003)
dev->caps.cq_flags |= MLX4_DEV_CAP_CQ_FLAG_IO;
dev->caps.sqp_demux = (mlx4_is_master(dev)) ? MLX4_MAX_NUM_SLAVES : 0;
if (!mlx4_enable_64b_cqe_eqe && !mlx4_is_slave(dev)) {
if (dev_cap->flags &
(MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) {
mlx4_warn(dev, "64B EQEs/CQEs supported by the device but not enabled\n");
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_64B_CQE;
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_64B_EQE;
}
}
if ((dev->caps.flags &
(MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) &&
mlx4_is_master(dev))
dev->caps.function_caps |= MLX4_FUNC_CAP_64B_EQE_CQE;
if (!mlx4_is_slave(dev)) {
for (i = 0; i < dev->caps.num_ports; ++i)
dev->caps.def_counter_index[i] = i << 1;
}
return 0;
}
/* Check whether there are live VFs; return how many there are. */
static int mlx4_how_many_lives_vf(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct mlx4_slave_state *s_state;
int i;
int ret = 0;
for (i = 1 /* slave 0 is the PPF */; i < dev->num_slaves; ++i) {
s_state = &priv->mfunc.master.slave_state[i];
if (s_state->active && s_state->last_cmd !=
MLX4_COMM_CMD_RESET) {
mlx4_warn(dev, "%s: slave: %d is still active\n",
__func__, i);
ret++;
}
}
return ret;
}
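/*
 * Paravirtualized QKEYs are carved out of a reserved range starting at
 * MLX4_RESERVED_QKEY_BASE: a proxy or tunnel special QP gets the base
 * plus its offset from the base QPN of its own range.
 */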
int mlx4_get_parav_qkey(struct mlx4_dev *dev, u32 qpn, u32 *qkey)
{
u32 qk = MLX4_RESERVED_QKEY_BASE;
if (qpn >= dev->phys_caps.base_tunnel_sqpn + 8 * MLX4_MFUNC_MAX ||
qpn < dev->phys_caps.base_proxy_sqpn)
return -EINVAL;
if (qpn >= dev->phys_caps.base_tunnel_sqpn)
/* tunnel qp */
qk += qpn - dev->phys_caps.base_tunnel_sqpn;
else
qk += qpn - dev->phys_caps.base_proxy_sqpn;
*qkey = qk;
return 0;
}
EXPORT_SYMBOL(mlx4_get_parav_qkey);
void mlx4_sync_pkey_table(struct mlx4_dev *dev, int slave, int port, int i, int val)
{
struct mlx4_priv *priv = container_of(dev, struct mlx4_priv, dev);
if (!mlx4_is_master(dev))
return;
priv->virt2phys_pkey[slave][port - 1][i] = val;
}
EXPORT_SYMBOL(mlx4_sync_pkey_table);
void mlx4_put_slave_node_guid(struct mlx4_dev *dev, int slave, __be64 guid)
{
struct mlx4_priv *priv = container_of(dev, struct mlx4_priv, dev);
if (!mlx4_is_master(dev))
return;
priv->slave_node_guids[slave] = guid;
}
EXPORT_SYMBOL(mlx4_put_slave_node_guid);
__be64 mlx4_get_slave_node_guid(struct mlx4_dev *dev, int slave)
{
struct mlx4_priv *priv = container_of(dev, struct mlx4_priv, dev);
if (!mlx4_is_master(dev))
return 0;
return priv->slave_node_guids[slave];
}
EXPORT_SYMBOL(mlx4_get_slave_node_guid);
int mlx4_is_slave_active(struct mlx4_dev *dev, int slave)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct mlx4_slave_state *s_slave;
if (!mlx4_is_master(dev))
return 0;
s_slave = &priv->mfunc.master.slave_state[slave];
return !!s_slave->active;
}
EXPORT_SYMBOL(mlx4_is_slave_active);
static void slave_adjust_steering_mode(struct mlx4_dev *dev,
struct mlx4_dev_cap *dev_cap,
struct mlx4_init_hca_param *hca_param)
{
dev->caps.steering_mode = hca_param->steering_mode;
if (dev->caps.steering_mode == MLX4_STEERING_MODE_DEVICE_MANAGED)
dev->caps.num_qp_per_mgm = dev_cap->fs_max_num_qp_per_entry;
else
dev->caps.num_qp_per_mgm =
4 * ((1 << hca_param->log_mc_entry_sz)/16 - 2);
mlx4_dbg(dev, "Steering mode is: %s\n",
mlx4_steering_mode_str(dev->caps.steering_mode));
}
static int mlx4_slave_cap(struct mlx4_dev *dev)
{
int err;
u32 page_size;
struct mlx4_dev_cap dev_cap;
struct mlx4_func_cap func_cap;
struct mlx4_init_hca_param hca_param;
int i;
memset(&hca_param, 0, sizeof(hca_param));
err = mlx4_QUERY_HCA(dev, &hca_param);
if (err) {
mlx4_err(dev, "QUERY_HCA command failed, aborting.\n");
return err;
}
/* Fail if the HCA has an unknown capability */
if ((hca_param.global_caps | HCA_GLOBAL_CAP_MASK) !=
HCA_GLOBAL_CAP_MASK) {
mlx4_err(dev, "Unknown hca global capabilities\n");
return -ENOSYS;
}
mlx4_log_num_mgm_entry_size = hca_param.log_mc_entry_sz;
dev->caps.hca_core_clock = hca_param.hca_core_clock;
memset(&dev_cap, 0, sizeof(dev_cap));
dev->caps.max_qp_dest_rdma = 1 << hca_param.log_rd_per_qp;
err = mlx4_dev_cap(dev, &dev_cap);
if (err) {
mlx4_err(dev, "QUERY_DEV_CAP command failed, aborting.\n");
return err;
}
err = mlx4_QUERY_FW(dev);
if (err)
mlx4_err(dev, "QUERY_FW command failed: could not get FW version.\n");
if (!hca_param.mw_enable) {
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_MEM_WINDOW;
dev->caps.bmme_flags &= ~MLX4_BMME_FLAG_TYPE_2_WIN;
}
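/*
 * page_size_cap is ~(min_page_sz - 1), so inverting it and adding one
 * (two's complement) recovers the HCA's minimum supported page size.
 */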
page_size = ~dev->caps.page_size_cap + 1;
mlx4_warn(dev, "HCA minimum page size:%d\n", page_size);
if (page_size > PAGE_SIZE) {
mlx4_err(dev, "HCA minimum page size of %d bigger than "
"kernel PAGE_SIZE of %d, aborting.\n",
page_size, (int)PAGE_SIZE);
return -ENODEV;
}
/* slave gets uar page size from QUERY_HCA fw command */
dev->caps.uar_page_size = 1 << (hca_param.uar_page_sz + 12);
/* TODO: relax this assumption */
if (dev->caps.uar_page_size != PAGE_SIZE) {
mlx4_err(dev, "UAR size:%d != kernel PAGE_SIZE of %d\n",
dev->caps.uar_page_size, (int)PAGE_SIZE);
return -ENODEV;
}
memset(&func_cap, 0, sizeof(func_cap));
err = mlx4_QUERY_FUNC_CAP(dev, 0, &func_cap);
if (err) {
mlx4_err(dev, "QUERY_FUNC_CAP general command failed, aborting (%d).\n",
err);
return err;
}
if ((func_cap.pf_context_behaviour | PF_CONTEXT_BEHAVIOUR_MASK) !=
PF_CONTEXT_BEHAVIOUR_MASK) {
mlx4_err(dev, "Unknown pf context behaviour\n");
return -ENOSYS;
}
dev->caps.num_ports = func_cap.num_ports;
dev->quotas.qp = func_cap.qp_quota;
dev->quotas.srq = func_cap.srq_quota;
dev->quotas.cq = func_cap.cq_quota;
dev->quotas.mpt = func_cap.mpt_quota;
dev->quotas.mtt = func_cap.mtt_quota;
dev->caps.num_qps = 1 << hca_param.log_num_qps;
dev->caps.num_srqs = 1 << hca_param.log_num_srqs;
dev->caps.num_cqs = 1 << hca_param.log_num_cqs;
dev->caps.num_mpts = 1 << hca_param.log_mpt_sz;
dev->caps.num_eqs = func_cap.max_eq;
dev->caps.reserved_eqs = func_cap.reserved_eq;
dev->caps.num_pds = MLX4_NUM_PDS;
dev->caps.num_mgms = 0;
dev->caps.num_amgms = 0;
if (dev->caps.num_ports > MLX4_MAX_PORTS) {
mlx4_err(dev, "HCA has %d ports, but we only support %d, "
"aborting.\n", dev->caps.num_ports, MLX4_MAX_PORTS);
return -ENODEV;
}
dev->caps.qp0_tunnel = kcalloc(dev->caps.num_ports, sizeof (u32), GFP_KERNEL);
dev->caps.qp0_proxy = kcalloc(dev->caps.num_ports, sizeof (u32), GFP_KERNEL);
dev->caps.qp1_tunnel = kcalloc(dev->caps.num_ports, sizeof (u32), GFP_KERNEL);
dev->caps.qp1_proxy = kcalloc(dev->caps.num_ports, sizeof (u32), GFP_KERNEL);
if (!dev->caps.qp0_tunnel || !dev->caps.qp0_proxy ||
!dev->caps.qp1_tunnel || !dev->caps.qp1_proxy) {
err = -ENOMEM;
goto err_mem;
}
for (i = 1; i <= dev->caps.num_ports; ++i) {
err = mlx4_QUERY_FUNC_CAP(dev, (u32) i, &func_cap);
if (err) {
mlx4_err(dev, "QUERY_FUNC_CAP port command failed for"
" port %d, aborting (%d).\n", i, err);
goto err_mem;
}
dev->caps.qp0_tunnel[i - 1] = func_cap.qp0_tunnel_qpn;
dev->caps.qp0_proxy[i - 1] = func_cap.qp0_proxy_qpn;
dev->caps.qp1_tunnel[i - 1] = func_cap.qp1_tunnel_qpn;
dev->caps.qp1_proxy[i - 1] = func_cap.qp1_proxy_qpn;
dev->caps.def_counter_index[i - 1] = func_cap.def_counter_index;
dev->caps.port_mask[i] = dev->caps.port_type[i];
err = mlx4_get_slave_pkey_gid_tbl_len(dev, i,
&dev->caps.gid_table_len[i],
&dev->caps.pkey_table_len[i]);
if (err)
goto err_mem;
}
if (dev->caps.uar_page_size * (dev->caps.num_uars -
dev->caps.reserved_uars) >
pci_resource_len(dev->pdev, 2)) {
mlx4_err(dev, "HCA reported UAR region size of 0x%x bigger than "
"PCI resource 2 size of 0x%llx, aborting.\n",
dev->caps.uar_page_size * dev->caps.num_uars,
(unsigned long long) pci_resource_len(dev->pdev, 2));
err = -ENOMEM;
goto err_mem;
}
if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_64B_EQE_ENABLED) {
dev->caps.eqe_size = 64;
dev->caps.eqe_factor = 1;
} else {
dev->caps.eqe_size = 32;
dev->caps.eqe_factor = 0;
}
if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_64B_CQE_ENABLED) {
dev->caps.cqe_size = 64;
dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_64B_CQE;
} else {
dev->caps.cqe_size = 32;
}
dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS;
mlx4_warn(dev, "Timestamping is not supported in slave mode.\n");
slave_adjust_steering_mode(dev, &dev_cap, &hca_param);
return 0;
err_mem:
kfree(dev->caps.qp0_tunnel);
kfree(dev->caps.qp0_proxy);
kfree(dev->caps.qp1_tunnel);
kfree(dev->caps.qp1_proxy);
dev->caps.qp0_tunnel = dev->caps.qp0_proxy =
dev->caps.qp1_tunnel = dev->caps.qp1_proxy = NULL;
return err;
}
static void mlx4_request_modules(struct mlx4_dev *dev)
{
int port;
int has_ib_port = false;
int has_eth_port = false;
#define EN_DRV_NAME "mlx4_en"
#define IB_DRV_NAME "mlx4_ib"
for (port = 1; port <= dev->caps.num_ports; port++) {
if (dev->caps.port_type[port] == MLX4_PORT_TYPE_IB)
has_ib_port = true;
else if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH)
has_eth_port = true;
}
if (has_ib_port)
request_module_nowait(IB_DRV_NAME);
if (has_eth_port)
request_module_nowait(EN_DRV_NAME);
}
/*
* Change the port configuration of the device.
* Every user of this function must hold the port mutex.
*/
int mlx4_change_port_types(struct mlx4_dev *dev,
enum mlx4_port_type *port_types)
{
int err = 0;
int change = 0;
int port;
for (port = 0; port < dev->caps.num_ports; port++) {
/* Change the port type only if the new type is different
* from the current, and not set to Auto */
if (port_types[port] != dev->caps.port_type[port + 1])
change = 1;
}
if (change) {
mlx4_unregister_device(dev);
for (port = 1; port <= dev->caps.num_ports; port++) {
mlx4_CLOSE_PORT(dev, port);
dev->caps.port_type[port] = port_types[port - 1];
err = mlx4_SET_PORT(dev, port, -1);
if (err) {
mlx4_err(dev, "Failed to set port %d, "
"aborting\n", port);
goto out;
}
}
mlx4_set_port_mask(dev);
err = mlx4_register_device(dev);
if (err) {
mlx4_err(dev, "Failed to register device\n");
goto out;
}
mlx4_request_modules(dev);
}
out:
return err;
}
static ssize_t show_port_type(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mlx4_port_info *info = container_of(attr, struct mlx4_port_info,
port_attr);
struct mlx4_dev *mdev = info->dev;
char type[8];
sprintf(type, "%s",
(mdev->caps.port_type[info->port] == MLX4_PORT_TYPE_IB) ?
"ib" : "eth");
if (mdev->caps.possible_type[info->port] == MLX4_PORT_TYPE_AUTO)
sprintf(buf, "auto (%s)\n", type);
else
sprintf(buf, "%s\n", type);
return strlen(buf);
}
static ssize_t set_port_type(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct mlx4_port_info *info = container_of(attr, struct mlx4_port_info,
port_attr);
struct mlx4_dev *mdev = info->dev;
struct mlx4_priv *priv = mlx4_priv(mdev);
enum mlx4_port_type types[MLX4_MAX_PORTS];
enum mlx4_port_type new_types[MLX4_MAX_PORTS];
int i;
int err = 0;
if (!strcmp(buf, "ib\n"))
info->tmp_type = MLX4_PORT_TYPE_IB;
else if (!strcmp(buf, "eth\n"))
info->tmp_type = MLX4_PORT_TYPE_ETH;
else if (!strcmp(buf, "auto\n"))
info->tmp_type = MLX4_PORT_TYPE_AUTO;
else {
mlx4_err(mdev, "%s is not supported port type\n", buf);
return -EINVAL;
}
if ((info->tmp_type & mdev->caps.supported_type[info->port]) !=
info->tmp_type) {
mlx4_err(mdev, "Requested port type for port %d is not supported on this HCA\n",
info->port);
return -EINVAL;
}
mlx4_stop_sense(mdev);
mutex_lock(&priv->port_mutex);
/* Possible type is always the one that was delivered */
mdev->caps.possible_type[info->port] = info->tmp_type;
for (i = 0; i < mdev->caps.num_ports; i++) {
types[i] = priv->port[i+1].tmp_type ? priv->port[i+1].tmp_type :
mdev->caps.possible_type[i+1];
if (types[i] == MLX4_PORT_TYPE_AUTO)
types[i] = mdev->caps.port_type[i+1];
}
if (!(mdev->caps.flags & MLX4_DEV_CAP_FLAG_DPDP) &&
!(mdev->caps.flags & MLX4_DEV_CAP_FLAG_SENSE_SUPPORT)) {
for (i = 1; i <= mdev->caps.num_ports; i++) {
if (mdev->caps.possible_type[i] == MLX4_PORT_TYPE_AUTO) {
mdev->caps.possible_type[i] = mdev->caps.port_type[i];
err = -EINVAL;
}
}
}
if (err) {
mlx4_err(mdev, "Auto sensing is not supported on this HCA. "
"Set only 'eth' or 'ib' for both ports "
"(should be the same)\n");
goto out;
}
mlx4_do_sense_ports(mdev, new_types, types);
err = mlx4_check_port_params(mdev, new_types);
if (err)
goto out;
/* We are about to apply the changes after the configuration
* was verified, no need to remember the temporary types
* any more */
for (i = 0; i < mdev->caps.num_ports; i++)
priv->port[i + 1].tmp_type = 0;
err = mlx4_change_port_types(mdev, new_types);
out:
mlx4_start_sense(mdev);
mutex_unlock(&priv->port_mutex);
return err ? err : count;
}
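/*
 * IB MTU encodings; these match the values the IBTA specification
 * defines for the PortInfo MTU fields (1..5 for 256..4096 bytes).
 */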
enum ibta_mtu {
IB_MTU_256 = 1,
IB_MTU_512 = 2,
IB_MTU_1024 = 3,
IB_MTU_2048 = 4,
IB_MTU_4096 = 5
};
static inline int int_to_ibta_mtu(int mtu)
{
switch (mtu) {
case 256: return IB_MTU_256;
case 512: return IB_MTU_512;
case 1024: return IB_MTU_1024;
case 2048: return IB_MTU_2048;
case 4096: return IB_MTU_4096;
default: return -1;
}
}
static inline int ibta_mtu_to_int(enum ibta_mtu mtu)
{
switch (mtu) {
case IB_MTU_256: return 256;
case IB_MTU_512: return 512;
case IB_MTU_1024: return 1024;
case IB_MTU_2048: return 2048;
case IB_MTU_4096: return 4096;
default: return -1;
}
}
static ssize_t
show_board(struct device *device, struct device_attribute *attr,
char *buf)
{
struct mlx4_hca_info *info = container_of(attr, struct mlx4_hca_info,
board_attr);
struct mlx4_dev *mdev = info->dev;
return sprintf(buf, "%.*s\n", MLX4_BOARD_ID_LEN,
mdev->board_id);
}
static ssize_t
show_hca(struct device *device, struct device_attribute *attr,
char *buf)
{
struct mlx4_hca_info *info = container_of(attr, struct mlx4_hca_info,
hca_attr);
struct mlx4_dev *mdev = info->dev;
return sprintf(buf, "MT%d\n", mdev->pdev->device);
}
static ssize_t
show_firmware_version(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mlx4_hca_info *info = container_of(attr, struct mlx4_hca_info,
firmware_attr);
struct mlx4_dev *mdev = info->dev;
return sprintf(buf, "%d.%d.%d\n", (int)(mdev->caps.fw_ver >> 32),
(int)(mdev->caps.fw_ver >> 16) & 0xffff,
(int)mdev->caps.fw_ver & 0xffff);
}
static ssize_t show_port_ib_mtu(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mlx4_port_info *info = container_of(attr, struct mlx4_port_info,
port_mtu_attr);
struct mlx4_dev *mdev = info->dev;
/* When port type is eth, port mtu value isn't used. */
if (mdev->caps.port_type[info->port] == MLX4_PORT_TYPE_ETH)
return -EINVAL;
sprintf(buf, "%d\n",
ibta_mtu_to_int(mdev->caps.port_ib_mtu[info->port]));
return strlen(buf);
}
static ssize_t set_port_ib_mtu(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct mlx4_port_info *info = container_of(attr, struct mlx4_port_info,
port_mtu_attr);
struct mlx4_dev *mdev = info->dev;
struct mlx4_priv *priv = mlx4_priv(mdev);
int err, port, mtu, ibta_mtu = -1;
if (mdev->caps.port_type[info->port] == MLX4_PORT_TYPE_ETH) {
mlx4_warn(mdev, "port level mtu is only used for IB ports\n");
return -EINVAL;
}
mtu = (int) simple_strtol(buf, NULL, 0);
ibta_mtu = int_to_ibta_mtu(mtu);
if (ibta_mtu < 0) {
mlx4_err(mdev, "%s is invalid IBTA mtu\n", buf);
return -EINVAL;
}
mdev->caps.port_ib_mtu[info->port] = ibta_mtu;
mlx4_stop_sense(mdev);
mutex_lock(&priv->port_mutex);
mlx4_unregister_device(mdev);
for (port = 1; port <= mdev->caps.num_ports; port++) {
mlx4_CLOSE_PORT(mdev, port);
err = mlx4_SET_PORT(mdev, port, -1);
if (err) {
mlx4_err(mdev, "Failed to set port %d, "
"aborting\n", port);
goto err_set_port;
}
}
err = mlx4_register_device(mdev);
err_set_port:
mutex_unlock(&priv->port_mutex);
mlx4_start_sense(mdev);
return err ? err : count;
}
static int mlx4_load_fw(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int err, unmap_flag = 0;
priv->fw.fw_icm = mlx4_alloc_icm(dev, priv->fw.fw_pages,
GFP_HIGHUSER | __GFP_NOWARN, 0);
if (!priv->fw.fw_icm) {
mlx4_err(dev, "Couldn't allocate FW area, aborting.\n");
return -ENOMEM;
}
err = mlx4_MAP_FA(dev, priv->fw.fw_icm);
if (err) {
mlx4_err(dev, "MAP_FA command failed, aborting.\n");
goto err_free;
}
err = mlx4_RUN_FW(dev);
if (err) {
mlx4_err(dev, "RUN_FW command failed, aborting.\n");
goto err_unmap_fa;
}
return 0;
err_unmap_fa:
unmap_flag = mlx4_UNMAP_FA(dev);
if (unmap_flag)
pr_warn("mlx4_core: mlx4_UNMAP_FA failed.\n");
err_free:
if (!unmap_flag)
mlx4_free_icm(dev, priv->fw.fw_icm, 0);
return err;
}
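/*
 * The cMPT ICM area is split into four consecutive per-type regions
 * (QP, SRQ, CQ, EQ); each region starts at
 * cmpt_base + ((type * cmpt_entry_sz) << MLX4_CMPT_SHIFT).
 */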
static int mlx4_init_cmpt_table(struct mlx4_dev *dev, u64 cmpt_base,
int cmpt_entry_sz)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int err;
int num_eqs;
err = mlx4_init_icm_table(dev, &priv->qp_table.cmpt_table,
cmpt_base +
((u64) (MLX4_CMPT_TYPE_QP *
cmpt_entry_sz) << MLX4_CMPT_SHIFT),
cmpt_entry_sz, dev->caps.num_qps,
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW],
0, 0);
if (err)
goto err;
err = mlx4_init_icm_table(dev, &priv->srq_table.cmpt_table,
cmpt_base +
((u64) (MLX4_CMPT_TYPE_SRQ *
cmpt_entry_sz) << MLX4_CMPT_SHIFT),
cmpt_entry_sz, dev->caps.num_srqs,
dev->caps.reserved_srqs, 0, 0);
if (err)
goto err_qp;
err = mlx4_init_icm_table(dev, &priv->cq_table.cmpt_table,
cmpt_base +
((u64) (MLX4_CMPT_TYPE_CQ *
cmpt_entry_sz) << MLX4_CMPT_SHIFT),
cmpt_entry_sz, dev->caps.num_cqs,
dev->caps.reserved_cqs, 0, 0);
if (err)
goto err_srq;
num_eqs = (mlx4_is_master(dev)) ? dev->phys_caps.num_phys_eqs :
dev->caps.num_eqs;
err = mlx4_init_icm_table(dev, &priv->eq_table.cmpt_table,
cmpt_base +
((u64) (MLX4_CMPT_TYPE_EQ *
cmpt_entry_sz) << MLX4_CMPT_SHIFT),
cmpt_entry_sz, num_eqs, num_eqs, 0, 0);
if (err)
goto err_cq;
return 0;
err_cq:
mlx4_cleanup_icm_table(dev, &priv->cq_table.cmpt_table);
err_srq:
mlx4_cleanup_icm_table(dev, &priv->srq_table.cmpt_table);
err_qp:
mlx4_cleanup_icm_table(dev, &priv->qp_table.cmpt_table);
err:
return err;
}
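/*
 * Size the ICM with SET_ICM_SIZE, map the auxiliary area, then map the
 * per-resource context tables (cMPT, EQ, MTT, dMPT, QP, CQ, SRQ, MCG)
 * in dependency order, unwinding in reverse on failure.
 */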
static int mlx4_init_icm(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap,
struct mlx4_init_hca_param *init_hca, u64 icm_size)
{
struct mlx4_priv *priv = mlx4_priv(dev);
u64 aux_pages;
int num_eqs;
int err, unmap_flag = 0;
err = mlx4_SET_ICM_SIZE(dev, icm_size, &aux_pages);
if (err) {
mlx4_err(dev, "SET_ICM_SIZE command failed, aborting.\n");
return err;
}
mlx4_dbg(dev, "%lld KB of HCA context requires %lld KB aux memory.\n",
(unsigned long long) icm_size >> 10,
(unsigned long long) aux_pages << 2);
priv->fw.aux_icm = mlx4_alloc_icm(dev, aux_pages,
GFP_HIGHUSER | __GFP_NOWARN, 0);
if (!priv->fw.aux_icm) {
mlx4_err(dev, "Couldn't allocate aux memory, aborting.\n");
return -ENOMEM;
}
err = mlx4_MAP_ICM_AUX(dev, priv->fw.aux_icm);
if (err) {
mlx4_err(dev, "MAP_ICM_AUX command failed, aborting.\n");
goto err_free_aux;
}
err = mlx4_init_cmpt_table(dev, init_hca->cmpt_base, dev_cap->cmpt_entry_sz);
if (err) {
mlx4_err(dev, "Failed to map cMPT context memory, aborting.\n");
goto err_unmap_aux;
}
num_eqs = (mlx4_is_master(dev)) ? dev->phys_caps.num_phys_eqs :
dev->caps.num_eqs;
err = mlx4_init_icm_table(dev, &priv->eq_table.table,
init_hca->eqc_base, dev_cap->eqc_entry_sz,
num_eqs, num_eqs, 0, 0);
if (err) {
mlx4_err(dev, "Failed to map EQ context memory, aborting.\n");
goto err_unmap_cmpt;
}
/*
* Reserved MTT entries must be aligned up to a cacheline
* boundary, since the FW will write to them, while the driver
* writes to all other MTT entries. (The variable
* dev->caps.mtt_entry_sz below is really the MTT segment
* size, not the raw entry size)
*/
dev->caps.reserved_mtts =
ALIGN(dev->caps.reserved_mtts * dev->caps.mtt_entry_sz,
dma_get_cache_alignment()) / dev->caps.mtt_entry_sz;
err = mlx4_init_icm_table(dev, &priv->mr_table.mtt_table,
init_hca->mtt_base,
dev->caps.mtt_entry_sz,
dev->caps.num_mtts,
dev->caps.reserved_mtts, 1, 0);
if (err) {
mlx4_err(dev, "Failed to map MTT context memory, aborting.\n");
goto err_unmap_eq;
}
err = mlx4_init_icm_table(dev, &priv->mr_table.dmpt_table,
init_hca->dmpt_base,
dev_cap->dmpt_entry_sz,
dev->caps.num_mpts,
dev->caps.reserved_mrws, 1, 1);
if (err) {
mlx4_err(dev, "Failed to map dMPT context memory, aborting.\n");
goto err_unmap_mtt;
}
err = mlx4_init_icm_table(dev, &priv->qp_table.qp_table,
init_hca->qpc_base,
dev_cap->qpc_entry_sz,
dev->caps.num_qps,
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW],
0, 0);
if (err) {
mlx4_err(dev, "Failed to map QP context memory, aborting.\n");
goto err_unmap_dmpt;
}
err = mlx4_init_icm_table(dev, &priv->qp_table.auxc_table,
init_hca->auxc_base,
dev_cap->aux_entry_sz,
dev->caps.num_qps,
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW],
0, 0);
if (err) {
mlx4_err(dev, "Failed to map AUXC context memory, aborting.\n");
goto err_unmap_qp;
}
err = mlx4_init_icm_table(dev, &priv->qp_table.altc_table,
init_hca->altc_base,
dev_cap->altc_entry_sz,
dev->caps.num_qps,
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW],
0, 0);
if (err) {
mlx4_err(dev, "Failed to map ALTC context memory, aborting.\n");
goto err_unmap_auxc;
}
err = mlx4_init_icm_table(dev, &priv->qp_table.rdmarc_table,
init_hca->rdmarc_base,
dev_cap->rdmarc_entry_sz << priv->qp_table.rdmarc_shift,
dev->caps.num_qps,
dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW],
0, 0);
if (err) {
mlx4_err(dev, "Failed to map RDMARC context memory, aborting\n");
goto err_unmap_altc;
}
err = mlx4_init_icm_table(dev, &priv->cq_table.table,
init_hca->cqc_base,
dev_cap->cqc_entry_sz,
dev->caps.num_cqs,
dev->caps.reserved_cqs, 0, 0);
if (err) {
mlx4_err(dev, "Failed to map CQ context memory, aborting.\n");
goto err_unmap_rdmarc;
}
err = mlx4_init_icm_table(dev, &priv->srq_table.table,
init_hca->srqc_base,
dev_cap->srq_entry_sz,
dev->caps.num_srqs,
dev->caps.reserved_srqs, 0, 0);
if (err) {
mlx4_err(dev, "Failed to map SRQ context memory, aborting.\n");
goto err_unmap_cq;
}
/*
* For flow steering device managed mode it is required to use
* mlx4_init_icm_table. For B0 steering mode it's not strictly
* required, but for simplicity just map the whole multicast
* group table now. The table isn't very big and it's a lot
* easier than trying to track ref counts.
*/
err = mlx4_init_icm_table(dev, &priv->mcg_table.table,
init_hca->mc_base,
mlx4_get_mgm_entry_size(dev),
dev->caps.num_mgms + dev->caps.num_amgms,
dev->caps.num_mgms + dev->caps.num_amgms,
0, 0);
if (err) {
mlx4_err(dev, "Failed to map MCG context memory, aborting.\n");
goto err_unmap_srq;
}
return 0;
err_unmap_srq:
mlx4_cleanup_icm_table(dev, &priv->srq_table.table);
err_unmap_cq:
mlx4_cleanup_icm_table(dev, &priv->cq_table.table);
err_unmap_rdmarc:
mlx4_cleanup_icm_table(dev, &priv->qp_table.rdmarc_table);
err_unmap_altc:
mlx4_cleanup_icm_table(dev, &priv->qp_table.altc_table);
err_unmap_auxc:
mlx4_cleanup_icm_table(dev, &priv->qp_table.auxc_table);
err_unmap_qp:
mlx4_cleanup_icm_table(dev, &priv->qp_table.qp_table);
err_unmap_dmpt:
mlx4_cleanup_icm_table(dev, &priv->mr_table.dmpt_table);
err_unmap_mtt:
mlx4_cleanup_icm_table(dev, &priv->mr_table.mtt_table);
err_unmap_eq:
mlx4_cleanup_icm_table(dev, &priv->eq_table.table);
err_unmap_cmpt:
mlx4_cleanup_icm_table(dev, &priv->eq_table.cmpt_table);
mlx4_cleanup_icm_table(dev, &priv->cq_table.cmpt_table);
mlx4_cleanup_icm_table(dev, &priv->srq_table.cmpt_table);
mlx4_cleanup_icm_table(dev, &priv->qp_table.cmpt_table);
err_unmap_aux:
unmap_flag = mlx4_UNMAP_ICM_AUX(dev);
if (unmap_flag)
pr_warn("mlx4_core: mlx4_UNMAP_ICM_AUX failed.\n");
err_free_aux:
if (!unmap_flag)
mlx4_free_icm(dev, priv->fw.aux_icm, 0);
return err;
}
static void mlx4_free_icms(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
mlx4_cleanup_icm_table(dev, &priv->mcg_table.table);
mlx4_cleanup_icm_table(dev, &priv->srq_table.table);
mlx4_cleanup_icm_table(dev, &priv->cq_table.table);
mlx4_cleanup_icm_table(dev, &priv->qp_table.rdmarc_table);
mlx4_cleanup_icm_table(dev, &priv->qp_table.altc_table);
mlx4_cleanup_icm_table(dev, &priv->qp_table.auxc_table);
mlx4_cleanup_icm_table(dev, &priv->qp_table.qp_table);
mlx4_cleanup_icm_table(dev, &priv->mr_table.dmpt_table);
mlx4_cleanup_icm_table(dev, &priv->mr_table.mtt_table);
mlx4_cleanup_icm_table(dev, &priv->eq_table.table);
mlx4_cleanup_icm_table(dev, &priv->eq_table.cmpt_table);
mlx4_cleanup_icm_table(dev, &priv->cq_table.cmpt_table);
mlx4_cleanup_icm_table(dev, &priv->srq_table.cmpt_table);
mlx4_cleanup_icm_table(dev, &priv->qp_table.cmpt_table);
if (!mlx4_UNMAP_ICM_AUX(dev))
mlx4_free_icm(dev, priv->fw.aux_icm, 0);
else
pr_warn("mlx4_core: mlx4_UNMAP_ICM_AUX failed.\n");
}
static void mlx4_slave_exit(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
mutex_lock(&priv->cmd.slave_cmd_mutex);
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0, MLX4_COMM_TIME))
mlx4_warn(dev, "Failed to close slave function.\n");
mutex_unlock(&priv->cmd.slave_cmd_mutex);
}
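/*
 * BlueFlame registers live in BAR 2 immediately after the UAR pages;
 * map the remainder of the BAR write-combining so doorbell payloads
 * can be posted with WC stores.
 */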
static int map_bf_area(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
resource_size_t bf_start;
resource_size_t bf_len;
int err = 0;
if (!dev->caps.bf_reg_size)
return -ENXIO;
bf_start = pci_resource_start(dev->pdev, 2) +
(dev->caps.num_uars << PAGE_SHIFT);
bf_len = pci_resource_len(dev->pdev, 2) -
(dev->caps.num_uars << PAGE_SHIFT);
priv->bf_mapping = io_mapping_create_wc(bf_start, bf_len);
if (!priv->bf_mapping)
err = -ENOMEM;
return err;
}
static void unmap_bf_area(struct mlx4_dev *dev)
{
if (mlx4_priv(dev)->bf_mapping)
io_mapping_free(mlx4_priv(dev)->bf_mapping);
}
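/*
 * The internal clock is a free-running 64-bit counter read as two
 * 32-bit MMIO words. Re-reading the high word detects a low-word
 * wraparound during the read; retry a bounded number of times until
 * both high-word samples agree.
 */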
int mlx4_read_clock(struct mlx4_dev *dev)
{
u32 clockhi, clocklo, clockhi1;
cycle_t cycles;
int i;
struct mlx4_priv *priv = mlx4_priv(dev);
if (!priv->clock_mapping)
return -ENOTSUPP;
for (i = 0; i < 10; i++) {
clockhi = swab32(readl(priv->clock_mapping));
clocklo = swab32(readl(priv->clock_mapping + 4));
clockhi1 = swab32(readl(priv->clock_mapping));
if (clockhi == clockhi1)
break;
}
cycles = (u64) clockhi << 32 | (u64) clocklo;
return cycles;
}
EXPORT_SYMBOL_GPL(mlx4_read_clock);
static int map_internal_clock(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
priv->clock_mapping = ioremap(pci_resource_start(dev->pdev,
priv->fw.clock_bar) +
priv->fw.clock_offset, MLX4_CLOCK_SIZE);
if (!priv->clock_mapping)
return -ENOMEM;
return 0;
}
int mlx4_get_internal_clock_params(struct mlx4_dev *dev,
struct mlx4_clock_params *params)
{
struct mlx4_priv *priv = mlx4_priv(dev);
if (mlx4_is_slave(dev))
return -ENOTSUPP;
if (!params)
return -EINVAL;
params->bar = priv->fw.clock_bar;
params->offset = priv->fw.clock_offset;
params->size = MLX4_CLOCK_SIZE;
return 0;
}
EXPORT_SYMBOL_GPL(mlx4_get_internal_clock_params);
static void unmap_internal_clock(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
if (priv->clock_mapping)
iounmap(priv->clock_mapping);
}
static void mlx4_close_hca(struct mlx4_dev *dev)
{
unmap_internal_clock(dev);
unmap_bf_area(dev);
if (mlx4_is_slave(dev)) {
mlx4_slave_exit(dev);
} else {
mlx4_CLOSE_HCA(dev, 0);
mlx4_free_icms(dev);
if (!mlx4_UNMAP_FA(dev))
mlx4_free_icm(dev, mlx4_priv(dev)->fw.fw_icm, 0);
else
pr_warn("mlx4_core: mlx4_UNMAP_FA failed.\n");
}
}
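/*
 * Slave initialization handshake over the comm channel: reset the
 * channel (retrying if the slave is in the middle of an FLR), verify
 * that slave and master agree on the channel interface revision, then
 * hand the master the VHCR DMA address 16 bits at a time via the
 * VHCR0..VHCR2 commands and activate it with VHCR_EN.
 */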
static int mlx4_init_slave(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
u64 dma = (u64) priv->mfunc.vhcr_dma;
int num_of_reset_retries = NUM_OF_RESET_RETRIES;
int ret_from_reset = 0;
u32 slave_read;
u32 cmd_channel_ver;
mutex_lock(&priv->cmd.slave_cmd_mutex);
priv->cmd.max_cmds = 1;
mlx4_warn(dev, "Sending reset\n");
ret_from_reset = mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0,
MLX4_COMM_TIME);
/* If we are in the middle of an FLR, the slave will retry
* NUM_OF_RESET_RETRIES times before giving up. */
if (ret_from_reset) {
if (MLX4_DELAY_RESET_SLAVE == ret_from_reset) {
msleep(SLEEP_TIME_IN_RESET);
while (ret_from_reset && num_of_reset_retries) {
mlx4_warn(dev, "slave is currently in the"
"middle of FLR. retrying..."
"(try num:%d)\n",
(NUM_OF_RESET_RETRIES -
num_of_reset_retries + 1));
ret_from_reset =
mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET,
0, MLX4_COMM_TIME);
num_of_reset_retries = num_of_reset_retries - 1;
}
} else
goto err;
}
/* check the driver version - the slave I/F revision
* must match the master's */
slave_read = swab32(readl(&priv->mfunc.comm->slave_read));
cmd_channel_ver = mlx4_comm_get_version();
if (MLX4_COMM_GET_IF_REV(cmd_channel_ver) !=
MLX4_COMM_GET_IF_REV(slave_read)) {
mlx4_err(dev, "slave driver version is not supported"
" by the master\n");
goto err;
}
mlx4_warn(dev, "Sending vhcr0\n");
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR0, dma >> 48,
MLX4_COMM_TIME))
goto err;
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR1, dma >> 32,
MLX4_COMM_TIME))
goto err;
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR2, dma >> 16,
MLX4_COMM_TIME))
goto err;
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR_EN, dma, MLX4_COMM_TIME))
goto err;
mutex_unlock(&priv->cmd.slave_cmd_mutex);
return 0;
err:
mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0, 0);
mutex_unlock(&priv->cmd.slave_cmd_mutex);
return -EIO;
}
static void mlx4_parav_master_pf_caps(struct mlx4_dev *dev)
{
int i;
for (i = 1; i <= dev->caps.num_ports; i++) {
if (dev->caps.port_type[i] == MLX4_PORT_TYPE_ETH)
dev->caps.gid_table_len[i] =
mlx4_get_slave_num_gids(dev, 0);
else
dev->caps.gid_table_len[i] = 1;
dev->caps.pkey_table_len[i] =
dev->phys_caps.pkey_phys_table_len[i] - 1;
}
}
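/*
 * An MGM entry of 2^i bytes is divided into 16-byte units; the first
 * two units appear to hold the entry header, and each remaining unit
 * holds four 4-byte QPNs, hence the 4 * ((1 << i) / 16 - 2) capacity
 * used here and in slave_adjust_steering_mode().
 */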
static int choose_log_fs_mgm_entry_size(int qp_per_entry)
{
int i = MLX4_MIN_MGM_LOG_ENTRY_SIZE;
for (i = MLX4_MIN_MGM_LOG_ENTRY_SIZE; i <= MLX4_MAX_MGM_LOG_ENTRY_SIZE;
i++) {
if (qp_per_entry <= 4 * ((1 << i) / 16 - 2))
break;
}
return (i <= MLX4_MAX_MGM_LOG_ENTRY_SIZE) ? i : -1;
}
static void choose_steering_mode(struct mlx4_dev *dev,
struct mlx4_dev_cap *dev_cap)
{
int nvfs;
mlx4_get_val(num_vfs.dbdf2val.tbl, pci_physfn(dev->pdev), 0, &nvfs);
if (high_rate_steer && !mlx4_is_mfunc(dev)) {
dev->caps.flags &= ~(MLX4_DEV_CAP_FLAG_VEP_MC_STEER |
MLX4_DEV_CAP_FLAG_VEP_UC_STEER);
dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_FS_EN;
}
if (mlx4_log_num_mgm_entry_size == -1 &&
dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_FS_EN &&
(!mlx4_is_mfunc(dev) ||
(dev_cap->fs_max_num_qp_per_entry >= (nvfs + 1))) &&
choose_log_fs_mgm_entry_size(dev_cap->fs_max_num_qp_per_entry) >=
MLX4_MIN_MGM_LOG_ENTRY_SIZE) {
dev->oper_log_mgm_entry_size =
choose_log_fs_mgm_entry_size(dev_cap->fs_max_num_qp_per_entry);
dev->caps.steering_mode = MLX4_STEERING_MODE_DEVICE_MANAGED;
dev->caps.num_qp_per_mgm = dev_cap->fs_max_num_qp_per_entry;
} else {
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER &&
dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER)
dev->caps.steering_mode = MLX4_STEERING_MODE_B0;
else {
dev->caps.steering_mode = MLX4_STEERING_MODE_A0;
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER ||
dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER)
mlx4_warn(dev, "Must have both UC_STEER and MC_STEER flags "
"set to use B0 steering. Falling back to A0 steering mode.\n");
}
dev->oper_log_mgm_entry_size =
mlx4_log_num_mgm_entry_size > 0 ?
mlx4_log_num_mgm_entry_size :
MLX4_DEFAULT_MGM_LOG_ENTRY_SIZE;
dev->caps.num_qp_per_mgm = mlx4_get_qp_per_mgm(dev);
}
mlx4_dbg(dev, "Steering mode is: %s, oper_log_mgm_entry_size = %d, "
"log_num_mgm_entry_size = %d\n",
mlx4_steering_mode_str(dev->caps.steering_mode),
dev->oper_log_mgm_entry_size, mlx4_log_num_mgm_entry_size);
}
static int mlx4_init_hca(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct mlx4_dev_cap *dev_cap = NULL;
struct mlx4_adapter adapter;
struct mlx4_mod_stat_cfg mlx4_cfg;
struct mlx4_profile profile;
struct mlx4_init_hca_param init_hca;
u64 icm_size;
int err;
if (!mlx4_is_slave(dev)) {
err = mlx4_QUERY_FW(dev);
if (err) {
if (err == -EACCES)
mlx4_info(dev, "non-primary physical function, skipping.\n");
else
mlx4_err(dev, "QUERY_FW command failed, aborting.\n");
return err;
}
err = mlx4_load_fw(dev);
if (err) {
mlx4_err(dev, "Failed to start FW, aborting.\n");
return err;
}
mlx4_cfg.log_pg_sz_m = 1;
mlx4_cfg.log_pg_sz = 0;
err = mlx4_MOD_STAT_CFG(dev, &mlx4_cfg);
if (err)
mlx4_warn(dev, "Failed to override log_pg_sz parameter\n");
dev_cap = kzalloc(sizeof *dev_cap, GFP_KERNEL);
if (!dev_cap) {
mlx4_err(dev, "Failed to allocate memory for dev_cap\n");
err = -ENOMEM;
goto err_stop_fw;
}
err = mlx4_dev_cap(dev, dev_cap);
if (err) {
mlx4_err(dev, "QUERY_DEV_CAP command failed, aborting.\n");
goto err_stop_fw;
}
choose_steering_mode(dev, dev_cap);
if (mlx4_is_master(dev))
mlx4_parav_master_pf_caps(dev);
process_mod_param_profile(&profile);
if (dev->caps.steering_mode ==
MLX4_STEERING_MODE_DEVICE_MANAGED)
profile.num_mcg = MLX4_FS_NUM_MCG;
icm_size = mlx4_make_profile(dev, &profile, dev_cap,
&init_hca);
if ((long long) icm_size < 0) {
err = icm_size;
goto err_stop_fw;
}
dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1;
init_hca.log_uar_sz = ilog2(dev->caps.num_uars);
init_hca.uar_page_sz = PAGE_SHIFT - 12;
err = mlx4_init_icm(dev, dev_cap, &init_hca, icm_size);
if (err)
goto err_stop_fw;
init_hca.mw_enable = 1;
err = mlx4_INIT_HCA(dev, &init_hca);
if (err) {
mlx4_err(dev, "INIT_HCA command failed, aborting.\n");
goto err_free_icm;
}
/*
* Read HCA frequency by QUERY_HCA command
*/
if (dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS) {
memset(&init_hca, 0, sizeof(init_hca));
err = mlx4_QUERY_HCA(dev, &init_hca);
if (err) {
mlx4_err(dev, "QUERY_HCA command failed, disable timestamp.\n");
dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS;
} else {
dev->caps.hca_core_clock =
init_hca.hca_core_clock;
}
/* In case we got HCA frequency 0 - disable timestamping
* to avoid dividing by zero
*/
if (!dev->caps.hca_core_clock) {
dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS;
mlx4_err(dev, "HCA frequency is 0. Timestamping is not supported.");
} else if (map_internal_clock(dev)) {
/* Map internal clock,
* in case of failure disable timestamping
*/
dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS;
mlx4_err(dev, "Failed to map internal clock. Timestamping is not supported.\n");
}
}
} else {
err = mlx4_init_slave(dev);
if (err) {
mlx4_err(dev, "Failed to initialize slave\n");
return err;
}
err = mlx4_slave_cap(dev);
if (err) {
mlx4_err(dev, "Failed to obtain slave caps\n");
goto err_close;
}
}
if (map_bf_area(dev))
mlx4_dbg(dev, "Failed to map blue flame area\n");
/* Only the master sets the port mask; all other functions get it from the master. */
if (!mlx4_is_slave(dev))
mlx4_set_port_mask(dev);
err = mlx4_QUERY_ADAPTER(dev, &adapter);
if (err) {
mlx4_err(dev, "QUERY_ADAPTER command failed, aborting.\n");
goto unmap_bf;
}
priv->eq_table.inta_pin = adapter.inta_pin;
memcpy(dev->board_id, adapter.board_id, sizeof dev->board_id);
memcpy(dev->vsd, adapter.vsd, sizeof(dev->vsd));
dev->vsd_vendor_id = adapter.vsd_vendor_id;
if (!mlx4_is_slave(dev))
kfree(dev_cap);
return 0;
unmap_bf:
if (!mlx4_is_slave(dev))
unmap_internal_clock(dev);
unmap_bf_area(dev);
if (mlx4_is_slave(dev)) {
kfree(dev->caps.qp0_tunnel);
kfree(dev->caps.qp0_proxy);
kfree(dev->caps.qp1_tunnel);
kfree(dev->caps.qp1_proxy);
}
err_close:
if (mlx4_is_slave(dev))
mlx4_slave_exit(dev);
else
mlx4_CLOSE_HCA(dev, 0);
err_free_icm:
if (!mlx4_is_slave(dev))
mlx4_free_icms(dev);
err_stop_fw:
if (!mlx4_is_slave(dev)) {
if (!mlx4_UNMAP_FA(dev))
mlx4_free_icm(dev, priv->fw.fw_icm, 0);
else
pr_warn("mlx4_core: mlx4_UNMAP_FA failed.\n");
kfree(dev_cap);
}
return err;
}
static int mlx4_init_counters_table(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int nent_pow2, port_indx, vf_index, num_counters;
int res, index = 0;
struct counter_index *new_counter_index;
if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_COUNTERS))
return -ENOENT;
if (!mlx4_is_slave(dev) &&
dev->caps.max_counters == dev->caps.max_extended_counters) {
res = mlx4_cmd(dev, MLX4_IF_STATE_EXTENDED, 0, 0,
MLX4_CMD_SET_IF_STAT,
MLX4_CMD_TIME_CLASS_A, MLX4_CMD_NATIVE);
if (res) {
mlx4_err(dev, "Failed to set extended counters (err=%d)\n", res);
return res;
}
}
mutex_init(&priv->counters_table.mutex);
if (mlx4_is_slave(dev)) {
for (port_indx = 0; port_indx < dev->caps.num_ports; port_indx++) {
INIT_LIST_HEAD(&priv->counters_table.global_port_list[port_indx]);
if (dev->caps.def_counter_index[port_indx] != 0xFF) {
new_counter_index = kmalloc(sizeof(struct counter_index), GFP_KERNEL);
if (!new_counter_index)
return -ENOMEM;
new_counter_index->index = dev->caps.def_counter_index[port_indx];
list_add_tail(&new_counter_index->list, &priv->counters_table.global_port_list[port_indx]);
}
}
mlx4_dbg(dev, "%s: slave allocated %d counters for %d ports\n",
__func__, dev->caps.num_ports, dev->caps.num_ports);
return 0;
}
nent_pow2 = roundup_pow_of_two(dev->caps.max_counters);
for (port_indx = 0; port_indx < dev->caps.num_ports; port_indx++) {
INIT_LIST_HEAD(&priv->counters_table.global_port_list[port_indx]);
/* allocating 2 counters per port for PFs */
/* For the PF, the ETH default counters are 0,2; */
/* and the RoCE default counters are 1,3 */
for (num_counters = 0; num_counters < 2; num_counters++, index++) {
new_counter_index = kmalloc(sizeof(struct counter_index), GFP_KERNEL);
if (!new_counter_index)
return -ENOMEM;
new_counter_index->index = index;
list_add_tail(&new_counter_index->list,
&priv->counters_table.global_port_list[port_indx]);
}
}
if (mlx4_is_master(dev)) {
for (vf_index = 0; vf_index < dev->num_vfs; vf_index++) {
for (port_indx = 0; port_indx < dev->caps.num_ports; port_indx++) {
INIT_LIST_HEAD(&priv->counters_table.vf_list[vf_index][port_indx]);
new_counter_index = kmalloc(sizeof(struct counter_index), GFP_KERNEL);
if (!new_counter_index)
return -ENOMEM;
if (index < nent_pow2 - 2) {
new_counter_index->index = index;
index++;
} else {
new_counter_index->index = MLX4_SINK_COUNTER_INDEX;
}
list_add_tail(&new_counter_index->list,
&priv->counters_table.vf_list[vf_index][port_indx]);
}
}
res = mlx4_bitmap_init(&priv->counters_table.bitmap,
nent_pow2, nent_pow2 - 1,
index, 1);
mlx4_dbg(dev, "%s: master allocated %d counters for %d VFs\n",
__func__, index, dev->num_vfs);
} else {
res = mlx4_bitmap_init(&priv->counters_table.bitmap,
nent_pow2, nent_pow2 - 1,
index, 1);
mlx4_dbg(dev, "%s: native allocated %d counters for %d ports\n",
__func__, index, dev->caps.num_ports);
}
return 0;
}
static void mlx4_cleanup_counters_table(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int i, j;
struct counter_index *port, *tmp_port;
struct counter_index *vf, *tmp_vf;
mutex_lock(&priv->counters_table.mutex);
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_COUNTERS) {
for (i = 0; i < dev->caps.num_ports; i++) {
list_for_each_entry_safe(port, tmp_port,
&priv->counters_table.global_port_list[i],
list) {
list_del(&port->list);
kfree(port);
}
}
if (!mlx4_is_slave(dev)) {
for (i = 0; i < dev->num_vfs; i++) {
for (j = 0; j < dev->caps.num_ports; j++) {
list_for_each_entry_safe(vf, tmp_vf,
&priv->counters_table.vf_list[i][j],
list) {
/* clear the counter statistic */
if (__mlx4_clear_if_stat(dev, vf->index))
mlx4_dbg(dev, "%s: reset counter %d failed\n",
__func__, vf->index);
list_del(&vf->list);
kfree(vf);
}
}
}
mlx4_bitmap_cleanup(&priv->counters_table.bitmap);
}
}
mutex_unlock(&priv->counters_table.mutex);
}
int __mlx4_slave_counters_free(struct mlx4_dev *dev, int slave)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int i, first;
struct counter_index *vf, *tmp_vf;
/* Clean the VF's counters for the next usage */
if (slave > 0 && slave <= dev->num_vfs) {
mlx4_dbg(dev, "%s: free counters of slave(%d)\n"
, __func__, slave);
mutex_lock(&priv->counters_table.mutex);
for (i = 0; i < dev->caps.num_ports; i++) {
first = 0;
list_for_each_entry_safe(vf, tmp_vf,
&priv->counters_table.vf_list[slave - 1][i],
list) {
/* clear the counter statistic */
if (__mlx4_clear_if_stat(dev, vf->index))
mlx4_dbg(dev, "%s: reset counter %d failed\n",
__func__, vf->index);
if (first++ && vf->index != MLX4_SINK_COUNTER_INDEX) {
mlx4_dbg(dev, "%s: delete counter index %d for slave %d and port %d\n"
, __func__, vf->index, slave, i + 1);
mlx4_bitmap_free(&priv->counters_table.bitmap, vf->index, MLX4_USE_RR);
list_del(&vf->list);
kfree(vf);
} else {
mlx4_dbg(dev, "%s: can't delete default counter index %d for slave %d and port %d\n"
, __func__, vf->index, slave, i + 1);
}
}
}
mutex_unlock(&priv->counters_table.mutex);
}
return 0;
}
int __mlx4_counter_alloc(struct mlx4_dev *dev, int slave, int port, u32 *idx)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct counter_index *new_counter_index;
if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_COUNTERS))
return -ENOENT;
if ((slave > MLX4_MAX_NUM_VF) || (slave < 0) ||
(port < 0) || (port > MLX4_MAX_PORTS)) {
mlx4_dbg(dev, "%s: invalid slave(%d) or port(%d) index\n",
__func__, slave, port);
return -EINVAL;
}
/* Handle old guests whose requests do not carry a port index */
if (port == 0) {
*idx = MLX4_SINK_COUNTER_INDEX;
mlx4_dbg(dev, "%s: allocated default counter index %d for slave %d port %d\n"
, __func__, *idx, slave, port);
return 0;
}
mutex_lock(&priv->counters_table.mutex);
*idx = mlx4_bitmap_alloc(&priv->counters_table.bitmap);
/* If no resources are left, return the default counter of the slave and port */
if (*idx == -1) {
if (slave == 0) { /* native or master */
new_counter_index = list_entry(priv->counters_table.global_port_list[port - 1].next,
struct counter_index,
list);
} else {
new_counter_index = list_entry(priv->counters_table.vf_list[slave - 1][port - 1].next,
struct counter_index,
list);
}
*idx = new_counter_index->index;
mlx4_dbg(dev, "%s: allocated defualt counter index %d for slave %d port %d\n"
, __func__, *idx, slave, port);
goto out;
}
if (slave == 0) { /* native or master */
new_counter_index = kmalloc(sizeof(struct counter_index), GFP_KERNEL);
if (!new_counter_index)
goto no_mem;
new_counter_index->index = *idx;
list_add_tail(&new_counter_index->list, &priv->counters_table.global_port_list[port - 1]);
} else {
new_counter_index = kmalloc(sizeof(struct counter_index), GFP_KERNEL);
if (!new_counter_index)
goto no_mem;
new_counter_index->index = *idx;
list_add_tail(&new_counter_index->list, &priv->counters_table.vf_list[slave - 1][port - 1]);
}
mlx4_dbg(dev, "%s: allocated counter index %d for slave %d port %d\n"
, __func__, *idx, slave, port);
out:
mutex_unlock(&priv->counters_table.mutex);
return 0;
no_mem:
mlx4_bitmap_free(&priv->counters_table.bitmap, *idx, MLX4_USE_RR);
mutex_unlock(&priv->counters_table.mutex);
*idx = MLX4_SINK_COUNTER_INDEX;
mlx4_dbg(dev, "%s: failed err (%d)\n"
, __func__, -ENOMEM);
return -ENOMEM;
}
int mlx4_counter_alloc(struct mlx4_dev *dev, u8 port, u32 *idx)
{
u64 out_param;
int err;
struct mlx4_priv *priv = mlx4_priv(dev);
struct counter_index *new_counter_index, *c_index;
if (mlx4_is_mfunc(dev)) {
err = mlx4_cmd_imm(dev, 0, &out_param,
((u32) port) << 8 | (u32) RES_COUNTER,
RES_OP_RESERVE, MLX4_CMD_ALLOC_RES,
MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
if (!err) {
*idx = get_param_l(&out_param);
if (*idx == MLX4_SINK_COUNTER_INDEX)
return -ENOSPC;
mutex_lock(&priv->counters_table.mutex);
c_index = list_entry(priv->counters_table.global_port_list[port - 1].next,
struct counter_index,
list);
mutex_unlock(&priv->counters_table.mutex);
if (c_index->index == *idx)
return -EEXIST;
if (mlx4_is_slave(dev)) {
new_counter_index = kmalloc(sizeof(struct counter_index), GFP_KERNEL);
if (!new_counter_index) {
mlx4_counter_free(dev, port, *idx);
return -ENOMEM;
}
new_counter_index->index = *idx;
mutex_lock(&priv->counters_table.mutex);
list_add_tail(&new_counter_index->list, &priv->counters_table.global_port_list[port - 1]);
mutex_unlock(&priv->counters_table.mutex);
mlx4_dbg(dev, "%s: allocated counter index %d for port %d\n"
, __func__, *idx, port);
}
}
return err;
}
return __mlx4_counter_alloc(dev, 0, port, idx);
}
EXPORT_SYMBOL_GPL(mlx4_counter_alloc);
void __mlx4_counter_free(struct mlx4_dev *dev, int slave, int port, u32 idx)
{
/* Check whether this is a native or slave counter and delete accordingly */
struct mlx4_priv *priv = mlx4_priv(dev);
struct counter_index *pf, *tmp_pf;
struct counter_index *vf, *tmp_vf;
int first;
if (idx == MLX4_SINK_COUNTER_INDEX) {
mlx4_dbg(dev, "%s: try to delete default counter index %d for port %d\n"
, __func__, idx, port);
return;
}
if ((slave > MLX4_MAX_NUM_VF) || (slave < 0) ||
(port < 0) || (port > MLX4_MAX_PORTS)) {
mlx4_warn(dev, "%s: deletion failed due to invalid slave(%d) or port(%d) index\n"
, __func__, slave, port);
return;
}
mutex_lock(&priv->counters_table.mutex);
if (slave == 0) {
first = 0;
list_for_each_entry_safe(pf, tmp_pf,
&priv->counters_table.global_port_list[port - 1],
list) {
/* the first 2 counters are reserved */
if (pf->index == idx) {
/* clear the counter statistic */
if (__mlx4_clear_if_stat(dev, pf->index))
mlx4_dbg(dev, "%s: reset counter %d failed\n",
__func__, pf->index);
if (first > 1 && idx != MLX4_SINK_COUNTER_INDEX) {
list_del(&pf->list);
kfree(pf);
mlx4_dbg(dev, "%s: delete counter index %d for native device (%d) port %d\n"
, __func__, idx, slave, port);
mlx4_bitmap_free(&priv->counters_table.bitmap, idx, MLX4_USE_RR);
goto out;
} else {
mlx4_dbg(dev, "%s: can't delete default counter index %d for native device (%d) port %d\n"
, __func__, idx, slave, port);
goto out;
}
}
first++;
}
mlx4_dbg(dev, "%s: can't delete counter index %d for native device (%d) port %d\n"
, __func__, idx, slave, port);
} else {
first = 0;
list_for_each_entry_safe(vf, tmp_vf,
&priv->counters_table.vf_list[slave - 1][port - 1],
list) {
/* the first element is reserved */
if (vf->index == idx) {
/* clear the counter statistic */
if (__mlx4_clear_if_stat(dev, vf->index))
mlx4_dbg(dev, "%s: reset counter %d failed\n",
__func__, vf->index);
if (first) {
list_del(&vf->list);
kfree(vf);
mlx4_dbg(dev, "%s: delete counter index %d for slave %d port %d\n",
__func__, idx, slave, port);
mlx4_bitmap_free(&priv->counters_table.bitmap, idx, MLX4_USE_RR);
goto out;
} else {
mlx4_dbg(dev, "%s: can't delete default slave (%d) counter index %d for port %d\n"
, __func__, slave, idx, port);
goto out;
}
}
first++;
}
mlx4_dbg(dev, "%s: can't delete slave (%d) counter index %d for port %d\n"
, __func__, slave, idx, port);
}
out:
mutex_unlock(&priv->counters_table.mutex);
}
void mlx4_counter_free(struct mlx4_dev *dev, u8 port, u32 idx)
{
u64 in_param = 0;
struct mlx4_priv *priv = mlx4_priv(dev);
struct counter_index *counter, *tmp_counter;
int first = 0;
if (mlx4_is_mfunc(dev)) {
set_param_l(&in_param, idx);
mlx4_cmd(dev, in_param,
((u32) port) << 8 | (u32) RES_COUNTER,
RES_OP_RESERVE,
MLX4_CMD_FREE_RES, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_WRAPPED);
if (mlx4_is_slave(dev) && idx != MLX4_SINK_COUNTER_INDEX) {
mutex_lock(&priv->counters_table.mutex);
list_for_each_entry_safe(counter, tmp_counter,
&priv->counters_table.global_port_list[port - 1],
list) {
if (counter->index == idx && first++) {
list_del(&counter->list);
kfree(counter);
mlx4_dbg(dev, "%s: delete counter index %d for port %d\n"
, __func__, idx, port);
mutex_unlock(&priv->counters_table.mutex);
return;
}
}
mutex_unlock(&priv->counters_table.mutex);
}
return;
}
__mlx4_counter_free(dev, 0, port, idx);
}
EXPORT_SYMBOL_GPL(mlx4_counter_free);
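/*
 * Clearing a counter is done by issuing QUERY_IF_STAT with bit 31 of
 * the input modifier set (the same clear-on-read bit that
 * mlx4_get_vport_ethtool_stats() sets when a reset is requested); the
 * returned mailbox contents are discarded.
 */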
int __mlx4_clear_if_stat(struct mlx4_dev *dev,
u8 counter_index)
{
struct mlx4_cmd_mailbox *if_stat_mailbox = NULL;
int err = 0;
u32 if_stat_in_mod = (counter_index & 0xff) | (1 << 31);
if (counter_index == MLX4_SINK_COUNTER_INDEX)
return -EINVAL;
if (mlx4_is_slave(dev))
return 0;
if_stat_mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(if_stat_mailbox)) {
err = PTR_ERR(if_stat_mailbox);
return err;
}
err = mlx4_cmd_box(dev, 0, if_stat_mailbox->dma, if_stat_in_mod, 0,
MLX4_CMD_QUERY_IF_STAT, MLX4_CMD_TIME_CLASS_C,
MLX4_CMD_NATIVE);
mlx4_free_cmd_mailbox(dev, if_stat_mailbox);
return err;
}
u8 mlx4_get_default_counter_index(struct mlx4_dev *dev, int slave, int port)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct counter_index *new_counter_index;
if (dev->caps.port_type[port] == MLX4_PORT_TYPE_IB) {
mlx4_dbg(dev, "%s: return counter index %d for slave %d port (MLX4_PORT_TYPE_IB) %d\n",
__func__, MLX4_SINK_COUNTER_INDEX, slave, port);
return (u8)MLX4_SINK_COUNTER_INDEX;
}
mutex_lock(&priv->counters_table.mutex);
if (slave == 0) {
new_counter_index = list_entry(priv->counters_table.global_port_list[port - 1].next,
struct counter_index,
list);
} else {
new_counter_index = list_entry(priv->counters_table.vf_list[slave - 1][port - 1].next,
struct counter_index,
list);
}
mutex_unlock(&priv->counters_table.mutex);
mlx4_dbg(dev, "%s: return counter index %d for slave %d port %d\n",
__func__, new_counter_index->index, slave, port);
return (u8)new_counter_index->index;
}
int mlx4_get_vport_ethtool_stats(struct mlx4_dev *dev, int port,
struct mlx4_en_vport_stats *vport_stats,
int reset)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct mlx4_cmd_mailbox *if_stat_mailbox = NULL;
union mlx4_counter *counter;
int err = 0;
u32 if_stat_in_mod;
struct counter_index *vport, *tmp_vport;
if (!vport_stats)
return -EINVAL;
if_stat_mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(if_stat_mailbox)) {
err = PTR_ERR(if_stat_mailbox);
return err;
}
mutex_lock(&priv->counters_table.mutex);
list_for_each_entry_safe(vport, tmp_vport,
&priv->counters_table.global_port_list[port - 1],
list) {
if (vport->index == MLX4_SINK_COUNTER_INDEX)
continue;
memset(if_stat_mailbox->buf, 0, sizeof(union mlx4_counter));
if_stat_in_mod = (vport->index & 0xff) | ((reset & 1) << 31);
err = mlx4_cmd_box(dev, 0, if_stat_mailbox->dma,
if_stat_in_mod, 0,
MLX4_CMD_QUERY_IF_STAT,
MLX4_CMD_TIME_CLASS_C,
MLX4_CMD_NATIVE);
if (err) {
mlx4_dbg(dev, "%s: failed to read statistics for counter index %d\n",
__func__, vport->index);
goto if_stat_out;
}
counter = (union mlx4_counter *)if_stat_mailbox->buf;
if ((counter->control.cnt_mode & 0xf) == 1) {
vport_stats->rx_broadcast_packets += be64_to_cpu(counter->ext.counters[0].IfRxBroadcastFrames);
vport_stats->rx_unicast_packets += be64_to_cpu(counter->ext.counters[0].IfRxUnicastFrames);
vport_stats->rx_multicast_packets += be64_to_cpu(counter->ext.counters[0].IfRxMulticastFrames);
vport_stats->tx_broadcast_packets += be64_to_cpu(counter->ext.counters[0].IfTxBroadcastFrames);
vport_stats->tx_unicast_packets += be64_to_cpu(counter->ext.counters[0].IfTxUnicastFrames);
vport_stats->tx_multicast_packets += be64_to_cpu(counter->ext.counters[0].IfTxMulticastFrames);
vport_stats->rx_broadcast_bytes += be64_to_cpu(counter->ext.counters[0].IfRxBroadcastOctets);
vport_stats->rx_unicast_bytes += be64_to_cpu(counter->ext.counters[0].IfRxUnicastOctets);
vport_stats->rx_multicast_bytes += be64_to_cpu(counter->ext.counters[0].IfRxMulticastOctets);
vport_stats->tx_broadcast_bytes += be64_to_cpu(counter->ext.counters[0].IfTxBroadcastOctets);
vport_stats->tx_unicast_bytes += be64_to_cpu(counter->ext.counters[0].IfTxUnicastOctets);
vport_stats->tx_multicast_bytes += be64_to_cpu(counter->ext.counters[0].IfTxMulticastOctets);
vport_stats->rx_errors += be64_to_cpu(counter->ext.counters[0].IfRxErrorFrames);
vport_stats->rx_dropped += be64_to_cpu(counter->ext.counters[0].IfRxNoBufferFrames);
vport_stats->tx_errors += be64_to_cpu(counter->ext.counters[0].IfTxDroppedFrames);
}
}
if_stat_out:
mutex_unlock(&priv->counters_table.mutex);
mlx4_free_cmd_mailbox(dev, if_stat_mailbox);
return err;
}
EXPORT_SYMBOL_GPL(mlx4_get_vport_ethtool_stats);
static int mlx4_setup_hca(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int err;
int port;
__be32 ib_port_default_caps;
err = mlx4_init_uar_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"user access region table (err=%d), aborting.\n",
err);
return err;
}
err = mlx4_uar_alloc(dev, &priv->driver_uar);
if (err) {
mlx4_err(dev, "Failed to allocate driver access region "
"(err=%d), aborting.\n", err);
goto err_uar_table_free;
}
priv->kar = ioremap((phys_addr_t) priv->driver_uar.pfn << PAGE_SHIFT, PAGE_SIZE);
if (!priv->kar) {
mlx4_err(dev, "Couldn't map kernel access region, "
"aborting.\n");
err = -ENOMEM;
goto err_uar_free;
}
err = mlx4_init_pd_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"protection domain table (err=%d), aborting.\n", err);
goto err_kar_unmap;
}
err = mlx4_init_xrcd_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"reliable connection domain table (err=%d), "
"aborting.\n", err);
goto err_pd_table_free;
}
err = mlx4_init_mr_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"memory region table (err=%d), aborting.\n", err);
goto err_xrcd_table_free;
}
if (!mlx4_is_slave(dev)) {
err = mlx4_init_mcg_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"multicast group table (err=%d), aborting.\n",
err);
goto err_mr_table_free;
}
}
err = mlx4_init_eq_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"event queue table (err=%d), aborting.\n", err);
goto err_mcg_table_free;
}
err = mlx4_cmd_use_events(dev);
if (err) {
mlx4_err(dev, "Failed to switch to event-driven "
"firmware commands (err=%d), aborting.\n", err);
goto err_eq_table_free;
}
err = mlx4_NOP(dev);
if (err) {
if (dev->flags & MLX4_FLAG_MSI_X) {
mlx4_warn(dev, "NOP command failed to generate MSI-X "
"interrupt IRQ %d).\n",
priv->eq_table.eq[dev->caps.num_comp_vectors].irq);
mlx4_warn(dev, "Trying again without MSI-X.\n");
} else {
mlx4_err(dev, "NOP command failed to generate interrupt "
"(IRQ %d), aborting.\n",
priv->eq_table.eq[dev->caps.num_comp_vectors].irq);
mlx4_err(dev, "BIOS or ACPI interrupt routing problem?\n");
}
goto err_cmd_poll;
}
mlx4_dbg(dev, "NOP command IRQ test passed\n");
err = mlx4_init_cq_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"completion queue table (err=%d), aborting.\n", err);
goto err_cmd_poll;
}
err = mlx4_init_srq_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"shared receive queue table (err=%d), aborting.\n",
err);
goto err_cq_table_free;
}
err = mlx4_init_qp_table(dev);
if (err) {
mlx4_err(dev, "Failed to initialize "
"queue pair table (err=%d), aborting.\n", err);
goto err_srq_table_free;
}
err = mlx4_init_counters_table(dev);
if (err && err != -ENOENT) {
mlx4_err(dev, "Failed to initialize counters table (err=%d), "
"aborting.\n", err);
goto err_qp_table_free;
}
if (!mlx4_is_slave(dev)) {
for (port = 1; port <= dev->caps.num_ports; port++) {
ib_port_default_caps = 0;
err = mlx4_get_port_ib_caps(dev, port,
&ib_port_default_caps);
if (err)
mlx4_warn(dev, "failed to get port %d default "
"ib capabilities (%d). Continuing "
"with caps = 0\n", port, err);
dev->caps.ib_port_def_cap[port] = ib_port_default_caps;
/* initialize per-slave default ib port capabilities */
if (mlx4_is_master(dev)) {
int i;
for (i = 0; i < dev->num_slaves; i++) {
if (i == mlx4_master_func_num(dev))
continue;
priv->mfunc.master.slave_state[i].ib_cap_mask[port] =
ib_port_default_caps;
}
}
dev->caps.port_ib_mtu[port] = IB_MTU_4096;
err = mlx4_SET_PORT(dev, port, mlx4_is_master(dev) ?
dev->caps.pkey_table_len[port] : -1);
if (err) {
mlx4_err(dev, "Failed to set port %d (err=%d), "
"aborting\n", port, err);
goto err_counters_table_free;
}
}
}
return 0;
err_counters_table_free:
mlx4_cleanup_counters_table(dev);
err_qp_table_free:
mlx4_cleanup_qp_table(dev);
err_srq_table_free:
mlx4_cleanup_srq_table(dev);
err_cq_table_free:
mlx4_cleanup_cq_table(dev);
err_cmd_poll:
mlx4_cmd_use_polling(dev);
err_eq_table_free:
mlx4_cleanup_eq_table(dev);
err_mcg_table_free:
if (!mlx4_is_slave(dev))
mlx4_cleanup_mcg_table(dev);
err_mr_table_free:
mlx4_cleanup_mr_table(dev);
err_xrcd_table_free:
mlx4_cleanup_xrcd_table(dev);
err_pd_table_free:
mlx4_cleanup_pd_table(dev);
err_kar_unmap:
iounmap(priv->kar);
err_uar_free:
mlx4_uar_free(dev, &priv->driver_uar);
err_uar_table_free:
mlx4_cleanup_uar_table(dev);
return err;
}
static void mlx4_enable_msi_x(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct msix_entry *entries;
int nreq = min_t(int, dev->caps.num_ports *
min_t(int, num_possible_cpus() + 1, MAX_MSIX_P_PORT)
+ MSIX_LEGACY_SZ, MAX_MSIX);
int err;
int i;
if (msi_x) {
nreq = min_t(int, dev->caps.num_eqs - dev->caps.reserved_eqs,
nreq);
if (msi_x > 1 && !mlx4_is_mfunc(dev))
nreq = min_t(int, nreq, msi_x);
entries = kcalloc(nreq, sizeof *entries, GFP_KERNEL);
if (!entries)
goto no_msi;
for (i = 0; i < nreq; ++i)
entries[i].entry = i;
retry:
err = pci_enable_msix(dev->pdev, entries, nreq);
if (err) {
/* Try again if at least 2 vectors are available */
if (err > 1) {
mlx4_info(dev, "Requested %d vectors, "
"but only %d MSI-X vectors available, "
"trying again\n", nreq, err);
nreq = err;
goto retry;
}
kfree(entries);
/* if error, or can't alloc even 1 IRQ */
if (err < 0) {
mlx4_err(dev, "No IRQs left, device can't "
"be started.\n");
goto no_irq;
}
goto no_msi;
}
if (nreq <
MSIX_LEGACY_SZ + dev->caps.num_ports * MIN_MSIX_P_PORT) {
/* Working in legacy mode, all EQs shared */
dev->caps.comp_pool = 0;
dev->caps.num_comp_vectors = nreq - 1;
} else {
dev->caps.comp_pool = nreq - MSIX_LEGACY_SZ;
dev->caps.num_comp_vectors = MSIX_LEGACY_SZ - 1;
}
for (i = 0; i < nreq; ++i)
priv->eq_table.eq[i].irq = entries[i].vector;
dev->flags |= MLX4_FLAG_MSI_X;
kfree(entries);
return;
}
no_msi:
dev->caps.num_comp_vectors = 1;
dev->caps.comp_pool = 0;
for (i = 0; i < 2; ++i)
priv->eq_table.eq[i].irq = dev->pdev->irq;
return;
no_irq:
dev->caps.num_comp_vectors = 0;
dev->caps.comp_pool = 0;
return;
}
static void
mlx4_init_hca_info(struct mlx4_dev *dev)
{
struct mlx4_hca_info *info = &mlx4_priv(dev)->hca_info;
info->dev = dev;
info->firmware_attr = (struct device_attribute)__ATTR(fw_ver, S_IRUGO,
show_firmware_version, NULL);
if (device_create_file(&dev->pdev->dev, &info->firmware_attr))
mlx4_err(dev, "Failed to add file firmware version");
info->hca_attr = (struct device_attribute)__ATTR(hca, S_IRUGO, show_hca,
NULL);
if (device_create_file(&dev->pdev->dev, &info->hca_attr))
mlx4_err(dev, "Failed to add file hca type");
info->board_attr = (struct device_attribute)__ATTR(board_id, S_IRUGO,
show_board, NULL);
if (device_create_file(&dev->pdev->dev, &info->board_attr))
mlx4_err(dev, "Failed to add file board id type");
}
static int mlx4_init_port_info(struct mlx4_dev *dev, int port)
{
struct mlx4_port_info *info = &mlx4_priv(dev)->port[port];
int err = 0;
info->dev = dev;
info->port = port;
if (!mlx4_is_slave(dev)) {
mlx4_init_mac_table(dev, &info->mac_table);
mlx4_init_vlan_table(dev, &info->vlan_table);
info->base_qpn = mlx4_get_base_qpn(dev, port);
}
sprintf(info->dev_name, "mlx4_port%d", port);
info->port_attr.attr.name = info->dev_name;
if (mlx4_is_mfunc(dev))
info->port_attr.attr.mode = S_IRUGO;
else {
info->port_attr.attr.mode = S_IRUGO | S_IWUSR;
info->port_attr.store = set_port_type;
}
info->port_attr.show = show_port_type;
sysfs_attr_init(&info->port_attr.attr);
err = device_create_file(&dev->pdev->dev, &info->port_attr);
if (err) {
mlx4_err(dev, "Failed to create file for port %d\n", port);
info->port = -1;
}
sprintf(info->dev_mtu_name, "mlx4_port%d_mtu", port);
info->port_mtu_attr.attr.name = info->dev_mtu_name;
if (mlx4_is_mfunc(dev))
info->port_mtu_attr.attr.mode = S_IRUGO;
else {
info->port_mtu_attr.attr.mode = S_IRUGO | S_IWUSR;
info->port_mtu_attr.store = set_port_ib_mtu;
}
info->port_mtu_attr.show = show_port_ib_mtu;
sysfs_attr_init(&info->port_mtu_attr.attr);
err = device_create_file(&dev->pdev->dev, &info->port_mtu_attr);
if (err) {
mlx4_err(dev, "Failed to create mtu file for port %d\n", port);
device_remove_file(&info->dev->pdev->dev, &info->port_attr);
info->port = -1;
}
return err;
}
static void
mlx4_cleanup_hca_info(struct mlx4_hca_info *info)
{
device_remove_file(&info->dev->pdev->dev, &info->firmware_attr);
device_remove_file(&info->dev->pdev->dev, &info->board_attr);
device_remove_file(&info->dev->pdev->dev, &info->hca_attr);
}
static void mlx4_cleanup_port_info(struct mlx4_port_info *info)
{
if (info->port < 0)
return;
device_remove_file(&info->dev->pdev->dev, &info->port_attr);
device_remove_file(&info->dev->pdev->dev, &info->port_mtu_attr);
}
static int mlx4_init_steering(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int num_entries = dev->caps.num_ports;
int i, j;
priv->steer = kzalloc(sizeof(struct mlx4_steer) * num_entries, GFP_KERNEL);
if (!priv->steer)
return -ENOMEM;
for (i = 0; i < num_entries; i++)
for (j = 0; j < MLX4_NUM_STEERS; j++) {
INIT_LIST_HEAD(&priv->steer[i].promisc_qps[j]);
INIT_LIST_HEAD(&priv->steer[i].steer_entries[j]);
}
return 0;
}
static void mlx4_clear_steering(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
struct mlx4_steer_index *entry, *tmp_entry;
struct mlx4_promisc_qp *pqp, *tmp_pqp;
int num_entries = dev->caps.num_ports;
int i, j;
for (i = 0; i < num_entries; i++) {
for (j = 0; j < MLX4_NUM_STEERS; j++) {
list_for_each_entry_safe(pqp, tmp_pqp,
&priv->steer[i].promisc_qps[j],
list) {
list_del(&pqp->list);
kfree(pqp);
}
list_for_each_entry_safe(entry, tmp_entry,
&priv->steer[i].steer_entries[j],
list) {
list_del(&entry->list);
list_for_each_entry_safe(pqp, tmp_pqp,
&entry->duplicates,
list) {
list_del(&pqp->list);
kfree(pqp);
}
kfree(entry);
}
}
}
kfree(priv->steer);
}
static int extended_func_num(struct pci_dev *pdev)
{
return PCI_SLOT(pdev->devfn) * 8 + PCI_FUNC(pdev->devfn);
}
#define MLX4_OWNER_BASE 0x8069c
#define MLX4_OWNER_SIZE 4
static int mlx4_get_ownership(struct mlx4_dev *dev)
{
void __iomem *owner;
u32 ret;
if (pci_channel_offline(dev->pdev))
return -EIO;
owner = ioremap(pci_resource_start(dev->pdev, 0) + MLX4_OWNER_BASE,
MLX4_OWNER_SIZE);
if (!owner) {
mlx4_err(dev, "Failed to obtain ownership bit\n");
return -ENOMEM;
}
ret = readl(owner);
iounmap(owner);
return (int) !!ret;
}
static void mlx4_free_ownership(struct mlx4_dev *dev)
{
void __iomem *owner;
if (pci_channel_offline(dev->pdev))
return;
owner = ioremap(pci_resource_start(dev->pdev, 0) + MLX4_OWNER_BASE,
MLX4_OWNER_SIZE);
if (!owner) {
mlx4_err(dev, "Failed to obtain ownership bit\n");
return;
}
writel(0, owner);
msleep(1000);
iounmap(owner);
}
static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)
{
struct mlx4_priv *priv;
struct mlx4_dev *dev;
int err;
int port;
int nvfs, prb_vf;
pr_info(DRV_NAME ": Initializing %s\n", pci_name(pdev));
err = pci_enable_device(pdev);
if (err) {
dev_err(&pdev->dev, "Cannot enable PCI device, "
"aborting.\n");
return err;
}
mlx4_get_val(num_vfs.dbdf2val.tbl, pci_physfn(pdev), 0, &nvfs);
mlx4_get_val(probe_vf.dbdf2val.tbl, pci_physfn(pdev), 0, &prb_vf);
if (nvfs > MLX4_MAX_NUM_VF) {
dev_err(&pdev->dev, "There are more VF's (%d) than allowed(%d)\n",
nvfs, MLX4_MAX_NUM_VF);
return -EINVAL;
}
if (nvfs < 0) {
dev_err(&pdev->dev, "num_vfs module parameter cannot be negative\n");
return -EINVAL;
}
/*
* Check for BARs.
*/
if (!(pci_dev_data & MLX4_PCI_DEV_IS_VF) &&
!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
dev_err(&pdev->dev, "Missing DCS, aborting."
"(driver_data: 0x%x, pci_resource_flags(pdev, 0):0x%x)\n",
pci_dev_data, pci_resource_flags(pdev, 0));
err = -ENODEV;
goto err_disable_pdev;
}
if (!(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
dev_err(&pdev->dev, "Missing UAR, aborting.\n");
err = -ENODEV;
goto err_disable_pdev;
}
err = pci_request_regions(pdev, DRV_NAME);
if (err) {
dev_err(&pdev->dev, "Couldn't get PCI resources, aborting\n");
goto err_disable_pdev;
}
pci_set_master(pdev);
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
if (err) {
dev_warn(&pdev->dev, "Warning: couldn't set 64-bit PCI DMA mask.\n");
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (err) {
dev_err(&pdev->dev, "Can't set PCI DMA mask, aborting.\n");
goto err_release_regions;
}
}
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
if (err) {
dev_warn(&pdev->dev, "Warning: couldn't set 64-bit "
"consistent PCI DMA mask.\n");
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
if (err) {
dev_err(&pdev->dev, "Can't set consistent PCI DMA mask, "
"aborting.\n");
goto err_release_regions;
}
}
/* Allow large DMA segments, up to the firmware limit of 1 GB */
dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
priv = kzalloc(sizeof *priv, GFP_KERNEL);
if (!priv) {
dev_err(&pdev->dev, "Device struct alloc failed, "
"aborting.\n");
err = -ENOMEM;
goto err_release_regions;
}
dev = &priv->dev;
dev->pdev = pdev;
INIT_LIST_HEAD(&priv->dev_list);
INIT_LIST_HEAD(&priv->ctx_list);
spin_lock_init(&priv->ctx_lock);
mutex_init(&priv->port_mutex);
INIT_LIST_HEAD(&priv->pgdir_list);
mutex_init(&priv->pgdir_mutex);
INIT_LIST_HEAD(&priv->bf_list);
mutex_init(&priv->bf_mutex);
dev->rev_id = pdev->revision;
dev->numa_node = dev_to_node(&pdev->dev);
/* Detect if this device is a virtual function */
if (pci_dev_data & MLX4_PCI_DEV_IS_VF) {
/* When acting as pf, we normally skip vfs unless explicitly
* requested to probe them. */
if (nvfs && extended_func_num(pdev) > prb_vf) {
mlx4_warn(dev, "Skipping virtual function:%d\n",
extended_func_num(pdev));
err = -ENODEV;
goto err_free_dev;
}
mlx4_warn(dev, "Detected virtual function - running in slave mode\n");
dev->flags |= MLX4_FLAG_SLAVE;
} else {
/* We reset the device and enable SRIOV only for physical
* devices. Try to claim ownership on the device;
* if already taken, skip -- do not allow multiple PFs */
err = mlx4_get_ownership(dev);
if (err) {
if (err < 0)
goto err_free_dev;
else {
mlx4_warn(dev, "Multiple PFs not yet supported."
" Skipping PF.\n");
err = -EINVAL;
goto err_free_dev;
}
}
if (nvfs) {
mlx4_warn(dev, "Enabling SR-IOV with %d VFs\n", nvfs);
err = pci_enable_sriov(pdev, nvfs);
if (err) {
mlx4_err(dev, "Failed to enable SR-IOV, continuing without SR-IOV (err = %d).\n",
err);
err = 0;
} else {
mlx4_warn(dev, "Running in master mode\n");
dev->flags |= MLX4_FLAG_SRIOV |
MLX4_FLAG_MASTER;
dev->num_vfs = nvfs;
}
}
atomic_set(&priv->opreq_count, 0);
INIT_WORK(&priv->opreq_task, mlx4_opreq_action);
/*
* Now reset the HCA before we touch the PCI capabilities or
* attempt a firmware command, since a boot ROM may have left
* the HCA in an undefined state.
*/
err = mlx4_reset(dev);
if (err) {
mlx4_err(dev, "Failed to reset HCA, aborting.\n");
goto err_sriov;
}
}
slave_start:
err = mlx4_cmd_init(dev);
if (err) {
mlx4_err(dev, "Failed to init command interface, aborting.\n");
goto err_sriov;
}
/* In slave functions, the communication channel must be initialized
* before posting commands. Also, init num_slaves before calling
* mlx4_init_hca */
if (mlx4_is_mfunc(dev)) {
if (mlx4_is_master(dev))
dev->num_slaves = MLX4_MAX_NUM_SLAVES;
else {
dev->num_slaves = 0;
err = mlx4_multi_func_init(dev);
if (err) {
mlx4_err(dev, "Failed to init slave mfunc"
" interface, aborting.\n");
goto err_cmd;
}
}
}
err = mlx4_init_hca(dev);
if (err) {
if (err == -EACCES) {
/* Not the primary physical function;
* running in slave mode */
mlx4_cmd_cleanup(dev);
dev->flags |= MLX4_FLAG_SLAVE;
dev->flags &= ~MLX4_FLAG_MASTER;
goto slave_start;
} else
goto err_mfunc;
}
/* In master functions, the communication channel must be initialized
* after obtaining its address from fw */
if (mlx4_is_master(dev)) {
err = mlx4_multi_func_init(dev);
if (err) {
mlx4_err(dev, "Failed to init master mfunc"
"interface, aborting.\n");
goto err_close;
}
}
err = mlx4_alloc_eq_table(dev);
if (err)
goto err_master_mfunc;
priv->msix_ctl.pool_bm = 0;
mutex_init(&priv->msix_ctl.pool_lock);
mlx4_enable_msi_x(dev);
/* no MSIX and no shared IRQ */
if (!dev->caps.num_comp_vectors && !dev->caps.comp_pool) {
err = -ENOSPC;
goto err_free_eq;
}
if ((mlx4_is_mfunc(dev)) &&
!(dev->flags & MLX4_FLAG_MSI_X)) {
err = -ENOSYS;
mlx4_err(dev, "INTx is not supported in multi-function mode."
" aborting.\n");
goto err_free_eq;
}
if (!mlx4_is_slave(dev)) {
err = mlx4_init_steering(dev);
if (err)
goto err_free_eq;
}
err = mlx4_setup_hca(dev);
if (err == -EBUSY && (dev->flags & MLX4_FLAG_MSI_X) &&
!mlx4_is_mfunc(dev)) {
dev->flags &= ~MLX4_FLAG_MSI_X;
dev->caps.num_comp_vectors = 1;
dev->caps.comp_pool = 0;
pci_disable_msix(pdev);
err = mlx4_setup_hca(dev);
}
if (err)
goto err_steer;
mlx4_init_quotas(dev);
mlx4_init_hca_info(dev);
for (port = 1; port <= dev->caps.num_ports; port++) {
err = mlx4_init_port_info(dev, port);
if (err)
goto err_port;
}
err = mlx4_register_device(dev);
if (err)
goto err_port;
mlx4_request_modules(dev);
mlx4_sense_init(dev);
mlx4_start_sense(dev);
priv->pci_dev_data = pci_dev_data;
pci_set_drvdata(pdev, dev);
return 0;
err_port:
for (--port; port >= 1; --port)
mlx4_cleanup_port_info(&priv->port[port]);
mlx4_cleanup_counters_table(dev);
mlx4_cleanup_qp_table(dev);
mlx4_cleanup_srq_table(dev);
mlx4_cleanup_cq_table(dev);
mlx4_cmd_use_polling(dev);
mlx4_cleanup_eq_table(dev);
mlx4_cleanup_mcg_table(dev);
mlx4_cleanup_mr_table(dev);
mlx4_cleanup_xrcd_table(dev);
mlx4_cleanup_pd_table(dev);
mlx4_cleanup_uar_table(dev);
err_steer:
if (!mlx4_is_slave(dev))
mlx4_clear_steering(dev);
err_free_eq:
mlx4_free_eq_table(dev);
err_master_mfunc:
if (mlx4_is_master(dev)) {
mlx4_free_resource_tracker(dev, RES_TR_FREE_STRUCTS_ONLY);
mlx4_multi_func_cleanup(dev);
}
if (mlx4_is_slave(dev)) {
kfree(dev->caps.qp0_tunnel);
kfree(dev->caps.qp0_proxy);
kfree(dev->caps.qp1_tunnel);
kfree(dev->caps.qp1_proxy);
}
err_close:
if (dev->flags & MLX4_FLAG_MSI_X)
pci_disable_msix(pdev);
mlx4_close_hca(dev);
err_mfunc:
if (mlx4_is_slave(dev))
mlx4_multi_func_cleanup(dev);
err_cmd:
mlx4_cmd_cleanup(dev);
err_sriov:
if (dev->flags & MLX4_FLAG_SRIOV)
pci_disable_sriov(pdev);
if (!mlx4_is_slave(dev))
mlx4_free_ownership(dev);
err_free_dev:
kfree(priv);
err_release_regions:
pci_release_regions(pdev);
err_disable_pdev:
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
return err;
}
static int __devinit mlx4_init_one(struct pci_dev *pdev,
const struct pci_device_id *id)
{
device_set_desc(pdev->dev.bsddev, mlx4_version);
return __mlx4_init_one(pdev, id->driver_data);
}
static void mlx4_remove_one(struct pci_dev *pdev)
{
struct mlx4_dev *dev = pci_get_drvdata(pdev);
struct mlx4_priv *priv = mlx4_priv(dev);
int p;
if (dev) {
/* In SR-IOV mode it is not allowed to unload the PF's
* driver while there are active VFs */
if (mlx4_is_master(dev)) {
if (mlx4_how_many_lives_vf(dev))
mlx4_err(dev, "Removing PF when there are assigned VF's !!!\n");
}
mlx4_stop_sense(dev);
mlx4_unregister_device(dev);
mlx4_cleanup_hca_info(&priv->hca_info);
for (p = 1; p <= dev->caps.num_ports; p++) {
mlx4_cleanup_port_info(&priv->port[p]);
mlx4_CLOSE_PORT(dev, p);
}
if (mlx4_is_master(dev))
mlx4_free_resource_tracker(dev,
RES_TR_FREE_SLAVES_ONLY);
mlx4_cleanup_counters_table(dev);
mlx4_cleanup_qp_table(dev);
mlx4_cleanup_srq_table(dev);
mlx4_cleanup_cq_table(dev);
mlx4_cmd_use_polling(dev);
mlx4_cleanup_eq_table(dev);
mlx4_cleanup_mcg_table(dev);
mlx4_cleanup_mr_table(dev);
mlx4_cleanup_xrcd_table(dev);
mlx4_cleanup_pd_table(dev);
if (mlx4_is_master(dev))
mlx4_free_resource_tracker(dev,
RES_TR_FREE_STRUCTS_ONLY);
iounmap(priv->kar);
mlx4_uar_free(dev, &priv->driver_uar);
mlx4_cleanup_uar_table(dev);
if (!mlx4_is_slave(dev))
mlx4_clear_steering(dev);
mlx4_free_eq_table(dev);
if (mlx4_is_master(dev))
mlx4_multi_func_cleanup(dev);
mlx4_close_hca(dev);
if (mlx4_is_slave(dev))
mlx4_multi_func_cleanup(dev);
mlx4_cmd_cleanup(dev);
if (dev->flags & MLX4_FLAG_MSI_X)
pci_disable_msix(pdev);
if (dev->flags & MLX4_FLAG_SRIOV) {
mlx4_warn(dev, "Disabling SR-IOV\n");
pci_disable_sriov(pdev);
}
if (!mlx4_is_slave(dev))
mlx4_free_ownership(dev);
kfree(dev->caps.qp0_tunnel);
kfree(dev->caps.qp0_proxy);
kfree(dev->caps.qp1_tunnel);
kfree(dev->caps.qp1_proxy);
kfree(priv);
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
}
}
static int restore_current_port_types(struct mlx4_dev *dev,
enum mlx4_port_type *types,
enum mlx4_port_type *poss_types)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int err, i;
mlx4_stop_sense(dev);
mutex_lock(&priv->port_mutex);
for (i = 0; i < dev->caps.num_ports; i++)
dev->caps.possible_type[i + 1] = poss_types[i];
err = mlx4_change_port_types(dev, types);
mlx4_start_sense(dev);
mutex_unlock(&priv->port_mutex);
return err;
}
int mlx4_restart_one(struct pci_dev *pdev)
{
struct mlx4_dev *dev = pci_get_drvdata(pdev);
struct mlx4_priv *priv = mlx4_priv(dev);
enum mlx4_port_type curr_type[MLX4_MAX_PORTS];
enum mlx4_port_type poss_type[MLX4_MAX_PORTS];
int pci_dev_data, err, i;
pci_dev_data = priv->pci_dev_data;
for (i = 0; i < dev->caps.num_ports; i++) {
curr_type[i] = dev->caps.port_type[i + 1];
poss_type[i] = dev->caps.possible_type[i + 1];
}
mlx4_remove_one(pdev);
err = __mlx4_init_one(pdev, pci_dev_data);
if (err)
return err;
dev = pci_get_drvdata(pdev);
err = restore_current_port_types(dev, curr_type, poss_type);
if (err)
mlx4_err(dev, "mlx4_restart_one: could not restore original port types (%d)\n",
err);
return 0;
}
static DEFINE_PCI_DEVICE_TABLE(mlx4_pci_table) = {
/* MT25408 "Hermon" SDR */
{ PCI_VDEVICE(MELLANOX, 0x6340), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" DDR */
{ PCI_VDEVICE(MELLANOX, 0x634a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" QDR */
{ PCI_VDEVICE(MELLANOX, 0x6354), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" DDR PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x6732), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" QDR PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x673c), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" EN 10GigE */
{ PCI_VDEVICE(MELLANOX, 0x6368), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" EN 10GigE PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x6750), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25458 ConnectX EN 10GBASE-T 10GigE */
{ PCI_VDEVICE(MELLANOX, 0x6372), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25458 ConnectX EN 10GBASE-T+Gen2 10GigE */
{ PCI_VDEVICE(MELLANOX, 0x675a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT26468 ConnectX EN 10GigE PCIe gen2*/
{ PCI_VDEVICE(MELLANOX, 0x6764), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
{ PCI_VDEVICE(MELLANOX, 0x6746), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT26478 ConnectX2 40GigE PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x676e), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25400 Family [ConnectX-2 Virtual Function] */
{ PCI_VDEVICE(MELLANOX, 0x1002), MLX4_PCI_DEV_IS_VF },
/* MT27500 Family [ConnectX-3] */
{ PCI_VDEVICE(MELLANOX, 0x1003), 0 },
/* MT27500 Family [ConnectX-3 Virtual Function] */
{ PCI_VDEVICE(MELLANOX, 0x1004), MLX4_PCI_DEV_IS_VF },
{ PCI_VDEVICE(MELLANOX, 0x1005), 0 }, /* MT27510 Family */
{ PCI_VDEVICE(MELLANOX, 0x1006), 0 }, /* MT27511 Family */
{ PCI_VDEVICE(MELLANOX, 0x1007), 0 }, /* MT27520 Family */
{ PCI_VDEVICE(MELLANOX, 0x1008), 0 }, /* MT27521 Family */
{ PCI_VDEVICE(MELLANOX, 0x1009), 0 }, /* MT27530 Family */
{ PCI_VDEVICE(MELLANOX, 0x100a), 0 }, /* MT27531 Family */
{ PCI_VDEVICE(MELLANOX, 0x100b), 0 }, /* MT27540 Family */
{ PCI_VDEVICE(MELLANOX, 0x100c), 0 }, /* MT27541 Family */
{ PCI_VDEVICE(MELLANOX, 0x100d), 0 }, /* MT27550 Family */
{ PCI_VDEVICE(MELLANOX, 0x100e), 0 }, /* MT27551 Family */
{ PCI_VDEVICE(MELLANOX, 0x100f), 0 }, /* MT27560 Family */
{ PCI_VDEVICE(MELLANOX, 0x1010), 0 }, /* MT27561 Family */
{ 0, }
};
MODULE_DEVICE_TABLE(pci, mlx4_pci_table);
static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
mlx4_remove_one(pdev);
return state == pci_channel_io_perm_failure ?
PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
}
static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev)
{
int ret = __mlx4_init_one(pdev, 0);
return ret ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
static const struct pci_error_handlers mlx4_err_handler = {
.error_detected = mlx4_pci_err_detected,
.slot_reset = mlx4_pci_slot_reset,
};
static int suspend(struct pci_dev *pdev, pm_message_t state)
{
mlx4_remove_one(pdev);
return 0;
}
static int resume(struct pci_dev *pdev)
{
return __mlx4_init_one(pdev, 0);
}
static struct pci_driver mlx4_driver = {
.name = DRV_NAME,
.id_table = mlx4_pci_table,
.probe = mlx4_init_one,
.remove = __devexit_p(mlx4_remove_one),
.suspend = suspend,
.resume = resume,
.err_handler = &mlx4_err_handler,
};
static int __init mlx4_verify_params(void)
{
int status;
status = update_defaults(&port_type_array);
if (status == INVALID_STR) {
if (mlx4_fill_dbdf2val_tbl(&port_type_array.dbdf2val))
return -1;
} else if (status == INVALID_DATA) {
return -1;
}
status = update_defaults(&num_vfs);
if (status == INVALID_STR) {
if (mlx4_fill_dbdf2val_tbl(&num_vfs.dbdf2val))
return -1;
} else if (status == INVALID_DATA) {
return -1;
}
status = update_defaults(&probe_vf);
if (status == INVALID_STR) {
if (mlx4_fill_dbdf2val_tbl(&probe_vf.dbdf2val))
return -1;
} else if (status == INVALID_DATA) {
return -1;
}
if (msi_x < 0) {
pr_warn("mlx4_core: bad msi_x: %d\n", msi_x);
return -1;
}
if ((log_num_mac < 0) || (log_num_mac > 7)) {
pr_warning("mlx4_core: bad num_mac: %d\n", log_num_mac);
return -1;
}
if (log_num_vlan != 0)
pr_warning("mlx4_core: log_num_vlan - obsolete module param, using %d\n",
MLX4_LOG_NUM_VLANS);
if (mlx4_set_4k_mtu != -1)
pr_warning("mlx4_core: set_4k_mtu - obsolete module param\n");
if ((log_mtts_per_seg < 0) || (log_mtts_per_seg > 7)) {
pr_warning("mlx4_core: bad log_mtts_per_seg: %d\n", log_mtts_per_seg);
return -1;
}
if (mlx4_log_num_mgm_entry_size != -1 &&
(mlx4_log_num_mgm_entry_size < MLX4_MIN_MGM_LOG_ENTRY_SIZE ||
mlx4_log_num_mgm_entry_size > MLX4_MAX_MGM_LOG_ENTRY_SIZE)) {
pr_warning("mlx4_core: mlx4_log_num_mgm_entry_size (%d) not "
"in legal range (-1 or %d..%d)\n",
mlx4_log_num_mgm_entry_size,
MLX4_MIN_MGM_LOG_ENTRY_SIZE,
MLX4_MAX_MGM_LOG_ENTRY_SIZE);
return -1;
}
if (mod_param_profile.num_qp < 18 || mod_param_profile.num_qp > 23) {
pr_warning("mlx4_core: bad log_num_qp: %d\n",
mod_param_profile.num_qp);
return -1;
}
if (mod_param_profile.num_srq < 10) {
pr_warning("mlx4_core: too low log_num_srq: %d\n",
mod_param_profile.num_srq);
return -1;
}
if (mod_param_profile.num_cq < 10) {
pr_warning("mlx4_core: too low log_num_cq: %d\n",
mod_param_profile.num_cq);
return -1;
}
if (mod_param_profile.num_mpt < 10) {
pr_warning("mlx4_core: too low log_num_mpt: %d\n",
mod_param_profile.num_mpt);
return -1;
}
if (mod_param_profile.num_mtt_segs &&
mod_param_profile.num_mtt_segs < 15) {
pr_warning("mlx4_core: too low log_num_mtt: %d\n",
mod_param_profile.num_mtt_segs);
return -1;
}
if (mod_param_profile.num_mtt_segs > MLX4_MAX_LOG_NUM_MTT) {
pr_warning("mlx4_core: too high log_num_mtt: %d\n",
mod_param_profile.num_mtt_segs);
return -1;
}
return 0;
}
static int __init mlx4_init(void)
{
int ret;
if (mlx4_verify_params())
return -EINVAL;
mlx4_catas_init();
mlx4_wq = create_singlethread_workqueue("mlx4");
if (!mlx4_wq)
return -ENOMEM;
if (enable_sys_tune)
sys_tune_init();
ret = pci_register_driver(&mlx4_driver);
if (ret < 0)
goto err;
return 0;
err:
if (enable_sys_tune)
sys_tune_fini();
destroy_workqueue(mlx4_wq);
return ret;
}
static void __exit mlx4_cleanup(void)
{
if (enable_sys_tune)
sys_tune_fini();
pci_unregister_driver(&mlx4_driver);
destroy_workqueue(mlx4_wq);
}
module_init_order(mlx4_init, SI_ORDER_MIDDLE);
module_exit(mlx4_cleanup);
static int
mlx4_evhand(module_t mod, int event, void *arg)
{
return (0);
}
static moduledata_t mlx4_mod = {
.name = "mlx4",
.evhand = mlx4_evhand,
};
MODULE_VERSION(mlx4, 1);
DECLARE_MODULE(mlx4, mlx4_mod, SI_SUB_OFED_PREINIT, SI_ORDER_ANY);
MODULE_DEPEND(mlx4, linuxkpi, 1, 1, 1);
Index: head/sys/xen/interface/io/blkif.h
===================================================================
--- head/sys/xen/interface/io/blkif.h (revision 300049)
+++ head/sys/xen/interface/io/blkif.h (revision 300050)
@@ -1,646 +1,646 @@
/******************************************************************************
* blkif.h
*
* Unified block-device I/O interface for Xen guest OSes.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* Copyright (c) 2003-2004, Keir Fraser
* Copyright (c) 2012, Spectra Logic Corporation
*/
#ifndef __XEN_PUBLIC_IO_BLKIF_H__
#define __XEN_PUBLIC_IO_BLKIF_H__
#include "ring.h"
#include "../grant_table.h"
/*
* Front->back notifications: When enqueuing a new request, sending a
* notification can be made conditional on req_event (i.e., the generic
* hold-off mechanism provided by the ring macros). Backends must set
* req_event appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()).
*
* Back->front notifications: When enqueuing a new response, sending a
* notification can be made conditional on rsp_event (i.e., the generic
* hold-off mechanism provided by the ring macros). Frontends must set
* rsp_event appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()).
*/
#ifndef blkif_vdev_t
#define blkif_vdev_t uint16_t
#endif
#define blkif_sector_t uint64_t
/*
* Feature and Parameter Negotiation
* =================================
* The two halves of a Xen block driver utilize nodes within the XenStore to
* communicate capabilities and to negotiate operating parameters. This
* section enumerates these nodes which reside in the respective front and
* backend portions of the XenStore, following the XenBus convention.
*
* All data in the XenStore is stored as strings. Nodes specifying numeric
* values are encoded in decimal. Integer value ranges listed below are
* expressed as fixed-size integer types capable of storing the conversion
* of a properly formatted node string, without loss of information.
*
* Any specified default value is in effect if the corresponding XenBus node
* is not present in the XenStore.
*
* XenStore nodes in sections marked "PRIVATE" are solely for use by the
* driver side whose XenBus tree contains them.
*
* XenStore nodes marked "DEPRECATED" in their notes section should only be
* used to provide interoperability with legacy implementations.
*
* See the XenBus state transition diagram below for details on when XenBus
* nodes must be published and when they can be queried.
*
*****************************************************************************
* Backend XenBus Nodes
*****************************************************************************
*
*------------------ Backend Device Identification (PRIVATE) ------------------
*
* mode
* Values: "r" (read only), "w" (writable)
*
* The read or write access permissions to the backing store to be
* granted to the frontend.
*
* params
* Values: string
*
* A free formatted string providing sufficient information for the
* backend driver to open the backing device. (e.g. the path to the
* file or block device representing the backing store.)
*
* physical-device
* Values: "MAJOR:MINOR"
*
* MAJOR and MINOR are the major number and minor number of the
* backing device respectively.
*
* type
* Values: "file", "phy", "tap"
*
* The type of the backing device/object.
*
*
* direct-io-safe
* Values: 0/1 (boolean)
* Default Value: 0
*
* The underlying storage is not affected by the direct IO memory
* lifetime bug. See:
* http://lists.xen.org/archives/html/xen-devel/2012-12/msg01154.html
*
* Therefore this option gives the backend permission to use
* O_DIRECT, notwithstanding that bug.
*
* That is, if this option is enabled, use of O_DIRECT is safe,
* in circumstances where we would normally have avoided it as a
* workaround for that bug. This option is not relevant for all
* backends, and even not necessarily supported for those for
* which it is relevant. A backend which knows that it is not
* affected by the bug can ignore this option.
*
* This option doesn't require a backend to use O_DIRECT, so it
* should not be used to try to control the caching behaviour.
*
*--------------------------------- Features ---------------------------------
*
* feature-barrier
* Values: 0/1 (boolean)
* Default Value: 0
*
* A value of "1" indicates that the backend can process requests
* containing the BLKIF_OP_WRITE_BARRIER request opcode. Requests
* of this type may still be returned at any time with the
* BLKIF_RSP_EOPNOTSUPP result code.
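*
* As an illustration only (a hedged sketch, not part of this interface):
* on FreeBSD, a frontend might probe this node with xs_gather(9),
* treating a read failure as the default value of 0, e.g.:
*
*     int feature_barrier = 0;
*     (void)xs_gather(XST_NIL, xenbus_get_otherend_path(dev),
*         "feature-barrier", "%d", &feature_barrier, NULL);
*
* where "dev" is assumed to be the frontend's xenbus device_t.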
*
* feature-flush-cache
* Values: 0/1 (boolean)
* Default Value: 0
*
* A value of "1" indicates that the backend can process requests
* containing the BLKIF_OP_FLUSH_DISKCACHE request opcode. Requests
* of this type may still be returned at any time with the
* BLKIF_RSP_EOPNOTSUPP result code.
*
* feature-discard
* Values: 0/1 (boolean)
* Default Value: 0
*
* A value of "1" indicates that the backend can process requests
* containing the BLKIF_OP_DISCARD request opcode. Requests
* of this type may still be returned at any time with the
* BLKIF_RSP_EOPNOTSUPP result code.
*
* feature-persistent
* Values: 0/1 (boolean)
* Default Value: 0
* Notes: 7
*
* A value of "1" indicates that the backend can keep the grants used
* by the frontend driver mapped, so the same set of grants should be
* used in all transactions. The maximum number of grants the backend
* can map persistently depends on the implementation, but ideally it
* should be RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST. Using this
* feature the backend doesn't need to unmap each grant, preventing
* costly TLB flushes. The backend driver should only map grants
* persistently if the frontend supports it. If a backend driver chooses
* to use the persistent protocol when the frontend doesn't support it,
* it will probably hit the maximum number of persistently mapped grants
* (due to the fact that the frontend won't be reusing the same grants),
* and fall back to non-persistent mode. Backend implementations may
* shrink or expand the number of persistently mapped grants without
* notifying the frontend depending on memory constraints (this might
* cause a performance degradation).
*
* If a backend driver wants to limit the maximum number of persistently
* mapped grants to a value less than RING_SIZE *
* BLKIF_MAX_SEGMENTS_PER_REQUEST, an LRU strategy should be used to
* discard the grants that are less commonly used. Using an LRU in the
* backend driver paired with a LIFO queue in the frontend will
* allow us to have better performance in this scenario.
*
*----------------------- Request Transport Parameters ------------------------
*
* max-ring-page-order
* Values: <uint32_t>
* Default Value: 0
* Notes: 1, 3
*
* The maximum supported size of the request ring buffer in units of
* lb(machine pages). (e.g. 0 == 1 page, 1 == 2 pages, 2 == 4 pages,
* etc.).
*
* max-ring-pages
* Values: <uint32_t>
* Default Value: 1
* Notes: DEPRECATED, 2, 3
*
* The maximum supported size of the request ring buffer in units of
* machine pages. The value must be a power of 2.
*
*------------------------- Backend Device Properties -------------------------
*
* discard-enable
* Values: 0/1 (boolean)
* Default Value: 1
*
* This optional property, set by the toolstack, requests that the
* backend offer, or not offer, discard to the frontend. If the
* property is missing the backend should offer discard if the backing
* storage actually supports it.
*
* discard-alignment
* Values: <uint32_t>
* Default Value: 0
* Notes: 4, 5
*
* The offset, in bytes from the beginning of the virtual block device,
* to the first, addressable, discard extent on the underlying device.
*
* discard-granularity
* Values: <uint32_t>
* Default Value: <"sector-size">
* Notes: 4
*
* The size, in bytes, of the individually addressable discard extents
* of the underlying device.
*
* discard-secure
* Values: 0/1 (boolean)
* Default Value: 0
* Notes: 10
*
* A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
* requests with the BLKIF_DISCARD_SECURE flag set.
*
* info
* Values: <uint32_t> (bitmap)
*
* A collection of bit flags describing attributes of the backing
* device. The VDISK_* macros define the meaning of each bit
* location.
*
* sector-size
* Values: <uint32_t>
*
* The logical sector size, in bytes, of the backend device.
*
* physical-sector-size
* Values: <uint32_t>
*
* The physical sector size, in bytes, of the backend device.
*
* sectors
* Values: <uint64_t>
*
* The size of the backend device, expressed in units of its logical
* sector size ("sector-size").
*
*****************************************************************************
* Frontend XenBus Nodes
*****************************************************************************
*
*----------------------- Request Transport Parameters -----------------------
*
* event-channel
* Values: <uint32_t>
*
* The identifier of the Xen event channel used to signal activity
* in the ring buffer.
*
* ring-ref
* Values: <uint32_t>
* Notes: 6
*
* The Xen grant reference granting permission for the backend to map
* the sole page in a single page sized ring buffer.
*
* ring-ref%u
* Values: <uint32_t>
* Notes: 6
*
* For a frontend providing a multi-page ring, a "number of ring pages"
* sized list of nodes, each containing a Xen grant reference granting
* permission for the backend to map the page of the ring located
* at page index "%u". Page indexes are zero based.
*
* protocol
* Values: string (XEN_IO_PROTO_ABI_*)
* Default Value: XEN_IO_PROTO_ABI_NATIVE
*
* The machine ABI rules governing the format of all ring request and
* response structures.
*
* ring-page-order
* Values: <uint32_t>
* Default Value: 0
* Maximum Value: MAX(ffs(max-ring-pages) - 1, max-ring-page-order)
* Notes: 1, 3
*
* The size of the frontend allocated request ring buffer in units
* of lb(machine pages). (e.g. 0 == 1 page, 1 == 2 pages, 2 == 4 pages,
* etc.).
*
* num-ring-pages
* Values: <uint32_t>
* Default Value: 1
* Maximum Value: MAX(max-ring-pages,(0x1 << max-ring-page-order))
* Notes: DEPRECATED, 2, 3
*
* The size of the frontend allocated request ring buffer in units of
* machine pages. The value must be a power of 2.
*
* feature-persistent
* Values: 0/1 (boolean)
* Default Value: 0
* Notes: 7, 8, 9
*
* A value of "1" indicates that the frontend will reuse the same grants
* for all transactions, allowing the backend to map them with write
* access (even when it should be read-only). If the frontend hits the
* maximum number of allowed persistently mapped grants, it can fall back
* to non-persistent mode. This will cause a performance degradation,
- * since the the backend driver will still try to map those grants
+ * since the backend driver will still try to map those grants
* persistently. Since the persistent grants protocol is compatible with
* the previous protocol, a frontend driver can choose to work in
* persistent mode even when the backend doesn't support it.
*
* It is recommended that the frontend driver stores the persistently
* mapped grants in a LIFO queue, so a subset of all persistently mapped
* grants gets used commonly. This is done in case the backend driver
* decides to limit the maximum number of persistently mapped grants
* to a value less than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
*
*------------------------- Virtual Device Properties -------------------------
*
* device-type
* Values: "disk", "cdrom", "floppy", etc.
*
* virtual-device
* Values: <uint32_t>
*
* A value indicating the physical device to virtualize within the
* frontend's domain. (e.g. "The first ATA disk", "The third SCSI
* disk", etc.)
*
* See docs/misc/vbd-interface.txt for details on the format of this
* value.
*
* Notes
* -----
* (1) Multi-page ring buffer scheme first developed in the Citrix XenServer
* PV drivers.
* (2) Multi-page ring buffer scheme first used in some RedHat distributions
* including a distribution deployed on certain nodes of the Amazon
* EC2 cluster.
* (3) Support for multi-page ring buffers was implemented independently,
* in slightly different forms, by both Citrix and RedHat/Amazon.
* For full interoperability, block front and backends should publish
* identical ring parameters, adjusted for unit differences, to the
* XenStore nodes used in both schemes.
* (4) Devices that support discard functionality may internally allocate space
* (discardable extents) in units that are larger than the exported logical
* block size. If the backing device has such discardable extents the
* backend should provide both discard-granularity and discard-alignment.
* Providing just one of the two may be considered an error by the frontend.
* Backends supporting discard should include discard-granularity and
* discard-alignment even if they support discarding individual sectors.
* Frontends should assume discard-alignment == 0 and discard-granularity
* == sector size if these keys are missing.
* (5) The discard-alignment parameter allows a physical device to be
* partitioned into virtual devices that do not necessarily begin or
* end on a discardable extent boundary.
* (6) When there is only a single page allocated to the request ring,
* 'ring-ref' is used to communicate the grant reference for this
* page to the backend. When using a multi-page ring, the 'ring-ref'
* node is not created. Instead 'ring-ref0' - 'ring-refN' are used.
* (7) When using persistent grants, data has to be copied from/to the page
* where the grant is currently mapped. The overhead of doing this copy,
* however, does not cancel out the speed improvement of not having to
* unmap the grants.
* (8) The frontend driver has to allow the backend driver to map all grants
* with write access, even when they should be mapped read-only, since
* further requests may reuse these grants and require write permissions.
* (9) The Linux implementation doesn't have a limit on the maximum number of
* grants that can be persistently mapped in the frontend driver, but
* due to the frontend driver implementation it should never be bigger
* than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
*(10) The discard-secure property may be present and will be set to 1 if the
* backing device supports secure discard.
*/
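/*
* Illustrative sketch (a hypothetical frontend-side fragment, not part of
* this header): publishing grant references under the "ring-ref%u" scheme
* from note 6 for a ring of (1 << ring_page_order) pages:
*
*     char node[24];
*     unsigned int i, nr_pages = 1u << ring_page_order;
*     for (i = 0; i < nr_pages; i++) {
*         snprintf(node, sizeof(node), "ring-ref%u", i);
*         /* write grant reference ring_ref[i] to this XenStore node */
*     }
*
* The single-page case publishes the plain "ring-ref" node instead.
*/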
/*
* STATE DIAGRAMS
*
*****************************************************************************
* Startup *
*****************************************************************************
*
* Tool stack creates front and back nodes with state XenbusStateInitialising.
*
* Front Back
* ================================= =====================================
* XenbusStateInitialising XenbusStateInitialising
* o Query virtual device o Query backend device identification
* properties. data.
* o Setup OS device instance. o Open and validate backend device.
* o Publish backend features and
* transport parameters.
* |
* |
* V
* XenbusStateInitWait
*
* o Query backend features and
* transport parameters.
* o Allocate and initialize the
* request ring.
* o Publish transport parameters
* that will be in effect during
* this connection.
* |
* |
* V
* XenbusStateInitialised
*
* o Query frontend transport parameters.
* o Connect to the request ring and
* event channel.
* o Publish backend device properties.
* |
* |
* V
* XenbusStateConnected
*
* o Query backend device properties.
* o Finalize OS virtual device
* instance.
* |
* |
* V
* XenbusStateConnected
*
* Note: Drivers that do not support any optional features, or the negotiation
* of transport parameters, can skip certain states in the state machine:
*
* o A frontend may transition to XenbusStateInitialised without
* waiting for the backend to enter XenbusStateInitWait. In this
* case, default transport parameters are in effect and any
* transport parameters published by the frontend must contain
* their default values.
*
* o A backend may transition to XenbusStateInitialised, bypassing
* XenbusStateInitWait, without waiting for the frontend to first
* enter the XenbusStateInitialised state. In this case, default
* transport parameters are in effect and any transport parameters
* published by the backend must contain their default values.
*
* Drivers that support optional features and/or transport parameter
* negotiation must tolerate these additional state transition paths.
* In general this means performing the work of any skipped state
* transition, if it has not already been performed, in addition to the
* work associated with entry into the current state.
*/
/*
* REQUEST CODES.
*/
#define BLKIF_OP_READ 0
#define BLKIF_OP_WRITE 1
/*
* All writes issued prior to a request with the BLKIF_OP_WRITE_BARRIER
* operation code ("barrier request") must be completed prior to the
* execution of the barrier request. All writes issued after the barrier
* request must not execute until after the completion of the barrier request.
*
* Optional. See "feature-barrier" XenBus node documentation above.
*/
#define BLKIF_OP_WRITE_BARRIER 2
/*
* Commit any uncommitted contents of the backing device's volatile cache
* to stable storage.
*
* Optional. See "feature-flush-cache" XenBus node documentation above.
*/
#define BLKIF_OP_FLUSH_DISKCACHE 3
/*
* Used in SLES sources for a device-specific command packet
* contained within the request. Reserved for that purpose.
*/
#define BLKIF_OP_RESERVED_1 4
/*
* Indicate to the backend device that a region of storage is no longer in
* use, and may be discarded at any time without impact to the client. If
* the BLKIF_DISCARD_SECURE flag is set on the request, all copies of the
* discarded region on the device must be rendered unrecoverable before the
* command returns.
*
* This operation is analogous to performing a trim (ATA) or unmap (SCSI)
* command on a native device.
*
* More information about trim/unmap operations can be found at:
* http://t13.org/Documents/UploadedDocuments/docs2008/
* e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc
* http://www.seagate.com/staticfiles/support/disc/manuals/
* Interface%20manuals/100293068c.pdf
*
* Optional. See "feature-discard", "discard-alignment",
* "discard-granularity", and "discard-secure" in the XenBus node
* documentation above.
*/
#define BLKIF_OP_DISCARD 5
/*
* Recognized if "feature-max-indirect-segments" in present in the backend
* xenbus info. The "feature-max-indirect-segments" node contains the maximum
* number of segments allowed by the backend per request. If the node is
* present, the frontend might use blkif_request_indirect structs in order to
* issue requests with more than BLKIF_MAX_SEGMENTS_PER_REQUEST (11). The
* maximum number of indirect segments is fixed by the backend, but the
* frontend can issue requests with any number of indirect segments as long as
* it's less than the number provided by the backend. The indirect_grefs field
* in blkif_request_indirect should be filled by the frontend with the
* grant references of the pages that are holding the indirect segments.
* These pages are filled with an array of blkif_request_segment that hold the
* information about the segments. The number of indirect pages to use is
* determined by the number of segments an indirect request contains. Every
* indirect page can contain a maximum of
* (PAGE_SIZE / sizeof(struct blkif_request_segment)) segments, so to
* calculate the number of indirect pages to use we have to do
* ceil(indirect_segments / (PAGE_SIZE / sizeof(struct blkif_request_segment))).
*
* If a backend does not recognize BLKIF_OP_INDIRECT, it should *not*
* create the "feature-max-indirect-segments" node!
*/
#define BLKIF_OP_INDIRECT 6
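/*
* Illustrative sketch (not part of the ABI): the number of indirect pages
* needed for a request with nseg segments, using the ceil() formula above
* in integer arithmetic:
*
*     #define SEGS_PER_INDIRECT_PAGE \
*         (PAGE_SIZE / sizeof(struct blkif_request_segment))
*     unsigned int nr_indirect_pages =
*         (nseg + SEGS_PER_INDIRECT_PAGE - 1) / SEGS_PER_INDIRECT_PAGE;
*
* where nseg is a hypothetical local holding the request's segment count.
*/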
/*
* Maximum scatter/gather segments per request.
* This is carefully chosen so that sizeof(blkif_ring_t) <= PAGE_SIZE.
* NB. This could be 12 if the ring indexes weren't stored in the same page.
*/
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
/*
* Maximum number of indirect pages to use per request.
*/
#define BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST 8
/*
* NB. first_sect and last_sect in blkif_request_segment, as well as
* sector_number in blkif_request, are always expressed in 512-byte units.
* However they must be properly aligned to the real sector size of the
* physical disk, which is reported in the "physical-sector-size" node in
* the backend xenbus info. Also the xenbus "sectors" node is expressed in
* 512-byte units.
*/
struct blkif_request_segment {
grant_ref_t gref; /* reference to I/O buffer frame */
/* @first_sect: first sector in frame to transfer (inclusive). */
/* @last_sect: last sector in frame to transfer (inclusive). */
uint8_t first_sect, last_sect;
};
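/*
* Illustrative sketch (assumption: physical_sector_size is a hypothetical
* local holding the value read from the "physical-sector-size" node): a
* frontend could verify the alignment requirement above with
*
*     int aligned = ((uint64_t)sector_number * 512) %
*         physical_sector_size == 0;
*
* since sector_number is expressed in 512-byte units.
*/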
/*
* Starting ring element for any I/O request.
*/
struct blkif_request {
uint8_t operation; /* BLKIF_OP_??? */
uint8_t nr_segments; /* number of segments */
blkif_vdev_t handle; /* only for read/write requests */
uint64_t id; /* private guest value, echoed in resp */
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
typedef struct blkif_request blkif_request_t;
/*
* Cast to this structure when blkif_request.operation == BLKIF_OP_DISCARD
* sizeof(struct blkif_request_discard) <= sizeof(struct blkif_request)
*/
struct blkif_request_discard {
uint8_t operation; /* BLKIF_OP_DISCARD */
uint8_t flag; /* BLKIF_DISCARD_SECURE or zero */
#define BLKIF_DISCARD_SECURE (1<<0) /* ignored if discard-secure=0 */
blkif_vdev_t handle; /* same as for read/write requests */
uint64_t id; /* private guest value, echoed in resp */
blkif_sector_t sector_number;/* start sector idx on disk */
uint64_t nr_sectors; /* number of contiguous sectors to discard*/
};
typedef struct blkif_request_discard blkif_request_discard_t;
struct blkif_request_indirect {
uint8_t operation; /* BLKIF_OP_INDIRECT */
uint8_t indirect_op; /* BLKIF_OP_{READ/WRITE} */
uint16_t nr_segments; /* number of segments */
uint64_t id; /* private guest value, echoed in resp */
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
blkif_vdev_t handle; /* same as for read/write requests */
grant_ref_t indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST];
#ifdef __i386__
uint64_t pad; /* Make it 64 byte aligned on i386 */
#endif
};
typedef struct blkif_request_indirect blkif_request_indirect_t;
struct blkif_response {
uint64_t id; /* copied from request */
uint8_t operation; /* copied from request */
int16_t status; /* BLKIF_RSP_??? */
};
typedef struct blkif_response blkif_response_t;
/*
* STATUS RETURN CODES.
*/
/* Operation not supported (only happens on barrier writes). */
#define BLKIF_RSP_EOPNOTSUPP -2
/* Operation failed for some unspecified reason (-EIO). */
#define BLKIF_RSP_ERROR -1
/* Operation completed successfully. */
#define BLKIF_RSP_OKAY 0
/*
* Generate blkif ring structures and types.
*/
DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);
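/*
* Illustrative sketch (frontend side, error handling omitted): the macro
* above generates blkif_sring_t and blkif_front_ring_t, which a frontend
* would typically initialize with the generic helpers from "ring.h":
*
*     blkif_sring_t *sring = ...;   /* one shared, granted page */
*     blkif_front_ring_t ring;
*     SHARED_RING_INIT(sring);
*     FRONT_RING_INIT(&ring, sring, PAGE_SIZE);
*/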
#define VDISK_CDROM 0x1
#define VDISK_REMOVABLE 0x2
#define VDISK_READONLY 0x4
#endif /* __XEN_PUBLIC_IO_BLKIF_H__ */
/*
* Local variables:
* mode: C
* c-file-style: "BSD"
* c-basic-offset: 4
* tab-width: 4
* indent-tabs-mode: nil
* End:
*/
Index: head/usr.bin/numactl/numactl.1
===================================================================
--- head/usr.bin/numactl/numactl.1 (revision 300049)
+++ head/usr.bin/numactl/numactl.1 (revision 300050)
@@ -1,132 +1,132 @@
.\" Copyright (c) 2015 Adrian Chadd <adrian@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd May 9, 2015
.Dt NUMACTL 1
.Os
.Sh NAME
.Nm numactl
.Nd "manage NUMA policy configuration"
.Sh SYNOPSIS
.Nm
.Op Fl l Ar policy
.Op Fl m Ar domain
.Op Fl c Ar domain
.Ar cmd ...
.Nm
.Fl g
.Op Fl p Ar pid
.Op Fl t Ar tid
.Nm
.Fl s
.Op Fl l Ar policy
.Op Fl m Ar domain
.Op Fl c Ar domain
.Op Fl p Ar pid
.Op Fl t Ar tid
.Sh DESCRIPTION
The
.Nm
command can be used to assign NUMA policies to processes/threads,
run commands with a given NUMA policy, and query information
about NUMA policies on running processes.
.Pp
.Nm
requires a target to modify or query.
The target may be specified as a command, process id or a thread id.
Using
.Fl -get
the target's NUMA policy may be queried.
Using
.Fl -set
the target's NUMA policy may be set.
If no target is specified,
.Nm
operates on itself.
Not all combinations of operations and targets are supported.
For example,
you may not set the id of an existing set, or query and launch a command
at the same time.
.Pp
Each process and thread has a NUMA policy.
By default the policy is NONE.
If a thread policy is NONE, then the policy will fall back to the
process policy.
If the process policy is NONE, then the policy will fall back to the
system default.
The policy may be queried by using
.Fl -get .
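For example, running
.Dl numactl --get
with no target reports the policy in effect for
.Nm
itself, after applying this fallback.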
.Pp
The options are as follows:
.Bl -tag -width ".Fl -cpudomain Ar domain"
.It Fl -cpudomain Ar domain , Fl c Ar domain
Set the given CPU scheduling policy.
-Constrain the the object (tid, pid, command) to run on CPUs
+Constrain the object (tid, pid, command) to run on CPUs
that belong to the given domain.
.It Fl -get , Fl g
Retrieve the NUMA policy for the given thread or process id.
.It Fl -set , Fl s
Set the NUMA policy for the given thread or process id.
.It Fl -memdomain Ar domain , Fl m Ar domain
Constrain the object (tid, pid, command) to the given
domain.
This is only valid for fixed-domain and fixed-domain-rr.
It must not be set for other policies.
.It Fl -mempolicy Ar policy , Fl l Ar policy
Set the given memory allocation policy.
Valid policies are none, rr, fixed-domain, fixed-domain-rr,
first-touch, and first-touch-rr.
A memdomain argument is required for fixed-domain and
fixed-domain-rr.
.It Fl -pid Ar pid , Fl p Ar pid
Operate on the given pid.
.It Fl -tid Ar tid , Fl t Ar tid
Operate on the given tid.
.El
.Sh EXIT STATUS
.Ex -std
.Sh EXAMPLES
Create a
.Pa /bin/sh
process with memory coming from domain 0, but
CPUs coming from domain 1:
.Dl numactl --mempolicy=fixed-domain --memdomain=0 --cpudomain=1 /bin/sh
.Pp
Query the NUMA policy for the
.Aq sh pid :
.Dl numactl --get --pid=<sh pid>
.Pp
Set the NUMA policy for the given TID to round-robin:
.Dl numactl --set --mempolicy=rr --tid=<tid>
.Sh SEE ALSO
.Xr cpuset 2 ,
.Xr numa 4
.Sh HISTORY
The
.Nm
command first appeared in
.Fx 11.0 .
.Sh AUTHORS
.An Adrian Chadd Aq Mt adrian@FreeBSD.org
Index: head/usr.sbin/bsdconfig/share/dialog.subr
===================================================================
--- head/usr.sbin/bsdconfig/share/dialog.subr (revision 300049)
+++ head/usr.sbin/bsdconfig/share/dialog.subr (revision 300050)
@@ -1,2340 +1,2340 @@
if [ ! "$_DIALOG_SUBR" ]; then _DIALOG_SUBR=1
#
# Copyright (c) 2006-2015 Devin Teske
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
# $FreeBSD$
#
############################################################ INCLUDES
BSDCFG_SHARE="/usr/share/bsdconfig"
. $BSDCFG_SHARE/common.subr || exit 1
f_dprintf "%s: loading includes..." dialog.subr
f_include $BSDCFG_SHARE/strings.subr
f_include $BSDCFG_SHARE/variable.subr
BSDCFG_LIBE="/usr/libexec/bsdconfig"
f_include_lang $BSDCFG_LIBE/include/messages.subr
############################################################ CONFIGURATION
#
# Default file descriptor to link to stdout for dialog(1) passthru allowing
# execution of dialog from within a sub-shell (so long as its standard output
# is explicitly redirected to this file descriptor).
#
: ${DIALOG_TERMINAL_PASSTHRU_FD:=${TERMINAL_STDOUT_PASSTHRU:-3}}
############################################################ GLOBALS
#
# Default name of dialog(1) utility
# NOTE: This is changed to "Xdialog" by the optional `-X' argument
#
DIALOG="dialog"
#
# Default dialog(1) title and backtitle text
#
DIALOG_TITLE="$pgm"
DIALOG_BACKTITLE="bsdconfig"
#
# Settings used while interacting with dialog(1)
#
DIALOG_MENU_TAGS="123456789ABCDEFGHIJKLMNOPQRSTUVWYZabcdefghijklmnopqrstuvwxyz"
#
# Declare that we are fully-compliant with Xdialog(1) by unset'ing all
# compatibility settings.
#
unset XDIALOG_HIGH_DIALOG_COMPAT
unset XDIALOG_FORCE_AUTOSIZE
unset XDIALOG_INFOBOX_TIMEOUT
#
# Exit codes for [X]dialog(1)
#
DIALOG_OK=${SUCCESS:-0}
DIALOG_CANCEL=${FAILURE:-1}
DIALOG_HELP=2
DIALOG_EXTRA=3
DIALOG_ITEM_HELP=4
export DIALOG_ERROR=254 # sh(1) can't handle the default of `-1'
DIALOG_ESC=255
#
# Default behavior is to call f_dialog_init() automatically when loaded.
#
: ${DIALOG_SELF_INITIALIZE=1}
#
# Default terminal size (used if/when running without a controlling terminal)
#
: ${DEFAULT_TERMINAL_SIZE:=24 80}
#
# Minimum width(s) for various dialog(1) implementations (sensible global
# default(s) for all widgets of a given variant)
#
: ${DIALOG_MIN_WIDTH:=24}
: ${XDIALOG_MIN_WIDTH:=35}
#
# When manually sizing Xdialog(1) widgets such as calendar and timebox, you'll
# need to know the size of the embedded GUI objects because the height passed
# to Xdialog(1) for these widgets has to be tall enough to accommodate them.
#
# These values are helpful when manually sizing with dialog(1) too, but in a
# different way. dialog(1) does not make you accommodate the custom items in the
# height (but does for width) -- a height of 3 will display three lines and a
# full calendar, for example (whereas Xdialog will truncate the calendar if
# given a height of 3). For dialog(1), use these values for making sure that
# the height does not exceed max_height (obtained by f_dialog_max_size()).
#
DIALOG_CALENDAR_HEIGHT=15
DIALOG_TIMEBOX_HEIGHT=6
############################################################ GENERIC FUNCTIONS
# f_dialog_data_sanitize $var_to_edit ...
#
# When using dialog(1) or Xdialog(1) sometimes unintended warnings or errors
# are generated from underlying libraries. For example, if $LANG is set to an
# invalid or unknown locale, the warnings from the Xdialog(1) libraries will
# clutter the output. This function helps by providing a centralized function
# that removes spurious warnings from the dialog(1) (or Xdialog(1)) response.
#
# Simply pass the name of one or more variables that need to be sanitized.
# After execution, the variables will hold their newly-sanitized data.
#
f_dialog_data_sanitize()
{
if [ "$#" -eq 0 ]; then
f_dprintf "%s: called with zero arguments" \
f_dialog_data_sanitize
return $FAILURE
fi
local __var_to_edit
for __var_to_edit in $*; do
# Skip warnings and trim leading/trailing whitespace
setvar $__var_to_edit "$( f_getvar $__var_to_edit | awk '
BEGIN { data = 0 }
{
if ( ! data )
{
if ( $0 ~ /^$/ ) next
if ( $0 ~ /^Gdk-WARNING \*\*:/ ) next
data = 1
}
print
}
' )"
done
}
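# Example usage (an illustrative sketch; `resp' is an arbitrary variable name):
#
#	resp=$( $DIALOG --inputbox "Value?" 8 40 2>&1 >&$DIALOG_TERMINAL_PASSTHRU_FD )
#	f_dialog_data_sanitize resp	# strip locale warnings from the response
#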
# f_dialog_line_sanitize $var_to_edit ...
#
# When using dialog(1) or Xdialog(1) sometimes unintended warnings or errors
# are generated from underlying libraries. For example, if $LANG is set to an
# invalid or unknown locale, the warnings from the Xdialog(1) libraries will
# clutter the output. This function helps by providing a centralized function
# that removes spurious warnings from the dialog(1) (or Xdialog(1)) response.
#
# Simply pass the name of one or more variables that need to be sanitized.
# After execution, the variables will hold their newly-sanitized data.
#
# This function, unlike f_dialog_data_sanitize(), also removes leading/trailing
# whitespace from each line.
#
f_dialog_line_sanitize()
{
if [ "$#" -eq 0 ]; then
f_dprintf "%s: called with zero arguments" \
f_dialog_line_sanitize
return $FAILURE
fi
local __var_to_edit
for __var_to_edit in $*; do
# Skip warnings and trim leading/trailing whitespace
setvar $__var_to_edit "$( f_getvar $__var_to_edit | awk '
BEGIN { data = 0 }
{
if ( ! data )
{
if ( $0 ~ /^$/ ) next
if ( $0 ~ /^Gdk-WARNING \*\*:/ ) next
data = 1
}
sub(/^[[:space:]]*/, "")
sub(/[[:space:]]*$/, "")
print
}
' )"
done
}
############################################################ TITLE FUNCTIONS
# f_dialog_title [$new_title]
#
# Set the title of future dialog(1) ($DIALOG_TITLE) or backtitle of Xdialog(1)
# ($DIALOG_BACKTITLE) invocations. If no arguments are given or the first
# argument is NULL, the current title is returned.
#
# Each time this function is called, a backup of the current values is made
# allowing a one-time (single-level) restoration of the previous title using
# the f_dialog_title_restore() function (below).
#
f_dialog_title()
{
local new_title="$1"
if [ "${1+set}" ]; then
if [ "$USE_XDIALOG" ]; then
_DIALOG_BACKTITLE="$DIALOG_BACKTITLE"
DIALOG_BACKTITLE="$new_title"
else
_DIALOG_TITLE="$DIALOG_TITLE"
DIALOG_TITLE="$new_title"
fi
else
if [ "$USE_XDIALOG" ]; then
echo "$DIALOG_BACKTITLE"
else
echo "$DIALOG_TITLE"
fi
fi
}
# f_dialog_title_restore
#
# Restore the previous title set by the last call to f_dialog_title().
# Restoration is non-recursive and only works to restore the most-recent title.
#
f_dialog_title_restore()
{
if [ "$USE_XDIALOG" ]; then
DIALOG_BACKTITLE="$_DIALOG_BACKTITLE"
else
DIALOG_TITLE="$_DIALOG_TITLE"
fi
}
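# Example (illustrative): temporarily override the title for one widget, then
# restore the previous value:
#
#	f_dialog_title "Select an option"
#	# ... display a widget here ...
#	f_dialog_title_restore
#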
# f_dialog_backtitle [$new_backtitle]
#
# Set the backtitle of future dialog(1) ($DIALOG_BACKTITLE) or title of
# Xdialog(1) ($DIALOG_TITLE) invocations. If no arguments are given or the
# first argument is NULL, the current backtitle is returned.
#
f_dialog_backtitle()
{
local new_backtitle="$1"
if [ "${1+set}" ]; then
if [ "$USE_XDIALOG" ]; then
_DIALOG_TITLE="$DIALOG_TITLE"
DIALOG_TITLE="$new_backtitle"
else
_DIALOG_BACKTITLE="$DIALOG_BACKTITLE"
DIALOG_BACKTITLE="$new_backtitle"
fi
else
if [ "$USE_XDIALOG" ]; then
echo "$DIALOG_TITLE"
else
echo "$DIALOG_BACKTITLE"
fi
fi
}
# f_dialog_backtitle_restore
#
# Restore the previous backtitle set by the last call to f_dialog_backtitle().
# Restoration is non-recursive and only works to restore the most-recent
# backtitle.
#
f_dialog_backtitle_restore()
{
if [ "$USE_XDIALOG" ]; then
DIALOG_TITLE="$_DIALOG_TITLE"
else
DIALOG_BACKTITLE="$_DIALOG_BACKTITLE"
fi
}
############################################################ SIZE FUNCTIONS
# f_dialog_max_size $var_height $var_width
#
# Get the maximum height and width for a dialog widget and store the values in
# $var_height and $var_width (respectively).
#
f_dialog_max_size()
{
local funcname=f_dialog_max_size
local __var_height="$1" __var_width="$2" __max_size
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
if [ "$USE_XDIALOG" ]; then
__max_size="$XDIALOG_MAXSIZE" # see CONFIGURATION
else
if __max_size=$( $DIALOG --print-maxsize \
2>&1 >&$DIALOG_TERMINAL_PASSTHRU_FD )
then
f_dprintf "$funcname: %s --print-maxsize = [%s]" \
"$DIALOG" "$__max_size"
# usually "MaxSize: 24, 80"
__max_size="${__max_size#*: }"
f_replaceall "$__max_size" "," "" __max_size
else
f_eval_catch -dk __max_size $funcname stty \
'stty size' || __max_size=
# usually "24 80"
fi
: ${__max_size:=$DEFAULT_TERMINAL_SIZE}
fi
if [ "$__var_height" ]; then
local __height="${__max_size%%[$IFS]*}"
#
# If we're not using Xdialog(1), we should assume that $DIALOG
# will render --backtitle behind the widget. In such a case, we
# should prevent a widget from obscuring the backtitle (unless
# $NO_BACKTITLE is set and non-NULL, allowing a trap-door).
#
if [ ! "$USE_XDIALOG" ] && [ ! "$NO_BACKTITLE" ]; then
#
# If use_shadow (in ~/.dialogrc) is OFF, we need to
# subtract 4, otherwise 5. However, don't check this
# every time, rely on an initialization variable set
# by f_dialog_init().
#
local __adjust=5
[ "$NO_SHADOW" ] && __adjust=4
# Don't adjust height if already too small (allowing
# obscured backtitle for small values of __height).
[ ${__height:-0} -gt 11 ] &&
__height=$(( $__height - $__adjust ))
fi
setvar "$__var_height" "$__height"
fi
[ "$__var_width" ] && setvar "$__var_width" "${__max_size##*[$IFS]}"
}
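# Example usage (illustrative):
#
#	local max_height max_width
#	f_dialog_max_size max_height max_width
#	f_dprintf "usable widget area: %s rows by %s columns" \
#	          "$max_height" "$max_width"
#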
# f_dialog_size_constrain $var_height $var_width [$min_height [$min_width]]
#
# Modify $var_height to be no-less-than $min_height (if given; zero otherwise)
# and no-greater-than terminal height (or screen height if $USE_XDIALOG is
# set).
#
# Also modify $var_width to be no-less-than $DIALOG_MIN_WIDTH (or
# $XDIALOG_MIN_WIDTH if $USE_XDIALOG is set) and no-greater-than terminal
# or screen width. The use of $[X]DIALOG_MIN_WIDTH can be overridden by
# passing $min_width.
#
# Return status is success unless one of the passed arguments is invalid
# or all of the $var_* arguments are either NULL or missing.
#
f_dialog_size_constrain()
{
local __var_height="$1" __var_width="$2"
local __min_height="$3" __min_width="$4"
local __retval=$SUCCESS
# Return failure unless at least one var_* argument is passed
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
#
# Print debug warnings if any given (non-NULL) arguments are invalid
# NOTE: Don't change the name of $__{var,min,}{height,width}
#
local __height __width
local __arg __cp __fname=f_dialog_size_constrain
for __arg in height width; do
debug= f_getvar __var_$__arg __cp
[ "$__cp" ] || continue
if ! debug= f_getvar "$__cp" __$__arg; then
f_dprintf "%s: var_%s variable \`%s' not set" \
$__fname $__arg "$__cp"
__retval=$FAILURE
elif ! eval f_isinteger \$__$__arg; then
f_dprintf "%s: var_%s variable value not a number" \
$__fname $__arg
__retval=$FAILURE
fi
done
for __arg in height width; do
debug= f_getvar __min_$__arg __cp
[ "$__cp" ] || continue
f_isinteger "$__cp" && continue
f_dprintf "%s: min_%s value not a number" $__fname $__arg
__retval=$FAILURE
setvar __min_$__arg ""
done
# Obtain maximum height and width values
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __max_height_size_constrain __max_width_size_constrain
f_dialog_max_size \
__max_height_size_constrain __max_width_size_constrain
# Adjust height if desired
if [ "$__var_height" ]; then
if [ $__height -lt ${__min_height:-0} ]; then
setvar "$__var_height" $__min_height
elif [ $__height -gt $__max_height_size_constrain ]; then
setvar "$__var_height" $__max_height_size_constrain
fi
fi
# Adjust width if desired
if [ "$__var_width" ]; then
if [ "$USE_XDIALOG" ]; then
: ${__min_width:=${XDIALOG_MIN_WIDTH:-35}}
else
: ${__min_width:=${DIALOG_MIN_WIDTH:-24}}
fi
if [ $__width -lt $__min_width ]; then
setvar "$__var_width" $__min_width
elif [ $__width -gt $__max_width_size_constrain ]; then
setvar "$__var_width" $__max_width_size_constrain
fi
fi
if [ "$debug" ]; then
# Print final constrained values to debugging
[ "$__var_height" ] && f_quietly f_getvar "$__var_height"
[ "$__var_width" ] && f_quietly f_getvar "$__var_width"
fi
return $__retval # success if no debug warnings were printed
}
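# Example (an illustrative sketch): clamp an over-sized request to the
# terminal while enforcing a minimum of 6 rows by 40 columns:
#
#	height=100 width=300
#	f_dialog_size_constrain height width 6 40
#	# height/width now fit the terminal (and are at least 6x40)
#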
# f_dialog_menu_constrain $var_height $var_width $var_rows "$prompt" \
# [$min_height [$min_width [$min_rows]]]
#
# Modify $var_height to be no-less-than $min_height (if given; zero otherwise)
# and no-greater-than terminal height (or screen height if $USE_XDIALOG is
# set).
#
# Also modify $var_width to be no-less-than $DIALOG_MIN_WIDTH (or
# $XDIALOG_MIN_WIDTH if $USE_XDIALOG is set) and no-greater-than terminal
# or screen width. The use of $[X]DIALOG_MIN_WIDTH can be overridden by
# passing $min_width.
#
# Last, modify $var_rows to be no-less-than $min_rows (if specified; zero
# otherwise) and no-greater-than (max_height - 8) where max_height is the
# terminal height (or screen height if $USE_XDIALOG is set). If $prompt is NULL
# or missing, dialog(1) allows $var_rows to be (max_height - 7), maximizing the
# number of visible rows.
#
# Return status is success unless one of the passed arguments is invalid
# or all of the $var_* arguments are either NULL or missing.
#
f_dialog_menu_constrain()
{
local __var_height="$1" __var_width="$2" __var_rows="$3" __prompt="$4"
local __min_height="$5" __min_width="$6" __min_rows="$7"
local __retval=$SUCCESS
# Return failure unless at least one var_* argument is passed
[ "$__var_height" -o "$__var_width" -o "$__var_rows" ] ||
return $FAILURE
#
# Print debug warnings if any given (non-NULL) arguments are invalid
# NOTE: Don't change the name of $__{var,min,}{height,width,rows}
#
local __height_menu_constrain __width_menu_constrain
local __rows_menu_constrain
local __arg __cp __fname=f_dialog_menu_constrain
for __arg in height width rows; do
debug= f_getvar __var_$__arg __cp
[ "$__cp" ] || continue
if ! debug= f_getvar "$__cp" __${__arg}_menu_constrain; then
f_dprintf "%s: var_%s variable \`%s' not set" \
$__fname $__arg "$__cp"
__retval=$FAILURE
elif ! eval f_isinteger \$__${__arg}_menu_constrain; then
f_dprintf "%s: var_%s variable value not a number" \
$__fname $__arg
__retval=$FAILURE
fi
done
for __arg in height width rows; do
debug= f_getvar __min_$__arg __cp
[ "$__cp" ] || continue
f_isinteger "$__cp" && continue
f_dprintf "%s: min_%s value not a number" $__fname $__arg
__retval=$FAILURE
setvar __min_$__arg ""
done
# Obtain maximum height and width values
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __max_height_menu_constrain __max_width_menu_constrain
f_dialog_max_size \
__max_height_menu_constrain __max_width_menu_constrain
# Adjust height if desired
if [ "$__var_height" ]; then
if [ $__height_menu_constrain -lt ${__min_height:-0} ]; then
setvar "$__var_height" $__min_height
elif [ $__height_menu_constrain -gt \
$__max_height_menu_constrain ]
then
setvar "$__var_height" $__max_height_menu_constrain
fi
fi
# Adjust width if desired
if [ "$__var_width" ]; then
if [ "$USE_XDIALOG" ]; then
: ${__min_width:=${XDIALOG_MIN_WIDTH:-35}}
else
: ${__min_width:=${DIALOG_MIN_WIDTH:-24}}
fi
if [ $__width_menu_constrain -lt $__min_width ]; then
setvar "$__var_width" $__min_width
elif [ $__width_menu_constrain -gt \
$__max_width_menu_constrain ]
then
setvar "$__var_width" $__max_width_menu_constrain
fi
fi
# Adjust rows if desired
if [ "$__var_rows" ]; then
if [ "$USE_XDIALOG" ]; then
: ${__min_rows:=1}
else
: ${__min_rows:=0}
fi
local __max_rows_menu_constrain=$((
$__max_height_menu_constrain - 7
))
# If the prompt is empty (no prompt), bump the max-rows by 1
# Default assumption is (if no argument) that there's no prompt
[ ${#__prompt} -gt 0 ] || __max_rows_menu_constrain=$((
$__max_rows_menu_constrain + 1
))
if [ $__rows_menu_constrain -lt $__min_rows ]; then
setvar "$__var_rows" $__min_rows
elif [ $__rows_menu_constrain -gt $__max_rows_menu_constrain ]
then
setvar "$__var_rows" $__max_rows_menu_constrain
fi
fi
if [ "$debug" ]; then
# Print final constrained values to debugging
[ "$__var_height" ] && f_quietly f_getvar "$__var_height"
[ "$__var_width" ] && f_quietly f_getvar "$__var_width"
[ "$__var_rows" ] && f_quietly f_getvar "$__var_rows"
fi
return $__retval # success if no debug warnings were printed
}
# f_dialog_infobox_size [-n] $var_height $var_width \
# $title $backtitle $prompt [$hline]
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--infobox' boxes sensibly.
#
# This function helps solve this issue by taking two sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height and width. The second set of arguments are the
# title, backtitle, prompt, and [optionally] hline. The optimal height and
# width for the described widget (not exceeding the actual terminal height or
# width) is stored in $var_height and $var_width (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height and
# $var_width) are not constrained to minimum/maximum values.
#
# Newline character sequences (``\n'') in $prompt are expanded as-is done by
# dialog(1).
#
f_dialog_infobox_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2"
local __title="$3" __btitle="$4" __prompt="$5" __hline="$6"
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
# Default height/width of zero for auto-sizing
local __height=0 __width=0 __n
# Adjust height if desired
if [ "$__var_height" ]; then
#
# Set height based on number of rows in prompt
#
__n=$( echo -n "$__prompt" | f_number_of_lines )
__n=$(( $__n + 2 ))
[ $__n -gt $__height ] && __height=$__n
#
# For Xdialog(1) bump height if backtitle is enabled (displayed
# in the X11 window with a separator line between the backtitle
# and msg text).
#
if [ "$USE_XDIALOG" -a "$__btitle" ]; then
__n=$( echo "$__btitle" | f_number_of_lines )
__height=$(( $__height + $__n + 2 ))
fi
setvar "$__var_height" $__height
fi
# Adjust width if desired
if [ "$__var_width" ]; then
#
# Bump width for long titles
#
__n=$(( ${#__title} + 4 ))
[ $__n -gt $__width ] && __width=$__n
#
# If using Xdialog(1), bump width for long backtitles (which
# appear within the window).
#
if [ "$USE_XDIALOG" ]; then
__n=$(( ${#__btitle} + 4 ))
[ $__n -gt $__width ] && __width=$__n
fi
#
# Bump width for long prompts
#
__n=$( echo "$__prompt" | f_longest_line_length )
__n=$(( $__n + 4 )) # add width for border
[ $__n -gt $__width ] && __width=$__n
#
# Bump width for long hlines. Xdialog(1) supports `--hline' but
# it's currently not used (so don't do anything here if using
# Xdialog(1)).
#
if [ ! "$USE_XDIALOG" ]; then
__n=$(( ${#__hline} + 10 ))
[ $__n -gt $__width ] && __width=$__n
fi
# Bump width by 16.6% if using Xdialog(1)
[ "$USE_XDIALOG" ] && __width=$(( $__width + $__width / 6 ))
setvar "$__var_width" $__width
fi
# Constrain values to sensible minimums/maximums unless `-n' was passed
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] ||
f_dialog_size_constrain "$__var_height" "$__var_width"
}
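# Example usage (illustrative; this mirrors how f_dialog_info below sizes its
# widget):
#
#	local height width
#	f_dialog_infobox_size height width \
#		"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "Working..."
#	$DIALOG --title "$DIALOG_TITLE" --backtitle "$DIALOG_BACKTITLE" \
#		--infobox "Working..." $height $width
#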
# f_dialog_buttonbox_size [-n] $var_height $var_width \
# $title $backtitle $prompt [$hline]
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--msgbox' and `--yesno' boxes sensibly.
#
# This function helps solve this issue by taking two sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height and width. The second set of arguments are the
# title, backtitle, prompt, and [optionally] hline. The optimal height and
# width for the described widget (not exceeding the actual terminal height or
# width) is stored in $var_height and $var_width (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height and
# $var_width) are not constrained to minimum/maximum values.
#
# Newline character sequences (``\n'') in $prompt are expanded as-is done by
# dialog(1).
#
f_dialog_buttonbox_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2"
local __title="$3" __btitle="$4" __prompt="$5" __hline="$6"
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
# Calculate height/width of infobox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_bbox_size __width_bbox_size
f_dialog_infobox_size -n \
"${__var_height:+__height_bbox_size}" \
"${__var_width:+__width_bbox_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline"
# Adjust height if desired
if [ "$__var_height" ]; then
# Add height to accommodate the buttons
__height_bbox_size=$(( $__height_bbox_size + 2 ))
# Adjust for clipping with Xdialog(1) on Linux/GTK2
[ "$USE_XDIALOG" ] &&
__height_bbox_size=$(( $__height_bbox_size + 3 ))
setvar "$__var_height" $__height_bbox_size
fi
# No adjustments to width, just pass-thru the infobox width
if [ "$__var_width" ]; then
setvar "$__var_width" $__width_bbox_size
fi
# Constrain values to sensible minimums/maximums unless `-n' was passed
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] ||
f_dialog_size_constrain "$__var_height" "$__var_width"
}
# f_dialog_inputbox_size [-n] $var_height $var_width \
# $title $backtitle $prompt $init [$hline]
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--inputbox' boxes sensibly.
#
# This function helps solve this issue by taking two sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height and width. The second set of arguments are the
# title, backtitle, prompt, and [optionally] hline. The optimal height and
# width for the described widget (not exceeding the actual terminal height or
# width) is stored in $var_height and $var_width (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height and
# $var_width) are not constrained to minimum/maximum values.
#
# Newline character sequences (``\n'') in $prompt are expanded as-is done by
# dialog(1).
#
f_dialog_inputbox_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2"
local __title="$3" __btitle="$4" __prompt="$5" __init="$6" __hline="$7"
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
# Calculate height/width of buttonbox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_ibox_size __width_ibox_size
f_dialog_buttonbox_size -n \
"${__var_height:+__height_ibox_size}" \
"${__var_width:+__width_ibox_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline"
# Adjust height if desired
if [ "$__var_height" ]; then
# Add height for input box (not needed for Xdialog(1))
[ ! "$USE_XDIALOG" ] &&
__height_ibox_size=$(( $__height_ibox_size + 3 ))
setvar "$__var_height" $__height_ibox_size
fi
# Adjust width if desired
if [ "$__var_width" ]; then
# Bump width for initial text (something neither dialog(1) nor
# Xdialog(1) do, but worth it!; add 16.6% if using Xdialog(1))
local __n=$(( ${#__init} + 7 ))
[ "$USE_XDIALOG" ] && __n=$(( $__n + $__n / 6 ))
[ $__n -gt $__width_ibox_size ] && __width_ibox_size=$__n
setvar "$__var_width" $__width_ibox_size
fi
# Constrain values to sensible minimums/maximums unless `-n' was passed
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] ||
f_dialog_size_constrain "$__var_height" "$__var_width"
}
# f_xdialog_2inputsbox_size [-n] $var_height $var_width \
# $title $backtitle $prompt \
# $label1 $init1 $label2 $init2
#
# Xdialog(1) does not perform auto-sizing of the width and height of
# `--2inputsbox' boxes sensibly.
#
# This function helps solve this issue by taking two sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height and width. The second set of arguments are the
# title, backtitle, prompt, label for the first field, initial text for said
# field, label for the second field, and initial text for said field. The
# optimal height and width for the described widget (not exceeding the actual
# terminal height or width) is stored in $var_height and $var_width
# (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height and
# $var_width) are not constrained to minimum/maximum values.
#
# Newline character sequences (``\n'') in $prompt are expanded as-is done by
# Xdialog(1).
#
f_xdialog_2inputsbox_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2"
local __title="$3" __btitle="$4" __prompt="$5"
local __label1="$6" __init1="$7" __label2="$8" __init2="$9"
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
# Calculate height/width of inputbox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_2ibox_size __width_2ibox_size
f_dialog_inputbox_size -n \
"${__var_height:+__height_2ibox_size}" \
"${__var_width:+__width_2ibox_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline" "$__init1"
# Adjust height if desired
if [ "$__var_height" ]; then
# Add height for 1st label, 2nd label, and 2nd input box
__height_2ibox_size=$(( $__height_2ibox_size + 2 + 2 + 2 ))
setvar "$__var_height" $__height_2ibox_size
fi
# Adjust width if desired
if [ "$__var_width" ]; then
local __n
# Bump width for first label text (+16.6% since Xdialog(1))
__n=$(( ${#__label1} + 7 ))
__n=$(( $__n + $__n / 6 ))
[ $__n -gt $__width_2ibox_size ] && __width_2ibox_size=$__n
# Bump width for second label text (+16.6% since Xdialog(1))
__n=$(( ${#__label2} + 7 ))
__n=$(( $__n + $__n / 6 ))
[ $__n -gt $__width_2ibox_size ] && __width_2ibox_size=$__n
# Bump width for 2nd initial text (something neither dialog(1)
# nor Xdialog(1) do, but worth it!; +16.6% since Xdialog(1))
__n=$(( ${#__init2} + 7 ))
__n=$(( $__n + $__n / 6 ))
[ $__n -gt $__width_2ibox_size ] && __width_2ibox_size=$__n
setvar "$__var_width" $__width_2ibox_size
fi
# Constrain values to sensible minimums/maximums unless `-n' was passed
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] ||
f_dialog_size_constrain "$__var_height" "$__var_width"
}
# f_dialog_menu_size [-n] $var_height $var_width $var_rows \
# $title $backtitle $prompt $hline \
# $tag1 $item1 $tag2 $item2 ...
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--menu' boxes sensibly.
#
# This function helps solve this issue by taking three sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height, width, and rows. The second set of arguments
# are the title, backtitle, prompt, and hline. The [optional] third set of
# arguments are the menu list itself (comprised of tag/item couplets). The
# optimal height, width, and rows for the described widget (not exceeding the
# actual terminal height or width) is stored in $var_height, $var_width, and
# $var_rows (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height, $var_width,
# and $var_rows) are not constrained to minimum/maximum values.
#
f_dialog_menu_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2" __var_rows="$3"
local __title="$4" __btitle="$5" __prompt="$6" __hline="$7"
shift 7 # var_height/var_width/var_rows/title/btitle/prompt/hline
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" -o "$__var_rows" ] ||
return $FAILURE
# Calculate height/width of infobox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_menu_size __width_menu_size
f_dialog_infobox_size -n \
"${__var_height:+__height_menu_size}" \
"${__var_width:+__width_menu_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline"
#
# Always process the menu-item arguments to get the longest tag-length,
# longest item-length (both used to bump the width), and the number of
# rows (used to bump the height).
#
local __longest_tag=0 __longest_item=0 __rows=0
while [ $# -ge 2 ]; do
local __tag="$1" __item="$2"
shift 2 # tag/item
[ ${#__tag} -gt $__longest_tag ] && __longest_tag=${#__tag}
[ ${#__item} -gt $__longest_item ] && __longest_item=${#__item}
__rows=$(( $__rows + 1 ))
done
# Adjust rows early (for up-coming height calculation)
if [ "$__var_height" -o "$__var_rows" ]; then
# Add a row for visual aid if using Xdialog(1)
[ "$USE_XDIALOG" ] && __rows=$(( $__rows + 1 ))
fi
# Adjust height if desired
if [ "$__var_height" ]; then
# Add rows to height
if [ "$USE_XDIALOG" ]; then
__height_menu_size=$((
$__height_menu_size + $__rows + 7 ))
else
__height_menu_size=$((
$__height_menu_size + $__rows + 4 ))
fi
setvar "$__var_height" $__height_menu_size
fi
# Adjust width if desired
if [ "$__var_width" ]; then
# The sum total between the longest tag-length and the
# longest item-length should be used to bump menu width
local __n=$(( $__longest_tag + $__longest_item + 10 ))
[ "$USE_XDIALOG" ] && __n=$(( $__n + $__n / 6 )) # plus 16.6%
[ $__n -gt $__width_menu_size ] && __width_menu_size=$__n
setvar "$__var_width" $__width_menu_size
fi
# Store adjusted rows if desired
[ "$__var_rows" ] && setvar "$__var_rows" $__rows
# Constrain height, width, and rows to sensible minimum/maximum values
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] || f_dialog_menu_constrain \
"$__var_height" "$__var_width" "$__var_rows" "$__prompt"
}
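# Example (an illustrative sketch): size and display a simple two-entry menu:
#
#	local height width rows
#	f_dialog_menu_size height width rows "$DIALOG_TITLE" \
#		"$DIALOG_BACKTITLE" "Choose one:" "" \
#		"1" "First choice" "2" "Second choice"
#	$DIALOG --menu "Choose one:" $height $width $rows \
#		"1" "First choice" "2" "Second choice" \
#		2>&1 >&$DIALOG_TERMINAL_PASSTHRU_FD
#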
# f_dialog_menu_with_help_size [-n] $var_height $var_width $var_rows \
# $title $backtitle $prompt $hline \
# $tag1 $item1 $help1 $tag2 $item2 $help2 ...
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--menu' boxes sensibly.
#
# This function helps solve this issue by taking three sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height, width, and rows. The second set of arguments
# are the title, backtitle, prompt, and hline. The [optional] third set of
# arguments are the menu list itself (comprised of tag/item/help triplets). The
# optimal height, width, and rows for the described widget (not exceeding the
# actual terminal height or width) is stored in $var_height, $var_width, and
# $var_rows (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height, $var_width,
# and $var_rows) are not constrained to minimum/maximum values.
#
f_dialog_menu_with_help_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2" __var_rows="$3"
local __title="$4" __btitle="$5" __prompt="$6" __hline="$7"
shift 7 # var_height/var_width/var_rows/title/btitle/prompt/hline
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" -o "$__var_rows" ] ||
return $FAILURE
# Calculate height/width of infobox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_menu_with_help_size __width_menu_with_help_size
f_dialog_infobox_size -n \
"${__var_height:+__height_menu_with_help_size}" \
"${__var_width:+__width_menu_with_help_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline"
#
# Always process the menu-item arguments to get the longest tag-length,
# longest item-length, longest help-length (help-length only considered
# if using Xdialog(1), as it places the help string in the widget) --
# all used to bump the width -- and the number of rows (used to bump
# the height).
#
local __longest_tag=0 __longest_item=0 __longest_help=0 __rows=0
while [ $# -ge 3 ]; do
local __tag="$1" __item="$2" __help="$3"
shift 3 # tag/item/help
[ ${#__tag} -gt $__longest_tag ] && __longest_tag=${#__tag}
[ ${#__item} -gt $__longest_item ] && __longest_item=${#__item}
[ ${#__help} -gt $__longest_help ] && __longest_help=${#__help}
__rows=$(( $__rows + 1 ))
done
# Adjust rows early (for up-coming height calculation)
if [ "$__var_height" -o "$__var_rows" ]; then
# Add a row for visual aid if using Xdialog(1)
[ "$USE_XDIALOG" ] && __rows=$(( $__rows + 1 ))
fi
# Adjust height if desired
if [ "$__var_height" ]; then
# Add rows to height
if [ "$USE_XDIALOG" ]; then
__height_menu_with_help_size=$((
$__height_menu_with_help_size + $__rows + 8 ))
else
__height_menu_with_help_size=$((
$__height_menu_with_help_size + $__rows + 4 ))
fi
setvar "$__var_height" $__height_menu_with_help_size
fi
# Adjust width if desired
if [ "$__var_width" ]; then
# The sum total between the longest tag-length and the
# longest item-length should be used to bump menu width
local __n=$(( $__longest_tag + $__longest_item + 10 ))
[ "$USE_XDIALOG" ] && __n=$(( $__n + $__n / 6 )) # plus 16.6%
[ $__n -gt $__width_menu_with_help_size ] &&
__width_menu_with_help_size=$__n
# Update width for help text if using Xdialog(1)
if [ "$USE_XDIALOG" ]; then
__n=$(( $__longest_help + 10 ))
__n=$(( $__n + $__n / 6 )) # plus 16.6%
[ $__n -gt $__width_menu_with_help_size ] &&
__width_menu_with_help_size=$__n
fi
setvar "$__var_width" $__width_menu_with_help_size
fi
# Store adjusted rows if desired
[ "$__var_rows" ] && setvar "$__var_rows" $__rows
# Constrain height, width, and rows to sensible minimum/maximum values
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] || f_dialog_menu_constrain \
"$__var_height" "$__var_width" "$__var_rows" "$__prompt"
}
# f_dialog_radiolist_size [-n] $var_height $var_width $var_rows \
# $title $backtitle $prompt $hline \
# $tag1 $item1 $status1 $tag2 $item2 $status2 ...
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--radiolist' boxes sensibly.
#
# This function helps solve this issue by taking three sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height, width, and rows. The second set of arguments
# are the title, backtitle, prompt, and hline. The [optional] third set of
# arguments are the radio list itself (comprised of tag/item/status triplets).
# The optimal height, width, and rows for the described widget (not exceeding
# the actual terminal height or width) is stored in $var_height, $var_width,
# and $var_rows (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height, $var_width,
# and $var_rows) are not constrained to minimum/maximum values.
#
f_dialog_radiolist_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2" __var_rows="$3"
local __title="$4" __btitle="$5" __prompt="$6" __hline="$7"
shift 7 # var_height/var_width/var_rows/title/btitle/prompt/hline
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" -o "$__var_rows" ] ||
return $FAILURE
# Calculate height/width of infobox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_rlist_size __width_rlist_size
f_dialog_infobox_size -n \
"${__var_height:+__height_rlist_size}" \
"${__var_width:+__width_rlist_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline"
#
# Always process the menu-item arguments to get the longest tag-length,
# longest item-length (both used to bump the width), and the number of
# rows (used to bump the height).
#
local __longest_tag=0 __longest_item=0 __rows_rlist_size=0
while [ $# -ge 3 ]; do
local __tag="$1" __item="$2"
shift 3 # tag/item/status
[ ${#__tag} -gt $__longest_tag ] && __longest_tag=${#__tag}
[ ${#__item} -gt $__longest_item ] && __longest_item=${#__item}
__rows_rlist_size=$(( $__rows_rlist_size + 1 ))
done
# Adjust rows early (for up-coming height calculation)
if [ "$__var_height" -o "$__var_rows" ]; then
# Add a row for visual aid if using Xdialog(1)
[ "$USE_XDIALOG" ] &&
__rows_rlist_size=$(( $__rows_rlist_size + 1 ))
fi
# Adjust height if desired
if [ "$__var_height" ]; then
# Add rows to height
if [ "$USE_XDIALOG" ]; then
__height_rlist_size=$((
$__height_rlist_size + $__rows_rlist_size + 7
))
else
__height_rlist_size=$((
$__height_rlist_size + $__rows_rlist_size + 4
))
fi
setvar "$__var_height" $__height_rlist_size
fi
# Adjust width if desired
if [ "$__var_width" ]; then
# Sum total between longest tag-length, longest item-length,
# and radio-button width should be used to bump menu width
local __n=$(( $__longest_tag + $__longest_item + 13 ))
[ "$USE_XDIALOG" ] && __n=$(( $__n + $__n / 6 )) # plus 16.6%
[ $__n -gt $__width_rlist_size ] && __width_rlist_size=$__n
setvar "$__var_width" $__width_rlist_size
fi
# Store adjusted rows if desired
[ "$__var_rows" ] && setvar "$__var_rows" $__rows_rlist_size
# Constrain height, width, and rows to sensible minimum/maximum values
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] || f_dialog_menu_constrain \
"$__var_height" "$__var_width" "$__var_rows" "$__prompt"
}
# f_dialog_checklist_size [-n] $var_height $var_width $var_rows \
# $title $backtitle $prompt $hline \
# $tag1 $item1 $status1 $tag2 $item2 $status2 ...
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--checklist' boxes sensibly.
#
# This function helps solve this issue by taking three sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height, width, and rows. The second set of arguments
# are the title, backtitle, prompt, and hline. The [optional] third set of
# arguments are the check list itself (comprised of tag/item/status triplets).
# The optimal height, width, and rows for the described widget (not exceeding
# the actual terminal height or width) is stored in $var_height, $var_width,
# and $var_rows (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height, $var_width,
# and $var_rows) are not constrained to minimum/maximum values.
#
f_dialog_checklist_size()
{
f_dialog_radiolist_size "$@"
}
# f_dialog_radiolist_with_help_size [-n] $var_height $var_width $var_rows \
# $title $backtitle $prompt $hline \
# $tag1 $item1 $status1 $help1 \
# $tag2 $item2 $status2 $help2 ...
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--radiolist' boxes sensibly.
#
# This function helps solve this issue by taking three sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height, width, and rows. The second set of arguments
# are the title, backtitle, prompt, and hline. The [optional] third set of
# arguments are the radio list itself (comprised of tag/item/status/help
# quadruplets). The optimal height, width, and rows for the described widget
# (not exceeding the actual terminal height or width) is stored in $var_height,
# $var_width, and $var_rows (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height, $var_width,
# and $var_rows) are not constrained to minimum/maximum values.
#
f_dialog_radiolist_with_help_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2" __var_rows="$3"
local __title="$4" __btitle="$5" __prompt="$6" __hline="$7"
shift 7 # var_height/var_width/var_rows/title/btitle/prompt/hline
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" -o "$__var_rows" ] ||
return $FAILURE
# Calculate height/width of infobox (adjusted/constrained below)
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_rlist_with_help_size __width_rlist_with_help_size
f_dialog_infobox_size -n \
"${__var_height:+__height_rlist_with_help_size}" \
"${__var_width:+__width_rlist_with_help_size}" \
"$__title" "$__btitle" "$__prompt" "$__hline"
#
# Always process the menu-item arguments to get the longest tag-length,
# longest item-length, longest help-length (help-length only considered
# if using Xdialog(1), as it places the help string in the widget) --
# all used to bump the width -- and the number of rows (used to bump
# the height).
#
local __longest_tag=0 __longest_item=0 __longest_help=0
local __rows_rlist_with_help_size=0
while [ $# -ge 4 ]; do
local __tag="$1" __item="$2" __status="$3" __help="$4"
shift 4 # tag/item/status/help
[ ${#__tag} -gt $__longest_tag ] && __longest_tag=${#__tag}
[ ${#__item} -gt $__longest_item ] && __longest_item=${#__item}
[ ${#__help} -gt $__longest_help ] && __longest_help=${#__help}
__rows_rlist_with_help_size=$((
$__rows_rlist_with_help_size + 1
))
done
# Adjust rows early (for up-coming height calculation)
if [ "$__var_height" -o "$__var_rows" ]; then
# Add a row for visual aid if using Xdialog(1)
[ "$USE_XDIALOG" ] &&
__rows_rlist_with_help_size=$((
$__rows_rlist_with_help_size + 1
))
fi
# Adjust height if desired
if [ "$__var_height" ]; then
# Add rows to height
if [ "$USE_XDIALOG" ]; then
__height_rlist_with_help_size=$((
$__height_rlist_with_help_size +
$__rows_rlist_with_help_size + 7
))
else
__height_rlist_with_help_size=$((
$__height_rlist_with_help_size +
$__rows_rlist_with_help_size + 4
))
fi
setvar "$__var_height" $__height
fi
# Adjust width if desired
if [ "$__var_width" ]; then
# Sum total between longest tag-length, longest item-length,
# and radio-button width should be used to bump menu width
local __n=$(( $__longest_tag + $__longest_item + 13 ))
[ "$USE_XDIALOG" ] && __n=$(( $__n + $__n / 6 )) # plus 16.6%
[ $__n -gt $__width_rlist_with_help_size ] &&
__width_rlist_with_help_size=$__n
# Update width for help text if using Xdialog(1)
if [ "$USE_XDIALOG" ]; then
__n=$(( $__longest_help + 10 ))
__n=$(( $__n + $__n / 6 )) # plus 16.6%
[ $__n -gt $__width_rlist_with_help_size ] &&
__width_rlist_with_help_size=$__n
fi
setvar "$__var_width" $__width_rlist_with_help_size
fi
# Store adjusted rows if desired
[ "$__var_rows" ] && setvar "$__var_rows" $__rows_rlist_with_help_size
# Constrain height, width, and rows to sensible minimum/maximum values
# Return success if no-constrain, else return status from constrain
[ ! "$__constrain" ] || f_dialog_menu_constrain \
"$__var_height" "$__var_width" "$__var_rows" "$__prompt"
}
# f_dialog_checklist_with_help_size [-n] $var_height $var_width $var_rows \
# $title $backtitle $prompt $hline \
# $tag1 $item1 $status1 $help1 \
# $tag2 $item2 $status2 $help2 ...
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--checklist' boxes sensibly.
#
# This function helps solve this issue by taking three sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height, width, and rows. The second set of arguments
# are the title, backtitle, prompt, and hline. The [optional] third set of
# arguments are the check list itself (comprised of tag/item/status/help
# quadruplets). The optimal height, width, and rows for the described widget
# (not exceeding the actual terminal height or width) is stored in $var_height,
# $var_width, and $var_rows (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height, $var_width,
# and $var_rows) are not constrained to minimum/maximum values.
#
f_dialog_checklist_with_help_size()
{
f_dialog_radiolist_with_help_size "$@"
}
# f_dialog_calendar_size [-n] $var_height $var_width \
# $title $backtitle $prompt [$hline]
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--calendar' boxes sensibly.
#
# This function helps solve this issue by taking two sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height and width. The second set of arguments are the
# title, backtitle, prompt, and [optionally] hline. The optimal height and
# width for the described widget (not exceeding the actual terminal height or
# width) is stored in $var_height and $var_width (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height and
# $var_width) are not constrained to minimum/maximum values.
#
# Newline character sequences (``\n'') in $prompt are expanded as-is done by
# dialog(1).
#
f_dialog_calendar_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2"
local __title="$3" __btitle="$4" __prompt="$5" __hline="$6"
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
#
# Obtain/Adjust minimum and maximum thresholds
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
#
local __max_height_cal_size __max_width_cal_size
f_dialog_max_size __max_height_cal_size __max_width_cal_size
__max_width_cal_size=$(( $__max_width_cal_size - 2 ))
# the calendar box will refuse to display if too wide
local __min_width
if [ "$USE_XDIALOG" ]; then
__min_width=55
else
__min_width=40
__max_height_cal_size=$((
$__max_height_cal_size - $DIALOG_CALENDAR_HEIGHT ))
# When using dialog(1), we can't predict whether the user has
# disabled shadows in their `$HOME/.dialogrc' file, so we'll
# subtract one for the potential shadow around the widget
__max_height_cal_size=$(( $__max_height_cal_size - 1 ))
fi
# Calculate height if desired
if [ "$__var_height" ]; then
local __height
__height=$( echo "$__prompt" | f_number_of_lines )
if [ "$USE_XDIALOG" ]; then
# Add height to accommodate for embedded calendar widget
__height=$(( $__height + $DIALOG_CALENDAR_HEIGHT - 1 ))
# Also, bump height if backtitle is enabled
if [ "$__btitle" ]; then
local __n
__n=$( echo "$__btitle" | f_number_of_lines )
__height=$(( $__height + $__n + 2 ))
fi
else
[ "$__prompt" ] && __height=$(( $__height + 1 ))
fi
# Enforce maximum height, unless `-n' was passed
[ "$__constrain" -a $__height -gt $__max_height_cal_size ] &&
__height=$__max_height_cal_size
setvar "$__var_height" $__height
fi
# Calculate width if desired
if [ "$__var_width" ]; then
# NOTE: Function name appended to prevent __var_{height,width}
# values from becoming local (and thus preventing setvar
# from working).
local __width_cal_size
f_dialog_infobox_size -n "" __width_cal_size \
"$__title" "$__btitle" "$__prompt" "$__hline"
# Enforce minimum/maximum width, unless `-n' was passed
if [ "$__constrain" ]; then
if [ $__width_cal_size -lt $__min_width ]; then
__width_cal_size=$__min_width
elif [ $__width_cal_size -gt $__max_width_cal_size ]
then
__width_cal_size=$__max_width_cal_size
fi
fi
setvar "$__var_width" $__width_cal_size
fi
return $SUCCESS
}
# f_dialog_timebox_size [-n] $var_height $var_width \
# $title $backtitle $prompt [$hline]
#
# Not all versions of dialog(1) perform auto-sizing of the width and height of
# `--timebox' boxes sensibly.
#
# This function helps solve this issue by taking two sets of sequential
# arguments. The first set of arguments are the variable names to use when
# storing the calculated height and width. The second set of arguments are the
# title, backtitle, prompt, and [optionally] hline. The optimal height and
# width for the described widget (not exceeding the actual terminal height or
# width) is stored in $var_height and $var_width (respectively).
#
# If the first argument is `-n', the calculated sizes ($var_height and
# $var_width) are not constrained to minimum/maximum values.
#
# Newline character sequences (``\n'') in $prompt are expanded as-is done by
# dialog(1).
#
f_dialog_timebox_size()
{
local __constrain=1
[ "$1" = "-n" ] && __constrain= && shift 1 # -n
local __var_height="$1" __var_width="$2"
local __title="$3" __btitle="$4" __prompt="$5" __hline="$6"
# Return unless at least one size aspect has been requested
[ "$__var_height" -o "$__var_width" ] || return $FAILURE
#
# Obtain/Adjust minimum and maximum thresholds
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
#
local __max_height_tbox_size __max_width_tbox_size
f_dialog_max_size __max_height_tbox_size __max_width_tbox_size
__max_width_tbox_size=$(( $__max_width_tbox_size - 2 ))
# the timebox widget refuses to display if too wide
local __min_width
if [ "$USE_XDIALOG" ]; then
__min_width=40
else
__min_width=20
__max_height_tbox_size=$(( \
$__max_height_tbox_size - $DIALOG_TIMEBOX_HEIGHT ))
# When using dialog(1), we can't predict whether the user has
# disabled shadows in their `$HOME/.dialogrc' file, so we'll
# subtract one for the potential shadow around the widget
__max_height_tbox_size=$(( $__max_height_tbox_size - 1 ))
fi
# Calculate height if desired
if [ "$__var_height" -a "$USE_XDIALOG" ]; then
# When using Xdialog(1), the height seems to have
# no effect. All values provide the same results.
setvar "$__var_height" 0 # autosize
elif [ "$__var_height" ]; then
local __height
__height=$( echo "$__prompt" | f_number_of_lines )
__height=$(( $__height ${__prompt:++1} + 1 ))
# Enforce maximum height, unless `-n' was passed
[ "$__constrain" -a $__height -gt $__max_height_tbox_size ] &&
__height=$__max_height_tbox_size
setvar "$__var_height" $__height
fi
# Calculate width if desired
if [ "$__var_width" ]; then
# NOTE: Function name appended to prevent __var_{height,width}
# values from becoming local (and thus preventing setvar
# from working).
local __width_tbox_size
f_dialog_infobox_size -n "" __width_tbox_size \
"$__title" "$__btitle" "$__prompt" "$__hline"
# Enforce the minimum width for displaying the timebox
if [ "$__constrain" ]; then
if [ $__width_tbox_size -lt $__min_width ]; then
__width_tbox_size=$__min_width
elif [ $__width_tbox_size -ge $__max_width_tbox_size ]
then
__width_tbox_size=$__max_width_tbox_size
fi
fi
setvar "$__var_width" $__width_tbox_size
fi
return $SUCCESS
}
############################################################ CLEAR FUNCTIONS
# f_dialog_clear
#
# Clears any/all previous dialog(1) displays.
#
f_dialog_clear()
{
$DIALOG --clear
}
############################################################ INFO FUNCTIONS
# f_dialog_info $info_text ...
#
# Throw up a dialog(1) infobox. The infobox remains until another dialog is
# displayed or `dialog --clear' (or f_dialog_clear) is called.
#
f_dialog_info()
{
local info_text="$*" height width
f_dialog_infobox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$info_text"
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
${USE_XDIALOG:+--ignore-eof} \
${USE_XDIALOG:+--no-buttons} \
--infobox "$info_text" $height $width
}
# f_xdialog_info $info_text ...
#
# Throw up an Xdialog(1) infobox and do not dismiss it until stdin produces
# EOF. This implies that you must execute this either as an rvalue to a pipe,
# lvalue to indirection or in a sub-shell that provides data on stdin.
#
# To open an Xdialog(1) infobox that does not disappear until explicitly
# dismissed, use the following:
#
# f_xdialog_info "$info_text" < /dev/tty &
# pid=$!
# # Perform some lengthy actions
# kill $pid
#
# NB: Check $USE_XDIALOG if you need to support both dialog(1) and Xdialog(1).
#
f_xdialog_info()
{
local info_text="$*" height width
f_dialog_infobox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$info_text"
exec $DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--no-close --no-buttons \
--infobox "$info_text" $height $width \
-1 # timeout of -1 means abort when EOF on stdin
}
############################################################ PAUSE FUNCTIONS
# f_dialog_pause $msg_text $duration [$hline]
#
# Display a message in a widget with a progress bar that runs backward for
# $duration seconds.
#
f_dialog_pause()
{
local pause_text="$1" duration="$2" hline="$3" height width
f_isinteger "$duration" || return $FAILURE
f_dialog_buttonbox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$pause_text" "$hline"
if [ "$USE_XDIALOG" ]; then
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--ok-label "$msg_skip" \
--cancel-label "$msg_cancel" \
${noCancel:+--no-cancel} \
--timeout "$duration" \
--yesno "$pause_text" \
$height $width
else
[ $duration -gt 0 ] && duration=$(( $duration - 1 ))
height=$(( $height + 3 )) # Add height for progress bar
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$hline" \
--ok-label "$msg_skip" \
--cancel-label "$msg_cancel" \
${noCancel:+--no-cancel} \
--pause "$pause_text" \
$height $width "$duration"
fi
}
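# Example usage (illustrative):
#
#	f_dialog_pause "The system will reboot in 10 seconds" 10
#	# returns when the countdown expires or the user dismisses the widget
#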
# f_dialog_pause_no_cancel $msg_text $duration [$hline]
#
# Display a message in a widget with a progress bar that runs backward for
# $duration seconds. No cancel button is provided. Always returns success.
#
f_dialog_pause_no_cancel()
{
noCancel=1 f_dialog_pause "$@"
return $SUCCESS
}
############################################################ MSGBOX FUNCTIONS
# f_dialog_msgbox $msg_text [$hline]
#
# Throw up a dialog(1) msgbox. The msgbox remains until the user presses ENTER
# or ESC, acknowledging the modal dialog.
#
# If the user presses ENTER, the exit status is zero (success), otherwise if
# the user presses ESC the exit status is 255.
#
f_dialog_msgbox()
{
local msg_text="$1" hline="$2" height width
f_dialog_buttonbox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$msg_text" "$hline"
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$hline" \
--ok-label "$msg_ok" \
--msgbox "$msg_text" $height $width
}
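# Example usage (illustrative):
#
#	f_dialog_msgbox "Configuration saved." || f_dprintf "user pressed ESC"
#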
############################################################ TEXTBOX FUNCTIONS
# f_dialog_textbox $file
#
# Display the contents of $file (or an error if $file does not exist, etc.) in
# a dialog(1) textbox (which has a scrollable region for the text). The textbox
# remains until the user presses ENTER or ESC, acknowledging the modal dialog.
#
# If the user presses ENTER, the exit status is zero (success), otherwise if
# the user presses ESC the exit status is 255.
#
f_dialog_textbox()
{
local file="$1"
local contents height width retval
contents=$( cat "$file" 2>&1 )
retval=$?
f_dialog_buttonbox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$contents"
if [ $retval -eq $SUCCESS ]; then
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--exit-label "$msg_ok" \
--no-cancel \
--textbox "$file" $height $width
else
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--ok-label "$msg_ok" \
--msgbox "$contents" $height $width
fi
}
############################################################ YESNO FUNCTIONS
# f_dialog_yesno $msg_text [$hline]
#
# Display a dialog(1) Yes/No prompt to allow the user to make some decision.
# The yesno prompt remains until the user presses ENTER or ESC, acknowledging
# the modal dialog.
#
# If the user chooses YES the exit status is zero, or chooses NO the exit
# status is one, or presses ESC the exit status is 255.
#
f_dialog_yesno()
{
local msg_text="$1" height width
local hline="${2-$hline_arrows_tab_enter}"
f_interactive || return 0 # If non-interactive, return YES all the time
f_dialog_buttonbox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$msg_text" "$hline"
if [ "$USE_XDIALOG" ]; then
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$hline" \
--ok-label "$msg_yes" \
--cancel-label "$msg_no" \
--yesno "$msg_text" $height $width
else
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$hline" \
--yes-label "$msg_yes" \
--no-label "$msg_no" \
--yesno "$msg_text" $height $width
fi
}
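# Example usage (illustrative):
#
#	if f_dialog_yesno "Proceed with the installation?"; then
#		f_dprintf "user answered yes"
#	fi
#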
# f_dialog_noyes $msg_text [$hline]
#
# Display a dialog(1) No/Yes prompt to allow the user to make some decision.
# The noyes prompt remains until the user presses ENTER or ESC, acknowledging
# the modal dialog.
#
# If the user chooses YES the exit status is zero, or chooses NO the exit
# status is one, or presses ESC the exit status is 255.
#
# NOTE: This is just like the f_dialog_yesno function except "No" is default.
#
f_dialog_noyes()
{
local msg_text="$1" height width
local hline="${2-$hline_arrows_tab_enter}"
f_interactive || return 1 # If non-interactive, return NO all the time
f_dialog_buttonbox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$msg_text" "$hline"
if [ "$USE_XDIALOG" ]; then
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$hline" \
--default-no \
--ok-label "$msg_yes" \
--cancel-label "$msg_no" \
--yesno "$msg_text" $height $width
else
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$hline" \
--defaultno \
--yes-label "$msg_yes" \
--no-label "$msg_no" \
--yesno "$msg_text" $height $width
fi
}
############################################################ INPUT FUNCTIONS
# f_dialog_inputstr_store [-s] $text
#
# Store some text from a dialog(1) inputbox to be retrieved later by
# f_dialog_inputstr_fetch(). If the first argument is `-s', the text is
# sanitized before being stored.
#
f_dialog_inputstr_store()
{
local sanitize=
[ "$1" = "-s" ] && sanitize=1 && shift 1 # -s
local text="$1"
# Sanitize the line before storing it if desired
[ "$sanitize" ] && f_dialog_line_sanitize text
setvar DIALOG_INPUTBOX_$$ "$text"
}
# f_dialog_inputstr_fetch [$var_to_set]
#
# Obtain the inputstr entered by the user from the most recently displayed
# dialog(1) inputbox (previously stored with f_dialog_inputstr_store() above).
# If $var_to_set is NULL or missing, output is printed to stdout (which is less
# recommended due to performance degradation; in a loop for example).
#
f_dialog_inputstr_fetch()
{
local __var_to_set="$1" __cp
debug= f_getvar DIALOG_INPUTBOX_$$ "${__var_to_set:-__cp}" # get data
setvar DIALOG_INPUTBOX_$$ "" # scrub memory in case data was sensitive
# Return the line on standard-out if desired
[ "$__var_to_set" ] || echo "$__cp"
return $SUCCESS
}
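# Example (an illustrative sketch) of the store/fetch pattern employed by
# f_dialog_input() below; `$raw_response' and `answer' are hypothetical names:
#
#	f_dialog_inputstr_store -s "$raw_response"
#	f_dialog_inputstr_fetch answer	# answer now holds the sanitized text
#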
# f_dialog_input $var_to_set $prompt [$init [$hline]]
#
# Prompt the user with a dialog(1) inputbox to enter some value. The inputbox
-# remains until the the user presses ENTER or ESC, or otherwise ends the
+# remains until the user presses ENTER or ESC, or otherwise ends the
# editing session (by selecting `Cancel' for example).
#
# If the user presses ENTER, the exit status is zero (success); if the user
# presses ESC, the exit status is 255; if the user chooses Cancel, the exit
# status is 1.
#
# NOTE: The hline should correspond to the type of data you want from the user.
# NOTE: Should not be used to edit multiline values.
#
f_dialog_input()
{
local __var_to_set="$1" __prompt="$2" __init="$3" __hline="$4"
# NOTE: Function name appended to prevent __var_{height,width} values
# from becoming local (and thus preventing setvar from working).
local __height_input __width_input
f_dialog_inputbox_size __height_input __width_input \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" \
"$__prompt" "$__init" "$__hline"
local __opterm="--"
[ "$USE_XDIALOG" ] && __opterm=
local __dialog_input
__dialog_input=$(
$DIALOG \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--hline "$__hline" \
--ok-label "$msg_ok" \
--cancel-label "$msg_cancel" \
--inputbox "$__prompt" \
$__height_input $__width_input \
$__opterm "$__init" \
2>&1 >&$DIALOG_TERMINAL_PASSTHRU_FD
)
local __retval=$?
# Remove warnings and leading/trailing whitespace from user input
f_dialog_line_sanitize __dialog_input
setvar "$__var_to_set" "$__dialog_input"
return $__retval
}
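#
# Example invocation (a sketch; the variable name and prompt text are
# invented for illustration):
#
#	if f_dialog_input newhost "Enter a hostname:" "$( hostname )"; then
#		: # $newhost now holds the sanitized input
#	fi
#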
############################################################ MENU FUNCTIONS
# f_dialog_menutag_store [-s] $text
#
# Store some text from a dialog(1) menu to be retrieved later by
# f_dialog_menutag_fetch(). If the first argument is `-s', the text is
# sanitized before being stored.
#
f_dialog_menutag_store()
{
local sanitize=
[ "$1" = "-s" ] && sanitize=1 && shift 1 # -s
local text="$1"
# Sanitize the menutag before storing it if desired
[ "$sanitize" ] && f_dialog_data_sanitize text
setvar DIALOG_MENU_$$ "$text"
}
# f_dialog_menutag_fetch [$var_to_set]
#
# Obtain the menutag chosen by the user from the most recently displayed
# dialog(1) menu (previously stored with f_dialog_menutag_store() above). If
# $var_to_set is NULL or missing, output is printed to stdout instead (not
# recommended due to performance degradation, for example in a loop).
#
f_dialog_menutag_fetch()
{
local __var_to_set="$1" __cp
debug= f_getvar DIALOG_MENU_$$ "${__var_to_set:-__cp}" # get the data
setvar DIALOG_MENU_$$ "" # scrub memory in case data was sensitive
# Return the data on standard-out if desired
[ "$__var_to_set" ] || echo "$__cp"
return $SUCCESS
}
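#
# Typical usage pattern (a sketch; $prompt, $height, $width, $rows, and the
# tag/item pairs are hypothetical):
#
#	menutag=$( $DIALOG \
#		--title "$DIALOG_TITLE" \
#		--backtitle "$DIALOG_BACKTITLE" \
#		--menu "$prompt" $height $width $rows \
#		"1 First" "First item" "2 Second" "Second item" \
#		2>&1 >&$DIALOG_TERMINAL_PASSTHRU_FD )
#	f_dialog_menutag_store -s "$menutag"
#	...
#	f_dialog_menutag_fetch menutag	# $menutag now holds the chosen tag
#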
# f_dialog_menuitem_store [-s] $text
#
# Store the item from a dialog(1) menu (see f_dialog_menutag2item()) to be
# retrieved later by f_dialog_menuitem_fetch(). If the first argument is `-s',
# the text is sanitized before being stored.
#
f_dialog_menuitem_store()
{
local sanitize=
[ "$1" = "-s" ] && sanitize=1 && shift 1 # -s
local text="$1"
# Sanitize the menuitem before storing it if desired
[ "$sanitize" ] && f_dialog_data_sanitize text
setvar DIALOG_MENUITEM_$$ "$text"
}
# f_dialog_menuitem_fetch [$var_to_set]
#
# Obtain the menuitem chosen by the user from the most recently displayed
# dialog(1) menu (previously stored with f_dialog_menuitem_store() above). If
# $var_to_set is NULL or missing, output is printed to stdout instead (not
# recommended due to performance degradation, for example in a loop).
#
f_dialog_menuitem_fetch()
{
local __var_to_set="$1" __cp
debug= f_getvar DIALOG_MENUITEM_$$ "${__var_to_set:-__cp}" # get data
setvar DIALOG_MENUITEM_$$ "" # scrub memory in case data was sensitive
# Return the data on standard-out if desired
[ "$__var_to_set" ] || echo "$__cp"
return $SUCCESS
}
# f_dialog_default_store [-s] $text
#
# Store some text to be used later as the --default-item argument to dialog(1)
# (or Xdialog(1)) for --menu, --checklist, and --radiolist widgets. Retrieve
# the text later with f_dialog_default_fetch(). If the first argument is `-s',
# the text is sanitized before being stored.
#
f_dialog_default_store()
{
local sanitize=
[ "$1" = "-s" ] && sanitize=1 && shift 1 # -s
local text="$1"
# Sanitize the defaultitem before storing it if desired
[ "$sanitize" ] && f_dialog_data_sanitize text
setvar DEFAULTITEM_$$ "$text"
}
# f_dialog_default_fetch [$var_to_set]
#
# Obtain text to be used with the --default-item argument of dialog(1) (or
# Xdialog(1)) (previously stored with f_dialog_default_store() above). If
# $var_to_set is NULL or missing, output is printed to stdout instead (not
# recommended due to performance degradation, for example in a loop).
#
f_dialog_default_fetch()
{
local __var_to_set="$1" __cp
debug= f_getvar DEFAULTITEM_$$ "${__var_to_set:-__cp}" # get the data
setvar DEFAULTITEM_$$ "" # scrub memory in case data was sensitive
# Return the data on standard-out if desired
[ "$__var_to_set" ] || echo "$__cp"
return $SUCCESS
}
# f_dialog_menutag2item $tag_chosen $tag1 $item1 $tag2 $item2 ...
#
# To use the `--menu' option of dialog(1) you must pass an ordered list of
# tag/item pairs on the command-line. When the user selects a menu option the
# tag for that item is printed to stderr.
#
# This function allows you to dereference the tag chosen by the user back into
# the item associated with said tag.
#
# Pass the tag chosen by the user as the first argument, followed by the
# ordered list of tag/item pairs (HINT: use the same tag/item list as was
# passed to dialog(1) for consistency).
#
# If the tag cannot be found, NULL is returned.
#
f_dialog_menutag2item()
{
local tag="$1" tagn item
shift 1 # tag
while [ $# -gt 0 ]; do
tagn="$1"
item="$2"
shift 2 # tagn/item
if [ "$tag" = "$tagn" ]; then
echo "$item"
return $SUCCESS
fi
done
return $FAILURE
}
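#
# For example (a sketch with hypothetical tag/item pairs; pass the same
# list that was given to dialog(1)):
#
#	item=$( f_dialog_menutag2item "$menutag" \
#		"1 Keymap" "Choose a keyboard layout" \
#		"2 Shell" "Open a shell" )
#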
# f_dialog_menutag2item_with_help $tag_chosen $tag1 $item1 $help1 \
# $tag2 $item2 $help2 ...
#
# To use the `--menu' option of dialog(1) with the `--item-help' option, you
# must pass an ordered list of tag/item/help triplets on the command-line. When
# the user selects a menu option the tag for that item is printed to stderr.
#
# This function allows you to dereference the tag chosen by the user back into
# the item associated with said tag (help is discarded/ignored).
#
# Pass the tag chosen by the user as the first argument, followed by the
# ordered list of tag/item/help triplets (HINT: use the same tag/item/help list
# as was passed to dialog(1) for consistency).
#
# If the tag cannot be found, NULL is returned.
#
f_dialog_menutag2item_with_help()
{
local tag="$1" tagn item
shift 1 # tag
while [ $# -gt 0 ]; do
tagn="$1"
item="$2"
shift 3 # tagn/item/help
if [ "$tag" = "$tagn" ]; then
echo "$item"
return $SUCCESS
fi
done
return $FAILURE
}
# f_dialog_menutag2index $tag_chosen $tag1 $item1 $tag2 $item2 ...
#
# To use the `--menu' option of dialog(1) you must pass an ordered list of
# tag/item pairs on the command-line. When the user selects a menu option the
# tag for that item is printed to stderr.
#
# This function allows you to dereference the tag chosen by the user back into
# the index associated with said tag. The index is the one-based tag/item pair
# array position within the ordered list of tag/item pairs passed to dialog(1).
#
# Pass the tag chosen by the user as the first argument, followed by the
# ordered list of tag/item pairs (HINT: use the same tag/item list as was
# passed to dialog(1) for consistency).
#
# If the tag cannot be found, NULL is returned.
#
f_dialog_menutag2index()
{
local tag="$1" tagn n=1
shift 1 # tag
while [ $# -gt 0 ]; do
tagn="$1"
shift 2 # tagn/item
if [ "$tag" = "$tagn" ]; then
echo $n
return $SUCCESS
fi
n=$(( $n + 1 ))
done
return $FAILURE
}
# f_dialog_menutag2index_with_help $tag_chosen $tag1 $item1 $help1 \
# $tag2 $item2 $help2 ...
#
# To use the `--menu' option of dialog(1) with the `--item-help' option, you
# must pass an ordered list of tag/item/help triplets on the command-line. When
# the user selects a menu option the tag for that item is printed to stderr.
#
# This function allows you to dereference the tag chosen by the user back into
# the index associated with said tag. The index is the one-based tag/item/help
# triplet array position within the ordered list of tag/item/help triplets
# passed to dialog(1).
#
# Pass the tag chosen by the user as the first argument, followed by the
# ordered list of tag/item/help triplets (HINT: use the same tag/item/help list
# as was passed to dialog(1) for consistency).
#
# If the tag cannot be found, NULL is returned.
#
f_dialog_menutag2index_with_help()
{
local tag="$1" tagn n=1
shift 1 # tag
while [ $# -gt 0 ]; do
tagn="$1"
shift 3 # tagn/item/help
if [ "$tag" = "$tagn" ]; then
echo $n
return $SUCCESS
fi
n=$(( $n + 1 ))
done
return $FAILURE
}
# f_dialog_menutag2help $tag_chosen $tag1 $item1 $help1 $tag2 $item2 $help2 ...
#
# To use the `--menu' option of dialog(1) with the `--item-help' option, you
# must pass an ordered list of tag/item/help triplets on the command-line. When
# the user selects a menu option the tag for that item is printed to stderr.
#
# This function allows you to dereference the tag chosen by the user back into
# the help associated with said tag (item is discarded/ignored).
#
# Pass the tag chosen by the user as the first argument, followed by the
# ordered list of tag/item/help triplets (HINT: use the same tag/item/help list
# as was passed to dialog(1) for consistency).
#
# If the tag cannot be found, NULL is returned.
#
f_dialog_menutag2help()
{
local tag="$1" tagn help
shift 1 # tag
while [ $# -gt 0 ]; do
tagn="$1"
help="$3"
shift 3 # tagn/item/help
if [ "$tag" = "$tagn" ]; then
echo "$help"
return $SUCCESS
fi
done
return $FAILURE
}
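#
# For example (a sketch with hypothetical tag/item/help triplets, matching
# a menu that was invoked with --item-help):
#
#	help=$( f_dialog_menutag2help "$menutag" \
#		"1 Keymap" "Choose a keyboard layout" "Select console keymap" \
#		"2 Shell" "Open a shell" "Spawn an interactive shell" )
#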
############################################################ INIT FUNCTIONS
# f_dialog_init
#
# Initialize (or re-initialize) the dialog module after setting/changing any
# of the following environment variables:
#
# USE_XDIALOG Either NULL or Non-NULL. If set to a non-NULL value, this
# indicates that Xdialog(1) should be used instead of dialog(1).
#
# SECURE Either NULL or Non-NULL. If set to a non-NULL value, this
# indicates that (while running as root) sudo(8) authentication is
# required to proceed.
#
# Also reads ~/.dialogrc for the following information:
#
# NO_SHADOW Either NULL or Non-NULL. If use_shadow is OFF (case-
# insensitive) in ~/.dialogrc this is set to "1" (otherwise
# unset).
#
f_dialog_init()
{
local funcname=f_dialog_init
DIALOG_SELF_INITIALIZE=
USE_DIALOG=1
#
# Clone terminal stdout so we can redirect to it from within sub-shells
#
eval exec $DIALOG_TERMINAL_PASSTHRU_FD\>\&1
#
# Add `-S' and `-X' to the list of standard arguments supported by all programs
#
case "$GETOPTS_STDARGS" in
*SX*) : good ;; # already present
*) GETOPTS_STDARGS="${GETOPTS_STDARGS}SX"
esac
#
# Process stored command-line arguments
#
# NB: Using backticks instead of $(...) for portability since Linux
# bash(1) balks at the right parentheses encountered in the case-
# statement (incorrectly interpreting it as the close of $(...)).
#
f_dprintf "f_dialog_init: ARGV=[%s] GETOPTS_STDARGS=[%s]" \
"$ARGV" "$GETOPTS_STDARGS"
SECURE=`set -- $ARGV
OPTIND=1
while getopts \
"$GETOPTS_STDARGS$GETOPTS_EXTRA$GETOPTS_ALLFLAGS" \
flag > /dev/null; do
case "$flag" in
S) echo 1 ;;
esac
done
` # END-BACKTICK
USE_XDIALOG=`set -- $ARGV
OPTIND=1
while getopts \
"$GETOPTS_STDARGS$GETOPTS_EXTRA$GETOPTS_ALLFLAGS" \
flag > /dev/null; do
case "$flag" in
S|X) echo 1 ;;
esac
done
` # END-BACKTICK
f_dprintf "f_dialog_init: SECURE=[%s] USE_XDIALOG=[%s]" \
"$SECURE" "$USE_XDIALOG"
#
# Process `-X' command-line option
#
[ "$USE_XDIALOG" ] && DIALOG=Xdialog USE_DIALOG=
#
# Sanity check, or die gracefully
#
if ! f_have $DIALOG; then
unset USE_XDIALOG
local failed_dialog="$DIALOG"
DIALOG=dialog
f_die 1 "$msg_no_such_file_or_directory" "$pgm" "$failed_dialog"
fi
#
# Read ~/.dialogrc (unless using Xdialog(1)) for properties
#
if [ -f ~/.dialogrc -a ! "$USE_XDIALOG" ]; then
eval "$(
awk -v param=use_shadow -v expect=OFF \
-v set="NO_SHADOW=1" '
!/^[[:space:]]*(#|$)/ && \
tolower($1) ~ "^"param"(=|$)" && \
/[^#]*=/ {
sub(/^[^=]*=[[:space:]]*/, "")
if ( toupper($1) == expect ) print set";"
}' ~/.dialogrc
)"
fi
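# (For example, a ~/.dialogrc containing the line `use_shadow = OFF'
# would result in NO_SHADOW=1 being set here.)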
#
# If we're already running as root but we got there by way of sudo(8)
# and we have X11, we should merge the xauth(1) credentials from our
# original user.
#
if [ "$USE_XDIALOG" ] &&
[ "$( id -u )" = "0" ] &&
[ "$SUDO_USER" -a "$DISPLAY" ]
then
if ! f_have xauth; then
# Die gracefully, as we [likely] can't use Xdialog(1)
unset USE_XDIALOG
DIALOG=dialog
f_die 1 "$msg_no_such_file_or_directory" "$pgm" "xauth"
fi
HOSTNAME=$( hostname )
local displaynum="${DISPLAY#*:}"
eval xauth -if \~$SUDO_USER/.Xauthority extract - \
\"\$HOSTNAME/unix:\$displaynum\" \
\"\$HOSTNAME:\$displaynum\" | sudo sh -c 'xauth -ivf \
~root/.Xauthority merge - > /dev/null 2>&1'
fi
#
# Probe Xdialog(1) for maximum height/width constraints, or die
# gracefully
#
if [ "$USE_XDIALOG" ]; then
local maxsize
if ! f_eval_catch -dk maxsize $funcname "$DIALOG" \
'LANG= LC_ALL= %s --print-maxsize' "$DIALOG"
then
# Xdialog(1) failed, fall back to dialog(1)
unset USE_XDIALOG
# Display the error message produced by Xdialog(1)
local height width
f_dialog_buttonbox_size height width \
"$DIALOG_TITLE" "$DIALOG_BACKTITLE" "$maxsize"
dialog \
--title "$DIALOG_TITLE" \
--backtitle "$DIALOG_BACKTITLE" \
--ok-label "$msg_ok" \
--msgbox "$maxsize" $height $width
exit $FAILURE
fi
XDIALOG_MAXSIZE=$(
set -- ${maxsize##*:}
height=${1%,}
width=$2
echo $height $width
)
fi
#
# If using Xdialog(1), swap DIALOG_TITLE with DIALOG_BACKTITLE.
# This is because many dialog(1) applications use --backtitle for the
# program name (which is better suited as --title with Xdialog(1)).
#
if [ "$USE_XDIALOG" ]; then
local _DIALOG_TITLE="$DIALOG_TITLE"
DIALOG_TITLE="$DIALOG_BACKTITLE"
DIALOG_BACKTITLE="$_DIALOG_TITLE"
fi
f_dprintf "f_dialog_init: dialog(1) API initialized."
}
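#
# A consumer script can suppress the self-initialization performed below
# (see MAIN) and call f_dialog_init explicitly once its command line has
# been processed. A sketch, with the bsdconfig include path assumed:
#
#	DIALOG_SELF_INITIALIZE=NO
#	. "$BSDCFG_SHARE/dialog.subr"
#	...			# parse command line into $ARGV, etc.
#	f_dialog_init		# honors -S/-X flags found in $ARGV
#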
############################################################ MAIN
#
# Self-initialize unless requested otherwise
#
f_dprintf "%s: DIALOG_SELF_INITIALIZE=[%s]" \
dialog.subr "$DIALOG_SELF_INITIALIZE"
case "$DIALOG_SELF_INITIALIZE" in
""|0|[Nn][Oo]|[Oo][Ff][Ff]|[Ff][Aa][Ll][Ss][Ee]) : do nothing ;;
*) f_dialog_init
esac
f_dprintf "%s: Successfully loaded." dialog.subr
fi # ! $_DIALOG_SUBR
Index: head/usr.sbin/pciconf/pciconf.8
===================================================================
--- head/usr.sbin/pciconf/pciconf.8 (revision 300049)
+++ head/usr.sbin/pciconf/pciconf.8 (revision 300050)
@@ -1,370 +1,370 @@
.\" Copyright (c) 1997
.\" Stefan Esser <se@FreeBSD.org>. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\"
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd November 23, 2015
.Dt PCICONF 8
.Os
.Sh NAME
.Nm pciconf
.Nd diagnostic utility for the PCI bus
.Sh SYNOPSIS
.Nm
.Fl l Oo Fl BbceVv Oc Op Ar device
.Nm
.Fl a Ar device
.Nm
.Fl r Oo Fl b | h Oc Ar device addr Ns Op : Ns Ar addr2
.Nm
.Fl w Oo Fl b | h Oc Ar device addr value
.Sh DESCRIPTION
The
.Nm
utility provides a command line interface to functionality provided by the
.Xr pci 4
.Xr ioctl 2
interface.
As such, some of the functions are only available to users with write
access to
.Pa /dev/pci ,
normally only the super-user.
.Pp
With the
.Fl l
option,
.Nm
lists PCI devices in the following format:
.Bd -literal
foo0@pci0:0:4:0: class=0x010000 card=0x00000000 chip=0x000f1000 rev=0x01 \
hdr=0x00
bar0@pci0:0:5:0: class=0x000100 card=0x00000000 chip=0x88c15333 rev=0x00 \
hdr=0x00
none0@pci0:0:6:0: class=0x020000 card=0x00000000 chip=0x802910ec rev=0x00 \
hdr=0x00
.Ed
.Pp
The first column gives the
driver name, unit number, and selector.
If there is no driver attached to the
.Tn PCI
device in question, the driver name will be
.Dq none .
Unit numbers for detached devices start at zero and are incremented for
each detached device that is encountered.
The selector
is in a form that can be used directly with the other forms of the command.
The second column is the class code, with the class byte printed as two
hex digits, followed by the sub-class and the interface bytes.
The third column gives the contents of the subvendorid register, introduced
in revision 2.1 of the
.Tn PCI
standard.
Note that it will be 0 for older cards.
The field consists of the card ID in the upper
half and the card vendor ID in the lower half of the value.
.Pp
The fourth column contains the chip device ID, which identifies the chip
this card is based on.
It consists of two fields, identifying the chip and
its vendor, as above.
The fifth column prints the chip's revision.
The sixth column describes the header type.
Currently assigned header types include 0 for most devices,
1 for
.Tn PCI
to
.Tn PCI
bridges, and 2 for
.Tn PCI
to
.Tn CardBus
bridges.
If the most significant bit
of the header type register is set for
function 0 of a
.Tn PCI
device, it is a
.Em multi-function
device, which contains several (similar or independent) functions on
one chip.
.Pp
If the
.Fl B
option is supplied,
.Nm
will list additional information for
.Tn PCI
to
.Tn PCI
and
.Tn PCI
to
.Tn CardBus
bridges,
specifically the resource ranges decoded by the bridge for use by devices
behind the bridge.
Each bridge lists a range of bus numbers handled by the bridge and its
downstream devices.
Memory and I/O port decoding windows are enumerated via a line in the
following format:
.Bd -literal
window[1c] = type I/O Port, range 16, addr 0x5000-0x8fff, enabled
.Ed
.Pp
The first value after the
.Dq Li window
prefix in the square brackets is the offset of the decoding window in
config space in hexadecimal.
The type of a window is one of
.Dq Memory ,
.Dq Prefetchable Memory ,
or
.Dq I/O Port .
The range indicates the binary log of the maximum address the window decodes.
The address field indicates the start and end addresses of the decoded range.
Finally, the last flag indicates if the window is enabled or disabled.
.Pp
If the
.Fl b
option is supplied,
.Nm
will list any base address registers
.Pq BARs
that are assigned resources for each device.
Each BAR will be enumerated via a line in the following format:
.Bd -literal
bar [10] = type Memory, range 32, base 0xda060000, size 131072, enabled
.Ed
.Pp
The first value after the
.Dq Li bar
prefix in the square brackets is the offset of the BAR in config space in
hexadecimal.
The type of a BAR is one of
.Dq Memory ,
.Dq Prefetchable Memory ,
or
.Dq I/O Port .
The range indicates the binary log of the maximum address the BAR decodes.
The base and size indicate the start and length of the BAR's address window,
respectively.
Finally, the last flag indicates if the BAR is enabled or disabled.
.Pp
If the
.Fl c
option is supplied,
.Nm
will list any capabilities supported by each device.
Each capability is enumerated via a line in the following format:
.Bd -literal
cap 10[40] = PCI-Express 1 root port
.Ed
.Pp
The first value after the
.Dq Li cap
prefix is the capability ID in hexadecimal.
The second value in the square brackets is the offset of the capability
in config space in hexadecimal.
The format of the text after the equals sign is capability-specific.
.Pp
Each extended capability is enumerated via a line in a similar format:
.Bd -literal
ecap 0002[100] = VC 1 max VC0
.Ed
.Pp
The first value after the
.Dq Li ecap
prefix is the extended capability ID in hexadecimal.
The second value in the square brackets is the offset of the extended
capability in config space in hexadecimal.
The format of the text after the equals sign is capability-specific.
.Pp
If the
.Fl e
option is supplied,
.Nm
will list any errors reported for this device in standard PCI error registers.
Errors are checked for in the PCI status register,
the PCI-express device status register,
and the Advanced Error Reporting status registers.
.Pp
If the
.Fl v
option is supplied,
.Nm
will attempt to load the vendor/device information database, and print
vendor, device, class, and subclass identification strings for each device.
.Pp
If the
.Fl V
option is supplied,
.Nm
will list any vital product data
.Pq VPD
provided by each device.
Each VPD keyword is enumerated via a line in the following format:
.Bd -literal
VPD ro PN = '110114640C0 '
.Ed
.Pp
The first string after the
.Dq Li VPD
prefix indicates if the keyword is read-only
.Dq ro
or read-write
.Dq rw .
The second string provides the keyword name.
-The text after the the equals sign lists the value of the keyword which is
+The text after the equals sign lists the value of the keyword which is
usually an ASCII string.
.Pp
If the optional
.Ar device
argument is given with the
.Fl l
flag,
.Nm
will only list details about a single device instead of all devices.
.Pp
All invocations of
.Nm
except for
.Fl l
require a
.Ar device .
The device can be identified either by a device name if the device is
attached to a driver or by a selector.
Selectors identify a PCI device by its address in PCI config space and
can take one of the following forms:
.Pp
.Bl -bullet -offset indent -compact
.It
.Li pci Ns Va domain Ns \&: Ns Va bus Ns \&: Ns Va device Ns \&: \
Ns Va function Ns
.It
.Li pci Ns Va bus Ns \&: Ns Va device Ns \&: Ns Va function Ns
.It
.Li pci Ns Va bus Ns \&: Ns Va device Ns
.El
.Pp
In the case of an abridged form, omitted selector components are assumed to be 0.
An optional leading device name followed by @ and an optional final colon
will be ignored; this is so that the first column in the output of
.Nm
.Fl l
can be used without modification.
All numbers are base 10.
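.Pp
For example, the following selectors all identify the device at domain 0,
bus 0, device 6, function 0:
.Bd -literal
pci0:0:6:0
pci0:6:0
pci0:6
none0@pci0:0:6:0:
.Ed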
.Pp
With the
.Fl a
flag,
.Nm
determines whether any driver has been assigned to the device
identified by
.Ar selector .
An exit status of zero indicates that the device has a driver;
non-zero indicates that it does not.
.Pp
The
.Fl r
option reads a configuration space register at byte offset
.Ar addr
of device
.Ar selector
and prints out its value in hexadecimal.
The optional second address
.Ar addr2
specifies a range to read.
The
.Fl w
option writes the
.Ar value
into a configuration space register at byte offset
.Ar addr
of device
.Ar selector .
For both operations, the flags
.Fl b
and
.Fl h
select the width of the operation;
.Fl b
indicates a byte operation, and
.Fl h
indicates a halfword (two-byte) operation.
The default is to read or
write a longword (four bytes).
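.Pp
For example, the following (illustrative) invocations dump the first four
longwords of a device's configuration space and then read the single byte
at offset 0x40:
.Bd -literal
pciconf -r pci0:0:6:0 0x0:0xc
pciconf -r -b pci0:0:6:0 0x40
.Ed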
.Sh ENVIRONMENT
PCI vendor and device information is read from
.Pa /usr/local/share/pciids/pci.ids .
If that file is not present, it is read from
.Pa /usr/share/misc/pci_vendors .
This path can be overridden by setting the environment variable
.Ev PCICONF_VENDOR_DATABASE .
.Sh SEE ALSO
.Xr ioctl 2 ,
.\" .Xr pci 4 ,
.Xr devinfo 8 ,
.Xr kldload 8
.Sh HISTORY
The
.Nm
utility appeared first in
.Fx 2.2 .
The
.Fl a
option was added for
.Tn PCI
KLD support in
.Fx 3.0 .
.Sh AUTHORS
.An -nosplit
The
.Nm
utility was written by
.An Stefan Esser
and
.An Garrett Wollman .
.Sh BUGS
The
.Fl b
and
.Fl h
options are implemented in
.Nm ,
but not in the underlying
.Xr ioctl 2 .
.Pp
It might be useful to give non-root users access to the
.Fl a
and
.Fl r
options.
But only root will be able to execute a
.Nm kldload
to provide the device with a driver KLD, and reading of configuration space
registers may cause a failure in badly designed
.Tn PCI
chips.
