Index: stable/5/share/man/man4/polling.4
===================================================================
--- stable/5/share/man/man4/polling.4	(revision 145135)
+++ stable/5/share/man/man4/polling.4	(revision 145136)
@@ -1,218 +1,219 @@
.\" Copyright (c) 2002 Luigi Rizzo
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
-.Dd December 14, 2004
+.Dd March 26, 2005
.Dt POLLING 4
.Os
.Sh NAME
.Nm polling
.Nd device polling support
.Sh SYNOPSIS
.Cd "options DEVICE_POLLING"
.Cd "options HZ=1000"
.Sh DESCRIPTION
Device polling
.Nm ( for brevity)
refers to a technique that lets the operating system periodically poll
devices, instead of relying on the devices to generate interrupts when
they need attention.
This might seem inefficient and counterintuitive, but when done properly,
.Nm
gives the operating system more control over when and how to handle
devices, with a number of advantages in terms of system responsiveness
and performance.
.Pp
In particular,
.Nm
reduces the context switch overhead incurred when servicing interrupts,
and gives more control over how the CPU is scheduled among the various
tasks (user processes, software interrupts, device handling), which
ultimately reduces the chances of livelock in the system.
.Ss Principles of Operation
In the normal, interrupt-based mode, devices generate an interrupt
whenever they need attention.
This in turn causes a context switch and the execution of an interrupt
handler, which performs whatever processing is needed by the device.
The duration of the interrupt handler is potentially unbounded unless
the device driver has been programmed with real-time concerns in mind
(which is generally not the case for
.Fx
drivers).
Furthermore, under heavy traffic load, the system might be persistently
processing interrupts without being able to complete other work, either
in the kernel or in userland.
.Pp
With device polling, interrupts are disabled and devices are instead
polled at appropriate times, i.e., on clock interrupts, on system calls,
and within the idle loop.
This way, the context switch overhead is removed.
Furthermore, the operating system can accurately control how much work
is spent handling device events, and thus prevent livelock by reserving
some amount of CPU time for other tasks.
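.Pp
For example, polling can be enabled and tuned at run time through
.Xr sysctl 8 .
The following minimal C sketch does the same thing via
.Xr sysctlbyname 3 ,
using the MIB variables documented below; the
.Fn enable_polling
helper is illustrative only, not part of the base system:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/sysctl.h>

/* Illustrative helper: equivalent to
 * "sysctl kern.polling.enable=1 kern.polling.user_frac=50". */
static int
enable_polling(void)
{
	int one = 1, frac = 50;

	/* Turn device polling on. */
	if (sysctlbyname("kern.polling.enable", NULL, NULL,
	    &one, sizeof(one)) != 0)
		return (-1);
	/* Reserve up to 50% of the CPU cycles for userland. */
	return (sysctlbyname("kern.polling.user_frac", NULL, NULL,
	    &frac, sizeof(frac)));
}
.Ed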
.Pp
Enabling
.Nm
also changes the way software network interrupts are scheduled, so there
is never the risk of livelock due to packets not being processed to
completion.
.Ss MIB Variables
The operation of
.Nm
is controlled by the following
.Xr sysctl 8
MIB variables:
.Pp
.Bl -tag -width indent -compact
.It Va kern.polling.enable
If set to non-zero,
.Nm
is enabled.
Default is disabled.
.Pp
.It Va kern.polling.user_frac
When
.Nm
is enabled, and provided that there is some work to do, up to this
percent of the CPU cycles is reserved for userland tasks, the remaining
fraction being available for
.Nm
processing.
Default is 50.
.Pp
.It Va kern.polling.burst
Maximum number of packets grabbed from each network interface in each
timer tick.
This number is dynamically adjusted by the kernel, according to the
programmed
.Va user_frac , burst_max ,
CPU speed, and system load.
.Pp
.It Va kern.polling.each_burst
The burst above is split into smaller chunks of this number of packets,
going round-robin among all interfaces registered for
.Nm .
This prevents a large burst from a single interface from saturating the
IP interrupt queue
.Pq Va net.inet.ip.intr_queue_maxlen .
Default is 5.
.Pp
.It Va kern.polling.burst_max
Upper bound for
.Va kern.polling.burst .
Note that when
.Nm
is enabled, each interface can receive at most
.Pq Va HZ No * Va burst_max
packets per second unless there are spare CPU cycles available for
.Nm
in the idle loop.
This number should be tuned to match the expected load
(which can be quite high with GigE cards).
Default is 150, which is adequate for a 100Mbit network and HZ=1000.
.Pp
.It Va kern.polling.idle_poll
Controls whether
.Nm
is enabled in the idle loop.
There is no reason (other than power saving or bugs in the scheduler's
handling of idle priority kernel threads) to disable this.
Note that -CURRENT apparently has some problems in this respect now, so
the default is disabled.
.Pp
.It Va kern.polling.poll_in_trap
Controls whether
.Nm
is enabled during hardware traps.
Enabling this can be useful to improve the network responsiveness of
boxes with 100% CPU usage.
Default is disabled.
.Pp
.It Va kern.polling.reg_frac
Controls how often (every
.Va reg_frac No / Va HZ
seconds) the status registers of the device are checked for error
conditions and the like.
Increasing this value reduces the load on the bus, but also delays
error detection.
Default is 20.
.Pp
.It Va kern.polling.handlers
How many active devices have registered for
.Nm .
.Pp
.It Va kern.polling.short_ticks
.It Va kern.polling.lost_polls
.It Va kern.polling.pending_polls
.It Va kern.polling.residual_burst
.It Va kern.polling.phase
.It Va kern.polling.suspect
.It Va kern.polling.stalled
Debugging variables.
.El
.Sh SUPPORTED DEVICES
Device polling requires explicit modifications to the device drivers.
As of this writing, the
.Xr dc 4 ,
.Xr em 4 ,
.Xr fwe 4 ,
.Xr fwip 4 ,
.Xr fxp 4 ,
.Xr ixgb 4 ,
.Xr nge 4 ,
.Xr re 4 ,
.Xr rl 4 ,
.Xr sf 4 ,
.Xr sis 4 ,
.Xr ste 4 ,
.Xr vge 4 ,
+.Xr vr 4 ,
and
-.Xr vr 4
+.Xr xl 4
devices are supported, with others in the works.
The modifications are rather straightforward: the inner part of the
interrupt service routine is extracted into a callback function,
.Fn *_poll ,
which is invoked to probe the device for events and process them.
(See the conditionally compiled sections of the drivers mentioned above
for more details.)
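.Pp
To make the shape of such a callback concrete, the following is a
schematic sketch only: the
.Fn foo_*
helpers stand in for a driver's own receive, transmit, and status
routines, and the command values are those assumed to be provided by the
.Fx 5
polling framework.
A real handler, such as the
.Fn xl_poll
routine added in this change, additionally has to take the driver lock:
.Bd -literal -offset indent
/*
 * Hypothetical poll callback: process at most "count" packets and
 * touch the (slower) status registers only when explicitly asked.
 */
static void
foo_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
{
	struct foo_softc *sc = ifp->if_softc;

	if (cmd == POLL_DEREGISTER) {
		/* Final call: re-enable interrupts. */
		foo_enable_intr(sc);
		return;
	}
	foo_rxeof(sc, count);	/* grab up to "count" received packets */
	foo_txeof(sc);		/* reclaim completed transmit descriptors */
	if (cmd == POLL_AND_CHECK_STATUS)
		foo_check_status(sc);	/* infrequent error/status check */
	if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd))
		foo_start(ifp);		/* keep the transmitter busy */
}
.Ed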
.Pp
Because in the worst case devices are polled only on clock interrupts,
it is advisable to increase the clock frequency to at least 1000 Hz
in order to reduce the latency in processing packets.
.Sh HISTORY
Device polling first appeared in
.Fx 4.6
and
.Fx 5.0 .
.Sh AUTHORS
Device polling was written by
.An Luigi Rizzo Aq luigi@iet.unipi.it .
Index: stable/5/share/man/man4/xl.4
===================================================================
--- stable/5/share/man/man4/xl.4	(revision 145135)
+++ stable/5/share/man/man4/xl.4	(revision 145136)
@@ -1,253 +1,254 @@
.\" Copyright (c) 1997, 1998
.\"	Bill Paul <wpaul@ctr.columbia.edu>.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. All advertising materials mentioning features or use of this software
.\"    must display the following acknowledgement:
.\"    This product includes software developed by Bill Paul.
.\" 4. Neither the name of the author nor the names of any co-contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD
.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
.\" THE POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd January 3, 2005
.Dt XL 4
.Os
.Sh NAME
.Nm xl
.Nd "3Com Etherlink XL and Fast Etherlink XL Ethernet device driver"
.Sh SYNOPSIS
.Cd "device miibus"
.Cd "device xl"
.Sh DESCRIPTION
The
.Nm
driver provides support for PCI Ethernet adapters and embedded
controllers based on the 3Com "boomerang," "cyclone," "hurricane" and
"tornado" bus-master Etherlink XL chips.
.Pp
The Etherlink XL chips support built-in 10baseT, 10base2 and 10base5
transceivers as well as an MII bus for externally attached PHY
transceivers.
The 3c905 series typically uses a National Semiconductor NS 83840A
10/100 PHY for 10/100 Mbps support in full- or half-duplex mode.
The 3c905B adapters have built-in autonegotiation logic mapped onto the
MII for compatibility with previous drivers.
Fast Etherlink XL adapters such as the 3c905-TX and 3c905B-TX are
capable of 10 or 100Mbps data rates in either full or half duplex, and
can either be manually configured for any supported mode or
automatically negotiate the highest possible mode with a link partner.
.Pp
The
.Nm
driver supports the following media types:
.Pp
.Bl -tag -width xxxxxxxxxxxxxxxxxxxx
.It autoselect
Enable autoselection of the media type and options.
Note that this option is only available with the 3c905 and 3c905B
adapters with external PHYs or built-in autonegotiation logic.
For 3c900 adapters, the driver will choose the mode specified in the
EEPROM.
The user can change this by adding media options to the
.Pa /etc/rc.conf
file.
.It 10baseT/UTP
Set 10Mbps operation.
The
.Ar mediaopt
option can also be used to select either
.Ar full-duplex
or
.Ar half-duplex
modes.
.It 100baseTX
Set 100Mbps (Fast Ethernet) operation.
The
.Ar mediaopt
option can also be used to select either
.Ar full-duplex
or
.Ar half-duplex
modes.
.It 10base5/AUI
Enable AUI transceiver (available only on COMBO cards).
.It 10base2/BNC
Enable BNC coax transceiver (available only on COMBO cards).
.El
.Pp
The
.Nm
driver supports the following media options:
.Pp
.Bl -tag -width xxxxxxxxxxxxxxxxxxxx
.It full-duplex
Force full duplex operation.
.It half-duplex
Force half duplex operation.
.El
.Pp
Note that the 100baseTX media type is only available if supported by
the adapter.
For more information on configuring this device, see
.Xr ifconfig 8 .
.Sh HARDWARE
The
.Nm
driver supports the following hardware:
.Pp
.Bl -bullet -compact
.It
3Com 3c900-TPO
.It
3Com 3c900-COMBO
.It
3Com 3c905-TX
.It
3Com 3c905-T4
.It
3Com 3c900B-TPO
.It
3Com 3c900B-TPC
.It
3Com 3c900B-FL
.It
3Com 3c900B-COMBO
.It
3Com 3c905B-T4
.It
3Com 3c905B-TX
.It
3Com 3c905B-FX
.It
3Com 3c905B-COMBO
.It
3Com 3c905C-TX
.It
3Com 3c980, 3c980B, and 3c980C server adapters
.It
3Com 3cSOHO100-TX OfficeConnect adapters
.It
3Com 3c450 HomeConnect adapters
.It
3Com 3c555, 3c556 and 3c556B mini-PCI adapters
.It
3Com 3C3SH573BT, 3C575TX, 3CCFE575BT, 3CXFE575BT, 3CCFE575CT,
3CXFE575CT, 3CCFEM656, 3CCFEM656B, 3CCFEM656C, 3CXFEM656, 3CXFEM656B,
and 3CXFEM656C CardBus adapters
.It
3Com 3c905-TX, 3c905B-TX, 3c905C-TX, 3c920B-EMB, and 3c920B-EMB-WNM
embedded adapters
.El
.Pp
Both the 3C656 family of CardBus cards and the 3C556 family of MiniPCI
cards have a built-in proprietary modem.
Neither the
.Nm
driver nor any other
.Fx
driver supports this modem.
.Sh DIAGNOSTICS
.Bl -diag
.It "xl%d: couldn't map memory"
A fatal initialization error has occurred.
.It "xl%d: couldn't map interrupt"
A fatal initialization error has occurred.
.It "xl%d: device timeout"
The device has stopped responding to the network, or there is a problem
with the network connection (cable).
.It "xl%d: no memory for rx list"
The driver failed to allocate an mbuf for the receiver ring.
.It "xl%d: no memory for tx list"
The driver failed to allocate an mbuf for the transmitter ring when
allocating a pad buffer or collapsing an mbuf chain into a cluster.
.It "xl%d: command never completed!"
Some commands issued to the 3c90x ASIC take time to complete: the
driver is supposed to wait until the 'command in progress' bit in the
status register clears before continuing.
In rare instances, this bit may not clear.
To avoid getting caught in an infinite wait loop, the driver only polls
the bit for a finite number of times before giving up, at which point
it issues this message.
This message may be printed during driver initialization on slower
machines.
If you see this message but the driver continues to function normally,
the message can probably be ignored.
.It "xl%d: chip is in D3 power state -- setting to D0"
This message applies only to 3c905B adapters, which support power
management.
Some operating systems place the 3c905B in low power mode when shutting
down, and some PCI BIOSes fail to bring the chip out of this state
before configuring it.
The 3c905B loses all of its PCI configuration in the D3 state, so if
the BIOS does not set it back to full power mode in time, the driver
will not be able to configure it correctly.
The driver tries to detect this condition and bring the adapter back to
the D0 (full power) state, but this may not be enough to return the
driver to a fully operational condition.
If you see this message at boot time and the driver fails to attach the
device as a network interface, you will have to perform a second warm
boot to have the device properly configured.
.Pp
Note that this condition only occurs when warm booting from another
operating system.
If you power down your system prior to booting
.Fx ,
the card should be configured correctly.
.It "xl%d: WARNING: no media options bits set in the media options register!"
This warning may appear when using the driver on some Dell Latitude
docking stations with built-in 3c905-TX adapters.
For whatever reason, the 'MII available' bit in the media options
register on this particular equipment is not set, even though it should
be (the 3c905-TX always uses an external PHY transceiver).
The driver will attempt to guess the proper media type based on the PCI
device ID word.
The driver makes a lot of noise about this condition because the author
considers it a manufacturing defect.
.El
.Sh SEE ALSO
.Xr arp 4 ,
.Xr cardbus 4 ,
.Xr miibus 4 ,
.Xr netintro 4 ,
.Xr ng_ether 4 ,
.Xr pccard 4 ,
+.Xr polling 4 ,
.Xr ifconfig 8
.Sh HISTORY
The
.Nm
device driver first appeared in
.Fx 3.0 .
.Sh AUTHORS
The
.Nm
driver was written by
.An Bill Paul Aq wpaul@ctr.columbia.edu .
Index: stable/5/sys/pci/if_xl.c
===================================================================
--- stable/5/sys/pci/if_xl.c	(revision 145135)
+++ stable/5/sys/pci/if_xl.c	(revision 145136)
@@ -1,3246 +1,3365 @@
/*-
 * Copyright (c) 1997, 1998, 1999
 *	Bill Paul <wpaul@ctr.columbia.edu>.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by Bill Paul.
 * 4. Neither the name of the author nor the names of any co-contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
 * THE POSSIBILITY OF SUCH DAMAGE.
*/ #include __FBSDID("$FreeBSD$"); /* * 3Com 3c90x Etherlink XL PCI NIC driver * * Supports the 3Com "boomerang", "cyclone" and "hurricane" PCI * bus-master chips (3c90x cards and embedded controllers) including * the following: * * 3Com 3c900-TPO 10Mbps/RJ-45 * 3Com 3c900-COMBO 10Mbps/RJ-45,AUI,BNC * 3Com 3c905-TX 10/100Mbps/RJ-45 * 3Com 3c905-T4 10/100Mbps/RJ-45 * 3Com 3c900B-TPO 10Mbps/RJ-45 * 3Com 3c900B-COMBO 10Mbps/RJ-45,AUI,BNC * 3Com 3c900B-TPC 10Mbps/RJ-45,BNC * 3Com 3c900B-FL 10Mbps/Fiber-optic * 3Com 3c905B-COMBO 10/100Mbps/RJ-45,AUI,BNC * 3Com 3c905B-TX 10/100Mbps/RJ-45 * 3Com 3c905B-FL/FX 10/100Mbps/Fiber-optic * 3Com 3c905C-TX 10/100Mbps/RJ-45 (Tornado ASIC) * 3Com 3c980-TX 10/100Mbps server adapter (Hurricane ASIC) * 3Com 3c980C-TX 10/100Mbps server adapter (Tornado ASIC) * 3Com 3cSOHO100-TX 10/100Mbps/RJ-45 (Hurricane ASIC) * 3Com 3c450-TX 10/100Mbps/RJ-45 (Tornado ASIC) * 3Com 3c555 10/100Mbps/RJ-45 (MiniPCI, Laptop Hurricane) * 3Com 3c556 10/100Mbps/RJ-45 (MiniPCI, Hurricane ASIC) * 3Com 3c556B 10/100Mbps/RJ-45 (MiniPCI, Hurricane ASIC) * 3Com 3c575TX 10/100Mbps/RJ-45 (Cardbus, Hurricane ASIC) * 3Com 3c575B 10/100Mbps/RJ-45 (Cardbus, Hurricane ASIC) * 3Com 3c575C 10/100Mbps/RJ-45 (Cardbus, Hurricane ASIC) * 3Com 3cxfem656 10/100Mbps/RJ-45 (Cardbus, Hurricane ASIC) * 3Com 3cxfem656b 10/100Mbps/RJ-45 (Cardbus, Hurricane ASIC) * 3Com 3cxfem656c 10/100Mbps/RJ-45 (Cardbus, Tornado ASIC) * Dell Optiplex GX1 on-board 3c918 10/100Mbps/RJ-45 * Dell on-board 3c920 10/100Mbps/RJ-45 * Dell Precision on-board 3c905B 10/100Mbps/RJ-45 * Dell Latitude laptop docking station embedded 3c905-TX * * Written by Bill Paul * Electrical Engineering Department * Columbia University, New York City */ /* * The 3c90x series chips use a bus-master DMA interface for transfering * packets to and from the controller chip. Some of the "vortex" cards * (3c59x) also supported a bus master mode, however for those chips * you could only DMA packets to/from a contiguous memory buffer. For * transmission this would mean copying the contents of the queued mbuf * chain into an mbuf cluster and then DMAing the cluster. This extra * copy would sort of defeat the purpose of the bus master support for * any packet that doesn't fit into a single mbuf. * * By contrast, the 3c90x cards support a fragment-based bus master * mode where mbuf chains can be encapsulated using TX descriptors. * This is similar to other PCI chips such as the Texas Instruments * ThunderLAN and the Intel 82557/82558. * * The "vortex" driver (if_vx.c) happens to work for the "boomerang" * bus master chips because they maintain the old PIO interface for * backwards compatibility, but starting with the 3c905B and the * "cyclone" chips, the compatibility interface has been dropped. * Since using bus master DMA is a big win, we use this driver to * support the PCI "boomerang" chips even though they work with the * "vortex" driver in order to obtain better performance. * * This driver is in the /sys/pci directory because it only supports * PCI-based NICs. */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include MODULE_DEPEND(xl, pci, 1, 1, 1); MODULE_DEPEND(xl, ether, 1, 1, 1); MODULE_DEPEND(xl, miibus, 1, 1, 1); /* "device miibus" required. See GENERIC if you get errors here. 
*/ #include "miibus_if.h" #include /* * TX Checksumming is disabled by default for two reasons: * - TX Checksumming will occasionally produce corrupt packets * - TX Checksumming seems to reduce performance * * Only 905B/C cards were reported to have this problem, it is possible * that later chips _may_ be immune. */ #define XL905B_TXCSUM_BROKEN 1 #ifdef XL905B_TXCSUM_BROKEN #define XL905B_CSUM_FEATURES 0 #else #define XL905B_CSUM_FEATURES (CSUM_IP | CSUM_TCP | CSUM_UDP) #endif /* * Various supported device vendors/types and their names. */ static struct xl_type xl_devs[] = { { TC_VENDORID, TC_DEVICEID_BOOMERANG_10BT, "3Com 3c900-TPO Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_BOOMERANG_10BT_COMBO, "3Com 3c900-COMBO Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_BOOMERANG_10_100BT, "3Com 3c905-TX Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_BOOMERANG_100BT4, "3Com 3c905-T4 Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_KRAKATOA_10BT, "3Com 3c900B-TPO Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_KRAKATOA_10BT_COMBO, "3Com 3c900B-COMBO Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_KRAKATOA_10BT_TPC, "3Com 3c900B-TPC Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_CYCLONE_10FL, "3Com 3c900B-FL Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_10_100BT, "3Com 3c905B-TX Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_CYCLONE_10_100BT4, "3Com 3c905B-T4 Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_CYCLONE_10_100FX, "3Com 3c905B-FX/SC Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_CYCLONE_10_100_COMBO, "3Com 3c905B-COMBO Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_TORNADO_10_100BT, "3Com 3c905C-TX Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_TORNADO_10_100BT_920B, "3Com 3c920B-EMB Integrated Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_TORNADO_10_100BT_920B_WNM, "3Com 3c920B-EMB-WNM Integrated Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_10_100BT_SERV, "3Com 3c980 Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_TORNADO_10_100BT_SERV, "3Com 3c980C Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_SOHO100TX, "3Com 3cSOHO100-TX OfficeConnect" }, { TC_VENDORID, TC_DEVICEID_TORNADO_HOMECONNECT, "3Com 3c450-TX HomeConnect" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_555, "3Com 3c555 Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_556, "3Com 3c556 Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_556B, "3Com 3c556B Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_575A, "3Com 3c575TX Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_575B, "3Com 3c575B Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_575C, "3Com 3c575C Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_656, "3Com 3c656 Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_HURRICANE_656B, "3Com 3c656B Fast Etherlink XL" }, { TC_VENDORID, TC_DEVICEID_TORNADO_656C, "3Com 3c656C Fast Etherlink XL" }, { 0, 0, NULL } }; static int xl_probe(device_t); static int xl_attach(device_t); static int xl_detach(device_t); static int xl_newbuf(struct xl_softc *, struct xl_chain_onefrag *); static void xl_stats_update(void *); static void xl_stats_update_locked(struct xl_softc *); static int xl_encap(struct xl_softc *, struct xl_chain *, struct mbuf *); static void xl_rxeof(struct xl_softc *); static int xl_rx_resync(struct xl_softc *); static void xl_txeof(struct xl_softc *); static void xl_txeof_90xB(struct xl_softc *); static void xl_txeoc(struct xl_softc *); static void xl_intr(void *); static void xl_start(struct ifnet *); static void xl_start_locked(struct 
ifnet *); static void xl_start_90xB_locked(struct ifnet *); static int xl_ioctl(struct ifnet *, u_long, caddr_t); static void xl_init(void *); static void xl_init_locked(struct xl_softc *); static void xl_stop(struct xl_softc *); static void xl_watchdog(struct ifnet *); static void xl_shutdown(device_t); static int xl_suspend(device_t); static int xl_resume(device_t); +#ifdef DEVICE_POLLING +static void xl_poll(struct ifnet *ifp, enum poll_cmd cmd, int count); +static void xl_poll_locked(struct ifnet *ifp, enum poll_cmd cmd, int count); +#endif /* DEVICE_POLLING */ + static int xl_ifmedia_upd(struct ifnet *); static void xl_ifmedia_sts(struct ifnet *, struct ifmediareq *); static int xl_eeprom_wait(struct xl_softc *); static int xl_read_eeprom(struct xl_softc *, caddr_t, int, int, int); static void xl_mii_sync(struct xl_softc *); static void xl_mii_send(struct xl_softc *, u_int32_t, int); static int xl_mii_readreg(struct xl_softc *, struct xl_mii_frame *); static int xl_mii_writereg(struct xl_softc *, struct xl_mii_frame *); static void xl_setcfg(struct xl_softc *); static void xl_setmode(struct xl_softc *, int); static void xl_setmulti(struct xl_softc *); static void xl_setmulti_hash(struct xl_softc *); static void xl_reset(struct xl_softc *); static int xl_list_rx_init(struct xl_softc *); static int xl_list_tx_init(struct xl_softc *); static int xl_list_tx_init_90xB(struct xl_softc *); static void xl_wait(struct xl_softc *); static void xl_mediacheck(struct xl_softc *); static void xl_choose_media(struct xl_softc *sc, int *media); static void xl_choose_xcvr(struct xl_softc *, int); static void xl_dma_map_addr(void *, bus_dma_segment_t *, int, int); static void xl_dma_map_rxbuf(void *, bus_dma_segment_t *, int, bus_size_t, int); static void xl_dma_map_txbuf(void *, bus_dma_segment_t *, int, bus_size_t, int); #ifdef notdef static void xl_testpacket(struct xl_softc *); #endif static int xl_miibus_readreg(device_t, int, int); static int xl_miibus_writereg(device_t, int, int, int); static void xl_miibus_statchg(device_t); static void xl_miibus_mediainit(device_t); static device_method_t xl_methods[] = { /* Device interface */ DEVMETHOD(device_probe, xl_probe), DEVMETHOD(device_attach, xl_attach), DEVMETHOD(device_detach, xl_detach), DEVMETHOD(device_shutdown, xl_shutdown), DEVMETHOD(device_suspend, xl_suspend), DEVMETHOD(device_resume, xl_resume), /* bus interface */ DEVMETHOD(bus_print_child, bus_generic_print_child), DEVMETHOD(bus_driver_added, bus_generic_driver_added), /* MII interface */ DEVMETHOD(miibus_readreg, xl_miibus_readreg), DEVMETHOD(miibus_writereg, xl_miibus_writereg), DEVMETHOD(miibus_statchg, xl_miibus_statchg), DEVMETHOD(miibus_mediainit, xl_miibus_mediainit), { 0, 0 } }; static driver_t xl_driver = { "xl", xl_methods, sizeof(struct xl_softc) }; static devclass_t xl_devclass; DRIVER_MODULE(xl, cardbus, xl_driver, xl_devclass, 0, 0); DRIVER_MODULE(xl, pci, xl_driver, xl_devclass, 0, 0); DRIVER_MODULE(miibus, xl, miibus_driver, miibus_devclass, 0, 0); static void xl_dma_map_addr(void *arg, bus_dma_segment_t *segs, int nseg, int error) { u_int32_t *paddr; paddr = arg; *paddr = segs->ds_addr; } static void xl_dma_map_rxbuf(void *arg, bus_dma_segment_t *segs, int nseg, bus_size_t mapsize, int error) { u_int32_t *paddr; if (error) return; KASSERT(nseg == 1, ("xl_dma_map_rxbuf: too many DMA segments")); paddr = arg; *paddr = segs->ds_addr; } static void xl_dma_map_txbuf(void *arg, bus_dma_segment_t *segs, int nseg, bus_size_t mapsize, int error) { struct xl_list *l; int i, 
total_len; if (error) return; KASSERT(nseg <= XL_MAXFRAGS, ("too many DMA segments")); total_len = 0; l = arg; for (i = 0; i < nseg; i++) { KASSERT(segs[i].ds_len <= MCLBYTES, ("segment size too large")); l->xl_frag[i].xl_addr = htole32(segs[i].ds_addr); l->xl_frag[i].xl_len = htole32(segs[i].ds_len); total_len += segs[i].ds_len; } l->xl_frag[nseg - 1].xl_len = htole32(segs[nseg - 1].ds_len | XL_LAST_FRAG); l->xl_status = htole32(total_len); l->xl_next = 0; } /* * Murphy's law says that it's possible the chip can wedge and * the 'command in progress' bit may never clear. Hence, we wait * only a finite amount of time to avoid getting caught in an * infinite loop. Normally this delay routine would be a macro, * but it isn't called during normal operation so we can afford * to make it a function. */ static void xl_wait(struct xl_softc *sc) { register int i; for (i = 0; i < XL_TIMEOUT; i++) { if ((CSR_READ_2(sc, XL_STATUS) & XL_STAT_CMDBUSY) == 0) break; } if (i == XL_TIMEOUT) if_printf(&sc->arpcom.ac_if, "command never completed!\n"); } /* * MII access routines are provided for adapters with external * PHYs (3c905-TX, 3c905-T4, 3c905B-T4) and those with built-in * autoneg logic that's faked up to look like a PHY (3c905B-TX). * Note: if you don't perform the MDIO operations just right, * it's possible to end up with code that works correctly with * some chips/CPUs/processor speeds/bus speeds/etc but not * with others. */ #define MII_SET(x) \ CSR_WRITE_2(sc, XL_W4_PHY_MGMT, \ CSR_READ_2(sc, XL_W4_PHY_MGMT) | (x)) #define MII_CLR(x) \ CSR_WRITE_2(sc, XL_W4_PHY_MGMT, \ CSR_READ_2(sc, XL_W4_PHY_MGMT) & ~(x)) /* * Sync the PHYs by setting data bit and strobing the clock 32 times. */ static void xl_mii_sync(struct xl_softc *sc) { register int i; XL_SEL_WIN(4); MII_SET(XL_MII_DIR|XL_MII_DATA); for (i = 0; i < 32; i++) { MII_SET(XL_MII_CLK); MII_SET(XL_MII_DATA); MII_SET(XL_MII_DATA); MII_CLR(XL_MII_CLK); MII_SET(XL_MII_DATA); MII_SET(XL_MII_DATA); } } /* * Clock a series of bits through the MII. */ static void xl_mii_send(struct xl_softc *sc, u_int32_t bits, int cnt) { int i; XL_SEL_WIN(4); MII_CLR(XL_MII_CLK); for (i = (0x1 << (cnt - 1)); i; i >>= 1) { if (bits & i) { MII_SET(XL_MII_DATA); } else { MII_CLR(XL_MII_DATA); } MII_CLR(XL_MII_CLK); MII_SET(XL_MII_CLK); } } /* * Read an PHY register through the MII. */ static int xl_mii_readreg(struct xl_softc *sc, struct xl_mii_frame *frame) { int i, ack; /*XL_LOCK_ASSERT(sc);*/ /* Set up frame for RX. */ frame->mii_stdelim = XL_MII_STARTDELIM; frame->mii_opcode = XL_MII_READOP; frame->mii_turnaround = 0; frame->mii_data = 0; /* Select register window 4. */ XL_SEL_WIN(4); CSR_WRITE_2(sc, XL_W4_PHY_MGMT, 0); /* Turn on data xmit. */ MII_SET(XL_MII_DIR); xl_mii_sync(sc); /* Send command/address info. */ xl_mii_send(sc, frame->mii_stdelim, 2); xl_mii_send(sc, frame->mii_opcode, 2); xl_mii_send(sc, frame->mii_phyaddr, 5); xl_mii_send(sc, frame->mii_regaddr, 5); /* Idle bit */ MII_CLR((XL_MII_CLK|XL_MII_DATA)); MII_SET(XL_MII_CLK); /* Turn off xmit. */ MII_CLR(XL_MII_DIR); /* Check for ack */ MII_CLR(XL_MII_CLK); ack = CSR_READ_2(sc, XL_W4_PHY_MGMT) & XL_MII_DATA; MII_SET(XL_MII_CLK); /* * Now try reading data bits. If the ack failed, we still * need to clock through 16 cycles to keep the PHY(s) in sync. 
*/ if (ack) { for (i = 0; i < 16; i++) { MII_CLR(XL_MII_CLK); MII_SET(XL_MII_CLK); } goto fail; } for (i = 0x8000; i; i >>= 1) { MII_CLR(XL_MII_CLK); if (!ack) { if (CSR_READ_2(sc, XL_W4_PHY_MGMT) & XL_MII_DATA) frame->mii_data |= i; } MII_SET(XL_MII_CLK); } fail: MII_CLR(XL_MII_CLK); MII_SET(XL_MII_CLK); return (ack ? 1 : 0); } /* * Write to a PHY register through the MII. */ static int xl_mii_writereg(struct xl_softc *sc, struct xl_mii_frame *frame) { /*XL_LOCK_ASSERT(sc);*/ /* Set up frame for TX. */ frame->mii_stdelim = XL_MII_STARTDELIM; frame->mii_opcode = XL_MII_WRITEOP; frame->mii_turnaround = XL_MII_TURNAROUND; /* Select the window 4. */ XL_SEL_WIN(4); /* Turn on data output. */ MII_SET(XL_MII_DIR); xl_mii_sync(sc); xl_mii_send(sc, frame->mii_stdelim, 2); xl_mii_send(sc, frame->mii_opcode, 2); xl_mii_send(sc, frame->mii_phyaddr, 5); xl_mii_send(sc, frame->mii_regaddr, 5); xl_mii_send(sc, frame->mii_turnaround, 2); xl_mii_send(sc, frame->mii_data, 16); /* Idle bit. */ MII_SET(XL_MII_CLK); MII_CLR(XL_MII_CLK); /* Turn off xmit. */ MII_CLR(XL_MII_DIR); return (0); } static int xl_miibus_readreg(device_t dev, int phy, int reg) { struct xl_softc *sc; struct xl_mii_frame frame; sc = device_get_softc(dev); /* * Pretend that PHYs are only available at MII address 24. * This is to guard against problems with certain 3Com ASIC * revisions that incorrectly map the internal transceiver * control registers at all MII addresses. This can cause * the miibus code to attach the same PHY several times over. */ if ((sc->xl_flags & XL_FLAG_PHYOK) == 0 && phy != 24) return (0); bzero((char *)&frame, sizeof(frame)); frame.mii_phyaddr = phy; frame.mii_regaddr = reg; xl_mii_readreg(sc, &frame); return (frame.mii_data); } static int xl_miibus_writereg(device_t dev, int phy, int reg, int data) { struct xl_softc *sc; struct xl_mii_frame frame; sc = device_get_softc(dev); if ((sc->xl_flags & XL_FLAG_PHYOK) == 0 && phy != 24) return (0); bzero((char *)&frame, sizeof(frame)); frame.mii_phyaddr = phy; frame.mii_regaddr = reg; frame.mii_data = data; xl_mii_writereg(sc, &frame); return (0); } static void xl_miibus_statchg(device_t dev) { struct xl_softc *sc; struct mii_data *mii; sc = device_get_softc(dev); mii = device_get_softc(sc->xl_miibus); /*XL_LOCK_ASSERT(sc);*/ xl_setcfg(sc); /* Set ASIC's duplex mode to match the PHY. */ XL_SEL_WIN(3); if ((mii->mii_media_active & IFM_GMASK) == IFM_FDX) CSR_WRITE_1(sc, XL_W3_MAC_CTRL, XL_MACCTRL_DUPLEX); else CSR_WRITE_1(sc, XL_W3_MAC_CTRL, (CSR_READ_1(sc, XL_W3_MAC_CTRL) & ~XL_MACCTRL_DUPLEX)); } /* * Special support for the 3c905B-COMBO. This card has 10/100 support * plus BNC and AUI ports. This means we will have both an miibus attached * plus some non-MII media settings. In order to allow this, we have to * add the extra media to the miibus's ifmedia struct, but we can't do * that during xl_attach() because the miibus hasn't been attached yet. * So instead, we wait until the miibus probe/attach is done, at which * point we will get a callback telling is that it's safe to add our * extra media. */ static void xl_miibus_mediainit(device_t dev) { struct xl_softc *sc; struct mii_data *mii; struct ifmedia *ifm; sc = device_get_softc(dev); mii = device_get_softc(sc->xl_miibus); ifm = &mii->mii_media; /*XL_LOCK_ASSERT(sc);*/ if (sc->xl_media & (XL_MEDIAOPT_AUI | XL_MEDIAOPT_10FL)) { /* * Check for a 10baseFL board in disguise. 
*/ if (sc->xl_type == XL_TYPE_905B && sc->xl_media == XL_MEDIAOPT_10FL) { if (bootverbose) if_printf(&sc->arpcom.ac_if, "found 10baseFL\n"); ifmedia_add(ifm, IFM_ETHER | IFM_10_FL, 0, NULL); ifmedia_add(ifm, IFM_ETHER | IFM_10_FL|IFM_HDX, 0, NULL); if (sc->xl_caps & XL_CAPS_FULL_DUPLEX) ifmedia_add(ifm, IFM_ETHER | IFM_10_FL | IFM_FDX, 0, NULL); } else { if (bootverbose) if_printf(&sc->arpcom.ac_if, "found AUI\n"); ifmedia_add(ifm, IFM_ETHER | IFM_10_5, 0, NULL); } } if (sc->xl_media & XL_MEDIAOPT_BNC) { if (bootverbose) if_printf(&sc->arpcom.ac_if, "found BNC\n"); ifmedia_add(ifm, IFM_ETHER | IFM_10_2, 0, NULL); } } /* * The EEPROM is slow: give it time to come ready after issuing * it a command. */ static int xl_eeprom_wait(struct xl_softc *sc) { int i; for (i = 0; i < 100; i++) { if (CSR_READ_2(sc, XL_W0_EE_CMD) & XL_EE_BUSY) DELAY(162); else break; } if (i == 100) { if_printf(&sc->arpcom.ac_if, "eeprom failed to come ready\n"); return (1); } return (0); } /* * Read a sequence of words from the EEPROM. Note that ethernet address * data is stored in the EEPROM in network byte order. */ static int xl_read_eeprom(struct xl_softc *sc, caddr_t dest, int off, int cnt, int swap) { int err = 0, i; u_int16_t word = 0, *ptr; XL_LOCK_ASSERT(sc); #define EEPROM_5BIT_OFFSET(A) ((((A) << 2) & 0x7F00) | ((A) & 0x003F)) #define EEPROM_8BIT_OFFSET(A) ((A) & 0x003F) /* * XXX: WARNING! DANGER! * It's easy to accidentally overwrite the rom content! * Note: the 3c575 uses 8bit EEPROM offsets. */ XL_SEL_WIN(0); if (xl_eeprom_wait(sc)) return (1); if (sc->xl_flags & XL_FLAG_EEPROM_OFFSET_30) off += 0x30; for (i = 0; i < cnt; i++) { if (sc->xl_flags & XL_FLAG_8BITROM) CSR_WRITE_2(sc, XL_W0_EE_CMD, XL_EE_8BIT_READ | EEPROM_8BIT_OFFSET(off + i)); else CSR_WRITE_2(sc, XL_W0_EE_CMD, XL_EE_READ | EEPROM_5BIT_OFFSET(off + i)); err = xl_eeprom_wait(sc); if (err) break; word = CSR_READ_2(sc, XL_W0_EE_DATA); ptr = (u_int16_t *)(dest + (i * 2)); if (swap) *ptr = ntohs(word); else *ptr = word; } return (err ? 1 : 0); } /* * NICs older than the 3c905B have only one multicast option, which * is to enable reception of all multicast frames. */ static void xl_setmulti(struct xl_softc *sc) { struct ifnet *ifp = &sc->arpcom.ac_if; struct ifmultiaddr *ifma; u_int8_t rxfilt; int mcnt = 0; XL_LOCK_ASSERT(sc); XL_SEL_WIN(5); rxfilt = CSR_READ_1(sc, XL_W5_RX_FILTER); if (ifp->if_flags & IFF_ALLMULTI) { rxfilt |= XL_RXFILTER_ALLMULTI; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); return; } TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) mcnt++; if (mcnt) rxfilt |= XL_RXFILTER_ALLMULTI; else rxfilt &= ~XL_RXFILTER_ALLMULTI; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); } /* * 3c905B adapters have a hash filter that we can program. 
*/ static void xl_setmulti_hash(struct xl_softc *sc) { struct ifnet *ifp = &sc->arpcom.ac_if; int h = 0, i; struct ifmultiaddr *ifma; u_int8_t rxfilt; int mcnt = 0; XL_LOCK_ASSERT(sc); XL_SEL_WIN(5); rxfilt = CSR_READ_1(sc, XL_W5_RX_FILTER); if (ifp->if_flags & IFF_ALLMULTI) { rxfilt |= XL_RXFILTER_ALLMULTI; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); return; } else rxfilt &= ~XL_RXFILTER_ALLMULTI; /* first, zot all the existing hash bits */ for (i = 0; i < XL_HASHFILT_SIZE; i++) CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_HASH|i); /* now program new ones */ TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; /* * Note: the 3c905B currently only supports a 64-bit hash * table, which means we really only need 6 bits, but the * manual indicates that future chip revisions will have a * 256-bit hash table, hence the routine is set up to * calculate 8 bits of position info in case we need it some * day. * Note II, The Sequel: _CURRENT_ versions of the 3c905B have * a 256 bit hash table. This means we have to use all 8 bits * regardless. On older cards, the upper 2 bits will be * ignored. Grrrr.... */ h = ether_crc32_be(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN) & 0xFF; CSR_WRITE_2(sc, XL_COMMAND, h | XL_CMD_RX_SET_HASH | XL_HASH_SET); mcnt++; } if (mcnt) rxfilt |= XL_RXFILTER_MULTIHASH; else rxfilt &= ~XL_RXFILTER_MULTIHASH; CSR_WRITE_2(sc, XL_COMMAND, rxfilt | XL_CMD_RX_SET_FILT); } #ifdef notdef static void xl_testpacket(struct xl_softc *sc) { struct mbuf *m; struct ifnet *ifp = &sc->arpcom.ac_if; MGETHDR(m, M_DONTWAIT, MT_DATA); if (m == NULL) return; bcopy(&sc->arpcom.ac_enaddr, mtod(m, struct ether_header *)->ether_dhost, ETHER_ADDR_LEN); bcopy(&sc->arpcom.ac_enaddr, mtod(m, struct ether_header *)->ether_shost, ETHER_ADDR_LEN); mtod(m, struct ether_header *)->ether_type = htons(3); mtod(m, unsigned char *)[14] = 0; mtod(m, unsigned char *)[15] = 0; mtod(m, unsigned char *)[16] = 0xE3; m->m_len = m->m_pkthdr.len = sizeof(struct ether_header) + 3; IFQ_ENQUEUE(&ifp->if_snd, m); xl_start(ifp); } #endif static void xl_setcfg(struct xl_softc *sc) { u_int32_t icfg; /*XL_LOCK_ASSERT(sc);*/ XL_SEL_WIN(3); icfg = CSR_READ_4(sc, XL_W3_INTERNAL_CFG); icfg &= ~XL_ICFG_CONNECTOR_MASK; if (sc->xl_media & XL_MEDIAOPT_MII || sc->xl_media & XL_MEDIAOPT_BT4) icfg |= (XL_XCVR_MII << XL_ICFG_CONNECTOR_BITS); if (sc->xl_media & XL_MEDIAOPT_BTX) icfg |= (XL_XCVR_AUTO << XL_ICFG_CONNECTOR_BITS); CSR_WRITE_4(sc, XL_W3_INTERNAL_CFG, icfg); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_COAX_STOP); } static void xl_setmode(struct xl_softc *sc, int media) { u_int32_t icfg; u_int16_t mediastat; char *pmsg = "", *dmsg = ""; /*XL_LOCK_ASSERT(sc);*/ XL_SEL_WIN(4); mediastat = CSR_READ_2(sc, XL_W4_MEDIA_STATUS); XL_SEL_WIN(3); icfg = CSR_READ_4(sc, XL_W3_INTERNAL_CFG); if (sc->xl_media & XL_MEDIAOPT_BT) { if (IFM_SUBTYPE(media) == IFM_10_T) { pmsg = "10baseT transceiver"; sc->xl_xcvr = XL_XCVR_10BT; icfg &= ~XL_ICFG_CONNECTOR_MASK; icfg |= (XL_XCVR_10BT << XL_ICFG_CONNECTOR_BITS); mediastat |= XL_MEDIASTAT_LINKBEAT | XL_MEDIASTAT_JABGUARD; mediastat &= ~XL_MEDIASTAT_SQEENB; } } if (sc->xl_media & XL_MEDIAOPT_BFX) { if (IFM_SUBTYPE(media) == IFM_100_FX) { pmsg = "100baseFX port"; sc->xl_xcvr = XL_XCVR_100BFX; icfg &= ~XL_ICFG_CONNECTOR_MASK; icfg |= (XL_XCVR_100BFX << XL_ICFG_CONNECTOR_BITS); mediastat |= XL_MEDIASTAT_LINKBEAT; mediastat &= ~XL_MEDIASTAT_SQEENB; } } if (sc->xl_media & (XL_MEDIAOPT_AUI|XL_MEDIAOPT_10FL)) { if (IFM_SUBTYPE(media) == 
IFM_10_5) { pmsg = "AUI port"; sc->xl_xcvr = XL_XCVR_AUI; icfg &= ~XL_ICFG_CONNECTOR_MASK; icfg |= (XL_XCVR_AUI << XL_ICFG_CONNECTOR_BITS); mediastat &= ~(XL_MEDIASTAT_LINKBEAT | XL_MEDIASTAT_JABGUARD); mediastat |= ~XL_MEDIASTAT_SQEENB; } if (IFM_SUBTYPE(media) == IFM_10_FL) { pmsg = "10baseFL transceiver"; sc->xl_xcvr = XL_XCVR_AUI; icfg &= ~XL_ICFG_CONNECTOR_MASK; icfg |= (XL_XCVR_AUI << XL_ICFG_CONNECTOR_BITS); mediastat &= ~(XL_MEDIASTAT_LINKBEAT | XL_MEDIASTAT_JABGUARD); mediastat |= ~XL_MEDIASTAT_SQEENB; } } if (sc->xl_media & XL_MEDIAOPT_BNC) { if (IFM_SUBTYPE(media) == IFM_10_2) { pmsg = "AUI port"; sc->xl_xcvr = XL_XCVR_COAX; icfg &= ~XL_ICFG_CONNECTOR_MASK; icfg |= (XL_XCVR_COAX << XL_ICFG_CONNECTOR_BITS); mediastat &= ~(XL_MEDIASTAT_LINKBEAT | XL_MEDIASTAT_JABGUARD | XL_MEDIASTAT_SQEENB); } } if ((media & IFM_GMASK) == IFM_FDX || IFM_SUBTYPE(media) == IFM_100_FX) { dmsg = "full"; XL_SEL_WIN(3); CSR_WRITE_1(sc, XL_W3_MAC_CTRL, XL_MACCTRL_DUPLEX); } else { dmsg = "half"; XL_SEL_WIN(3); CSR_WRITE_1(sc, XL_W3_MAC_CTRL, (CSR_READ_1(sc, XL_W3_MAC_CTRL) & ~XL_MACCTRL_DUPLEX)); } if (IFM_SUBTYPE(media) == IFM_10_2) CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_COAX_START); else CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_COAX_STOP); CSR_WRITE_4(sc, XL_W3_INTERNAL_CFG, icfg); XL_SEL_WIN(4); CSR_WRITE_2(sc, XL_W4_MEDIA_STATUS, mediastat); DELAY(800); XL_SEL_WIN(7); if_printf(&sc->arpcom.ac_if, "selecting %s, %s duplex\n", pmsg, dmsg); } static void xl_reset(struct xl_softc *sc) { register int i; XL_LOCK_ASSERT(sc); XL_SEL_WIN(0); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RESET | ((sc->xl_flags & XL_FLAG_WEIRDRESET) ? XL_RESETOPT_DISADVFD:0)); /* * If we're using memory mapped register mode, pause briefly * after issuing the reset command before trying to access any * other registers. With my 3c575C cardbus card, failing to do * this results in the system locking up while trying to poll * the command busy bit in the status register. */ if (sc->xl_flags & XL_FLAG_USE_MMIO) DELAY(100000); for (i = 0; i < XL_TIMEOUT; i++) { DELAY(10); if (!(CSR_READ_2(sc, XL_STATUS) & XL_STAT_CMDBUSY)) break; } if (i == XL_TIMEOUT) if_printf(&sc->arpcom.ac_if, "reset didn't complete\n"); /* Reset TX and RX. */ /* Note: the RX reset takes an absurd amount of time * on newer versions of the Tornado chips such as those * on the 3c905CX and newer 3c908C cards. We wait an * extra amount of time so that xl_wait() doesn't complain * and annoy the users. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_RESET); DELAY(100000); xl_wait(sc); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_RESET); xl_wait(sc); if (sc->xl_flags & XL_FLAG_INVERT_LED_PWR || sc->xl_flags & XL_FLAG_INVERT_MII_PWR) { XL_SEL_WIN(2); CSR_WRITE_2(sc, XL_W2_RESET_OPTIONS, CSR_READ_2(sc, XL_W2_RESET_OPTIONS) | ((sc->xl_flags & XL_FLAG_INVERT_LED_PWR) ? XL_RESETOPT_INVERT_LED : 0) | ((sc->xl_flags & XL_FLAG_INVERT_MII_PWR) ? XL_RESETOPT_INVERT_MII : 0)); } /* Wait a little while for the chip to get its brains in order. */ DELAY(100000); } /* * Probe for a 3Com Etherlink XL chip. Check the PCI vendor and device * IDs against our list and return a device name if we find a match. 
*/ static int xl_probe(device_t dev) { struct xl_type *t; t = xl_devs; while (t->xl_name != NULL) { if ((pci_get_vendor(dev) == t->xl_vid) && (pci_get_device(dev) == t->xl_did)) { device_set_desc(dev, t->xl_name); return (BUS_PROBE_DEFAULT); } t++; } return (ENXIO); } /* * This routine is a kludge to work around possible hardware faults * or manufacturing defects that can cause the media options register * (or reset options register, as it's called for the first generation * 3c90x adapters) to return an incorrect result. I have encountered * one Dell Latitude laptop docking station with an integrated 3c905-TX * which doesn't have any of the 'mediaopt' bits set. This screws up * the attach routine pretty badly because it doesn't know what media * to look for. If we find ourselves in this predicament, this routine * will try to guess the media options values and warn the user of a * possible manufacturing defect with his adapter/system/whatever. */ static void xl_mediacheck(struct xl_softc *sc) { XL_LOCK_ASSERT(sc); /* * If some of the media options bits are set, assume they are * correct. If not, try to figure it out down below. * XXX I should check for 10baseFL, but I don't have an adapter * to test with. */ if (sc->xl_media & (XL_MEDIAOPT_MASK & ~XL_MEDIAOPT_VCO)) { /* * Check the XCVR value. If it's not in the normal range * of values, we need to fake it up here. */ if (sc->xl_xcvr <= XL_XCVR_AUTO) return; else { if_printf(&sc->arpcom.ac_if, "bogus xcvr value in EEPROM (%x)\n", sc->xl_xcvr); if_printf(&sc->arpcom.ac_if, "choosing new default based on card type\n"); } } else { if (sc->xl_type == XL_TYPE_905B && sc->xl_media & XL_MEDIAOPT_10FL) return; if_printf(&sc->arpcom.ac_if, "WARNING: no media options bits set in the media options register!!\n"); if_printf(&sc->arpcom.ac_if, "this could be a manufacturing defect in your adapter or system\n"); if_printf(&sc->arpcom.ac_if, "attempting to guess media type; you should probably consult your vendor\n"); } xl_choose_xcvr(sc, 1); } static void xl_choose_xcvr(struct xl_softc *sc, int verbose) { u_int16_t devid; /* * Read the device ID from the EEPROM. * This is what's loaded into the PCI device ID register, so it has * to be correct otherwise we wouldn't have gotten this far. 
*/ xl_read_eeprom(sc, (caddr_t)&devid, XL_EE_PRODID, 1, 0); switch (devid) { case TC_DEVICEID_BOOMERANG_10BT: /* 3c900-TPO */ case TC_DEVICEID_KRAKATOA_10BT: /* 3c900B-TPO */ sc->xl_media = XL_MEDIAOPT_BT; sc->xl_xcvr = XL_XCVR_10BT; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing 10BaseT transceiver\n"); break; case TC_DEVICEID_BOOMERANG_10BT_COMBO: /* 3c900-COMBO */ case TC_DEVICEID_KRAKATOA_10BT_COMBO: /* 3c900B-COMBO */ sc->xl_media = XL_MEDIAOPT_BT|XL_MEDIAOPT_BNC|XL_MEDIAOPT_AUI; sc->xl_xcvr = XL_XCVR_10BT; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing COMBO (AUI/BNC/TP)\n"); break; case TC_DEVICEID_KRAKATOA_10BT_TPC: /* 3c900B-TPC */ sc->xl_media = XL_MEDIAOPT_BT|XL_MEDIAOPT_BNC; sc->xl_xcvr = XL_XCVR_10BT; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing TPC (BNC/TP)\n"); break; case TC_DEVICEID_CYCLONE_10FL: /* 3c900B-FL */ sc->xl_media = XL_MEDIAOPT_10FL; sc->xl_xcvr = XL_XCVR_AUI; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing 10baseFL\n"); break; case TC_DEVICEID_BOOMERANG_10_100BT: /* 3c905-TX */ case TC_DEVICEID_HURRICANE_555: /* 3c555 */ case TC_DEVICEID_HURRICANE_556: /* 3c556 */ case TC_DEVICEID_HURRICANE_556B: /* 3c556B */ case TC_DEVICEID_HURRICANE_575A: /* 3c575TX */ case TC_DEVICEID_HURRICANE_575B: /* 3c575B */ case TC_DEVICEID_HURRICANE_575C: /* 3c575C */ case TC_DEVICEID_HURRICANE_656: /* 3c656 */ case TC_DEVICEID_HURRICANE_656B: /* 3c656B */ case TC_DEVICEID_TORNADO_656C: /* 3c656C */ case TC_DEVICEID_TORNADO_10_100BT_920B: /* 3c920B-EMB */ case TC_DEVICEID_TORNADO_10_100BT_920B_WNM: /* 3c920B-EMB-WNM */ sc->xl_media = XL_MEDIAOPT_MII; sc->xl_xcvr = XL_XCVR_MII; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing MII\n"); break; case TC_DEVICEID_BOOMERANG_100BT4: /* 3c905-T4 */ case TC_DEVICEID_CYCLONE_10_100BT4: /* 3c905B-T4 */ sc->xl_media = XL_MEDIAOPT_BT4; sc->xl_xcvr = XL_XCVR_MII; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing 100baseT4/MII\n"); break; case TC_DEVICEID_HURRICANE_10_100BT: /* 3c905B-TX */ case TC_DEVICEID_HURRICANE_10_100BT_SERV:/*3c980-TX */ case TC_DEVICEID_TORNADO_10_100BT_SERV: /* 3c980C-TX */ case TC_DEVICEID_HURRICANE_SOHO100TX: /* 3cSOHO100-TX */ case TC_DEVICEID_TORNADO_10_100BT: /* 3c905C-TX */ case TC_DEVICEID_TORNADO_HOMECONNECT: /* 3c450-TX */ sc->xl_media = XL_MEDIAOPT_BTX; sc->xl_xcvr = XL_XCVR_AUTO; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing 10/100 internal\n"); break; case TC_DEVICEID_CYCLONE_10_100_COMBO: /* 3c905B-COMBO */ sc->xl_media = XL_MEDIAOPT_BTX|XL_MEDIAOPT_BNC|XL_MEDIAOPT_AUI; sc->xl_xcvr = XL_XCVR_AUTO; if (verbose) if_printf(&sc->arpcom.ac_if, "guessing 10/100 plus BNC/AUI\n"); break; default: if_printf(&sc->arpcom.ac_if, "unknown device ID: %x -- defaulting to 10baseT\n", devid); sc->xl_media = XL_MEDIAOPT_BT; break; } } /* * Attach the interface. Allocate softc structures, do ifmedia * setup and ethernet/BPF attach. 
*/ static int xl_attach(device_t dev) { u_char eaddr[ETHER_ADDR_LEN]; u_int16_t xcvr[2]; struct xl_softc *sc; struct ifnet *ifp; int media; int unit, error = 0, rid, res; uint16_t did; sc = device_get_softc(dev); unit = device_get_unit(dev); mtx_init(&sc->xl_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); ifmedia_init(&sc->ifmedia, 0, xl_ifmedia_upd, xl_ifmedia_sts); did = pci_get_device(dev); sc->xl_flags = 0; if (did == TC_DEVICEID_HURRICANE_555) sc->xl_flags |= XL_FLAG_EEPROM_OFFSET_30 | XL_FLAG_PHYOK; if (did == TC_DEVICEID_HURRICANE_556 || did == TC_DEVICEID_HURRICANE_556B) sc->xl_flags |= XL_FLAG_FUNCREG | XL_FLAG_PHYOK | XL_FLAG_EEPROM_OFFSET_30 | XL_FLAG_WEIRDRESET | XL_FLAG_INVERT_LED_PWR | XL_FLAG_INVERT_MII_PWR; if (did == TC_DEVICEID_HURRICANE_555 || did == TC_DEVICEID_HURRICANE_556) sc->xl_flags |= XL_FLAG_8BITROM; if (did == TC_DEVICEID_HURRICANE_556B) sc->xl_flags |= XL_FLAG_NO_XCVR_PWR; if (did == TC_DEVICEID_HURRICANE_575A || did == TC_DEVICEID_HURRICANE_575B || did == TC_DEVICEID_HURRICANE_575C || did == TC_DEVICEID_HURRICANE_656B || did == TC_DEVICEID_TORNADO_656C) sc->xl_flags |= XL_FLAG_FUNCREG | XL_FLAG_PHYOK | XL_FLAG_EEPROM_OFFSET_30 | XL_FLAG_8BITROM; if (did == TC_DEVICEID_HURRICANE_656) sc->xl_flags |= XL_FLAG_FUNCREG | XL_FLAG_PHYOK; if (did == TC_DEVICEID_HURRICANE_575B) sc->xl_flags |= XL_FLAG_INVERT_LED_PWR; if (did == TC_DEVICEID_HURRICANE_575C) sc->xl_flags |= XL_FLAG_INVERT_MII_PWR; if (did == TC_DEVICEID_TORNADO_656C) sc->xl_flags |= XL_FLAG_INVERT_MII_PWR; if (did == TC_DEVICEID_HURRICANE_656 || did == TC_DEVICEID_HURRICANE_656B) sc->xl_flags |= XL_FLAG_INVERT_MII_PWR | XL_FLAG_INVERT_LED_PWR; if (did == TC_DEVICEID_TORNADO_10_100BT_920B || did == TC_DEVICEID_TORNADO_10_100BT_920B_WNM) sc->xl_flags |= XL_FLAG_PHYOK; switch (did) { case TC_DEVICEID_BOOMERANG_10_100BT: /* 3c905-TX */ case TC_DEVICEID_HURRICANE_575A: case TC_DEVICEID_HURRICANE_575B: case TC_DEVICEID_HURRICANE_575C: sc->xl_flags |= XL_FLAG_NO_MMIO; break; default: break; } /* * Map control/status registers. */ pci_enable_busmaster(dev); if ((sc->xl_flags & XL_FLAG_NO_MMIO) == 0) { rid = XL_PCI_LOMEM; res = SYS_RES_MEMORY; sc->xl_res = bus_alloc_resource_any(dev, res, &rid, RF_ACTIVE); } if (sc->xl_res != NULL) { sc->xl_flags |= XL_FLAG_USE_MMIO; if (bootverbose) device_printf(dev, "using memory mapped I/O\n"); } else { rid = XL_PCI_LOIO; res = SYS_RES_IOPORT; sc->xl_res = bus_alloc_resource_any(dev, res, &rid, RF_ACTIVE); if (sc->xl_res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); error = ENXIO; goto fail; } if (bootverbose) device_printf(dev, "using port I/O\n"); } sc->xl_btag = rman_get_bustag(sc->xl_res); sc->xl_bhandle = rman_get_bushandle(sc->xl_res); if (sc->xl_flags & XL_FLAG_FUNCREG) { rid = XL_PCI_FUNCMEM; sc->xl_fres = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (sc->xl_fres == NULL) { device_printf(dev, "couldn't map ports/memory\n"); error = ENXIO; goto fail; } sc->xl_ftag = rman_get_bustag(sc->xl_fres); sc->xl_fhandle = rman_get_bushandle(sc->xl_fres); } /* Allocate interrupt */ rid = 0; sc->xl_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->xl_irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } /* Initialize interface name. */ ifp = &sc->arpcom.ac_if; ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); XL_LOCK(sc); /* Reset the adapter. */ xl_reset(sc); /* * Get station address from the EEPROM. 
*/ if (xl_read_eeprom(sc, (caddr_t)&eaddr, XL_EE_OEM_ADR0, 3, 1)) { device_printf(dev, "failed to read station address\n"); error = ENXIO; XL_UNLOCK(sc); goto fail; } XL_UNLOCK(sc); sc->xl_unit = unit; callout_handle_init(&sc->xl_stat_ch); bcopy(eaddr, (char *)&sc->arpcom.ac_enaddr, ETHER_ADDR_LEN); /* * Now allocate a tag for the DMA descriptor lists and a chunk * of DMA-able memory based on the tag. Also obtain the DMA * addresses of the RX and TX ring, which we'll need later. * All of our lists are allocated as a contiguous block * of memory. */ error = bus_dma_tag_create(NULL, 8, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, XL_RX_LIST_SZ, 1, XL_RX_LIST_SZ, 0, NULL, NULL, &sc->xl_ldata.xl_rx_tag); if (error) { device_printf(dev, "failed to allocate rx dma tag\n"); goto fail; } error = bus_dmamem_alloc(sc->xl_ldata.xl_rx_tag, (void **)&sc->xl_ldata.xl_rx_list, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->xl_ldata.xl_rx_dmamap); if (error) { device_printf(dev, "no memory for rx list buffers!\n"); bus_dma_tag_destroy(sc->xl_ldata.xl_rx_tag); sc->xl_ldata.xl_rx_tag = NULL; goto fail; } error = bus_dmamap_load(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap, sc->xl_ldata.xl_rx_list, XL_RX_LIST_SZ, xl_dma_map_addr, &sc->xl_ldata.xl_rx_dmaaddr, BUS_DMA_NOWAIT); if (error) { device_printf(dev, "cannot get dma address of the rx ring!\n"); bus_dmamem_free(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_list, sc->xl_ldata.xl_rx_dmamap); bus_dma_tag_destroy(sc->xl_ldata.xl_rx_tag); sc->xl_ldata.xl_rx_tag = NULL; goto fail; } error = bus_dma_tag_create(NULL, 8, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, XL_TX_LIST_SZ, 1, XL_TX_LIST_SZ, 0, NULL, NULL, &sc->xl_ldata.xl_tx_tag); if (error) { device_printf(dev, "failed to allocate tx dma tag\n"); goto fail; } error = bus_dmamem_alloc(sc->xl_ldata.xl_tx_tag, (void **)&sc->xl_ldata.xl_tx_list, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->xl_ldata.xl_tx_dmamap); if (error) { device_printf(dev, "no memory for list buffers!\n"); bus_dma_tag_destroy(sc->xl_ldata.xl_tx_tag); sc->xl_ldata.xl_tx_tag = NULL; goto fail; } error = bus_dmamap_load(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_dmamap, sc->xl_ldata.xl_tx_list, XL_TX_LIST_SZ, xl_dma_map_addr, &sc->xl_ldata.xl_tx_dmaaddr, BUS_DMA_NOWAIT); if (error) { device_printf(dev, "cannot get dma address of the tx ring!\n"); bus_dmamem_free(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_list, sc->xl_ldata.xl_tx_dmamap); bus_dma_tag_destroy(sc->xl_ldata.xl_tx_tag); sc->xl_ldata.xl_tx_tag = NULL; goto fail; } /* * Allocate a DMA tag for the mapping of mbufs. */ error = bus_dma_tag_create(NULL, 1, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, MCLBYTES * XL_MAXFRAGS, XL_MAXFRAGS, MCLBYTES, 0, NULL, NULL, &sc->xl_mtag); if (error) { device_printf(dev, "failed to allocate mbuf dma tag\n"); goto fail; } /* We need a spare DMA map for the RX ring. */ error = bus_dmamap_create(sc->xl_mtag, 0, &sc->xl_tmpmap); if (error) goto fail; XL_LOCK(sc); /* * Figure out the card type. 3c905B adapters have the * 'supportsNoTxLength' bit set in the capabilities * word in the EEPROM. * Note: my 3c575C cardbus card lies. It returns a value * of 0x1578 for its capabilities word, which is somewhat * nonsensical. Another way to distinguish a 3c90x chip * from a 3c90xB/C chip is to check for the 'supportsLargePackets' * bit. This will only be set for 3c90x boomerage chips. 
*/ xl_read_eeprom(sc, (caddr_t)&sc->xl_caps, XL_EE_CAPS, 1, 0); if (sc->xl_caps & XL_CAPS_NO_TXLENGTH || !(sc->xl_caps & XL_CAPS_LARGE_PKTS)) sc->xl_type = XL_TYPE_905B; else sc->xl_type = XL_TYPE_90X; ifp->if_mtu = ETHERMTU; ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = xl_ioctl; ifp->if_capabilities = IFCAP_VLAN_MTU; if (sc->xl_type == XL_TYPE_905B) { ifp->if_hwassist = XL905B_CSUM_FEATURES; #ifdef XL905B_TXCSUM_BROKEN ifp->if_capabilities |= IFCAP_RXCSUM; #else ifp->if_capabilities |= IFCAP_HWCSUM; #endif } +#ifdef DEVICE_POLLING + ifp->if_capabilities |= IFCAP_POLLING; +#endif /* DEVICE_POLLING */ ifp->if_start = xl_start; ifp->if_watchdog = xl_watchdog; ifp->if_init = xl_init; ifp->if_baudrate = 10000000; IFQ_SET_MAXLEN(&ifp->if_snd, XL_TX_LIST_CNT - 1); ifp->if_snd.ifq_drv_maxlen = XL_TX_LIST_CNT - 1; IFQ_SET_READY(&ifp->if_snd); ifp->if_capenable = ifp->if_capabilities; /* * Now we have to see what sort of media we have. * This includes probing for an MII interace and a * possible PHY. */ XL_SEL_WIN(3); sc->xl_media = CSR_READ_2(sc, XL_W3_MEDIA_OPT); if (bootverbose) device_printf(dev, "media options word: %x\n", sc->xl_media); xl_read_eeprom(sc, (char *)&xcvr, XL_EE_ICFG_0, 2, 0); sc->xl_xcvr = xcvr[0] | xcvr[1] << 16; sc->xl_xcvr &= XL_ICFG_CONNECTOR_MASK; sc->xl_xcvr >>= XL_ICFG_CONNECTOR_BITS; xl_mediacheck(sc); /* XXX Downcalls to ifmedia, miibus about to happen. */ XL_UNLOCK(sc); if (sc->xl_media & XL_MEDIAOPT_MII || sc->xl_media & XL_MEDIAOPT_BTX || sc->xl_media & XL_MEDIAOPT_BT4) { if (bootverbose) device_printf(dev, "found MII/AUTO\n"); xl_setcfg(sc); if (mii_phy_probe(dev, &sc->xl_miibus, xl_ifmedia_upd, xl_ifmedia_sts)) { device_printf(dev, "no PHY found!\n"); error = ENXIO; goto fail; } goto done; } /* * Sanity check. If the user has selected "auto" and this isn't * a 10/100 card of some kind, we need to force the transceiver * type to something sane. */ if (sc->xl_xcvr == XL_XCVR_AUTO) { /* XXX Direct hardware access needs lock coverage. */ XL_LOCK(sc); xl_choose_xcvr(sc, bootverbose); XL_UNLOCK(sc); } /* * Do ifmedia setup. */ if (sc->xl_media & XL_MEDIAOPT_BT) { if (bootverbose) device_printf(dev, "found 10baseT\n"); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T|IFM_HDX, 0, NULL); if (sc->xl_caps & XL_CAPS_FULL_DUPLEX) ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T|IFM_FDX, 0, NULL); } if (sc->xl_media & (XL_MEDIAOPT_AUI|XL_MEDIAOPT_10FL)) { /* * Check for a 10baseFL board in disguise. */ if (sc->xl_type == XL_TYPE_905B && sc->xl_media == XL_MEDIAOPT_10FL) { if (bootverbose) device_printf(dev, "found 10baseFL\n"); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_FL, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_FL|IFM_HDX, 0, NULL); if (sc->xl_caps & XL_CAPS_FULL_DUPLEX) ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_FL|IFM_FDX, 0, NULL); } else { if (bootverbose) device_printf(dev, "found AUI\n"); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_5, 0, NULL); } } if (sc->xl_media & XL_MEDIAOPT_BNC) { if (bootverbose) device_printf(dev, "found BNC\n"); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_2, 0, NULL); } if (sc->xl_media & XL_MEDIAOPT_BFX) { if (bootverbose) device_printf(dev, "found 100baseFX\n"); ifp->if_baudrate = 100000000; ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_100_FX, 0, NULL); } /* XXX: Unlocked, leaf will take lock. 
*/ media = IFM_ETHER|IFM_100_TX|IFM_FDX; xl_choose_media(sc, &media); if (sc->xl_miibus == NULL) ifmedia_set(&sc->ifmedia, media); done: /* XXX: Unlocked hardware access, narrow race. */ if (sc->xl_flags & XL_FLAG_NO_XCVR_PWR) { XL_SEL_WIN(0); CSR_WRITE_2(sc, XL_W0_MFG_ID, XL_NO_XCVR_PWR_MAGICBITS); } /* * Call MI attach routine. */ ether_ifattach(ifp, eaddr); error = bus_setup_intr(dev, sc->xl_irq, INTR_TYPE_NET | INTR_MPSAFE, xl_intr, sc, &sc->xl_intrhand); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } fail: if (error) xl_detach(dev); return (error); } /* * Choose a default media. * XXX This is a leaf function only called by xl_attach() and * acquires/releases the non-recursible driver mutex. */ static void xl_choose_media(struct xl_softc *sc, int *media) { XL_LOCK(sc); switch (sc->xl_xcvr) { case XL_XCVR_10BT: *media = IFM_ETHER|IFM_10_T; xl_setmode(sc, *media); break; case XL_XCVR_AUI: if (sc->xl_type == XL_TYPE_905B && sc->xl_media == XL_MEDIAOPT_10FL) { *media = IFM_ETHER|IFM_10_FL; xl_setmode(sc, *media); } else { *media = IFM_ETHER|IFM_10_5; xl_setmode(sc, *media); } break; case XL_XCVR_COAX: *media = IFM_ETHER|IFM_10_2; xl_setmode(sc, *media); break; case XL_XCVR_AUTO: case XL_XCVR_100BTX: case XL_XCVR_MII: /* Chosen by miibus */ break; case XL_XCVR_100BFX: *media = IFM_ETHER|IFM_100_FX; break; default: if_printf(&sc->arpcom.ac_if, "unknown XCVR type: %d\n", sc->xl_xcvr); /* * This will probably be wrong, but it prevents * the ifmedia code from panicking. */ *media = IFM_ETHER|IFM_10_T; break; } XL_UNLOCK(sc); } /* * Shutdown hardware and free up resources. This can be called any * time after the mutex has been initialized. It is called in both * the error case in attach and the normal detach case so it needs * to be careful about only freeing resources that have actually been * allocated. */ static int xl_detach(device_t dev) { struct xl_softc *sc; struct ifnet *ifp; int rid, res; sc = device_get_softc(dev); ifp = &sc->arpcom.ac_if; KASSERT(mtx_initialized(&sc->xl_mtx), ("xl mutex not initialized")); XL_LOCK(sc); if (sc->xl_flags & XL_FLAG_USE_MMIO) { rid = XL_PCI_LOMEM; res = SYS_RES_MEMORY; } else { rid = XL_PCI_LOIO; res = SYS_RES_IOPORT; } /* These should only be active if attach succeeded */ if (device_is_attached(dev)) { xl_reset(sc); xl_stop(sc); ether_ifdetach(ifp); } if (sc->xl_miibus) device_delete_child(dev, sc->xl_miibus); bus_generic_detach(dev); ifmedia_removeall(&sc->ifmedia); if (sc->xl_intrhand) bus_teardown_intr(dev, sc->xl_irq, sc->xl_intrhand); if (sc->xl_irq) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->xl_irq); if (sc->xl_fres != NULL) bus_release_resource(dev, SYS_RES_MEMORY, XL_PCI_FUNCMEM, sc->xl_fres); if (sc->xl_res) bus_release_resource(dev, res, rid, sc->xl_res); if (sc->xl_mtag) { bus_dmamap_destroy(sc->xl_mtag, sc->xl_tmpmap); bus_dma_tag_destroy(sc->xl_mtag); } if (sc->xl_ldata.xl_rx_tag) { bus_dmamap_unload(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap); bus_dmamem_free(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_list, sc->xl_ldata.xl_rx_dmamap); bus_dma_tag_destroy(sc->xl_ldata.xl_rx_tag); } if (sc->xl_ldata.xl_tx_tag) { bus_dmamap_unload(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_dmamap); bus_dmamem_free(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_list, sc->xl_ldata.xl_tx_dmamap); bus_dma_tag_destroy(sc->xl_ldata.xl_tx_tag); } XL_UNLOCK(sc); mtx_destroy(&sc->xl_mtx); return (0); } /* * Initialize the transmit descriptors. 
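* The boomerang-style list built here is a NULL-terminated forward
* chain managed through the xl_tx_free/xl_tx_head/xl_tx_tail pointers;
* the 3c90xB variant further down instead closes the descriptors into
* a ring and tracks them with the xl_tx_prod/xl_tx_cons/xl_tx_cnt
* indices.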
*/ static int xl_list_tx_init(struct xl_softc *sc) { struct xl_chain_data *cd; struct xl_list_data *ld; int error, i; XL_LOCK_ASSERT(sc); cd = &sc->xl_cdata; ld = &sc->xl_ldata; for (i = 0; i < XL_TX_LIST_CNT; i++) { cd->xl_tx_chain[i].xl_ptr = &ld->xl_tx_list[i]; error = bus_dmamap_create(sc->xl_mtag, 0, &cd->xl_tx_chain[i].xl_map); if (error) return (error); cd->xl_tx_chain[i].xl_phys = ld->xl_tx_dmaaddr + i * sizeof(struct xl_list); if (i == (XL_TX_LIST_CNT - 1)) cd->xl_tx_chain[i].xl_next = NULL; else cd->xl_tx_chain[i].xl_next = &cd->xl_tx_chain[i + 1]; } cd->xl_tx_free = &cd->xl_tx_chain[0]; cd->xl_tx_tail = cd->xl_tx_head = NULL; bus_dmamap_sync(ld->xl_tx_tag, ld->xl_tx_dmamap, BUS_DMASYNC_PREWRITE); return (0); } /* * Initialize the transmit descriptors. */ static int xl_list_tx_init_90xB(struct xl_softc *sc) { struct xl_chain_data *cd; struct xl_list_data *ld; int error, i; XL_LOCK_ASSERT(sc); cd = &sc->xl_cdata; ld = &sc->xl_ldata; for (i = 0; i < XL_TX_LIST_CNT; i++) { cd->xl_tx_chain[i].xl_ptr = &ld->xl_tx_list[i]; error = bus_dmamap_create(sc->xl_mtag, 0, &cd->xl_tx_chain[i].xl_map); if (error) return (error); cd->xl_tx_chain[i].xl_phys = ld->xl_tx_dmaaddr + i * sizeof(struct xl_list); if (i == (XL_TX_LIST_CNT - 1)) cd->xl_tx_chain[i].xl_next = &cd->xl_tx_chain[0]; else cd->xl_tx_chain[i].xl_next = &cd->xl_tx_chain[i + 1]; if (i == 0) cd->xl_tx_chain[i].xl_prev = &cd->xl_tx_chain[XL_TX_LIST_CNT - 1]; else cd->xl_tx_chain[i].xl_prev = &cd->xl_tx_chain[i - 1]; } bzero(ld->xl_tx_list, XL_TX_LIST_SZ); ld->xl_tx_list[0].xl_status = htole32(XL_TXSTAT_EMPTY); cd->xl_tx_prod = 1; cd->xl_tx_cons = 1; cd->xl_tx_cnt = 0; bus_dmamap_sync(ld->xl_tx_tag, ld->xl_tx_dmamap, BUS_DMASYNC_PREWRITE); return (0); } /* * Initialize the RX descriptors and allocate mbufs for them. Note that * we arrange the descriptors in a closed ring, so that the last descriptor * points back to the first. */ static int xl_list_rx_init(struct xl_softc *sc) { struct xl_chain_data *cd; struct xl_list_data *ld; int error, i, next; u_int32_t nextptr; XL_LOCK_ASSERT(sc); cd = &sc->xl_cdata; ld = &sc->xl_ldata; for (i = 0; i < XL_RX_LIST_CNT; i++) { cd->xl_rx_chain[i].xl_ptr = &ld->xl_rx_list[i]; error = bus_dmamap_create(sc->xl_mtag, 0, &cd->xl_rx_chain[i].xl_map); if (error) return (error); error = xl_newbuf(sc, &cd->xl_rx_chain[i]); if (error) return (error); if (i == (XL_RX_LIST_CNT - 1)) next = 0; else next = i + 1; nextptr = ld->xl_rx_dmaaddr + next * sizeof(struct xl_list_onefrag); cd->xl_rx_chain[i].xl_next = &cd->xl_rx_chain[next]; ld->xl_rx_list[i].xl_next = htole32(nextptr); } bus_dmamap_sync(ld->xl_rx_tag, ld->xl_rx_dmamap, BUS_DMASYNC_PREWRITE); cd->xl_rx_head = &cd->xl_rx_chain[0]; return (0); } /* * Initialize an RX descriptor and attach an MBUF cluster. * If we fail to do so, we need to leave the old mbuf and * the old DMA map untouched so that it can be reused. */ static int xl_newbuf(struct xl_softc *sc, struct xl_chain_onefrag *c) { struct mbuf *m_new = NULL; bus_dmamap_t map; int error; u_int32_t baddr; XL_LOCK_ASSERT(sc); m_new = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR); if (m_new == NULL) return (ENOBUFS); m_new->m_len = m_new->m_pkthdr.len = MCLBYTES; /* Force longword alignment for packet payload. 
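* ETHER_ALIGN is 2, so trimming the cluster here makes the 14-byte
* Ethernet header end on a longword boundary, which keeps the IP
* header that follows it 4-byte aligned.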
*/ m_adj(m_new, ETHER_ALIGN); error = bus_dmamap_load_mbuf(sc->xl_mtag, sc->xl_tmpmap, m_new, xl_dma_map_rxbuf, &baddr, BUS_DMA_NOWAIT); if (error) { m_freem(m_new); if_printf(&sc->arpcom.ac_if, "can't map mbuf (error %d)\n", error); return (error); } bus_dmamap_unload(sc->xl_mtag, c->xl_map); map = c->xl_map; c->xl_map = sc->xl_tmpmap; sc->xl_tmpmap = map; c->xl_mbuf = m_new; c->xl_ptr->xl_frag.xl_len = htole32(m_new->m_len | XL_LAST_FRAG); c->xl_ptr->xl_status = 0; c->xl_ptr->xl_frag.xl_addr = htole32(baddr); bus_dmamap_sync(sc->xl_mtag, c->xl_map, BUS_DMASYNC_PREREAD); return (0); } static int xl_rx_resync(struct xl_softc *sc) { struct xl_chain_onefrag *pos; int i; XL_LOCK_ASSERT(sc); pos = sc->xl_cdata.xl_rx_head; for (i = 0; i < XL_RX_LIST_CNT; i++) { if (pos->xl_ptr->xl_status) break; pos = pos->xl_next; } if (i == XL_RX_LIST_CNT) return (0); sc->xl_cdata.xl_rx_head = pos; return (EAGAIN); } /* * A frame has been uploaded: pass the resulting mbuf chain up to * the higher level protocols. */ static void xl_rxeof(struct xl_softc *sc) { struct mbuf *m; struct ifnet *ifp = &sc->arpcom.ac_if; struct xl_chain_onefrag *cur_rx; int total_len = 0; u_int32_t rxstat; XL_LOCK_ASSERT(sc); again: bus_dmamap_sync(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap, BUS_DMASYNC_POSTREAD); while ((rxstat = le32toh(sc->xl_cdata.xl_rx_head->xl_ptr->xl_status))) { +#ifdef DEVICE_POLLING + if (ifp->if_flags & IFF_POLLING) { + if (sc->rxcycles <= 0) + break; + sc->rxcycles--; + } +#endif /* DEVICE_POLLING */ cur_rx = sc->xl_cdata.xl_rx_head; sc->xl_cdata.xl_rx_head = cur_rx->xl_next; total_len = rxstat & XL_RXSTAT_LENMASK; /* * Since we have told the chip to allow large frames, * we need to trap giant frame errors in software. We allow * a little more than the normal frame size to account for * frames with VLAN tags. */ if (total_len > XL_MAX_FRAMELEN) rxstat |= (XL_RXSTAT_UP_ERROR|XL_RXSTAT_OVERSIZE); /* * If an error occurs, update stats, clear the * status word and leave the mbuf cluster in place: * it should simply get re-used next time this descriptor * comes up in the ring. */ if (rxstat & XL_RXSTAT_UP_ERROR) { ifp->if_ierrors++; cur_rx->xl_ptr->xl_status = 0; bus_dmamap_sync(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap, BUS_DMASYNC_PREWRITE); continue; } /* * If the error bit was not set, the upload complete * bit should be set which means we have a valid packet. * If not, something truly strange has happened. */ if (!(rxstat & XL_RXSTAT_UP_CMPLT)) { if_printf(ifp, "bad receive status -- packet dropped\n"); ifp->if_ierrors++; cur_rx->xl_ptr->xl_status = 0; bus_dmamap_sync(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap, BUS_DMASYNC_PREWRITE); continue; } /* No errors; receive the packet. */ bus_dmamap_sync(sc->xl_mtag, cur_rx->xl_map, BUS_DMASYNC_POSTREAD); m = cur_rx->xl_mbuf; /* * Try to conjure up a new mbuf cluster. If that * fails, it means we have an out of memory condition and * should leave the buffer in place and continue. This will * result in a lost packet, but there's little else we * can do in this situation. */ if (xl_newbuf(sc, cur_rx)) { ifp->if_ierrors++; cur_rx->xl_ptr->xl_status = 0; bus_dmamap_sync(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap, BUS_DMASYNC_PREWRITE); continue; } bus_dmamap_sync(sc->xl_ldata.xl_rx_tag, sc->xl_ldata.xl_rx_dmamap, BUS_DMASYNC_PREWRITE); ifp->if_ipackets++; m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = m->m_len = total_len; if (ifp->if_capenable & IFCAP_RXCSUM) { /* Do IP checksum checking. 
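* The translation from descriptor status bits to mbuf checksum flags
* performed below is, in short (3c905B only):
*
*   XL_RXSTAT_IPCKOK set                     -> CSUM_IP_CHECKED
*   XL_RXSTAT_IPCKERR clear                  -> CSUM_IP_VALID
*   (TCPCOK set and TCPCKERR clear) or
*   (UDPCKOK set and UDPCKERR clear)         -> CSUM_DATA_VALID and
*                                               CSUM_PSEUDO_HDR, with
*                                               csum_data set to 0xffff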
*/ if (rxstat & XL_RXSTAT_IPCKOK) m->m_pkthdr.csum_flags |= CSUM_IP_CHECKED; if (!(rxstat & XL_RXSTAT_IPCKERR)) m->m_pkthdr.csum_flags |= CSUM_IP_VALID; if ((rxstat & XL_RXSTAT_TCPCOK && !(rxstat & XL_RXSTAT_TCPCKERR)) || (rxstat & XL_RXSTAT_UDPCKOK && !(rxstat & XL_RXSTAT_UDPCKERR))) { m->m_pkthdr.csum_flags |= CSUM_DATA_VALID|CSUM_PSEUDO_HDR; m->m_pkthdr.csum_data = 0xffff; } } XL_UNLOCK(sc); (*ifp->if_input)(ifp, m); XL_LOCK(sc); } /* * Handle the 'end of channel' condition. When the upload * engine hits the end of the RX ring, it will stall. This * is our cue to flush the RX ring, reload the uplist pointer * register and unstall the engine. * XXX This is actually a little goofy. With the ThunderLAN * chip, you get an interrupt when the receiver hits the end * of the receive ring, which tells you exactly when you * you need to reload the ring pointer. Here we have to * fake it. I'm mad at myself for not being clever enough * to avoid the use of a goto here. */ if (CSR_READ_4(sc, XL_UPLIST_PTR) == 0 || CSR_READ_4(sc, XL_UPLIST_STATUS) & XL_PKTSTAT_UP_STALLED) { CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_UP_STALL); xl_wait(sc); CSR_WRITE_4(sc, XL_UPLIST_PTR, sc->xl_ldata.xl_rx_dmaaddr); sc->xl_cdata.xl_rx_head = &sc->xl_cdata.xl_rx_chain[0]; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_UP_UNSTALL); goto again; } } /* * A frame was downloaded to the chip. It's safe for us to clean up * the list buffers. */ static void xl_txeof(struct xl_softc *sc) { struct xl_chain *cur_tx; struct ifnet *ifp = &sc->arpcom.ac_if; XL_LOCK_ASSERT(sc); /* Clear the timeout timer. */ ifp->if_timer = 0; /* * Go through our tx list and free mbufs for those * frames that have been uploaded. Note: the 3c905B * sets a special bit in the status word to let us * know that a frame has been downloaded, but the * original 3c900/3c905 adapters don't do that. * Consequently, we have to use a different test if * xl_type != XL_TYPE_905B. */ while (sc->xl_cdata.xl_tx_head != NULL) { cur_tx = sc->xl_cdata.xl_tx_head; if (CSR_READ_4(sc, XL_DOWNLIST_PTR)) break; sc->xl_cdata.xl_tx_head = cur_tx->xl_next; bus_dmamap_sync(sc->xl_mtag, cur_tx->xl_map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->xl_mtag, cur_tx->xl_map); m_freem(cur_tx->xl_mbuf); cur_tx->xl_mbuf = NULL; ifp->if_opackets++; cur_tx->xl_next = sc->xl_cdata.xl_tx_free; sc->xl_cdata.xl_tx_free = cur_tx; } if (sc->xl_cdata.xl_tx_head == NULL) { ifp->if_flags &= ~IFF_OACTIVE; sc->xl_cdata.xl_tx_tail = NULL; } else { if (CSR_READ_4(sc, XL_DMACTL) & XL_DMACTL_DOWN_STALLED || !CSR_READ_4(sc, XL_DOWNLIST_PTR)) { CSR_WRITE_4(sc, XL_DOWNLIST_PTR, sc->xl_cdata.xl_tx_head->xl_phys); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_UNSTALL); } } } static void xl_txeof_90xB(struct xl_softc *sc) { struct xl_chain *cur_tx = NULL; struct ifnet *ifp = &sc->arpcom.ac_if; int idx; XL_LOCK_ASSERT(sc); bus_dmamap_sync(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_dmamap, BUS_DMASYNC_POSTREAD); idx = sc->xl_cdata.xl_tx_cons; while (idx != sc->xl_cdata.xl_tx_prod) { cur_tx = &sc->xl_cdata.xl_tx_chain[idx]; if (!(le32toh(cur_tx->xl_ptr->xl_status) & XL_TXSTAT_DL_COMPLETE)) break; if (cur_tx->xl_mbuf != NULL) { bus_dmamap_sync(sc->xl_mtag, cur_tx->xl_map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->xl_mtag, cur_tx->xl_map); m_freem(cur_tx->xl_mbuf); cur_tx->xl_mbuf = NULL; } ifp->if_opackets++; sc->xl_cdata.xl_tx_cnt--; XL_INC(idx, XL_TX_LIST_CNT); ifp->if_timer = 0; } sc->xl_cdata.xl_tx_cons = idx; if (cur_tx != NULL) ifp->if_flags &= ~IFF_OACTIVE; } /* * TX 'end of channel' interrupt handler. 
Actually, we should * only get a 'TX complete' interrupt if there's a transmit error, * so this is really TX error handler. */ static void xl_txeoc(struct xl_softc *sc) { u_int8_t txstat; XL_LOCK_ASSERT(sc); while ((txstat = CSR_READ_1(sc, XL_TX_STATUS))) { if (txstat & XL_TXSTATUS_UNDERRUN || txstat & XL_TXSTATUS_JABBER || txstat & XL_TXSTATUS_RECLAIM) { if_printf(&sc->arpcom.ac_if, "transmission error: %x\n", txstat); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_RESET); xl_wait(sc); if (sc->xl_type == XL_TYPE_905B) { if (sc->xl_cdata.xl_tx_cnt) { int i; struct xl_chain *c; i = sc->xl_cdata.xl_tx_cons; c = &sc->xl_cdata.xl_tx_chain[i]; CSR_WRITE_4(sc, XL_DOWNLIST_PTR, c->xl_phys); CSR_WRITE_1(sc, XL_DOWN_POLL, 64); } } else { if (sc->xl_cdata.xl_tx_head != NULL) CSR_WRITE_4(sc, XL_DOWNLIST_PTR, sc->xl_cdata.xl_tx_head->xl_phys); } /* * Remember to set this for the * first generation 3c90X chips. */ CSR_WRITE_1(sc, XL_TX_FREETHRESH, XL_PACKET_SIZE >> 8); if (txstat & XL_TXSTATUS_UNDERRUN && sc->xl_tx_thresh < XL_PACKET_SIZE) { sc->xl_tx_thresh += XL_MIN_FRAMELEN; if_printf(&sc->arpcom.ac_if, "tx underrun, increasing tx start threshold to %d bytes\n", sc->xl_tx_thresh); } CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_SET_START|sc->xl_tx_thresh); if (sc->xl_type == XL_TYPE_905B) { CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_SET_TX_RECLAIM|(XL_PACKET_SIZE >> 4)); } CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_ENABLE); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_UNSTALL); } else { CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_ENABLE); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_UNSTALL); } /* * Write an arbitrary byte to the TX_STATUS register * to clear this interrupt/error and advance to the next. */ CSR_WRITE_1(sc, XL_TX_STATUS, 0x01); } } static void xl_intr(void *arg) { struct xl_softc *sc = arg; struct ifnet *ifp = &sc->arpcom.ac_if; u_int16_t status; XL_LOCK(sc); +#ifdef DEVICE_POLLING + if (ifp->if_flags & IFF_POLLING) { + XL_UNLOCK(sc); + return; + } + + if ((ifp->if_capenable & IFCAP_POLLING) && + ether_poll_register(xl_poll, ifp)) { + /* Disable interrupts. 
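+ * Writing XL_CMD_INTR_ENB with an empty mask masks every interrupt
+ * source; from now on the chip is serviced from xl_poll() (on clock
+ * ticks and in the idle loop) rather than from this handler, and the
+ * xl_poll_locked() call below picks up any events already pending.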
*/ + CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ENB|0); + CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ACK|0xFF); + if (sc->xl_flags & XL_FLAG_FUNCREG) + bus_space_write_4(sc->xl_ftag, sc->xl_fhandle, + 4, 0x8000); + xl_poll_locked(ifp, 0, 1); + XL_UNLOCK(sc); + return; + } +#endif /* DEVICE_POLLING */ + while ((status = CSR_READ_2(sc, XL_STATUS)) & XL_INTRS && status != 0xFFFF) { CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ACK|(status & XL_INTRS)); if (status & XL_STAT_UP_COMPLETE) { int curpkts; curpkts = ifp->if_ipackets; xl_rxeof(sc); if (curpkts == ifp->if_ipackets) { while (xl_rx_resync(sc)) xl_rxeof(sc); } } if (status & XL_STAT_DOWN_COMPLETE) { if (sc->xl_type == XL_TYPE_905B) xl_txeof_90xB(sc); else xl_txeof(sc); } if (status & XL_STAT_TX_COMPLETE) { ifp->if_oerrors++; xl_txeoc(sc); } if (status & XL_STAT_ADFAIL) { xl_reset(sc); xl_init_locked(sc); } if (status & XL_STAT_STATSOFLOW) { sc->xl_stats_no_timeout = 1; xl_stats_update_locked(sc); sc->xl_stats_no_timeout = 0; } } if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) { if (sc->xl_type == XL_TYPE_905B) xl_start_90xB_locked(ifp); else xl_start_locked(ifp); } XL_UNLOCK(sc); } +#ifdef DEVICE_POLLING +static void +xl_poll(struct ifnet *ifp, enum poll_cmd cmd, int count) +{ + struct xl_softc *sc = ifp->if_softc; + + XL_LOCK(sc); + xl_poll_locked(ifp, cmd, count); + XL_UNLOCK(sc); +} + +static void +xl_poll_locked(struct ifnet *ifp, enum poll_cmd cmd, int count) +{ + struct xl_softc *sc = ifp->if_softc; + + XL_LOCK_ASSERT(sc); + + if (!(ifp->if_capenable & IFCAP_POLLING)) { + ether_poll_deregister(ifp); + cmd = POLL_DEREGISTER; + } + + if (cmd == POLL_DEREGISTER) { + /* Final call; enable interrupts. */ + CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ACK|0xFF); + CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ENB|XL_INTRS); + if (sc->xl_flags & XL_FLAG_FUNCREG) + bus_space_write_4(sc->xl_ftag, sc->xl_fhandle, + 4, 0x8000); + return; + } + + sc->rxcycles = count; + xl_rxeof(sc); + if (sc->xl_type == XL_TYPE_905B) + xl_txeof_90xB(sc); + else + xl_txeof(sc); + + if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) { + if (sc->xl_type == XL_TYPE_905B) + xl_start_90xB_locked(ifp); + else + xl_start_locked(ifp); + } + + if (cmd == POLL_AND_CHECK_STATUS) { + u_int16_t status; + + status = CSR_READ_2(sc, XL_STATUS); + if (status & XL_INTRS && status != 0xFFFF) { + CSR_WRITE_2(sc, XL_COMMAND, + XL_CMD_INTR_ACK|(status & XL_INTRS)); + + if (status & XL_STAT_TX_COMPLETE) { + ifp->if_oerrors++; + xl_txeoc(sc); + } + + if (status & XL_STAT_ADFAIL) { + xl_reset(sc); + xl_init_locked(sc); + } + + if (status & XL_STAT_STATSOFLOW) { + sc->xl_stats_no_timeout = 1; + xl_stats_update_locked(sc); + sc->xl_stats_no_timeout = 0; + } + } + } +} +#endif /* DEVICE_POLLING */ + /* * XXX: This is an entry point for callout which needs to take the lock. */ static void xl_stats_update(void *xsc) { struct xl_softc *sc = xsc; XL_LOCK(sc); xl_stats_update_locked(sc); XL_UNLOCK(sc); } static void xl_stats_update_locked(struct xl_softc *sc) { struct ifnet *ifp = &sc->arpcom.ac_if; struct xl_stats xl_stats; u_int8_t *p; int i; struct mii_data *mii = NULL; XL_LOCK_ASSERT(sc); bzero((char *)&xl_stats, sizeof(struct xl_stats)); if (sc->xl_miibus != NULL) mii = device_get_softc(sc->xl_miibus); p = (u_int8_t *)&xl_stats; /* Read all the stats registers. 
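* Window 6 packs the statistics into 16 consecutive bytes starting at
* XL_W6_CARRIER_LOST (offset 0), so they are read back to back into
* struct xl_stats, whose field layout mirrors the register layout.
* Reading a counter also clears it, which is what keeps the
* STATSOFLOW interrupt quiet.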
*/ XL_SEL_WIN(6); for (i = 0; i < 16; i++) *p++ = CSR_READ_1(sc, XL_W6_CARRIER_LOST + i); ifp->if_ierrors += xl_stats.xl_rx_overrun; ifp->if_collisions += xl_stats.xl_tx_multi_collision + xl_stats.xl_tx_single_collision + xl_stats.xl_tx_late_collision; /* * Boomerang and cyclone chips have an extra stats counter * in window 4 (BadSSD). We have to read this too in order * to clear out all the stats registers and avoid a statsoflow * interrupt. */ XL_SEL_WIN(4); CSR_READ_1(sc, XL_W4_BADSSD); if ((mii != NULL) && (!sc->xl_stats_no_timeout)) mii_tick(mii); XL_SEL_WIN(7); if (!sc->xl_stats_no_timeout) sc->xl_stat_ch = timeout(xl_stats_update, sc, hz); } /* * Encapsulate an mbuf chain in a descriptor by coupling the mbuf data * pointers to the fragment pointers. */ static int xl_encap(struct xl_softc *sc, struct xl_chain *c, struct mbuf *m_head) { int error; u_int32_t status; struct ifnet *ifp = &sc->arpcom.ac_if; XL_LOCK_ASSERT(sc); /* * Start packing the mbufs in this chain into * the fragment pointers. Stop when we run out * of fragments or hit the end of the mbuf chain. */ error = bus_dmamap_load_mbuf(sc->xl_mtag, c->xl_map, m_head, xl_dma_map_txbuf, c->xl_ptr, BUS_DMA_NOWAIT); if (error && error != EFBIG) { m_freem(m_head); if_printf(ifp, "can't map mbuf (error %d)\n", error); return (1); } /* * Handle special case: we used up all 63 fragments, * but we have more mbufs left in the chain. Copy the * data into an mbuf cluster. Note that we don't * bother clearing the values in the other fragment * pointers/counters; it wouldn't gain us anything, * and would waste cycles. */ if (error) { struct mbuf *m_new; m_new = m_defrag(m_head, M_DONTWAIT); if (m_new == NULL) { m_freem(m_head); return (1); } else { m_head = m_new; } error = bus_dmamap_load_mbuf(sc->xl_mtag, c->xl_map, m_head, xl_dma_map_txbuf, c->xl_ptr, BUS_DMA_NOWAIT); if (error) { m_freem(m_head); if_printf(ifp, "can't map mbuf (error %d)\n", error); return (1); } } if (sc->xl_type == XL_TYPE_905B) { status = XL_TXSTAT_RND_DEFEAT; #ifndef XL905B_TXCSUM_BROKEN if (m_head->m_pkthdr.csum_flags) { if (m_head->m_pkthdr.csum_flags & CSUM_IP) status |= XL_TXSTAT_IPCKSUM; if (m_head->m_pkthdr.csum_flags & CSUM_TCP) status |= XL_TXSTAT_TCPCKSUM; if (m_head->m_pkthdr.csum_flags & CSUM_UDP) status |= XL_TXSTAT_UDPCKSUM; } #endif c->xl_ptr->xl_status = htole32(status); } c->xl_mbuf = m_head; bus_dmamap_sync(sc->xl_mtag, c->xl_map, BUS_DMASYNC_PREWRITE); return (0); } /* * Main transmit routine. To avoid having to do mbuf copies, we put pointers * to the mbuf data regions directly in the transmit lists. We also save a * copy of the pointers since the transmit list fragment pointers are * physical addresses. */ static void xl_start(struct ifnet *ifp) { struct xl_softc *sc = ifp->if_softc; XL_LOCK(sc); if (sc->xl_type == XL_TYPE_905B) xl_start_90xB_locked(ifp); else xl_start_locked(ifp); XL_UNLOCK(sc); } static void xl_start_locked(struct ifnet *ifp) { struct xl_softc *sc = ifp->if_softc; struct mbuf *m_head = NULL; struct xl_chain *prev = NULL, *cur_tx = NULL, *start_tx; struct xl_chain *prev_tx; u_int32_t status; int error; XL_LOCK_ASSERT(sc); /* * Check for an available queue slot. If there are none, * punt. */ if (sc->xl_cdata.xl_tx_free == NULL) { xl_txeoc(sc); xl_txeof(sc); if (sc->xl_cdata.xl_tx_free == NULL) { ifp->if_flags |= IFF_OACTIVE; return; } } start_tx = sc->xl_cdata.xl_tx_free; while (sc->xl_cdata.xl_tx_free != NULL) { IFQ_DRV_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; /* Pick a descriptor off the free list. 
*/ prev_tx = cur_tx; cur_tx = sc->xl_cdata.xl_tx_free; /* Pack the data into the descriptor. */ error = xl_encap(sc, cur_tx, m_head); if (error) { cur_tx = prev_tx; continue; } sc->xl_cdata.xl_tx_free = cur_tx->xl_next; cur_tx->xl_next = NULL; /* Chain it together. */ if (prev != NULL) { prev->xl_next = cur_tx; prev->xl_ptr->xl_next = htole32(cur_tx->xl_phys); } prev = cur_tx; /* * If there's a BPF listener, bounce a copy of this frame * to him. */ BPF_MTAP(ifp, cur_tx->xl_mbuf); } /* * If there are no packets queued, bail. */ if (cur_tx == NULL) return; /* * Place the request for the upload interrupt * in the last descriptor in the chain. This way, if * we're chaining several packets at once, we'll only * get an interupt once for the whole chain rather than * once for each packet. */ cur_tx->xl_ptr->xl_status = htole32(le32toh(cur_tx->xl_ptr->xl_status) | XL_TXSTAT_DL_INTR); bus_dmamap_sync(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_dmamap, BUS_DMASYNC_PREWRITE); /* * Queue the packets. If the TX channel is clear, update * the downlist pointer register. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_STALL); xl_wait(sc); if (sc->xl_cdata.xl_tx_head != NULL) { sc->xl_cdata.xl_tx_tail->xl_next = start_tx; sc->xl_cdata.xl_tx_tail->xl_ptr->xl_next = htole32(start_tx->xl_phys); status = sc->xl_cdata.xl_tx_tail->xl_ptr->xl_status; sc->xl_cdata.xl_tx_tail->xl_ptr->xl_status = htole32(le32toh(status) & ~XL_TXSTAT_DL_INTR); sc->xl_cdata.xl_tx_tail = cur_tx; } else { sc->xl_cdata.xl_tx_head = start_tx; sc->xl_cdata.xl_tx_tail = cur_tx; } if (!CSR_READ_4(sc, XL_DOWNLIST_PTR)) CSR_WRITE_4(sc, XL_DOWNLIST_PTR, start_tx->xl_phys); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_UNSTALL); XL_SEL_WIN(7); /* * Set a timeout in case the chip goes out to lunch. */ ifp->if_timer = 5; /* * XXX Under certain conditions, usually on slower machines * where interrupts may be dropped, it's possible for the * adapter to chew up all the buffers in the receive ring * and stall, without us being able to do anything about it. * To guard against this, we need to make a pass over the * RX queue to make sure there aren't any packets pending. * Doing it here means we can flush the receive ring at the * same time the chip is DMAing the transmit descriptors we * just gave it. * * 3Com goes to some lengths to emphasize the Parallel Tasking (tm) * nature of their chips in all their marketing literature; * we may as well take advantage of it. :) */ xl_rxeof(sc); } static void xl_start_90xB_locked(struct ifnet *ifp) { struct xl_softc *sc = ifp->if_softc; struct mbuf *m_head = NULL; struct xl_chain *prev = NULL, *cur_tx = NULL, *start_tx; struct xl_chain *prev_tx; int error, idx; XL_LOCK_ASSERT(sc); if (ifp->if_flags & IFF_OACTIVE) return; idx = sc->xl_cdata.xl_tx_prod; start_tx = &sc->xl_cdata.xl_tx_chain[idx]; while (sc->xl_cdata.xl_tx_chain[idx].xl_mbuf == NULL) { if ((XL_TX_LIST_CNT - sc->xl_cdata.xl_tx_cnt) < 3) { ifp->if_flags |= IFF_OACTIVE; break; } IFQ_DRV_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; prev_tx = cur_tx; cur_tx = &sc->xl_cdata.xl_tx_chain[idx]; /* Pack the data into the descriptor. */ error = xl_encap(sc, cur_tx, m_head); if (error) { cur_tx = prev_tx; continue; } /* Chain it together. */ if (prev != NULL) prev->xl_ptr->xl_next = htole32(cur_tx->xl_phys); prev = cur_tx; /* * If there's a BPF listener, bounce a copy of this frame * to him. */ BPF_MTAP(ifp, cur_tx->xl_mbuf); XL_INC(idx, XL_TX_LIST_CNT); sc->xl_cdata.xl_tx_cnt++; } /* * If there are no packets queued, bail. 
*/ if (cur_tx == NULL) return; /* * Place the request for the upload interrupt * in the last descriptor in the chain. This way, if * we're chaining several packets at once, we'll only * get an interupt once for the whole chain rather than * once for each packet. */ cur_tx->xl_ptr->xl_status = htole32(le32toh(cur_tx->xl_ptr->xl_status) | XL_TXSTAT_DL_INTR); bus_dmamap_sync(sc->xl_ldata.xl_tx_tag, sc->xl_ldata.xl_tx_dmamap, BUS_DMASYNC_PREWRITE); /* Start transmission */ sc->xl_cdata.xl_tx_prod = idx; start_tx->xl_prev->xl_ptr->xl_next = htole32(start_tx->xl_phys); /* * Set a timeout in case the chip goes out to lunch. */ ifp->if_timer = 5; } static void xl_init(void *xsc) { struct xl_softc *sc = xsc; XL_LOCK(sc); xl_init_locked(sc); XL_UNLOCK(sc); } static void xl_init_locked(struct xl_softc *sc) { struct ifnet *ifp = &sc->arpcom.ac_if; int error, i; u_int16_t rxfilt = 0; struct mii_data *mii = NULL; XL_LOCK_ASSERT(sc); /* * Cancel pending I/O and free all RX/TX buffers. */ xl_stop(sc); if (sc->xl_miibus == NULL) { CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_RESET); xl_wait(sc); } CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_RESET); xl_wait(sc); DELAY(10000); if (sc->xl_miibus != NULL) mii = device_get_softc(sc->xl_miibus); /* Init our MAC address */ XL_SEL_WIN(2); for (i = 0; i < ETHER_ADDR_LEN; i++) { CSR_WRITE_1(sc, XL_W2_STATION_ADDR_LO + i, sc->arpcom.ac_enaddr[i]); } /* Clear the station mask. */ for (i = 0; i < 3; i++) CSR_WRITE_2(sc, XL_W2_STATION_MASK_LO + (i * 2), 0); #ifdef notdef /* Reset TX and RX. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_RESET); xl_wait(sc); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_RESET); xl_wait(sc); #endif /* Init circular RX list. */ error = xl_list_rx_init(sc); if (error) { if_printf(ifp, "initialization of the rx ring failed (%d)\n", error); xl_stop(sc); return; } /* Init TX descriptors. */ if (sc->xl_type == XL_TYPE_905B) error = xl_list_tx_init_90xB(sc); else error = xl_list_tx_init(sc); if (error) { if_printf(ifp, "initialization of the tx ring failed (%d)\n", error); xl_stop(sc); return; } /* * Set the TX freethresh value. * Note that this has no effect on 3c905B "cyclone" * cards but is required for 3c900/3c905 "boomerang" * cards in order to enable the download engine. */ CSR_WRITE_1(sc, XL_TX_FREETHRESH, XL_PACKET_SIZE >> 8); /* Set the TX start threshold for best performance. */ sc->xl_tx_thresh = XL_MIN_FRAMELEN; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_SET_START|sc->xl_tx_thresh); /* * If this is a 3c905B, also set the tx reclaim threshold. * This helps cut down on the number of tx reclaim errors * that could happen on a busy network. The chip multiplies * the register value by 16 to obtain the actual threshold * in bytes, so we divide by 16 when setting the value here. * The existing threshold value can be examined by reading * the register at offset 9 in window 5. */ if (sc->xl_type == XL_TYPE_905B) { CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_SET_TX_RECLAIM|(XL_PACKET_SIZE >> 4)); } /* Set RX filter bits. */ XL_SEL_WIN(5); rxfilt = CSR_READ_1(sc, XL_W5_RX_FILTER); /* Set the individual bit to receive frames for this host only. */ rxfilt |= XL_RXFILTER_INDIVIDUAL; /* If we want promiscuous mode, set the allframes bit. */ if (ifp->if_flags & IFF_PROMISC) { rxfilt |= XL_RXFILTER_ALLFRAMES; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); } else { rxfilt &= ~XL_RXFILTER_ALLFRAMES; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); } /* * Set capture broadcast bit to capture broadcast frames. 
*/ if (ifp->if_flags & IFF_BROADCAST) { rxfilt |= XL_RXFILTER_BROADCAST; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); } else { rxfilt &= ~XL_RXFILTER_BROADCAST; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); } /* * Program the multicast filter, if necessary. */ if (sc->xl_type == XL_TYPE_905B) xl_setmulti_hash(sc); else xl_setmulti(sc); /* * Load the address of the RX list. We have to * stall the upload engine before we can manipulate * the uplist pointer register, then unstall it when * we're finished. We also have to wait for the * stall command to complete before proceeding. * Note that we have to do this after any RX resets * have completed since the uplist register is cleared * by a reset. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_UP_STALL); xl_wait(sc); CSR_WRITE_4(sc, XL_UPLIST_PTR, sc->xl_ldata.xl_rx_dmaaddr); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_UP_UNSTALL); xl_wait(sc); if (sc->xl_type == XL_TYPE_905B) { /* Set polling interval */ CSR_WRITE_1(sc, XL_DOWN_POLL, 64); /* Load the address of the TX list */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_STALL); xl_wait(sc); CSR_WRITE_4(sc, XL_DOWNLIST_PTR, sc->xl_cdata.xl_tx_chain[0].xl_phys); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_DOWN_UNSTALL); xl_wait(sc); } /* * If the coax transceiver is on, make sure to enable * the DC-DC converter. */ XL_SEL_WIN(3); if (sc->xl_xcvr == XL_XCVR_COAX) CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_COAX_START); else CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_COAX_STOP); /* * increase packet size to allow reception of 802.1q or ISL packets. * For the 3c90x chip, set the 'allow large packets' bit in the MAC * control register. For 3c90xB/C chips, use the RX packet size * register. */ if (sc->xl_type == XL_TYPE_905B) CSR_WRITE_2(sc, XL_W3_MAXPKTSIZE, XL_PACKET_SIZE); else { u_int8_t macctl; macctl = CSR_READ_1(sc, XL_W3_MAC_CTRL); macctl |= XL_MACCTRL_ALLOW_LARGE_PACK; CSR_WRITE_1(sc, XL_W3_MAC_CTRL, macctl); } /* Clear out the stats counters. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_STATS_DISABLE); sc->xl_stats_no_timeout = 1; xl_stats_update_locked(sc); sc->xl_stats_no_timeout = 0; XL_SEL_WIN(4); CSR_WRITE_2(sc, XL_W4_NET_DIAG, XL_NETDIAG_UPPER_BYTES_ENABLE); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_STATS_ENABLE); /* * Enable interrupts. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ACK|0xFF); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_STAT_ENB|XL_INTRS); +#ifdef DEVICE_POLLING + /* Disable interrupts if we are polling. */ + if (ifp->if_flags & IFF_POLLING) + CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ENB|0); + else +#endif /* DEVICE_POLLING */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ENB|XL_INTRS); if (sc->xl_flags & XL_FLAG_FUNCREG) bus_space_write_4(sc->xl_ftag, sc->xl_fhandle, 4, 0x8000); /* Set the RX early threshold */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_THRESH|(XL_PACKET_SIZE >>2)); CSR_WRITE_2(sc, XL_DMACTL, XL_DMACTL_UP_RX_EARLY); /* Enable receiver and transmitter. */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_ENABLE); xl_wait(sc); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_ENABLE); xl_wait(sc); /* XXX Downcall to miibus. */ if (mii != NULL) mii_mediachg(mii); /* Select window 7 for normal operations. */ XL_SEL_WIN(7); ifp->if_flags |= IFF_RUNNING; ifp->if_flags &= ~IFF_OACTIVE; sc->xl_stat_ch = timeout(xl_stats_update, sc, hz); } /* * Set media options. 
*/ static int xl_ifmedia_upd(struct ifnet *ifp) { struct xl_softc *sc = ifp->if_softc; struct ifmedia *ifm = NULL; struct mii_data *mii = NULL; /*XL_LOCK_ASSERT(sc);*/ if (sc->xl_miibus != NULL) mii = device_get_softc(sc->xl_miibus); if (mii == NULL) ifm = &sc->ifmedia; else ifm = &mii->mii_media; switch (IFM_SUBTYPE(ifm->ifm_media)) { case IFM_100_FX: case IFM_10_FL: case IFM_10_2: case IFM_10_5: xl_setmode(sc, ifm->ifm_media); return (0); break; default: break; } if (sc->xl_media & XL_MEDIAOPT_MII || sc->xl_media & XL_MEDIAOPT_BTX || sc->xl_media & XL_MEDIAOPT_BT4) { xl_init(sc); /* XXX */ } else { xl_setmode(sc, ifm->ifm_media); } return (0); } /* * Report current media status. */ static void xl_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { struct xl_softc *sc = ifp->if_softc; u_int32_t icfg; u_int16_t status = 0; struct mii_data *mii = NULL; /*XL_LOCK_ASSERT(sc);*/ if (sc->xl_miibus != NULL) mii = device_get_softc(sc->xl_miibus); XL_SEL_WIN(4); status = CSR_READ_2(sc, XL_W4_MEDIA_STATUS); XL_SEL_WIN(3); icfg = CSR_READ_4(sc, XL_W3_INTERNAL_CFG) & XL_ICFG_CONNECTOR_MASK; icfg >>= XL_ICFG_CONNECTOR_BITS; ifmr->ifm_active = IFM_ETHER; ifmr->ifm_status = IFM_AVALID; if ((status & XL_MEDIASTAT_CARRIER) == 0) ifmr->ifm_status |= IFM_ACTIVE; switch (icfg) { case XL_XCVR_10BT: ifmr->ifm_active = IFM_ETHER|IFM_10_T; if (CSR_READ_1(sc, XL_W3_MAC_CTRL) & XL_MACCTRL_DUPLEX) ifmr->ifm_active |= IFM_FDX; else ifmr->ifm_active |= IFM_HDX; break; case XL_XCVR_AUI: if (sc->xl_type == XL_TYPE_905B && sc->xl_media == XL_MEDIAOPT_10FL) { ifmr->ifm_active = IFM_ETHER|IFM_10_FL; if (CSR_READ_1(sc, XL_W3_MAC_CTRL) & XL_MACCTRL_DUPLEX) ifmr->ifm_active |= IFM_FDX; else ifmr->ifm_active |= IFM_HDX; } else ifmr->ifm_active = IFM_ETHER|IFM_10_5; break; case XL_XCVR_COAX: ifmr->ifm_active = IFM_ETHER|IFM_10_2; break; /* * XXX MII and BTX/AUTO should be separate cases. */ case XL_XCVR_100BTX: case XL_XCVR_AUTO: case XL_XCVR_MII: if (mii != NULL) { mii_pollstat(mii); ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; } break; case XL_XCVR_100BFX: ifmr->ifm_active = IFM_ETHER|IFM_100_FX; break; default: if_printf(ifp, "unknown XCVR type: %d\n", icfg); break; } } static int xl_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct xl_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *) data; int error = 0; struct mii_data *mii = NULL; u_int8_t rxfilt; switch (command) { case SIOCSIFFLAGS: XL_LOCK(sc); XL_SEL_WIN(5); rxfilt = CSR_READ_1(sc, XL_W5_RX_FILTER); if (ifp->if_flags & IFF_UP) { if (ifp->if_flags & IFF_RUNNING && ifp->if_flags & IFF_PROMISC && !(sc->xl_if_flags & IFF_PROMISC)) { rxfilt |= XL_RXFILTER_ALLFRAMES; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); XL_SEL_WIN(7); } else if (ifp->if_flags & IFF_RUNNING && !(ifp->if_flags & IFF_PROMISC) && sc->xl_if_flags & IFF_PROMISC) { rxfilt &= ~XL_RXFILTER_ALLFRAMES; CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_SET_FILT|rxfilt); XL_SEL_WIN(7); } else { if ((ifp->if_flags & IFF_RUNNING) == 0) xl_init_locked(sc); } } else { if (ifp->if_flags & IFF_RUNNING) xl_stop(sc); } sc->xl_if_flags = ifp->if_flags; XL_UNLOCK(sc); error = 0; break; case SIOCADDMULTI: case SIOCDELMULTI: /* XXX Downcall from if_addmulti() possibly with locks held. */ XL_LOCK(sc); if (sc->xl_type == XL_TYPE_905B) xl_setmulti_hash(sc); else xl_setmulti(sc); XL_UNLOCK(sc); error = 0; break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: /* XXX Downcall from ifmedia possibly with locks held. 
*/ /*XL_LOCK(sc);*/ if (sc->xl_miibus != NULL) mii = device_get_softc(sc->xl_miibus); if (mii == NULL) error = ifmedia_ioctl(ifp, ifr, &sc->ifmedia, command); else error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); /*XL_UNLOCK(sc);*/ break; case SIOCSIFCAP: XL_LOCK(sc); ifp->if_capenable = ifr->ifr_reqcap; if (ifp->if_capenable & IFCAP_TXCSUM) ifp->if_hwassist = XL905B_CSUM_FEATURES; else ifp->if_hwassist = 0; XL_UNLOCK(sc); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } /* * XXX: Invoked from ifnet slow timer. Lock coverage needed. */ static void xl_watchdog(struct ifnet *ifp) { struct xl_softc *sc = ifp->if_softc; u_int16_t status = 0; XL_LOCK(sc); ifp->if_oerrors++; XL_SEL_WIN(4); status = CSR_READ_2(sc, XL_W4_MEDIA_STATUS); if_printf(ifp, "watchdog timeout\n"); if (status & XL_MEDIASTAT_CARRIER) if_printf(ifp, "no carrier - transceiver cable problem?\n"); xl_txeoc(sc); xl_txeof(sc); xl_rxeof(sc); xl_reset(sc); xl_init_locked(sc); if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) { if (sc->xl_type == XL_TYPE_905B) xl_start_90xB_locked(ifp); else xl_start_locked(ifp); } XL_UNLOCK(sc); } /* * Stop the adapter and free any mbufs allocated to the * RX and TX lists. */ static void xl_stop(struct xl_softc *sc) { register int i; struct ifnet *ifp = &sc->arpcom.ac_if; XL_LOCK_ASSERT(sc); ifp->if_timer = 0; +#ifdef DEVICE_POLLING + ether_poll_deregister(ifp); +#endif /* DEVICE_POLLING */ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_DISABLE); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_STATS_DISABLE); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ENB); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_DISCARD); xl_wait(sc); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_DISABLE); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_COAX_STOP); DELAY(800); #ifdef foo CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_RX_RESET); xl_wait(sc); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_TX_RESET); xl_wait(sc); #endif CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ACK|XL_STAT_INTLATCH); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_STAT_ENB|0); CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_INTR_ENB|0); if (sc->xl_flags & XL_FLAG_FUNCREG) bus_space_write_4(sc->xl_ftag, sc->xl_fhandle, 4, 0x8000); /* Stop the stats updater. */ untimeout(xl_stats_update, sc, sc->xl_stat_ch); /* * Free data in the RX lists. */ for (i = 0; i < XL_RX_LIST_CNT; i++) { if (sc->xl_cdata.xl_rx_chain[i].xl_mbuf != NULL) { bus_dmamap_unload(sc->xl_mtag, sc->xl_cdata.xl_rx_chain[i].xl_map); bus_dmamap_destroy(sc->xl_mtag, sc->xl_cdata.xl_rx_chain[i].xl_map); m_freem(sc->xl_cdata.xl_rx_chain[i].xl_mbuf); sc->xl_cdata.xl_rx_chain[i].xl_mbuf = NULL; } } if (sc->xl_ldata.xl_rx_list != NULL) bzero(sc->xl_ldata.xl_rx_list, XL_RX_LIST_SZ); /* * Free the TX list buffers. */ for (i = 0; i < XL_TX_LIST_CNT; i++) { if (sc->xl_cdata.xl_tx_chain[i].xl_mbuf != NULL) { bus_dmamap_unload(sc->xl_mtag, sc->xl_cdata.xl_tx_chain[i].xl_map); bus_dmamap_destroy(sc->xl_mtag, sc->xl_cdata.xl_tx_chain[i].xl_map); m_freem(sc->xl_cdata.xl_tx_chain[i].xl_mbuf); sc->xl_cdata.xl_tx_chain[i].xl_mbuf = NULL; } } if (sc->xl_ldata.xl_tx_list != NULL) bzero(sc->xl_ldata.xl_tx_list, XL_TX_LIST_SZ); ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE); } /* * Stop all chip I/O so that the kernel's probe routines don't * get confused by errant DMAs when rebooting. 
*/ static void xl_shutdown(device_t dev) { struct xl_softc *sc; sc = device_get_softc(dev); XL_LOCK(sc); xl_reset(sc); xl_stop(sc); XL_UNLOCK(sc); } static int xl_suspend(device_t dev) { struct xl_softc *sc; sc = device_get_softc(dev); XL_LOCK(sc); xl_stop(sc); XL_UNLOCK(sc); return (0); } static int xl_resume(device_t dev) { struct xl_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); ifp = &sc->arpcom.ac_if; XL_LOCK(sc); xl_reset(sc); if (ifp->if_flags & IFF_UP) xl_init_locked(sc); XL_UNLOCK(sc); return (0); } Index: stable/5/sys/pci/if_xlreg.h =================================================================== --- stable/5/sys/pci/if_xlreg.h (revision 145135) +++ stable/5/sys/pci/if_xlreg.h (revision 145136) @@ -1,737 +1,740 @@ /*- * Copyright (c) 1997, 1998 * Bill Paul . All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Bill Paul. * 4. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. 
* * $FreeBSD$ */ #define XL_EE_READ 0x0080 /* read, 5 bit address */ #define XL_EE_WRITE 0x0040 /* write, 5 bit address */ #define XL_EE_ERASE 0x00c0 /* erase, 5 bit address */ #define XL_EE_EWEN 0x0030 /* erase, no data needed */ #define XL_EE_8BIT_READ 0x0200 /* read, 8 bit address */ #define XL_EE_BUSY 0x8000 #define XL_EE_EADDR0 0x00 /* station address, first word */ #define XL_EE_EADDR1 0x01 /* station address, next word, */ #define XL_EE_EADDR2 0x02 /* station address, last word */ #define XL_EE_PRODID 0x03 /* product ID code */ #define XL_EE_MDATA_DATE 0x04 /* manufacturing data, date */ #define XL_EE_MDATA_DIV 0x05 /* manufacturing data, division */ #define XL_EE_MDATA_PCODE 0x06 /* manufacturing data, product code */ #define XL_EE_MFG_ID 0x07 #define XL_EE_PCI_PARM 0x08 #define XL_EE_ROM_ONFO 0x09 #define XL_EE_OEM_ADR0 0x0A #define XL_EE_OEM_ADR1 0x0B #define XL_EE_OEM_ADR2 0x0C #define XL_EE_SOFTINFO1 0x0D #define XL_EE_COMPAT 0x0E #define XL_EE_SOFTINFO2 0x0F #define XL_EE_CAPS 0x10 /* capabilities word */ #define XL_EE_RSVD0 0x11 #define XL_EE_ICFG_0 0x12 #define XL_EE_ICFG_1 0x13 #define XL_EE_RSVD1 0x14 #define XL_EE_SOFTINFO3 0x15 #define XL_EE_RSVD_2 0x16 /* * Bits in the capabilities word */ #define XL_CAPS_PNP 0x0001 #define XL_CAPS_FULL_DUPLEX 0x0002 #define XL_CAPS_LARGE_PKTS 0x0004 #define XL_CAPS_SLAVE_DMA 0x0008 #define XL_CAPS_SECOND_DMA 0x0010 #define XL_CAPS_FULL_BM 0x0020 #define XL_CAPS_FRAG_BM 0x0040 #define XL_CAPS_CRC_PASSTHRU 0x0080 #define XL_CAPS_TXDONE 0x0100 #define XL_CAPS_NO_TXLENGTH 0x0200 #define XL_CAPS_RX_REPEAT 0x0400 #define XL_CAPS_SNOOPING 0x0800 #define XL_CAPS_100MBPS 0x1000 #define XL_CAPS_PWRMGMT 0x2000 #define XL_PACKET_SIZE 1540 #define XL_MAX_FRAMELEN (ETHER_MAX_LEN + ETHER_VLAN_ENCAP_LEN) /* * Register layouts. */ #define XL_COMMAND 0x0E #define XL_STATUS 0x0E #define XL_TX_STATUS 0x1B #define XL_TX_FREE 0x1C #define XL_DMACTL 0x20 #define XL_DOWNLIST_PTR 0x24 #define XL_DOWN_POLL 0x2D /* 3c90xB only */ #define XL_TX_FREETHRESH 0x2F #define XL_UPLIST_PTR 0x38 #define XL_UPLIST_STATUS 0x30 #define XL_UP_POLL 0x3D /* 3c90xB only */ #define XL_PKTSTAT_UP_STALLED 0x00002000 #define XL_PKTSTAT_UP_ERROR 0x00004000 #define XL_PKTSTAT_UP_CMPLT 0x00008000 #define XL_DMACTL_DN_CMPLT_REQ 0x00000002 #define XL_DMACTL_DOWN_STALLED 0x00000004 #define XL_DMACTL_UP_CMPLT 0x00000008 #define XL_DMACTL_DOWN_CMPLT 0x00000010 #define XL_DMACTL_UP_RX_EARLY 0x00000020 #define XL_DMACTL_ARM_COUNTDOWN 0x00000040 #define XL_DMACTL_DOWN_INPROG 0x00000080 #define XL_DMACTL_COUNTER_SPEED 0x00000100 #define XL_DMACTL_DOWNDOWN_MODE 0x00000200 #define XL_DMACTL_TARGET_ABORT 0x40000000 #define XL_DMACTL_MASTER_ABORT 0x80000000 /* * Command codes. Some command codes require that we wait for * the CMD_BUSY flag to clear. Those codes are marked as 'mustwait.' 
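* A command is issued by writing its code, OR'd with any argument in
* the low-order bits (e.g. XL_CMD_INTR_ENB|XL_INTRS or
* XL_CMD_TX_SET_START|threshold), to the XL_COMMAND register; for the
* 'mustwait' codes the driver then spins in xl_wait() until the
* XL_STAT_CMDBUSY bit clears in XL_STATUS, bounded by XL_TIMEOUT.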
*/ #define XL_CMD_RESET 0x0000 /* mustwait */ #define XL_CMD_WINSEL 0x0800 #define XL_CMD_COAX_START 0x1000 #define XL_CMD_RX_DISABLE 0x1800 #define XL_CMD_RX_ENABLE 0x2000 #define XL_CMD_RX_RESET 0x2800 /* mustwait */ #define XL_CMD_UP_STALL 0x3000 /* mustwait */ #define XL_CMD_UP_UNSTALL 0x3001 #define XL_CMD_DOWN_STALL 0x3002 /* mustwait */ #define XL_CMD_DOWN_UNSTALL 0x3003 #define XL_CMD_RX_DISCARD 0x4000 #define XL_CMD_TX_ENABLE 0x4800 #define XL_CMD_TX_DISABLE 0x5000 #define XL_CMD_TX_RESET 0x5800 /* mustwait */ #define XL_CMD_INTR_FAKE 0x6000 #define XL_CMD_INTR_ACK 0x6800 #define XL_CMD_INTR_ENB 0x7000 #define XL_CMD_STAT_ENB 0x7800 #define XL_CMD_RX_SET_FILT 0x8000 #define XL_CMD_RX_SET_THRESH 0x8800 #define XL_CMD_TX_SET_THRESH 0x9000 #define XL_CMD_TX_SET_START 0x9800 #define XL_CMD_DMA_UP 0xA000 #define XL_CMD_DMA_STOP 0xA001 #define XL_CMD_STATS_ENABLE 0xA800 #define XL_CMD_STATS_DISABLE 0xB000 #define XL_CMD_COAX_STOP 0xB800 #define XL_CMD_SET_TX_RECLAIM 0xC000 /* 3c905B only */ #define XL_CMD_RX_SET_HASH 0xC800 /* 3c905B only */ #define XL_HASH_SET 0x0400 #define XL_HASHFILT_SIZE 256 /* * status codes * Note that bits 15 to 13 indicate the currently visible register window * which may be anything from 0 to 7. */ #define XL_STAT_INTLATCH 0x0001 /* 0 */ #define XL_STAT_ADFAIL 0x0002 /* 1 */ #define XL_STAT_TX_COMPLETE 0x0004 /* 2 */ #define XL_STAT_TX_AVAIL 0x0008 /* 3 first generation */ #define XL_STAT_RX_COMPLETE 0x0010 /* 4 */ #define XL_STAT_RX_EARLY 0x0020 /* 5 */ #define XL_STAT_INTREQ 0x0040 /* 6 */ #define XL_STAT_STATSOFLOW 0x0080 /* 7 */ #define XL_STAT_DMADONE 0x0100 /* 8 first generation */ #define XL_STAT_LINKSTAT 0x0100 /* 8 3c509B */ #define XL_STAT_DOWN_COMPLETE 0x0200 /* 9 */ #define XL_STAT_UP_COMPLETE 0x0400 /* 10 */ #define XL_STAT_DMABUSY 0x0800 /* 11 first generation */ #define XL_STAT_CMDBUSY 0x1000 /* 12 */ /* * Interrupts we normally want enabled. 
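* When DEVICE_POLLING has taken over an interface, this mask is
* replaced with an empty one (XL_CMD_INTR_ENB|0) in xl_intr() and
* xl_init_locked(), and restored on POLL_DEREGISTER in
* xl_poll_locked().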
*/ #define XL_INTRS \ (XL_STAT_UP_COMPLETE|XL_STAT_STATSOFLOW|XL_STAT_ADFAIL| \ XL_STAT_DOWN_COMPLETE|XL_STAT_TX_COMPLETE|XL_STAT_INTLATCH) /* * Window 0 registers */ #define XL_W0_EE_DATA 0x0C #define XL_W0_EE_CMD 0x0A #define XL_W0_RSRC_CFG 0x08 #define XL_W0_ADDR_CFG 0x06 #define XL_W0_CFG_CTRL 0x04 #define XL_W0_PROD_ID 0x02 #define XL_W0_MFG_ID 0x00 /* * Window 1 */ #define XL_W1_TX_FIFO 0x10 #define XL_W1_FREE_TX 0x0C #define XL_W1_TX_STATUS 0x0B #define XL_W1_TX_TIMER 0x0A #define XL_W1_RX_STATUS 0x08 #define XL_W1_RX_FIFO 0x00 /* * RX status codes */ #define XL_RXSTATUS_OVERRUN 0x01 #define XL_RXSTATUS_RUNT 0x02 #define XL_RXSTATUS_ALIGN 0x04 #define XL_RXSTATUS_CRC 0x08 #define XL_RXSTATUS_OVERSIZE 0x10 #define XL_RXSTATUS_DRIBBLE 0x20 /* * TX status codes */ #define XL_TXSTATUS_RECLAIM 0x02 /* 3c905B only */ #define XL_TXSTATUS_OVERFLOW 0x04 #define XL_TXSTATUS_MAXCOLS 0x08 #define XL_TXSTATUS_UNDERRUN 0x10 #define XL_TXSTATUS_JABBER 0x20 #define XL_TXSTATUS_INTREQ 0x40 #define XL_TXSTATUS_COMPLETE 0x80 /* * Window 2 */ #define XL_W2_RESET_OPTIONS 0x0C /* 3c905B only */ #define XL_W2_STATION_MASK_HI 0x0A #define XL_W2_STATION_MASK_MID 0x08 #define XL_W2_STATION_MASK_LO 0x06 #define XL_W2_STATION_ADDR_HI 0x04 #define XL_W2_STATION_ADDR_MID 0x02 #define XL_W2_STATION_ADDR_LO 0x00 #define XL_RESETOPT_FEATUREMASK (0x0001 | 0x0002 | 0x004) #define XL_RESETOPT_D3RESETDIS 0x0008 #define XL_RESETOPT_DISADVFD 0x0010 #define XL_RESETOPT_DISADV100 0x0020 #define XL_RESETOPT_DISAUTONEG 0x0040 #define XL_RESETOPT_DEBUGMODE 0x0080 #define XL_RESETOPT_FASTAUTO 0x0100 #define XL_RESETOPT_FASTEE 0x0200 #define XL_RESETOPT_FORCEDCONF 0x0400 #define XL_RESETOPT_TESTPDTPDR 0x0800 #define XL_RESETOPT_TEST100TX 0x1000 #define XL_RESETOPT_TEST100RX 0x2000 #define XL_RESETOPT_INVERT_LED 0x0010 #define XL_RESETOPT_INVERT_MII 0x4000 /* * Window 3 (fifo management) */ #define XL_W3_INTERNAL_CFG 0x00 #define XL_W3_MAXPKTSIZE 0x04 /* 3c905B only */ #define XL_W3_RESET_OPT 0x08 #define XL_W3_FREE_TX 0x0C #define XL_W3_FREE_RX 0x0A #define XL_W3_MAC_CTRL 0x06 #define XL_ICFG_CONNECTOR_MASK 0x00F00000 #define XL_ICFG_CONNECTOR_BITS 20 #define XL_ICFG_RAMSIZE_MASK 0x00000007 #define XL_ICFG_RAMWIDTH 0x00000008 #define XL_ICFG_ROMSIZE_MASK (0x00000040 | 0x00000080) #define XL_ICFG_DISABLE_BASSD 0x00000100 #define XL_ICFG_RAMLOC 0x00000200 #define XL_ICFG_RAMPART (0x00010000 | 0x00020000) #define XL_ICFG_XCVRSEL (0x00100000 | 0x00200000 | 0x00400000) #define XL_ICFG_AUTOSEL 0x01000000 #define XL_XCVR_10BT 0x00 #define XL_XCVR_AUI 0x01 #define XL_XCVR_RSVD_0 0x02 #define XL_XCVR_COAX 0x03 #define XL_XCVR_100BTX 0x04 #define XL_XCVR_100BFX 0x05 #define XL_XCVR_MII 0x06 #define XL_XCVR_RSVD_1 0x07 #define XL_XCVR_AUTO 0x08 /* 3c905B only */ #define XL_MACCTRL_DEFER_EXT_END 0x0001 #define XL_MACCTRL_DEFER_0 0x0002 #define XL_MACCTRL_DEFER_1 0x0004 #define XL_MACCTRL_DEFER_2 0x0008 #define XL_MACCTRL_DEFER_3 0x0010 #define XL_MACCTRL_DUPLEX 0x0020 #define XL_MACCTRL_ALLOW_LARGE_PACK 0x0040 #define XL_MACCTRL_EXTEND_AFTER_COL 0x0080 /* 3c905B only */ #define XL_MACCTRL_FLOW_CONTROL_ENB 0x0100 /* 3c905B only */ #define XL_MACCTRL_VLT_END 0x0200 /* 3c905B only */ /* * The 'reset options' register contains power-on reset values * loaded from the EEPROM. This includes the supported media * types on the card. It is also known as the media options register. 
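* (Hence XL_W3_RESET_OPT above and XL_W3_MEDIA_OPT below are both
* defined as window 3 offset 0x08.)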
*/ #define XL_W3_MEDIA_OPT 0x08 #define XL_MEDIAOPT_BT4 0x0001 /* MII */ #define XL_MEDIAOPT_BTX 0x0002 /* on-chip */ #define XL_MEDIAOPT_BFX 0x0004 /* on-chip */ #define XL_MEDIAOPT_BT 0x0008 /* on-chip */ #define XL_MEDIAOPT_BNC 0x0010 /* on-chip */ #define XL_MEDIAOPT_AUI 0x0020 /* on-chip */ #define XL_MEDIAOPT_MII 0x0040 /* MII */ #define XL_MEDIAOPT_VCO 0x0100 /* 1st gen chip only */ #define XL_MEDIAOPT_10FL 0x0100 /* 3x905B only, on-chip */ #define XL_MEDIAOPT_MASK 0x01FF /* * Window 4 (diagnostics) */ #define XL_W4_UPPERBYTESOK 0x0D #define XL_W4_BADSSD 0x0C #define XL_W4_MEDIA_STATUS 0x0A #define XL_W4_PHY_MGMT 0x08 #define XL_W4_NET_DIAG 0x06 #define XL_W4_FIFO_DIAG 0x04 #define XL_W4_VCO_DIAG 0x02 #define XL_W4_CTRLR_STAT 0x08 #define XL_W4_TX_DIAG 0x00 #define XL_MII_CLK 0x01 #define XL_MII_DATA 0x02 #define XL_MII_DIR 0x04 #define XL_MEDIA_SQE 0x0008 #define XL_MEDIA_10TP 0x00C0 #define XL_MEDIA_LNK 0x0080 #define XL_MEDIA_LNKBEAT 0x0800 #define XL_MEDIASTAT_CRCSTRIP 0x0004 #define XL_MEDIASTAT_SQEENB 0x0008 #define XL_MEDIASTAT_COLDET 0x0010 #define XL_MEDIASTAT_CARRIER 0x0020 #define XL_MEDIASTAT_JABGUARD 0x0040 #define XL_MEDIASTAT_LINKBEAT 0x0080 #define XL_MEDIASTAT_JABDETECT 0x0200 #define XL_MEDIASTAT_POLREVERS 0x0400 #define XL_MEDIASTAT_LINKDETECT 0x0800 #define XL_MEDIASTAT_TXINPROG 0x1000 #define XL_MEDIASTAT_DCENB 0x4000 #define XL_MEDIASTAT_AUIDIS 0x8000 #define XL_NETDIAG_TEST_LOWVOLT 0x0001 #define XL_NETDIAG_ASIC_REVMASK \ (0x0002 | 0x0004 | 0x0008 | 0x0010 | 0x0020) #define XL_NETDIAG_UPPER_BYTES_ENABLE 0x0040 #define XL_NETDIAG_STATS_ENABLED 0x0080 #define XL_NETDIAG_TX_FATALERR 0x0100 #define XL_NETDIAG_TRANSMITTING 0x0200 #define XL_NETDIAG_RX_ENABLED 0x0400 #define XL_NETDIAG_TX_ENABLED 0x0800 #define XL_NETDIAG_FIFO_LOOPBACK 0x1000 #define XL_NETDIAG_MAC_LOOPBACK 0x2000 #define XL_NETDIAG_ENDEC_LOOPBACK 0x4000 #define XL_NETDIAG_EXTERNAL_LOOP 0x8000 /* * Window 5 */ #define XL_W5_STAT_ENB 0x0C #define XL_W5_INTR_ENB 0x0A #define XL_W5_RECLAIM_THRESH 0x09 /* 3c905B only */ #define XL_W5_RX_FILTER 0x08 #define XL_W5_RX_EARLYTHRESH 0x06 #define XL_W5_TX_AVAILTHRESH 0x02 #define XL_W5_TX_STARTTHRESH 0x00 /* * RX filter bits */ #define XL_RXFILTER_INDIVIDUAL 0x01 #define XL_RXFILTER_ALLMULTI 0x02 #define XL_RXFILTER_BROADCAST 0x04 #define XL_RXFILTER_ALLFRAMES 0x08 #define XL_RXFILTER_MULTIHASH 0x10 /* 3c905B only */ /* * Window 6 (stats) */ #define XL_W6_TX_BYTES_OK 0x0C #define XL_W6_RX_BYTES_OK 0x0A #define XL_W6_UPPER_FRAMES_OK 0x09 #define XL_W6_DEFERRED 0x08 #define XL_W6_RX_OK 0x07 #define XL_W6_TX_OK 0x06 #define XL_W6_RX_OVERRUN 0x05 #define XL_W6_COL_LATE 0x04 #define XL_W6_COL_SINGLE 0x03 #define XL_W6_COL_MULTIPLE 0x02 #define XL_W6_SQE_ERRORS 0x01 #define XL_W6_CARRIER_LOST 0x00 /* * Window 7 (bus master control) */ #define XL_W7_BM_ADDR 0x00 #define XL_W7_BM_LEN 0x06 #define XL_W7_BM_STATUS 0x0B #define XL_W7_BM_TIMEr 0x0A /* * bus master control registers */ #define XL_BM_PKTSTAT 0x20 #define XL_BM_DOWNLISTPTR 0x24 #define XL_BM_FRAGADDR 0x28 #define XL_BM_FRAGLEN 0x2C #define XL_BM_TXFREETHRESH 0x2F #define XL_BM_UPPKTSTAT 0x30 #define XL_BM_UPLISTPTR 0x38 #define XL_LAST_FRAG 0x80000000 #define XL_MAXFRAGS 63 #define XL_RX_LIST_CNT 128 #define XL_TX_LIST_CNT 256 #define XL_RX_LIST_SZ \ (XL_RX_LIST_CNT * sizeof(struct xl_list_onefrag)) #define XL_TX_LIST_SZ \ (XL_TX_LIST_CNT * sizeof(struct xl_list)) #define XL_MIN_FRAMELEN 60 #define ETHER_ALIGN 2 #define XL_INC(x, y) (x) = (x + 1) % y /* * Boomerang/Cyclone TX/RX list structure. 
* For the TX lists, bits 0 to 12 of the status word indicate * length. * This looks suspiciously like the ThunderLAN, doesn't it. */ struct xl_frag { u_int32_t xl_addr; /* 63 addr/len pairs */ u_int32_t xl_len; }; struct xl_list { u_int32_t xl_next; /* final entry has 0 nextptr */ u_int32_t xl_status; struct xl_frag xl_frag[XL_MAXFRAGS]; }; struct xl_list_onefrag { u_int32_t xl_next; /* final entry has 0 nextptr */ u_int32_t xl_status; struct xl_frag xl_frag; }; struct xl_list_data { struct xl_list_onefrag *xl_rx_list; struct xl_list *xl_tx_list; u_int32_t xl_rx_dmaaddr; bus_dma_tag_t xl_rx_tag; bus_dmamap_t xl_rx_dmamap; u_int32_t xl_tx_dmaaddr; bus_dma_tag_t xl_tx_tag; bus_dmamap_t xl_tx_dmamap; }; struct xl_chain { struct xl_list *xl_ptr; struct mbuf *xl_mbuf; struct xl_chain *xl_next; struct xl_chain *xl_prev; u_int32_t xl_phys; bus_dmamap_t xl_map; }; struct xl_chain_onefrag { struct xl_list_onefrag *xl_ptr; struct mbuf *xl_mbuf; struct xl_chain_onefrag *xl_next; bus_dmamap_t xl_map; }; struct xl_chain_data { struct xl_chain_onefrag xl_rx_chain[XL_RX_LIST_CNT]; struct xl_chain xl_tx_chain[XL_TX_LIST_CNT]; struct xl_chain_onefrag *xl_rx_head; /* 3c90x "boomerang" queuing stuff */ struct xl_chain *xl_tx_head; struct xl_chain *xl_tx_tail; struct xl_chain *xl_tx_free; /* 3c90xB "cyclone/hurricane/tornado" stuff */ int xl_tx_prod; int xl_tx_cons; int xl_tx_cnt; }; #define XL_RXSTAT_LENMASK 0x00001FFF #define XL_RXSTAT_UP_ERROR 0x00004000 #define XL_RXSTAT_UP_CMPLT 0x00008000 #define XL_RXSTAT_UP_OVERRUN 0x00010000 #define XL_RXSTAT_RUNT 0x00020000 #define XL_RXSTAT_ALIGN 0x00040000 #define XL_RXSTAT_CRC 0x00080000 #define XL_RXSTAT_OVERSIZE 0x00100000 #define XL_RXSTAT_DRIBBLE 0x00800000 #define XL_RXSTAT_UP_OFLOW 0x01000000 #define XL_RXSTAT_IPCKERR 0x02000000 /* 3c905B only */ #define XL_RXSTAT_TCPCKERR 0x04000000 /* 3c905B only */ #define XL_RXSTAT_UDPCKERR 0x08000000 /* 3c905B only */ #define XL_RXSTAT_BUFEN 0x10000000 /* 3c905B only */ #define XL_RXSTAT_IPCKOK 0x20000000 /* 3c905B only */ #define XL_RXSTAT_TCPCOK 0x40000000 /* 3c905B only */ #define XL_RXSTAT_UDPCKOK 0x80000000 /* 3c905B only */ #define XL_TXSTAT_LENMASK 0x00001FFF #define XL_TXSTAT_CRCDIS 0x00002000 #define XL_TXSTAT_TX_INTR 0x00008000 #define XL_TXSTAT_DL_COMPLETE 0x00010000 #define XL_TXSTAT_IPCKSUM 0x02000000 /* 3c905B only */ #define XL_TXSTAT_TCPCKSUM 0x04000000 /* 3c905B only */ #define XL_TXSTAT_UDPCKSUM 0x08000000 /* 3c905B only */ #define XL_TXSTAT_RND_DEFEAT 0x10000000 /* 3c905B only */ #define XL_TXSTAT_EMPTY 0x20000000 /* 3c905B only */ #define XL_TXSTAT_DL_INTR 0x80000000 #define XL_CAPABILITY_BM 0x20 struct xl_type { u_int16_t xl_vid; u_int16_t xl_did; char *xl_name; }; struct xl_mii_frame { u_int8_t mii_stdelim; u_int8_t mii_opcode; u_int8_t mii_phyaddr; u_int8_t mii_regaddr; u_int8_t mii_turnaround; u_int16_t mii_data; }; /* * MII constants */ #define XL_MII_STARTDELIM 0x01 #define XL_MII_READOP 0x02 #define XL_MII_WRITEOP 0x01 #define XL_MII_TURNAROUND 0x02 /* * The 3C905B adapters implement a few features that we want to * take advantage of, namely the multicast hash filter. With older * chips, you only have the option of turning on reception of all * multicast frames, which is kind of lame. * * We also use this to decide on a transmit strategy. For the 3c90xB * cards, we can use polled descriptor mode, which reduces CPU overhead. 
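* ("Polled descriptor mode" refers to the chip's own down-list
* polling, enabled by programming XL_DOWN_POLL in xl_init_locked() so
* the NIC re-fetches the TX ring on its own; it is unrelated to the
* kernel's DEVICE_POLLING option handled in if_xl.c.)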
*/ #define XL_TYPE_905B 1 #define XL_TYPE_90X 2 #define XL_FLAG_FUNCREG 0x0001 #define XL_FLAG_PHYOK 0x0002 #define XL_FLAG_EEPROM_OFFSET_30 0x0004 #define XL_FLAG_WEIRDRESET 0x0008 #define XL_FLAG_8BITROM 0x0010 #define XL_FLAG_INVERT_LED_PWR 0x0020 #define XL_FLAG_INVERT_MII_PWR 0x0040 #define XL_FLAG_NO_XCVR_PWR 0x0080 #define XL_FLAG_USE_MMIO 0x0100 #define XL_FLAG_NO_MMIO 0x0200 #define XL_NO_XCVR_PWR_MAGICBITS 0x0900 struct xl_softc { struct arpcom arpcom; /* interface info */ struct ifmedia ifmedia; /* media info */ bus_space_handle_t xl_bhandle; bus_space_tag_t xl_btag; void *xl_intrhand; struct resource *xl_irq; struct resource *xl_res; device_t xl_miibus; struct xl_type *xl_info; /* 3Com adapter info */ bus_dma_tag_t xl_mtag; bus_dmamap_t xl_tmpmap; /* spare DMA map */ u_int8_t xl_unit; /* interface number */ u_int8_t xl_type; u_int32_t xl_xcvr; u_int16_t xl_media; u_int16_t xl_caps; u_int8_t xl_stats_no_timeout; u_int16_t xl_tx_thresh; int xl_if_flags; struct xl_list_data xl_ldata; struct xl_chain_data xl_cdata; struct callout_handle xl_stat_ch; int xl_flags; struct resource *xl_fres; bus_space_handle_t xl_fhandle; bus_space_tag_t xl_ftag; struct mtx xl_mtx; +#ifdef DEVICE_POLLING + int rxcycles; +#endif }; #define XL_LOCK(_sc) mtx_lock(&(_sc)->xl_mtx) #define XL_UNLOCK(_sc) mtx_unlock(&(_sc)->xl_mtx) #define XL_LOCK_ASSERT(_sc) mtx_assert(&(_sc)->xl_mtx, MA_OWNED) #define xl_rx_goodframes(x) \ ((x.xl_upper_frames_ok & 0x03) << 8) | x.xl_rx_frames_ok #define xl_tx_goodframes(x) \ ((x.xl_upper_frames_ok & 0x30) << 4) | x.xl_tx_frames_ok struct xl_stats { u_int8_t xl_carrier_lost; u_int8_t xl_sqe_errs; u_int8_t xl_tx_multi_collision; u_int8_t xl_tx_single_collision; u_int8_t xl_tx_late_collision; u_int8_t xl_rx_overrun; u_int8_t xl_tx_frames_ok; u_int8_t xl_rx_frames_ok; u_int8_t xl_tx_deferred; u_int8_t xl_upper_frames_ok; u_int16_t xl_rx_bytes_ok; u_int16_t xl_tx_bytes_ok; u_int16_t status; }; /* * register space access macros */ #define CSR_WRITE_4(sc, reg, val) \ bus_space_write_4(sc->xl_btag, sc->xl_bhandle, reg, val) #define CSR_WRITE_2(sc, reg, val) \ bus_space_write_2(sc->xl_btag, sc->xl_bhandle, reg, val) #define CSR_WRITE_1(sc, reg, val) \ bus_space_write_1(sc->xl_btag, sc->xl_bhandle, reg, val) #define CSR_READ_4(sc, reg) \ bus_space_read_4(sc->xl_btag, sc->xl_bhandle, reg) #define CSR_READ_2(sc, reg) \ bus_space_read_2(sc->xl_btag, sc->xl_bhandle, reg) #define CSR_READ_1(sc, reg) \ bus_space_read_1(sc->xl_btag, sc->xl_bhandle, reg) #define XL_SEL_WIN(x) \ CSR_WRITE_2(sc, XL_COMMAND, XL_CMD_WINSEL | x) #define XL_TIMEOUT 1000 /* * General constants that are fun to know. * * 3Com PCI vendor ID */ #define TC_VENDORID 0x10B7 /* * 3Com chip device IDs. 
*/ #define TC_DEVICEID_BOOMERANG_10BT 0x9000 #define TC_DEVICEID_BOOMERANG_10BT_COMBO 0x9001 #define TC_DEVICEID_BOOMERANG_10_100BT 0x9050 #define TC_DEVICEID_BOOMERANG_100BT4 0x9051 #define TC_DEVICEID_KRAKATOA_10BT 0x9004 #define TC_DEVICEID_KRAKATOA_10BT_COMBO 0x9005 #define TC_DEVICEID_KRAKATOA_10BT_TPC 0x9006 #define TC_DEVICEID_CYCLONE_10FL 0x900A #define TC_DEVICEID_HURRICANE_10_100BT 0x9055 #define TC_DEVICEID_CYCLONE_10_100BT4 0x9056 #define TC_DEVICEID_CYCLONE_10_100_COMBO 0x9058 #define TC_DEVICEID_CYCLONE_10_100FX 0x905A #define TC_DEVICEID_TORNADO_10_100BT 0x9200 #define TC_DEVICEID_TORNADO_10_100BT_920B 0x9201 #define TC_DEVICEID_TORNADO_10_100BT_920B_WNM 0x9202 #define TC_DEVICEID_HURRICANE_10_100BT_SERV 0x9800 #define TC_DEVICEID_TORNADO_10_100BT_SERV 0x9805 #define TC_DEVICEID_HURRICANE_SOHO100TX 0x7646 #define TC_DEVICEID_TORNADO_HOMECONNECT 0x4500 #define TC_DEVICEID_HURRICANE_555 0x5055 #define TC_DEVICEID_HURRICANE_556 0x6055 #define TC_DEVICEID_HURRICANE_556B 0x6056 #define TC_DEVICEID_HURRICANE_575A 0x5057 #define TC_DEVICEID_HURRICANE_575B 0x5157 #define TC_DEVICEID_HURRICANE_575C 0x5257 #define TC_DEVICEID_HURRICANE_656 0x6560 #define TC_DEVICEID_HURRICANE_656B 0x6562 #define TC_DEVICEID_TORNADO_656C 0x6564 /* * PCI low memory base and low I/O base register, and * other PCI registers. Note: some are only available on * the 3c905B, in particular those that related to power management. */ #define XL_PCI_VENDOR_ID 0x00 #define XL_PCI_DEVICE_ID 0x02 #define XL_PCI_COMMAND 0x04 #define XL_PCI_STATUS 0x06 #define XL_PCI_CLASSCODE 0x09 #define XL_PCI_LATENCY_TIMER 0x0D #define XL_PCI_HEADER_TYPE 0x0E #define XL_PCI_LOIO 0x10 #define XL_PCI_LOMEM 0x14 #define XL_PCI_FUNCMEM 0x18 #define XL_PCI_BIOSROM 0x30 #define XL_PCI_INTLINE 0x3C #define XL_PCI_INTPIN 0x3D #define XL_PCI_MINGNT 0x3E #define XL_PCI_MINLAT 0x0F #define XL_PCI_RESETOPT 0x48 #define XL_PCI_EEPROM_DATA 0x4C /* 3c905B-only registers */ #define XL_PCI_CAPID 0xDC /* 8 bits */ #define XL_PCI_NEXTPTR 0xDD /* 8 bits */ #define XL_PCI_PWRMGMTCAP 0xDE /* 16 bits */ #define XL_PCI_PWRMGMTCTRL 0xE0 /* 16 bits */ #define XL_PSTATE_MASK 0x0003 #define XL_PSTATE_D0 0x0000 #define XL_PSTATE_D1 0x0002 #define XL_PSTATE_D2 0x0002 #define XL_PSTATE_D3 0x0003 #define XL_PME_EN 0x0010 #define XL_PME_STATUS 0x8000 #ifndef IFM_10_FL #define IFM_10_FL 13 /* 10baseFL - Fiber */ #endif
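For context on the single functional change in this header (the DEVICE_POLLING-only rxcycles member added to struct xl_softc above), the sketch below shows how a FreeBSD 5.x poll handler commonly consumes such a per-call budget field: the polling framework passes a packet count to the handler, the handler stores it in the softc, and the RX completion path decrements it and stops at zero. This is an illustrative sketch only, not the if_xl.c code in this commit; xl_rxeof(), xl_txeof() and xl_check_status() are stand-in names for the driver's own helpers.

#ifdef DEVICE_POLLING
/*
 * Illustrative sketch only: a typical DEVICE_POLLING handler that
 * budgets RX work through the sc->rxcycles field declared above.
 */
static void
xl_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
{
	struct xl_softc *sc = ifp->if_softc;

	XL_LOCK(sc);
	sc->rxcycles = count;	/* process at most 'count' RX packets */
	xl_rxeof(sc);		/* assumed to stop once rxcycles reaches 0 */
	xl_txeof(sc);
	if (cmd == POLL_AND_CHECK_STATUS)
		xl_check_status(sc);	/* stand-in for slow-path register checks */
	XL_UNLOCK(sc);
}
#endif /* DEVICE_POLLING */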