Index: stable/12/share/man/man4/ae.4 =================================================================== --- stable/12/share/man/man4/ae.4 (revision 339734) +++ stable/12/share/man/man4/ae.4 (revision 339735) @@ -1,151 +1,159 @@ .\" Copyright (c) 2008 Stanislav Sedov .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd October 4, 2008 +.Dd October 24, 2018 .Dt AE 4 .Os .Sh NAME .Nm ae .Nd "Attansic/Atheros L2 FastEthernet controller driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device ae" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_ae_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm device driver provides support for Attansic/Atheros L2 PCIe FastEthernet controllers. .Pp The controller supports hardware Ethernet checksum processing, hardware VLAN tag stripping/insertion and an interrupt moderation mechanism. Attansic L2 also features a 64-bit multicast hash filter. .Pp The .Nm driver supports the following media types: .Bl -tag -width ".Cm 10baseT/UTP" .It Cm autoselect Enable autoselection of the media type and options. The user can manually override the autoselected mode by adding media options to .Xr rc.conf 5 . .It Cm 10baseT/UTP Select 10Mbps operation. .It Cm 100baseTX Set 100Mbps (FastEthernet) operation. .El .Pp The .Nm driver provides support for the following media options: .Bl -tag -width ".Cm full-duplex" .It Cm full-duplex Force full duplex operation. .It Cm half-duplex Force half duplex operation. .El .Pp For more information on configuring this device, see .Xr ifconfig 8 . .Sh HARDWARE The .Nm driver supports Attansic/Atheros L2 PCIe FastEthernet controllers, and is known to support the following hardware: .Pp .Bl -bullet -compact .It ASUS EeePC 701 .It ASUS EeePC 900 .El .Pp Other hardware may or may not work with this driver. 
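.Pp As an example of overriding the autoselected media described above, assuming a hypothetical ae0 interface, the following .Xr rc.conf 5 line forces 100Mbps full-duplex operation: .Bd -literal -offset indent ifconfig_ae0="media 100baseTX mediaopt full-duplex" .Ed .Pp The same media selection can be applied at run time with .Xr ifconfig 8 : .Bd -literal -offset indent ifconfig ae0 media 100baseTX mediaopt full-duplex .Ed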
.Sh LOADER TUNABLES Tunables can be set at the .Xr loader 8 prompt before booting the kernel or stored in .Xr loader.conf 5 . .Bl -tag -width "xxxxxx" .It Va hw.ae.msi_disable This tunable disables MSI support on the Ethernet hardware. The default value is 0. .El .Sh SYSCTL VARIABLES The .Nm driver collects a number of useful MAC counters during operation. The statistics are available via the .Va dev.ae.%d.stats .Xr sysctl 8 tree, where %d corresponds to the controller number. .Sh DIAGNOSTICS .Bl -diag .It "ae%d: watchdog timeout." The device has stopped responding to the network, or there is a problem with the network connection (cable). .It "ae%d: reset timeout." The card reset operation timed out. .It "ae%d: Generating random ethernet address." No valid Ethernet address was found in the controller NVRAM and registers. A random locally administered address with the ASUS OUI identifier will be used instead. .El .Sh SEE ALSO .Xr altq 4 , .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr vlan 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm driver and this manual page were written by .An Stanislav Sedov Aq Mt stas@FreeBSD.org . It first appeared in .Fx 7.1 . .Sh BUGS The Attansic L2 FastEthernet controller supports DMA but does not use a descriptor based transfer mechanism via scatter-gather DMA. Thus, the data must be copied to/from the controller memory on each transmit/receive. Furthermore, many data alignment restrictions apply. This may introduce a high CPU load on systems with heavy network activity. Luckily enough, this should not be a problem on modern hardware, as the L2 does not support speeds faster than 100Mbps. Index: stable/12/share/man/man4/de.4 =================================================================== --- stable/12/share/man/man4/de.4 (revision 339734) +++ stable/12/share/man/man4/de.4 (revision 339735) @@ -1,149 +1,157 @@ .\" .\" Copyright (c) 1997 David E. O'Brien .\" .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt DE 4 .Os .Sh NAME .Nm de .Nd "DEC DC21x4x Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device de" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_de_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for the Ethernet adapters based on the Digital Equipment DC21x4x based self-contained Ethernet and Fast Ethernet chips. .Pp The .Nm driver supports the following media types: .Bl -tag -width xxxxxxxxxxxxxxx .It autoselect Enable autoselection of the media type and options .It 10baseT/UTP Set 10Mbps operation on the 10baseT port .It 10base2/BNC Set 10Mbps operation on the BNC port .It 10base5/AUI Set 10Mbps operation on the AUI port .It 100baseTX Set 100Mbps (Fast Ethernet) operation .It 100baseFX Set 100Mbps operation .It 100baseT4 Set 100Mbps operation (4-pair cat-3 cable) .El .Pp The .Nm driver supports the following media options: .Bl -tag -width xxxxxxxxxxxxxxx .It full-duplex Set full duplex operation .El .Pp Note that the media types available depend on the particular card in use. Some cards are explicitly programmed to a particular media type by a setup utility and are not changeable. .Pp Use the .Xr ifconfig 8 command and in particular the .Fl m flag to list the supported media types for your particular card. .Pp The old .Dq ifconfig linkN method of configuration is not supported. .Sh HARDWARE Adapters supported by the .Nm driver include: .Pp .Bl -bullet -compact .It Adaptec ANA-6944/TX .It Cogent EM100FX and EM440TX .It Corega FastEther PCI-TX .It D-Link DFE-500TX .It DEC DE435, DEC DE450, and DEC DE500 .It ELECOM LD-PCI2T, LD-PCITS .It I-O DATA LA2/T-PCI .It SMC Etherpower 8432, 9332 and 9334 .It ZNYX ZX3xx .El .Sh DIAGNOSTICS .Bl -diag .It "de%d: waking device from sleep/snooze mode" The 21041 and 21140A chips support suspending the operation of the card. .It "de%d: error: desired IRQ of %d does not match device's actual IRQ of %d" The device probe detected that the board is configured for a different interrupt than the one specified in the kernel configuration file. .It "de%d: not configured; limit of %d reached or exceeded" There is a limit of 32 .Nm devices allowed in a single machine. .It "de%d: not configured; 21040 pass 2.0 required (%d.%d found)" .It "de%d: not configured; 21140 pass 1.1 required (%d.%d found)" Certain revisions of the chipset are not supported by this driver. .El .Sh SEE ALSO .Xr altq 4 , .Xr arp 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh AUTHORS .An -nosplit The .Nm device driver was written by .An Matt Thomas . This manual page was written by .An David E. O'Brien . Index: stable/12/share/man/man4/ed.4 =================================================================== --- stable/12/share/man/man4/ed.4 (revision 339734) +++ stable/12/share/man/man4/ed.4 (revision 339735) @@ -1,399 +1,407 @@ .\" .\" Copyright (c) 1994, David Greenman .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. 
Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by David Greenman. .\" 4. The name of the author may not be used to endorse or promote products .\" derived from this software without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd February 25, 2012 +.Dd October 24, 2018 .Dt ED 4 .Os .Sh NAME .Nm ed .Nd "NE-2000 and WD-80x3 Ethernet driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device ed" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_ed_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for 8 and 16bit Ethernet cards that are based on the National Semiconductor DS8390 and similar NICs manufactured by other companies. The .Nm driver also supports many PC Card chips which interface via MII to a PHY. Axiom's AX88790, AX88190 and AX88190A; DLink's DL10019 and DL10022; and Tamarack's TC5299J chips all support internal or external MII/PHY combinations. Realtek's PCI and ISA RTL80x9-based cards are also supported. For these chipsets, autonegotiation and status reporting are supported. .Pp In addition to the standard port and IRQ specifications, the .Nm driver also supports a number of .Cd flags which can force 8/16bit mode, enable/disable multi-buffering, and select the default interface type (AUI/BNC, and for cards with twisted pair, AUI/10BaseT). .Pp The .Cd flags are a bit field, and are summarized as follows: .Bl -tag -width indent .It Li 0x01 Disable transceiver. On those cards which support it, this flag causes the transceiver to be disabled and the AUI connection to be used by default. .It Li 0x02 Force 8bit mode. This flag forces the card to 8bit mode regardless of how the card identifies itself. This may be needed for some clones which incorrectly identify themselves as 16bit, even though they only have an 8bit interface. This flag takes precedence over force 16bit mode. .It Li 0x04 Force 16bit mode. 
This flag forces the card to 16bit mode regardless of how the card identifies itself. This may be needed for some clones which incorrectly identify themselves as 8bit, even though they have a 16bit ISA interface. .It Li 0x08 Disable transmitter multi-buffering. This flag disables the use of multiple transmit buffers and may be necessary in rare cases where packets are sent out faster than a machine on the other end can handle (as evidenced by severe packet lossage). Some .No ( non- Ns Fx :-)) machines have terrible Ethernet performance and simply cannot cope with 1100K+ data rates. Use of this flag also provides one more packet worth of receiver buffering, and on 8bit cards, this may help reduce receiver lossage. .El .Pp When using a 3c503 card, the AUI connection may be selected by specifying the .Cm link2 option to .Xr ifconfig 8 (BNC is the default). .Sh HARDWARE The .Nm driver supports the following Ethernet NICs: .Pp .Bl -bullet -compact .It 3Com 3c503 Etherlink II .Pq Cd "options ED_3C503" .It AR-P500 Ethernet .It Accton EN1644 (old model), EN1646 (old model), EN2203 (old model) (110pin) (flags 0xd00000) .It Accton EN2212/EN2216/UE2216 .It Allied Telesis CentreCOM LA100-PCM_V2 .It AmbiCom 10BaseT card (8002, 8002T, 8010 and 8610) .It Bay Networks NETGEAR FA410TXC Fast Ethernet .It Belkin F5D5020 PC Card Fast Ethernet .It Billionton LM5LT-10B Ethernet/Modem PC Card .It Billionton LNT-10TB, LNT-10TN Ethernet PC Card .It Bromax iPort 10/100 Ethernet PC Card .It Bromax iPort 10 Ethernet PC Card .It Buffalo LPC2-CLT, LPC3-CLT, LPC3-CLX, LPC4-TX, LPC-CTX PC Card .It Buffalo LPC-CF-CLT CF Card .It CNet BC40 adapter .It Compex Net-A adapter .It Compex RL2000 .It Corega Ether PCC-T/EtherII PCC-T/FEther PCC-TXF/PCC-TXD PCC-T/Fether II TXD .It Corega LAPCCTXD (TC5299J) .It CyQ've ELA-010 .It DEC EtherWorks DE305 .It Danpex EN-6200P2 .It D-Link DE-660, DE-660+ .It D-Link IC-CARD/IC-CARD+ Ethernet .It ELECOM Laneed LD-CDL/TX, LD-CDF, LD-CDS, LD-10/100CD, LD-CDWA (DP83902A) .It Hawking PN652TX PC Card (AX88790) .It HP PC Lan+ 27247B and 27252A .Pq Cd "options ED_HPP" .It IBM Creditcard Ethernet I/II .It I-O DATA ET2/T-PCI .It I-O DATA PCLATE .It Kingston KNE-PC2, CIO10T, KNE-PCM/x Ethernet .It KTI ET32P2 PCI .It Linksys EC2T/PCMPC100/PCM100, PCMLM56 .It Linksys EtherFast 10/100 PC Card, Combo PCMCIA Ethernet Card (PCMPC100 V2) .It MACNICA Ethernet ME1 for JEIDA .It MELCO LGY-PCI-TR .It MELCO LPC-T/LPC2-T/LPC2-CLT/LPC2-TX/LPC3-TX/LPC3-CLX .It NDC Ethernet Instant-Link .It National Semiconductor InfoMover NE4100 .It NetGear FA-410TX .It NetVin NV5000SC .It Network Everywhere Ethernet 10BaseT PC Card .It New Media LANSurfer 10+56 Ethernet/Modem .It New Media LANSurfer .It Novell NE1000/NE2000/NE2100 .It PLANEX ENW-8300-T .It PLANEX FNW-3600-T .It Psion 10/100 LANGLOBAL Combine iT .It RealTek 8019 .It RealTek 8029 .It Relia Combo-L/M-56k PC Card .It SMC Elite 16 WD8013 .It SMC Elite Ultra .It SMC WD8003E/WD8003EBT/WD8003S/WD8003SBT/WD8003W/WD8013EBT/WD8013W and clones .It SMC EZCard PC Card, 8040-TX, 8041-TX (AX88x90), 8041-TX V.2 (TC5299J) .It Socket LP-E, ES-1000 Ethernet/Serial, LP-E CF, LP-FE CF .It Surecom EtherPerfect EP-427 .It Surecom NE-34 .It TDK 3000/3400/5670 Fast Ethernet/Modem .It TDK LAK-CD031, Grey Cell GCS2000 Ethernet Card .It TDK DFL5610WS Ethernet/Modem PC Card .It Telecom Device SuperSocket RE450T .It Toshiba LANCT00A PC Card .It VIA VT86C926 .It Winbond W89C940 .It Winbond W89C940F .El .Pp ISA, PCI and PC Card devices are supported. 
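.Pp For ISA cards, the resources and the .Cd flags described above can be supplied via .Xr device.hints 5 . The following is a hypothetical example for a card at unit 0; the port, IRQ and memory address shown are assumptions and must match the actual hardware settings: .Bd -literal -offset indent hint.ed.0.at="isa" hint.ed.0.port="0x280" hint.ed.0.irq="10" hint.ed.0.maddr="0xd8000" hint.ed.0.flags="0x8" .Ed .Pp Here the flags value 0x8 disables transmitter multi-buffering, as described in the list above.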
.Pp The .Nm driver does not support the following Ethernet NICs: .Pp .Bl -bullet -compact .It Mitsubishi LAN Adapter B8895 .El .Sh DIAGNOSTICS .Bl -diag .It "ed%d: failed to clear shared memory at %x - check configuration." When the card was probed at system boot time, the .Nm driver found that it could not clear the card's shared memory. This is most commonly caused by a BIOS extension ROM being configured in the same address space as the Ethernet card's shared memory. Either find the offending card and change its BIOS ROM to be at an address that does not conflict, or change the settings in .Xr device.hints 5 so that the card's shared memory is mapped at a non-conflicting address. .It "ed%d: Invalid irq configuration (%d) must be 2-5 for 3c503." The IRQ number that was specified in the .Xr device.hints 5 file is not valid for the 3Com 3c503 card. The 3c503 can only be assigned to IRQs 2 through 5. .It "ed%d: Cannot find start of RAM." .It "ed%d: Cannot find any RAM, start : %d, x = %d." The probe of a Gateway card was unsuccessful in configuring the card's packet memory. This likely indicates that the card was improperly recognized as a Gateway or that the card is defective. .It "ed: packets buffered, but transmitter idle." Indicates a logic problem in the driver. Should never happen. .It "ed%d: device timeout" Indicates that an expected transmitter interrupt did not occur. Usually caused by an interrupt conflict with another card on the ISA bus. This condition could also be caused if the kernel is configured for a different IRQ channel than the one the card is actually using. If that is the case, you will have to either reconfigure the card using a DOS utility or set the jumpers on the card appropriately. .It "ed%d: NIC memory corrupt - invalid packet length %d." Indicates that a packet was received with a packet length that was either larger than the maximum size or smaller than the minimum size allowed by the IEEE 802.3 standard. Usually caused by a conflict with another card on the ISA bus, but in some cases may also indicate faulty cabling. .It "ed%d: remote transmit DMA failed to complete." This indicates that a programmed I/O transfer to an NE1000 or NE2000 style card has failed to properly complete. Usually caused by the ISA bus speed being set too fast. .It "ed%d: Invalid irq configuration (%ld) must be %s for %s" Indicates the device has a different IRQ than supported or expected. .It "ed%d: Cannot locate my ports!" The device is using a different I/O port than the driver knows about. .It "ed%d: Cannot extract MAC address" Attempts to get the MAC address failed. .It "ed%d: Missing mii!" Probing for an MII bus has failed. This indicates a coding error in the PC Card attachment, because a PHY is required for the chips that generate this error message. .El .Sh SEE ALSO .Xr altq 4 , .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr device.hints 5 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver first appeared in .Fx 1.0 . .Sh AUTHORS The .Nm device driver and this manual page were written by .An David Greenman . .Sh CAVEATS Early revision DS8390 chips have problems. They lock up whenever the receive ring-buffer overflows. They occasionally switch the byte order of the length field in the packet ring header (several different causes of this, related to an off-by-one byte alignment), resulting in .Qq Li "NIC memory corrupt - invalid packet length" messages.
The card is reset whenever these problems occur, but otherwise there is no problem with recovering from these conditions. .Pp The NIC memory access to 3Com and Novell cards is much slower than it is on WD/SMC cards; it is less than 1MB/second on 8bit boards and less than 2MB/second on the 16bit cards. This can lead to ring-buffer overruns resulting in dropped packets during heavy network traffic. .Pp The Mitsubishi B8895 PC Card uses a DP83902, but its ASIC part is undocumented. Neither the NE2000 nor the WD83x0 driver works with this card. .Sh BUGS The .Nm driver is a bit too aggressive about resetting the card whenever any bad packets are received. As a result, it may throw out some good packets which have been received but not yet transferred from the card to main memory. .Pp The .Nm driver is slow by today's standards. .Pp At this time, the PC Card attachment supports only the Ethernet port of the D-Link DMF650TX LAN/Modem card. .Pp Some devices supported by .Nm do not generate the link state change events used by .Xr devd 8 to start .Xr dhclient 8 . If you have problems with .Xr dhclient 8 not starting and the device is always attached to the network, it may be possible to work around this by changing .Dq Li DHCP to .Dq Li SYNCDHCP in the .Va ifconfig_ed0 entry in .Pa /etc/rc.conf . Index: stable/12/share/man/man4/man4.i386/cs.4 =================================================================== --- stable/12/share/man/man4/man4.i386/cs.4 (revision 339734) +++ stable/12/share/man/man4/man4.i386/cs.4 (revision 339735) @@ -1,152 +1,160 @@ .\" .\" Copyright (c) 1998 Michael Smith .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE.
.\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24 2018 .Dt CS 4 i386 .Os .Sh NAME .Nm cs .Nd "Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device cs" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_cs_load="YES" .Ed .Pp In .Pa /boot/device.hints : .Cd hint.cs.0.at="isa" .Cd hint.cs.0.port="0x300" .Cd hint.cs.0.irq="10" .Cd hint.cs.0.maddr="0xd000" +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for ISA Ethernet adapters based on the .Tn Crystal Semiconductor CS8900 and .Tn CS8920 NICs. These devices are used on the .Tn IBM EtherJet ISA adapters and in many embedded applications where the high integration, small size and low cost of the CS89x0 family compensate for their drawbacks. .Pp The .Nm driver will obtain configuration parameters either from .Pa /boot/device.hints or from the card. At least the I/O port number must be specified. Other parameters specified in .Pa /boot/device.hints will be used if present; the card may be soft-configured so these may be any valid value. Adapters based on the CS8920 normally offer PnP configuration and the driver will detect the .Tn IBM EtherJet and the .Tn CSC6040 adapters automatically. .Pp Note that the CS8900 is limited to 4 IRQ values; these are normally implemented as 5, 10, 11 and 12. The CS8920 has no such limitation. .Pp Memory-mapped and DMA operation are not supported at this time. .Pp In addition to the ISA devices, the PC Card devices based on the CS889x0 family are also supported. The IBM EtherJet PCMCIA Card is the only known device based on this chip. The PC Card support does not need the above specific ISA hints to work. The PC Card support may not work for 10base2 (thinnet) connections and may bogusly claim to support 10base5 (there are no known cards that have an AUI necessary for 10base5 support on their dongles). .Sh DIAGNOSTICS .Bl -diag .It "cs%d: full/half duplex negotiation timeout" The attempt to negotiate duplex settings with the hub timed out. This may indicate a cabling problem or a faulty or incompatible hub. .It "cs%d: failed to enable " The CS89x0 failed to select the nominated media, either because it is not present or not operating correctly. .It "cs%d: No EEPROM, assuming defaults" The CS89x0 does not have an EEPROM, or the EEPROM is hopelessly damaged. Operation will only be successful if the configuration entry lists suitable values for the adapter. .It "cs%d: Invalid irq" The IRQ specified in the configuration entry is not valid for the adapter. .It "cs%d: Could not allocate memory for NIC" There is a critical memory shortage. The adapter will not function. .It "cs%d: Adapter has no media" The adapter is not configured for a specific media type. The media type will have to be manually set. .It "This is a %s, but LDN %d is disabled" The PnP probe code found a recognised adapter, but the adapter is disabled. .It "failed to read pnp parms" A PnP adapter was found, but configuration parameters for it could not be read. .It "failed to pnp card parameters" The parameters obtained via PnP were not accepted by the driver. The adapter may not function. 
.El .Sh SEE ALSO .Xr arp 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh AUTHORS .An -nosplit The .Nm device driver was written by .An Maxim Bolotin and .An Oleg Sharoiko . This manual page was written by .An Michael Smith . .Sh CAVEATS The CS89x0 family of adapters has a very small RAM buffer (4K). This may cause problems with extremely high network loads or bursty network traffic. In particular, NFS operations should be limited to 1k read/write transactions in order to avoid overruns. Index: stable/12/share/man/man4/man4.i386/ep.4 =================================================================== --- stable/12/share/man/man4/man4.i386/ep.4 (revision 339734) +++ stable/12/share/man/man4/man4.i386/ep.4 (revision 339735) @@ -1,201 +1,209 @@ .\" .\" Copyright (c) 1994 Herb Peyerl .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by Herb Peyerl .\" 3. The name of the author may not be used to endorse or promote products .\" derived from this software without specific prior written permission .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd April 1, 2011 +.Dd October 24, 2018 .Dt EP 4 i386 .Os .Sh NAME .Nm ep .Nd "Ethernet driver for 3Com Etherlink III (3c5x9) interfaces" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device ep" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_ep_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm device driver supports network adapters based on the 3Com 3C5x9 Etherlink III Parallel Tasking chipset.
.Pp Various models of these cards come with a different assortment of connectors: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It AUI/DIX Standard 15 pin connector, also known as 10base5 (thick-net) .It 10Base2 BNC, also known as thin-net .It 10BaseT UTP, also known as twisted pair .El .Pp The default port to use is the port that has been selected with the setup utility. To override this, use the following media options with .Xr ifconfig 8 or in your .Pa /etc/rc.conf file. .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It 10base5/AUI Use the AUI port. .It 10base2/BNC Use the BNC port. .It 10baseT/UTP Use the UTP port. .El .Sh HARDWARE The .Nm driver supports Ethernet adapters based on the 3Com 3C5x9 Etherlink III Parallel Tasking chipset, including: .Pp .Bl -bullet -compact .It 3Com 3C1 CF .It 3Com 3C509-TP, 3C509-BNC, 3C509-Combo, 3C509-TPO, 3C509-TPC ISA .It 3Com 3C509B-TP, 3C509B-BNC, 3C509B-Combo, 3C509B-TPO, 3C509B-TPC ISA .It 3Com 3C562/3C563 PCMCIA .It 3Com 3C574, 3C574TX, 3C574-TX, 3CCFE574BT, 3CXFE574BT, 3C3FE574BT PCMCIA .It 3Com 3C589, 3C589B, 3C589C, 3C589D, 3CXE589DT PCMCIA .It 3Com 3CCFEM556B, 3CCFEM556BI PCMCIA .It 3Com 3CXE589EC, 3CCE589EC, 3CXE589ET, 3CCE589ET PCMCIA .It 3Com Megahertz 3CCEM556, 3CXEM556, 3CCEM556B, 3CXEM556B, 3C3FEM556C PCMCIA .It 3Com OfficeConnect 3CXSH572BT, 3CCSH572BT PCMCIA .It Farallon EtherWave and EtherMac PC Card (P/n 595/895 with BLUE arrow) .El .Sh NOTES The 3c509 card has no jumpers to set the address. 3Com supplies software to set the address of the card in software. To find the card on the ISA bus, the kernel performs a complex scan operation at I/O address 0x110. Beware! Avoid placing other cards at that address! .Pp Furthermore, the 3c509 should not be configured in EISA mode. .Pp Cards in PnP mode may conflict with other resources in the system. Ensure your BIOS is configured correctly to exclude resources used by the 3c509, especially IRQs, to avoid unpredictable behavior. .Pp Many different companies sold the 3Com PC Cards under their own private label. These cards also work. .Pp The Farallon EtherWave and EtherMac card came in two varieties. The .Nm driver supports the 595 and 895 cards. These cards have the blue arrow on the front along with a 3Com logo. The Farallon 595a cards, which have a red arrow on the front, are also called EtherWave and EtherMac. They are supported by the .Xr sn 4 driver. .Sh DIAGNOSTICS .Bl -diag .It "ep0: reset (status: %x)" The driver has encountered a FIFO underrun or overrun. The driver will reset the card and the packet will be lost. This is not fatal. .It "ep0: eeprom failed to come ready" The EEPROM failed to come ready. This probably means the card is wedged. .It "ep0: 3c509 in test mode. Erase pencil mark!" This means that someone has scribbled with pencil in the test area on the card. Erase the pencil mark and reboot. (This is not a joke). .It "ep0: No I/O space?!" The driver was unable to allocate the I/O space that it thinks should be there. Look for conflicts with other devices. .It "ep0: No irq?!" The driver could not allocate the interrupt it wanted. Look for conflicts, although sharing interrupts for PC Card is normal. .It "ep0: No connectors!" The driver queried the hardware to determine which Ethernet attachments were present, but the hardware reported none that the driver recognized. .It "ep0: Unable to get Ethernet address!" The driver was unable to read the Ethernet address from the EEPROM. This is likely the result of the card being wedged.
.It "ep0: if_alloc() failed" The driver was unable to allocate a ifnet structure. This may happen in extremely low memory conditions. .It "ep0: strange connector type in EEPROM: assuming AUI" The driver does not know what to do with the information the EEPROM has about connectors, so it is assuming the worst. .It "ep0: unknown ID 0xXXXXXXXX" The driver has found an ID that it believes it supports, but does not have a specific identification string to present to the user. .It "ep0: <%s> at port 0x%03x in EISA mode, ignored." The 3C509 ISA card is in EISA mode. The card will be ignored until it is taken out of EISA mode. .It "ep0: <%s> at x0%03x in PnP mode" This card appears to be in Plug and Play mode. It should be probed as part of the plug and play phase of the ISA probes. .It "ep0: Invalid EEPROM checksum!" The EEPROM has a bad checksum, so the driver is ignoring the card. .It "ep0: bus_setup_intr() failed!" The driver was unable to setup the interrupt handler. This should never happen. .El .Sh SEE ALSO .Xr altq 4 , .Xr ed 4 , .Xr intro 4 , .Xr ng_ether 4 , .Xr sn 4 , .Xr vx 4 , .Xr ifconfig 8 .Sh STANDARDS are great. There is so many to choose from. Index: stable/12/share/man/man4/man4.i386/ex.4 =================================================================== --- stable/12/share/man/man4/man4.i386/ex.4 (revision 339734) +++ stable/12/share/man/man4/man4.i386/ex.4 (revision 339735) @@ -1,119 +1,127 @@ .\" .\" Copyright (c) 1997 David E. O'Brien .\" .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt EX 4 i386 .Os .Sh NAME .Nm ex .Nd "Ethernet device driver for the Intel EtherExpress Pro/10 and Pro/10+" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device ex" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_ex_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for Ethernet adapters based on the Intel i82595 chip. 
.Pp On the ISA bus, the card will be searched for in the I/O address range 0x200 - 0x3a0. The IRQ will be read from the EEPROM on the card. For correct operation, Plug-N-Play support should be disabled. .Pp On the PC Card bus, the card will be automatically recognized and configured. .Sh HARDWARE The .Nm driver supports the following Ethernet adapters: .Pp .Bl -bullet -compact .It Intel EtherExpress Pro/10 ISA .It Intel EtherExpress Pro/10+ ISA .It Olicom OC2220 Ethernet PC Card .It Olicom OC2232 Ethernet/Modem PC Card .It Silicom Ethernet LAN PC Card .It Silicom EtherSerial LAN PC Card .El .Sh DIAGNOSTICS .Bl -diag .It "ex%d: Intel EtherExpress Pro/10, address %6D, connector %s" The device probe found an installed card, and was able to correctly install the device driver. .It "ex%d: WARNING: board's EEPROM is configured for IRQ %d, using %d" The device probe detected that the board is configured for a different interrupt than the one specified in the kernel configuration file. .It "ex%d: invalid IRQ." The device probe detected an invalid IRQ setting. .El .Sh SEE ALSO .Xr arp 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver first appeared in .Fx 2.2 . .Sh AUTHORS .An -nosplit The .Nm device driver was written by .An Javier Mart\('in Rueda . The PC Card attachment was written by .An Mitsuru IWASAKI and .An Warner Losh . This manual page was written by .An David E. O'Brien . .Sh BUGS Currently the driver does not support multicast. .Pp The Silicom EtherSerial card's serial port does not currently work. The Olicom OC2232 PC Card should work with the .Nm driver, but is currently completely broken. Index: stable/12/share/man/man4/man4.i386/fe.4 =================================================================== --- stable/12/share/man/man4/man4.i386/fe.4 (revision 339734) +++ stable/12/share/man/man4/man4.i386/fe.4 (revision 339735) @@ -1,318 +1,326 @@ .\" All Rights Reserved, Copyright (C) Fujitsu Limited 1995 .\" .\" This document may be used, modified, copied, distributed, and sold, in .\" both source and printed form provided that the above copyright, these .\" terms and the following disclaimer are retained. The name of the author .\" and/or the contributor may not be used to endorse or promote products .\" derived from this software without specific prior written permission. .\" .\" THIS DOCUMENT IS PROVIDED BY THE AUTHOR AND THE CONTRIBUTOR ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR THE CONTRIBUTOR BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS DOCUMENT, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" Contributed by M. Sekiguchi . .\" for fe driver.
.\" .\" $FreeBSD$ -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt FE 4 i386 .Os .Sh NAME .Nm fe .Nd "Fujitsu MB86960A/MB86965A based Ethernet adapters" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device fe" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_fe_load="YES" .Ed .Pp In .Pa /boot/device.hints : .Cd hint.fe.0.at="isa" .Cd hint.fe.0.port="0x300" .Cd hint.fe.0.flags="0x0" +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm is a network device driver for Ethernet adapters based on Fujitsu MB86960A, MB86965A, or other compatible chips. .Pp The driver provides automatic I/O port address configuration and automatic IRQ configuration, when used with suitable adapter hardware. .Pp The driver works with program I/O data transfer technique. It gives a fair performance. Shared memory is never used, even if the adapter has one. .Pp It currently works with Fujitsu FMV-180 series for ISA, Allied-Telesis AT1700 series and RE2000 series for ISA, and Fujitsu MBH10302 PC card. .Ss Parameters In the .Pa /boot/device.hints file, two parameters, .Ar port and .Ar irq , must be specified to reflect adapter hardware settings. Another parameter .Ar flags can be specified to provide additional configuration as an option. .Pp The .Ar port parameter specifies a base I/O port address of the adapter. It must match with the hardware setting of the adapter. The .Ar port may be left unspecified by removing .Dl hint.fe.0.port="..." from the file. In that case, the driver tries to detect the hardware setting of the I/O address automatically. This feature may not work with some adapter hardware. .Pp The .Ar irq parameter specifies an IRQ number used by the adapter. It must match the hardware setting of the adapter. .Ar Irq may be left unspecified by removing .Dl hint.fe.0.irq="..." from the file. in that case, the driver tries to detect the hardware setting of the IRQ automatically. This feature may not work on some adapters. .Pp The .Ar flags is a numeric value which consists of a combination of various device settings. The following flags are defined in the current version. To specify two or more settings for a device, use a numeric sum of each flag value. Flag bits not specified below are reserved and must be set to 0. Actually, each bit is either just ignored by the driver, or tested and used to control undocumented features of the driver. Consult the source program for undocumented features. .Bl -tag -width 8n .It Li 0x007F These flag bits are used to initialize DLCR6 register of MB86960A/MB86965A chip, when the .Li 0x0080 bit of the .Ar flags is set. See below for more about DLCR6 override feature. The .Li 0x007F flag bits must be 0 unless the .Li 0x0080 bit is set, to maintain the compatibility with future versions of the driver. .It Li 0x0080 This flag overrides the default setting to the DLCR6 register of MB86960A/MB86965A chip by a user supplied value, which is taken from the lower 7 bits of the flag value. This is a troubleshooting flag and should not be used without understanding of the adapter hardware. Consult the Fujitsu manual for more information on DLCR6 settings. 
.Sh HARDWARE Controllers and cards supported by the .Nm driver include: .Pp .Bl -bullet -compact .It Allied Telesis RE1000, RE1000Plus, ME1500 (110-pin) .It CONTEC C-NET(98)P2, C-NET (9N)E (110-pin), C-NET(9N)C (ExtCard) .It CONTEC C-NET(PC)C PC Card Ethernet .It Eagle Tech NE200T .It Eiger Labs EPX-10BT .It Fujitsu FMV-J182, FMV-J182A .It Fujitsu MB86960A, MB86965A .It Fujitsu MBH10303, MBH10302 PC Card Ethernet .It Fujitsu Towa LA501 Ethernet .It HITACHI HT-4840-11 PC Card Ethernet .It NextCom J Link NC5310 .It RATOC REX-5588, REX-9822, REX-4886, and REX-R280 .It RATOC REX-9880/9881/9882/9883 .It TDK LAC-98012, LAC-98013, LAC-98025, LAC-9N011 (110-pin) .It TDK LAK-CD011, LAK-CD021, LAK-CD021A, LAK-CD021BX .It Ungermann-Bass Access/PC N98C+(PC85152, PC85142), Access/NOTE N98(PC86132) (110-pin) .El .Sh FEATURES SPECIFIC TO HARDWARE MODELS The .Nm driver has some features and limitations which depend on adapter hardware models. The following is a summary of these dependencies. .Ss Fujitsu FMV-180 series adapters Both automatic IRQ detection and automatic I/O port address detection are available with these adapters. .Pp The automatic I/O port address detection feature of .Nm works mostly fine for the FMV-180 series. It works even if there are two or more FMV-180s in a system. However, some combinations of other adapters may confuse the driver. It is recommended to explicitly specify .Ar port when you experience difficulties with the hardware probe. .Pp The automatic IRQ detection feature of .Nm works reliably for the FMV-180 series. Nevertheless, it is recommended to always specify .Ar irq explicitly for FMV-180 cards. The hardware setting of the IRQ is read from the configuration EEPROM on the adapter, even when the kernel config file specifies an IRQ value. The driver will generate a warning message if the IRQ setting specified in .Pa /boot/device.hints does not match the one stored in the EEPROM. Then, it will use the value specified in the file. (This behavior has been changed from the previous releases.) .Ss Allied-Telesis AT1700 series and RE2000 series adapters Automatic I/O port address detection is available with the Allied-Telesis AT1700 series and RE2000 series, although it is less reliable than with the FMV-180 series. Using the feature with Allied-Telesis adapters is not recommended. .Pp Automatic IRQ detection is also available with some limitation. The .Nm driver tries to get the IRQ setting from the configuration EEPROM on the board, if .Ar irq is not specified in .Pa /boot/device.hints . Unfortunately, the AT1700 series and RE2000 series seem to have two types of models: one type allows IRQ selection from 3/4/5/9, while the other allows selection from 10/11/12/15. The way to identify these models is not well known. Hence, automatic IRQ detection with Allied-Telesis adapters may not be reliable. Specify an exact IRQ number if any trouble is encountered. .Pp Differences between the AT1700 series and RE2000 series, or minor models in those series, are not recognized. .Ss Fujitsu MBH10302 PC card The .Nm driver supports Fujitsu MBH10302 and compatible PC cards. It requires the PC card (PCMCIA) support package. .Sh SEE ALSO .Xr netstat 1 , .Xr ed 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm driver appeared in .Fx 2.0.5 . .Sh AUTHORS, COPYRIGHT AND DISCLAIMER The .Nm driver was originally written and contributed by .An M. Sekiguchi Aq Mt seki@sysrap.cs.fujitsu.co.jp , following the .Nm ed driver written by .An David Greenman . PC card support in .Nm was written by .An Hidetoshi Kimura Aq Mt h-kimura@tokyo.se.fujitsu.co.jp .
This manual page was written by .An M. Sekiguchi . .Pp .Em "All Rights Reserved, Copyright (C) Fujitsu Limited 1995" .Pp This document and the associated software may be used, modified, copied, distributed, and sold, in both source and binary form provided that the above copyright, these terms and the following disclaimer are retained. The name of the author and/or the contributor may not be used to endorse or promote products derived from this document and the associated software without specific prior written permission. .Pp THIS DOCUMENT AND THE ASSOCIATED SOFTWARE IS PROVIDED BY THE AUTHOR AND THE CONTRIBUTOR .Dq AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR THE CONTRIBUTOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENT AND THE ASSOCIATED SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .Sh BUGS The following are major known bugs: .Pp Statistics on the number of collisions maintained by the .Nm driver are not accurate; the .Fl i option of .Xr netstat 1 shows a slightly lower value than the true number of collisions. .Pp More mbuf clusters are used than expected. The packet receive routine intentionally violates the mbuf cluster allocation policy. The unnecessarily allocated clusters are freed within a short lifetime, and this will not affect long-term kernel memory usage. .Pp Although XNS and IPX support is included in the driver, it has never been tested and is expected to have many bugs. Index: stable/12/share/man/man4/man4.i386/vx.4 =================================================================== --- stable/12/share/man/man4/man4.i386/vx.4 (revision 339734) +++ stable/12/share/man/man4/man4.i386/vx.4 (revision 339735) @@ -1,128 +1,136 @@ .\" .\" Copyright (c) 1996, Fred Gray .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by David Greenman. .\" 4. The name of the author may not be used to endorse or promote products .\" derived from this software without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd February 15, 2017 +.Dd October 24, 2018 .Dt VX 4 i386 .Os .Sh NAME .Nm vx .Nd "3Com EtherLink III / Fast EtherLink III (3c59x) Ethernet driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device vx" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_vx_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for the 3Com .Dq Vortex chipset. .Pp The medium selection can be influenced by the following link flags to the .Xr ifconfig 8 command: .Pp .Bl -tag -width LINK0X -compact .It Em link0 Use the AUI port. .It Em link1 Use the BNC port. .It Em link2 Use the UTP port. .El .Sh HARDWARE The .Nm driver supports the following cards: .Pp .Bl -bullet -compact .It 3Com 3c590 EtherLink III PCI .It 3Com 3c595 Fast EtherLink III PCI in 10 Mbps mode .El .Sh DIAGNOSTICS All other diagnostics indicate either a hardware problem or a bug in the driver. .Sh SEE ALSO .Xr arp 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver first appeared in .Fx 2.1 . It was derived from the .Nm ep driver, from which it inherits most of its limitations. .Sh AUTHORS .An -nosplit The .Nm device driver and this manual page were written by .An Fred Gray Aq Mt fgray@rice.edu , based on the work of .An Herb Peyerl and with the assistance of numerous others. .Sh CAVEATS Some early-revision 3c590 cards are defective and suffer from many receive overruns, which cause lost packets. The author has attempted to implement a test for it based on the information supplied by 3Com, but the test resulted mostly in spurious warnings. .Pp The performance of this driver is somewhat limited by the fact that it uses only polled-mode I/O and does not make use of the bus-mastering capability of the cards. .Sh BUGS The .Nm driver is known not to reset the adapter correctly following a warm boot on some systems. .Pp The .Nm driver has not been exhaustively tested with all the models of cards that it claims to support. Index: stable/12/share/man/man4/man4.powerpc/bm.4 =================================================================== --- stable/12/share/man/man4/man4.powerpc/bm.4 (revision 339734) +++ stable/12/share/man/man4/man4.powerpc/bm.4 (revision 339735) @@ -1,85 +1,93 @@ .\"- .\" Copyright (c) 2008 Nathan Whitehorn .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. 
Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED .\" WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE .\" DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, .\" INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES .\" (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR .\" SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, .\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN .\" ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE .\" POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd July 3, 2008 +.Dd October 24, 2018 .Dt BM 4 .Os .Sh NAME .Nm bm .Nd BMAC Ethernet device driver .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device bm" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_bm_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for the BMAC Ethernet hardware found mostly in G3-based Apple machines. It is a close relative of the Sun HME controller found in contemporary Sun workstations. .Sh HARDWARE Chips supported by the .Nm driver include: .Pp .Bl -bullet -compact .It Apple BMAC Onboard Ethernet .It Apple BMAC+ Onboard Ethernet .El .Sh SEE ALSO .Xr altq 4 , .Xr hme 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver appeared in .Fx 7.1 . .Sh AUTHORS .An -nosplit The .Nm driver was written by .An Nathan Whitehorn Aq Mt nwhitehorn@FreeBSD.org based on work by .An Peter Grehan Aq Mt grehan@FreeBSD.org . Index: stable/12/share/man/man4/pcn.4 =================================================================== --- stable/12/share/man/man4/pcn.4 (revision 339734) +++ stable/12/share/man/man4/pcn.4 (revision 339735) @@ -1,191 +1,199 @@ .\" Copyright (c) Berkeley Software Design, Inc. .\" Copyright (c) 1997, 1998, 1999, 2000 .\" Bill Paul . All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by Bill Paul. .\" 4. Neither the name of the author nor the names of any co-contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission.
.\" .\" THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD .\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR .\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF .\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS .\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN .\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) .\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF .\" THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd January 31, 2006 +.Dd October 24, 2018 .Dt PCN 4 .Os .Sh NAME .Nm pcn .Nd "AMD PCnet/PCI Fast Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device pcn" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_pcn_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for PCI Ethernet adapters and embedded controllers based on the AMD PCnet/FAST, PCnet/FAST+, PCnet/FAST III, PCnet/PRO and PCnet/Home Ethernet controller chips. Supported NIC's include the Allied Telesyn AT-2700 family. .Pp The PCnet/PCI chips include a 100Mbps Ethernet MAC and support both a serial and MII-compliant transceiver interface. They use a bus master DMA and a scatter/gather descriptor scheme. The AMD chips provide a mechanism for zero-copy receive, providing good performance in server environments. Receive address filtering is provided using a single perfect filter entry for the station address and a 64-bit multicast hash table. .Pp The .Nm driver supports the following media types: .Bl -tag -width 10baseTXUTP .It autoselect Enable autoselection of the media type and options. The user can manually override the autoselected mode by adding media options to .Xr rc.conf 5 . .It 10baseT/UTP Set 10Mbps operation. The .Xr ifconfig 8 .Cm mediaopt option can also be used to select either .Sq full-duplex or .Sq half-duplex modes. .It 100baseTX Set 100Mbps (Fast Ethernet) operation. The .Xr ifconfig 8 .Cm mediaopt option can also be used to select either .Sq full-duplex or .Sq half-duplex modes. .El .Pp The .Nm driver supports the following media options: .Bl -tag -width full-duplex .It full-duplex Force full duplex operation. .It half-duplex Force half duplex operation. .El .Pp For more information on configuring this device, see .Xr ifconfig 8 . .Sh HARDWARE The .Nm driver supports adapters and embedded controllers based on the AMD PCnet/FAST, PCnet/FAST+, PCnet/FAST III, PCnet/PRO and PCnet/Home Fast Ethernet chips: .Pp .Bl -bullet -compact .It AMD Am79C971 PCnet-FAST .It AMD Am79C972 PCnet-FAST+ .It AMD Am79C973/Am79C975 PCnet-FAST III .It AMD Am79C976 PCnet-PRO .It AMD Am79C978 PCnet-Home .It Allied-Telesis LA-PCI .El .Sh DIAGNOSTICS .Bl -diag .It "pcn%d: couldn't map ports/memory" A fatal initialization error has occurred. .It "pcn%d: couldn't map interrupt" A fatal initialization error has occurred. 
.It "pcn%d: watchdog timeout" The device has stopped responding to the network, or there is a problem with the network connection (e.g.\& a cable fault). .It "pcn%d: no memory for rx list" The driver failed to allocate an mbuf for the receiver ring. .It "pcn%d: no memory for tx list" The driver failed to allocate an mbuf for the transmitter ring when allocating a pad buffer or collapsing an mbuf chain into a cluster. .It "pcn%d: chip is in D3 power state -- setting to D0" This message applies only to adapters which support power management. Some operating systems place the controller in low power mode when shutting down, and some PCI BIOSes fail to bring the chip out of this state before configuring it. The controller loses all of its PCI configuration in the D3 state, so if the BIOS does not set it back to full power mode in time, it will not be able to configure it correctly. The driver tries to detect this condition and bring the adapter back to the D0 (full power) state, but this may not be enough to return the driver to a fully operational condition. If you see this message at boot time and the driver fails to attach the device as a network interface, you will have to perform a warm boot to have the device properly configured. .Pp Note that this condition only occurs when warm booting from another operating system. If you power down your system prior to booting .Fx , the card should be configured correctly. .El .Sh SEE ALSO .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Rs .%T AMD PCnet/FAST, PCnet/FAST+ and PCnet/Home datasheets .%U http://www.amd.com .Re .Sh HISTORY The .Nm device driver first appeared in .Fx 4.3 . .Sh AUTHORS The .Nm driver was written by .An Bill Paul Aq Mt wpaul@osd.bsdi.com . Index: stable/12/share/man/man4/sf.4 =================================================================== --- stable/12/share/man/man4/sf.4 (revision 339734) +++ stable/12/share/man/man4/sf.4 (revision 339735) @@ -1,209 +1,217 @@ .\" Copyright (c) 1997, 1998, 1999 .\" Bill Paul . All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by Bill Paul. .\" 4. Neither the name of the author nor the names of any co-contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. 
IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD .\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR .\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF .\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS .\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN .\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) .\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF .\" THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd January 21, 2008 +.Dd October 24, 2018 .Dt SF 4 .Os .Sh NAME .Nm sf .Nd "Adaptec AIC-6915" .Qq Starfire PCI Fast Ethernet adapter driver .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device sf" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_sf_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for Adaptec Duralink Fast Ethernet adapters based on the Adaptec AIC-6915 "Starfire" chipset. .Pp The AIC-6915 is a bus master controller with an MII interface. It supports high and low priority transmit and receive queues, TCP/IP checksum offload, multiple DMA descriptor formats and both polling and producer/consumer DMA models. The AIC-6915 receive filtering options include a 16 entry perfect filter, a 512-bit hash table for multicast addresses, a 512-bit hash table for priority address matching and VLAN filtering. An external MII-compliant transceiver is required for media interfacing. .Pp Multiport adapters consist of several AIC-6915 controllers connected via a PCI to PCI bridge. Each controller is treated as a separate interface by the .Nm driver. .Pp The .Nm driver supports the following media types: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It autoselect Enable autoselection of the media type and options. The user can manually override the autoselected mode by adding media options to the .Pa /etc/rc.conf file. .It 10baseT/UTP Set 10Mbps operation. The .Ar mediaopt option can also be used to select either .Ar full-duplex or .Ar half-duplex modes. .It 100baseTX Set 100Mbps (Fast Ethernet) operation. The .Ar mediaopt option can also be used to select either .Ar full-duplex or .Ar half-duplex modes. .El .Pp The .Nm driver supports the following media options: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It full-duplex Force full duplex operation .It half-duplex Force half duplex operation. .El .Pp For more information on configuring this device, see .Xr ifconfig 8 . .Sh HARDWARE Adapters supported by the .Nm driver include: .Pp .Bl -bullet -compact .It ANA-62011 64-bit single port 10/100baseTX adapter .It ANA-62022 64-bit dual port 10/100baseTX adapter .It ANA-62044 64-bit quad port 10/100baseTX adapter .It ANA-69011 32-bit single port 10/100baseTX adapter .It ANA-62020 64-bit single port 100baseFX adapter .El .Sh SYSCTL VARIABLES The following variables are available as both .Xr sysctl 8 variables and .Xr loader 8 tunables: .Bl -tag -width indent .It Va dev.sf.%d.int_mod Maximum amount of time to delay interrupt processing in units of 102.4us. The accepted range is 0 to 31, the default value is 1 (102.4us). 
Value 0 completely disables the interrupt moderation. The interface does not need to be brought down and up again before a change takes effect. .It Va dev.sf.%d.stats Display lots of useful MAC counters maintained in the driver. .El .Sh DIAGNOSTICS .Bl -diag .It "sf%d: couldn't map memory" A fatal initialization error has occurred. This may happen if the PCI BIOS has not configured the device, which may be because the BIOS has been configured for a "Plug and Play" operating system. The "Plug and Play OS" setting in the BIOS should be set to "no" or "off" in order for PCI devices to work properly with .Fx . .It "sf%d: couldn't map ports" A fatal initialization error has occurred. This may happen if the PCI BIOS has not configured the device, which may be because the BIOS has been configured for a "Plug and Play" operating system. The "Plug and Play OS" setting in the BIOS should be set to "no" or "off" in order for PCI devices to work properly with .Fx . .It "sf%d: couldn't map interrupt" A fatal initialization error has occurred. .It "sf%d: no memory for softc struct!" The driver failed to allocate memory for per-device instance information during initialization. .It "sf%d: failed to enable I/O ports/memory mapping!" The driver failed to initialize PCI I/O port or shared memory access. This might happen if the card is not in a bus-master slot. .It "sf%d: watchdog timeout" The device has stopped responding to the network, or there is a problem with the network connection (cable). .El .Sh SEE ALSO .Xr altq 4 , .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr polling 4 , .Xr vlan 4 , .Xr ifconfig 8 .Rs .%T The Adaptec AIC-6915 Programmer's Manual .%U http://download.adaptec.com/pdfs/user_guides/aic6915_pg.pdf .Re .Sh HISTORY The .Nm device driver first appeared in .Fx 3.0 . .Sh AUTHORS The .Nm driver was written by .An Bill Paul Aq Mt wpaul@ctr.columbia.edu . Index: stable/12/share/man/man4/sn.4 =================================================================== --- stable/12/share/man/man4/sn.4 (revision 339734) +++ stable/12/share/man/man4/sn.4 (revision 339735) @@ -1,107 +1,115 @@ .\" .\" Copyright (c) 2000 M. Warner Losh .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt SN 4 .Os .Sh NAME .Nm sn .Nd "Ethernet driver for SMC91Cxx based cards" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device sn" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_sn_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm device driver supports SMC91Cxx based ISA and PCMCIA cards. .Sh HARDWARE The .Nm driver supports SMC91Cxx based ISA and PCMCIA cards including: .Pp .Bl -bullet -compact .It 3Com Megahertz X-Jack Ethernet PC Card XJ10BT, XJ10BC .It 3Com Megahertz XJEM and CCEM series: CCEM3288C, CCEM3288T, CCEM3336, CEM3336C, CCEM3336T, XJEM1144C, XJEM1144T, XJEM3288C, XJEM3288T, XJEM3336 .It Farallon EtherMac PC Card 595a .It Motorola Mariner Ethernet/Modem PC Card .It Ositech Seven of Diamonds Ethernet PC Card .It Ositech Jack of Hearts Ethernet/Modem PC Card .It Psion Gold Card Netglobal Ethernet PC Card .It Psion Gold Card Netglobal 10/100 Fast Ethernet PC Card .It Psion Gold Card Netglobal 56k+10Mb Ethernet PC Card .It SMC EZEther PC Card (8020BT) .It SMC EZEther PC Card (8020T) .El .Pp The .Nm driver supports the SMC 91C90, SMC 91C92, SMC 91C94, SMC 91C95, SMC 91C96, SMC91C100 and SMC 91C100FD chips from SMC. .Pp The Farallon EtherWave and EtherMac card came in two varieties. The .Xr ep 4 driver supports the 595 and 895 cards. These cards have the blue arrow on the front along with a 3Com logo. The Farallon 595a cards, which have a red arrow on the front, are also called EtherWave and EtherMac. They are supported by the .Nm driver. .Sh SEE ALSO .Xr ed 4 , .Xr ep 4 , .Xr intro 4 , .Xr ng_ether 4 , .Xr vx 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver appeared in .Fx 4.0 . Index: stable/12/share/man/man4/tl.4 =================================================================== --- stable/12/share/man/man4/tl.4 (revision 339734) +++ stable/12/share/man/man4/tl.4 (revision 339735) @@ -1,185 +1,193 @@ .\" Copyright (c) 1997, 1998 .\" Bill Paul . All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by Bill Paul. .\" 4. Neither the name of the author nor the names of any co-contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. 
IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD .\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR .\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF .\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS .\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN .\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) .\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF .\" THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt TL 4 .Os .Sh NAME .Nm tl .Nd "Texas Instruments ThunderLAN Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device tl" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_tl_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for PCI Ethernet adapters based on the Texas Instruments ThunderLAN Ethernet controller chip. .Pp The ThunderLAN controller has a standard MII interface that supports up to 32 physical interface devices (PHYs). It also has a built-in 10baseT PHY hardwired at MII address 31, which may be used in some 10Mbps-only hardware configurations. In 100Mbps configurations, a National Semiconductor DP83840A or other MII-compliant PHY may be attached to the ThunderLAN's MII bus. If a DP83840A or equivalent is available, the ThunderLAN chip can operate at either 100Mbps or 10Mbps in either half-duplex or full-duplex modes. The ThunderLAN's built-in PHY and the DP83840A also support autonegotiation. .Pp The .Nm driver supports the following media types: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It autoselect Enable autoselection of the media type and options. Note that this option is only available on those PHYs that support autonegotiation. Also, the PHY will not advertise those modes that have been explicitly disabled using the following media options. .It 10baseT/UTP Set 10Mbps operation. .It 100baseTX Set 100Mbps (Fast Ethernet) operation. .It 10base5/AUI Enable AUI/BNC interface (useful only with the built-in PHY). .El .Pp The .Nm driver supports the following media options: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It full-duplex Force full duplex operation. .It half-duplex Force half duplex operation. .It hw-loopback Enable hardware loopback mode. .El .Pp Note that the 100baseTX media type is only available if supported by the PHY. For more information on configuring this device, see .Xr ifconfig 8 . .Sh HARDWARE The .Nm driver supports Texas Instruments ThunderLAN based Ethernet and Fast Ethernet adapters including a large number of Compaq PCI Ethernet adapters. 
Also supported are: .Pp .Bl -bullet -compact .It Olicom OC-2135/2138 10/100 TX UTP adapter .It Olicom OC-2325/OC-2326 10/100 TX UTP adapter .It Racore 8148 10baseT/100baseTX/100baseFX adapter .It Racore 8165 10/100baseTX adapter .El .Pp The .Nm driver also supports the built-in Ethernet adapters of various Compaq Prosignia servers and Compaq Deskpro desktop machines including: .Pp .Bl -bullet -compact .It Compaq Netelligent 10 .It Compaq Netelligent 10 T PCI UTP/Coax .It Compaq Netelligent 10/100 .It Compaq Netelligent 10/100 Dual-Port .It Compaq Netelligent 10/100 Proliant .It Compaq Netelligent 10/100 TX Embedded UTP .It Compaq Netelligent 10/100 TX UTP .It Compaq NetFlex 3P .It Compaq NetFlex 3P Integrated .It Compaq NetFlex 3P w/BNC .El .Sh DIAGNOSTICS .Bl -diag .It "tl%d: couldn't map memory" A fatal initialization error has occurred. .It "tl%d: couldn't map interrupt" A fatal initialization error has occurred. .It "tl%d: device timeout" The device has stopped responding to the network, or there is a problem with the network connection (cable). .It "tl%d: no memory for rx list" The driver failed to allocate an mbuf for the receiver ring. .It "tl%d: no memory for tx list" The driver failed to allocate an mbuf for the transmitter ring when allocating a pad buffer or collapsing an mbuf chain into a cluster. .El .Sh SEE ALSO .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver first appeared in .Fx 2.2 . .Sh AUTHORS The .Nm driver was written by .An Bill Paul Aq Mt wpaul@ctr.columbia.edu . Index: stable/12/share/man/man4/tx.4 =================================================================== --- stable/12/share/man/man4/tx.4 (revision 339734) +++ stable/12/share/man/man4/tx.4 (revision 339735) @@ -1,120 +1,128 @@ .\" .\" Copyright (c) 1998-2001 Semen Ustimenko .\" .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
.\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt TX 4 .Os .Sh NAME .Nm tx .Nd "SMC 83c17x Fast Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device tx" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_tx_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for the Ethernet adapters based on the SMC 83c17x (EPIC) chips. These are mostly SMC 9432 series cards. .Pp The .Nm driver supports the following media types (depending on card's capabilities): .Bl -tag -width ".Cm 10baseT/UTP" .It Cm autoselect Enable autonegotiation (default). .It Cm 100baseFX Set 100Mbps (Fast Ethernet) fiber optic operation. .It Cm 100baseTX Set 100Mbps (Fast Ethernet) twisted pair operation. .It Cm 10baseT/UTP Set 10Mbps on 10baseT port. .It Cm 10base2/BNC Set 10Mbps on 10base2 port. .El .Pp The .Nm driver supports the following media options: .Bl -tag -width ".Cm full-duplex" .It Cm full-duplex Set full-duplex operation. .El .Pp The .Nm driver supports oversized Ethernet packets (up to 1600 bytes). Refer to the .Xr ifconfig 8 man page on setting the interface's MTU. .Pp The old .Dq Li "ifconfig tx0 linkN" method of configuration is not supported. .Ss "VLAN (IEEE 802.1Q) support" The .Nm driver supports the VLAN operation (using .Xr vlan 4 interfaces) without decreasing the MTU on the .Xr vlan 4 interfaces. .Sh DIAGNOSTICS .Bl -diag .It "tx%d: device timeout %d packets" The device stops responding. Device and driver reset follows this error. .It "tx%d: PCI fatal error occurred (%s)" One of following errors occurred: PCI Target Abort, PCI Master Abort, Data Parity Error or Address Parity Error. Device and driver reset follows this error. .It "tx%d: cannot allocate mbuf header/cluster" Cannot allocate memory for received packet. Packet thrown away. .It "tx%d: can't stop %s DMA" While resetting, the driver failed to stop the device correctly. .El .Sh SEE ALSO .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh BUGS The auto-negotiation does not work very well. Index: stable/12/share/man/man4/txp.4 =================================================================== --- stable/12/share/man/man4/txp.4 (revision 339734) +++ stable/12/share/man/man4/txp.4 (revision 339735) @@ -1,140 +1,148 @@ .\" $OpenBSD: txp.4,v 1.8 2001/06/26 02:09:11 pjanzen Exp $ .\" .\" Copyright (c) 2001 Jason L. Wright (jason@thought.net) .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. 
.\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED .\" WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE .\" DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, .\" INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES .\" (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR .\" SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, .\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN .\" ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE .\" POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd January 26, 2012 +.Dd October 24, 2018 .Dt TXP 4 .Os .Sh NAME .Nm txp .Nd "3Com 3XP Typhoon/Sidewinder (3CR990) Ethernet interface" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device txp" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_txp_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm interface provides access to the 10Mb/s and 100Mb/s Ethernet networks via the .Tn 3Com .Tn Typhoon/Sidewinder chipset. .Pp Basic Ethernet functions are provided as well as support for .Xr vlan 4 tag removal and insertion assistance, receive .Xr ip 4 , .Xr tcp 4 , and .Xr udp 4 checksum offloading, and transmit .Xr ip 4 checksum offloading. There is currently no support for transmit .Xr tcp 4 or .Xr udp 4 checksum offloading, .Xr tcp 4 segmentation, nor .Xr ipsec 4 acceleration. .Pp When a .Nm interface is brought up, by default, it will attempt to auto-negotiate the link speed and duplex mode. The speeds, in order of attempt, are: 100Mb/s Full Duplex, 100Mb/s Half Duplex, 10 Mb/s Full Duplex, and 10 Mb/s Half Duplex. .Pp The .Nm supports several media types, which are selected via the .Xr ifconfig 8 command. The supported media types are: .Bl -tag -width indent .It Cm media autoselect Attempt to autoselect the media type (default) .It Cm media 100baseTX mediaopt full-duplex Use 100baseTX, full duplex .It Cm media 100baseTX Op Cm mediaopt half-duplex Use 100baseTX, half duplex .It Cm media 10baseT mediaopt full-duplex Use 10baseT, full duplex .It Cm media 10baseT Op Cm mediaopt half-duplex Use 10baseT, half duplex .El .Sh HARDWARE The .Nm driver supports the following cards: .Pp .Bl -bullet -offset indent -compact .It 3Com 3CR990-TX-95 .It 3Com 3CR990-TX-97 .It 3Com 3cR990B-TXM .It 3Com 3CR990SVR95 .It 3Com 3CR990SVR97 .It 3Com 3cR990B-SRV .El .Sh SEE ALSO .Xr altq 4 , .Xr arp 4 , .Xr inet 4 , .Xr intro 4 , .Xr ip 4 , .Xr miibus 4 , .Xr tcp 4 , .Xr udp 4 , .Xr vlan 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm driver first appeared in .Ox 2.9 . Index: stable/12/share/man/man4/wb.4 =================================================================== --- stable/12/share/man/man4/wb.4 (revision 339734) +++ stable/12/share/man/man4/wb.4 (revision 339735) @@ -1,196 +1,204 @@ .\" Copyright (c) 1997, 1998 .\" Bill Paul . All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. 
Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by Bill Paul. .\" 4. Neither the name of the author nor the names of any co-contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD .\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR .\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF .\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS .\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN .\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) .\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF .\" THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt WB 4 .Os .Sh NAME .Nm wb .Nd "Winbond W89C840F Fast Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device miibus" .Cd "device wb" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_wb_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver provides support for PCI Ethernet adapters and embedded controllers based on the Winbond W89C840F Fast Ethernet controller chip. The 840F should not be confused with the 940F, which is an NE2000 clone and only supports 10Mbps speeds. .Pp The Winbond controller uses bus master DMA and is designed to be a DEC 'tulip' workalike. It differs from the standard DEC design in several ways: the control and status registers are spaced 4 bytes apart instead of 8, and the receive filter is programmed through registers rather than by downloading a special setup frame via the transmit DMA engine. Using an external PHY, the Winbond chip supports both 10 and 100Mbps speeds in either full or half duplex. .Pp The .Nm driver supports the following media types: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It autoselect Enable autoselection of the media type and options. This is only supported if the PHY chip attached to the Winbond controller supports NWAY autonegotiation. The user can manually override the autoselected mode by adding media options to the .Pa /etc/rc.conf file. .It 10baseT/UTP Set 10Mbps operation. The .Ar mediaopt option can also be used to select either .Ar full-duplex or .Ar half-duplex modes. .It 100baseTX Set 100Mbps (Fast Ethernet) operation. 
The .Ar mediaopt option can also be used to select either .Ar full-duplex or .Ar half-duplex modes. .El .Pp The .Nm driver supports the following media options: .Bl -tag -width xxxxxxxxxxxxxxxxxxxx .It full-duplex Force full duplex operation. .It half-duplex Force half duplex operation. .El .Pp Note that the 100baseTX media type is only available if supported by the adapter. For more information on configuring this device, see .Xr ifconfig 8 . .Sh HARDWARE The .Nm driver supports Winbond W89C840F based Fast Ethernet adapters and embedded controllers including: .Pp .Bl -bullet -compact .It Trendware TE100-PCIE .El .Sh DIAGNOSTICS .Bl -diag .It "wb%d: couldn't map memory" A fatal initialization error has occurred. .It "wb%d: couldn't map interrupt" A fatal initialization error has occurred. .It "wb%d: watchdog timeout" The device has stopped responding to the network, or there is a problem with the network connection (cable). .It "wb%d: no memory for rx list" The driver failed to allocate an mbuf for the receiver ring. .It "wb%d: no memory for tx list" The driver failed to allocate an mbuf for the transmitter ring when allocating a pad buffer or collapsing an mbuf chain into a cluster. .It "wb%d: chip is in D3 power state -- setting to D0" This message applies only to adapters which support power management. Some operating systems place the controller in low power mode when shutting down, and some PCI BIOSes fail to bring the chip out of this state before configuring it. The controller loses all of its PCI configuration in the D3 state, so if the BIOS does not set it back to full power mode in time, it will not be able to configure it correctly. The driver tries to detect this condition and bring the adapter back to the D0 (full power) state, but this may not be enough to return the driver to a fully operational condition. If you see this message at boot time and the driver fails to attach the device as a network interface, you will have to perform a second warm boot to have the device properly configured. .Pp Note that this condition only occurs when warm booting from another operating system. If you power down your system prior to booting .Fx , the card should be configured correctly. .El .Sh SEE ALSO .Xr arp 4 , .Xr miibus 4 , .Xr netintro 4 , .Xr ng_ether 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm device driver first appeared in .Fx 3.0 . .Sh AUTHORS The .Nm driver was written by .An Bill Paul Aq Mt wpaul@ctr.columbia.edu . .Sh BUGS The Winbond chip seems to behave strangely in some cases when the link partner switches modes. If for example both sides are set to 10Mbps half-duplex, and the other end is changed to 100Mbps full-duplex, the Winbond's receiver suddenly starts writing trash all over the RX descriptors. The .Nm driver handles this by forcing a reset of both the controller chip and attached PHY. This is drastic, but it appears to be the only way to recover properly from this condition. Index: stable/12/share/man/man4/xe.4 =================================================================== --- stable/12/share/man/man4/xe.4 (revision 339734) +++ stable/12/share/man/man4/xe.4 (revision 339735) @@ -1,168 +1,176 @@ .\" .\" Copyright (c) 2003 Tom Rhodes .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2.
Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd July 16, 2005 +.Dd October 24, 2018 .Dt XE 4 .Os .Sh NAME .Nm xe .Nd "Xircom PCMCIA Ethernet device driver" .Sh SYNOPSIS To compile this driver into the kernel, place the following line in your kernel configuration file: .Bd -ragged -offset indent .Cd "device xe" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent if_xe_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 13.0 +and later. +See https://github.com/freebsd/fcp/blob/master/fcp-0101.md for more +information. .Sh DESCRIPTION The .Nm driver supports .Tn PCMCIA Ethernet adapters based on Xircom CE2- and CE3-class hardware. This includes devices made by Xircom along with various .Tn OEM manufacturers. .Pp Please note that the .Nm driver only supports .Tn PCMCIA cards and their Ethernet functions. .Nm does not support the on-board modem device located on some versions of the Ethernet/modem combo cards. In particular, Xircom RealPort2 cards are not supported by this driver. .Pp The .Nm driver supports the following media types: .Bl -tag -width ".Cm autoselect" .It Cm autoselect Enable autoselection of media type and options. .It Cm 10Base2/BNC Select 10Mbps operation on a BNC coaxial connector. .It Cm 10BaseT/UTP Select 10Mbps operation on an RJ-45 connector. .It Cm 100BaseTX Select 100Mbps operation. .El .Pp Note that 100BaseTX operation is not available on CE2-class cards, while the 10Base2/BNC mode is only available on CE2-class cards. Full-duplex operation is currently not supported. For more information on configuring network interface devices, see .Xr ifconfig 8 . An illustrative invocation appears after the hardware list below. .Sh HARDWARE The .Nm driver supports the following cards: .Pp .Bl -bullet -compact .It Xircom CreditCard Ethernet (PS-CE2-10) .It Xircom CreditCard Ethernet + Modem 28 (PS-CEM-28) .It Xircom CreditCard Ethernet + Modem 33 (CEM33) .It Xircom CreditCard 10/100 (CE3, CE3B) .It Xircom CreditCard Ethernet 10/100 + Modem 56 (CEM56) .It Xircom RealPort Ethernet 10 (RE10) .It Xircom RealPort Ethernet 10/100 (RE100) .It Xircom RealPort Ethernet 10/100 + Modem 56 (REM56, REM56G) .It Accton Fast EtherCard-16 (EN2226) .It Compaq Microcom CPQ550 Ethernet/Modem PC Card .It Compaq Netelligent 10/100 PC Card (CPQ-10/100) .It Intel EtherExpress Pro/100 PC Card Mobile Adapter 16 (Pro/100 M16A) .It Intel EtherExpress Pro/100 LAN/Modem PC Card Adapter (Pro/100 M16B) .El .Pp Other similar devices using the same hardware may also be supported.
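.Pp
As an illustrative sketch only, assuming the first such card attached as unit 0, 100Mbps operation on a CE3-class card could be selected manually with:
.Bd -literal -offset indent
ifconfig xe0 media 100BaseTX
.Ed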
.Sh DIAGNOSTICS .Bl -diag .It "xe%d: Cannot allocate ioport" .It "xe%d: Cannot allocate irq" A fatal initialization error occurred while attempting to allocate system resources for the card. .It "xe%d: Unable to fix your %s combo card" A fatal initialization error occurred while attempting to attach an Ethernet/modem combo card. .It "xe%d: watchdog timeout: resetting card" The card failed to generate an interrupt acknowledging a transmitted packet. May indicate a .Tn PCMCIA configuration problem. .It "xe%d: no carrier" The card has lost all contact with the network; this usually indicates a cable problem. .El .Sh SEE ALSO .Xr pccard 4 , .Xr ifconfig 8 .Sh HISTORY The .Nm driver first appeared in .Fx 3.3 . .Sh AUTHORS .An -nosplit The .Nm device driver was written by .An Scott Mitchell Aq Mt rsm@FreeBSD.org . This manual page was written by .An Scott Mitchell Aq Mt rsm@FreeBSD.org , and .An Tom Rhodes Aq Mt trhodes@FreeBSD.org . .Sh BUGS Supported devices will fail to attach on some machines using the .Tn NEWCARD .Tn PC Card framework. .Pp Automatic media selection is usually unreliable. Index: stable/12/sys/dev/ae/if_ae.c =================================================================== --- stable/12/sys/dev/ae/if_ae.c (revision 339734) +++ stable/12/sys/dev/ae/if_ae.c (revision 339735) @@ -1,2259 +1,2261 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2008 Stanislav Sedov . * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Driver for Attansic Technology Corp. L2 FastEthernet adapter. * * This driver is heavily based on age(4) Attansic L1 driver by Pyun YongHyeon. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "miibus_if.h" #include "if_aereg.h" #include "if_aevar.h" /* * Devices supported by this driver. 
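 * Each entry pairs a PCI vendor/device ID with a printable
 * description; ae_probe() walks this table to find a match and
 * set the device description.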
*/ static struct ae_dev { uint16_t vendorid; uint16_t deviceid; const char *name; } ae_devs[] = { { VENDORID_ATTANSIC, DEVICEID_ATTANSIC_L2, "Attansic Technology Corp, L2 FastEthernet" }, }; #define AE_DEVS_COUNT nitems(ae_devs) static struct resource_spec ae_res_spec_mem[] = { { SYS_RES_MEMORY, PCIR_BAR(0), RF_ACTIVE }, { -1, 0, 0 } }; static struct resource_spec ae_res_spec_irq[] = { { SYS_RES_IRQ, 0, RF_ACTIVE | RF_SHAREABLE }, { -1, 0, 0 } }; static struct resource_spec ae_res_spec_msi[] = { { SYS_RES_IRQ, 1, RF_ACTIVE }, { -1, 0, 0 } }; static int ae_probe(device_t dev); static int ae_attach(device_t dev); static void ae_pcie_init(ae_softc_t *sc); static void ae_phy_reset(ae_softc_t *sc); static void ae_phy_init(ae_softc_t *sc); static int ae_reset(ae_softc_t *sc); static void ae_init(void *arg); static int ae_init_locked(ae_softc_t *sc); static int ae_detach(device_t dev); static int ae_miibus_readreg(device_t dev, int phy, int reg); static int ae_miibus_writereg(device_t dev, int phy, int reg, int val); static void ae_miibus_statchg(device_t dev); static void ae_mediastatus(struct ifnet *ifp, struct ifmediareq *ifmr); static int ae_mediachange(struct ifnet *ifp); static void ae_retrieve_address(ae_softc_t *sc); static void ae_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error); static int ae_alloc_rings(ae_softc_t *sc); static void ae_dma_free(ae_softc_t *sc); static int ae_shutdown(device_t dev); static int ae_suspend(device_t dev); static void ae_powersave_disable(ae_softc_t *sc); static void ae_powersave_enable(ae_softc_t *sc); static int ae_resume(device_t dev); static unsigned int ae_tx_avail_size(ae_softc_t *sc); static int ae_encap(ae_softc_t *sc, struct mbuf **m_head); static void ae_start(struct ifnet *ifp); static void ae_start_locked(struct ifnet *ifp); static void ae_link_task(void *arg, int pending); static void ae_stop_rxmac(ae_softc_t *sc); static void ae_stop_txmac(ae_softc_t *sc); static void ae_mac_config(ae_softc_t *sc); static int ae_intr(void *arg); static void ae_int_task(void *arg, int pending); static void ae_tx_intr(ae_softc_t *sc); static void ae_rxeof(ae_softc_t *sc, ae_rxd_t *rxd); static void ae_rx_intr(ae_softc_t *sc); static void ae_watchdog(ae_softc_t *sc); static void ae_tick(void *arg); static void ae_rxfilter(ae_softc_t *sc); static void ae_rxvlan(ae_softc_t *sc); static int ae_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data); static void ae_stop(ae_softc_t *sc); static int ae_check_eeprom_present(ae_softc_t *sc, int *vpdc); static int ae_vpd_read_word(ae_softc_t *sc, int reg, uint32_t *word); static int ae_get_vpd_eaddr(ae_softc_t *sc, uint32_t *eaddr); static int ae_get_reg_eaddr(ae_softc_t *sc, uint32_t *eaddr); static void ae_update_stats_rx(uint16_t flags, ae_stats_t *stats); static void ae_update_stats_tx(uint16_t flags, ae_stats_t *stats); static void ae_init_tunables(ae_softc_t *sc); static device_method_t ae_methods[] = { /* Device interface. */ DEVMETHOD(device_probe, ae_probe), DEVMETHOD(device_attach, ae_attach), DEVMETHOD(device_detach, ae_detach), DEVMETHOD(device_shutdown, ae_shutdown), DEVMETHOD(device_suspend, ae_suspend), DEVMETHOD(device_resume, ae_resume), /* MII interface. 
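 * These methods are invoked by the shared miibus(4) code to
 * access PHY registers through this controller.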
*/ DEVMETHOD(miibus_readreg, ae_miibus_readreg), DEVMETHOD(miibus_writereg, ae_miibus_writereg), DEVMETHOD(miibus_statchg, ae_miibus_statchg), { NULL, NULL } }; static driver_t ae_driver = { "ae", ae_methods, sizeof(ae_softc_t) }; static devclass_t ae_devclass; DRIVER_MODULE(ae, pci, ae_driver, ae_devclass, 0, 0); MODULE_PNP_INFO("U16:vendor;U16:device;D:#", pci, ae, ae_devs, nitems(ae_devs)); DRIVER_MODULE(miibus, ae, miibus_driver, miibus_devclass, 0, 0); MODULE_DEPEND(ae, pci, 1, 1, 1); MODULE_DEPEND(ae, ether, 1, 1, 1); MODULE_DEPEND(ae, miibus, 1, 1, 1); /* * Tunables. */ static int msi_disable = 0; TUNABLE_INT("hw.ae.msi_disable", &msi_disable); #define AE_READ_4(sc, reg) \ bus_read_4((sc)->mem[0], (reg)) #define AE_READ_2(sc, reg) \ bus_read_2((sc)->mem[0], (reg)) #define AE_READ_1(sc, reg) \ bus_read_1((sc)->mem[0], (reg)) #define AE_WRITE_4(sc, reg, val) \ bus_write_4((sc)->mem[0], (reg), (val)) #define AE_WRITE_2(sc, reg, val) \ bus_write_2((sc)->mem[0], (reg), (val)) #define AE_WRITE_1(sc, reg, val) \ bus_write_1((sc)->mem[0], (reg), (val)) #define AE_PHY_READ(sc, reg) \ ae_miibus_readreg(sc->dev, 0, reg) #define AE_PHY_WRITE(sc, reg, val) \ ae_miibus_writereg(sc->dev, 0, reg, val) #define AE_CHECK_EADDR_VALID(eaddr) \ ((eaddr[0] == 0 && eaddr[1] == 0) || \ (eaddr[0] == 0xffffffff && eaddr[1] == 0xffff)) #define AE_RXD_VLAN(vtag) \ (((vtag) >> 4) | (((vtag) & 0x07) << 13) | (((vtag) & 0x08) << 9)) #define AE_TXD_VLAN(vtag) \ (((vtag) << 4) | (((vtag) >> 13) & 0x07) | (((vtag) >> 9) & 0x08)) static int ae_probe(device_t dev) { uint16_t deviceid, vendorid; int i; vendorid = pci_get_vendor(dev); deviceid = pci_get_device(dev); /* * Search through the list of supported devs for matching one. */ for (i = 0; i < AE_DEVS_COUNT; i++) { if (vendorid == ae_devs[i].vendorid && deviceid == ae_devs[i].deviceid) { device_set_desc(dev, ae_devs[i].name); return (BUS_PROBE_DEFAULT); } } return (ENXIO); } static int ae_attach(device_t dev) { ae_softc_t *sc; struct ifnet *ifp; uint8_t chiprev; uint32_t pcirev; int nmsi, pmc; int error; sc = device_get_softc(dev); /* Automatically allocated and zeroed on attach. */ KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); sc->dev = dev; /* * Initialize mutexes and tasks. */ mtx_init(&sc->mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->tick_ch, &sc->mtx, 0); TASK_INIT(&sc->int_task, 0, ae_int_task, sc); TASK_INIT(&sc->link_task, 0, ae_link_task, sc); pci_enable_busmaster(dev); /* Enable bus mastering. */ sc->spec_mem = ae_res_spec_mem; /* * Allocate memory-mapped registers. */ error = bus_alloc_resources(dev, sc->spec_mem, sc->mem); if (error != 0) { device_printf(dev, "could not allocate memory resources.\n"); sc->spec_mem = NULL; goto fail; } /* * Retrieve PCI and chip revisions. */ pcirev = pci_get_revid(dev); chiprev = (AE_READ_4(sc, AE_MASTER_REG) >> AE_MASTER_REVNUM_SHIFT) & AE_MASTER_REVNUM_MASK; if (bootverbose) { device_printf(dev, "pci device revision: %#04x\n", pcirev); device_printf(dev, "chip id: %#02x\n", chiprev); } nmsi = pci_msi_count(dev); if (bootverbose) device_printf(dev, "MSI count: %d.\n", nmsi); /* * Allocate interrupt resources. 
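 * Prefer a single MSI message when the hardware offers one and
 * the hw.ae.msi_disable tunable has not been set; otherwise fall
 * back to the shared legacy INTx resource.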
*/ if (msi_disable == 0 && nmsi == 1) { error = pci_alloc_msi(dev, &nmsi); if (error == 0) { device_printf(dev, "Using MSI messages.\n"); sc->spec_irq = ae_res_spec_msi; error = bus_alloc_resources(dev, sc->spec_irq, sc->irq); if (error != 0) { device_printf(dev, "MSI allocation failed.\n"); sc->spec_irq = NULL; pci_release_msi(dev); } else { sc->flags |= AE_FLAG_MSI; } } } if (sc->spec_irq == NULL) { sc->spec_irq = ae_res_spec_irq; error = bus_alloc_resources(dev, sc->spec_irq, sc->irq); if (error != 0) { device_printf(dev, "could not allocate IRQ resources.\n"); sc->spec_irq = NULL; goto fail; } } ae_init_tunables(sc); ae_phy_reset(sc); /* Reset PHY. */ error = ae_reset(sc); /* Reset the controller itself. */ if (error != 0) goto fail; ae_pcie_init(sc); ae_retrieve_address(sc); /* Load MAC address. */ error = ae_alloc_rings(sc); /* Allocate ring buffers. */ if (error != 0) goto fail; ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "could not allocate ifnet structure.\n"); error = ENXIO; goto fail; } ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = ae_ioctl; ifp->if_start = ae_start; ifp->if_init = ae_init; ifp->if_capabilities = IFCAP_VLAN_MTU | IFCAP_VLAN_HWTAGGING; ifp->if_hwassist = 0; ifp->if_snd.ifq_drv_maxlen = ifqmaxlen; IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen); IFQ_SET_READY(&ifp->if_snd); if (pci_find_cap(dev, PCIY_PMG, &pmc) == 0) { ifp->if_capabilities |= IFCAP_WOL_MAGIC; sc->flags |= AE_FLAG_PMG; } ifp->if_capenable = ifp->if_capabilities; /* * Configure and attach MII bus. */ error = mii_attach(dev, &sc->miibus, ifp, ae_mediachange, ae_mediastatus, BMSR_DEFCAPMASK, AE_PHYADDR_DEFAULT, MII_OFFSET_ANY, 0); if (error != 0) { device_printf(dev, "attaching PHYs failed\n"); goto fail; } ether_ifattach(ifp, sc->eaddr); /* Tell the upper layer(s) we support long frames. */ ifp->if_hdrlen = sizeof(struct ether_vlan_header); /* * Create and run all helper tasks. */ sc->tq = taskqueue_create_fast("ae_taskq", M_WAITOK, taskqueue_thread_enqueue, &sc->tq); if (sc->tq == NULL) { device_printf(dev, "could not create taskqueue.\n"); ether_ifdetach(ifp); error = ENXIO; goto fail; } taskqueue_start_threads(&sc->tq, 1, PI_NET, "%s taskq", device_get_nameunit(sc->dev)); /* * Configure interrupt handlers. */ error = bus_setup_intr(dev, sc->irq[0], INTR_TYPE_NET | INTR_MPSAFE, ae_intr, NULL, sc, &sc->intrhand); if (error != 0) { device_printf(dev, "could not set up interrupt handler.\n"); taskqueue_free(sc->tq); sc->tq = NULL; ether_ifdetach(ifp); goto fail; } + gone_by_fcp101_dev(dev); + fail: if (error != 0) ae_detach(dev); return (error); } #define AE_SYSCTL(stx, parent, name, desc, ptr) \ SYSCTL_ADD_UINT(ctx, parent, OID_AUTO, name, CTLFLAG_RD, ptr, 0, desc) static void ae_init_tunables(ae_softc_t *sc) { struct sysctl_ctx_list *ctx; struct sysctl_oid *root, *stats, *stats_rx, *stats_tx; struct ae_stats *ae_stats; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); ae_stats = &sc->stats; ctx = device_get_sysctl_ctx(sc->dev); root = device_get_sysctl_tree(sc->dev); stats = SYSCTL_ADD_NODE(ctx, SYSCTL_CHILDREN(root), OID_AUTO, "stats", CTLFLAG_RD, NULL, "ae statistics"); /* * Receiver statistics.
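 * Each counter below is exported as a read-only sysctl under the
 * per-device "stats" subtree.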
*/ stats_rx = SYSCTL_ADD_NODE(ctx, SYSCTL_CHILDREN(stats), OID_AUTO, "rx", CTLFLAG_RD, NULL, "Rx MAC statistics"); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "bcast", "broadcast frames", &ae_stats->rx_bcast); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "mcast", "multicast frames", &ae_stats->rx_mcast); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "pause", "PAUSE frames", &ae_stats->rx_pause); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "control", "control frames", &ae_stats->rx_ctrl); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "crc_errors", "frames with CRC errors", &ae_stats->rx_crcerr); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "code_errors", "frames with invalid opcode", &ae_stats->rx_codeerr); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "runt", "runt frames", &ae_stats->rx_runt); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "frag", "fragmented frames", &ae_stats->rx_frag); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "align_errors", "frames with alignment errors", &ae_stats->rx_align); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_rx), "truncated", "frames truncated due to Rx FIFO underrun", &ae_stats->rx_trunc); /* * Transmitter statistics. */ stats_tx = SYSCTL_ADD_NODE(ctx, SYSCTL_CHILDREN(stats), OID_AUTO, "tx", CTLFLAG_RD, NULL, "Tx MAC statistics"); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "bcast", "broadcast frames", &ae_stats->tx_bcast); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "mcast", "multicast frames", &ae_stats->tx_mcast); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "pause", "PAUSE frames", &ae_stats->tx_pause); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "control", "control frames", &ae_stats->tx_ctrl); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "defers", "deferrals occurred", &ae_stats->tx_defer); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "exc_defers", "excessive deferrals occurred", &ae_stats->tx_excdefer); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "singlecols", "single collisions occurred", &ae_stats->tx_singlecol); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "multicols", "multiple collisions occurred", &ae_stats->tx_multicol); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "latecols", "late collisions occurred", &ae_stats->tx_latecol); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "aborts", "transmit aborts due to collisions", &ae_stats->tx_abortcol); AE_SYSCTL(ctx, SYSCTL_CHILDREN(stats_tx), "underruns", "Tx FIFO underruns", &ae_stats->tx_underrun); } static void ae_pcie_init(ae_softc_t *sc) { AE_WRITE_4(sc, AE_PCIE_LTSSM_TESTMODE_REG, AE_PCIE_LTSSM_TESTMODE_DEFAULT); AE_WRITE_4(sc, AE_PCIE_DLL_TX_CTRL_REG, AE_PCIE_DLL_TX_CTRL_DEFAULT); } static void ae_phy_reset(ae_softc_t *sc) { AE_WRITE_4(sc, AE_PHY_ENABLE_REG, AE_PHY_ENABLE); DELAY(1000); /* XXX: pause(9) ? */ } static int ae_reset(ae_softc_t *sc) { int i; /* * Issue a soft reset. */ AE_WRITE_4(sc, AE_MASTER_REG, AE_MASTER_SOFT_RESET); bus_barrier(sc->mem[0], AE_MASTER_REG, 4, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * Wait for reset to complete. */ for (i = 0; i < AE_RESET_TIMEOUT; i++) { if ((AE_READ_4(sc, AE_MASTER_REG) & AE_MASTER_SOFT_RESET) == 0) break; DELAY(10); } if (i == AE_RESET_TIMEOUT) { device_printf(sc->dev, "reset timeout.\n"); return (ENXIO); } /* * Wait for everything to enter idle state.
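 * Poll the idle register until all units report idle, giving up
 * after AE_IDLE_TIMEOUT iterations.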
*/ for (i = 0; i < AE_IDLE_TIMEOUT; i++) { if (AE_READ_4(sc, AE_IDLE_REG) == 0) break; DELAY(100); } if (i == AE_IDLE_TIMEOUT) { device_printf(sc->dev, "could not enter idle state.\n"); return (ENXIO); } return (0); } static void ae_init(void *arg) { ae_softc_t *sc; sc = (ae_softc_t *)arg; AE_LOCK(sc); ae_init_locked(sc); AE_UNLOCK(sc); } static void ae_phy_init(ae_softc_t *sc) { /* * Enable link status change interrupt. * XXX magic numbers. */ #ifdef notyet AE_PHY_WRITE(sc, 18, 0xc00); #endif } static int ae_init_locked(ae_softc_t *sc) { struct ifnet *ifp; struct mii_data *mii; uint8_t eaddr[ETHER_ADDR_LEN]; uint32_t val; bus_addr_t addr; AE_LOCK_ASSERT(sc); ifp = sc->ifp; if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) return (0); mii = device_get_softc(sc->miibus); ae_stop(sc); ae_reset(sc); ae_pcie_init(sc); /* Initialize PCIE stuff. */ ae_phy_init(sc); ae_powersave_disable(sc); /* * Clear and disable interrupts. */ AE_WRITE_4(sc, AE_ISR_REG, 0xffffffff); /* * Set the MAC address. */ bcopy(IF_LLADDR(ifp), eaddr, ETHER_ADDR_LEN); val = eaddr[2] << 24 | eaddr[3] << 16 | eaddr[4] << 8 | eaddr[5]; AE_WRITE_4(sc, AE_EADDR0_REG, val); val = eaddr[0] << 8 | eaddr[1]; AE_WRITE_4(sc, AE_EADDR1_REG, val); bzero(sc->rxd_base_dma, AE_RXD_COUNT_DEFAULT * 1536 + AE_RXD_PADDING); bzero(sc->txd_base, AE_TXD_BUFSIZE_DEFAULT); bzero(sc->txs_base, AE_TXS_COUNT_DEFAULT * 4); /* * Set ring buffers base addresses. */ addr = sc->dma_rxd_busaddr; AE_WRITE_4(sc, AE_DESC_ADDR_HI_REG, BUS_ADDR_HI(addr)); AE_WRITE_4(sc, AE_RXD_ADDR_LO_REG, BUS_ADDR_LO(addr)); addr = sc->dma_txd_busaddr; AE_WRITE_4(sc, AE_TXD_ADDR_LO_REG, BUS_ADDR_LO(addr)); addr = sc->dma_txs_busaddr; AE_WRITE_4(sc, AE_TXS_ADDR_LO_REG, BUS_ADDR_LO(addr)); /* * Configure ring buffers sizes. */ AE_WRITE_2(sc, AE_RXD_COUNT_REG, AE_RXD_COUNT_DEFAULT); AE_WRITE_2(sc, AE_TXD_BUFSIZE_REG, AE_TXD_BUFSIZE_DEFAULT / 4); AE_WRITE_2(sc, AE_TXS_COUNT_REG, AE_TXS_COUNT_DEFAULT); /* * Configure interframe gap parameters. */ val = ((AE_IFG_TXIPG_DEFAULT << AE_IFG_TXIPG_SHIFT) & AE_IFG_TXIPG_MASK) | ((AE_IFG_RXIPG_DEFAULT << AE_IFG_RXIPG_SHIFT) & AE_IFG_RXIPG_MASK) | ((AE_IFG_IPGR1_DEFAULT << AE_IFG_IPGR1_SHIFT) & AE_IFG_IPGR1_MASK) | ((AE_IFG_IPGR2_DEFAULT << AE_IFG_IPGR2_SHIFT) & AE_IFG_IPGR2_MASK); AE_WRITE_4(sc, AE_IFG_REG, val); /* * Configure half-duplex operation. */ val = ((AE_HDPX_LCOL_DEFAULT << AE_HDPX_LCOL_SHIFT) & AE_HDPX_LCOL_MASK) | ((AE_HDPX_RETRY_DEFAULT << AE_HDPX_RETRY_SHIFT) & AE_HDPX_RETRY_MASK) | ((AE_HDPX_ABEBT_DEFAULT << AE_HDPX_ABEBT_SHIFT) & AE_HDPX_ABEBT_MASK) | ((AE_HDPX_JAMIPG_DEFAULT << AE_HDPX_JAMIPG_SHIFT) & AE_HDPX_JAMIPG_MASK) | AE_HDPX_EXC_EN; AE_WRITE_4(sc, AE_HDPX_REG, val); /* * Configure interrupt moderate timer. */ AE_WRITE_2(sc, AE_IMT_REG, AE_IMT_DEFAULT); val = AE_READ_4(sc, AE_MASTER_REG); val |= AE_MASTER_IMT_EN; AE_WRITE_4(sc, AE_MASTER_REG, val); /* * Configure interrupt clearing timer. */ AE_WRITE_2(sc, AE_ICT_REG, AE_ICT_DEFAULT); /* * Configure MTU. */ val = ifp->if_mtu + ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN + ETHER_CRC_LEN; AE_WRITE_2(sc, AE_MTU_REG, val); /* * Configure cut-through threshold. */ AE_WRITE_4(sc, AE_CUT_THRESH_REG, AE_CUT_THRESH_DEFAULT); /* * Configure flow control. */ AE_WRITE_2(sc, AE_FLOW_THRESH_HI_REG, (AE_RXD_COUNT_DEFAULT / 8) * 7); AE_WRITE_2(sc, AE_FLOW_THRESH_LO_REG, (AE_RXD_COUNT_MIN / 8) > (AE_RXD_COUNT_DEFAULT / 12) ? (AE_RXD_COUNT_MIN / 8) : (AE_RXD_COUNT_DEFAULT / 12)); /* * Init mailboxes. 
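ae_init_locked() above splits the six-byte station address across two 32-bit registers: EADDR1 carries the two most significant octets, EADDR0 the remaining four. A standalone round-trip sketch of that layout (register names abstracted away), with a self-check:

#include <assert.h>
#include <stdint.h>

static void
pack_eaddr(const uint8_t e[6], uint32_t *reg0, uint32_t *reg1)
{
	*reg0 = (uint32_t)e[2] << 24 | (uint32_t)e[3] << 16 |
	    (uint32_t)e[4] << 8 | e[5];
	*reg1 = (uint32_t)e[0] << 8 | e[1];
}

static void
unpack_eaddr(uint32_t reg0, uint32_t reg1, uint8_t e[6])
{
	e[0] = (reg1 >> 8) & 0xff;
	e[1] = reg1 & 0xff;
	e[2] = (reg0 >> 24) & 0xff;
	e[3] = (reg0 >> 16) & 0xff;
	e[4] = (reg0 >> 8) & 0xff;
	e[5] = reg0 & 0xff;
}

int
main(void)
{
	const uint8_t in[6] = { 0x02, 0x1f, 0xc6, 0x12, 0x34, 0x56 };
	uint8_t out[6];
	uint32_t r0, r1;
	int i;

	pack_eaddr(in, &r0, &r1);
	unpack_eaddr(r0, r1, out);
	for (i = 0; i < 6; i++)
		assert(in[i] == out[i]);
	return (0);
}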
*/ sc->txd_cur = sc->rxd_cur = 0; sc->txs_ack = sc->txd_ack = 0; sc->rxd_cur = 0; AE_WRITE_2(sc, AE_MB_TXD_IDX_REG, sc->txd_cur); AE_WRITE_2(sc, AE_MB_RXD_IDX_REG, sc->rxd_cur); sc->tx_inproc = 0; /* Number of packets the chip processes now. */ sc->flags |= AE_FLAG_TXAVAIL; /* Free Tx's available. */ /* * Enable DMA. */ AE_WRITE_1(sc, AE_DMAREAD_REG, AE_DMAREAD_EN); AE_WRITE_1(sc, AE_DMAWRITE_REG, AE_DMAWRITE_EN); /* * Check if everything is OK. */ val = AE_READ_4(sc, AE_ISR_REG); if ((val & AE_ISR_PHY_LINKDOWN) != 0) { device_printf(sc->dev, "Initialization failed.\n"); return (ENXIO); } /* * Clear interrupt status. */ AE_WRITE_4(sc, AE_ISR_REG, 0x3fffffff); AE_WRITE_4(sc, AE_ISR_REG, 0x0); /* * Enable interrupts. */ val = AE_READ_4(sc, AE_MASTER_REG); AE_WRITE_4(sc, AE_MASTER_REG, val | AE_MASTER_MANUAL_INT); AE_WRITE_4(sc, AE_IMR_REG, AE_IMR_DEFAULT); /* * Disable WOL. */ AE_WRITE_4(sc, AE_WOL_REG, 0); /* * Configure MAC. */ val = AE_MAC_TX_CRC_EN | AE_MAC_TX_AUTOPAD | AE_MAC_FULL_DUPLEX | AE_MAC_CLK_PHY | AE_MAC_TX_FLOW_EN | AE_MAC_RX_FLOW_EN | ((AE_HALFBUF_DEFAULT << AE_HALFBUF_SHIFT) & AE_HALFBUF_MASK) | ((AE_MAC_PREAMBLE_DEFAULT << AE_MAC_PREAMBLE_SHIFT) & AE_MAC_PREAMBLE_MASK); AE_WRITE_4(sc, AE_MAC_REG, val); /* * Configure Rx MAC. */ ae_rxfilter(sc); ae_rxvlan(sc); /* * Enable Tx/Rx. */ val = AE_READ_4(sc, AE_MAC_REG); AE_WRITE_4(sc, AE_MAC_REG, val | AE_MAC_TX_EN | AE_MAC_RX_EN); sc->flags &= ~AE_FLAG_LINK; mii_mediachg(mii); /* Switch to the current media. */ callout_reset(&sc->tick_ch, hz, ae_tick, sc); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; #ifdef AE_DEBUG device_printf(sc->dev, "Initialization complete.\n"); #endif return (0); } static int ae_detach(device_t dev) { struct ae_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); KASSERT(sc != NULL, ("[ae: %d]: sc is NULL", __LINE__)); ifp = sc->ifp; if (device_is_attached(dev)) { AE_LOCK(sc); sc->flags |= AE_FLAG_DETACH; ae_stop(sc); AE_UNLOCK(sc); callout_drain(&sc->tick_ch); taskqueue_drain(sc->tq, &sc->int_task); taskqueue_drain(taskqueue_swi, &sc->link_task); ether_ifdetach(ifp); } if (sc->tq != NULL) { taskqueue_drain(sc->tq, &sc->int_task); taskqueue_free(sc->tq); sc->tq = NULL; } if (sc->miibus != NULL) { device_delete_child(dev, sc->miibus); sc->miibus = NULL; } bus_generic_detach(sc->dev); ae_dma_free(sc); if (sc->intrhand != NULL) { bus_teardown_intr(dev, sc->irq[0], sc->intrhand); sc->intrhand = NULL; } if (ifp != NULL) { if_free(ifp); sc->ifp = NULL; } if (sc->spec_irq != NULL) bus_release_resources(dev, sc->spec_irq, sc->irq); if (sc->spec_mem != NULL) bus_release_resources(dev, sc->spec_mem, sc->mem); if ((sc->flags & AE_FLAG_MSI) != 0) pci_release_msi(dev); mtx_destroy(&sc->mtx); return (0); } static int ae_miibus_readreg(device_t dev, int phy, int reg) { ae_softc_t *sc; uint32_t val; int i; sc = device_get_softc(dev); KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); /* * Locking is done in upper layers. */ val = ((reg << AE_MDIO_REGADDR_SHIFT) & AE_MDIO_REGADDR_MASK) | AE_MDIO_START | AE_MDIO_READ | AE_MDIO_SUP_PREAMBLE | ((AE_MDIO_CLK_25_4 << AE_MDIO_CLK_SHIFT) & AE_MDIO_CLK_MASK); AE_WRITE_4(sc, AE_MDIO_REG, val); /* * Wait for operation to complete. 
*/ for (i = 0; i < AE_MDIO_TIMEOUT; i++) { DELAY(2); val = AE_READ_4(sc, AE_MDIO_REG); if ((val & (AE_MDIO_START | AE_MDIO_BUSY)) == 0) break; } if (i == AE_MDIO_TIMEOUT) { device_printf(sc->dev, "phy read timeout: %d.\n", reg); return (0); } return ((val << AE_MDIO_DATA_SHIFT) & AE_MDIO_DATA_MASK); } static int ae_miibus_writereg(device_t dev, int phy, int reg, int val) { ae_softc_t *sc; uint32_t aereg; int i; sc = device_get_softc(dev); KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); /* * Locking is done in upper layers. */ aereg = ((reg << AE_MDIO_REGADDR_SHIFT) & AE_MDIO_REGADDR_MASK) | AE_MDIO_START | AE_MDIO_SUP_PREAMBLE | ((AE_MDIO_CLK_25_4 << AE_MDIO_CLK_SHIFT) & AE_MDIO_CLK_MASK) | ((val << AE_MDIO_DATA_SHIFT) & AE_MDIO_DATA_MASK); AE_WRITE_4(sc, AE_MDIO_REG, aereg); /* * Wait for operation to complete. */ for (i = 0; i < AE_MDIO_TIMEOUT; i++) { DELAY(2); aereg = AE_READ_4(sc, AE_MDIO_REG); if ((aereg & (AE_MDIO_START | AE_MDIO_BUSY)) == 0) break; } if (i == AE_MDIO_TIMEOUT) { device_printf(sc->dev, "phy write timeout: %d.\n", reg); } return (0); } static void ae_miibus_statchg(device_t dev) { ae_softc_t *sc; sc = device_get_softc(dev); taskqueue_enqueue(taskqueue_swi, &sc->link_task); } static void ae_mediastatus(struct ifnet *ifp, struct ifmediareq *ifmr) { ae_softc_t *sc; struct mii_data *mii; sc = ifp->if_softc; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); AE_LOCK(sc); mii = device_get_softc(sc->miibus); mii_pollstat(mii); ifmr->ifm_status = mii->mii_media_status; ifmr->ifm_active = mii->mii_media_active; AE_UNLOCK(sc); } static int ae_mediachange(struct ifnet *ifp) { ae_softc_t *sc; struct mii_data *mii; struct mii_softc *mii_sc; int error; /* XXX: check IFF_UP ?? */ sc = ifp->if_softc; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); AE_LOCK(sc); mii = device_get_softc(sc->miibus); LIST_FOREACH(mii_sc, &mii->mii_phys, mii_list) PHY_RESET(mii_sc); error = mii_mediachg(mii); AE_UNLOCK(sc); return (error); } static int ae_check_eeprom_present(ae_softc_t *sc, int *vpdc) { int error; uint32_t val; KASSERT(vpdc != NULL, ("[ae, %d]: vpdc is NULL!\n", __LINE__)); /* * Not sure why, but Linux does this. */ val = AE_READ_4(sc, AE_SPICTL_REG); if ((val & AE_SPICTL_VPD_EN) != 0) { val &= ~AE_SPICTL_VPD_EN; AE_WRITE_4(sc, AE_SPICTL_REG, val); } error = pci_find_cap(sc->dev, PCIY_VPD, vpdc); return (error); } static int ae_vpd_read_word(ae_softc_t *sc, int reg, uint32_t *word) { uint32_t val; int i; AE_WRITE_4(sc, AE_VPD_DATA_REG, 0); /* Clear register value. */ /* * VPD registers start at offset 0x100. Read them. */ val = 0x100 + reg * 4; AE_WRITE_4(sc, AE_VPD_CAP_REG, (val << AE_VPD_CAP_ADDR_SHIFT) & AE_VPD_CAP_ADDR_MASK); for (i = 0; i < AE_VPD_TIMEOUT; i++) { DELAY(2000); val = AE_READ_4(sc, AE_VPD_CAP_REG); if ((val & AE_VPD_CAP_DONE) != 0) break; } if (i == AE_VPD_TIMEOUT) { device_printf(sc->dev, "timeout reading VPD register %d.\n", reg); return (ETIMEDOUT); } *word = AE_READ_4(sc, AE_VPD_DATA_REG); return (0); } static int ae_get_vpd_eaddr(ae_softc_t *sc, uint32_t *eaddr) { uint32_t word, reg, val; int error; int found; int vpdc; int i; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); KASSERT(eaddr != NULL, ("[ae, %d]: eaddr is NULL", __LINE__)); /* * Check for EEPROM. */ error = ae_check_eeprom_present(sc, &vpdc); if (error != 0) return (error); /* * Read the VPD configuration space. * Each register is prefixed with signature, * so we can check if it is valid. 
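When both the VPD lookup and the register lookup fail, ae_retrieve_address() further down falls back to a random locally administered address: the U/L bit (0x02) set and the multicast bit clear in the first octet, a fixed OUI, and three random octets. A standalone sketch of that construction, assuming FreeBSD's arc4random(3):

#include <stdint.h>
#include <stdlib.h>	/* arc4random() on FreeBSD */

static void
make_random_eaddr(uint8_t e[6])
{
	uint32_t r = arc4random();

	e[0] = 0x02;		/* locally administered, unicast */
	e[1] = 0x1f;		/* remainder of the ASUSTek OUI */
	e[2] = 0xc6;
	e[3] = (r >> 16) & 0xff;
	e[4] = (r >> 8) & 0xff;
	e[5] = r & 0xff;
}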
for (i = 0, found = 0; i < AE_VPD_NREGS; i++) { error = ae_vpd_read_word(sc, i, &word); if (error != 0) break; /* * Check signature. */ if ((word & AE_VPD_SIG_MASK) != AE_VPD_SIG) break; reg = word >> AE_VPD_REG_SHIFT; i++; /* Move to the next word. */ if (reg != AE_EADDR0_REG && reg != AE_EADDR1_REG) continue; error = ae_vpd_read_word(sc, i, &val); if (error != 0) break; if (reg == AE_EADDR0_REG) eaddr[0] = val; else eaddr[1] = val; found++; } if (found < 2) return (ENOENT); eaddr[1] &= 0xffff; /* Only last 2 bytes are used. */ if (AE_CHECK_EADDR_VALID(eaddr) != 0) { if (bootverbose) device_printf(sc->dev, "VPD ethernet address registers are invalid.\n"); return (EINVAL); } return (0); } static int ae_get_reg_eaddr(ae_softc_t *sc, uint32_t *eaddr) { /* * BIOS is supposed to set this. */ eaddr[0] = AE_READ_4(sc, AE_EADDR0_REG); eaddr[1] = AE_READ_4(sc, AE_EADDR1_REG); eaddr[1] &= 0xffff; /* Only last 2 bytes are used. */ if (AE_CHECK_EADDR_VALID(eaddr) != 0) { if (bootverbose) device_printf(sc->dev, "Ethernet address registers are invalid.\n"); return (EINVAL); } return (0); } static void ae_retrieve_address(ae_softc_t *sc) { uint32_t eaddr[2] = {0, 0}; int error; /* * Check for EEPROM. */ error = ae_get_vpd_eaddr(sc, eaddr); if (error != 0) error = ae_get_reg_eaddr(sc, eaddr); if (error != 0) { if (bootverbose) device_printf(sc->dev, "Generating random ethernet address.\n"); eaddr[0] = arc4random(); /* * Set OUI to ASUSTek COMPUTER INC. */ sc->eaddr[0] = 0x02; /* U/L bit set. */ sc->eaddr[1] = 0x1f; sc->eaddr[2] = 0xc6; sc->eaddr[3] = (eaddr[0] >> 16) & 0xff; sc->eaddr[4] = (eaddr[0] >> 8) & 0xff; sc->eaddr[5] = (eaddr[0] >> 0) & 0xff; } else { sc->eaddr[0] = (eaddr[1] >> 8) & 0xff; sc->eaddr[1] = (eaddr[1] >> 0) & 0xff; sc->eaddr[2] = (eaddr[0] >> 24) & 0xff; sc->eaddr[3] = (eaddr[0] >> 16) & 0xff; sc->eaddr[4] = (eaddr[0] >> 8) & 0xff; sc->eaddr[5] = (eaddr[0] >> 0) & 0xff; } } static void ae_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { bus_addr_t *addr = arg; if (error != 0) return; KASSERT(nsegs == 1, ("[ae, %d]: %d segments instead of 1!", __LINE__, nsegs)); *addr = segs[0].ds_addr; } static int ae_alloc_rings(ae_softc_t *sc) { bus_addr_t busaddr; int error; /* * Create parent DMA tag. */ error = bus_dma_tag_create(bus_get_dma_tag(sc->dev), 1, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, BUS_SPACE_MAXSIZE_32BIT, 0, BUS_SPACE_MAXSIZE_32BIT, 0, NULL, NULL, &sc->dma_parent_tag); if (error != 0) { device_printf(sc->dev, "could not create parent DMA tag.\n"); return (error); } /* * Create DMA tag for TxD. */ error = bus_dma_tag_create(sc->dma_parent_tag, 8, 0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR, NULL, NULL, AE_TXD_BUFSIZE_DEFAULT, 1, AE_TXD_BUFSIZE_DEFAULT, 0, NULL, NULL, &sc->dma_txd_tag); if (error != 0) { device_printf(sc->dev, "could not create TxD DMA tag.\n"); return (error); } /* * Create DMA tag for TxS. */ error = bus_dma_tag_create(sc->dma_parent_tag, 8, 0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR, NULL, NULL, AE_TXS_COUNT_DEFAULT * 4, 1, AE_TXS_COUNT_DEFAULT * 4, 0, NULL, NULL, &sc->dma_txs_tag); if (error != 0) { device_printf(sc->dev, "could not create TxS DMA tag.\n"); return (error); } /* * Create DMA tag for RxD.
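Each ring below follows the same bus_dma(9) idiom: allocate coherent memory against a tag, then load the map and capture the bus address through a callback, since bus_dmamap_load() only reports segments that way. A kernel-side sketch (not standalone; error handling trimmed) mirroring ae_dmamap_cb() above:

/* Store the single segment's bus address for the caller. */
static void
dmamap_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
{
	if (error == 0 && nsegs == 1)
		*(bus_addr_t *)arg = segs[0].ds_addr;
}

/* ... inside an attach routine, with tag/map/size in scope: */
	void *vaddr;
	bus_addr_t busaddr = 0;

	bus_dmamem_alloc(tag, &vaddr,
	    BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, &map);
	bus_dmamap_load(tag, map, vaddr, size, dmamap_cb, &busaddr,
	    BUS_DMA_NOWAIT);
	/* busaddr now holds the ring's bus address, vaddr the KVA. */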
*/ error = bus_dma_tag_create(sc->dma_parent_tag, 128, 0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR, NULL, NULL, AE_RXD_COUNT_DEFAULT * 1536 + AE_RXD_PADDING, 1, AE_RXD_COUNT_DEFAULT * 1536 + AE_RXD_PADDING, 0, NULL, NULL, &sc->dma_rxd_tag); if (error != 0) { device_printf(sc->dev, "could not create RxD DMA tag.\n"); return (error); } /* * Allocate TxD DMA memory. */ error = bus_dmamem_alloc(sc->dma_txd_tag, (void **)&sc->txd_base, BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, &sc->dma_txd_map); if (error != 0) { device_printf(sc->dev, "could not allocate DMA memory for TxD ring.\n"); return (error); } error = bus_dmamap_load(sc->dma_txd_tag, sc->dma_txd_map, sc->txd_base, AE_TXD_BUFSIZE_DEFAULT, ae_dmamap_cb, &busaddr, BUS_DMA_NOWAIT); if (error != 0 || busaddr == 0) { device_printf(sc->dev, "could not load DMA map for TxD ring.\n"); return (error); } sc->dma_txd_busaddr = busaddr; /* * Allocate TxS DMA memory. */ error = bus_dmamem_alloc(sc->dma_txs_tag, (void **)&sc->txs_base, BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, &sc->dma_txs_map); if (error != 0) { device_printf(sc->dev, "could not allocate DMA memory for TxS ring.\n"); return (error); } error = bus_dmamap_load(sc->dma_txs_tag, sc->dma_txs_map, sc->txs_base, AE_TXS_COUNT_DEFAULT * 4, ae_dmamap_cb, &busaddr, BUS_DMA_NOWAIT); if (error != 0 || busaddr == 0) { device_printf(sc->dev, "could not load DMA map for TxS ring.\n"); return (error); } sc->dma_txs_busaddr = busaddr; /* * Allocate RxD DMA memory. */ error = bus_dmamem_alloc(sc->dma_rxd_tag, (void **)&sc->rxd_base_dma, BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, &sc->dma_rxd_map); if (error != 0) { device_printf(sc->dev, "could not allocate DMA memory for RxD ring.\n"); return (error); } error = bus_dmamap_load(sc->dma_rxd_tag, sc->dma_rxd_map, sc->rxd_base_dma, AE_RXD_COUNT_DEFAULT * 1536 + AE_RXD_PADDING, ae_dmamap_cb, &busaddr, BUS_DMA_NOWAIT); if (error != 0 || busaddr == 0) { device_printf(sc->dev, "could not load DMA map for RxD ring.\n"); return (error); } sc->dma_rxd_busaddr = busaddr + AE_RXD_PADDING; sc->rxd_base = (ae_rxd_t *)(sc->rxd_base_dma + AE_RXD_PADDING); return (0); } static void ae_dma_free(ae_softc_t *sc) { if (sc->dma_txd_tag != NULL) { if (sc->dma_txd_busaddr != 0) bus_dmamap_unload(sc->dma_txd_tag, sc->dma_txd_map); if (sc->txd_base != NULL) bus_dmamem_free(sc->dma_txd_tag, sc->txd_base, sc->dma_txd_map); bus_dma_tag_destroy(sc->dma_txd_tag); sc->dma_txd_tag = NULL; sc->txd_base = NULL; sc->dma_txd_busaddr = 0; } if (sc->dma_txs_tag != NULL) { if (sc->dma_txs_busaddr != 0) bus_dmamap_unload(sc->dma_txs_tag, sc->dma_txs_map); if (sc->txs_base != NULL) bus_dmamem_free(sc->dma_txs_tag, sc->txs_base, sc->dma_txs_map); bus_dma_tag_destroy(sc->dma_txs_tag); sc->dma_txs_tag = NULL; sc->txs_base = NULL; sc->dma_txs_busaddr = 0; } if (sc->dma_rxd_tag != NULL) { if (sc->dma_rxd_busaddr != 0) bus_dmamap_unload(sc->dma_rxd_tag, sc->dma_rxd_map); if (sc->rxd_base_dma != NULL) bus_dmamem_free(sc->dma_rxd_tag, sc->rxd_base_dma, sc->dma_rxd_map); bus_dma_tag_destroy(sc->dma_rxd_tag); sc->dma_rxd_tag = NULL; sc->rxd_base_dma = NULL; sc->dma_rxd_busaddr = 0; } if (sc->dma_parent_tag != NULL) { bus_dma_tag_destroy(sc->dma_parent_tag); sc->dma_parent_tag = NULL; } } static int ae_shutdown(device_t dev) { ae_softc_t *sc; int error; sc = device_get_softc(dev); KASSERT(sc != NULL, ("[ae: %d]: sc is NULL", __LINE__)); error = ae_suspend(dev); AE_LOCK(sc); ae_powersave_enable(sc); AE_UNLOCK(sc); return (error); } static void ae_powersave_disable(ae_softc_t *sc) {
uint32_t val; AE_LOCK_ASSERT(sc); AE_PHY_WRITE(sc, AE_PHY_DBG_ADDR, 0); val = AE_PHY_READ(sc, AE_PHY_DBG_DATA); if (val & AE_PHY_DBG_POWERSAVE) { val &= ~AE_PHY_DBG_POWERSAVE; AE_PHY_WRITE(sc, AE_PHY_DBG_DATA, val); DELAY(1000); } } static void ae_powersave_enable(ae_softc_t *sc) { uint32_t val; AE_LOCK_ASSERT(sc); /* * XXX magic numbers. */ AE_PHY_WRITE(sc, AE_PHY_DBG_ADDR, 0); val = AE_PHY_READ(sc, AE_PHY_DBG_DATA); AE_PHY_WRITE(sc, AE_PHY_DBG_ADDR, val | 0x1000); AE_PHY_WRITE(sc, AE_PHY_DBG_ADDR, 2); AE_PHY_WRITE(sc, AE_PHY_DBG_DATA, 0x3000); AE_PHY_WRITE(sc, AE_PHY_DBG_ADDR, 3); AE_PHY_WRITE(sc, AE_PHY_DBG_DATA, 0); } static void ae_pm_init(ae_softc_t *sc) { struct ifnet *ifp; uint32_t val; uint16_t pmstat; struct mii_data *mii; int pmc; AE_LOCK_ASSERT(sc); ifp = sc->ifp; if ((sc->flags & AE_FLAG_PMG) == 0) { /* Disable WOL entirely. */ AE_WRITE_4(sc, AE_WOL_REG, 0); return; } /* * Configure WOL if enabled. */ if ((ifp->if_capenable & IFCAP_WOL) != 0) { mii = device_get_softc(sc->miibus); mii_pollstat(mii); if ((mii->mii_media_status & IFM_AVALID) != 0 && (mii->mii_media_status & IFM_ACTIVE) != 0) { AE_WRITE_4(sc, AE_WOL_REG, AE_WOL_MAGIC | \ AE_WOL_MAGIC_PME); /* * Configure MAC. */ val = AE_MAC_RX_EN | AE_MAC_CLK_PHY | \ AE_MAC_TX_CRC_EN | AE_MAC_TX_AUTOPAD | \ ((AE_HALFBUF_DEFAULT << AE_HALFBUF_SHIFT) & \ AE_HALFBUF_MASK) | \ ((AE_MAC_PREAMBLE_DEFAULT << \ AE_MAC_PREAMBLE_SHIFT) & AE_MAC_PREAMBLE_MASK) | \ AE_MAC_BCAST_EN | AE_MAC_MCAST_EN; if ((IFM_OPTIONS(mii->mii_media_active) & \ IFM_FDX) != 0) val |= AE_MAC_FULL_DUPLEX; AE_WRITE_4(sc, AE_MAC_REG, val); } else { /* No link. */ AE_WRITE_4(sc, AE_WOL_REG, AE_WOL_LNKCHG | \ AE_WOL_LNKCHG_PME); AE_WRITE_4(sc, AE_MAC_REG, 0); } } else { ae_powersave_enable(sc); } /* * PCIE hacks. Magic numbers. */ val = AE_READ_4(sc, AE_PCIE_PHYMISC_REG); val |= AE_PCIE_PHYMISC_FORCE_RCV_DET; AE_WRITE_4(sc, AE_PCIE_PHYMISC_REG, val); val = AE_READ_4(sc, AE_PCIE_DLL_TX_CTRL_REG); val |= AE_PCIE_DLL_TX_CTRL_SEL_NOR_CLK; AE_WRITE_4(sc, AE_PCIE_DLL_TX_CTRL_REG, val); /* * Configure PME. */ if (pci_find_cap(sc->dev, PCIY_PMG, &pmc) == 0) { pmstat = pci_read_config(sc->dev, pmc + PCIR_POWER_STATUS, 2); pmstat &= ~(PCIM_PSTAT_PME | PCIM_PSTAT_PMEENABLE); if ((ifp->if_capenable & IFCAP_WOL) != 0) pmstat |= PCIM_PSTAT_PME | PCIM_PSTAT_PMEENABLE; pci_write_config(sc->dev, pmc + PCIR_POWER_STATUS, pmstat, 2); } } static int ae_suspend(device_t dev) { ae_softc_t *sc; sc = device_get_softc(dev); AE_LOCK(sc); ae_stop(sc); ae_pm_init(sc); AE_UNLOCK(sc); return (0); } static int ae_resume(device_t dev) { ae_softc_t *sc; sc = device_get_softc(dev); KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); AE_LOCK(sc); AE_READ_4(sc, AE_WOL_REG); /* Clear WOL status. */ if ((sc->ifp->if_flags & IFF_UP) != 0) ae_init_locked(sc); AE_UNLOCK(sc); return (0); } static unsigned int ae_tx_avail_size(ae_softc_t *sc) { unsigned int avail; if (sc->txd_cur >= sc->txd_ack) avail = AE_TXD_BUFSIZE_DEFAULT - (sc->txd_cur - sc->txd_ack); else avail = sc->txd_ack - sc->txd_cur; return (avail); } static int ae_encap(ae_softc_t *sc, struct mbuf **m_head) { struct mbuf *m0; ae_txd_t *hdr; unsigned int to_end; uint16_t len; AE_LOCK_ASSERT(sc); m0 = *m_head; len = m0->m_pkthdr.len; if ((sc->flags & AE_FLAG_TXAVAIL) == 0 || len + sizeof(ae_txd_t) + 3 > ae_tx_avail_size(sc)) { #ifdef AE_DEBUG if_printf(sc->ifp, "No free Tx available.\n"); #endif return ENOBUFS; } hdr = (ae_txd_t *)(sc->txd_base + sc->txd_cur); bzero(hdr, sizeof(*hdr)); /* Skip header size. 
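ae_tx_avail_size() above and ae_encap() here treat the Tx area as a byte-granular circular buffer: free space is the size minus the cur-to-ack distance, and the producer index always advances rounded up to a 4-byte boundary. A standalone model of that arithmetic with a self-check (constants stand in for the driver's defaults):

#include <assert.h>

#define RING_SIZE	8192	/* stands in for AE_TXD_BUFSIZE_DEFAULT */

static unsigned int
ring_avail(unsigned int cur, unsigned int ack)
{
	if (cur >= ack)
		return (RING_SIZE - (cur - ack));
	return (ack - cur);
}

static unsigned int
ring_advance(unsigned int cur, unsigned int len)
{
	/* Round up to a 4-byte boundary and wrap. */
	return (((cur + len + 3) & ~3u) % RING_SIZE);
}

int
main(void)
{
	assert(ring_avail(0, 0) == RING_SIZE);		/* empty ring */
	assert(ring_avail(100, 40) == RING_SIZE - 60);
	assert(ring_advance(8190, 10) == 8);		/* wraps, aligns */
	return (0);
}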
*/ sc->txd_cur = (sc->txd_cur + sizeof(ae_txd_t)) % AE_TXD_BUFSIZE_DEFAULT; /* Space available to the end of the ring */ to_end = AE_TXD_BUFSIZE_DEFAULT - sc->txd_cur; if (to_end >= len) { m_copydata(m0, 0, len, (caddr_t)(sc->txd_base + sc->txd_cur)); } else { m_copydata(m0, 0, to_end, (caddr_t)(sc->txd_base + sc->txd_cur)); m_copydata(m0, to_end, len - to_end, (caddr_t)sc->txd_base); } /* * Set TxD flags and parameters. */ if ((m0->m_flags & M_VLANTAG) != 0) { hdr->vlan = htole16(AE_TXD_VLAN(m0->m_pkthdr.ether_vtag)); hdr->len = htole16(len | AE_TXD_INSERT_VTAG); } else { hdr->len = htole16(len); } /* * Set current TxD position and round up to a 4-byte boundary. */ sc->txd_cur = ((sc->txd_cur + len + 3) & ~3) % AE_TXD_BUFSIZE_DEFAULT; if (sc->txd_cur == sc->txd_ack) sc->flags &= ~AE_FLAG_TXAVAIL; #ifdef AE_DEBUG if_printf(sc->ifp, "New txd_cur = %d.\n", sc->txd_cur); #endif /* * Update TxS position and check if there are empty TxS available. */ sc->txs_base[sc->txs_cur].flags &= ~htole16(AE_TXS_UPDATE); sc->txs_cur = (sc->txs_cur + 1) % AE_TXS_COUNT_DEFAULT; if (sc->txs_cur == sc->txs_ack) sc->flags &= ~AE_FLAG_TXAVAIL; /* * Synchronize DMA memory. */ bus_dmamap_sync(sc->dma_txd_tag, sc->dma_txd_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->dma_txs_tag, sc->dma_txs_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); return (0); } static void ae_start(struct ifnet *ifp) { ae_softc_t *sc; sc = ifp->if_softc; AE_LOCK(sc); ae_start_locked(ifp); AE_UNLOCK(sc); } static void ae_start_locked(struct ifnet *ifp) { ae_softc_t *sc; unsigned int count; struct mbuf *m0; int error; sc = ifp->if_softc; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); AE_LOCK_ASSERT(sc); #ifdef AE_DEBUG if_printf(ifp, "Start called.\n"); #endif if ((ifp->if_drv_flags & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING || (sc->flags & AE_FLAG_LINK) == 0) return; count = 0; while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) { IFQ_DRV_DEQUEUE(&ifp->if_snd, m0); if (m0 == NULL) break; /* Nothing to do. */ error = ae_encap(sc, &m0); if (error != 0) { if (m0 != NULL) { IFQ_DRV_PREPEND(&ifp->if_snd, m0); ifp->if_drv_flags |= IFF_DRV_OACTIVE; #ifdef AE_DEBUG if_printf(ifp, "Setting OACTIVE.\n"); #endif } break; } count++; sc->tx_inproc++; /* Bounce a copy of the frame to BPF. */ ETHER_BPF_MTAP(ifp, m0); m_freem(m0); } if (count > 0) { /* Something was dequeued. */ AE_WRITE_2(sc, AE_MB_TXD_IDX_REG, sc->txd_cur / 4); sc->wd_timer = AE_TX_TIMEOUT; /* Load watchdog. */ #ifdef AE_DEBUG if_printf(ifp, "%d packets dequeued.\n", count); if_printf(ifp, "Tx pos now is %d.\n", sc->txd_cur); #endif } } static void ae_link_task(void *arg, int pending) { ae_softc_t *sc; struct mii_data *mii; struct ifnet *ifp; uint32_t val; sc = (ae_softc_t *)arg; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); AE_LOCK(sc); ifp = sc->ifp; mii = device_get_softc(sc->miibus); if (mii == NULL || ifp == NULL || (ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { AE_UNLOCK(sc); /* XXX: could happen? */ return; } sc->flags &= ~AE_FLAG_LINK; if ((mii->mii_media_status & (IFM_AVALID | IFM_ACTIVE)) == (IFM_AVALID | IFM_ACTIVE)) { switch(IFM_SUBTYPE(mii->mii_media_active)) { case IFM_10_T: case IFM_100_TX: sc->flags |= AE_FLAG_LINK; break; default: break; } } /* * Stop Rx/Tx MACs. */ ae_stop_rxmac(sc); ae_stop_txmac(sc); if ((sc->flags & AE_FLAG_LINK) != 0) { ae_mac_config(sc); /* * Restart DMA engines. */ AE_WRITE_1(sc, AE_DMAREAD_REG, AE_DMAREAD_EN); AE_WRITE_1(sc, AE_DMAWRITE_REG, AE_DMAWRITE_EN); /* * Enable Rx and Tx MACs. 
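The split m_copydata() at the top of ae_encap() above handles the wraparound case: when the payload would run past the end of the ring, it is copied in two pieces, with the tail landing back at the start of the buffer. A standalone sketch of that copy with a small test:

#include <assert.h>
#include <string.h>

#define RING_SIZE	16

static void
ring_copy(char *ring, unsigned int cur, const char *src, unsigned int len)
{
	unsigned int to_end = RING_SIZE - cur;	/* space to end of ring */

	if (to_end >= len) {
		memcpy(ring + cur, src, len);
	} else {
		memcpy(ring + cur, src, to_end);
		memcpy(ring, src + to_end, len - to_end);
	}
}

int
main(void)
{
	char ring[RING_SIZE];

	ring_copy(ring, 12, "abcdefgh", 8);	/* 4 bytes at end, 4 at start */
	assert(memcmp(ring + 12, "abcd", 4) == 0);
	assert(memcmp(ring, "efgh", 4) == 0);
	return (0);
}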
*/ val = AE_READ_4(sc, AE_MAC_REG); val |= AE_MAC_TX_EN | AE_MAC_RX_EN; AE_WRITE_4(sc, AE_MAC_REG, val); } AE_UNLOCK(sc); } static void ae_stop_rxmac(ae_softc_t *sc) { uint32_t val; int i; AE_LOCK_ASSERT(sc); /* * Stop Rx MAC engine. */ val = AE_READ_4(sc, AE_MAC_REG); if ((val & AE_MAC_RX_EN) != 0) { val &= ~AE_MAC_RX_EN; AE_WRITE_4(sc, AE_MAC_REG, val); } /* * Stop Rx DMA engine. */ if (AE_READ_1(sc, AE_DMAWRITE_REG) == AE_DMAWRITE_EN) AE_WRITE_1(sc, AE_DMAWRITE_REG, 0); /* * Wait for IDLE state. */ for (i = 0; i < AE_IDLE_TIMEOUT; i++) { val = AE_READ_4(sc, AE_IDLE_REG); if ((val & (AE_IDLE_RXMAC | AE_IDLE_DMAWRITE)) == 0) break; DELAY(100); } if (i == AE_IDLE_TIMEOUT) device_printf(sc->dev, "timed out while stopping Rx MAC.\n"); } static void ae_stop_txmac(ae_softc_t *sc) { uint32_t val; int i; AE_LOCK_ASSERT(sc); /* * Stop Tx MAC engine. */ val = AE_READ_4(sc, AE_MAC_REG); if ((val & AE_MAC_TX_EN) != 0) { val &= ~AE_MAC_TX_EN; AE_WRITE_4(sc, AE_MAC_REG, val); } /* * Stop Tx DMA engine. */ if (AE_READ_1(sc, AE_DMAREAD_REG) == AE_DMAREAD_EN) AE_WRITE_1(sc, AE_DMAREAD_REG, 0); /* * Wait for IDLE state. */ for (i = 0; i < AE_IDLE_TIMEOUT; i++) { val = AE_READ_4(sc, AE_IDLE_REG); if ((val & (AE_IDLE_TXMAC | AE_IDLE_DMAREAD)) == 0) break; DELAY(100); } if (i == AE_IDLE_TIMEOUT) device_printf(sc->dev, "timed out while stopping Tx MAC.\n"); } static void ae_mac_config(ae_softc_t *sc) { struct mii_data *mii; uint32_t val; AE_LOCK_ASSERT(sc); mii = device_get_softc(sc->miibus); val = AE_READ_4(sc, AE_MAC_REG); val &= ~AE_MAC_FULL_DUPLEX; /* XXX disable AE_MAC_TX_FLOW_EN? */ if ((IFM_OPTIONS(mii->mii_media_active) & IFM_FDX) != 0) val |= AE_MAC_FULL_DUPLEX; AE_WRITE_4(sc, AE_MAC_REG, val); } static int ae_intr(void *arg) { ae_softc_t *sc; uint32_t val; sc = (ae_softc_t *)arg; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL", __LINE__)); val = AE_READ_4(sc, AE_ISR_REG); if (val == 0 || (val & AE_IMR_DEFAULT) == 0) return (FILTER_STRAY); /* Disable interrupts. */ AE_WRITE_4(sc, AE_ISR_REG, AE_ISR_DISABLE); /* Schedule interrupt processing. */ taskqueue_enqueue(sc->tq, &sc->int_task); return (FILTER_HANDLED); } static void ae_int_task(void *arg, int pending) { ae_softc_t *sc; struct ifnet *ifp; uint32_t val; sc = (ae_softc_t *)arg; AE_LOCK(sc); ifp = sc->ifp; val = AE_READ_4(sc, AE_ISR_REG); /* Read interrupt status. */ if (val == 0) { AE_UNLOCK(sc); return; } /* * Clear interrupts and disable them. */ AE_WRITE_4(sc, AE_ISR_REG, val | AE_ISR_DISABLE); #ifdef AE_DEBUG if_printf(ifp, "Interrupt received: 0x%08x\n", val); #endif if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { if ((val & (AE_ISR_DMAR_TIMEOUT | AE_ISR_DMAW_TIMEOUT | AE_ISR_PHY_LINKDOWN)) != 0) { ifp->if_drv_flags &= ~IFF_DRV_RUNNING; ae_init_locked(sc); AE_UNLOCK(sc); return; } if ((val & AE_ISR_TX_EVENT) != 0) ae_tx_intr(sc); if ((val & AE_ISR_RX_EVENT) != 0) ae_rx_intr(sc); /* * Re-enable interrupts. */ AE_WRITE_4(sc, AE_ISR_REG, 0); if ((sc->flags & AE_FLAG_TXAVAIL) != 0) { if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) ae_start_locked(ifp); } } AE_UNLOCK(sc); } static void ae_tx_intr(ae_softc_t *sc) { struct ifnet *ifp; ae_txd_t *txd; ae_txs_t *txs; uint16_t flags; AE_LOCK_ASSERT(sc); ifp = sc->ifp; #ifdef AE_DEBUG if_printf(ifp, "Tx interrupt occurred.\n"); #endif /* * Synchronize DMA buffers.
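ae_intr() above is the filter half of the usual filter/taskqueue split: it runs in primary interrupt context, does only the cheap acknowledge-and-mask work, and defers everything that needs the softc lock to ae_int_task(). A kernel-side sketch of the shape (not standalone; the XX_* status macros are hypothetical placeholders):

static int
xx_filter(void *arg)
{
	struct xx_softc *sc = arg;

	if (!XX_INTR_PENDING(sc))	/* hypothetical status check */
		return (FILTER_STRAY);	/* not ours */
	XX_INTR_DISABLE(sc);		/* quiesce the source */
	taskqueue_enqueue(sc->tq, &sc->int_task);
	return (FILTER_HANDLED);	/* task does the real work */
}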
*/ bus_dmamap_sync(sc->dma_txd_tag, sc->dma_txd_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); bus_dmamap_sync(sc->dma_txs_tag, sc->dma_txs_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); for (;;) { txs = sc->txs_base + sc->txs_ack; flags = le16toh(txs->flags); if ((flags & AE_TXS_UPDATE) == 0) break; txs->flags = htole16(flags & ~AE_TXS_UPDATE); /* Update stats. */ ae_update_stats_tx(flags, &sc->stats); /* * Update TxS position. */ sc->txs_ack = (sc->txs_ack + 1) % AE_TXS_COUNT_DEFAULT; sc->flags |= AE_FLAG_TXAVAIL; txd = (ae_txd_t *)(sc->txd_base + sc->txd_ack); if (txs->len != txd->len) device_printf(sc->dev, "Size mismatch: TxS:%d TxD:%d\n", le16toh(txs->len), le16toh(txd->len)); /* * Move txd ack and align on 4-byte boundary. */ sc->txd_ack = ((sc->txd_ack + le16toh(txd->len) + sizeof(ae_txs_t) + 3) & ~3) % AE_TXD_BUFSIZE_DEFAULT; if ((flags & AE_TXS_SUCCESS) != 0) if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); else if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); sc->tx_inproc--; } if ((sc->flags & AE_FLAG_TXAVAIL) != 0) ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; if (sc->tx_inproc < 0) { if_printf(ifp, "Received stray Tx interrupt(s).\n"); sc->tx_inproc = 0; } if (sc->tx_inproc == 0) sc->wd_timer = 0; /* Unarm watchdog. */ /* * Synchronize DMA buffers. */ bus_dmamap_sync(sc->dma_txd_tag, sc->dma_txd_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->dma_txs_tag, sc->dma_txs_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); } static void ae_rxeof(ae_softc_t *sc, ae_rxd_t *rxd) { struct ifnet *ifp; struct mbuf *m; unsigned int size; uint16_t flags; AE_LOCK_ASSERT(sc); ifp = sc->ifp; flags = le16toh(rxd->flags); #ifdef AE_DEBUG if_printf(ifp, "Rx interrupt occurred.\n"); #endif size = le16toh(rxd->len) - ETHER_CRC_LEN; if (size < (ETHER_MIN_LEN - ETHER_CRC_LEN - ETHER_VLAN_ENCAP_LEN)) { if_printf(ifp, "Runt frame received.\n"); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); return; } m = m_devget(&rxd->data[0], size, ETHER_ALIGN, ifp, NULL); if (m == NULL) { if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); return; } if ((ifp->if_capenable & IFCAP_VLAN_HWTAGGING) != 0 && (flags & AE_RXD_HAS_VLAN) != 0) { m->m_pkthdr.ether_vtag = AE_RXD_VLAN(le16toh(rxd->vlan)); m->m_flags |= M_VLANTAG; } if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); /* * Pass it through. */ AE_UNLOCK(sc); (*ifp->if_input)(ifp, m); AE_LOCK(sc); } static void ae_rx_intr(ae_softc_t *sc) { ae_rxd_t *rxd; struct ifnet *ifp; uint16_t flags; int count; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL!", __LINE__)); AE_LOCK_ASSERT(sc); ifp = sc->ifp; /* * Synchronize DMA buffers. */ bus_dmamap_sync(sc->dma_rxd_tag, sc->dma_rxd_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); for (count = 0;; count++) { rxd = (ae_rxd_t *)(sc->rxd_base + sc->rxd_cur); flags = le16toh(rxd->flags); if ((flags & AE_RXD_UPDATE) == 0) break; rxd->flags = htole16(flags & ~AE_RXD_UPDATE); /* Update stats. */ ae_update_stats_rx(flags, &sc->stats); /* * Update position index. */ sc->rxd_cur = (sc->rxd_cur + 1) % AE_RXD_COUNT_DEFAULT; if ((flags & AE_RXD_SUCCESS) != 0) ae_rxeof(sc, rxd); else if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); } if (count > 0) { bus_dmamap_sync(sc->dma_rxd_tag, sc->dma_rxd_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* * Update Rx index. */ AE_WRITE_2(sc, AE_MB_RXD_IDX_REG, sc->rxd_cur); } } static void ae_watchdog(ae_softc_t *sc) { struct ifnet *ifp; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL!", __LINE__)); AE_LOCK_ASSERT(sc); ifp = sc->ifp; if (sc->wd_timer == 0 || --sc->wd_timer != 0) return; /* Nothing to do.
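The multicast filter in ae_rxfilter() below hashes each address with ether_crc32_be(9) and spreads the result over a 64-bit table in two 32-bit registers: the CRC's top bit selects MHT0 or MHT1, and the next five bits select the bit inside that register. A standalone sketch of just the bit placement (the CRC value here is a fixed stand-in for the kernel routine):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t crc = 0x9ABCDEF0;	/* stand-in for ether_crc32_be() */
	uint32_t mchash[2] = { 0, 0 };

	/* Top bit picks the register, next five bits pick the bit. */
	mchash[crc >> 31] |= 1u << ((crc >> 26) & 0x1f);
	printf("MHT%u bit %u\n", crc >> 31, (crc >> 26) & 0x1f);
	return (0);
}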
*/ if ((sc->flags & AE_FLAG_LINK) == 0) if_printf(ifp, "watchdog timeout (missed link).\n"); else if_printf(ifp, "watchdog timeout - resetting.\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; ae_init_locked(sc); if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) ae_start_locked(ifp); } static void ae_tick(void *arg) { ae_softc_t *sc; struct mii_data *mii; sc = (ae_softc_t *)arg; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL!", __LINE__)); AE_LOCK_ASSERT(sc); mii = device_get_softc(sc->miibus); mii_tick(mii); ae_watchdog(sc); /* Watchdog check. */ callout_reset(&sc->tick_ch, hz, ae_tick, sc); } static void ae_rxvlan(ae_softc_t *sc) { struct ifnet *ifp; uint32_t val; AE_LOCK_ASSERT(sc); ifp = sc->ifp; val = AE_READ_4(sc, AE_MAC_REG); val &= ~AE_MAC_RMVLAN_EN; if ((ifp->if_capenable & IFCAP_VLAN_HWTAGGING) != 0) val |= AE_MAC_RMVLAN_EN; AE_WRITE_4(sc, AE_MAC_REG, val); } static void ae_rxfilter(ae_softc_t *sc) { struct ifnet *ifp; struct ifmultiaddr *ifma; uint32_t crc; uint32_t mchash[2]; uint32_t rxcfg; KASSERT(sc != NULL, ("[ae, %d]: sc is NULL!", __LINE__)); AE_LOCK_ASSERT(sc); ifp = sc->ifp; rxcfg = AE_READ_4(sc, AE_MAC_REG); rxcfg &= ~(AE_MAC_MCAST_EN | AE_MAC_BCAST_EN | AE_MAC_PROMISC_EN); if ((ifp->if_flags & IFF_BROADCAST) != 0) rxcfg |= AE_MAC_BCAST_EN; if ((ifp->if_flags & IFF_PROMISC) != 0) rxcfg |= AE_MAC_PROMISC_EN; if ((ifp->if_flags & IFF_ALLMULTI) != 0) rxcfg |= AE_MAC_MCAST_EN; /* * Wipe old settings. */ AE_WRITE_4(sc, AE_REG_MHT0, 0); AE_WRITE_4(sc, AE_REG_MHT1, 0); if ((ifp->if_flags & (IFF_PROMISC | IFF_ALLMULTI)) != 0) { AE_WRITE_4(sc, AE_REG_MHT0, 0xffffffff); AE_WRITE_4(sc, AE_REG_MHT1, 0xffffffff); AE_WRITE_4(sc, AE_MAC_REG, rxcfg); return; } /* * Load multicast tables. */ bzero(mchash, sizeof(mchash)); if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; crc = ether_crc32_be(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN); mchash[crc >> 31] |= 1 << ((crc >> 26) & 0x1f); } if_maddr_runlock(ifp); AE_WRITE_4(sc, AE_REG_MHT0, mchash[0]); AE_WRITE_4(sc, AE_REG_MHT1, mchash[1]); AE_WRITE_4(sc, AE_MAC_REG, rxcfg); } static int ae_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct ae_softc *sc; struct ifreq *ifr; struct mii_data *mii; int error, mask; sc = ifp->if_softc; ifr = (struct ifreq *)data; error = 0; switch (cmd) { case SIOCSIFMTU: if (ifr->ifr_mtu < ETHERMIN || ifr->ifr_mtu > ETHERMTU) error = EINVAL; else if (ifp->if_mtu != ifr->ifr_mtu) { AE_LOCK(sc); ifp->if_mtu = ifr->ifr_mtu; if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { ifp->if_drv_flags &= ~IFF_DRV_RUNNING; ae_init_locked(sc); } AE_UNLOCK(sc); } break; case SIOCSIFFLAGS: AE_LOCK(sc); if ((ifp->if_flags & IFF_UP) != 0) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { if (((ifp->if_flags ^ sc->if_flags) & (IFF_PROMISC | IFF_ALLMULTI)) != 0) ae_rxfilter(sc); } else { if ((sc->flags & AE_FLAG_DETACH) == 0) ae_init_locked(sc); } } else { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) ae_stop(sc); } sc->if_flags = ifp->if_flags; AE_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: AE_LOCK(sc); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) ae_rxfilter(sc); AE_UNLOCK(sc); break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: mii = device_get_softc(sc->miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, cmd); break; case SIOCSIFCAP: AE_LOCK(sc); mask = ifr->ifr_reqcap ^ ifp->if_capenable; if ((mask & IFCAP_VLAN_HWTAGGING) != 0 && (ifp->if_capabilities & IFCAP_VLAN_HWTAGGING) != 
0) { ifp->if_capenable ^= IFCAP_VLAN_HWTAGGING; ae_rxvlan(sc); } VLAN_CAPABILITIES(ifp); AE_UNLOCK(sc); break; default: error = ether_ioctl(ifp, cmd, data); break; } return (error); } static void ae_stop(ae_softc_t *sc) { struct ifnet *ifp; int i; AE_LOCK_ASSERT(sc); ifp = sc->ifp; ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); sc->flags &= ~AE_FLAG_LINK; sc->wd_timer = 0; /* Cancel watchdog. */ callout_stop(&sc->tick_ch); /* * Clear and disable interrupts. */ AE_WRITE_4(sc, AE_IMR_REG, 0); AE_WRITE_4(sc, AE_ISR_REG, 0xffffffff); /* * Stop Rx/Tx MACs. */ ae_stop_txmac(sc); ae_stop_rxmac(sc); /* * Stop DMA engines. */ AE_WRITE_1(sc, AE_DMAREAD_REG, ~AE_DMAREAD_EN); AE_WRITE_1(sc, AE_DMAWRITE_REG, ~AE_DMAWRITE_EN); /* * Wait for everything to enter idle state. */ for (i = 0; i < AE_IDLE_TIMEOUT; i++) { if (AE_READ_4(sc, AE_IDLE_REG) == 0) break; DELAY(100); } if (i == AE_IDLE_TIMEOUT) device_printf(sc->dev, "could not enter idle state in stop.\n"); } static void ae_update_stats_tx(uint16_t flags, ae_stats_t *stats) { if ((flags & AE_TXS_BCAST) != 0) stats->tx_bcast++; if ((flags & AE_TXS_MCAST) != 0) stats->tx_mcast++; if ((flags & AE_TXS_PAUSE) != 0) stats->tx_pause++; if ((flags & AE_TXS_CTRL) != 0) stats->tx_ctrl++; if ((flags & AE_TXS_DEFER) != 0) stats->tx_defer++; if ((flags & AE_TXS_EXCDEFER) != 0) stats->tx_excdefer++; if ((flags & AE_TXS_SINGLECOL) != 0) stats->tx_singlecol++; if ((flags & AE_TXS_MULTICOL) != 0) stats->tx_multicol++; if ((flags & AE_TXS_LATECOL) != 0) stats->tx_latecol++; if ((flags & AE_TXS_ABORTCOL) != 0) stats->tx_abortcol++; if ((flags & AE_TXS_UNDERRUN) != 0) stats->tx_underrun++; } static void ae_update_stats_rx(uint16_t flags, ae_stats_t *stats) { if ((flags & AE_RXD_BCAST) != 0) stats->rx_bcast++; if ((flags & AE_RXD_MCAST) != 0) stats->rx_mcast++; if ((flags & AE_RXD_PAUSE) != 0) stats->rx_pause++; if ((flags & AE_RXD_CTRL) != 0) stats->rx_ctrl++; if ((flags & AE_RXD_CRCERR) != 0) stats->rx_crcerr++; if ((flags & AE_RXD_CODEERR) != 0) stats->rx_codeerr++; if ((flags & AE_RXD_RUNT) != 0) stats->rx_runt++; if ((flags & AE_RXD_FRAG) != 0) stats->rx_frag++; if ((flags & AE_RXD_TRUNC) != 0) stats->rx_trunc++; if ((flags & AE_RXD_ALIGN) != 0) stats->rx_align++; } Index: stable/12/sys/dev/bm/if_bm.c =================================================================== --- stable/12/sys/dev/bm/if_bm.c (revision 339734) +++ stable/12/sys/dev/bm/if_bm.c (revision 339735) @@ -1,1298 +1,1300 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright 2008 Nathan Whitehorn. All rights reserved. * Copyright 2003 by Peter Grehan. All rights reserved. * Copyright (C) 1998, 1999, 2000 Tsubai Masanari. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * From: * NetBSD: if_bm.c,v 1.9.2.1 2000/11/01 15:02:49 tv Exp */ /* * BMAC/BMAC+ Macio cell 10/100 ethernet driver * The low-cost, low-feature Apple variant of the Sun HME */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include MODULE_DEPEND(bm, ether, 1, 1, 1); MODULE_DEPEND(bm, miibus, 1, 1, 1); /* "controller miibus0" required. See GENERIC if you get errors here. */ #include "miibus_if.h" #include "if_bmreg.h" #include "if_bmvar.h" static int bm_probe (device_t); static int bm_attach (device_t); static int bm_detach (device_t); static int bm_shutdown (device_t); static void bm_start (struct ifnet *); static void bm_start_locked (struct ifnet *); static int bm_encap (struct bm_softc *sc, struct mbuf **m_head); static int bm_ioctl (struct ifnet *, u_long, caddr_t); static void bm_init (void *); static void bm_init_locked (struct bm_softc *sc); static void bm_chip_setup (struct bm_softc *sc); static void bm_stop (struct bm_softc *sc); static void bm_setladrf (struct bm_softc *sc); static void bm_dummypacket (struct bm_softc *sc); static void bm_txintr (void *xsc); static void bm_rxintr (void *xsc); static int bm_add_rxbuf (struct bm_softc *sc, int i); static int bm_add_rxbuf_dma (struct bm_softc *sc, int i); static void bm_enable_interrupts (struct bm_softc *sc); static void bm_disable_interrupts (struct bm_softc *sc); static void bm_tick (void *xsc); static int bm_ifmedia_upd (struct ifnet *); static void bm_ifmedia_sts (struct ifnet *, struct ifmediareq *); static int bm_miibus_readreg (device_t, int, int); static int bm_miibus_writereg (device_t, int, int, int); static void bm_miibus_statchg (device_t); /* * MII bit-bang glue */ static uint32_t bm_mii_bitbang_read(device_t); static void bm_mii_bitbang_write(device_t, uint32_t); static const struct mii_bitbang_ops bm_mii_bitbang_ops = { bm_mii_bitbang_read, bm_mii_bitbang_write, { BM_MII_DATAOUT, /* MII_BIT_MDO */ BM_MII_DATAIN, /* MII_BIT_MDI */ BM_MII_CLK, /* MII_BIT_MDC */ BM_MII_OENABLE, /* MII_BIT_DIR_HOST_PHY */ 0, /* MII_BIT_DIR_PHY_HOST */ } }; static device_method_t bm_methods[] = { /* Device interface */ DEVMETHOD(device_probe, bm_probe), DEVMETHOD(device_attach, bm_attach), DEVMETHOD(device_detach, bm_detach), DEVMETHOD(device_shutdown, bm_shutdown), /* MII interface */ DEVMETHOD(miibus_readreg, bm_miibus_readreg), DEVMETHOD(miibus_writereg, bm_miibus_writereg), DEVMETHOD(miibus_statchg, bm_miibus_statchg), DEVMETHOD_END }; static driver_t bm_macio_driver = { "bm", bm_methods, sizeof(struct bm_softc) }; static devclass_t bm_devclass; DRIVER_MODULE(bm, macio, bm_macio_driver, 
bm_devclass, 0, 0); DRIVER_MODULE(miibus, bm, miibus_driver, miibus_devclass, 0, 0); /* * MII internal routines */ /* * Write the MII serial port for the MII bit-bang module. */ static void bm_mii_bitbang_write(device_t dev, uint32_t val) { struct bm_softc *sc; sc = device_get_softc(dev); CSR_WRITE_2(sc, BM_MII_CSR, val); CSR_BARRIER(sc, BM_MII_CSR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); } /* * Read the MII serial port for the MII bit-bang module. */ static uint32_t bm_mii_bitbang_read(device_t dev) { struct bm_softc *sc; uint32_t reg; sc = device_get_softc(dev); reg = CSR_READ_2(sc, BM_MII_CSR); CSR_BARRIER(sc, BM_MII_CSR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); return (reg); } /* * MII bus i/f */ static int bm_miibus_readreg(device_t dev, int phy, int reg) { return (mii_bitbang_readreg(dev, &bm_mii_bitbang_ops, phy, reg)); } static int bm_miibus_writereg(device_t dev, int phy, int reg, int data) { mii_bitbang_writereg(dev, &bm_mii_bitbang_ops, phy, reg, data); return (0); } static void bm_miibus_statchg(device_t dev) { struct bm_softc *sc = device_get_softc(dev); uint16_t reg; int new_duplex; reg = CSR_READ_2(sc, BM_TX_CONFIG); new_duplex = IFM_OPTIONS(sc->sc_mii->mii_media_active) & IFM_FDX; if (new_duplex != sc->sc_duplex) { /* Turn off TX MAC while we fiddle its settings */ reg &= ~BM_ENABLE; CSR_WRITE_2(sc, BM_TX_CONFIG, reg); while (CSR_READ_2(sc, BM_TX_CONFIG) & BM_ENABLE) DELAY(10); } if (new_duplex && !sc->sc_duplex) reg |= BM_TX_IGNORECOLL | BM_TX_FULLDPX; else if (!new_duplex && sc->sc_duplex) reg &= ~(BM_TX_IGNORECOLL | BM_TX_FULLDPX); if (new_duplex != sc->sc_duplex) { /* Turn TX MAC back on */ reg |= BM_ENABLE; CSR_WRITE_2(sc, BM_TX_CONFIG, reg); sc->sc_duplex = new_duplex; } } /* * ifmedia/mii callbacks */ static int bm_ifmedia_upd(struct ifnet *ifp) { struct bm_softc *sc = ifp->if_softc; int error; BM_LOCK(sc); error = mii_mediachg(sc->sc_mii); BM_UNLOCK(sc); return (error); } static void bm_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifm) { struct bm_softc *sc = ifp->if_softc; BM_LOCK(sc); mii_pollstat(sc->sc_mii); ifm->ifm_active = sc->sc_mii->mii_media_active; ifm->ifm_status = sc->sc_mii->mii_media_status; BM_UNLOCK(sc); } /* * Macio probe/attach */ static int bm_probe(device_t dev) { const char *dname = ofw_bus_get_name(dev); const char *dcompat = ofw_bus_get_compat(dev); /* * BMAC+ cells have a name of "ethernet" and * a compatible property of "bmac+" */ if (strcmp(dname, "bmac") == 0) { device_set_desc(dev, "Apple BMAC Ethernet Adaptor"); } else if (strcmp(dcompat, "bmac+") == 0) { device_set_desc(dev, "Apple BMAC+ Ethernet Adaptor"); } else return (ENXIO); return (0); } static int bm_attach(device_t dev) { phandle_t node; u_char *eaddr; struct ifnet *ifp; int error, cellid, i; struct bm_txsoft *txs; struct bm_softc *sc = device_get_softc(dev); ifp = sc->sc_ifp = if_alloc(IFT_ETHER); ifp->if_softc = sc; sc->sc_dev = dev; sc->sc_duplex = ~IFM_FDX; error = 0; mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->sc_tick_ch, &sc->sc_mtx, 0); /* Check for an improved version of Paddington */ sc->sc_streaming = 0; cellid = -1; node = ofw_bus_get_node(dev); OF_getprop(node, "cell-id", &cellid, sizeof(cellid)); if (cellid >= 0xc4) sc->sc_streaming = 1; sc->sc_memrid = 0; sc->sc_memr = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &sc->sc_memrid, RF_ACTIVE); if (sc->sc_memr == NULL) { device_printf(dev, "Could not alloc chip registers!\n"); return (ENXIO); } sc->sc_txdmarid = BM_TXDMA_REGISTERS;
sc->sc_rxdmarid = BM_RXDMA_REGISTERS; sc->sc_txdmar = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &sc->sc_txdmarid, RF_ACTIVE); sc->sc_rxdmar = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &sc->sc_rxdmarid, RF_ACTIVE); if (sc->sc_txdmar == NULL || sc->sc_rxdmar == NULL) { device_printf(dev, "Could not map DBDMA registers!\n"); return (ENXIO); } error = dbdma_allocate_channel(sc->sc_txdmar, 0, bus_get_dma_tag(dev), BM_MAX_DMA_COMMANDS, &sc->sc_txdma); error += dbdma_allocate_channel(sc->sc_rxdmar, 0, bus_get_dma_tag(dev), BM_MAX_DMA_COMMANDS, &sc->sc_rxdma); if (error) { device_printf(dev,"Could not allocate DBDMA channel!\n"); return (ENXIO); } /* alloc DMA tags and buffers */ error = bus_dma_tag_create(bus_get_dma_tag(dev), 1, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, BUS_SPACE_MAXSIZE_32BIT, 0, BUS_SPACE_MAXSIZE_32BIT, 0, NULL, NULL, &sc->sc_pdma_tag); if (error) { device_printf(dev,"Could not allocate DMA tag!\n"); return (ENXIO); } error = bus_dma_tag_create(sc->sc_pdma_tag, 1, 0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR, NULL, NULL, MCLBYTES, 1, MCLBYTES, BUS_DMA_ALLOCNOW, NULL, NULL, &sc->sc_rdma_tag); if (error) { device_printf(dev,"Could not allocate RX DMA channel!\n"); return (ENXIO); } error = bus_dma_tag_create(sc->sc_pdma_tag, 1, 0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR, NULL, NULL, MCLBYTES * BM_NTXSEGS, BM_NTXSEGS, MCLBYTES, BUS_DMA_ALLOCNOW, NULL, NULL, &sc->sc_tdma_tag); if (error) { device_printf(dev,"Could not allocate TX DMA tag!\n"); return (ENXIO); } /* init transmit descriptors */ STAILQ_INIT(&sc->sc_txfreeq); STAILQ_INIT(&sc->sc_txdirtyq); /* create TX DMA maps */ error = ENOMEM; for (i = 0; i < BM_MAX_TX_PACKETS; i++) { txs = &sc->sc_txsoft[i]; txs->txs_mbuf = NULL; error = bus_dmamap_create(sc->sc_tdma_tag, 0, &txs->txs_dmamap); if (error) { device_printf(sc->sc_dev, "unable to create TX DMA map %d, error = %d\n", i, error); } STAILQ_INSERT_TAIL(&sc->sc_txfreeq, txs, txs_q); } /* Create the receive buffer DMA maps. */ for (i = 0; i < BM_MAX_RX_PACKETS; i++) { error = bus_dmamap_create(sc->sc_rdma_tag, 0, &sc->sc_rxsoft[i].rxs_dmamap); if (error) { device_printf(sc->sc_dev, "unable to create RX DMA map %d, error = %d\n", i, error); } sc->sc_rxsoft[i].rxs_mbuf = NULL; } /* alloc interrupt */ bm_disable_interrupts(sc); sc->sc_txdmairqid = BM_TXDMA_INTERRUPT; sc->sc_txdmairq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &sc->sc_txdmairqid, RF_ACTIVE); if (sc->sc_txdmairq == NULL) { device_printf(dev,"Could not allocate TX interrupt!\n"); return (ENXIO); } bus_setup_intr(dev,sc->sc_txdmairq, INTR_TYPE_MISC | INTR_MPSAFE | INTR_ENTROPY, NULL, bm_txintr, sc, &sc->sc_txihtx); sc->sc_rxdmairqid = BM_RXDMA_INTERRUPT; sc->sc_rxdmairq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &sc->sc_rxdmairqid, RF_ACTIVE); if (sc->sc_rxdmairq == NULL) { device_printf(dev,"Could not allocate RX interrupt!\n"); return (ENXIO); } bus_setup_intr(dev,sc->sc_rxdmairq, INTR_TYPE_MISC | INTR_MPSAFE | INTR_ENTROPY, NULL, bm_rxintr, sc, &sc->sc_rxih); /* * Get the ethernet address from OpenFirmware */ eaddr = sc->sc_enaddr; OF_getprop(node, "local-mac-address", eaddr, ETHER_ADDR_LEN); /* * Setup MII * On Apple BMAC controllers, we end up in a weird state of * partially-completed autonegotiation on boot. So we force * autonegotiation to try again.
*/ error = mii_attach(dev, &sc->sc_miibus, ifp, bm_ifmedia_upd, bm_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, MIIF_FORCEANEG); if (error != 0) { device_printf(dev, "attaching PHYs failed\n"); return (error); } /* reset the adapter */ bm_chip_setup(sc); sc->sc_mii = device_get_softc(sc->sc_miibus); if_initname(ifp, device_get_name(sc->sc_dev), device_get_unit(sc->sc_dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_start = bm_start; ifp->if_ioctl = bm_ioctl; ifp->if_init = bm_init; IFQ_SET_MAXLEN(&ifp->if_snd, BM_MAX_TX_PACKETS); ifp->if_snd.ifq_drv_maxlen = BM_MAX_TX_PACKETS; IFQ_SET_READY(&ifp->if_snd); /* Attach the interface. */ ether_ifattach(ifp, sc->sc_enaddr); ifp->if_hwassist = 0; + gone_by_fcp101_dev(dev); + return (0); } static int bm_detach(device_t dev) { struct bm_softc *sc = device_get_softc(dev); BM_LOCK(sc); bm_stop(sc); BM_UNLOCK(sc); callout_drain(&sc->sc_tick_ch); ether_ifdetach(sc->sc_ifp); bus_teardown_intr(dev, sc->sc_txdmairq, sc->sc_txihtx); bus_teardown_intr(dev, sc->sc_rxdmairq, sc->sc_rxih); dbdma_free_channel(sc->sc_txdma); dbdma_free_channel(sc->sc_rxdma); bus_release_resource(dev, SYS_RES_MEMORY, sc->sc_memrid, sc->sc_memr); bus_release_resource(dev, SYS_RES_MEMORY, sc->sc_txdmarid, sc->sc_txdmar); bus_release_resource(dev, SYS_RES_MEMORY, sc->sc_rxdmarid, sc->sc_rxdmar); bus_release_resource(dev, SYS_RES_IRQ, sc->sc_txdmairqid, sc->sc_txdmairq); bus_release_resource(dev, SYS_RES_IRQ, sc->sc_rxdmairqid, sc->sc_rxdmairq); mtx_destroy(&sc->sc_mtx); if_free(sc->sc_ifp); return (0); } static int bm_shutdown(device_t dev) { struct bm_softc *sc; sc = device_get_softc(dev); BM_LOCK(sc); bm_stop(sc); BM_UNLOCK(sc); return (0); } static void bm_dummypacket(struct bm_softc *sc) { struct mbuf *m; struct ifnet *ifp; ifp = sc->sc_ifp; MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) return; bcopy(sc->sc_enaddr, mtod(m, struct ether_header *)->ether_dhost, ETHER_ADDR_LEN); bcopy(sc->sc_enaddr, mtod(m, struct ether_header *)->ether_shost, ETHER_ADDR_LEN); mtod(m, struct ether_header *)->ether_type = htons(3); mtod(m, unsigned char *)[14] = 0; mtod(m, unsigned char *)[15] = 0; mtod(m, unsigned char *)[16] = 0xE3; m->m_len = m->m_pkthdr.len = sizeof(struct ether_header) + 3; IF_ENQUEUE(&ifp->if_snd, m); bm_start_locked(ifp); } static void bm_rxintr(void *xsc) { struct bm_softc *sc = xsc; struct ifnet *ifp = sc->sc_ifp; struct mbuf *m; int i, prev_stop, new_stop; uint16_t status; BM_LOCK(sc); status = dbdma_get_chan_status(sc->sc_rxdma); if (status & DBDMA_STATUS_DEAD) { dbdma_reset(sc->sc_rxdma); BM_UNLOCK(sc); return; } if (!(status & DBDMA_STATUS_RUN)) { device_printf(sc->sc_dev,"Bad RX Interrupt!\n"); BM_UNLOCK(sc); return; } prev_stop = sc->next_rxdma_slot - 1; if (prev_stop < 0) prev_stop = sc->rxdma_loop_slot - 1; if (prev_stop < 0) { BM_UNLOCK(sc); return; } new_stop = -1; dbdma_sync_commands(sc->sc_rxdma, BUS_DMASYNC_POSTREAD); for (i = sc->next_rxdma_slot; i < BM_MAX_RX_PACKETS; i++) { if (i == sc->rxdma_loop_slot) i = 0; if (i == prev_stop) break; status = dbdma_get_cmd_status(sc->sc_rxdma, i); if (status == 0) break; m = sc->sc_rxsoft[i].rxs_mbuf; if (bm_add_rxbuf(sc, i)) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); m = NULL; continue; } if (m == NULL) continue; if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); m->m_pkthdr.rcvif = ifp; m->m_len -= (dbdma_get_residuals(sc->sc_rxdma, i) + 2); m->m_pkthdr.len = m->m_len; /* Send up the stack */ BM_UNLOCK(sc); (*ifp->if_input)(ifp, m); BM_LOCK(sc); /* Clear all fields on this command */ 
bm_add_rxbuf_dma(sc, i); new_stop = i; } /* Change the last packet we processed to the ring buffer terminator, * and restore a receive buffer to the old terminator */ if (new_stop >= 0) { dbdma_insert_stop(sc->sc_rxdma, new_stop); bm_add_rxbuf_dma(sc, prev_stop); if (i < sc->rxdma_loop_slot) sc->next_rxdma_slot = i; else sc->next_rxdma_slot = 0; } dbdma_sync_commands(sc->sc_rxdma, BUS_DMASYNC_PREWRITE); dbdma_wake(sc->sc_rxdma); BM_UNLOCK(sc); } static void bm_txintr(void *xsc) { struct bm_softc *sc = xsc; struct ifnet *ifp = sc->sc_ifp; struct bm_txsoft *txs; int progress = 0; BM_LOCK(sc); while ((txs = STAILQ_FIRST(&sc->sc_txdirtyq)) != NULL) { if (!dbdma_get_cmd_status(sc->sc_txdma, txs->txs_lastdesc)) break; STAILQ_REMOVE_HEAD(&sc->sc_txdirtyq, txs_q); bus_dmamap_unload(sc->sc_tdma_tag, txs->txs_dmamap); if (txs->txs_mbuf != NULL) { m_freem(txs->txs_mbuf); txs->txs_mbuf = NULL; } /* Set the first used TXDMA slot to the location of the * STOP/NOP command associated with this packet. */ sc->first_used_txdma_slot = txs->txs_stopdesc; STAILQ_INSERT_TAIL(&sc->sc_txfreeq, txs, txs_q); if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); progress = 1; } if (progress) { /* * We freed some descriptors, so reset IFF_DRV_OACTIVE * and restart. */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->sc_wdog_timer = STAILQ_EMPTY(&sc->sc_txdirtyq) ? 0 : 5; if ((ifp->if_drv_flags & IFF_DRV_RUNNING) && !IFQ_DRV_IS_EMPTY(&ifp->if_snd)) bm_start_locked(ifp); } BM_UNLOCK(sc); } static void bm_start(struct ifnet *ifp) { struct bm_softc *sc = ifp->if_softc; BM_LOCK(sc); bm_start_locked(ifp); BM_UNLOCK(sc); } static void bm_start_locked(struct ifnet *ifp) { struct bm_softc *sc = ifp->if_softc; struct mbuf *mb_head; int prev_stop; int txqueued = 0; /* * We lay out our DBDMA program in the following manner: * OUTPUT_MORE * ... * OUTPUT_LAST (+ Interrupt) * STOP * * To extend the channel, we append a new program, * then replace STOP with NOP and wake the channel. * If we stalled on the STOP already, the program proceeds, * if not it will sail through the NOP. */ while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) { IFQ_DRV_DEQUEUE(&ifp->if_snd, mb_head); if (mb_head == NULL) break; prev_stop = sc->next_txdma_slot - 1; if (bm_encap(sc, &mb_head)) { /* Put the packet back and stop */ ifp->if_drv_flags |= IFF_DRV_OACTIVE; IFQ_DRV_PREPEND(&ifp->if_snd, mb_head); break; } dbdma_insert_nop(sc->sc_txdma, prev_stop); txqueued = 1; BPF_MTAP(ifp, mb_head); } dbdma_sync_commands(sc->sc_txdma, BUS_DMASYNC_PREWRITE); if (txqueued) { dbdma_wake(sc->sc_txdma); sc->sc_wdog_timer = 5; } } static int bm_encap(struct bm_softc *sc, struct mbuf **m_head) { bus_dma_segment_t segs[BM_NTXSEGS]; struct bm_txsoft *txs; struct mbuf *m; int nsegs = BM_NTXSEGS; int error = 0; uint8_t branch_type; int i; /* Limit the command size to the number of free DBDMA slots */ if (sc->next_txdma_slot >= sc->first_used_txdma_slot) nsegs = BM_MAX_DMA_COMMANDS - 2 - sc->next_txdma_slot + sc->first_used_txdma_slot; /* -2 for branch and indexing */ else nsegs = sc->first_used_txdma_slot - sc->next_txdma_slot; /* Remove one slot for the STOP/NOP terminator */ nsegs--; if (nsegs > BM_NTXSEGS) nsegs = BM_NTXSEGS; /* Get a work queue entry. */ if ((txs = STAILQ_FIRST(&sc->sc_txfreeq)) == NULL) { /* Ran out of descriptors. 
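The DBDMA program extension described in the bm_start_locked() comment above boils down to: append the new packet's commands after the current STOP, terminate with a fresh STOP, then patch the old STOP into a NOP so a channel stalled on it runs through into the new work. A standalone model of that hand-off on a small command array:

#include <assert.h>

enum cmd { NOP, OUTPUT, STOP, EMPTY };

#define NCMDS	8

int
main(void)
{
	enum cmd prog[NCMDS] = { OUTPUT, STOP, EMPTY, EMPTY, EMPTY,
	    EMPTY, EMPTY, EMPTY };
	int prev_stop = 1, slot = 2;

	prog[slot++] = OUTPUT;		/* new packet's command(s) */
	prog[slot++] = STOP;		/* new terminator */
	prog[prev_stop] = NOP;		/* unblock the old terminator */

	assert(prog[1] == NOP && prog[3] == STOP);
	return (0);
}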
*/ return (ENOBUFS); } error = bus_dmamap_load_mbuf_sg(sc->sc_tdma_tag, txs->txs_dmamap, *m_head, segs, &nsegs, BUS_DMA_NOWAIT); if (error == EFBIG) { m = m_collapse(*m_head, M_NOWAIT, nsegs); if (m == NULL) { m_freem(*m_head); *m_head = NULL; return (ENOBUFS); } *m_head = m; error = bus_dmamap_load_mbuf_sg(sc->sc_tdma_tag, txs->txs_dmamap, *m_head, segs, &nsegs, BUS_DMA_NOWAIT); if (error != 0) { m_freem(*m_head); *m_head = NULL; return (error); } } else if (error != 0) return (error); if (nsegs == 0) { m_freem(*m_head); *m_head = NULL; return (EIO); } txs->txs_ndescs = nsegs; txs->txs_firstdesc = sc->next_txdma_slot; for (i = 0; i < nsegs; i++) { /* Loop back to the beginning if this is our last slot */ if (sc->next_txdma_slot == (BM_MAX_DMA_COMMANDS - 1)) branch_type = DBDMA_ALWAYS; else branch_type = DBDMA_NEVER; if (i+1 == nsegs) txs->txs_lastdesc = sc->next_txdma_slot; dbdma_insert_command(sc->sc_txdma, sc->next_txdma_slot++, (i + 1 < nsegs) ? DBDMA_OUTPUT_MORE : DBDMA_OUTPUT_LAST, 0, segs[i].ds_addr, segs[i].ds_len, (i + 1 < nsegs) ? DBDMA_NEVER : DBDMA_ALWAYS, branch_type, DBDMA_NEVER, 0); if (branch_type == DBDMA_ALWAYS) sc->next_txdma_slot = 0; } /* We have a corner case where the STOP command is the last slot, * but you can't branch in STOP commands. So add a NOP branch here * and the STOP in slot 0. */ if (sc->next_txdma_slot == (BM_MAX_DMA_COMMANDS - 1)) { dbdma_insert_branch(sc->sc_txdma, sc->next_txdma_slot, 0); sc->next_txdma_slot = 0; } txs->txs_stopdesc = sc->next_txdma_slot; dbdma_insert_stop(sc->sc_txdma, sc->next_txdma_slot++); STAILQ_REMOVE_HEAD(&sc->sc_txfreeq, txs_q); STAILQ_INSERT_TAIL(&sc->sc_txdirtyq, txs, txs_q); txs->txs_mbuf = *m_head; return (0); } static int bm_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct bm_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *)data; int error; error = 0; switch(cmd) { case SIOCSIFFLAGS: BM_LOCK(sc); if ((ifp->if_flags & IFF_UP) != 0) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0 && ((ifp->if_flags ^ sc->sc_ifpflags) & (IFF_ALLMULTI | IFF_PROMISC)) != 0) bm_setladrf(sc); else bm_init_locked(sc); } else if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) bm_stop(sc); sc->sc_ifpflags = ifp->if_flags; BM_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: BM_LOCK(sc); bm_setladrf(sc); BM_UNLOCK(sc); break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: error = ifmedia_ioctl(ifp, ifr, &sc->sc_mii->mii_media, cmd); break; default: error = ether_ioctl(ifp, cmd, data); break; } return (error); } static void bm_setladrf(struct bm_softc *sc) { struct ifnet *ifp = sc->sc_ifp; struct ifmultiaddr *inm; uint16_t hash[4]; uint16_t reg; uint32_t crc; reg = BM_CRC_ENABLE | BM_REJECT_OWN_PKTS; /* Turn off RX MAC while we fiddle its settings */ CSR_WRITE_2(sc, BM_RX_CONFIG, reg); while (CSR_READ_2(sc, BM_RX_CONFIG) & BM_ENABLE) DELAY(10); if ((ifp->if_flags & IFF_PROMISC) != 0) { reg |= BM_PROMISC; CSR_WRITE_2(sc, BM_RX_CONFIG, reg); DELAY(15); reg = CSR_READ_2(sc, BM_RX_CONFIG); reg |= BM_ENABLE; CSR_WRITE_2(sc, BM_RX_CONFIG, reg); return; } if ((ifp->if_flags & IFF_ALLMULTI) != 0) { hash[3] = hash[2] = hash[1] = hash[0] = 0xffff; } else { /* Clear the hash table. */ memset(hash, 0, sizeof(hash)); if_maddr_rlock(ifp); CK_STAILQ_FOREACH(inm, &ifp->if_multiaddrs, ifma_link) { if (inm->ifma_addr->sa_family != AF_LINK) continue; crc = ether_crc32_le(LLADDR((struct sockaddr_dl *) inm->ifma_addr), ETHER_ADDR_LEN); /* We just want the 6 most significant bits */ crc >>= 26; /* Set the corresponding bit in the filter.
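Unlike ae(4)'s two 32-bit registers, bm_setladrf() spreads its hash over four 16-bit registers: after the CRC is shifted down to its top six bits, the high two bits pick the register and the low four the bit, as the line just below shows. A standalone sketch of that geometry (fixed value standing in for ether_crc32_le(9)):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t crc = 0xDEADBEEF >> 26;	/* stand-in, already shifted */
	uint16_t hash[4] = { 0, 0, 0, 0 };

	/* High two bits pick the register, low four the bit. */
	hash[crc >> 4] |= 1u << (crc & 0xf);
	printf("BM_HASHTAB%u bit %u\n", crc >> 4, crc & 0xf);
	return (0);
}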
*/ hash[crc >> 4] |= 1 << (crc & 0xf); } if_maddr_runlock(ifp); } /* Write out new hash table */ CSR_WRITE_2(sc, BM_HASHTAB0, hash[0]); CSR_WRITE_2(sc, BM_HASHTAB1, hash[1]); CSR_WRITE_2(sc, BM_HASHTAB2, hash[2]); CSR_WRITE_2(sc, BM_HASHTAB3, hash[3]); /* And turn the RX MAC back on, this time with the hash bit set */ reg |= BM_HASH_FILTER_ENABLE; CSR_WRITE_2(sc, BM_RX_CONFIG, reg); while (!(CSR_READ_2(sc, BM_RX_CONFIG) & BM_HASH_FILTER_ENABLE)) DELAY(10); reg = CSR_READ_2(sc, BM_RX_CONFIG); reg |= BM_ENABLE; CSR_WRITE_2(sc, BM_RX_CONFIG, reg); } static void bm_init(void *xsc) { struct bm_softc *sc = xsc; BM_LOCK(sc); bm_init_locked(sc); BM_UNLOCK(sc); } static void bm_chip_setup(struct bm_softc *sc) { uint16_t reg; uint16_t *eaddr_sect; eaddr_sect = (uint16_t *)(sc->sc_enaddr); dbdma_stop(sc->sc_txdma); dbdma_stop(sc->sc_rxdma); /* Reset chip */ CSR_WRITE_2(sc, BM_RX_RESET, 0x0000); CSR_WRITE_2(sc, BM_TX_RESET, 0x0001); do { DELAY(10); reg = CSR_READ_2(sc, BM_TX_RESET); } while (reg & 0x0001); /* Some random junk. OS X uses the system time. We use * the low 16 bits of the MAC address. */ CSR_WRITE_2(sc, BM_TX_RANDSEED, eaddr_sect[2]); /* Enable transmit */ reg = CSR_READ_2(sc, BM_TX_IFC); reg |= BM_ENABLE; CSR_WRITE_2(sc, BM_TX_IFC, reg); CSR_READ_2(sc, BM_TX_PEAKCNT); } static void bm_stop(struct bm_softc *sc) { struct bm_txsoft *txs; uint16_t reg; /* Disable TX and RX MACs */ reg = CSR_READ_2(sc, BM_TX_CONFIG); reg &= ~BM_ENABLE; CSR_WRITE_2(sc, BM_TX_CONFIG, reg); reg = CSR_READ_2(sc, BM_RX_CONFIG); reg &= ~BM_ENABLE; CSR_WRITE_2(sc, BM_RX_CONFIG, reg); DELAY(100); /* Stop DMA engine */ dbdma_stop(sc->sc_rxdma); dbdma_stop(sc->sc_txdma); sc->next_rxdma_slot = 0; sc->rxdma_loop_slot = 0; /* Disable interrupts */ bm_disable_interrupts(sc); /* Don't worry about pending transmits anymore */ while ((txs = STAILQ_FIRST(&sc->sc_txdirtyq)) != NULL) { STAILQ_REMOVE_HEAD(&sc->sc_txdirtyq, txs_q); if (txs->txs_ndescs != 0) { bus_dmamap_sync(sc->sc_tdma_tag, txs->txs_dmamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->sc_tdma_tag, txs->txs_dmamap); if (txs->txs_mbuf != NULL) { m_freem(txs->txs_mbuf); txs->txs_mbuf = NULL; } } STAILQ_INSERT_TAIL(&sc->sc_txfreeq, txs, txs_q); } /* And we're down */ sc->sc_ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); sc->sc_wdog_timer = 0; callout_stop(&sc->sc_tick_ch); } static void bm_init_locked(struct bm_softc *sc) { uint16_t reg; uint16_t *eaddr_sect; struct bm_rxsoft *rxs; int i; eaddr_sect = (uint16_t *)(sc->sc_enaddr); /* Zero RX slot info and stop DMA */ dbdma_stop(sc->sc_rxdma); dbdma_stop(sc->sc_txdma); sc->next_rxdma_slot = 0; sc->rxdma_loop_slot = 0; /* Initialize TX/RX DBDMA programs */ dbdma_insert_stop(sc->sc_rxdma, 0); dbdma_insert_stop(sc->sc_txdma, 0); dbdma_set_current_cmd(sc->sc_rxdma, 0); dbdma_set_current_cmd(sc->sc_txdma, 0); sc->next_rxdma_slot = 0; sc->next_txdma_slot = 1; sc->first_used_txdma_slot = 0; for (i = 0; i < BM_MAX_RX_PACKETS; i++) { rxs = &sc->sc_rxsoft[i]; rxs->dbdma_slot = i; if (rxs->rxs_mbuf == NULL) { bm_add_rxbuf(sc, i); if (rxs->rxs_mbuf == NULL) { /* If we can't add anymore, mark the problem */ rxs->dbdma_slot = -1; break; } } if (i > 0) bm_add_rxbuf_dma(sc, i); } /* * Now terminate the RX ring buffer, and follow with the loop to * the beginning. 
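 *
 * Editor's note: after this init loop the RX program forms a ring
 * (i counts the buffers actually armed, normally BM_MAX_RX_PACKETS):
 *
 *	slots 0 .. i-2 : INPUT_LAST, one receive buffer each
 *	slot  i-1      : STOP, the moving terminator
 *	slot  i        : branch -> slot 0  (saved as sc->rxdma_loop_slot)
 *
 * bm_rxintr() advances the STOP as packets are harvested and re-arms a
 * buffer at the old terminator before waking the channel, so the
 * hardware can never lap the driver.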
*/ dbdma_insert_stop(sc->sc_rxdma, i - 1); dbdma_insert_branch(sc->sc_rxdma, i, 0); sc->rxdma_loop_slot = i; /* Now add in the first element of the RX DMA chain */ bm_add_rxbuf_dma(sc, 0); dbdma_sync_commands(sc->sc_rxdma, BUS_DMASYNC_PREWRITE); dbdma_sync_commands(sc->sc_txdma, BUS_DMASYNC_PREWRITE); /* Zero collision counters */ CSR_WRITE_2(sc, BM_TX_NCCNT, 0); CSR_WRITE_2(sc, BM_TX_FCCNT, 0); CSR_WRITE_2(sc, BM_TX_EXCNT, 0); CSR_WRITE_2(sc, BM_TX_LTCNT, 0); /* Zero receive counters */ CSR_WRITE_2(sc, BM_RX_FRCNT, 0); CSR_WRITE_2(sc, BM_RX_LECNT, 0); CSR_WRITE_2(sc, BM_RX_AECNT, 0); CSR_WRITE_2(sc, BM_RX_FECNT, 0); CSR_WRITE_2(sc, BM_RXCV, 0); /* Prime transmit */ CSR_WRITE_2(sc, BM_TX_THRESH, 0xff); CSR_WRITE_2(sc, BM_TXFIFO_CSR, 0); CSR_WRITE_2(sc, BM_TXFIFO_CSR, 0x0001); /* Prime receive */ CSR_WRITE_2(sc, BM_RXFIFO_CSR, 0); CSR_WRITE_2(sc, BM_RXFIFO_CSR, 0x0001); /* Clear status reg */ CSR_READ_2(sc, BM_STATUS); /* Zero hash filters */ CSR_WRITE_2(sc, BM_HASHTAB0, 0); CSR_WRITE_2(sc, BM_HASHTAB1, 0); CSR_WRITE_2(sc, BM_HASHTAB2, 0); CSR_WRITE_2(sc, BM_HASHTAB3, 0); /* Write MAC address to chip */ CSR_WRITE_2(sc, BM_MACADDR0, eaddr_sect[0]); CSR_WRITE_2(sc, BM_MACADDR1, eaddr_sect[1]); CSR_WRITE_2(sc, BM_MACADDR2, eaddr_sect[2]); /* Final receive engine setup */ reg = BM_CRC_ENABLE | BM_REJECT_OWN_PKTS | BM_HASH_FILTER_ENABLE; CSR_WRITE_2(sc, BM_RX_CONFIG, reg); /* Now turn it all on! */ dbdma_reset(sc->sc_rxdma); dbdma_reset(sc->sc_txdma); /* Enable RX and TX MACs. Setting the address filter has * the side effect of enabling the RX MAC. */ bm_setladrf(sc); reg = CSR_READ_2(sc, BM_TX_CONFIG); reg |= BM_ENABLE; CSR_WRITE_2(sc, BM_TX_CONFIG, reg); /* * Enable interrupts, unwedge the controller with a dummy packet, * and nudge the DMA queue. */ bm_enable_interrupts(sc); bm_dummypacket(sc); dbdma_wake(sc->sc_rxdma); /* Nudge RXDMA */ sc->sc_ifp->if_drv_flags |= IFF_DRV_RUNNING; sc->sc_ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->sc_ifpflags = sc->sc_ifp->if_flags; /* Resync PHY and MAC states */ sc->sc_mii = device_get_softc(sc->sc_miibus); sc->sc_duplex = ~IFM_FDX; mii_mediachg(sc->sc_mii); /* Start the one second timer. 
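 *
 * Editor's note on the watchdog convention used by bm_tick() below:
 * the TX path arms sc_wdog_timer to 5 (seconds) while descriptors are
 * outstanding and bm_txintr() clears it to 0 once the dirty queue
 * drains, so the tick routine only declares a timeout when the
 * countdown actually expires:
 *
 *	if (sc->sc_wdog_timer == 0 || --sc->sc_wdog_timer != 0)
 *		callout_reset(&sc->sc_tick_ch, hz, bm_tick, sc);  // fine
 *	else
 *		// "device timeout": reinitialize via bm_init_locked()
 *
 * A value of 0 means "nothing pending", so the first clause also keeps
 * the callout alive while the interface is idle.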
*/ sc->sc_wdog_timer = 0; callout_reset(&sc->sc_tick_ch, hz, bm_tick, sc); } static void bm_tick(void *arg) { struct bm_softc *sc = arg; /* Read error counters */ if_inc_counter(sc->sc_ifp, IFCOUNTER_COLLISIONS, CSR_READ_2(sc, BM_TX_NCCNT) + CSR_READ_2(sc, BM_TX_FCCNT) + CSR_READ_2(sc, BM_TX_EXCNT) + CSR_READ_2(sc, BM_TX_LTCNT)); if_inc_counter(sc->sc_ifp, IFCOUNTER_IERRORS, CSR_READ_2(sc, BM_RX_LECNT) + CSR_READ_2(sc, BM_RX_AECNT) + CSR_READ_2(sc, BM_RX_FECNT)); /* Zero collision counters */ CSR_WRITE_2(sc, BM_TX_NCCNT, 0); CSR_WRITE_2(sc, BM_TX_FCCNT, 0); CSR_WRITE_2(sc, BM_TX_EXCNT, 0); CSR_WRITE_2(sc, BM_TX_LTCNT, 0); /* Zero receive counters */ CSR_WRITE_2(sc, BM_RX_FRCNT, 0); CSR_WRITE_2(sc, BM_RX_LECNT, 0); CSR_WRITE_2(sc, BM_RX_AECNT, 0); CSR_WRITE_2(sc, BM_RX_FECNT, 0); CSR_WRITE_2(sc, BM_RXCV, 0); /* Check for link changes and run watchdog */ mii_tick(sc->sc_mii); bm_miibus_statchg(sc->sc_dev); if (sc->sc_wdog_timer == 0 || --sc->sc_wdog_timer != 0) { callout_reset(&sc->sc_tick_ch, hz, bm_tick, sc); return; } /* Problems */ device_printf(sc->sc_dev, "device timeout\n"); bm_init_locked(sc); } static int bm_add_rxbuf(struct bm_softc *sc, int idx) { struct bm_rxsoft *rxs = &sc->sc_rxsoft[idx]; struct mbuf *m; bus_dma_segment_t segs[1]; int error, nsegs; m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (m == NULL) return (ENOBUFS); m->m_len = m->m_pkthdr.len = m->m_ext.ext_size; if (rxs->rxs_mbuf != NULL) { bus_dmamap_sync(sc->sc_rdma_tag, rxs->rxs_dmamap, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->sc_rdma_tag, rxs->rxs_dmamap); } error = bus_dmamap_load_mbuf_sg(sc->sc_rdma_tag, rxs->rxs_dmamap, m, segs, &nsegs, BUS_DMA_NOWAIT); if (error != 0) { device_printf(sc->sc_dev, "cannot load RS DMA map %d, error = %d\n", idx, error); m_freem(m); return (error); } /* If nsegs is wrong then the stack is corrupt. */ KASSERT(nsegs == 1, ("%s: too many DMA segments (%d)", __func__, nsegs)); rxs->rxs_mbuf = m; rxs->segment = segs[0]; bus_dmamap_sync(sc->sc_rdma_tag, rxs->rxs_dmamap, BUS_DMASYNC_PREREAD); return (0); } static int bm_add_rxbuf_dma(struct bm_softc *sc, int idx) { struct bm_rxsoft *rxs = &sc->sc_rxsoft[idx]; dbdma_insert_command(sc->sc_rxdma, idx, DBDMA_INPUT_LAST, 0, rxs->segment.ds_addr, rxs->segment.ds_len, DBDMA_ALWAYS, DBDMA_NEVER, DBDMA_NEVER, 0); return (0); } static void bm_enable_interrupts(struct bm_softc *sc) { CSR_WRITE_2(sc, BM_INTR_DISABLE, (sc->sc_streaming) ? BM_INTR_NONE : BM_INTR_NORMAL); } static void bm_disable_interrupts(struct bm_softc *sc) { CSR_WRITE_2(sc, BM_INTR_DISABLE, BM_INTR_NONE); } Index: stable/12/sys/dev/cs/if_cs.c =================================================================== --- stable/12/sys/dev/cs/if_cs.c (revision 339734) +++ stable/12/sys/dev/cs/if_cs.c (revision 339735) @@ -1,1227 +1,1229 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 1997,1998 Maxim Bolotin and Oleg Sharoiko. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice unmodified, this list of conditions, and the following * disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include __FBSDID("$FreeBSD$"); /* * * Device driver for Crystal Semiconductor CS8920 based ethernet * adapters. By Maxim Bolotin and Oleg Sharoiko, 27-April-1997 */ /* #define CS_DEBUG */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef CS_USE_64K_DMA #define CS_DMA_BUFFER_SIZE 65536 #else #define CS_DMA_BUFFER_SIZE 16384 #endif static void cs_init(void *); static void cs_init_locked(struct cs_softc *); static int cs_ioctl(struct ifnet *, u_long, caddr_t); static void cs_start(struct ifnet *); static void cs_start_locked(struct ifnet *); static void cs_stop(struct cs_softc *); static void cs_reset(struct cs_softc *); static void cs_watchdog(void *); static int cs_mediachange(struct ifnet *); static void cs_mediastatus(struct ifnet *, struct ifmediareq *); static int cs_mediaset(struct cs_softc *, int); static void cs_write_mbufs(struct cs_softc*, struct mbuf*); static void cs_xmit_buf(struct cs_softc*); static int cs_get_packet(struct cs_softc*); static void cs_setmode(struct cs_softc*); static int get_eeprom_data(struct cs_softc *sc, int, int, uint16_t *); static int get_eeprom_cksum(int, int, uint16_t *); static int wait_eeprom_ready( struct cs_softc *); static void control_dc_dc( struct cs_softc *, int ); static int enable_tp(struct cs_softc *); static int enable_aui(struct cs_softc *); static int enable_bnc(struct cs_softc *); static int cs_duplex_auto(struct cs_softc *); devclass_t cs_devclass; driver_intr_t csintr; /* sysctl vars */ static SYSCTL_NODE(_hw, OID_AUTO, cs, CTLFLAG_RD, 0, "cs device parameters"); int cs_ignore_cksum_failure = 0; SYSCTL_INT(_hw_cs, OID_AUTO, ignore_checksum_failure, CTLFLAG_RWTUN, &cs_ignore_cksum_failure, 0, "ignore checksum errors in cs card EEPROM"); static int cs_recv_delay = 570; SYSCTL_INT(_hw_cs, OID_AUTO, recv_delay, CTLFLAG_RWTUN, &cs_recv_delay, 570, ""); static int cs8900_eeint2irq[16] = { 10, 11, 12, 5, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255 }; static int cs8900_irq2eeint[16] = { 255, 255, 255, 255, 255, 3, 255, 255, 255, 0, 1, 2, 255, 255, 255, 255 }; static int get_eeprom_data(struct cs_softc *sc, int off, int len, uint16_t *buffer) { int i; #ifdef CS_DEBUG device_printf(sc->dev, "EEPROM data from %x for %x:\n", off, len); #endif for (i=0; i < len; i++) { if (wait_eeprom_ready(sc) < 0) return (-1); /* Send command to EEPROM to read */ cs_writereg(sc, PP_EECMD, (off + i) | EEPROM_READ_CMD); if (wait_eeprom_ready(sc) < 0) return (-1); buffer[i] = cs_readreg(sc, PP_EEData); #ifdef CS_DEBUG printf("%04x ",buffer[i]); #endif } #ifdef CS_DEBUG printf("\n"); #endif return (0); } static int 
get_eeprom_cksum(int off, int len, uint16_t *buffer) { int i; uint16_t cksum=0; for (i = 0; i < len; i++) cksum += buffer[i]; cksum &= 0xffff; if (cksum == 0 || cs_ignore_cksum_failure) return (0); return (-1); } static int wait_eeprom_ready(struct cs_softc *sc) { int i; /* * From the CS8900A datasheet, section 3.5.2: * "Before issuing any command to the EEPROM, the host must wait * for the SIBUSY bit (Register 16, SelfST, bit 8) to clear. After * each command has been issued, the host must wait again for SIBUSY * to clear." * * Before we issue the command, we should be !busy, so that will * be fast. The datasheet suggests that clock out from the part * per word will be on the order of 25us, which is consistent with * the 1MHz serial clock and 16bits... We should never hit 100, * let alone 15,000 here. The original code did an unconditional * 30ms DELAY here. Bad Kharma. cs_readreg takes ~2us. */ for (i = 0; i < 15000; i++) /* 30ms max */ if (!(cs_readreg(sc, PP_SelfST) & SI_BUSY)) return (0); return (1); } static void control_dc_dc(struct cs_softc *sc, int on_not_off) { unsigned int self_control = HCB1_ENBL; if (((sc->adapter_cnf & A_CNF_DC_DC_POLARITY)!=0) ^ on_not_off) self_control |= HCB1; else self_control &= ~HCB1; cs_writereg(sc, PP_SelfCTL, self_control); DELAY(500000); /* Bad! */ } static int cs_duplex_auto(struct cs_softc *sc) { int i, error=0; cs_writereg(sc, PP_AutoNegCTL, RE_NEG_NOW | ALLOW_FDX | AUTO_NEG_ENABLE); for (i=0; cs_readreg(sc, PP_AutoNegST) & AUTO_NEG_BUSY; i++) { if (i > 4000) { device_printf(sc->dev, "full/half duplex auto negotiation timeout\n"); error = ETIMEDOUT; break; } DELAY(1000); } return (error); } static int enable_tp(struct cs_softc *sc) { cs_writereg(sc, PP_LineCTL, sc->line_ctl & ~AUI_ONLY); control_dc_dc(sc, 0); return (0); } static int enable_aui(struct cs_softc *sc) { cs_writereg(sc, PP_LineCTL, (sc->line_ctl & ~AUTO_AUI_10BASET) | AUI_ONLY); control_dc_dc(sc, 0); return (0); } static int enable_bnc(struct cs_softc *sc) { cs_writereg(sc, PP_LineCTL, (sc->line_ctl & ~AUTO_AUI_10BASET) | AUI_ONLY); control_dc_dc(sc, 1); return (0); } int cs_cs89x0_probe(device_t dev) { int i; int error; rman_res_t irq, junk; struct cs_softc *sc = device_get_softc(dev); unsigned rev_type = 0; uint16_t id; char chip_revision; uint16_t eeprom_buff[CHKSUM_LEN]; int chip_type, pp_isaint; sc->dev = dev; error = cs_alloc_port(dev, 0, CS_89x0_IO_PORTS); if (error) return (error); if ((cs_inw(sc, ADD_PORT) & ADD_MASK) != ADD_SIG) { /* Chip not detected. Let's try to reset it */ if (bootverbose) device_printf(dev, "trying to reset the chip.\n"); cs_outw(sc, ADD_PORT, PP_SelfCTL); i = cs_inw(sc, DATA_PORT); cs_outw(sc, ADD_PORT, PP_SelfCTL); cs_outw(sc, DATA_PORT, i | POWER_ON_RESET); if ((cs_inw(sc, ADD_PORT) & ADD_MASK) != ADD_SIG) return (ENXIO); } for (i = 0; i < 10000; i++) { id = cs_readreg(sc, PP_ChipID); if (id == CHIP_EISA_ID_SIG) break; } if (i == 10000) return (ENXIO); rev_type = cs_readreg(sc, PRODUCT_ID_ADD); chip_type = rev_type & ~REVISON_BITS; chip_revision = ((rev_type & REVISON_BITS) >> 8) + 'A'; sc->chip_type = chip_type; if (chip_type == CS8900) { pp_isaint = PP_CS8900_ISAINT; sc->send_cmd = TX_CS8900_AFTER_ALL; } else { pp_isaint = PP_CS8920_ISAINT; sc->send_cmd = TX_CS8920_AFTER_ALL; } /* * Clear some fields so that fail of EEPROM will left them clean */ sc->auto_neg_cnf = 0; sc->adapter_cnf = 0; sc->isa_config = 0; /* * If no interrupt specified, use what the board tells us. 
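 *
 * Editor's note: wait_eeprom_ready() above is a bounded busy-wait on
 * the SIBUSY bit rather than a fixed sleep:
 *
 *	for (i = 0; i < 15000; i++)	// ~30ms worst case at ~2us/read
 *		if (!(cs_readreg(sc, PP_SelfST) & SI_BUSY))
 *			return (0);
 *	return (1);			// timed out, caller gives up
 *
 * With the ~25us-per-word EEPROM clock-out this normally completes in
 * a handful of register reads instead of the unconditional 30ms
 * DELAY() the original code paid on every access.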
 */
	error = bus_get_resource(dev, SYS_RES_IRQ, 0, &irq, &junk);

	/*
	 * Get data from EEPROM
	 */
	if((cs_readreg(sc, PP_SelfST) & EEPROM_PRESENT) == 0) {
		device_printf(dev, "No EEPROM, assuming defaults.\n");
	} else if (get_eeprom_data(sc,START_EEPROM_DATA,CHKSUM_LEN,
	    eeprom_buff)<0) {
		device_printf(dev, "EEPROM read failed, assuming defaults.\n");
	} else if (get_eeprom_cksum(START_EEPROM_DATA,CHKSUM_LEN,
	    eeprom_buff)<0) {
		device_printf(dev, "EEPROM checksum bad, assuming defaults.\n");
	} else {
		sc->auto_neg_cnf = eeprom_buff[AUTO_NEG_CNF_OFFSET];
		sc->adapter_cnf = eeprom_buff[ADAPTER_CNF_OFFSET];
		sc->isa_config = eeprom_buff[ISA_CNF_OFFSET];

		for (i=0; i<ETHER_ADDR_LEN/2; i++) {
			sc->enaddr[i*2] = eeprom_buff[i];
			sc->enaddr[i*2+1] = eeprom_buff[i] >> 8;
		}

		/*
		 * If no interrupt specified, use what the
		 * board tells us.
		 */
		if (error) {
			irq = sc->isa_config & INT_NO_MASK;
			error = 0;
			if (chip_type == CS8900) {
				irq = cs8900_eeint2irq[irq];
			} else {
				if (irq > CS8920_NO_INTS)
					irq = 255;
			}
			if (irq == 255) {
				device_printf(dev, "invalid irq in EEPROM.\n");
				error = EINVAL;
			}
			if (!error)
				bus_set_resource(dev, SYS_RES_IRQ, 0, irq, 1);
		}
	}

	if (!error && !(sc->flags & CS_NO_IRQ)) {
		if (chip_type == CS8900) {
			if (irq < 16)
				irq = cs8900_irq2eeint[irq];
			else
				irq = 255;
		} else {
			if (irq > CS8920_NO_INTS)
				irq = 255;
		}
		if (irq == 255)
			error = EINVAL;
	}

	if (error) {
		device_printf(dev, "Unknown or invalid irq\n");
		return (error);
	}

	if (!(sc->flags & CS_NO_IRQ))
		cs_writereg(sc, pp_isaint, irq);

	if (bootverbose)
		device_printf(dev, "CS89%c0%s rev %c media%s%s%s\n",
			chip_type == CS8900 ? '0' : '2',
			chip_type == CS8920M ? "M" : "",
			chip_revision,
			(sc->adapter_cnf & A_CNF_10B_T) ? " TP" : "",
			(sc->adapter_cnf & A_CNF_AUI) ? " AUI" : "",
			(sc->adapter_cnf & A_CNF_10B_2) ? " BNC" : "");

	if ((sc->adapter_cnf & A_CNF_EXTND_10B_2) &&
	    (sc->adapter_cnf & A_CNF_LOW_RX_SQUELCH))
		sc->line_ctl = LOW_RX_SQUELCH;
	else
		sc->line_ctl = 0;

	return (0);
}

/*
 * Allocate a port resource with the given resource id.
 */
int
cs_alloc_port(device_t dev, int rid, int size)
{
	struct cs_softc *sc = device_get_softc(dev);
	struct resource *res;

	res = bus_alloc_resource_anywhere(dev, SYS_RES_IOPORT, &rid,
	    size, RF_ACTIVE);
	if (res == NULL)
		return (ENOENT);
	sc->port_rid = rid;
	sc->port_res = res;
	return (0);
}

/*
 * Allocate an irq resource with the given resource id.
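 *
 * Editor's note on the IRQ translation in the probe above (a condensed
 * sketch, not the literal control flow): a CS8900 stores an interrupt
 * *index* rather than a host IRQ number, so the driver converts in
 * both directions through the tables defined earlier --
 * cs8900_eeint2irq maps index 0..3 to IRQ 10, 11, 12 and 5, and
 * cs8900_irq2eeint is the reverse, with 255 meaning invalid:
 *
 *	irq = cs8900_eeint2irq[sc->isa_config & INT_NO_MASK];
 *	...
 *	cs_writereg(sc, pp_isaint, irq);	// irq already re-encoded
 *
 * CS8920 parts take the IRQ number directly, capped by CS8920_NO_INTS.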
*/ int cs_alloc_irq(device_t dev, int rid) { struct cs_softc *sc = device_get_softc(dev); struct resource *res; res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE); if (res == NULL) return (ENOENT); sc->irq_rid = rid; sc->irq_res = res; return (0); } /* * Release all resources */ void cs_release_resources(device_t dev) { struct cs_softc *sc = device_get_softc(dev); if (sc->port_res) { bus_release_resource(dev, SYS_RES_IOPORT, sc->port_rid, sc->port_res); sc->port_res = 0; } if (sc->irq_res) { bus_release_resource(dev, SYS_RES_IRQ, sc->irq_rid, sc->irq_res); sc->irq_res = 0; } } /* * Install the interface into kernel networking data structures */ int cs_attach(device_t dev) { int error, media=0; struct cs_softc *sc = device_get_softc(dev); struct ifnet *ifp; sc->dev = dev; ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); cs_release_resources(dev); return (ENOMEM); } mtx_init(&sc->lock, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->timer, &sc->lock, 0); CS_LOCK(sc); cs_stop(sc); CS_UNLOCK(sc); ifp->if_softc=sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_start=cs_start; ifp->if_ioctl=cs_ioctl; ifp->if_init=cs_init; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_flags=(IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST); /* * this code still in progress (DMA support) * sc->recv_ring=malloc(CS_DMA_BUFFER_SIZE<<1, M_DEVBUF, M_NOWAIT); if (sc->recv_ring == NULL) { log(LOG_ERR, "%s: Couldn't allocate memory for NIC\n", ifp->if_xname); return(0); } if ((sc->recv_ring-(sc->recv_ring & 0x1FFFF)) < (128*1024-CS_DMA_BUFFER_SIZE)) sc->recv_ring+=16*1024; */ sc->buffer=malloc(ETHER_MAX_LEN-ETHER_CRC_LEN,M_DEVBUF,M_NOWAIT); if (sc->buffer == NULL) { device_printf(sc->dev, "Couldn't allocate memory for NIC\n"); if_free(ifp); mtx_destroy(&sc->lock); cs_release_resources(dev); return(ENOMEM); } /* * Initialize the media structures. 
ifmedia_init(&sc->media, 0, cs_mediachange, cs_mediastatus);

	if (sc->adapter_cnf & A_CNF_10B_T) {
		ifmedia_add(&sc->media, IFM_ETHER|IFM_10_T, 0, NULL);
		if (sc->chip_type != CS8900) {
			ifmedia_add(&sc->media,
			    IFM_ETHER|IFM_10_T|IFM_FDX, 0, NULL);
			ifmedia_add(&sc->media,
			    IFM_ETHER|IFM_10_T|IFM_HDX, 0, NULL);
		}
	}

	if (sc->adapter_cnf & A_CNF_10B_2)
		ifmedia_add(&sc->media, IFM_ETHER|IFM_10_2, 0, NULL);

	if (sc->adapter_cnf & A_CNF_AUI)
		ifmedia_add(&sc->media, IFM_ETHER|IFM_10_5, 0, NULL);

	if (sc->adapter_cnf & A_CNF_MEDIA)
		ifmedia_add(&sc->media, IFM_ETHER|IFM_AUTO, 0, NULL);

	/* Set default media from EEPROM */
	switch (sc->adapter_cnf & A_CNF_MEDIA_TYPE) {
	case A_CNF_MEDIA_AUTO:
		media = IFM_ETHER|IFM_AUTO;
		break;
	case A_CNF_MEDIA_10B_T:
		media = IFM_ETHER|IFM_10_T;
		break;
	case A_CNF_MEDIA_10B_2:
		media = IFM_ETHER|IFM_10_2;
		break;
	case A_CNF_MEDIA_AUI:
		media = IFM_ETHER|IFM_10_5;
		break;
	default:
		device_printf(sc->dev, "no media, assuming 10baseT\n");
		sc->adapter_cnf |= A_CNF_10B_T;
		ifmedia_add(&sc->media, IFM_ETHER|IFM_10_T, 0, NULL);
		if (sc->chip_type != CS8900) {
			ifmedia_add(&sc->media,
			    IFM_ETHER|IFM_10_T|IFM_FDX, 0, NULL);
			ifmedia_add(&sc->media,
			    IFM_ETHER|IFM_10_T|IFM_HDX, 0, NULL);
		}
		media = IFM_ETHER | IFM_10_T;
		break;
	}
	ifmedia_set(&sc->media, media);
	cs_mediaset(sc, media);

	ether_ifattach(ifp, sc->enaddr);

	error = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_NET | INTR_MPSAFE,
	    NULL, csintr, sc, &sc->irq_handle);
	if (error) {
		ether_ifdetach(ifp);
		free(sc->buffer, M_DEVBUF);
		if_free(ifp);
		mtx_destroy(&sc->lock);
		cs_release_resources(dev);
		return (error);
	}

+	gone_by_fcp101_dev(dev);
+
	return (0);
}

int
cs_detach(device_t dev)
{
	struct cs_softc *sc;
	struct ifnet *ifp;

	sc = device_get_softc(dev);
	ifp = sc->ifp;

	CS_LOCK(sc);
	cs_stop(sc);
	CS_UNLOCK(sc);
	callout_drain(&sc->timer);
	ether_ifdetach(ifp);
	bus_teardown_intr(dev, sc->irq_res, sc->irq_handle);
	cs_release_resources(dev);
	free(sc->buffer, M_DEVBUF);
	if_free(ifp);
	mtx_destroy(&sc->lock);
	return (0);
}

/*
 * Initialize the board
 */
static void
cs_init(void *xsc)
{
	struct cs_softc *sc=(struct cs_softc *)xsc;

	CS_LOCK(sc);
	cs_init_locked(sc);
	CS_UNLOCK(sc);
}

static void
cs_init_locked(struct cs_softc *sc)
{
	struct ifnet *ifp = sc->ifp;
	int i, rx_cfg;

	/*
	 * reset watchdog timer
	 */
	sc->tx_timeout = 0;
	sc->buf_len = 0;

	/*
	 * Hardware initialization of cs
	 */

	/* Enable receiver and transmitter */
	cs_writereg(sc, PP_LineCTL,
	    cs_readreg(sc, PP_LineCTL) | SERIAL_RX_ON | SERIAL_TX_ON);

	/* Configure the receiver mode */
	cs_setmode(sc);

	/*
	 * This defines what type of frames will cause interrupts
	 * Bad frames should generate interrupts so that the driver
	 * could track statistics of discarded packets
	 */
	rx_cfg = RX_OK_ENBL | RX_CRC_ERROR_ENBL | RX_RUNT_ENBL |
	    RX_EXTRA_DATA_ENBL;
	if (sc->isa_config & STREAM_TRANSFER)
		rx_cfg |= RX_STREAM_ENBL;
	cs_writereg(sc, PP_RxCFG, rx_cfg);
	cs_writereg(sc, PP_TxCFG, TX_LOST_CRS_ENBL | TX_SQE_ERROR_ENBL |
	    TX_OK_ENBL | TX_LATE_COL_ENBL | TX_JBR_ENBL | TX_ANY_COL_ENBL |
	    TX_16_COL_ENBL);
	cs_writereg(sc, PP_BufCFG, READY_FOR_TX_ENBL |
	    RX_MISS_COUNT_OVRFLOW_ENBL | TX_COL_COUNT_OVRFLOW_ENBL |
	    TX_UNDERRUN_ENBL /*| RX_DMA_ENBL*/);

	/* Write MAC address into IA filter */
	for (i=0; i<ETHER_ADDR_LEN/2; i++)
		cs_writereg(sc, PP_IA + i * 2,
		    sc->enaddr[i * 2] | (sc->enaddr[i * 2 + 1] << 8) );

	/*
	 * Now enable everything
	 */
/*
#ifdef	CS_USE_64K_DMA
	cs_writereg(sc, PP_BusCTL, ENABLE_IRQ | RX_DMA_SIZE_64K);
#else
	cs_writereg(sc, PP_BusCTL, ENABLE_IRQ);
#endif
*/
	cs_writereg(sc, PP_BusCTL, ENABLE_IRQ);

	/*
	 * Set running and clear output active flags
	 */
	sc->ifp->if_drv_flags |= IFF_DRV_RUNNING;
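	/*
	 * Editor's note on the IA filter write above: the CS89x0
	 * register file is 16 bits wide, so the six MAC address bytes
	 * are packed little-endian into three consecutive words at
	 * PP_IA; e.g. 00:11:22:33:44:55 is stored as 0x1100, 0x3322,
	 * 0x5544.
	 */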
sc->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->timer, hz, cs_watchdog, sc); /* * Start sending process */ cs_start_locked(ifp); } /* * Get the packet from the board and send it to the upper layer. */ static int cs_get_packet(struct cs_softc *sc) { struct ifnet *ifp = sc->ifp; int status, length; struct mbuf *m; #ifdef CS_DEBUG int i; #endif status = cs_inw(sc, RX_FRAME_PORT); length = cs_inw(sc, RX_FRAME_PORT); #ifdef CS_DEBUG device_printf(sc->dev, "rcvd: stat %x, len %d\n", status, length); #endif if (!(status & RX_OK)) { #ifdef CS_DEBUG device_printf(sc->dev, "bad pkt stat %x\n", status); #endif if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); return (-1); } MGETHDR(m, M_NOWAIT, MT_DATA); if (m==NULL) return (-1); if (length > MHLEN) { if (!(MCLGET(m, M_NOWAIT))) { m_freem(m); return (-1); } } /* Initialize packet's header info */ m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = length; m->m_len = length; /* Get the data */ bus_read_multi_2(sc->port_res, RX_FRAME_PORT, mtod(m, uint16_t *), (length + 1) >> 1); #ifdef CS_DEBUG for (i=0;im_data+i))); printf( "\n" ); #endif if (status & (RX_IA | RX_BROADCAST) || (ifp->if_flags & IFF_MULTICAST && status & RX_HASHED)) { /* Feed the packet to the upper layer */ (*ifp->if_input)(ifp, m); if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); if (length == ETHER_MAX_LEN-ETHER_CRC_LEN) DELAY(cs_recv_delay); } else { m_freem(m); } return (0); } /* * Handle interrupts */ void csintr(void *arg) { struct cs_softc *sc = (struct cs_softc*) arg; struct ifnet *ifp = sc->ifp; int status; #ifdef CS_DEBUG device_printf(sc->dev, "Interrupt.\n"); #endif CS_LOCK(sc); while ((status=cs_inw(sc, ISQ_PORT))) { #ifdef CS_DEBUG device_printf(sc->dev, "from ISQ: %04x\n", status); #endif switch (status & ISQ_EVENT_MASK) { case ISQ_RECEIVER_EVENT: cs_get_packet(sc); break; case ISQ_TRANSMITTER_EVENT: if (status & TX_OK) if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); else if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->tx_timeout = 0; break; case ISQ_BUFFER_EVENT: if (status & READY_FOR_TX) { ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->tx_timeout = 0; } if (status & TX_UNDERRUN) { ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->tx_timeout = 0; if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); } break; case ISQ_RX_MISS_EVENT: if_inc_counter(ifp, IFCOUNTER_IERRORS, status >> 6); break; case ISQ_TX_COL_EVENT: if_inc_counter(ifp, IFCOUNTER_COLLISIONS, status >> 6); break; } } if (!(ifp->if_drv_flags & IFF_DRV_OACTIVE)) { cs_start_locked(ifp); } CS_UNLOCK(sc); } /* * Save the data in buffer */ static void cs_write_mbufs( struct cs_softc *sc, struct mbuf *m ) { int len; struct mbuf *mp; unsigned char *data, *buf; for (mp=m, buf=sc->buffer, sc->buf_len=0; mp != NULL; mp=mp->m_next) { len = mp->m_len; /* * Ignore empty parts */ if (!len) continue; /* * Find actual data address */ data = mtod(mp, caddr_t); bcopy((caddr_t) data, (caddr_t) buf, len); buf += len; sc->buf_len += len; } } static void cs_xmit_buf( struct cs_softc *sc ) { bus_write_multi_2(sc->port_res, TX_FRAME_PORT, (uint16_t *)sc->buffer, (sc->buf_len + 1) >> 1); sc->buf_len = 0; } static void cs_start(struct ifnet *ifp) { struct cs_softc *sc = ifp->if_softc; CS_LOCK(sc); cs_start_locked(ifp); CS_UNLOCK(sc); } static void cs_start_locked(struct ifnet *ifp) { int length; struct mbuf *m, *mp; struct cs_softc *sc = ifp->if_softc; for (;;) { if (sc->buf_len) length = sc->buf_len; else { IF_DEQUEUE( &ifp->if_snd, m ); if (m==NULL) { return; } for (length=0, mp=m; mp != NULL; mp=mp->m_next) length += 
mp->m_len; /* Skip zero-length packets */ if (length == 0) { m_freem(m); continue; } cs_write_mbufs(sc, m); BPF_MTAP(ifp, m); m_freem(m); } /* * Issue a SEND command */ cs_outw(sc, TX_CMD_PORT, sc->send_cmd); cs_outw(sc, TX_LEN_PORT, length ); /* * If there's no free space in the buffer then leave * this packet for the next time: indicate output active * and return. */ if (!(cs_readreg(sc, PP_BusST) & READY_FOR_TX_NOW)) { sc->tx_timeout = sc->buf_len; ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } cs_xmit_buf(sc); /* * Set the watchdog timer in case we never hear * from board again. (I don't know about correct * value for this timeout) */ sc->tx_timeout = length; ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } } /* * Stop everything on the interface */ static void cs_stop(struct cs_softc *sc) { CS_ASSERT_LOCKED(sc); cs_writereg(sc, PP_RxCFG, 0); cs_writereg(sc, PP_TxCFG, 0); cs_writereg(sc, PP_BufCFG, 0); cs_writereg(sc, PP_BusCTL, 0); sc->ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); sc->tx_timeout = 0; callout_stop(&sc->timer); } /* * Reset the interface */ static void cs_reset(struct cs_softc *sc) { CS_ASSERT_LOCKED(sc); cs_stop(sc); cs_init_locked(sc); } static uint16_t cs_hash_index(struct sockaddr_dl *addr) { uint32_t crc; uint16_t idx; caddr_t lla; lla = LLADDR(addr); crc = ether_crc32_le(lla, ETHER_ADDR_LEN); idx = crc >> 26; return (idx); } static void cs_setmode(struct cs_softc *sc) { int rx_ctl; uint16_t af[4]; uint16_t port, mask, index; struct ifnet *ifp = sc->ifp; struct ifmultiaddr *ifma; /* Stop the receiver while changing filters */ cs_writereg(sc, PP_LineCTL, cs_readreg(sc, PP_LineCTL) & ~SERIAL_RX_ON); if (ifp->if_flags & IFF_PROMISC) { /* Turn on promiscuous mode. */ rx_ctl = RX_OK_ACCEPT | RX_PROM_ACCEPT; } else if (ifp->if_flags & IFF_MULTICAST) { /* Allow receiving frames with multicast addresses */ rx_ctl = RX_IA_ACCEPT | RX_BROADCAST_ACCEPT | RX_OK_ACCEPT | RX_MULTCAST_ACCEPT; /* Start with an empty filter */ af[0] = af[1] = af[2] = af[3] = 0x0000; if (ifp->if_flags & IFF_ALLMULTI) { /* Accept all multicast frames */ af[0] = af[1] = af[2] = af[3] = 0xffff; } else { /* * Set up the filter to only accept multicast * frames we're interested in. */ if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { struct sockaddr_dl *dl = (struct sockaddr_dl *)ifma->ifma_addr; index = cs_hash_index(dl); port = (u_int16_t) (index >> 4); mask = (u_int16_t) (1 << (index & 0xf)); af[port] |= mask; } if_maddr_runlock(ifp); } cs_writereg(sc, PP_LAF + 0, af[0]); cs_writereg(sc, PP_LAF + 2, af[1]); cs_writereg(sc, PP_LAF + 4, af[2]); cs_writereg(sc, PP_LAF + 6, af[3]); } else { /* * Receive only good frames addressed for us and * good broadcasts. */ rx_ctl = RX_IA_ACCEPT | RX_BROADCAST_ACCEPT | RX_OK_ACCEPT; } /* Set up the filter */ cs_writereg(sc, PP_RxCTL, RX_DEF_ACCEPT | rx_ctl); /* Turn on receiver */ cs_writereg(sc, PP_LineCTL, cs_readreg(sc, PP_LineCTL) | SERIAL_RX_ON); } static int cs_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct cs_softc *sc=ifp->if_softc; struct ifreq *ifr = (struct ifreq *)data; int error=0; #ifdef CS_DEBUG if_printf(ifp, "%s command=%lx\n", __func__, command); #endif switch (command) { case SIOCSIFFLAGS: /* * Switch interface state between "running" and * "stopped", reflecting the UP flag. 
*/ CS_LOCK(sc); if (sc->ifp->if_flags & IFF_UP) { if ((sc->ifp->if_drv_flags & IFF_DRV_RUNNING)==0) { cs_init_locked(sc); } } else { if ((sc->ifp->if_drv_flags & IFF_DRV_RUNNING)!=0) { cs_stop(sc); } } /* * Promiscuous and/or multicast flags may have changed, * so reprogram the multicast filter and/or receive mode. * * See note about multicasts in cs_setmode */ cs_setmode(sc); CS_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: /* * Multicast list has changed; set the hardware filter * accordingly. * * See note about multicasts in cs_setmode */ CS_LOCK(sc); cs_setmode(sc); CS_UNLOCK(sc); error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: error = ifmedia_ioctl(ifp, ifr, &sc->media, command); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } /* * Device timeout/watchdog routine. Entered if the device neglects to * generate an interrupt after a transmit has been started on it. */ static void cs_watchdog(void *arg) { struct cs_softc *sc = arg; struct ifnet *ifp = sc->ifp; CS_ASSERT_LOCKED(sc); if (sc->tx_timeout && --sc->tx_timeout == 0) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); log(LOG_ERR, "%s: device timeout\n", ifp->if_xname); /* Reset the interface */ if (ifp->if_flags & IFF_UP) cs_reset(sc); else cs_stop(sc); } callout_reset(&sc->timer, hz, cs_watchdog, sc); } static int cs_mediachange(struct ifnet *ifp) { struct cs_softc *sc = ifp->if_softc; struct ifmedia *ifm = &sc->media; int error; if (IFM_TYPE(ifm->ifm_media) != IFM_ETHER) return (EINVAL); CS_LOCK(sc); error = cs_mediaset(sc, ifm->ifm_media); CS_UNLOCK(sc); return (error); } static void cs_mediastatus(struct ifnet *ifp, struct ifmediareq *ifmr) { int line_status; struct cs_softc *sc = ifp->if_softc; CS_LOCK(sc); ifmr->ifm_active = IFM_ETHER; line_status = cs_readreg(sc, PP_LineST); if (line_status & TENBASET_ON) { ifmr->ifm_active |= IFM_10_T; if (sc->chip_type != CS8900) { if (cs_readreg(sc, PP_AutoNegST) & FDX_ACTIVE) ifmr->ifm_active |= IFM_FDX; if (cs_readreg(sc, PP_AutoNegST) & HDX_ACTIVE) ifmr->ifm_active |= IFM_HDX; } ifmr->ifm_status = IFM_AVALID; if (line_status & LINK_OK) ifmr->ifm_status |= IFM_ACTIVE; } else { if (line_status & AUI_ON) { cs_writereg(sc, PP_SelfCTL, cs_readreg(sc, PP_SelfCTL) | HCB1_ENBL); if (((sc->adapter_cnf & A_CNF_DC_DC_POLARITY)!=0)^ (cs_readreg(sc, PP_SelfCTL) & HCB1)) ifmr->ifm_active |= IFM_10_2; else ifmr->ifm_active |= IFM_10_5; } } CS_UNLOCK(sc); } static int cs_mediaset(struct cs_softc *sc, int media) { int error = 0; /* Stop the receiver & transmitter */ cs_writereg(sc, PP_LineCTL, cs_readreg(sc, PP_LineCTL) & ~(SERIAL_RX_ON | SERIAL_TX_ON)); #ifdef CS_DEBUG device_printf(sc->dev, "%s media=%x\n", __func__, media); #endif switch (IFM_SUBTYPE(media)) { default: case IFM_AUTO: /* * This chip makes it a little hard to support this, so treat * it as IFM_10_T, auto duplex. 
*/ enable_tp(sc); cs_duplex_auto(sc); break; case IFM_10_T: enable_tp(sc); if (media & IFM_FDX) cs_duplex_full(sc); else if (media & IFM_HDX) cs_duplex_half(sc); else error = cs_duplex_auto(sc); break; case IFM_10_2: enable_bnc(sc); break; case IFM_10_5: enable_aui(sc); break; } /* * Turn the transmitter & receiver back on */ cs_writereg(sc, PP_LineCTL, cs_readreg(sc, PP_LineCTL) | SERIAL_RX_ON | SERIAL_TX_ON); return (error); } Index: stable/12/sys/dev/de/if_de.c =================================================================== --- stable/12/sys/dev/de/if_de.c (revision 339734) +++ stable/12/sys/dev/de/if_de.c (revision 339735) @@ -1,5007 +1,5009 @@ /* $NetBSD: if_de.c,v 1.86 1999/06/01 19:17:59 thorpej Exp $ */ /*- * SPDX-License-Identifier: BSD-2-Clause-NetBSD * * Copyright (c) 1994-1997 Matt Thomas (matt@3am-software.com) * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Id: if_de.c,v 1.94 1997/07/03 16:55:07 thomas Exp */ /* * DEC 21040 PCI Ethernet Controller * * Written by Matt Thomas * BPF support code stolen directly from if_ec.c * * This driver supports the DEC DE435 or any other PCI * board which support 21040, 21041, or 21140 (mostly). */ #include __FBSDID("$FreeBSD$"); #define TULIP_HDR_DATA #include "opt_ddb.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef INET #include #include #endif #include #include #include #include #include #include #ifdef DDB #include #endif /* * Intel CPUs should use I/O mapped access. */ #if defined(__i386__) #define TULIP_IOMAPPED #endif #if 0 /* This enables KTR traces at KTR_DEV. */ #define KTR_TULIP KTR_DEV #else #define KTR_TULIP 0 #endif #if 0 /* * This turns on all sort of debugging stuff and make the * driver much larger. */ #define TULIP_DEBUG #endif #if 0 #define TULIP_PERFSTATS #endif #define TULIP_HZ 10 #include #define SYNC_NONE 0 #define SYNC_RX 1 #define SYNC_TX 2 /* * This module supports * the DEC 21040 PCI Ethernet Controller. * the DEC 21041 PCI Ethernet Controller. * the DEC 21140 PCI Fast Ethernet Controller. 
*/ static void tulip_addr_filter(tulip_softc_t * const sc); static int tulip_ifmedia_change(struct ifnet * const ifp); static void tulip_ifmedia_status(struct ifnet * const ifp, struct ifmediareq *req); static void tulip_init(void *); static void tulip_init_locked(tulip_softc_t * const sc); static void tulip_intr_shared(void *arg); static void tulip_intr_normal(void *arg); static void tulip_mii_autonegotiate(tulip_softc_t * const sc, const unsigned phyaddr); static int tulip_mii_map_abilities(tulip_softc_t * const sc, unsigned abilities); static tulip_media_t tulip_mii_phy_readspecific(tulip_softc_t * const sc); static unsigned tulip_mii_readreg(tulip_softc_t * const sc, unsigned devaddr, unsigned regno); static void tulip_mii_writereg(tulip_softc_t * const sc, unsigned devaddr, unsigned regno, unsigned data); static void tulip_reset(tulip_softc_t * const sc); static void tulip_rx_intr(tulip_softc_t * const sc); static int tulip_srom_decode(tulip_softc_t * const sc); static void tulip_start(struct ifnet *ifp); static void tulip_start_locked(tulip_softc_t * const sc); static struct mbuf * tulip_txput(tulip_softc_t * const sc, struct mbuf *m); static void tulip_txput_setup(tulip_softc_t * const sc); static void tulip_watchdog(void *arg); struct mbuf * tulip_dequeue_mbuf(tulip_ringinfo_t *ri, tulip_descinfo_t *di, int sync); static void tulip_dma_map_addr(void *, bus_dma_segment_t *, int, int); static void tulip_dma_map_rxbuf(void *, bus_dma_segment_t *, int, bus_size_t, int); static void tulip_dma_map_addr(void *arg, bus_dma_segment_t *segs, int nseg, int error) { bus_addr_t *paddr; if (error) return; paddr = arg; *paddr = segs->ds_addr; } static void tulip_dma_map_rxbuf(void *arg, bus_dma_segment_t *segs, int nseg, bus_size_t mapsize, int error) { tulip_desc_t *desc; if (error) return; desc = arg; KASSERT(nseg == 1, ("too many DMA segments")); KASSERT(segs[0].ds_len >= TULIP_RX_BUFLEN, ("receive buffer too small")); desc->d_addr1 = segs[0].ds_addr & 0xffffffff; desc->d_length1 = TULIP_RX_BUFLEN; #ifdef not_needed /* These should already always be zero. */ desc->d_addr2 = 0; desc->d_length2 = 0; #endif } struct mbuf * tulip_dequeue_mbuf(tulip_ringinfo_t *ri, tulip_descinfo_t *di, int sync) { struct mbuf *m; m = di->di_mbuf; if (m != NULL) { switch (sync) { case SYNC_NONE: break; case SYNC_RX: TULIP_RXMAP_POSTSYNC(ri, di); break; case SYNC_TX: TULIP_TXMAP_POSTSYNC(ri, di); break; default: panic("bad sync flag: %d", sync); } bus_dmamap_unload(ri->ri_data_tag, *di->di_map); di->di_mbuf = NULL; } return (m); } static void tulip_timeout_callback(void *arg) { tulip_softc_t * const sc = arg; TULIP_PERFSTART(timeout) TULIP_LOCK_ASSERT(sc); sc->tulip_flags &= ~TULIP_TIMEOUTPENDING; sc->tulip_probe_timeout -= 1000 / TULIP_HZ; (sc->tulip_boardsw->bd_media_poll)(sc, TULIP_MEDIAPOLL_TIMER); TULIP_PERFEND(timeout); } static void tulip_timeout(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); if (sc->tulip_flags & TULIP_TIMEOUTPENDING) return; sc->tulip_flags |= TULIP_TIMEOUTPENDING; callout_reset(&sc->tulip_callout, (hz + TULIP_HZ / 2) / TULIP_HZ, tulip_timeout_callback, sc); } static int tulip_txprobe(tulip_softc_t * const sc) { struct mbuf *m; u_char *enaddr; /* * Before we are sure this is the right media we need * to send a small packet to make sure there's carrier. * Strangely, BNC and AUI will "see" receive data if * either is connected so the transmit is the only way * to verify the connectivity. 
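 *
 * Editor's aside (illustrative, not driver code): bus_dmamap_load(9)
 * never returns segment addresses directly; it hands them to a
 * callback, which is why tulip_dma_map_addr() above merely copies out
 * segs[0].ds_addr:
 *
 *	bus_addr_t paddr;
 *	bus_dmamap_load(tag, map, vaddr, len,
 *	    tulip_dma_map_addr, &paddr, BUS_DMA_NOWAIT);
 *	// on success, paddr now holds the bus address of the buffer
 *
 * tulip_dma_map_rxbuf() instead patches the segment straight into a
 * tulip_desc_t, asserting nseg == 1 and a full TULIP_RX_BUFLEN buffer.
 */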
*/ TULIP_LOCK_ASSERT(sc); MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) return 0; /* * Construct a LLC TEST message which will point to ourselves. */ if (sc->tulip_ifp->if_input != NULL) enaddr = IF_LLADDR(sc->tulip_ifp); else enaddr = sc->tulip_enaddr; bcopy(enaddr, mtod(m, struct ether_header *)->ether_dhost, ETHER_ADDR_LEN); bcopy(enaddr, mtod(m, struct ether_header *)->ether_shost, ETHER_ADDR_LEN); mtod(m, struct ether_header *)->ether_type = htons(3); mtod(m, unsigned char *)[14] = 0; mtod(m, unsigned char *)[15] = 0; mtod(m, unsigned char *)[16] = 0xE3; /* LLC Class1 TEST (no poll) */ m->m_len = m->m_pkthdr.len = sizeof(struct ether_header) + 3; /* * send it! */ sc->tulip_cmdmode |= TULIP_CMD_TXRUN; sc->tulip_intrmask |= TULIP_STS_TXINTR; sc->tulip_flags |= TULIP_TXPROBE_ACTIVE; TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); if ((m = tulip_txput(sc, m)) != NULL) m_freem(m); sc->tulip_probe.probe_txprobes++; return 1; } static void tulip_media_set(tulip_softc_t * const sc, tulip_media_t media) { const tulip_media_info_t *mi = sc->tulip_mediums[media]; TULIP_LOCK_ASSERT(sc); if (mi == NULL) return; /* * If we are switching media, make sure we don't think there's * any stale RX activity */ sc->tulip_flags &= ~TULIP_RXACT; if (mi->mi_type == TULIP_MEDIAINFO_SIA) { TULIP_CSR_WRITE(sc, csr_sia_connectivity, TULIP_SIACONN_RESET); TULIP_CSR_WRITE(sc, csr_sia_tx_rx, mi->mi_sia_tx_rx); if (sc->tulip_features & TULIP_HAVE_SIAGP) { TULIP_CSR_WRITE(sc, csr_sia_general, mi->mi_sia_gp_control|mi->mi_sia_general); DELAY(50); TULIP_CSR_WRITE(sc, csr_sia_general, mi->mi_sia_gp_data|mi->mi_sia_general); } else { TULIP_CSR_WRITE(sc, csr_sia_general, mi->mi_sia_general); } TULIP_CSR_WRITE(sc, csr_sia_connectivity, mi->mi_sia_connectivity); } else if (mi->mi_type == TULIP_MEDIAINFO_GPR) { #define TULIP_GPR_CMDBITS (TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION|TULIP_CMD_SCRAMBLER|TULIP_CMD_TXTHRSHLDCTL) /* * If the cmdmode bits don't match the currently operating mode, * set the cmdmode appropriately and reset the chip. */ if (((mi->mi_cmdmode ^ TULIP_CSR_READ(sc, csr_command)) & TULIP_GPR_CMDBITS) != 0) { sc->tulip_cmdmode &= ~TULIP_GPR_CMDBITS; sc->tulip_cmdmode |= mi->mi_cmdmode; tulip_reset(sc); } TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_PINSET|sc->tulip_gpinit); DELAY(10); TULIP_CSR_WRITE(sc, csr_gp, (u_int8_t) mi->mi_gpdata); } else if (mi->mi_type == TULIP_MEDIAINFO_SYM) { /* * If the cmdmode bits don't match the currently operating mode, * set the cmdmode appropriately and reset the chip. 
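 *
 * Editor's note: the probe frame built in tulip_txprobe() above is a
 * self-addressed 802.2 LLC TEST, the smallest frame the chip will put
 * on the wire:
 *
 *	dst, src   = our own MAC address
 *	ether_type = htons(3)		// 802.3 length: 3 LLC octets
 *	payload    = 0x00, 0x00, 0xE3	// DSAP, SSAP, TEST (no poll)
 *
 * Success is judged purely by transmit completion since, as noted
 * there, AUI/BNC "see" receive data whether or not a cable is up.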
*/ if (((mi->mi_cmdmode ^ TULIP_CSR_READ(sc, csr_command)) & TULIP_GPR_CMDBITS) != 0) { sc->tulip_cmdmode &= ~TULIP_GPR_CMDBITS; sc->tulip_cmdmode |= mi->mi_cmdmode; tulip_reset(sc); } TULIP_CSR_WRITE(sc, csr_sia_general, mi->mi_gpcontrol); TULIP_CSR_WRITE(sc, csr_sia_general, mi->mi_gpdata); } else if (mi->mi_type == TULIP_MEDIAINFO_MII && sc->tulip_probe_state != TULIP_PROBE_INACTIVE) { int idx; if (sc->tulip_features & TULIP_HAVE_SIAGP) { const u_int8_t *dp; dp = &sc->tulip_rombuf[mi->mi_reset_offset]; for (idx = 0; idx < mi->mi_reset_length; idx++, dp += 2) { DELAY(10); TULIP_CSR_WRITE(sc, csr_sia_general, (dp[0] + 256 * dp[1]) << 16); } sc->tulip_phyaddr = mi->mi_phyaddr; dp = &sc->tulip_rombuf[mi->mi_gpr_offset]; for (idx = 0; idx < mi->mi_gpr_length; idx++, dp += 2) { DELAY(10); TULIP_CSR_WRITE(sc, csr_sia_general, (dp[0] + 256 * dp[1]) << 16); } } else { for (idx = 0; idx < mi->mi_reset_length; idx++) { DELAY(10); TULIP_CSR_WRITE(sc, csr_gp, sc->tulip_rombuf[mi->mi_reset_offset + idx]); } sc->tulip_phyaddr = mi->mi_phyaddr; for (idx = 0; idx < mi->mi_gpr_length; idx++) { DELAY(10); TULIP_CSR_WRITE(sc, csr_gp, sc->tulip_rombuf[mi->mi_gpr_offset + idx]); } } if (sc->tulip_flags & TULIP_TRYNWAY) { tulip_mii_autonegotiate(sc, sc->tulip_phyaddr); } else if ((sc->tulip_flags & TULIP_DIDNWAY) == 0) { u_int32_t data = tulip_mii_readreg(sc, sc->tulip_phyaddr, PHYREG_CONTROL); data &= ~(PHYCTL_SELECT_100MB|PHYCTL_FULL_DUPLEX|PHYCTL_AUTONEG_ENABLE); sc->tulip_flags &= ~TULIP_DIDNWAY; if (TULIP_IS_MEDIA_FD(media)) data |= PHYCTL_FULL_DUPLEX; if (TULIP_IS_MEDIA_100MB(media)) data |= PHYCTL_SELECT_100MB; tulip_mii_writereg(sc, sc->tulip_phyaddr, PHYREG_CONTROL, data); } } } static void tulip_linkup(tulip_softc_t * const sc, tulip_media_t media) { TULIP_LOCK_ASSERT(sc); if ((sc->tulip_flags & TULIP_LINKUP) == 0) sc->tulip_flags |= TULIP_PRINTLINKUP; sc->tulip_flags |= TULIP_LINKUP; sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; #if 0 /* XXX how does with work with ifmedia? */ if ((sc->tulip_flags & TULIP_DIDNWAY) == 0) { if (sc->tulip_ifp->if_flags & IFF_FULLDUPLEX) { if (TULIP_CAN_MEDIA_FD(media) && sc->tulip_mediums[TULIP_FD_MEDIA_OF(media)] != NULL) media = TULIP_FD_MEDIA_OF(media); } else { if (TULIP_IS_MEDIA_FD(media) && sc->tulip_mediums[TULIP_HD_MEDIA_OF(media)] != NULL) media = TULIP_HD_MEDIA_OF(media); } } #endif if (sc->tulip_media != media) { #ifdef TULIP_DEBUG sc->tulip_dbg.dbg_last_media = sc->tulip_media; #endif sc->tulip_media = media; sc->tulip_flags |= TULIP_PRINTMEDIA; if (TULIP_IS_MEDIA_FD(sc->tulip_media)) { sc->tulip_cmdmode |= TULIP_CMD_FULLDUPLEX; } else if (sc->tulip_chipid != TULIP_21041 || (sc->tulip_flags & TULIP_DIDNWAY) == 0) { sc->tulip_cmdmode &= ~TULIP_CMD_FULLDUPLEX; } } /* * We could set probe_timeout to 0 but setting to 3000 puts this * in one central place and the only matters is tulip_link is * followed by a tulip_timeout. Therefore setting it should not * result in aberrant behaviour. */ sc->tulip_probe_timeout = 3000; sc->tulip_probe_state = TULIP_PROBE_INACTIVE; sc->tulip_flags &= ~(TULIP_TXPROBE_ACTIVE|TULIP_TRYNWAY); if (sc->tulip_flags & TULIP_INRESET) { tulip_media_set(sc, sc->tulip_media); } else if (sc->tulip_probe_media != sc->tulip_media) { /* * No reason to change media if we have the right media. 
*/ tulip_reset(sc); } tulip_init_locked(sc); } static void tulip_media_print(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); if ((sc->tulip_flags & TULIP_LINKUP) == 0) return; if (sc->tulip_flags & TULIP_PRINTMEDIA) { device_printf(sc->tulip_dev, "enabling %s port\n", tulip_mediums[sc->tulip_media]); sc->tulip_flags &= ~(TULIP_PRINTMEDIA|TULIP_PRINTLINKUP); } else if (sc->tulip_flags & TULIP_PRINTLINKUP) { device_printf(sc->tulip_dev, "link up\n"); sc->tulip_flags &= ~TULIP_PRINTLINKUP; } } #if defined(TULIP_DO_GPR_SENSE) static tulip_media_t tulip_21140_gpr_media_sense(tulip_softc_t * const sc) { struct ifnet *ifp sc->tulip_ifp; tulip_media_t maybe_media = TULIP_MEDIA_UNKNOWN; tulip_media_t last_media = TULIP_MEDIA_UNKNOWN; tulip_media_t media; TULIP_LOCK_ASSERT(sc); /* * If one of the media blocks contained a default media flag, * use that. */ for (media = TULIP_MEDIA_UNKNOWN; media < TULIP_MEDIA_MAX; media++) { const tulip_media_info_t *mi; /* * Media is not supported (or is full-duplex). */ if ((mi = sc->tulip_mediums[media]) == NULL || TULIP_IS_MEDIA_FD(media)) continue; if (mi->mi_type != TULIP_MEDIAINFO_GPR) continue; /* * Remember the media is this is the "default" media. */ if (mi->mi_default && maybe_media == TULIP_MEDIA_UNKNOWN) maybe_media = media; /* * No activity mask? Can't see if it is active if there's no mask. */ if (mi->mi_actmask == 0) continue; /* * Does the activity data match? */ if ((TULIP_CSR_READ(sc, csr_gp) & mi->mi_actmask) != mi->mi_actdata) continue; #if defined(TULIP_DEBUG) device_printf(sc->tulip_dev, "%s: %s: 0x%02x & 0x%02x == 0x%02x\n", __func__, tulip_mediums[media], TULIP_CSR_READ(sc, csr_gp) & 0xFF, mi->mi_actmask, mi->mi_actdata); #endif /* * It does! If this is the first media we detected, then * remember this media. If isn't the first, then there were * multiple matches which we equate to no match (since we don't * which to select (if any). */ if (last_media == TULIP_MEDIA_UNKNOWN) { last_media = media; } else if (last_media != media) { last_media = TULIP_MEDIA_UNKNOWN; } } return (last_media != TULIP_MEDIA_UNKNOWN) ? last_media : maybe_media; } #endif /* TULIP_DO_GPR_SENSE */ static tulip_link_status_t tulip_media_link_monitor(tulip_softc_t * const sc) { const tulip_media_info_t * const mi = sc->tulip_mediums[sc->tulip_media]; tulip_link_status_t linkup = TULIP_LINK_DOWN; TULIP_LOCK_ASSERT(sc); if (mi == NULL) { #if defined(DIAGNOSTIC) || defined(TULIP_DEBUG) panic("tulip_media_link_monitor: %s: botch at line %d\n", tulip_mediums[sc->tulip_media],__LINE__); #else return TULIP_LINK_UNKNOWN; #endif } /* * Have we seen some packets? If so, the link must be good. */ if ((sc->tulip_flags & (TULIP_RXACT|TULIP_LINKUP)) == (TULIP_RXACT|TULIP_LINKUP)) { sc->tulip_flags &= ~TULIP_RXACT; sc->tulip_probe_timeout = 3000; return TULIP_LINK_UP; } sc->tulip_flags &= ~TULIP_RXACT; if (mi->mi_type == TULIP_MEDIAINFO_MII) { u_int32_t status; /* * Read the PHY status register. */ status = tulip_mii_readreg(sc, sc->tulip_phyaddr, PHYREG_STATUS); if (status & PHYSTS_AUTONEG_DONE) { /* * If the PHY has completed autonegotiation, see the if the * remote systems abilities have changed. If so, upgrade or * downgrade as appropriate. 
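 *
 * Editor's sketch of the bit alignment used just below: the link
 * partner's ability bits (ANLPAR bits 5..9: 10baseT, 10baseT-FDX,
 * 100baseTX, 100baseTX-FDX, 100baseT4) line up with the local
 * capability bits of the status register (BMSR bits 11..15) after a
 * left shift of six, so a single AND yields the negotiated common set:
 *
 *	abilities = (abilities << 6) & status;
 *
 * Any change versus sc->tulip_abilities then drives the remap below.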
*/ u_int32_t abilities = tulip_mii_readreg(sc, sc->tulip_phyaddr, PHYREG_AUTONEG_ABILITIES); abilities = (abilities << 6) & status; if (abilities != sc->tulip_abilities) { #if defined(TULIP_DEBUG) loudprintf("%s(phy%d): autonegotiation changed: 0x%04x -> 0x%04x\n", ifp->if_xname, sc->tulip_phyaddr, sc->tulip_abilities, abilities); #endif if (tulip_mii_map_abilities(sc, abilities)) { tulip_linkup(sc, sc->tulip_probe_media); return TULIP_LINK_UP; } /* * if we had selected media because of autonegotiation, * we need to probe for the new media. */ sc->tulip_probe_state = TULIP_PROBE_INACTIVE; if (sc->tulip_flags & TULIP_DIDNWAY) return TULIP_LINK_DOWN; } } /* * The link is now up. If was down, say its back up. */ if ((status & (PHYSTS_LINK_UP|PHYSTS_REMOTE_FAULT)) == PHYSTS_LINK_UP) linkup = TULIP_LINK_UP; } else if (mi->mi_type == TULIP_MEDIAINFO_GPR) { /* * No activity sensor? Assume all's well. */ if (mi->mi_actmask == 0) return TULIP_LINK_UNKNOWN; /* * Does the activity data match? */ if ((TULIP_CSR_READ(sc, csr_gp) & mi->mi_actmask) == mi->mi_actdata) linkup = TULIP_LINK_UP; } else if (mi->mi_type == TULIP_MEDIAINFO_SIA) { /* * Assume non TP ok for now. */ if (!TULIP_IS_MEDIA_TP(sc->tulip_media)) return TULIP_LINK_UNKNOWN; if ((TULIP_CSR_READ(sc, csr_sia_status) & TULIP_SIASTS_LINKFAIL) == 0) linkup = TULIP_LINK_UP; #if defined(TULIP_DEBUG) if (sc->tulip_probe_timeout <= 0) device_printf(sc->tulip_dev, "sia status = 0x%08x\n", TULIP_CSR_READ(sc, csr_sia_status)); #endif } else if (mi->mi_type == TULIP_MEDIAINFO_SYM) { return TULIP_LINK_UNKNOWN; } /* * We will wait for 3 seconds until the link goes into suspect mode. */ if (sc->tulip_flags & TULIP_LINKUP) { if (linkup == TULIP_LINK_UP) sc->tulip_probe_timeout = 3000; if (sc->tulip_probe_timeout > 0) return TULIP_LINK_UP; sc->tulip_flags &= ~TULIP_LINKUP; device_printf(sc->tulip_dev, "link down: cable problem?\n"); } #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_link_downed++; #endif return TULIP_LINK_DOWN; } static void tulip_media_poll(tulip_softc_t * const sc, tulip_mediapoll_event_t event) { TULIP_LOCK_ASSERT(sc); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_events[event]++; #endif if (sc->tulip_probe_state == TULIP_PROBE_INACTIVE && event == TULIP_MEDIAPOLL_TIMER) { switch (tulip_media_link_monitor(sc)) { case TULIP_LINK_DOWN: { /* * Link Monitor failed. Probe for new media. */ event = TULIP_MEDIAPOLL_LINKFAIL; break; } case TULIP_LINK_UP: { /* * Check again soon. */ tulip_timeout(sc); return; } case TULIP_LINK_UNKNOWN: { /* * We can't tell so don't bother. */ return; } } } if (event == TULIP_MEDIAPOLL_LINKFAIL) { if (sc->tulip_probe_state == TULIP_PROBE_INACTIVE) { if (TULIP_DO_AUTOSENSE(sc)) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_link_failures++; #endif sc->tulip_media = TULIP_MEDIA_UNKNOWN; if (sc->tulip_ifp->if_flags & IFF_UP) tulip_reset(sc); /* restart probe */ } return; } #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_link_pollintrs++; #endif } if (event == TULIP_MEDIAPOLL_START) { sc->tulip_ifp->if_drv_flags |= IFF_DRV_OACTIVE; if (sc->tulip_probe_state != TULIP_PROBE_INACTIVE) return; sc->tulip_probe_mediamask = 0; sc->tulip_probe_passes = 0; #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_media_probes++; #endif /* * If the SROM contained an explicit media to use, use it. */ sc->tulip_cmdmode &= ~(TULIP_CMD_RXRUN|TULIP_CMD_FULLDUPLEX); sc->tulip_flags |= TULIP_TRYNWAY|TULIP_PROBE1STPASS; sc->tulip_flags &= ~(TULIP_DIDNWAY|TULIP_PRINTMEDIA|TULIP_PRINTLINKUP); /* * connidx is defaulted to a media_unknown type. 
*/ sc->tulip_probe_media = tulip_srom_conninfo[sc->tulip_connidx].sc_media; if (sc->tulip_probe_media != TULIP_MEDIA_UNKNOWN) { tulip_linkup(sc, sc->tulip_probe_media); tulip_timeout(sc); return; } if (sc->tulip_features & TULIP_HAVE_GPR) { sc->tulip_probe_state = TULIP_PROBE_GPRTEST; sc->tulip_probe_timeout = 2000; } else { sc->tulip_probe_media = TULIP_MEDIA_MAX; sc->tulip_probe_timeout = 0; sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; } } /* * Ignore txprobe failures or spurious callbacks. */ if (event == TULIP_MEDIAPOLL_TXPROBE_FAILED && sc->tulip_probe_state != TULIP_PROBE_MEDIATEST) { sc->tulip_flags &= ~TULIP_TXPROBE_ACTIVE; return; } /* * If we really transmitted a packet, then that's the media we'll use. */ if (event == TULIP_MEDIAPOLL_TXPROBE_OK || event == TULIP_MEDIAPOLL_LINKPASS) { if (event == TULIP_MEDIAPOLL_LINKPASS) { /* XXX Check media status just to be sure */ sc->tulip_probe_media = TULIP_MEDIA_10BASET; #if defined(TULIP_DEBUG) } else { sc->tulip_dbg.dbg_txprobes_ok[sc->tulip_probe_media]++; #endif } tulip_linkup(sc, sc->tulip_probe_media); tulip_timeout(sc); return; } if (sc->tulip_probe_state == TULIP_PROBE_GPRTEST) { #if defined(TULIP_DO_GPR_SENSE) /* * Check for media via the general purpose register. * * Try to sense the media via the GPR. If the same value * occurs 3 times in a row then just use that. */ if (sc->tulip_probe_timeout > 0) { tulip_media_t new_probe_media = tulip_21140_gpr_media_sense(sc); #if defined(TULIP_DEBUG) device_printf(sc->tulip_dev, "%s: gpr sensing = %s\n", __func__, tulip_mediums[new_probe_media]); #endif if (new_probe_media != TULIP_MEDIA_UNKNOWN) { if (new_probe_media == sc->tulip_probe_media) { if (--sc->tulip_probe_count == 0) tulip_linkup(sc, sc->tulip_probe_media); } else { sc->tulip_probe_count = 10; } } sc->tulip_probe_media = new_probe_media; tulip_timeout(sc); return; } #endif /* TULIP_DO_GPR_SENSE */ /* * Brute force. We cycle through each of the media types * and try to transmit a packet. */ sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; sc->tulip_probe_media = TULIP_MEDIA_MAX; sc->tulip_probe_timeout = 0; tulip_timeout(sc); return; } if (sc->tulip_probe_state != TULIP_PROBE_MEDIATEST && (sc->tulip_features & TULIP_HAVE_MII)) { tulip_media_t old_media = sc->tulip_probe_media; tulip_mii_autonegotiate(sc, sc->tulip_phyaddr); switch (sc->tulip_probe_state) { case TULIP_PROBE_FAILED: case TULIP_PROBE_MEDIATEST: { /* * Try the next media. */ sc->tulip_probe_mediamask |= sc->tulip_mediums[sc->tulip_probe_media]->mi_mediamask; sc->tulip_probe_timeout = 0; #ifdef notyet if (sc->tulip_probe_state == TULIP_PROBE_FAILED) break; if (sc->tulip_probe_media != tulip_mii_phy_readspecific(sc)) break; sc->tulip_probe_timeout = TULIP_IS_MEDIA_TP(sc->tulip_probe_media) ? 2500 : 300; #endif break; } case TULIP_PROBE_PHYAUTONEG: { return; } case TULIP_PROBE_INACTIVE: { /* * Only probe if we autonegotiated a media that hasn't failed. */ sc->tulip_probe_timeout = 0; if (sc->tulip_probe_mediamask & TULIP_BIT(sc->tulip_probe_media)) { sc->tulip_probe_media = old_media; break; } tulip_linkup(sc, sc->tulip_probe_media); tulip_timeout(sc); return; } default: { #if defined(DIAGNOSTIC) || defined(TULIP_DEBUG) panic("tulip_media_poll: botch at line %d\n", __LINE__); #endif break; } } } if (event == TULIP_MEDIAPOLL_TXPROBE_FAILED) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txprobes_failed[sc->tulip_probe_media]++; #endif sc->tulip_flags &= ~TULIP_TXPROBE_ACTIVE; return; } /* * switch to another media if we tried this one enough. 
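 *
 * Editor's note: the do/while below implements exactly this scan --
 * decrement tulip_probe_media, wrap at TULIP_MEDIA_UNKNOWN back to
 * TULIP_MEDIA_MAX - 1 (counting a pass; three full passes give up
 * with "autosense failed"), and keep skipping entries that are
 * unsupported, already recorded in tulip_probe_mediamask, or
 * full-duplex, presumably because a full-duplex variant cannot be
 * distinguished by this transmit-only probe.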
*/ if (/* event == TULIP_MEDIAPOLL_TXPROBE_FAILED || */ sc->tulip_probe_timeout <= 0) { #if defined(TULIP_DEBUG) if (sc->tulip_probe_media == TULIP_MEDIA_UNKNOWN) { device_printf(sc->tulip_dev, "poll media unknown!\n"); sc->tulip_probe_media = TULIP_MEDIA_MAX; } #endif /* * Find the next media type to check for. Full Duplex * types are not allowed. */ do { sc->tulip_probe_media -= 1; if (sc->tulip_probe_media == TULIP_MEDIA_UNKNOWN) { if (++sc->tulip_probe_passes == 3) { device_printf(sc->tulip_dev, "autosense failed: cable problem?\n"); if ((sc->tulip_ifp->if_flags & IFF_UP) == 0) { sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_RUNNING; sc->tulip_probe_state = TULIP_PROBE_INACTIVE; return; } } sc->tulip_flags ^= TULIP_TRYNWAY; /* XXX */ sc->tulip_probe_mediamask = 0; sc->tulip_probe_media = TULIP_MEDIA_MAX - 1; } } while (sc->tulip_mediums[sc->tulip_probe_media] == NULL || (sc->tulip_probe_mediamask & TULIP_BIT(sc->tulip_probe_media)) || TULIP_IS_MEDIA_FD(sc->tulip_probe_media)); #if defined(TULIP_DEBUG) device_printf(sc->tulip_dev, "%s: probing %s\n", event == TULIP_MEDIAPOLL_TXPROBE_FAILED ? "txprobe failed" : "timeout", tulip_mediums[sc->tulip_probe_media]); #endif sc->tulip_probe_timeout = TULIP_IS_MEDIA_TP(sc->tulip_probe_media) ? 2500 : 1000; sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; sc->tulip_probe.probe_txprobes = 0; tulip_reset(sc); tulip_media_set(sc, sc->tulip_probe_media); sc->tulip_flags &= ~TULIP_TXPROBE_ACTIVE; } tulip_timeout(sc); /* * If this is hanging off a phy, we know are doing NWAY and we have * forced the phy to a specific speed. Wait for link up before * before sending a packet. */ switch (sc->tulip_mediums[sc->tulip_probe_media]->mi_type) { case TULIP_MEDIAINFO_MII: { if (sc->tulip_probe_media != tulip_mii_phy_readspecific(sc)) return; break; } case TULIP_MEDIAINFO_SIA: { if (TULIP_IS_MEDIA_TP(sc->tulip_probe_media)) { if (TULIP_CSR_READ(sc, csr_sia_status) & TULIP_SIASTS_LINKFAIL) return; tulip_linkup(sc, sc->tulip_probe_media); #ifdef notyet if (sc->tulip_features & TULIP_HAVE_MII) tulip_timeout(sc); #endif return; } break; } case TULIP_MEDIAINFO_RESET: case TULIP_MEDIAINFO_SYM: case TULIP_MEDIAINFO_NONE: case TULIP_MEDIAINFO_GPR: { break; } } /* * Try to send a packet. 
*/ tulip_txprobe(sc); } static void tulip_media_select(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); if (sc->tulip_features & TULIP_HAVE_GPR) { TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_PINSET|sc->tulip_gpinit); DELAY(10); TULIP_CSR_WRITE(sc, csr_gp, sc->tulip_gpdata); } /* * If this board has no media, just return */ if (sc->tulip_features & TULIP_HAVE_NOMEDIA) return; if (sc->tulip_media == TULIP_MEDIA_UNKNOWN) { TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); (*sc->tulip_boardsw->bd_media_poll)(sc, TULIP_MEDIAPOLL_START); } else { tulip_media_set(sc, sc->tulip_media); } } static void tulip_21040_mediainfo_init(tulip_softc_t * const sc, tulip_media_t media) { TULIP_LOCK_ASSERT(sc); sc->tulip_cmdmode |= TULIP_CMD_CAPTREFFCT|TULIP_CMD_THRSHLD160 |TULIP_CMD_BACKOFFCTR; sc->tulip_ifp->if_baudrate = 10000000; if (media == TULIP_MEDIA_10BASET || media == TULIP_MEDIA_UNKNOWN) { TULIP_MEDIAINFO_SIA_INIT(sc, &sc->tulip_mediainfo[0], 21040, 10BASET); TULIP_MEDIAINFO_SIA_INIT(sc, &sc->tulip_mediainfo[1], 21040, 10BASET_FD); sc->tulip_intrmask |= TULIP_STS_LINKPASS|TULIP_STS_LINKFAIL; } if (media == TULIP_MEDIA_AUIBNC || media == TULIP_MEDIA_UNKNOWN) { TULIP_MEDIAINFO_SIA_INIT(sc, &sc->tulip_mediainfo[2], 21040, AUIBNC); } if (media == TULIP_MEDIA_UNKNOWN) { TULIP_MEDIAINFO_SIA_INIT(sc, &sc->tulip_mediainfo[3], 21040, EXTSIA); } } static void tulip_21040_media_probe(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); tulip_21040_mediainfo_init(sc, TULIP_MEDIA_UNKNOWN); return; } static void tulip_21040_10baset_only_media_probe(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); tulip_21040_mediainfo_init(sc, TULIP_MEDIA_10BASET); tulip_media_set(sc, TULIP_MEDIA_10BASET); sc->tulip_media = TULIP_MEDIA_10BASET; } static void tulip_21040_10baset_only_media_select(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); sc->tulip_flags |= TULIP_LINKUP; if (sc->tulip_media == TULIP_MEDIA_10BASET_FD) { sc->tulip_cmdmode |= TULIP_CMD_FULLDUPLEX; sc->tulip_flags &= ~TULIP_SQETEST; } else { sc->tulip_cmdmode &= ~TULIP_CMD_FULLDUPLEX; sc->tulip_flags |= TULIP_SQETEST; } tulip_media_set(sc, sc->tulip_media); } static void tulip_21040_auibnc_only_media_probe(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); tulip_21040_mediainfo_init(sc, TULIP_MEDIA_AUIBNC); sc->tulip_flags |= TULIP_SQETEST|TULIP_LINKUP; tulip_media_set(sc, TULIP_MEDIA_AUIBNC); sc->tulip_media = TULIP_MEDIA_AUIBNC; } static void tulip_21040_auibnc_only_media_select(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); tulip_media_set(sc, TULIP_MEDIA_AUIBNC); sc->tulip_cmdmode &= ~TULIP_CMD_FULLDUPLEX; } static const tulip_boardsw_t tulip_21040_boardsw = { TULIP_21040_GENERIC, tulip_21040_media_probe, tulip_media_select, tulip_media_poll, }; static const tulip_boardsw_t tulip_21040_10baset_only_boardsw = { TULIP_21040_GENERIC, tulip_21040_10baset_only_media_probe, tulip_21040_10baset_only_media_select, NULL, }; static const tulip_boardsw_t tulip_21040_auibnc_only_boardsw = { TULIP_21040_GENERIC, tulip_21040_auibnc_only_media_probe, tulip_21040_auibnc_only_media_select, NULL, }; static void tulip_21041_mediainfo_init(tulip_softc_t * const sc) { tulip_media_info_t * const mi = sc->tulip_mediainfo; TULIP_LOCK_ASSERT(sc); #ifdef notyet if (sc->tulip_revinfo >= 0x20) { TULIP_MEDIAINFO_SIA_INIT(sc, &mi[0], 21041P2, 10BASET); TULIP_MEDIAINFO_SIA_INIT(sc, &mi[1], 21041P2, 10BASET_FD); TULIP_MEDIAINFO_SIA_INIT(sc, &mi[0], 21041P2, AUI); TULIP_MEDIAINFO_SIA_INIT(sc, &mi[1], 21041P2, BNC); return; } #endif TULIP_MEDIAINFO_SIA_INIT(sc, &mi[0], 21041, 10BASET); 
TULIP_MEDIAINFO_SIA_INIT(sc, &mi[1], 21041, 10BASET_FD); TULIP_MEDIAINFO_SIA_INIT(sc, &mi[2], 21041, AUI); TULIP_MEDIAINFO_SIA_INIT(sc, &mi[3], 21041, BNC); } static void tulip_21041_media_probe(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); sc->tulip_ifp->if_baudrate = 10000000; sc->tulip_cmdmode |= TULIP_CMD_CAPTREFFCT|TULIP_CMD_ENHCAPTEFFCT |TULIP_CMD_THRSHLD160|TULIP_CMD_BACKOFFCTR; sc->tulip_intrmask |= TULIP_STS_LINKPASS|TULIP_STS_LINKFAIL; tulip_21041_mediainfo_init(sc); } static void tulip_21041_media_poll(tulip_softc_t * const sc, const tulip_mediapoll_event_t event) { u_int32_t sia_status; TULIP_LOCK_ASSERT(sc); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_events[event]++; #endif if (event == TULIP_MEDIAPOLL_LINKFAIL) { if (sc->tulip_probe_state != TULIP_PROBE_INACTIVE || !TULIP_DO_AUTOSENSE(sc)) return; sc->tulip_media = TULIP_MEDIA_UNKNOWN; tulip_reset(sc); /* start probe */ return; } /* * If we've been asked to start a poll, or a link change interrupt * occurred, restart the probe (and reset the tulip to a known state). */ if (event == TULIP_MEDIAPOLL_START) { sc->tulip_ifp->if_drv_flags |= IFF_DRV_OACTIVE; sc->tulip_cmdmode &= ~(TULIP_CMD_FULLDUPLEX|TULIP_CMD_RXRUN); #ifdef notyet if (sc->tulip_revinfo >= 0x20) { sc->tulip_cmdmode |= TULIP_CMD_FULLDUPLEX; sc->tulip_flags |= TULIP_DIDNWAY; } #endif TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; sc->tulip_probe_media = TULIP_MEDIA_10BASET; sc->tulip_probe_timeout = TULIP_21041_PROBE_10BASET_TIMEOUT; tulip_media_set(sc, TULIP_MEDIA_10BASET); tulip_timeout(sc); return; } if (sc->tulip_probe_state == TULIP_PROBE_INACTIVE) return; if (event == TULIP_MEDIAPOLL_TXPROBE_OK) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txprobes_ok[sc->tulip_probe_media]++; #endif tulip_linkup(sc, sc->tulip_probe_media); return; } sia_status = TULIP_CSR_READ(sc, csr_sia_status); TULIP_CSR_WRITE(sc, csr_sia_status, sia_status); if ((sia_status & TULIP_SIASTS_LINKFAIL) == 0) { if (sc->tulip_revinfo >= 0x20) { if (sia_status & (PHYSTS_10BASET_FD << (16 - 6))) sc->tulip_probe_media = TULIP_MEDIA_10BASET_FD; } /* * If the link has passed LinkPass, 10baseT is the * proper media to use. */ tulip_linkup(sc, sc->tulip_probe_media); return; } /* * Wait for up to 2.4 seconds for the link to reach pass state. * Only then start scanning the other media for activity. * Choose media with receive activity over those without. */ if (sc->tulip_probe_media == TULIP_MEDIA_10BASET) { if (event != TULIP_MEDIAPOLL_TIMER) return; if (sc->tulip_probe_timeout > 0 && (sia_status & TULIP_SIASTS_OTHERRXACTIVITY) == 0) { tulip_timeout(sc); return; } sc->tulip_probe_timeout = TULIP_21041_PROBE_AUIBNC_TIMEOUT; sc->tulip_flags |= TULIP_WANTRXACT; if (sia_status & TULIP_SIASTS_OTHERRXACTIVITY) { sc->tulip_probe_media = TULIP_MEDIA_BNC; } else { sc->tulip_probe_media = TULIP_MEDIA_AUI; } tulip_media_set(sc, sc->tulip_probe_media); tulip_timeout(sc); return; } /* * If we failed, clear the txprobe active flag. */ if (event == TULIP_MEDIAPOLL_TXPROBE_FAILED) sc->tulip_flags &= ~TULIP_TXPROBE_ACTIVE; if (event == TULIP_MEDIAPOLL_TIMER) { /* * If we've received something, then that's our link!
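 * (TULIP_RXACT is set by the receive path once traffic has been
 * seen on the currently selected medium.)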
*/ if (sc->tulip_flags & TULIP_RXACT) { tulip_linkup(sc, sc->tulip_probe_media); return; } /* * If no txprobe is active, and we either are not waiting for receive * activity or have already seen some, send a probe packet. */ if ((sc->tulip_flags & TULIP_TXPROBE_ACTIVE) == 0 && ((sc->tulip_flags & TULIP_WANTRXACT) == 0 || (sia_status & TULIP_SIASTS_RXACTIVITY))) { sc->tulip_probe_timeout = TULIP_21041_PROBE_AUIBNC_TIMEOUT; tulip_txprobe(sc); tulip_timeout(sc); return; } /* * Take 2 passes through before deciding to not * wait for receive activity. Then take another * two passes before spitting out a warning. */ if (sc->tulip_probe_timeout <= 0) { if (sc->tulip_flags & TULIP_WANTRXACT) { sc->tulip_flags &= ~TULIP_WANTRXACT; sc->tulip_probe_timeout = TULIP_21041_PROBE_AUIBNC_TIMEOUT; } else { device_printf(sc->tulip_dev, "autosense failed: cable problem?\n"); if ((sc->tulip_ifp->if_flags & IFF_UP) == 0) { sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_RUNNING; sc->tulip_probe_state = TULIP_PROBE_INACTIVE; return; } } } } /* * Since this media failed to probe, try the other one. */ sc->tulip_probe_timeout = TULIP_21041_PROBE_AUIBNC_TIMEOUT; if (sc->tulip_probe_media == TULIP_MEDIA_AUI) { sc->tulip_probe_media = TULIP_MEDIA_BNC; } else { sc->tulip_probe_media = TULIP_MEDIA_AUI; } tulip_media_set(sc, sc->tulip_probe_media); sc->tulip_flags &= ~TULIP_TXPROBE_ACTIVE; tulip_timeout(sc); } static const tulip_boardsw_t tulip_21041_boardsw = { TULIP_21041_GENERIC, tulip_21041_media_probe, tulip_media_select, tulip_21041_media_poll }; static const tulip_phy_attr_t tulip_mii_phy_attrlist[] = { { 0x20005c00, 0, /* 08-00-17 */ { { 0x19, 0x0040, 0x0040 }, /* 10TX */ { 0x19, 0x0040, 0x0000 }, /* 100TX */ }, #if defined(TULIP_DEBUG) "NS DP83840", #endif }, { 0x0281F400, 0, /* 00-A0-7D */ { { 0x12, 0x0010, 0x0000 }, /* 10T */ { }, /* 100TX */ { 0x12, 0x0010, 0x0010 }, /* 100T4 */ { 0x12, 0x0008, 0x0008 }, /* FULL_DUPLEX */ }, #if defined(TULIP_DEBUG) "Seeq 80C240" #endif }, #if 0 { 0x0015F420, 0, /* 00-A0-7D */ { { 0x12, 0x0010, 0x0000 }, /* 10T */ { }, /* 100TX */ { 0x12, 0x0010, 0x0010 }, /* 100T4 */ { 0x12, 0x0008, 0x0008 }, /* FULL_DUPLEX */ }, #if defined(TULIP_DEBUG) "Broadcom BCM5000" #endif }, #endif { 0x0281F400, 0, /* 00-A0-BE */ { { 0x11, 0x8000, 0x0000 }, /* 10T */ { 0x11, 0x8000, 0x8000 }, /* 100TX */ { }, /* 100T4 */ { 0x11, 0x4000, 0x4000 }, /* FULL_DUPLEX */ }, #if defined(TULIP_DEBUG) "ICS 1890" #endif }, { 0 } }; static tulip_media_t tulip_mii_phy_readspecific(tulip_softc_t * const sc) { const tulip_phy_attr_t *attr; u_int16_t data; u_int32_t id; unsigned idx = 0; static const tulip_media_t table[] = { TULIP_MEDIA_UNKNOWN, TULIP_MEDIA_10BASET, TULIP_MEDIA_100BASETX, TULIP_MEDIA_100BASET4, TULIP_MEDIA_UNKNOWN, TULIP_MEDIA_10BASET_FD, TULIP_MEDIA_100BASETX_FD, TULIP_MEDIA_UNKNOWN }; TULIP_LOCK_ASSERT(sc); /* * Don't read phy specific registers if link is not up.
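 * Also require the extended-register capability, since the media
 * decoding below relies on vendor-specific registers.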
*/ data = tulip_mii_readreg(sc, sc->tulip_phyaddr, PHYREG_STATUS); if ((data & (PHYSTS_LINK_UP|PHYSTS_EXTENDED_REGS)) != (PHYSTS_LINK_UP|PHYSTS_EXTENDED_REGS)) return TULIP_MEDIA_UNKNOWN; id = (tulip_mii_readreg(sc, sc->tulip_phyaddr, PHYREG_IDLOW) << 16) | tulip_mii_readreg(sc, sc->tulip_phyaddr, PHYREG_IDHIGH); for (attr = tulip_mii_phy_attrlist;; attr++) { if (attr->attr_id == 0) return TULIP_MEDIA_UNKNOWN; if ((id & ~0x0F) == attr->attr_id) break; } if (attr->attr_modes[PHY_MODE_100TX].pm_regno) { const tulip_phy_modedata_t * const pm = &attr->attr_modes[PHY_MODE_100TX]; data = tulip_mii_readreg(sc, sc->tulip_phyaddr, pm->pm_regno); if ((data & pm->pm_mask) == pm->pm_value) idx = 2; } if (idx == 0 && attr->attr_modes[PHY_MODE_100T4].pm_regno) { const tulip_phy_modedata_t * const pm = &attr->attr_modes[PHY_MODE_100T4]; data = tulip_mii_readreg(sc, sc->tulip_phyaddr, pm->pm_regno); if ((data & pm->pm_mask) == pm->pm_value) idx = 3; } if (idx == 0 && attr->attr_modes[PHY_MODE_10T].pm_regno) { const tulip_phy_modedata_t * const pm = &attr->attr_modes[PHY_MODE_10T]; data = tulip_mii_readreg(sc, sc->tulip_phyaddr, pm->pm_regno); if ((data & pm->pm_mask) == pm->pm_value) idx = 1; } if (idx != 0 && attr->attr_modes[PHY_MODE_FULLDUPLEX].pm_regno) { const tulip_phy_modedata_t * const pm = &attr->attr_modes[PHY_MODE_FULLDUPLEX]; data = tulip_mii_readreg(sc, sc->tulip_phyaddr, pm->pm_regno); idx += ((data & pm->pm_mask) == pm->pm_value ? 4 : 0); } return table[idx]; } static unsigned tulip_mii_get_phyaddr(tulip_softc_t * const sc, unsigned offset) { unsigned phyaddr; TULIP_LOCK_ASSERT(sc); for (phyaddr = 1; phyaddr < 32; phyaddr++) { unsigned status = tulip_mii_readreg(sc, phyaddr, PHYREG_STATUS); if (status == 0 || status == 0xFFFF || status < PHYSTS_10BASET) continue; if (offset == 0) return phyaddr; offset--; } if (offset == 0) { unsigned status = tulip_mii_readreg(sc, 0, PHYREG_STATUS); if (status == 0 || status == 0xFFFF || status < PHYSTS_10BASET) return TULIP_MII_NOPHY; return 0; } return TULIP_MII_NOPHY; } static int tulip_mii_map_abilities(tulip_softc_t * const sc, unsigned abilities) { TULIP_LOCK_ASSERT(sc); sc->tulip_abilities = abilities; if (abilities & PHYSTS_100BASETX_FD) { sc->tulip_probe_media = TULIP_MEDIA_100BASETX_FD; } else if (abilities & PHYSTS_100BASET4) { sc->tulip_probe_media = TULIP_MEDIA_100BASET4; } else if (abilities & PHYSTS_100BASETX) { sc->tulip_probe_media = TULIP_MEDIA_100BASETX; } else if (abilities & PHYSTS_10BASET_FD) { sc->tulip_probe_media = TULIP_MEDIA_10BASET_FD; } else if (abilities & PHYSTS_10BASET) { sc->tulip_probe_media = TULIP_MEDIA_10BASET; } else { sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; return 0; } sc->tulip_probe_state = TULIP_PROBE_INACTIVE; return 1; } static void tulip_mii_autonegotiate(tulip_softc_t * const sc, const unsigned phyaddr) { struct ifnet *ifp = sc->tulip_ifp; TULIP_LOCK_ASSERT(sc); switch (sc->tulip_probe_state) { case TULIP_PROBE_MEDIATEST: case TULIP_PROBE_INACTIVE: { sc->tulip_flags |= TULIP_DIDNWAY; tulip_mii_writereg(sc, phyaddr, PHYREG_CONTROL, PHYCTL_RESET); sc->tulip_probe_timeout = 3000; sc->tulip_intrmask |= TULIP_STS_ABNRMLINTR|TULIP_STS_NORMALINTR; sc->tulip_probe_state = TULIP_PROBE_PHYRESET; } /* FALLTHROUGH */ case TULIP_PROBE_PHYRESET: { u_int32_t status; u_int32_t data = tulip_mii_readreg(sc, phyaddr, PHYREG_CONTROL); if (data & PHYCTL_RESET) { if (sc->tulip_probe_timeout > 0) { tulip_timeout(sc); return; } printf("%s(phy%d): error: reset of PHY never completed!\n", ifp->if_xname, phyaddr); sc->tulip_flags 
&= ~TULIP_TXPROBE_ACTIVE; sc->tulip_probe_state = TULIP_PROBE_FAILED; sc->tulip_ifp->if_flags &= ~IFF_UP; sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_RUNNING; return; } status = tulip_mii_readreg(sc, phyaddr, PHYREG_STATUS); if ((status & PHYSTS_CAN_AUTONEG) == 0) { #if defined(TULIP_DEBUG) loudprintf("%s(phy%d): autonegotiation disabled\n", ifp->if_xname, phyaddr); #endif sc->tulip_flags &= ~TULIP_DIDNWAY; sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; return; } if (tulip_mii_readreg(sc, phyaddr, PHYREG_AUTONEG_ADVERTISEMENT) != ((status >> 6) | 0x01)) tulip_mii_writereg(sc, phyaddr, PHYREG_AUTONEG_ADVERTISEMENT, (status >> 6) | 0x01); tulip_mii_writereg(sc, phyaddr, PHYREG_CONTROL, data|PHYCTL_AUTONEG_RESTART|PHYCTL_AUTONEG_ENABLE); data = tulip_mii_readreg(sc, phyaddr, PHYREG_CONTROL); #if defined(TULIP_DEBUG) if ((data & PHYCTL_AUTONEG_ENABLE) == 0) loudprintf("%s(phy%d): oops: enable autonegotiation failed: 0x%04x\n", ifp->if_xname, phyaddr, data); else loudprintf("%s(phy%d): autonegotiation restarted: 0x%04x\n", ifp->if_xname, phyaddr, data); sc->tulip_dbg.dbg_nway_starts++; #endif sc->tulip_probe_state = TULIP_PROBE_PHYAUTONEG; sc->tulip_probe_timeout = 3000; } /* FALLTHROUGH */ case TULIP_PROBE_PHYAUTONEG: { u_int32_t status = tulip_mii_readreg(sc, phyaddr, PHYREG_STATUS); u_int32_t data; if ((status & PHYSTS_AUTONEG_DONE) == 0) { if (sc->tulip_probe_timeout > 0) { tulip_timeout(sc); return; } #if defined(TULIP_DEBUG) loudprintf("%s(phy%d): autonegotiation timeout: sts=0x%04x, ctl=0x%04x\n", ifp->if_xname, phyaddr, status, tulip_mii_readreg(sc, phyaddr, PHYREG_CONTROL)); #endif sc->tulip_flags &= ~TULIP_DIDNWAY; sc->tulip_probe_state = TULIP_PROBE_MEDIATEST; return; } data = tulip_mii_readreg(sc, phyaddr, PHYREG_AUTONEG_ABILITIES); #if defined(TULIP_DEBUG) loudprintf("%s(phy%d): autonegotiation complete: 0x%04x\n", ifp->if_xname, phyaddr, data); #endif data = (data << 6) & status; if (!tulip_mii_map_abilities(sc, data)) sc->tulip_flags &= ~TULIP_DIDNWAY; return; } default: { #if defined(DIAGNOSTIC) panic("tulip_media_poll: botch at line %d\n", __LINE__); #endif break; } } #if defined(TULIP_DEBUG) loudprintf("%s(phy%d): autonegotiation failure: state = %d\n", ifp->if_xname, phyaddr, sc->tulip_probe_state); sc->tulip_dbg.dbg_nway_failures++; #endif } static void tulip_2114x_media_preset(tulip_softc_t * const sc) { const tulip_media_info_t *mi = NULL; tulip_media_t media = sc->tulip_media; TULIP_LOCK_ASSERT(sc); if (sc->tulip_probe_state == TULIP_PROBE_INACTIVE) media = sc->tulip_media; else media = sc->tulip_probe_media; sc->tulip_cmdmode &= ~TULIP_CMD_PORTSELECT; sc->tulip_flags &= ~TULIP_SQETEST; if (media != TULIP_MEDIA_UNKNOWN && media != TULIP_MEDIA_MAX) { #if defined(TULIP_DEBUG) if (media < TULIP_MEDIA_MAX && sc->tulip_mediums[media] != NULL) { #endif mi = sc->tulip_mediums[media]; if (mi->mi_type == TULIP_MEDIAINFO_MII) { sc->tulip_cmdmode |= TULIP_CMD_PORTSELECT; } else if (mi->mi_type == TULIP_MEDIAINFO_GPR || mi->mi_type == TULIP_MEDIAINFO_SYM) { sc->tulip_cmdmode &= ~TULIP_GPR_CMDBITS; sc->tulip_cmdmode |= mi->mi_cmdmode; } else if (mi->mi_type == TULIP_MEDIAINFO_SIA) { TULIP_CSR_WRITE(sc, csr_sia_connectivity, TULIP_SIACONN_RESET); } #if defined(TULIP_DEBUG) } else { device_printf(sc->tulip_dev, "preset: bad media %d!\n", media); } #endif } switch (media) { case TULIP_MEDIA_BNC: case TULIP_MEDIA_AUI: case TULIP_MEDIA_10BASET: { sc->tulip_cmdmode &= ~TULIP_CMD_FULLDUPLEX; sc->tulip_cmdmode |= TULIP_CMD_TXTHRSHLDCTL; sc->tulip_ifp->if_baudrate = 10000000; sc->tulip_flags |= 
TULIP_SQETEST; break; } case TULIP_MEDIA_10BASET_FD: { sc->tulip_cmdmode |= TULIP_CMD_FULLDUPLEX|TULIP_CMD_TXTHRSHLDCTL; sc->tulip_ifp->if_baudrate = 10000000; break; } case TULIP_MEDIA_100BASEFX: case TULIP_MEDIA_100BASET4: case TULIP_MEDIA_100BASETX: { sc->tulip_cmdmode &= ~(TULIP_CMD_FULLDUPLEX|TULIP_CMD_TXTHRSHLDCTL); sc->tulip_cmdmode |= TULIP_CMD_PORTSELECT; sc->tulip_ifp->if_baudrate = 100000000; break; } case TULIP_MEDIA_100BASEFX_FD: case TULIP_MEDIA_100BASETX_FD: { sc->tulip_cmdmode |= TULIP_CMD_FULLDUPLEX|TULIP_CMD_PORTSELECT; sc->tulip_cmdmode &= ~TULIP_CMD_TXTHRSHLDCTL; sc->tulip_ifp->if_baudrate = 100000000; break; } default: { break; } } TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); } /* ******************************************************************** * Start of 21140/21140A support which does not use the MII interface */ static void tulip_null_media_poll(tulip_softc_t * const sc, tulip_mediapoll_event_t event) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_events[event]++; #endif #if defined(DIAGNOSTIC) device_printf(sc->tulip_dev, "botch(media_poll) at line %d\n", __LINE__); #endif } static inline void tulip_21140_mediainit(tulip_softc_t * const sc, tulip_media_info_t * const mip, tulip_media_t const media, unsigned gpdata, unsigned cmdmode) { TULIP_LOCK_ASSERT(sc); sc->tulip_mediums[media] = mip; mip->mi_type = TULIP_MEDIAINFO_GPR; mip->mi_cmdmode = cmdmode; mip->mi_gpdata = gpdata; } static void tulip_21140_evalboard_media_probe(tulip_softc_t * const sc) { tulip_media_info_t *mip = sc->tulip_mediainfo; TULIP_LOCK_ASSERT(sc); sc->tulip_gpinit = TULIP_GP_EB_PINS; sc->tulip_gpdata = TULIP_GP_EB_INIT; TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_EB_PINS); TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_EB_INIT); TULIP_CSR_WRITE(sc, csr_command, TULIP_CSR_READ(sc, csr_command) | TULIP_CMD_PORTSELECT | TULIP_CMD_PCSFUNCTION | TULIP_CMD_SCRAMBLER | TULIP_CMD_MUSTBEONE); TULIP_CSR_WRITE(sc, csr_command, TULIP_CSR_READ(sc, csr_command) & ~TULIP_CMD_TXTHRSHLDCTL); DELAY(1000000); if ((TULIP_CSR_READ(sc, csr_gp) & TULIP_GP_EB_OK100) != 0) { sc->tulip_media = TULIP_MEDIA_10BASET; } else { sc->tulip_media = TULIP_MEDIA_100BASETX; } tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET, TULIP_GP_EB_INIT, TULIP_CMD_TXTHRSHLDCTL); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET_FD, TULIP_GP_EB_INIT, TULIP_CMD_TXTHRSHLDCTL|TULIP_CMD_FULLDUPLEX); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX, TULIP_GP_EB_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX_FD, TULIP_GP_EB_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER|TULIP_CMD_FULLDUPLEX); } static const tulip_boardsw_t tulip_21140_eb_boardsw = { TULIP_21140_DEC_EB, tulip_21140_evalboard_media_probe, tulip_media_select, tulip_null_media_poll, tulip_2114x_media_preset, }; static void tulip_21140_accton_media_probe(tulip_softc_t * const sc) { tulip_media_info_t *mip = sc->tulip_mediainfo; unsigned gpdata; TULIP_LOCK_ASSERT(sc); sc->tulip_gpinit = TULIP_GP_EB_PINS; sc->tulip_gpdata = TULIP_GP_EB_INIT; TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_EB_PINS); TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_EB_INIT); TULIP_CSR_WRITE(sc, csr_command, TULIP_CSR_READ(sc, csr_command) | TULIP_CMD_PORTSELECT | TULIP_CMD_PCSFUNCTION | TULIP_CMD_SCRAMBLER | TULIP_CMD_MUSTBEONE); TULIP_CSR_WRITE(sc, csr_command, TULIP_CSR_READ(sc, csr_command) & ~TULIP_CMD_TXTHRSHLDCTL); DELAY(1000000); gpdata = TULIP_CSR_READ(sc, csr_gp); if ((gpdata & TULIP_GP_EN1207_UTP_INIT) == 0) { 
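/* The EN1207 UTP link-sense bit reads as zero: assume a 10baseT link. */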
sc->tulip_media = TULIP_MEDIA_10BASET; } else { if ((gpdata & TULIP_GP_EN1207_BNC_INIT) == 0) { sc->tulip_media = TULIP_MEDIA_BNC; } else { sc->tulip_media = TULIP_MEDIA_100BASETX; } } tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_BNC, TULIP_GP_EN1207_BNC_INIT, TULIP_CMD_TXTHRSHLDCTL); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET, TULIP_GP_EN1207_UTP_INIT, TULIP_CMD_TXTHRSHLDCTL); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET_FD, TULIP_GP_EN1207_UTP_INIT, TULIP_CMD_TXTHRSHLDCTL|TULIP_CMD_FULLDUPLEX); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX, TULIP_GP_EN1207_100_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX_FD, TULIP_GP_EN1207_100_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER|TULIP_CMD_FULLDUPLEX); } static const tulip_boardsw_t tulip_21140_accton_boardsw = { TULIP_21140_EN1207, tulip_21140_accton_media_probe, tulip_media_select, tulip_null_media_poll, tulip_2114x_media_preset, }; static void tulip_21140_smc9332_media_probe(tulip_softc_t * const sc) { tulip_media_info_t *mip = sc->tulip_mediainfo; int idx, cnt = 0; TULIP_LOCK_ASSERT(sc); TULIP_CSR_WRITE(sc, csr_command, TULIP_CMD_PORTSELECT|TULIP_CMD_MUSTBEONE); TULIP_CSR_WRITE(sc, csr_busmode, TULIP_BUSMODE_SWRESET); DELAY(10); /* Wait 10 microseconds (actually 50 PCI cycles, which at 33MHz is about two microseconds, but wait a bit longer anyway) */ TULIP_CSR_WRITE(sc, csr_command, TULIP_CMD_PORTSELECT | TULIP_CMD_PCSFUNCTION | TULIP_CMD_SCRAMBLER | TULIP_CMD_MUSTBEONE); sc->tulip_gpinit = TULIP_GP_SMC_9332_PINS; sc->tulip_gpdata = TULIP_GP_SMC_9332_INIT; TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_SMC_9332_PINS|TULIP_GP_PINSET); TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_SMC_9332_INIT); DELAY(200000); for (idx = 1000; idx > 0; idx--) { u_int32_t csr = TULIP_CSR_READ(sc, csr_gp); if ((csr & (TULIP_GP_SMC_9332_OK10|TULIP_GP_SMC_9332_OK100)) == (TULIP_GP_SMC_9332_OK10|TULIP_GP_SMC_9332_OK100)) { if (++cnt > 100) break; } else if ((csr & TULIP_GP_SMC_9332_OK10) == 0) { break; } else { cnt = 0; } DELAY(1000); } sc->tulip_media = cnt > 100 ?
TULIP_MEDIA_100BASETX : TULIP_MEDIA_10BASET; tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX, TULIP_GP_SMC_9332_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX_FD, TULIP_GP_SMC_9332_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER|TULIP_CMD_FULLDUPLEX); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET, TULIP_GP_SMC_9332_INIT, TULIP_CMD_TXTHRSHLDCTL); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET_FD, TULIP_GP_SMC_9332_INIT, TULIP_CMD_TXTHRSHLDCTL|TULIP_CMD_FULLDUPLEX); } static const tulip_boardsw_t tulip_21140_smc9332_boardsw = { TULIP_21140_SMC_9332, tulip_21140_smc9332_media_probe, tulip_media_select, tulip_null_media_poll, tulip_2114x_media_preset, }; static void tulip_21140_cogent_em100_media_probe(tulip_softc_t * const sc) { tulip_media_info_t *mip = sc->tulip_mediainfo; u_int32_t cmdmode = TULIP_CSR_READ(sc, csr_command); TULIP_LOCK_ASSERT(sc); sc->tulip_gpinit = TULIP_GP_EM100_PINS; sc->tulip_gpdata = TULIP_GP_EM100_INIT; TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_EM100_PINS); TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_EM100_INIT); cmdmode = TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION|TULIP_CMD_MUSTBEONE; cmdmode &= ~(TULIP_CMD_TXTHRSHLDCTL|TULIP_CMD_SCRAMBLER); if (sc->tulip_rombuf[32] == TULIP_COGENT_EM100FX_ID) { TULIP_CSR_WRITE(sc, csr_command, cmdmode); sc->tulip_media = TULIP_MEDIA_100BASEFX; tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASEFX, TULIP_GP_EM100_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASEFX_FD, TULIP_GP_EM100_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_FULLDUPLEX); } else { TULIP_CSR_WRITE(sc, csr_command, cmdmode|TULIP_CMD_SCRAMBLER); sc->tulip_media = TULIP_MEDIA_100BASETX; tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX, TULIP_GP_EM100_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX_FD, TULIP_GP_EM100_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER|TULIP_CMD_FULLDUPLEX); } } static const tulip_boardsw_t tulip_21140_cogent_em100_boardsw = { TULIP_21140_COGENT_EM100, tulip_21140_cogent_em100_media_probe, tulip_media_select, tulip_null_media_poll, tulip_2114x_media_preset }; static void tulip_21140_znyx_zx34x_media_probe(tulip_softc_t * const sc) { tulip_media_info_t *mip = sc->tulip_mediainfo; int cnt10 = 0, cnt100 = 0, idx; TULIP_LOCK_ASSERT(sc); sc->tulip_gpinit = TULIP_GP_ZX34X_PINS; sc->tulip_gpdata = TULIP_GP_ZX34X_INIT; TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_ZX34X_PINS); TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_ZX34X_INIT); TULIP_CSR_WRITE(sc, csr_command, TULIP_CSR_READ(sc, csr_command) | TULIP_CMD_PORTSELECT | TULIP_CMD_PCSFUNCTION | TULIP_CMD_SCRAMBLER | TULIP_CMD_MUSTBEONE); TULIP_CSR_WRITE(sc, csr_command, TULIP_CSR_READ(sc, csr_command) & ~TULIP_CMD_TXTHRSHLDCTL); DELAY(200000); for (idx = 1000; idx > 0; idx--) { u_int32_t csr = TULIP_CSR_READ(sc, csr_gp); if ((csr & (TULIP_GP_ZX34X_LNKFAIL|TULIP_GP_ZX34X_SYMDET|TULIP_GP_ZX34X_SIGDET)) == (TULIP_GP_ZX34X_LNKFAIL|TULIP_GP_ZX34X_SYMDET|TULIP_GP_ZX34X_SIGDET)) { if (++cnt100 > 100) break; } else if ((csr & TULIP_GP_ZX34X_LNKFAIL) == 0) { if (++cnt10 > 100) break; } else { cnt10 = 0; cnt100 = 0; } DELAY(1000); } sc->tulip_media = cnt100 > 100 ? 
TULIP_MEDIA_100BASETX : TULIP_MEDIA_10BASET; tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET, TULIP_GP_ZX34X_INIT, TULIP_CMD_TXTHRSHLDCTL); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_10BASET_FD, TULIP_GP_ZX34X_INIT, TULIP_CMD_TXTHRSHLDCTL|TULIP_CMD_FULLDUPLEX); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX, TULIP_GP_ZX34X_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER); tulip_21140_mediainit(sc, mip++, TULIP_MEDIA_100BASETX_FD, TULIP_GP_ZX34X_INIT, TULIP_CMD_PORTSELECT|TULIP_CMD_PCSFUNCTION |TULIP_CMD_SCRAMBLER|TULIP_CMD_FULLDUPLEX); } static const tulip_boardsw_t tulip_21140_znyx_zx34x_boardsw = { TULIP_21140_ZNYX_ZX34X, tulip_21140_znyx_zx34x_media_probe, tulip_media_select, tulip_null_media_poll, tulip_2114x_media_preset, }; static void tulip_2114x_media_probe(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); sc->tulip_cmdmode |= TULIP_CMD_MUSTBEONE |TULIP_CMD_BACKOFFCTR|TULIP_CMD_THRSHLD72; } static const tulip_boardsw_t tulip_2114x_isv_boardsw = { TULIP_21140_ISV, tulip_2114x_media_probe, tulip_media_select, tulip_media_poll, tulip_2114x_media_preset, }; /* * ******** END of chip-specific handlers. *********** */ /* * Code to read the SROM and MII bit streams (I2C). */ #define EMIT do { TULIP_CSR_WRITE(sc, csr_srom_mii, csr); DELAY(1); } while (0) static void tulip_srom_idle(tulip_softc_t * const sc) { unsigned bit, csr; csr = SROMSEL ; EMIT; csr = SROMSEL | SROMRD; EMIT; csr ^= SROMCS; EMIT; csr ^= SROMCLKON; EMIT; /* * Write 25 cycles of 0 which will force the SROM to be idle. */ for (bit = 3 + SROM_BITWIDTH + 16; bit > 0; bit--) { csr ^= SROMCLKOFF; EMIT; /* clock low; data not valid */ csr ^= SROMCLKON; EMIT; /* clock high; data valid */ } csr ^= SROMCLKOFF; EMIT; csr ^= SROMCS; EMIT; csr = 0; EMIT; } static void tulip_srom_read(tulip_softc_t * const sc) { unsigned idx; const unsigned bitwidth = SROM_BITWIDTH; const unsigned cmdmask = (SROMCMD_RD << bitwidth); const unsigned msb = 1 << (bitwidth + 3 - 1); unsigned lastidx = (1 << bitwidth) - 1; tulip_srom_idle(sc); for (idx = 0; idx <= lastidx; idx++) { unsigned lastbit, data, bits, bit, csr; csr = SROMSEL ; EMIT; csr = SROMSEL | SROMRD; EMIT; csr ^= SROMCSON; EMIT; csr ^= SROMCLKON; EMIT; lastbit = 0; for (bits = idx|cmdmask, bit = bitwidth + 3; bit > 0; bit--, bits <<= 1) { const unsigned thisbit = bits & msb; csr ^= SROMCLKOFF; EMIT; /* clock low; data not valid */ if (thisbit != lastbit) { csr ^= SROMDOUT; EMIT; /* clock low; invert data */ } else { EMIT; } csr ^= SROMCLKON; EMIT; /* clock high; data valid */ lastbit = thisbit; } csr ^= SROMCLKOFF; EMIT; for (data = 0, bits = 0; bits < 16; bits++) { data <<= 1; csr ^= SROMCLKON; EMIT; /* clock high; data valid */ data |= TULIP_CSR_READ(sc, csr_srom_mii) & SROMDIN ? 1 : 0; csr ^= SROMCLKOFF; EMIT; /* clock low; data not valid */ } sc->tulip_rombuf[idx*2] = data & 0xFF; sc->tulip_rombuf[idx*2+1] = data >> 8; csr = SROMSEL | SROMRD; EMIT; csr = 0; EMIT; } tulip_srom_idle(sc); } #define MII_EMIT do { TULIP_CSR_WRITE(sc, csr_srom_mii, csr); DELAY(1); } while (0) static void tulip_mii_writebits(tulip_softc_t * const sc, unsigned data, unsigned bits) { unsigned msb = 1 << (bits - 1); unsigned csr = TULIP_CSR_READ(sc, csr_srom_mii) & (MII_RD|MII_DOUT|MII_CLK); unsigned lastbit = (csr & MII_DOUT) ?
msb : 0; TULIP_LOCK_ASSERT(sc); csr |= MII_WR; MII_EMIT; /* clock low; assert write */ for (; bits > 0; bits--, data <<= 1) { const unsigned thisbit = data & msb; if (thisbit != lastbit) { csr ^= MII_DOUT; MII_EMIT; /* clock low; invert data */ } csr ^= MII_CLKON; MII_EMIT; /* clock high; data valid */ lastbit = thisbit; csr ^= MII_CLKOFF; MII_EMIT; /* clock low; data not valid */ } } static void tulip_mii_turnaround(tulip_softc_t * const sc, unsigned cmd) { unsigned csr = TULIP_CSR_READ(sc, csr_srom_mii) & (MII_RD|MII_DOUT|MII_CLK); TULIP_LOCK_ASSERT(sc); if (cmd == MII_WRCMD) { csr |= MII_DOUT; MII_EMIT; /* clock low; change data */ csr ^= MII_CLKON; MII_EMIT; /* clock high; data valid */ csr ^= MII_CLKOFF; MII_EMIT; /* clock low; data not valid */ csr ^= MII_DOUT; MII_EMIT; /* clock low; change data */ } else { csr |= MII_RD; MII_EMIT; /* clock low; switch to read */ } csr ^= MII_CLKON; MII_EMIT; /* clock high; data valid */ csr ^= MII_CLKOFF; MII_EMIT; /* clock low; data not valid */ } static unsigned tulip_mii_readbits(tulip_softc_t * const sc) { unsigned data; unsigned csr = TULIP_CSR_READ(sc, csr_srom_mii) & (MII_RD|MII_DOUT|MII_CLK); int idx; TULIP_LOCK_ASSERT(sc); for (idx = 0, data = 0; idx < 16; idx++) { data <<= 1; /* this is NOOP on the first pass through */ csr ^= MII_CLKON; MII_EMIT; /* clock high; data valid */ if (TULIP_CSR_READ(sc, csr_srom_mii) & MII_DIN) data |= 1; csr ^= MII_CLKOFF; MII_EMIT; /* clock low; data not valid */ } csr ^= MII_RD; MII_EMIT; /* clock low; turn off read */ return data; } static unsigned tulip_mii_readreg(tulip_softc_t * const sc, unsigned devaddr, unsigned regno) { unsigned csr = TULIP_CSR_READ(sc, csr_srom_mii) & (MII_RD|MII_DOUT|MII_CLK); unsigned data; TULIP_LOCK_ASSERT(sc); csr &= ~(MII_RD|MII_CLK); MII_EMIT; tulip_mii_writebits(sc, MII_PREAMBLE, 32); tulip_mii_writebits(sc, MII_RDCMD, 8); tulip_mii_writebits(sc, devaddr, 5); tulip_mii_writebits(sc, regno, 5); tulip_mii_turnaround(sc, MII_RDCMD); data = tulip_mii_readbits(sc); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_phyregs[regno][0] = data; sc->tulip_dbg.dbg_phyregs[regno][1]++; #endif return data; } static void tulip_mii_writereg(tulip_softc_t * const sc, unsigned devaddr, unsigned regno, unsigned data) { unsigned csr = TULIP_CSR_READ(sc, csr_srom_mii) & (MII_RD|MII_DOUT|MII_CLK); TULIP_LOCK_ASSERT(sc); csr &= ~(MII_RD|MII_CLK); MII_EMIT; tulip_mii_writebits(sc, MII_PREAMBLE, 32); tulip_mii_writebits(sc, MII_WRCMD, 8); tulip_mii_writebits(sc, devaddr, 5); tulip_mii_writebits(sc, regno, 5); tulip_mii_turnaround(sc, MII_WRCMD); tulip_mii_writebits(sc, data, 16); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_phyregs[regno][2] = data; sc->tulip_dbg.dbg_phyregs[regno][3]++; #endif } #define tulip_mchash(mca) (ether_crc32_le(mca, 6) & 0x1FF) #define tulip_srom_crcok(databuf) ( \ ((ether_crc32_le(databuf, 126) & 0xFFFFU) ^ 0xFFFFU) == \ ((databuf)[126] | ((databuf)[127] << 8))) static void tulip_identify_dec_nic(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "DEC "); #define D0 4 if (sc->tulip_chipid <= TULIP_21040) return; if (bcmp(sc->tulip_rombuf + 29, "DE500", 5) == 0 || bcmp(sc->tulip_rombuf + 29, "DE450", 5) == 0) { bcopy(sc->tulip_rombuf + 29, &sc->tulip_boardid[D0], 8); sc->tulip_boardid[D0+8] = ' '; } #undef D0 } static void tulip_identify_znyx_nic(tulip_softc_t * const sc) { unsigned id = 0; TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "ZNYX ZX3XX "); if (sc->tulip_chipid == TULIP_21140 || sc->tulip_chipid == TULIP_21140A) { unsigned znyx_ptr; 
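/* On 21140[A] boards the ZNYX-private SROM area (located via the pointer stored at bytes 124-125) identifies the exact ZX34x model. */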
sc->tulip_boardid[8] = '4'; znyx_ptr = sc->tulip_rombuf[124] + 256 * sc->tulip_rombuf[125]; if (znyx_ptr < 26 || znyx_ptr > 116) { sc->tulip_boardsw = &tulip_21140_znyx_zx34x_boardsw; return; } /* ZX344 = 0010 .. 0013FF */ if (sc->tulip_rombuf[znyx_ptr] == 0x4A && sc->tulip_rombuf[znyx_ptr + 1] == 0x52 && sc->tulip_rombuf[znyx_ptr + 2] == 0x01) { id = sc->tulip_rombuf[znyx_ptr + 5] + 256 * sc->tulip_rombuf[znyx_ptr + 4]; if ((id >> 8) == (TULIP_ZNYX_ID_ZX342 >> 8)) { sc->tulip_boardid[9] = '2'; if (id == TULIP_ZNYX_ID_ZX342B) { sc->tulip_boardid[10] = 'B'; sc->tulip_boardid[11] = ' '; } sc->tulip_boardsw = &tulip_21140_znyx_zx34x_boardsw; } else if (id == TULIP_ZNYX_ID_ZX344) { sc->tulip_boardid[10] = '4'; sc->tulip_boardsw = &tulip_21140_znyx_zx34x_boardsw; } else if (id == TULIP_ZNYX_ID_ZX345) { sc->tulip_boardid[9] = (sc->tulip_rombuf[19] > 1) ? '8' : '5'; } else if (id == TULIP_ZNYX_ID_ZX346) { sc->tulip_boardid[9] = '6'; } else if (id == TULIP_ZNYX_ID_ZX351) { sc->tulip_boardid[8] = '5'; sc->tulip_boardid[9] = '1'; } } if (id == 0) { /* * Assume it's a ZX342... */ sc->tulip_boardsw = &tulip_21140_znyx_zx34x_boardsw; } return; } sc->tulip_boardid[8] = '1'; if (sc->tulip_chipid == TULIP_21041) { sc->tulip_boardid[10] = '1'; return; } if (sc->tulip_rombuf[32] == 0x4A && sc->tulip_rombuf[33] == 0x52) { id = sc->tulip_rombuf[37] + 256 * sc->tulip_rombuf[36]; if (id == TULIP_ZNYX_ID_ZX312T) { sc->tulip_boardid[9] = '2'; sc->tulip_boardid[10] = 'T'; sc->tulip_boardid[11] = ' '; sc->tulip_boardsw = &tulip_21040_10baset_only_boardsw; } else if (id == TULIP_ZNYX_ID_ZX314_INTA) { sc->tulip_boardid[9] = '4'; sc->tulip_boardsw = &tulip_21040_10baset_only_boardsw; sc->tulip_features |= TULIP_HAVE_SHAREDINTR|TULIP_HAVE_BASEROM; } else if (id == TULIP_ZNYX_ID_ZX314) { sc->tulip_boardid[9] = '4'; sc->tulip_boardsw = &tulip_21040_10baset_only_boardsw; sc->tulip_features |= TULIP_HAVE_BASEROM; } else if (id == TULIP_ZNYX_ID_ZX315_INTA) { sc->tulip_boardid[9] = '5'; sc->tulip_features |= TULIP_HAVE_SHAREDINTR|TULIP_HAVE_BASEROM; } else if (id == TULIP_ZNYX_ID_ZX315) { sc->tulip_boardid[9] = '5'; sc->tulip_features |= TULIP_HAVE_BASEROM; } else { id = 0; } } if (id == 0) { if ((sc->tulip_enaddr[3] & ~3) == 0xF0 && (sc->tulip_enaddr[5] & 2) == 0) { sc->tulip_boardid[9] = '4'; sc->tulip_boardsw = &tulip_21040_10baset_only_boardsw; sc->tulip_features |= TULIP_HAVE_SHAREDINTR|TULIP_HAVE_BASEROM; } else if ((sc->tulip_enaddr[3] & ~3) == 0xF4 && (sc->tulip_enaddr[5] & 1) == 0) { sc->tulip_boardid[9] = '5'; sc->tulip_boardsw = &tulip_21040_boardsw; sc->tulip_features |= TULIP_HAVE_SHAREDINTR|TULIP_HAVE_BASEROM; } else if ((sc->tulip_enaddr[3] & ~3) == 0xEC) { sc->tulip_boardid[9] = '2'; sc->tulip_boardsw = &tulip_21040_boardsw; } } } static void tulip_identify_smc_nic(tulip_softc_t * const sc) { u_int32_t id1, id2, ei; int auibnc = 0, utp = 0; char *cp; TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "SMC "); if (sc->tulip_chipid == TULIP_21041) return; if (sc->tulip_chipid != TULIP_21040) { if (sc->tulip_boardsw != &tulip_2114x_isv_boardsw) { strcpy(&sc->tulip_boardid[4], "9332DST "); sc->tulip_boardsw = &tulip_21140_smc9332_boardsw; } else if (sc->tulip_features & (TULIP_HAVE_BASEROM|TULIP_HAVE_SLAVEDROM)) { strcpy(&sc->tulip_boardid[4], "9334BDT "); } else { strcpy(&sc->tulip_boardid[4], "9332BDT "); } return; } id1 = sc->tulip_rombuf[0x60] | (sc->tulip_rombuf[0x61] << 8); id2 = sc->tulip_rombuf[0x62] | (sc->tulip_rombuf[0x63] << 8); ei = sc->tulip_rombuf[0x66] | (sc->tulip_rombuf[0x67] << 8); 
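/* Build the 8432 model suffix from the ID words just read from the ROM. */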
strcpy(&sc->tulip_boardid[4], "8432"); cp = &sc->tulip_boardid[8]; if ((id1 & 1) == 0) *cp++ = 'B', auibnc = 1; if ((id1 & 0xFF) > 0x32) *cp++ = 'T', utp = 1; if ((id1 & 0x4000) == 0) *cp++ = 'A', auibnc = 1; if (id2 == 0x15) { sc->tulip_boardid[7] = '4'; *cp++ = '-'; *cp++ = 'C'; *cp++ = 'H'; *cp++ = (ei ? '2' : '1'); } *cp++ = ' '; *cp = '\0'; if (utp && !auibnc) sc->tulip_boardsw = &tulip_21040_10baset_only_boardsw; else if (!utp && auibnc) sc->tulip_boardsw = &tulip_21040_auibnc_only_boardsw; } static void tulip_identify_cogent_nic(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "Cogent "); if (sc->tulip_chipid == TULIP_21140 || sc->tulip_chipid == TULIP_21140A) { if (sc->tulip_rombuf[32] == TULIP_COGENT_EM100TX_ID) { strcat(sc->tulip_boardid, "EM100TX "); sc->tulip_boardsw = &tulip_21140_cogent_em100_boardsw; #if defined(TULIP_COGENT_EM110TX_ID) } else if (sc->tulip_rombuf[32] == TULIP_COGENT_EM110TX_ID) { strcat(sc->tulip_boardid, "EM110TX "); sc->tulip_boardsw = &tulip_21140_cogent_em100_boardsw; #endif } else if (sc->tulip_rombuf[32] == TULIP_COGENT_EM100FX_ID) { strcat(sc->tulip_boardid, "EM100FX "); sc->tulip_boardsw = &tulip_21140_cogent_em100_boardsw; } /* * Magic number (0x24001109U) is the SubVendor (0x2400) and * SubDevId (0x1109) for the ANA6944TX (EM440TX). */ if (*(u_int32_t *) sc->tulip_rombuf == 0x24001109U && (sc->tulip_features & TULIP_HAVE_BASEROM)) { /* * Cogent (Adaptec) is still mapping all INTs to INTA of * first 21140. Dumb! Dumb! */ strcat(sc->tulip_boardid, "EM440TX "); sc->tulip_features |= TULIP_HAVE_SHAREDINTR; } } else if (sc->tulip_chipid == TULIP_21040) { sc->tulip_features |= TULIP_HAVE_SHAREDINTR|TULIP_HAVE_BASEROM; } } static void tulip_identify_accton_nic(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "ACCTON "); switch (sc->tulip_chipid) { case TULIP_21140A: strcat(sc->tulip_boardid, "EN1207 "); if (sc->tulip_boardsw != &tulip_2114x_isv_boardsw) sc->tulip_boardsw = &tulip_21140_accton_boardsw; break; case TULIP_21140: strcat(sc->tulip_boardid, "EN1207TX "); if (sc->tulip_boardsw != &tulip_2114x_isv_boardsw) sc->tulip_boardsw = &tulip_21140_eb_boardsw; break; case TULIP_21040: strcat(sc->tulip_boardid, "EN1203 "); sc->tulip_boardsw = &tulip_21040_boardsw; break; case TULIP_21041: strcat(sc->tulip_boardid, "EN1203 "); sc->tulip_boardsw = &tulip_21041_boardsw; break; default: sc->tulip_boardsw = &tulip_2114x_isv_boardsw; break; } } static void tulip_identify_asante_nic(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "Asante "); if ((sc->tulip_chipid == TULIP_21140 || sc->tulip_chipid == TULIP_21140A) && sc->tulip_boardsw != &tulip_2114x_isv_boardsw) { tulip_media_info_t *mi = sc->tulip_mediainfo; int idx; /* * The Asante Fast Ethernet doesn't always ship with a valid * new format SROM. So if it isn't in the new format, we cheat * and set it up as if it were.
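 * The media-info fields below are filled in by hand with the values
 * a new-format SROM MII block would have carried.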
*/ sc->tulip_gpinit = TULIP_GP_ASANTE_PINS; sc->tulip_gpdata = 0; TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_ASANTE_PINS|TULIP_GP_PINSET); TULIP_CSR_WRITE(sc, csr_gp, TULIP_GP_ASANTE_PHYRESET); DELAY(100); TULIP_CSR_WRITE(sc, csr_gp, 0); mi->mi_type = TULIP_MEDIAINFO_MII; mi->mi_gpr_length = 0; mi->mi_gpr_offset = 0; mi->mi_reset_length = 0; mi->mi_reset_offset = 0; mi->mi_phyaddr = TULIP_MII_NOPHY; for (idx = 20; idx > 0 && mi->mi_phyaddr == TULIP_MII_NOPHY; idx--) { DELAY(10000); mi->mi_phyaddr = tulip_mii_get_phyaddr(sc, 0); } if (mi->mi_phyaddr == TULIP_MII_NOPHY) { device_printf(sc->tulip_dev, "can't find phy 0\n"); return; } sc->tulip_features |= TULIP_HAVE_MII; mi->mi_capabilities = PHYSTS_10BASET|PHYSTS_10BASET_FD|PHYSTS_100BASETX|PHYSTS_100BASETX_FD; mi->mi_advertisement = PHYSTS_10BASET|PHYSTS_10BASET_FD|PHYSTS_100BASETX|PHYSTS_100BASETX_FD; mi->mi_full_duplex = PHYSTS_10BASET_FD|PHYSTS_100BASETX_FD; mi->mi_tx_threshold = PHYSTS_10BASET|PHYSTS_10BASET_FD; TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASETX_FD); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASETX); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASET4); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 10BASET_FD); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 10BASET); mi->mi_phyid = (tulip_mii_readreg(sc, mi->mi_phyaddr, PHYREG_IDLOW) << 16) | tulip_mii_readreg(sc, mi->mi_phyaddr, PHYREG_IDHIGH); sc->tulip_boardsw = &tulip_2114x_isv_boardsw; } } static void tulip_identify_compex_nic(tulip_softc_t * const sc) { TULIP_LOCK_ASSERT(sc); strcpy(sc->tulip_boardid, "COMPEX "); if (sc->tulip_chipid == TULIP_21140A) { int root_unit; tulip_softc_t *root_sc = NULL; strcat(sc->tulip_boardid, "400TX/PCI "); /* * All 4 chips on these boards share an interrupt. This code is * copied from tulip_read_macaddr. */ sc->tulip_features |= TULIP_HAVE_SHAREDINTR; for (root_unit = sc->tulip_unit - 1; root_unit >= 0; root_unit--) { root_sc = tulips[root_unit]; if (root_sc == NULL || !(root_sc->tulip_features & TULIP_HAVE_SLAVEDINTR)) break; root_sc = NULL; } if (root_sc != NULL && root_sc->tulip_chipid == sc->tulip_chipid && root_sc->tulip_pci_busno == sc->tulip_pci_busno) { sc->tulip_features |= TULIP_HAVE_SLAVEDINTR; sc->tulip_slaves = root_sc->tulip_slaves; root_sc->tulip_slaves = sc; } else if (sc->tulip_features & TULIP_HAVE_SLAVEDINTR) { printf("\nCannot find master device for %s interrupts", sc->tulip_ifp->if_xname); } } else { strcat(sc->tulip_boardid, "unknown "); } /* sc->tulip_boardsw = &tulip_21140_eb_boardsw; */ return; } static int tulip_srom_decode(tulip_softc_t * const sc) { unsigned idx1, idx2, idx3; const tulip_srom_header_t *shp = (const tulip_srom_header_t *) &sc->tulip_rombuf[0]; const tulip_srom_adapter_info_t *saip = (const tulip_srom_adapter_info_t *) (shp + 1); tulip_srom_media_t srom_media; tulip_media_info_t *mi = sc->tulip_mediainfo; const u_int8_t *dp; u_int32_t leaf_offset, blocks, data; TULIP_LOCK_ASSERT(sc); for (idx1 = 0; idx1 < shp->sh_adapter_count; idx1++, saip++) { if (shp->sh_adapter_count == 1) break; if (saip->sai_device == sc->tulip_pci_devno) break; } /* * Didn't find the right media block for this card. */ if (idx1 == shp->sh_adapter_count) return 0; /* * Save the hardware address. */ bcopy(shp->sh_ieee802_address, sc->tulip_enaddr, 6); /* * If this is a multiple port card, add the adapter index to the last * byte of the hardware address. (if it isn't multiport, adding 0 * won't hurt.)
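 * (idx1 is this adapter's index within the shared SROM's adapter
 * list, found above.)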
*/ sc->tulip_enaddr[5] += idx1; leaf_offset = saip->sai_leaf_offset_lowbyte + saip->sai_leaf_offset_highbyte * 256; dp = sc->tulip_rombuf + leaf_offset; sc->tulip_conntype = (tulip_srom_connection_t) (dp[0] + dp[1] * 256); dp += 2; for (idx2 = 0;; idx2++) { if (tulip_srom_conninfo[idx2].sc_type == sc->tulip_conntype || tulip_srom_conninfo[idx2].sc_type == TULIP_SROM_CONNTYPE_NOT_USED) break; } sc->tulip_connidx = idx2; if (sc->tulip_chipid == TULIP_21041) { blocks = *dp++; for (idx2 = 0; idx2 < blocks; idx2++) { tulip_media_t media; data = *dp++; srom_media = (tulip_srom_media_t) (data & 0x3F); for (idx3 = 0; tulip_srom_mediums[idx3].sm_type != TULIP_MEDIA_UNKNOWN; idx3++) { if (tulip_srom_mediums[idx3].sm_srom_type == srom_media) break; } media = tulip_srom_mediums[idx3].sm_type; if (media != TULIP_MEDIA_UNKNOWN) { if (data & TULIP_SROM_21041_EXTENDED) { mi->mi_type = TULIP_MEDIAINFO_SIA; sc->tulip_mediums[media] = mi; mi->mi_sia_connectivity = dp[0] + dp[1] * 256; mi->mi_sia_tx_rx = dp[2] + dp[3] * 256; mi->mi_sia_general = dp[4] + dp[5] * 256; mi++; } else { switch (media) { case TULIP_MEDIA_BNC: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, BNC); mi++; break; } case TULIP_MEDIA_AUI: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, AUI); mi++; break; } case TULIP_MEDIA_10BASET: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, 10BASET); mi++; break; } case TULIP_MEDIA_10BASET_FD: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, 10BASET_FD); mi++; break; } default: { break; } } } } if (data & TULIP_SROM_21041_EXTENDED) dp += 6; } #ifdef notdef if (blocks == 0) { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, BNC); mi++; TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, AUI); mi++; TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, 10BASET); mi++; TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21041, 10BASET_FD); mi++; } #endif } else { unsigned length, type; tulip_media_t gp_media = TULIP_MEDIA_UNKNOWN; if (sc->tulip_features & TULIP_HAVE_GPR) sc->tulip_gpinit = *dp++; blocks = *dp++; for (idx2 = 0; idx2 < blocks; idx2++) { const u_int8_t *ep; if ((*dp & 0x80) == 0) { length = 4; type = 0; } else { length = (*dp++ & 0x7f) - 1; type = *dp++ & 0x3f; } ep = dp + length; switch (type & 0x3f) { case 0: { /* 21140[A] GPR block */ tulip_media_t media; srom_media = (tulip_srom_media_t)(dp[0] & 0x3f); for (idx3 = 0; tulip_srom_mediums[idx3].sm_type != TULIP_MEDIA_UNKNOWN; idx3++) { if (tulip_srom_mediums[idx3].sm_srom_type == srom_media) break; } media = tulip_srom_mediums[idx3].sm_type; if (media == TULIP_MEDIA_UNKNOWN) break; mi->mi_type = TULIP_MEDIAINFO_GPR; sc->tulip_mediums[media] = mi; mi->mi_gpdata = dp[1]; if (media > gp_media && !TULIP_IS_MEDIA_FD(media)) { sc->tulip_gpdata = mi->mi_gpdata; gp_media = media; } data = dp[2] + dp[3] * 256; mi->mi_cmdmode = TULIP_SROM_2114X_CMDBITS(data); if (data & TULIP_SROM_2114X_NOINDICATOR) { mi->mi_actmask = 0; } else { #if 0 mi->mi_default = (data & TULIP_SROM_2114X_DEFAULT) != 0; #endif mi->mi_actmask = TULIP_SROM_2114X_BITPOS(data); mi->mi_actdata = (data & TULIP_SROM_2114X_POLARITY) ? 0 : mi->mi_actmask; } mi++; break; } case 1: { /* 21140[A] MII block */ const unsigned phyno = *dp++; mi->mi_type = TULIP_MEDIAINFO_MII; mi->mi_gpr_length = *dp++; mi->mi_gpr_offset = dp - sc->tulip_rombuf; dp += mi->mi_gpr_length; mi->mi_reset_length = *dp++; mi->mi_reset_offset = dp - sc->tulip_rombuf; dp += mi->mi_reset_length; /* * Before we probe for a PHY, use the GPR information * to select it. If we don't, it may be inaccessible. 
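 * The reset and GPR byte sequences stored in the SROM are replayed
 * through the general-purpose register before the PHY search below.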
*/ TULIP_CSR_WRITE(sc, csr_gp, sc->tulip_gpinit|TULIP_GP_PINSET); for (idx3 = 0; idx3 < mi->mi_reset_length; idx3++) { DELAY(10); TULIP_CSR_WRITE(sc, csr_gp, sc->tulip_rombuf[mi->mi_reset_offset + idx3]); } sc->tulip_phyaddr = mi->mi_phyaddr; for (idx3 = 0; idx3 < mi->mi_gpr_length; idx3++) { DELAY(10); TULIP_CSR_WRITE(sc, csr_gp, sc->tulip_rombuf[mi->mi_gpr_offset + idx3]); } /* * At least write something! */ if (mi->mi_reset_length == 0 && mi->mi_gpr_length == 0) TULIP_CSR_WRITE(sc, csr_gp, 0); mi->mi_phyaddr = TULIP_MII_NOPHY; for (idx3 = 20; idx3 > 0 && mi->mi_phyaddr == TULIP_MII_NOPHY; idx3--) { DELAY(10000); mi->mi_phyaddr = tulip_mii_get_phyaddr(sc, phyno); } if (mi->mi_phyaddr == TULIP_MII_NOPHY) { #if defined(TULIP_DEBUG) device_printf(sc->tulip_dev, "can't find phy %d\n", phyno); #endif break; } sc->tulip_features |= TULIP_HAVE_MII; mi->mi_capabilities = dp[0] + dp[1] * 256; dp += 2; mi->mi_advertisement = dp[0] + dp[1] * 256; dp += 2; mi->mi_full_duplex = dp[0] + dp[1] * 256; dp += 2; mi->mi_tx_threshold = dp[0] + dp[1] * 256; dp += 2; TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASETX_FD); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASETX); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASET4); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 10BASET_FD); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 10BASET); mi->mi_phyid = (tulip_mii_readreg(sc, mi->mi_phyaddr, PHYREG_IDLOW) << 16) | tulip_mii_readreg(sc, mi->mi_phyaddr, PHYREG_IDHIGH); mi++; break; } case 2: { /* 2114[23] SIA block */ tulip_media_t media; srom_media = (tulip_srom_media_t)(dp[0] & 0x3f); for (idx3 = 0; tulip_srom_mediums[idx3].sm_type != TULIP_MEDIA_UNKNOWN; idx3++) { if (tulip_srom_mediums[idx3].sm_srom_type == srom_media) break; } media = tulip_srom_mediums[idx3].sm_type; if (media == TULIP_MEDIA_UNKNOWN) break; mi->mi_type = TULIP_MEDIAINFO_SIA; sc->tulip_mediums[media] = mi; if (dp[0] & 0x40) { mi->mi_sia_connectivity = dp[1] + dp[2] * 256; mi->mi_sia_tx_rx = dp[3] + dp[4] * 256; mi->mi_sia_general = dp[5] + dp[6] * 256; dp += 6; } else { switch (media) { case TULIP_MEDIA_BNC: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21142, BNC); break; } case TULIP_MEDIA_AUI: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21142, AUI); break; } case TULIP_MEDIA_10BASET: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21142, 10BASET); sc->tulip_intrmask |= TULIP_STS_LINKPASS|TULIP_STS_LINKFAIL; break; } case TULIP_MEDIA_10BASET_FD: { TULIP_MEDIAINFO_SIA_INIT(sc, mi, 21142, 10BASET_FD); sc->tulip_intrmask |= TULIP_STS_LINKPASS|TULIP_STS_LINKFAIL; break; } default: { goto bad_media; } } } mi->mi_sia_gp_control = (dp[1] + dp[2] * 256) << 16; mi->mi_sia_gp_data = (dp[3] + dp[4] * 256) << 16; mi++; bad_media: break; } case 3: { /* 2114[23] MII PHY block */ const unsigned phyno = *dp++; const u_int8_t *dp0; mi->mi_type = TULIP_MEDIAINFO_MII; mi->mi_gpr_length = *dp++; mi->mi_gpr_offset = dp - sc->tulip_rombuf; dp += 2 * mi->mi_gpr_length; mi->mi_reset_length = *dp++; mi->mi_reset_offset = dp - sc->tulip_rombuf; dp += 2 * mi->mi_reset_length; dp0 = &sc->tulip_rombuf[mi->mi_reset_offset]; for (idx3 = 0; idx3 < mi->mi_reset_length; idx3++, dp0 += 2) { DELAY(10); TULIP_CSR_WRITE(sc, csr_sia_general, (dp0[0] + 256 * dp0[1]) << 16); } sc->tulip_phyaddr = mi->mi_phyaddr; dp0 = &sc->tulip_rombuf[mi->mi_gpr_offset]; for (idx3 = 0; idx3 < mi->mi_gpr_length; idx3++, dp0 += 2) { DELAY(10); TULIP_CSR_WRITE(sc, csr_sia_general, (dp0[0] + 256 * dp0[1]) << 16); } if (mi->mi_reset_length == 0 && mi->mi_gpr_length == 0) TULIP_CSR_WRITE(sc, csr_sia_general, 0); mi->mi_phyaddr = TULIP_MII_NOPHY; for 
(idx3 = 20; idx3 > 0 && mi->mi_phyaddr == TULIP_MII_NOPHY; idx3--) { DELAY(10000); mi->mi_phyaddr = tulip_mii_get_phyaddr(sc, phyno); } if (mi->mi_phyaddr == TULIP_MII_NOPHY) { #if defined(TULIP_DEBUG) device_printf(sc->tulip_dev, "can't find phy %d\n", phyno); #endif break; } sc->tulip_features |= TULIP_HAVE_MII; mi->mi_capabilities = dp[0] + dp[1] * 256; dp += 2; mi->mi_advertisement = dp[0] + dp[1] * 256; dp += 2; mi->mi_full_duplex = dp[0] + dp[1] * 256; dp += 2; mi->mi_tx_threshold = dp[0] + dp[1] * 256; dp += 2; mi->mi_mii_interrupt = dp[0] + dp[1] * 256; dp += 2; TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASETX_FD); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASETX); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 100BASET4); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 10BASET_FD); TULIP_MEDIAINFO_ADD_CAPABILITY(sc, mi, 10BASET); mi->mi_phyid = (tulip_mii_readreg(sc, mi->mi_phyaddr, PHYREG_IDLOW) << 16) | tulip_mii_readreg(sc, mi->mi_phyaddr, PHYREG_IDHIGH); mi++; break; } case 4: { /* 21143 SYM block */ tulip_media_t media; srom_media = (tulip_srom_media_t) dp[0]; for (idx3 = 0; tulip_srom_mediums[idx3].sm_type != TULIP_MEDIA_UNKNOWN; idx3++) { if (tulip_srom_mediums[idx3].sm_srom_type == srom_media) break; } media = tulip_srom_mediums[idx3].sm_type; if (media == TULIP_MEDIA_UNKNOWN) break; mi->mi_type = TULIP_MEDIAINFO_SYM; sc->tulip_mediums[media] = mi; mi->mi_gpcontrol = (dp[1] + dp[2] * 256) << 16; mi->mi_gpdata = (dp[3] + dp[4] * 256) << 16; data = dp[5] + dp[6] * 256; mi->mi_cmdmode = TULIP_SROM_2114X_CMDBITS(data); if (data & TULIP_SROM_2114X_NOINDICATOR) { mi->mi_actmask = 0; } else { mi->mi_default = (data & TULIP_SROM_2114X_DEFAULT) != 0; mi->mi_actmask = TULIP_SROM_2114X_BITPOS(data); mi->mi_actdata = (data & TULIP_SROM_2114X_POLARITY) ? 0 : mi->mi_actmask; } if (TULIP_IS_MEDIA_TP(media)) sc->tulip_intrmask |= TULIP_STS_LINKPASS|TULIP_STS_LINKFAIL; mi++; break; } #if 0 case 5: { /* 21143 Reset block */ mi->mi_type = TULIP_MEDIAINFO_RESET; mi->mi_reset_length = *dp++; mi->mi_reset_offset = dp - sc->tulip_rombuf; dp += 2 * mi->mi_reset_length; mi++; break; } #endif default: { } } dp = ep; } } return mi - sc->tulip_mediainfo; } static const struct { void (*vendor_identify_nic)(tulip_softc_t * const sc); unsigned char vendor_oui[3]; } tulip_vendors[] = { { tulip_identify_dec_nic, { 0x08, 0x00, 0x2B } }, { tulip_identify_dec_nic, { 0x00, 0x00, 0xF8 } }, { tulip_identify_smc_nic, { 0x00, 0x00, 0xC0 } }, { tulip_identify_smc_nic, { 0x00, 0xE0, 0x29 } }, { tulip_identify_znyx_nic, { 0x00, 0xC0, 0x95 } }, { tulip_identify_cogent_nic, { 0x00, 0x00, 0x92 } }, { tulip_identify_asante_nic, { 0x00, 0x00, 0x94 } }, { tulip_identify_cogent_nic, { 0x00, 0x00, 0xD1 } }, { tulip_identify_accton_nic, { 0x00, 0x00, 0xE8 } }, { tulip_identify_compex_nic, { 0x00, 0x80, 0x48 } }, { NULL } }; /* * This deals with the vagaries of the address roms and the * brain-deadness that various vendors commit in using them. 
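 * tulip_read_macaddr() returns 0 on success or a negative value
 * identifying which of the ROM sanity checks below failed.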
*/ static int tulip_read_macaddr(tulip_softc_t * const sc) { unsigned cksum, rom_cksum, idx; u_int32_t csr; unsigned char tmpbuf[8]; static const u_char testpat[] = { 0xFF, 0, 0x55, 0xAA, 0xFF, 0, 0x55, 0xAA }; sc->tulip_connidx = TULIP_SROM_LASTCONNIDX; if (sc->tulip_chipid == TULIP_21040) { TULIP_CSR_WRITE(sc, csr_enetrom, 1); for (idx = 0; idx < sizeof(sc->tulip_rombuf); idx++) { int cnt = 0; while (((csr = TULIP_CSR_READ(sc, csr_enetrom)) & 0x80000000L) && cnt < 10000) cnt++; sc->tulip_rombuf[idx] = csr & 0xFF; } sc->tulip_boardsw = &tulip_21040_boardsw; } else { if (sc->tulip_chipid == TULIP_21041) { /* * Thankfully all 21041's act the same. */ sc->tulip_boardsw = &tulip_21041_boardsw; } else { /* * Assume all 21140 boards are compatible with the * DEC 10/100 evaluation board. Not really valid but * it's the best we can do until everyone switches to * the new SROM format. */ sc->tulip_boardsw = &tulip_21140_eb_boardsw; } tulip_srom_read(sc); if (tulip_srom_crcok(sc->tulip_rombuf)) { /* * SROM CRC is valid, therefore it must be in the * new format. */ sc->tulip_features |= TULIP_HAVE_ISVSROM|TULIP_HAVE_OKSROM; } else if (sc->tulip_rombuf[126] == 0xff && sc->tulip_rombuf[127] == 0xFF) { /* * No checksum is present. See if the SROM id checks out; * the first 18 bytes should be 0 followed by a 1 followed * by the number of adapters (which we don't deal with yet). */ for (idx = 0; idx < 18; idx++) { if (sc->tulip_rombuf[idx] != 0) break; } if (idx == 18 && sc->tulip_rombuf[18] == 1 && sc->tulip_rombuf[19] != 0) sc->tulip_features |= TULIP_HAVE_ISVSROM; } else if (sc->tulip_chipid >= TULIP_21142) { sc->tulip_features |= TULIP_HAVE_ISVSROM; sc->tulip_boardsw = &tulip_2114x_isv_boardsw; } if ((sc->tulip_features & TULIP_HAVE_ISVSROM) && tulip_srom_decode(sc)) { if (sc->tulip_chipid != TULIP_21041) sc->tulip_boardsw = &tulip_2114x_isv_boardsw; /* * If the SROM specifies more than one adapter, tag this as a * BASE rom. */ if (sc->tulip_rombuf[19] > 1) sc->tulip_features |= TULIP_HAVE_BASEROM; if (sc->tulip_boardsw == NULL) return -6; goto check_oui; } } if (bcmp(&sc->tulip_rombuf[0], &sc->tulip_rombuf[16], 8) != 0) { /* * Some folks don't use the standard ethernet rom format * but instead just put the address in the first 6 bytes * of the rom and let the rest be all 0xffs. (Can we say * ZNYX?) (well sometimes they put in a checksum so we'll * start at 8). */ for (idx = 8; idx < 32; idx++) { if (sc->tulip_rombuf[idx] != 0xFF) return -4; } /* * Make sure the address is not multicast or locally assigned * and that the OUI is not 00-00-00. */ if ((sc->tulip_rombuf[0] & 3) != 0) return -4; if (sc->tulip_rombuf[0] == 0 && sc->tulip_rombuf[1] == 0 && sc->tulip_rombuf[2] == 0) return -4; bcopy(sc->tulip_rombuf, sc->tulip_enaddr, 6); sc->tulip_features |= TULIP_HAVE_OKROM; goto check_oui; } else { /* * A number of makers of multiport boards (ZNYX and Cogent) * only put one address ROM on their 21040 boards. So * if the ROM is all zeros (or all 0xFFs), look at the * previously configured boards (as long as they are on the same * PCI bus and the bus number is non-zero) until we find the * master board with the address ROM. We then use its address ROM * as the base for this board. (we add our relative board number * to the last byte of its address).
*/ for (idx = 0; idx < sizeof(sc->tulip_rombuf); idx++) { if (sc->tulip_rombuf[idx] != 0 && sc->tulip_rombuf[idx] != 0xFF) break; } if (idx == sizeof(sc->tulip_rombuf)) { int root_unit; tulip_softc_t *root_sc = NULL; for (root_unit = sc->tulip_unit - 1; root_unit >= 0; root_unit--) { root_sc = tulips[root_unit]; if (root_sc == NULL || (root_sc->tulip_features & (TULIP_HAVE_OKROM|TULIP_HAVE_SLAVEDROM)) == TULIP_HAVE_OKROM) break; root_sc = NULL; } if (root_sc != NULL && (root_sc->tulip_features & TULIP_HAVE_BASEROM) && root_sc->tulip_chipid == sc->tulip_chipid && root_sc->tulip_pci_busno == sc->tulip_pci_busno) { sc->tulip_features |= TULIP_HAVE_SLAVEDROM; sc->tulip_boardsw = root_sc->tulip_boardsw; strcpy(sc->tulip_boardid, root_sc->tulip_boardid); if (sc->tulip_boardsw->bd_type == TULIP_21140_ISV) { bcopy(root_sc->tulip_rombuf, sc->tulip_rombuf, sizeof(sc->tulip_rombuf)); if (!tulip_srom_decode(sc)) return -5; } else { bcopy(root_sc->tulip_enaddr, sc->tulip_enaddr, 6); sc->tulip_enaddr[5] += sc->tulip_unit - root_sc->tulip_unit; } /* * Now for a truly disgusting kludge: all 4 21040s on * the ZX314 share the same INTA line so the mapping * setup by the BIOS on the PCI bridge is worthless. * Rather than reprogramming the value in the config * register, we will handle this internally. */ if (root_sc->tulip_features & TULIP_HAVE_SHAREDINTR) { sc->tulip_slaves = root_sc->tulip_slaves; root_sc->tulip_slaves = sc; sc->tulip_features |= TULIP_HAVE_SLAVEDINTR; } return 0; } } } /* * This is the standard DEC address ROM test. */ if (bcmp(&sc->tulip_rombuf[24], testpat, 8) != 0) return -3; tmpbuf[0] = sc->tulip_rombuf[15]; tmpbuf[1] = sc->tulip_rombuf[14]; tmpbuf[2] = sc->tulip_rombuf[13]; tmpbuf[3] = sc->tulip_rombuf[12]; tmpbuf[4] = sc->tulip_rombuf[11]; tmpbuf[5] = sc->tulip_rombuf[10]; tmpbuf[6] = sc->tulip_rombuf[9]; tmpbuf[7] = sc->tulip_rombuf[8]; if (bcmp(&sc->tulip_rombuf[0], tmpbuf, 8) != 0) return -2; bcopy(sc->tulip_rombuf, sc->tulip_enaddr, 6); cksum = *(u_int16_t *) &sc->tulip_enaddr[0]; cksum *= 2; if (cksum > 65535) cksum -= 65535; cksum += *(u_int16_t *) &sc->tulip_enaddr[2]; if (cksum > 65535) cksum -= 65535; cksum *= 2; if (cksum > 65535) cksum -= 65535; cksum += *(u_int16_t *) &sc->tulip_enaddr[4]; if (cksum >= 65535) cksum -= 65535; rom_cksum = *(u_int16_t *) &sc->tulip_rombuf[6]; if (cksum != rom_cksum) return -1; check_oui: /* * Check for various boards based on OUI. Did I say braindead? 
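 * The OUI (first three bytes of the station address) selects a
 * vendor-specific identify routine from the tulip_vendors[] table.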
*/ for (idx = 0; tulip_vendors[idx].vendor_identify_nic != NULL; idx++) { if (bcmp(sc->tulip_enaddr, tulip_vendors[idx].vendor_oui, 3) == 0) { (*tulip_vendors[idx].vendor_identify_nic)(sc); break; } } sc->tulip_features |= TULIP_HAVE_OKROM; return 0; } static void tulip_ifmedia_add(tulip_softc_t * const sc) { tulip_media_t media; int medias = 0; TULIP_LOCK_ASSERT(sc); for (media = TULIP_MEDIA_UNKNOWN; media < TULIP_MEDIA_MAX; media++) { if (sc->tulip_mediums[media] != NULL) { ifmedia_add(&sc->tulip_ifmedia, tulip_media_to_ifmedia[media], 0, 0); medias++; } } if (medias == 0) { sc->tulip_features |= TULIP_HAVE_NOMEDIA; ifmedia_add(&sc->tulip_ifmedia, IFM_ETHER | IFM_NONE, 0, 0); ifmedia_set(&sc->tulip_ifmedia, IFM_ETHER | IFM_NONE); } else if (sc->tulip_media == TULIP_MEDIA_UNKNOWN) { ifmedia_add(&sc->tulip_ifmedia, IFM_ETHER | IFM_AUTO, 0, 0); ifmedia_set(&sc->tulip_ifmedia, IFM_ETHER | IFM_AUTO); } else { ifmedia_set(&sc->tulip_ifmedia, tulip_media_to_ifmedia[sc->tulip_media]); sc->tulip_flags |= TULIP_PRINTMEDIA; tulip_linkup(sc, sc->tulip_media); } } static int tulip_ifmedia_change(struct ifnet * const ifp) { tulip_softc_t * const sc = (tulip_softc_t *)ifp->if_softc; TULIP_LOCK(sc); sc->tulip_flags |= TULIP_NEEDRESET; sc->tulip_probe_state = TULIP_PROBE_INACTIVE; sc->tulip_media = TULIP_MEDIA_UNKNOWN; if (IFM_SUBTYPE(sc->tulip_ifmedia.ifm_media) != IFM_AUTO) { tulip_media_t media; for (media = TULIP_MEDIA_UNKNOWN; media < TULIP_MEDIA_MAX; media++) { if (sc->tulip_mediums[media] != NULL && sc->tulip_ifmedia.ifm_media == tulip_media_to_ifmedia[media]) { sc->tulip_flags |= TULIP_PRINTMEDIA; sc->tulip_flags &= ~TULIP_DIDNWAY; tulip_linkup(sc, media); TULIP_UNLOCK(sc); return 0; } } } sc->tulip_flags &= ~(TULIP_TXPROBE_ACTIVE|TULIP_WANTRXACT); tulip_reset(sc); tulip_init_locked(sc); TULIP_UNLOCK(sc); return 0; } /* * Media status callback */ static void tulip_ifmedia_status(struct ifnet * const ifp, struct ifmediareq *req) { tulip_softc_t *sc = (tulip_softc_t *)ifp->if_softc; TULIP_LOCK(sc); if (sc->tulip_media == TULIP_MEDIA_UNKNOWN) { TULIP_UNLOCK(sc); return; } req->ifm_status = IFM_AVALID; if (sc->tulip_flags & TULIP_LINKUP) req->ifm_status |= IFM_ACTIVE; req->ifm_active = tulip_media_to_ifmedia[sc->tulip_media]; TULIP_UNLOCK(sc); } static void tulip_addr_filter(tulip_softc_t * const sc) { struct ifmultiaddr *ifma; struct ifnet *ifp; u_char *addrp; u_int16_t eaddr[ETHER_ADDR_LEN/2]; int multicnt; TULIP_LOCK_ASSERT(sc); sc->tulip_flags &= ~(TULIP_WANTHASHPERFECT|TULIP_WANTHASHONLY|TULIP_ALLMULTI); sc->tulip_flags |= TULIP_WANTSETUP|TULIP_WANTTXSTART; sc->tulip_cmdmode &= ~TULIP_CMD_RXRUN; sc->tulip_intrmask &= ~TULIP_STS_RXSTOPPED; #if defined(IFF_ALLMULTI) if (sc->tulip_ifp->if_flags & IFF_ALLMULTI) sc->tulip_flags |= TULIP_ALLMULTI ; #endif multicnt = 0; ifp = sc->tulip_ifp; if_maddr_rlock(ifp); /* Copy MAC address on stack to align. */ if (ifp->if_input != NULL) bcopy(IF_LLADDR(ifp), eaddr, ETHER_ADDR_LEN); else bcopy(sc->tulip_enaddr, eaddr, ETHER_ADDR_LEN); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family == AF_LINK) multicnt++; } if (multicnt > 14) { u_int32_t *sp = sc->tulip_setupdata; unsigned hash; /* * Some early passes of the 21140 have broken implementations of * hash-perfect mode. When we get too many multicasts for perfect * filtering with these chips, we need to switch into hash-only * mode (this is better than all-multicast on network with lots * of multicast traffic). 
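* The setup buffer stores the 512-bit hash filter as 16-bit words
* spread over 32-bit entries, so a 9-bit hash value h is filed with
* the same arithmetic the code below uses:
*
*	sp[h >> 4] |= htole32(1 << (h & 0xF));
*
* i.e. bits 8:4 of h select the word and bits 3:0 the bit within it.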
*/ if (sc->tulip_features & TULIP_HAVE_BROKEN_HASH) sc->tulip_flags |= TULIP_WANTHASHONLY; else sc->tulip_flags |= TULIP_WANTHASHPERFECT; /* * If we have more than 14 multicasts, we have to * go into hash-perfect mode (a 512-bit multicast * hash plus one hardware perfect address). */ bzero(sc->tulip_setupdata, sizeof(sc->tulip_setupdata)); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; hash = tulip_mchash(LLADDR((struct sockaddr_dl *)ifma->ifma_addr)); sp[hash >> 4] |= htole32(1 << (hash & 0xF)); } /* * No reason to use a hash if we are going to be * receiving every multicast. */ if ((sc->tulip_flags & TULIP_ALLMULTI) == 0) { hash = tulip_mchash(ifp->if_broadcastaddr); sp[hash >> 4] |= htole32(1 << (hash & 0xF)); if (sc->tulip_flags & TULIP_WANTHASHONLY) { hash = tulip_mchash((caddr_t)eaddr); sp[hash >> 4] |= htole32(1 << (hash & 0xF)); } else { sp[39] = TULIP_SP_MAC(eaddr[0]); sp[40] = TULIP_SP_MAC(eaddr[1]); sp[41] = TULIP_SP_MAC(eaddr[2]); } } } if ((sc->tulip_flags & (TULIP_WANTHASHPERFECT|TULIP_WANTHASHONLY)) == 0) { u_int32_t *sp = sc->tulip_setupdata; int idx = 0; if ((sc->tulip_flags & TULIP_ALLMULTI) == 0) { /* * Otherwise we can use perfect filtering for up to 16 addresses. */ CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; addrp = LLADDR((struct sockaddr_dl *)ifma->ifma_addr); *sp++ = TULIP_SP_MAC(((u_int16_t *)addrp)[0]); *sp++ = TULIP_SP_MAC(((u_int16_t *)addrp)[1]); *sp++ = TULIP_SP_MAC(((u_int16_t *)addrp)[2]); idx++; } /* * Add the broadcast address. */ idx++; *sp++ = TULIP_SP_MAC(0xFFFF); *sp++ = TULIP_SP_MAC(0xFFFF); *sp++ = TULIP_SP_MAC(0xFFFF); } /* * Pad the rest with our hardware address. */ for (; idx < 16; idx++) { *sp++ = TULIP_SP_MAC(eaddr[0]); *sp++ = TULIP_SP_MAC(eaddr[1]); *sp++ = TULIP_SP_MAC(eaddr[2]); } } if_maddr_runlock(ifp); } static void tulip_reset(tulip_softc_t * const sc) { tulip_ringinfo_t *ri; tulip_descinfo_t *di; struct mbuf *m; u_int32_t inreset = (sc->tulip_flags & TULIP_INRESET); TULIP_LOCK_ASSERT(sc); CTR1(KTR_TULIP, "tulip_reset: inreset %d", inreset); /* * Brilliant. Simply brilliant. When switching modes/speeds * on a 2114*, you need to set the appropriate MII/PCS/SCL/PS * bits in CSR6 and then do a software reset to get the 21140 * to properly reset its internal pathways to the right places. * Grrrr. */ if ((sc->tulip_flags & TULIP_DEVICEPROBE) == 0 && sc->tulip_boardsw->bd_media_preset != NULL) (*sc->tulip_boardsw->bd_media_preset)(sc); TULIP_CSR_WRITE(sc, csr_busmode, TULIP_BUSMODE_SWRESET); DELAY(10); /* Wait 10 microseconds (actually 50 PCI cycles, which at 33MHz comes to about two microseconds, but wait a bit longer anyway) */ if (!inreset) { sc->tulip_flags |= TULIP_INRESET; sc->tulip_flags &= ~(TULIP_NEEDRESET|TULIP_RXBUFSLOW); sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; } TULIP_CSR_WRITE(sc, csr_txlist, sc->tulip_txinfo.ri_dma_addr & 0xffffffff); TULIP_CSR_WRITE(sc, csr_rxlist, sc->tulip_rxinfo.ri_dma_addr & 0xffffffff); TULIP_CSR_WRITE(sc, csr_busmode, (1 << (3 /*pci_max_burst_len*/ + 8)) |TULIP_BUSMODE_CACHE_ALIGN8 |TULIP_BUSMODE_READMULTIPLE |(BYTE_ORDER != LITTLE_ENDIAN ? TULIP_BUSMODE_DESC_BIGENDIAN : 0)); sc->tulip_txtimer = 0; /* * Free all the mbufs that were on the transmit ring.
*/ CTR0(KTR_TULIP, "tulip_reset: drain transmit ring"); ri = &sc->tulip_txinfo; for (di = ri->ri_first; di < ri->ri_last; di++) { m = tulip_dequeue_mbuf(ri, di, SYNC_NONE); if (m != NULL) m_freem(m); di->di_desc->d_status = 0; } ri->ri_nextin = ri->ri_nextout = ri->ri_first; ri->ri_free = ri->ri_max; TULIP_TXDESC_PRESYNC(ri); /* * We need to collect all the mbufs that were on the * receive ring before we reinit it either to put * them back on or to know if we have to allocate * more. */ CTR0(KTR_TULIP, "tulip_reset: drain receive ring"); ri = &sc->tulip_rxinfo; ri->ri_nextin = ri->ri_nextout = ri->ri_first; ri->ri_free = ri->ri_max; for (di = ri->ri_first; di < ri->ri_last; di++) { di->di_desc->d_status = 0; di->di_desc->d_length1 = 0; di->di_desc->d_addr1 = 0; di->di_desc->d_length2 = 0; di->di_desc->d_addr2 = 0; } TULIP_RXDESC_PRESYNC(ri); for (di = ri->ri_first; di < ri->ri_last; di++) { m = tulip_dequeue_mbuf(ri, di, SYNC_NONE); if (m != NULL) m_freem(m); } /* * If tulip_reset is being called recursively, exit quickly knowing * that when the outer tulip_reset returns all the right stuff will * have happened. */ if (inreset) return; sc->tulip_intrmask |= TULIP_STS_NORMALINTR|TULIP_STS_RXINTR|TULIP_STS_TXINTR |TULIP_STS_ABNRMLINTR|TULIP_STS_SYSERROR|TULIP_STS_TXSTOPPED |TULIP_STS_TXUNDERFLOW|TULIP_STS_TXBABBLE |TULIP_STS_RXSTOPPED; if ((sc->tulip_flags & TULIP_DEVICEPROBE) == 0) (*sc->tulip_boardsw->bd_media_select)(sc); #if defined(TULIP_DEBUG) if ((sc->tulip_flags & TULIP_NEEDRESET) == TULIP_NEEDRESET) device_printf(sc->tulip_dev, "tulip_reset: additional reset needed?!?\n"); #endif if (bootverbose) tulip_media_print(sc); if (sc->tulip_features & TULIP_HAVE_DUALSENSE) TULIP_CSR_WRITE(sc, csr_sia_status, TULIP_CSR_READ(sc, csr_sia_status)); sc->tulip_flags &= ~(TULIP_DOINGSETUP|TULIP_WANTSETUP|TULIP_INRESET |TULIP_RXACT); } static void tulip_init(void *arg) { tulip_softc_t *sc = (tulip_softc_t *)arg; TULIP_LOCK(sc); tulip_init_locked(sc); TULIP_UNLOCK(sc); } static void tulip_init_locked(tulip_softc_t * const sc) { CTR0(KTR_TULIP, "tulip_init_locked"); if (sc->tulip_ifp->if_flags & IFF_UP) { if ((sc->tulip_ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { /* initialize the media */ CTR0(KTR_TULIP, "tulip_init_locked: up but not running, reset chip"); tulip_reset(sc); } tulip_addr_filter(sc); sc->tulip_ifp->if_drv_flags |= IFF_DRV_RUNNING; if (sc->tulip_ifp->if_flags & IFF_PROMISC) { sc->tulip_flags |= TULIP_PROMISC; sc->tulip_cmdmode |= TULIP_CMD_PROMISCUOUS; sc->tulip_intrmask |= TULIP_STS_TXINTR; } else { sc->tulip_flags &= ~TULIP_PROMISC; sc->tulip_cmdmode &= ~TULIP_CMD_PROMISCUOUS; if (sc->tulip_flags & TULIP_ALLMULTI) { sc->tulip_cmdmode |= TULIP_CMD_ALLMULTI; } else { sc->tulip_cmdmode &= ~TULIP_CMD_ALLMULTI; } } sc->tulip_cmdmode |= TULIP_CMD_TXRUN; if ((sc->tulip_flags & (TULIP_TXPROBE_ACTIVE|TULIP_WANTSETUP)) == 0) { tulip_rx_intr(sc); sc->tulip_cmdmode |= TULIP_CMD_RXRUN; sc->tulip_intrmask |= TULIP_STS_RXSTOPPED; } else { sc->tulip_ifp->if_drv_flags |= IFF_DRV_OACTIVE; sc->tulip_cmdmode &= ~TULIP_CMD_RXRUN; sc->tulip_intrmask &= ~TULIP_STS_RXSTOPPED; } CTR2(KTR_TULIP, "tulip_init_locked: intr mask %08x cmdmode %08x", sc->tulip_intrmask, sc->tulip_cmdmode); TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); CTR1(KTR_TULIP, "tulip_init_locked: status %08x\n", TULIP_CSR_READ(sc, csr_status)); if ((sc->tulip_flags & (TULIP_WANTSETUP|TULIP_TXPROBE_ACTIVE)) == TULIP_WANTSETUP) tulip_txput_setup(sc); 
callout_reset(&sc->tulip_stat_timer, hz, tulip_watchdog, sc); } else { CTR0(KTR_TULIP, "tulip_init_locked: not up, reset chip"); sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_RUNNING; tulip_reset(sc); tulip_addr_filter(sc); callout_stop(&sc->tulip_stat_timer); } } #define DESC_STATUS(di) (((volatile tulip_desc_t *)((di)->di_desc))->d_status) #define DESC_FLAG(di) ((di)->di_desc->d_flag) static void tulip_rx_intr(tulip_softc_t * const sc) { TULIP_PERFSTART(rxintr) tulip_ringinfo_t * const ri = &sc->tulip_rxinfo; struct ifnet * const ifp = sc->tulip_ifp; int fillok = 1; #if defined(TULIP_DEBUG) int cnt = 0; #endif TULIP_LOCK_ASSERT(sc); CTR0(KTR_TULIP, "tulip_rx_intr: start"); for (;;) { TULIP_PERFSTART(rxget) tulip_descinfo_t *eop = ri->ri_nextin, *dip; int total_len = 0, last_offset = 0; struct mbuf *ms = NULL, *me = NULL; int accept = 0; int error; if (fillok && (ri->ri_max - ri->ri_free) < TULIP_RXQ_TARGET) goto queue_mbuf; #if defined(TULIP_DEBUG) if (cnt == ri->ri_max) break; #endif /* * If the TULIP has no descriptors, there can't be any receive * descriptors to process. */ if (eop == ri->ri_nextout) break; /* * 90% of the packets will fit in one descriptor. So we optimize * for that case. */ TULIP_RXDESC_POSTSYNC(ri); if ((DESC_STATUS(eop) & (TULIP_DSTS_OWNER|TULIP_DSTS_RxFIRSTDESC|TULIP_DSTS_RxLASTDESC)) == (TULIP_DSTS_RxFIRSTDESC|TULIP_DSTS_RxLASTDESC)) { ms = tulip_dequeue_mbuf(ri, eop, SYNC_RX); CTR2(KTR_TULIP, "tulip_rx_intr: single packet mbuf %p from descriptor %td", ms, eop - ri->ri_first); me = ms; ri->ri_free++; } else { /* * If still owned by the TULIP, don't touch it. */ if (DESC_STATUS(eop) & TULIP_DSTS_OWNER) break; /* * It is possible (though improbable unless MCLBYTES < 1518) for * a received packet to cross more than one receive descriptor. * We first loop through the descriptor ring making sure we have * received a complete packet. If not, we bail until the next * interrupt. */ dip = eop; while ((DESC_STATUS(eop) & TULIP_DSTS_RxLASTDESC) == 0) { if (++eop == ri->ri_last) eop = ri->ri_first; TULIP_RXDESC_POSTSYNC(ri); if (eop == ri->ri_nextout || DESC_STATUS(eop) & TULIP_DSTS_OWNER) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_rxintrs++; sc->tulip_dbg.dbg_rxpktsperintr[cnt]++; #endif TULIP_PERFEND(rxget); TULIP_PERFEND(rxintr); return; } total_len++; } /* * Dequeue the first buffer for the start of the packet. Hopefully * this will be the only one we need to dequeue. However, if the * packet consumed multiple descriptors, then we need to dequeue * those buffers and chain to the starting mbuf. All buffers but * the last buffer have the same length so we can set that now. * (we add to last_offset instead of multiplying since we normally * won't go into the loop and thereby saving ourselves from * doing a multiplication by 0 in the normal case). */ ms = tulip_dequeue_mbuf(ri, dip, SYNC_RX); CTR2(KTR_TULIP, "tulip_rx_intr: start packet mbuf %p from descriptor %td", ms, dip - ri->ri_first); ri->ri_free++; for (me = ms; total_len > 0; total_len--) { me->m_len = TULIP_RX_BUFLEN; last_offset += TULIP_RX_BUFLEN; if (++dip == ri->ri_last) dip = ri->ri_first; me->m_next = tulip_dequeue_mbuf(ri, dip, SYNC_RX); ri->ri_free++; me = me->m_next; CTR2(KTR_TULIP, "tulip_rx_intr: cont packet mbuf %p from descriptor %td", me, dip - ri->ri_first); } KASSERT(dip == eop, ("mismatched descinfo structs")); } /* * Now get the size of received packet (minus the CRC). 
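* The chip reports the byte count of a completed frame in the status
* word of the last descriptor; the extraction below is simply (a
* sketch mirroring the code, not a separate code path):
*
*	len = ((status >> 16) & 0x7FFF) - ETHER_CRC_LEN;
*
* dropping the four bytes of FCS that the chip counts.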
*/ total_len = ((DESC_STATUS(eop) >> 16) & 0x7FFF) - ETHER_CRC_LEN; if ((sc->tulip_flags & TULIP_RXIGNORE) == 0 && ((DESC_STATUS(eop) & TULIP_DSTS_ERRSUM) == 0)) { me->m_len = total_len - last_offset; sc->tulip_flags |= TULIP_RXACT; accept = 1; CTR1(KTR_TULIP, "tulip_rx_intr: good packet; length %d", total_len); } else { CTR1(KTR_TULIP, "tulip_rx_intr: bad packet; status %08x", DESC_STATUS(eop)); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); if (DESC_STATUS(eop) & (TULIP_DSTS_RxBADLENGTH|TULIP_DSTS_RxOVERFLOW|TULIP_DSTS_RxWATCHDOG)) { sc->tulip_dot3stats.dot3StatsInternalMacReceiveErrors++; } else { #if defined(TULIP_VERBOSE) const char *error = NULL; #endif if (DESC_STATUS(eop) & TULIP_DSTS_RxTOOLONG) { sc->tulip_dot3stats.dot3StatsFrameTooLongs++; #if defined(TULIP_VERBOSE) error = "frame too long"; #endif } if (DESC_STATUS(eop) & TULIP_DSTS_RxBADCRC) { if (DESC_STATUS(eop) & TULIP_DSTS_RxDRBBLBIT) { sc->tulip_dot3stats.dot3StatsAlignmentErrors++; #if defined(TULIP_VERBOSE) error = "alignment error"; #endif } else { sc->tulip_dot3stats.dot3StatsFCSErrors++; #if defined(TULIP_VERBOSE) error = "bad crc"; #endif } } #if defined(TULIP_VERBOSE) if (error != NULL && (sc->tulip_flags & TULIP_NOMESSAGES) == 0) { device_printf(sc->tulip_dev, "receive: %6D: %s\n", mtod(ms, u_char *) + 6, ":", error); sc->tulip_flags |= TULIP_NOMESSAGES; } #endif } } #if defined(TULIP_DEBUG) cnt++; #endif if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); if (++eop == ri->ri_last) eop = ri->ri_first; ri->ri_nextin = eop; queue_mbuf: /* * We have received a good packet that needs to be passed up the * stack. */ if (accept) { struct mbuf *m0; KASSERT(ms != NULL, ("no packet to accept")); #ifndef __NO_STRICT_ALIGNMENT /* * Copy the data into a new mbuf that is properly aligned. If * we fail to allocate a new mbuf, then drop the packet. We will * reuse the same rx buffer ('ms') below for another packet * regardless. */ m0 = m_devget(mtod(ms, caddr_t), total_len, ETHER_ALIGN, ifp, NULL); if (m0 == NULL) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); goto skip_input; } #else /* * Update the header for the mbuf referencing this receive * buffer and pass it up the stack. Allocate a new mbuf cluster * to replace the one we just passed up the stack. * * Note that if this packet crossed multiple descriptors * we don't even try to reallocate all the mbufs here. * Instead we rely on the test at the beginning of * the loop to refill for the extra consumed mbufs. */ ms->m_pkthdr.len = total_len; ms->m_pkthdr.rcvif = ifp; m0 = ms; ms = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); #endif TULIP_UNLOCK(sc); CTR1(KTR_TULIP, "tulip_rx_intr: passing %p to upper layer", m0); (*ifp->if_input)(ifp, m0); TULIP_LOCK(sc); } else if (ms == NULL) /* * If we are priming the TULIP with mbufs, then allocate * a new cluster for the next descriptor. */ ms = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); #ifndef __NO_STRICT_ALIGNMENT skip_input: #endif if (ms == NULL) { /* * Couldn't allocate a new buffer. Don't bother * trying to replenish the receive queue. */ fillok = 0; sc->tulip_flags |= TULIP_RXBUFSLOW; #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_rxlowbufs++; #endif TULIP_PERFEND(rxget); continue; } /* * Now give the buffer(s) to the TULIP and save in our * receive queue. 
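* Each pass of the loop below recycles one cluster: load its dmamap,
* hand the descriptor back to the chip, and advance the ring. In
* outline (same calls as the code that follows, argument names
* abbreviated):
*
*	bus_dmamap_load_mbuf(tag, map, ms, tulip_dma_map_rxbuf, desc,
*	    BUS_DMA_NOWAIT);
*	desc->d_status = TULIP_DSTS_OWNER;
*	if (++ri->ri_nextout == ri->ri_last) ri->ri_nextout = ri->ri_first;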
*/ do { tulip_descinfo_t * const nextout = ri->ri_nextout; M_ASSERTPKTHDR(ms); KASSERT(ms->m_data == ms->m_ext.ext_buf, ("rx mbuf data doesn't point to cluster")); ms->m_len = ms->m_pkthdr.len = TULIP_RX_BUFLEN; error = bus_dmamap_load_mbuf(ri->ri_data_tag, *nextout->di_map, ms, tulip_dma_map_rxbuf, nextout->di_desc, BUS_DMA_NOWAIT); if (error) { device_printf(sc->tulip_dev, "unable to load rx map, error = %d\n", error); panic("tulip_rx_intr"); /* XXX */ } nextout->di_desc->d_status = TULIP_DSTS_OWNER; KASSERT(nextout->di_mbuf == NULL, ("clobbering earlier rx mbuf")); nextout->di_mbuf = ms; CTR2(KTR_TULIP, "tulip_rx_intr: enqueued mbuf %p to descriptor %td", ms, nextout - ri->ri_first); TULIP_RXDESC_POSTSYNC(ri); if (++ri->ri_nextout == ri->ri_last) ri->ri_nextout = ri->ri_first; ri->ri_free--; me = ms->m_next; ms->m_next = NULL; } while ((ms = me) != NULL); if ((ri->ri_max - ri->ri_free) >= TULIP_RXQ_TARGET) sc->tulip_flags &= ~TULIP_RXBUFSLOW; TULIP_PERFEND(rxget); } #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_rxintrs++; sc->tulip_dbg.dbg_rxpktsperintr[cnt]++; #endif TULIP_PERFEND(rxintr); } static int tulip_tx_intr(tulip_softc_t * const sc) { TULIP_PERFSTART(txintr) tulip_ringinfo_t * const ri = &sc->tulip_txinfo; struct mbuf *m; int xmits = 0; int descs = 0; CTR0(KTR_TULIP, "tulip_tx_intr: start"); TULIP_LOCK_ASSERT(sc); while (ri->ri_free < ri->ri_max) { u_int32_t d_flag; TULIP_TXDESC_POSTSYNC(ri); if (DESC_STATUS(ri->ri_nextin) & TULIP_DSTS_OWNER) break; ri->ri_free++; descs++; d_flag = DESC_FLAG(ri->ri_nextin); if (d_flag & TULIP_DFLAG_TxLASTSEG) { if (d_flag & TULIP_DFLAG_TxSETUPPKT) { CTR2(KTR_TULIP, "tulip_tx_intr: setup packet from descriptor %td: %08x", ri->ri_nextin - ri->ri_first, DESC_STATUS(ri->ri_nextin)); /* * We've just finished processing a setup packet. * Mark that we finished it. If there's not * another pending, start up the TULIP receiver. * Make sure we ack the RXSTOPPED so we won't get * an abnormal interrupt indication. */ bus_dmamap_sync(sc->tulip_setup_tag, sc->tulip_setup_map, BUS_DMASYNC_POSTWRITE); sc->tulip_flags &= ~(TULIP_DOINGSETUP|TULIP_HASHONLY); if (DESC_FLAG(ri->ri_nextin) & TULIP_DFLAG_TxINVRSFILT) sc->tulip_flags |= TULIP_HASHONLY; if ((sc->tulip_flags & (TULIP_WANTSETUP|TULIP_TXPROBE_ACTIVE)) == 0) { tulip_rx_intr(sc); sc->tulip_cmdmode |= TULIP_CMD_RXRUN; sc->tulip_intrmask |= TULIP_STS_RXSTOPPED; CTR2(KTR_TULIP, "tulip_tx_intr: intr mask %08x cmdmode %08x", sc->tulip_intrmask, sc->tulip_cmdmode); TULIP_CSR_WRITE(sc, csr_status, TULIP_STS_RXSTOPPED); TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); } } else { const u_int32_t d_status = DESC_STATUS(ri->ri_nextin); m = tulip_dequeue_mbuf(ri, ri->ri_nextin, SYNC_TX); CTR2(KTR_TULIP, "tulip_tx_intr: data packet %p from descriptor %td", m, ri->ri_nextin - ri->ri_first); if (m != NULL) { m_freem(m); #if defined(TULIP_DEBUG) } else { device_printf(sc->tulip_dev, "tx_intr: failed to dequeue mbuf?!?\n"); #endif } if (sc->tulip_flags & TULIP_TXPROBE_ACTIVE) { tulip_mediapoll_event_t event = TULIP_MEDIAPOLL_TXPROBE_OK; if (d_status & (TULIP_DSTS_TxNOCARR|TULIP_DSTS_TxEXCCOLL)) { #if defined(TULIP_DEBUG) if (d_status & TULIP_DSTS_TxNOCARR) sc->tulip_dbg.dbg_txprobe_nocarr++; if (d_status & TULIP_DSTS_TxEXCCOLL) sc->tulip_dbg.dbg_txprobe_exccoll++; #endif event = TULIP_MEDIAPOLL_TXPROBE_FAILED; } (*sc->tulip_boardsw->bd_media_poll)(sc, event); /* * Escape from the loop before media poll has reset the TULIP!
*/ break; } else { xmits++; if (d_status & TULIP_DSTS_ERRSUM) { CTR1(KTR_TULIP, "tulip_tx_intr: output error: %08x", d_status); if_inc_counter(sc->tulip_ifp, IFCOUNTER_OERRORS, 1); if (d_status & TULIP_DSTS_TxEXCCOLL) sc->tulip_dot3stats.dot3StatsExcessiveCollisions++; if (d_status & TULIP_DSTS_TxLATECOLL) sc->tulip_dot3stats.dot3StatsLateCollisions++; if (d_status & (TULIP_DSTS_TxNOCARR|TULIP_DSTS_TxCARRLOSS)) sc->tulip_dot3stats.dot3StatsCarrierSenseErrors++; if (d_status & (TULIP_DSTS_TxUNDERFLOW|TULIP_DSTS_TxBABBLE)) sc->tulip_dot3stats.dot3StatsInternalMacTransmitErrors++; if (d_status & TULIP_DSTS_TxUNDERFLOW) sc->tulip_dot3stats.dot3StatsInternalTransmitUnderflows++; if (d_status & TULIP_DSTS_TxBABBLE) sc->tulip_dot3stats.dot3StatsInternalTransmitBabbles++; } else { u_int32_t collisions = (d_status & TULIP_DSTS_TxCOLLMASK) >> TULIP_DSTS_V_TxCOLLCNT; CTR2(KTR_TULIP, "tulip_tx_intr: output ok, collisions %d, status %08x", collisions, d_status); if_inc_counter(sc->tulip_ifp, IFCOUNTER_COLLISIONS, collisions); if (collisions == 1) sc->tulip_dot3stats.dot3StatsSingleCollisionFrames++; else if (collisions > 1) sc->tulip_dot3stats.dot3StatsMultipleCollisionFrames++; else if (d_status & TULIP_DSTS_TxDEFERRED) sc->tulip_dot3stats.dot3StatsDeferredTransmissions++; /* * SQE is only valid for 10baseT/BNC/AUI when not * running in full-duplex. In order to speed up the * test, the corresponding bit in tulip_flags needs to * be set as well to get us to count SQE Test Errors. */ if (d_status & TULIP_DSTS_TxNOHRTBT & sc->tulip_flags) sc->tulip_dot3stats.dot3StatsSQETestErrors++; } } } if (++ri->ri_nextin == ri->ri_last) ri->ri_nextin = ri->ri_first; if ((sc->tulip_flags & TULIP_TXPROBE_ACTIVE) == 0) sc->tulip_ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; } /* * If nothing is left to transmit, disable the timer; * else, if we made progress, reset the timer back to 2 ticks.
*/ if (ri->ri_free == ri->ri_max || (sc->tulip_flags & TULIP_TXPROBE_ACTIVE)) sc->tulip_txtimer = 0; else if (xmits > 0) sc->tulip_txtimer = TULIP_TXTIMER; if_inc_counter(sc->tulip_ifp, IFCOUNTER_OPACKETS, xmits); TULIP_PERFEND(txintr); return descs; } static void tulip_print_abnormal_interrupt(tulip_softc_t * const sc, u_int32_t csr) { const char * const *msgp = tulip_status_bits; const char *sep; u_int32_t mask; const char thrsh[] = "72|128\0\0\0" "96|256\0\0\0" "128|512\0\0" "160|1024"; TULIP_LOCK_ASSERT(sc); csr &= (1 << (sizeof(tulip_status_bits)/sizeof(tulip_status_bits[0]))) - 1; device_printf(sc->tulip_dev, "abnormal interrupt:"); for (sep = " ", mask = 1; mask <= csr; mask <<= 1, msgp++) { if ((csr & mask) && *msgp != NULL) { printf("%s%s", sep, *msgp); if (mask == TULIP_STS_TXUNDERFLOW && (sc->tulip_flags & TULIP_NEWTXTHRESH)) { sc->tulip_flags &= ~TULIP_NEWTXTHRESH; if (sc->tulip_cmdmode & TULIP_CMD_STOREFWD) { printf(" (switching to store-and-forward mode)"); } else { printf(" (raising TX threshold to %s)", &thrsh[9 * ((sc->tulip_cmdmode & TULIP_CMD_THRESHOLDCTL) >> 14)]); } } sep = ", "; } } printf("\n"); } static void tulip_intr_handler(tulip_softc_t * const sc) { TULIP_PERFSTART(intr) u_int32_t csr; CTR0(KTR_TULIP, "tulip_intr_handler invoked"); TULIP_LOCK_ASSERT(sc); while ((csr = TULIP_CSR_READ(sc, csr_status)) & sc->tulip_intrmask) { TULIP_CSR_WRITE(sc, csr_status, csr); if (csr & TULIP_STS_SYSERROR) { sc->tulip_last_system_error = (csr & TULIP_STS_ERRORMASK) >> TULIP_STS_ERR_SHIFT; if (sc->tulip_flags & TULIP_NOMESSAGES) { sc->tulip_flags |= TULIP_SYSTEMERROR; } else { device_printf(sc->tulip_dev, "system error: %s\n", tulip_system_errors[sc->tulip_last_system_error]); } sc->tulip_flags |= TULIP_NEEDRESET; sc->tulip_system_errors++; break; } if (csr & (TULIP_STS_LINKPASS|TULIP_STS_LINKFAIL) & sc->tulip_intrmask) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_link_intrs++; #endif if (sc->tulip_boardsw->bd_media_poll != NULL) { (*sc->tulip_boardsw->bd_media_poll)(sc, csr & TULIP_STS_LINKFAIL ? TULIP_MEDIAPOLL_LINKFAIL : TULIP_MEDIAPOLL_LINKPASS); csr &= ~TULIP_STS_ABNRMLINTR; } tulip_media_print(sc); } if (csr & (TULIP_STS_RXINTR|TULIP_STS_RXNOBUF)) { u_int32_t misses = TULIP_CSR_READ(sc, csr_missed_frames); if (csr & TULIP_STS_RXNOBUF) sc->tulip_dot3stats.dot3StatsMissedFrames += misses & 0xFFFF; /* * Pass 2.[012] of the 21140A-A[CDE] may hang and/or corrupt data * on receive overflows. */ if ((misses & 0x0FFE0000) && (sc->tulip_features & TULIP_HAVE_RXBADOVRFLW)) { sc->tulip_dot3stats.dot3StatsInternalMacReceiveErrors++; /* * Stop the receiver process and spin until it's stopped. * Tell rx_intr to drop the packets it dequeues. */ TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode & ~TULIP_CMD_RXRUN); while ((TULIP_CSR_READ(sc, csr_status) & TULIP_STS_RXSTOPPED) == 0) ; TULIP_CSR_WRITE(sc, csr_status, TULIP_STS_RXSTOPPED); sc->tulip_flags |= TULIP_RXIGNORE; } tulip_rx_intr(sc); if (sc->tulip_flags & TULIP_RXIGNORE) { /* * Restart the receiver. 
*/ sc->tulip_flags &= ~TULIP_RXIGNORE; TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); } } if (csr & TULIP_STS_ABNRMLINTR) { u_int32_t tmp = csr & sc->tulip_intrmask & ~(TULIP_STS_NORMALINTR|TULIP_STS_ABNRMLINTR); if (csr & TULIP_STS_TXUNDERFLOW) { if ((sc->tulip_cmdmode & TULIP_CMD_THRESHOLDCTL) != TULIP_CMD_THRSHLD160) { sc->tulip_cmdmode += TULIP_CMD_THRSHLD96; sc->tulip_flags |= TULIP_NEWTXTHRESH; } else if (sc->tulip_features & TULIP_HAVE_STOREFWD) { sc->tulip_cmdmode |= TULIP_CMD_STOREFWD; sc->tulip_flags |= TULIP_NEWTXTHRESH; } } if (sc->tulip_flags & TULIP_NOMESSAGES) { sc->tulip_statusbits |= tmp; } else { tulip_print_abnormal_interrupt(sc, tmp); sc->tulip_flags |= TULIP_NOMESSAGES; } TULIP_CSR_WRITE(sc, csr_command, sc->tulip_cmdmode); } if (sc->tulip_flags & (TULIP_WANTTXSTART|TULIP_TXPROBE_ACTIVE|TULIP_DOINGSETUP|TULIP_PROMISC)) { tulip_tx_intr(sc); if ((sc->tulip_flags & TULIP_TXPROBE_ACTIVE) == 0) tulip_start_locked(sc); } } if (sc->tulip_flags & TULIP_NEEDRESET) { tulip_reset(sc); tulip_init_locked(sc); } TULIP_PERFEND(intr); } static void tulip_intr_shared(void *arg) { tulip_softc_t * sc = arg; for (; sc != NULL; sc = sc->tulip_slaves) { TULIP_LOCK(sc); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_intrs++; #endif tulip_intr_handler(sc); TULIP_UNLOCK(sc); } } static void tulip_intr_normal(void *arg) { tulip_softc_t * sc = (tulip_softc_t *) arg; TULIP_LOCK(sc); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_intrs++; #endif tulip_intr_handler(sc); TULIP_UNLOCK(sc); } static struct mbuf * tulip_txput(tulip_softc_t * const sc, struct mbuf *m) { TULIP_PERFSTART(txput) tulip_ringinfo_t * const ri = &sc->tulip_txinfo; tulip_descinfo_t *eop, *nextout; int segcnt, free; u_int32_t d_status; bus_dma_segment_t segs[TULIP_MAX_TXSEG]; bus_dmamap_t *map; int error, nsegs; struct mbuf *m0; TULIP_LOCK_ASSERT(sc); #if defined(TULIP_DEBUG) if ((sc->tulip_cmdmode & TULIP_CMD_TXRUN) == 0) { device_printf(sc->tulip_dev, "txput%s: tx not running\n", (sc->tulip_flags & TULIP_TXPROBE_ACTIVE) ? "(probe)" : ""); sc->tulip_flags |= TULIP_WANTTXSTART; sc->tulip_dbg.dbg_txput_finishes[0]++; goto finish; } #endif /* * Now we try to fill in our transmit descriptors. This is * a bit reminiscent of going on the Ark two by two * since each descriptor for the TULIP can describe * two buffers. So we advance through packet filling * each of the two entries at a time to fill each * descriptor. Clear the first and last segment bits * in each descriptor (actually just clear everything * but the end-of-ring or chain bits) to make sure * we don't get messed up by previously sent packets. * * We may fail to put the entire packet on the ring if * there is either not enough ring entries free or if the * packet has more than MAX_TXSEG segments. In the former * case we will just wait for the ring to empty. In the * latter case we have to recopy. */ #if defined(KTR) && KTR_TULIP segcnt = 1; m0 = m; while (m0->m_next != NULL) { segcnt++; m0 = m0->m_next; } CTR2(KTR_TULIP, "tulip_txput: sending packet %p (%d chunks)", m, segcnt); #endif d_status = 0; eop = nextout = ri->ri_nextout; segcnt = 0; free = ri->ri_free; /* * Reclaim some tx descriptors if we are out since we need at least one * free descriptor so that we have a dma_map to load the mbuf. 
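* In outline, the recovery below is (a restatement of the code that
* follows, not new logic): harvest completed descriptors first, and
* only give up when even that yields nothing:
*
*	if (free == 0) free += tulip_tx_intr(sc);
*	if (free == 0) { sc->tulip_flags |= TULIP_WANTTXSTART; goto finish; }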
*/ if (free == 0) { #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_no_txmaps++; #endif free += tulip_tx_intr(sc); } if (free == 0) { sc->tulip_flags |= TULIP_WANTTXSTART; #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txput_finishes[1]++; #endif goto finish; } error = bus_dmamap_load_mbuf_sg(ri->ri_data_tag, *eop->di_map, m, segs, &nsegs, BUS_DMA_NOWAIT); if (error != 0) { if (error == EFBIG) { /* * The packet exceeds the number of transmit buffer * entries that we can use for one packet, so we have * to recopy it into one mbuf and then try again. If * we can't recopy it, try again later. */ m0 = m_defrag(m, M_NOWAIT); if (m0 == NULL) { sc->tulip_flags |= TULIP_WANTTXSTART; #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txput_finishes[2]++; #endif goto finish; } m = m0; error = bus_dmamap_load_mbuf_sg(ri->ri_data_tag, *eop->di_map, m, segs, &nsegs, BUS_DMA_NOWAIT); } if (error != 0) { device_printf(sc->tulip_dev, "unable to load tx map, error = %d\n", error); #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txput_finishes[3]++; #endif goto finish; } } CTR1(KTR_TULIP, "tulip_txput: nsegs %d", nsegs); /* * Each descriptor allows for up to 2 fragments since we don't use * the descriptor chaining mode in this driver. */ if ((free -= (nsegs + 1) / 2) <= 0 /* * See if there's any unclaimed space in the transmit ring. */ && (free += tulip_tx_intr(sc)) <= 0) { /* * There's no more room but since nothing * has been committed at this point, just * show output is active, put back the * mbuf and return. */ sc->tulip_flags |= TULIP_WANTTXSTART; #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txput_finishes[4]++; #endif bus_dmamap_unload(ri->ri_data_tag, *eop->di_map); goto finish; } for (; nsegs - segcnt > 1; segcnt += 2) { eop = nextout; eop->di_desc->d_flag &= TULIP_DFLAG_ENDRING|TULIP_DFLAG_CHAIN; eop->di_desc->d_status = d_status; eop->di_desc->d_addr1 = segs[segcnt].ds_addr & 0xffffffff; eop->di_desc->d_length1 = segs[segcnt].ds_len; eop->di_desc->d_addr2 = segs[segcnt+1].ds_addr & 0xffffffff; eop->di_desc->d_length2 = segs[segcnt+1].ds_len; d_status = TULIP_DSTS_OWNER; if (++nextout == ri->ri_last) nextout = ri->ri_first; } if (segcnt < nsegs) { eop = nextout; eop->di_desc->d_flag &= TULIP_DFLAG_ENDRING|TULIP_DFLAG_CHAIN; eop->di_desc->d_status = d_status; eop->di_desc->d_addr1 = segs[segcnt].ds_addr & 0xffffffff; eop->di_desc->d_length1 = segs[segcnt].ds_len; eop->di_desc->d_addr2 = 0; eop->di_desc->d_length2 = 0; if (++nextout == ri->ri_last) nextout = ri->ri_first; } /* * tulip_tx_intr() harvests the mbuf from the last descriptor in the * frame. We just used the dmamap in the first descriptor for the * load operation however. Thus, to let the tulip_dequeue_mbuf() call * in tulip_tx_intr() unload the correct dmamap, we swap the dmamap * pointers in the two descriptors if this is a multiple-descriptor * packet. */ if (eop != ri->ri_nextout) { map = eop->di_map; eop->di_map = ri->ri_nextout->di_map; ri->ri_nextout->di_map = map; } /* * bounce a copy to the bpf listener, if any. */ if (!(sc->tulip_flags & TULIP_DEVICEPROBE)) BPF_MTAP(sc->tulip_ifp, m); /* * The descriptors have been filled in. Now get ready * to transmit. 
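* Ordering matters here: every descriptor of the frame is staged first,
* and the OWNER bit of the first descriptor is flipped last, with a
* descriptor sync around it, before the transmit poll doorbell is rung
* (a sketch of the sequence below):
*
*	ri->ri_nextout->di_desc->d_status = TULIP_DSTS_OWNER;
*	TULIP_TXDESC_PRESYNC(ri);
*	TULIP_CSR_WRITE(sc, csr_txpoll, 1);
*
* Until that OWNER store, the whole frame can still be backed out.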
*/ CTR3(KTR_TULIP, "tulip_txput: enqueued mbuf %p to descriptors %td - %td", m, ri->ri_nextout - ri->ri_first, eop - ri->ri_first); KASSERT(eop->di_mbuf == NULL, ("clobbering earlier tx mbuf")); eop->di_mbuf = m; TULIP_TXMAP_PRESYNC(ri, ri->ri_nextout); m = NULL; /* * Make sure the next descriptor after this packet is owned * by us since it may have been set up above if we ran out * of room in the ring. */ nextout->di_desc->d_status = 0; TULIP_TXDESC_PRESYNC(ri); /* * Mark the last and first segments, indicate we want a transmit * complete interrupt, and tell it to transmit! */ eop->di_desc->d_flag |= TULIP_DFLAG_TxLASTSEG|TULIP_DFLAG_TxWANTINTR; /* * Note that ri->ri_nextout is still the start of the packet * and until we set the OWNER bit, we can still back out of * everything we have done. */ ri->ri_nextout->di_desc->d_flag |= TULIP_DFLAG_TxFIRSTSEG; TULIP_TXDESC_PRESYNC(ri); ri->ri_nextout->di_desc->d_status = TULIP_DSTS_OWNER; TULIP_TXDESC_PRESYNC(ri); /* * This advances the ring for us. */ ri->ri_nextout = nextout; ri->ri_free = free; TULIP_PERFEND(txput); if (sc->tulip_flags & TULIP_TXPROBE_ACTIVE) { TULIP_CSR_WRITE(sc, csr_txpoll, 1); sc->tulip_ifp->if_drv_flags |= IFF_DRV_OACTIVE; TULIP_PERFEND(txput); return NULL; } /* * Switch back to the single-queueing ifstart. */ sc->tulip_flags &= ~TULIP_WANTTXSTART; if (sc->tulip_txtimer == 0) sc->tulip_txtimer = TULIP_TXTIMER; #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txput_finishes[5]++; #endif /* * If we want a txstart, there must not be enough space in the * transmit ring. So we want to enable transmit done interrupts * so we can immediately reclaim some space. When the transmit * interrupt is posted, the interrupt handler will call tx_intr * to reclaim space and then txstart (since WANTTXSTART is set). * txstart will move the packet into the transmit ring and clear * WANTTXSTART thereby causing TXINTR to be cleared. */ finish: #if defined(TULIP_DEBUG) sc->tulip_dbg.dbg_txput_finishes[6]++; #endif if (sc->tulip_flags & (TULIP_WANTTXSTART|TULIP_DOINGSETUP)) { sc->tulip_ifp->if_drv_flags |= IFF_DRV_OACTIVE; if ((sc->tulip_intrmask & TULIP_STS_TXINTR) == 0) { sc->tulip_intrmask |= TULIP_STS_TXINTR; TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); } } else if ((sc->tulip_flags & TULIP_PROMISC) == 0) { if (sc->tulip_intrmask & TULIP_STS_TXINTR) { sc->tulip_intrmask &= ~TULIP_STS_TXINTR; TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); } } TULIP_CSR_WRITE(sc, csr_txpoll, 1); TULIP_PERFEND(txput); return m; } static void tulip_txput_setup(tulip_softc_t * const sc) { tulip_ringinfo_t * const ri = &sc->tulip_txinfo; tulip_desc_t *nextout; TULIP_LOCK_ASSERT(sc); /* * We will transmit, at most, one setup packet per call to ifstart. */ #if defined(TULIP_DEBUG) if ((sc->tulip_cmdmode & TULIP_CMD_TXRUN) == 0) { device_printf(sc->tulip_dev, "txput_setup: tx not running\n"); sc->tulip_flags |= TULIP_WANTTXSTART; return; } #endif /* * Try to reclaim some free descriptors. */ if (ri->ri_free < 2) tulip_tx_intr(sc); if ((sc->tulip_flags & TULIP_DOINGSETUP) || ri->ri_free == 1) { sc->tulip_flags |= TULIP_WANTTXSTART; return; } bcopy(sc->tulip_setupdata, sc->tulip_setupbuf, sizeof(sc->tulip_setupdata)); /* * Clear WANTSETUP and set DOINGSETUP. Since we know that WANTSETUP is * set and DOINGSETUP is clear, doing an XOR of the two will DTRT.
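* Concretely, given that invariant, the XOR below is equivalent to the
* two-step form
*
*	sc->tulip_flags &= ~TULIP_WANTSETUP;
*	sc->tulip_flags |= TULIP_DOINGSETUP;
*
* collapsed into a single store.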
*/ sc->tulip_flags ^= TULIP_WANTSETUP|TULIP_DOINGSETUP; ri->ri_free--; nextout = ri->ri_nextout->di_desc; nextout->d_flag &= TULIP_DFLAG_ENDRING|TULIP_DFLAG_CHAIN; nextout->d_flag |= TULIP_DFLAG_TxFIRSTSEG|TULIP_DFLAG_TxLASTSEG |TULIP_DFLAG_TxSETUPPKT|TULIP_DFLAG_TxWANTINTR; if (sc->tulip_flags & TULIP_WANTHASHPERFECT) nextout->d_flag |= TULIP_DFLAG_TxHASHFILT; else if (sc->tulip_flags & TULIP_WANTHASHONLY) nextout->d_flag |= TULIP_DFLAG_TxHASHFILT|TULIP_DFLAG_TxINVRSFILT; nextout->d_length2 = 0; nextout->d_addr2 = 0; nextout->d_length1 = sizeof(sc->tulip_setupdata); nextout->d_addr1 = sc->tulip_setup_dma_addr & 0xffffffff; bus_dmamap_sync(sc->tulip_setup_tag, sc->tulip_setup_map, BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE); TULIP_TXDESC_PRESYNC(ri); CTR1(KTR_TULIP, "tulip_txput_setup: using descriptor %td", ri->ri_nextout - ri->ri_first); /* * Advance the ring for the next transmit packet. */ if (++ri->ri_nextout == ri->ri_last) ri->ri_nextout = ri->ri_first; /* * Make sure the next descriptor is owned by us since it * may have been set up above if we ran out of room in the * ring. */ ri->ri_nextout->di_desc->d_status = 0; TULIP_TXDESC_PRESYNC(ri); nextout->d_status = TULIP_DSTS_OWNER; /* * Flush the ownership of the current descriptor. */ TULIP_TXDESC_PRESYNC(ri); TULIP_CSR_WRITE(sc, csr_txpoll, 1); if ((sc->tulip_intrmask & TULIP_STS_TXINTR) == 0) { sc->tulip_intrmask |= TULIP_STS_TXINTR; TULIP_CSR_WRITE(sc, csr_intr, sc->tulip_intrmask); } } static int tulip_ifioctl(struct ifnet * ifp, u_long cmd, caddr_t data) { TULIP_PERFSTART(ifioctl) tulip_softc_t * const sc = (tulip_softc_t *)ifp->if_softc; struct ifreq *ifr = (struct ifreq *) data; int error = 0; switch (cmd) { case SIOCSIFFLAGS: { TULIP_LOCK(sc); tulip_init_locked(sc); TULIP_UNLOCK(sc); break; } case SIOCSIFMEDIA: case SIOCGIFMEDIA: { error = ifmedia_ioctl(ifp, ifr, &sc->tulip_ifmedia, cmd); break; } case SIOCADDMULTI: case SIOCDELMULTI: { /* * Update multicast listeners. */ TULIP_LOCK(sc); tulip_init_locked(sc); TULIP_UNLOCK(sc); error = 0; break; } default: { error = ether_ioctl(ifp, cmd, data); break; } } TULIP_PERFEND(ifioctl); return error; } static void tulip_start(struct ifnet * const ifp) { TULIP_PERFSTART(ifstart) tulip_softc_t * const sc = (tulip_softc_t *)ifp->if_softc; TULIP_LOCK(sc); tulip_start_locked(sc); TULIP_UNLOCK(sc); TULIP_PERFEND(ifstart); } static void tulip_start_locked(tulip_softc_t * const sc) { struct mbuf *m; TULIP_LOCK_ASSERT(sc); CTR0(KTR_TULIP, "tulip_start_locked invoked"); if ((sc->tulip_flags & (TULIP_WANTSETUP|TULIP_TXPROBE_ACTIVE)) == TULIP_WANTSETUP) tulip_txput_setup(sc); CTR1(KTR_TULIP, "tulip_start_locked: %d tx packets pending", sc->tulip_ifp->if_snd.ifq_len); while (!IFQ_DRV_IS_EMPTY(&sc->tulip_ifp->if_snd)) { IFQ_DRV_DEQUEUE(&sc->tulip_ifp->if_snd, m); if (m == NULL) break; if ((m = tulip_txput(sc, m)) != NULL) { IFQ_DRV_PREPEND(&sc->tulip_ifp->if_snd, m); break; } } } static void tulip_watchdog(void *arg) { TULIP_PERFSTART(stat) tulip_softc_t *sc = arg; #if defined(TULIP_DEBUG) u_int32_t rxintrs; #endif TULIP_LOCK_ASSERT(sc); callout_reset(&sc->tulip_stat_timer, hz, tulip_watchdog, sc); #if defined(TULIP_DEBUG) rxintrs = sc->tulip_dbg.dbg_rxintrs - sc->tulip_dbg.dbg_last_rxintrs; if (rxintrs > sc->tulip_dbg.dbg_high_rxintrs_hz) sc->tulip_dbg.dbg_high_rxintrs_hz = rxintrs; sc->tulip_dbg.dbg_last_rxintrs = sc->tulip_dbg.dbg_rxintrs; #endif /* TULIP_DEBUG */ /* * These should be rare so do a bulk test up front so we can just skip * them if needed.
*/ if (sc->tulip_flags & (TULIP_SYSTEMERROR|TULIP_RXBUFSLOW|TULIP_NOMESSAGES)) { /* * If the number of receive buffers is low, try to refill. */ if (sc->tulip_flags & TULIP_RXBUFSLOW) tulip_rx_intr(sc); if (sc->tulip_flags & TULIP_SYSTEMERROR) { if_printf(sc->tulip_ifp, "%d system errors: last was %s\n", sc->tulip_system_errors, tulip_system_errors[sc->tulip_last_system_error]); } if (sc->tulip_statusbits) { tulip_print_abnormal_interrupt(sc, sc->tulip_statusbits); sc->tulip_statusbits = 0; } sc->tulip_flags &= ~(TULIP_NOMESSAGES|TULIP_SYSTEMERROR); } if (sc->tulip_txtimer) tulip_tx_intr(sc); if (sc->tulip_txtimer && --sc->tulip_txtimer == 0) { if_printf(sc->tulip_ifp, "transmission timeout\n"); if (TULIP_DO_AUTOSENSE(sc)) { sc->tulip_media = TULIP_MEDIA_UNKNOWN; sc->tulip_probe_state = TULIP_PROBE_INACTIVE; sc->tulip_flags &= ~(TULIP_WANTRXACT|TULIP_LINKUP); } tulip_reset(sc); tulip_init_locked(sc); } TULIP_PERFEND(stat); TULIP_PERFMERGE(sc, perf_intr_cycles); TULIP_PERFMERGE(sc, perf_ifstart_cycles); TULIP_PERFMERGE(sc, perf_ifioctl_cycles); TULIP_PERFMERGE(sc, perf_stat_cycles); TULIP_PERFMERGE(sc, perf_timeout_cycles); TULIP_PERFMERGE(sc, perf_ifstart_one_cycles); TULIP_PERFMERGE(sc, perf_txput_cycles); TULIP_PERFMERGE(sc, perf_txintr_cycles); TULIP_PERFMERGE(sc, perf_rxintr_cycles); TULIP_PERFMERGE(sc, perf_rxget_cycles); TULIP_PERFMERGE(sc, perf_intr); TULIP_PERFMERGE(sc, perf_ifstart); TULIP_PERFMERGE(sc, perf_ifioctl); TULIP_PERFMERGE(sc, perf_stat); TULIP_PERFMERGE(sc, perf_timeout); TULIP_PERFMERGE(sc, perf_ifstart_one); TULIP_PERFMERGE(sc, perf_txput); TULIP_PERFMERGE(sc, perf_txintr); TULIP_PERFMERGE(sc, perf_rxintr); TULIP_PERFMERGE(sc, perf_rxget); } static void tulip_attach(tulip_softc_t * const sc) { struct ifnet *ifp; ifp = sc->tulip_ifp = if_alloc(IFT_ETHER); /* XXX: driver name/unit should be set some other way */ if_initname(ifp, "de", sc->tulip_unit); ifp->if_softc = sc; ifp->if_flags = IFF_BROADCAST|IFF_SIMPLEX|IFF_MULTICAST; ifp->if_ioctl = tulip_ifioctl; ifp->if_start = tulip_start; ifp->if_init = tulip_init; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_snd.ifq_drv_maxlen = ifqmaxlen; IFQ_SET_READY(&ifp->if_snd); device_printf(sc->tulip_dev, "%s%s pass %d.%d%s\n", sc->tulip_boardid, tulip_chipdescs[sc->tulip_chipid], (sc->tulip_revinfo & 0xF0) >> 4, sc->tulip_revinfo & 0x0F, (sc->tulip_features & (TULIP_HAVE_ISVSROM|TULIP_HAVE_OKSROM)) == TULIP_HAVE_ISVSROM ? " (invalid EESPROM checksum)" : ""); TULIP_LOCK(sc); (*sc->tulip_boardsw->bd_media_probe)(sc); ifmedia_init(&sc->tulip_ifmedia, 0, tulip_ifmedia_change, tulip_ifmedia_status); tulip_ifmedia_add(sc); tulip_reset(sc); TULIP_UNLOCK(sc); ether_ifattach(sc->tulip_ifp, sc->tulip_enaddr); TULIP_LOCK(sc); sc->tulip_flags &= ~TULIP_DEVICEPROBE; TULIP_UNLOCK(sc); + + gone_by_fcp101_dev(sc->tulip_dev); } /* Release memory for a single descriptor ring. */ static void tulip_busdma_freering(tulip_ringinfo_t *ri) { int i; /* Release the DMA maps and tag for data buffers. */ if (ri->ri_data_maps != NULL) { for (i = 0; i < ri->ri_max; i++) { if (ri->ri_data_maps[i] != NULL) { bus_dmamap_destroy(ri->ri_data_tag, ri->ri_data_maps[i]); ri->ri_data_maps[i] = NULL; } } free(ri->ri_data_maps, M_DEVBUF); ri->ri_data_maps = NULL; } if (ri->ri_data_tag != NULL) { bus_dma_tag_destroy(ri->ri_data_tag); ri->ri_data_tag = NULL; } /* Release the DMA memory and tag for the ring descriptors.
*/ if (ri->ri_dma_addr != 0) { bus_dmamap_unload(ri->ri_ring_tag, ri->ri_ring_map); ri->ri_dma_addr = 0; } if (ri->ri_descs != NULL) { bus_dmamem_free(ri->ri_ring_tag, ri->ri_descs, ri->ri_ring_map); ri->ri_descs = NULL; } if (ri->ri_ring_tag != NULL) { bus_dma_tag_destroy(ri->ri_ring_tag); ri->ri_ring_tag = NULL; } } /* Allocate memory for a single descriptor ring. */ static int tulip_busdma_allocring(device_t dev, tulip_softc_t * const sc, size_t count, bus_size_t align, int nsegs, tulip_ringinfo_t *ri, const char *name) { size_t size; int error, i; /* First, setup a tag. */ ri->ri_max = count; size = count * sizeof(tulip_desc_t); error = bus_dma_tag_create(bus_get_dma_tag(dev), 32, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, size, 1, size, 0, NULL, NULL, &ri->ri_ring_tag); if (error) { device_printf(dev, "failed to allocate %s descriptor ring dma tag\n", name); return (error); } /* Next, allocate memory for the descriptors. */ error = bus_dmamem_alloc(ri->ri_ring_tag, (void **)&ri->ri_descs, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &ri->ri_ring_map); if (error) { device_printf(dev, "failed to allocate memory for %s descriptor ring\n", name); return (error); } /* Map the descriptors. */ error = bus_dmamap_load(ri->ri_ring_tag, ri->ri_ring_map, ri->ri_descs, size, tulip_dma_map_addr, &ri->ri_dma_addr, BUS_DMA_NOWAIT); if (error) { device_printf(dev, "failed to get dma address for %s descriptor ring\n", name); return (error); } /* Allocate a tag for the data buffers. */ error = bus_dma_tag_create(bus_get_dma_tag(dev), align, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, MCLBYTES * nsegs, nsegs, MCLBYTES, 0, NULL, NULL, &ri->ri_data_tag); if (error) { device_printf(dev, "failed to allocate %s buffer dma tag\n", name); return (error); } /* Allocate maps for the data buffers. */ ri->ri_data_maps = malloc(sizeof(bus_dmamap_t) * count, M_DEVBUF, M_WAITOK | M_ZERO); for (i = 0; i < count; i++) { error = bus_dmamap_create(ri->ri_data_tag, 0, &ri->ri_data_maps[i]); if (error) { device_printf(dev, "failed to create map for %s buffer %d\n", name, i); return (error); } } return (0); } /* Release busdma maps, tags, and memory. */ static void tulip_busdma_cleanup(tulip_softc_t * const sc) { /* Release resources for the setup descriptor. */ if (sc->tulip_setup_dma_addr != 0) { bus_dmamap_unload(sc->tulip_setup_tag, sc->tulip_setup_map); sc->tulip_setup_dma_addr = 0; } if (sc->tulip_setupbuf != NULL) { bus_dmamem_free(sc->tulip_setup_tag, sc->tulip_setupbuf, sc->tulip_setup_map); sc->tulip_setupbuf = NULL; } if (sc->tulip_setup_tag != NULL) { bus_dma_tag_destroy(sc->tulip_setup_tag); sc->tulip_setup_tag = NULL; } /* Release the transmit ring. */ tulip_busdma_freering(&sc->tulip_txinfo); /* Release the receive ring. */ tulip_busdma_freering(&sc->tulip_rxinfo); } static int tulip_busdma_init(device_t dev, tulip_softc_t * const sc) { int error; /* * Allocate space and dmamap for transmit ring. */ error = tulip_busdma_allocring(dev, sc, TULIP_TXDESCS, 1, TULIP_MAX_TXSEG, &sc->tulip_txinfo, "transmit"); if (error) return (error); /* * Allocate space and dmamap for receive ring. We tell bus_dma that * we can map MCLBYTES so that it will accept a full MCLBYTES cluster, * but we will only map the first TULIP_RX_BUFLEN bytes. This is not * a waste in practice though as an ethernet frame can easily fit * in TULIP_RX_BUFLEN bytes. 
*/ error = tulip_busdma_allocring(dev, sc, TULIP_RXDESCS, 4, 1, &sc->tulip_rxinfo, "receive"); if (error) return (error); /* * Allocate a DMA tag, memory, and map for setup descriptor */ error = bus_dma_tag_create(bus_get_dma_tag(dev), 32, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, sizeof(sc->tulip_setupdata), 1, sizeof(sc->tulip_setupdata), 0, NULL, NULL, &sc->tulip_setup_tag); if (error) { device_printf(dev, "failed to allocate setup descriptor dma tag\n"); return (error); } error = bus_dmamem_alloc(sc->tulip_setup_tag, (void **)&sc->tulip_setupbuf, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->tulip_setup_map); if (error) { device_printf(dev, "failed to allocate memory for setup descriptor\n"); return (error); } error = bus_dmamap_load(sc->tulip_setup_tag, sc->tulip_setup_map, sc->tulip_setupbuf, sizeof(sc->tulip_setupdata), tulip_dma_map_addr, &sc->tulip_setup_dma_addr, BUS_DMA_NOWAIT); if (error) { device_printf(dev, "failed to get dma address for setup descriptor\n"); return (error); } return error; } static void tulip_initcsrs(tulip_softc_t * const sc, tulip_csrptr_t csr_base, size_t csr_size) { sc->tulip_csrs.csr_busmode = csr_base + 0 * csr_size; sc->tulip_csrs.csr_txpoll = csr_base + 1 * csr_size; sc->tulip_csrs.csr_rxpoll = csr_base + 2 * csr_size; sc->tulip_csrs.csr_rxlist = csr_base + 3 * csr_size; sc->tulip_csrs.csr_txlist = csr_base + 4 * csr_size; sc->tulip_csrs.csr_status = csr_base + 5 * csr_size; sc->tulip_csrs.csr_command = csr_base + 6 * csr_size; sc->tulip_csrs.csr_intr = csr_base + 7 * csr_size; sc->tulip_csrs.csr_missed_frames = csr_base + 8 * csr_size; sc->tulip_csrs.csr_9 = csr_base + 9 * csr_size; sc->tulip_csrs.csr_10 = csr_base + 10 * csr_size; sc->tulip_csrs.csr_11 = csr_base + 11 * csr_size; sc->tulip_csrs.csr_12 = csr_base + 12 * csr_size; sc->tulip_csrs.csr_13 = csr_base + 13 * csr_size; sc->tulip_csrs.csr_14 = csr_base + 14 * csr_size; sc->tulip_csrs.csr_15 = csr_base + 15 * csr_size; } static int tulip_initring( device_t dev, tulip_softc_t * const sc, tulip_ringinfo_t * const ri, int ndescs) { int i; ri->ri_descinfo = malloc(sizeof(tulip_descinfo_t) * ndescs, M_DEVBUF, M_WAITOK | M_ZERO); for (i = 0; i < ndescs; i++) { ri->ri_descinfo[i].di_desc = &ri->ri_descs[i]; ri->ri_descinfo[i].di_map = &ri->ri_data_maps[i]; } ri->ri_first = ri->ri_descinfo; ri->ri_max = ndescs; ri->ri_last = ri->ri_first + ri->ri_max; bzero(ri->ri_descs, sizeof(tulip_desc_t) * ri->ri_max); ri->ri_last[-1].di_desc->d_flag = TULIP_DFLAG_ENDRING; return (0); } /* * This is the PCI configuration support. 
*/ #define PCI_CBIO PCIR_BAR(0) /* Configuration Base IO Address */ #define PCI_CBMA PCIR_BAR(1) /* Configuration Base Memory Address */ #define PCI_CFDA 0x40 /* Configuration Driver Area */ static int tulip_pci_probe(device_t dev) { const char *name = NULL; if (pci_get_vendor(dev) != DEC_VENDORID) return ENXIO; /* * Some LanMedia WAN cards use the Tulip chip, but they have * their own driver, and we should not recognize them */ if (pci_get_subvendor(dev) == 0x1376) return ENXIO; switch (pci_get_device(dev)) { case CHIPID_21040: name = "Digital 21040 Ethernet"; break; case CHIPID_21041: name = "Digital 21041 Ethernet"; break; case CHIPID_21140: if (pci_get_revid(dev) >= 0x20) name = "Digital 21140A Fast Ethernet"; else name = "Digital 21140 Fast Ethernet"; break; case CHIPID_21142: if (pci_get_revid(dev) >= 0x20) name = "Digital 21143 Fast Ethernet"; else name = "Digital 21142 Fast Ethernet"; break; } if (name) { device_set_desc(dev, name); return BUS_PROBE_LOW_PRIORITY; } return ENXIO; } static int tulip_shutdown(device_t dev) { tulip_softc_t * const sc = device_get_softc(dev); TULIP_CSR_WRITE(sc, csr_busmode, TULIP_BUSMODE_SWRESET); DELAY(10); /* Wait 10 microseconds (actually 50 PCI cycles but at 33MHz that comes to two microseconds but wait a bit longer anyways) */ return 0; } static int tulip_pci_attach(device_t dev) { tulip_softc_t *sc; int retval, idx; u_int32_t revinfo, cfdainfo; unsigned csroffset = TULIP_PCI_CSROFFSET; unsigned csrsize = TULIP_PCI_CSRSIZE; tulip_csrptr_t csr_base; tulip_chipid_t chipid = TULIP_CHIPID_UNKNOWN; struct resource *res; int rid, unit; unit = device_get_unit(dev); if (unit >= TULIP_MAX_DEVICES) { device_printf(dev, "not configured; limit of %d reached or exceeded\n", TULIP_MAX_DEVICES); return ENXIO; } revinfo = pci_get_revid(dev); cfdainfo = pci_read_config(dev, PCI_CFDA, 4); /* turn busmaster on in case BIOS doesn't set it */ pci_enable_busmaster(dev); if (pci_get_vendor(dev) == DEC_VENDORID) { if (pci_get_device(dev) == CHIPID_21040) chipid = TULIP_21040; else if (pci_get_device(dev) == CHIPID_21041) chipid = TULIP_21041; else if (pci_get_device(dev) == CHIPID_21140) chipid = (revinfo >= 0x20) ? TULIP_21140A : TULIP_21140; else if (pci_get_device(dev) == CHIPID_21142) chipid = (revinfo >= 0x20) ? 
TULIP_21143 : TULIP_21142; } if (chipid == TULIP_CHIPID_UNKNOWN) return ENXIO; if (chipid == TULIP_21040 && revinfo < 0x20) { device_printf(dev, "not configured; 21040 pass 2.0 required (%d.%d found)\n", revinfo >> 4, revinfo & 0x0f); return ENXIO; } else if (chipid == TULIP_21140 && revinfo < 0x11) { device_printf(dev, "not configured; 21140 pass 1.1 required (%d.%d found)\n", revinfo >> 4, revinfo & 0x0f); return ENXIO; } sc = device_get_softc(dev); sc->tulip_dev = dev; sc->tulip_pci_busno = pci_get_bus(dev); sc->tulip_pci_devno = pci_get_slot(dev); sc->tulip_chipid = chipid; sc->tulip_flags |= TULIP_DEVICEPROBE; if (chipid == TULIP_21140 || chipid == TULIP_21140A) sc->tulip_features |= TULIP_HAVE_GPR|TULIP_HAVE_STOREFWD; if (chipid == TULIP_21140A && revinfo <= 0x22) sc->tulip_features |= TULIP_HAVE_RXBADOVRFLW; if (chipid == TULIP_21140) sc->tulip_features |= TULIP_HAVE_BROKEN_HASH; if (chipid != TULIP_21040 && chipid != TULIP_21140) sc->tulip_features |= TULIP_HAVE_POWERMGMT; if (chipid == TULIP_21041 || chipid == TULIP_21142 || chipid == TULIP_21143) { sc->tulip_features |= TULIP_HAVE_DUALSENSE; if (chipid != TULIP_21041 || revinfo >= 0x20) sc->tulip_features |= TULIP_HAVE_SIANWAY; if (chipid != TULIP_21041) sc->tulip_features |= TULIP_HAVE_SIAGP|TULIP_HAVE_RXBADOVRFLW|TULIP_HAVE_STOREFWD; if (chipid != TULIP_21041 && revinfo >= 0x20) sc->tulip_features |= TULIP_HAVE_SIA100; } if (sc->tulip_features & TULIP_HAVE_POWERMGMT && (cfdainfo & (TULIP_CFDA_SLEEP|TULIP_CFDA_SNOOZE))) { cfdainfo &= ~(TULIP_CFDA_SLEEP|TULIP_CFDA_SNOOZE); pci_write_config(dev, PCI_CFDA, cfdainfo, 4); DELAY(11*1000); } sc->tulip_unit = unit; sc->tulip_revinfo = revinfo; #if defined(TULIP_IOMAPPED) rid = PCI_CBIO; res = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &rid, RF_ACTIVE); #else rid = PCI_CBMA; res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); #endif if (!res) return ENXIO; sc->tulip_csrs_bst = rman_get_bustag(res); sc->tulip_csrs_bsh = rman_get_bushandle(res); csr_base = 0; mtx_init(TULIP_MUTEX(sc), MTX_NETWORK_LOCK, device_get_nameunit(dev), MTX_DEF); callout_init_mtx(&sc->tulip_callout, TULIP_MUTEX(sc), 0); callout_init_mtx(&sc->tulip_stat_timer, TULIP_MUTEX(sc), 0); tulips[unit] = sc; tulip_initcsrs(sc, csr_base + csroffset, csrsize); if ((retval = tulip_busdma_init(dev, sc)) != 0) { device_printf(dev, "error initing bus_dma: %d\n", retval); tulip_busdma_cleanup(sc); mtx_destroy(TULIP_MUTEX(sc)); return ENXIO; } retval = tulip_initring(dev, sc, &sc->tulip_rxinfo, TULIP_RXDESCS); if (retval == 0) retval = tulip_initring(dev, sc, &sc->tulip_txinfo, TULIP_TXDESCS); if (retval) { tulip_busdma_cleanup(sc); mtx_destroy(TULIP_MUTEX(sc)); return retval; } /* * Make sure there won't be any interrupts or such... 
*/ TULIP_CSR_WRITE(sc, csr_busmode, TULIP_BUSMODE_SWRESET); DELAY(100); /* Wait 10 microseconds (actually 50 PCI cycles but at 33MHz that comes to two microseconds but wait a bit longer anyways) */ TULIP_LOCK(sc); retval = tulip_read_macaddr(sc); TULIP_UNLOCK(sc); if (retval < 0) { device_printf(dev, "can't read ENET ROM (why=%d) (", retval); for (idx = 0; idx < 32; idx++) printf("%02x", sc->tulip_rombuf[idx]); printf("\n"); device_printf(dev, "%s%s pass %d.%d\n", sc->tulip_boardid, tulip_chipdescs[sc->tulip_chipid], (sc->tulip_revinfo & 0xF0) >> 4, sc->tulip_revinfo & 0x0F); device_printf(dev, "address unknown\n"); } else { void (*intr_rtn)(void *) = tulip_intr_normal; if (sc->tulip_features & TULIP_HAVE_SHAREDINTR) intr_rtn = tulip_intr_shared; tulip_attach(sc); /* Setup interrupt last. */ if ((sc->tulip_features & TULIP_HAVE_SLAVEDINTR) == 0) { void *ih; rid = 0; res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (res == NULL || bus_setup_intr(dev, res, INTR_TYPE_NET | INTR_MPSAFE, NULL, intr_rtn, sc, &ih)) { device_printf(dev, "couldn't map interrupt\n"); tulip_busdma_cleanup(sc); ether_ifdetach(sc->tulip_ifp); if_free(sc->tulip_ifp); mtx_destroy(TULIP_MUTEX(sc)); return ENXIO; } } } return 0; } static device_method_t tulip_pci_methods[] = { /* Device interface */ DEVMETHOD(device_probe, tulip_pci_probe), DEVMETHOD(device_attach, tulip_pci_attach), DEVMETHOD(device_shutdown, tulip_shutdown), { 0, 0 } }; static driver_t tulip_pci_driver = { "de", tulip_pci_methods, sizeof(tulip_softc_t), }; static devclass_t tulip_devclass; DRIVER_MODULE(de, pci, tulip_pci_driver, tulip_devclass, 0, 0); #ifdef DDB void tulip_dumpring(int unit, int ring); void tulip_dumpdesc(int unit, int ring, int desc); void tulip_status(int unit); void tulip_dumpring(int unit, int ring) { tulip_softc_t *sc; tulip_ringinfo_t *ri; tulip_descinfo_t *di; if (unit < 0 || unit >= TULIP_MAX_DEVICES) { db_printf("invalid unit %d\n", unit); return; } sc = tulips[unit]; if (sc == NULL) { db_printf("unit %d not present\n", unit); return; } switch (ring) { case 0: db_printf("receive ring:\n"); ri = &sc->tulip_rxinfo; break; case 1: db_printf("transmit ring:\n"); ri = &sc->tulip_txinfo; break; default: db_printf("invalid ring %d\n", ring); return; } db_printf(" nextin: %td, nextout: %td, max: %d, free: %d\n", ri->ri_nextin - ri->ri_first, ri->ri_nextout - ri->ri_first, ri->ri_max, ri->ri_free); for (di = ri->ri_first; di != ri->ri_last; di++) { if (di->di_mbuf != NULL) db_printf(" descriptor %td: mbuf %p\n", di - ri->ri_first, di->di_mbuf); else if (di->di_desc->d_flag & TULIP_DFLAG_TxSETUPPKT) db_printf(" descriptor %td: setup packet\n", di - ri->ri_first); } } void tulip_dumpdesc(int unit, int ring, int desc) { tulip_softc_t *sc; tulip_ringinfo_t *ri; tulip_descinfo_t *di; char *s; if (unit < 0 || unit >= TULIP_MAX_DEVICES) { db_printf("invalid unit %d\n", unit); return; } sc = tulips[unit]; if (sc == NULL) { db_printf("unit %d not present\n", unit); return; } switch (ring) { case 0: s = "receive"; ri = &sc->tulip_rxinfo; break; case 1: s = "transmit"; ri = &sc->tulip_txinfo; break; default: db_printf("invalid ring %d\n", ring); return; } if (desc < 0 || desc >= ri->ri_max) { db_printf("invalid descriptor %d\n", desc); return; } db_printf("%s descriptor %d:\n", s, desc); di = &ri->ri_first[desc]; db_printf(" mbuf: %p\n", di->di_mbuf); db_printf(" status: %08x flag: %03x\n", di->di_desc->d_status, di->di_desc->d_flag); db_printf(" addr1: %08x len1: %03x\n", di->di_desc->d_addr1, 
di->di_desc->d_length1); db_printf(" addr2: %08x len2: %03x\n", di->di_desc->d_addr2, di->di_desc->d_length2); } #endif Index: stable/12/sys/dev/dme/if_dme.c =================================================================== --- stable/12/sys/dev/dme/if_dme.c (revision 339734) +++ stable/12/sys/dev/dme/if_dme.c (revision 339735) @@ -1,1061 +1,1064 @@ /* * Copyright (C) 2015 Alexander Kabaev * Copyright (C) 2010 Andrew Turner * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* A driver for the Davicom DM9000 MAC. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "miibus_if.h" struct dme_softc { struct ifnet *dme_ifp; device_t dme_dev; device_t dme_miibus; bus_space_handle_t dme_handle; bus_space_tag_t dme_tag; int dme_rev; int dme_bits; struct resource *dme_res; struct resource *dme_irq; void *dme_intrhand; struct mtx dme_mtx; struct callout dme_tick_ch; struct gpiobus_pin *gpio_rset; uint32_t dme_ticks; uint8_t dme_macaddr[ETHER_ADDR_LEN]; regulator_t dme_vcc_regulator; uint8_t dme_txbusy: 1; uint8_t dme_txready: 1; uint16_t dme_txlen; }; #define DME_CHIP_DM9000 0x00 #define DME_CHIP_DM9000A 0x19 #define DME_CHIP_DM9000B 0x1a #define DME_INT_PHY 1 static int dme_probe(device_t); static int dme_attach(device_t); static int dme_detach(device_t); static void dme_intr(void *arg); static void dme_init_locked(struct dme_softc *); static void dme_prepare(struct dme_softc *); static void dme_transmit(struct dme_softc *); static int dme_miibus_writereg(device_t dev, int phy, int reg, int data); static int dme_miibus_readreg(device_t dev, int phy, int reg); /* The bit on the address bus attached to the CMD pin */ #define BASE_ADDR 0x000 #define CMD_ADDR BASE_ADDR #define DATA_BIT 1 #define DATA_ADDR 0x002 #undef DME_TRACE #ifdef DME_TRACE #define DTR3 TR3 #define DTR4 TR4 #else #define NOTR(args...) 
(void)0 #define DTR3 NOTR #define DTR4 NOTR #endif static uint8_t dme_read_reg(struct dme_softc *sc, uint8_t reg) { /* Send the register to read from */ bus_space_write_1(sc->dme_tag, sc->dme_handle, CMD_ADDR, reg); bus_space_barrier(sc->dme_tag, sc->dme_handle, CMD_ADDR, 1, BUS_SPACE_BARRIER_WRITE); /* Get the value of the register */ return bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR); } static void dme_write_reg(struct dme_softc *sc, uint8_t reg, uint8_t value) { /* Send the register to write to */ bus_space_write_1(sc->dme_tag, sc->dme_handle, CMD_ADDR, reg); bus_space_barrier(sc->dme_tag, sc->dme_handle, CMD_ADDR, 1, BUS_SPACE_BARRIER_WRITE); /* Write the value to the register */ bus_space_write_1(sc->dme_tag, sc->dme_handle, DATA_ADDR, value); bus_space_barrier(sc->dme_tag, sc->dme_handle, DATA_ADDR, 1, BUS_SPACE_BARRIER_WRITE); } static void dme_reset(struct dme_softc *sc) { u_int ncr; /* Send a soft reset #1 */ dme_write_reg(sc, DME_NCR, NCR_RST | NCR_LBK_MAC); DELAY(100); /* Wait for the MAC to reset */ ncr = dme_read_reg(sc, DME_NCR); if (ncr & NCR_RST) device_printf(sc->dme_dev, "device did not complete first reset\n"); /* Send a soft reset #2 per Application Notes v1.22 */ dme_write_reg(sc, DME_NCR, 0); dme_write_reg(sc, DME_NCR, NCR_RST | NCR_LBK_MAC); DELAY(100); /* Wait for the MAC to reset */ ncr = dme_read_reg(sc, DME_NCR); if (ncr & NCR_RST) device_printf(sc->dme_dev, "device did not complete second reset\n"); /* Reset transmit state */ sc->dme_txbusy = 0; sc->dme_txready = 0; DTR3("dme_reset, flags %#x busy %d ready %d", sc->dme_ifp ? sc->dme_ifp->if_drv_flags : 0, sc->dme_txbusy, sc->dme_txready); } /* * Parse string MAC address into usable form */ static int dme_parse_macaddr(const char *str, uint8_t *mac) { int count, i; unsigned int amac[ETHER_ADDR_LEN]; /* Aligned version */ count = sscanf(str, "%x%*c%x%*c%x%*c%x%*c%x%*c%x", &amac[0], &amac[1], &amac[2], &amac[3], &amac[4], &amac[5]); if (count < ETHER_ADDR_LEN) { memset(mac, 0, ETHER_ADDR_LEN); return (1); } /* Copy aligned to result */ for (i = 0; i < ETHER_ADDR_LEN; i ++) mac[i] = (amac[i] & 0xff); return (0); } /* * Try to determine our own MAC address */ static void dme_get_macaddr(struct dme_softc *sc) { char devid_str[32]; char *var; int i; /* Cannot use resource_string_value with static hints mode */ snprintf(devid_str, 32, "hint.%s.%d.macaddr", device_get_name(sc->dme_dev), device_get_unit(sc->dme_dev)); /* Try resource hints */ if ((var = kern_getenv(devid_str)) != NULL) { if (!dme_parse_macaddr(var, sc->dme_macaddr)) { device_printf(sc->dme_dev, "MAC address: %s (hints)\n", var); return; } } /* * Try to read MAC address from the device, in case U-Boot has * pre-programmed one for us. */ for (i = 0; i < ETHER_ADDR_LEN; i++) sc->dme_macaddr[i] = dme_read_reg(sc, DME_PAR(i)); device_printf(sc->dme_dev, "MAC address %6D (existing)\n", sc->dme_macaddr, ":"); } static void dme_config(struct dme_softc *sc) { int i; /* Mask all interrupts and reset receive pointer */ dme_write_reg(sc, DME_IMR, IMR_PAR); /* Disable GPIO0 to enable the internal PHY */ dme_write_reg(sc, DME_GPCR, 1); dme_write_reg(sc, DME_GPR, 0); #if 0 /* * Supposedly requires special initialization for DSP PHYs * used by DM9000B. Maybe belongs in dedicated PHY driver? */ if (sc->dme_rev == DME_CHIP_DM9000B) { dme_miibus_writereg(sc->dme_dev, DME_INT_PHY, MII_BMCR, BMCR_RESET); dme_miibus_writereg(sc->dme_dev, DME_INT_PHY, MII_DME_DSPCR, DSPCR_INIT); /* Wait 100ms for it to complete.
*/ for (i = 0; i < 100; i++) { int reg; reg = dme_miibus_readreg(sc->dme_dev, DME_INT_PHY, MII_BMCR); if ((reg & BMCR_RESET) == 0) break; DELAY(1000); } } #endif /* Select the internal PHY and normal loopback */ dme_write_reg(sc, DME_NCR, NCR_LBK_NORMAL); /* Clear any TX requests */ dme_write_reg(sc, DME_TCR, 0); /* Setup backpressure thresholds to 4k and 600us */ dme_write_reg(sc, DME_BPTR, BPTR_BPHW(3) | BPTR_JPT(0x0f)); /* Setup flow control */ dme_write_reg(sc, DME_FCTR, FCTR_HWOT(0x3) | FCTR_LWOT(0x08)); /* Enable flow control */ dme_write_reg(sc, DME_FCR, 0xff); /* Clear special modes */ dme_write_reg(sc, DME_SMCR, 0); /* Clear TX status */ dme_write_reg(sc, DME_NSR, NSR_WAKEST | NSR_TX2END | NSR_TX1END); /* Clear interrupts */ dme_write_reg(sc, DME_ISR, 0xff); /* Set multicast address filter */ for (i = 0; i < 8; i++) dme_write_reg(sc, DME_MAR(i), 0xff); /* Set the MAC address */ for (i = 0; i < ETHER_ADDR_LEN; i++) dme_write_reg(sc, DME_PAR(i), sc->dme_macaddr[i]); /* Enable the RX buffer */ dme_write_reg(sc, DME_RCR, RCR_DIS_LONG | RCR_DIS_CRC | RCR_RXEN); /* Enable interrupts we care about */ dme_write_reg(sc, DME_IMR, IMR_PAR | IMR_PRI | IMR_PTI); } void dme_prepare(struct dme_softc *sc) { struct ifnet *ifp; struct mbuf *m, *mp; uint16_t total_len, len; DME_ASSERT_LOCKED(sc); KASSERT(sc->dme_txready == 0, ("dme_prepare: called with txready set\n")); ifp = sc->dme_ifp; IFQ_DEQUEUE(&ifp->if_snd, m); if (m == NULL) { ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; DTR3("dme_prepare none, flags %#x busy %d ready %d", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready); return; /* Nothing to transmit */ } /* Element has now been removed from the queue, so we better send it */ BPF_MTAP(ifp, m); /* Setup the controller to accept the writes */ bus_space_write_1(sc->dme_tag, sc->dme_handle, CMD_ADDR, DME_MWCMD); /* * TODO: Fix the case where an mbuf is * not a multiple of the write size.
*/ total_len = 0; for (mp = m; mp != NULL; mp = mp->m_next) { len = mp->m_len; /* Ignore empty parts */ if (len == 0) continue; total_len += len; #if 0 bus_space_write_multi_2(sc->dme_tag, sc->dme_handle, DATA_ADDR, mtod(mp, uint16_t *), (len + 1) / 2); #else bus_space_write_multi_1(sc->dme_tag, sc->dme_handle, DATA_ADDR, mtod(mp, uint8_t *), len); #endif } if (total_len % (sc->dme_bits >> 3) != 0) panic("dme_prepare: length is not compatible with IO_MODE"); sc->dme_txlen = total_len; sc->dme_txready = 1; DTR3("dme_prepare done, flags %#x busy %d ready %d", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready); m_freem(m); } void dme_transmit(struct dme_softc *sc) { DME_ASSERT_LOCKED(sc); KASSERT(sc->dme_txready, ("transmit without txready")); dme_write_reg(sc, DME_TXPLL, sc->dme_txlen & 0xff); dme_write_reg(sc, DME_TXPLH, (sc->dme_txlen >> 8) & 0xff ); /* Request to send the packet */ dme_read_reg(sc, DME_ISR); dme_write_reg(sc, DME_TCR, TCR_TXREQ); sc->dme_txready = 0; sc->dme_txbusy = 1; DTR3("dme_transmit done, flags %#x busy %d ready %d", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready); } static void dme_start_locked(struct ifnet *ifp) { struct dme_softc *sc; sc = ifp->if_softc; DME_ASSERT_LOCKED(sc); if ((ifp->if_drv_flags & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING) return; DTR3("dme_start, flags %#x busy %d ready %d", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready); KASSERT(sc->dme_txbusy == 0 || sc->dme_txready == 0, ("dme: send without empty queue\n")); dme_prepare(sc); if (sc->dme_txbusy == 0) { /* We are ready to transmit right away */ dme_transmit(sc); dme_prepare(sc); /* Prepare next one */ } /* * We need to wait until the current packet has * been transmitted. */ if (sc->dme_txready != 0) ifp->if_drv_flags |= IFF_DRV_OACTIVE; } static void dme_start(struct ifnet *ifp) { struct dme_softc *sc; sc = ifp->if_softc; DME_LOCK(sc); dme_start_locked(ifp); DME_UNLOCK(sc); } static void dme_stop(struct dme_softc *sc) { struct ifnet *ifp; DME_ASSERT_LOCKED(sc); /* Disable receiver */ dme_write_reg(sc, DME_RCR, 0x00); /* Mask interrupts */ dme_write_reg(sc, DME_IMR, 0x00); /* Stop poll */ callout_stop(&sc->dme_tick_ch); ifp = sc->dme_ifp; ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); DTR3("dme_stop, flags %#x busy %d ready %d", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready); sc->dme_txbusy = 0; sc->dme_txready = 0; } static int dme_rxeof(struct dme_softc *sc) { struct ifnet *ifp; struct mbuf *m; int len, i; DME_ASSERT_LOCKED(sc); ifp = sc->dme_ifp; /* Read the first byte to check it is correct */ (void)dme_read_reg(sc, DME_MRCMDX); i = bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR); switch(bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR)) { case 1: /* Correct value */ break; case 0: return 1; default: /* Error */ return -1; } i = dme_read_reg(sc, DME_MRRL); i |= dme_read_reg(sc, DME_MRRH) << 8; len = dme_read_reg(sc, DME_ROCR); bus_space_write_1(sc->dme_tag, sc->dme_handle, CMD_ADDR, DME_MRCMD); len = 0; switch(sc->dme_bits) { case 8: i = bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR); i <<= 8; i |= bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR); len = bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR); len |= bus_space_read_1(sc->dme_tag, sc->dme_handle, DATA_ADDR) << 8; break; case 16: bus_space_read_2(sc->dme_tag, sc->dme_handle, DATA_ADDR); len = bus_space_read_2(sc->dme_tag, sc->dme_handle, DATA_ADDR); break; case 32: { uint32_t reg; reg = bus_space_read_4(sc->dme_tag,
sc->dme_handle, DATA_ADDR); len = reg & 0xFFFF; break; } } MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) return -1; if (len > MHLEN - ETHER_ALIGN) { MCLGET(m, M_NOWAIT); if (!(m->m_flags & M_EXT)) { m_freem(m); return -1; } } m->m_pkthdr.rcvif = ifp; m->m_len = m->m_pkthdr.len = len; m_adj(m, ETHER_ALIGN); /* Read the data */ #if 0 bus_space_read_multi_2(sc->dme_tag, sc->dme_handle, DATA_ADDR, mtod(m, uint16_t *), (len + 1) / 2); #else bus_space_read_multi_1(sc->dme_tag, sc->dme_handle, DATA_ADDR, mtod(m, uint8_t *), len); #endif if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); DME_UNLOCK(sc); (*ifp->if_input)(ifp, m); DME_LOCK(sc); return 0; } static void dme_tick(void *arg) { struct dme_softc *sc; struct mii_data *mii; sc = (struct dme_softc *)arg; /* Probably too frequent? */ mii = device_get_softc(sc->dme_miibus); mii_tick(mii); callout_reset(&sc->dme_tick_ch, hz, dme_tick, sc); } static void dme_intr(void *arg) { struct dme_softc *sc; uint32_t intr_status; sc = (struct dme_softc *)arg; DME_LOCK(sc); intr_status = dme_read_reg(sc, DME_ISR); dme_write_reg(sc, DME_ISR, intr_status); DTR4("dme_intr flags %#x busy %d ready %d intr %#x", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready, intr_status); if (intr_status & ISR_PT) { uint8_t nsr, tx_status; sc->dme_txbusy = 0; nsr = dme_read_reg(sc, DME_NSR); if (nsr & NSR_TX1END) tx_status = dme_read_reg(sc, DME_TSR1); else if (nsr & NSR_TX2END) tx_status = dme_read_reg(sc, DME_TSR2); else tx_status = 1; DTR4("dme_intr flags %#x busy %d ready %d nsr %#x", sc->dme_ifp->if_drv_flags, sc->dme_txbusy, sc->dme_txready, nsr); /* Prepare packet to send if none is currently pending */ if (sc->dme_txready == 0) dme_prepare(sc); /* Send the packet out if one is waiting for transmit */ if (sc->dme_txready != 0) { /* Initiate transmission of the prepared packet */ dme_transmit(sc); /* Prepare next packet to send */ dme_prepare(sc); /* * We need to wait until the current packet has * been transmitted. */ if (sc->dme_txready != 0) sc->dme_ifp->if_drv_flags |= IFF_DRV_OACTIVE; } } if (intr_status & ISR_PR) { /* Read the packets off the device */ while (dme_rxeof(sc) == 0) continue; } DME_UNLOCK(sc); } static void dme_setmode(struct dme_softc *sc) { } static int dme_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct dme_softc *sc; struct mii_data *mii; struct ifreq *ifr; int error = 0; sc = ifp->if_softc; ifr = (struct ifreq *)data; switch (command) { case SIOCSIFFLAGS: /* * Switch interface state between "running" and * "stopped", reflecting the UP flag.
*/ DME_LOCK(sc); if (ifp->if_flags & IFF_UP) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { dme_init_locked(sc); } } else { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { dme_stop(sc); } } dme_setmode(sc); DME_UNLOCK(sc); break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: mii = device_get_softc(sc->dme_miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } static void dme_init_locked(struct dme_softc *sc) { struct ifnet *ifp = sc->dme_ifp; DME_ASSERT_LOCKED(sc); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) return; dme_reset(sc); dme_config(sc); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->dme_tick_ch, hz, dme_tick, sc); } static void dme_init(void *xcs) { struct dme_softc *sc = xcs; DME_LOCK(sc); dme_init_locked(sc); DME_UNLOCK(sc); } static int dme_ifmedia_upd(struct ifnet *ifp) { struct dme_softc *sc; struct mii_data *mii; sc = ifp->if_softc; mii = device_get_softc(sc->dme_miibus); DME_LOCK(sc); mii_mediachg(mii); DME_UNLOCK(sc); return (0); } static void dme_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { struct dme_softc *sc; struct mii_data *mii; sc = ifp->if_softc; mii = device_get_softc(sc->dme_miibus); DME_LOCK(sc); mii_pollstat(mii); ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; DME_UNLOCK(sc); } static struct ofw_compat_data compat_data[] = { { "davicom,dm9000", true }, { NULL, false } }; static int dme_probe(device_t dev) { if (!ofw_bus_search_compatible(dev, compat_data)->ocd_data) return (ENXIO); device_set_desc(dev, "Davicom DM9000"); return (0); } static int dme_attach(device_t dev) { struct dme_softc *sc; struct ifnet *ifp; int error, rid; uint32_t data; sc = device_get_softc(dev); sc->dme_dev = dev; error = 0; mtx_init(&sc->dme_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->dme_tick_ch, &sc->dme_mtx, 0); rid = 0; sc->dme_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (sc->dme_res == NULL) { device_printf(dev, "unable to map memory\n"); error = ENXIO; goto fail; } rid = 0; sc->dme_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE); if (sc->dme_irq == NULL) { device_printf(dev, "unable to map memory\n"); error = ENXIO; goto fail; } /* * Power the chip up, if necessary */ error = regulator_get_by_ofw_property(dev, 0, "vcc-supply", &sc->dme_vcc_regulator); if (error == 0) { error = regulator_enable(sc->dme_vcc_regulator); if (error != 0) { device_printf(dev, "unable to enable power supply\n"); error = ENXIO; goto fail; } } /* * Delay a little. This seems required on rev-1 boards (green.) 
*/ DELAY(100000); /* Bring controller out of reset */ error = ofw_gpiobus_parse_gpios(dev, "reset-gpios", &sc->gpio_rset); if (error > 1) { device_printf(dev, "too many reset gpios\n"); sc->gpio_rset = NULL; error = ENXIO; goto fail; } if (sc->gpio_rset != NULL) { error = GPIO_PIN_SET(sc->gpio_rset->dev, sc->gpio_rset->pin, 0); if (error != 0) { device_printf(dev, "Cannot configure GPIO pin %d on %s\n", sc->gpio_rset->pin, device_get_nameunit(sc->gpio_rset->dev)); goto fail; } error = GPIO_PIN_SETFLAGS(sc->gpio_rset->dev, sc->gpio_rset->pin, GPIO_PIN_OUTPUT); if (error != 0) { device_printf(dev, "Cannot configure GPIO pin %d on %s\n", sc->gpio_rset->pin, device_get_nameunit(sc->gpio_rset->dev)); goto fail; } DELAY(2000); error = GPIO_PIN_SET(sc->gpio_rset->dev, sc->gpio_rset->pin, 1); if (error != 0) { device_printf(dev, "Cannot configure GPIO pin %d on %s\n", sc->gpio_rset->pin, device_get_nameunit(sc->gpio_rset->dev)); goto fail; } DELAY(4000); } else device_printf(dev, "Unable to find reset GPIO\n"); sc->dme_tag = rman_get_bustag(sc->dme_res); sc->dme_handle = rman_get_bushandle(sc->dme_res); /* Reset the chip as soon as possible */ dme_reset(sc); /* Figure IO mode */ switch((dme_read_reg(sc, DME_ISR) >> 6) & 0x03) { case 0: /* 16 bit */ sc->dme_bits = 16; break; case 1: /* 32 bit */ sc->dme_bits = 32; break; case 2: /* 8 bit */ sc->dme_bits = 8; break; default: /* reserved */ device_printf(dev, "Unable to determine device mode\n"); error = ENXIO; goto fail; } DELAY(100000); /* Read the vendor ID */ data = dme_read_reg(sc, DME_VIDH) << 8; data |= dme_read_reg(sc, DME_VIDL); device_printf(dev, "Vendor ID: 0x%04x\n", data); /* Read the product ID */ data = dme_read_reg(sc, DME_PIDH) << 8; data |= dme_read_reg(sc, DME_PIDL); device_printf(dev, "Product ID: 0x%04x\n", data); /* Chip revision */ data = dme_read_reg(sc, DME_CHIPR); device_printf(dev, "Revision: 0x%04x\n", data); if (data != DME_CHIP_DM9000A && data != DME_CHIP_DM9000B) data = DME_CHIP_DM9000; sc->dme_rev = data; device_printf(dev, "using %d-bit IO mode\n", sc->dme_bits); KASSERT(sc->dme_bits == 8, ("wrong io mode")); /* Try to figure out our MAC address */ dme_get_macaddr(sc); /* Configure chip after reset */ dme_config(sc); ifp = sc->dme_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "unable to allocate ifp\n"); error = ENOSPC; goto fail; } ifp->if_softc = sc; /* Setup MII */ error = mii_attach(dev, &sc->dme_miibus, ifp, dme_ifmedia_upd, dme_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, 0); /* This should never happen as the DM9000 contains its own PHY */ if (error != 0) { device_printf(dev, "PHY probe failed\n"); goto fail; } if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_start = dme_start; ifp->if_ioctl = dme_ioctl; ifp->if_init = dme_init; IFQ_SET_MAXLEN(&ifp->if_snd, IFQ_MAXLEN); ether_ifattach(ifp, sc->dme_macaddr); error = bus_setup_intr(dev, sc->dme_irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, dme_intr, sc, &sc->dme_intrhand); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } + + gone_by_fcp101_dev(dev); + fail: if (error != 0) dme_detach(dev); return (error); } static int dme_detach(device_t dev) { struct dme_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); KASSERT(mtx_initialized(&sc->dme_mtx), ("dme mutex not initialized")); ifp = sc->dme_ifp; if (device_is_attached(dev)) { DME_LOCK(sc); dme_stop(sc); DME_UNLOCK(sc); ether_ifdetach(ifp);
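/*
 * Editorial note, not part of the committed source: the teardown order
 * here matters.  dme_stop() above ran under DME_LOCK and already did a
 * callout_stop() on dme_tick_ch, ether_ifdetach() has unhooked the
 * interface from the network stack, and the callout_drain() that follows
 * then waits out any dme_tick() invocation still in flight.  Only after
 * that are the interrupt handler, bus resources, and mutex released below.
 */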
callout_drain(&sc->dme_tick_ch); } if (sc->dme_miibus) device_delete_child(dev, sc->dme_miibus); bus_generic_detach(dev); if (sc->dme_vcc_regulator != 0) regulator_release(sc->dme_vcc_regulator); if (sc->dme_intrhand) bus_teardown_intr(dev, sc->dme_irq, sc->dme_intrhand); if (sc->dme_irq) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->dme_irq); if (sc->dme_res) bus_release_resource(dev, SYS_RES_MEMORY, 0, sc->dme_res); if (ifp != NULL) if_free(ifp); mtx_destroy(&sc->dme_mtx); return (0); } /* * The MII bus interface */ static int dme_miibus_readreg(device_t dev, int phy, int reg) { struct dme_softc *sc; int i, rval; /* We have up to 4 PHYs */ if (phy >= 4) return (0); sc = device_get_softc(dev); /* Send the register to read to the phy and start the read */ dme_write_reg(sc, DME_EPAR, (phy << 6) | reg); dme_write_reg(sc, DME_EPCR, EPCR_EPOS | EPCR_ERPRR); /* Wait for the data to be read */ for (i = 0; i < DME_TIMEOUT; i++) { if ((dme_read_reg(sc, DME_EPCR) & EPCR_ERRE) == 0) break; DELAY(1); } /* Clear the command */ dme_write_reg(sc, DME_EPCR, 0); if (i == DME_TIMEOUT) return (0); rval = (dme_read_reg(sc, DME_EPDRH) << 8) | dme_read_reg(sc, DME_EPDRL); return (rval); } static int dme_miibus_writereg(device_t dev, int phy, int reg, int data) { struct dme_softc *sc; int i; /* We have up to 4 PHYs */ if (phy > 3) return (0); sc = device_get_softc(dev); /* Send the register and data to write to the phy */ dme_write_reg(sc, DME_EPAR, (phy << 6) | reg); dme_write_reg(sc, DME_EPDRL, data & 0xFF); dme_write_reg(sc, DME_EPDRH, (data >> 8) & 0xFF); /* Start the write */ dme_write_reg(sc, DME_EPCR, EPCR_EPOS | EPCR_ERPRW); /* Wait for the data to be written */ for (i = 0; i < DME_TIMEOUT; i++) { if ((dme_read_reg(sc, DME_EPCR) & EPCR_ERRE) == 0) break; DELAY(1); } /* Clear the command */ dme_write_reg(sc, DME_EPCR, 0); return (0); } static device_method_t dme_methods[] = { /* Device interface */ DEVMETHOD(device_probe, dme_probe), DEVMETHOD(device_attach, dme_attach), DEVMETHOD(device_detach, dme_detach), /* bus interface, for miibus */ DEVMETHOD(bus_print_child, bus_generic_print_child), DEVMETHOD(bus_driver_added, bus_generic_driver_added), /* MII interface */ DEVMETHOD(miibus_readreg, dme_miibus_readreg), DEVMETHOD(miibus_writereg, dme_miibus_writereg), { 0, 0 } }; static driver_t dme_driver = { "dme", dme_methods, sizeof(struct dme_softc) }; static devclass_t dme_devclass; MODULE_DEPEND(dme, ether, 1, 1, 1); MODULE_DEPEND(dme, miibus, 1, 1, 1); DRIVER_MODULE(dme, simplebus, dme_driver, dme_devclass, 0, 0); DRIVER_MODULE(miibus, dme, miibus_driver, miibus_devclass, 0, 0); Index: stable/12/sys/dev/ed/if_ed.c =================================================================== --- stable/12/sys/dev/ed/if_ed.c (revision 339734) +++ stable/12/sys/dev/ed/if_ed.c (revision 339735) @@ -1,1857 +1,1860 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 1995, David Greenman * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice unmodified, this list of conditions, and the following * disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution.
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Device driver for National Semiconductor DS8390/WD83C690 based ethernet * adapters. By David Greenman, 29-April-1993 * * Currently supports the Western Digital/SMC 8003 and 8013 series, * the SMC Elite Ultra (8216), the 3Com 3c503, the NE1000 and NE2000, * and a variety of similar clones. * */ #include "opt_ed.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include devclass_t ed_devclass; static void ed_init(void *); static void ed_init_locked(struct ed_softc *); static int ed_ioctl(struct ifnet *, u_long, caddr_t); static void ed_start(struct ifnet *); static void ed_start_locked(struct ifnet *); static void ed_reset(struct ifnet *); static void ed_tick(void *); static void ed_watchdog(struct ed_softc *); static void ed_ds_getmcaf(struct ed_softc *, uint32_t *); static void ed_get_packet(struct ed_softc *, bus_size_t, u_short); static void ed_stop_hw(struct ed_softc *sc); static __inline void ed_rint(struct ed_softc *); static __inline void ed_xmit(struct ed_softc *); static __inline void ed_ring_copy(struct ed_softc *, bus_size_t, char *, u_short); static void ed_setrcr(struct ed_softc *); /* * Generic probe routine for testing for the existence of a DS8390. * Must be called after the NIC has just been reset. This routine * works by looking at certain register values that are guaranteed * to be initialized a certain way after power-up or reset. Seems * not to currently work on the 83C690. * * Specifically: * * Register reset bits set bits * Command Register (CR) TXP, STA RD2, STP * Interrupt Status (ISR) RST * Interrupt Mask (IMR) All bits * Data Control (DCR) LAS * Transmit Config. (TCR) LB1, LB0 * * We only look at the CR and ISR registers, however, because looking at * the others would require changing register pages (which would be * intrusive if this isn't an 8390). * * Return 1 if 8390 was found, 0 if not.
*/ int ed_probe_generic8390(struct ed_softc *sc) { if ((ed_nic_inb(sc, ED_P0_CR) & (ED_CR_RD2 | ED_CR_TXP | ED_CR_STA | ED_CR_STP)) != (ED_CR_RD2 | ED_CR_STP)) return (0); if ((ed_nic_inb(sc, ED_P0_ISR) & ED_ISR_RST) != ED_ISR_RST) return (0); return (1); } void ed_disable_16bit_access(struct ed_softc *sc) { /* * Disable 16 bit access to shared memory */ if (sc->isa16bit && sc->vendor == ED_VENDOR_WD_SMC) { if (sc->chip_type == ED_CHIP_TYPE_WD790) ed_asic_outb(sc, ED_WD_MSR, 0x00); ed_asic_outb(sc, ED_WD_LAAR, sc->wd_laar_proto & ~ED_WD_LAAR_M16EN); } } void ed_enable_16bit_access(struct ed_softc *sc) { if (sc->isa16bit && sc->vendor == ED_VENDOR_WD_SMC) { ed_asic_outb(sc, ED_WD_LAAR, sc->wd_laar_proto | ED_WD_LAAR_M16EN); if (sc->chip_type == ED_CHIP_TYPE_WD790) ed_asic_outb(sc, ED_WD_MSR, ED_WD_MSR_MENB); } } /* * Allocate a port resource with the given resource id. */ int ed_alloc_port(device_t dev, int rid, int size) { struct ed_softc *sc = device_get_softc(dev); struct resource *res; res = bus_alloc_resource_anywhere(dev, SYS_RES_IOPORT, &rid, size, RF_ACTIVE); if (res) { sc->port_res = res; sc->port_used = size; sc->port_bst = rman_get_bustag(res); sc->port_bsh = rman_get_bushandle(res); return (0); } return (ENOENT); } /* * Allocate a memory resource with the given resource id. */ int ed_alloc_memory(device_t dev, int rid, int size) { struct ed_softc *sc = device_get_softc(dev); struct resource *res; res = bus_alloc_resource_anywhere(dev, SYS_RES_MEMORY, &rid, size, RF_ACTIVE); if (res) { sc->mem_res = res; sc->mem_used = size; sc->mem_bst = rman_get_bustag(res); sc->mem_bsh = rman_get_bushandle(res); return (0); } return (ENOENT); } /* * Allocate an irq resource with the given resource id. */ int ed_alloc_irq(device_t dev, int rid, int flags) { struct ed_softc *sc = device_get_softc(dev); struct resource *res; res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE | flags); if (res) { sc->irq_res = res; return (0); } return (ENOENT); } /* * Release all resources */ void ed_release_resources(device_t dev) { struct ed_softc *sc = device_get_softc(dev); if (sc->port_res) bus_free_resource(dev, SYS_RES_IOPORT, sc->port_res); if (sc->port_res2) bus_free_resource(dev, SYS_RES_IOPORT, sc->port_res2); if (sc->mem_res) bus_free_resource(dev, SYS_RES_MEMORY, sc->mem_res); if (sc->irq_res) bus_free_resource(dev, SYS_RES_IRQ, sc->irq_res); sc->port_res = 0; sc->port_res2 = 0; sc->mem_res = 0; sc->irq_res = 0; if (sc->ifp) if_free(sc->ifp); } /* * Install interface into kernel networking data structures */ int ed_attach(device_t dev) { struct ed_softc *sc = device_get_softc(dev); struct ifnet *ifp; sc->dev = dev; ED_LOCK_INIT(sc); ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); ED_LOCK_DESTROY(sc); return (ENOSPC); } if (sc->readmem == NULL) { if (sc->mem_shared) { if (sc->isa16bit) sc->readmem = ed_shmem_readmem16; else sc->readmem = ed_shmem_readmem8; } else { sc->readmem = ed_pio_readmem; } } if (sc->sc_write_mbufs == NULL) { device_printf(dev, "No write mbufs routine set\n"); return (ENXIO); } callout_init_mtx(&sc->tick_ch, ED_MUTEX(sc), 0); /* * Set interface to stopped condition (reset) */ ed_stop_hw(sc); /* * Initialize ifnet structure */ ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_start = ed_start; ifp->if_ioctl = ed_ioctl; ifp->if_init = ed_init; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_snd.ifq_drv_maxlen = ifqmaxlen; IFQ_SET_READY(&ifp->if_snd); ifp->if_linkmib = 
&sc->mibdata; ifp->if_linkmiblen = sizeof sc->mibdata; /* * XXX - should do a better job. */ if (sc->chip_type == ED_CHIP_TYPE_WD790) sc->mibdata.dot3StatsEtherChipSet = DOT3CHIPSET(dot3VendorWesternDigital, dot3ChipSetWesternDigital83C790); else sc->mibdata.dot3StatsEtherChipSet = DOT3CHIPSET(dot3VendorNational, dot3ChipSetNational8390); sc->mibdata.dot3Compliance = DOT3COMPLIANCE_COLLS; ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; /* * Set default state for LINK2 flag (used to disable the * transceiver for AUI operation), based on config option. * We only set this flag before we attach the device, so there's * no race. It is convenient to allow users to turn this off * by default in the kernel config, but given our more advanced * boot time configuration options, this might no longer be needed. */ if (device_get_flags(dev) & ED_FLAGS_DISABLE_TRANCEIVER) ifp->if_flags |= IFF_LINK2; /* * Attach the interface */ ether_ifattach(ifp, sc->enaddr); /* device attach does transition from UNCONFIGURED to IDLE state */ sc->tx_mem = sc->txb_cnt * ED_PAGE_SIZE * ED_TXBUF_SIZE; sc->rx_mem = (sc->rec_page_stop - sc->rec_page_start) * ED_PAGE_SIZE; SYSCTL_ADD_STRING(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), 0, "type", CTLFLAG_RD, sc->type_str, 0, "Type of chip in card"); SYSCTL_ADD_UINT(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), 1, "TxMem", CTLFLAG_RD, &sc->tx_mem, 0, "Memory set aside for transmitting packets"); SYSCTL_ADD_UINT(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), 2, "RxMem", CTLFLAG_RD, &sc->rx_mem, 0, "Memory set aside for receiving packets"); SYSCTL_ADD_UINT(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), 3, "Mem", CTLFLAG_RD, &sc->mem_size, 0, "Total Card Memory"); if (bootverbose) { if (sc->type_str && (*sc->type_str != 0)) device_printf(dev, "type %s ", sc->type_str); else device_printf(dev, "type unknown (0x%x) ", sc->type); #ifdef ED_HPP if (sc->vendor == ED_VENDOR_HP) printf("(%s %s IO)", (sc->hpp_id & ED_HPP_ID_16_BIT_ACCESS) ? "16-bit" : "32-bit", sc->hpp_mem_start ? "memory mapped" : "regular"); else #endif printf("%s", sc->isa16bit ? "(16 bit)" : "(8 bit)"); #if defined(ED_HPP) || defined(ED_3C503) printf("%s", (((sc->vendor == ED_VENDOR_3COM) || (sc->vendor == ED_VENDOR_HP)) && (ifp->if_flags & IFF_LINK2)) ? " transceiver disabled" : ""); #endif printf("\n"); } + + gone_by_fcp101_dev(dev); + return (0); } /* * Detach the driver from the hardware and other systems in the kernel. */ int ed_detach(device_t dev) { struct ed_softc *sc = device_get_softc(dev); struct ifnet *ifp = sc->ifp; if (mtx_initialized(ED_MUTEX(sc))) ED_ASSERT_UNLOCKED(sc); if (ifp) { ED_LOCK(sc); if (bus_child_present(dev)) ed_stop(sc); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; ED_UNLOCK(sc); ether_ifdetach(ifp); callout_drain(&sc->tick_ch); } if (sc->irq_res != NULL && sc->irq_handle) bus_teardown_intr(dev, sc->irq_res, sc->irq_handle); ed_release_resources(dev); if (sc->miibus) device_delete_child(dev, sc->miibus); if (mtx_initialized(ED_MUTEX(sc))) ED_LOCK_DESTROY(sc); bus_generic_detach(dev); return (0); } /* * Reset interface. */ static void ed_reset(struct ifnet *ifp) { struct ed_softc *sc = ifp->if_softc; ED_ASSERT_LOCKED(sc); /* * Stop interface and re-initialize. */ ed_stop(sc); ed_init_locked(sc); } static void ed_stop_hw(struct ed_softc *sc) { int n = 5000; /* * Stop everything on the interface, and select page 0 registers.
*/ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STP); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * Wait for interface to enter stopped state, but limit # of checks to * 'n' (about 5ms). It shouldn't even take 5us on modern DS8390's, but * just in case it's an old one. * * The AX88x90 chips don't seem to implement this behavior. The * datasheets say it is only turned on when the chip enters a RESET * state and are silent about behavior for the stopped state we just * entered. */ if (sc->chip_type == ED_CHIP_TYPE_AX88190 || sc->chip_type == ED_CHIP_TYPE_AX88790) return; while (((ed_nic_inb(sc, ED_P0_ISR) & ED_ISR_RST) == 0) && --n) continue; if (n <= 0) device_printf(sc->dev, "ed_stop_hw RST never set\n"); } /* * Take interface offline. */ void ed_stop(struct ed_softc *sc) { ED_ASSERT_LOCKED(sc); callout_stop(&sc->tick_ch); ed_stop_hw(sc); } /* * Periodic timer used to drive the watchdog and attachment-specific * tick handler. */ static void ed_tick(void *arg) { struct ed_softc *sc; sc = arg; ED_ASSERT_LOCKED(sc); if (sc->sc_tick) sc->sc_tick(sc); if (sc->tx_timer != 0 && --sc->tx_timer == 0) ed_watchdog(sc); callout_reset(&sc->tick_ch, hz, ed_tick, sc); } /* * Device timeout/watchdog routine. Entered if the device neglects to * generate an interrupt after a transmit has been started on it. */ static void ed_watchdog(struct ed_softc *sc) { struct ifnet *ifp; ifp = sc->ifp; log(LOG_ERR, "%s: device timeout\n", ifp->if_xname); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); ed_reset(ifp); } /* * Initialize device. */ static void ed_init(void *xsc) { struct ed_softc *sc = xsc; ED_ASSERT_UNLOCKED(sc); ED_LOCK(sc); ed_init_locked(sc); ED_UNLOCK(sc); } static void ed_init_locked(struct ed_softc *sc) { struct ifnet *ifp = sc->ifp; int i; ED_ASSERT_LOCKED(sc); /* * Initialize the NIC in the exact order outlined in the NS manual. * This init procedure is "mandatory"...don't change what or when * things happen. */ /* reset transmitter flags */ sc->xmit_busy = 0; sc->tx_timer = 0; sc->txb_inuse = 0; sc->txb_new = 0; sc->txb_next_tx = 0; /* This variable is used below - don't move this assignment */ sc->next_packet = sc->rec_page_start + 1; /* * Set interface for page 0, Remote DMA complete, Stopped */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STP); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); if (sc->isa16bit) /* * Set FIFO threshold to 8, No auto-init Remote DMA, byte * order=80x86, word-wide DMA xfers, */ ed_nic_outb(sc, ED_P0_DCR, ED_DCR_FT1 | ED_DCR_WTS | ED_DCR_LS); else /* * Same as above, but byte-wide DMA xfers */ ed_nic_outb(sc, ED_P0_DCR, ED_DCR_FT1 | ED_DCR_LS); /* * Clear Remote Byte Count Registers */ ed_nic_outb(sc, ED_P0_RBCR0, 0); ed_nic_outb(sc, ED_P0_RBCR1, 0); /* * For the moment, don't store incoming packets in memory.
*/ ed_nic_outb(sc, ED_P0_RCR, ED_RCR_MON); /* * Place NIC in internal loopback mode */ ed_nic_outb(sc, ED_P0_TCR, ED_TCR_LB0); /* * Initialize transmit/receive (ring-buffer) Page Start */ ed_nic_outb(sc, ED_P0_TPSR, sc->tx_page_start); ed_nic_outb(sc, ED_P0_PSTART, sc->rec_page_start); /* Set lower bits of byte addressable framing to 0 */ if (sc->chip_type == ED_CHIP_TYPE_WD790) ed_nic_outb(sc, 0x09, 0); /* * Initialize Receiver (ring-buffer) Page Stop and Boundary */ ed_nic_outb(sc, ED_P0_PSTOP, sc->rec_page_stop); ed_nic_outb(sc, ED_P0_BNRY, sc->rec_page_start); /* * Clear all interrupts. A '1' in each bit position clears the * corresponding flag. */ ed_nic_outb(sc, ED_P0_ISR, 0xff); /* * Enable the following interrupts: receive/transmit complete, * receive/transmit error, and Receiver OverWrite. * * Counter overflow and Remote DMA complete are *not* enabled. */ ed_nic_outb(sc, ED_P0_IMR, ED_IMR_PRXE | ED_IMR_PTXE | ED_IMR_RXEE | ED_IMR_TXEE | ED_IMR_OVWE); /* * Program Command Register for page 1 */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_PAGE_1 | ED_CR_STP); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * Copy out our station address */ for (i = 0; i < ETHER_ADDR_LEN; ++i) ed_nic_outb(sc, ED_P1_PAR(i), IF_LLADDR(sc->ifp)[i]); /* * Set Current Page pointer to next_packet (initialized above) */ ed_nic_outb(sc, ED_P1_CURR, sc->next_packet); /* * Program Receiver Configuration Register and multicast filter. CR is * set to page 0 on return. */ ed_setrcr(sc); /* * Take interface out of loopback */ ed_nic_outb(sc, ED_P0_TCR, 0); if (sc->sc_mediachg) sc->sc_mediachg(sc); /* * Set 'running' flag, and clear output active flag. */ ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; /* * ...and attempt to start output */ ed_start_locked(ifp); callout_reset(&sc->tick_ch, hz, ed_tick, sc); } /* * This routine actually starts the transmission on the interface */ static __inline void ed_xmit(struct ed_softc *sc) { unsigned short len; len = sc->txb_len[sc->txb_next_tx]; /* * Set NIC for page 0 register access */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * Set TX buffer start page */ ed_nic_outb(sc, ED_P0_TPSR, sc->tx_page_start + sc->txb_next_tx * ED_TXBUF_SIZE); /* * Set TX length */ ed_nic_outb(sc, ED_P0_TBCR0, len); ed_nic_outb(sc, ED_P0_TBCR1, len >> 8); /* * Set page 0, Remote DMA complete, Transmit Packet, and *Start* */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_TXP | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); sc->xmit_busy = 1; /* * Point to next transmit buffer slot and wrap if necessary. */ sc->txb_next_tx++; if (sc->txb_next_tx == sc->txb_cnt) sc->txb_next_tx = 0; /* * Set a timer just in case we never hear from the board again */ sc->tx_timer = 2; } /* * Start output on interface. * We make two assumptions here: * 1) that the current priority is set to splimp _before_ this code * is called *and* is returned to the appropriate priority after * return * 2) that the IFF_DRV_OACTIVE flag is checked before this code is called * (i.e.
that the output part of the interface is idle) */ static void ed_start(struct ifnet *ifp) { struct ed_softc *sc = ifp->if_softc; ED_ASSERT_UNLOCKED(sc); ED_LOCK(sc); ed_start_locked(ifp); ED_UNLOCK(sc); } static void ed_start_locked(struct ifnet *ifp) { struct ed_softc *sc = ifp->if_softc; struct mbuf *m0, *m; bus_size_t buffer; int len; ED_ASSERT_LOCKED(sc); outloop: /* * First, see if there are buffered packets and an idle transmitter - * should never happen at this point. */ if (sc->txb_inuse && (sc->xmit_busy == 0)) { printf("ed: packets buffered, but transmitter idle\n"); ed_xmit(sc); } /* * See if there is room to put another packet in the buffer. */ if (sc->txb_inuse == sc->txb_cnt) { /* * No room. Indicate this to the outside world and exit. */ ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } IFQ_DRV_DEQUEUE(&ifp->if_snd, m); if (m == NULL) { /* * We are using the !OACTIVE flag to indicate to the outside * world that we can accept an additional packet rather than * that the transmitter is _actually_ active. Indeed, the * transmitter may be active, but if we haven't filled all the * buffers with data then we still want to accept more. */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; return; } /* * Copy the mbuf chain into the transmit buffer */ m0 = m; /* txb_new points to next open buffer slot */ buffer = sc->mem_start + (sc->txb_new * ED_TXBUF_SIZE * ED_PAGE_SIZE); len = sc->sc_write_mbufs(sc, m, buffer); if (len == 0) { m_freem(m0); goto outloop; } sc->txb_len[sc->txb_new] = max(len, (ETHER_MIN_LEN-ETHER_CRC_LEN)); sc->txb_inuse++; /* * Point to next buffer slot and wrap if necessary. */ sc->txb_new++; if (sc->txb_new == sc->txb_cnt) sc->txb_new = 0; if (sc->xmit_busy == 0) ed_xmit(sc); /* * Tap off here if there is a bpf listener. */ BPF_MTAP(ifp, m0); m_freem(m0); /* * Loop back to the top to possibly buffer more packets */ goto outloop; } /* * Ethernet interface receiver interrupt. */ static __inline void ed_rint(struct ed_softc *sc) { struct ifnet *ifp = sc->ifp; u_char boundry; u_short len; struct ed_ring packet_hdr; bus_size_t packet_ptr; ED_ASSERT_LOCKED(sc); /* * Set NIC to page 1 registers to get 'current' pointer */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_PAGE_1 | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * 'sc->next_packet' is the logical beginning of the ring-buffer - * i.e. it points to where new data has been buffered. The 'CURR' * (current) register points to the logical end of the ring-buffer - * i.e. it points to where additional new data will be added. We loop * here until the logical beginning equals the logical end (or in * other words, until the ring-buffer is empty). */ while (sc->next_packet != ed_nic_inb(sc, ED_P1_CURR)) { /* get pointer to this buffer's header structure */ packet_ptr = sc->mem_ring + (sc->next_packet - sc->rec_page_start) * ED_PAGE_SIZE; /* * The byte count includes a 4 byte header that was added by * the NIC. */ sc->readmem(sc, packet_ptr, (char *) &packet_hdr, sizeof(packet_hdr)); len = packet_hdr.count; if (len > (ETHER_MAX_LEN - ETHER_CRC_LEN + sizeof(struct ed_ring)) || len < (ETHER_MIN_LEN - ETHER_CRC_LEN + sizeof(struct ed_ring))) { /* * Length is a wild value. There's a good chance that * this was caused by the NIC being old and buggy. * The bug is that the length low byte is duplicated * in the high byte. Try to recalculate the length * based on the pointer to the next packet. 
Also, * need to preserve offset into page. * * NOTE: sc->next_packet is pointing at the current * packet. */ len &= ED_PAGE_SIZE - 1; if (packet_hdr.next_packet >= sc->next_packet) len += (packet_hdr.next_packet - sc->next_packet) * ED_PAGE_SIZE; else len += ((packet_hdr.next_packet - sc->rec_page_start) + (sc->rec_page_stop - sc->next_packet)) * ED_PAGE_SIZE; /* * because buffers are aligned on 256-byte boundary, * the length computed above is off by 256 in almost * all cases. Fix it... */ if (len & 0xff) len -= 256; if (len > (ETHER_MAX_LEN - ETHER_CRC_LEN + sizeof(struct ed_ring))) sc->mibdata.dot3StatsFrameTooLongs++; } /* * Be fairly liberal about what we allow as a "reasonable" * length so that a [crufty] packet will make it to BPF (and * can thus be analyzed). Note that all that is really * important is that we have a length that will fit into one * mbuf cluster or less; the upper layer protocols can then * figure out the length from their own length field(s). But * make sure that we have at least a full ethernet header or * we would be unable to call ether_input() later. */ if ((len >= sizeof(struct ed_ring) + ETHER_HDR_LEN) && (len <= MCLBYTES) && (packet_hdr.next_packet >= sc->rec_page_start) && (packet_hdr.next_packet < sc->rec_page_stop)) { /* * Go get packet. */ ed_get_packet(sc, packet_ptr + sizeof(struct ed_ring), len - sizeof(struct ed_ring)); if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); } else { /* * Really BAD. The ring pointers are corrupted. */ log(LOG_ERR, "%s: NIC memory corrupt - invalid packet length %d\n", ifp->if_xname, len); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); ed_reset(ifp); return; } /* * Update next packet pointer */ sc->next_packet = packet_hdr.next_packet; /* * Update NIC boundary pointer - being careful to keep it one * buffer behind. (as recommended by NS databook) */ boundry = sc->next_packet - 1; if (boundry < sc->rec_page_start) boundry = sc->rec_page_stop - 1; /* * Set NIC to page 0 registers to update the boundary register */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_BNRY, boundry); /* * Set NIC to page 1 registers before looping to top (prepare * to get 'CURR' current pointer) */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_PAGE_1 | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); } } /* * Ethernet interface interrupt processor */ void edintr(void *arg) { struct ed_softc *sc = (struct ed_softc*) arg; struct ifnet *ifp = sc->ifp; u_char isr; int count; ED_LOCK(sc); if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) { ED_UNLOCK(sc); return; } /* * Set NIC to page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * loop until there are no more new interrupts. When the card goes * away, the hardware will read back 0xff. Looking at the interrupts, * it would appear that 0xff is impossible as ED_ISR_RST is normally * clear. ED_ISR_RDC is also normally clear and only set while * we're transferring memory to the card and we're holding the * ED_LOCK (so we can't get into here).
*/ while ((isr = ed_nic_inb(sc, ED_P0_ISR)) != 0 && isr != 0xff) { /* * reset all the bits that we are 'acknowledging' by writing a * '1' to each bit position that was set (writing a '1' * *clears* the bit) */ ed_nic_outb(sc, ED_P0_ISR, isr); /* * The AX88190 and AX88190A have problems acking an interrupt * and having them clear. This interferes with the top-level loop * here. Wait for all the bits to clear. * * We limit this to 5000 iterations. At 1us per inb/outb, * this translates to about 15ms, which should be plenty of * time, and also gives protection in the card eject case. */ if (sc->chip_type == ED_CHIP_TYPE_AX88190) { count = 5000; /* 15ms */ while (count-- && (ed_nic_inb(sc, ED_P0_ISR) & isr)) { ed_nic_outb(sc, ED_P0_ISR,0); ed_nic_outb(sc, ED_P0_ISR,isr); } if (count == 0) break; } /* * Handle transmitter interrupts. Handle these first because * the receiver will reset the board under some conditions. */ if (isr & (ED_ISR_PTX | ED_ISR_TXE)) { u_char collisions = ed_nic_inb(sc, ED_P0_NCR) & 0x0f; /* * Check for transmit error. If a TX completed with an * error, we end up throwing the packet away. Really * the only error that is possible is excessive * collisions, and in this case it is best to allow * the automatic mechanisms of TCP to backoff the * flow. Of course, with UDP we're screwed, but this * is expected when a network is heavily loaded. */ (void) ed_nic_inb(sc, ED_P0_TSR); if (isr & ED_ISR_TXE) { u_char tsr; /* * Excessive collisions (16) */ tsr = ed_nic_inb(sc, ED_P0_TSR); if ((tsr & ED_TSR_ABT) && (collisions == 0)) { /* * When collisions total 16, the * P0_NCR will indicate 0, and the * TSR_ABT is set. */ collisions = 16; sc->mibdata.dot3StatsExcessiveCollisions++; sc->mibdata.dot3StatsCollFrequencies[15]++; } if (tsr & ED_TSR_OWC) sc->mibdata.dot3StatsLateCollisions++; if (tsr & ED_TSR_CDH) sc->mibdata.dot3StatsSQETestErrors++; if (tsr & ED_TSR_CRS) sc->mibdata.dot3StatsCarrierSenseErrors++; if (tsr & ED_TSR_FU) sc->mibdata.dot3StatsInternalMacTransmitErrors++; /* * update output errors counter */ if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); } else { /* * Update total number of successfully * transmitted packets. */ if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); } /* * reset tx busy and output active flags */ sc->xmit_busy = 0; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; /* * clear watchdog timer */ sc->tx_timer = 0; /* * Add in total number of collisions on last * transmission. */ if_inc_counter(ifp, IFCOUNTER_COLLISIONS, collisions); switch(collisions) { case 0: case 16: break; case 1: sc->mibdata.dot3StatsSingleCollisionFrames++; sc->mibdata.dot3StatsCollFrequencies[0]++; break; default: sc->mibdata.dot3StatsMultipleCollisionFrames++; sc->mibdata. dot3StatsCollFrequencies[collisions-1] ++; break; } /* * Decrement buffer in-use count if not zero (can only * be zero if a transmitter interrupt occurred while * not actually transmitting). If data is ready to * transmit, start it transmitting, otherwise defer * until after handling receiver */ if (sc->txb_inuse && --sc->txb_inuse) ed_xmit(sc); } /* * Handle receiver interrupts */ if (isr & (ED_ISR_PRX | ED_ISR_RXE | ED_ISR_OVW)) { /* * Overwrite warning. In order to make sure that a * lockup of the local DMA hasn't occurred, we reset * and re-init the NIC. The NSC manual suggests only a * partial reset/re-init is necessary - but some chips * seem to want more. The DMA lockup has been seen * only with early rev chips - Methinks this bug was * fixed in later revs.
-DG */ if (isr & ED_ISR_OVW) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); #ifdef DIAGNOSTIC log(LOG_WARNING, "%s: warning - receiver ring buffer overrun\n", ifp->if_xname); #endif /* * Stop/reset/re-init NIC */ ed_reset(ifp); } else { /* * Receiver Error. One or more of: CRC error, * frame alignment error, FIFO overrun, or * missed packet. */ if (isr & ED_ISR_RXE) { u_char rsr; rsr = ed_nic_inb(sc, ED_P0_RSR); if (rsr & ED_RSR_CRC) sc->mibdata.dot3StatsFCSErrors++; if (rsr & ED_RSR_FAE) sc->mibdata.dot3StatsAlignmentErrors++; if (rsr & ED_RSR_FO) sc->mibdata.dot3StatsInternalMacReceiveErrors++; if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); #ifdef ED_DEBUG if_printf(ifp, "receive error %x\n", ed_nic_inb(sc, ED_P0_RSR)); #endif } /* * Go get the packet(s) XXX - Doing this on an * error is dubious because there shouldn't be * any data to get (we've configured the * interface to not accept packets with * errors). */ /* * Enable 16bit access to shared memory first * on WD/SMC boards. */ ed_enable_16bit_access(sc); ed_rint(sc); ed_disable_16bit_access(sc); } } /* * If it looks like the transmitter can take more data, * attempt to start output on the interface. This is done * after handling the receiver to give the receiver priority. */ if ((ifp->if_drv_flags & IFF_DRV_OACTIVE) == 0) ed_start_locked(ifp); /* * return NIC CR to standard state: page 0, remote DMA * complete, start (toggling the TXP bit off, even if it was just * set in the transmit routine, is *okay* - it is 'edge' * triggered from low to high) */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* * If the Network Tally Counters overflow, read them to reset * them. It appears that old 8390's won't clear the ISR flag * otherwise - resulting in an infinite loop. */ if (isr & ED_ISR_CNT) { (void) ed_nic_inb(sc, ED_P0_CNTR0); (void) ed_nic_inb(sc, ED_P0_CNTR1); (void) ed_nic_inb(sc, ED_P0_CNTR2); } } ED_UNLOCK(sc); } /* * Process an ioctl request. */ static int ed_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct ed_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *)data; int error = 0; switch (command) { case SIOCSIFFLAGS: /* * If the interface is marked up and stopped, then start it. * If we're up and already running, then it may be a mediachg. * If it is marked down and running, then stop it. */ ED_LOCK(sc); if (ifp->if_flags & IFF_UP) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) ed_init_locked(sc); else if (sc->sc_mediachg) sc->sc_mediachg(sc); } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) { ed_stop(sc); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; } } /* * Promiscuous flag may have changed, so reprogram the RCR. */ ed_setrcr(sc); ED_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: /* * Multicast list has changed; set the hardware filter * accordingly. */ ED_LOCK(sc); ed_setrcr(sc); ED_UNLOCK(sc); error = 0; break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: if (sc->sc_media_ioctl == NULL) { error = EINVAL; break; } sc->sc_media_ioctl(sc, ifr, command); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } /* * Given a source and destination address, copy 'amount' of a packet from * the ring buffer into a linear destination buffer. Takes into account * ring-wrap.
*/ static __inline void ed_ring_copy(struct ed_softc *sc, bus_size_t src, char *dst, u_short amount) { u_short tmp_amount; /* does copy wrap to lower addr in ring buffer? */ if (src + amount > sc->mem_end) { tmp_amount = sc->mem_end - src; /* copy amount up to end of NIC memory */ sc->readmem(sc, src, dst, tmp_amount); amount -= tmp_amount; src = sc->mem_ring; dst += tmp_amount; } sc->readmem(sc, src, dst, amount); } /* * Retrieve packet from shared memory and send to the next level up via * ether_input(). */ static void ed_get_packet(struct ed_softc *sc, bus_size_t buf, u_short len) { struct ifnet *ifp = sc->ifp; struct ether_header *eh; struct mbuf *m; /* Allocate a header mbuf */ MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) return; m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = m->m_len = len; /* * We always put the received packet in a single buffer - * either with just an mbuf header or in a cluster attached * to the header. The +2 is to compensate for the alignment * fixup below. */ if ((len + 2) > MHLEN) { /* Attach an mbuf cluster */ if (!(MCLGET(m, M_NOWAIT))) { m_freem(m); return; } } /* * The +2 is to longword align the start of the real packet. * This is important for NFS. */ m->m_data += 2; eh = mtod(m, struct ether_header *); /* * Get packet, including link layer address, from interface. */ ed_ring_copy(sc, buf, (char *)eh, len); m->m_pkthdr.len = m->m_len = len; ED_UNLOCK(sc); (*ifp->if_input)(ifp, m); ED_LOCK(sc); } /* * Supporting routines */ /* * Given a NIC memory source address and a host memory destination * address, copy 'amount' from NIC to host using shared memory. * The 'amount' is rounded up to a word - okay as long as mbufs * are word sized. That's what the +1 is below. * This routine accesses things as 16 bit quantities. */ void ed_shmem_readmem16(struct ed_softc *sc, bus_size_t src, uint8_t *dst, uint16_t amount) { bus_space_read_region_2(sc->mem_bst, sc->mem_bsh, src, (uint16_t *)dst, (amount + 1) / 2); } /* * Given a NIC memory source address and a host memory destination * address, copy 'amount' from NIC to host using shared memory. * This routine accesses things as 8 bit quantities. */ void ed_shmem_readmem8(struct ed_softc *sc, bus_size_t src, uint8_t *dst, uint16_t amount) { bus_space_read_region_1(sc->mem_bst, sc->mem_bsh, src, dst, amount); } /* * Given a NIC memory source address and a host memory destination * address, copy 'amount' from NIC to host using Programmed I/O. * The 'amount' is rounded up to a word - okay as long as mbufs * are word sized. * This routine is currently Novell-specific. */ void ed_pio_readmem(struct ed_softc *sc, bus_size_t src, uint8_t *dst, uint16_t amount) { /* Regular Novell cards */ /* select page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, ED_CR_RD2 | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* round up to a word */ if (amount & 1) ++amount; /* set up DMA byte count */ ed_nic_outb(sc, ED_P0_RBCR0, amount); ed_nic_outb(sc, ED_P0_RBCR1, amount >> 8); /* set up source address in NIC mem */ ed_nic_outb(sc, ED_P0_RSAR0, src); ed_nic_outb(sc, ED_P0_RSAR1, src >> 8); ed_nic_outb(sc, ED_P0_CR, ED_CR_RD0 | ED_CR_STA); if (sc->isa16bit) ed_asic_insw(sc, ED_NOVELL_DATA, dst, amount / 2); else ed_asic_insb(sc, ED_NOVELL_DATA, dst, amount); } /* * Stripped down routine for writing a linear buffer to NIC memory. * Only used in the probe routine to test the memory. 'len' must * be even.
*/ void ed_pio_writemem(struct ed_softc *sc, uint8_t *src, uint16_t dst, uint16_t len) { int maxwait = 200; /* about 240us */ /* select page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, ED_CR_RD2 | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* reset remote DMA complete flag */ ed_nic_outb(sc, ED_P0_ISR, ED_ISR_RDC); /* set up DMA byte count */ ed_nic_outb(sc, ED_P0_RBCR0, len); ed_nic_outb(sc, ED_P0_RBCR1, len >> 8); /* set up destination address in NIC mem */ ed_nic_outb(sc, ED_P0_RSAR0, dst); ed_nic_outb(sc, ED_P0_RSAR1, dst >> 8); /* set remote DMA write */ ed_nic_outb(sc, ED_P0_CR, ED_CR_RD1 | ED_CR_STA); if (sc->isa16bit) ed_asic_outsw(sc, ED_NOVELL_DATA, src, len / 2); else ed_asic_outsb(sc, ED_NOVELL_DATA, src, len); /* * Wait for remote DMA complete. This is necessary because on the * transmit side, data is handled internally by the NIC in bursts and * we can't start another remote DMA until this one completes. Not * waiting causes really bad things to happen - like the NIC * irrecoverably jamming the ISA bus. */ while (((ed_nic_inb(sc, ED_P0_ISR) & ED_ISR_RDC) != ED_ISR_RDC) && --maxwait) continue; } /* * Write an mbuf chain to the destination NIC memory address using * programmed I/O. */ u_short ed_pio_write_mbufs(struct ed_softc *sc, struct mbuf *m, bus_size_t dst) { struct ifnet *ifp = sc->ifp; unsigned short total_len, dma_len; struct mbuf *mp; int maxwait = 200; /* about 240us */ ED_ASSERT_LOCKED(sc); /* Regular Novell cards */ /* First, count up the total number of bytes to copy */ for (total_len = 0, mp = m; mp; mp = mp->m_next) total_len += mp->m_len; dma_len = total_len; if (sc->isa16bit && (dma_len & 1)) dma_len++; /* select page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, ED_CR_RD2 | ED_CR_STA); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); /* reset remote DMA complete flag */ ed_nic_outb(sc, ED_P0_ISR, ED_ISR_RDC); /* set up DMA byte count */ ed_nic_outb(sc, ED_P0_RBCR0, dma_len); ed_nic_outb(sc, ED_P0_RBCR1, dma_len >> 8); /* set up destination address in NIC mem */ ed_nic_outb(sc, ED_P0_RSAR0, dst); ed_nic_outb(sc, ED_P0_RSAR1, dst >> 8); /* set remote DMA write */ ed_nic_outb(sc, ED_P0_CR, ED_CR_RD1 | ED_CR_STA); /* * Transfer the mbuf chain to the NIC memory. * 16-bit cards require that data be transferred as words, and only words. * So that case requires some extra code to patch over odd-length mbufs. */ if (!sc->isa16bit) { /* NE1000s are easy */ while (m) { if (m->m_len) ed_asic_outsb(sc, ED_NOVELL_DATA, m->m_data, m->m_len); m = m->m_next; } } else { /* NE2000s are a pain */ uint8_t *data; int len, wantbyte; union { uint16_t w; uint8_t b[2]; } saveword; wantbyte = 0; while (m) { len = m->m_len; if (len) { data = mtod(m, caddr_t); /* finish the last word */ if (wantbyte) { saveword.b[1] = *data; ed_asic_outw(sc, ED_NOVELL_DATA, saveword.w); data++; len--; wantbyte = 0; } /* output contiguous words */ if (len > 1) { ed_asic_outsw(sc, ED_NOVELL_DATA, data, len >> 1); data += len & ~1; len &= 1; } /* save last byte, if necessary */ if (len == 1) { saveword.b[0] = *data; wantbyte = 1; } } m = m->m_next; } /* spit last byte */ if (wantbyte) ed_asic_outw(sc, ED_NOVELL_DATA, saveword.w); } /* * Wait for remote DMA complete. 
This is necessary because on the * transmit side, data is handled internally by the NIC in bursts and * we can't start another remote DMA until this one completes. Not * waiting causes really bad things to happen - like the NIC * irrecoverably jamming the ISA bus. */ while (((ed_nic_inb(sc, ED_P0_ISR) & ED_ISR_RDC) != ED_ISR_RDC) && --maxwait) continue; if (!maxwait) { log(LOG_WARNING, "%s: remote transmit DMA failed to complete\n", ifp->if_xname); ed_reset(ifp); return(0); } return (total_len); } static void ed_setrcr(struct ed_softc *sc) { struct ifnet *ifp = sc->ifp; int i; u_char reg1; ED_ASSERT_LOCKED(sc); /* Bit 6 in AX88190 RCR register must be set. */ if (sc->chip_type == ED_CHIP_TYPE_AX88190 || sc->chip_type == ED_CHIP_TYPE_AX88790) reg1 = ED_RCR_INTT; else reg1 = 0x00; /* set page 1 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_PAGE_1 | ED_CR_STP); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); if (ifp->if_flags & IFF_PROMISC) { /* * Reconfigure the multicast filter. */ for (i = 0; i < 8; i++) ed_nic_outb(sc, ED_P1_MAR(i), 0xff); /* * And turn on promiscuous mode. Also enable reception of * runts and packets with CRC & alignment errors. */ /* Set page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STP); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_RCR, ED_RCR_PRO | ED_RCR_AM | ED_RCR_AB | ED_RCR_AR | ED_RCR_SEP | reg1); } else { /* set up multicast addresses and filter modes */ if (ifp->if_flags & IFF_MULTICAST) { uint32_t mcaf[2]; if (ifp->if_flags & IFF_ALLMULTI) { mcaf[0] = 0xffffffff; mcaf[1] = 0xffffffff; } else ed_ds_getmcaf(sc, mcaf); /* * Set multicast filter on chip. */ for (i = 0; i < 8; i++) ed_nic_outb(sc, ED_P1_MAR(i), ((u_char *) mcaf)[i]); /* Set page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STP); ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_RCR, ED_RCR_AM | ED_RCR_AB | reg1); } else { /* * Initialize multicast address hashing registers to * not accept multicasts. */ for (i = 0; i < 8; ++i) ed_nic_outb(sc, ED_P1_MAR(i), 0x00); /* Set page 0 registers */ ed_nic_barrier(sc, ED_P0_CR, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STP); ed_nic_outb(sc, ED_P0_RCR, ED_RCR_AB | reg1); } } /* * Start interface. */ ed_nic_outb(sc, ED_P0_CR, sc->cr_proto | ED_CR_STA); } /* * Compute the multicast address filter from the * list of multicast addresses we need to listen to. 
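*/
#if 0	/* Editor's illustrative sketch, not part of the driver: the filter
	 * computed below hashes each address with a big-endian CRC32 and
	 * uses the top 6 bits as an index into the 64-bit (8-byte) filter
	 * map; 'lladdr' and 'af' are hypothetical here. */
uint32_t index = ether_crc32_be(lladdr, ETHER_ADDR_LEN) >> 26;	/* 0..63 */
af[index >> 3] |= 1 << (index & 7);	/* byte index >> 3, bit index & 7 */
#endif
/*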
*/
static void
ed_ds_getmcaf(struct ed_softc *sc, uint32_t *mcaf)
{
	uint32_t index;
	u_char *af = (u_char *)mcaf;
	struct ifmultiaddr *ifma;

	mcaf[0] = 0;
	mcaf[1] = 0;

	if_maddr_rlock(sc->ifp);
	CK_STAILQ_FOREACH(ifma, &sc->ifp->if_multiaddrs, ifma_link) {
		if (ifma->ifma_addr->sa_family != AF_LINK)
			continue;
		index = ether_crc32_be(LLADDR((struct sockaddr_dl *)
		    ifma->ifma_addr), ETHER_ADDR_LEN) >> 26;
		af[index >> 3] |= 1 << (index & 7);
	}
	if_maddr_runlock(sc->ifp);
}

int
ed_isa_mem_ok(device_t dev, u_long pmem, u_int memsize)
{
	if (pmem < 0xa0000 || pmem + memsize > 0x1000000) {
		device_printf(dev, "Invalid ISA memory address range "
		    "configured: 0x%lx - 0x%lx\n", pmem, pmem + memsize);
		return (ENXIO);
	}
	return (0);
}

int
ed_clear_memory(device_t dev)
{
	struct ed_softc *sc = device_get_softc(dev);
	bus_size_t i;

	bus_space_set_region_1(sc->mem_bst, sc->mem_bsh, sc->mem_start,
	    0, sc->mem_size);
	for (i = 0; i < sc->mem_size; i++) {
		if (bus_space_read_1(sc->mem_bst, sc->mem_bsh,
		    sc->mem_start + i)) {
			device_printf(dev, "failed to clear shared memory at "
			    "0x%jx - check configuration\n",
			    (uintmax_t)rman_get_start(sc->mem_res) + i);
			return (ENXIO);
		}
	}
	return (0);
}

u_short
ed_shmem_write_mbufs(struct ed_softc *sc, struct mbuf *m, bus_size_t dst)
{
	u_short len;

	/*
	 * Special case setup for 16-bit boards...
	 */
	if (sc->isa16bit) {
		switch (sc->vendor) {
#ifdef ED_3C503
		/*
		 * For 16bit 3Com boards (which have 16k of
		 * memory), we have the xmit buffers in a
		 * different page of memory ('page 0') - so
		 * change pages.
		 */
		case ED_VENDOR_3COM:
			ed_asic_outb(sc, ED_3COM_GACFR, ED_3COM_GACFR_RSEL);
			break;
#endif
		/*
		 * Enable 16bit access to shared memory on
		 * WD/SMC boards.
		 *
		 * XXX - same as ed_enable_16bit_access()
		 */
		case ED_VENDOR_WD_SMC:
			ed_asic_outb(sc, ED_WD_LAAR,
			    sc->wd_laar_proto | ED_WD_LAAR_M16EN);
			if (sc->chip_type == ED_CHIP_TYPE_WD790)
				ed_asic_outb(sc, ED_WD_MSR, ED_WD_MSR_MENB);
			break;
		}
	}
	for (len = 0; m != NULL; m = m->m_next) {
		if (m->m_len == 0)
			continue;
		if (sc->isa16bit) {
			if (m->m_len > 1)
				bus_space_write_region_2(sc->mem_bst,
				    sc->mem_bsh, dst, mtod(m, uint16_t *),
				    m->m_len / 2);
			if ((m->m_len & 1) != 0)
				bus_space_write_1(sc->mem_bst, sc->mem_bsh,
				    dst + m->m_len - 1,
				    *(mtod(m, uint8_t *) + m->m_len - 1));
		} else
			bus_space_write_region_1(sc->mem_bst, sc->mem_bsh,
			    dst, mtod(m, uint8_t *), m->m_len);
		dst += m->m_len;
		len += m->m_len;
	}
	/*
	 * Restore previous shared memory access
	 */
	if (sc->isa16bit) {
		switch (sc->vendor) {
#ifdef ED_3C503
		case ED_VENDOR_3COM:
			ed_asic_outb(sc, ED_3COM_GACFR,
			    ED_3COM_GACFR_RSEL | ED_3COM_GACFR_MBS0);
			break;
#endif
		case ED_VENDOR_WD_SMC:
			/* XXX - same as ed_disable_16bit_access() */
			if (sc->chip_type == ED_CHIP_TYPE_WD790)
				ed_asic_outb(sc, ED_WD_MSR, 0x00);
			ed_asic_outb(sc, ED_WD_LAAR,
			    sc->wd_laar_proto & ~ED_WD_LAAR_M16EN);
			break;
		}
	}
	return (len);
}

/*
 * Generic ifmedia support. By default, the DP8390-based cards don't know
 * what their network attachment really is, or even if it is valid (except
 * upon successful transmission of a packet). To play nicer with dhclient, as
 * well as to fit in with a framework where some cards can provide more
 * detailed information, make sure that we use this as a fallback.
*/ static int ed_gen_ifmedia_ioctl(struct ed_softc *sc, struct ifreq *ifr, u_long command) { return (ifmedia_ioctl(sc->ifp, ifr, &sc->ifmedia, command)); } static int ed_gen_ifmedia_upd(struct ifnet *ifp) { return 0; } static void ed_gen_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { ifmr->ifm_active = IFM_ETHER | IFM_AUTO; ifmr->ifm_status = IFM_AVALID | IFM_ACTIVE; } void ed_gen_ifmedia_init(struct ed_softc *sc) { sc->sc_media_ioctl = &ed_gen_ifmedia_ioctl; ifmedia_init(&sc->ifmedia, 0, ed_gen_ifmedia_upd, ed_gen_ifmedia_sts); ifmedia_add(&sc->ifmedia, IFM_ETHER | IFM_AUTO, 0, 0); ifmedia_set(&sc->ifmedia, IFM_ETHER | IFM_AUTO); } Index: stable/12/sys/dev/ep/if_ep.c =================================================================== --- stable/12/sys/dev/ep/if_ep.c (revision 339734) +++ stable/12/sys/dev/ep/if_ep.c (revision 339735) @@ -1,1006 +1,1008 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1994 Herb Peyerl * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Herb Peyerl. * 4. The name of Herb Peyerl may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Modified from the FreeBSD 1.1.5.1 version by: * Andres Vega Garcia * INRIA - Sophia Antipolis, France * avega@sophia.inria.fr */ /* * Promiscuous mode added and interrupt logic slightly changed * to reduce the number of adapter failures. Transceiver select * logic changed to use value from EEPROM. Autoconfiguration * features added. * Done by: * Serge Babkin * Chelindbank (Chelyabinsk, Russia) * babkin@hq.icb.chel.su */ /* * Pccard support for 3C589 by: * HAMADA Naoki * nao@tom-yam.or.jp */ /* * MAINTAINER: Matthew N. 
Dodd * */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* Exported variables */ devclass_t ep_devclass; static int ep_media2if_media[] = {IFM_10_T, IFM_10_5, IFM_NONE, IFM_10_2, IFM_NONE}; /* if functions */ static void epinit(void *); static int epioctl(struct ifnet *, u_long, caddr_t); static void epstart(struct ifnet *); static void ep_intr_locked(struct ep_softc *); static void epstart_locked(struct ifnet *); static void epinit_locked(struct ep_softc *); static void eptick(void *); static void epwatchdog(struct ep_softc *); /* if_media functions */ static int ep_ifmedia_upd(struct ifnet *); static void ep_ifmedia_sts(struct ifnet *, struct ifmediareq *); static void epstop(struct ep_softc *); static void epread(struct ep_softc *); static int eeprom_rdy(struct ep_softc *); #define EP_FTST(sc, f) (sc->stat & (f)) #define EP_FSET(sc, f) (sc->stat |= (f)) #define EP_FRST(sc, f) (sc->stat &= ~(f)) static int eeprom_rdy(struct ep_softc *sc) { int i; for (i = 0; is_eeprom_busy(sc) && i < MAX_EEPROMBUSY; i++) DELAY(100); if (i >= MAX_EEPROMBUSY) { device_printf(sc->dev, "eeprom failed to come ready.\n"); return (ENXIO); } return (0); } /* * get_e: gets a 16 bits word from the EEPROM. we must have set the window * before */ int ep_get_e(struct ep_softc *sc, uint16_t offset, uint16_t *result) { if (eeprom_rdy(sc)) return (ENXIO); CSR_WRITE_2(sc, EP_W0_EEPROM_COMMAND, (EEPROM_CMD_RD << sc->epb.cmd_off) | offset); if (eeprom_rdy(sc)) return (ENXIO); (*result) = CSR_READ_2(sc, EP_W0_EEPROM_DATA); return (0); } static int ep_get_macaddr(struct ep_softc *sc, u_char *addr) { int i; uint16_t result; int error; uint16_t *macaddr; macaddr = (uint16_t *) addr; GO_WINDOW(sc, 0); for (i = EEPROM_NODE_ADDR_0; i <= EEPROM_NODE_ADDR_2; i++) { error = ep_get_e(sc, i, &result); if (error) return (error); macaddr[i] = htons(result); } return (0); } int ep_alloc(device_t dev) { struct ep_softc *sc = device_get_softc(dev); int rid; int error = 0; uint16_t result; rid = 0; sc->iobase = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &rid, RF_ACTIVE); if (!sc->iobase) { device_printf(dev, "No I/O space?!\n"); error = ENXIO; goto bad; } rid = 0; sc->irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE); if (!sc->irq) { device_printf(dev, "No irq?!\n"); error = ENXIO; goto bad; } sc->dev = dev; sc->stat = 0; /* 16 bit access */ sc->bst = rman_get_bustag(sc->iobase); sc->bsh = rman_get_bushandle(sc->iobase); sc->ep_connectors = 0; sc->ep_connector = 0; GO_WINDOW(sc, 0); error = ep_get_e(sc, EEPROM_PROD_ID, &result); if (error) goto bad; sc->epb.prod_id = result; error = ep_get_e(sc, EEPROM_RESOURCE_CFG, &result); if (error) goto bad; sc->epb.res_cfg = result; bad: if (error != 0) ep_free(dev); return (error); } void ep_get_media(struct ep_softc *sc) { uint16_t config; GO_WINDOW(sc, 0); config = CSR_READ_2(sc, EP_W0_CONFIG_CTRL); if (config & IS_AUI) sc->ep_connectors |= AUI; if (config & IS_BNC) sc->ep_connectors |= BNC; if (config & IS_UTP) sc->ep_connectors |= UTP; if (!(sc->ep_connectors & 7)) if (bootverbose) device_printf(sc->dev, "no connectors!\n"); /* * This works for most of the cards so we'll do it here. * The cards that require something different can override * this later on. 
*/ sc->ep_connector = CSR_READ_2(sc, EP_W0_ADDRESS_CFG) >> ACF_CONNECTOR_BITS; } void ep_free(device_t dev) { struct ep_softc *sc = device_get_softc(dev); if (sc->ep_intrhand) bus_teardown_intr(dev, sc->irq, sc->ep_intrhand); if (sc->iobase) bus_release_resource(dev, SYS_RES_IOPORT, 0, sc->iobase); if (sc->irq) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->irq); sc->ep_intrhand = 0; sc->iobase = 0; sc->irq = 0; } static void ep_setup_station(struct ep_softc *sc, u_char *enaddr) { int i; /* * Setup the station address */ GO_WINDOW(sc, 2); for (i = 0; i < ETHER_ADDR_LEN; i++) CSR_WRITE_1(sc, EP_W2_ADDR_0 + i, enaddr[i]); } int ep_attach(struct ep_softc *sc) { struct ifnet *ifp = NULL; struct ifmedia *ifm = NULL; int error; sc->gone = 0; EP_LOCK_INIT(sc); if (! (sc->stat & F_ENADDR_SKIP)) { error = ep_get_macaddr(sc, sc->eaddr); if (error) { device_printf(sc->dev, "Unable to get MAC address!\n"); EP_LOCK_DESTROY(sc); return (ENXIO); } } ep_setup_station(sc, sc->eaddr); ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(sc->dev, "if_alloc() failed\n"); EP_LOCK_DESTROY(sc); return (ENOSPC); } ifp->if_softc = sc; if_initname(ifp, device_get_name(sc->dev), device_get_unit(sc->dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_start = epstart; ifp->if_ioctl = epioctl; ifp->if_init = epinit; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_snd.ifq_drv_maxlen = ifqmaxlen; IFQ_SET_READY(&ifp->if_snd); callout_init_mtx(&sc->watchdog_timer, &sc->sc_mtx, 0); if (!sc->epb.mii_trans) { ifmedia_init(&sc->ifmedia, 0, ep_ifmedia_upd, ep_ifmedia_sts); if (sc->ep_connectors & AUI) ifmedia_add(&sc->ifmedia, IFM_ETHER | IFM_10_5, 0, NULL); if (sc->ep_connectors & UTP) ifmedia_add(&sc->ifmedia, IFM_ETHER | IFM_10_T, 0, NULL); if (sc->ep_connectors & BNC) ifmedia_add(&sc->ifmedia, IFM_ETHER | IFM_10_2, 0, NULL); if (!sc->ep_connectors) ifmedia_add(&sc->ifmedia, IFM_ETHER | IFM_NONE, 0, NULL); ifmedia_set(&sc->ifmedia, IFM_ETHER | ep_media2if_media[sc->ep_connector]); ifm = &sc->ifmedia; ifm->ifm_media = ifm->ifm_cur->ifm_media; ep_ifmedia_upd(ifp); } ether_ifattach(ifp, sc->eaddr); #ifdef EP_LOCAL_STATS sc->rx_no_first = sc->rx_no_mbuf = sc->rx_bpf_disc = sc->rx_overrunf = sc->rx_overrunl = sc->tx_underrun = 0; #endif EP_FSET(sc, F_RX_FIRST); sc->top = sc->mcur = 0; EP_LOCK(sc); epstop(sc); EP_UNLOCK(sc); + gone_by_fcp101_dev(sc->dev); + return (0); } int ep_detach(device_t dev) { struct ep_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); ifp = sc->ifp; EP_ASSERT_UNLOCKED(sc); EP_LOCK(sc); if (bus_child_present(dev)) epstop(sc); sc->gone = 1; ifp->if_drv_flags &= ~IFF_DRV_RUNNING; EP_UNLOCK(sc); ether_ifdetach(ifp); callout_drain(&sc->watchdog_timer); ep_free(dev); if_free(ifp); EP_LOCK_DESTROY(sc); return (0); } static void epinit(void *xsc) { struct ep_softc *sc = xsc; EP_LOCK(sc); epinit_locked(sc); EP_UNLOCK(sc); } /* * The order in here seems important. Otherwise we may not receive * interrupts. ?! */ static void epinit_locked(struct ep_softc *sc) { struct ifnet *ifp = sc->ifp; int i; if (sc->gone) return; EP_ASSERT_LOCKED(sc); EP_BUSY_WAIT(sc); GO_WINDOW(sc, 0); CSR_WRITE_2(sc, EP_COMMAND, STOP_TRANSCEIVER); GO_WINDOW(sc, 4); CSR_WRITE_2(sc, EP_W4_MEDIA_TYPE, DISABLE_UTP); GO_WINDOW(sc, 0); /* Disable the card */ CSR_WRITE_2(sc, EP_W0_CONFIG_CTRL, 0); /* Enable the card */ CSR_WRITE_2(sc, EP_W0_CONFIG_CTRL, ENABLE_DRQ_IRQ); GO_WINDOW(sc, 2); /* Reload the ether_addr. 
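*/
#if 0	/* Editor's illustrative sketch, not part of the driver: EtherLink III
	 * registers are banked, so GO_WINDOW() selects which window the
	 * following CSR accesses hit.  This is the pattern ep_setup_station()
	 * uses for the station address kept in window 2: */
GO_WINDOW(sc, 2);
for (i = 0; i < ETHER_ADDR_LEN; i++)
	CSR_WRITE_1(sc, EP_W2_ADDR_0 + i, enaddr[i]);
GO_WINDOW(sc, 1);		/* window 1 is the operating window */
#endif
/*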
*/ ep_setup_station(sc, IF_LLADDR(sc->ifp)); CSR_WRITE_2(sc, EP_COMMAND, RX_RESET); CSR_WRITE_2(sc, EP_COMMAND, TX_RESET); EP_BUSY_WAIT(sc); /* Window 1 is operating window */ GO_WINDOW(sc, 1); for (i = 0; i < 31; i++) CSR_READ_1(sc, EP_W1_TX_STATUS); /* get rid of stray intr's */ CSR_WRITE_2(sc, EP_COMMAND, ACK_INTR | 0xff); CSR_WRITE_2(sc, EP_COMMAND, SET_RD_0_MASK | S_5_INTS); CSR_WRITE_2(sc, EP_COMMAND, SET_INTR_MASK | S_5_INTS); if (ifp->if_flags & IFF_PROMISC) CSR_WRITE_2(sc, EP_COMMAND, SET_RX_FILTER | FIL_INDIVIDUAL | FIL_MULTICAST | FIL_BRDCST | FIL_PROMISC); else CSR_WRITE_2(sc, EP_COMMAND, SET_RX_FILTER | FIL_INDIVIDUAL | FIL_MULTICAST | FIL_BRDCST); if (!sc->epb.mii_trans) ep_ifmedia_upd(ifp); if (sc->stat & F_HAS_TX_PLL) CSR_WRITE_2(sc, EP_COMMAND, TX_PLL_ENABLE); CSR_WRITE_2(sc, EP_COMMAND, RX_ENABLE); CSR_WRITE_2(sc, EP_COMMAND, TX_ENABLE); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; /* just in case */ #ifdef EP_LOCAL_STATS sc->rx_no_first = sc->rx_no_mbuf = sc->rx_overrunf = sc->rx_overrunl = sc->tx_underrun = 0; #endif EP_FSET(sc, F_RX_FIRST); if (sc->top) { m_freem(sc->top); sc->top = sc->mcur = 0; } CSR_WRITE_2(sc, EP_COMMAND, SET_RX_EARLY_THRESH | RX_INIT_EARLY_THRESH); CSR_WRITE_2(sc, EP_COMMAND, SET_TX_START_THRESH | 16); GO_WINDOW(sc, 1); epstart_locked(ifp); callout_reset(&sc->watchdog_timer, hz, eptick, sc); } static void epstart(struct ifnet *ifp) { struct ep_softc *sc; sc = ifp->if_softc; EP_LOCK(sc); epstart_locked(ifp); EP_UNLOCK(sc); } static void epstart_locked(struct ifnet *ifp) { struct ep_softc *sc; u_int len; struct mbuf *m, *m0; int pad, started; sc = ifp->if_softc; if (sc->gone) return; EP_ASSERT_LOCKED(sc); EP_BUSY_WAIT(sc); if (ifp->if_drv_flags & IFF_DRV_OACTIVE) return; started = 0; startagain: /* Sneak a peek at the next packet */ IFQ_DRV_DEQUEUE(&ifp->if_snd, m0); if (m0 == NULL) return; if (!started && (sc->stat & F_HAS_TX_PLL)) CSR_WRITE_2(sc, EP_COMMAND, TX_PLL_ENABLE); started++; for (len = 0, m = m0; m != NULL; m = m->m_next) len += m->m_len; pad = (4 - len) & 3; /* * The 3c509 automatically pads short packets to minimum * ethernet length, but we drop packets that are too large. * Perhaps we should truncate them instead? */ if (len + pad > ETHER_MAX_LEN) { /* packet is obviously too large: toss it */ if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); m_freem(m0); goto readcheck; } if (CSR_READ_2(sc, EP_W1_FREE_TX) < len + pad + 4) { /* no room in FIFO */ CSR_WRITE_2(sc, EP_COMMAND, SET_TX_AVAIL_THRESH | (len + pad + 4)); /* make sure */ if (CSR_READ_2(sc, EP_W1_FREE_TX) < len + pad + 4) { ifp->if_drv_flags |= IFF_DRV_OACTIVE; IFQ_DRV_PREPEND(&ifp->if_snd, m0); goto done; } } else CSR_WRITE_2(sc, EP_COMMAND, SET_TX_AVAIL_THRESH | EP_THRESH_DISABLE); CSR_WRITE_2(sc, EP_W1_TX_PIO_WR_1, len); /* Second dword meaningless */ CSR_WRITE_2(sc, EP_W1_TX_PIO_WR_1, 0x0); for (m = m0; m != NULL; m = m->m_next) { if (m->m_len > 1) CSR_WRITE_MULTI_2(sc, EP_W1_TX_PIO_WR_1, mtod(m, uint16_t *), m->m_len / 2); if (m->m_len & 1) CSR_WRITE_1(sc, EP_W1_TX_PIO_WR_1, *(mtod(m, uint8_t *)+m->m_len - 1)); } while (pad--) CSR_WRITE_1(sc, EP_W1_TX_PIO_WR_1, 0); /* Padding */ /* XXX and drop splhigh here */ BPF_MTAP(ifp, m0); sc->tx_timer = 2; if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); m_freem(m0); /* * Is another packet coming in? We don't want to overflow * the tiny RX fifo. 
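*/
#if 0	/* Editor's illustrative note, not part of the driver: the FIFO-space
	 * checks above use len + pad + 4 because each frame is preceded by a
	 * 4-byte preamble in the TX FIFO, written as two 16-bit words: */
CSR_WRITE_2(sc, EP_W1_TX_PIO_WR_1, len);	/* frame length word */
CSR_WRITE_2(sc, EP_W1_TX_PIO_WR_1, 0x0);	/* second, meaningless word */
#endif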
*/ readcheck: if (CSR_READ_2(sc, EP_W1_RX_STATUS) & RX_BYTES_MASK) { /* * we check if we have packets left, in that case * we prepare to come back later */ if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) CSR_WRITE_2(sc, EP_COMMAND, SET_TX_AVAIL_THRESH | 8); goto done; } goto startagain; done:; return; } void ep_intr(void *arg) { struct ep_softc *sc; sc = (struct ep_softc *) arg; EP_LOCK(sc); ep_intr_locked(sc); EP_UNLOCK(sc); } static void ep_intr_locked(struct ep_softc *sc) { int status; struct ifnet *ifp; /* XXX 4.x splbio'd here to reduce interruptability */ /* * quick fix: Try to detect an interrupt when the card goes away. */ if (sc->gone || CSR_READ_2(sc, EP_STATUS) == 0xffff) return; ifp = sc->ifp; CSR_WRITE_2(sc, EP_COMMAND, SET_INTR_MASK); /* disable all Ints */ rescan: while ((status = CSR_READ_2(sc, EP_STATUS)) & S_5_INTS) { /* first acknowledge all interrupt sources */ CSR_WRITE_2(sc, EP_COMMAND, ACK_INTR | (status & S_MASK)); if (status & (S_RX_COMPLETE | S_RX_EARLY)) epread(sc); if (status & S_TX_AVAIL) { /* we need ACK */ sc->tx_timer = 0; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; GO_WINDOW(sc, 1); CSR_READ_2(sc, EP_W1_FREE_TX); epstart_locked(ifp); } if (status & S_CARD_FAILURE) { sc->tx_timer = 0; #ifdef EP_LOCAL_STATS device_printf(sc->dev, "\n\tStatus: %x\n", status); GO_WINDOW(sc, 4); printf("\tFIFO Diagnostic: %x\n", CSR_READ_2(sc, EP_W4_FIFO_DIAG)); printf("\tStat: %x\n", sc->stat); printf("\tIpackets=%d, Opackets=%d\n", ifp->if_get_counter(ifp, IFCOUNTER_IPACKETS), ifp->if_get_counter(ifp, IFCOUNTER_OPACKETS)); printf("\tNOF=%d, NOMB=%d, RXOF=%d, RXOL=%d, TXU=%d\n", sc->rx_no_first, sc->rx_no_mbuf, sc->rx_overrunf, sc->rx_overrunl, sc->tx_underrun); #else #ifdef DIAGNOSTIC device_printf(sc->dev, "Status: %x (input buffer overflow)\n", status); #else if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); #endif #endif epinit_locked(sc); return; } if (status & S_TX_COMPLETE) { sc->tx_timer = 0; /* * We need ACK. We do it at the end. * * We need to read TX_STATUS until we get a * 0 status in order to turn off the interrupt flag. 
*/
			while ((status = CSR_READ_1(sc, EP_W1_TX_STATUS)) &
			    TXS_COMPLETE) {
				if (status & TXS_SUCCES_INTR_REQ)
					;	/* nothing */
				else if (status & (TXS_UNDERRUN | TXS_JABBER |
				    TXS_MAX_COLLISION)) {
					CSR_WRITE_2(sc, EP_COMMAND, TX_RESET);
					if (status & TXS_UNDERRUN) {
#ifdef EP_LOCAL_STATS
						sc->tx_underrun++;
#endif
					}
					if (status & TXS_MAX_COLLISION) {
						/*
						 * TXS_MAX_COLLISION: we
						 * shouldn't get here.
						 */
						if_inc_counter(ifp,
						    IFCOUNTER_COLLISIONS, 1);
					}
					if_inc_counter(ifp,
					    IFCOUNTER_OERRORS, 1);
					CSR_WRITE_2(sc, EP_COMMAND, TX_ENABLE);
					/*
					 * Request a TX-available interrupt,
					 * while still giving reception a
					 * chance to run.
					 */
					if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd))
						CSR_WRITE_2(sc, EP_COMMAND,
						    SET_TX_AVAIL_THRESH | 8);
				}
				/* pops up the next status */
				CSR_WRITE_1(sc, EP_W1_TX_STATUS, 0x0);
			}	/* while */
			ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
			GO_WINDOW(sc, 1);
			CSR_READ_2(sc, EP_W1_FREE_TX);
			epstart_locked(ifp);
		}	/* end TX_COMPLETE */
	}
	CSR_WRITE_2(sc, EP_COMMAND, C_INTR_LATCH);	/* ACK int Latch */
	if ((status = CSR_READ_2(sc, EP_STATUS)) & S_5_INTS)
		goto rescan;
	/* re-enable Ints */
	CSR_WRITE_2(sc, EP_COMMAND, SET_INTR_MASK | S_5_INTS);
}

static void
epread(struct ep_softc *sc)
{
	struct mbuf *top, *mcur, *m;
	struct ifnet *ifp;
	int lenthisone;
	short rx_fifo2, status;
	short rx_fifo;

	/* XXX Must be called with sc locked */
	ifp = sc->ifp;
	status = CSR_READ_2(sc, EP_W1_RX_STATUS);

read_again:
	if (status & ERR_RX) {
		if_inc_counter(ifp, IFCOUNTER_IERRORS, 1);
		if (status & ERR_RX_OVERRUN) {
			/*
			 * The RX latency may actually be greater than
			 * we expect.
			 */
#ifdef EP_LOCAL_STATS
			if (EP_FTST(sc, F_RX_FIRST))
				sc->rx_overrunf++;
			else
				sc->rx_overrunl++;
#endif
		}
		goto out;
	}
	rx_fifo = rx_fifo2 = status & RX_BYTES_MASK;

	if (EP_FTST(sc, F_RX_FIRST)) {
		MGETHDR(m, M_NOWAIT, MT_DATA);
		if (!m)
			goto out;
		if (rx_fifo >= MINCLSIZE)
			MCLGET(m, M_NOWAIT);
		sc->top = sc->mcur = top = m;
#define EROUND	((sizeof(struct ether_header) + 3) & ~3)
#define EOFF	(EROUND - sizeof(struct ether_header))
		top->m_data += EOFF;

		/* Read what should be the header.
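*/
#if 0	/* Editor's illustrative note, not part of the driver: the EOFF
	 * offset applied above longword-aligns the payload that follows the
	 * Ethernet header:
	 *	EROUND = (sizeof(struct ether_header) + 3) & ~3 = (14+3) & ~3 = 16
	 *	EOFF   = EROUND - sizeof(struct ether_header)   = 16 - 14    = 2
	 * so the header starts at offset 2 and the IP header after it
	 * starts on a 4-byte boundary. */
#endif
/*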
*/
		CSR_READ_MULTI_2(sc, EP_W1_RX_PIO_RD_1, mtod(top, uint16_t *),
		    sizeof(struct ether_header) / 2);
		top->m_len = sizeof(struct ether_header);
		rx_fifo -= sizeof(struct ether_header);
		sc->cur_len = rx_fifo2;
	} else {
		/* come here if we didn't have a complete packet last time */
		top = sc->top;
		m = sc->mcur;
		sc->cur_len += rx_fifo2;
	}

	/* Read what is left in the RX FIFO. */
	while (rx_fifo > 0) {
		lenthisone = min(rx_fifo, M_TRAILINGSPACE(m));
		if (lenthisone == 0) {
			/* no room in this one */
			mcur = m;
			MGET(m, M_NOWAIT, MT_DATA);
			if (!m)
				goto out;
			if (rx_fifo >= MINCLSIZE)
				MCLGET(m, M_NOWAIT);
			m->m_len = 0;
			mcur->m_next = m;
			lenthisone = min(rx_fifo, M_TRAILINGSPACE(m));
		}
		CSR_READ_MULTI_2(sc, EP_W1_RX_PIO_RD_1,
		    (uint16_t *)(mtod(m, caddr_t) + m->m_len),
		    lenthisone / 2);
		m->m_len += lenthisone;
		if (lenthisone & 1)
			*(mtod(m, caddr_t) + m->m_len - 1) =
			    CSR_READ_1(sc, EP_W1_RX_PIO_RD_1);
		rx_fifo -= lenthisone;
	}

	if (status & ERR_RX_INCOMPLETE) {
		/* we haven't received the complete packet */
		sc->mcur = m;
#ifdef EP_LOCAL_STATS
		/* to know how often we come here */
		sc->rx_no_first++;
#endif
		EP_FRST(sc, F_RX_FIRST);
		status = CSR_READ_2(sc, EP_W1_RX_STATUS);
		if (!(status & ERR_RX_INCOMPLETE)) {
			/*
			 * See whether the packet has completely
			 * arrived by now.
			 */
			goto read_again;
		}
		CSR_WRITE_2(sc, EP_COMMAND,
		    SET_RX_EARLY_THRESH | RX_NEXT_EARLY_THRESH);
		return;
	}
	CSR_WRITE_2(sc, EP_COMMAND, RX_DISCARD_TOP_PACK);
	if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1);
	EP_FSET(sc, F_RX_FIRST);
	top->m_pkthdr.rcvif = sc->ifp;
	top->m_pkthdr.len = sc->cur_len;

	/*
	 * Drop locks before calling if_input() since it may re-enter
	 * ep_start() in the netisr case. This would result in a
	 * lock reversal. Better performance might be obtained by
	 * chaining all packets received, dropping the lock, and then
	 * calling if_input() on each one.
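*/
#if 0	/* Editor's illustrative sketch, not part of the driver: the chaining
	 * variant suggested above - queue packets on a hypothetical list
	 * while EP_LOCK is held, then deliver the whole chain after a
	 * single unlock. */
struct mbuf *head = NULL, **tailp = &head;

/* ...for each completed packet, while the lock is still held... */
*tailp = top;
tailp = &top->m_nextpkt;

/* ...once the RX FIFO has been drained... */
EP_UNLOCK(sc);
while (head != NULL) {
	struct mbuf *m2 = head;

	head = m2->m_nextpkt;
	m2->m_nextpkt = NULL;
	(*ifp->if_input)(ifp, m2);
}
EP_LOCK(sc);
#endif
/*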
*/ EP_UNLOCK(sc); (*ifp->if_input) (ifp, top); EP_LOCK(sc); sc->top = 0; EP_BUSY_WAIT(sc); CSR_WRITE_2(sc, EP_COMMAND, SET_RX_EARLY_THRESH | RX_INIT_EARLY_THRESH); return; out: CSR_WRITE_2(sc, EP_COMMAND, RX_DISCARD_TOP_PACK); if (sc->top) { m_freem(sc->top); sc->top = 0; #ifdef EP_LOCAL_STATS sc->rx_no_mbuf++; #endif } EP_FSET(sc, F_RX_FIRST); EP_BUSY_WAIT(sc); CSR_WRITE_2(sc, EP_COMMAND, SET_RX_EARLY_THRESH | RX_INIT_EARLY_THRESH); } static int ep_ifmedia_upd(struct ifnet *ifp) { struct ep_softc *sc = ifp->if_softc; int i = 0, j; GO_WINDOW(sc, 0); CSR_WRITE_2(sc, EP_COMMAND, STOP_TRANSCEIVER); GO_WINDOW(sc, 4); CSR_WRITE_2(sc, EP_W4_MEDIA_TYPE, DISABLE_UTP); GO_WINDOW(sc, 0); switch (IFM_SUBTYPE(sc->ifmedia.ifm_media)) { case IFM_10_T: if (sc->ep_connectors & UTP) { i = ACF_CONNECTOR_UTP; GO_WINDOW(sc, 4); CSR_WRITE_2(sc, EP_W4_MEDIA_TYPE, ENABLE_UTP); } break; case IFM_10_2: if (sc->ep_connectors & BNC) { i = ACF_CONNECTOR_BNC; CSR_WRITE_2(sc, EP_COMMAND, START_TRANSCEIVER); DELAY(DELAY_MULTIPLE * 1000); } break; case IFM_10_5: if (sc->ep_connectors & AUI) i = ACF_CONNECTOR_AUI; break; default: i = sc->ep_connector; device_printf(sc->dev, "strange connector type in EEPROM: assuming AUI\n"); } GO_WINDOW(sc, 0); j = CSR_READ_2(sc, EP_W0_ADDRESS_CFG) & 0x3fff; CSR_WRITE_2(sc, EP_W0_ADDRESS_CFG, j | (i << ACF_CONNECTOR_BITS)); return (0); } static void ep_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { struct ep_softc *sc = ifp->if_softc; uint16_t ms; switch (IFM_SUBTYPE(sc->ifmedia.ifm_media)) { case IFM_10_T: GO_WINDOW(sc, 4); ms = CSR_READ_2(sc, EP_W4_MEDIA_TYPE); GO_WINDOW(sc, 0); ifmr->ifm_status = IFM_AVALID; if (ms & MT_LB) { ifmr->ifm_status |= IFM_ACTIVE; ifmr->ifm_active = IFM_ETHER | IFM_10_T; } else { ifmr->ifm_active = IFM_ETHER | IFM_NONE; } break; default: ifmr->ifm_active = sc->ifmedia.ifm_media; break; } } static int epioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct ep_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *) data; int error = 0; switch (cmd) { case SIOCSIFFLAGS: EP_LOCK(sc); if (((ifp->if_flags & IFF_UP) == 0) && (ifp->if_drv_flags & IFF_DRV_RUNNING)) { epstop(sc); } else /* reinitialize card on any parameter change */ epinit_locked(sc); EP_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: /* * The Etherlink III has no programmable multicast * filter. We always initialize the card to be * promiscuous to multicast, since we're always a * member of the ALL-SYSTEMS group, so there's no * need to process SIOC*MULTI requests. 
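*/
#if 0	/* Editor's illustrative note, not part of the driver: lacking a hash
	 * filter, epinit_locked() programs a fixed RX filter that already
	 * accepts all multicast, so SIOC*MULTI needs no reprogramming: */
CSR_WRITE_2(sc, EP_COMMAND, SET_RX_FILTER |
    FIL_INDIVIDUAL | FIL_MULTICAST | FIL_BRDCST);
#endif
/*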
*/ error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: if (!sc->epb.mii_trans) error = ifmedia_ioctl(ifp, ifr, &sc->ifmedia, cmd); else error = EINVAL; break; default: error = ether_ioctl(ifp, cmd, data); break; } return (error); } static void eptick(void *arg) { struct ep_softc *sc; sc = arg; if (sc->tx_timer != 0 && --sc->tx_timer == 0) epwatchdog(sc); callout_reset(&sc->watchdog_timer, hz, eptick, sc); } static void epwatchdog(struct ep_softc *sc) { struct ifnet *ifp; ifp = sc->ifp; if (sc->gone) return; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; epstart_locked(ifp); ep_intr_locked(sc); } static void epstop(struct ep_softc *sc) { EP_ASSERT_LOCKED(sc); CSR_WRITE_2(sc, EP_COMMAND, RX_DISABLE); CSR_WRITE_2(sc, EP_COMMAND, RX_DISCARD_TOP_PACK); EP_BUSY_WAIT(sc); CSR_WRITE_2(sc, EP_COMMAND, TX_DISABLE); CSR_WRITE_2(sc, EP_COMMAND, STOP_TRANSCEIVER); DELAY(800); CSR_WRITE_2(sc, EP_COMMAND, RX_RESET); EP_BUSY_WAIT(sc); CSR_WRITE_2(sc, EP_COMMAND, TX_RESET); EP_BUSY_WAIT(sc); CSR_WRITE_2(sc, EP_COMMAND, C_INTR_LATCH); CSR_WRITE_2(sc, EP_COMMAND, SET_RD_0_MASK); CSR_WRITE_2(sc, EP_COMMAND, SET_INTR_MASK); CSR_WRITE_2(sc, EP_COMMAND, SET_RX_FILTER); sc->ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); callout_stop(&sc->watchdog_timer); } Index: stable/12/sys/dev/ex/if_ex.c =================================================================== --- stable/12/sys/dev/ex/if_ex.c (revision 339734) +++ stable/12/sys/dev/ex/if_ex.c (revision 339735) @@ -1,1077 +1,1079 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 1996, Javier Martín Rueda (jmrueda@diatel.upm.es) * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice unmodified, this list of conditions, and the following * disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * * MAINTAINER: Matthew N. Dodd * */ #include __FBSDID("$FreeBSD$"); /* * Intel EtherExpress Pro/10, Pro/10+ Ethernet driver * * Revision history: * * dd-mmm-yyyy: Multicast support ported from NetBSD's if_iy driver. * 30-Oct-1996: first beta version. Inet and BPF supported, but no multicast. 
*/ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef EXDEBUG # define Start_End 1 # define Rcvd_Pkts 2 # define Sent_Pkts 4 # define Status 8 static int debug_mask = 0; # define DODEBUG(level, action) if (level & debug_mask) action #else # define DODEBUG(level, action) #endif devclass_t ex_devclass; char irq2eemap[] = { -1, -1, 0, 1, -1, 2, -1, -1, -1, 0, 3, 4, -1, -1, -1, -1 }; u_char ee2irqmap[] = { 9, 3, 5, 10, 11, 0, 0, 0 }; char plus_irq2eemap[] = { -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, -1, -1, -1 }; u_char plus_ee2irqmap[] = { 3, 4, 5, 7, 9, 10, 11, 12 }; /* Network Interface Functions */ static void ex_init(void *); static void ex_init_locked(struct ex_softc *); static void ex_start(struct ifnet *); static void ex_start_locked(struct ifnet *); static int ex_ioctl(struct ifnet *, u_long, caddr_t); static void ex_watchdog(void *); /* ifmedia Functions */ static int ex_ifmedia_upd(struct ifnet *); static void ex_ifmedia_sts(struct ifnet *, struct ifmediareq *); static int ex_get_media(struct ex_softc *); static void ex_reset(struct ex_softc *); static void ex_setmulti(struct ex_softc *); static void ex_tx_intr(struct ex_softc *); static void ex_rx_intr(struct ex_softc *); void ex_get_address(struct ex_softc *sc, u_char *enaddr) { uint16_t eaddr_tmp; eaddr_tmp = ex_eeprom_read(sc, EE_Eth_Addr_Lo); enaddr[5] = eaddr_tmp & 0xff; enaddr[4] = eaddr_tmp >> 8; eaddr_tmp = ex_eeprom_read(sc, EE_Eth_Addr_Mid); enaddr[3] = eaddr_tmp & 0xff; enaddr[2] = eaddr_tmp >> 8; eaddr_tmp = ex_eeprom_read(sc, EE_Eth_Addr_Hi); enaddr[1] = eaddr_tmp & 0xff; enaddr[0] = eaddr_tmp >> 8; return; } int ex_card_type(u_char *enaddr) { if ((enaddr[0] == 0x00) && (enaddr[1] == 0xA0) && (enaddr[2] == 0xC9)) return (CARD_TYPE_EX_10_PLUS); return (CARD_TYPE_EX_10); } /* * Caller is responsible for eventually calling * ex_release_resources() on failure. */ int ex_alloc_resources(device_t dev) { struct ex_softc * sc = device_get_softc(dev); int error = 0; sc->ioport = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &sc->ioport_rid, RF_ACTIVE); if (!sc->ioport) { device_printf(dev, "No I/O space?!\n"); error = ENOMEM; goto bad; } sc->irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &sc->irq_rid, RF_ACTIVE); if (!sc->irq) { device_printf(dev, "No IRQ?!\n"); error = ENOMEM; goto bad; } bad: return (error); } void ex_release_resources(device_t dev) { struct ex_softc * sc = device_get_softc(dev); if (sc->ih) { bus_teardown_intr(dev, sc->irq, sc->ih); sc->ih = NULL; } if (sc->ioport) { bus_release_resource(dev, SYS_RES_IOPORT, sc->ioport_rid, sc->ioport); sc->ioport = NULL; } if (sc->irq) { bus_release_resource(dev, SYS_RES_IRQ, sc->irq_rid, sc->irq); sc->irq = NULL; } if (sc->ifp) if_free(sc->ifp); return; } int ex_attach(device_t dev) { struct ex_softc * sc = device_get_softc(dev); struct ifnet * ifp; struct ifmedia * ifm; int error; uint16_t temp; ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); return (ENOSPC); } /* work out which set of irq <-> internal tables to use */ if (ex_card_type(sc->enaddr) == CARD_TYPE_EX_10_PLUS) { sc->irq2ee = plus_irq2eemap; sc->ee2irq = plus_ee2irqmap; } else { sc->irq2ee = irq2eemap; sc->ee2irq = ee2irqmap; } sc->mem_size = CARD_RAM_SIZE; /* XXX This should be read from the card itself. */ /* * Initialize the ifnet structure. 
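*/
#if 0	/* Editor's illustrative note, not part of the driver: the ifnet
	 * wiring below is the usual pattern - the driver supplies its
	 * start/ioctl/init entry points before ether_ifattach(): */
ifp->if_start = ex_start;	/* kick the transmit queue */
ifp->if_ioctl = ex_ioctl;	/* handle SIOC* requests */
ifp->if_init = ex_init;		/* (re)initialize the hardware */
#endif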
*/ ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_SIMPLEX | IFF_BROADCAST | IFF_MULTICAST; ifp->if_start = ex_start; ifp->if_ioctl = ex_ioctl; ifp->if_init = ex_init; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifmedia_init(&sc->ifmedia, 0, ex_ifmedia_upd, ex_ifmedia_sts); mtx_init(&sc->lock, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->timer, &sc->lock, 0); temp = ex_eeprom_read(sc, EE_W5); if (temp & EE_W5_PORT_TPE) ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T, 0, NULL); if (temp & EE_W5_PORT_BNC) ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_2, 0, NULL); if (temp & EE_W5_PORT_AUI) ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_5, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_AUTO, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_NONE, 0, NULL); ifmedia_set(&sc->ifmedia, ex_get_media(sc)); ifm = &sc->ifmedia; ifm->ifm_media = ifm->ifm_cur->ifm_media; ex_ifmedia_upd(ifp); /* * Attach the interface. */ ether_ifattach(ifp, sc->enaddr); error = bus_setup_intr(dev, sc->irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, ex_intr, (void *)sc, &sc->ih); if (error) { device_printf(dev, "bus_setup_intr() failed!\n"); ether_ifdetach(ifp); mtx_destroy(&sc->lock); return (error); } + gone_by_fcp101_dev(dev); + return(0); } int ex_detach(device_t dev) { struct ex_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); ifp = sc->ifp; EX_LOCK(sc); ex_stop(sc); EX_UNLOCK(sc); ether_ifdetach(ifp); callout_drain(&sc->timer); ex_release_resources(dev); mtx_destroy(&sc->lock); return (0); } static void ex_init(void *xsc) { struct ex_softc * sc = (struct ex_softc *) xsc; EX_LOCK(sc); ex_init_locked(sc); EX_UNLOCK(sc); } static void ex_init_locked(struct ex_softc *sc) { struct ifnet * ifp = sc->ifp; int i; unsigned short temp_reg; DODEBUG(Start_End, printf("%s: ex_init: start\n", ifp->if_xname);); sc->tx_timeout = 0; /* * Load the ethernet address into the card. */ CSR_WRITE_1(sc, CMD_REG, Bank2_Sel); temp_reg = CSR_READ_1(sc, EEPROM_REG); if (temp_reg & Trnoff_Enable) CSR_WRITE_1(sc, EEPROM_REG, temp_reg & ~Trnoff_Enable); for (i = 0; i < ETHER_ADDR_LEN; i++) CSR_WRITE_1(sc, I_ADDR_REG0 + i, IF_LLADDR(sc->ifp)[i]); /* * - Setup transmit chaining and discard bad received frames. * - Match broadcast. * - Clear test mode. * - Set receiving mode. */ CSR_WRITE_1(sc, REG1, CSR_READ_1(sc, REG1) | Tx_Chn_Int_Md | Tx_Chn_ErStp | Disc_Bad_Fr); CSR_WRITE_1(sc, REG2, CSR_READ_1(sc, REG2) | No_SA_Ins | RX_CRC_InMem); CSR_WRITE_1(sc, REG3, CSR_READ_1(sc, REG3) & 0x3f /* XXX constants. */ ); /* * - Set IRQ number, if this part has it. ISA devices have this, * while PC Card devices don't seem to. Either way, we have to * switch to Bank1 as the rest of this code relies on that. */ CSR_WRITE_1(sc, CMD_REG, Bank1_Sel); if (sc->flags & HAS_INT_NO_REG) CSR_WRITE_1(sc, INT_NO_REG, (CSR_READ_1(sc, INT_NO_REG) & 0xf8) | sc->irq2ee[sc->irq_no]); /* * Divide the available memory in the card into rcv and xmt buffers. * By default, I use the first 3/4 of the memory for the rcv buffer, * and the remaining 1/4 of the memory for the xmt buffer. 
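*/
#if 0	/* Editor's illustrative note, not part of the driver: for a
	 * hypothetical 32 KB card the split below gives
	 *	rx_mem_size = 32768 * 3 / 4 = 24576 bytes
	 *	tx_mem_size = 32768 - 24576 =  8192 bytes
	 * and the limit registers are loaded with the high byte of each
	 * boundary, i.e. the chip tracks memory in 256-byte units. */
#endif
/*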
*/ sc->rx_mem_size = sc->mem_size * 3 / 4; sc->tx_mem_size = sc->mem_size - sc->rx_mem_size; sc->rx_lower_limit = 0x0000; sc->rx_upper_limit = sc->rx_mem_size - 2; sc->tx_lower_limit = sc->rx_mem_size; sc->tx_upper_limit = sc->mem_size - 2; CSR_WRITE_1(sc, RCV_LOWER_LIMIT_REG, sc->rx_lower_limit >> 8); CSR_WRITE_1(sc, RCV_UPPER_LIMIT_REG, sc->rx_upper_limit >> 8); CSR_WRITE_1(sc, XMT_LOWER_LIMIT_REG, sc->tx_lower_limit >> 8); CSR_WRITE_1(sc, XMT_UPPER_LIMIT_REG, sc->tx_upper_limit >> 8); /* * Enable receive and transmit interrupts, and clear any pending int. */ CSR_WRITE_1(sc, REG1, CSR_READ_1(sc, REG1) | TriST_INT); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); CSR_WRITE_1(sc, MASK_REG, All_Int & ~(Rx_Int | Tx_Int)); CSR_WRITE_1(sc, STATUS_REG, All_Int); /* * Initialize receive and transmit ring buffers. */ CSR_WRITE_2(sc, RCV_BAR, sc->rx_lower_limit); sc->rx_head = sc->rx_lower_limit; CSR_WRITE_2(sc, RCV_STOP_REG, sc->rx_upper_limit | 0xfe); CSR_WRITE_2(sc, XMT_BAR, sc->tx_lower_limit); sc->tx_head = sc->tx_tail = sc->tx_lower_limit; ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; DODEBUG(Status, printf("OIDLE init\n");); callout_reset(&sc->timer, hz, ex_watchdog, sc); ex_setmulti(sc); /* * Final reset of the board, and enable operation. */ CSR_WRITE_1(sc, CMD_REG, Sel_Reset_CMD); DELAY(2); CSR_WRITE_1(sc, CMD_REG, Rcv_Enable_CMD); ex_start_locked(ifp); DODEBUG(Start_End, printf("%s: ex_init: finish\n", ifp->if_xname);); } static void ex_start(struct ifnet *ifp) { struct ex_softc * sc = ifp->if_softc; EX_LOCK(sc); ex_start_locked(ifp); EX_UNLOCK(sc); } static void ex_start_locked(struct ifnet *ifp) { struct ex_softc * sc = ifp->if_softc; int i, len, data_len, avail, dest, next; unsigned char tmp16[2]; struct mbuf * opkt; struct mbuf * m; DODEBUG(Start_End, printf("ex_start%d: start\n", unit);); /* * Main loop: send outgoing packets to network card until there are no * more packets left, or the card cannot accept any more yet. */ while (((opkt = ifp->if_snd.ifq_head) != NULL) && !(ifp->if_drv_flags & IFF_DRV_OACTIVE)) { /* * Ensure there is enough free transmit buffer space for * this packet, including its header. Note: the header * cannot wrap around the end of the transmit buffer and * must be kept together, so we allow space for twice the * length of the header, just in case. */ for (len = 0, m = opkt; m != NULL; m = m->m_next) { len += m->m_len; } data_len = len; DODEBUG(Sent_Pkts, printf("1. Sending packet with %d data bytes. ", data_len);); if (len & 1) { len += XMT_HEADER_LEN + 1; } else { len += XMT_HEADER_LEN; } if ((i = sc->tx_tail - sc->tx_head) >= 0) { avail = sc->tx_mem_size - i; } else { avail = -i; } DODEBUG(Sent_Pkts, printf("i=%d, avail=%d\n", i, avail);); if (avail >= len + XMT_HEADER_LEN) { IF_DEQUEUE(&ifp->if_snd, opkt); #ifdef EX_PSA_INTR /* * Disable rx and tx interrupts, to avoid corruption * of the host address register by interrupt service * routines. * XXX Is this necessary with splimp() enabled? */ CSR_WRITE_1(sc, MASK_REG, All_Int); #endif /* * Compute the start and end addresses of this * frame in the tx buffer. */ dest = sc->tx_tail; next = dest + len; if (next > sc->tx_upper_limit) { if ((sc->tx_upper_limit + 2 - sc->tx_tail) <= XMT_HEADER_LEN) { dest = sc->tx_lower_limit; next = dest + len; } else { next = sc->tx_lower_limit + next - sc->tx_upper_limit - 2; } } /* * Build the packet frame in the card's ring buffer. */ DODEBUG(Sent_Pkts, printf("2. dest=%d, next=%d. 
", dest, next);); CSR_WRITE_2(sc, HOST_ADDR_REG, dest); CSR_WRITE_2(sc, IO_PORT_REG, Transmit_CMD); CSR_WRITE_2(sc, IO_PORT_REG, 0); CSR_WRITE_2(sc, IO_PORT_REG, next); CSR_WRITE_2(sc, IO_PORT_REG, data_len); /* * Output the packet data to the card. Ensure all * transfers are 16-bit wide, even if individual * mbufs have odd length. */ for (m = opkt, i = 0; m != NULL; m = m->m_next) { DODEBUG(Sent_Pkts, printf("[%d]", m->m_len);); if (i) { tmp16[1] = *(mtod(m, caddr_t)); CSR_WRITE_MULTI_2(sc, IO_PORT_REG, (uint16_t *) tmp16, 1); } CSR_WRITE_MULTI_2(sc, IO_PORT_REG, (uint16_t *) (mtod(m, caddr_t) + i), (m->m_len - i) / 2); if ((i = (m->m_len - i) & 1) != 0) { tmp16[0] = *(mtod(m, caddr_t) + m->m_len - 1); } } if (i) CSR_WRITE_MULTI_2(sc, IO_PORT_REG, (uint16_t *) tmp16, 1); /* * If there were other frames chained, update the * chain in the last one. */ if (sc->tx_head != sc->tx_tail) { if (sc->tx_tail != dest) { CSR_WRITE_2(sc, HOST_ADDR_REG, sc->tx_last + XMT_Chain_Point); CSR_WRITE_2(sc, IO_PORT_REG, dest); } CSR_WRITE_2(sc, HOST_ADDR_REG, sc->tx_last + XMT_Byte_Count); i = CSR_READ_2(sc, IO_PORT_REG); CSR_WRITE_2(sc, HOST_ADDR_REG, sc->tx_last + XMT_Byte_Count); CSR_WRITE_2(sc, IO_PORT_REG, i | Ch_bit); } /* * Resume normal operation of the card: * - Make a dummy read to flush the DRAM write * pipeline. * - Enable receive and transmit interrupts. * - Send Transmit or Resume_XMT command, as * appropriate. */ CSR_READ_2(sc, IO_PORT_REG); #ifdef EX_PSA_INTR CSR_WRITE_1(sc, MASK_REG, All_Int & ~(Rx_Int | Tx_Int)); #endif if (sc->tx_head == sc->tx_tail) { CSR_WRITE_2(sc, XMT_BAR, dest); CSR_WRITE_1(sc, CMD_REG, Transmit_CMD); sc->tx_head = dest; DODEBUG(Sent_Pkts, printf("Transmit\n");); } else { CSR_WRITE_1(sc, CMD_REG, Resume_XMT_List_CMD); DODEBUG(Sent_Pkts, printf("Resume\n");); } sc->tx_last = dest; sc->tx_tail = next; BPF_MTAP(ifp, opkt); sc->tx_timeout = 2; if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); m_freem(opkt); } else { ifp->if_drv_flags |= IFF_DRV_OACTIVE; DODEBUG(Status, printf("OACTIVE start\n");); } } DODEBUG(Start_End, printf("ex_start%d: finish\n", unit);); } void ex_stop(struct ex_softc *sc) { DODEBUG(Start_End, printf("ex_stop%d: start\n", unit);); EX_ASSERT_LOCKED(sc); /* * Disable card operation: * - Disable the interrupt line. * - Flush transmission and disable reception. * - Mask and clear all interrupts. * - Reset the 82595. */ CSR_WRITE_1(sc, CMD_REG, Bank1_Sel); CSR_WRITE_1(sc, REG1, CSR_READ_1(sc, REG1) & ~TriST_INT); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); CSR_WRITE_1(sc, CMD_REG, Rcv_Stop); sc->tx_head = sc->tx_tail = sc->tx_lower_limit; sc->tx_last = 0; /* XXX I think these two lines are not necessary, because ex_init will always be called again to reinit the interface. 
*/ CSR_WRITE_1(sc, MASK_REG, All_Int); CSR_WRITE_1(sc, STATUS_REG, All_Int); CSR_WRITE_1(sc, CMD_REG, Reset_CMD); DELAY(200); sc->ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); sc->tx_timeout = 0; callout_stop(&sc->timer); DODEBUG(Start_End, printf("ex_stop%d: finish\n", unit);); return; } void ex_intr(void *arg) { struct ex_softc *sc = (struct ex_softc *)arg; struct ifnet *ifp = sc->ifp; int int_status, send_pkts; int loops = 100; DODEBUG(Start_End, printf("ex_intr%d: start\n", unit);); EX_LOCK(sc); send_pkts = 0; while (loops-- > 0 && (int_status = CSR_READ_1(sc, STATUS_REG)) & (Tx_Int | Rx_Int)) { /* don't loop forever */ if (int_status == 0xff) break; if (int_status & Rx_Int) { CSR_WRITE_1(sc, STATUS_REG, Rx_Int); ex_rx_intr(sc); } else if (int_status & Tx_Int) { CSR_WRITE_1(sc, STATUS_REG, Tx_Int); ex_tx_intr(sc); send_pkts = 1; } } if (loops == 0) printf("100 loops are not enough\n"); /* * If any packet has been transmitted, and there are queued packets to * be sent, attempt to send more packets to the network card. */ if (send_pkts && (ifp->if_snd.ifq_head != NULL)) ex_start_locked(ifp); EX_UNLOCK(sc); DODEBUG(Start_End, printf("ex_intr%d: finish\n", unit);); return; } static void ex_tx_intr(struct ex_softc *sc) { struct ifnet * ifp = sc->ifp; int tx_status; DODEBUG(Start_End, printf("ex_tx_intr%d: start\n", unit);); /* * - Cancel the watchdog. * For all packets transmitted since last transmit interrupt: * - Advance chain pointer to next queued packet. * - Update statistics. */ sc->tx_timeout = 0; while (sc->tx_head != sc->tx_tail) { CSR_WRITE_2(sc, HOST_ADDR_REG, sc->tx_head); if (!(CSR_READ_2(sc, IO_PORT_REG) & Done_bit)) break; tx_status = CSR_READ_2(sc, IO_PORT_REG); sc->tx_head = CSR_READ_2(sc, IO_PORT_REG); if (tx_status & TX_OK_bit) { if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); } else { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); } if_inc_counter(ifp, IFCOUNTER_COLLISIONS, tx_status & No_Collisions_bits); } /* * The card should be ready to accept more packets now. */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; DODEBUG(Status, printf("OIDLE tx_intr\n");); DODEBUG(Start_End, printf("ex_tx_intr%d: finish\n", unit);); return; } static void ex_rx_intr(struct ex_softc *sc) { struct ifnet * ifp = sc->ifp; int rx_status; int pkt_len; int QQQ; struct mbuf * m; struct mbuf * ipkt; struct ether_header * eh; DODEBUG(Start_End, printf("ex_rx_intr%d: start\n", unit);); /* * For all packets received since last receive interrupt: * - If packet ok, read it into a new mbuf and queue it to interface, * updating statistics. * - If packet bad, just discard it, and update statistics. * Finally, advance receive stop limit in card's memory to new location. */ CSR_WRITE_2(sc, HOST_ADDR_REG, sc->rx_head); while (CSR_READ_2(sc, IO_PORT_REG) == RCV_Done) { rx_status = CSR_READ_2(sc, IO_PORT_REG); sc->rx_head = CSR_READ_2(sc, IO_PORT_REG); QQQ = pkt_len = CSR_READ_2(sc, IO_PORT_REG); if (rx_status & RCV_OK_bit) { MGETHDR(m, M_NOWAIT, MT_DATA); ipkt = m; if (ipkt == NULL) { if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); } else { ipkt->m_pkthdr.rcvif = ifp; ipkt->m_pkthdr.len = pkt_len; ipkt->m_len = MHLEN; while (pkt_len > 0) { if (pkt_len >= MINCLSIZE) { if (MCLGET(m, M_NOWAIT)) { m->m_len = MCLBYTES; } else { m_freem(ipkt); if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); goto rx_another; } } m->m_len = min(m->m_len, pkt_len); /* * NOTE: I'm assuming that all mbufs allocated are of even length, * except for the last one in an odd-length packet. 
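*/
#if 0	/* Editor's illustrative sketch, not part of the driver: the pattern
	 * used below reads a buffer 16 bits at a time and picks up a
	 * trailing odd byte with an 8-bit access ('buf' and 'len' are
	 * hypothetical): */
CSR_READ_MULTI_2(sc, IO_PORT_REG, (uint16_t *)buf, len / 2);
if (len & 1)
	buf[len - 1] = CSR_READ_1(sc, IO_PORT_REG);
#endif
/*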
*/ CSR_READ_MULTI_2(sc, IO_PORT_REG, mtod(m, uint16_t *), m->m_len / 2); if (m->m_len & 1) { *(mtod(m, caddr_t) + m->m_len - 1) = CSR_READ_1(sc, IO_PORT_REG); } pkt_len -= m->m_len; if (pkt_len > 0) { MGET(m->m_next, M_NOWAIT, MT_DATA); if (m->m_next == NULL) { m_freem(ipkt); if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); goto rx_another; } m = m->m_next; m->m_len = MLEN; } } eh = mtod(ipkt, struct ether_header *); #ifdef EXDEBUG if (debug_mask & Rcvd_Pkts) { if ((eh->ether_dhost[5] != 0xff) || (eh->ether_dhost[0] != 0xff)) { printf("Receive packet with %d data bytes: %6D -> ", QQQ, eh->ether_shost, ":"); printf("%6D\n", eh->ether_dhost, ":"); } /* QQQ */ } #endif EX_UNLOCK(sc); (*ifp->if_input)(ifp, ipkt); EX_LOCK(sc); if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); } } else { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); } CSR_WRITE_2(sc, HOST_ADDR_REG, sc->rx_head); rx_another: ; } if (sc->rx_head < sc->rx_lower_limit + 2) CSR_WRITE_2(sc, RCV_STOP_REG, sc->rx_upper_limit); else CSR_WRITE_2(sc, RCV_STOP_REG, sc->rx_head - 2); DODEBUG(Start_End, printf("ex_rx_intr%d: finish\n", unit);); return; } static int ex_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct ex_softc * sc = ifp->if_softc; struct ifreq * ifr = (struct ifreq *)data; int error = 0; DODEBUG(Start_End, printf("%s: ex_ioctl: start ", ifp->if_xname);); switch(cmd) { case SIOCSIFFLAGS: DODEBUG(Start_End, printf("SIOCSIFFLAGS");); EX_LOCK(sc); if ((ifp->if_flags & IFF_UP) == 0 && (ifp->if_drv_flags & IFF_DRV_RUNNING)) { ex_stop(sc); } else { ex_init_locked(sc); } EX_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: ex_init(sc); error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: error = ifmedia_ioctl(ifp, ifr, &sc->ifmedia, cmd); break; default: error = ether_ioctl(ifp, cmd, data); break; } DODEBUG(Start_End, printf("\n%s: ex_ioctl: finish\n", ifp->if_xname);); return(error); } static void ex_setmulti(struct ex_softc *sc) { struct ifnet *ifp; struct ifmultiaddr *maddr; uint16_t *addr; int count; int timeout, status; ifp = sc->ifp; count = 0; if_maddr_rlock(ifp); CK_STAILQ_FOREACH(maddr, &ifp->if_multiaddrs, ifma_link) { if (maddr->ifma_addr->sa_family != AF_LINK) continue; count++; } if_maddr_runlock(ifp); if ((ifp->if_flags & IFF_PROMISC) || (ifp->if_flags & IFF_ALLMULTI) || count > 63) { /* Interface is in promiscuous mode or there are too many * multicast addresses for the card to handle */ CSR_WRITE_1(sc, CMD_REG, Bank2_Sel); CSR_WRITE_1(sc, REG2, CSR_READ_1(sc, REG2) | Promisc_Mode); CSR_WRITE_1(sc, REG3, CSR_READ_1(sc, REG3)); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); } else if ((ifp->if_flags & IFF_MULTICAST) && (count > 0)) { /* Program multicast addresses plus our MAC address * into the filter */ CSR_WRITE_1(sc, CMD_REG, Bank2_Sel); CSR_WRITE_1(sc, REG2, CSR_READ_1(sc, REG2) | Multi_IA); CSR_WRITE_1(sc, REG3, CSR_READ_1(sc, REG3)); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); /* Borrow space from TX buffer; this should be safe * as this is only called from ex_init */ CSR_WRITE_2(sc, HOST_ADDR_REG, sc->tx_lower_limit); CSR_WRITE_2(sc, IO_PORT_REG, MC_Setup_CMD); CSR_WRITE_2(sc, IO_PORT_REG, 0); CSR_WRITE_2(sc, IO_PORT_REG, 0); CSR_WRITE_2(sc, IO_PORT_REG, (count + 1) * 6); if_maddr_rlock(ifp); CK_STAILQ_FOREACH(maddr, &ifp->if_multiaddrs, ifma_link) { if (maddr->ifma_addr->sa_family != AF_LINK) continue; addr = (uint16_t*)LLADDR((struct sockaddr_dl *) maddr->ifma_addr); CSR_WRITE_2(sc, IO_PORT_REG, *addr++); CSR_WRITE_2(sc, IO_PORT_REG, *addr++); CSR_WRITE_2(sc, IO_PORT_REG, *addr++); } if_maddr_runlock(ifp); /* Program our 
MAC address as well */ /* XXX: Is this necessary? The Linux driver does this * but the NetBSD driver does not */ addr = (uint16_t*)IF_LLADDR(sc->ifp); CSR_WRITE_2(sc, IO_PORT_REG, *addr++); CSR_WRITE_2(sc, IO_PORT_REG, *addr++); CSR_WRITE_2(sc, IO_PORT_REG, *addr++); CSR_READ_2(sc, IO_PORT_REG); CSR_WRITE_2(sc, XMT_BAR, sc->tx_lower_limit); CSR_WRITE_1(sc, CMD_REG, MC_Setup_CMD); sc->tx_head = sc->tx_lower_limit; sc->tx_tail = sc->tx_head + XMT_HEADER_LEN + (count + 1) * 6; for (timeout=0; timeout<100; timeout++) { DELAY(2); if ((CSR_READ_1(sc, STATUS_REG) & Exec_Int) == 0) continue; status = CSR_READ_1(sc, CMD_REG); CSR_WRITE_1(sc, STATUS_REG, Exec_Int); break; } sc->tx_head = sc->tx_tail; } else { /* No multicast or promiscuous mode */ CSR_WRITE_1(sc, CMD_REG, Bank2_Sel); CSR_WRITE_1(sc, REG2, CSR_READ_1(sc, REG2) & 0xDE); /* ~(Multi_IA | Promisc_Mode) */ CSR_WRITE_1(sc, REG3, CSR_READ_1(sc, REG3)); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); } } static void ex_reset(struct ex_softc *sc) { DODEBUG(Start_End, printf("ex_reset%d: start\n", unit);); EX_ASSERT_LOCKED(sc); ex_stop(sc); ex_init_locked(sc); DODEBUG(Start_End, printf("ex_reset%d: finish\n", unit);); return; } static void ex_watchdog(void *arg) { struct ex_softc * sc = arg; struct ifnet *ifp = sc->ifp; if (sc->tx_timeout && --sc->tx_timeout == 0) { DODEBUG(Start_End, if_printf(ifp, "ex_watchdog: start\n");); ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; DODEBUG(Status, printf("OIDLE watchdog\n");); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); ex_reset(sc); ex_start_locked(ifp); DODEBUG(Start_End, if_printf(ifp, "ex_watchdog: finish\n");); } callout_reset(&sc->timer, hz, ex_watchdog, sc); } static int ex_get_media(struct ex_softc *sc) { int current; int media; media = ex_eeprom_read(sc, EE_W5); CSR_WRITE_1(sc, CMD_REG, Bank2_Sel); current = CSR_READ_1(sc, REG3); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); if ((current & TPE_bit) && (media & EE_W5_PORT_TPE)) return(IFM_ETHER|IFM_10_T); if ((current & BNC_bit) && (media & EE_W5_PORT_BNC)) return(IFM_ETHER|IFM_10_2); if (media & EE_W5_PORT_AUI) return (IFM_ETHER|IFM_10_5); return (IFM_ETHER|IFM_AUTO); } static int ex_ifmedia_upd(ifp) struct ifnet * ifp; { struct ex_softc * sc = ifp->if_softc; if (IFM_TYPE(sc->ifmedia.ifm_media) != IFM_ETHER) return EINVAL; return (0); } static void ex_ifmedia_sts(ifp, ifmr) struct ifnet * ifp; struct ifmediareq * ifmr; { struct ex_softc * sc = ifp->if_softc; EX_LOCK(sc); ifmr->ifm_active = ex_get_media(sc); ifmr->ifm_status = IFM_AVALID | IFM_ACTIVE; EX_UNLOCK(sc); return; } u_short ex_eeprom_read(struct ex_softc *sc, int location) { int i; u_short data = 0; int read_cmd = location | EE_READ_CMD; short ctrl_val = EECS; CSR_WRITE_1(sc, CMD_REG, Bank2_Sel); CSR_WRITE_1(sc, EEPROM_REG, EECS); for (i = 8; i >= 0; i--) { short outval = (read_cmd & (1 << i)) ? ctrl_val | EEDI : ctrl_val; CSR_WRITE_1(sc, EEPROM_REG, outval); CSR_WRITE_1(sc, EEPROM_REG, outval | EESK); DELAY(3); CSR_WRITE_1(sc, EEPROM_REG, outval); DELAY(2); } CSR_WRITE_1(sc, EEPROM_REG, ctrl_val); for (i = 16; i > 0; i--) { CSR_WRITE_1(sc, EEPROM_REG, ctrl_val | EESK); DELAY(3); data = (data << 1) | ((CSR_READ_1(sc, EEPROM_REG) & EEDO) ? 
1 : 0); CSR_WRITE_1(sc, EEPROM_REG, ctrl_val); DELAY(2); } ctrl_val &= ~EECS; CSR_WRITE_1(sc, EEPROM_REG, ctrl_val | EESK); DELAY(3); CSR_WRITE_1(sc, EEPROM_REG, ctrl_val); DELAY(2); CSR_WRITE_1(sc, CMD_REG, Bank0_Sel); return(data); } Index: stable/12/sys/dev/fe/if_fe.c =================================================================== --- stable/12/sys/dev/fe/if_fe.c (revision 339734) +++ stable/12/sys/dev/fe/if_fe.c (revision 339735) @@ -1,2256 +1,2258 @@ /*- * All Rights Reserved, Copyright (C) Fujitsu Limited 1995 * * This software may be used, modified, copied, distributed, and sold, in * both source and binary form provided that the above copyright, these * terms and the following disclaimer are retained. The name of the author * and/or the contributor may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND THE CONTRIBUTOR ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR THE CONTRIBUTOR BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION. * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * * Device driver for Fujitsu MB86960A/MB86965A based Ethernet cards. * Contributed by M. Sekiguchi. * * This version is intended to be a generic template for various * MB86960A/MB86965A based Ethernet cards. It currently supports * Fujitsu FMV-180 series for ISA and Allied-Telesis AT1700/RE2000 * series for ISA, as well as Fujitsu MBH10302 PC Card. * There are some currently- * unused hooks embedded, which are primarily intended to support * other types of Ethernet cards, but the author is not sure whether * they are useful. * * This software is a derivative work of if_ed.c version 1.56 by David * Greenman available as a part of FreeBSD 2.0 RELEASE source distribution. * * The following lines are retained from the original if_ed.c: * * Copyright (C) 1993, David Greenman. This software may be used, modified, * copied, distributed, and sold, in both source and binary form provided * that the above copyright and these terms are retained. Under no * circumstances is the author responsible for the proper functioning * of this software, nor does the author assume any responsibility * for damages incurred with its use. */ /* * TODO: * o To support ISA PnP auto configuration for FMV-183/184. * o To reconsider mbuf usage. * o To reconsider transmission buffer usage, including * transmission buffer size (currently 4KB x 2) and pros-and- * cons of multiple frame transmission. * o To test IPX codes. * o To test new-bus frontend. */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* * Transmit just one packet per a "send" command to 86960. * This option is intended for performance test. An EXPERIMENTAL option. 
*/ #ifndef FE_SINGLE_TRANSMISSION #define FE_SINGLE_TRANSMISSION 0 #endif /* * Maximum number of loops per interrupt. * This option prevents an infinite loop due to hardware failure. * (Some laptops make an infinite loop after PC Card is ejected.) */ #ifndef FE_MAX_LOOP #define FE_MAX_LOOP 0x800 #endif /* * Device configuration flags. */ /* DLCR6 settings. */ #define FE_FLAGS_DLCR6_VALUE 0x007F /* Force DLCR6 override. */ #define FE_FLAGS_OVERRIDE_DLCR6 0x0080 devclass_t fe_devclass; /* * Special filter values. */ static struct fe_filter const fe_filter_nothing = { FE_FILTER_NOTHING }; static struct fe_filter const fe_filter_all = { FE_FILTER_ALL }; /* Standard driver entry points. These can be static. */ static void fe_init (void *); static void fe_init_locked (struct fe_softc *); static driver_intr_t fe_intr; static int fe_ioctl (struct ifnet *, u_long, caddr_t); static void fe_start (struct ifnet *); static void fe_start_locked (struct ifnet *); static void fe_watchdog (void *); static int fe_medchange (struct ifnet *); static void fe_medstat (struct ifnet *, struct ifmediareq *); /* Local functions. Order of declaration is confused. FIXME. */ static int fe_get_packet ( struct fe_softc *, u_short ); static void fe_tint ( struct fe_softc *, u_char ); static void fe_rint ( struct fe_softc *, u_char ); static void fe_xmit ( struct fe_softc * ); static void fe_write_mbufs ( struct fe_softc *, struct mbuf * ); static void fe_setmode ( struct fe_softc * ); static void fe_loadmar ( struct fe_softc * ); #ifdef DIAGNOSTIC static void fe_emptybuffer ( struct fe_softc * ); #endif /* * Fe driver specific constants which relate to 86960/86965. */ /* Interrupt masks */ #define FE_TMASK ( FE_D2_COLL16 | FE_D2_TXDONE ) #define FE_RMASK ( FE_D3_OVRFLO | FE_D3_CRCERR \ | FE_D3_ALGERR | FE_D3_SRTPKT | FE_D3_PKTRDY ) /* Maximum number of iterations for a receive interrupt. */ #define FE_MAX_RECV_COUNT ( ( 65536 - 2048 * 2 ) / 64 ) /* * Maximum size of SRAM is 65536, * minimum size of transmission buffer in fe is 2x2KB, * and minimum amount of received packet including headers * added by the chip is 64 bytes. * Hence FE_MAX_RECV_COUNT is the upper limit for the number * of packets in the receive buffer. */ /* * Miscellaneous definitions not directly related to hardware. */ /* The following line must be deleted when "net/if_media.h" supports it. */ #ifndef IFM_10_FL #define IFM_10_FL /* 13 */ IFM_10_5 #endif #if 0 /* Mapping between media bitmap (in fe_softc.mbitmap) and ifm_media. */ static int const bit2media [] = { IFM_HDX | IFM_ETHER | IFM_AUTO, IFM_HDX | IFM_ETHER | IFM_MANUAL, IFM_HDX | IFM_ETHER | IFM_10_T, IFM_HDX | IFM_ETHER | IFM_10_2, IFM_HDX | IFM_ETHER | IFM_10_5, IFM_HDX | IFM_ETHER | IFM_10_FL, IFM_FDX | IFM_ETHER | IFM_10_T, /* More can come here... */ 0 }; #else /* Mapping between media bitmap (in fe_softc.mbitmap) and ifm_media. */ static int const bit2media [] = { IFM_ETHER | IFM_AUTO, IFM_ETHER | IFM_MANUAL, IFM_ETHER | IFM_10_T, IFM_ETHER | IFM_10_2, IFM_ETHER | IFM_10_5, IFM_ETHER | IFM_10_FL, IFM_ETHER | IFM_10_T, /* More can come here... */ 0 }; #endif /* * Check that specific bits in specific registers have specific values. * A common utility function called from various sub-probe routines.
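/*
 * [Editorial sketch.]  The FE_MAX_RECV_COUNT bound above is pure
 * arithmetic: with the maximum 64KB of SRAM, the minimum 2x2KB transmit
 * buffer carved out, and at least 64 bytes consumed per received frame
 * (chip-added headers included), at most 960 frames can ever sit in the
 * receive area.  Stated as a compile-time check:
 */
_Static_assert((65536 - 2048 * 2) / 64 == 960,
    "FE_MAX_RECV_COUNT evaluates to 960 packets");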
*/ int fe_simple_probe (struct fe_softc const * sc, struct fe_simple_probe_struct const * sp) { struct fe_simple_probe_struct const *p; int8_t bits; for (p = sp; p->mask != 0; p++) { bits = fe_inb(sc, p->port); printf("port %d, mask %x, bits %x read %x\n", p->port, p->mask, p->bits, bits); if ((bits & p->mask) != p->bits) return 0; } return 1; } /* Test if a given 6 byte value is a valid Ethernet station (MAC) address. "Vendor" is an expected vendor code (first three bytes,) or a zero when nothing expected. */ int fe_valid_Ether_p (u_char const * addr, unsigned vendor) { #ifdef FE_DEBUG printf("fe?: validating %6D against %06x\n", addr, ":", vendor); #endif /* All zero is not allowed as a vendor code. */ if (addr[0] == 0 && addr[1] == 0 && addr[2] == 0) return 0; switch (vendor) { case 0x000000: /* Legal Ethernet address (stored in ROM) must have its Group and Local bits cleared. */ if ((addr[0] & 0x03) != 0) return 0; break; case 0x020000: /* Same as above, but a local address is allowed in this context. */ if (ETHER_IS_MULTICAST(addr)) return 0; break; default: /* Make sure the vendor part matches if one is given. */ if ( addr[0] != ((vendor >> 16) & 0xFF) || addr[1] != ((vendor >> 8) & 0xFF) || addr[2] != ((vendor ) & 0xFF)) return 0; break; } /* Host part must not be all-zeros nor all-ones. */ if (addr[3] == 0xFF && addr[4] == 0xFF && addr[5] == 0xFF) return 0; if (addr[3] == 0x00 && addr[4] == 0x00 && addr[5] == 0x00) return 0; /* Given addr looks like an Ethernet address. */ return 1; } /* Fill our softc struct with default value. */ void fe_softc_defaults (struct fe_softc *sc) { /* Prepare for typical register prototypes. We assume a "typical" board has <32KB> of SRAM connected with a data lines. */ sc->proto_dlcr4 = FE_D4_LBC_DISABLE | FE_D4_CNTRL; sc->proto_dlcr5 = 0; sc->proto_dlcr6 = FE_D6_BUFSIZ_32KB | FE_D6_TXBSIZ_2x4KB | FE_D6_BBW_BYTE | FE_D6_SBW_WORD | FE_D6_SRAM_100ns; sc->proto_dlcr7 = FE_D7_BYTSWP_LH; sc->proto_bmpr13 = 0; /* Assume the probe process (to be done later) is stable. */ sc->stability = 0; /* A typical board needs no hooks. */ sc->init = NULL; sc->stop = NULL; /* Assume the board has no software-controllable media selection. */ sc->mbitmap = MB_HM; sc->defmedia = MB_HM; sc->msel = NULL; } /* Common error reporting routine used in probe routines for "soft configured IRQ"-type boards. */ void fe_irq_failure (char const *name, int unit, int irq, char const *list) { printf("fe%d: %s board is detected, but %s IRQ was given\n", unit, name, (irq == NO_IRQ ? "no" : "invalid")); if (list != NULL) { printf("fe%d: specify an IRQ from %s in kernel config\n", unit, list); } } /* * Hardware (vendor) specific hooks. */ /* * Generic media selection scheme for MB86965 based boards. */ void fe_msel_965 (struct fe_softc *sc) { u_char b13; /* Find the appropriate bits for BMPR13 tranceiver control. */ switch (IFM_SUBTYPE(sc->media.ifm_media)) { case IFM_AUTO: b13 = FE_B13_PORT_AUTO | FE_B13_TPTYPE_UTP; break; case IFM_10_T: b13 = FE_B13_PORT_TP | FE_B13_TPTYPE_UTP; break; default: b13 = FE_B13_PORT_AUI; break; } /* Write it into the register. It takes effect immediately. */ fe_outb(sc, FE_BMPR13, sc->proto_bmpr13 | b13); } /* * Fujitsu MB86965 JLI mode support routines. */ /* * Routines to read all bytes from the config EEPROM through MB86965A. * It is a MicroWire (3-wire) serial EEPROM with 6-bit address. * (93C06 or 93C46.) */ static void fe_strobe_eeprom_jli (struct fe_softc *sc, u_short bmpr16) { /* * We must guarantee 1us (or more) interval to access slow * EEPROMs. 
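/*
 * [Editorial sketch.]  The rules fe_valid_Ether_p() above encodes for a
 * ROM-resident station address: in the first octet, bit 0 is the group
 * (multicast) bit and bit 1 the locally administered bit, and both must
 * be clear; the 24-bit host part must be neither all-zeros nor all-ones.
 * A compressed restatement of just that case (the vendor-OUI matching
 * the driver also performs is omitted here):
 */
static int
is_rom_station_addr(const unsigned char a[6])
{
	if (a[0] & 0x01)			/* group bit must be clear */
		return (0);
	if (a[0] & 0x02)			/* locally administered bit, too */
		return (0);
	if ((a[3] | a[4] | a[5]) == 0x00)	/* host part all-zeros */
		return (0);
	if ((a[3] & a[4] & a[5]) == 0xFF)	/* host part all-ones */
		return (0);
	return (1);
}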
The following redundant code provides enough * delay with ISA timing. (Even if the bus clock is "tuned.") * Some modification will be needed on faster busses. */ fe_outb(sc, bmpr16, FE_B16_SELECT); fe_outb(sc, bmpr16, FE_B16_SELECT | FE_B16_CLOCK); fe_outb(sc, bmpr16, FE_B16_SELECT | FE_B16_CLOCK); fe_outb(sc, bmpr16, FE_B16_SELECT); } void fe_read_eeprom_jli (struct fe_softc * sc, u_char * data) { u_char n, val, bit; u_char save16, save17; /* Save the current value of the EEPROM interface registers. */ save16 = fe_inb(sc, FE_BMPR16); save17 = fe_inb(sc, FE_BMPR17); /* Read bytes from EEPROM; two bytes per iteration. */ for (n = 0; n < JLI_EEPROM_SIZE / 2; n++) { /* Reset the EEPROM interface. */ fe_outb(sc, FE_BMPR16, 0x00); fe_outb(sc, FE_BMPR17, 0x00); /* Start EEPROM access. */ fe_outb(sc, FE_BMPR16, FE_B16_SELECT); fe_outb(sc, FE_BMPR17, FE_B17_DATA); fe_strobe_eeprom_jli(sc, FE_BMPR16); /* Pass the iteration count as well as a READ command. */ val = 0x80 | n; for (bit = 0x80; bit != 0x00; bit >>= 1) { fe_outb(sc, FE_BMPR17, (val & bit) ? FE_B17_DATA : 0); fe_strobe_eeprom_jli(sc, FE_BMPR16); } fe_outb(sc, FE_BMPR17, 0x00); /* Read a byte. */ val = 0; for (bit = 0x80; bit != 0x00; bit >>= 1) { fe_strobe_eeprom_jli(sc, FE_BMPR16); if (fe_inb(sc, FE_BMPR17) & FE_B17_DATA) val |= bit; } *data++ = val; /* Read one more byte. */ val = 0; for (bit = 0x80; bit != 0x00; bit >>= 1) { fe_strobe_eeprom_jli(sc, FE_BMPR16); if (fe_inb(sc, FE_BMPR17) & FE_B17_DATA) val |= bit; } *data++ = val; } #if 0 /* Reset the EEPROM interface, again. */ fe_outb(sc, FE_BMPR16, 0x00); fe_outb(sc, FE_BMPR17, 0x00); #else /* Make sure to restore the original value of EEPROM interface registers, since we are not yet sure we have MB86965A on the address. */ fe_outb(sc, FE_BMPR17, save17); fe_outb(sc, FE_BMPR16, save16); #endif #if 1 /* Report what we got. */ if (bootverbose) { int i; data -= JLI_EEPROM_SIZE; for (i = 0; i < JLI_EEPROM_SIZE; i += 16) { if_printf(sc->ifp, "EEPROM(JLI):%3x: %16D\n", i, data + i, " "); } } #endif } void fe_init_jli (struct fe_softc * sc) { /* "Reset" by writing into a magic location. */ DELAY(200); fe_outb(sc, 0x1E, fe_inb(sc, 0x1E)); DELAY(300); } /* * SSi 78Q8377A support routines. */ /* * Routines to read all bytes from the config EEPROM through 78Q8377A. * It is a MicroWire (3-wire) serial EEPROM with 8-bit address. (I.e., * 93C56 or 93C66.) * * As I don't have SSi manuals, (hmm, an old song again!) I'm not exactly * sure the following code is correct... It is just stolen from the * C-NET(98)P2 support routine in FreeBSD(98). */ void fe_read_eeprom_ssi (struct fe_softc *sc, u_char *data) { u_char val, bit; int n; u_char save6, save7, save12; /* Save the current value for the DLCR registers we are about to destroy. */ save6 = fe_inb(sc, FE_DLCR6); save7 = fe_inb(sc, FE_DLCR7); /* Put the 78Q8377A into a state in which we can access the EEPROM. */ fe_outb(sc, FE_DLCR6, FE_D6_BBW_WORD | FE_D6_SBW_WORD | FE_D6_DLC_DISABLE); fe_outb(sc, FE_DLCR7, FE_D7_BYTSWP_LH | FE_D7_RBS_BMPR | FE_D7_RDYPNS | FE_D7_POWER_UP); /* Save the current value for the BMPR12 register, too. */ save12 = fe_inb(sc, FE_DLCR12); /* Read bytes from EEPROM; two bytes per iteration. */ for (n = 0; n < SSI_EEPROM_SIZE / 2; n++) { /* Start EEPROM access */ fe_outb(sc, FE_DLCR12, SSI_EEP); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL); /* Send the following four bits to the EEPROM in the specified order: a dummy bit, a start bit, and command bits (10) for READ.
*/ fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL ); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK ); /* 0 */ fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_DAT); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK | SSI_DAT); /* 1 */ fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_DAT); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK | SSI_DAT); /* 1 */ fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL ); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK ); /* 0 */ /* Pass the iteration count to the chip. */ for (bit = 0x80; bit != 0x00; bit >>= 1) { val = ( n & bit ) ? SSI_DAT : 0; fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | val); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK | val); } /* Read a byte. */ val = 0; for (bit = 0x80; bit != 0x00; bit >>= 1) { fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK); if (fe_inb(sc, FE_DLCR12) & SSI_DIN) val |= bit; } *data++ = val; /* Read one more byte. */ val = 0; for (bit = 0x80; bit != 0x00; bit >>= 1) { fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL); fe_outb(sc, FE_DLCR12, SSI_EEP | SSI_CSL | SSI_CLK); if (fe_inb(sc, FE_DLCR12) & SSI_DIN) val |= bit; } *data++ = val; fe_outb(sc, FE_DLCR12, SSI_EEP); } /* Reset the EEPROM interface. (For now.) */ fe_outb(sc, FE_DLCR12, 0x00); /* Restore the saved register values, for the case that we didn't have 78Q8377A at the given address. */ fe_outb(sc, FE_DLCR12, save12); fe_outb(sc, FE_DLCR7, save7); fe_outb(sc, FE_DLCR6, save6); #if 1 /* Report what we got. */ if (bootverbose) { int i; data -= SSI_EEPROM_SIZE; for (i = 0; i < SSI_EEPROM_SIZE; i += 16) { if_printf(sc->ifp, "EEPROM(SSI):%3x: %16D\n", i, data + i, " "); } } #endif } /* * TDK/LANX boards support routines. */ /* It is assumed that the CLK line is low and SDA is high (float) upon entry. */ #define LNX_PH(D,K,N) \ ((LNX_SDA_##D | LNX_CLK_##K) << N) #define LNX_CYCLE(D1,D2,D3,D4,K1,K2,K3,K4) \ (LNX_PH(D1,K1,0)|LNX_PH(D2,K2,8)|LNX_PH(D3,K3,16)|LNX_PH(D4,K4,24)) #define LNX_CYCLE_START LNX_CYCLE(HI,LO,LO,HI, HI,HI,LO,LO) #define LNX_CYCLE_STOP LNX_CYCLE(LO,LO,HI,HI, LO,HI,HI,LO) #define LNX_CYCLE_HI LNX_CYCLE(HI,HI,HI,HI, LO,HI,LO,LO) #define LNX_CYCLE_LO LNX_CYCLE(LO,LO,LO,HI, LO,HI,LO,LO) #define LNX_CYCLE_INIT LNX_CYCLE(LO,HI,HI,HI, LO,LO,LO,LO) static void fe_eeprom_cycle_lnx (struct fe_softc *sc, u_short reg20, u_long cycle) { fe_outb(sc, reg20, (cycle ) & 0xFF); DELAY(15); fe_outb(sc, reg20, (cycle >> 8) & 0xFF); DELAY(15); fe_outb(sc, reg20, (cycle >> 16) & 0xFF); DELAY(15); fe_outb(sc, reg20, (cycle >> 24) & 0xFF); DELAY(15); } static u_char fe_eeprom_receive_lnx (struct fe_softc *sc, u_short reg20) { u_char dat; fe_outb(sc, reg20, LNX_CLK_HI | LNX_SDA_FL); DELAY(15); dat = fe_inb(sc, reg20); fe_outb(sc, reg20, LNX_CLK_LO | LNX_SDA_FL); DELAY(15); return (dat & LNX_SDA_IN); } void fe_read_eeprom_lnx (struct fe_softc *sc, u_char *data) { int i; u_char n, bit, val; u_char save20; u_short reg20 = 0x14; save20 = fe_inb(sc, reg20); /* NOTE: DELAY() timing constants are approximately three times longer (slower) than the required minimum. This is to guarantee a reliable operation under some tough conditions... Fortunately, this routine is only called during the boot phase, so the speed is less important than stability. */ #if 1 /* Reset the X24C01's internal state machine and put it into the IDLE state. We usually don't need this, but *if* someone (e.g., probe routine of other driver) write some garbage into the register at 0x14, synchronization will be lost, and the normal EEPROM access protocol won't work. 
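/*
 * [Editorial sketch.]  The LNX_PH/LNX_CYCLE macros above pack four
 * successive bus phases (one byte of SDA/CLK levels each) into a single
 * 32-bit constant, which fe_eeprom_cycle_lnx() then plays back low byte
 * first with a delay between phases.  The equivalent unpacking, spelled
 * out with a hypothetical output callback:
 */
static void
lnx_play_cycle(void (*out)(unsigned char), unsigned long cycle)
{
	int phase;

	for (phase = 0; phase < 4; phase++)
		out((cycle >> (phase * 8)) & 0xFF);	/* one phase per byte */
}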
Moreover, as there are no easy way to reset, we need a _manoeuvre_ here. (It even lacks a reset pin, so pushing the RESET button on the PC doesn't help!) */ fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_INIT); for (i = 0; i < 10; i++) fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_START); fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_STOP); DELAY(10000); #endif /* Issue a start condition. */ fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_START); /* Send seven bits of the starting address (zero, in this case) and a command bit for READ. */ val = 0x01; for (bit = 0x80; bit != 0x00; bit >>= 1) { if (val & bit) { fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_HI); } else { fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_LO); } } /* Receive an ACK bit. */ if (fe_eeprom_receive_lnx(sc, reg20)) { /* ACK was not received. EEPROM is not present (i.e., this board was not a TDK/LANX) or not working properly. */ if (bootverbose) { if_printf(sc->ifp, "no ACK received from EEPROM(LNX)\n"); } /* Clear the given buffer to indicate we could not get any info. and return. */ bzero(data, LNX_EEPROM_SIZE); goto RET; } /* Read bytes from EEPROM. */ for (n = 0; n < LNX_EEPROM_SIZE; n++) { /* Read a byte and store it into the buffer. */ val = 0x00; for (bit = 0x80; bit != 0x00; bit >>= 1) { if (fe_eeprom_receive_lnx(sc, reg20)) val |= bit; } *data++ = val; /* Acknowledge if we have to read more. */ if (n < LNX_EEPROM_SIZE - 1) { fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_LO); } } /* Issue a STOP condition, de-activating the clock line. It will be safer to keep the clock line low than to leave it high. */ fe_eeprom_cycle_lnx(sc, reg20, LNX_CYCLE_STOP); RET: fe_outb(sc, reg20, save20); #if 1 /* Report what we got. */ if (bootverbose) { data -= LNX_EEPROM_SIZE; for (i = 0; i < LNX_EEPROM_SIZE; i += 16) { if_printf(sc->ifp, "EEPROM(LNX):%3x: %16D\n", i, data + i, " "); } } #endif } void fe_init_lnx (struct fe_softc * sc) { /* Reset the 86960. Do we need this? FIXME. */ fe_outb(sc, 0x12, 0x06); DELAY(100); fe_outb(sc, 0x12, 0x07); DELAY(100); /* Setup IRQ control register on the ASIC. */ fe_outb(sc, 0x14, sc->priv_info); } /* * Ungermann-Bass boards support routine. */ void fe_init_ubn (struct fe_softc * sc) { /* Do we need this? FIXME. */ fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_RBS_BMPR | FE_D7_POWER_UP); fe_outb(sc, 0x18, 0x00); DELAY(200); /* Setup IRQ control register on the ASIC. */ fe_outb(sc, 0x14, sc->priv_info); } /* * Install interface into kernel networking data structures */ int fe_attach (device_t dev) { struct fe_softc *sc = device_get_softc(dev); struct ifnet *ifp; int flags = device_get_flags(dev); int b, error; ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not ifalloc\n"); fe_release_resource(dev); return (ENOSPC); } mtx_init(&sc->lock, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->timer, &sc->lock, 0); /* * Initialize ifnet structure */ ifp->if_softc = sc; if_initname(sc->ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_start = fe_start; ifp->if_ioctl = fe_ioctl; ifp->if_init = fe_init; ifp->if_linkmib = &sc->mibdata; ifp->if_linkmiblen = sizeof (sc->mibdata); #if 0 /* I'm not sure... */ sc->mibdata.dot3Compliance = DOT3COMPLIANCE_COLLS; #endif /* * Set fixed interface flags. */ ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); #if FE_SINGLE_TRANSMISSION /* Override txb config to allocate minimum. 
*/ sc->proto_dlcr6 &= ~FE_D6_TXBSIZ; sc->proto_dlcr6 |= FE_D6_TXBSIZ_2x2KB; #endif /* Modify hardware config if it is requested. */ if (flags & FE_FLAGS_OVERRIDE_DLCR6) sc->proto_dlcr6 = flags & FE_FLAGS_DLCR6_VALUE; /* Find TX buffer size, based on the hardware dependent proto. */ switch (sc->proto_dlcr6 & FE_D6_TXBSIZ) { case FE_D6_TXBSIZ_2x2KB: sc->txb_size = 2048; break; case FE_D6_TXBSIZ_2x4KB: sc->txb_size = 4096; break; case FE_D6_TXBSIZ_2x8KB: sc->txb_size = 8192; break; default: /* Oops, we can't work with single buffer configuration. */ if (bootverbose) { if_printf(sc->ifp, "strange TXBSIZ config; fixing\n"); } sc->proto_dlcr6 &= ~FE_D6_TXBSIZ; sc->proto_dlcr6 |= FE_D6_TXBSIZ_2x2KB; sc->txb_size = 2048; break; } /* Initialize the if_media interface. */ ifmedia_init(&sc->media, 0, fe_medchange, fe_medstat); for (b = 0; bit2media[b] != 0; b++) { if (sc->mbitmap & (1 << b)) { ifmedia_add(&sc->media, bit2media[b], 0, NULL); } } for (b = 0; bit2media[b] != 0; b++) { if (sc->defmedia & (1 << b)) { ifmedia_set(&sc->media, bit2media[b]); break; } } #if 0 /* Turned off; this is called later, when the interface comes up. */ fe_medchange(sc); #endif /* Attach and stop the interface. */ FE_LOCK(sc); fe_stop(sc); FE_UNLOCK(sc); ether_ifattach(sc->ifp, sc->enaddr); error = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_NET | INTR_MPSAFE, NULL, fe_intr, sc, &sc->irq_handle); if (error) { ether_ifdetach(ifp); mtx_destroy(&sc->lock); if_free(ifp); fe_release_resource(dev); return ENXIO; } /* Print additional info when attached. */ device_printf(dev, "type %s%s\n", sc->typestr, (sc->proto_dlcr4 & FE_D4_DSC) ? ", full duplex" : ""); if (bootverbose) { int buf, txb, bbw, sbw, ram; buf = txb = bbw = sbw = ram = -1; switch ( sc->proto_dlcr6 & FE_D6_BUFSIZ ) { case FE_D6_BUFSIZ_8KB: buf = 8; break; case FE_D6_BUFSIZ_16KB: buf = 16; break; case FE_D6_BUFSIZ_32KB: buf = 32; break; case FE_D6_BUFSIZ_64KB: buf = 64; break; } switch ( sc->proto_dlcr6 & FE_D6_TXBSIZ ) { case FE_D6_TXBSIZ_2x2KB: txb = 2; break; case FE_D6_TXBSIZ_2x4KB: txb = 4; break; case FE_D6_TXBSIZ_2x8KB: txb = 8; break; } switch ( sc->proto_dlcr6 & FE_D6_BBW ) { case FE_D6_BBW_BYTE: bbw = 8; break; case FE_D6_BBW_WORD: bbw = 16; break; } switch ( sc->proto_dlcr6 & FE_D6_SBW ) { case FE_D6_SBW_BYTE: sbw = 8; break; case FE_D6_SBW_WORD: sbw = 16; break; } switch ( sc->proto_dlcr6 & FE_D6_SRAM ) { case FE_D6_SRAM_100ns: ram = 100; break; case FE_D6_SRAM_150ns: ram = 150; break; } device_printf(dev, "SRAM %dKB %dbit %dns, TXB %dKBx2, %dbit I/O\n", buf, bbw, ram, txb, sbw); } if (sc->stability & UNSTABLE_IRQ) device_printf(dev, "warning: IRQ number may be incorrect\n"); if (sc->stability & UNSTABLE_MAC) device_printf(dev, "warning: above MAC address may be incorrect\n"); if (sc->stability & UNSTABLE_TYPE) device_printf(dev, "warning: hardware type was not validated\n"); + gone_by_fcp101_dev(dev); + return 0; } int fe_alloc_port(device_t dev, int size) { struct fe_softc *sc = device_get_softc(dev); struct resource *res; int rid; rid = 0; res = bus_alloc_resource_anywhere(dev, SYS_RES_IOPORT, &rid, size, RF_ACTIVE); if (res) { sc->port_used = size; sc->port_res = res; return (0); } return (ENOENT); } int fe_alloc_irq(device_t dev, int flags) { struct fe_softc *sc = device_get_softc(dev); struct resource *res; int rid; rid = 0; res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE | flags); if (res) { sc->irq_res = res; return (0); } return (ENOENT); } void fe_release_resource(device_t dev) { struct fe_softc *sc = device_get_softc(dev); if
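/*
 * [Editorial sketch.]  fe_attach() above registers every medium whose bit
 * is set in sc->mbitmap and then picks the first default out of
 * sc->defmedia.  The same bitmap-driven if_media pattern in isolation;
 * kernel context is assumed, ifmedia_add()/ifmedia_set() are the real
 * net/if_media.h interfaces, and 'table' stands in for bit2media:
 */
static void
register_media_from_bitmap(struct ifmedia *ifm, const int *table,
    unsigned bitmap, unsigned defbits)
{
	int b;

	for (b = 0; table[b] != 0; b++)
		if (bitmap & (1U << b))
			ifmedia_add(ifm, table[b], 0, NULL);
	for (b = 0; table[b] != 0; b++)
		if (defbits & (1U << b)) {
			ifmedia_set(ifm, table[b]);	/* first hit wins */
			break;
		}
}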
(sc->port_res) { bus_release_resource(dev, SYS_RES_IOPORT, 0, sc->port_res); sc->port_res = NULL; } if (sc->irq_res) { bus_release_resource(dev, SYS_RES_IRQ, 0, sc->irq_res); sc->irq_res = NULL; } } /* * Reset interface, after some (hardware) trouble is detected. */ static void fe_reset (struct fe_softc *sc) { /* Record how many packets are lost by this accident. */ if_inc_counter(sc->ifp, IFCOUNTER_OERRORS, sc->txb_sched + sc->txb_count); sc->mibdata.dot3StatsInternalMacTransmitErrors++; /* Put the interface into a known initial state. */ fe_stop(sc); if (sc->ifp->if_flags & IFF_UP) fe_init_locked(sc); } /* * Stop everything on the interface. * * All buffered packets, both transmitting and receiving, * if any, will be lost by stopping the interface. */ void fe_stop (struct fe_softc *sc) { FE_ASSERT_LOCKED(sc); /* Disable interrupts. */ fe_outb(sc, FE_DLCR2, 0x00); fe_outb(sc, FE_DLCR3, 0x00); /* Stop interface hardware. */ DELAY(200); fe_outb(sc, FE_DLCR6, sc->proto_dlcr6 | FE_D6_DLC_DISABLE); DELAY(200); /* Clear all interrupt status. */ fe_outb(sc, FE_DLCR0, 0xFF); fe_outb(sc, FE_DLCR1, 0xFF); /* Put the chip in stand-by mode. */ DELAY(200); fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_POWER_DOWN); DELAY(200); /* Reset transmitter variables and interface flags. */ sc->ifp->if_drv_flags &= ~(IFF_DRV_OACTIVE | IFF_DRV_RUNNING); sc->tx_timeout = 0; callout_stop(&sc->timer); sc->txb_free = sc->txb_size; sc->txb_count = 0; sc->txb_sched = 0; /* MAR loading can be delayed. */ sc->filter_change = 0; /* Call a device-specific hook. */ if (sc->stop) sc->stop(sc); } /* * Device timeout/watchdog routine. Entered if the device neglects to * generate an interrupt after a transmit has been started on it. */ static void fe_watchdog (void *arg) { struct fe_softc *sc = arg; FE_ASSERT_LOCKED(sc); if (sc->tx_timeout && --sc->tx_timeout == 0) { struct ifnet *ifp = sc->ifp; /* A "debug" message. */ if_printf(ifp, "transmission timeout (%d+%d)%s\n", sc->txb_sched, sc->txb_count, (ifp->if_flags & IFF_UP) ? "" : " when down"); if (ifp->if_get_counter(ifp, IFCOUNTER_OPACKETS) == 0 && ifp->if_get_counter(ifp, IFCOUNTER_IPACKETS) == 0) if_printf(ifp, "wrong IRQ setting in config?\n"); fe_reset(sc); } callout_reset(&sc->timer, hz, fe_watchdog, sc); } /* * Initialize device. */ static void fe_init (void * xsc) { struct fe_softc *sc = xsc; FE_LOCK(sc); fe_init_locked(sc); FE_UNLOCK(sc); } static void fe_init_locked (struct fe_softc *sc) { /* Start initializing 86960. */ /* Call a hook before we start initializing the chip. */ if (sc->init) sc->init(sc); /* * Make sure to disable the chip, also. * This may also help re-programming the chip after * hot insertion of PCMCIAs. */ DELAY(200); fe_outb(sc, FE_DLCR6, sc->proto_dlcr6 | FE_D6_DLC_DISABLE); DELAY(200); /* Power up the chip and select register bank for DLCRs. */ DELAY(200); fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_RBS_DLCR | FE_D7_POWER_UP); DELAY(200); /* Feed the station address. */ fe_outblk(sc, FE_DLCR8, IF_LLADDR(sc->ifp), ETHER_ADDR_LEN); /* Clear multicast address filter to receive nothing. */ fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_RBS_MAR | FE_D7_POWER_UP); fe_outblk(sc, FE_MAR8, fe_filter_nothing.data, FE_FILTER_LEN); /* Select the BMPR bank for runtime register access. */ fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_RBS_BMPR | FE_D7_POWER_UP); /* Initialize registers. */ fe_outb(sc, FE_DLCR0, 0xFF); /* Clear all bits. */ fe_outb(sc, FE_DLCR1, 0xFF); /* ditto.
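/*
 * [Editorial sketch.]  fe_watchdog() above is the standard FreeBSD
 * callout-based TX watchdog: a per-softc counter is armed when a
 * transmit starts, decremented once a second, and a reset is forced when
 * it reaches zero; the callout always re-arms itself.  Skeleton of the
 * pattern with a hypothetical, pared-down softc:
 */
struct my_softc {
	struct callout	timer;
	int		tx_timeout;	/* seconds until we declare a hang */
};

static void my_reset(struct my_softc *);	/* assumed to exist */

static void
tx_watchdog(void *arg)
{
	struct my_softc *sc = arg;

	/* Armed by the transmit path; zero means nothing is outstanding. */
	if (sc->tx_timeout > 0 && --sc->tx_timeout == 0)
		my_reset(sc);		/* the board missed its TX interrupt */
	callout_reset(&sc->timer, hz, tx_watchdog, sc);	/* re-arm, 1 second */
}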
*/ fe_outb(sc, FE_DLCR2, 0x00); fe_outb(sc, FE_DLCR3, 0x00); fe_outb(sc, FE_DLCR4, sc->proto_dlcr4); fe_outb(sc, FE_DLCR5, sc->proto_dlcr5); fe_outb(sc, FE_BMPR10, 0x00); fe_outb(sc, FE_BMPR11, FE_B11_CTRL_SKIP | FE_B11_MODE1); fe_outb(sc, FE_BMPR12, 0x00); fe_outb(sc, FE_BMPR13, sc->proto_bmpr13); fe_outb(sc, FE_BMPR14, 0x00); fe_outb(sc, FE_BMPR15, 0x00); /* Enable interrupts. */ fe_outb(sc, FE_DLCR2, FE_TMASK); fe_outb(sc, FE_DLCR3, FE_RMASK); /* Select requested media, just before enabling DLC. */ if (sc->msel) sc->msel(sc); /* Enable transmitter and receiver. */ DELAY(200); fe_outb(sc, FE_DLCR6, sc->proto_dlcr6 | FE_D6_DLC_ENABLE); DELAY(200); #ifdef DIAGNOSTIC /* * Make sure to empty the receive buffer. * * This may be redundant, but *if* the receive buffer were full * at this point, then the driver would hang. I have experienced * some strange hang-up just after UP. I hope the following * code solves the problem. * * I have changed the order of hardware initialization. * I think the receive buffer cannot have any packets at this * point in this version. The following code *must* be * redundant now. FIXME. * * I've heard a rumor that on some PC Card implementation of * 8696x, the receive buffer can have some data at this point. * The following message helps discover the fact. FIXME. */ if (!(fe_inb(sc, FE_DLCR5) & FE_D5_BUFEMP)) { if_printf(sc->ifp, "receive buffer has some data after reset\n"); fe_emptybuffer(sc); } /* Do we need this here? Actually, no. I must be paranoid. */ fe_outb(sc, FE_DLCR0, 0xFF); /* Clear all bits. */ fe_outb(sc, FE_DLCR1, 0xFF); /* ditto. */ #endif /* Set 'running' flag, because we are now running. */ sc->ifp->if_drv_flags |= IFF_DRV_RUNNING; callout_reset(&sc->timer, hz, fe_watchdog, sc); /* * At this point, the interface is running properly, * except that it receives *no* packets. We then call * fe_setmode() to tell the chip which packets are to be * received, based on the if_flags and multicast group * list. It completes the initialization process. */ fe_setmode(sc); #if 0 /* ...and attempt to start output queued packets. */ /* TURNED OFF, because the semi-auto media prober wants to UP the interface keeping it idle. The upper layer will soon start the interface anyway, and there is no significant delay. */ fe_start_locked(sc->ifp); #endif } /* * This routine actually starts the transmission on the interface */ static void fe_xmit (struct fe_softc *sc) { /* * Set a timer just in case we never hear from the board again. * We use a longer timeout for multiple packet transmission. * I'm not sure this timer value is appropriate. FIXME. */ sc->tx_timeout = 1 + sc->txb_count; /* Update txb variables. */ sc->txb_sched = sc->txb_count; sc->txb_count = 0; sc->txb_free = sc->txb_size; sc->tx_excolls = 0; /* Start transmitter, passing packets in TX buffer. */ fe_outb(sc, FE_BMPR10, sc->txb_sched | FE_B10_START); } /* * Start output on interface. * We make one assumption here: * 1) that the IFF_DRV_OACTIVE flag is checked before this code is called * (i.e. that the output part of the interface is idle) */ static void fe_start (struct ifnet *ifp) { struct fe_softc *sc = ifp->if_softc; FE_LOCK(sc); fe_start_locked(ifp); FE_UNLOCK(sc); } static void fe_start_locked (struct ifnet *ifp) { struct fe_softc *sc = ifp->if_softc; struct mbuf *m; #ifdef DIAGNOSTIC /* Just a sanity check. */ if ((sc->txb_count == 0) != (sc->txb_free == sc->txb_size)) { /* * Txb_count and txb_free work together to manage the * transmission buffer.
Txb_count keeps track of the * used portion of the buffer, while txb_free tracks the unused * portion. So, as long as the driver runs properly, * txb_count is zero if and only if txb_free is the same * as txb_size (which represents the whole buffer). */ if_printf(ifp, "inconsistent txb variables (%d, %d)\n", sc->txb_count, sc->txb_free); /* * So, what should I do, then? * * We now know txb_count and txb_free contradict. We * cannot, however, tell which is wrong. Moreover, * we cannot peek at the 86960 transmission buffer or * reset the transmission buffer. (In fact, we can * reset the entire interface. I don't want to do it.) * * If txb_count is incorrect, leaving it as-is will cause * sending of garbage after the next interrupt. We have to * avoid it. Hence, we reset the txb_count here. If * txb_free was incorrect, resetting txb_count just loses * some packets. We can live with it. */ sc->txb_count = 0; } #endif /* * First, see if there are buffered packets and an idle * transmitter - should never happen at this point. */ if ((sc->txb_count > 0) && (sc->txb_sched == 0)) { if_printf(ifp, "transmitter idle with %d buffered packets\n", sc->txb_count); fe_xmit(sc); } /* * Stop accepting more transmission packets temporarily, when * a filter change request is delayed. Updating the MARs on * 86960 flushes the transmission buffer, so it is delayed * until all buffered transmission packets have been sent * out. */ if (sc->filter_change) { /* * Filter change request is delayed only when the DLC is * working. The DLC raises an interrupt soon after finishing * the work. */ goto indicate_active; } for (;;) { /* * See if there is room to put another packet in the buffer. * We *could* do a better job by peeking the send queue to * know the length of the next packet. Current version just * tests against the worst case (i.e., longest packet). FIXME. * * When adding the packet-peek feature, don't forget to add a * test on txb_count against QUEUEING_MAX. * There is a small chance that the packet count exceeds * the limit. Assume transmission buffer is 8KB (2x8KB * configuration) and an application sends a bunch of small * (i.e., minimum packet sized) packets rapidly. An 8KB * buffer can hold 130 blocks of 62 bytes each... */ if (sc->txb_free < ETHER_MAX_LEN - ETHER_CRC_LEN + FE_DATA_LEN_LEN) { /* No room. */ goto indicate_active; } #if FE_SINGLE_TRANSMISSION if (sc->txb_count > 0) { /* Just one packet per transmission buffer. */ goto indicate_active; } #endif /* * Get the next mbuf chain for a packet to send. */ IF_DEQUEUE(&sc->ifp->if_snd, m); if (m == NULL) { /* No more packets to send. */ goto indicate_inactive; } /* * Copy the mbuf chain into the transmission buffer. * txb_* variables are updated as necessary. */ fe_write_mbufs(sc, m); /* Start transmitter if it's idle. */ if ((sc->txb_count > 0) && (sc->txb_sched == 0)) fe_xmit(sc); /* * Tap off here if there is a bpf listener, * and the device is *not* in promiscuous mode. * (86960 receives self-generated packets if * and only if it is in "receive everything" * mode.) */ if (!(sc->ifp->if_flags & IFF_PROMISC)) BPF_MTAP(sc->ifp, m); m_freem(m); } indicate_inactive: /* * We are using the !OACTIVE flag to indicate to * the outside world that we can accept an * additional packet rather than that the * transmitter is _actually_ active. Indeed, the * transmitter may be active, but if we haven't * filled all the buffers with data then we still * want to accept more.
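/*
 * [Editorial sketch.]  The DIAGNOSTIC block above checks the fe buffer
 * bookkeeping invariant: txb_count (packets buffered) is zero exactly
 * when txb_free equals txb_size.  Stated as a standalone predicate:
 */
static int
txb_state_consistent(int txb_count, int txb_free, int txb_size)
{
	/* "buffer holds no packets" must equal "all bytes are free" */
	return ((txb_count == 0) == (txb_free == txb_size));
}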
*/ sc->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; return; indicate_active: /* * The transmitter is active, and there is no room for * more outgoing packets in the transmission buffer. */ sc->ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } /* * Drop (skip) a packet from receive buffer in 86960 memory. */ static void fe_droppacket (struct fe_softc * sc, int len) { int i; /* * 86960 manual says that we have to read 8 bytes from the buffer * before skipping a packet and that there must be more than 8 bytes * remaining in the buffer when issuing a skip command. * Remember, we have already read 4 bytes before coming here. */ if (len > 12) { /* Read 4 more bytes, and skip the rest of the packet. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { (void) fe_inb(sc, FE_BMPR8); (void) fe_inb(sc, FE_BMPR8); (void) fe_inb(sc, FE_BMPR8); (void) fe_inb(sc, FE_BMPR8); } else { (void) fe_inw(sc, FE_BMPR8); (void) fe_inw(sc, FE_BMPR8); } fe_outb(sc, FE_BMPR14, FE_B14_SKIP); } else { /* We should not come here unless receiving RUNTs. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { for (i = 0; i < len; i++) (void) fe_inb(sc, FE_BMPR8); } else { for (i = 0; i < len; i += 2) (void) fe_inw(sc, FE_BMPR8); } } } #ifdef DIAGNOSTIC /* * Empty receiving buffer. */ static void fe_emptybuffer (struct fe_softc * sc) { int i; u_char saved_dlcr5; #ifdef FE_DEBUG if_printf(sc->ifp, "emptying receive buffer\n"); #endif /* * Stop receiving packets, temporarily. */ saved_dlcr5 = fe_inb(sc, FE_DLCR5); fe_outb(sc, FE_DLCR5, sc->proto_dlcr5); DELAY(1300); /* * When we come here, the receive buffer management may * have been broken. So, we cannot use the skip operation. * Just discard everything in the buffer. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { for (i = 0; i < 65536; i++) { if (fe_inb(sc, FE_DLCR5) & FE_D5_BUFEMP) break; (void) fe_inb(sc, FE_BMPR8); } } else { for (i = 0; i < 65536; i += 2) { if (fe_inb(sc, FE_DLCR5) & FE_D5_BUFEMP) break; (void) fe_inw(sc, FE_BMPR8); } } /* * Double check. (FE_D5_BUFEMP set means the buffer is empty, * so failure is the bit still being clear here.) */ if (!(fe_inb(sc, FE_DLCR5) & FE_D5_BUFEMP)) { if_printf(sc->ifp, "could not empty receive buffer\n"); /* Hmm. What should I do if this happens? FIXME. */ } /* * Restart receiving packets. */ fe_outb(sc, FE_DLCR5, saved_dlcr5); } #endif /* * Transmission interrupt handler * The control flow of this function looks silly. FIXME. */ static void fe_tint (struct fe_softc * sc, u_char tstat) { int left; int col; /* * Handle "excessive collision" interrupt. */ if (tstat & FE_D0_COLL16) { /* * Find how many packets (including this collided one) * are left unsent in the transmission buffer. */ left = fe_inb(sc, FE_BMPR10); if_printf(sc->ifp, "excessive collision (%d/%d)\n", left, sc->txb_sched); /* * Clear the collision flag (in 86960) here * to avoid confusing statistics. */ fe_outb(sc, FE_DLCR0, FE_D0_COLLID); /* * Restart transmitter, skipping the * collided packet. * * We *must* skip the packet to keep the network running * properly. Excessive collision error is an * indication of network overload. If we * tried sending the same packet after excessive * collision, the network would be filled with * out-of-time packets. Packets belonging * to reliable transport (such as TCP) are resent * by some upper layer. */ fe_outb(sc, FE_BMPR11, FE_B11_CTRL_SKIP | FE_B11_MODE1); /* Update statistics. */ sc->tx_excolls++; } /* * Handle "transmission complete" interrupt. */ if (tstat & FE_D0_TXDONE) { /* * Add in total number of collisions on last * transmission. We also clear "collision occurred" flag * here.
* * 86960 has a design flaw on collision count on multiple * packet transmission. When we send two or more packets * with one start command (that's what we do when the * transmission queue is crowded), 86960 informs us of the number * of collisions that occurred on the last packet of the * transmission only. The number of collisions on previous * packets is lost. I am told that this fact is clearly * stated in the Fujitsu document. * * I decided not to take it too seriously. Collision * count is not so important, anyway. Any comments? FIXME. */ if (fe_inb(sc, FE_DLCR0) & FE_D0_COLLID) { /* Clear collision flag. */ fe_outb(sc, FE_DLCR0, FE_D0_COLLID); /* Extract collision count from 86960. */ col = fe_inb(sc, FE_DLCR4); col = (col & FE_D4_COL) >> FE_D4_COL_SHIFT; if (col == 0) { /* * Status register indicates collisions, * while the collision count is zero. * This can happen after multiple packet * transmission, indicating that one or more * previous packet(s) had collided. * * Since the accurate number of collisions * has been lost, we just guess it as 1; * Am I too optimistic? FIXME. */ col = 1; } if_inc_counter(sc->ifp, IFCOUNTER_COLLISIONS, col); if (col == 1) sc->mibdata.dot3StatsSingleCollisionFrames++; else sc->mibdata.dot3StatsMultipleCollisionFrames++; sc->mibdata.dot3StatsCollFrequencies[col-1]++; } /* * Update transmission statistics. * Be sure to reflect number of excessive collisions. */ col = sc->tx_excolls; if_inc_counter(sc->ifp, IFCOUNTER_OPACKETS, sc->txb_sched - col); if_inc_counter(sc->ifp, IFCOUNTER_OERRORS, col); if_inc_counter(sc->ifp, IFCOUNTER_COLLISIONS, col * 16); sc->mibdata.dot3StatsExcessiveCollisions += col; sc->mibdata.dot3StatsCollFrequencies[15] += col; sc->txb_sched = 0; /* * The transmitter is no longer active. * Reset output active flag and watchdog timer. */ sc->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->tx_timeout = 0; /* * If more data is ready to transmit in the buffer, start * transmitting it. Otherwise keep transmitter idle, * even if more data is queued. This gives the receive * process a slight priority. */ if (sc->txb_count > 0) fe_xmit(sc); } } /* * Ethernet interface receiver interrupt. */ static void fe_rint (struct fe_softc * sc, u_char rstat) { u_short len; u_char status; int i; /* * Update statistics if this interrupt is caused by an error. * Note that, when the system was not sufficiently fast, the * receive interrupt might not be acknowledged immediately. If * one or more erroneous frames were received before this routine * was scheduled, they are ignored, and the following error stats * will be lower than the real values. */ if (rstat & (FE_D1_OVRFLO | FE_D1_CRCERR | FE_D1_ALGERR | FE_D1_SRTPKT)) { if (rstat & FE_D1_OVRFLO) sc->mibdata.dot3StatsInternalMacReceiveErrors++; if (rstat & FE_D1_CRCERR) sc->mibdata.dot3StatsFCSErrors++; if (rstat & FE_D1_ALGERR) sc->mibdata.dot3StatsAlignmentErrors++; #if 0 /* The reference MAC receiver defined in 802.3 silently ignores short frames (RUNTs) without notifying the upper layer. RFC 1650 (dot3 MIB) is based on 802.3, and it has no stats entry for RUNTs... */ if (rstat & FE_D1_SRTPKT) sc->mibdata.dot3StatsFrameTooShorts++; /* :-) */ #endif if_inc_counter(sc->ifp, IFCOUNTER_IERRORS, 1); } /* * MB86960 has a flag indicating "receive queue empty." * We just loop, checking the flag, to pull out all received * packets. * * We limit the number of iterations to avoid an infinite loop. * The upper bound is set to an unrealistically high value.
*/ for (i = 0; i < FE_MAX_RECV_COUNT * 2; i++) { /* Stop the iteration if 86960 indicates no packets. */ if (fe_inb(sc, FE_DLCR5) & FE_D5_BUFEMP) return; /* * Extract a receive status byte. * As our 86960 is in 16 bit bus access mode, we have to * use inw() to get the status byte. The significant * value is returned in lower 8 bits. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { status = fe_inb(sc, FE_BMPR8); (void) fe_inb(sc, FE_BMPR8); } else { status = (u_char) fe_inw(sc, FE_BMPR8); } /* * Extract the packet length. * It is the sum of the header (14 bytes) and the payload. * CRC has been stripped off by the 86960. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { len = fe_inb(sc, FE_BMPR8); len |= (fe_inb(sc, FE_BMPR8) << 8); } else { len = fe_inw(sc, FE_BMPR8); } /* * As our 86960 is programmed to ignore errored frames, * we must not see any error indication in the * receive buffer. So, any error condition is a * serious error, e.g., the receive buffer pointers * being out of sync. */ if ((status & 0xF0) != 0x20 || len > ETHER_MAX_LEN - ETHER_CRC_LEN || len < ETHER_MIN_LEN - ETHER_CRC_LEN) { if_printf(sc->ifp, "RX buffer out-of-sync\n"); if_inc_counter(sc->ifp, IFCOUNTER_IERRORS, 1); sc->mibdata.dot3StatsInternalMacReceiveErrors++; fe_reset(sc); return; } /* * Go get a packet. */ if (fe_get_packet(sc, len) < 0) { /* * Negative return from fe_get_packet() * indicates no available mbuf. We stop * receiving packets, even if there are more * in the buffer. We hope we can get more * mbufs next time. */ if_inc_counter(sc->ifp, IFCOUNTER_IERRORS, 1); sc->mibdata.dot3StatsMissedFrames++; fe_droppacket(sc, len); return; } /* Successfully received a packet. Update stats. */ if_inc_counter(sc->ifp, IFCOUNTER_IPACKETS, 1); } /* The maximum number of frames has been received. Something strange is happening here... */ if_printf(sc->ifp, "unusual receive flood\n"); sc->mibdata.dot3StatsInternalMacReceiveErrors++; fe_reset(sc); } /* * Ethernet interface interrupt processor */ static void fe_intr (void *arg) { struct fe_softc *sc = arg; u_char tstat, rstat; int loop_count = FE_MAX_LOOP; FE_LOCK(sc); /* Loop until there are no more new interrupt conditions. */ while (loop_count-- > 0) { /* * Get interrupt conditions, masking unneeded flags. */ tstat = fe_inb(sc, FE_DLCR0) & FE_TMASK; rstat = fe_inb(sc, FE_DLCR1) & FE_RMASK; if (tstat == 0 && rstat == 0) { FE_UNLOCK(sc); return; } /* * Reset the conditions we are acknowledging. */ fe_outb(sc, FE_DLCR0, tstat); fe_outb(sc, FE_DLCR1, rstat); /* * Handle transmitter interrupts. */ if (tstat) fe_tint(sc, tstat); /* * Handle receiver interrupts */ if (rstat) fe_rint(sc, rstat); /* * Update the multicast address filter if it is * needed and possible. We do it now, because * we can make sure the transmission buffer is empty, * and there is a good chance that the receive queue * is empty. It will minimize the possibility of * packet loss. */ if (sc->filter_change && sc->txb_count == 0 && sc->txb_sched == 0) { fe_loadmar(sc); sc->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; } /* * If it looks like the transmitter can take more data, * attempt to start output on the interface. This is done * after handling the receiver interrupt to give the * receive operation priority. * * BTW, I'm not sure in what case OACTIVE is on at * this point. Is the following test redundant? * * No. This routine polls for both transmitter and * receiver interrupts. 86960 can raise a receiver * interrupt when the transmission buffer is full.
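/*
 * [Editorial sketch.]  fe_intr() above uses the classic bounded
 * read/acknowledge/dispatch interrupt loop: read the status registers,
 * write the same bits back to acknowledge them, service TX before RX,
 * and bail out after a fixed number of rounds so a wedged card (e.g. an
 * ejected PC Card) cannot spin the handler forever.  Generic shape of
 * the idiom; every name below is a hypothetical stand-in:
 */
struct chip;				/* hypothetical device handle */
extern unsigned chip_read(struct chip *, int);
extern void chip_write(struct chip *, int, unsigned);
extern void handle_tx(struct chip *, unsigned);
extern void handle_rx(struct chip *, unsigned);
enum { REG_TSTAT, REG_RSTAT, TMASK = 0x83, RMASK = 0x9F };	/* made up */

static void
isr_loop(struct chip *c, int max_rounds)
{
	while (max_rounds-- > 0) {
		unsigned tstat = chip_read(c, REG_TSTAT) & TMASK;
		unsigned rstat = chip_read(c, REG_RSTAT) & RMASK;

		if (tstat == 0 && rstat == 0)
			return;				/* nothing pending */
		chip_write(c, REG_TSTAT, tstat);	/* ack what we saw */
		chip_write(c, REG_RSTAT, rstat);
		if (tstat)
			handle_tx(c, tstat);
		if (rstat)
			handle_rx(c, rstat);
	}
	/* Falling out here means an interrupt storm; the caller logs it. */
}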
*/ if ((sc->ifp->if_drv_flags & IFF_DRV_OACTIVE) == 0) fe_start_locked(sc->ifp); } FE_UNLOCK(sc); if_printf(sc->ifp, "too many loops\n"); } /* * Process an ioctl request. This code needs some work - it looks * pretty ugly. */ static int fe_ioctl (struct ifnet * ifp, u_long command, caddr_t data) { struct fe_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *)data; int error = 0; switch (command) { case SIOCSIFFLAGS: /* * Switch interface state between "running" and * "stopped", reflecting the UP flag. */ FE_LOCK(sc); if (sc->ifp->if_flags & IFF_UP) { if ((sc->ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) fe_init_locked(sc); } else { if ((sc->ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) fe_stop(sc); } /* * Promiscuous and/or multicast flags may have changed, * so reprogram the multicast filter and/or receive mode. */ fe_setmode(sc); FE_UNLOCK(sc); /* Done. */ break; case SIOCADDMULTI: case SIOCDELMULTI: /* * Multicast list has changed; set the hardware filter * accordingly. */ FE_LOCK(sc); fe_setmode(sc); FE_UNLOCK(sc); break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: /* Let if_media handle these commands and call us back. */ error = ifmedia_ioctl(ifp, ifr, &sc->media, command); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } /* * Retrieve packet from receive buffer and send to the next level up via * ether_input(). * Returns 0 on success, -1 on error (i.e., mbuf allocation failure). */ static int fe_get_packet (struct fe_softc * sc, u_short len) { struct ifnet *ifp = sc->ifp; struct ether_header *eh; struct mbuf *m; FE_ASSERT_LOCKED(sc); /* * NFS wants the data to be aligned on a word (4 byte) * boundary. The Ethernet header is 14 bytes long. There is a * 2-byte gap. */ #define NFS_MAGIC_OFFSET 2 /* * This function assumes that an Ethernet packet fits in an * mbuf (with a cluster attached when necessary.) On FreeBSD * 2.0 for x86, which is the primary target of this driver, an * mbuf cluster has 4096 bytes, and we are happy. On ancient * BSDs, such as vanilla 4.3 for 386, a cluster size was 1024, * however. If the following #error message is printed at * compile time, you need to rewrite this function. */ #if ( MCLBYTES < ETHER_MAX_LEN - ETHER_CRC_LEN + NFS_MAGIC_OFFSET ) #error "Too small MCLBYTES to use fe driver." #endif /* * Our strategy has one more problem. There is a policy on * mbuf cluster allocation. It says that we must have at * least MINCLSIZE (208 bytes on FreeBSD 2.0 for x86) to * allocate a cluster. For a packet of a size between * (MHLEN - 2) and (MINCLSIZE - 2), our code violates the rule... * On the other hand, the current code is short, simple, * and fast. It does nothing harmful, just wastes * some memory. Any comments? FIXME. */ /* Allocate an mbuf with packet header info. */ MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) return -1; /* Attach a cluster if this packet doesn't fit in a normal mbuf. */ if (len > MHLEN - NFS_MAGIC_OFFSET) { if (!(MCLGET(m, M_NOWAIT))) { m_freem(m); return -1; } } /* Initialize packet header info. */ m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = len; /* Set the length of this packet. */ m->m_len = len; /* The following silliness is to make NFS happy */ m->m_data += NFS_MAGIC_OFFSET; /* Get (actually just point to) the header part. */ eh = mtod(m, struct ether_header *); /* Get a packet. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { fe_insb(sc, FE_BMPR8, (u_int8_t *)eh, len); } else { fe_insw(sc, FE_BMPR8, (u_int16_t *)eh, (len + 1) >> 1); } /* Feed the packet to the upper layer.
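/*
 * [Editorial sketch.]  The NFS_MAGIC_OFFSET trick above: the Ethernet
 * header is 14 bytes, so starting the frame 2 bytes into a 4-byte
 * aligned buffer lands the IP header on a 4-byte boundary.  The same
 * offset applied to a plain buffer, as a self-checking userland demo:
 */
#include <assert.h>
#include <stdint.h>

static void
align_demo(void)
{
	/* 4-byte aligned backing store standing in for the mbuf data area */
	static uint32_t backing[512];
	uint8_t *frame = (uint8_t *)backing + 2;	/* the 2-byte gap */
	uint8_t *ip_hdr = frame + 14;			/* past Ethernet header */

	assert(((uintptr_t)ip_hdr & 3) == 0);	/* IP header is 32-bit aligned */
}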
*/ FE_UNLOCK(sc); (*ifp->if_input)(ifp, m); FE_LOCK(sc); return 0; } /* * Write an mbuf chain to the transmission buffer memory using 16 bit PIO. * Returns number of bytes actually written, including length word. * * If an mbuf chain is too long for an Ethernet frame, it is not sent. * Packets shorter than Ethernet minimum are legal, and we pad them * before sending out. An exception is "partial" packets which are * shorter than mandatory Ethernet header. */ static void fe_write_mbufs (struct fe_softc *sc, struct mbuf *m) { u_short length, len; struct mbuf *mp; u_char *data; u_short savebyte; /* WARNING: Architecture dependent! */ #define NO_PENDING_BYTE 0xFFFF static u_char padding [ETHER_MIN_LEN - ETHER_CRC_LEN - ETHER_HDR_LEN]; #ifdef DIAGNOSTIC /* First, count up the total number of bytes to copy */ length = 0; for (mp = m; mp != NULL; mp = mp->m_next) length += mp->m_len; /* Check if this matches the one in the packet header. */ if (length != m->m_pkthdr.len) { if_printf(sc->ifp, "packet length mismatch? (%d/%d)\n", length, m->m_pkthdr.len); } #else /* Just use the length value in the packet header. */ length = m->m_pkthdr.len; #endif #ifdef DIAGNOSTIC /* * Should never send big packets. If such a packet is passed, * it should be a bug of upper layer. We just ignore it. * ... Partial (too short) packets, neither. */ if (length < ETHER_HDR_LEN || length > ETHER_MAX_LEN - ETHER_CRC_LEN) { if_printf(sc->ifp, "got an out-of-spec packet (%u bytes) to send\n", length); if_inc_counter(sc->ifp, IFCOUNTER_OERRORS, 1); sc->mibdata.dot3StatsInternalMacTransmitErrors++; return; } #endif /* * Put the length word for this frame. * Does 86960 accept odd length? -- Yes. * Do we need to pad the length to minimum size by ourselves? * -- Generally yes. But for (or will be) the last * packet in the transmission buffer, we can skip the * padding process. It may gain performance slightly. FIXME. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { len = max(length, ETHER_MIN_LEN - ETHER_CRC_LEN); fe_outb(sc, FE_BMPR8, len & 0x00ff); fe_outb(sc, FE_BMPR8, (len & 0xff00) >> 8); } else { fe_outw(sc, FE_BMPR8, max(length, ETHER_MIN_LEN - ETHER_CRC_LEN)); } /* * Update buffer status now. * Truncate the length up to an even number, since we use outw(). */ if ((sc->proto_dlcr6 & FE_D6_SBW) != FE_D6_SBW_BYTE) { length = (length + 1) & ~1; } sc->txb_free -= FE_DATA_LEN_LEN + max(length, ETHER_MIN_LEN - ETHER_CRC_LEN); sc->txb_count++; /* * Transfer the data from mbuf chain to the transmission buffer. * MB86960 seems to require that data be transferred as words, and * only words. So that we require some extra code to patch * over odd-length mbufs. */ if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { /* 8-bit cards are easy. */ for (mp = m; mp != NULL; mp = mp->m_next) { if (mp->m_len) fe_outsb(sc, FE_BMPR8, mtod(mp, caddr_t), mp->m_len); } } else { /* 16-bit cards are a pain. */ savebyte = NO_PENDING_BYTE; for (mp = m; mp != NULL; mp = mp->m_next) { /* Ignore empty mbuf. */ len = mp->m_len; if (len == 0) continue; /* Find the actual data to send. */ data = mtod(mp, caddr_t); /* Finish the last byte. */ if (savebyte != NO_PENDING_BYTE) { fe_outw(sc, FE_BMPR8, savebyte | (*data << 8)); data++; len--; savebyte = NO_PENDING_BYTE; } /* output contiguous words */ if (len > 1) { fe_outsw(sc, FE_BMPR8, (u_int16_t *)data, len >> 1); data += len & ~1; len &= 1; } /* Save a remaining byte, if there is one. */ if (len > 0) savebyte = *data; } /* Spit the last byte, if the length is odd. 
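/*
 * [Editorial sketch.]  fe_write_mbufs() above feeds a byte stream to a
 * word-only (16-bit) FIFO by carrying one pending byte across mbuf
 * boundaries, low byte first.  The same carry technique against plain
 * arrays, detached from the hardware:
 */
#include <stddef.h>
#include <stdint.h>

#define NO_PENDING 0xFFFFu	/* sentinel outside the 0..255 byte range */

static size_t
pack_words(uint16_t *out, const uint8_t *const *chunks,
    const size_t *lens, size_t nchunks)
{
	unsigned pending = NO_PENDING;
	size_t n, i, w = 0;

	for (n = 0; n < nchunks; n++) {
		for (i = 0; i < lens[n]; i++) {
			if (pending == NO_PENDING) {
				pending = chunks[n][i];	/* save low byte */
			} else {
				/* low byte first, matching the driver */
				out[w++] = (uint16_t)(pending |
				    (chunks[n][i] << 8));
				pending = NO_PENDING;
			}
		}
	}
	if (pending != NO_PENDING)
		out[w++] = (uint16_t)pending;	/* spit out the last odd byte */
	return (w);
}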
*/ if (savebyte != NO_PENDING_BYTE) fe_outw(sc, FE_BMPR8, savebyte); } /* Pad to the Ethernet minimum length, if the packet is too short. */ if (length < ETHER_MIN_LEN - ETHER_CRC_LEN) { if ((sc->proto_dlcr6 & FE_D6_SBW) == FE_D6_SBW_BYTE) { fe_outsb(sc, FE_BMPR8, padding, ETHER_MIN_LEN - ETHER_CRC_LEN - length); } else { fe_outsw(sc, FE_BMPR8, (u_int16_t *)padding, (ETHER_MIN_LEN - ETHER_CRC_LEN - length) >> 1); } } } /* * Compute the multicast address filter from the * list of multicast addresses we need to listen to. */ static struct fe_filter fe_mcaf ( struct fe_softc *sc ) { int index; struct fe_filter filter; struct ifmultiaddr *ifma; filter = fe_filter_nothing; if_maddr_rlock(sc->ifp); CK_STAILQ_FOREACH(ifma, &sc->ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; index = ether_crc32_le(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN) >> 26; #ifdef FE_DEBUG if_printf(sc->ifp, "hash(%6D) == %d\n", enm->enm_addrlo , ":", index); #endif filter.data[index >> 3] |= 1 << (index & 7); } if_maddr_runlock(sc->ifp); return ( filter ); } /* * Calculate a new "multicast packet filter" and put the 86960 * receiver in appropriate mode. */ static void fe_setmode (struct fe_softc *sc) { /* * If the interface is not running, we postpone the update * process for receive modes and multicast address filter * until the interface is restarted. It reduces some * complicated job on maintaining chip states. (Earlier versions * of this driver had a bug on that point...) * * To complete the trick, fe_init() calls fe_setmode() after * restarting the interface. */ if (!(sc->ifp->if_drv_flags & IFF_DRV_RUNNING)) return; /* * Promiscuous mode is handled separately. */ if (sc->ifp->if_flags & IFF_PROMISC) { /* * Program 86960 to receive all packets on the segment * including those directed to other stations. * Multicast filter stored in MARs are ignored * under this setting, so we don't need to update it. * * Promiscuous mode in FreeBSD 2 is used solely by * BPF, and BPF only listens to valid (no error) packets. * So, we ignore erroneous ones even in this mode. * (Older versions of fe driver mistook the point.) */ fe_outb(sc, FE_DLCR5, sc->proto_dlcr5 | FE_D5_AFM0 | FE_D5_AFM1); sc->filter_change = 0; return; } /* * Turn the chip to the normal (non-promiscuous) mode. */ fe_outb(sc, FE_DLCR5, sc->proto_dlcr5 | FE_D5_AFM1); /* * Find the new multicast filter value. */ if (sc->ifp->if_flags & IFF_ALLMULTI) sc->filter = fe_filter_all; else sc->filter = fe_mcaf(sc); sc->filter_change = 1; /* * We have to update the multicast filter in the 86960, A.S.A.P. * * Note that the DLC (Data Link Control unit, i.e. transmitter * and receiver) must be stopped when feeding the filter, and * DLC trashes all packets in both transmission and receive * buffers when stopped. * * To reduce the packet loss, we delay the filter update * process until buffers are empty. */ if (sc->txb_sched == 0 && sc->txb_count == 0 && !(fe_inb(sc, FE_DLCR1) & FE_D1_PKTRDY)) { /* * Buffers are (apparently) empty. Load * the new filter value into MARs now. */ fe_loadmar(sc); } else { /* * Buffers are not empty. Mark that we have to update * the MARs. The new filter will be loaded by feintr() * later. */ } } /* * Load a new multicast address filter into MARs. * * The caller must have acquired the softc lock before fe_loadmar. * This function starts the DLC upon return. So it can be called only * when the chip is working, i.e., from the driver's point of view, when * a device is RUNNING. 
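/*
 * [Editorial sketch.]  fe_mcaf() above builds the 64-bit MAR filter by
 * taking the top 6 bits of the little-endian Ethernet CRC32 of each
 * multicast address and setting the corresponding filter bit.  The
 * indexing step in isolation; ether_crc32_le() is the real kernel
 * helper the driver calls:
 */
static void
mar_set_bit(unsigned char filter[8], const unsigned char *addr)
{
	unsigned index;

	index = ether_crc32_le(addr, 6) >> 26;	/* hash into 0..63 */
	filter[index >> 3] |= 1 << (index & 7);	/* pick byte, then bit */
}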
(I mistook the point in previous versions.) */ static void fe_loadmar (struct fe_softc * sc) { /* Stop the DLC (transmitter and receiver). */ DELAY(200); fe_outb(sc, FE_DLCR6, sc->proto_dlcr6 | FE_D6_DLC_DISABLE); DELAY(200); /* Select register bank 1 for MARs. */ fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_RBS_MAR | FE_D7_POWER_UP); /* Copy filter value into the registers. */ fe_outblk(sc, FE_MAR8, sc->filter.data, FE_FILTER_LEN); /* Restore the bank selection for BMPRs (i.e., runtime registers). */ fe_outb(sc, FE_DLCR7, sc->proto_dlcr7 | FE_D7_RBS_BMPR | FE_D7_POWER_UP); /* Restart the DLC. */ DELAY(200); fe_outb(sc, FE_DLCR6, sc->proto_dlcr6 | FE_D6_DLC_ENABLE); DELAY(200); /* We have just updated the filter. */ sc->filter_change = 0; } /* Change the media selection. */ static int fe_medchange (struct ifnet *ifp) { struct fe_softc *sc = (struct fe_softc *)ifp->if_softc; #ifdef DIAGNOSTIC /* If_media should not pass any request for a media which this interface doesn't support. */ int b; for (b = 0; bit2media[b] != 0; b++) { if (bit2media[b] == sc->media.ifm_media) break; } if (((1 << b) & sc->mbitmap) == 0) { if_printf(sc->ifp, "got an unsupported media request (0x%x)\n", sc->media.ifm_media); return EINVAL; } #endif /* We don't actually change media when the interface is down. fe_init() will do the job, instead. Should we also wait until the transmission buffer being empty? Changing the media when we are sending a frame will cause two garbages on wires, one on old media and another on new. FIXME */ FE_LOCK(sc); if (sc->ifp->if_flags & IFF_UP) { if (sc->msel) sc->msel(sc); } FE_UNLOCK(sc); return 0; } /* I don't know how I can support media status callback... FIXME. */ static void fe_medstat (struct ifnet *ifp, struct ifmediareq *ifmr) { struct fe_softc *sc = ifp->if_softc; ifmr->ifm_active = sc->media.ifm_media; } Index: stable/12/sys/dev/pcn/if_pcn.c =================================================================== --- stable/12/sys/dev/pcn/if_pcn.c (revision 339734) +++ stable/12/sys/dev/pcn/if_pcn.c (revision 339735) @@ -1,1522 +1,1524 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 2000 Berkeley Software Design, Inc. * Copyright (c) 1997, 1998, 1999, 2000 * Bill Paul . All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Bill Paul. * 4. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * AMD Am79c972 fast ethernet PCI NIC driver. Datasheets are available * from http://www.amd.com. * * The AMD PCnet/PCI controllers are more advanced and functional * versions of the venerable 7990 LANCE. The PCnet/PCI chips retain * backwards compatibility with the LANCE and thus can be made * to work with older LANCE drivers. This is in fact how the * PCnet/PCI chips were supported in FreeBSD originally. The trouble * is that the PCnet/PCI devices offer several performance enhancements * which can't be exploited in LANCE compatibility mode. Chief among * these enhancements is the ability to perform PCI DMA operations * using 32-bit addressing (which eliminates the need for ISA * bounce-buffering), and special receive buffer alignment (which * allows the receive handler to pass packets to the upper protocol * layers without copying on both the x86 and alpha platforms). */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* for vtophys */ #include /* for vtophys */ #include #include #include #include #include #include #include #include #define PCN_USEIOSPACE #include MODULE_DEPEND(pcn, pci, 1, 1, 1); MODULE_DEPEND(pcn, ether, 1, 1, 1); MODULE_DEPEND(pcn, miibus, 1, 1, 1); /* "device miibus" required. See GENERIC if you get errors here. */ #include "miibus_if.h" /* * Various supported device vendors/types and their names. 
*/ static const struct pcn_type pcn_devs[] = { { PCN_VENDORID, PCN_DEVICEID_PCNET, "AMD PCnet/PCI 10/100BaseTX" }, { PCN_VENDORID, PCN_DEVICEID_HOME, "AMD PCnet/Home HomePNA" }, { 0, 0, NULL } }; static const struct pcn_chipid { u_int32_t id; const char *name; } pcn_chipid[] = { { Am79C971, "Am79C971" }, { Am79C972, "Am79C972" }, { Am79C973, "Am79C973" }, { Am79C978, "Am79C978" }, { Am79C975, "Am79C975" }, { Am79C976, "Am79C976" }, { 0, NULL }, }; static const char *pcn_chipid_name(u_int32_t); static u_int32_t pcn_chip_id(device_t); static const struct pcn_type *pcn_match(u_int16_t, u_int16_t); static u_int32_t pcn_csr_read(struct pcn_softc *, int); static u_int16_t pcn_csr_read16(struct pcn_softc *, int); static u_int16_t pcn_bcr_read16(struct pcn_softc *, int); static void pcn_csr_write(struct pcn_softc *, int, int); static u_int32_t pcn_bcr_read(struct pcn_softc *, int); static void pcn_bcr_write(struct pcn_softc *, int, int); static int pcn_probe(device_t); static int pcn_attach(device_t); static int pcn_detach(device_t); static int pcn_newbuf(struct pcn_softc *, int, struct mbuf *); static int pcn_encap(struct pcn_softc *, struct mbuf *, u_int32_t *); static void pcn_rxeof(struct pcn_softc *); static void pcn_txeof(struct pcn_softc *); static void pcn_intr(void *); static void pcn_tick(void *); static void pcn_start(struct ifnet *); static void pcn_start_locked(struct ifnet *); static int pcn_ioctl(struct ifnet *, u_long, caddr_t); static void pcn_init(void *); static void pcn_init_locked(struct pcn_softc *); static void pcn_stop(struct pcn_softc *); static void pcn_watchdog(struct pcn_softc *); static int pcn_shutdown(device_t); static int pcn_ifmedia_upd(struct ifnet *); static void pcn_ifmedia_sts(struct ifnet *, struct ifmediareq *); static int pcn_miibus_readreg(device_t, int, int); static int pcn_miibus_writereg(device_t, int, int, int); static void pcn_miibus_statchg(device_t); static void pcn_setfilt(struct ifnet *); static void pcn_setmulti(struct pcn_softc *); static void pcn_reset(struct pcn_softc *); static int pcn_list_rx_init(struct pcn_softc *); static int pcn_list_tx_init(struct pcn_softc *); #ifdef PCN_USEIOSPACE #define PCN_RES SYS_RES_IOPORT #define PCN_RID PCN_PCI_LOIO #else #define PCN_RES SYS_RES_MEMORY #define PCN_RID PCN_PCI_LOMEM #endif static device_method_t pcn_methods[] = { /* Device interface */ DEVMETHOD(device_probe, pcn_probe), DEVMETHOD(device_attach, pcn_attach), DEVMETHOD(device_detach, pcn_detach), DEVMETHOD(device_shutdown, pcn_shutdown), /* MII interface */ DEVMETHOD(miibus_readreg, pcn_miibus_readreg), DEVMETHOD(miibus_writereg, pcn_miibus_writereg), DEVMETHOD(miibus_statchg, pcn_miibus_statchg), DEVMETHOD_END }; static driver_t pcn_driver = { "pcn", pcn_methods, sizeof(struct pcn_softc) }; static devclass_t pcn_devclass; DRIVER_MODULE(pcn, pci, pcn_driver, pcn_devclass, 0, 0); MODULE_PNP_INFO("U16:vendor;U16:device", pci, pcn, pcn_devs, nitems(pcn_devs) - 1); DRIVER_MODULE(miibus, pcn, miibus_driver, miibus_devclass, 0, 0); #define PCN_CSR_SETBIT(sc, reg, x) \ pcn_csr_write(sc, reg, pcn_csr_read(sc, reg) | (x)) #define PCN_CSR_CLRBIT(sc, reg, x) \ pcn_csr_write(sc, reg, pcn_csr_read(sc, reg) & ~(x)) #define PCN_BCR_SETBIT(sc, reg, x) \ pcn_bcr_write(sc, reg, pcn_bcr_read(sc, reg) | (x)) #define PCN_BCR_CLRBIT(sc, reg, x) \ pcn_bcr_write(sc, reg, pcn_bcr_read(sc, reg) & ~(x)) static u_int32_t pcn_csr_read(sc, reg) struct pcn_softc *sc; int reg; { CSR_WRITE_4(sc, PCN_IO32_RAP, reg); return(CSR_READ_4(sc, PCN_IO32_RDP)); } static u_int16_t 
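/*
 * All CSR and BCR access funnels through the pattern above: write the
 * register number to the Register Address Port (RAP), then move the
 * data through the matching data port (RDP for CSRs, BDP for BCRs).
 * A minimal sketch of the idiom, assuming only the CSR_* accessor
 * macros used above:
 *
 *	static u_int32_t
 *	indirect_read(struct pcn_softc *sc, int dataport, int reg)
 *	{
 *		CSR_WRITE_4(sc, PCN_IO32_RAP, reg);
 *		return (CSR_READ_4(sc, dataport));
 *	}
 *
 * The RAP write and the data access are a non-atomic pair, so callers
 * must serialize register access themselves.
 */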
pcn_csr_read16(sc, reg) struct pcn_softc *sc; int reg; { CSR_WRITE_2(sc, PCN_IO16_RAP, reg); return(CSR_READ_2(sc, PCN_IO16_RDP)); } static void pcn_csr_write(sc, reg, val) struct pcn_softc *sc; int reg; int val; { CSR_WRITE_4(sc, PCN_IO32_RAP, reg); CSR_WRITE_4(sc, PCN_IO32_RDP, val); return; } static u_int32_t pcn_bcr_read(sc, reg) struct pcn_softc *sc; int reg; { CSR_WRITE_4(sc, PCN_IO32_RAP, reg); return(CSR_READ_4(sc, PCN_IO32_BDP)); } static u_int16_t pcn_bcr_read16(sc, reg) struct pcn_softc *sc; int reg; { CSR_WRITE_2(sc, PCN_IO16_RAP, reg); return(CSR_READ_2(sc, PCN_IO16_BDP)); } static void pcn_bcr_write(sc, reg, val) struct pcn_softc *sc; int reg; int val; { CSR_WRITE_4(sc, PCN_IO32_RAP, reg); CSR_WRITE_4(sc, PCN_IO32_BDP, val); return; } static int pcn_miibus_readreg(dev, phy, reg) device_t dev; int phy, reg; { struct pcn_softc *sc; int val; sc = device_get_softc(dev); /* * At least Am79C971 with DP83840A wedge when isolating the * external PHY so we can't allow multiple external PHYs. * There are cards that use Am79C971 with both the internal * and an external PHY though. * For internal PHYs it doesn't really matter whether we can * isolate the remaining internal and the external ones in * the PHY drivers as the internal PHYs have to be enabled * individually in PCN_BCR_PHYSEL, PCN_CSR_MODE, etc. * With Am79C97{3,5,8} we don't support switching between * the internal and external PHYs, yet, so we can't allow * multiple PHYs with these either. * Am79C97{2,6} actually only support external PHYs (not * connectable internal ones respond at the usual addresses, * which doesn't hurt if we let them show up on the bus) and * isolating them works. */ if (((sc->pcn_type == Am79C971 && phy != PCN_PHYAD_10BT) || sc->pcn_type == Am79C973 || sc->pcn_type == Am79C975 || sc->pcn_type == Am79C978) && sc->pcn_extphyaddr != -1 && phy != sc->pcn_extphyaddr) return(0); pcn_bcr_write(sc, PCN_BCR_MIIADDR, reg | (phy << 5)); val = pcn_bcr_read(sc, PCN_BCR_MIIDATA) & 0xFFFF; if (val == 0xFFFF) return(0); if (((sc->pcn_type == Am79C971 && phy != PCN_PHYAD_10BT) || sc->pcn_type == Am79C973 || sc->pcn_type == Am79C975 || sc->pcn_type == Am79C978) && sc->pcn_extphyaddr == -1) sc->pcn_extphyaddr = phy; return(val); } static int pcn_miibus_writereg(dev, phy, reg, data) device_t dev; int phy, reg, data; { struct pcn_softc *sc; sc = device_get_softc(dev); pcn_bcr_write(sc, PCN_BCR_MIIADDR, reg | (phy << 5)); pcn_bcr_write(sc, PCN_BCR_MIIDATA, data); return(0); } static void pcn_miibus_statchg(dev) device_t dev; { struct pcn_softc *sc; struct mii_data *mii; sc = device_get_softc(dev); mii = device_get_softc(sc->pcn_miibus); if ((mii->mii_media_active & IFM_GMASK) == IFM_FDX) { PCN_BCR_SETBIT(sc, PCN_BCR_DUPLEX, PCN_DUPLEX_FDEN); } else { PCN_BCR_CLRBIT(sc, PCN_BCR_DUPLEX, PCN_DUPLEX_FDEN); } return; } static void pcn_setmulti(sc) struct pcn_softc *sc; { struct ifnet *ifp; struct ifmultiaddr *ifma; u_int32_t h, i; u_int16_t hashes[4] = { 0, 0, 0, 0 }; ifp = sc->pcn_ifp; PCN_CSR_SETBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); if (ifp->if_flags & IFF_ALLMULTI || ifp->if_flags & IFF_PROMISC) { for (i = 0; i < 4; i++) pcn_csr_write(sc, PCN_CSR_MAR0 + i, 0xFFFF); PCN_CSR_CLRBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); return; } /* first, zot all the existing hash bits */ for (i = 0; i < 4; i++) pcn_csr_write(sc, PCN_CSR_MAR0 + i, 0); /* now program new ones */ if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; h =
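/*
 * Worked example of the hash below, with an illustrative CRC value:
 * if ether_crc32_le() of an address yields 0xB6C41E2F, shifting right
 * by 26 keeps the top 6 bits, h = 0x2D = 45; h >> 4 = 2 selects MAR2
 * and 1 << (h & 0xF) = 1 << 13 the bit within it, i.e. bit 45 of the
 * chip's 64-bit logical address filter.
 */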
ether_crc32_le(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN) >> 26; hashes[h >> 4] |= 1 << (h & 0xF); } if_maddr_runlock(ifp); for (i = 0; i < 4; i++) pcn_csr_write(sc, PCN_CSR_MAR0 + i, hashes[i]); PCN_CSR_CLRBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); return; } static void pcn_reset(sc) struct pcn_softc *sc; { /* * Issue a reset by reading from the RESET register. * Note that we don't know if the chip is operating in * 16-bit or 32-bit mode at this point, so we attempt * to reset the chip both ways. If one fails, the other * will succeed. */ CSR_READ_2(sc, PCN_IO16_RESET); CSR_READ_4(sc, PCN_IO32_RESET); /* Wait a little while for the chip to get its brains in order. */ DELAY(1000); /* Select 32-bit (DWIO) mode */ CSR_WRITE_4(sc, PCN_IO32_RDP, 0); /* Select software style 3. */ pcn_bcr_write(sc, PCN_BCR_SSTYLE, PCN_SWSTYLE_PCNETPCI_BURST); return; } static const char * pcn_chipid_name(u_int32_t id) { const struct pcn_chipid *p; p = pcn_chipid; while (p->name) { if (id == p->id) return (p->name); p++; } return ("Unknown"); } static u_int32_t pcn_chip_id(device_t dev) { struct pcn_softc *sc; u_int32_t chip_id; sc = device_get_softc(dev); /* * Note: we can *NOT* put the chip into * 32-bit mode yet. The le(4) driver will only * work in 16-bit mode, and once the chip * goes into 32-bit mode, the only way to * get it out again is with a hardware reset. * So if pcn_probe() is called before the * le(4) driver's probe routine, the chip will * be locked into 32-bit operation and the * le(4) driver will be unable to attach to it. * Note II: if the chip happens to already * be in 32-bit mode, we still need to check * the chip ID, but first we have to detect * 32-bit mode using only 16-bit operations. * The safest way to do this is to read the * PCI subsystem ID from BCR23/24 and compare * that with the value read from PCI config * space. */ chip_id = pcn_bcr_read16(sc, PCN_BCR_PCISUBSYSID); chip_id <<= 16; chip_id |= pcn_bcr_read16(sc, PCN_BCR_PCISUBVENID); /* * Note III: the test for 0x10001000 is a hack to * pacify VMware, whose pseudo-PCnet interface is * broken. Reading the subsystem register from PCI * config space yields 0x00000000 while reading the * same value from I/O space yields 0x10001000. It's * not supposed to be that way. */ if (chip_id == pci_read_config(dev, PCIR_SUBVEND_0, 4) || chip_id == 0x10001000) { /* We're in 16-bit mode. */ chip_id = pcn_csr_read16(sc, PCN_CSR_CHIPID1); chip_id <<= 16; chip_id |= pcn_csr_read16(sc, PCN_CSR_CHIPID0); } else { /* We're in 32-bit mode. */ chip_id = pcn_csr_read(sc, PCN_CSR_CHIPID1); chip_id <<= 16; chip_id |= pcn_csr_read(sc, PCN_CSR_CHIPID0); } return (chip_id); } static const struct pcn_type * pcn_match(u_int16_t vid, u_int16_t did) { const struct pcn_type *t; t = pcn_devs; while (t->pcn_name != NULL) { if ((vid == t->pcn_vid) && (did == t->pcn_did)) return (t); t++; } return (NULL); } /* * Probe for an AMD chip. Check the PCI vendor and device * IDs against our list and return a device name if we find a match. */ static int pcn_probe(dev) device_t dev; { const struct pcn_type *t; struct pcn_softc *sc; int rid; u_int32_t chip_id; t = pcn_match(pci_get_vendor(dev), pci_get_device(dev)); if (t == NULL) return (ENXIO); sc = device_get_softc(dev); /* * Temporarily map the I/O space so we can read the chip ID register.
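 * (For reference, the mode detection in pcn_chip_id() above reduces
 * to this sketch, where bcr16/csr16/csr32 stand for the accessors
 * defined earlier:
 *
 *	sub = bcr16(PCISUBSYSID) << 16 | bcr16(PCISUBVENID);
 *	if (sub == pci_read_config(dev, PCIR_SUBVEND_0, 4) ||
 *	    sub == 0x10001000)
 *		id = csr16(CHIPID1) << 16 | csr16(CHIPID0);	16-bit mode
 *	else
 *		id = csr32(CHIPID1) << 16 | csr32(CHIPID0);	32-bit mode
 *
 * using only 16-bit cycles until the mode is known.)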
*/ rid = PCN_RID; sc->pcn_res = bus_alloc_resource_any(dev, PCN_RES, &rid, RF_ACTIVE); if (sc->pcn_res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); return(ENXIO); } sc->pcn_btag = rman_get_bustag(sc->pcn_res); sc->pcn_bhandle = rman_get_bushandle(sc->pcn_res); chip_id = pcn_chip_id(dev); bus_release_resource(dev, PCN_RES, PCN_RID, sc->pcn_res); switch((chip_id >> 12) & PART_MASK) { case Am79C971: case Am79C972: case Am79C973: case Am79C975: case Am79C976: case Am79C978: break; default: return(ENXIO); } device_set_desc(dev, t->pcn_name); return(BUS_PROBE_DEFAULT); } /* * Attach the interface. Allocate softc structures, do ifmedia * setup and ethernet/BPF attach. */ static int pcn_attach(dev) device_t dev; { u_int32_t eaddr[2]; struct pcn_softc *sc; struct mii_data *mii; struct mii_softc *miisc; struct ifnet *ifp; int error = 0, rid; sc = device_get_softc(dev); /* Initialize our mutex. */ mtx_init(&sc->pcn_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); /* * Map control/status registers. */ pci_enable_busmaster(dev); /* Retrieve the chip ID */ sc->pcn_type = (pcn_chip_id(dev) >> 12) & PART_MASK; device_printf(dev, "Chip ID %04x (%s)\n", sc->pcn_type, pcn_chipid_name(sc->pcn_type)); rid = PCN_RID; sc->pcn_res = bus_alloc_resource_any(dev, PCN_RES, &rid, RF_ACTIVE); if (sc->pcn_res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); error = ENXIO; goto fail; } sc->pcn_btag = rman_get_bustag(sc->pcn_res); sc->pcn_bhandle = rman_get_bushandle(sc->pcn_res); /* Allocate interrupt */ rid = 0; sc->pcn_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->pcn_irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } /* Reset the adapter. */ pcn_reset(sc); /* * Get station address from the EEPROM. */ eaddr[0] = CSR_READ_4(sc, PCN_IO32_APROM00); eaddr[1] = CSR_READ_4(sc, PCN_IO32_APROM01); callout_init_mtx(&sc->pcn_stat_callout, &sc->pcn_mtx, 0); sc->pcn_ldata = contigmalloc(sizeof(struct pcn_list_data), M_DEVBUF, M_NOWAIT, 0, 0xffffffff, PAGE_SIZE, 0); if (sc->pcn_ldata == NULL) { device_printf(dev, "no memory for list buffers!\n"); error = ENXIO; goto fail; } bzero(sc->pcn_ldata, sizeof(struct pcn_list_data)); ifp = sc->pcn_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); error = ENOSPC; goto fail; } ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = pcn_ioctl; ifp->if_start = pcn_start; ifp->if_init = pcn_init; ifp->if_snd.ifq_maxlen = PCN_TX_LIST_CNT - 1; /* * Do MII setup. * See the comment in pcn_miibus_readreg() for why we can't * universally pass MIIF_NOISOLATE here. */ sc->pcn_extphyaddr = -1; error = mii_attach(dev, &sc->pcn_miibus, ifp, pcn_ifmedia_upd, pcn_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, 0); if (error != 0) { device_printf(dev, "attaching PHYs failed\n"); goto fail; } /* * Record the media instances of internal PHYs, which map the * built-in interfaces to the MII, so we can set the active * PHY/port based on the currently selected media. */ sc->pcn_inst_10bt = -1; mii = device_get_softc(sc->pcn_miibus); LIST_FOREACH(miisc, &mii->mii_phys, mii_list) { switch (miisc->mii_phy) { case PCN_PHYAD_10BT: sc->pcn_inst_10bt = miisc->mii_inst; break; /* * XXX deal with the Am79C97{3,5} internal 100baseT * and the Am79C978 internal HomePNA PHYs. */ } } /* * Call MI attach routine. 
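 * ether_ifattach() makes the interface visible to the rest of the
 * system, so ioctls and output may arrive from this point on; that is
 * why the interrupt handler is hooked only afterwards and why the
 * error path below calls ether_ifdetach() before unwinding.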
*/ ether_ifattach(ifp, (u_int8_t *) eaddr); /* Hook interrupt last to avoid having to lock softc */ error = bus_setup_intr(dev, sc->pcn_irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, pcn_intr, sc, &sc->pcn_intrhand); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } fail: if (error) pcn_detach(dev); + gone_by_fcp101_dev(dev); + return(error); } /* * Shutdown hardware and free up resources. This can be called any * time after the mutex has been initialized. It is called in both * the error case in attach and the normal detach case so it needs * to be careful about only freeing resources that have actually been * allocated. */ static int pcn_detach(dev) device_t dev; { struct pcn_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); ifp = sc->pcn_ifp; KASSERT(mtx_initialized(&sc->pcn_mtx), ("pcn mutex not initialized")); /* These should only be active if attach succeeded */ if (device_is_attached(dev)) { PCN_LOCK(sc); pcn_reset(sc); pcn_stop(sc); PCN_UNLOCK(sc); callout_drain(&sc->pcn_stat_callout); ether_ifdetach(ifp); } if (sc->pcn_miibus) device_delete_child(dev, sc->pcn_miibus); bus_generic_detach(dev); if (sc->pcn_intrhand) bus_teardown_intr(dev, sc->pcn_irq, sc->pcn_intrhand); if (sc->pcn_irq) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->pcn_irq); if (sc->pcn_res) bus_release_resource(dev, PCN_RES, PCN_RID, sc->pcn_res); if (ifp) if_free(ifp); if (sc->pcn_ldata) { contigfree(sc->pcn_ldata, sizeof(struct pcn_list_data), M_DEVBUF); } mtx_destroy(&sc->pcn_mtx); return(0); } /* * Initialize the transmit descriptors. */ static int pcn_list_tx_init(sc) struct pcn_softc *sc; { struct pcn_list_data *ld; struct pcn_ring_data *cd; int i; cd = &sc->pcn_cdata; ld = sc->pcn_ldata; for (i = 0; i < PCN_TX_LIST_CNT; i++) { cd->pcn_tx_chain[i] = NULL; ld->pcn_tx_list[i].pcn_tbaddr = 0; ld->pcn_tx_list[i].pcn_txctl = 0; ld->pcn_tx_list[i].pcn_txstat = 0; } cd->pcn_tx_prod = cd->pcn_tx_cons = cd->pcn_tx_cnt = 0; return(0); } /* * Initialize the RX descriptors and allocate mbufs for them. */ static int pcn_list_rx_init(sc) struct pcn_softc *sc; { struct pcn_ring_data *cd; int i; cd = &sc->pcn_cdata; for (i = 0; i < PCN_RX_LIST_CNT; i++) { if (pcn_newbuf(sc, i, NULL) == ENOBUFS) return(ENOBUFS); } cd->pcn_rx_prod = 0; return(0); } /* * Initialize an RX descriptor and attach an MBUF cluster. */ static int pcn_newbuf(sc, idx, m) struct pcn_softc *sc; int idx; struct mbuf *m; { struct mbuf *m_new = NULL; struct pcn_rx_desc *c; c = &sc->pcn_ldata->pcn_rx_list[idx]; if (m == NULL) { MGETHDR(m_new, M_NOWAIT, MT_DATA); if (m_new == NULL) return(ENOBUFS); if (!(MCLGET(m_new, M_NOWAIT))) { m_freem(m_new); return(ENOBUFS); } m_new->m_len = m_new->m_pkthdr.len = MCLBYTES; } else { m_new = m; m_new->m_len = m_new->m_pkthdr.len = MCLBYTES; m_new->m_data = m_new->m_ext.ext_buf; } m_adj(m_new, ETHER_ALIGN); sc->pcn_cdata.pcn_rx_chain[idx] = m_new; c->pcn_rbaddr = vtophys(mtod(m_new, caddr_t)); c->pcn_bufsz = (~(PCN_RXLEN) + 1) & PCN_RXLEN_BUFSZ; c->pcn_bufsz |= PCN_RXLEN_MBO; c->pcn_rxstat = PCN_RXSTAT_STP|PCN_RXSTAT_ENP|PCN_RXSTAT_OWN; return(0); } /* * A frame has been uploaded: pass the resulting mbuf chain up to * the higher level protocols. 
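 * A PCnet quirk worth noting in pcn_newbuf() above: LANCE-style
 * descriptors store buffer byte counts as two's complements, hence
 * the (~(PCN_RXLEN) + 1) & PCN_RXLEN_BUFSZ dance.  Assuming the usual
 * 12-bit count field, a 1520-byte (0x5F0) buffer would be encoded as
 * (~0x5F0 + 1) & 0xFFF = 0xA10.  The ring lengths written in
 * pcn_init_locked() use the same encoding.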
*/ static void pcn_rxeof(sc) struct pcn_softc *sc; { struct mbuf *m; struct ifnet *ifp; struct pcn_rx_desc *cur_rx; int i; PCN_LOCK_ASSERT(sc); ifp = sc->pcn_ifp; i = sc->pcn_cdata.pcn_rx_prod; while(PCN_OWN_RXDESC(&sc->pcn_ldata->pcn_rx_list[i])) { cur_rx = &sc->pcn_ldata->pcn_rx_list[i]; m = sc->pcn_cdata.pcn_rx_chain[i]; sc->pcn_cdata.pcn_rx_chain[i] = NULL; /* * If an error occurs, update stats, clear the * status word and leave the mbuf cluster in place: * it should simply get re-used next time this descriptor * comes up in the ring. */ if (cur_rx->pcn_rxstat & PCN_RXSTAT_ERR) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); pcn_newbuf(sc, i, m); PCN_INC(i, PCN_RX_LIST_CNT); continue; } if (pcn_newbuf(sc, i, NULL)) { /* Ran out of mbufs; recycle this one. */ pcn_newbuf(sc, i, m); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); PCN_INC(i, PCN_RX_LIST_CNT); continue; } PCN_INC(i, PCN_RX_LIST_CNT); /* No errors; receive the packet. */ if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); m->m_len = m->m_pkthdr.len = cur_rx->pcn_rxlen - ETHER_CRC_LEN; m->m_pkthdr.rcvif = ifp; PCN_UNLOCK(sc); (*ifp->if_input)(ifp, m); PCN_LOCK(sc); } sc->pcn_cdata.pcn_rx_prod = i; return; } /* * A frame was downloaded to the chip. It's safe for us to clean up * the list buffers. */ static void pcn_txeof(sc) struct pcn_softc *sc; { struct pcn_tx_desc *cur_tx = NULL; struct ifnet *ifp; u_int32_t idx; ifp = sc->pcn_ifp; /* * Go through our tx list and free mbufs for those * frames that have been transmitted. */ idx = sc->pcn_cdata.pcn_tx_cons; while (idx != sc->pcn_cdata.pcn_tx_prod) { cur_tx = &sc->pcn_ldata->pcn_tx_list[idx]; if (!PCN_OWN_TXDESC(cur_tx)) break; if (!(cur_tx->pcn_txctl & PCN_TXCTL_ENP)) { sc->pcn_cdata.pcn_tx_cnt--; PCN_INC(idx, PCN_TX_LIST_CNT); continue; } if (cur_tx->pcn_txctl & PCN_TXCTL_ERR) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if (cur_tx->pcn_txstat & PCN_TXSTAT_EXDEF) if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 1); if (cur_tx->pcn_txstat & PCN_TXSTAT_RTRY) if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 1); } if_inc_counter(ifp, IFCOUNTER_COLLISIONS, cur_tx->pcn_txstat & PCN_TXSTAT_TRC); if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); if (sc->pcn_cdata.pcn_tx_chain[idx] != NULL) { m_freem(sc->pcn_cdata.pcn_tx_chain[idx]); sc->pcn_cdata.pcn_tx_chain[idx] = NULL; } sc->pcn_cdata.pcn_tx_cnt--; PCN_INC(idx, PCN_TX_LIST_CNT); } if (idx != sc->pcn_cdata.pcn_tx_cons) { /* Some buffers have been freed. */ sc->pcn_cdata.pcn_tx_cons = idx; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; } sc->pcn_timer = (sc->pcn_cdata.pcn_tx_cnt == 0) ? 
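/* Watchdog: arm 5 ticks while Tx work is pending, disarm when the ring drains; pcn_tick() counts it down. */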
0 : 5; return; } static void pcn_tick(xsc) void *xsc; { struct pcn_softc *sc; struct mii_data *mii; struct ifnet *ifp; sc = xsc; ifp = sc->pcn_ifp; PCN_LOCK_ASSERT(sc); mii = device_get_softc(sc->pcn_miibus); mii_tick(mii); /* link just died */ if (sc->pcn_link && !(mii->mii_media_status & IFM_ACTIVE)) sc->pcn_link = 0; /* link just came up, restart */ if (!sc->pcn_link && mii->mii_media_status & IFM_ACTIVE && IFM_SUBTYPE(mii->mii_media_active) != IFM_NONE) { sc->pcn_link++; if (ifp->if_snd.ifq_head != NULL) pcn_start_locked(ifp); } if (sc->pcn_timer > 0 && --sc->pcn_timer == 0) pcn_watchdog(sc); callout_reset(&sc->pcn_stat_callout, hz, pcn_tick, sc); return; } static void pcn_intr(arg) void *arg; { struct pcn_softc *sc; struct ifnet *ifp; u_int32_t status; sc = arg; ifp = sc->pcn_ifp; PCN_LOCK(sc); /* Suppress unwanted interrupts */ if (!(ifp->if_flags & IFF_UP)) { pcn_stop(sc); PCN_UNLOCK(sc); return; } CSR_WRITE_4(sc, PCN_IO32_RAP, PCN_CSR_CSR); while ((status = CSR_READ_4(sc, PCN_IO32_RDP)) & PCN_CSR_INTR) { CSR_WRITE_4(sc, PCN_IO32_RDP, status); if (status & PCN_CSR_RINT) pcn_rxeof(sc); if (status & PCN_CSR_TINT) pcn_txeof(sc); if (status & PCN_CSR_ERR) { pcn_init_locked(sc); break; } } if (ifp->if_snd.ifq_head != NULL) pcn_start_locked(ifp); PCN_UNLOCK(sc); return; } /* * Encapsulate an mbuf chain in a descriptor by coupling the mbuf data * pointers to the fragment pointers. */ static int pcn_encap(sc, m_head, txidx) struct pcn_softc *sc; struct mbuf *m_head; u_int32_t *txidx; { struct pcn_tx_desc *f = NULL; struct mbuf *m; int frag, cur, cnt = 0; /* * Start packing the mbufs in this chain into * the fragment pointers. Stop when we run out * of fragments or hit the end of the mbuf chain. */ m = m_head; cur = frag = *txidx; for (m = m_head; m != NULL; m = m->m_next) { if (m->m_len == 0) continue; if ((PCN_TX_LIST_CNT - (sc->pcn_cdata.pcn_tx_cnt + cnt)) < 2) return(ENOBUFS); f = &sc->pcn_ldata->pcn_tx_list[frag]; f->pcn_txctl = (~(m->m_len) + 1) & PCN_TXCTL_BUFSZ; f->pcn_txctl |= PCN_TXCTL_MBO; f->pcn_tbaddr = vtophys(mtod(m, vm_offset_t)); if (cnt == 0) f->pcn_txctl |= PCN_TXCTL_STP; else f->pcn_txctl |= PCN_TXCTL_OWN; cur = frag; PCN_INC(frag, PCN_TX_LIST_CNT); cnt++; } if (m != NULL) return(ENOBUFS); sc->pcn_cdata.pcn_tx_chain[cur] = m_head; sc->pcn_ldata->pcn_tx_list[cur].pcn_txctl |= PCN_TXCTL_ENP|PCN_TXCTL_ADD_FCS|PCN_TXCTL_MORE_LTINT; sc->pcn_ldata->pcn_tx_list[*txidx].pcn_txctl |= PCN_TXCTL_OWN; sc->pcn_cdata.pcn_tx_cnt += cnt; *txidx = frag; return(0); } /* * Main transmit routine. To avoid having to do mbuf copies, we put pointers * to the mbuf data regions directly in the transmit lists. We also save a * copy of the pointers since the transmit list fragment pointers are * physical addresses. */ static void pcn_start(ifp) struct ifnet *ifp; { struct pcn_softc *sc; sc = ifp->if_softc; PCN_LOCK(sc); pcn_start_locked(ifp); PCN_UNLOCK(sc); } static void pcn_start_locked(ifp) struct ifnet *ifp; { struct pcn_softc *sc; struct mbuf *m_head = NULL; u_int32_t idx; sc = ifp->if_softc; PCN_LOCK_ASSERT(sc); if (!sc->pcn_link) return; idx = sc->pcn_cdata.pcn_tx_prod; if (ifp->if_drv_flags & IFF_DRV_OACTIVE) return; while(sc->pcn_cdata.pcn_tx_chain[idx] == NULL) { IF_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; if (pcn_encap(sc, m_head, &idx)) { IF_PREPEND(&ifp->if_snd, m_head); ifp->if_drv_flags |= IFF_DRV_OACTIVE; break; } /* * If there's a BPF listener, bounce a copy of this frame * to him. 
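 * (Note the ownership handoff in pcn_encap() above: every descriptor
 * in the chain is given to the chip (OWN) as it is filled except the
 * first, which is only flipped after ENP is set on the last one --
 * otherwise the chip could start transmitting a half-built chain.)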
*/ BPF_MTAP(ifp, m_head); } /* Transmit */ sc->pcn_cdata.pcn_tx_prod = idx; pcn_csr_write(sc, PCN_CSR_CSR, PCN_CSR_TX|PCN_CSR_INTEN); /* * Set a timeout in case the chip goes out to lunch. */ sc->pcn_timer = 5; return; } static void pcn_setfilt(ifp) struct ifnet *ifp; { struct pcn_softc *sc; sc = ifp->if_softc; /* If we want promiscuous mode, set the allframes bit. */ if (ifp->if_flags & IFF_PROMISC) { PCN_CSR_SETBIT(sc, PCN_CSR_MODE, PCN_MODE_PROMISC); } else { PCN_CSR_CLRBIT(sc, PCN_CSR_MODE, PCN_MODE_PROMISC); } /* Set the capture broadcast bit to capture broadcast frames. */ if (ifp->if_flags & IFF_BROADCAST) { PCN_CSR_CLRBIT(sc, PCN_CSR_MODE, PCN_MODE_RXNOBROAD); } else { PCN_CSR_SETBIT(sc, PCN_CSR_MODE, PCN_MODE_RXNOBROAD); } return; } static void pcn_init(xsc) void *xsc; { struct pcn_softc *sc = xsc; PCN_LOCK(sc); pcn_init_locked(sc); PCN_UNLOCK(sc); } static void pcn_init_locked(sc) struct pcn_softc *sc; { struct ifnet *ifp = sc->pcn_ifp; struct mii_data *mii = NULL; struct ifmedia_entry *ife; PCN_LOCK_ASSERT(sc); /* * Cancel pending I/O and free all RX/TX buffers. */ pcn_stop(sc); pcn_reset(sc); mii = device_get_softc(sc->pcn_miibus); ife = mii->mii_media.ifm_cur; /* Set MAC address */ pcn_csr_write(sc, PCN_CSR_PAR0, ((u_int16_t *)IF_LLADDR(sc->pcn_ifp))[0]); pcn_csr_write(sc, PCN_CSR_PAR1, ((u_int16_t *)IF_LLADDR(sc->pcn_ifp))[1]); pcn_csr_write(sc, PCN_CSR_PAR2, ((u_int16_t *)IF_LLADDR(sc->pcn_ifp))[2]); /* Init circular RX list. */ if (pcn_list_rx_init(sc) == ENOBUFS) { if_printf(ifp, "initialization failed: no " "memory for rx buffers\n"); pcn_stop(sc); return; } /* * Init tx descriptors. */ pcn_list_tx_init(sc); /* Clear PCN_MISC_ASEL so we can set the port via PCN_CSR_MODE. */ PCN_BCR_CLRBIT(sc, PCN_BCR_MISCCFG, PCN_MISC_ASEL); /* * Set up the port based on the currently selected media. * For Am79C978 we've to unconditionally set PCN_PORT_MII and * set the PHY in PCN_BCR_PHYSEL instead. */ if (sc->pcn_type != Am79C978 && IFM_INST(ife->ifm_media) == sc->pcn_inst_10bt) pcn_csr_write(sc, PCN_CSR_MODE, PCN_PORT_10BASET); else pcn_csr_write(sc, PCN_CSR_MODE, PCN_PORT_MII); /* Set up RX filter. */ pcn_setfilt(ifp); /* * Load the multicast filter. */ pcn_setmulti(sc); /* * Load the addresses of the RX and TX lists. */ pcn_csr_write(sc, PCN_CSR_RXADDR0, vtophys(&sc->pcn_ldata->pcn_rx_list[0]) & 0xFFFF); pcn_csr_write(sc, PCN_CSR_RXADDR1, (vtophys(&sc->pcn_ldata->pcn_rx_list[0]) >> 16) & 0xFFFF); pcn_csr_write(sc, PCN_CSR_TXADDR0, vtophys(&sc->pcn_ldata->pcn_tx_list[0]) & 0xFFFF); pcn_csr_write(sc, PCN_CSR_TXADDR1, (vtophys(&sc->pcn_ldata->pcn_tx_list[0]) >> 16) & 0xFFFF); /* Set the RX and TX ring sizes. */ pcn_csr_write(sc, PCN_CSR_RXRINGLEN, (~PCN_RX_LIST_CNT) + 1); pcn_csr_write(sc, PCN_CSR_TXRINGLEN, (~PCN_TX_LIST_CNT) + 1); /* We're not using the initialization block. */ pcn_csr_write(sc, PCN_CSR_IAB1, 0); /* Enable fast suspend mode. */ PCN_CSR_SETBIT(sc, PCN_CSR_EXTCTL2, PCN_EXTCTL2_FASTSPNDE); /* * Enable burst read and write. Also set the no underflow * bit. This will avoid transmit underruns in certain * conditions while still providing decent performance. */ PCN_BCR_SETBIT(sc, PCN_BCR_BUSCTL, PCN_BUSCTL_NOUFLOW| PCN_BUSCTL_BREAD|PCN_BUSCTL_BWRITE); /* Enable graceful recovery from underflow. */ PCN_CSR_SETBIT(sc, PCN_CSR_IMR, PCN_IMR_DXSUFLO); /* Enable auto-padding of short TX frames. */ PCN_CSR_SETBIT(sc, PCN_CSR_TFEAT, PCN_TFEAT_PAD_TX); /* Disable MII autoneg (we handle this ourselves). 
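 * ("Ourselves" here means the mii(4) layer: media is selected via
 * mii_mediachg() below rather than the chip's automatic port setup.)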
*/ PCN_BCR_SETBIT(sc, PCN_BCR_MIICTL, PCN_MIICTL_DANAS); if (sc->pcn_type == Am79C978) /* XXX support other PHYs? */ pcn_bcr_write(sc, PCN_BCR_PHYSEL, PCN_PHYSEL_PCNET|PCN_PHY_HOMEPNA); /* Enable interrupts and start the controller running. */ pcn_csr_write(sc, PCN_CSR_CSR, PCN_CSR_INTEN|PCN_CSR_START); mii_mediachg(mii); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->pcn_stat_callout, hz, pcn_tick, sc); return; } /* * Set media options. */ static int pcn_ifmedia_upd(ifp) struct ifnet *ifp; { struct pcn_softc *sc; sc = ifp->if_softc; PCN_LOCK(sc); /* * At least Am79C971 with DP83840A can wedge when switching * from the internal 10baseT PHY to the external PHY without * issuing pcn_reset(). For setting the port in PCN_CSR_MODE * the PCnet chip has to be powered down or stopped anyway * and although documented otherwise it doesn't take effect * until the next initialization. */ sc->pcn_link = 0; pcn_stop(sc); pcn_reset(sc); pcn_init_locked(sc); if (ifp->if_snd.ifq_head != NULL) pcn_start_locked(ifp); PCN_UNLOCK(sc); return(0); } /* * Report current media status. */ static void pcn_ifmedia_sts(ifp, ifmr) struct ifnet *ifp; struct ifmediareq *ifmr; { struct pcn_softc *sc; struct mii_data *mii; sc = ifp->if_softc; mii = device_get_softc(sc->pcn_miibus); PCN_LOCK(sc); mii_pollstat(mii); ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; PCN_UNLOCK(sc); return; } static int pcn_ioctl(ifp, command, data) struct ifnet *ifp; u_long command; caddr_t data; { struct pcn_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *) data; struct mii_data *mii = NULL; int error = 0; switch(command) { case SIOCSIFFLAGS: PCN_LOCK(sc); if (ifp->if_flags & IFF_UP) { if (ifp->if_drv_flags & IFF_DRV_RUNNING && ifp->if_flags & IFF_PROMISC && !(sc->pcn_if_flags & IFF_PROMISC)) { PCN_CSR_SETBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); pcn_setfilt(ifp); PCN_CSR_CLRBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); pcn_csr_write(sc, PCN_CSR_CSR, PCN_CSR_INTEN|PCN_CSR_START); } else if (ifp->if_drv_flags & IFF_DRV_RUNNING && !(ifp->if_flags & IFF_PROMISC) && sc->pcn_if_flags & IFF_PROMISC) { PCN_CSR_SETBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); pcn_setfilt(ifp); PCN_CSR_CLRBIT(sc, PCN_CSR_EXTCTL1, PCN_EXTCTL1_SPND); pcn_csr_write(sc, PCN_CSR_CSR, PCN_CSR_INTEN|PCN_CSR_START); } else if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) pcn_init_locked(sc); } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) pcn_stop(sc); } sc->pcn_if_flags = ifp->if_flags; PCN_UNLOCK(sc); error = 0; break; case SIOCADDMULTI: case SIOCDELMULTI: PCN_LOCK(sc); pcn_setmulti(sc); PCN_UNLOCK(sc); error = 0; break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: mii = device_get_softc(sc->pcn_miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); break; default: error = ether_ioctl(ifp, command, data); break; } return(error); } static void pcn_watchdog(struct pcn_softc *sc) { struct ifnet *ifp; PCN_LOCK_ASSERT(sc); ifp = sc->pcn_ifp; if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if_printf(ifp, "watchdog timeout\n"); pcn_stop(sc); pcn_reset(sc); pcn_init_locked(sc); if (ifp->if_snd.ifq_head != NULL) pcn_start_locked(ifp); } /* * Stop the adapter and free any mbufs allocated to the * RX and TX lists. 
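 * (Contrast with pcn_ioctl() above, which avoids a full stop when only
 * promiscuous mode changes: it suspends the chip via PCN_EXTCTL1_SPND,
 * rewrites PCN_CSR_MODE, and restarts -- the mode CSR only takes
 * effect while the controller is stopped or suspended.)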
*/ static void pcn_stop(struct pcn_softc *sc) { int i; struct ifnet *ifp; PCN_LOCK_ASSERT(sc); ifp = sc->pcn_ifp; sc->pcn_timer = 0; callout_stop(&sc->pcn_stat_callout); /* Turn off interrupts */ PCN_CSR_CLRBIT(sc, PCN_CSR_CSR, PCN_CSR_INTEN); /* Stop adapter */ PCN_CSR_SETBIT(sc, PCN_CSR_CSR, PCN_CSR_STOP); sc->pcn_link = 0; /* * Free data in the RX lists. */ for (i = 0; i < PCN_RX_LIST_CNT; i++) { if (sc->pcn_cdata.pcn_rx_chain[i] != NULL) { m_freem(sc->pcn_cdata.pcn_rx_chain[i]); sc->pcn_cdata.pcn_rx_chain[i] = NULL; } } bzero((char *)&sc->pcn_ldata->pcn_rx_list, sizeof(sc->pcn_ldata->pcn_rx_list)); /* * Free the TX list buffers. */ for (i = 0; i < PCN_TX_LIST_CNT; i++) { if (sc->pcn_cdata.pcn_tx_chain[i] != NULL) { m_freem(sc->pcn_cdata.pcn_tx_chain[i]); sc->pcn_cdata.pcn_tx_chain[i] = NULL; } } bzero((char *)&sc->pcn_ldata->pcn_tx_list, sizeof(sc->pcn_ldata->pcn_tx_list)); ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); return; } /* * Stop all chip I/O so that the kernel's probe routines don't * get confused by errant DMAs when rebooting. */ static int pcn_shutdown(device_t dev) { struct pcn_softc *sc; sc = device_get_softc(dev); PCN_LOCK(sc); pcn_reset(sc); pcn_stop(sc); PCN_UNLOCK(sc); return 0; } Index: stable/12/sys/dev/sf/if_sf.c =================================================================== --- stable/12/sys/dev/sf/if_sf.c (revision 339734) +++ stable/12/sys/dev/sf/if_sf.c (revision 339735) @@ -1,2738 +1,2740 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1997, 1998, 1999 * Bill Paul . All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Bill Paul. * 4. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Adaptec AIC-6915 "Starfire" PCI fast ethernet driver for FreeBSD. * Programming manual is available from: * http://download.adaptec.com/pdfs/user_guides/aic6915_pg.pdf. 
* * Written by Bill Paul * Department of Electrical Engineering * Columbia University, New York City */ /* * The Adaptec AIC-6915 "Starfire" is a 64-bit 10/100 PCI ethernet * controller designed with flexibility and reducing CPU load in mind. * The Starfire offers high and low priority buffer queues, a * producer/consumer index mechanism and several different buffer * queue and completion queue descriptor types. Any one of a number * of different driver designs can be used, depending on system and * OS requirements. This driver makes use of type2 transmit frame * descriptors to take full advantage of fragmented packet buffers * and two RX buffer queues prioritized on size (one queue for small * frames that will fit into a single mbuf, another with full size * mbuf clusters for everything else). The producer/consumer indexes * and completion queues are also used. * * One downside to the Starfire has to do with alignment: buffer * queues must be aligned on 256-byte boundaries, and receive buffers * must be aligned on longword boundaries. The receive buffer alignment * causes problems on the strict alignment architecture, where the * packet payload should be longword aligned. There is no simple way * around this. * * For receive filtering, the Starfire offers 16 perfect filter slots * and a 512-bit hash table. * * The Starfire has no internal transceiver, relying instead on an * external MII-based transceiver. Accessing registers on external * PHYs is done through a special register map rather than with the * usual bitbang MDIO method. * * Accessing the registers on the Starfire is a little tricky. The * Starfire has a 512K internal register space. When programmed for * PCI memory mapped mode, the entire register space can be accessed * directly. However in I/O space mode, only 256 bytes are directly * mapped into PCI I/O space. The other registers can be accessed * indirectly using the SF_INDIRECTIO_ADDR and SF_INDIRECTIO_DATA * registers inside the 256-byte I/O window. */ #ifdef HAVE_KERNEL_OPTION_HEADERS #include "opt_device_polling.h" #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* "device miibus" required. See GENERIC if you get errors here. */ #include "miibus_if.h" MODULE_DEPEND(sf, pci, 1, 1, 1); MODULE_DEPEND(sf, ether, 1, 1, 1); MODULE_DEPEND(sf, miibus, 1, 1, 1); #undef SF_GFP_DEBUG #define SF_CSUM_FEATURES (CSUM_TCP | CSUM_UDP) /* Define this to activate partial TCP/UDP checksum offload.
*/ #undef SF_PARTIAL_CSUM_SUPPORT static struct sf_type sf_devs[] = { { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_62011_REV0, "Adaptec ANA-62011 (rev 0) 10/100BaseTX" }, { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_62011_REV1, "Adaptec ANA-62011 (rev 1) 10/100BaseTX" }, { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_62022, "Adaptec ANA-62022 10/100BaseTX" }, { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_62044_REV0, "Adaptec ANA-62044 (rev 0) 10/100BaseTX" }, { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_62044_REV1, "Adaptec ANA-62044 (rev 1) 10/100BaseTX" }, { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_62020, "Adaptec ANA-62020 10/100BaseFX" }, { AD_VENDORID, AD_DEVICEID_STARFIRE, "Adaptec AIC-6915 10/100BaseTX", AD_SUBSYSID_69011, "Adaptec ANA-69011 10/100BaseTX" }, }; static int sf_probe(device_t); static int sf_attach(device_t); static int sf_detach(device_t); static int sf_shutdown(device_t); static int sf_suspend(device_t); static int sf_resume(device_t); static void sf_intr(void *); static void sf_tick(void *); static void sf_stats_update(struct sf_softc *); #ifndef __NO_STRICT_ALIGNMENT static __inline void sf_fixup_rx(struct mbuf *); #endif static int sf_rxeof(struct sf_softc *); static void sf_txeof(struct sf_softc *); static int sf_encap(struct sf_softc *, struct mbuf **); static void sf_start(struct ifnet *); static void sf_start_locked(struct ifnet *); static int sf_ioctl(struct ifnet *, u_long, caddr_t); static void sf_download_fw(struct sf_softc *); static void sf_init(void *); static void sf_init_locked(struct sf_softc *); static void sf_stop(struct sf_softc *); static void sf_watchdog(struct sf_softc *); static int sf_ifmedia_upd(struct ifnet *); static int sf_ifmedia_upd_locked(struct ifnet *); static void sf_ifmedia_sts(struct ifnet *, struct ifmediareq *); static void sf_reset(struct sf_softc *); static int sf_dma_alloc(struct sf_softc *); static void sf_dma_free(struct sf_softc *); static int sf_init_rx_ring(struct sf_softc *); static void sf_init_tx_ring(struct sf_softc *); static int sf_newbuf(struct sf_softc *, int); static void sf_rxfilter(struct sf_softc *); static int sf_setperf(struct sf_softc *, int, uint8_t *); static int sf_sethash(struct sf_softc *, caddr_t, int); #ifdef notdef static int sf_setvlan(struct sf_softc *, int, uint32_t); #endif static uint8_t sf_read_eeprom(struct sf_softc *, int); static int sf_miibus_readreg(device_t, int, int); static int sf_miibus_writereg(device_t, int, int, int); static void sf_miibus_statchg(device_t); #ifdef DEVICE_POLLING static int sf_poll(struct ifnet *ifp, enum poll_cmd cmd, int count); #endif static uint32_t csr_read_4(struct sf_softc *, int); static void csr_write_4(struct sf_softc *, int, uint32_t); static void sf_txthresh_adjust(struct sf_softc *); static int sf_sysctl_stats(SYSCTL_HANDLER_ARGS); static int sysctl_int_range(SYSCTL_HANDLER_ARGS, int, int); static int sysctl_hw_sf_int_mod(SYSCTL_HANDLER_ARGS); static device_method_t sf_methods[] = { /* Device interface */ DEVMETHOD(device_probe, sf_probe), DEVMETHOD(device_attach, sf_attach), DEVMETHOD(device_detach, sf_detach), DEVMETHOD(device_shutdown, sf_shutdown), DEVMETHOD(device_suspend, sf_suspend), DEVMETHOD(device_resume, sf_resume), /* MII interface */ DEVMETHOD(miibus_readreg, sf_miibus_readreg), DEVMETHOD(miibus_writereg, 
sf_miibus_writereg), DEVMETHOD(miibus_statchg, sf_miibus_statchg), DEVMETHOD_END }; static driver_t sf_driver = { "sf", sf_methods, sizeof(struct sf_softc), }; static devclass_t sf_devclass; DRIVER_MODULE(sf, pci, sf_driver, sf_devclass, 0, 0); DRIVER_MODULE(miibus, sf, miibus_driver, miibus_devclass, 0, 0); #define SF_SETBIT(sc, reg, x) \ csr_write_4(sc, reg, csr_read_4(sc, reg) | (x)) #define SF_CLRBIT(sc, reg, x) \ csr_write_4(sc, reg, csr_read_4(sc, reg) & ~(x)) static uint32_t csr_read_4(struct sf_softc *sc, int reg) { uint32_t val; if (sc->sf_restype == SYS_RES_MEMORY) val = CSR_READ_4(sc, (reg + SF_RMAP_INTREG_BASE)); else { CSR_WRITE_4(sc, SF_INDIRECTIO_ADDR, reg + SF_RMAP_INTREG_BASE); val = CSR_READ_4(sc, SF_INDIRECTIO_DATA); } return (val); } static uint8_t sf_read_eeprom(struct sf_softc *sc, int reg) { uint8_t val; val = (csr_read_4(sc, SF_EEADDR_BASE + (reg & 0xFFFFFFFC)) >> (8 * (reg & 3))) & 0xFF; return (val); } static void csr_write_4(struct sf_softc *sc, int reg, uint32_t val) { if (sc->sf_restype == SYS_RES_MEMORY) CSR_WRITE_4(sc, (reg + SF_RMAP_INTREG_BASE), val); else { CSR_WRITE_4(sc, SF_INDIRECTIO_ADDR, reg + SF_RMAP_INTREG_BASE); CSR_WRITE_4(sc, SF_INDIRECTIO_DATA, val); } } /* * Copy the address 'mac' into the perfect RX filter entry at * offset 'idx.' The perfect filter only has 16 entries so do * some sanity tests. */ static int sf_setperf(struct sf_softc *sc, int idx, uint8_t *mac) { if (idx < 0 || idx > SF_RXFILT_PERFECT_CNT) return (EINVAL); if (mac == NULL) return (EINVAL); csr_write_4(sc, SF_RXFILT_PERFECT_BASE + (idx * SF_RXFILT_PERFECT_SKIP) + 0, mac[5] | (mac[4] << 8)); csr_write_4(sc, SF_RXFILT_PERFECT_BASE + (idx * SF_RXFILT_PERFECT_SKIP) + 4, mac[3] | (mac[2] << 8)); csr_write_4(sc, SF_RXFILT_PERFECT_BASE + (idx * SF_RXFILT_PERFECT_SKIP) + 8, mac[1] | (mac[0] << 8)); return (0); } /* * Set the bit in the 512-bit hash table that corresponds to the * specified mac address 'mac.' If 'prio' is nonzero, update the * priority hash table instead of the filter hash table. */ static int sf_sethash(struct sf_softc *sc, caddr_t mac, int prio) { uint32_t h; if (mac == NULL) return (EINVAL); h = ether_crc32_be(mac, ETHER_ADDR_LEN) >> 23; if (prio) { SF_SETBIT(sc, SF_RXFILT_HASH_BASE + SF_RXFILT_HASH_PRIOOFF + (SF_RXFILT_HASH_SKIP * (h >> 4)), (1 << (h & 0xF))); } else { SF_SETBIT(sc, SF_RXFILT_HASH_BASE + SF_RXFILT_HASH_ADDROFF + (SF_RXFILT_HASH_SKIP * (h >> 4)), (1 << (h & 0xF))); } return (0); } #ifdef notdef /* * Set a VLAN tag in the receive filter. 
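 * (Layout of the hash filter used by sf_sethash() above:
 * ether_crc32_be() >> 23 leaves a 9-bit index h in [0, 511]; h >> 4
 * picks one of 32 sixteen-bit filter words, spaced
 * SF_RXFILT_HASH_SKIP apart, and 1 << (h & 0xF) the bit within it --
 * 32 x 16 = 512 bits.  For example, h = 0x155 sets bit 5 of word 21.)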
*/ static int sf_setvlan(struct sf_softc *sc, int idx, uint32_t vlan) { if (idx < 0 || idx >> SF_RXFILT_HASH_CNT) return (EINVAL); csr_write_4(sc, SF_RXFILT_HASH_BASE + (idx * SF_RXFILT_HASH_SKIP) + SF_RXFILT_HASH_VLANOFF, vlan); return (0); } #endif static int sf_miibus_readreg(device_t dev, int phy, int reg) { struct sf_softc *sc; int i; uint32_t val = 0; sc = device_get_softc(dev); for (i = 0; i < SF_TIMEOUT; i++) { val = csr_read_4(sc, SF_PHY_REG(phy, reg)); if ((val & SF_MII_DATAVALID) != 0) break; } if (i == SF_TIMEOUT) return (0); val &= SF_MII_DATAPORT; if (val == 0xffff) return (0); return (val); } static int sf_miibus_writereg(device_t dev, int phy, int reg, int val) { struct sf_softc *sc; int i; int busy; sc = device_get_softc(dev); csr_write_4(sc, SF_PHY_REG(phy, reg), val); for (i = 0; i < SF_TIMEOUT; i++) { busy = csr_read_4(sc, SF_PHY_REG(phy, reg)); if ((busy & SF_MII_BUSY) == 0) break; } return (0); } static void sf_miibus_statchg(device_t dev) { struct sf_softc *sc; struct mii_data *mii; struct ifnet *ifp; uint32_t val; sc = device_get_softc(dev); mii = device_get_softc(sc->sf_miibus); ifp = sc->sf_ifp; if (mii == NULL || ifp == NULL || (ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) return; sc->sf_link = 0; if ((mii->mii_media_status & (IFM_ACTIVE | IFM_AVALID)) == (IFM_ACTIVE | IFM_AVALID)) { switch (IFM_SUBTYPE(mii->mii_media_active)) { case IFM_10_T: case IFM_100_TX: case IFM_100_FX: sc->sf_link = 1; break; } } if (sc->sf_link == 0) return; val = csr_read_4(sc, SF_MACCFG_1); val &= ~SF_MACCFG1_FULLDUPLEX; val &= ~(SF_MACCFG1_RX_FLOWENB | SF_MACCFG1_TX_FLOWENB); if ((IFM_OPTIONS(mii->mii_media_active) & IFM_FDX) != 0) { val |= SF_MACCFG1_FULLDUPLEX; csr_write_4(sc, SF_BKTOBKIPG, SF_IPGT_FDX); #ifdef notyet /* Configure flow-control bits. */ if ((IFM_OPTIONS(sc->sc_mii->mii_media_active) & IFM_ETH_RXPAUSE) != 0) val |= SF_MACCFG1_RX_FLOWENB; if ((IFM_OPTIONS(sc->sc_mii->mii_media_active) & IFM_ETH_TXPAUSE) != 0) val |= SF_MACCFG1_TX_FLOWENB; #endif } else csr_write_4(sc, SF_BKTOBKIPG, SF_IPGT_HDX); /* Make sure to reset MAC to take changes effect. */ csr_write_4(sc, SF_MACCFG_1, val | SF_MACCFG1_SOFTRESET); DELAY(1000); csr_write_4(sc, SF_MACCFG_1, val); val = csr_read_4(sc, SF_TIMER_CTL); if (IFM_SUBTYPE(mii->mii_media_active) == IFM_100_TX) val |= SF_TIMER_TIMES_TEN; else val &= ~SF_TIMER_TIMES_TEN; csr_write_4(sc, SF_TIMER_CTL, val); } static void sf_rxfilter(struct sf_softc *sc) { struct ifnet *ifp; int i; struct ifmultiaddr *ifma; uint8_t dummy[ETHER_ADDR_LEN] = { 0, 0, 0, 0, 0, 0 }; uint32_t rxfilt; ifp = sc->sf_ifp; /* First zot all the existing filters. */ for (i = 1; i < SF_RXFILT_PERFECT_CNT; i++) sf_setperf(sc, i, dummy); for (i = SF_RXFILT_HASH_BASE; i < (SF_RXFILT_HASH_MAX + 1); i += sizeof(uint32_t)) csr_write_4(sc, i, 0); rxfilt = csr_read_4(sc, SF_RXFILT); rxfilt &= ~(SF_RXFILT_PROMISC | SF_RXFILT_ALLMULTI | SF_RXFILT_BROAD); if ((ifp->if_flags & IFF_BROADCAST) != 0) rxfilt |= SF_RXFILT_BROAD; if ((ifp->if_flags & IFF_ALLMULTI) != 0 || (ifp->if_flags & IFF_PROMISC) != 0) { if ((ifp->if_flags & IFF_PROMISC) != 0) rxfilt |= SF_RXFILT_PROMISC; if ((ifp->if_flags & IFF_ALLMULTI) != 0) rxfilt |= SF_RXFILT_ALLMULTI; goto done; } /* Now program new ones. */ i = 1; /* XXX how do we maintain reverse semantics without impl */ if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; /* * Program the first 15 multicast groups * into the perfect filter. For all others, * use the hash table. 
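 * Only 15 of the 16 perfect-filter slots are usable for multicast
 * because the loops in sf_rxfilter() start at i = 1; slot 0 is left
 * alone, holding the station address set up when the interface is
 * initialized.
 */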
*/ if (i < SF_RXFILT_PERFECT_CNT) { sf_setperf(sc, i, LLADDR((struct sockaddr_dl *)ifma->ifma_addr)); i++; continue; } sf_sethash(sc, LLADDR((struct sockaddr_dl *)ifma->ifma_addr), 0); } if_maddr_runlock(ifp); done: csr_write_4(sc, SF_RXFILT, rxfilt); } /* * Set media options. */ static int sf_ifmedia_upd(struct ifnet *ifp) { struct sf_softc *sc; int error; sc = ifp->if_softc; SF_LOCK(sc); error = sf_ifmedia_upd_locked(ifp); SF_UNLOCK(sc); return (error); } static int sf_ifmedia_upd_locked(struct ifnet *ifp) { struct sf_softc *sc; struct mii_data *mii; struct mii_softc *miisc; sc = ifp->if_softc; mii = device_get_softc(sc->sf_miibus); LIST_FOREACH(miisc, &mii->mii_phys, mii_list) PHY_RESET(miisc); return (mii_mediachg(mii)); } /* * Report current media status. */ static void sf_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { struct sf_softc *sc; struct mii_data *mii; sc = ifp->if_softc; SF_LOCK(sc); if ((ifp->if_flags & IFF_UP) == 0) { SF_UNLOCK(sc); return; } mii = device_get_softc(sc->sf_miibus); mii_pollstat(mii); ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; SF_UNLOCK(sc); } static int sf_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct sf_softc *sc; struct ifreq *ifr; struct mii_data *mii; int error, mask; sc = ifp->if_softc; ifr = (struct ifreq *)data; error = 0; switch (command) { case SIOCSIFFLAGS: SF_LOCK(sc); if (ifp->if_flags & IFF_UP) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { if ((ifp->if_flags ^ sc->sf_if_flags) & (IFF_PROMISC | IFF_ALLMULTI)) sf_rxfilter(sc); } else { if (sc->sf_detach == 0) sf_init_locked(sc); } } else { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) sf_stop(sc); } sc->sf_if_flags = ifp->if_flags; SF_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: SF_LOCK(sc); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) sf_rxfilter(sc); SF_UNLOCK(sc); break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: mii = device_get_softc(sc->sf_miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); break; case SIOCSIFCAP: mask = ifr->ifr_reqcap ^ ifp->if_capenable; #ifdef DEVICE_POLLING if ((mask & IFCAP_POLLING) != 0) { if ((ifr->ifr_reqcap & IFCAP_POLLING) != 0) { error = ether_poll_register(sf_poll, ifp); if (error != 0) break; SF_LOCK(sc); /* Disable interrupts. */ csr_write_4(sc, SF_IMR, 0); ifp->if_capenable |= IFCAP_POLLING; SF_UNLOCK(sc); } else { error = ether_poll_deregister(ifp); /* Enable interrupts. 
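 * (Mirror image of the registration path above: SF_IMR stays masked
 * for as long as polling is active and is restored to SF_INTRS when
 * the interface leaves polling mode.)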
*/ SF_LOCK(sc); csr_write_4(sc, SF_IMR, SF_INTRS); ifp->if_capenable &= ~IFCAP_POLLING; SF_UNLOCK(sc); } } #endif /* DEVICE_POLLING */ if ((mask & IFCAP_TXCSUM) != 0) { if ((IFCAP_TXCSUM & ifp->if_capabilities) != 0) { SF_LOCK(sc); ifp->if_capenable ^= IFCAP_TXCSUM; if ((IFCAP_TXCSUM & ifp->if_capenable) != 0) { ifp->if_hwassist |= SF_CSUM_FEATURES; SF_SETBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_TXGFP_ENB); } else { ifp->if_hwassist &= ~SF_CSUM_FEATURES; SF_CLRBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_TXGFP_ENB); } SF_UNLOCK(sc); } } if ((mask & IFCAP_RXCSUM) != 0) { if ((IFCAP_RXCSUM & ifp->if_capabilities) != 0) { SF_LOCK(sc); ifp->if_capenable ^= IFCAP_RXCSUM; if ((IFCAP_RXCSUM & ifp->if_capenable) != 0) SF_SETBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_RXGFP_ENB); else SF_CLRBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_RXGFP_ENB); SF_UNLOCK(sc); } } break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } static void sf_reset(struct sf_softc *sc) { int i; csr_write_4(sc, SF_GEN_ETH_CTL, 0); SF_SETBIT(sc, SF_MACCFG_1, SF_MACCFG1_SOFTRESET); DELAY(1000); SF_CLRBIT(sc, SF_MACCFG_1, SF_MACCFG1_SOFTRESET); SF_SETBIT(sc, SF_PCI_DEVCFG, SF_PCIDEVCFG_RESET); for (i = 0; i < SF_TIMEOUT; i++) { DELAY(10); if (!(csr_read_4(sc, SF_PCI_DEVCFG) & SF_PCIDEVCFG_RESET)) break; } if (i == SF_TIMEOUT) device_printf(sc->sf_dev, "reset never completed!\n"); /* Wait a little while for the chip to get its brains in order. */ DELAY(1000); } /* * Probe for an Adaptec AIC-6915 chip. Check the PCI vendor and device * IDs against our list and return a device name if we find a match. * We also check the subsystem ID so that we can identify exactly which * NIC has been found, if possible. */ static int sf_probe(device_t dev) { struct sf_type *t; uint16_t vid; uint16_t did; uint16_t sdid; int i; vid = pci_get_vendor(dev); did = pci_get_device(dev); sdid = pci_get_subdevice(dev); t = sf_devs; for (i = 0; i < nitems(sf_devs); i++, t++) { if (vid == t->sf_vid && did == t->sf_did) { if (sdid == t->sf_sdid) { device_set_desc(dev, t->sf_sname); return (BUS_PROBE_DEFAULT); } } } if (vid == AD_VENDORID && did == AD_DEVICEID_STARFIRE) { /* unknown subdevice */ device_set_desc(dev, sf_devs[0].sf_name); return (BUS_PROBE_DEFAULT); } return (ENXIO); } /* * Attach the interface. Allocate softc structures, do ifmedia * setup and ethernet/BPF attach. */ static int sf_attach(device_t dev) { int i; struct sf_softc *sc; struct ifnet *ifp; uint32_t reg; int rid, error = 0; uint8_t eaddr[ETHER_ADDR_LEN]; sc = device_get_softc(dev); sc->sf_dev = dev; mtx_init(&sc->sf_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->sf_co, &sc->sf_mtx, 0); /* * Map control/status registers. */ pci_enable_busmaster(dev); /* * Prefer memory space register mapping over I/O space as the * hardware requires lots of register access to get various * producer/consumer index during Tx/Rx operation. However this * requires large memory space(512K) to map the entire register * space. 
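 * A 64-bit memory BAR consumes two configuration-space slots, so when
 * BAR 0 is 64-bit the I/O BAR lives at BAR 2 rather than BAR 1 --
 * hence the PCIM_BAR_MEM_64 test in the fallback path below.
 */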
*/ sc->sf_rid = PCIR_BAR(0); sc->sf_restype = SYS_RES_MEMORY; sc->sf_res = bus_alloc_resource_any(dev, sc->sf_restype, &sc->sf_rid, RF_ACTIVE); if (sc->sf_res == NULL) { reg = pci_read_config(dev, PCIR_BAR(0), 4); if ((reg & PCIM_BAR_MEM_64) == PCIM_BAR_MEM_64) sc->sf_rid = PCIR_BAR(2); else sc->sf_rid = PCIR_BAR(1); sc->sf_restype = SYS_RES_IOPORT; sc->sf_res = bus_alloc_resource_any(dev, sc->sf_restype, &sc->sf_rid, RF_ACTIVE); if (sc->sf_res == NULL) { device_printf(dev, "couldn't allocate resources\n"); mtx_destroy(&sc->sf_mtx); return (ENXIO); } } if (bootverbose) device_printf(dev, "using %s space register mapping\n", sc->sf_restype == SYS_RES_MEMORY ? "memory" : "I/O"); reg = pci_read_config(dev, PCIR_CACHELNSZ, 1); if (reg == 0) { /* * If cache line size is 0, MWI is not used at all, so set * reasonable default. AIC-6915 supports 0, 4, 8, 16, 32 * and 64. */ reg = 16; device_printf(dev, "setting PCI cache line size to %u\n", reg); pci_write_config(dev, PCIR_CACHELNSZ, reg, 1); } else { if (bootverbose) device_printf(dev, "PCI cache line size : %u\n", reg); } /* Enable MWI. */ reg = pci_read_config(dev, PCIR_COMMAND, 2); reg |= PCIM_CMD_MWRICEN; pci_write_config(dev, PCIR_COMMAND, reg, 2); /* Allocate interrupt. */ rid = 0; sc->sf_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->sf_irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), OID_AUTO, "stats", CTLTYPE_INT | CTLFLAG_RW, sc, 0, sf_sysctl_stats, "I", "Statistics"); SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), OID_AUTO, "int_mod", CTLTYPE_INT | CTLFLAG_RW, &sc->sf_int_mod, 0, sysctl_hw_sf_int_mod, "I", "sf interrupt moderation"); /* Pull in device tunables. */ sc->sf_int_mod = SF_IM_DEFAULT; error = resource_int_value(device_get_name(dev), device_get_unit(dev), "int_mod", &sc->sf_int_mod); if (error == 0) { if (sc->sf_int_mod < SF_IM_MIN || sc->sf_int_mod > SF_IM_MAX) { device_printf(dev, "int_mod value out of range; " "using default: %d\n", SF_IM_DEFAULT); sc->sf_int_mod = SF_IM_DEFAULT; } } /* Reset the adapter. */ sf_reset(sc); /* * Get station address from the EEPROM. */ for (i = 0; i < ETHER_ADDR_LEN; i++) eaddr[i] = sf_read_eeprom(sc, SF_EE_NODEADDR + ETHER_ADDR_LEN - i); /* Allocate DMA resources. */ if (sf_dma_alloc(sc) != 0) { error = ENOSPC; goto fail; } sc->sf_txthresh = SF_MIN_TX_THRESHOLD; ifp = sc->sf_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not allocate ifnet structure\n"); error = ENOSPC; goto fail; } /* Do MII setup. */ error = mii_attach(dev, &sc->sf_miibus, ifp, sf_ifmedia_upd, sf_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, 0); if (error != 0) { device_printf(dev, "attaching PHYs failed\n"); goto fail; } ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = sf_ioctl; ifp->if_start = sf_start; ifp->if_init = sf_init; IFQ_SET_MAXLEN(&ifp->if_snd, SF_TX_DLIST_CNT - 1); ifp->if_snd.ifq_drv_maxlen = SF_TX_DLIST_CNT - 1; IFQ_SET_READY(&ifp->if_snd); /* * With the help of firmware, AIC-6915 supports * Tx/Rx TCP/UDP checksum offload. */ ifp->if_hwassist = SF_CSUM_FEATURES; ifp->if_capabilities = IFCAP_HWCSUM; /* * Call MI attach routine. */ ether_ifattach(ifp, eaddr); /* VLAN capability setup. 
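 * IFCAP_VLAN_MTU merely advertises that full-sized frames carrying a
 * VLAN tag fit through the hardware; together with the if_hdrlen
 * update below, vlan(4) children need no extra MTU adjustment.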
*/ ifp->if_capabilities |= IFCAP_VLAN_MTU; ifp->if_capenable = ifp->if_capabilities; #ifdef DEVICE_POLLING ifp->if_capabilities |= IFCAP_POLLING; #endif /* * Tell the upper layer(s) we support long frames. * Must appear after the call to ether_ifattach() because * ether_ifattach() sets ifi_hdrlen to the default value. */ ifp->if_hdrlen = sizeof(struct ether_vlan_header); /* Hook interrupt last to avoid having to lock softc */ error = bus_setup_intr(dev, sc->sf_irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, sf_intr, sc, &sc->sf_intrhand); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } + gone_by_fcp101_dev(dev); + fail: if (error) sf_detach(dev); return (error); } /* * Shutdown hardware and free up resources. This can be called any * time after the mutex has been initialized. It is called in both * the error case in attach and the normal detach case so it needs * to be careful about only freeing resources that have actually been * allocated. */ static int sf_detach(device_t dev) { struct sf_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); ifp = sc->sf_ifp; #ifdef DEVICE_POLLING if (ifp != NULL && ifp->if_capenable & IFCAP_POLLING) ether_poll_deregister(ifp); #endif /* These should only be active if attach succeeded */ if (device_is_attached(dev)) { SF_LOCK(sc); sc->sf_detach = 1; sf_stop(sc); SF_UNLOCK(sc); callout_drain(&sc->sf_co); if (ifp != NULL) ether_ifdetach(ifp); } if (sc->sf_miibus) { device_delete_child(dev, sc->sf_miibus); sc->sf_miibus = NULL; } bus_generic_detach(dev); if (sc->sf_intrhand != NULL) bus_teardown_intr(dev, sc->sf_irq, sc->sf_intrhand); if (sc->sf_irq != NULL) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->sf_irq); if (sc->sf_res != NULL) bus_release_resource(dev, sc->sf_restype, sc->sf_rid, sc->sf_res); sf_dma_free(sc); if (ifp != NULL) if_free(ifp); mtx_destroy(&sc->sf_mtx); return (0); } struct sf_dmamap_arg { bus_addr_t sf_busaddr; }; static void sf_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error) { struct sf_dmamap_arg *ctx; if (error != 0) return; ctx = arg; ctx->sf_busaddr = segs[0].ds_addr; } static int sf_dma_alloc(struct sf_softc *sc) { struct sf_dmamap_arg ctx; struct sf_txdesc *txd; struct sf_rxdesc *rxd; bus_addr_t lowaddr; bus_addr_t rx_ring_end, rx_cring_end; bus_addr_t tx_ring_end, tx_cring_end; int error, i; lowaddr = BUS_SPACE_MAXADDR; again: /* Create parent DMA tag. */ error = bus_dma_tag_create( bus_get_dma_tag(sc->sf_dev), /* parent */ 1, 0, /* alignment, boundary */ lowaddr, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE_32BIT, /* maxsize */ 0, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_parent_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create parent DMA tag\n"); goto fail; } /* Create tag for Tx ring. */ error = bus_dma_tag_create(sc->sf_cdata.sf_parent_tag,/* parent */ SF_RING_ALIGN, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ SF_TX_DLIST_SIZE, /* maxsize */ 1, /* nsegments */ SF_TX_DLIST_SIZE, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_tx_ring_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create Tx ring DMA tag\n"); goto fail; } /* Create tag for Tx completion ring. 
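 * All of the ring and buffer tags below descend from sf_parent_tag,
 * which is the only tag that encodes the address limit; the 64-bit to
 * 32-bit fallback (the "again:" retry above) therefore just recreates
 * the tags with a smaller lowaddr instead of reapplying each
 * restriction by hand.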
*/ error = bus_dma_tag_create(sc->sf_cdata.sf_parent_tag,/* parent */ SF_RING_ALIGN, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ SF_TX_CLIST_SIZE, /* maxsize */ 1, /* nsegments */ SF_TX_CLIST_SIZE, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_tx_cring_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create Tx completion ring DMA tag\n"); goto fail; } /* Create tag for Rx ring. */ error = bus_dma_tag_create(sc->sf_cdata.sf_parent_tag,/* parent */ SF_RING_ALIGN, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ SF_RX_DLIST_SIZE, /* maxsize */ 1, /* nsegments */ SF_RX_DLIST_SIZE, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_rx_ring_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create Rx ring DMA tag\n"); goto fail; } /* Create tag for Rx completion ring. */ error = bus_dma_tag_create(sc->sf_cdata.sf_parent_tag,/* parent */ SF_RING_ALIGN, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ SF_RX_CLIST_SIZE, /* maxsize */ 1, /* nsegments */ SF_RX_CLIST_SIZE, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_rx_cring_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create Rx completion ring DMA tag\n"); goto fail; } /* Create tag for Tx buffers. */ error = bus_dma_tag_create(sc->sf_cdata.sf_parent_tag,/* parent */ 1, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ MCLBYTES * SF_MAXTXSEGS, /* maxsize */ SF_MAXTXSEGS, /* nsegments */ MCLBYTES, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_tx_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create Tx DMA tag\n"); goto fail; } /* Create tag for Rx buffers. */ error = bus_dma_tag_create(sc->sf_cdata.sf_parent_tag,/* parent */ SF_RX_ALIGN, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ MCLBYTES, /* maxsize */ 1, /* nsegments */ MCLBYTES, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sf_cdata.sf_rx_tag); if (error != 0) { device_printf(sc->sf_dev, "failed to create Rx DMA tag\n"); goto fail; } /* Allocate DMA'able memory and load the DMA map for Tx ring. */ error = bus_dmamem_alloc(sc->sf_cdata.sf_tx_ring_tag, (void **)&sc->sf_rdata.sf_tx_ring, BUS_DMA_WAITOK | BUS_DMA_COHERENT | BUS_DMA_ZERO, &sc->sf_cdata.sf_tx_ring_map); if (error != 0) { device_printf(sc->sf_dev, "failed to allocate DMA'able memory for Tx ring\n"); goto fail; } ctx.sf_busaddr = 0; error = bus_dmamap_load(sc->sf_cdata.sf_tx_ring_tag, sc->sf_cdata.sf_tx_ring_map, sc->sf_rdata.sf_tx_ring, SF_TX_DLIST_SIZE, sf_dmamap_cb, &ctx, 0); if (error != 0 || ctx.sf_busaddr == 0) { device_printf(sc->sf_dev, "failed to load DMA'able memory for Tx ring\n"); goto fail; } sc->sf_rdata.sf_tx_ring_paddr = ctx.sf_busaddr; /* * Allocate DMA'able memory and load the DMA map for Tx completion ring. 
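* As with the Tx descriptor ring above, sf_dmamap_cb() records the single segment's bus address in ctx; a zero sf_busaddr after bus_dmamap_load() is treated as a failed load.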
*/ error = bus_dmamem_alloc(sc->sf_cdata.sf_tx_cring_tag, (void **)&sc->sf_rdata.sf_tx_cring, BUS_DMA_WAITOK | BUS_DMA_COHERENT | BUS_DMA_ZERO, &sc->sf_cdata.sf_tx_cring_map); if (error != 0) { device_printf(sc->sf_dev, "failed to allocate DMA'able memory for " "Tx completion ring\n"); goto fail; } ctx.sf_busaddr = 0; error = bus_dmamap_load(sc->sf_cdata.sf_tx_cring_tag, sc->sf_cdata.sf_tx_cring_map, sc->sf_rdata.sf_tx_cring, SF_TX_CLIST_SIZE, sf_dmamap_cb, &ctx, 0); if (error != 0 || ctx.sf_busaddr == 0) { device_printf(sc->sf_dev, "failed to load DMA'able memory for Tx completion ring\n"); goto fail; } sc->sf_rdata.sf_tx_cring_paddr = ctx.sf_busaddr; /* Allocate DMA'able memory and load the DMA map for Rx ring. */ error = bus_dmamem_alloc(sc->sf_cdata.sf_rx_ring_tag, (void **)&sc->sf_rdata.sf_rx_ring, BUS_DMA_WAITOK | BUS_DMA_COHERENT | BUS_DMA_ZERO, &sc->sf_cdata.sf_rx_ring_map); if (error != 0) { device_printf(sc->sf_dev, "failed to allocate DMA'able memory for Rx ring\n"); goto fail; } ctx.sf_busaddr = 0; error = bus_dmamap_load(sc->sf_cdata.sf_rx_ring_tag, sc->sf_cdata.sf_rx_ring_map, sc->sf_rdata.sf_rx_ring, SF_RX_DLIST_SIZE, sf_dmamap_cb, &ctx, 0); if (error != 0 || ctx.sf_busaddr == 0) { device_printf(sc->sf_dev, "failed to load DMA'able memory for Rx ring\n"); goto fail; } sc->sf_rdata.sf_rx_ring_paddr = ctx.sf_busaddr; /* * Allocate DMA'able memory and load the DMA map for Rx completion ring. */ error = bus_dmamem_alloc(sc->sf_cdata.sf_rx_cring_tag, (void **)&sc->sf_rdata.sf_rx_cring, BUS_DMA_WAITOK | BUS_DMA_COHERENT | BUS_DMA_ZERO, &sc->sf_cdata.sf_rx_cring_map); if (error != 0) { device_printf(sc->sf_dev, "failed to allocate DMA'able memory for " "Rx completion ring\n"); goto fail; } ctx.sf_busaddr = 0; error = bus_dmamap_load(sc->sf_cdata.sf_rx_cring_tag, sc->sf_cdata.sf_rx_cring_map, sc->sf_rdata.sf_rx_cring, SF_RX_CLIST_SIZE, sf_dmamap_cb, &ctx, 0); if (error != 0 || ctx.sf_busaddr == 0) { device_printf(sc->sf_dev, "failed to load DMA'able memory for Rx completion ring\n"); goto fail; } sc->sf_rdata.sf_rx_cring_paddr = ctx.sf_busaddr; /* * Tx descriptor ring and Tx completion ring should be addressed in * the same 4GB space. The same rule applies to the Rx ring and Rx * completion ring. Unfortunately there is no way to specify this * boundary restriction with bus_dma(9). So just try to allocate * without the restriction and check that the restriction was satisfied. * If not, fall back to 32bit DMA addressing mode, which always * guarantees the restriction. */ tx_ring_end = sc->sf_rdata.sf_tx_ring_paddr + SF_TX_DLIST_SIZE; tx_cring_end = sc->sf_rdata.sf_tx_cring_paddr + SF_TX_CLIST_SIZE; rx_ring_end = sc->sf_rdata.sf_rx_ring_paddr + SF_RX_DLIST_SIZE; rx_cring_end = sc->sf_rdata.sf_rx_cring_paddr + SF_RX_CLIST_SIZE; if ((SF_ADDR_HI(sc->sf_rdata.sf_tx_ring_paddr) != SF_ADDR_HI(tx_cring_end)) || (SF_ADDR_HI(sc->sf_rdata.sf_tx_cring_paddr) != SF_ADDR_HI(tx_ring_end)) || (SF_ADDR_HI(sc->sf_rdata.sf_rx_ring_paddr) != SF_ADDR_HI(rx_cring_end)) || (SF_ADDR_HI(sc->sf_rdata.sf_rx_cring_paddr) != SF_ADDR_HI(rx_ring_end))) { device_printf(sc->sf_dev, "switching to 32bit DMA mode\n"); sf_dma_free(sc); /* Limit DMA address space to 32bit and try again. */ lowaddr = BUS_SPACE_MAXADDR_32BIT; goto again; } /* Create DMA maps for Tx buffers.
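* One map is created per Tx descriptor slot so that each completed transmission can be synced and unloaded independently in sf_txeof().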
*/ for (i = 0; i < SF_TX_DLIST_CNT; i++) { txd = &sc->sf_cdata.sf_txdesc[i]; txd->tx_m = NULL; txd->ndesc = 0; txd->tx_dmamap = NULL; error = bus_dmamap_create(sc->sf_cdata.sf_tx_tag, 0, &txd->tx_dmamap); if (error != 0) { device_printf(sc->sf_dev, "failed to create Tx dmamap\n"); goto fail; } } /* Create DMA maps for Rx buffers. */ if ((error = bus_dmamap_create(sc->sf_cdata.sf_rx_tag, 0, &sc->sf_cdata.sf_rx_sparemap)) != 0) { device_printf(sc->sf_dev, "failed to create spare Rx dmamap\n"); goto fail; } for (i = 0; i < SF_RX_DLIST_CNT; i++) { rxd = &sc->sf_cdata.sf_rxdesc[i]; rxd->rx_m = NULL; rxd->rx_dmamap = NULL; error = bus_dmamap_create(sc->sf_cdata.sf_rx_tag, 0, &rxd->rx_dmamap); if (error != 0) { device_printf(sc->sf_dev, "failed to create Rx dmamap\n"); goto fail; } } fail: return (error); } static void sf_dma_free(struct sf_softc *sc) { struct sf_txdesc *txd; struct sf_rxdesc *rxd; int i; /* Tx ring. */ if (sc->sf_cdata.sf_tx_ring_tag) { if (sc->sf_rdata.sf_tx_ring_paddr) bus_dmamap_unload(sc->sf_cdata.sf_tx_ring_tag, sc->sf_cdata.sf_tx_ring_map); if (sc->sf_rdata.sf_tx_ring) bus_dmamem_free(sc->sf_cdata.sf_tx_ring_tag, sc->sf_rdata.sf_tx_ring, sc->sf_cdata.sf_tx_ring_map); sc->sf_rdata.sf_tx_ring = NULL; sc->sf_rdata.sf_tx_ring_paddr = 0; bus_dma_tag_destroy(sc->sf_cdata.sf_tx_ring_tag); sc->sf_cdata.sf_tx_ring_tag = NULL; } /* Tx completion ring. */ if (sc->sf_cdata.sf_tx_cring_tag) { if (sc->sf_rdata.sf_tx_cring_paddr) bus_dmamap_unload(sc->sf_cdata.sf_tx_cring_tag, sc->sf_cdata.sf_tx_cring_map); if (sc->sf_rdata.sf_tx_cring) bus_dmamem_free(sc->sf_cdata.sf_tx_cring_tag, sc->sf_rdata.sf_tx_cring, sc->sf_cdata.sf_tx_cring_map); sc->sf_rdata.sf_tx_cring = NULL; sc->sf_rdata.sf_tx_cring_paddr = 0; bus_dma_tag_destroy(sc->sf_cdata.sf_tx_cring_tag); sc->sf_cdata.sf_tx_cring_tag = NULL; } /* Rx ring. */ if (sc->sf_cdata.sf_rx_ring_tag) { if (sc->sf_rdata.sf_rx_ring_paddr) bus_dmamap_unload(sc->sf_cdata.sf_rx_ring_tag, sc->sf_cdata.sf_rx_ring_map); if (sc->sf_rdata.sf_rx_ring) bus_dmamem_free(sc->sf_cdata.sf_rx_ring_tag, sc->sf_rdata.sf_rx_ring, sc->sf_cdata.sf_rx_ring_map); sc->sf_rdata.sf_rx_ring = NULL; sc->sf_rdata.sf_rx_ring_paddr = 0; bus_dma_tag_destroy(sc->sf_cdata.sf_rx_ring_tag); sc->sf_cdata.sf_rx_ring_tag = NULL; } /* Rx completion ring. */ if (sc->sf_cdata.sf_rx_cring_tag) { if (sc->sf_rdata.sf_rx_cring_paddr) bus_dmamap_unload(sc->sf_cdata.sf_rx_cring_tag, sc->sf_cdata.sf_rx_cring_map); if (sc->sf_rdata.sf_rx_cring) bus_dmamem_free(sc->sf_cdata.sf_rx_cring_tag, sc->sf_rdata.sf_rx_cring, sc->sf_cdata.sf_rx_cring_map); sc->sf_rdata.sf_rx_cring = NULL; sc->sf_rdata.sf_rx_cring_paddr = 0; bus_dma_tag_destroy(sc->sf_cdata.sf_rx_cring_tag); sc->sf_cdata.sf_rx_cring_tag = NULL; } /* Tx buffers. */ if (sc->sf_cdata.sf_tx_tag) { for (i = 0; i < SF_TX_DLIST_CNT; i++) { txd = &sc->sf_cdata.sf_txdesc[i]; if (txd->tx_dmamap) { bus_dmamap_destroy(sc->sf_cdata.sf_tx_tag, txd->tx_dmamap); txd->tx_dmamap = NULL; } } bus_dma_tag_destroy(sc->sf_cdata.sf_tx_tag); sc->sf_cdata.sf_tx_tag = NULL; } /* Rx buffers. 
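* Destroy the per-descriptor maps and the spare map before destroying the tag itself.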
*/ if (sc->sf_cdata.sf_rx_tag) { for (i = 0; i < SF_RX_DLIST_CNT; i++) { rxd = &sc->sf_cdata.sf_rxdesc[i]; if (rxd->rx_dmamap) { bus_dmamap_destroy(sc->sf_cdata.sf_rx_tag, rxd->rx_dmamap); rxd->rx_dmamap = NULL; } } if (sc->sf_cdata.sf_rx_sparemap) { bus_dmamap_destroy(sc->sf_cdata.sf_rx_tag, sc->sf_cdata.sf_rx_sparemap); sc->sf_cdata.sf_rx_sparemap = 0; } bus_dma_tag_destroy(sc->sf_cdata.sf_rx_tag); sc->sf_cdata.sf_rx_tag = NULL; } if (sc->sf_cdata.sf_parent_tag) { bus_dma_tag_destroy(sc->sf_cdata.sf_parent_tag); sc->sf_cdata.sf_parent_tag = NULL; } } static int sf_init_rx_ring(struct sf_softc *sc) { struct sf_ring_data *rd; int i; sc->sf_cdata.sf_rxc_cons = 0; rd = &sc->sf_rdata; bzero(rd->sf_rx_ring, SF_RX_DLIST_SIZE); bzero(rd->sf_rx_cring, SF_RX_CLIST_SIZE); for (i = 0; i < SF_RX_DLIST_CNT; i++) { if (sf_newbuf(sc, i) != 0) return (ENOBUFS); } bus_dmamap_sync(sc->sf_cdata.sf_rx_cring_tag, sc->sf_cdata.sf_rx_cring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->sf_cdata.sf_rx_ring_tag, sc->sf_cdata.sf_rx_ring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); return (0); } static void sf_init_tx_ring(struct sf_softc *sc) { struct sf_ring_data *rd; int i; sc->sf_cdata.sf_tx_prod = 0; sc->sf_cdata.sf_tx_cnt = 0; sc->sf_cdata.sf_txc_cons = 0; rd = &sc->sf_rdata; bzero(rd->sf_tx_ring, SF_TX_DLIST_SIZE); bzero(rd->sf_tx_cring, SF_TX_CLIST_SIZE); for (i = 0; i < SF_TX_DLIST_CNT; i++) { rd->sf_tx_ring[i].sf_tx_ctrl = htole32(SF_TX_DESC_ID); sc->sf_cdata.sf_txdesc[i].tx_m = NULL; sc->sf_cdata.sf_txdesc[i].ndesc = 0; } rd->sf_tx_ring[i - 1].sf_tx_ctrl |= htole32(SF_TX_DESC_END); bus_dmamap_sync(sc->sf_cdata.sf_tx_ring_tag, sc->sf_cdata.sf_tx_ring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->sf_cdata.sf_tx_cring_tag, sc->sf_cdata.sf_tx_cring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); } /* * Initialize an RX descriptor and attach an MBUF cluster. */ static int sf_newbuf(struct sf_softc *sc, int idx) { struct sf_rx_rdesc *desc; struct sf_rxdesc *rxd; struct mbuf *m; bus_dma_segment_t segs[1]; bus_dmamap_t map; int nsegs; m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (m == NULL) return (ENOBUFS); m->m_len = m->m_pkthdr.len = MCLBYTES; m_adj(m, sizeof(uint32_t)); if (bus_dmamap_load_mbuf_sg(sc->sf_cdata.sf_rx_tag, sc->sf_cdata.sf_rx_sparemap, m, segs, &nsegs, 0) != 0) { m_freem(m); return (ENOBUFS); } KASSERT(nsegs == 1, ("%s: %d segments returned!", __func__, nsegs)); rxd = &sc->sf_cdata.sf_rxdesc[idx]; if (rxd->rx_m != NULL) { bus_dmamap_sync(sc->sf_cdata.sf_rx_tag, rxd->rx_dmamap, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->sf_cdata.sf_rx_tag, rxd->rx_dmamap); } map = rxd->rx_dmamap; rxd->rx_dmamap = sc->sf_cdata.sf_rx_sparemap; sc->sf_cdata.sf_rx_sparemap = map; bus_dmamap_sync(sc->sf_cdata.sf_rx_tag, rxd->rx_dmamap, BUS_DMASYNC_PREREAD); rxd->rx_m = m; desc = &sc->sf_rdata.sf_rx_ring[idx]; desc->sf_addr = htole64(segs[0].ds_addr); return (0); } #ifndef __NO_STRICT_ALIGNMENT static __inline void sf_fixup_rx(struct mbuf *m) { int i; uint16_t *src, *dst; src = mtod(m, uint16_t *); dst = src - 1; for (i = 0; i < (m->m_len / sizeof(uint16_t) + 1); i++) *dst++ = *src++; m->m_data -= ETHER_ALIGN; } #endif /* * The starfire is programmed to use 'normal' mode for packet reception, * which means we use the consumer/producer model for both the buffer * descriptor queue and the completion descriptor queue.
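* (The driver posts empty buffers on the descriptor queue and the chip hands back filled ones on the completion queue.)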
The only problem * with this is that it involves a lot of register accesses: we have to * read the RX completion consumer and producer indexes and the RX buffer * producer index, plus the RX completion consumer and RX buffer producer * indexes have to be updated. It would have been easier if Adaptec had * put each index in a separate register, especially given that the damn * NIC has a 512K register space. * * In spite of all the lovely features that Adaptec crammed into the 6915, * it is marred by one truly stupid design flaw, which is that receive * buffer addresses must be aligned on a longword boundary. This forces * the packet payload to be unaligned, which is suboptimal on the x86 and * completely unusable on the Alpha. Our only recourse is to copy received * packets into properly aligned buffers before handing them off. */ static int sf_rxeof(struct sf_softc *sc) { struct mbuf *m; struct ifnet *ifp; struct sf_rxdesc *rxd; struct sf_rx_rcdesc *cur_cmp; int cons, eidx, prog, rx_npkts; uint32_t status, status2; SF_LOCK_ASSERT(sc); ifp = sc->sf_ifp; rx_npkts = 0; bus_dmamap_sync(sc->sf_cdata.sf_rx_ring_tag, sc->sf_cdata.sf_rx_ring_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); bus_dmamap_sync(sc->sf_cdata.sf_rx_cring_tag, sc->sf_cdata.sf_rx_cring_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); /* * To reduce register accesses, directly read the Receive completion * queue entries. */ eidx = 0; prog = 0; for (cons = sc->sf_cdata.sf_rxc_cons; (ifp->if_drv_flags & IFF_DRV_RUNNING) != 0; SF_INC(cons, SF_RX_CLIST_CNT)) { cur_cmp = &sc->sf_rdata.sf_rx_cring[cons]; status = le32toh(cur_cmp->sf_rx_status1); if (status == 0) break; #ifdef DEVICE_POLLING if ((ifp->if_capenable & IFCAP_POLLING) != 0) { if (sc->rxcycles <= 0) break; sc->rxcycles--; } #endif prog++; eidx = (status & SF_RX_CMPDESC_EIDX) >> 16; rxd = &sc->sf_cdata.sf_rxdesc[eidx]; m = rxd->rx_m; /* * Note, IFCOUNTER_IPACKETS and IFCOUNTER_IERRORS * are handled in sf_stats_update(). */ if ((status & SF_RXSTAT1_OK) == 0) { cur_cmp->sf_rx_status1 = 0; continue; } if (sf_newbuf(sc, eidx) != 0) { if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); cur_cmp->sf_rx_status1 = 0; continue; } /* AIC-6915 supports TCP/UDP checksum offload. */ if ((ifp->if_capenable & IFCAP_RXCSUM) != 0) { status2 = le32toh(cur_cmp->sf_rx_status2); /* * Sometimes AIC-6915 generates an interrupt to * warn of an RxGFP stall with the bad checksum bit * set in the status word. I'm not sure what condition * triggers it, but the received packet's checksum * was correct even though AIC-6915 does not * agree on this. This may be an indication of * a firmware bug. To fix the issue, do not rely * on the bad checksum bit in the status word and let * the upper layer verify the integrity of the received * frame. * Another nice feature of AIC-6915 is hardware * assistance of checksum calculation by * providing a partial checksum value for the received * frame. The partial checksum value can be used * to accelerate checksum computation for * fragmented TCP/UDP packets. The upper network * stack already takes advantage of the partial * checksum value in the IP reassembly stage. But * I'm not sure of the correctness of the partial * hardware checksum assistance, as frequent * RxGFP stalls are seen on non-fragmented * frames. Due to the complexity * of the checksum computation code in the firmware it's * possible to see another bug in RxGFP, so * ignore checksum assistance for fragmented * frames. This can be changed in the future.
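* In short: for non-fragmented TCP/UDP frames the full-checksum-OK bit is trusted below, while the partial checksum path is compiled in only under SF_PARTIAL_CSUM_SUPPORT.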
*/ if ((status2 & SF_RXSTAT2_FRAG) == 0) { if ((status2 & (SF_RXSTAT2_TCP | SF_RXSTAT2_UDP)) != 0) { if ((status2 & SF_RXSTAT2_CSUM_OK)) { m->m_pkthdr.csum_flags = CSUM_DATA_VALID | CSUM_PSEUDO_HDR; m->m_pkthdr.csum_data = 0xffff; } } } #ifdef SF_PARTIAL_CSUM_SUPPORT else if ((status2 & SF_RXSTAT2_FRAG) != 0) { if ((status2 & (SF_RXSTAT2_TCP | SF_RXSTAT2_UDP)) != 0) { if ((status2 & SF_RXSTAT2_PCSUM_OK)) { m->m_pkthdr.csum_flags = CSUM_DATA_VALID; m->m_pkthdr.csum_data = (status & SF_RX_CMPDESC_CSUM2); } } } #endif } m->m_pkthdr.len = m->m_len = status & SF_RX_CMPDESC_LEN; #ifndef __NO_STRICT_ALIGNMENT sf_fixup_rx(m); #endif m->m_pkthdr.rcvif = ifp; SF_UNLOCK(sc); (*ifp->if_input)(ifp, m); SF_LOCK(sc); rx_npkts++; /* Clear completion status. */ cur_cmp->sf_rx_status1 = 0; } if (prog > 0) { sc->sf_cdata.sf_rxc_cons = cons; bus_dmamap_sync(sc->sf_cdata.sf_rx_ring_tag, sc->sf_cdata.sf_rx_ring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->sf_cdata.sf_rx_cring_tag, sc->sf_cdata.sf_rx_cring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* Update Rx completion Q1 consumer index. */ csr_write_4(sc, SF_CQ_CONSIDX, (csr_read_4(sc, SF_CQ_CONSIDX) & ~SF_CQ_CONSIDX_RXQ1) | (cons & SF_CQ_CONSIDX_RXQ1)); /* Update Rx descriptor Q1 ptr. */ csr_write_4(sc, SF_RXDQ_PTR_Q1, (csr_read_4(sc, SF_RXDQ_PTR_Q1) & ~SF_RXDQ_PRODIDX) | (eidx & SF_RXDQ_PRODIDX)); } return (rx_npkts); } /* * Read the transmit status from the completion queue and release * mbufs. Note that the buffer descriptor index in the completion * descriptor is an offset from the start of the transmit buffer * descriptor list in bytes. This is important because the manual * gives the impression that it should match the producer/consumer * index, which is the offset in 8 byte blocks. */ static void sf_txeof(struct sf_softc *sc) { struct sf_txdesc *txd; struct sf_tx_rcdesc *cur_cmp; struct ifnet *ifp; uint32_t status; int cons, idx, prod; SF_LOCK_ASSERT(sc); ifp = sc->sf_ifp; bus_dmamap_sync(sc->sf_cdata.sf_tx_cring_tag, sc->sf_cdata.sf_tx_cring_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); cons = sc->sf_cdata.sf_txc_cons; prod = (csr_read_4(sc, SF_CQ_PRODIDX) & SF_TXDQ_PRODIDX_HIPRIO) >> 16; if (prod == cons) return; for (; cons != prod; SF_INC(cons, SF_TX_CLIST_CNT)) { cur_cmp = &sc->sf_rdata.sf_tx_cring[cons]; status = le32toh(cur_cmp->sf_tx_status1); if (status == 0) break; switch (status & SF_TX_CMPDESC_TYPE) { case SF_TXCMPTYPE_TX: /* Tx complete entry. */ break; case SF_TXCMPTYPE_DMA: /* DMA complete entry. */ idx = status & SF_TX_CMPDESC_IDX; idx = idx / sizeof(struct sf_tx_rdesc); /* * We don't need to check Tx status here. * SF_ISR_TX_LOFIFO intr would handle this. * Note, IFCOUNTER_OPACKETS, IFCOUNTER_COLLISIONS * and IFCOUNTER_OERRORS are handled in * sf_stats_update(). */ txd = &sc->sf_cdata.sf_txdesc[idx]; if (txd->tx_m != NULL) { bus_dmamap_sync(sc->sf_cdata.sf_tx_tag, txd->tx_dmamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->sf_cdata.sf_tx_tag, txd->tx_dmamap); m_freem(txd->tx_m); txd->tx_m = NULL; } sc->sf_cdata.sf_tx_cnt -= txd->ndesc; KASSERT(sc->sf_cdata.sf_tx_cnt >= 0, ("%s: Active Tx desc counter was garbled\n", __func__)); txd->ndesc = 0; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; break; default: /* This should not happen.
*/ device_printf(sc->sf_dev, "unknown Tx completion type : 0x%08x : %d : %d\n", status, cons, prod); break; } cur_cmp->sf_tx_status1 = 0; } sc->sf_cdata.sf_txc_cons = cons; bus_dmamap_sync(sc->sf_cdata.sf_tx_cring_tag, sc->sf_cdata.sf_tx_cring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); if (sc->sf_cdata.sf_tx_cnt == 0) sc->sf_watchdog_timer = 0; /* Update Tx completion consumer index. */ csr_write_4(sc, SF_CQ_CONSIDX, (csr_read_4(sc, SF_CQ_CONSIDX) & 0xffff) | ((cons << 16) & 0xffff0000)); } static void sf_txthresh_adjust(struct sf_softc *sc) { uint32_t txfctl; device_printf(sc->sf_dev, "Tx underrun -- "); if (sc->sf_txthresh < SF_MAX_TX_THRESHOLD) { txfctl = csr_read_4(sc, SF_TX_FRAMCTL); /* Increase Tx threshold by 256 bytes. */ sc->sf_txthresh += 16; if (sc->sf_txthresh > SF_MAX_TX_THRESHOLD) sc->sf_txthresh = SF_MAX_TX_THRESHOLD; txfctl &= ~SF_TXFRMCTL_TXTHRESH; txfctl |= sc->sf_txthresh; printf("increasing Tx threshold to %d bytes\n", sc->sf_txthresh * SF_TX_THRESHOLD_UNIT); csr_write_4(sc, SF_TX_FRAMCTL, txfctl); } else printf("\n"); } #ifdef DEVICE_POLLING static int sf_poll(struct ifnet *ifp, enum poll_cmd cmd, int count) { struct sf_softc *sc; uint32_t status; int rx_npkts; sc = ifp->if_softc; rx_npkts = 0; SF_LOCK(sc); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { SF_UNLOCK(sc); return (rx_npkts); } sc->rxcycles = count; rx_npkts = sf_rxeof(sc); sf_txeof(sc); if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) sf_start_locked(ifp); if (cmd == POLL_AND_CHECK_STATUS) { /* Reading the ISR register clears all interrupts. */ status = csr_read_4(sc, SF_ISR); if ((status & SF_ISR_ABNORMALINTR) != 0) { if ((status & SF_ISR_STATSOFLOW) != 0) sf_stats_update(sc); else if ((status & SF_ISR_TX_LOFIFO) != 0) sf_txthresh_adjust(sc); else if ((status & SF_ISR_DMAERR) != 0) { device_printf(sc->sf_dev, "DMA error, resetting\n"); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; sf_init_locked(sc); SF_UNLOCK(sc); return (rx_npkts); } else if ((status & SF_ISR_NO_TX_CSUM) != 0) { sc->sf_statistics.sf_tx_gfp_stall++; #ifdef SF_GFP_DEBUG device_printf(sc->sf_dev, "TxGFP is not responding!\n"); #endif } else if ((status & SF_ISR_RXGFP_NORESP) != 0) { sc->sf_statistics.sf_rx_gfp_stall++; #ifdef SF_GFP_DEBUG device_printf(sc->sf_dev, "RxGFP is not responding!\n"); #endif } } } SF_UNLOCK(sc); return (rx_npkts); } #endif /* DEVICE_POLLING */ static void sf_intr(void *arg) { struct sf_softc *sc; struct ifnet *ifp; uint32_t status; int cnt; sc = (struct sf_softc *)arg; SF_LOCK(sc); if (sc->sf_suspended != 0) goto done_locked; /* Reading the ISR register clears all interrupts. */ status = csr_read_4(sc, SF_ISR); if (status == 0 || status == 0xffffffff || (status & SF_ISR_PCIINT_ASSERTED) == 0) goto done_locked; ifp = sc->sf_ifp; #ifdef DEVICE_POLLING if ((ifp->if_capenable & IFCAP_POLLING) != 0) goto done_locked; #endif /* Disable interrupts.
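* Interrupts stay masked for the duration of the service loop below, which is bounded to 32 iterations; they are re-enabled at the bottom only if the interface is still running.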
*/ csr_write_4(sc, SF_IMR, 0x00000000); for (cnt = 32; (status & SF_INTRS) != 0;) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) break; if ((status & SF_ISR_RXDQ1_DMADONE) != 0) sf_rxeof(sc); if ((status & (SF_ISR_TX_TXDONE | SF_ISR_TX_DMADONE | SF_ISR_TX_QUEUEDONE)) != 0) sf_txeof(sc); if ((status & SF_ISR_ABNORMALINTR) != 0) { if ((status & SF_ISR_STATSOFLOW) != 0) sf_stats_update(sc); else if ((status & SF_ISR_TX_LOFIFO) != 0) sf_txthresh_adjust(sc); else if ((status & SF_ISR_DMAERR) != 0) { device_printf(sc->sf_dev, "DMA error, resetting\n"); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; sf_init_locked(sc); SF_UNLOCK(sc); return; } else if ((status & SF_ISR_NO_TX_CSUM) != 0) { sc->sf_statistics.sf_tx_gfp_stall++; #ifdef SF_GFP_DEBUG device_printf(sc->sf_dev, "TxGFP is not responding!\n"); #endif } else if ((status & SF_ISR_RXGFP_NORESP) != 0) { sc->sf_statistics.sf_rx_gfp_stall++; #ifdef SF_GFP_DEBUG device_printf(sc->sf_dev, "RxGFP is not responding!\n"); #endif } } if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) sf_start_locked(ifp); if (--cnt <= 0) break; /* Reading the ISR register clears all interrupts. */ status = csr_read_4(sc, SF_ISR); } if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { /* Re-enable interrupts. */ csr_write_4(sc, SF_IMR, SF_INTRS); } done_locked: SF_UNLOCK(sc); } static void sf_download_fw(struct sf_softc *sc) { uint32_t gfpinst; int i, ndx; uint8_t *p; /* * An FP instruction is composed of 48 bits, so we have to * write it in two parts. */ p = txfwdata; ndx = 0; for (i = 0; i < sizeof(txfwdata) / SF_GFP_INST_BYTES; i++) { gfpinst = p[2] << 24 | p[3] << 16 | p[4] << 8 | p[5]; csr_write_4(sc, SF_TXGFP_MEM_BASE + ndx * 4, gfpinst); gfpinst = p[0] << 8 | p[1]; csr_write_4(sc, SF_TXGFP_MEM_BASE + (ndx + 1) * 4, gfpinst); p += SF_GFP_INST_BYTES; ndx += 2; } if (bootverbose) device_printf(sc->sf_dev, "%d Tx instructions downloaded\n", i); p = rxfwdata; ndx = 0; for (i = 0; i < sizeof(rxfwdata) / SF_GFP_INST_BYTES; i++) { gfpinst = p[2] << 24 | p[3] << 16 | p[4] << 8 | p[5]; csr_write_4(sc, SF_RXGFP_MEM_BASE + (ndx * 4), gfpinst); gfpinst = p[0] << 8 | p[1]; csr_write_4(sc, SF_RXGFP_MEM_BASE + (ndx + 1) * 4, gfpinst); p += SF_GFP_INST_BYTES; ndx += 2; } if (bootverbose) device_printf(sc->sf_dev, "%d Rx instructions downloaded\n", i); } static void sf_init(void *xsc) { struct sf_softc *sc; sc = (struct sf_softc *)xsc; SF_LOCK(sc); sf_init_locked(sc); SF_UNLOCK(sc); } static void sf_init_locked(struct sf_softc *sc) { struct ifnet *ifp; uint8_t eaddr[ETHER_ADDR_LEN]; bus_addr_t addr; int i; SF_LOCK_ASSERT(sc); ifp = sc->sf_ifp; if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) return; sf_stop(sc); /* Reset the hardware to a known state. */ sf_reset(sc); /* Init all the receive filter registers */ for (i = SF_RXFILT_PERFECT_BASE; i < (SF_RXFILT_HASH_MAX + 1); i += sizeof(uint32_t)) csr_write_4(sc, i, 0); /* Empty stats counter registers. */ for (i = SF_STATS_BASE; i < (SF_STATS_END + 1); i += sizeof(uint32_t)) csr_write_4(sc, i, 0); /* Init our MAC address. */ bcopy(IF_LLADDR(sc->sf_ifp), eaddr, sizeof(eaddr)); csr_write_4(sc, SF_PAR0, eaddr[2] << 24 | eaddr[3] << 16 | eaddr[4] << 8 | eaddr[5]); csr_write_4(sc, SF_PAR1, eaddr[0] << 8 | eaddr[1]); sf_setperf(sc, 0, eaddr); if (sf_init_rx_ring(sc) == ENOBUFS) { device_printf(sc->sf_dev, "initialization failed: no memory for rx buffers\n"); sf_stop(sc); return; } sf_init_tx_ring(sc); /* * 16 perfect address filtering. * Hash only the multicast destination address, and accept matching * frames regardless of VLAN ID.
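* That is, SF_PERFMODE_NORMAL selects the 16-entry perfect filter for unicast, and SF_HASHMODE_ANYVLAN applies the hash filter to multicast only, ignoring VLAN IDs.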
*/ csr_write_4(sc, SF_RXFILT, SF_PERFMODE_NORMAL | SF_HASHMODE_ANYVLAN); /* * Set Rx filter. */ sf_rxfilter(sc); /* Init the completion queue indexes. */ csr_write_4(sc, SF_CQ_CONSIDX, 0); csr_write_4(sc, SF_CQ_PRODIDX, 0); /* Init the RX completion queue. */ addr = sc->sf_rdata.sf_rx_cring_paddr; csr_write_4(sc, SF_CQ_ADDR_HI, SF_ADDR_HI(addr)); csr_write_4(sc, SF_RXCQ_CTL_1, SF_ADDR_LO(addr) & SF_RXCQ_ADDR); if (SF_ADDR_HI(addr) != 0) SF_SETBIT(sc, SF_RXCQ_CTL_1, SF_RXCQ_USE_64BIT); /* Set RX completion queue type 2. */ SF_SETBIT(sc, SF_RXCQ_CTL_1, SF_RXCQTYPE_2); csr_write_4(sc, SF_RXCQ_CTL_2, 0); /* * Init RX DMA control. * default RxHighPriority Threshold, * default RxBurstSize, 128 bytes. */ SF_SETBIT(sc, SF_RXDMA_CTL, SF_RXDMA_REPORTBADPKTS | (SF_RXDMA_HIGHPRIO_THRESH << 8) | SF_RXDMA_BURST); /* Init the RX buffer descriptor queue. */ addr = sc->sf_rdata.sf_rx_ring_paddr; csr_write_4(sc, SF_RXDQ_ADDR_HI, SF_ADDR_HI(addr)); csr_write_4(sc, SF_RXDQ_ADDR_Q1, SF_ADDR_LO(addr)); /* Set RX queue buffer length. */ csr_write_4(sc, SF_RXDQ_CTL_1, ((MCLBYTES - sizeof(uint32_t)) << 16) | SF_RXDQCTL_64BITBADDR | SF_RXDQCTL_VARIABLE); if (SF_ADDR_HI(addr) != 0) SF_SETBIT(sc, SF_RXDQ_CTL_1, SF_RXDQCTL_64BITDADDR); csr_write_4(sc, SF_RXDQ_PTR_Q1, SF_RX_DLIST_CNT - 1); csr_write_4(sc, SF_RXDQ_CTL_2, 0); /* Init the TX completion queue */ addr = sc->sf_rdata.sf_tx_cring_paddr; csr_write_4(sc, SF_TXCQ_CTL, SF_ADDR_LO(addr) & SF_TXCQ_ADDR); if (SF_ADDR_HI(addr) != 0) SF_SETBIT(sc, SF_TXCQ_CTL, SF_TXCQ_USE_64BIT); /* Init the TX buffer descriptor queue. */ addr = sc->sf_rdata.sf_tx_ring_paddr; csr_write_4(sc, SF_TXDQ_ADDR_HI, SF_ADDR_HI(addr)); csr_write_4(sc, SF_TXDQ_ADDR_HIPRIO, 0); csr_write_4(sc, SF_TXDQ_ADDR_LOPRIO, SF_ADDR_LO(addr)); csr_write_4(sc, SF_TX_FRAMCTL, SF_TXFRMCTL_CPLAFTERTX | sc->sf_txthresh); csr_write_4(sc, SF_TXDQ_CTL, SF_TXDMA_HIPRIO_THRESH << 24 | SF_TXSKIPLEN_0BYTES << 16 | SF_TXDDMA_BURST << 8 | SF_TXBUFDESC_TYPE2 | SF_TXMINSPACE_UNLIMIT); if (SF_ADDR_HI(addr) != 0) SF_SETBIT(sc, SF_TXDQ_CTL, SF_TXDQCTL_64BITADDR); /* Set VLAN Type register. */ csr_write_4(sc, SF_VLANTYPE, ETHERTYPE_VLAN); /* Set TxPause Timer. */ csr_write_4(sc, SF_TXPAUSETIMER, 0xffff); /* Enable autopadding of short TX frames. */ SF_SETBIT(sc, SF_MACCFG_1, SF_MACCFG1_AUTOPAD); SF_SETBIT(sc, SF_MACCFG_2, SF_MACCFG2_AUTOVLANPAD); /* Reset the MAC so the changes take effect. */ SF_SETBIT(sc, SF_MACCFG_1, SF_MACCFG1_SOFTRESET); DELAY(1000); SF_CLRBIT(sc, SF_MACCFG_1, SF_MACCFG1_SOFTRESET); /* Enable PCI bus master. */ SF_SETBIT(sc, SF_PCI_DEVCFG, SF_PCIDEVCFG_PCIMEN); /* Load StarFire firmware. */ sf_download_fw(sc); /* Initialize interrupt moderation. */ csr_write_4(sc, SF_TIMER_CTL, SF_TIMER_IMASK_MODE | SF_TIMER_TIMES_TEN | (sc->sf_int_mod & SF_TIMER_IMASK_INTERVAL)); #ifdef DEVICE_POLLING /* Disable interrupts if we are polling. */ if ((ifp->if_capenable & IFCAP_POLLING) != 0) csr_write_4(sc, SF_IMR, 0x00000000); else #endif /* Enable interrupts. */ csr_write_4(sc, SF_IMR, SF_INTRS); SF_SETBIT(sc, SF_PCI_DEVCFG, SF_PCIDEVCFG_INTR_ENB); /* Enable the RX and TX engines.
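* The GFP checksum engines are switched on separately below, keyed off the IFCAP_TXCSUM and IFCAP_RXCSUM capability bits.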
*/ csr_write_4(sc, SF_GEN_ETH_CTL, SF_ETHCTL_RX_ENB | SF_ETHCTL_RXDMA_ENB | SF_ETHCTL_TX_ENB | SF_ETHCTL_TXDMA_ENB); if ((ifp->if_capenable & IFCAP_TXCSUM) != 0) SF_SETBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_TXGFP_ENB); else SF_CLRBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_TXGFP_ENB); if ((ifp->if_capenable & IFCAP_RXCSUM) != 0) SF_SETBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_RXGFP_ENB); else SF_CLRBIT(sc, SF_GEN_ETH_CTL, SF_ETHCTL_RXGFP_ENB); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->sf_link = 0; sf_ifmedia_upd_locked(ifp); callout_reset(&sc->sf_co, hz, sf_tick, sc); } static int sf_encap(struct sf_softc *sc, struct mbuf **m_head) { struct sf_txdesc *txd; struct sf_tx_rdesc *desc; struct mbuf *m; bus_dmamap_t map; bus_dma_segment_t txsegs[SF_MAXTXSEGS]; int error, i, nsegs, prod, si; int avail, nskip; SF_LOCK_ASSERT(sc); m = *m_head; prod = sc->sf_cdata.sf_tx_prod; txd = &sc->sf_cdata.sf_txdesc[prod]; map = txd->tx_dmamap; error = bus_dmamap_load_mbuf_sg(sc->sf_cdata.sf_tx_tag, map, *m_head, txsegs, &nsegs, BUS_DMA_NOWAIT); if (error == EFBIG) { m = m_collapse(*m_head, M_NOWAIT, SF_MAXTXSEGS); if (m == NULL) { m_freem(*m_head); *m_head = NULL; return (ENOBUFS); } *m_head = m; error = bus_dmamap_load_mbuf_sg(sc->sf_cdata.sf_tx_tag, map, *m_head, txsegs, &nsegs, BUS_DMA_NOWAIT); if (error != 0) { m_freem(*m_head); *m_head = NULL; return (error); } } else if (error != 0) return (error); if (nsegs == 0) { m_freem(*m_head); *m_head = NULL; return (EIO); } /* Check number of available descriptors. */ avail = (SF_TX_DLIST_CNT - 1) - sc->sf_cdata.sf_tx_cnt; if (avail < nsegs) { bus_dmamap_unload(sc->sf_cdata.sf_tx_tag, map); return (ENOBUFS); } nskip = 0; if (prod + nsegs >= SF_TX_DLIST_CNT) { nskip = SF_TX_DLIST_CNT - prod - 1; if (avail < nsegs + nskip) { bus_dmamap_unload(sc->sf_cdata.sf_tx_tag, map); return (ENOBUFS); } } bus_dmamap_sync(sc->sf_cdata.sf_tx_tag, map, BUS_DMASYNC_PREWRITE); si = prod; for (i = 0; i < nsegs; i++) { desc = &sc->sf_rdata.sf_tx_ring[prod]; desc->sf_tx_ctrl = htole32(SF_TX_DESC_ID | (txsegs[i].ds_len & SF_TX_DESC_FRAGLEN)); desc->sf_tx_reserved = 0; desc->sf_addr = htole64(txsegs[i].ds_addr); if (i == 0 && prod + nsegs >= SF_TX_DLIST_CNT) { /* Queue wraps! */ desc->sf_tx_ctrl |= htole32(SF_TX_DESC_END); prod = 0; } else SF_INC(prod, SF_TX_DLIST_CNT); } /* Update producer index. */ sc->sf_cdata.sf_tx_prod = prod; sc->sf_cdata.sf_tx_cnt += nsegs + nskip; desc = &sc->sf_rdata.sf_tx_ring[si]; /* Check TCP/UDP checksum offload request. */ if ((m->m_pkthdr.csum_flags & SF_CSUM_FEATURES) != 0) desc->sf_tx_ctrl |= htole32(SF_TX_DESC_CALTCP); desc->sf_tx_ctrl |= htole32(SF_TX_DESC_CRCEN | SF_TX_DESC_INTR | (nsegs << 16)); txd->tx_dmamap = map; txd->tx_m = m; txd->ndesc = nsegs + nskip; return (0); } static void sf_start(struct ifnet *ifp) { struct sf_softc *sc; sc = ifp->if_softc; SF_LOCK(sc); sf_start_locked(ifp); SF_UNLOCK(sc); } static void sf_start_locked(struct ifnet *ifp) { struct sf_softc *sc; struct mbuf *m_head; int enq; sc = ifp->if_softc; SF_LOCK_ASSERT(sc); if ((ifp->if_drv_flags & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING || sc->sf_link == 0) return; /* * Since we don't know in advance when a descriptor wrap occurs, * keep the number of free Tx descriptors higher than the maximum * number of DMA segments allowed in the driver.
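* Keeping SF_MAXTXSEGS descriptors in reserve means sf_encap() can always place a full scatter/gather list, including the wrap case, without overrunning the ring.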
*/ for (enq = 0; !IFQ_DRV_IS_EMPTY(&ifp->if_snd) && sc->sf_cdata.sf_tx_cnt < SF_TX_DLIST_CNT - SF_MAXTXSEGS; ) { IFQ_DRV_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; /* * Pack the data into the transmit ring. If we * don't have room, set the OACTIVE flag and wait * for the NIC to drain the ring. */ if (sf_encap(sc, &m_head)) { if (m_head == NULL) break; IFQ_DRV_PREPEND(&ifp->if_snd, m_head); ifp->if_drv_flags |= IFF_DRV_OACTIVE; break; } enq++; /* * If there's a BPF listener, bounce a copy of this frame * to him. */ ETHER_BPF_MTAP(ifp, m_head); } if (enq > 0) { bus_dmamap_sync(sc->sf_cdata.sf_tx_ring_tag, sc->sf_cdata.sf_tx_ring_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* Kick transmit. */ csr_write_4(sc, SF_TXDQ_PRODIDX, sc->sf_cdata.sf_tx_prod * (sizeof(struct sf_tx_rdesc) / 8)); /* Set a timeout in case the chip goes out to lunch. */ sc->sf_watchdog_timer = 5; } } static void sf_stop(struct sf_softc *sc) { struct sf_txdesc *txd; struct sf_rxdesc *rxd; struct ifnet *ifp; int i; SF_LOCK_ASSERT(sc); ifp = sc->sf_ifp; ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); sc->sf_link = 0; callout_stop(&sc->sf_co); sc->sf_watchdog_timer = 0; /* Reading the ISR register clears all interrupts. */ csr_read_4(sc, SF_ISR); /* Disable further interrupts. */ csr_write_4(sc, SF_IMR, 0); /* Disable the Tx/Rx engines. */ csr_write_4(sc, SF_GEN_ETH_CTL, 0); /* Give the hardware a chance to drain active DMA cycles. */ DELAY(1000); csr_write_4(sc, SF_CQ_CONSIDX, 0); csr_write_4(sc, SF_CQ_PRODIDX, 0); csr_write_4(sc, SF_RXDQ_ADDR_Q1, 0); csr_write_4(sc, SF_RXDQ_CTL_1, 0); csr_write_4(sc, SF_RXDQ_PTR_Q1, 0); csr_write_4(sc, SF_TXCQ_CTL, 0); csr_write_4(sc, SF_TXDQ_ADDR_HIPRIO, 0); csr_write_4(sc, SF_TXDQ_CTL, 0); /* * Free RX and TX mbufs still in the queues. */ for (i = 0; i < SF_RX_DLIST_CNT; i++) { rxd = &sc->sf_cdata.sf_rxdesc[i]; if (rxd->rx_m != NULL) { bus_dmamap_sync(sc->sf_cdata.sf_rx_tag, rxd->rx_dmamap, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->sf_cdata.sf_rx_tag, rxd->rx_dmamap); m_freem(rxd->rx_m); rxd->rx_m = NULL; } } for (i = 0; i < SF_TX_DLIST_CNT; i++) { txd = &sc->sf_cdata.sf_txdesc[i]; if (txd->tx_m != NULL) { bus_dmamap_sync(sc->sf_cdata.sf_tx_tag, txd->tx_dmamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->sf_cdata.sf_tx_tag, txd->tx_dmamap); m_freem(txd->tx_m); txd->tx_m = NULL; txd->ndesc = 0; } } } static void sf_tick(void *xsc) { struct sf_softc *sc; struct mii_data *mii; sc = xsc; SF_LOCK_ASSERT(sc); mii = device_get_softc(sc->sf_miibus); mii_tick(mii); sf_stats_update(sc); sf_watchdog(sc); callout_reset(&sc->sf_co, hz, sf_tick, sc); } /* * Note: it is important that this function not be interrupted. We * use a two-stage register access scheme: if we are interrupted in * between setting the indirect address register and reading from the * indirect data register, the contents of the address register could * be changed out from under us.
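* In sf_stats_update() below, the whole statistics block is read under the softc lock (asserted on entry) and the hardware counters are then cleared, so each harvest accumulates only new events into the softc copy.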
*/ static void sf_stats_update(struct sf_softc *sc) { struct ifnet *ifp; struct sf_stats now, *stats, *nstats; int i; SF_LOCK_ASSERT(sc); ifp = sc->sf_ifp; stats = &now; stats->sf_tx_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_FRAMES); stats->sf_tx_single_colls = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_SINGLE_COL); stats->sf_tx_multi_colls = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_MULTI_COL); stats->sf_tx_crcerrs = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_CRC_ERRS); stats->sf_tx_bytes = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_BYTES); stats->sf_tx_deferred = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_DEFERRED); stats->sf_tx_late_colls = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_LATE_COL); stats->sf_tx_pause_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_PAUSE); stats->sf_tx_control_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_CTL_FRAME); stats->sf_tx_excess_colls = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_EXCESS_COL); stats->sf_tx_excess_defer = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_EXCESS_DEF); stats->sf_tx_mcast_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_MULTI); stats->sf_tx_bcast_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_BCAST); stats->sf_tx_frames_lost = csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_FRAME_LOST); stats->sf_rx_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_FRAMES); stats->sf_rx_crcerrs = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_CRC_ERRS); stats->sf_rx_alignerrs = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_ALIGN_ERRS); stats->sf_rx_bytes = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_BYTES); stats->sf_rx_pause_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_PAUSE); stats->sf_rx_control_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_CTL_FRAME); stats->sf_rx_unsup_control_frames = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_UNSUP_FRAME); stats->sf_rx_giants = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_GIANTS); stats->sf_rx_runts = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_RUNTS); stats->sf_rx_jabbererrs = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_JABBER); stats->sf_rx_fragments = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_FRAGMENTS); stats->sf_rx_pkts_64 = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_64); stats->sf_rx_pkts_65_127 = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_65_127); stats->sf_rx_pkts_128_255 = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_128_255); stats->sf_rx_pkts_256_511 = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_256_511); stats->sf_rx_pkts_512_1023 = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_512_1023); stats->sf_rx_pkts_1024_1518 = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_1024_1518); stats->sf_rx_frames_lost = csr_read_4(sc, SF_STATS_BASE + SF_STATS_RX_FRAME_LOST); /* Lower 16bits are valid. */ stats->sf_tx_underruns = (csr_read_4(sc, SF_STATS_BASE + SF_STATS_TX_UNDERRUN) & 0xffff); /* Empty stats counter registers. 
*/ for (i = SF_STATS_BASE; i < (SF_STATS_END + 1); i += sizeof(uint32_t)) csr_write_4(sc, i, 0); if_inc_counter(ifp, IFCOUNTER_OPACKETS, (u_long)stats->sf_tx_frames); if_inc_counter(ifp, IFCOUNTER_COLLISIONS, (u_long)stats->sf_tx_single_colls + (u_long)stats->sf_tx_multi_colls); if_inc_counter(ifp, IFCOUNTER_OERRORS, (u_long)stats->sf_tx_excess_colls + (u_long)stats->sf_tx_excess_defer + (u_long)stats->sf_tx_frames_lost); if_inc_counter(ifp, IFCOUNTER_IPACKETS, (u_long)stats->sf_rx_frames); if_inc_counter(ifp, IFCOUNTER_IERRORS, (u_long)stats->sf_rx_crcerrs + (u_long)stats->sf_rx_alignerrs + (u_long)stats->sf_rx_giants + (u_long)stats->sf_rx_runts + (u_long)stats->sf_rx_jabbererrs + (u_long)stats->sf_rx_frames_lost); nstats = &sc->sf_statistics; nstats->sf_tx_frames += stats->sf_tx_frames; nstats->sf_tx_single_colls += stats->sf_tx_single_colls; nstats->sf_tx_multi_colls += stats->sf_tx_multi_colls; nstats->sf_tx_crcerrs += stats->sf_tx_crcerrs; nstats->sf_tx_bytes += stats->sf_tx_bytes; nstats->sf_tx_deferred += stats->sf_tx_deferred; nstats->sf_tx_late_colls += stats->sf_tx_late_colls; nstats->sf_tx_pause_frames += stats->sf_tx_pause_frames; nstats->sf_tx_control_frames += stats->sf_tx_control_frames; nstats->sf_tx_excess_colls += stats->sf_tx_excess_colls; nstats->sf_tx_excess_defer += stats->sf_tx_excess_defer; nstats->sf_tx_mcast_frames += stats->sf_tx_mcast_frames; nstats->sf_tx_bcast_frames += stats->sf_tx_bcast_frames; nstats->sf_tx_frames_lost += stats->sf_tx_frames_lost; nstats->sf_rx_frames += stats->sf_rx_frames; nstats->sf_rx_crcerrs += stats->sf_rx_crcerrs; nstats->sf_rx_alignerrs += stats->sf_rx_alignerrs; nstats->sf_rx_bytes += stats->sf_rx_bytes; nstats->sf_rx_pause_frames += stats->sf_rx_pause_frames; nstats->sf_rx_control_frames += stats->sf_rx_control_frames; nstats->sf_rx_unsup_control_frames += stats->sf_rx_unsup_control_frames; nstats->sf_rx_giants += stats->sf_rx_giants; nstats->sf_rx_runts += stats->sf_rx_runts; nstats->sf_rx_jabbererrs += stats->sf_rx_jabbererrs; nstats->sf_rx_fragments += stats->sf_rx_fragments; nstats->sf_rx_pkts_64 += stats->sf_rx_pkts_64; nstats->sf_rx_pkts_65_127 += stats->sf_rx_pkts_65_127; nstats->sf_rx_pkts_128_255 += stats->sf_rx_pkts_128_255; nstats->sf_rx_pkts_256_511 += stats->sf_rx_pkts_256_511; nstats->sf_rx_pkts_512_1023 += stats->sf_rx_pkts_512_1023; nstats->sf_rx_pkts_1024_1518 += stats->sf_rx_pkts_1024_1518; nstats->sf_rx_frames_lost += stats->sf_rx_frames_lost; nstats->sf_tx_underruns += stats->sf_tx_underruns; } static void sf_watchdog(struct sf_softc *sc) { struct ifnet *ifp; SF_LOCK_ASSERT(sc); if (sc->sf_watchdog_timer == 0 || --sc->sf_watchdog_timer) return; ifp = sc->sf_ifp; if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if (sc->sf_link == 0) { if (bootverbose) if_printf(sc->sf_ifp, "watchdog timeout " "(missed link)\n"); } else if_printf(ifp, "watchdog timeout, %d Tx descs are active\n", sc->sf_cdata.sf_tx_cnt); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; sf_init_locked(sc); if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) sf_start_locked(ifp); } static int sf_shutdown(device_t dev) { struct sf_softc *sc; sc = device_get_softc(dev); SF_LOCK(sc); sf_stop(sc); SF_UNLOCK(sc); return (0); } static int sf_suspend(device_t dev) { struct sf_softc *sc; sc = device_get_softc(dev); SF_LOCK(sc); sf_stop(sc); sc->sf_suspended = 1; bus_generic_suspend(dev); SF_UNLOCK(sc); return (0); } static int sf_resume(device_t dev) { struct sf_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); SF_LOCK(sc); bus_generic_resume(dev); ifp = sc->sf_ifp; if 
((ifp->if_flags & IFF_UP) != 0) sf_init_locked(sc); sc->sf_suspended = 0; SF_UNLOCK(sc); return (0); } static int sf_sysctl_stats(SYSCTL_HANDLER_ARGS) { struct sf_softc *sc; struct sf_stats *stats; int error; int result; result = -1; error = sysctl_handle_int(oidp, &result, 0, req); if (error != 0 || req->newptr == NULL) return (error); if (result != 1) return (error); sc = (struct sf_softc *)arg1; stats = &sc->sf_statistics; printf("%s statistics:\n", device_get_nameunit(sc->sf_dev)); printf("Transmit good frames : %ju\n", (uintmax_t)stats->sf_tx_frames); printf("Transmit good octets : %ju\n", (uintmax_t)stats->sf_tx_bytes); printf("Transmit single collisions : %u\n", stats->sf_tx_single_colls); printf("Transmit multiple collisions : %u\n", stats->sf_tx_multi_colls); printf("Transmit late collisions : %u\n", stats->sf_tx_late_colls); printf("Transmit abort due to excessive collisions : %u\n", stats->sf_tx_excess_colls); printf("Transmit CRC errors : %u\n", stats->sf_tx_crcerrs); printf("Transmit deferrals : %u\n", stats->sf_tx_deferred); printf("Transmit abort due to excessive deferrals : %u\n", stats->sf_tx_excess_defer); printf("Transmit pause control frames : %u\n", stats->sf_tx_pause_frames); printf("Transmit control frames : %u\n", stats->sf_tx_control_frames); printf("Transmit good multicast frames : %u\n", stats->sf_tx_mcast_frames); printf("Transmit good broadcast frames : %u\n", stats->sf_tx_bcast_frames); printf("Transmit frames lost due to internal transmit errors : %u\n", stats->sf_tx_frames_lost); printf("Transmit FIFO underflows : %u\n", stats->sf_tx_underruns); printf("Transmit GFP stalls : %u\n", stats->sf_tx_gfp_stall); printf("Receive good frames : %ju\n", (uint64_t)stats->sf_rx_frames); printf("Receive good octets : %ju\n", (uint64_t)stats->sf_rx_bytes); printf("Receive CRC errors : %u\n", stats->sf_rx_crcerrs); printf("Receive alignment errors : %u\n", stats->sf_rx_alignerrs); printf("Receive pause frames : %u\n", stats->sf_rx_pause_frames); printf("Receive control frames : %u\n", stats->sf_rx_control_frames); printf("Receive control frames with unsupported opcode : %u\n", stats->sf_rx_unsup_control_frames); printf("Receive frames too long : %u\n", stats->sf_rx_giants); printf("Receive frames too short : %u\n", stats->sf_rx_runts); printf("Receive frames jabber errors : %u\n", stats->sf_rx_jabbererrs); printf("Receive frames fragments : %u\n", stats->sf_rx_fragments); printf("Receive packets 64 bytes : %ju\n", (uint64_t)stats->sf_rx_pkts_64); printf("Receive packets 65 to 127 bytes : %ju\n", (uint64_t)stats->sf_rx_pkts_65_127); printf("Receive packets 128 to 255 bytes : %ju\n", (uint64_t)stats->sf_rx_pkts_128_255); printf("Receive packets 256 to 511 bytes : %ju\n", (uint64_t)stats->sf_rx_pkts_256_511); printf("Receive packets 512 to 1023 bytes : %ju\n", (uint64_t)stats->sf_rx_pkts_512_1023); printf("Receive packets 1024 to 1518 bytes : %ju\n", (uint64_t)stats->sf_rx_pkts_1024_1518); printf("Receive frames lost due to internal receive errors : %u\n", stats->sf_rx_frames_lost); printf("Receive GFP stalls : %u\n", stats->sf_rx_gfp_stall); return (error); } static int sysctl_int_range(SYSCTL_HANDLER_ARGS, int low, int high) { int error, value; if (!arg1) return (EINVAL); value = *(int *)arg1; error = sysctl_handle_int(oidp, &value, 0, req); if (error || !req->newptr) return (error); if (value < low || value > high) return (EINVAL); *(int *)arg1 = value; return (0); } static int sysctl_hw_sf_int_mod(SYSCTL_HANDLER_ARGS) { return (sysctl_int_range(oidp, arg1, arg2, req, 
SF_IM_MIN, SF_IM_MAX)); } Index: stable/12/sys/dev/sn/if_sn.c =================================================================== --- stable/12/sys/dev/sn/if_sn.c (revision 339734) +++ stable/12/sys/dev/sn/if_sn.c (revision 339735) @@ -1,1438 +1,1441 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1996 Gardner Buchanan * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Gardner Buchanan. * 4. The name of Gardner Buchanan may not be used to endorse or promote * products derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * This is a driver for SMC's 9000 series of Ethernet adapters. * * This FreeBSD driver is derived from the smc9194 Linux driver by * Erik Stahlman and is Copyright (C) 1996 by Erik Stahlman. * This driver also shamelessly borrows from the FreeBSD ep driver * which is Copyright (C) 1994 Herb Peyerl * All rights reserved. * * It is set up for my SMC91C92 equipped Ampro LittleBoard embedded * PC. It is adapted from Erik Stahlman's Linux driver which worked * with his EFA Info*Express SVC VLB adaptor. According to SMC's databook, * it will work for the entire SMC 9xxx series. (Ha Ha) * * "Features" of the SMC chip: * 4608 byte packet memory. (for the 91C92. Others have more) * EEPROM for configuration * AUI/TP selection * * Authors: * Erik Stahlman erik@vt.edu * Herb Peyerl hpeyerl@novatel.ca * Andres Vega Garcia avega@sophia.inria.fr * Serge Babkin babkin@hq.icb.chel.su * Gardner Buchanan gbuchanan@shl.com * * Sources: * o SMC databook * o "smc9194.c:v0.10(FIXED) 02/15/96 by Erik Stahlman (erik@vt.edu)" * o "if_ep.c,v 1.19 1995/01/24 20:53:45 davidg Exp" * * Known Bugs: * o Setting of the hardware address isn't supported. * o Hardware padding isn't used. */ /* * Modifications for Megahertz X-Jack Ethernet Card (XJ-10BT) * * Copyright (c) 1996 by Tatsumi Hosokawa * BSD-nomads, Tokyo, Japan. 
*/ /* * Multicast support by Kei TANAKA * Special thanks to itojun@itojun.org */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef INET #include #include #include #include #endif #include #include #include #include /* Exported variables */ devclass_t sn_devclass; static int snioctl(struct ifnet * ifp, u_long, caddr_t); static void snresume(struct ifnet *); static void snintr_locked(struct sn_softc *); static void sninit_locked(void *); static void snstart_locked(struct ifnet *); static void sninit(void *); static void snread(struct ifnet *); static void snstart(struct ifnet *); static void snstop(struct sn_softc *); static void snwatchdog(void *); static void sn_setmcast(struct sn_softc *); static int sn_getmcf(struct ifnet *ifp, u_char *mcf); /* I (GB) have been unlucky getting the hardware padding * to work properly. */ #define SW_PAD static const char *chip_ids[15] = { NULL, NULL, NULL, /* 3 */ "SMC91C90/91C92", /* 4 */ "SMC91C94/91C96", /* 5 */ "SMC91C95", NULL, /* 7 */ "SMC91C100", /* 8 */ "SMC91C100FD", /* 9 */ "SMC91C110", NULL, NULL, NULL, NULL, NULL }; int sn_attach(device_t dev) { struct sn_softc *sc = device_get_softc(dev); struct ifnet *ifp; uint16_t i; uint8_t *p; int rev; uint16_t address; int err; u_char eaddr[6]; ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); return (ENOSPC); } SN_LOCK_INIT(sc); callout_init_mtx(&sc->watchdog, &sc->sc_mtx, 0); snstop(sc); sc->pages_wanted = -1; if (bootverbose || 1) { SMC_SELECT_BANK(sc, 3); rev = (CSR_READ_2(sc, REVISION_REG_W) >> 4) & 0xf; if (chip_ids[rev]) device_printf(dev, " %s ", chip_ids[rev]); else device_printf(dev, " unsupported chip: rev %d ", rev); SMC_SELECT_BANK(sc, 1); i = CSR_READ_2(sc, CONFIG_REG_W); printf("%s\n", i & CR_AUI_SELECT ? "AUI" : "UTP"); } /* * Read the station address from the chip. The MAC address is bank 1, * regs 4 - 9 */ SMC_SELECT_BANK(sc, 1); p = (uint8_t *) eaddr; for (i = 0; i < 6; i += 2) { address = CSR_READ_2(sc, IAR_ADDR0_REG_W + i); p[i + 1] = address >> 8; p[i] = address & 0xFF; } ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_start = snstart; ifp->if_ioctl = snioctl; ifp->if_init = sninit; ifp->if_baudrate = 10000000; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_snd.ifq_maxlen = ifqmaxlen; IFQ_SET_READY(&ifp->if_snd); ether_ifattach(ifp, eaddr); /* * Activate the interrupt so we can get card interrupts. This * needs to be done last so that we don't have/hold the lock * during startup to avoid LORs in the network layer. 
*/ if ((err = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_NET | INTR_MPSAFE, NULL, sn_intr, sc, &sc->intrhand)) != 0) { sn_detach(dev); return err; } + + gone_by_fcp101_dev(dev); + return 0; } int sn_detach(device_t dev) { struct sn_softc *sc = device_get_softc(dev); struct ifnet *ifp = sc->ifp; ether_ifdetach(ifp); SN_LOCK(sc); snstop(sc); SN_UNLOCK(sc); callout_drain(&sc->watchdog); sn_deactivate(dev); if_free(ifp); SN_LOCK_DESTROY(sc); return 0; } static void sninit(void *xsc) { struct sn_softc *sc = xsc; SN_LOCK(sc); sninit_locked(sc); SN_UNLOCK(sc); } /* * Reset and initialize the chip */ static void sninit_locked(void *xsc) { struct sn_softc *sc = xsc; struct ifnet *ifp = sc->ifp; int flags; int mask; SN_ASSERT_LOCKED(sc); /* * This resets the registers mostly to defaults, but doesn't affect * EEPROM. After the reset cycle, we pause briefly for the chip to * be happy. */ SMC_SELECT_BANK(sc, 0); CSR_WRITE_2(sc, RECV_CONTROL_REG_W, RCR_SOFTRESET); SMC_DELAY(sc); CSR_WRITE_2(sc, RECV_CONTROL_REG_W, 0x0000); SMC_DELAY(sc); SMC_DELAY(sc); CSR_WRITE_2(sc, TXMIT_CONTROL_REG_W, 0x0000); /* * Set the control register to automatically release successfully * transmitted packets (making the best use out of our limited * memory) and to enable the EPH interrupt on certain TX errors. */ SMC_SELECT_BANK(sc, 1); CSR_WRITE_2(sc, CONTROL_REG_W, (CTR_AUTO_RELEASE | CTR_TE_ENABLE | CTR_CR_ENABLE | CTR_LE_ENABLE)); /* Set squelch level to 240mV (default 480mV) */ flags = CSR_READ_2(sc, CONFIG_REG_W); flags |= CR_SET_SQLCH; CSR_WRITE_2(sc, CONFIG_REG_W, flags); /* * Reset the MMU and wait for it to be un-busy. */ SMC_SELECT_BANK(sc, 2); CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_RESET); while (CSR_READ_2(sc, MMU_CMD_REG_W) & MMUCR_BUSY) /* NOTHING */ ; /* * Disable all interrupts */ CSR_WRITE_1(sc, INTR_MASK_REG_B, 0x00); sn_setmcast(sc); /* * Set the transmitter control. We want it enabled. */ flags = TCR_ENABLE; #ifndef SW_PAD /* * I (GB) have been unlucky getting this to work. */ flags |= TCR_PAD_ENABLE; #endif /* SW_PAD */ CSR_WRITE_2(sc, TXMIT_CONTROL_REG_W, flags); /* * Now, enable interrupts */ SMC_SELECT_BANK(sc, 2); mask = IM_EPH_INT | IM_RX_OVRN_INT | IM_RCV_INT | IM_TX_INT; CSR_WRITE_1(sc, INTR_MASK_REG_B, mask); sc->intr_mask = mask; sc->pages_wanted = -1; /* * Mark the interface running but not active. */ ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->watchdog, hz, snwatchdog, sc); /* * Attempt to push out any waiting packets. */ snstart_locked(ifp); } static void snstart(struct ifnet *ifp) { struct sn_softc *sc = ifp->if_softc; SN_LOCK(sc); snstart_locked(ifp); SN_UNLOCK(sc); } static void snstart_locked(struct ifnet *ifp) { struct sn_softc *sc = ifp->if_softc; u_int len; struct mbuf *m; struct mbuf *top; int pad; int mask; uint16_t length; uint16_t numPages; uint8_t packet_no; int time_out; int junk = 0; SN_ASSERT_LOCKED(sc); if (ifp->if_drv_flags & IFF_DRV_OACTIVE) return; if (sc->pages_wanted != -1) { if_printf(ifp, "snstart() while memory allocation pending\n"); return; } startagain: /* * Sneak a peek at the next packet */ m = ifp->if_snd.ifq_head; if (m == NULL) return; /* * Compute the frame length and set pad to give an overall even * number of bytes. Below we assume that the packet length is even. */ for (len = 0, top = m; m; m = m->m_next) len += m->m_len; pad = (len & 1); /* * We drop packets that are too large. Perhaps we should truncate * them instead? 
*/ if (len + pad > ETHER_MAX_LEN - ETHER_CRC_LEN) { if_printf(ifp, "large packet discarded (A)\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); IFQ_DRV_DEQUEUE(&ifp->if_snd, m); m_freem(m); goto readcheck; } #ifdef SW_PAD /* * If HW padding is not turned on, then pad to ETHER_MIN_LEN. */ if (len < ETHER_MIN_LEN - ETHER_CRC_LEN) pad = ETHER_MIN_LEN - ETHER_CRC_LEN - len; #endif /* SW_PAD */ length = pad + len; /* * The MMU wants the number of pages to be the number of 256 byte * 'pages', minus 1 (a packet can't ever have 0 pages). We also * include space for the status word, byte count and control bytes in * the allocation request. */ numPages = (length + 6) >> 8; /* * Now, try to allocate the memory */ SMC_SELECT_BANK(sc, 2); CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_ALLOC | numPages); /* * Wait a short amount of time to see if the allocation request * completes. Otherwise, I enable the interrupt and wait for * completion asynchronously. */ time_out = MEMORY_WAIT_TIME; do { if (CSR_READ_1(sc, INTR_STAT_REG_B) & IM_ALLOC_INT) break; } while (--time_out); if (!time_out || junk > 10) { /* * No memory now. Oh well, wait until the chip finds memory * later. Remember how many pages we were asking for and * enable the allocation completion interrupt. Also set a * watchdog in case we miss the interrupt. We mark the * interface active since there is no point in attempting an * snstart() until after the memory is available. */ mask = CSR_READ_1(sc, INTR_MASK_REG_B) | IM_ALLOC_INT; CSR_WRITE_1(sc, INTR_MASK_REG_B, mask); sc->intr_mask = mask; sc->timer = 1; ifp->if_drv_flags |= IFF_DRV_OACTIVE; sc->pages_wanted = numPages; return; } /* * The memory allocation completed. Check the results. */ packet_no = CSR_READ_1(sc, ALLOC_RESULT_REG_B); if (packet_no & ARR_FAILED) { if (junk++ > 10) if_printf(ifp, "Memory allocation failed\n"); goto startagain; } /* * We have a packet number, so tell the card to use it. */ CSR_WRITE_1(sc, PACKET_NUM_REG_B, packet_no); /* * Point to the beginning of the packet */ CSR_WRITE_2(sc, POINTER_REG_W, PTR_AUTOINC | 0x0000); /* * Send the packet length (+6 for status, length and control byte) * and the status word (set to zeros) */ CSR_WRITE_2(sc, DATA_REG_W, 0); CSR_WRITE_1(sc, DATA_REG_B, (length + 6) & 0xFF); CSR_WRITE_1(sc, DATA_REG_B, (length + 6) >> 8); /* * Get the packet from the kernel. This will include the Ethernet * frame header, MAC Addresses etc. */ IFQ_DRV_DEQUEUE(&ifp->if_snd, m); /* * Push out the data to the card. */ for (top = m; m != NULL; m = m->m_next) { /* * Push out words. */ CSR_WRITE_MULTI_2(sc, DATA_REG_W, mtod(m, uint16_t *), m->m_len / 2); /* * Push out remaining byte. */ if (m->m_len & 1) CSR_WRITE_1(sc, DATA_REG_B, *(mtod(m, caddr_t) + m->m_len - 1)); } /* * Push out padding. */ while (pad > 1) { CSR_WRITE_2(sc, DATA_REG_W, 0); pad -= 2; } if (pad) CSR_WRITE_1(sc, DATA_REG_B, 0); /* * Push out the control byte and unused packet byte. The control byte * is 0, meaning the packet is of even length and no special CRC * handling is desired. */ CSR_WRITE_2(sc, DATA_REG_W, 0); /* * Enable the interrupts and let the chipset deal with it. Also set a * watchdog in case we miss the interrupt. */ mask = CSR_READ_1(sc, INTR_MASK_REG_B) | (IM_TX_INT | IM_TX_EMPTY_INT); CSR_WRITE_1(sc, INTR_MASK_REG_B, mask); sc->intr_mask = mask; CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_ENQUEUE); ifp->if_drv_flags |= IFF_DRV_OACTIVE; sc->timer = 1; BPF_MTAP(ifp, top); if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); m_freem(top); readcheck: /* * Is another packet coming in?
We don't want to overflow the tiny * RX FIFO. If nothing has arrived then attempt to queue another * transmit packet. */ if (CSR_READ_2(sc, FIFO_PORTS_REG_W) & FIFO_REMPTY) goto startagain; return; } /* Resume a packet transmit operation after a memory allocation * has completed. * * This is basically a hacked up copy of snstart() which handles * a completed memory allocation the same way snstart() does. * It then passes control to snstart to handle any other queued * packets. */ static void snresume(struct ifnet *ifp) { struct sn_softc *sc = ifp->if_softc; u_int len; struct mbuf *m; struct mbuf *top; int pad; int mask; uint16_t length; uint16_t numPages; uint16_t pages_wanted; uint8_t packet_no; if (sc->pages_wanted < 0) return; pages_wanted = sc->pages_wanted; sc->pages_wanted = -1; /* * Sneak a peek at the next packet */ m = ifp->if_snd.ifq_head; if (m == NULL) { if_printf(ifp, "snresume() with nothing to send\n"); return; } /* * Compute the frame length and set pad to give an overall even * number of bytes. Below we assume that the packet length is even. */ for (len = 0, top = m; m; m = m->m_next) len += m->m_len; pad = (len & 1); /* * We drop packets that are too large. Perhaps we should truncate * them instead? */ if (len + pad > ETHER_MAX_LEN - ETHER_CRC_LEN) { if_printf(ifp, "large packet discarded (B)\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); IFQ_DRV_DEQUEUE(&ifp->if_snd, m); m_freem(m); return; } #ifdef SW_PAD /* * If HW padding is not turned on, then pad to ETHER_MIN_LEN. */ if (len < ETHER_MIN_LEN - ETHER_CRC_LEN) pad = ETHER_MIN_LEN - ETHER_CRC_LEN - len; #endif /* SW_PAD */ length = pad + len; /* * The MMU wants the number of pages to be the number of 256 byte * 'pages', minus 1 (A packet can't ever have 0 pages. We also * include space for the status word, byte count and control bytes in * the allocation request. */ numPages = (length + 6) >> 8; SMC_SELECT_BANK(sc, 2); /* * The memory allocation completed. Check the results. If it failed, * we simply set a watchdog timer and hope for the best. */ packet_no = CSR_READ_1(sc, ALLOC_RESULT_REG_B); if (packet_no & ARR_FAILED) { if_printf(ifp, "Memory allocation failed. Weird.\n"); sc->timer = 1; goto try_start; } /* * We have a packet number, so tell the card to use it. */ CSR_WRITE_1(sc, PACKET_NUM_REG_B, packet_no); /* * Now, numPages should match the pages_wanted recorded when the * memory allocation was initiated. */ if (pages_wanted != numPages) { if_printf(ifp, "memory allocation wrong size. Weird.\n"); /* * If the allocation was the wrong size we simply release the * memory once it is granted. Wait for the MMU to be un-busy. */ while (CSR_READ_2(sc, MMU_CMD_REG_W) & MMUCR_BUSY) /* NOTHING */ ; CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_FREEPKT); return; } /* * Point to the beginning of the packet */ CSR_WRITE_2(sc, POINTER_REG_W, PTR_AUTOINC | 0x0000); /* * Send the packet length (+6 for status, length and control byte) * and the status word (set to zeros) */ CSR_WRITE_2(sc, DATA_REG_W, 0); CSR_WRITE_1(sc, DATA_REG_B, (length + 6) & 0xFF); CSR_WRITE_1(sc, DATA_REG_B, (length + 6) >> 8); /* * Get the packet from the kernel. This will include the Ethernet * frame header, MAC Addresses etc. */ IFQ_DRV_DEQUEUE(&ifp->if_snd, m); /* * Push out the data to the card. */ for (top = m; m != NULL; m = m->m_next) { /* * Push out words. */ CSR_WRITE_MULTI_2(sc, DATA_REG_W, mtod(m, uint16_t *), m->m_len / 2); /* * Push out remaining byte. 
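* (Editorial note: the data port is written 16 bits at a time by
* CSR_WRITE_MULTI_2() above, so a trailing odd byte has to go out
* through the 8-bit data register, which is what the test below does.)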
*/ if (m->m_len & 1) CSR_WRITE_1(sc, DATA_REG_B, *(mtod(m, caddr_t) + m->m_len - 1)); } /* * Push out padding. */ while (pad > 1) { CSR_WRITE_2(sc, DATA_REG_W, 0); pad -= 2; } if (pad) CSR_WRITE_1(sc, DATA_REG_B, 0); /* * Push out the control byte and unused packet byte. The control byte * is 0, meaning the packet is even-length and no special CRC handling * is desired. */ CSR_WRITE_2(sc, DATA_REG_W, 0); /* * Enable the interrupts and let the chipset deal with it. Also set a * watchdog in case we miss the interrupt. */ mask = CSR_READ_1(sc, INTR_MASK_REG_B) | (IM_TX_INT | IM_TX_EMPTY_INT); CSR_WRITE_1(sc, INTR_MASK_REG_B, mask); sc->intr_mask = mask; CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_ENQUEUE); BPF_MTAP(ifp, top); if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); m_freem(top); try_start: /* * Now pass control to snstart() to queue any additional packets */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; snstart_locked(ifp); /* * We've sent something, so we're active. Set a watchdog in case the * TX_EMPTY interrupt is lost. */ ifp->if_drv_flags |= IFF_DRV_OACTIVE; sc->timer = 1; return; } void sn_intr(void *arg) { struct sn_softc *sc = (struct sn_softc *) arg; SN_LOCK(sc); snintr_locked(sc); SN_UNLOCK(sc); } static void snintr_locked(struct sn_softc *sc) { int status, interrupts; struct ifnet *ifp = sc->ifp; /* * Chip state registers */ uint8_t mask; uint8_t packet_no; uint16_t tx_status; uint16_t card_stats; /* * Clear the watchdog. */ sc->timer = 0; SMC_SELECT_BANK(sc, 2); /* * Obtain the current interrupt mask and clear the hardware mask * while servicing interrupts. */ mask = CSR_READ_1(sc, INTR_MASK_REG_B); CSR_WRITE_1(sc, INTR_MASK_REG_B, 0x00); /* * Get the set of interrupts which occurred and eliminate any which * are masked. */ interrupts = CSR_READ_1(sc, INTR_STAT_REG_B); status = interrupts & mask; /* * Now, process each of the interrupt types. */ /* * Receive Overrun. */ if (status & IM_RX_OVRN_INT) { /* * Acknowledge Interrupt */ SMC_SELECT_BANK(sc, 2); CSR_WRITE_1(sc, INTR_ACK_REG_B, IM_RX_OVRN_INT); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); } /* * Got a packet. */ if (status & IM_RCV_INT) { int packet_number; SMC_SELECT_BANK(sc, 2); packet_number = CSR_READ_2(sc, FIFO_PORTS_REG_W); if (packet_number & FIFO_REMPTY) { /* * we got called, but nothing was on the FIFO */ printf("sn: Receive interrupt with nothing on FIFO\n"); goto out; } snread(ifp); } /* * An on-card memory allocation came through. */ if (status & IM_ALLOC_INT) { /* * Disable this interrupt. */ mask &= ~IM_ALLOC_INT; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; snresume(ifp); } /* * TX Completion. Handle a transmit error message. This will only be * called when there is an error, because of the AUTO_RELEASE mode. */ if (status & IM_TX_INT) { /* * Acknowledge Interrupt */ SMC_SELECT_BANK(sc, 2); CSR_WRITE_1(sc, INTR_ACK_REG_B, IM_TX_INT); packet_no = CSR_READ_2(sc, FIFO_PORTS_REG_W); packet_no &= FIFO_TX_MASK; /* * select this as the packet to read from */ CSR_WRITE_1(sc, PACKET_NUM_REG_B, packet_no); /* * Position the pointer to the first word from this packet */ CSR_WRITE_2(sc, POINTER_REG_W, PTR_AUTOINC | PTR_READ | 0x0000); /* * Fetch the TX status word. The value found here will be a * copy of the EPH_STATUS_REG_W at the time the transmit * failed.
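* (Editorial note: sninit_locked() sets CTR_AUTO_RELEASE, so packets
* that transmit successfully are reclaimed by the chip without an
* interrupt; only failed transmits normally leave a packet behind here,
* which is why a status word with EPHSR_TX_SUC set is reported as
* surprising below.)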
*/ tx_status = CSR_READ_2(sc, DATA_REG_W); if (tx_status & EPHSR_TX_SUC) { device_printf(sc->dev, "Successful packet caused interrupt\n"); } else { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); } if (tx_status & EPHSR_LATCOL) if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 1); /* * Some of these errors will have disabled transmit. * Re-enable transmit now. */ SMC_SELECT_BANK(sc, 0); #ifdef SW_PAD CSR_WRITE_2(sc, TXMIT_CONTROL_REG_W, TCR_ENABLE); #else CSR_WRITE_2(sc, TXMIT_CONTROL_REG_W, TCR_ENABLE | TCR_PAD_ENABLE); #endif /* SW_PAD */ /* * kill the failed packet. Wait for the MMU to be un-busy. */ SMC_SELECT_BANK(sc, 2); while (CSR_READ_2(sc, MMU_CMD_REG_W) & MMUCR_BUSY) /* NOTHING */ ; CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_FREEPKT); /* * Attempt to queue more transmits. */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; snstart_locked(ifp); } /* * Transmit underrun. We use this opportunity to update transmit * statistics from the card. */ if (status & IM_TX_EMPTY_INT) { /* * Acknowledge Interrupt */ SMC_SELECT_BANK(sc, 2); CSR_WRITE_1(sc, INTR_ACK_REG_B, IM_TX_EMPTY_INT); /* * Disable this interrupt. */ mask &= ~IM_TX_EMPTY_INT; SMC_SELECT_BANK(sc, 0); card_stats = CSR_READ_2(sc, COUNTER_REG_W); /* * Single collisions */ if_inc_counter(ifp, IFCOUNTER_COLLISIONS, card_stats & ECR_COLN_MASK); /* * Multiple collisions */ if_inc_counter(ifp, IFCOUNTER_COLLISIONS, (card_stats & ECR_MCOLN_MASK) >> 4); SMC_SELECT_BANK(sc, 2); /* * Attempt to enqueue some more stuff. */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; snstart_locked(ifp); } /* * Some other error. Try to fix it by resetting the adapter. */ if (status & IM_EPH_INT) { snstop(sc); sninit_locked(sc); } out: /* * Handled all interrupt sources. */ SMC_SELECT_BANK(sc, 2); /* * Reestablish interrupts from mask which have not been deselected * during this interrupt. Note that the hardware mask, which was set * to 0x00 at the start of this service routine, may have been * updated by one or more of the interrupt handlers and we must let * those new interrupts stay enabled here. */ mask |= CSR_READ_1(sc, INTR_MASK_REG_B); CSR_WRITE_1(sc, INTR_MASK_REG_B, mask); sc->intr_mask = mask; } static void snread(struct ifnet *ifp) { struct sn_softc *sc = ifp->if_softc; struct ether_header *eh; struct mbuf *m; short status; int packet_number; uint16_t packet_length; uint8_t *data; SMC_SELECT_BANK(sc, 2); #if 0 packet_number = CSR_READ_2(sc, FIFO_PORTS_REG_W); if (packet_number & FIFO_REMPTY) { /* * we got called, but nothing was on the FIFO */ printf("sn: Receive interrupt with nothing on FIFO\n"); return; } #endif read_another: /* * Start reading from the start of the packet. Since PTR_RCV is set, * packet number is found in FIFO_PORTS_REG_W, FIFO_RX_MASK. */ CSR_WRITE_2(sc, POINTER_REG_W, PTR_READ | PTR_RCV | PTR_AUTOINC | 0x0000); /* * First two words are status and packet_length */ status = CSR_READ_2(sc, DATA_REG_W); packet_length = CSR_READ_2(sc, DATA_REG_W) & RLEN_MASK; /* * The packet length contains 3 extra words: status, length, and an * extra word with the control byte. */ packet_length -= 6; /* * Account for receive errors and discard. */ if (status & RS_ERRORS) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); goto out; } /* * A packet is received. */ /* * Adjust for odd-length packet. */ if (status & RS_ODDFRAME) packet_length++; /* * Allocate a header mbuf from the kernel. */ MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) goto out; m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = m->m_len = packet_length; /* * Attach an mbuf cluster.
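* (Editorial note: MCLGET() evaluates non-zero when a cluster was
* attached; on failure the bare mbuf is freed below instead of being
* used with its much smaller internal data area.)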
*/ if (!(MCLGET(m, M_NOWAIT))) { m_freem(m); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); printf("sn: snread() kernel memory allocation problem\n"); goto out; } eh = mtod(m, struct ether_header *); /* * Get packet, including link layer address, from interface. */ data = (uint8_t *) eh; CSR_READ_MULTI_2(sc, DATA_REG_W, (uint16_t *) data, packet_length >> 1); if (packet_length & 1) { data += packet_length & ~1; *data = CSR_READ_1(sc, DATA_REG_B); } if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); /* * Remove link layer addresses and whatnot. */ m->m_pkthdr.len = m->m_len = packet_length; /* * Drop locks before calling if_input() since it may re-enter * snstart() in the netisr case. This would result in a * lock reversal. Better performance might be obtained by * chaining all packets received, dropping the lock, and then * calling if_input() on each one. */ SN_UNLOCK(sc); (*ifp->if_input)(ifp, m); SN_LOCK(sc); out: /* * Error or good, tell the card to get rid of this packet. Wait for * the MMU to be un-busy. */ SMC_SELECT_BANK(sc, 2); while (CSR_READ_2(sc, MMU_CMD_REG_W) & MMUCR_BUSY) /* NOTHING */ ; CSR_WRITE_2(sc, MMU_CMD_REG_W, MMUCR_RELEASE); /* * Check whether another packet is ready */ packet_number = CSR_READ_2(sc, FIFO_PORTS_REG_W); if (packet_number & FIFO_REMPTY) { return; } goto read_another; } /* * Handle IOCTLs. This function is completely stolen from if_ep.c. * As with its progenitor, it does not handle hardware address * changes. */ static int snioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct sn_softc *sc = ifp->if_softc; int error = 0; switch (cmd) { case SIOCSIFFLAGS: SN_LOCK(sc); if ((ifp->if_flags & IFF_UP) == 0 && ifp->if_drv_flags & IFF_DRV_RUNNING) { snstop(sc); } else { /* reinitialize card on any parameter change */ sninit_locked(sc); } SN_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: /* update multicast filter list. */ SN_LOCK(sc); sn_setmcast(sc); error = 0; SN_UNLOCK(sc); break; default: error = ether_ioctl(ifp, cmd, data); break; } return (error); } static void snwatchdog(void *arg) { struct sn_softc *sc; sc = arg; SN_ASSERT_LOCKED(sc); callout_reset(&sc->watchdog, hz, snwatchdog, sc); if (sc->timer == 0 || --sc->timer > 0) return; snintr_locked(sc); } /* 1. zero the interrupt mask * 2. clear the enable receive flag * 3. clear the enable xmit flags */ static void snstop(struct sn_softc *sc) { struct ifnet *ifp = sc->ifp; /* * Clear interrupt mask; disable all interrupts. */ SMC_SELECT_BANK(sc, 2); CSR_WRITE_1(sc, INTR_MASK_REG_B, 0x00); /* * Disable transmitter and receiver. */ SMC_SELECT_BANK(sc, 0); CSR_WRITE_2(sc, RECV_CONTROL_REG_W, 0x0000); CSR_WRITE_2(sc, TXMIT_CONTROL_REG_W, 0x0000); /* * Cancel watchdog.
*/ sc->timer = 0; callout_stop(&sc->watchdog); ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); } int sn_activate(device_t dev) { struct sn_softc *sc = device_get_softc(dev); sc->port_rid = 0; sc->port_res = bus_alloc_resource_anywhere(dev, SYS_RES_IOPORT, &sc->port_rid, SMC_IO_EXTENT, RF_ACTIVE); if (!sc->port_res) { if (bootverbose) device_printf(dev, "Cannot allocate ioport\n"); return ENOMEM; } sc->irq_rid = 0; sc->irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &sc->irq_rid, RF_ACTIVE); if (!sc->irq_res) { if (bootverbose) device_printf(dev, "Cannot allocate irq\n"); sn_deactivate(dev); return ENOMEM; } return (0); } void sn_deactivate(device_t dev) { struct sn_softc *sc = device_get_softc(dev); if (sc->intrhand) bus_teardown_intr(dev, sc->irq_res, sc->intrhand); sc->intrhand = 0; if (sc->port_res) bus_release_resource(dev, SYS_RES_IOPORT, sc->port_rid, sc->port_res); sc->port_res = 0; if (sc->modem_res) bus_release_resource(dev, SYS_RES_IOPORT, sc->modem_rid, sc->modem_res); sc->modem_res = 0; if (sc->irq_res) bus_release_resource(dev, SYS_RES_IRQ, sc->irq_rid, sc->irq_res); sc->irq_res = 0; return; } /* * Function: sn_probe(device_t dev) * * Purpose: * Tests to see if a given ioaddr points to an SMC9xxx chip. * Tries to cause as little damage as possible if it's not an SMC chip. * Returns 0 on success * * Algorithm: * (1) see if the high byte of BANK_SELECT is 0x33 * (2) compare the ioaddr with the base register's address * (3) see if I recognize the chip ID in the appropriate register * * */ int sn_probe(device_t dev) { struct sn_softc *sc = device_get_softc(dev); uint16_t bank; uint16_t revision_register; uint16_t base_address_register; int err; if ((err = sn_activate(dev)) != 0) return err; /* * First, see if the high byte is 0x33 */ bank = CSR_READ_2(sc, BANK_SELECT_REG_W); if ((bank & BSR_DETECT_MASK) != BSR_DETECT_VALUE) { #ifdef SN_DEBUG device_printf(dev, "test1 failed\n"); #endif goto error; } /* * The above MIGHT indicate a device, but I need to write to further * test this. Go to bank 0, then test that the register still * reports the high byte is 0x33. */ CSR_WRITE_2(sc, BANK_SELECT_REG_W, 0x0000); bank = CSR_READ_2(sc, BANK_SELECT_REG_W); if ((bank & BSR_DETECT_MASK) != BSR_DETECT_VALUE) { #ifdef SN_DEBUG device_printf(dev, "test2 failed\n"); #endif goto error; } /* * well, we've already written once, so hopefully another time won't * hurt. This time, I need to switch the bank register to bank 1, so * I can access the base address register. The contents of the * BASE_ADDR_REG_W register, after some jiggery pokery, is expected * to match the I/O port address where the adapter is being probed. */ CSR_WRITE_2(sc, BANK_SELECT_REG_W, 0x0001); base_address_register = (CSR_READ_2(sc, BASE_ADDR_REG_W) >> 3) & 0x3e0; if (rman_get_start(sc->port_res) != base_address_register) { /* * Well, the base address register didn't match. Must not * have been an SMC chip after all. */ #ifdef SN_DEBUG device_printf(dev, "test3 failed ioaddr = 0x%x, " "base_address_register = 0x%x\n", rman_get_start(sc->port_res), base_address_register); #endif goto error; } /* * Check if the revision register is something that I recognize. * These might need to be added to later, as future revisions could * be added. */ CSR_WRITE_2(sc, BANK_SELECT_REG_W, 0x3); revision_register = CSR_READ_2(sc, REVISION_REG_W); if (!chip_ids[(revision_register >> 4) & 0xF]) { /* * I don't recognize this chip, so...
*/ #ifdef SN_DEBUG device_printf(dev, "test4 failed\n"); #endif goto error; } /* * at this point I'll assume that the chip is an SMC9xxx. It might be * prudent to check a listing of MAC addresses against the hardware * address, or do some other tests. */ sn_deactivate(dev); return 0; error: sn_deactivate(dev); return ENXIO; } #define MCFSZ 8 static void sn_setmcast(struct sn_softc *sc) { struct ifnet *ifp = sc->ifp; int flags; uint8_t mcf[MCFSZ]; SN_ASSERT_LOCKED(sc); /* * Set the receiver filter. We want receive enabled and auto strip * of CRC from received packet. If we are promiscuous then set that * bit too. */ flags = RCR_ENABLE | RCR_STRIP_CRC; if (ifp->if_flags & IFF_PROMISC) { flags |= RCR_PROMISC | RCR_ALMUL; } else if (ifp->if_flags & IFF_ALLMULTI) { flags |= RCR_ALMUL; } else { if (sn_getmcf(ifp, mcf)) { /* set filter */ SMC_SELECT_BANK(sc, 3); CSR_WRITE_2(sc, MULTICAST1_REG_W, ((uint16_t)mcf[1] << 8) | mcf[0]); CSR_WRITE_2(sc, MULTICAST2_REG_W, ((uint16_t)mcf[3] << 8) | mcf[2]); CSR_WRITE_2(sc, MULTICAST3_REG_W, ((uint16_t)mcf[5] << 8) | mcf[4]); CSR_WRITE_2(sc, MULTICAST4_REG_W, ((uint16_t)mcf[7] << 8) | mcf[6]); } else { flags |= RCR_ALMUL; } } SMC_SELECT_BANK(sc, 0); CSR_WRITE_2(sc, RECV_CONTROL_REG_W, flags); } static int sn_getmcf(struct ifnet *ifp, uint8_t *mcf) { int i; uint32_t index, index2; uint8_t *af = mcf; struct ifmultiaddr *ifma; bzero(mcf, MCFSZ); if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) { if_maddr_runlock(ifp); return 0; } index = ether_crc32_le(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN) & 0x3f; index2 = 0; for (i = 0; i < 6; i++) { index2 <<= 1; index2 |= (index & 0x01); index >>= 1; } af[index2 >> 3] |= 1 << (index2 & 7); } if_maddr_runlock(ifp); return 1; /* use multicast filter */ } Index: stable/12/sys/dev/tl/if_tl.c =================================================================== --- stable/12/sys/dev/tl/if_tl.c (revision 339734) +++ stable/12/sys/dev/tl/if_tl.c (revision 339735) @@ -1,2279 +1,2281 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1997, 1998 * Bill Paul . All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Bill Paul. * 4. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Texas Instruments ThunderLAN driver for FreeBSD 2.2.6 and 3.x. * Supports many Compaq PCI NICs based on the ThunderLAN ethernet controller, * the National Semiconductor DP83840A physical interface and the * Microchip Technology 24Cxx series serial EEPROM. * * Written using the following four documents: * * Texas Instruments ThunderLAN Programmer's Guide (www.ti.com) * National Semiconductor DP83840A data sheet (www.national.com) * Microchip Technology 24C02C data sheet (www.microchip.com) * Micro Linear ML6692 100BaseTX only PHY data sheet (www.microlinear.com) * * Written by Bill Paul * Electrical Engineering Department * Columbia University, New York City */ /* * Some notes about the ThunderLAN: * * The ThunderLAN controller is a single chip containing PCI controller * logic, approximately 3K of on-board SRAM, a LAN controller, and media * independent interface (MII) bus. The MII allows the ThunderLAN chip to * control up to 32 different physical interfaces (PHYs). The ThunderLAN * also has a built-in 10baseT PHY, allowing a single ThunderLAN controller * to act as a complete ethernet interface. * * Other PHYs may be attached to the ThunderLAN; the Compaq 10/100 cards * use a National Semiconductor DP83840A PHY that supports 10 or 100Mb/sec * in full or half duplex. Some of the Compaq Deskpro machines use a * Level 1 LXT970 PHY with the same capabilities. Certain Olicom adapters * use a Micro Linear ML6692 100BaseTX only PHY, which can be used in * concert with the ThunderLAN's internal PHY to provide full 10/100 * support. This is cheaper than using a standalone external PHY for both * 10/100 modes and letting the ThunderLAN's internal PHY go to waste. * A serial EEPROM is also attached to the ThunderLAN chip to provide * power-up default register settings and for storing the adapter's * station address. Although not supported by this driver, the ThunderLAN * chip can also be connected to token ring PHYs. * * The ThunderLAN has a set of registers which can be used to issue * commands, acknowledge interrupts, and to manipulate other internal * registers on its DIO bus. The primary registers can be accessed * using either programmed I/O (inb/outb) or via PCI memory mapping, * depending on how the card is configured during the PCI probing * phase. It is even possible to have both PIO and memory mapped * access turned on at the same time. * * Frame reception and transmission with the ThunderLAN chip is done * using frame 'lists.' A list structure looks more or less like this: * * struct tl_frag { * u_int32_t fragment_address; * u_int32_t fragment_size; * }; * struct tl_list { * u_int32_t forward_pointer; * u_int16_t cstat; * u_int16_t frame_size; * struct tl_frag fragments[10]; * }; * * The forward pointer in the list header can be either a 0 or the address * of another list, which allows several lists to be linked together. Each * list contains up to 10 fragment descriptors. 
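* (Editorial sketch, not in the original comment: filling one list for
* a two-mbuf chain amounts to something like
*
*	list->fragments[0].fragment_address = vtophys(mtod(m0, caddr_t));
*	list->fragments[0].fragment_size = m0->m_len;
*	list->fragments[1].fragment_address = vtophys(mtod(m1, caddr_t));
*	list->fragments[1].fragment_size = m1->m_len;
*	list->frame_size = m0->m_len + m1->m_len;
*
* using the illustrative field names above; the real structures and the
* walk over the mbuf chain live in the driver's header and in
* tl_encap().)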
This means the chip allows * ethernet frames to be broken up into up to 10 chunks for transfer to * and from the SRAM. Note that the forward pointer and fragment buffer * addresses are physical memory addresses, not virtual. Note also that * a single ethernet frame cannot span lists: if the host wants to * transmit a frame and the frame data is split up over more than 10 * buffers, the frame has to be collapsed before it can be transmitted. * * To receive frames, the driver sets up a number of lists and populates * the fragment descriptors, then it sends an RX GO command to the chip. * When a frame is received, the chip will DMA it into the memory regions * specified by the fragment descriptors and then trigger an RX 'end of * frame interrupt' when done. The driver may choose to use only one * fragment per list; this may result in slightly less efficient use * of memory in exchange for improved performance. * * To transmit frames, the driver again sets up lists and fragment * descriptors, only this time the buffers contain frame data that * is to be DMA'ed into the chip instead of out of it. Once the chip * has transferred the data into its on-board SRAM, it will trigger a * TX 'end of frame' interrupt. It will also generate an 'end of channel' * interrupt when it reaches the end of the list. */ /* * Some notes about this driver: * * The ThunderLAN chip provides a couple of different ways to organize * reception, transmission and interrupt handling. The simplest approach * is to use one list each for transmission and reception. In this mode, * the ThunderLAN will generate two interrupts for every received frame * (one RX EOF and one RX EOC) and two for each transmitted frame (one * TX EOF and one TX EOC). This may make the driver simpler but it hurts * performance to have to handle so many interrupts. * * Initially I wanted to create a circular list of receive buffers so * that the ThunderLAN chip would think there was an infinitely long * receive channel and never deliver an RXEOC interrupt. However this * doesn't work correctly under heavy load: while the manual says the * chip will trigger an RXEOF interrupt each time a frame is copied into * memory, you can't count on the chip waiting around for you to acknowledge * the interrupt before it starts trying to DMA the next frame. The result * is that the chip might traverse the entire circular list and then wrap * around before you have a chance to do anything about it. Consequently, * the receive list is terminated (with a 0 in the forward pointer in the * last element). Each time an RXEOF interrupt arrives, the used list * is shifted to the end of the list. This gives the appearance of an * infinitely large RX chain so long as the driver doesn't fall behind * the chip and allow all of the lists to be filled up. * * If all the lists are filled, the adapter will deliver an RX 'end of * channel' interrupt when it hits the 0 forward pointer at the end of * the chain. The RXEOC handler then cleans out the RX chain and resets * the list head pointer in the ch_parm register and restarts the receiver. * * For frame transmission, it is possible to program the ThunderLAN's * transmit interrupt threshold so that the chip can acknowledge multiple * lists with only a single TX EOF interrupt. This allows the driver to * queue several frames in one shot, and only have to handle a total * of two interrupts (one TX EOF and one TX EOC) no matter how many frames * are transmitted.
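* (Editorial arithmetic, not in the original: with the transmit
* interrupt threshold programmed to N, a burst of N queued frames costs
* two interrupts instead of the 2 * N that strict one-list-at-a-time
* operation would generate; for N = 8 that is 2 rather than 16.)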
Frame transmission is done directly out of the * mbufs passed to the tl_start() routine via the interface send queue. * The driver simply sets up the fragment descriptors in the transmit * lists to point to the mbuf data regions and sends a TX GO command. * * Note that since the RX and TX lists themselves are always used * only by the driver, they are malloc()ed once at driver initialization * time and never free()ed. * * Also, in order to remain as platform independent as possible, this * driver uses memory mapped register access to manipulate the card * as opposed to programmed I/O. This avoids the use of the inb/outb * (and related) instructions which are specific to the i386 platform. * * Using these techniques, this driver achieves very high performance * by minimizing the amount of interrupts generated during large * transfers and by completely avoiding buffer copies. Frame transfer * to and from the ThunderLAN chip is performed entirely by the chip * itself thereby reducing the load on the host CPU. */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* for vtophys */ #include /* for vtophys */ #include #include #include #include #include #include #include #include #include /* * Default to using PIO register access mode to pacify certain * laptop docking stations with built-in ThunderLAN chips that * don't seem to handle memory mapped mode properly. */ #define TL_USEIOSPACE #include MODULE_DEPEND(tl, pci, 1, 1, 1); MODULE_DEPEND(tl, ether, 1, 1, 1); MODULE_DEPEND(tl, miibus, 1, 1, 1); /* "device miibus" required. See GENERIC if you get errors here. */ #include "miibus_if.h" /* * Various supported device vendors/types and their names. */ static const struct tl_type tl_devs[] = { { TI_VENDORID, TI_DEVICEID_THUNDERLAN, "Texas Instruments ThunderLAN" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10, "Compaq Netelligent 10" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10_100, "Compaq Netelligent 10/100" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10_100_PROLIANT, "Compaq Netelligent 10/100 Proliant" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10_100_DUAL, "Compaq Netelligent 10/100 Dual Port" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETFLEX_3P_INTEGRATED, "Compaq NetFlex-3/P Integrated" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETFLEX_3P, "Compaq NetFlex-3/P" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETFLEX_3P_BNC, "Compaq NetFlex 3/P w/ BNC" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10_100_EMBEDDED, "Compaq Netelligent 10/100 TX Embedded UTP" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10_T2_UTP_COAX, "Compaq Netelligent 10 T/2 PCI UTP/Coax" }, { COMPAQ_VENDORID, COMPAQ_DEVICEID_NETEL_10_100_TX_UTP, "Compaq Netelligent 10/100 TX UTP" }, { OLICOM_VENDORID, OLICOM_DEVICEID_OC2183, "Olicom OC-2183/2185" }, { OLICOM_VENDORID, OLICOM_DEVICEID_OC2325, "Olicom OC-2325" }, { OLICOM_VENDORID, OLICOM_DEVICEID_OC2326, "Olicom OC-2326 10/100 TX UTP" }, { 0, 0, NULL } }; static int tl_probe(device_t); static int tl_attach(device_t); static int tl_detach(device_t); static int tl_intvec_rxeoc(void *, u_int32_t); static int tl_intvec_txeoc(void *, u_int32_t); static int tl_intvec_txeof(void *, u_int32_t); static int tl_intvec_rxeof(void *, u_int32_t); static int tl_intvec_adchk(void *, u_int32_t); static int tl_intvec_netsts(void *, u_int32_t); static int tl_newbuf(struct tl_softc *, struct tl_chain_onefrag *); static void tl_stats_update(void *); static int tl_encap(struct tl_softc *, struct tl_chain *, struct
mbuf *); static void tl_intr(void *); static void tl_start(struct ifnet *); static void tl_start_locked(struct ifnet *); static int tl_ioctl(struct ifnet *, u_long, caddr_t); static void tl_init(void *); static void tl_init_locked(struct tl_softc *); static void tl_stop(struct tl_softc *); static void tl_watchdog(struct tl_softc *); static int tl_shutdown(device_t); static int tl_ifmedia_upd(struct ifnet *); static void tl_ifmedia_sts(struct ifnet *, struct ifmediareq *); static u_int8_t tl_eeprom_putbyte(struct tl_softc *, int); static u_int8_t tl_eeprom_getbyte(struct tl_softc *, int, u_int8_t *); static int tl_read_eeprom(struct tl_softc *, caddr_t, int, int); static int tl_miibus_readreg(device_t, int, int); static int tl_miibus_writereg(device_t, int, int, int); static void tl_miibus_statchg(device_t); static void tl_setmode(struct tl_softc *, int); static uint32_t tl_mchash(const uint8_t *); static void tl_setmulti(struct tl_softc *); static void tl_setfilt(struct tl_softc *, caddr_t, int); static void tl_softreset(struct tl_softc *, int); static void tl_hardreset(device_t); static int tl_list_rx_init(struct tl_softc *); static int tl_list_tx_init(struct tl_softc *); static u_int8_t tl_dio_read8(struct tl_softc *, int); static u_int16_t tl_dio_read16(struct tl_softc *, int); static u_int32_t tl_dio_read32(struct tl_softc *, int); static void tl_dio_write8(struct tl_softc *, int, int); static void tl_dio_write16(struct tl_softc *, int, int); static void tl_dio_write32(struct tl_softc *, int, int); static void tl_dio_setbit(struct tl_softc *, int, int); static void tl_dio_clrbit(struct tl_softc *, int, int); static void tl_dio_setbit16(struct tl_softc *, int, int); static void tl_dio_clrbit16(struct tl_softc *, int, int); /* * MII bit-bang glue */ static uint32_t tl_mii_bitbang_read(device_t); static void tl_mii_bitbang_write(device_t, uint32_t); static const struct mii_bitbang_ops tl_mii_bitbang_ops = { tl_mii_bitbang_read, tl_mii_bitbang_write, { TL_SIO_MDATA, /* MII_BIT_MDO */ TL_SIO_MDATA, /* MII_BIT_MDI */ TL_SIO_MCLK, /* MII_BIT_MDC */ TL_SIO_MTXEN, /* MII_BIT_DIR_HOST_PHY */ 0, /* MII_BIT_DIR_PHY_HOST */ } }; #ifdef TL_USEIOSPACE #define TL_RES SYS_RES_IOPORT #define TL_RID TL_PCI_LOIO #else #define TL_RES SYS_RES_MEMORY #define TL_RID TL_PCI_LOMEM #endif static device_method_t tl_methods[] = { /* Device interface */ DEVMETHOD(device_probe, tl_probe), DEVMETHOD(device_attach, tl_attach), DEVMETHOD(device_detach, tl_detach), DEVMETHOD(device_shutdown, tl_shutdown), /* MII interface */ DEVMETHOD(miibus_readreg, tl_miibus_readreg), DEVMETHOD(miibus_writereg, tl_miibus_writereg), DEVMETHOD(miibus_statchg, tl_miibus_statchg), DEVMETHOD_END }; static driver_t tl_driver = { "tl", tl_methods, sizeof(struct tl_softc) }; static devclass_t tl_devclass; DRIVER_MODULE(tl, pci, tl_driver, tl_devclass, 0, 0); DRIVER_MODULE(miibus, tl, miibus_driver, miibus_devclass, 0, 0); static u_int8_t tl_dio_read8(sc, reg) struct tl_softc *sc; int reg; { CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); return(CSR_READ_1(sc, TL_DIO_DATA + (reg & 3))); } static u_int16_t tl_dio_read16(sc, reg) struct tl_softc *sc; int reg; { CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); return(CSR_READ_2(sc, TL_DIO_DATA + 
(reg & 3))); } static u_int32_t tl_dio_read32(sc, reg) struct tl_softc *sc; int reg; { CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); return(CSR_READ_4(sc, TL_DIO_DATA + (reg & 3))); } static void tl_dio_write8(sc, reg, val) struct tl_softc *sc; int reg; int val; { CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_1(sc, TL_DIO_DATA + (reg & 3), val); } static void tl_dio_write16(sc, reg, val) struct tl_softc *sc; int reg; int val; { CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_DATA + (reg & 3), val); } static void tl_dio_write32(sc, reg, val) struct tl_softc *sc; int reg; int val; { CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_4(sc, TL_DIO_DATA + (reg & 3), val); } static void tl_dio_setbit(sc, reg, bit) struct tl_softc *sc; int reg; int bit; { u_int8_t f; CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); f = CSR_READ_1(sc, TL_DIO_DATA + (reg & 3)); f |= bit; CSR_BARRIER(sc, TL_DIO_DATA + (reg & 3), 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_1(sc, TL_DIO_DATA + (reg & 3), f); } static void tl_dio_clrbit(sc, reg, bit) struct tl_softc *sc; int reg; int bit; { u_int8_t f; CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); f = CSR_READ_1(sc, TL_DIO_DATA + (reg & 3)); f &= ~bit; CSR_BARRIER(sc, TL_DIO_DATA + (reg & 3), 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_1(sc, TL_DIO_DATA + (reg & 3), f); } static void tl_dio_setbit16(sc, reg, bit) struct tl_softc *sc; int reg; int bit; { u_int16_t f; CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); f = CSR_READ_2(sc, TL_DIO_DATA + (reg & 3)); f |= bit; CSR_BARRIER(sc, TL_DIO_DATA + (reg & 3), 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_DATA + (reg & 3), f); } static void tl_dio_clrbit16(sc, reg, bit) struct tl_softc *sc; int reg; int bit; { u_int16_t f; CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_ADDR, reg); CSR_BARRIER(sc, TL_DIO_ADDR, 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); f = CSR_READ_2(sc, TL_DIO_DATA + (reg & 3)); f &= ~bit; CSR_BARRIER(sc, TL_DIO_DATA + (reg & 3), 2, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); CSR_WRITE_2(sc, TL_DIO_DATA + (reg & 3), f); } /* * Send an instruction or address to the EEPROM, check for ACK. */ static u_int8_t tl_eeprom_putbyte(sc, byte) struct tl_softc *sc; int byte; { int i, ack = 0; /* * Make sure we're in TX mode. 
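* (Editorial note: what follows is a bit-banged, I2C-style exchange
* with the 24Cxx EEPROM; each byte is clocked out MSB first on
* TL_SIO_EDATA, and once ETXEN is dropped the device is expected to
* pull the data line low as its acknowledge, which is why a non-zero
* return from this routine means failure.)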
*/ tl_dio_setbit(sc, TL_NETSIO, TL_SIO_ETXEN); /* * Feed in each bit and strobe the clock. */ for (i = 0x80; i; i >>= 1) { if (byte & i) { tl_dio_setbit(sc, TL_NETSIO, TL_SIO_EDATA); } else { tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_EDATA); } DELAY(1); tl_dio_setbit(sc, TL_NETSIO, TL_SIO_ECLOK); DELAY(1); tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_ECLOK); } /* * Turn off TX mode. */ tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_ETXEN); /* * Check for ack. */ tl_dio_setbit(sc, TL_NETSIO, TL_SIO_ECLOK); ack = tl_dio_read8(sc, TL_NETSIO) & TL_SIO_EDATA; tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_ECLOK); return(ack); } /* * Read a byte of data stored in the EEPROM at address 'addr.' */ static u_int8_t tl_eeprom_getbyte(sc, addr, dest) struct tl_softc *sc; int addr; u_int8_t *dest; { int i; u_int8_t byte = 0; device_t tl_dev = sc->tl_dev; tl_dio_write8(sc, TL_NETSIO, 0); EEPROM_START; /* * Send write control code to EEPROM. */ if (tl_eeprom_putbyte(sc, EEPROM_CTL_WRITE)) { device_printf(tl_dev, "failed to send write command, status: %x\n", tl_dio_read8(sc, TL_NETSIO)); return(1); } /* * Send address of byte we want to read. */ if (tl_eeprom_putbyte(sc, addr)) { device_printf(tl_dev, "failed to send address, status: %x\n", tl_dio_read8(sc, TL_NETSIO)); return(1); } EEPROM_STOP; EEPROM_START; /* * Send read control code to EEPROM. */ if (tl_eeprom_putbyte(sc, EEPROM_CTL_READ)) { device_printf(tl_dev, "failed to send read command, status: %x\n", tl_dio_read8(sc, TL_NETSIO)); return(1); } /* * Start reading bits from EEPROM. */ tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_ETXEN); for (i = 0x80; i; i >>= 1) { tl_dio_setbit(sc, TL_NETSIO, TL_SIO_ECLOK); DELAY(1); if (tl_dio_read8(sc, TL_NETSIO) & TL_SIO_EDATA) byte |= i; tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_ECLOK); DELAY(1); } EEPROM_STOP; /* * No ACK generated for read, so just return byte. */ *dest = byte; return(0); } /* * Read a sequence of bytes from the EEPROM. */ static int tl_read_eeprom(sc, dest, off, cnt) struct tl_softc *sc; caddr_t dest; int off; int cnt; { int err = 0, i; u_int8_t byte = 0; for (i = 0; i < cnt; i++) { err = tl_eeprom_getbyte(sc, off + i, &byte); if (err) break; *(dest + i) = byte; } return(err ? 1 : 0); } #define TL_SIO_MII (TL_SIO_MCLK | TL_SIO_MDATA | TL_SIO_MTXEN) /* * Read the MII serial port for the MII bit-bang module. */ static uint32_t tl_mii_bitbang_read(device_t dev) { struct tl_softc *sc; uint32_t val; sc = device_get_softc(dev); val = tl_dio_read8(sc, TL_NETSIO) & TL_SIO_MII; CSR_BARRIER(sc, TL_NETSIO, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); return (val); } /* * Write the MII serial port for the MII bit-bang module. */ static void tl_mii_bitbang_write(device_t dev, uint32_t val) { struct tl_softc *sc; sc = device_get_softc(dev); val = (tl_dio_read8(sc, TL_NETSIO) & ~TL_SIO_MII) | val; CSR_BARRIER(sc, TL_NETSIO, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); tl_dio_write8(sc, TL_NETSIO, val); CSR_BARRIER(sc, TL_NETSIO, 1, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); } static int tl_miibus_readreg(dev, phy, reg) device_t dev; int phy, reg; { struct tl_softc *sc; int minten, val; sc = device_get_softc(dev); /* * Turn off MII interrupt by forcing MINTEN low. */ minten = tl_dio_read8(sc, TL_NETSIO) & TL_SIO_MINTEN; if (minten) { tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_MINTEN); } val = mii_bitbang_readreg(dev, &tl_mii_bitbang_ops, phy, reg); /* Reenable interrupts.
*/ if (minten) { tl_dio_setbit(sc, TL_NETSIO, TL_SIO_MINTEN); } return (val); } static int tl_miibus_writereg(dev, phy, reg, data) device_t dev; int phy, reg, data; { struct tl_softc *sc; int minten; sc = device_get_softc(dev); /* * Turn off MII interrupt by forcing MINTEN low. */ minten = tl_dio_read8(sc, TL_NETSIO) & TL_SIO_MINTEN; if (minten) { tl_dio_clrbit(sc, TL_NETSIO, TL_SIO_MINTEN); } mii_bitbang_writereg(dev, &tl_mii_bitbang_ops, phy, reg, data); /* Reenable interrupts. */ if (minten) { tl_dio_setbit(sc, TL_NETSIO, TL_SIO_MINTEN); } return(0); } static void tl_miibus_statchg(dev) device_t dev; { struct tl_softc *sc; struct mii_data *mii; sc = device_get_softc(dev); mii = device_get_softc(sc->tl_miibus); if ((mii->mii_media_active & IFM_GMASK) == IFM_FDX) { tl_dio_setbit(sc, TL_NETCMD, TL_CMD_DUPLEX); } else { tl_dio_clrbit(sc, TL_NETCMD, TL_CMD_DUPLEX); } } /* * Set modes for bitrate devices. */ static void tl_setmode(sc, media) struct tl_softc *sc; int media; { if (IFM_SUBTYPE(media) == IFM_10_5) tl_dio_setbit(sc, TL_ACOMMIT, TL_AC_MTXD1); if (IFM_SUBTYPE(media) == IFM_10_T) { tl_dio_clrbit(sc, TL_ACOMMIT, TL_AC_MTXD1); if ((media & IFM_GMASK) == IFM_FDX) { tl_dio_clrbit(sc, TL_ACOMMIT, TL_AC_MTXD3); tl_dio_setbit(sc, TL_NETCMD, TL_CMD_DUPLEX); } else { tl_dio_setbit(sc, TL_ACOMMIT, TL_AC_MTXD3); tl_dio_clrbit(sc, TL_NETCMD, TL_CMD_DUPLEX); } } } /* * Calculate the hash of a MAC address for programming the multicast hash * table. This hash is simply the address split into 6-bit chunks * XOR'd, e.g. * byte: 000000|00 1111|1111 22|222222|333333|33 4444|4444 55|555555 * bit: 765432|10 7654|3210 76|543210|765432|10 7654|3210 76|543210 * Bytes 0-2 and 3-5 are symmetrical, so are folded together. Then * the folded 24-bit value is split into 6-bit portions and XOR'd. */ static uint32_t tl_mchash(addr) const uint8_t *addr; { int t; t = (addr[0] ^ addr[3]) << 16 | (addr[1] ^ addr[4]) << 8 | (addr[2] ^ addr[5]); return ((t >> 18) ^ (t >> 12) ^ (t >> 6) ^ t) & 0x3f; } /* * The ThunderLAN has a perfect MAC address filter in addition to * the multicast hash filter. The perfect filter can be programmed * with up to four MAC addresses. The first one is always used to * hold the station address, which leaves us free to use the other * three for multicast addresses. */ static void tl_setfilt(sc, addr, slot) struct tl_softc *sc; caddr_t addr; int slot; { int i; u_int16_t regaddr; regaddr = TL_AREG0_B5 + (slot * ETHER_ADDR_LEN); for (i = 0; i < ETHER_ADDR_LEN; i++) tl_dio_write8(sc, regaddr + i, *(addr + i)); } /* * XXX In FreeBSD 3.0, multicast addresses are managed using a doubly * linked list. This is fine, except addresses are added from the head * end of the list. We want to arrange for 224.0.0.1 (the "all hosts") * group to always be in the perfect filter, but as more groups are added, * the 224.0.0.1 entry (which is always added first) gets pushed down * the list and ends up at the tail. So after 3 or 4 multicast groups * are added, the all-hosts entry gets pushed out of the perfect filter * and into the hash table. * * Because the multicast list is a doubly-linked list as opposed to a * circular queue, we don't have the ability to just grab the tail of * the list and traverse it backwards. Instead, we have to traverse * the list once to find the tail, then traverse it again backwards to * update the multicast filter. 
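* * (Editorial worked example, not in the original: for the all-hosts
* group 224.0.0.1, whose MAC address is 01:00:5e:00:00:01, tl_mchash()
* folds the bytes to t = 0x01005f, and (t >> 18) ^ (t >> 12) ^
* (t >> 6) ^ t masked to 6 bits comes out to 14, so that group would
* land on bit 14 of TL_HASH1 whenever it overflows out of the perfect
* filter.)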
*/ static void tl_setmulti(sc) struct tl_softc *sc; { struct ifnet *ifp; u_int32_t hashes[2] = { 0, 0 }; int h, i; struct ifmultiaddr *ifma; u_int8_t dummy[] = { 0, 0, 0, 0, 0 ,0 }; ifp = sc->tl_ifp; /* First, zot all the existing filters. */ for (i = 1; i < 4; i++) tl_setfilt(sc, (caddr_t)&dummy, i); tl_dio_write32(sc, TL_HASH1, 0); tl_dio_write32(sc, TL_HASH2, 0); /* Now program new ones. */ if (ifp->if_flags & IFF_ALLMULTI) { hashes[0] = 0xFFFFFFFF; hashes[1] = 0xFFFFFFFF; } else { i = 1; if_maddr_rlock(ifp); /* XXX want to maintain reverse semantics - pop list and re-add? */ CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; /* * Program the first three multicast groups * into the perfect filter. For all others, * use the hash table. */ if (i < 4) { tl_setfilt(sc, LLADDR((struct sockaddr_dl *)ifma->ifma_addr), i); i++; continue; } h = tl_mchash( LLADDR((struct sockaddr_dl *)ifma->ifma_addr)); if (h < 32) hashes[0] |= (1 << h); else hashes[1] |= (1 << (h - 32)); } if_maddr_runlock(ifp); } tl_dio_write32(sc, TL_HASH1, hashes[0]); tl_dio_write32(sc, TL_HASH2, hashes[1]); } /* * This routine is recommended by the ThunderLAN manual to insure that * the internal PHY is powered up correctly. It also recommends a one * second pause at the end to 'wait for the clocks to start' but in my * experience this isn't necessary. */ static void tl_hardreset(dev) device_t dev; { int i; u_int16_t flags; mii_bitbang_sync(dev, &tl_mii_bitbang_ops); flags = BMCR_LOOP|BMCR_ISO|BMCR_PDOWN; for (i = 0; i < MII_NPHY; i++) tl_miibus_writereg(dev, i, MII_BMCR, flags); tl_miibus_writereg(dev, 31, MII_BMCR, BMCR_ISO); DELAY(50000); tl_miibus_writereg(dev, 31, MII_BMCR, BMCR_LOOP|BMCR_ISO); mii_bitbang_sync(dev, &tl_mii_bitbang_ops); while(tl_miibus_readreg(dev, 31, MII_BMCR) & BMCR_RESET); DELAY(50000); } static void tl_softreset(sc, internal) struct tl_softc *sc; int internal; { u_int32_t cmd, dummy, i; /* Assert the adapter reset bit. */ CMD_SET(sc, TL_CMD_ADRST); /* Turn off interrupts */ CMD_SET(sc, TL_CMD_INTSOFF); /* First, clear the stats registers. */ for (i = 0; i < 5; i++) dummy = tl_dio_read32(sc, TL_TXGOODFRAMES); /* Clear Areg and Hash registers */ for (i = 0; i < 8; i++) tl_dio_write32(sc, TL_AREG0_B5, 0x00000000); /* * Set up Netconfig register. Enable one channel and * one fragment mode. */ tl_dio_setbit16(sc, TL_NETCONFIG, TL_CFG_ONECHAN|TL_CFG_ONEFRAG); if (internal && !sc->tl_bitrate) { tl_dio_setbit16(sc, TL_NETCONFIG, TL_CFG_PHYEN); } else { tl_dio_clrbit16(sc, TL_NETCONFIG, TL_CFG_PHYEN); } /* Handle cards with bitrate devices. */ if (sc->tl_bitrate) tl_dio_setbit16(sc, TL_NETCONFIG, TL_CFG_BITRATE); /* * Load adapter irq pacing timer and tx threshold. * We make the transmit threshold 1 initially but we may * change that later. */ cmd = CSR_READ_4(sc, TL_HOSTCMD); cmd |= TL_CMD_NES; cmd &= ~(TL_CMD_RT|TL_CMD_EOC|TL_CMD_ACK_MASK|TL_CMD_CHSEL_MASK); CMD_PUT(sc, cmd | (TL_CMD_LDTHR | TX_THR)); CMD_PUT(sc, cmd | (TL_CMD_LDTMR | 0x00000003)); /* Unreset the MII */ tl_dio_setbit(sc, TL_NETSIO, TL_SIO_NMRST); /* Take the adapter out of reset */ tl_dio_setbit(sc, TL_NETCMD, TL_CMD_NRESET|TL_CMD_NWRAP); /* Wait for things to settle down a little. */ DELAY(500); } /* * Probe for a ThunderLAN chip. Check the PCI vendor and device IDs * against our list and return its name if we find a match. 
*/ static int tl_probe(dev) device_t dev; { const struct tl_type *t; t = tl_devs; while(t->tl_name != NULL) { if ((pci_get_vendor(dev) == t->tl_vid) && (pci_get_device(dev) == t->tl_did)) { device_set_desc(dev, t->tl_name); return (BUS_PROBE_DEFAULT); } t++; } return(ENXIO); } static int tl_attach(dev) device_t dev; { u_int16_t did, vid; const struct tl_type *t; struct ifnet *ifp; struct tl_softc *sc; int error, flags, i, rid, unit; u_char eaddr[6]; vid = pci_get_vendor(dev); did = pci_get_device(dev); sc = device_get_softc(dev); sc->tl_dev = dev; unit = device_get_unit(dev); t = tl_devs; while(t->tl_name != NULL) { if (vid == t->tl_vid && did == t->tl_did) break; t++; } if (t->tl_name == NULL) { device_printf(dev, "unknown device!?\n"); return (ENXIO); } mtx_init(&sc->tl_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); /* * Map control/status registers. */ pci_enable_busmaster(dev); #ifdef TL_USEIOSPACE rid = TL_PCI_LOIO; sc->tl_res = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &rid, RF_ACTIVE); /* * Some cards have the I/O and memory mapped address registers * reversed. Try both combinations before giving up. */ if (sc->tl_res == NULL) { rid = TL_PCI_LOMEM; sc->tl_res = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &rid, RF_ACTIVE); } #else rid = TL_PCI_LOMEM; sc->tl_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (sc->tl_res == NULL) { rid = TL_PCI_LOIO; sc->tl_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); } #endif if (sc->tl_res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); error = ENXIO; goto fail; } #ifdef notdef /* * The ThunderLAN manual suggests jacking the PCI latency * timer all the way up to its maximum value. I'm not sure * if this is really necessary, but what the manual wants, * the manual gets. */ command = pci_read_config(dev, TL_PCI_LATENCY_TIMER, 4); command |= 0x0000FF00; pci_write_config(dev, TL_PCI_LATENCY_TIMER, command, 4); #endif /* Allocate interrupt */ rid = 0; sc->tl_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->tl_irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } /* * Now allocate memory for the TX and RX lists. */ sc->tl_ldata = contigmalloc(sizeof(struct tl_list_data), M_DEVBUF, M_NOWAIT, 0, 0xffffffff, PAGE_SIZE, 0); if (sc->tl_ldata == NULL) { device_printf(dev, "no memory for list buffers!\n"); error = ENXIO; goto fail; } bzero(sc->tl_ldata, sizeof(struct tl_list_data)); if (vid == COMPAQ_VENDORID || vid == TI_VENDORID) sc->tl_eeaddr = TL_EEPROM_EADDR; if (vid == OLICOM_VENDORID) sc->tl_eeaddr = TL_EEPROM_EADDR_OC; /* Reset the adapter. */ tl_softreset(sc, 1); tl_hardreset(dev); tl_softreset(sc, 1); /* * Get station address from the EEPROM. */ if (tl_read_eeprom(sc, eaddr, sc->tl_eeaddr, ETHER_ADDR_LEN)) { device_printf(dev, "failed to read station address\n"); error = ENXIO; goto fail; } /* * XXX Olicom, in its desire to be different from the * rest of the world, has done strange things with the * encoding of the station address in the EEPROM. First * of all, they store the address at offset 0xF8 rather * than at 0x83 like the ThunderLAN manual suggests. * Second, they store the address in three 16-bit words in * network byte order, as opposed to storing it sequentially * like all the other ThunderLAN cards. In order to get * the station address in a form that matches what the Olicom * diagnostic utility specifies, we have to byte-swap each * word. 
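* (Editorial example: on a little-endian machine, an EEPROM address
* that reads back as 28:00:1c:00:3a:00 becomes 00:28:00:1c:00:3a after
* the ntohs() pass below, each 16-bit word being byte-swapped in
* place.)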
To make things even more confusing, neither 00:00:28 * nor 00:00:24 appear in the IEEE OUI database. */ if (vid == OLICOM_VENDORID) { for (i = 0; i < ETHER_ADDR_LEN; i += 2) { u_int16_t *p; p = (u_int16_t *)&eaddr[i]; *p = ntohs(*p); } } ifp = sc->tl_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); error = ENOSPC; goto fail; } ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = tl_ioctl; ifp->if_start = tl_start; ifp->if_init = tl_init; ifp->if_snd.ifq_maxlen = TL_TX_LIST_CNT - 1; ifp->if_capabilities |= IFCAP_VLAN_MTU; ifp->if_capenable |= IFCAP_VLAN_MTU; callout_init_mtx(&sc->tl_stat_callout, &sc->tl_mtx, 0); /* Reset the adapter again. */ tl_softreset(sc, 1); tl_hardreset(dev); tl_softreset(sc, 1); /* * Do MII setup. If no PHYs are found, then this is a * bitrate ThunderLAN chip that only supports 10baseT * and AUI/BNC. * XXX mii_attach() can fail for reasons other than * no PHYs being found! */ flags = 0; if (vid == COMPAQ_VENDORID) { if (did == COMPAQ_DEVICEID_NETEL_10_100_PROLIANT || did == COMPAQ_DEVICEID_NETFLEX_3P_INTEGRATED || did == COMPAQ_DEVICEID_NETFLEX_3P_BNC || did == COMPAQ_DEVICEID_NETEL_10_T2_UTP_COAX) flags |= MIIF_MACPRIV0; if (did == COMPAQ_DEVICEID_NETEL_10 || did == COMPAQ_DEVICEID_NETEL_10_100_DUAL || did == COMPAQ_DEVICEID_NETFLEX_3P || did == COMPAQ_DEVICEID_NETEL_10_100_EMBEDDED) flags |= MIIF_MACPRIV1; } else if (vid == OLICOM_VENDORID && did == OLICOM_DEVICEID_OC2183) flags |= MIIF_MACPRIV0 | MIIF_MACPRIV1; if (mii_attach(dev, &sc->tl_miibus, ifp, tl_ifmedia_upd, tl_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, 0)) { struct ifmedia *ifm; sc->tl_bitrate = 1; ifmedia_init(&sc->ifmedia, 0, tl_ifmedia_upd, tl_ifmedia_sts); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T|IFM_HDX, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_T|IFM_FDX, 0, NULL); ifmedia_add(&sc->ifmedia, IFM_ETHER|IFM_10_5, 0, NULL); ifmedia_set(&sc->ifmedia, IFM_ETHER|IFM_10_T); /* Reset again, this time setting bitrate mode. */ tl_softreset(sc, 1); ifm = &sc->ifmedia; ifm->ifm_media = ifm->ifm_cur->ifm_media; tl_ifmedia_upd(ifp); } /* * Call MI attach routine. */ ether_ifattach(ifp, eaddr); /* Hook interrupt last to avoid having to lock softc */ error = bus_setup_intr(dev, sc->tl_irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, tl_intr, sc, &sc->tl_intrhand); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } + gone_by_fcp101_dev(dev); + fail: if (error) tl_detach(dev); return(error); } /* * Shutdown hardware and free up resources. This can be called any * time after the mutex has been initialized. It is called in both * the error case in attach and the normal detach case so it needs * to be careful about only freeing resources that have actually been * allocated.
*/ static int tl_detach(dev) device_t dev; { struct tl_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); KASSERT(mtx_initialized(&sc->tl_mtx), ("tl mutex not initialized")); ifp = sc->tl_ifp; /* These should only be active if attach succeeded */ if (device_is_attached(dev)) { ether_ifdetach(ifp); TL_LOCK(sc); tl_stop(sc); TL_UNLOCK(sc); callout_drain(&sc->tl_stat_callout); } if (sc->tl_miibus) device_delete_child(dev, sc->tl_miibus); bus_generic_detach(dev); if (sc->tl_ldata) contigfree(sc->tl_ldata, sizeof(struct tl_list_data), M_DEVBUF); if (sc->tl_bitrate) ifmedia_removeall(&sc->ifmedia); if (sc->tl_intrhand) bus_teardown_intr(dev, sc->tl_irq, sc->tl_intrhand); if (sc->tl_irq) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->tl_irq); if (sc->tl_res) bus_release_resource(dev, TL_RES, TL_RID, sc->tl_res); if (ifp) if_free(ifp); mtx_destroy(&sc->tl_mtx); return(0); } /* * Initialize the transmit lists. */ static int tl_list_tx_init(sc) struct tl_softc *sc; { struct tl_chain_data *cd; struct tl_list_data *ld; int i; cd = &sc->tl_cdata; ld = sc->tl_ldata; for (i = 0; i < TL_TX_LIST_CNT; i++) { cd->tl_tx_chain[i].tl_ptr = &ld->tl_tx_list[i]; if (i == (TL_TX_LIST_CNT - 1)) cd->tl_tx_chain[i].tl_next = NULL; else cd->tl_tx_chain[i].tl_next = &cd->tl_tx_chain[i + 1]; } cd->tl_tx_free = &cd->tl_tx_chain[0]; cd->tl_tx_tail = cd->tl_tx_head = NULL; sc->tl_txeoc = 1; return(0); } /* * Initialize the RX lists and allocate mbufs for them. */ static int tl_list_rx_init(sc) struct tl_softc *sc; { struct tl_chain_data *cd; struct tl_list_data *ld; int i; cd = &sc->tl_cdata; ld = sc->tl_ldata; for (i = 0; i < TL_RX_LIST_CNT; i++) { cd->tl_rx_chain[i].tl_ptr = (struct tl_list_onefrag *)&ld->tl_rx_list[i]; if (tl_newbuf(sc, &cd->tl_rx_chain[i]) == ENOBUFS) return(ENOBUFS); if (i == (TL_RX_LIST_CNT - 1)) { cd->tl_rx_chain[i].tl_next = NULL; ld->tl_rx_list[i].tlist_fptr = 0; } else { cd->tl_rx_chain[i].tl_next = &cd->tl_rx_chain[i + 1]; ld->tl_rx_list[i].tlist_fptr = vtophys(&ld->tl_rx_list[i + 1]); } } cd->tl_rx_head = &cd->tl_rx_chain[0]; cd->tl_rx_tail = &cd->tl_rx_chain[TL_RX_LIST_CNT - 1]; return(0); } static int tl_newbuf(sc, c) struct tl_softc *sc; struct tl_chain_onefrag *c; { struct mbuf *m_new = NULL; m_new = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (m_new == NULL) return(ENOBUFS); c->tl_mbuf = m_new; c->tl_next = NULL; c->tl_ptr->tlist_frsize = MCLBYTES; c->tl_ptr->tlist_fptr = 0; c->tl_ptr->tl_frag.tlist_dadr = vtophys(mtod(m_new, caddr_t)); c->tl_ptr->tl_frag.tlist_dcnt = MCLBYTES; c->tl_ptr->tlist_cstat = TL_CSTAT_READY; return(0); } /* * Interrupt handler for RX 'end of frame' condition (EOF). This * tells us that a full ethernet frame has been captured and we need * to handle it. * * Reception is done using 'lists' which consist of a header and a * series of 10 data count/data address pairs that point to buffers. * Initially you're supposed to create a list, populate it with pointers * to buffers, then load the physical address of the list into the * ch_parm register. The adapter is then supposed to DMA the received * frame into the buffers for you. * * To make things as fast as possible, we have the chip DMA directly * into mbufs. This saves us from having to do a buffer copy: we can * just hand the mbufs directly to ether_input(). Once the frame has * been sent on its way, the 'list' structure is assigned a new buffer * and moved to the end of the RX chain. As long as we stay ahead of * the chip, it will always think it has an endless receive channel.
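* (Editorial aside: 'moving' a used list to the end of the chain is
* cheap; tl_intvec_rxeof() below does it with three assignments,
* pointing the old tail's tlist_fptr at the recycled list via
* vtophys(), relinking tl_next, and advancing tl_rx_tail.)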
* * If we happen to fall behind and the chip manages to fill up all of * the buffers, it will generate an end of channel interrupt and wait * for us to empty the chain and restart the receiver. */ static int tl_intvec_rxeof(xsc, type) void *xsc; u_int32_t type; { struct tl_softc *sc; int r = 0, total_len = 0; struct ether_header *eh; struct mbuf *m; struct ifnet *ifp; struct tl_chain_onefrag *cur_rx; sc = xsc; ifp = sc->tl_ifp; TL_LOCK_ASSERT(sc); while(sc->tl_cdata.tl_rx_head != NULL) { cur_rx = sc->tl_cdata.tl_rx_head; if (!(cur_rx->tl_ptr->tlist_cstat & TL_CSTAT_FRAMECMP)) break; r++; sc->tl_cdata.tl_rx_head = cur_rx->tl_next; m = cur_rx->tl_mbuf; total_len = cur_rx->tl_ptr->tlist_frsize; if (tl_newbuf(sc, cur_rx) == ENOBUFS) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); cur_rx->tl_ptr->tlist_frsize = MCLBYTES; cur_rx->tl_ptr->tlist_cstat = TL_CSTAT_READY; cur_rx->tl_ptr->tl_frag.tlist_dcnt = MCLBYTES; continue; } sc->tl_cdata.tl_rx_tail->tl_ptr->tlist_fptr = vtophys(cur_rx->tl_ptr); sc->tl_cdata.tl_rx_tail->tl_next = cur_rx; sc->tl_cdata.tl_rx_tail = cur_rx; /* * Note: when the ThunderLAN chip is in 'capture all * frames' mode, it will receive its own transmissions. * We do not need to process our own transmissions, * so we drop them here and continue. */ eh = mtod(m, struct ether_header *); /*if (ifp->if_flags & IFF_PROMISC && */ if (!bcmp(eh->ether_shost, IF_LLADDR(sc->tl_ifp), ETHER_ADDR_LEN)) { m_freem(m); continue; } m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = m->m_len = total_len; TL_UNLOCK(sc); (*ifp->if_input)(ifp, m); TL_LOCK(sc); } return(r); } /* * The RX-EOC condition hits when the ch_parm address hasn't been * initialized or the adapter reached a list with a forward pointer * of 0 (which indicates the end of the chain). In our case, this means * the card has hit the end of the receive buffer chain and we need to * empty out the buffers and shift the pointer back to the beginning again. */ static int tl_intvec_rxeoc(xsc, type) void *xsc; u_int32_t type; { struct tl_softc *sc; int r; struct tl_chain_data *cd; sc = xsc; cd = &sc->tl_cdata; /* Flush out the receive queue and ack RXEOF interrupts. */ r = tl_intvec_rxeof(xsc, type); CMD_PUT(sc, TL_CMD_ACK | r | (type & ~(0x00100000))); r = 1; cd->tl_rx_head = &cd->tl_rx_chain[0]; cd->tl_rx_tail = &cd->tl_rx_chain[TL_RX_LIST_CNT - 1]; CSR_WRITE_4(sc, TL_CH_PARM, vtophys(sc->tl_cdata.tl_rx_head->tl_ptr)); r |= (TL_CMD_GO|TL_CMD_RT); return(r); } static int tl_intvec_txeof(xsc, type) void *xsc; u_int32_t type; { struct tl_softc *sc; int r = 0; struct tl_chain *cur_tx; sc = xsc; /* * Go through our tx list and free mbufs for those * frames that have been sent. */ while (sc->tl_cdata.tl_tx_head != NULL) { cur_tx = sc->tl_cdata.tl_tx_head; if (!(cur_tx->tl_ptr->tlist_cstat & TL_CSTAT_FRAMECMP)) break; sc->tl_cdata.tl_tx_head = cur_tx->tl_next; r++; m_freem(cur_tx->tl_mbuf); cur_tx->tl_mbuf = NULL; cur_tx->tl_next = sc->tl_cdata.tl_tx_free; sc->tl_cdata.tl_tx_free = cur_tx; if (!cur_tx->tl_ptr->tlist_fptr) break; } return(r); } /* * The transmit end of channel interrupt. The adapter triggers this * interrupt to tell us it hit the end of the current transmit list. * * A note about this: it's possible for a condition to arise where * tl_start() may try to send frames between TXEOF and TXEOC interrupts. * You have to avoid this since the chip expects things to go in a * particular order: transmit, acknowledge TXEOF, acknowledge TXEOC.
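 *
 * Schematically, the sequence the chip expects is (a sketch of the
 * calls involved, not literal driver code):
 *
 *	CMD_PUT(sc, cmd | TL_CMD_GO);		1) transmit
 *	CMD_PUT(sc, TL_CMD_ACK | r | type);	2) acknowledge TXEOF
 *	CMD_PUT(sc, TL_CMD_ACK | 0x1 | type);	3) acknowledge TXEOC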
* When the TXEOF handler is called, it will free all of the transmitted * frames and reset the tx_head pointer to NULL. However, a TXEOC * interrupt should be received and acknowledged before any more frames * are queued for transmission. If tl_start() is called after TXEOF * resets the tx_head pointer but _before_ the TXEOC interrupt arrives, * it could attempt to issue a transmit command prematurely. * * To guard against this, tl_start() will only issue transmit commands * if the tl_txeoc flag is set, and only the TXEOC interrupt handler * can set this flag once tl_start() has cleared it. */ static int tl_intvec_txeoc(xsc, type) void *xsc; u_int32_t type; { struct tl_softc *sc; struct ifnet *ifp; u_int32_t cmd; sc = xsc; ifp = sc->tl_ifp; /* Clear the timeout timer. */ sc->tl_timer = 0; if (sc->tl_cdata.tl_tx_head == NULL) { ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->tl_cdata.tl_tx_tail = NULL; sc->tl_txeoc = 1; } else { sc->tl_txeoc = 0; /* First we have to ack the EOC interrupt. */ CMD_PUT(sc, TL_CMD_ACK | 0x00000001 | type); /* Then load the address of the next TX list. */ CSR_WRITE_4(sc, TL_CH_PARM, vtophys(sc->tl_cdata.tl_tx_head->tl_ptr)); /* Restart TX channel. */ cmd = CSR_READ_4(sc, TL_HOSTCMD); cmd &= ~TL_CMD_RT; cmd |= TL_CMD_GO|TL_CMD_INTSON; CMD_PUT(sc, cmd); return(0); } return(1); } static int tl_intvec_adchk(xsc, type) void *xsc; u_int32_t type; { struct tl_softc *sc; sc = xsc; if (type) device_printf(sc->tl_dev, "adapter check: %x\n", (unsigned int)CSR_READ_4(sc, TL_CH_PARM)); tl_softreset(sc, 1); tl_stop(sc); tl_init_locked(sc); CMD_SET(sc, TL_CMD_INTSON); return(0); } static int tl_intvec_netsts(xsc, type) void *xsc; u_int32_t type; { struct tl_softc *sc; u_int16_t netsts; sc = xsc; netsts = tl_dio_read16(sc, TL_NETSTS); tl_dio_write16(sc, TL_NETSTS, netsts); device_printf(sc->tl_dev, "network status: %x\n", netsts); return(1); } static void tl_intr(xsc) void *xsc; { struct tl_softc *sc; struct ifnet *ifp; int r = 0; u_int32_t type = 0; u_int16_t ints = 0; u_int8_t ivec = 0; sc = xsc; TL_LOCK(sc); /* Disable interrupts */ ints = CSR_READ_2(sc, TL_HOST_INT); CSR_WRITE_2(sc, TL_HOST_INT, ints); type = (ints << 16) & 0xFFFF0000; ivec = (ints & TL_VEC_MASK) >> 5; ints = (ints & TL_INT_MASK) >> 2; ifp = sc->tl_ifp; switch(ints) { case (TL_INTR_INVALID): #ifdef DIAGNOSTIC device_printf(sc->tl_dev, "got an invalid interrupt!\n"); #endif /* Re-enable interrupts but don't ack this one.
*/ CMD_PUT(sc, type); r = 0; break; case (TL_INTR_TXEOF): r = tl_intvec_txeof((void *)sc, type); break; case (TL_INTR_TXEOC): r = tl_intvec_txeoc((void *)sc, type); break; case (TL_INTR_STATOFLOW): tl_stats_update(sc); r = 1; break; case (TL_INTR_RXEOF): r = tl_intvec_rxeof((void *)sc, type); break; case (TL_INTR_DUMMY): device_printf(sc->tl_dev, "got a dummy interrupt\n"); r = 1; break; case (TL_INTR_ADCHK): if (ivec) r = tl_intvec_adchk((void *)sc, type); else r = tl_intvec_netsts((void *)sc, type); break; case (TL_INTR_RXEOC): r = tl_intvec_rxeoc((void *)sc, type); break; default: device_printf(sc->tl_dev, "bogus interrupt type\n"); break; } /* Re-enable interrupts */ if (r) { CMD_PUT(sc, TL_CMD_ACK | r | type); } if (ifp->if_snd.ifq_head != NULL) tl_start_locked(ifp); TL_UNLOCK(sc); } static void tl_stats_update(xsc) void *xsc; { struct tl_softc *sc; struct ifnet *ifp; struct tl_stats tl_stats; struct mii_data *mii; u_int32_t *p; bzero((char *)&tl_stats, sizeof(struct tl_stats)); sc = xsc; TL_LOCK_ASSERT(sc); ifp = sc->tl_ifp; p = (u_int32_t *)&tl_stats; CSR_WRITE_2(sc, TL_DIO_ADDR, TL_TXGOODFRAMES|TL_DIO_ADDR_INC); *p++ = CSR_READ_4(sc, TL_DIO_DATA); *p++ = CSR_READ_4(sc, TL_DIO_DATA); *p++ = CSR_READ_4(sc, TL_DIO_DATA); *p++ = CSR_READ_4(sc, TL_DIO_DATA); *p++ = CSR_READ_4(sc, TL_DIO_DATA); if_inc_counter(ifp, IFCOUNTER_OPACKETS, tl_tx_goodframes(tl_stats)); if_inc_counter(ifp, IFCOUNTER_COLLISIONS, tl_stats.tl_tx_single_collision + tl_stats.tl_tx_multi_collision); if_inc_counter(ifp, IFCOUNTER_IPACKETS, tl_rx_goodframes(tl_stats)); if_inc_counter(ifp, IFCOUNTER_IERRORS, tl_stats.tl_crc_errors + tl_stats.tl_code_errors + tl_rx_overrun(tl_stats)); if_inc_counter(ifp, IFCOUNTER_OERRORS, tl_tx_underrun(tl_stats)); if (tl_tx_underrun(tl_stats)) { u_int8_t tx_thresh; tx_thresh = tl_dio_read8(sc, TL_ACOMMIT) & TL_AC_TXTHRESH; if (tx_thresh != TL_AC_TXTHRESH_WHOLEPKT) { tx_thresh >>= 4; tx_thresh++; device_printf(sc->tl_dev, "tx underrun -- increasing " "tx threshold to %d bytes\n", (64 * (tx_thresh * 4))); tl_dio_clrbit(sc, TL_ACOMMIT, TL_AC_TXTHRESH); tl_dio_setbit(sc, TL_ACOMMIT, tx_thresh << 4); } } if (sc->tl_timer > 0 && --sc->tl_timer == 0) tl_watchdog(sc); callout_reset(&sc->tl_stat_callout, hz, tl_stats_update, sc); if (!sc->tl_bitrate) { mii = device_get_softc(sc->tl_miibus); mii_tick(mii); } } /* * Encapsulate an mbuf chain in a list by coupling the mbuf data * pointers to the fragment pointers. */ static int tl_encap(sc, c, m_head) struct tl_softc *sc; struct tl_chain *c; struct mbuf *m_head; { int frag = 0; struct tl_frag *f = NULL; int total_len; struct mbuf *m; struct ifnet *ifp = sc->tl_ifp; /* * Start packing the mbufs in this chain into * the fragment pointers. Stop when we run out * of fragments or hit the end of the mbuf chain. */ m = m_head; total_len = 0; for (m = m_head, frag = 0; m != NULL; m = m->m_next) { if (m->m_len != 0) { if (frag == TL_MAXFRAGS) break; total_len+= m->m_len; c->tl_ptr->tl_frag[frag].tlist_dadr = vtophys(mtod(m, vm_offset_t)); c->tl_ptr->tl_frag[frag].tlist_dcnt = m->m_len; frag++; } } /* * Handle special cases. * Special case #1: we used up all 10 fragments, but * we have more mbufs left in the chain. Copy the * data into an mbuf cluster. Note that we don't * bother clearing the values in the other fragment * pointers/counters; it wouldn't gain us anything, * and would waste cycles. 
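 *
 * (A similar effect could be had with m_defrag(9), which collapses
 * an mbuf chain into as few mbufs as possible in a single call; the
 * tx(4) driver later in this changeset uses it for exactly this
 * purpose when bus_dmamap_load_mbuf() returns EFBIG.)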
*/ if (m != NULL) { struct mbuf *m_new = NULL; MGETHDR(m_new, M_NOWAIT, MT_DATA); if (m_new == NULL) { if_printf(ifp, "no memory for tx list\n"); return(1); } if (m_head->m_pkthdr.len > MHLEN) { if (!(MCLGET(m_new, M_NOWAIT))) { m_freem(m_new); if_printf(ifp, "no memory for tx list\n"); return(1); } } m_copydata(m_head, 0, m_head->m_pkthdr.len, mtod(m_new, caddr_t)); m_new->m_pkthdr.len = m_new->m_len = m_head->m_pkthdr.len; m_freem(m_head); m_head = m_new; f = &c->tl_ptr->tl_frag[0]; f->tlist_dadr = vtophys(mtod(m_new, caddr_t)); f->tlist_dcnt = total_len = m_new->m_len; frag = 1; } /* * Special case #2: the frame is smaller than the minimum * frame size. We have to pad it to make the chip happy. */ if (total_len < TL_MIN_FRAMELEN) { if (frag == TL_MAXFRAGS) if_printf(ifp, "all frags filled but frame still too small!\n"); f = &c->tl_ptr->tl_frag[frag]; f->tlist_dcnt = TL_MIN_FRAMELEN - total_len; f->tlist_dadr = vtophys(&sc->tl_ldata->tl_pad); total_len += f->tlist_dcnt; frag++; } c->tl_mbuf = m_head; c->tl_ptr->tl_frag[frag - 1].tlist_dcnt |= TL_LAST_FRAG; c->tl_ptr->tlist_frsize = total_len; c->tl_ptr->tlist_cstat = TL_CSTAT_READY; c->tl_ptr->tlist_fptr = 0; return(0); } /* * Main transmit routine. To avoid having to do mbuf copies, we put pointers * to the mbuf data regions directly in the transmit lists. We also save a * copy of the pointers since the transmit list fragment pointers are * physical addresses. */ static void tl_start(ifp) struct ifnet *ifp; { struct tl_softc *sc; sc = ifp->if_softc; TL_LOCK(sc); tl_start_locked(ifp); TL_UNLOCK(sc); } static void tl_start_locked(ifp) struct ifnet *ifp; { struct tl_softc *sc; struct mbuf *m_head = NULL; u_int32_t cmd; struct tl_chain *prev = NULL, *cur_tx = NULL, *start_tx; sc = ifp->if_softc; TL_LOCK_ASSERT(sc); /* * Check for an available queue slot. If there are none, * punt. */ if (sc->tl_cdata.tl_tx_free == NULL) { ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } start_tx = sc->tl_cdata.tl_tx_free; while(sc->tl_cdata.tl_tx_free != NULL) { IF_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; /* Pick a chain member off the free list. */ cur_tx = sc->tl_cdata.tl_tx_free; sc->tl_cdata.tl_tx_free = cur_tx->tl_next; cur_tx->tl_next = NULL; /* Pack the data into the list. */ tl_encap(sc, cur_tx, m_head); /* Chain it together */ if (prev != NULL) { prev->tl_next = cur_tx; prev->tl_ptr->tlist_fptr = vtophys(cur_tx->tl_ptr); } prev = cur_tx; /* * If there's a BPF listener, bounce a copy of this frame * to him. */ BPF_MTAP(ifp, cur_tx->tl_mbuf); } /* * If there are no packets queued, bail. */ if (cur_tx == NULL) return; /* * That's all we can stands, we can't stands no more. * If there are no other transfers pending, then issue the * TX GO command to the adapter to start things moving. * Otherwise, just leave the data in the queue and let * the EOF/EOC interrupt handler send. */ if (sc->tl_cdata.tl_tx_head == NULL) { sc->tl_cdata.tl_tx_head = start_tx; sc->tl_cdata.tl_tx_tail = cur_tx; if (sc->tl_txeoc) { sc->tl_txeoc = 0; CSR_WRITE_4(sc, TL_CH_PARM, vtophys(start_tx->tl_ptr)); cmd = CSR_READ_4(sc, TL_HOSTCMD); cmd &= ~TL_CMD_RT; cmd |= TL_CMD_GO|TL_CMD_INTSON; CMD_PUT(sc, cmd); } } else { sc->tl_cdata.tl_tx_tail->tl_next = start_tx; sc->tl_cdata.tl_tx_tail = cur_tx; } /* * Set a timeout in case the chip goes out to lunch.
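 *
 * The timeout is a simple countdown: tl_stats_update() runs once a
 * second and decrements tl_timer, calling tl_watchdog() when it
 * reaches zero, while the TXEOC handler clears it again whenever
 * the transmitter makes forward progress.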
*/ sc->tl_timer = 5; } static void tl_init(xsc) void *xsc; { struct tl_softc *sc = xsc; TL_LOCK(sc); tl_init_locked(sc); TL_UNLOCK(sc); } static void tl_init_locked(sc) struct tl_softc *sc; { struct ifnet *ifp = sc->tl_ifp; struct mii_data *mii; TL_LOCK_ASSERT(sc); ifp = sc->tl_ifp; /* * Cancel pending I/O. */ tl_stop(sc); /* Initialize TX FIFO threshold */ tl_dio_clrbit(sc, TL_ACOMMIT, TL_AC_TXTHRESH); tl_dio_setbit(sc, TL_ACOMMIT, TL_AC_TXTHRESH_16LONG); /* Set PCI burst size */ tl_dio_write8(sc, TL_BSIZEREG, TL_RXBURST_16LONG|TL_TXBURST_16LONG); /* * Set 'capture all frames' bit for promiscuous mode. */ if (ifp->if_flags & IFF_PROMISC) tl_dio_setbit(sc, TL_NETCMD, TL_CMD_CAF); else tl_dio_clrbit(sc, TL_NETCMD, TL_CMD_CAF); /* * Set capture broadcast bit to capture broadcast frames. */ if (ifp->if_flags & IFF_BROADCAST) tl_dio_clrbit(sc, TL_NETCMD, TL_CMD_NOBRX); else tl_dio_setbit(sc, TL_NETCMD, TL_CMD_NOBRX); tl_dio_write16(sc, TL_MAXRX, MCLBYTES); /* Init our MAC address */ tl_setfilt(sc, IF_LLADDR(sc->tl_ifp), 0); /* Init multicast filter, if needed. */ tl_setmulti(sc); /* Init circular RX list. */ if (tl_list_rx_init(sc) == ENOBUFS) { device_printf(sc->tl_dev, "initialization failed: no memory for rx buffers\n"); tl_stop(sc); return; } /* Init TX pointers. */ tl_list_tx_init(sc); /* Enable PCI interrupts. */ CMD_SET(sc, TL_CMD_INTSON); /* Load the address of the rx list */ CMD_SET(sc, TL_CMD_RT); CSR_WRITE_4(sc, TL_CH_PARM, vtophys(&sc->tl_ldata->tl_rx_list[0])); if (!sc->tl_bitrate) { if (sc->tl_miibus != NULL) { mii = device_get_softc(sc->tl_miibus); mii_mediachg(mii); } } else { tl_ifmedia_upd(ifp); } /* Send the RX go command */ CMD_SET(sc, TL_CMD_GO|TL_CMD_NES|TL_CMD_RT); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; /* Start the stats update counter */ callout_reset(&sc->tl_stat_callout, hz, tl_stats_update, sc); } /* * Set media options. */ static int tl_ifmedia_upd(ifp) struct ifnet *ifp; { struct tl_softc *sc; struct mii_data *mii = NULL; sc = ifp->if_softc; TL_LOCK(sc); if (sc->tl_bitrate) tl_setmode(sc, sc->ifmedia.ifm_media); else { mii = device_get_softc(sc->tl_miibus); mii_mediachg(mii); } TL_UNLOCK(sc); return(0); } /* * Report current media status. 
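 *
 * For bitrate (PHY-less) chips the active media is derived from two
 * ACOMMIT register bits instead of a PHY; the mapping used below is
 * roughly:
 *
 *	TL_AC_MTXD1 set -> IFM_10_5 (AUI/BNC), otherwise IFM_10_T
 *	TL_AC_MTXD3 set -> IFM_HDX, otherwise IFM_FDX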
*/ static void tl_ifmedia_sts(ifp, ifmr) struct ifnet *ifp; struct ifmediareq *ifmr; { struct tl_softc *sc; struct mii_data *mii; sc = ifp->if_softc; TL_LOCK(sc); ifmr->ifm_active = IFM_ETHER; if (sc->tl_bitrate) { if (tl_dio_read8(sc, TL_ACOMMIT) & TL_AC_MTXD1) ifmr->ifm_active = IFM_ETHER|IFM_10_5; else ifmr->ifm_active = IFM_ETHER|IFM_10_T; if (tl_dio_read8(sc, TL_ACOMMIT) & TL_AC_MTXD3) ifmr->ifm_active |= IFM_HDX; else ifmr->ifm_active |= IFM_FDX; return; } else { mii = device_get_softc(sc->tl_miibus); mii_pollstat(mii); ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; } TL_UNLOCK(sc); } static int tl_ioctl(ifp, command, data) struct ifnet *ifp; u_long command; caddr_t data; { struct tl_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *) data; int error = 0; switch(command) { case SIOCSIFFLAGS: TL_LOCK(sc); if (ifp->if_flags & IFF_UP) { if (ifp->if_drv_flags & IFF_DRV_RUNNING && ifp->if_flags & IFF_PROMISC && !(sc->tl_if_flags & IFF_PROMISC)) { tl_dio_setbit(sc, TL_NETCMD, TL_CMD_CAF); tl_setmulti(sc); } else if (ifp->if_drv_flags & IFF_DRV_RUNNING && !(ifp->if_flags & IFF_PROMISC) && sc->tl_if_flags & IFF_PROMISC) { tl_dio_clrbit(sc, TL_NETCMD, TL_CMD_CAF); tl_setmulti(sc); } else tl_init_locked(sc); } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) { tl_stop(sc); } } sc->tl_if_flags = ifp->if_flags; TL_UNLOCK(sc); error = 0; break; case SIOCADDMULTI: case SIOCDELMULTI: TL_LOCK(sc); tl_setmulti(sc); TL_UNLOCK(sc); error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: if (sc->tl_bitrate) error = ifmedia_ioctl(ifp, ifr, &sc->ifmedia, command); else { struct mii_data *mii; mii = device_get_softc(sc->tl_miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); } break; default: error = ether_ioctl(ifp, command, data); break; } return(error); } static void tl_watchdog(sc) struct tl_softc *sc; { struct ifnet *ifp; TL_LOCK_ASSERT(sc); ifp = sc->tl_ifp; if_printf(ifp, "device timeout\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); tl_softreset(sc, 1); tl_init_locked(sc); } /* * Stop the adapter and free any mbufs allocated to the * RX and TX lists. */ static void tl_stop(sc) struct tl_softc *sc; { int i; struct ifnet *ifp; TL_LOCK_ASSERT(sc); ifp = sc->tl_ifp; /* Stop the stats updater. */ callout_stop(&sc->tl_stat_callout); /* Stop the transmitter */ CMD_CLR(sc, TL_CMD_RT); CMD_SET(sc, TL_CMD_STOP); CSR_WRITE_4(sc, TL_CH_PARM, 0); /* Stop the receiver */ CMD_SET(sc, TL_CMD_RT); CMD_SET(sc, TL_CMD_STOP); CSR_WRITE_4(sc, TL_CH_PARM, 0); /* * Disable host interrupts. */ CMD_SET(sc, TL_CMD_INTSOFF); /* * Clear list pointer. */ CSR_WRITE_4(sc, TL_CH_PARM, 0); /* * Free the RX lists. */ for (i = 0; i < TL_RX_LIST_CNT; i++) { if (sc->tl_cdata.tl_rx_chain[i].tl_mbuf != NULL) { m_freem(sc->tl_cdata.tl_rx_chain[i].tl_mbuf); sc->tl_cdata.tl_rx_chain[i].tl_mbuf = NULL; } } bzero((char *)&sc->tl_ldata->tl_rx_list, sizeof(sc->tl_ldata->tl_rx_list)); /* * Free the TX list buffers. */ for (i = 0; i < TL_TX_LIST_CNT; i++) { if (sc->tl_cdata.tl_tx_chain[i].tl_mbuf != NULL) { m_freem(sc->tl_cdata.tl_tx_chain[i].tl_mbuf); sc->tl_cdata.tl_tx_chain[i].tl_mbuf = NULL; } } bzero((char *)&sc->tl_ldata->tl_tx_list, sizeof(sc->tl_ldata->tl_tx_list)); ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); } /* * Stop all chip I/O so that the kernel's probe routines don't * get confused by errant DMAs when rebooting. 
*/ static int tl_shutdown(dev) device_t dev; { struct tl_softc *sc; sc = device_get_softc(dev); TL_LOCK(sc); tl_stop(sc); TL_UNLOCK(sc); return (0); } Index: stable/12/sys/dev/tx/if_tx.c =================================================================== --- stable/12/sys/dev/tx/if_tx.c (revision 339734) +++ stable/12/sys/dev/tx/if_tx.c (revision 339735) @@ -1,1856 +1,1858 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 1997 Semen Ustimenko (semenu@FreeBSD.org) * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * EtherPower II 10/100 Fast Ethernet (SMC 9432 serie) * * These cards are based on SMC83c17x (EPIC) chip and one of the various * PHYs (QS6612, AC101 and LXT970 were seen). The media support depends on * card model. All cards support 10baseT/UTP and 100baseTX half- and full- * duplex (SMB9432TX). SMC9432BTX also supports 10baseT/BNC. SMC9432FTX also * supports fibre optics. * * Thanks are going to Steve Bauer and Jason Wright. 
*/ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "miidevs.h" #include #include "miibus_if.h" #include #include MODULE_DEPEND(tx, pci, 1, 1, 1); MODULE_DEPEND(tx, ether, 1, 1, 1); MODULE_DEPEND(tx, miibus, 1, 1, 1); static int epic_ifioctl(struct ifnet *, u_long, caddr_t); static void epic_intr(void *); static void epic_tx_underrun(epic_softc_t *); static void epic_ifstart(struct ifnet *); static void epic_ifstart_locked(struct ifnet *); static void epic_timer(void *); static void epic_init(void *); static void epic_init_locked(epic_softc_t *); static void epic_stop(epic_softc_t *); static void epic_rx_done(epic_softc_t *); static void epic_tx_done(epic_softc_t *); static int epic_init_rings(epic_softc_t *); static void epic_free_rings(epic_softc_t *); static void epic_stop_activity(epic_softc_t *); static int epic_queue_last_packet(epic_softc_t *); static void epic_start_activity(epic_softc_t *); static void epic_set_rx_mode(epic_softc_t *); static void epic_set_tx_mode(epic_softc_t *); static void epic_set_mc_table(epic_softc_t *); static int epic_read_eeprom(epic_softc_t *,u_int16_t); static void epic_output_eepromw(epic_softc_t *, u_int16_t); static u_int16_t epic_input_eepromw(epic_softc_t *); static u_int8_t epic_eeprom_clock(epic_softc_t *,u_int8_t); static void epic_write_eepromreg(epic_softc_t *,u_int8_t); static u_int8_t epic_read_eepromreg(epic_softc_t *); static int epic_read_phy_reg(epic_softc_t *, int, int); static void epic_write_phy_reg(epic_softc_t *, int, int, int); static int epic_miibus_readreg(device_t, int, int); static int epic_miibus_writereg(device_t, int, int, int); static void epic_miibus_statchg(device_t); static void epic_miibus_mediainit(device_t); static int epic_ifmedia_upd(struct ifnet *); static int epic_ifmedia_upd_locked(struct ifnet *); static void epic_ifmedia_sts(struct ifnet *, struct ifmediareq *); static int epic_probe(device_t); static int epic_attach(device_t); static int epic_shutdown(device_t); static int epic_detach(device_t); static void epic_release(epic_softc_t *); static struct epic_type *epic_devtype(device_t); static device_method_t epic_methods[] = { /* Device interface */ DEVMETHOD(device_probe, epic_probe), DEVMETHOD(device_attach, epic_attach), DEVMETHOD(device_detach, epic_detach), DEVMETHOD(device_shutdown, epic_shutdown), /* MII interface */ DEVMETHOD(miibus_readreg, epic_miibus_readreg), DEVMETHOD(miibus_writereg, epic_miibus_writereg), DEVMETHOD(miibus_statchg, epic_miibus_statchg), DEVMETHOD(miibus_mediainit, epic_miibus_mediainit), { 0, 0 } }; static driver_t epic_driver = { "tx", epic_methods, sizeof(epic_softc_t) }; static devclass_t epic_devclass; DRIVER_MODULE(tx, pci, epic_driver, epic_devclass, 0, 0); DRIVER_MODULE(miibus, tx, miibus_driver, miibus_devclass, 0, 0); static struct epic_type epic_devs[] = { { SMC_VENDORID, SMC_DEVICEID_83C170, "SMC EtherPower II 10/100" }, { 0, 0, NULL } }; static int epic_probe(device_t dev) { struct epic_type *t; t = epic_devtype(dev); if (t != NULL) { device_set_desc(dev, t->name); return (BUS_PROBE_DEFAULT); } return (ENXIO); } static struct epic_type * epic_devtype(device_t dev) { struct epic_type *t; t = epic_devs; while (t->name != NULL) { if ((pci_get_vendor(dev) == t->ven_id) && (pci_get_device(dev) == t->dev_id)) { return (t); } t++; } return (NULL); } #ifdef EPIC_USEIOSPACE #define 
EPIC_RES SYS_RES_IOPORT #define EPIC_RID PCIR_BASEIO #else #define EPIC_RES SYS_RES_MEMORY #define EPIC_RID PCIR_BASEMEM #endif static void epic_dma_map_addr(void *arg, bus_dma_segment_t *segs, int nseg, int error) { u_int32_t *addr; if (error) return; KASSERT(nseg == 1, ("too many DMA segments, %d should be 1", nseg)); addr = arg; *addr = segs->ds_addr; } /* * Attach routine: map registers, allocate softc, rings and descriptors. * Reset to known state. */ static int epic_attach(device_t dev) { struct ifnet *ifp; epic_softc_t *sc; int error; int i, rid, tmp; u_char eaddr[6]; sc = device_get_softc(dev); /* Preinitialize softc structure. */ sc->dev = dev; mtx_init(&sc->lock, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); /* Fill ifnet structure. */ ifp = sc->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); error = ENOSPC; goto fail; } if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_softc = sc; ifp->if_flags = IFF_BROADCAST|IFF_SIMPLEX|IFF_MULTICAST; ifp->if_ioctl = epic_ifioctl; ifp->if_start = epic_ifstart; ifp->if_init = epic_init; IFQ_SET_MAXLEN(&ifp->if_snd, TX_RING_SIZE - 1); /* Enable busmastering. */ pci_enable_busmaster(dev); rid = EPIC_RID; sc->res = bus_alloc_resource_any(dev, EPIC_RES, &rid, RF_ACTIVE); if (sc->res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); error = ENXIO; goto fail; } /* Allocate interrupt. */ rid = 0; sc->irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } /* Allocate DMA tags. */ error = bus_dma_tag_create(bus_get_dma_tag(dev), 4, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, MCLBYTES * EPIC_MAX_FRAGS, EPIC_MAX_FRAGS, MCLBYTES, 0, NULL, NULL, &sc->mtag); if (error) { device_printf(dev, "couldn't allocate dma tag\n"); goto fail; } error = bus_dma_tag_create(bus_get_dma_tag(dev), 4, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, sizeof(struct epic_rx_desc) * RX_RING_SIZE, 1, sizeof(struct epic_rx_desc) * RX_RING_SIZE, 0, NULL, NULL, &sc->rtag); if (error) { device_printf(dev, "couldn't allocate dma tag\n"); goto fail; } error = bus_dma_tag_create(bus_get_dma_tag(dev), 4, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, sizeof(struct epic_tx_desc) * TX_RING_SIZE, 1, sizeof(struct epic_tx_desc) * TX_RING_SIZE, 0, NULL, NULL, &sc->ttag); if (error) { device_printf(dev, "couldn't allocate dma tag\n"); goto fail; } error = bus_dma_tag_create(bus_get_dma_tag(dev), 4, 0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, sizeof(struct epic_frag_list) * TX_RING_SIZE, 1, sizeof(struct epic_frag_list) * TX_RING_SIZE, 0, NULL, NULL, &sc->ftag); if (error) { device_printf(dev, "couldn't allocate dma tag\n"); goto fail; } /* Allocate DMA safe memory and get the DMA addresses. 
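 *
 * Each allocation follows the same busdma pattern: bus_dmamem_alloc()
 * provides the memory and bus_dmamap_load() reports the bus address
 * through the epic_dma_map_addr() callback above, which stores
 * segs[0].ds_addr via its 'arg' pointer.  In outline (generic names,
 * error handling omitted):
 *
 *	bus_dmamem_alloc(tag, &vaddr, BUS_DMA_NOWAIT | BUS_DMA_ZERO,
 *	    &map);
 *	bus_dmamap_load(tag, map, vaddr, size, epic_dma_map_addr,
 *	    &busaddr, 0);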
*/ error = bus_dmamem_alloc(sc->ftag, (void **)&sc->tx_flist, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->fmap); if (error) { device_printf(dev, "couldn't allocate dma memory\n"); goto fail; } error = bus_dmamap_load(sc->ftag, sc->fmap, sc->tx_flist, sizeof(struct epic_frag_list) * TX_RING_SIZE, epic_dma_map_addr, &sc->frag_addr, 0); if (error) { device_printf(dev, "couldn't map dma memory\n"); goto fail; } error = bus_dmamem_alloc(sc->ttag, (void **)&sc->tx_desc, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->tmap); if (error) { device_printf(dev, "couldn't allocate dma memory\n"); goto fail; } error = bus_dmamap_load(sc->ttag, sc->tmap, sc->tx_desc, sizeof(struct epic_tx_desc) * TX_RING_SIZE, epic_dma_map_addr, &sc->tx_addr, 0); if (error) { device_printf(dev, "couldn't map dma memory\n"); goto fail; } error = bus_dmamem_alloc(sc->rtag, (void **)&sc->rx_desc, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->rmap); if (error) { device_printf(dev, "couldn't allocate dma memory\n"); goto fail; } error = bus_dmamap_load(sc->rtag, sc->rmap, sc->rx_desc, sizeof(struct epic_rx_desc) * RX_RING_SIZE, epic_dma_map_addr, &sc->rx_addr, 0); if (error) { device_printf(dev, "couldn't map dma memory\n"); goto fail; } /* Bring the chip out of low-power mode. */ CSR_WRITE_4(sc, GENCTL, GENCTL_SOFT_RESET); DELAY(500); /* Workaround for Application Note 7-15. */ for (i = 0; i < 16; i++) CSR_WRITE_4(sc, TEST1, TEST1_CLOCK_TEST); /* Read MAC address from EEPROM. */ for (i = 0; i < ETHER_ADDR_LEN / sizeof(u_int16_t); i++) ((u_int16_t *)eaddr)[i] = epic_read_eeprom(sc,i); /* Set Non-Volatile Control Register from EEPROM. */ CSR_WRITE_4(sc, NVCTL, epic_read_eeprom(sc, EEPROM_NVCTL) & 0x1F); /* Set defaults. */ sc->tx_threshold = TRANSMIT_THRESHOLD; sc->txcon = TXCON_DEFAULT; sc->miicfg = MIICFG_SMI_ENABLE; sc->phyid = EPIC_UNKN_PHY; sc->serinst = -1; /* Fetch card id. */ sc->cardvend = pci_read_config(dev, PCIR_SUBVEND_0, 2); sc->cardid = pci_read_config(dev, PCIR_SUBDEV_0, 2); if (sc->cardvend != SMC_VENDORID) device_printf(dev, "unknown card vendor %04xh\n", sc->cardvend); /* Do ifmedia setup. */ error = mii_attach(dev, &sc->miibus, ifp, epic_ifmedia_upd, epic_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, 0); if (error != 0) { device_printf(dev, "attaching PHYs failed\n"); goto fail; } /* board type and ... */ printf(" type "); for(i = 0x2c; i < 0x32; i++) { tmp = epic_read_eeprom(sc, i); if (' ' == (u_int8_t)tmp) break; printf("%c", (u_int8_t)tmp); tmp >>= 8; if (' ' == (u_int8_t)tmp) break; printf("%c", (u_int8_t)tmp); } printf("\n"); /* Initialize rings. */ if (epic_init_rings(sc)) { device_printf(dev, "failed to init rings\n"); error = ENXIO; goto fail; } ifp->if_hdrlen = sizeof(struct ether_vlan_header); ifp->if_capabilities |= IFCAP_VLAN_MTU; ifp->if_capenable |= IFCAP_VLAN_MTU; callout_init_mtx(&sc->timer, &sc->lock, 0); /* Attach to OS's managers. */ ether_ifattach(ifp, eaddr); /* Activate our interrupt handler. */ error = bus_setup_intr(dev, sc->irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, epic_intr, sc, &sc->sc_ih); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } + gone_by_fcp101_dev(dev); + return (0); fail: epic_release(sc); return (error); } /* * Free any resources allocated by the driver. 
*/ static void epic_release(epic_softc_t *sc) { if (sc->ifp != NULL) if_free(sc->ifp); if (sc->irq) bus_release_resource(sc->dev, SYS_RES_IRQ, 0, sc->irq); if (sc->res) bus_release_resource(sc->dev, EPIC_RES, EPIC_RID, sc->res); epic_free_rings(sc); if (sc->tx_flist) { bus_dmamap_unload(sc->ftag, sc->fmap); bus_dmamem_free(sc->ftag, sc->tx_flist, sc->fmap); } if (sc->tx_desc) { bus_dmamap_unload(sc->ttag, sc->tmap); bus_dmamem_free(sc->ttag, sc->tx_desc, sc->tmap); } if (sc->rx_desc) { bus_dmamap_unload(sc->rtag, sc->rmap); bus_dmamem_free(sc->rtag, sc->rx_desc, sc->rmap); } if (sc->mtag) bus_dma_tag_destroy(sc->mtag); if (sc->ftag) bus_dma_tag_destroy(sc->ftag); if (sc->ttag) bus_dma_tag_destroy(sc->ttag); if (sc->rtag) bus_dma_tag_destroy(sc->rtag); mtx_destroy(&sc->lock); } /* * Detach driver and free resources. */ static int epic_detach(device_t dev) { struct ifnet *ifp; epic_softc_t *sc; sc = device_get_softc(dev); ifp = sc->ifp; EPIC_LOCK(sc); epic_stop(sc); EPIC_UNLOCK(sc); callout_drain(&sc->timer); ether_ifdetach(ifp); bus_teardown_intr(dev, sc->irq, sc->sc_ih); bus_generic_detach(dev); device_delete_child(dev, sc->miibus); epic_release(sc); return (0); } #undef EPIC_RES #undef EPIC_RID /* * Stop all chip I/O so that the kernel's probe routines don't * get confused by errant DMAs when rebooting. */ static int epic_shutdown(device_t dev) { epic_softc_t *sc; sc = device_get_softc(dev); EPIC_LOCK(sc); epic_stop(sc); EPIC_UNLOCK(sc); return (0); } /* * This is if_ioctl handler. */ static int epic_ifioctl(struct ifnet *ifp, u_long command, caddr_t data) { epic_softc_t *sc = ifp->if_softc; struct mii_data *mii; struct ifreq *ifr = (struct ifreq *) data; int error = 0; switch (command) { case SIOCSIFMTU: if (ifp->if_mtu == ifr->ifr_mtu) break; /* XXX Though the datasheet doesn't imply any * limitations on RX and TX sizes beside max 64Kb * DMA transfer, seems we can't send more then 1600 * data bytes per ethernet packet (transmitter hangs * up if more data is sent). */ EPIC_LOCK(sc); if (ifr->ifr_mtu + ifp->if_hdrlen <= EPIC_MAX_MTU) { ifp->if_mtu = ifr->ifr_mtu; epic_stop(sc); epic_init_locked(sc); } else error = EINVAL; EPIC_UNLOCK(sc); break; case SIOCSIFFLAGS: /* * If the interface is marked up and stopped, then start it. * If it is marked down and running, then stop it. */ EPIC_LOCK(sc); if (ifp->if_flags & IFF_UP) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { epic_init_locked(sc); EPIC_UNLOCK(sc); break; } } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) { epic_stop(sc); EPIC_UNLOCK(sc); break; } } /* Handle IFF_PROMISC and IFF_ALLMULTI flags. */ epic_stop_activity(sc); epic_set_mc_table(sc); epic_set_rx_mode(sc); epic_start_activity(sc); EPIC_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: EPIC_LOCK(sc); epic_set_mc_table(sc); EPIC_UNLOCK(sc); error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: mii = device_get_softc(sc->miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } static void epic_dma_map_txbuf(void *arg, bus_dma_segment_t *segs, int nseg, bus_size_t mapsize, int error) { struct epic_frag_list *flist; int i; if (error) return; KASSERT(nseg <= EPIC_MAX_FRAGS, ("too many DMA segments")); flist = arg; /* Fill fragments list. 
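 *
 * One (fraglen, fragaddr) pair is recorded per DMA segment; the
 * chip gathers the frame from this list, so no copying is needed
 * as long as the chain fits in EPIC_MAX_FRAGS segments.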
*/ for (i = 0; i < nseg; i++) { KASSERT(segs[i].ds_len <= MCLBYTES, ("segment size too large")); flist->frag[i].fraglen = segs[i].ds_len; flist->frag[i].fragaddr = segs[i].ds_addr; } flist->numfrags = nseg; } static void epic_dma_map_rxbuf(void *arg, bus_dma_segment_t *segs, int nseg, bus_size_t mapsize, int error) { struct epic_rx_desc *desc; if (error) return; KASSERT(nseg == 1, ("too many DMA segments")); desc = arg; desc->bufaddr = segs->ds_addr; } /* * This is the if_start handler. It takes mbufs from the if_snd queue * and queues them for transmission, one by one, until the TX ring * becomes full or the queue becomes empty. */ static void epic_ifstart(struct ifnet * ifp) { epic_softc_t *sc = ifp->if_softc; EPIC_LOCK(sc); epic_ifstart_locked(ifp); EPIC_UNLOCK(sc); } static void epic_ifstart_locked(struct ifnet * ifp) { epic_softc_t *sc = ifp->if_softc; struct epic_tx_buffer *buf; struct epic_tx_desc *desc; struct epic_frag_list *flist; struct mbuf *m0, *m; int error; while (sc->pending_txs < TX_RING_SIZE) { buf = sc->tx_buffer + sc->cur_tx; desc = sc->tx_desc + sc->cur_tx; flist = sc->tx_flist + sc->cur_tx; /* Get next packet to send. */ IF_DEQUEUE(&ifp->if_snd, m0); /* If nothing to send, return. */ if (m0 == NULL) return; error = bus_dmamap_load_mbuf(sc->mtag, buf->map, m0, epic_dma_map_txbuf, flist, 0); if (error && error != EFBIG) { m_freem(m0); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); continue; } /* * If the packet was split into more than EPIC_MAX_FRAGS parts, * recopy it to a newly allocated mbuf cluster. */ if (error) { m = m_defrag(m0, M_NOWAIT); if (m == NULL) { m_freem(m0); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); continue; } m_freem(m0); m0 = m; error = bus_dmamap_load_mbuf(sc->mtag, buf->map, m, epic_dma_map_txbuf, flist, 0); if (error) { m_freem(m); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); continue; } } bus_dmamap_sync(sc->mtag, buf->map, BUS_DMASYNC_PREWRITE); buf->mbuf = m0; sc->pending_txs++; sc->cur_tx = (sc->cur_tx + 1) & TX_RING_MASK; desc->control = 0x01; desc->txlength = max(m0->m_pkthdr.len, ETHER_MIN_LEN - ETHER_CRC_LEN); desc->status = 0x8000; bus_dmamap_sync(sc->ttag, sc->tmap, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->ftag, sc->fmap, BUS_DMASYNC_PREWRITE); CSR_WRITE_4(sc, COMMAND, COMMAND_TXQUEUED); /* Set watchdog timer. */ sc->tx_timeout = 8; BPF_MTAP(ifp, m0); } ifp->if_drv_flags |= IFF_DRV_OACTIVE; } /* * Synopsis: Finish all received frames. */ static void epic_rx_done(epic_softc_t *sc) { struct ifnet *ifp = sc->ifp; u_int16_t len; struct epic_rx_buffer *buf; struct epic_rx_desc *desc; struct mbuf *m; bus_dmamap_t map; int error; bus_dmamap_sync(sc->rtag, sc->rmap, BUS_DMASYNC_POSTREAD); while ((sc->rx_desc[sc->cur_rx].status & 0x8000) == 0) { buf = sc->rx_buffer + sc->cur_rx; desc = sc->rx_desc + sc->cur_rx; /* Switch to next descriptor. */ sc->cur_rx = (sc->cur_rx + 1) & RX_RING_MASK; /* * Check for RX errors. This should only happen if * SAVE_ERRORED_PACKETS is set. RX errors usually * generate an RXE interrupt. */ if ((desc->status & 1) == 0) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); desc->status = 0x8000; continue; } /* Save the packet length and the mbuf containing the packet. */ bus_dmamap_sync(sc->mtag, buf->map, BUS_DMASYNC_POSTREAD); len = desc->rxlength - ETHER_CRC_LEN; m = buf->mbuf; /* Try to get an mbuf cluster.
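 *
 * The strategy is allocate-before-hand-off: a replacement cluster
 * is obtained and DMA-mapped before the filled mbuf is passed up
 * the stack; on any failure the old buffer is recycled and the
 * frame is counted as an input error, so the RX ring always stays
 * fully populated.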
buf->mbuf = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (buf->mbuf == NULL) { buf->mbuf = m; desc->status = 0x8000; if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); continue; } buf->mbuf->m_len = buf->mbuf->m_pkthdr.len = MCLBYTES; m_adj(buf->mbuf, ETHER_ALIGN); /* Point to new mbuf, and give descriptor to chip. */ error = bus_dmamap_load_mbuf(sc->mtag, sc->sparemap, buf->mbuf, epic_dma_map_rxbuf, desc, 0); if (error) { buf->mbuf = m; desc->status = 0x8000; if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); continue; } desc->status = 0x8000; bus_dmamap_unload(sc->mtag, buf->map); map = buf->map; buf->map = sc->sparemap; sc->sparemap = map; bus_dmamap_sync(sc->mtag, buf->map, BUS_DMASYNC_PREREAD); /* First mbuf in packet holds the ethernet and packet headers */ m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = m->m_len = len; /* Give mbuf to OS. */ EPIC_UNLOCK(sc); (*ifp->if_input)(ifp, m); EPIC_LOCK(sc); /* Successfully received frame */ if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); } bus_dmamap_sync(sc->rtag, sc->rmap, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); } /* * Synopsis: Do the last phase of transmission: if a descriptor has been * transmitted, decrease the pending_txs counter, free the mbuf containing * the packet, switch to the next descriptor, and repeat until no packets * are pending or a descriptor has not been transmitted yet. */ static void epic_tx_done(epic_softc_t *sc) { struct epic_tx_buffer *buf; struct epic_tx_desc *desc; u_int16_t status; bus_dmamap_sync(sc->ttag, sc->tmap, BUS_DMASYNC_POSTREAD); while (sc->pending_txs > 0) { buf = sc->tx_buffer + sc->dirty_tx; desc = sc->tx_desc + sc->dirty_tx; status = desc->status; /* * If this packet has not been transmitted, the packets * that follow it have not been transmitted either. */ if (status & 0x8000) break; /* The packet has been transmitted. Switch to the next one and free the mbuf. */ sc->pending_txs--; sc->dirty_tx = (sc->dirty_tx + 1) & TX_RING_MASK; bus_dmamap_sync(sc->mtag, buf->map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->mtag, buf->map); m_freem(buf->mbuf); buf->mbuf = NULL; /* Check for errors and collisions. */ if (status & 0x0001) if_inc_counter(sc->ifp, IFCOUNTER_OPACKETS, 1); else if_inc_counter(sc->ifp, IFCOUNTER_OERRORS, 1); if_inc_counter(sc->ifp, IFCOUNTER_COLLISIONS, (status >> 8) & 0x1F); #ifdef EPIC_DIAG if ((status & 0x1001) == 0x1001) device_printf(sc->dev, "Tx ERROR: excessive coll.
number\n"); #endif } if (sc->pending_txs < TX_RING_SIZE) sc->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; bus_dmamap_sync(sc->ttag, sc->tmap, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); } /* * Interrupt function */ static void epic_intr(void *arg) { epic_softc_t *sc; int status, i; sc = arg; i = 4; EPIC_LOCK(sc); while (i-- && ((status = CSR_READ_4(sc, INTSTAT)) & INTSTAT_INT_ACTV)) { CSR_WRITE_4(sc, INTSTAT, status); if (status & (INTSTAT_RQE|INTSTAT_RCC|INTSTAT_OVW)) { epic_rx_done(sc); if (status & (INTSTAT_RQE|INTSTAT_OVW)) { #ifdef EPIC_DIAG if (status & INTSTAT_OVW) device_printf(sc->dev, "RX buffer overflow\n"); if (status & INTSTAT_RQE) device_printf(sc->dev, "RX FIFO overflow\n"); #endif if ((CSR_READ_4(sc, COMMAND) & COMMAND_RXQUEUED) == 0) CSR_WRITE_4(sc, COMMAND, COMMAND_RXQUEUED); if_inc_counter(sc->ifp, IFCOUNTER_IERRORS, 1); } } if (status & (INTSTAT_TXC|INTSTAT_TCC|INTSTAT_TQE)) { epic_tx_done(sc); if (sc->ifp->if_snd.ifq_head != NULL) epic_ifstart_locked(sc->ifp); } /* Check for rare errors */ if (status & (INTSTAT_FATAL|INTSTAT_PMA|INTSTAT_PTA| INTSTAT_APE|INTSTAT_DPE|INTSTAT_TXU|INTSTAT_RXE)) { if (status & (INTSTAT_FATAL|INTSTAT_PMA|INTSTAT_PTA| INTSTAT_APE|INTSTAT_DPE)) { device_printf(sc->dev, "PCI fatal errors occurred: %s%s%s%s\n", (status & INTSTAT_PMA) ? "PMA " : "", (status & INTSTAT_PTA) ? "PTA " : "", (status & INTSTAT_APE) ? "APE " : "", (status & INTSTAT_DPE) ? "DPE" : ""); epic_stop(sc); epic_init_locked(sc); break; } if (status & INTSTAT_RXE) { #ifdef EPIC_DIAG device_printf(sc->dev, "CRC/Alignment error\n"); #endif if_inc_counter(sc->ifp, IFCOUNTER_IERRORS, 1); } if (status & INTSTAT_TXU) { epic_tx_underrun(sc); if_inc_counter(sc->ifp, IFCOUNTER_OERRORS, 1); } } } /* If no packets are pending, then no timeouts. */ if (sc->pending_txs == 0) sc->tx_timeout = 0; EPIC_UNLOCK(sc); } /* * Handle the TX underrun error: increase the TX threshold * and restart the transmitter. */ static void epic_tx_underrun(epic_softc_t *sc) { if (sc->tx_threshold > TRANSMIT_THRESHOLD_MAX) { sc->txcon &= ~TXCON_EARLY_TRANSMIT_ENABLE; #ifdef EPIC_DIAG device_printf(sc->dev, "Tx UNDERRUN: early TX disabled\n"); #endif } else { sc->tx_threshold += 0x40; #ifdef EPIC_DIAG device_printf(sc->dev, "Tx UNDERRUN: TX threshold increased to %d\n", sc->tx_threshold); #endif } /* We must set TXUGO to reset the stuck transmitter. */ CSR_WRITE_4(sc, COMMAND, COMMAND_TXUGO); /* Update the TX threshold */ epic_stop_activity(sc); epic_set_tx_mode(sc); epic_start_activity(sc); } /* * This function is called once a second when the interface is running * and performs two functions. First, it provides a timer for the mii * to help with autonegotiation. Second, it checks for transmit * timeouts. */ static void epic_timer(void *arg) { epic_softc_t *sc = arg; struct mii_data *mii; struct ifnet *ifp; ifp = sc->ifp; EPIC_ASSERT_LOCKED(sc); if (sc->tx_timeout && --sc->tx_timeout == 0) { device_printf(sc->dev, "device timeout %d packets\n", sc->pending_txs); /* Try to finish queued packets. */ epic_tx_done(sc); /* If not successful. */ if (sc->pending_txs > 0) { if_inc_counter(ifp, IFCOUNTER_OERRORS, sc->pending_txs); /* Reinitialize board. */ device_printf(sc->dev, "reinitialization\n"); epic_stop(sc); epic_init_locked(sc); } else device_printf(sc->dev, "seems we can continue normaly\n"); /* Start output. */ if (ifp->if_snd.ifq_head) epic_ifstart_locked(ifp); } mii = device_get_softc(sc->miibus); mii_tick(mii); callout_reset(&sc->timer, hz, epic_timer, sc); } /* * Set media options. 
*/ static int epic_ifmedia_upd(struct ifnet *ifp) { epic_softc_t *sc; int error; sc = ifp->if_softc; EPIC_LOCK(sc); error = epic_ifmedia_upd_locked(ifp); EPIC_UNLOCK(sc); return (error); } static int epic_ifmedia_upd_locked(struct ifnet *ifp) { epic_softc_t *sc; struct mii_data *mii; struct ifmedia *ifm; struct mii_softc *miisc; int cfg, media; sc = ifp->if_softc; mii = device_get_softc(sc->miibus); ifm = &mii->mii_media; media = ifm->ifm_cur->ifm_media; /* Do not do anything if interface is not up. */ if ((ifp->if_flags & IFF_UP) == 0) return (0); /* * Lookup current selected PHY. */ if (IFM_INST(media) == sc->serinst) { sc->phyid = EPIC_SERIAL; sc->physc = NULL; } else { /* If we're not selecting serial interface, select MII mode. */ sc->miicfg &= ~MIICFG_SERIAL_ENABLE; CSR_WRITE_4(sc, MIICFG, sc->miicfg); /* Default to unknown PHY. */ sc->phyid = EPIC_UNKN_PHY; /* Lookup selected PHY. */ LIST_FOREACH(miisc, &mii->mii_phys, mii_list) { if (IFM_INST(media) == miisc->mii_inst) { sc->physc = miisc; break; } } /* Identify selected PHY. */ if (sc->physc) { int id1, id2, model, oui; id1 = PHY_READ(sc->physc, MII_PHYIDR1); id2 = PHY_READ(sc->physc, MII_PHYIDR2); oui = MII_OUI(id1, id2); model = MII_MODEL(id2); switch (oui) { case MII_OUI_xxQUALSEMI: if (model == MII_MODEL_xxQUALSEMI_QS6612) sc->phyid = EPIC_QS6612_PHY; break; case MII_OUI_ALTIMA: if (model == MII_MODEL_ALTIMA_AC101) sc->phyid = EPIC_AC101_PHY; break; case MII_OUI_xxLEVEL1: if (model == MII_MODEL_xxLEVEL1_LXT970) sc->phyid = EPIC_LXT970_PHY; break; } } } /* * Do PHY specific card setup. */ /* * Call this, to isolate all not selected PHYs and * set up selected. */ mii_mediachg(mii); /* Do our own setup. */ switch (sc->phyid) { case EPIC_QS6612_PHY: break; case EPIC_AC101_PHY: /* We have to powerup fiber tranceivers. */ if (IFM_SUBTYPE(media) == IFM_100_FX) sc->miicfg |= MIICFG_694_ENABLE; else sc->miicfg &= ~MIICFG_694_ENABLE; CSR_WRITE_4(sc, MIICFG, sc->miicfg); break; case EPIC_LXT970_PHY: /* We have to powerup fiber tranceivers. */ cfg = PHY_READ(sc->physc, MII_LXTPHY_CONFIG); if (IFM_SUBTYPE(media) == IFM_100_FX) cfg |= CONFIG_LEDC1 | CONFIG_LEDC0; else cfg &= ~(CONFIG_LEDC1 | CONFIG_LEDC0); PHY_WRITE(sc->physc, MII_LXTPHY_CONFIG, cfg); break; case EPIC_SERIAL: /* Select serial PHY (10base2/BNC usually). */ sc->miicfg |= MIICFG_694_ENABLE | MIICFG_SERIAL_ENABLE; CSR_WRITE_4(sc, MIICFG, sc->miicfg); /* There is no driver to fill this. */ mii->mii_media_active = media; mii->mii_media_status = 0; /* * We need to call this manually as it wasn't called * in mii_mediachg(). */ epic_miibus_statchg(sc->dev); break; default: device_printf(sc->dev, "ERROR! Unknown PHY selected\n"); return (EINVAL); } return (0); } /* * Report current media status. */ static void epic_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { epic_softc_t *sc; struct mii_data *mii; sc = ifp->if_softc; mii = device_get_softc(sc->miibus); EPIC_LOCK(sc); /* Nothing should be selected if interface is down. */ if ((ifp->if_flags & IFF_UP) == 0) { ifmr->ifm_active = IFM_NONE; ifmr->ifm_status = 0; EPIC_UNLOCK(sc); return; } /* Call underlying pollstat, if not serial PHY. */ if (sc->phyid != EPIC_SERIAL) mii_pollstat(mii); /* Simply copy media info. */ ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; EPIC_UNLOCK(sc); } /* * Callback routine, called on media change. 
*/ static void epic_miibus_statchg(device_t dev) { epic_softc_t *sc; struct mii_data *mii; int media; sc = device_get_softc(dev); mii = device_get_softc(sc->miibus); media = mii->mii_media_active; sc->txcon &= ~(TXCON_LOOPBACK_MODE | TXCON_FULL_DUPLEX); /* * If we are in full-duplex mode or loopback operation, * we need to decouple receiver and transmitter. */ if (IFM_OPTIONS(media) & (IFM_FDX | IFM_LOOP)) sc->txcon |= TXCON_FULL_DUPLEX; /* On some cards we need manualy set fullduplex led. */ if (sc->cardid == SMC9432FTX || sc->cardid == SMC9432FTX_SC) { if (IFM_OPTIONS(media) & IFM_FDX) sc->miicfg |= MIICFG_694_ENABLE; else sc->miicfg &= ~MIICFG_694_ENABLE; CSR_WRITE_4(sc, MIICFG, sc->miicfg); } epic_stop_activity(sc); epic_set_tx_mode(sc); epic_start_activity(sc); } static void epic_miibus_mediainit(device_t dev) { epic_softc_t *sc; struct mii_data *mii; struct ifmedia *ifm; int media; sc = device_get_softc(dev); mii = device_get_softc(sc->miibus); ifm = &mii->mii_media; /* * Add Serial Media Interface if present, this applies to * SMC9432BTX serie. */ if (CSR_READ_4(sc, MIICFG) & MIICFG_PHY_PRESENT) { /* Store its instance. */ sc->serinst = mii->mii_instance++; /* Add as 10base2/BNC media. */ media = IFM_MAKEWORD(IFM_ETHER, IFM_10_2, 0, sc->serinst); ifmedia_add(ifm, media, 0, NULL); /* Report to user. */ device_printf(sc->dev, "serial PHY detected (10Base2/BNC)\n"); } } /* * Reset chip and update media. */ static void epic_init(void *xsc) { epic_softc_t *sc = xsc; EPIC_LOCK(sc); epic_init_locked(sc); EPIC_UNLOCK(sc); } static void epic_init_locked(epic_softc_t *sc) { struct ifnet *ifp = sc->ifp; int i; /* If interface is already running, then we need not do anything. */ if (ifp->if_drv_flags & IFF_DRV_RUNNING) { return; } /* Soft reset the chip (we have to power up card before). */ CSR_WRITE_4(sc, GENCTL, 0); CSR_WRITE_4(sc, GENCTL, GENCTL_SOFT_RESET); /* * Reset takes 15 pci ticks which depends on PCI bus speed. * Assuming it >= 33000000 hz, we have wait at least 495e-6 sec. */ DELAY(500); /* Wake up */ CSR_WRITE_4(sc, GENCTL, 0); /* Workaround for Application Note 7-15 */ for (i = 0; i < 16; i++) CSR_WRITE_4(sc, TEST1, TEST1_CLOCK_TEST); /* Give rings to EPIC */ CSR_WRITE_4(sc, PRCDAR, sc->rx_addr); CSR_WRITE_4(sc, PTCDAR, sc->tx_addr); /* Put node address to EPIC. */ CSR_WRITE_4(sc, LAN0, ((u_int16_t *)IF_LLADDR(sc->ifp))[0]); CSR_WRITE_4(sc, LAN1, ((u_int16_t *)IF_LLADDR(sc->ifp))[1]); CSR_WRITE_4(sc, LAN2, ((u_int16_t *)IF_LLADDR(sc->ifp))[2]); /* Set tx mode, includeing transmit threshold. */ epic_set_tx_mode(sc); /* Compute and set RXCON. */ epic_set_rx_mode(sc); /* Set multicast table. */ epic_set_mc_table(sc); /* Enable interrupts by setting the interrupt mask. */ CSR_WRITE_4(sc, INTMASK, INTSTAT_RCC | /* INTSTAT_RQE | INTSTAT_OVW | INTSTAT_RXE | */ /* INTSTAT_TXC | */ INTSTAT_TCC | INTSTAT_TQE | INTSTAT_TXU | INTSTAT_FATAL); /* Acknowledge all pending interrupts. */ CSR_WRITE_4(sc, INTSTAT, CSR_READ_4(sc, INTSTAT)); /* Enable interrupts, set for PCI read multiple and etc */ CSR_WRITE_4(sc, GENCTL, GENCTL_ENABLE_INTERRUPT | GENCTL_MEMORY_READ_MULTIPLE | GENCTL_ONECOPY | GENCTL_RECEIVE_FIFO_THRESHOLD64); /* Mark interface running ... */ if (ifp->if_flags & IFF_UP) ifp->if_drv_flags |= IFF_DRV_RUNNING; else ifp->if_drv_flags &= ~IFF_DRV_RUNNING; /* ... 
and free */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; /* Start Rx process */ epic_start_activity(sc); /* Set appropriate media */ epic_ifmedia_upd_locked(ifp); callout_reset(&sc->timer, hz, epic_timer, sc); } /* * Synopsis: calculate and set Rx mode. Chip must be in idle state to * access RXCON. */ static void epic_set_rx_mode(epic_softc_t *sc) { u_int32_t flags; u_int32_t rxcon; flags = sc->ifp->if_flags; rxcon = RXCON_DEFAULT; #ifdef EPIC_EARLY_RX rxcon |= RXCON_EARLY_RX; #endif rxcon |= (flags & IFF_PROMISC) ? RXCON_PROMISCUOUS_MODE : 0; CSR_WRITE_4(sc, RXCON, rxcon); } /* * Synopsis: Set transmit control register. Chip must be in idle state to * access TXCON. */ static void epic_set_tx_mode(epic_softc_t *sc) { if (sc->txcon & TXCON_EARLY_TRANSMIT_ENABLE) CSR_WRITE_4(sc, ETXTHR, sc->tx_threshold); CSR_WRITE_4(sc, TXCON, sc->txcon); } /* * Synopsis: Program multicast filter honoring IFF_ALLMULTI and IFF_PROMISC * flags (note that setting the PROMISC bit in EPIC's RXCON will only touch * individual frames; the multicast filter must be manually programmed). * * Note: EPIC must be in idle state. */ static void epic_set_mc_table(epic_softc_t *sc) { struct ifnet *ifp; struct ifmultiaddr *ifma; u_int16_t filter[4]; u_int8_t h; ifp = sc->ifp; if (ifp->if_flags & (IFF_ALLMULTI | IFF_PROMISC)) { CSR_WRITE_4(sc, MC0, 0xFFFF); CSR_WRITE_4(sc, MC1, 0xFFFF); CSR_WRITE_4(sc, MC2, 0xFFFF); CSR_WRITE_4(sc, MC3, 0xFFFF); return; } filter[0] = 0; filter[1] = 0; filter[2] = 0; filter[3] = 0; if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; h = ether_crc32_be(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN) >> 26; filter[h >> 4] |= 1 << (h & 0xF); } if_maddr_runlock(ifp); CSR_WRITE_4(sc, MC0, filter[0]); CSR_WRITE_4(sc, MC1, filter[1]); CSR_WRITE_4(sc, MC2, filter[2]); CSR_WRITE_4(sc, MC3, filter[3]); } /* * Synopsis: Start the receive process, and the transmit process too if * packets are pending. */ static void epic_start_activity(epic_softc_t *sc) { /* Start rx process. */ CSR_WRITE_4(sc, COMMAND, COMMAND_RXQUEUED | COMMAND_START_RX | (sc->pending_txs ? COMMAND_TXQUEUED : 0)); } /* * Synopsis: Completely stop Rx and Tx processes. If TQE is set, an * additional packet needs to be queued to stop Tx DMA. */ static void epic_stop_activity(epic_softc_t *sc) { int status, i; /* Stop Tx and Rx DMA. */ CSR_WRITE_4(sc, COMMAND, COMMAND_STOP_RX | COMMAND_STOP_RDMA | COMMAND_STOP_TDMA); /* Wait for Rx and Tx DMA to stop (why 1 ms ??? XXX). */ for (i = 0; i < 0x1000; i++) { status = CSR_READ_4(sc, INTSTAT) & (INTSTAT_TXIDLE | INTSTAT_RXIDLE); if (status == (INTSTAT_TXIDLE | INTSTAT_RXIDLE)) break; DELAY(1); } /* Catch all finished packets. */ epic_rx_done(sc); epic_tx_done(sc); status = CSR_READ_4(sc, INTSTAT); if ((status & INTSTAT_RXIDLE) == 0) device_printf(sc->dev, "ERROR! Can't stop Rx DMA\n"); if ((status & INTSTAT_TXIDLE) == 0) device_printf(sc->dev, "ERROR! Can't stop Tx DMA\n"); /* * May need to queue one more packet if TQE is set; this is a rare * but real case. */ if ((status & INTSTAT_TQE) && !(status & INTSTAT_TXIDLE)) (void)epic_queue_last_packet(sc); } /* * The EPIC transmitter may get stuck in the TQE state. It will not go IDLE * until a packet from the current descriptor has been copied to internal * RAM. We compose a dummy packet here and queue it for transmission. * * XXX the packet will then actually be sent over the network...
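 *
 * The dummy is a zero-filled frame of minimum Ethernet payload
 * length (ETHER_MIN_LEN - ETHER_CRC_LEN), queued together with
 * COMMAND_STOP_TDMA so that the transmitter goes idle as soon as
 * it has copied the frame.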
*/ static int epic_queue_last_packet(epic_softc_t *sc) { struct epic_tx_desc *desc; struct epic_frag_list *flist; struct epic_tx_buffer *buf; struct mbuf *m0; int error, i; device_printf(sc->dev, "queue last packet\n"); desc = sc->tx_desc + sc->cur_tx; flist = sc->tx_flist + sc->cur_tx; buf = sc->tx_buffer + sc->cur_tx; if ((desc->status & 0x8000) || (buf->mbuf != NULL)) return (EBUSY); MGETHDR(m0, M_NOWAIT, MT_DATA); if (m0 == NULL) return (ENOBUFS); /* Prepare mbuf. */ m0->m_len = min(MHLEN, ETHER_MIN_LEN - ETHER_CRC_LEN); m0->m_pkthdr.len = m0->m_len; m0->m_pkthdr.rcvif = sc->ifp; bzero(mtod(m0, caddr_t), m0->m_len); /* Fill fragments list. */ error = bus_dmamap_load_mbuf(sc->mtag, buf->map, m0, epic_dma_map_txbuf, flist, 0); if (error) { m_freem(m0); return (error); } bus_dmamap_sync(sc->mtag, buf->map, BUS_DMASYNC_PREWRITE); /* Fill in descriptor. */ buf->mbuf = m0; sc->pending_txs++; sc->cur_tx = (sc->cur_tx + 1) & TX_RING_MASK; desc->control = 0x01; desc->txlength = max(m0->m_pkthdr.len, ETHER_MIN_LEN - ETHER_CRC_LEN); desc->status = 0x8000; bus_dmamap_sync(sc->ttag, sc->tmap, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->ftag, sc->fmap, BUS_DMASYNC_PREWRITE); /* Launch transmission. */ CSR_WRITE_4(sc, COMMAND, COMMAND_STOP_TDMA | COMMAND_TXQUEUED); /* Wait Tx DMA to stop (for how long??? XXX) */ for (i = 0; i < 1000; i++) { if (CSR_READ_4(sc, INTSTAT) & INTSTAT_TXIDLE) break; DELAY(1); } if ((CSR_READ_4(sc, INTSTAT) & INTSTAT_TXIDLE) == 0) device_printf(sc->dev, "ERROR! can't stop Tx DMA (2)\n"); else epic_tx_done(sc); return (0); } /* * Synopsis: Shut down board and deallocates rings. */ static void epic_stop(epic_softc_t *sc) { EPIC_ASSERT_LOCKED(sc); sc->tx_timeout = 0; callout_stop(&sc->timer); /* Disable interrupts */ CSR_WRITE_4(sc, INTMASK, 0); CSR_WRITE_4(sc, GENCTL, 0); /* Try to stop Rx and TX processes */ epic_stop_activity(sc); /* Reset chip */ CSR_WRITE_4(sc, GENCTL, GENCTL_SOFT_RESET); DELAY(1000); /* Make chip go to bed */ CSR_WRITE_4(sc, GENCTL, GENCTL_POWER_DOWN); /* Mark as stopped */ sc->ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); } /* * Synopsis: This function should free all memory allocated for rings. */ static void epic_free_rings(epic_softc_t *sc) { int i; for (i = 0; i < RX_RING_SIZE; i++) { struct epic_rx_buffer *buf = sc->rx_buffer + i; struct epic_rx_desc *desc = sc->rx_desc + i; desc->status = 0; desc->buflength = 0; desc->bufaddr = 0; if (buf->mbuf) { bus_dmamap_unload(sc->mtag, buf->map); bus_dmamap_destroy(sc->mtag, buf->map); m_freem(buf->mbuf); } buf->mbuf = NULL; } if (sc->sparemap != NULL) bus_dmamap_destroy(sc->mtag, sc->sparemap); for (i = 0; i < TX_RING_SIZE; i++) { struct epic_tx_buffer *buf = sc->tx_buffer + i; struct epic_tx_desc *desc = sc->tx_desc + i; desc->status = 0; desc->buflength = 0; desc->bufaddr = 0; if (buf->mbuf) { bus_dmamap_unload(sc->mtag, buf->map); bus_dmamap_destroy(sc->mtag, buf->map); m_freem(buf->mbuf); } buf->mbuf = NULL; } } /* * Synopsis: Allocates mbufs for Rx ring and point Rx descs to them. * Point Tx descs to fragment lists. Check that all descs and fraglists * are bounded and aligned properly. */ static int epic_init_rings(epic_softc_t *sc) { int error, i; sc->cur_rx = sc->cur_tx = sc->dirty_tx = sc->pending_txs = 0; /* Initialize the RX descriptor ring. 
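 *
 * Descriptors are chained through the bus addresses computed from
 * rx_addr/tx_addr; every 'next' pointer must be 4-byte aligned and
 * a descriptor must not straddle a page boundary, which the checks
 * below enforce (returning EFAULT on violation).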
*/ for (i = 0; i < RX_RING_SIZE; i++) { struct epic_rx_buffer *buf = sc->rx_buffer + i; struct epic_rx_desc *desc = sc->rx_desc + i; desc->status = 0; /* Owned by driver */ desc->next = sc->rx_addr + ((i + 1) & RX_RING_MASK) * sizeof(struct epic_rx_desc); if ((desc->next & 3) || ((desc->next & PAGE_MASK) + sizeof *desc) > PAGE_SIZE) { epic_free_rings(sc); return (EFAULT); } buf->mbuf = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (buf->mbuf == NULL) { epic_free_rings(sc); return (ENOBUFS); } buf->mbuf->m_len = buf->mbuf->m_pkthdr.len = MCLBYTES; m_adj(buf->mbuf, ETHER_ALIGN); error = bus_dmamap_create(sc->mtag, 0, &buf->map); if (error) { epic_free_rings(sc); return (error); } error = bus_dmamap_load_mbuf(sc->mtag, buf->map, buf->mbuf, epic_dma_map_rxbuf, desc, 0); if (error) { epic_free_rings(sc); return (error); } bus_dmamap_sync(sc->mtag, buf->map, BUS_DMASYNC_PREREAD); desc->buflength = buf->mbuf->m_len; /* Max RX buffer length */ desc->status = 0x8000; /* Set owner bit to NIC */ } bus_dmamap_sync(sc->rtag, sc->rmap, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* Create the spare DMA map. */ error = bus_dmamap_create(sc->mtag, 0, &sc->sparemap); if (error) { epic_free_rings(sc); return (error); } /* Initialize the TX descriptor ring. */ for (i = 0; i < TX_RING_SIZE; i++) { struct epic_tx_buffer *buf = sc->tx_buffer + i; struct epic_tx_desc *desc = sc->tx_desc + i; desc->status = 0; desc->next = sc->tx_addr + ((i + 1) & TX_RING_MASK) * sizeof(struct epic_tx_desc); if ((desc->next & 3) || ((desc->next & PAGE_MASK) + sizeof *desc) > PAGE_SIZE) { epic_free_rings(sc); return (EFAULT); } buf->mbuf = NULL; desc->bufaddr = sc->frag_addr + i * sizeof(struct epic_frag_list); if ((desc->bufaddr & 3) || ((desc->bufaddr & PAGE_MASK) + sizeof(struct epic_frag_list)) > PAGE_SIZE) { epic_free_rings(sc); return (EFAULT); } error = bus_dmamap_create(sc->mtag, 0, &buf->map); if (error) { epic_free_rings(sc); return (error); } } bus_dmamap_sync(sc->ttag, sc->tmap, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->ftag, sc->fmap, BUS_DMASYNC_PREWRITE); return (0); } /* * EEPROM operation functions */ static void epic_write_eepromreg(epic_softc_t *sc, u_int8_t val) { u_int16_t i; CSR_WRITE_1(sc, EECTL, val); for (i = 0; i < 0xFF; i++) { if ((CSR_READ_1(sc, EECTL) & 0x20) == 0) break; } } static u_int8_t epic_read_eepromreg(epic_softc_t *sc) { return (CSR_READ_1(sc, EECTL)); } static u_int8_t epic_eeprom_clock(epic_softc_t *sc, u_int8_t val) { epic_write_eepromreg(sc, val); epic_write_eepromreg(sc, (val | 0x4)); epic_write_eepromreg(sc, val); return (epic_read_eepromreg(sc)); } static void epic_output_eepromw(epic_softc_t *sc, u_int16_t val) { int i; for (i = 0xF; i >= 0; i--) { if (val & (1 << i)) epic_eeprom_clock(sc, 0x0B); else epic_eeprom_clock(sc, 0x03); } } static u_int16_t epic_input_eepromw(epic_softc_t *sc) { u_int16_t retval = 0; int i; for (i = 0xF; i >= 0; i--) { if (epic_eeprom_clock(sc, 0x3) & 0x10) retval |= (1 << i); } return (retval); } static int epic_read_eeprom(epic_softc_t *sc, u_int16_t loc) { u_int16_t dataval; u_int16_t read_cmd; epic_write_eepromreg(sc, 3); if (epic_read_eepromreg(sc) & 0x40) read_cmd = (loc & 0x3F) | 0x180; else read_cmd = (loc & 0xFF) | 0x600; epic_output_eepromw(sc, read_cmd); dataval = epic_input_eepromw(sc); epic_write_eepromreg(sc, 1); return (dataval); } /* * Here goes MII read/write routines. 
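 *
 * Access goes through two registers: MIICTL takes the register
 * number, the PHY address and a read (0x01) or write (0x02) strobe
 * bit, which the chip clears when the cycle completes; MIIDATA
 * carries the 16-bit value.  A read, in outline (the driver bounds
 * the spin):
 *
 *	CSR_WRITE_4(sc, MIICTL, (reg << 4) | (phy << 9) | 0x01);
 *	while (CSR_READ_4(sc, MIICTL) & 0x01)
 *		DELAY(1);
 *	val = CSR_READ_4(sc, MIIDATA);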
*/ static int epic_read_phy_reg(epic_softc_t *sc, int phy, int reg) { int i; CSR_WRITE_4(sc, MIICTL, ((reg << 4) | (phy << 9) | 0x01)); for (i = 0; i < 0x100; i++) { if ((CSR_READ_4(sc, MIICTL) & 0x01) == 0) break; DELAY(1); } return (CSR_READ_4(sc, MIIDATA)); } static void epic_write_phy_reg(epic_softc_t *sc, int phy, int reg, int val) { int i; CSR_WRITE_4(sc, MIIDATA, val); CSR_WRITE_4(sc, MIICTL, ((reg << 4) | (phy << 9) | 0x02)); for(i = 0; i < 0x100; i++) { if ((CSR_READ_4(sc, MIICTL) & 0x02) == 0) break; DELAY(1); } } static int epic_miibus_readreg(device_t dev, int phy, int reg) { epic_softc_t *sc; sc = device_get_softc(dev); return (PHY_READ_2(sc, phy, reg)); } static int epic_miibus_writereg(device_t dev, int phy, int reg, int data) { epic_softc_t *sc; sc = device_get_softc(dev); PHY_WRITE_2(sc, phy, reg, data); return (0); } Index: stable/12/sys/dev/txp/if_txp.c =================================================================== --- stable/12/sys/dev/txp/if_txp.c (revision 339734) +++ stable/12/sys/dev/txp/if_txp.c (revision 339735) @@ -1,3019 +1,3021 @@ /* $OpenBSD: if_txp.c,v 1.48 2001/06/27 06:34:50 kjc Exp $ */ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 2001 * Jason L. Wright , Theo de Raadt, and * Aaron Campbell . All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Jason L. Wright, * Theo de Raadt and Aaron Campbell. * 4. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Driver for 3c990 (Typhoon) Ethernet ASIC */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include MODULE_DEPEND(txp, pci, 1, 1, 1); MODULE_DEPEND(txp, ether, 1, 1, 1); /* * XXX Known Typhoon firmware issues. * * 1. It seems that firmware has Tx TCP/UDP checksum offloading bug. 
* The firmware hangs when it's told to compute TCP/UDP checksum. * I'm not sure whether the firmware requires special alignment to * do checksum offloading but datasheet says nothing about that. * 2. Datasheet says nothing for maximum number of fragmented * descriptors supported. Experimentation shows up to 16 fragment * descriptors are supported in the firmware. For TSO case, upper * stack can send 64KB sized IP datagram plus link header size( * ethernet header + VLAN tag) frame but controller can handle up * to 64KB frame given that PAGE_SIZE is 4KB(i.e. 16 * PAGE_SIZE). * Because frames that need TSO operation of hardware can be * larger than 64KB I disabled TSO capability. TSO operation for * less than or equal to 16 fragment descriptors works without * problems, though. * 3. VLAN hardware tag stripping is always enabled in the firmware * even if it's explicitly told to not strip the tag. It's * possible to add the tag back in Rx handler if VLAN hardware * tag is not active but I didn't try that as it would be * layering violation. * 4. TXP_CMD_RECV_BUFFER_CONTROL does not work as expected in * datasheet such that driver should handle the alignment * restriction by copying received frame to align the frame on * 32bit boundary on strict-alignment architectures. This adds a * lot of CPU burden and it effectively reduce Rx performance on * strict-alignment architectures(e.g. sparc64, arm and mips). * * Unfortunately it seems that 3Com have no longer interests in * releasing fixed firmware so we may have to live with these bugs. */ #define TXP_CSUM_FEATURES (CSUM_IP) /* * Various supported device vendors/types and their names. */ static struct txp_type txp_devs[] = { { TXP_VENDORID_3COM, TXP_DEVICEID_3CR990_TX_95, "3Com 3cR990-TX-95 Etherlink with 3XP Processor" }, { TXP_VENDORID_3COM, TXP_DEVICEID_3CR990_TX_97, "3Com 3cR990-TX-97 Etherlink with 3XP Processor" }, { TXP_VENDORID_3COM, TXP_DEVICEID_3CR990B_TXM, "3Com 3cR990B-TXM Etherlink with 3XP Processor" }, { TXP_VENDORID_3COM, TXP_DEVICEID_3CR990_SRV_95, "3Com 3cR990-SRV-95 Etherlink Server with 3XP Processor" }, { TXP_VENDORID_3COM, TXP_DEVICEID_3CR990_SRV_97, "3Com 3cR990-SRV-97 Etherlink Server with 3XP Processor" }, { TXP_VENDORID_3COM, TXP_DEVICEID_3CR990B_SRV, "3Com 3cR990B-SRV Etherlink Server with 3XP Processor" }, { 0, 0, NULL } }; static int txp_probe(device_t); static int txp_attach(device_t); static int txp_detach(device_t); static int txp_shutdown(device_t); static int txp_suspend(device_t); static int txp_resume(device_t); static int txp_intr(void *); static void txp_int_task(void *, int); static void txp_tick(void *); static int txp_ioctl(struct ifnet *, u_long, caddr_t); static uint64_t txp_get_counter(struct ifnet *, ift_counter); static void txp_start(struct ifnet *); static void txp_start_locked(struct ifnet *); static int txp_encap(struct txp_softc *, struct txp_tx_ring *, struct mbuf **); static void txp_stop(struct txp_softc *); static void txp_init(void *); static void txp_init_locked(struct txp_softc *); static void txp_watchdog(struct txp_softc *); static int txp_reset(struct txp_softc *); static int txp_boot(struct txp_softc *, uint32_t); static int txp_sleep(struct txp_softc *, int); static int txp_wait(struct txp_softc *, uint32_t); static int txp_download_fw(struct txp_softc *); static int txp_download_fw_wait(struct txp_softc *); static int txp_download_fw_section(struct txp_softc *, struct txp_fw_section_header *, int); static int txp_alloc_rings(struct txp_softc *); static void txp_init_rings(struct 
txp_softc *); static int txp_dma_alloc(struct txp_softc *, char *, bus_dma_tag_t *, bus_size_t, bus_size_t, bus_dmamap_t *, void **, bus_size_t, bus_addr_t *); static void txp_dma_free(struct txp_softc *, bus_dma_tag_t *, bus_dmamap_t, void **, bus_addr_t *); static void txp_free_rings(struct txp_softc *); static int txp_rxring_fill(struct txp_softc *); static void txp_rxring_empty(struct txp_softc *); static void txp_set_filter(struct txp_softc *); static int txp_cmd_desc_numfree(struct txp_softc *); static int txp_command(struct txp_softc *, uint16_t, uint16_t, uint32_t, uint32_t, uint16_t *, uint32_t *, uint32_t *, int); static int txp_ext_command(struct txp_softc *, uint16_t, uint16_t, uint32_t, uint32_t, struct txp_ext_desc *, uint8_t, struct txp_rsp_desc **, int); static int txp_response(struct txp_softc *, uint16_t, uint16_t, struct txp_rsp_desc **); static void txp_rsp_fixup(struct txp_softc *, struct txp_rsp_desc *, struct txp_rsp_desc *); static int txp_set_capabilities(struct txp_softc *); static void txp_ifmedia_sts(struct ifnet *, struct ifmediareq *); static int txp_ifmedia_upd(struct ifnet *); #ifdef TXP_DEBUG static void txp_show_descriptor(void *); #endif static void txp_tx_reclaim(struct txp_softc *, struct txp_tx_ring *); static void txp_rxbuf_reclaim(struct txp_softc *); #ifndef __NO_STRICT_ALIGNMENT static __inline void txp_fixup_rx(struct mbuf *); #endif static int txp_rx_reclaim(struct txp_softc *, struct txp_rx_ring *, int); static void txp_stats_save(struct txp_softc *); static void txp_stats_update(struct txp_softc *, struct txp_rsp_desc *); static void txp_sysctl_node(struct txp_softc *); static int sysctl_int_range(SYSCTL_HANDLER_ARGS, int, int); static int sysctl_hw_txp_proc_limit(SYSCTL_HANDLER_ARGS); static int prefer_iomap = 0; TUNABLE_INT("hw.txp.prefer_iomap", &prefer_iomap); static device_method_t txp_methods[] = { /* Device interface */ DEVMETHOD(device_probe, txp_probe), DEVMETHOD(device_attach, txp_attach), DEVMETHOD(device_detach, txp_detach), DEVMETHOD(device_shutdown, txp_shutdown), DEVMETHOD(device_suspend, txp_suspend), DEVMETHOD(device_resume, txp_resume), { NULL, NULL } }; static driver_t txp_driver = { "txp", txp_methods, sizeof(struct txp_softc) }; static devclass_t txp_devclass; DRIVER_MODULE(txp, pci, txp_driver, txp_devclass, 0, 0); static int txp_probe(device_t dev) { struct txp_type *t; t = txp_devs; while (t->txp_name != NULL) { if ((pci_get_vendor(dev) == t->txp_vid) && (pci_get_device(dev) == t->txp_did)) { device_set_desc(dev, t->txp_name); return (BUS_PROBE_DEFAULT); } t++; } return (ENXIO); } static int txp_attach(device_t dev) { struct txp_softc *sc; struct ifnet *ifp; struct txp_rsp_desc *rsp; uint16_t p1; uint32_t p2, reg; int error = 0, pmc, rid; uint8_t eaddr[ETHER_ADDR_LEN], *ver; sc = device_get_softc(dev); sc->sc_dev = dev; mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->sc_tick, &sc->sc_mtx, 0); TASK_INIT(&sc->sc_int_task, 0, txp_int_task, sc); TAILQ_INIT(&sc->sc_busy_list); TAILQ_INIT(&sc->sc_free_list); ifmedia_init(&sc->sc_ifmedia, 0, txp_ifmedia_upd, txp_ifmedia_sts); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_10_T, 0, NULL); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_10_T | IFM_HDX, 0, NULL); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_10_T | IFM_FDX, 0, NULL); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_100_TX, 0, NULL); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_100_TX | IFM_HDX, 0, NULL); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_100_TX | IFM_FDX, 0, 
NULL); ifmedia_add(&sc->sc_ifmedia, IFM_ETHER | IFM_AUTO, 0, NULL); pci_enable_busmaster(dev); /* Prefer memory space register mapping over IO space. */ if (prefer_iomap == 0) { sc->sc_res_id = PCIR_BAR(1); sc->sc_res_type = SYS_RES_MEMORY; } else { sc->sc_res_id = PCIR_BAR(0); sc->sc_res_type = SYS_RES_IOPORT; } sc->sc_res = bus_alloc_resource_any(dev, sc->sc_res_type, &sc->sc_res_id, RF_ACTIVE); if (sc->sc_res == NULL && prefer_iomap == 0) { sc->sc_res_id = PCIR_BAR(0); sc->sc_res_type = SYS_RES_IOPORT; sc->sc_res = bus_alloc_resource_any(dev, sc->sc_res_type, &sc->sc_res_id, RF_ACTIVE); } if (sc->sc_res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); ifmedia_removeall(&sc->sc_ifmedia); mtx_destroy(&sc->sc_mtx); return (ENXIO); } /* Enable MWI. */ reg = pci_read_config(dev, PCIR_COMMAND, 2); reg |= PCIM_CMD_MWRICEN; pci_write_config(dev, PCIR_COMMAND, reg, 2); /* Check cache line size. */ reg = pci_read_config(dev, PCIR_CACHELNSZ, 1); reg <<= 4; if (reg == 0 || (reg % 16) != 0) device_printf(sc->sc_dev, "invalid cache line size : %u\n", reg); /* Allocate interrupt */ rid = 0; sc->sc_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->sc_irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } if ((error = txp_alloc_rings(sc)) != 0) goto fail; txp_init_rings(sc); txp_sysctl_node(sc); /* Reset controller and make it reload sleep image. */ if (txp_reset(sc) != 0) { error = ENXIO; goto fail; } /* Let controller boot from sleep image. */ if (txp_boot(sc, STAT_WAITING_FOR_HOST_REQUEST) != 0) { device_printf(sc->sc_dev, "could not boot sleep image\n"); error = ENXIO; goto fail; } /* Get station address. */ if (txp_command(sc, TXP_CMD_STATION_ADDRESS_READ, 0, 0, 0, &p1, &p2, NULL, TXP_CMD_WAIT)) { error = ENXIO; goto fail; } p1 = le16toh(p1); eaddr[0] = ((uint8_t *)&p1)[1]; eaddr[1] = ((uint8_t *)&p1)[0]; p2 = le32toh(p2); eaddr[2] = ((uint8_t *)&p2)[3]; eaddr[3] = ((uint8_t *)&p2)[2]; eaddr[4] = ((uint8_t *)&p2)[1]; eaddr[5] = ((uint8_t *)&p2)[0]; ifp = sc->sc_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not allocate ifnet structure\n"); error = ENOSPC; goto fail; } /* * Show sleep image version information which may help to * diagnose sleep image specific issues. */ rsp = NULL; if (txp_ext_command(sc, TXP_CMD_VERSIONS_READ, 0, 0, 0, NULL, 0, &rsp, TXP_CMD_WAIT)) { device_printf(dev, "can not read sleep image version\n"); error = ENXIO; goto fail; } if (rsp->rsp_numdesc == 0) { p2 = le32toh(rsp->rsp_par2) & 0xFFFF; device_printf(dev, "Typhoon 1.0 sleep image (2000/%02u/%02u)\n", p2 >> 8, p2 & 0xFF); } else if (rsp->rsp_numdesc == 2) { p2 = le32toh(rsp->rsp_par2); ver = (uint8_t *)(rsp + 1); /* * Even if datasheet says the command returns a NULL * terminated version string, explicitly terminate * the string. Given that several bugs of firmware * I can't trust this simple one. 
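* Writing the terminator ourselves bounds every later string
* operation no matter what the firmware actually returned.  The
* general defensive pattern, for any device-filled buffer buf of
* buflen bytes, is simply:
*
*	buf[buflen - 1] = '\0';
*
* which is what the ver[25] store below does for this response.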
*/ ver[25] = '\0'; device_printf(dev, "Typhoon 1.1+ sleep image %02u.%03u.%03u %s\n", p2 >> 24, (p2 >> 12) & 0xFFF, p2 & 0xFFF, ver); } else { p2 = le32toh(rsp->rsp_par2); device_printf(dev, "Unknown Typhoon sleep image version: %u:0x%08x\n", rsp->rsp_numdesc, p2); } free(rsp, M_DEVBUF); sc->sc_xcvr = TXP_XCVR_AUTO; txp_command(sc, TXP_CMD_XCVR_SELECT, TXP_XCVR_AUTO, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); ifmedia_set(&sc->sc_ifmedia, IFM_ETHER | IFM_AUTO); ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = txp_ioctl; ifp->if_start = txp_start; ifp->if_init = txp_init; ifp->if_get_counter = txp_get_counter; ifp->if_snd.ifq_drv_maxlen = TX_ENTRIES - 1; IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen); IFQ_SET_READY(&ifp->if_snd); /* * It's possible to read firmware's offload capability but * we have not downloaded the firmware yet so announce * working capability here. We're not interested in IPSec * capability and due to the lots of firmware bug we can't * advertise the whole capability anyway. */ ifp->if_capabilities = IFCAP_RXCSUM | IFCAP_TXCSUM; if (pci_find_cap(dev, PCIY_PMG, &pmc) == 0) ifp->if_capabilities |= IFCAP_WOL_MAGIC; /* Enable all capabilities. */ ifp->if_capenable = ifp->if_capabilities; ether_ifattach(ifp, eaddr); /* VLAN capability setup. */ ifp->if_capabilities |= IFCAP_VLAN_MTU; ifp->if_capabilities |= IFCAP_VLAN_HWTAGGING | IFCAP_VLAN_HWCSUM; ifp->if_capenable = ifp->if_capabilities; /* Tell the upper layer(s) we support long frames. */ ifp->if_hdrlen = sizeof(struct ether_vlan_header); WRITE_REG(sc, TXP_IER, TXP_INTR_NONE); WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); /* Create local taskq. */ sc->sc_tq = taskqueue_create_fast("txp_taskq", M_WAITOK, taskqueue_thread_enqueue, &sc->sc_tq); if (sc->sc_tq == NULL) { device_printf(dev, "could not create taskqueue.\n"); ether_ifdetach(ifp); error = ENXIO; goto fail; } taskqueue_start_threads(&sc->sc_tq, 1, PI_NET, "%s taskq", device_get_nameunit(sc->sc_dev)); /* Put controller into sleep. */ if (txp_sleep(sc, 0) != 0) { ether_ifdetach(ifp); error = ENXIO; goto fail; } error = bus_setup_intr(dev, sc->sc_irq, INTR_TYPE_NET | INTR_MPSAFE, txp_intr, NULL, sc, &sc->sc_intrhand); if (error != 0) { ether_ifdetach(ifp); device_printf(dev, "couldn't set up interrupt handler.\n"); goto fail; } + gone_by_fcp101_dev(dev); + return (0); fail: if (error != 0) txp_detach(dev); return (error); } static int txp_detach(device_t dev) { struct txp_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); ifp = sc->sc_ifp; if (device_is_attached(dev)) { TXP_LOCK(sc); sc->sc_flags |= TXP_FLAG_DETACH; txp_stop(sc); TXP_UNLOCK(sc); callout_drain(&sc->sc_tick); taskqueue_drain(sc->sc_tq, &sc->sc_int_task); ether_ifdetach(ifp); } WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); ifmedia_removeall(&sc->sc_ifmedia); if (sc->sc_intrhand != NULL) bus_teardown_intr(dev, sc->sc_irq, sc->sc_intrhand); if (sc->sc_irq != NULL) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->sc_irq); if (sc->sc_res != NULL) bus_release_resource(dev, sc->sc_res_type, sc->sc_res_id, sc->sc_res); if (sc->sc_ifp != NULL) { if_free(sc->sc_ifp); sc->sc_ifp = NULL; } txp_free_rings(sc); mtx_destroy(&sc->sc_mtx); return (0); } static int txp_reset(struct txp_softc *sc) { uint32_t r; int i; /* Disable interrupts. */ WRITE_REG(sc, TXP_IER, TXP_INTR_NONE); WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); /* Ack all pending interrupts. 
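* Acking first ensures a stale latched status cannot fire the
* moment interrupts are unmasked again.  The reset sequence that
* follows pulses every block through TXP_SRR, then polls
* TXP_A2H_0 in 1 ms steps for up to six seconds until the boot
* ROM posts STAT_WAITING_FOR_HOST_REQUEST.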
*/ WRITE_REG(sc, TXP_ISR, TXP_INTR_ALL); r = 0; WRITE_REG(sc, TXP_SRR, TXP_SRR_ALL); DELAY(1000); WRITE_REG(sc, TXP_SRR, 0); /* Should wait max 6 seconds. */ for (i = 0; i < 6000; i++) { r = READ_REG(sc, TXP_A2H_0); if (r == STAT_WAITING_FOR_HOST_REQUEST) break; DELAY(1000); } if (r != STAT_WAITING_FOR_HOST_REQUEST) device_printf(sc->sc_dev, "reset hung\n"); WRITE_REG(sc, TXP_IER, TXP_INTR_NONE); WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); WRITE_REG(sc, TXP_ISR, TXP_INTR_ALL); /* * Give more time to complete loading sleep image before * trying to boot from sleep image. */ DELAY(5000); return (0); } static int txp_boot(struct txp_softc *sc, uint32_t state) { /* See if it's waiting for boot, and try to boot it. */ if (txp_wait(sc, state) != 0) { device_printf(sc->sc_dev, "not waiting for boot\n"); return (ENXIO); } WRITE_REG(sc, TXP_H2A_2, TXP_ADDR_HI(sc->sc_ldata.txp_boot_paddr)); TXP_BARRIER(sc, TXP_H2A_2, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_1, TXP_ADDR_LO(sc->sc_ldata.txp_boot_paddr)); TXP_BARRIER(sc, TXP_H2A_1, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_0, TXP_BOOTCMD_REGISTER_BOOT_RECORD); TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE); /* See if it booted. */ if (txp_wait(sc, STAT_RUNNING) != 0) { device_printf(sc->sc_dev, "firmware not running\n"); return (ENXIO); } /* Clear TX and CMD ring write registers. */ WRITE_REG(sc, TXP_H2A_1, TXP_BOOTCMD_NULL); TXP_BARRIER(sc, TXP_H2A_1, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_2, TXP_BOOTCMD_NULL); TXP_BARRIER(sc, TXP_H2A_2, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_3, TXP_BOOTCMD_NULL); TXP_BARRIER(sc, TXP_H2A_3, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_0, TXP_BOOTCMD_NULL); TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE); return (0); } static int txp_download_fw(struct txp_softc *sc) { struct txp_fw_file_header *fileheader; struct txp_fw_section_header *secthead; int sect; uint32_t error, ier, imr; TXP_LOCK_ASSERT(sc); error = 0; ier = READ_REG(sc, TXP_IER); WRITE_REG(sc, TXP_IER, ier | TXP_INT_A2H_0); imr = READ_REG(sc, TXP_IMR); WRITE_REG(sc, TXP_IMR, imr | TXP_INT_A2H_0); if (txp_wait(sc, STAT_WAITING_FOR_HOST_REQUEST) != 0) { device_printf(sc->sc_dev, "not waiting for host request\n"); error = ETIMEDOUT; goto fail; } /* Ack the status. */ WRITE_REG(sc, TXP_ISR, TXP_INT_A2H_0); fileheader = (struct txp_fw_file_header *)tc990image; if (bcmp("TYPHOON", fileheader->magicid, sizeof(fileheader->magicid))) { device_printf(sc->sc_dev, "firmware invalid magic\n"); goto fail; } /* Tell boot firmware to get ready for image. 
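* The host-to-ARM mailbox convention used here and in txp_boot()
* is: load the parameter registers H2A_1..H2A_6 first, then write
* the command code to H2A_0 last, with a write barrier after each
* store so the firmware can never observe a command before its
* arguments.  In outline:
*
*	WRITE_REG(sc, TXP_H2A_1, arg);
*	TXP_BARRIER(sc, TXP_H2A_1, 4, BUS_SPACE_BARRIER_WRITE);
*	WRITE_REG(sc, TXP_H2A_0, cmd);
*	TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE);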
*/ WRITE_REG(sc, TXP_H2A_1, le32toh(fileheader->addr)); TXP_BARRIER(sc, TXP_H2A_1, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_2, le32toh(fileheader->hmac[0])); TXP_BARRIER(sc, TXP_H2A_2, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_3, le32toh(fileheader->hmac[1])); TXP_BARRIER(sc, TXP_H2A_3, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_4, le32toh(fileheader->hmac[2])); TXP_BARRIER(sc, TXP_H2A_4, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_5, le32toh(fileheader->hmac[3])); TXP_BARRIER(sc, TXP_H2A_5, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_6, le32toh(fileheader->hmac[4])); TXP_BARRIER(sc, TXP_H2A_6, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_0, TXP_BOOTCMD_RUNTIME_IMAGE); TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE); if (txp_download_fw_wait(sc)) { device_printf(sc->sc_dev, "firmware wait failed, initial\n"); error = ETIMEDOUT; goto fail; } secthead = (struct txp_fw_section_header *)(((uint8_t *)tc990image) + sizeof(struct txp_fw_file_header)); for (sect = 0; sect < le32toh(fileheader->nsections); sect++) { if ((error = txp_download_fw_section(sc, secthead, sect)) != 0) goto fail; secthead = (struct txp_fw_section_header *) (((uint8_t *)secthead) + le32toh(secthead->nbytes) + sizeof(*secthead)); } WRITE_REG(sc, TXP_H2A_0, TXP_BOOTCMD_DOWNLOAD_COMPLETE); TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE); if (txp_wait(sc, STAT_WAITING_FOR_BOOT) != 0) { device_printf(sc->sc_dev, "not waiting for boot\n"); error = ETIMEDOUT; goto fail; } fail: WRITE_REG(sc, TXP_IER, ier); WRITE_REG(sc, TXP_IMR, imr); return (error); } static int txp_download_fw_wait(struct txp_softc *sc) { uint32_t i; TXP_LOCK_ASSERT(sc); for (i = 0; i < TXP_TIMEOUT; i++) { if ((READ_REG(sc, TXP_ISR) & TXP_INT_A2H_0) != 0) break; DELAY(50); } if (i == TXP_TIMEOUT) { device_printf(sc->sc_dev, "firmware wait failed comm0\n"); return (ETIMEDOUT); } WRITE_REG(sc, TXP_ISR, TXP_INT_A2H_0); if (READ_REG(sc, TXP_A2H_0) != STAT_WAITING_FOR_SEGMENT) { device_printf(sc->sc_dev, "firmware not waiting for segment\n"); return (ETIMEDOUT); } return (0); } static int txp_download_fw_section(struct txp_softc *sc, struct txp_fw_section_header *sect, int sectnum) { bus_dma_tag_t sec_tag; bus_dmamap_t sec_map; bus_addr_t sec_paddr; uint8_t *sec_buf; int rseg, err = 0; struct mbuf m; uint16_t csum; TXP_LOCK_ASSERT(sc); /* Skip zero length sections. */ if (le32toh(sect->nbytes) == 0) return (0); /* Make sure we aren't past the end of the image. */ rseg = ((uint8_t *)sect) - ((uint8_t *)tc990image); if (rseg >= sizeof(tc990image)) { device_printf(sc->sc_dev, "firmware invalid section address, section %d\n", sectnum); return (EIO); } /* Make sure this section doesn't go past the end. 
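* At this point rseg holds the section header's offset within the
* image, so adding nbytes yields the offset one past the payload;
* anything at or beyond sizeof(tc990image) means the header lied
* about the section length.  This is the per-section half of the
* walk performed by txp_download_fw(), which advances over the
* variable-length section table with
*
*	sect = (struct txp_fw_section_header *)
*	    ((uint8_t *)sect + sizeof(*sect) + le32toh(sect->nbytes));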
*/ rseg += le32toh(sect->nbytes); if (rseg >= sizeof(tc990image)) { device_printf(sc->sc_dev, "firmware truncated section %d\n", sectnum); return (EIO); } sec_tag = NULL; sec_map = NULL; sec_buf = NULL; /* XXX */ TXP_UNLOCK(sc); err = txp_dma_alloc(sc, "firmware sections", &sec_tag, sizeof(uint32_t), 0, &sec_map, (void **)&sec_buf, le32toh(sect->nbytes), &sec_paddr); TXP_LOCK(sc); if (err != 0) goto bail; bcopy(((uint8_t *)sect) + sizeof(*sect), sec_buf, le32toh(sect->nbytes)); /* * dummy up mbuf and verify section checksum */ m.m_type = MT_DATA; m.m_next = m.m_nextpkt = NULL; m.m_len = le32toh(sect->nbytes); m.m_data = sec_buf; m.m_flags = 0; csum = in_cksum(&m, le32toh(sect->nbytes)); if (csum != sect->cksum) { device_printf(sc->sc_dev, "firmware section %d, bad cksum (expected 0x%x got 0x%x)\n", sectnum, le16toh(sect->cksum), csum); err = EIO; goto bail; } bus_dmamap_sync(sec_tag, sec_map, BUS_DMASYNC_PREWRITE); WRITE_REG(sc, TXP_H2A_1, le32toh(sect->nbytes)); TXP_BARRIER(sc, TXP_H2A_1, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_2, le16toh(sect->cksum)); TXP_BARRIER(sc, TXP_H2A_2, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_3, le32toh(sect->addr)); TXP_BARRIER(sc, TXP_H2A_3, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_4, TXP_ADDR_HI(sec_paddr)); TXP_BARRIER(sc, TXP_H2A_4, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_5, TXP_ADDR_LO(sec_paddr)); TXP_BARRIER(sc, TXP_H2A_5, 4, BUS_SPACE_BARRIER_WRITE); WRITE_REG(sc, TXP_H2A_0, TXP_BOOTCMD_SEGMENT_AVAILABLE); TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE); if (txp_download_fw_wait(sc)) { device_printf(sc->sc_dev, "firmware wait failed, section %d\n", sectnum); err = ETIMEDOUT; } bus_dmamap_sync(sec_tag, sec_map, BUS_DMASYNC_POSTWRITE); bail: txp_dma_free(sc, &sec_tag, sec_map, (void **)&sec_buf, &sec_paddr); return (err); } static int txp_intr(void *vsc) { struct txp_softc *sc; uint32_t status; sc = vsc; status = READ_REG(sc, TXP_ISR); if ((status & TXP_INT_LATCH) == 0) return (FILTER_STRAY); WRITE_REG(sc, TXP_ISR, status); WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); taskqueue_enqueue(sc->sc_tq, &sc->sc_int_task); return (FILTER_HANDLED); } static void txp_int_task(void *arg, int pending) { struct txp_softc *sc; struct ifnet *ifp; struct txp_hostvar *hv; uint32_t isr; int more; sc = (struct txp_softc *)arg; TXP_LOCK(sc); ifp = sc->sc_ifp; hv = sc->sc_hostvar; isr = READ_REG(sc, TXP_ISR); if ((isr & TXP_INT_LATCH) != 0) WRITE_REG(sc, TXP_ISR, isr); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); more = 0; if ((*sc->sc_rxhir.r_roff) != (*sc->sc_rxhir.r_woff)) more += txp_rx_reclaim(sc, &sc->sc_rxhir, sc->sc_process_limit); if ((*sc->sc_rxlor.r_roff) != (*sc->sc_rxlor.r_woff)) more += txp_rx_reclaim(sc, &sc->sc_rxlor, sc->sc_process_limit); /* * XXX * It seems controller is not smart enough to handle * FIFO overflow conditions under heavy network load. * No matter how often new Rx buffers are passed to * controller the situation didn't change. Maybe * flow-control would be the only way to mitigate the * issue but firmware does not have commands that * control the threshold of emitting pause frames. 
*/ if (hv->hv_rx_buf_write_idx == hv->hv_rx_buf_read_idx) txp_rxbuf_reclaim(sc); if (sc->sc_txhir.r_cnt && (sc->sc_txhir.r_cons != TXP_OFFSET2IDX(le32toh(*(sc->sc_txhir.r_off))))) txp_tx_reclaim(sc, &sc->sc_txhir); if (sc->sc_txlor.r_cnt && (sc->sc_txlor.r_cons != TXP_OFFSET2IDX(le32toh(*(sc->sc_txlor.r_off))))) txp_tx_reclaim(sc, &sc->sc_txlor); bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) txp_start_locked(sc->sc_ifp); if (more != 0 || READ_REG(sc, TXP_ISR & TXP_INT_LATCH) != 0) { taskqueue_enqueue(sc->sc_tq, &sc->sc_int_task); TXP_UNLOCK(sc); return; } } /* Re-enable interrupts. */ WRITE_REG(sc, TXP_IMR, TXP_INTR_NONE); TXP_UNLOCK(sc); } #ifndef __NO_STRICT_ALIGNMENT static __inline void txp_fixup_rx(struct mbuf *m) { int i; uint16_t *src, *dst; src = mtod(m, uint16_t *); dst = src - (TXP_RXBUF_ALIGN - ETHER_ALIGN) / sizeof *src; for (i = 0; i < (m->m_len / sizeof(uint16_t) + 1); i++) *dst++ = *src++; m->m_data -= TXP_RXBUF_ALIGN - ETHER_ALIGN; } #endif static int txp_rx_reclaim(struct txp_softc *sc, struct txp_rx_ring *r, int count) { struct ifnet *ifp; struct txp_rx_desc *rxd; struct mbuf *m; struct txp_rx_swdesc *sd; uint32_t roff, woff, rx_stat, prog; TXP_LOCK_ASSERT(sc); ifp = sc->sc_ifp; bus_dmamap_sync(r->r_tag, r->r_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); roff = le32toh(*r->r_roff); woff = le32toh(*r->r_woff); rxd = r->r_desc + roff / sizeof(struct txp_rx_desc); for (prog = 0; roff != woff; prog++, count--) { if (count <= 0) break; bcopy((u_long *)&rxd->rx_vaddrlo, &sd, sizeof(sd)); KASSERT(sd != NULL, ("%s: Rx desc ring corrupted", __func__)); bus_dmamap_sync(sc->sc_cdata.txp_rx_tag, sd->sd_map, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->sc_cdata.txp_rx_tag, sd->sd_map); m = sd->sd_mbuf; KASSERT(m != NULL, ("%s: Rx buffer ring corrupted", __func__)); sd->sd_mbuf = NULL; TAILQ_REMOVE(&sc->sc_busy_list, sd, sd_next); TAILQ_INSERT_TAIL(&sc->sc_free_list, sd, sd_next); if ((rxd->rx_flags & RX_FLAGS_ERROR) != 0) { if (bootverbose) device_printf(sc->sc_dev, "Rx error %u\n", le32toh(rxd->rx_stat) & RX_ERROR_MASK); m_freem(m); goto next; } m->m_pkthdr.len = m->m_len = le16toh(rxd->rx_len); m->m_pkthdr.rcvif = ifp; #ifndef __NO_STRICT_ALIGNMENT txp_fixup_rx(m); #endif rx_stat = le32toh(rxd->rx_stat); if ((ifp->if_capenable & IFCAP_RXCSUM) != 0) { if ((rx_stat & RX_STAT_IPCKSUMBAD) != 0) m->m_pkthdr.csum_flags |= CSUM_IP_CHECKED; else if ((rx_stat & RX_STAT_IPCKSUMGOOD) != 0) m->m_pkthdr.csum_flags |= CSUM_IP_CHECKED|CSUM_IP_VALID; if ((rx_stat & RX_STAT_TCPCKSUMGOOD) != 0 || (rx_stat & RX_STAT_UDPCKSUMGOOD) != 0) { m->m_pkthdr.csum_flags |= CSUM_DATA_VALID | CSUM_PSEUDO_HDR; m->m_pkthdr.csum_data = 0xffff; } } /* * XXX * Typhoon has a firmware bug that VLAN tag is always * stripped out even if it is told to not remove the tag. * Therefore don't check if_capenable here. */ if (/* (ifp->if_capenable & IFCAP_VLAN_HWTAGGING) != 0 && */ (rx_stat & RX_STAT_VLAN) != 0) { m->m_pkthdr.ether_vtag = bswap16((le32toh(rxd->rx_vlan) >> 16)); m->m_flags |= M_VLANTAG; } TXP_UNLOCK(sc); (*ifp->if_input)(ifp, m); TXP_LOCK(sc); next: roff += sizeof(struct txp_rx_desc); if (roff == (RX_ENTRIES * sizeof(struct txp_rx_desc))) { roff = 0; rxd = r->r_desc; } else rxd++; prog++; } if (prog == 0) return (0); bus_dmamap_sync(r->r_tag, r->r_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); *r->r_roff = le32toh(roff); return (count > 0 ? 
0 : EAGAIN); } static void txp_rxbuf_reclaim(struct txp_softc *sc) { struct txp_hostvar *hv; struct txp_rxbuf_desc *rbd; struct txp_rx_swdesc *sd; bus_dma_segment_t segs[1]; int nsegs, prod, prog; uint32_t cons; TXP_LOCK_ASSERT(sc); hv = sc->sc_hostvar; cons = TXP_OFFSET2IDX(le32toh(hv->hv_rx_buf_read_idx)); prod = sc->sc_rxbufprod; TXP_DESC_INC(prod, RXBUF_ENTRIES); if (prod == cons) return; bus_dmamap_sync(sc->sc_cdata.txp_rxbufs_tag, sc->sc_cdata.txp_rxbufs_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); for (prog = 0; prod != cons; prog++) { sd = TAILQ_FIRST(&sc->sc_free_list); if (sd == NULL) break; rbd = sc->sc_rxbufs + prod; bcopy((u_long *)&rbd->rb_vaddrlo, &sd, sizeof(sd)); sd->sd_mbuf = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (sd->sd_mbuf == NULL) break; sd->sd_mbuf->m_pkthdr.len = sd->sd_mbuf->m_len = MCLBYTES; #ifndef __NO_STRICT_ALIGNMENT m_adj(sd->sd_mbuf, TXP_RXBUF_ALIGN); #endif if (bus_dmamap_load_mbuf_sg(sc->sc_cdata.txp_rx_tag, sd->sd_map, sd->sd_mbuf, segs, &nsegs, 0) != 0) { m_freem(sd->sd_mbuf); sd->sd_mbuf = NULL; break; } KASSERT(nsegs == 1, ("%s : %d segments returned!", __func__, nsegs)); TAILQ_REMOVE(&sc->sc_free_list, sd, sd_next); TAILQ_INSERT_TAIL(&sc->sc_busy_list, sd, sd_next); bus_dmamap_sync(sc->sc_cdata.txp_rx_tag, sd->sd_map, BUS_DMASYNC_PREREAD); rbd->rb_paddrlo = htole32(TXP_ADDR_LO(segs[0].ds_addr)); rbd->rb_paddrhi = htole32(TXP_ADDR_HI(segs[0].ds_addr)); TXP_DESC_INC(prod, RXBUF_ENTRIES); } if (prog == 0) return; bus_dmamap_sync(sc->sc_cdata.txp_rxbufs_tag, sc->sc_cdata.txp_rxbufs_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); prod = (prod + RXBUF_ENTRIES - 1) % RXBUF_ENTRIES; sc->sc_rxbufprod = prod; hv->hv_rx_buf_write_idx = htole32(TXP_IDX2OFFSET(prod)); } /* * Reclaim mbufs and entries from a transmit ring. */ static void txp_tx_reclaim(struct txp_softc *sc, struct txp_tx_ring *r) { struct ifnet *ifp; uint32_t idx; uint32_t cons, cnt; struct txp_tx_desc *txd; struct txp_swdesc *sd; TXP_LOCK_ASSERT(sc); bus_dmamap_sync(r->r_tag, r->r_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); ifp = sc->sc_ifp; idx = TXP_OFFSET2IDX(le32toh(*(r->r_off))); cons = r->r_cons; cnt = r->r_cnt; txd = r->r_desc + cons; sd = sc->sc_txd + cons; for (cnt = r->r_cnt; cons != idx && cnt > 0; cnt--) { if ((txd->tx_flags & TX_FLAGS_TYPE_M) == TX_FLAGS_TYPE_DATA) { if (sd->sd_mbuf != NULL) { bus_dmamap_sync(sc->sc_cdata.txp_tx_tag, sd->sd_map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->sc_cdata.txp_tx_tag, sd->sd_map); m_freem(sd->sd_mbuf); sd->sd_mbuf = NULL; txd->tx_addrlo = 0; txd->tx_addrhi = 0; txd->tx_flags = 0; } } ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; if (++cons == TX_ENTRIES) { txd = r->r_desc; cons = 0; sd = sc->sc_txd; } else { txd++; sd++; } } bus_dmamap_sync(r->r_tag, r->r_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); r->r_cons = cons; r->r_cnt = cnt; if (cnt == 0) sc->sc_watchdog_timer = 0; } static int txp_shutdown(device_t dev) { return (txp_suspend(dev)); } static int txp_suspend(device_t dev) { struct txp_softc *sc; struct ifnet *ifp; uint8_t *eaddr; uint16_t p1; uint32_t p2; int pmc; uint16_t pmstat; sc = device_get_softc(dev); TXP_LOCK(sc); ifp = sc->sc_ifp; txp_stop(sc); txp_init_rings(sc); /* Reset controller and make it reload sleep image. */ txp_reset(sc); /* Let controller boot from sleep image. */ if (txp_boot(sc, STAT_WAITING_FOR_HOST_REQUEST) != 0) device_printf(sc->sc_dev, "couldn't boot sleep image\n"); /* Set station address. 
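* The firmware takes the MAC address as two little-endian command
* parameters with the leading octet in the high byte of each word:
* p1 carries octets 0-1 and p2 carries octets 2-5.  For example
* (illustrative address), 00:04:76:12:34:56 is passed as
* p1 = 0x0004 and p2 = 0x76123456; the byte stores below build
* exactly that layout before the le16toh()/le32toh() fixups.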
*/ eaddr = IF_LLADDR(sc->sc_ifp); p1 = 0; ((uint8_t *)&p1)[1] = eaddr[0]; ((uint8_t *)&p1)[0] = eaddr[1]; p1 = le16toh(p1); ((uint8_t *)&p2)[3] = eaddr[2]; ((uint8_t *)&p2)[2] = eaddr[3]; ((uint8_t *)&p2)[1] = eaddr[4]; ((uint8_t *)&p2)[0] = eaddr[5]; p2 = le32toh(p2); txp_command(sc, TXP_CMD_STATION_ADDRESS_WRITE, p1, p2, 0, NULL, NULL, NULL, TXP_CMD_WAIT); txp_set_filter(sc); WRITE_REG(sc, TXP_IER, TXP_INTR_NONE); WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); txp_sleep(sc, sc->sc_ifp->if_capenable); if (pci_find_cap(sc->sc_dev, PCIY_PMG, &pmc) == 0) { /* Request PME. */ pmstat = pci_read_config(sc->sc_dev, pmc + PCIR_POWER_STATUS, 2); pmstat &= ~(PCIM_PSTAT_PME | PCIM_PSTAT_PMEENABLE); if ((ifp->if_capenable & IFCAP_WOL) != 0) pmstat |= PCIM_PSTAT_PME | PCIM_PSTAT_PMEENABLE; pci_write_config(sc->sc_dev, pmc + PCIR_POWER_STATUS, pmstat, 2); } TXP_UNLOCK(sc); return (0); } static int txp_resume(device_t dev) { struct txp_softc *sc; int pmc; uint16_t pmstat; sc = device_get_softc(dev); TXP_LOCK(sc); if (pci_find_cap(sc->sc_dev, PCIY_PMG, &pmc) == 0) { /* Disable PME and clear PME status. */ pmstat = pci_read_config(sc->sc_dev, pmc + PCIR_POWER_STATUS, 2); if ((pmstat & PCIM_PSTAT_PMEENABLE) != 0) { pmstat &= ~PCIM_PSTAT_PMEENABLE; pci_write_config(sc->sc_dev, pmc + PCIR_POWER_STATUS, pmstat, 2); } } if ((sc->sc_ifp->if_flags & IFF_UP) != 0) txp_init_locked(sc); TXP_UNLOCK(sc); return (0); } struct txp_dmamap_arg { bus_addr_t txp_busaddr; }; static void txp_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct txp_dmamap_arg *ctx; if (error != 0) return; KASSERT(nsegs == 1, ("%s: %d segments returned!", __func__, nsegs)); ctx = (struct txp_dmamap_arg *)arg; ctx->txp_busaddr = segs[0].ds_addr; } static int txp_dma_alloc(struct txp_softc *sc, char *type, bus_dma_tag_t *tag, bus_size_t alignment, bus_size_t boundary, bus_dmamap_t *map, void **buf, bus_size_t size, bus_addr_t *paddr) { struct txp_dmamap_arg ctx; int error; /* Create DMA block tag. */ error = bus_dma_tag_create( sc->sc_cdata.txp_parent_tag, /* parent */ alignment, boundary, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ size, /* maxsize */ 1, /* nsegments */ size, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ tag); if (error != 0) { device_printf(sc->sc_dev, "could not create DMA tag for %s.\n", type); return (error); } *paddr = 0; /* Allocate DMA'able memory and load the DMA map. 
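* bus_dmamap_load() hands the resolved physical segments to a
* callback rather than returning them, so the bus address must be
* captured into a context structure.  Simplified sketch of that
* contract (txp_dmamap_cb above does the same thing through
* struct txp_dmamap_arg):
*
*	static void
*	cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
*	{
*		if (error == 0)
*			*(bus_addr_t *)arg = segs[0].ds_addr;
*	}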
*/ error = bus_dmamem_alloc(*tag, buf, BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, map); if (error != 0) { device_printf(sc->sc_dev, "could not allocate DMA'able memory for %s.\n", type); return (error); } ctx.txp_busaddr = 0; error = bus_dmamap_load(*tag, *map, *(uint8_t **)buf, size, txp_dmamap_cb, &ctx, BUS_DMA_NOWAIT); if (error != 0 || ctx.txp_busaddr == 0) { device_printf(sc->sc_dev, "could not load DMA'able memory for %s.\n", type); return (error); } *paddr = ctx.txp_busaddr; return (0); } static void txp_dma_free(struct txp_softc *sc, bus_dma_tag_t *tag, bus_dmamap_t map, void **buf, bus_addr_t *paddr) { if (*tag != NULL) { if (*paddr != 0) bus_dmamap_unload(*tag, map); if (buf != NULL) bus_dmamem_free(*tag, *(uint8_t **)buf, map); *(uint8_t **)buf = NULL; *paddr = 0; bus_dma_tag_destroy(*tag); *tag = NULL; } } static int txp_alloc_rings(struct txp_softc *sc) { struct txp_boot_record *boot; struct txp_ldata *ld; struct txp_swdesc *txd; struct txp_rxbuf_desc *rbd; struct txp_rx_swdesc *sd; int error, i; ld = &sc->sc_ldata; boot = ld->txp_boot; /* boot record */ sc->sc_boot = boot; /* * Create parent ring/DMA block tag. * Datasheet says that all ring addresses and descriptors * support 64bits addressing. However the controller is * known to have no support DAC so limit DMA address space * to 32bits. */ error = bus_dma_tag_create( bus_get_dma_tag(sc->sc_dev), /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE_32BIT, /* maxsize */ 0, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sc_cdata.txp_parent_tag); if (error != 0) { device_printf(sc->sc_dev, "could not create parent DMA tag.\n"); return (error); } /* Boot record. */ error = txp_dma_alloc(sc, "boot record", &sc->sc_cdata.txp_boot_tag, sizeof(uint32_t), 0, &sc->sc_cdata.txp_boot_map, (void **)&sc->sc_ldata.txp_boot, sizeof(struct txp_boot_record), &sc->sc_ldata.txp_boot_paddr); if (error != 0) return (error); boot = sc->sc_ldata.txp_boot; sc->sc_boot = boot; /* Host variables. */ error = txp_dma_alloc(sc, "host variables", &sc->sc_cdata.txp_hostvar_tag, sizeof(uint32_t), 0, &sc->sc_cdata.txp_hostvar_map, (void **)&sc->sc_ldata.txp_hostvar, sizeof(struct txp_hostvar), &sc->sc_ldata.txp_hostvar_paddr); if (error != 0) return (error); boot->br_hostvar_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_hostvar_paddr)); boot->br_hostvar_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_hostvar_paddr)); sc->sc_hostvar = sc->sc_ldata.txp_hostvar; /* Hi priority tx ring. */ error = txp_dma_alloc(sc, "hi priority tx ring", &sc->sc_cdata.txp_txhiring_tag, sizeof(struct txp_tx_desc), 0, &sc->sc_cdata.txp_txhiring_map, (void **)&sc->sc_ldata.txp_txhiring, sizeof(struct txp_tx_desc) * TX_ENTRIES, &sc->sc_ldata.txp_txhiring_paddr); if (error != 0) return (error); boot->br_txhipri_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_txhiring_paddr)); boot->br_txhipri_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_txhiring_paddr)); boot->br_txhipri_siz = htole32(TX_ENTRIES * sizeof(struct txp_tx_desc)); sc->sc_txhir.r_tag = sc->sc_cdata.txp_txhiring_tag; sc->sc_txhir.r_map = sc->sc_cdata.txp_txhiring_map; sc->sc_txhir.r_reg = TXP_H2A_1; sc->sc_txhir.r_desc = sc->sc_ldata.txp_txhiring; sc->sc_txhir.r_cons = sc->sc_txhir.r_prod = sc->sc_txhir.r_cnt = 0; sc->sc_txhir.r_off = &sc->sc_hostvar->hv_tx_hi_desc_read_idx; /* Low priority tx ring. 
*/ error = txp_dma_alloc(sc, "low priority tx ring", &sc->sc_cdata.txp_txloring_tag, sizeof(struct txp_tx_desc), 0, &sc->sc_cdata.txp_txloring_map, (void **)&sc->sc_ldata.txp_txloring, sizeof(struct txp_tx_desc) * TX_ENTRIES, &sc->sc_ldata.txp_txloring_paddr); if (error != 0) return (error); boot->br_txlopri_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_txloring_paddr)); boot->br_txlopri_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_txloring_paddr)); boot->br_txlopri_siz = htole32(TX_ENTRIES * sizeof(struct txp_tx_desc)); sc->sc_txlor.r_tag = sc->sc_cdata.txp_txloring_tag; sc->sc_txlor.r_map = sc->sc_cdata.txp_txloring_map; sc->sc_txlor.r_reg = TXP_H2A_3; sc->sc_txlor.r_desc = sc->sc_ldata.txp_txloring; sc->sc_txlor.r_cons = sc->sc_txlor.r_prod = sc->sc_txlor.r_cnt = 0; sc->sc_txlor.r_off = &sc->sc_hostvar->hv_tx_lo_desc_read_idx; /* High priority rx ring. */ error = txp_dma_alloc(sc, "hi priority rx ring", &sc->sc_cdata.txp_rxhiring_tag, roundup(sizeof(struct txp_rx_desc), 16), 0, &sc->sc_cdata.txp_rxhiring_map, (void **)&sc->sc_ldata.txp_rxhiring, sizeof(struct txp_rx_desc) * RX_ENTRIES, &sc->sc_ldata.txp_rxhiring_paddr); if (error != 0) return (error); boot->br_rxhipri_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_rxhiring_paddr)); boot->br_rxhipri_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_rxhiring_paddr)); boot->br_rxhipri_siz = htole32(RX_ENTRIES * sizeof(struct txp_rx_desc)); sc->sc_rxhir.r_tag = sc->sc_cdata.txp_rxhiring_tag; sc->sc_rxhir.r_map = sc->sc_cdata.txp_rxhiring_map; sc->sc_rxhir.r_desc = sc->sc_ldata.txp_rxhiring; sc->sc_rxhir.r_roff = &sc->sc_hostvar->hv_rx_hi_read_idx; sc->sc_rxhir.r_woff = &sc->sc_hostvar->hv_rx_hi_write_idx; /* Low priority rx ring. */ error = txp_dma_alloc(sc, "low priority rx ring", &sc->sc_cdata.txp_rxloring_tag, roundup(sizeof(struct txp_rx_desc), 16), 0, &sc->sc_cdata.txp_rxloring_map, (void **)&sc->sc_ldata.txp_rxloring, sizeof(struct txp_rx_desc) * RX_ENTRIES, &sc->sc_ldata.txp_rxloring_paddr); if (error != 0) return (error); boot->br_rxlopri_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_rxloring_paddr)); boot->br_rxlopri_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_rxloring_paddr)); boot->br_rxlopri_siz = htole32(RX_ENTRIES * sizeof(struct txp_rx_desc)); sc->sc_rxlor.r_tag = sc->sc_cdata.txp_rxloring_tag; sc->sc_rxlor.r_map = sc->sc_cdata.txp_rxloring_map; sc->sc_rxlor.r_desc = sc->sc_ldata.txp_rxloring; sc->sc_rxlor.r_roff = &sc->sc_hostvar->hv_rx_lo_read_idx; sc->sc_rxlor.r_woff = &sc->sc_hostvar->hv_rx_lo_write_idx; /* Command ring. */ error = txp_dma_alloc(sc, "command ring", &sc->sc_cdata.txp_cmdring_tag, sizeof(struct txp_cmd_desc), 0, &sc->sc_cdata.txp_cmdring_map, (void **)&sc->sc_ldata.txp_cmdring, sizeof(struct txp_cmd_desc) * CMD_ENTRIES, &sc->sc_ldata.txp_cmdring_paddr); if (error != 0) return (error); boot->br_cmd_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_cmdring_paddr)); boot->br_cmd_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_cmdring_paddr)); boot->br_cmd_siz = htole32(CMD_ENTRIES * sizeof(struct txp_cmd_desc)); sc->sc_cmdring.base = sc->sc_ldata.txp_cmdring; sc->sc_cmdring.size = CMD_ENTRIES * sizeof(struct txp_cmd_desc); sc->sc_cmdring.lastwrite = 0; /* Response ring. 
*/ error = txp_dma_alloc(sc, "response ring", &sc->sc_cdata.txp_rspring_tag, sizeof(struct txp_rsp_desc), 0, &sc->sc_cdata.txp_rspring_map, (void **)&sc->sc_ldata.txp_rspring, sizeof(struct txp_rsp_desc) * RSP_ENTRIES, &sc->sc_ldata.txp_rspring_paddr); if (error != 0) return (error); boot->br_resp_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_rspring_paddr)); boot->br_resp_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_rspring_paddr)); boot->br_resp_siz = htole32(RSP_ENTRIES * sizeof(struct txp_rsp_desc)); sc->sc_rspring.base = sc->sc_ldata.txp_rspring; sc->sc_rspring.size = RSP_ENTRIES * sizeof(struct txp_rsp_desc); sc->sc_rspring.lastwrite = 0; /* Receive buffer ring. */ error = txp_dma_alloc(sc, "receive buffer ring", &sc->sc_cdata.txp_rxbufs_tag, sizeof(struct txp_rxbuf_desc), 0, &sc->sc_cdata.txp_rxbufs_map, (void **)&sc->sc_ldata.txp_rxbufs, sizeof(struct txp_rxbuf_desc) * RXBUF_ENTRIES, &sc->sc_ldata.txp_rxbufs_paddr); if (error != 0) return (error); boot->br_rxbuf_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_rxbufs_paddr)); boot->br_rxbuf_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_rxbufs_paddr)); boot->br_rxbuf_siz = htole32(RXBUF_ENTRIES * sizeof(struct txp_rxbuf_desc)); sc->sc_rxbufs = sc->sc_ldata.txp_rxbufs; /* Zero ring. */ error = txp_dma_alloc(sc, "zero buffer", &sc->sc_cdata.txp_zero_tag, sizeof(uint32_t), 0, &sc->sc_cdata.txp_zero_map, (void **)&sc->sc_ldata.txp_zero, sizeof(uint32_t), &sc->sc_ldata.txp_zero_paddr); if (error != 0) return (error); boot->br_zero_lo = htole32(TXP_ADDR_LO(sc->sc_ldata.txp_zero_paddr)); boot->br_zero_hi = htole32(TXP_ADDR_HI(sc->sc_ldata.txp_zero_paddr)); bus_dmamap_sync(sc->sc_cdata.txp_boot_tag, sc->sc_cdata.txp_boot_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* Create Tx buffers. */ error = bus_dma_tag_create( sc->sc_cdata.txp_parent_tag, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ MCLBYTES * TXP_MAXTXSEGS, /* maxsize */ TXP_MAXTXSEGS, /* nsegments */ MCLBYTES, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sc_cdata.txp_tx_tag); if (error != 0) { device_printf(sc->sc_dev, "could not create Tx DMA tag.\n"); goto fail; } /* Create tag for Rx buffers. */ error = bus_dma_tag_create( sc->sc_cdata.txp_parent_tag, /* parent */ TXP_RXBUF_ALIGN, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ MCLBYTES, /* maxsize */ 1, /* nsegments */ MCLBYTES, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sc_cdata.txp_rx_tag); if (error != 0) { device_printf(sc->sc_dev, "could not create Rx DMA tag.\n"); goto fail; } /* Create DMA maps for Tx buffers. */ for (i = 0; i < TX_ENTRIES; i++) { txd = &sc->sc_txd[i]; txd->sd_mbuf = NULL; txd->sd_map = NULL; error = bus_dmamap_create(sc->sc_cdata.txp_tx_tag, 0, &txd->sd_map); if (error != 0) { device_printf(sc->sc_dev, "could not create Tx dmamap.\n"); goto fail; } } /* Create DMA maps for Rx buffers. */ for (i = 0; i < RXBUF_ENTRIES; i++) { sd = malloc(sizeof(struct txp_rx_swdesc), M_DEVBUF, M_NOWAIT | M_ZERO); if (sd == NULL) { error = ENOMEM; goto fail; } /* * The virtual address part of descriptor is not used * by hardware so use that to save an ring entry. We * need bcopy here otherwise the address wouldn't be * valid on big-endian architectures. 
*/ rbd = sc->sc_rxbufs + i; bcopy(&sd, (u_long *)&rbd->rb_vaddrlo, sizeof(sd)); sd->sd_mbuf = NULL; sd->sd_map = NULL; error = bus_dmamap_create(sc->sc_cdata.txp_rx_tag, 0, &sd->sd_map); if (error != 0) { device_printf(sc->sc_dev, "could not create Rx dmamap.\n"); goto fail; } TAILQ_INSERT_TAIL(&sc->sc_free_list, sd, sd_next); } fail: return (error); } static void txp_init_rings(struct txp_softc *sc) { bzero(sc->sc_ldata.txp_hostvar, sizeof(struct txp_hostvar)); bzero(sc->sc_ldata.txp_zero, sizeof(uint32_t)); sc->sc_txhir.r_cons = 0; sc->sc_txhir.r_prod = 0; sc->sc_txhir.r_cnt = 0; sc->sc_txlor.r_cons = 0; sc->sc_txlor.r_prod = 0; sc->sc_txlor.r_cnt = 0; sc->sc_cmdring.lastwrite = 0; sc->sc_rspring.lastwrite = 0; sc->sc_rxbufprod = 0; bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); } static int txp_wait(struct txp_softc *sc, uint32_t state) { uint32_t reg; int i; for (i = 0; i < TXP_TIMEOUT; i++) { reg = READ_REG(sc, TXP_A2H_0); if (reg == state) break; DELAY(50); } return (i == TXP_TIMEOUT ? ETIMEDOUT : 0); } static void txp_free_rings(struct txp_softc *sc) { struct txp_swdesc *txd; struct txp_rx_swdesc *sd; int i; /* Tx buffers. */ if (sc->sc_cdata.txp_tx_tag != NULL) { for (i = 0; i < TX_ENTRIES; i++) { txd = &sc->sc_txd[i]; if (txd->sd_map != NULL) { bus_dmamap_destroy(sc->sc_cdata.txp_tx_tag, txd->sd_map); txd->sd_map = NULL; } } bus_dma_tag_destroy(sc->sc_cdata.txp_tx_tag); sc->sc_cdata.txp_tx_tag = NULL; } /* Rx buffers. */ if (sc->sc_cdata.txp_rx_tag != NULL) { if (sc->sc_rxbufs != NULL) { KASSERT(TAILQ_FIRST(&sc->sc_busy_list) == NULL, ("%s : still have busy Rx buffers", __func__)); while ((sd = TAILQ_FIRST(&sc->sc_free_list)) != NULL) { TAILQ_REMOVE(&sc->sc_free_list, sd, sd_next); if (sd->sd_map != NULL) { bus_dmamap_destroy( sc->sc_cdata.txp_rx_tag, sd->sd_map); sd->sd_map = NULL; } free(sd, M_DEVBUF); } } bus_dma_tag_destroy(sc->sc_cdata.txp_rx_tag); sc->sc_cdata.txp_rx_tag = NULL; } /* Hi priority Tx ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_txhiring_tag, sc->sc_cdata.txp_txhiring_map, (void **)&sc->sc_ldata.txp_txhiring, &sc->sc_ldata.txp_txhiring_paddr); /* Low priority Tx ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_txloring_tag, sc->sc_cdata.txp_txloring_map, (void **)&sc->sc_ldata.txp_txloring, &sc->sc_ldata.txp_txloring_paddr); /* Hi priority Rx ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_rxhiring_tag, sc->sc_cdata.txp_rxhiring_map, (void **)&sc->sc_ldata.txp_rxhiring, &sc->sc_ldata.txp_rxhiring_paddr); /* Low priority Rx ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_rxloring_tag, sc->sc_cdata.txp_rxloring_map, (void **)&sc->sc_ldata.txp_rxloring, &sc->sc_ldata.txp_rxloring_paddr); /* Receive buffer ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_rxbufs_tag, sc->sc_cdata.txp_rxbufs_map, (void **)&sc->sc_ldata.txp_rxbufs, &sc->sc_ldata.txp_rxbufs_paddr); /* Command ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_cmdring_tag, sc->sc_cdata.txp_cmdring_map, (void **)&sc->sc_ldata.txp_cmdring, &sc->sc_ldata.txp_cmdring_paddr); /* Response ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_rspring_tag, sc->sc_cdata.txp_rspring_map, (void **)&sc->sc_ldata.txp_rspring, &sc->sc_ldata.txp_rspring_paddr); /* Zero ring. */ txp_dma_free(sc, &sc->sc_cdata.txp_zero_tag, sc->sc_cdata.txp_zero_map, (void **)&sc->sc_ldata.txp_zero, &sc->sc_ldata.txp_zero_paddr); /* Host variables. 
*/ txp_dma_free(sc, &sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, (void **)&sc->sc_ldata.txp_hostvar, &sc->sc_ldata.txp_hostvar_paddr); /* Boot record. */ txp_dma_free(sc, &sc->sc_cdata.txp_boot_tag, sc->sc_cdata.txp_boot_map, (void **)&sc->sc_ldata.txp_boot, &sc->sc_ldata.txp_boot_paddr); if (sc->sc_cdata.txp_parent_tag != NULL) { bus_dma_tag_destroy(sc->sc_cdata.txp_parent_tag); sc->sc_cdata.txp_parent_tag = NULL; } } static int txp_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct txp_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *)data; int capenable, error = 0, mask; switch(command) { case SIOCSIFFLAGS: TXP_LOCK(sc); if ((ifp->if_flags & IFF_UP) != 0) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { if (((ifp->if_flags ^ sc->sc_if_flags) & (IFF_PROMISC | IFF_ALLMULTI)) != 0) txp_set_filter(sc); } else { if ((sc->sc_flags & TXP_FLAG_DETACH) == 0) txp_init_locked(sc); } } else { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) txp_stop(sc); } sc->sc_if_flags = ifp->if_flags; TXP_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: /* * Multicast list has changed; set the hardware * filter accordingly. */ TXP_LOCK(sc); if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) txp_set_filter(sc); TXP_UNLOCK(sc); break; case SIOCSIFCAP: TXP_LOCK(sc); capenable = ifp->if_capenable; mask = ifr->ifr_reqcap ^ ifp->if_capenable; if ((mask & IFCAP_TXCSUM) != 0 && (ifp->if_capabilities & IFCAP_TXCSUM) != 0) { ifp->if_capenable ^= IFCAP_TXCSUM; if ((ifp->if_capenable & IFCAP_TXCSUM) != 0) ifp->if_hwassist |= TXP_CSUM_FEATURES; else ifp->if_hwassist &= ~TXP_CSUM_FEATURES; } if ((mask & IFCAP_RXCSUM) != 0 && (ifp->if_capabilities & IFCAP_RXCSUM) != 0) ifp->if_capenable ^= IFCAP_RXCSUM; if ((mask & IFCAP_WOL_MAGIC) != 0 && (ifp->if_capabilities & IFCAP_WOL_MAGIC) != 0) ifp->if_capenable ^= IFCAP_WOL_MAGIC; if ((mask & IFCAP_VLAN_HWTAGGING) != 0 && (ifp->if_capabilities & IFCAP_VLAN_HWTAGGING) != 0) ifp->if_capenable ^= IFCAP_VLAN_HWTAGGING; if ((mask & IFCAP_VLAN_HWCSUM) != 0 && (ifp->if_capabilities & IFCAP_VLAN_HWCSUM) != 0) ifp->if_capenable ^= IFCAP_VLAN_HWCSUM; if ((ifp->if_capenable & IFCAP_TXCSUM) == 0) ifp->if_capenable &= ~IFCAP_VLAN_HWCSUM; if ((ifp->if_capenable & IFCAP_VLAN_HWTAGGING) == 0) ifp->if_capenable &= ~IFCAP_VLAN_HWCSUM; if (capenable != ifp->if_capenable) txp_set_capabilities(sc); TXP_UNLOCK(sc); VLAN_CAPABILITIES(ifp); break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: error = ifmedia_ioctl(ifp, ifr, &sc->sc_ifmedia, command); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } static int txp_rxring_fill(struct txp_softc *sc) { struct txp_rxbuf_desc *rbd; struct txp_rx_swdesc *sd; bus_dma_segment_t segs[1]; int error, i, nsegs; TXP_LOCK_ASSERT(sc); bus_dmamap_sync(sc->sc_cdata.txp_rxbufs_tag, sc->sc_cdata.txp_rxbufs_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); for (i = 0; i < RXBUF_ENTRIES; i++) { sd = TAILQ_FIRST(&sc->sc_free_list); if (sd == NULL) return (ENOMEM); rbd = sc->sc_rxbufs + i; bcopy(&sd, (u_long *)&rbd->rb_vaddrlo, sizeof(sd)); KASSERT(sd->sd_mbuf == NULL, ("%s : Rx buffer ring corrupted", __func__)); sd->sd_mbuf = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (sd->sd_mbuf == NULL) return (ENOMEM); sd->sd_mbuf->m_pkthdr.len = sd->sd_mbuf->m_len = MCLBYTES; #ifndef __NO_STRICT_ALIGNMENT m_adj(sd->sd_mbuf, TXP_RXBUF_ALIGN); #endif if ((error = bus_dmamap_load_mbuf_sg(sc->sc_cdata.txp_rx_tag, sd->sd_map, sd->sd_mbuf, segs, &nsegs, 0)) != 0) { m_freem(sd->sd_mbuf); sd->sd_mbuf = NULL; return (error); 
} KASSERT(nsegs == 1, ("%s : %d segments returned!", __func__, nsegs)); TAILQ_REMOVE(&sc->sc_free_list, sd, sd_next); TAILQ_INSERT_TAIL(&sc->sc_busy_list, sd, sd_next); bus_dmamap_sync(sc->sc_cdata.txp_rx_tag, sd->sd_map, BUS_DMASYNC_PREREAD); rbd->rb_paddrlo = htole32(TXP_ADDR_LO(segs[0].ds_addr)); rbd->rb_paddrhi = htole32(TXP_ADDR_HI(segs[0].ds_addr)); } bus_dmamap_sync(sc->sc_cdata.txp_rxbufs_tag, sc->sc_cdata.txp_rxbufs_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); sc->sc_rxbufprod = RXBUF_ENTRIES - 1; sc->sc_hostvar->hv_rx_buf_write_idx = htole32(TXP_IDX2OFFSET(RXBUF_ENTRIES - 1)); return (0); } static void txp_rxring_empty(struct txp_softc *sc) { struct txp_rx_swdesc *sd; int cnt; TXP_LOCK_ASSERT(sc); if (sc->sc_rxbufs == NULL) return; bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); /* Release allocated Rx buffers. */ cnt = 0; while ((sd = TAILQ_FIRST(&sc->sc_busy_list)) != NULL) { TAILQ_REMOVE(&sc->sc_busy_list, sd, sd_next); KASSERT(sd->sd_mbuf != NULL, ("%s : Rx buffer ring corrupted", __func__)); bus_dmamap_sync(sc->sc_cdata.txp_rx_tag, sd->sd_map, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->sc_cdata.txp_rx_tag, sd->sd_map); m_freem(sd->sd_mbuf); sd->sd_mbuf = NULL; TAILQ_INSERT_TAIL(&sc->sc_free_list, sd, sd_next); cnt++; } } static void txp_init(void *xsc) { struct txp_softc *sc; sc = xsc; TXP_LOCK(sc); txp_init_locked(sc); TXP_UNLOCK(sc); } static void txp_init_locked(struct txp_softc *sc) { struct ifnet *ifp; uint8_t *eaddr; uint16_t p1; uint32_t p2; int error; TXP_LOCK_ASSERT(sc); ifp = sc->sc_ifp; if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) return; /* Initialize ring structure. */ txp_init_rings(sc); /* Wakeup controller. */ WRITE_REG(sc, TXP_H2A_0, TXP_BOOTCMD_WAKEUP); TXP_BARRIER(sc, TXP_H2A_0, 4, BUS_SPACE_BARRIER_WRITE); /* * It seems that earlier NV image can go back to online from * wakeup command but newer ones require controller reset. * So jut reset controller again. */ if (txp_reset(sc) != 0) goto init_fail; /* Download firmware. */ error = txp_download_fw(sc); if (error != 0) { device_printf(sc->sc_dev, "could not download firmware.\n"); goto init_fail; } bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); if ((error = txp_rxring_fill(sc)) != 0) { device_printf(sc->sc_dev, "no memory for Rx buffers.\n"); goto init_fail; } bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); if (txp_boot(sc, STAT_WAITING_FOR_BOOT) != 0) { device_printf(sc->sc_dev, "could not boot firmware.\n"); goto init_fail; } /* * Quite contrary to Typhoon T2 software functional specification, * it seems that TXP_CMD_RECV_BUFFER_CONTROL command is not * implemented in the firmware. This means driver should have to * handle misaligned frames on alignment architectures. AFAIK this * is the only controller manufactured by 3Com that has this stupid * bug. 3Com should fix this. */ if (txp_command(sc, TXP_CMD_MAX_PKT_SIZE_WRITE, TXP_MAX_PKTLEN, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT) != 0) goto init_fail; /* Undocumented command(interrupt coalescing disable?) - From Linux. */ if (txp_command(sc, TXP_CMD_FILTER_DEFINE, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT) != 0) goto init_fail; /* Set station address. 
*/ eaddr = IF_LLADDR(sc->sc_ifp); p1 = 0; ((uint8_t *)&p1)[1] = eaddr[0]; ((uint8_t *)&p1)[0] = eaddr[1]; p1 = le16toh(p1); ((uint8_t *)&p2)[3] = eaddr[2]; ((uint8_t *)&p2)[2] = eaddr[3]; ((uint8_t *)&p2)[1] = eaddr[4]; ((uint8_t *)&p2)[0] = eaddr[5]; p2 = le32toh(p2); if (txp_command(sc, TXP_CMD_STATION_ADDRESS_WRITE, p1, p2, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT) != 0) goto init_fail; txp_set_filter(sc); txp_set_capabilities(sc); if (txp_command(sc, TXP_CMD_CLEAR_STATISTICS, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT)) goto init_fail; if (txp_command(sc, TXP_CMD_XCVR_SELECT, sc->sc_xcvr, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT) != 0) goto init_fail; if (txp_command(sc, TXP_CMD_TX_ENABLE, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT) != 0) goto init_fail; if (txp_command(sc, TXP_CMD_RX_ENABLE, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT) != 0) goto init_fail; /* Ack all pending interrupts and enable interrupts. */ WRITE_REG(sc, TXP_ISR, TXP_INTR_ALL); WRITE_REG(sc, TXP_IER, TXP_INTRS); WRITE_REG(sc, TXP_IMR, TXP_INTR_NONE); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->sc_tick, hz, txp_tick, sc); return; init_fail: txp_rxring_empty(sc); txp_init_rings(sc); txp_reset(sc); WRITE_REG(sc, TXP_IMR, TXP_INTR_ALL); } static void txp_tick(void *vsc) { struct txp_softc *sc; struct ifnet *ifp; struct txp_rsp_desc *rsp; struct txp_ext_desc *ext; int link; sc = vsc; TXP_LOCK_ASSERT(sc); bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); txp_rxbuf_reclaim(sc); bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); ifp = sc->sc_ifp; rsp = NULL; link = sc->sc_flags & TXP_FLAG_LINK; if (txp_ext_command(sc, TXP_CMD_READ_STATISTICS, 0, 0, 0, NULL, 0, &rsp, TXP_CMD_WAIT)) goto out; if (rsp->rsp_numdesc != 6) goto out; txp_stats_update(sc, rsp); if (link == 0 && (sc->sc_flags & TXP_FLAG_LINK) != 0) { ext = (struct txp_ext_desc *)(rsp + 1); /* Update baudrate with resolved speed. */ if ((ext[5].ext_2 & 0x02) != 0) ifp->if_baudrate = IF_Mbps(100); else ifp->if_baudrate = IF_Mbps(10); } out: if (rsp != NULL) free(rsp, M_DEVBUF); txp_watchdog(sc); callout_reset(&sc->sc_tick, hz, txp_tick, sc); } static void txp_start(struct ifnet *ifp) { struct txp_softc *sc; sc = ifp->if_softc; TXP_LOCK(sc); txp_start_locked(ifp); TXP_UNLOCK(sc); } static void txp_start_locked(struct ifnet *ifp) { struct txp_softc *sc; struct mbuf *m_head; int enq; sc = ifp->if_softc; TXP_LOCK_ASSERT(sc); if ((ifp->if_drv_flags & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING || (sc->sc_flags & TXP_FLAG_LINK) == 0) return; for (enq = 0; !IFQ_DRV_IS_EMPTY(&ifp->if_snd); ) { IFQ_DRV_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; /* * Pack the data into the transmit ring. If we * don't have room, set the OACTIVE flag and wait * for the NIC to drain the ring. * ATM only Hi-ring is used. */ if (txp_encap(sc, &sc->sc_txhir, &m_head)) { if (m_head == NULL) break; IFQ_DRV_PREPEND(&ifp->if_snd, m_head); ifp->if_drv_flags |= IFF_DRV_OACTIVE; break; } /* * If there's a BPF listener, bounce a copy of this frame * to him. */ ETHER_BPF_MTAP(ifp, m_head); /* Send queued frame. */ WRITE_REG(sc, sc->sc_txhir.r_reg, TXP_IDX2OFFSET(sc->sc_txhir.r_prod)); } if (enq > 0) { /* Set a timeout in case the chip goes out to lunch. 
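* The counter armed here is the driver's transmit watchdog:
* txp_tx_reclaim() zeroes it once the ring drains, while
* txp_watchdog(), invoked from the one-second txp_tick() callout,
* is responsible for declaring the chip wedged if the timeout
* expires with frames still outstanding.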
*/ sc->sc_watchdog_timer = TXP_TX_TIMEOUT; } } static int txp_encap(struct txp_softc *sc, struct txp_tx_ring *r, struct mbuf **m_head) { struct txp_tx_desc *first_txd; struct txp_frag_desc *fxd; struct txp_swdesc *sd; struct mbuf *m; bus_dma_segment_t txsegs[TXP_MAXTXSEGS]; int error, i, nsegs; TXP_LOCK_ASSERT(sc); M_ASSERTPKTHDR((*m_head)); m = *m_head; first_txd = r->r_desc + r->r_prod; sd = sc->sc_txd + r->r_prod; error = bus_dmamap_load_mbuf_sg(sc->sc_cdata.txp_tx_tag, sd->sd_map, *m_head, txsegs, &nsegs, 0); if (error == EFBIG) { m = m_collapse(*m_head, M_NOWAIT, TXP_MAXTXSEGS); if (m == NULL) { m_freem(*m_head); *m_head = NULL; return (ENOMEM); } *m_head = m; error = bus_dmamap_load_mbuf_sg(sc->sc_cdata.txp_tx_tag, sd->sd_map, *m_head, txsegs, &nsegs, 0); if (error != 0) { m_freem(*m_head); *m_head = NULL; return (error); } } else if (error != 0) return (error); if (nsegs == 0) { m_freem(*m_head); *m_head = NULL; return (EIO); } /* Check descriptor overrun. */ if (r->r_cnt + nsegs >= TX_ENTRIES - TXP_TXD_RESERVED) { bus_dmamap_unload(sc->sc_cdata.txp_tx_tag, sd->sd_map); return (ENOBUFS); } bus_dmamap_sync(sc->sc_cdata.txp_tx_tag, sd->sd_map, BUS_DMASYNC_PREWRITE); sd->sd_mbuf = m; first_txd->tx_flags = TX_FLAGS_TYPE_DATA; first_txd->tx_numdesc = 0; first_txd->tx_addrlo = 0; first_txd->tx_addrhi = 0; first_txd->tx_totlen = 0; first_txd->tx_pflags = 0; r->r_cnt++; TXP_DESC_INC(r->r_prod, TX_ENTRIES); /* Configure Tx IP/TCP/UDP checksum offload. */ if ((m->m_pkthdr.csum_flags & CSUM_IP) != 0) first_txd->tx_pflags |= htole32(TX_PFLAGS_IPCKSUM); #ifdef notyet /* XXX firmware bug. */ if ((m->m_pkthdr.csum_flags & CSUM_TCP) != 0) first_txd->tx_pflags |= htole32(TX_PFLAGS_TCPCKSUM); if ((m->m_pkthdr.csum_flags & CSUM_UDP) != 0) first_txd->tx_pflags |= htole32(TX_PFLAGS_UDPCKSUM); #endif /* Configure VLAN hardware tag insertion. */ if ((m->m_flags & M_VLANTAG) != 0) first_txd->tx_pflags |= htole32(TX_PFLAGS_VLAN | TX_PFLAGS_PRIO | (bswap16(m->m_pkthdr.ether_vtag) << TX_PFLAGS_VLANTAG_S)); for (i = 0; i < nsegs; i++) { fxd = (struct txp_frag_desc *)(r->r_desc + r->r_prod); fxd->frag_flags = FRAG_FLAGS_TYPE_FRAG | TX_FLAGS_VALID; fxd->frag_rsvd1 = 0; fxd->frag_len = htole16(txsegs[i].ds_len); fxd->frag_addrhi = htole32(TXP_ADDR_HI(txsegs[i].ds_addr)); fxd->frag_addrlo = htole32(TXP_ADDR_LO(txsegs[i].ds_addr)); fxd->frag_rsvd2 = 0; first_txd->tx_numdesc++; r->r_cnt++; TXP_DESC_INC(r->r_prod, TX_ENTRIES); } /* Lastly set valid flag. */ first_txd->tx_flags |= TX_FLAGS_VALID; /* Sync descriptors. 
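The PREREAD | PREWRITE sync hands the freshly written Tx and fragment descriptors over to the device before the producer index is posted to the doorbell register in txp_start_locked().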
*/ bus_dmamap_sync(r->r_tag, r->r_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); return (0); } /* * Handle simple commands sent to the Typhoon. */ static int txp_command(struct txp_softc *sc, uint16_t id, uint16_t in1, uint32_t in2, uint32_t in3, uint16_t *out1, uint32_t *out2, uint32_t *out3, int wait) { struct txp_rsp_desc *rsp; rsp = NULL; if (txp_ext_command(sc, id, in1, in2, in3, NULL, 0, &rsp, wait) != 0) { device_printf(sc->sc_dev, "command 0x%02x failed\n", id); return (-1); } if (wait == TXP_CMD_NOWAIT) return (0); KASSERT(rsp != NULL, ("rsp is NULL!\n")); if (out1 != NULL) *out1 = le16toh(rsp->rsp_par1); if (out2 != NULL) *out2 = le32toh(rsp->rsp_par2); if (out3 != NULL) *out3 = le32toh(rsp->rsp_par3); free(rsp, M_DEVBUF); return (0); } static int txp_ext_command(struct txp_softc *sc, uint16_t id, uint16_t in1, uint32_t in2, uint32_t in3, struct txp_ext_desc *in_extp, uint8_t in_extn, struct txp_rsp_desc **rspp, int wait) { struct txp_hostvar *hv; struct txp_cmd_desc *cmd; struct txp_ext_desc *ext; uint32_t idx, i; uint16_t seq; int error; error = 0; hv = sc->sc_hostvar; if (txp_cmd_desc_numfree(sc) < (in_extn + 1)) { device_printf(sc->sc_dev, "%s : out of free cmd descriptors for command 0x%02x\n", __func__, id); return (ENOBUFS); } bus_dmamap_sync(sc->sc_cdata.txp_cmdring_tag, sc->sc_cdata.txp_cmdring_map, BUS_DMASYNC_POSTWRITE); idx = sc->sc_cmdring.lastwrite; cmd = (struct txp_cmd_desc *)(((uint8_t *)sc->sc_cmdring.base) + idx); bzero(cmd, sizeof(*cmd)); cmd->cmd_numdesc = in_extn; seq = sc->sc_seq++; cmd->cmd_seq = htole16(seq); cmd->cmd_id = htole16(id); cmd->cmd_par1 = htole16(in1); cmd->cmd_par2 = htole32(in2); cmd->cmd_par3 = htole32(in3); cmd->cmd_flags = CMD_FLAGS_TYPE_CMD | (wait == TXP_CMD_WAIT ? CMD_FLAGS_RESP : 0) | CMD_FLAGS_VALID; idx += sizeof(struct txp_cmd_desc); if (idx == sc->sc_cmdring.size) idx = 0; for (i = 0; i < in_extn; i++) { ext = (struct txp_ext_desc *)(((uint8_t *)sc->sc_cmdring.base) + idx); bcopy(in_extp, ext, sizeof(struct txp_ext_desc)); in_extp++; idx += sizeof(struct txp_cmd_desc); if (idx == sc->sc_cmdring.size) idx = 0; } sc->sc_cmdring.lastwrite = idx; bus_dmamap_sync(sc->sc_cdata.txp_cmdring_tag, sc->sc_cdata.txp_cmdring_map, BUS_DMASYNC_PREWRITE); bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); WRITE_REG(sc, TXP_H2A_2, sc->sc_cmdring.lastwrite); TXP_BARRIER(sc, TXP_H2A_2, 4, BUS_SPACE_BARRIER_WRITE); if (wait == TXP_CMD_NOWAIT) return (0); for (i = 0; i < TXP_TIMEOUT; i++) { bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); if (le32toh(hv->hv_resp_read_idx) != le32toh(hv->hv_resp_write_idx)) { error = txp_response(sc, id, seq, rspp); bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); if (error != 0) return (error); if (*rspp != NULL) break; } DELAY(50); } if (i == TXP_TIMEOUT) { device_printf(sc->sc_dev, "command 0x%02x timed out\n", id); error = ETIMEDOUT; } return (error); } static int txp_response(struct txp_softc *sc, uint16_t id, uint16_t seq, struct txp_rsp_desc **rspp) { struct txp_hostvar *hv; struct txp_rsp_desc *rsp; uint32_t ridx; bus_dmamap_sync(sc->sc_cdata.txp_rspring_tag, sc->sc_cdata.txp_rspring_map, BUS_DMASYNC_POSTREAD); hv = sc->sc_hostvar; ridx = le32toh(hv->hv_resp_read_idx); while (ridx != le32toh(hv->hv_resp_write_idx)) { rsp = (struct txp_rsp_desc *)(((uint8_t *)sc->sc_rspring.base) + ridx); if (id ==
le16toh(rsp->rsp_id) && le16toh(rsp->rsp_seq) == seq) { *rspp = (struct txp_rsp_desc *)malloc( sizeof(struct txp_rsp_desc) * (rsp->rsp_numdesc + 1), M_DEVBUF, M_NOWAIT); if (*rspp == NULL) { device_printf(sc->sc_dev, "%s : command 0x%02x " "memory allocation failure\n", __func__, id); return (ENOMEM); } txp_rsp_fixup(sc, rsp, *rspp); return (0); } if ((rsp->rsp_flags & RSP_FLAGS_ERROR) != 0) { device_printf(sc->sc_dev, "%s : command 0x%02x response error!\n", __func__, le16toh(rsp->rsp_id)); txp_rsp_fixup(sc, rsp, NULL); ridx = le32toh(hv->hv_resp_read_idx); continue; } /* * The following unsolicited responses are handled during the * processing of TXP_CMD_READ_STATISTICS, which requires a * response. The driver abuses that command to detect media * status changes. * TXP_CMD_FILTER_DEFINE is not an unsolicited response, but we * do not process the response ring in the interrupt handler, so * we have to ignore that command here; otherwise an unknown * command message would be printed. */ switch (le16toh(rsp->rsp_id)) { case TXP_CMD_CYCLE_STATISTICS: case TXP_CMD_FILTER_DEFINE: break; case TXP_CMD_MEDIA_STATUS_READ: if ((le16toh(rsp->rsp_par1) & 0x0800) == 0) { sc->sc_flags |= TXP_FLAG_LINK; if_link_state_change(sc->sc_ifp, LINK_STATE_UP); } else { sc->sc_flags &= ~TXP_FLAG_LINK; if_link_state_change(sc->sc_ifp, LINK_STATE_DOWN); } break; case TXP_CMD_HELLO_RESPONSE: /* * The driver should respond to the hello message, but * TXP_CMD_READ_STATISTICS is issued every hz, so there * is no need to send an explicit response here. */ device_printf(sc->sc_dev, "%s : hello\n", __func__); break; default: device_printf(sc->sc_dev, "%s : unknown command 0x%02x\n", __func__, le16toh(rsp->rsp_id)); } txp_rsp_fixup(sc, rsp, NULL); ridx = le32toh(hv->hv_resp_read_idx); } return (0); } static void txp_rsp_fixup(struct txp_softc *sc, struct txp_rsp_desc *rsp, struct txp_rsp_desc *dst) { struct txp_rsp_desc *src; struct txp_hostvar *hv; uint32_t i, ridx; src = rsp; hv = sc->sc_hostvar; ridx = le32toh(hv->hv_resp_read_idx); for (i = 0; i < rsp->rsp_numdesc + 1; i++) { if (dst != NULL) bcopy(src, dst++, sizeof(struct txp_rsp_desc)); ridx += sizeof(struct txp_rsp_desc); if (ridx == sc->sc_rspring.size) { src = sc->sc_rspring.base; ridx = 0; } else src++; sc->sc_rspring.lastwrite = ridx; } hv->hv_resp_read_idx = htole32(ridx); } static int txp_cmd_desc_numfree(struct txp_softc *sc) { struct txp_hostvar *hv; struct txp_boot_record *br; uint32_t widx, ridx, nfree; bus_dmamap_sync(sc->sc_cdata.txp_hostvar_tag, sc->sc_cdata.txp_hostvar_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); hv = sc->sc_hostvar; br = sc->sc_boot; widx = sc->sc_cmdring.lastwrite; ridx = le32toh(hv->hv_cmd_read_idx); if (widx == ridx) { /* Ring is completely free. */ nfree = le32toh(br->br_cmd_siz) - sizeof(struct txp_cmd_desc); } else { if (widx > ridx) nfree = le32toh(br->br_cmd_siz) - (widx - ridx + sizeof(struct txp_cmd_desc)); else nfree = ridx - widx - sizeof(struct txp_cmd_desc); } return (nfree / sizeof(struct txp_cmd_desc)); } static int txp_sleep(struct txp_softc *sc, int capenable) { uint16_t events; int error; events = 0; if ((capenable & IFCAP_WOL_MAGIC) != 0) events |= 0x01; error = txp_command(sc, TXP_CMD_ENABLE_WAKEUP_EVENTS, events, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); if (error == 0) { /* Go to sleep.
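Only the magic packet event (bit 0) can be armed above, so a sleeping controller should wake on WOL magic frames; txp_wait() below then confirms the transition through the STAT_SLEEPING handshake.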
*/ error = txp_command(sc, TXP_CMD_GOTO_SLEEP, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); if (error == 0) { error = txp_wait(sc, STAT_SLEEPING); if (error != 0) device_printf(sc->sc_dev, "unable to enter sleep\n"); } } return (error); } static void txp_stop(struct txp_softc *sc) { struct ifnet *ifp; TXP_LOCK_ASSERT(sc); ifp = sc->sc_ifp; if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) return; WRITE_REG(sc, TXP_IER, TXP_INTR_NONE); WRITE_REG(sc, TXP_ISR, TXP_INTR_ALL); ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); sc->sc_flags &= ~TXP_FLAG_LINK; callout_stop(&sc->sc_tick); txp_command(sc, TXP_CMD_TX_DISABLE, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); txp_command(sc, TXP_CMD_RX_DISABLE, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); /* Save statistics for later use. */ txp_stats_save(sc); /* Halt controller. */ txp_command(sc, TXP_CMD_HALT, 0, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); if (txp_wait(sc, STAT_HALTED) != 0) device_printf(sc->sc_dev, "controller halt timed out!\n"); /* Reclaim Tx/Rx buffers. */ if (sc->sc_txhir.r_cnt && (sc->sc_txhir.r_cons != TXP_OFFSET2IDX(le32toh(*(sc->sc_txhir.r_off))))) txp_tx_reclaim(sc, &sc->sc_txhir); if (sc->sc_txlor.r_cnt && (sc->sc_txlor.r_cons != TXP_OFFSET2IDX(le32toh(*(sc->sc_txlor.r_off))))) txp_tx_reclaim(sc, &sc->sc_txlor); txp_rxring_empty(sc); txp_init_rings(sc); /* Reset controller and make it reload sleep image. */ txp_reset(sc); /* Let controller boot from sleep image. */ if (txp_boot(sc, STAT_WAITING_FOR_HOST_REQUEST) != 0) device_printf(sc->sc_dev, "could not boot sleep image\n"); txp_sleep(sc, 0); } static void txp_watchdog(struct txp_softc *sc) { struct ifnet *ifp; TXP_LOCK_ASSERT(sc); if (sc->sc_watchdog_timer == 0 || --sc->sc_watchdog_timer) return; ifp = sc->sc_ifp; if_printf(ifp, "watchdog timeout -- resetting\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); txp_stop(sc); txp_init_locked(sc); } static int txp_ifmedia_upd(struct ifnet *ifp) { struct txp_softc *sc = ifp->if_softc; struct ifmedia *ifm = &sc->sc_ifmedia; uint16_t new_xcvr; TXP_LOCK(sc); if (IFM_TYPE(ifm->ifm_media) != IFM_ETHER) { TXP_UNLOCK(sc); return (EINVAL); } if (IFM_SUBTYPE(ifm->ifm_media) == IFM_10_T) { if ((ifm->ifm_media & IFM_GMASK) == IFM_FDX) new_xcvr = TXP_XCVR_10_FDX; else new_xcvr = TXP_XCVR_10_HDX; } else if (IFM_SUBTYPE(ifm->ifm_media) == IFM_100_TX) { if ((ifm->ifm_media & IFM_GMASK) == IFM_FDX) new_xcvr = TXP_XCVR_100_FDX; else new_xcvr = TXP_XCVR_100_HDX; } else if (IFM_SUBTYPE(ifm->ifm_media) == IFM_AUTO) { new_xcvr = TXP_XCVR_AUTO; } else { TXP_UNLOCK(sc); return (EINVAL); } /* nothing to do */ if (sc->sc_xcvr == new_xcvr) { TXP_UNLOCK(sc); return (0); } txp_command(sc, TXP_CMD_XCVR_SELECT, new_xcvr, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); sc->sc_xcvr = new_xcvr; TXP_UNLOCK(sc); return (0); } static void txp_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr) { struct txp_softc *sc = ifp->if_softc; struct ifmedia *ifm = &sc->sc_ifmedia; uint16_t bmsr, bmcr, anar, anlpar; ifmr->ifm_status = IFM_AVALID; ifmr->ifm_active = IFM_ETHER; TXP_LOCK(sc); /* Check whether firmware is running.
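If it is not, the PHY cannot be reached through TXP_CMD_PHY_MGMT_READ and we bail out below. BMSR is read twice on purpose: the MII link bit is latched, so the first read presumably clears a stale latch and the second reflects the current state.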
*/ if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) goto bail; if (txp_command(sc, TXP_CMD_PHY_MGMT_READ, 0, MII_BMSR, 0, &bmsr, NULL, NULL, TXP_CMD_WAIT)) goto bail; if (txp_command(sc, TXP_CMD_PHY_MGMT_READ, 0, MII_BMSR, 0, &bmsr, NULL, NULL, TXP_CMD_WAIT)) goto bail; if (txp_command(sc, TXP_CMD_PHY_MGMT_READ, 0, MII_BMCR, 0, &bmcr, NULL, NULL, TXP_CMD_WAIT)) goto bail; if (txp_command(sc, TXP_CMD_PHY_MGMT_READ, 0, MII_ANLPAR, 0, &anlpar, NULL, NULL, TXP_CMD_WAIT)) goto bail; if (txp_command(sc, TXP_CMD_PHY_MGMT_READ, 0, MII_ANAR, 0, &anar, NULL, NULL, TXP_CMD_WAIT)) goto bail; TXP_UNLOCK(sc); if (bmsr & BMSR_LINK) ifmr->ifm_status |= IFM_ACTIVE; if (bmcr & BMCR_ISO) { ifmr->ifm_active |= IFM_NONE; ifmr->ifm_status = 0; return; } if (bmcr & BMCR_LOOP) ifmr->ifm_active |= IFM_LOOP; if (bmcr & BMCR_AUTOEN) { if ((bmsr & BMSR_ACOMP) == 0) { ifmr->ifm_active |= IFM_NONE; return; } anlpar &= anar; if (anlpar & ANLPAR_TX_FD) ifmr->ifm_active |= IFM_100_TX|IFM_FDX; else if (anlpar & ANLPAR_T4) ifmr->ifm_active |= IFM_100_T4; else if (anlpar & ANLPAR_TX) ifmr->ifm_active |= IFM_100_TX; else if (anlpar & ANLPAR_10_FD) ifmr->ifm_active |= IFM_10_T|IFM_FDX; else if (anlpar & ANLPAR_10) ifmr->ifm_active |= IFM_10_T; else ifmr->ifm_active |= IFM_NONE; } else ifmr->ifm_active = ifm->ifm_cur->ifm_media; return; bail: TXP_UNLOCK(sc); ifmr->ifm_active |= IFM_NONE; ifmr->ifm_status &= ~IFM_AVALID; } #ifdef TXP_DEBUG static void txp_show_descriptor(void *d) { struct txp_cmd_desc *cmd = d; struct txp_rsp_desc *rsp = d; struct txp_tx_desc *txd = d; struct txp_frag_desc *frgd = d; switch (cmd->cmd_flags & CMD_FLAGS_TYPE_M) { case CMD_FLAGS_TYPE_CMD: /* command descriptor */ printf("[cmd flags 0x%x num %d id %d seq %d par1 0x%x par2 0x%x par3 0x%x]\n", cmd->cmd_flags, cmd->cmd_numdesc, le16toh(cmd->cmd_id), le16toh(cmd->cmd_seq), le16toh(cmd->cmd_par1), le32toh(cmd->cmd_par2), le32toh(cmd->cmd_par3)); break; case CMD_FLAGS_TYPE_RESP: /* response descriptor */ printf("[rsp flags 0x%x num %d id %d seq %d par1 0x%x par2 0x%x par3 0x%x]\n", rsp->rsp_flags, rsp->rsp_numdesc, le16toh(rsp->rsp_id), le16toh(rsp->rsp_seq), le16toh(rsp->rsp_par1), le32toh(rsp->rsp_par2), le32toh(rsp->rsp_par3)); break; case CMD_FLAGS_TYPE_DATA: /* data header (assuming tx for now) */ printf("[data flags 0x%x num %d totlen %d addr 0x%x/0x%x pflags 0x%x]", txd->tx_flags, txd->tx_numdesc, le16toh(txd->tx_totlen), le32toh(txd->tx_addrlo), le32toh(txd->tx_addrhi), le32toh(txd->tx_pflags)); break; case CMD_FLAGS_TYPE_FRAG: /* fragment descriptor */ printf("[frag flags 0x%x rsvd1 0x%x len %d addr 0x%x/0x%x rsvd2 0x%x]", frgd->frag_flags, frgd->frag_rsvd1, le16toh(frgd->frag_len), le32toh(frgd->frag_addrlo), le32toh(frgd->frag_addrhi), le32toh(frgd->frag_rsvd2)); break; default: printf("[unknown(%x) flags 0x%x num %d id %d seq %d par1 0x%x par2 0x%x par3 0x%x]\n", cmd->cmd_flags & CMD_FLAGS_TYPE_M, cmd->cmd_flags, cmd->cmd_numdesc, le16toh(cmd->cmd_id), le16toh(cmd->cmd_seq), le16toh(cmd->cmd_par1), le32toh(cmd->cmd_par2), le32toh(cmd->cmd_par3)); break; } } #endif static void txp_set_filter(struct txp_softc *sc) { struct ifnet *ifp; uint32_t crc, mchash[2]; uint16_t filter; struct ifmultiaddr *ifma; int mcnt; TXP_LOCK_ASSERT(sc); ifp = sc->sc_ifp; filter = TXP_RXFILT_DIRECT; if ((ifp->if_flags & IFF_BROADCAST) != 0) filter |= TXP_RXFILT_BROADCAST; if ((ifp->if_flags & (IFF_PROMISC | IFF_ALLMULTI)) != 0) { if ((ifp->if_flags & IFF_ALLMULTI) != 0) filter |= TXP_RXFILT_ALLMULTI; if ((ifp->if_flags & IFF_PROMISC) != 0) filter = TXP_RXFILT_PROMISC; 
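/* * Note that promiscuous mode assigns (=) rather than ORs (|=) the * filter, so TXP_RXFILT_DIRECT and the broadcast/multicast bits set * above are dropped once IFF_PROMISC takes effect. */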
goto setit; } mchash[0] = mchash[1] = 0; mcnt = 0; if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; crc = ether_crc32_be(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN); crc &= 0x3f; mchash[crc >> 5] |= 1 << (crc & 0x1f); mcnt++; } if_maddr_runlock(ifp); if (mcnt > 0) { filter |= TXP_RXFILT_HASHMULTI; txp_command(sc, TXP_CMD_MCAST_HASH_MASK_WRITE, 2, mchash[0], mchash[1], NULL, NULL, NULL, TXP_CMD_NOWAIT); } setit: txp_command(sc, TXP_CMD_RX_FILTER_WRITE, filter, 0, 0, NULL, NULL, NULL, TXP_CMD_NOWAIT); } static int txp_set_capabilities(struct txp_softc *sc) { struct ifnet *ifp; uint32_t rxcap, txcap; TXP_LOCK_ASSERT(sc); rxcap = txcap = 0; ifp = sc->sc_ifp; if ((ifp->if_capenable & IFCAP_TXCSUM) != 0) { if ((ifp->if_hwassist & CSUM_IP) != 0) txcap |= OFFLOAD_IPCKSUM; if ((ifp->if_hwassist & CSUM_TCP) != 0) txcap |= OFFLOAD_TCPCKSUM; if ((ifp->if_hwassist & CSUM_UDP) != 0) txcap |= OFFLOAD_UDPCKSUM; rxcap = txcap; } if ((ifp->if_capenable & IFCAP_RXCSUM) == 0) rxcap &= ~(OFFLOAD_IPCKSUM | OFFLOAD_TCPCKSUM | OFFLOAD_UDPCKSUM); if ((ifp->if_capabilities & IFCAP_VLAN_HWTAGGING) != 0) { rxcap |= OFFLOAD_VLAN; txcap |= OFFLOAD_VLAN; } /* Tell firmware new offload configuration. */ return (txp_command(sc, TXP_CMD_OFFLOAD_WRITE, 0, txcap, rxcap, NULL, NULL, NULL, TXP_CMD_NOWAIT)); } static void txp_stats_save(struct txp_softc *sc) { struct txp_rsp_desc *rsp; TXP_LOCK_ASSERT(sc); rsp = NULL; if (txp_ext_command(sc, TXP_CMD_READ_STATISTICS, 0, 0, 0, NULL, 0, &rsp, TXP_CMD_WAIT)) goto out; if (rsp->rsp_numdesc != 6) goto out; txp_stats_update(sc, rsp); out: if (rsp != NULL) free(rsp, M_DEVBUF); bcopy(&sc->sc_stats, &sc->sc_ostats, sizeof(struct txp_hw_stats)); } static void txp_stats_update(struct txp_softc *sc, struct txp_rsp_desc *rsp) { struct txp_hw_stats *ostats, *stats; struct txp_ext_desc *ext; TXP_LOCK_ASSERT(sc); ext = (struct txp_ext_desc *)(rsp + 1); ostats = &sc->sc_ostats; stats = &sc->sc_stats; stats->tx_frames = ostats->tx_frames + le32toh(rsp->rsp_par2); stats->tx_bytes = ostats->tx_bytes + (uint64_t)le32toh(rsp->rsp_par3) + ((uint64_t)le32toh(ext[0].ext_1) << 32); stats->tx_deferred = ostats->tx_deferred + le32toh(ext[0].ext_2); stats->tx_late_colls = ostats->tx_late_colls + le32toh(ext[0].ext_3); stats->tx_colls = ostats->tx_colls + le32toh(ext[0].ext_4); stats->tx_carrier_lost = ostats->tx_carrier_lost + le32toh(ext[1].ext_1); stats->tx_multi_colls = ostats->tx_multi_colls + le32toh(ext[1].ext_2); stats->tx_excess_colls = ostats->tx_excess_colls + le32toh(ext[1].ext_3); stats->tx_fifo_underruns = ostats->tx_fifo_underruns + le32toh(ext[1].ext_4); stats->tx_mcast_oflows = ostats->tx_mcast_oflows + le32toh(ext[2].ext_1); stats->tx_filtered = ostats->tx_filtered + le32toh(ext[2].ext_2); stats->rx_frames = ostats->rx_frames + le32toh(ext[2].ext_3); stats->rx_bytes = ostats->rx_bytes + (uint64_t)le32toh(ext[2].ext_4) + ((uint64_t)le32toh(ext[3].ext_1) << 32); stats->rx_fifo_oflows = ostats->rx_fifo_oflows + le32toh(ext[3].ext_2); stats->rx_badssd = ostats->rx_badssd + le32toh(ext[3].ext_3); stats->rx_crcerrs = ostats->rx_crcerrs + le32toh(ext[3].ext_4); stats->rx_lenerrs = ostats->rx_lenerrs + le32toh(ext[4].ext_1); stats->rx_bcast_frames = ostats->rx_bcast_frames + le32toh(ext[4].ext_2); stats->rx_mcast_frames = ostats->rx_mcast_frames + le32toh(ext[4].ext_3); stats->rx_oflows = ostats->rx_oflows + le32toh(ext[4].ext_4); stats->rx_filtered = ostats->rx_filtered + le32toh(ext[5].ext_1); } 
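/* * The firmware reports running totals that restart from zero whenever the * controller is halted, so each update adds the reported values to the * snapshot taken by txp_stats_save() before the last halt; e.g. if tx_frames * was 1000 at the halt and the firmware now reports 25, the accumulated * counter reads 1025 (illustrative numbers). */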
static uint64_t txp_get_counter(struct ifnet *ifp, ift_counter cnt) { struct txp_softc *sc; struct txp_hw_stats *stats; sc = if_getsoftc(ifp); stats = &sc->sc_stats; switch (cnt) { case IFCOUNTER_IERRORS: return (stats->rx_fifo_oflows + stats->rx_badssd + stats->rx_crcerrs + stats->rx_lenerrs + stats->rx_oflows); case IFCOUNTER_OERRORS: return (stats->tx_deferred + stats->tx_carrier_lost + stats->tx_fifo_underruns + stats->tx_mcast_oflows); case IFCOUNTER_COLLISIONS: return (stats->tx_late_colls + stats->tx_multi_colls + stats->tx_excess_colls); case IFCOUNTER_OPACKETS: return (stats->tx_frames); case IFCOUNTER_IPACKETS: return (stats->rx_frames); default: return (if_get_counter_default(ifp, cnt)); } } #define TXP_SYSCTL_STAT_ADD32(c, h, n, p, d) \ SYSCTL_ADD_UINT(c, h, OID_AUTO, n, CTLFLAG_RD, p, 0, d) #if __FreeBSD_version >= 900030 #define TXP_SYSCTL_STAT_ADD64(c, h, n, p, d) \ SYSCTL_ADD_UQUAD(c, h, OID_AUTO, n, CTLFLAG_RD, p, d) #elif __FreeBSD_version > 800000 #define TXP_SYSCTL_STAT_ADD64(c, h, n, p, d) \ SYSCTL_ADD_QUAD(c, h, OID_AUTO, n, CTLFLAG_RD, p, d) #else #define TXP_SYSCTL_STAT_ADD64(c, h, n, p, d) \ SYSCTL_ADD_ULONG(c, h, OID_AUTO, n, CTLFLAG_RD, p, d) #endif static void txp_sysctl_node(struct txp_softc *sc) { struct sysctl_ctx_list *ctx; struct sysctl_oid_list *child, *parent; struct sysctl_oid *tree; struct txp_hw_stats *stats; int error; stats = &sc->sc_stats; ctx = device_get_sysctl_ctx(sc->sc_dev); child = SYSCTL_CHILDREN(device_get_sysctl_tree(sc->sc_dev)); SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "process_limit", CTLTYPE_INT | CTLFLAG_RW, &sc->sc_process_limit, 0, sysctl_hw_txp_proc_limit, "I", "max number of Rx events to process"); /* Pull in device tunables. */ sc->sc_process_limit = TXP_PROC_DEFAULT; error = resource_int_value(device_get_name(sc->sc_dev), device_get_unit(sc->sc_dev), "process_limit", &sc->sc_process_limit); if (error == 0) { if (sc->sc_process_limit < TXP_PROC_MIN || sc->sc_process_limit > TXP_PROC_MAX) { device_printf(sc->sc_dev, "process_limit value out of range; " "using default: %d\n", TXP_PROC_DEFAULT); sc->sc_process_limit = TXP_PROC_DEFAULT; } } tree = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "stats", CTLFLAG_RD, NULL, "TXP statistics"); parent = SYSCTL_CHILDREN(tree); /* Tx statistics. */ tree = SYSCTL_ADD_NODE(ctx, parent, OID_AUTO, "tx", CTLFLAG_RD, NULL, "Tx MAC statistics"); child = SYSCTL_CHILDREN(tree); TXP_SYSCTL_STAT_ADD32(ctx, child, "frames", &stats->tx_frames, "Frames"); TXP_SYSCTL_STAT_ADD64(ctx, child, "octets", &stats->tx_bytes, "Octets"); TXP_SYSCTL_STAT_ADD32(ctx, child, "deferred", &stats->tx_deferred, "Deferred frames"); TXP_SYSCTL_STAT_ADD32(ctx, child, "late_colls", &stats->tx_late_colls, "Late collisions"); TXP_SYSCTL_STAT_ADD32(ctx, child, "colls", &stats->tx_colls, "Collisions"); TXP_SYSCTL_STAT_ADD32(ctx, child, "carrier_lost", &stats->tx_carrier_lost, "Carrier lost"); TXP_SYSCTL_STAT_ADD32(ctx, child, "multi_colls", &stats->tx_multi_colls, "Multiple collisions"); TXP_SYSCTL_STAT_ADD32(ctx, child, "excess_colls", &stats->tx_excess_colls, "Excessive collisions"); TXP_SYSCTL_STAT_ADD32(ctx, child, "fifo_underruns", &stats->tx_fifo_underruns, "FIFO underruns"); TXP_SYSCTL_STAT_ADD32(ctx, child, "mcast_oflows", &stats->tx_mcast_oflows, "Multicast overflows"); TXP_SYSCTL_STAT_ADD32(ctx, child, "filtered", &stats->tx_filtered, "Filtered frames"); /* Rx statistics. 
*/ tree = SYSCTL_ADD_NODE(ctx, parent, OID_AUTO, "rx", CTLFLAG_RD, NULL, "Rx MAC statistics"); child = SYSCTL_CHILDREN(tree); TXP_SYSCTL_STAT_ADD32(ctx, child, "frames", &stats->rx_frames, "Frames"); TXP_SYSCTL_STAT_ADD64(ctx, child, "octets", &stats->rx_bytes, "Octets"); TXP_SYSCTL_STAT_ADD32(ctx, child, "fifo_oflows", &stats->rx_fifo_oflows, "FIFO overflows"); TXP_SYSCTL_STAT_ADD32(ctx, child, "badssd", &stats->rx_badssd, "Bad SSD"); TXP_SYSCTL_STAT_ADD32(ctx, child, "crcerrs", &stats->rx_crcerrs, "CRC errors"); TXP_SYSCTL_STAT_ADD32(ctx, child, "lenerrs", &stats->rx_lenerrs, "Length errors"); TXP_SYSCTL_STAT_ADD32(ctx, child, "bcast_frames", &stats->rx_bcast_frames, "Broadcast frames"); TXP_SYSCTL_STAT_ADD32(ctx, child, "mcast_frames", &stats->rx_mcast_frames, "Multicast frames"); TXP_SYSCTL_STAT_ADD32(ctx, child, "oflows", &stats->rx_oflows, "Overflows"); TXP_SYSCTL_STAT_ADD32(ctx, child, "filtered", &stats->rx_filtered, "Filtered frames"); } #undef TXP_SYSCTL_STAT_ADD32 #undef TXP_SYSCTL_STAT_ADD64 static int sysctl_int_range(SYSCTL_HANDLER_ARGS, int low, int high) { int error, value; if (arg1 == NULL) return (EINVAL); value = *(int *)arg1; error = sysctl_handle_int(oidp, &value, 0, req); if (error || req->newptr == NULL) return (error); if (value < low || value > high) return (EINVAL); *(int *)arg1 = value; return (0); } static int sysctl_hw_txp_proc_limit(SYSCTL_HANDLER_ARGS) { return (sysctl_int_range(oidp, arg1, arg2, req, TXP_PROC_MIN, TXP_PROC_MAX)); } Index: stable/12/sys/dev/vx/if_vx.c =================================================================== --- stable/12/sys/dev/vx/if_vx.c (revision 339734) +++ stable/12/sys/dev/vx/if_vx.c (revision 339735) @@ -1,1079 +1,1081 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 1994 Herb Peyerl * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Herb Peyerl. * 4. The name of Herb Peyerl may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * */ #include __FBSDID("$FreeBSD$"); /* * Created from if_ep.c driver by Fred Gray (fgray@rice.edu) to support * the 3c590 family. 
*/ /* * Modified from the FreeBSD 1.1.5.1 version by: * Andres Vega Garcia * INRIA - Sophia Antipolis, France * avega@sophia.inria.fr */ /* * Promiscuous mode added and interrupt logic slightly changed * to reduce the number of adapter failures. Transceiver select * logic changed to use value from EEPROM. Autoconfiguration * features added. * Done by: * Serge Babkin * Chelindbank (Chelyabinsk, Russia) * babkin@hq.icb.chel.su */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #define ETHER_MAX_LEN 1518 #define ETHER_ADDR_LEN 6 #define ETHER_ALIGN 2 static struct connector_entry { int bit; char *name; } conn_tab[VX_CONNECTORS] = { #define CONNECTOR_UTP 0 { 0x08, "utp" }, #define CONNECTOR_AUI 1 { 0x20, "aui" }, /* dummy */ { 0, "???" }, #define CONNECTOR_BNC 3 { 0x10, "bnc" }, #define CONNECTOR_TX 4 { 0x02, "tx" }, #define CONNECTOR_FX 5 { 0x04, "fx" }, #define CONNECTOR_MII 6 { 0x40, "mii" }, { 0, "???" } }; static void vx_txstat(struct vx_softc *); static int vx_status(struct vx_softc *); static void vx_init(void *); static void vx_init_locked(struct vx_softc *); static int vx_ioctl(struct ifnet *, u_long, caddr_t); static void vx_start(struct ifnet *); static void vx_start_locked(struct ifnet *); static void vx_watchdog(void *); static void vx_reset(struct vx_softc *); static void vx_read(struct vx_softc *); static struct mbuf *vx_get(struct vx_softc *, u_int); static void vx_mbuf_fill(void *); static void vx_mbuf_empty(struct vx_softc *); static void vx_setfilter(struct vx_softc *); static void vx_getlink(struct vx_softc *); static void vx_setlink(struct vx_softc *); int vx_attach(device_t dev) { struct vx_softc *sc = device_get_softc(dev); struct ifnet *ifp; int i; u_char eaddr[6]; ifp = sc->vx_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); return 0; } if_initname(ifp, device_get_name(dev), device_get_unit(dev)); mtx_init(&sc->vx_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->vx_callout, &sc->vx_mtx, 0); callout_init_mtx(&sc->vx_watchdog, &sc->vx_mtx, 0); GO_WINDOW(0); CSR_WRITE_2(sc, VX_COMMAND, GLOBAL_RESET); VX_BUSY_WAIT; vx_getlink(sc); /* * Read the station address from the eeprom */ GO_WINDOW(0); for (i = 0; i < 3; i++) { int x; if (vx_busy_eeprom(sc)) { mtx_destroy(&sc->vx_mtx); if_free(ifp); return 0; } CSR_WRITE_2(sc, VX_W0_EEPROM_COMMAND, EEPROM_CMD_RD | (EEPROM_OEM_ADDR0 + i)); if (vx_busy_eeprom(sc)) { mtx_destroy(&sc->vx_mtx); if_free(ifp); return 0; } x = CSR_READ_2(sc, VX_W0_EEPROM_DATA); eaddr[(i << 1)] = x >> 8; eaddr[(i << 1) + 1] = x; } ifp->if_snd.ifq_maxlen = ifqmaxlen; ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_start = vx_start; ifp->if_ioctl = vx_ioctl; ifp->if_init = vx_init; ifp->if_softc = sc; ether_ifattach(ifp, eaddr); sc->vx_tx_start_thresh = 20; /* probably a good starting point. */ VX_LOCK(sc); vx_stop(sc); VX_UNLOCK(sc); + gone_by_fcp101_dev(dev); + return 1; } /* * The order in here seems important. Otherwise we may not receive * interrupts. ?! */ static void vx_init(void *xsc) { struct vx_softc *sc = (struct vx_softc *)xsc; VX_LOCK(sc); vx_init_locked(sc); VX_UNLOCK(sc); } static void vx_init_locked(struct vx_softc *sc) { struct ifnet *ifp = sc->vx_ifp; int i; VX_LOCK_ASSERT(sc); VX_BUSY_WAIT; GO_WINDOW(2); for (i = 0; i < 6; i++) /* Reload the ether_addr. 
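The 3Com window scheme banks several register sets over the same I/O range; window 2 exposes the station address registers, which are rewritten here one byte at a time.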
*/ CSR_WRITE_1(sc, VX_W2_ADDR_0 + i, IF_LLADDR(sc->vx_ifp)[i]); CSR_WRITE_2(sc, VX_COMMAND, RX_RESET); VX_BUSY_WAIT; CSR_WRITE_2(sc, VX_COMMAND, TX_RESET); VX_BUSY_WAIT; GO_WINDOW(1); /* Window 1 is operating window */ for (i = 0; i < 31; i++) CSR_READ_1(sc, VX_W1_TX_STATUS); CSR_WRITE_2(sc, VX_COMMAND, SET_RD_0_MASK | S_CARD_FAILURE | S_RX_COMPLETE | S_TX_COMPLETE | S_TX_AVAIL); CSR_WRITE_2(sc, VX_COMMAND, SET_INTR_MASK | S_CARD_FAILURE | S_RX_COMPLETE | S_TX_COMPLETE | S_TX_AVAIL); /* * Attempt to get rid of any stray interrupts that occurred during * configuration. On the i386 this isn't possible because one may * already be queued. However, a single stray interrupt is * unimportant. */ CSR_WRITE_2(sc, VX_COMMAND, ACK_INTR | 0xff); vx_setfilter(sc); vx_setlink(sc); CSR_WRITE_2(sc, VX_COMMAND, RX_ENABLE); CSR_WRITE_2(sc, VX_COMMAND, TX_ENABLE); vx_mbuf_fill(sc); /* Interface is now `running', with no output active. */ ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->vx_watchdog, hz, vx_watchdog, sc); /* Attempt to start output, if any. */ vx_start_locked(ifp); } static void vx_setfilter(struct vx_softc *sc) { struct ifnet *ifp = sc->vx_ifp; VX_LOCK_ASSERT(sc); GO_WINDOW(1); /* Window 1 is operating window */ CSR_WRITE_2(sc, VX_COMMAND, SET_RX_FILTER | FIL_INDIVIDUAL | FIL_BRDCST | FIL_MULTICAST | ((ifp->if_flags & IFF_PROMISC) ? FIL_PROMISC : 0)); } static void vx_getlink(struct vx_softc *sc) { int n, k; GO_WINDOW(3); sc->vx_connectors = CSR_READ_2(sc, VX_W3_RESET_OPT) & 0x7f; for (n = 0, k = 0; k < VX_CONNECTORS; k++) { if (sc->vx_connectors & conn_tab[k].bit) { if (n > 0) printf("/"); printf("%s", conn_tab[k].name); n++; } } if (sc->vx_connectors == 0) { printf("no connectors!\n"); return; } GO_WINDOW(3); sc->vx_connector = (CSR_READ_4(sc, VX_W3_INTERNAL_CFG) & INTERNAL_CONNECTOR_MASK) >> INTERNAL_CONNECTOR_BITS; if (sc->vx_connector & 0x10) { sc->vx_connector &= 0x0f; printf("[*%s*]", conn_tab[(int)sc->vx_connector].name); printf(": disable 'auto select' with DOS util!\n"); } else { printf("[*%s*]\n", conn_tab[(int)sc->vx_connector].name); } } static void vx_setlink(struct vx_softc *sc) { struct ifnet *ifp = sc->vx_ifp; int i, j, k; char *reason, *warning; static int prev_flags; static signed char prev_conn = -1; VX_LOCK_ASSERT(sc); if (prev_conn == -1) prev_conn = sc->vx_connector; /* * S.B. * * The behavior has been slightly changed: if any of the link[0-2] * flags is set and its connector is physically present, the * following connectors are used: * * link0 - AUI * highest precedence * link1 - BNC * link2 - UTP * lowest precedence * * If none of them is specified, the connector specified in the * EEPROM is used (if present on the card, or UTP if not). */ i = sc->vx_connector; /* default in EEPROM */ reason = "default"; warning = NULL; if (ifp->if_flags & IFF_LINK0) { if (sc->vx_connectors & conn_tab[CONNECTOR_AUI].bit) { i = CONNECTOR_AUI; reason = "link0"; } else { warning = "aui not present! (link0)"; } } else if (ifp->if_flags & IFF_LINK1) { if (sc->vx_connectors & conn_tab[CONNECTOR_BNC].bit) { i = CONNECTOR_BNC; reason = "link1"; } else { warning = "bnc not present! (link1)"; } } else if (ifp->if_flags & IFF_LINK2) { if (sc->vx_connectors & conn_tab[CONNECTOR_UTP].bit) { i = CONNECTOR_UTP; reason = "link2"; } else { warning = "utp not present!
(link2)"; } } else if ((sc->vx_connectors & conn_tab[(int)sc->vx_connector].bit) == 0) { warning = "strange connector type in EEPROM."; reason = "forced"; i = CONNECTOR_UTP; } /* Avoid unnecessary message. */ k = (prev_flags ^ ifp->if_flags) & (IFF_LINK0 | IFF_LINK1 | IFF_LINK2); if ((k != 0) || (prev_conn != i)) { if (warning != NULL) if_printf(ifp, "warning: %s\n", warning); if_printf(ifp, "selected %s. (%s)\n", conn_tab[i].name, reason); } /* Set the selected connector. */ GO_WINDOW(3); j = CSR_READ_4(sc, VX_W3_INTERNAL_CFG) & ~INTERNAL_CONNECTOR_MASK; CSR_WRITE_4(sc, VX_W3_INTERNAL_CFG, j | (i << INTERNAL_CONNECTOR_BITS)); /* First, disable all. */ CSR_WRITE_2(sc, VX_COMMAND, STOP_TRANSCEIVER); DELAY(800); GO_WINDOW(4); CSR_WRITE_2(sc, VX_W4_MEDIA_TYPE, 0); /* Second, enable the selected one. */ switch (i) { case CONNECTOR_UTP: GO_WINDOW(4); CSR_WRITE_2(sc, VX_W4_MEDIA_TYPE, ENABLE_UTP); break; case CONNECTOR_BNC: CSR_WRITE_2(sc, VX_COMMAND, START_TRANSCEIVER); DELAY(800); break; case CONNECTOR_TX: case CONNECTOR_FX: GO_WINDOW(4); CSR_WRITE_2(sc, VX_W4_MEDIA_TYPE, LINKBEAT_ENABLE); break; default: /* AUI and MII fall here */ break; } GO_WINDOW(1); prev_flags = ifp->if_flags; prev_conn = i; } static void vx_start(struct ifnet *ifp) { struct vx_softc *sc = ifp->if_softc; VX_LOCK(sc); vx_start_locked(ifp); VX_UNLOCK(sc); } static void vx_start_locked(struct ifnet *ifp) { struct vx_softc *sc = ifp->if_softc; struct mbuf *m; int len, pad; VX_LOCK_ASSERT(sc); /* Don't transmit if interface is busy or not running */ if ((sc->vx_ifp->if_drv_flags & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) != IFF_DRV_RUNNING) return; startagain: /* Sneak a peek at the next packet */ m = ifp->if_snd.ifq_head; if (m == NULL) { return; } /* We need to use m->m_pkthdr.len, so require the header */ M_ASSERTPKTHDR(m); len = m->m_pkthdr.len; pad = (4 - len) & 3; /* * The 3c509 automatically pads short packets to minimum ethernet * length, but we drop packets that are too large. Perhaps we should * truncate them instead? */ if (len + pad > ETHER_MAX_LEN) { /* packet is obviously too large: toss it */ if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); IF_DEQUEUE(&ifp->if_snd, m); m_freem(m); goto readcheck; } VX_BUSY_WAIT; if (CSR_READ_2(sc, VX_W1_FREE_TX) < len + pad + 4) { CSR_WRITE_2(sc, VX_COMMAND, SET_TX_AVAIL_THRESH | ((len + pad + 4) >> 2)); /* not enough room in FIFO - make sure */ if (CSR_READ_2(sc, VX_W1_FREE_TX) < len + pad + 4) { ifp->if_drv_flags |= IFF_DRV_OACTIVE; sc->vx_timer = 1; return; } } CSR_WRITE_2(sc, VX_COMMAND, SET_TX_AVAIL_THRESH | (8188 >> 2)); IF_DEQUEUE(&ifp->if_snd, m); if (m == NULL) /* not really needed */ return; VX_BUSY_WAIT; CSR_WRITE_2(sc, VX_COMMAND, SET_TX_START_THRESH | ((len / 4 + sc->vx_tx_start_thresh) >> 2)); BPF_MTAP(sc->vx_ifp, m); /* * Do the output at splhigh() so that an interrupt from another device * won't cause a FIFO underrun. * * XXX: Can't enforce that anymore. */ CSR_WRITE_4(sc, VX_W1_TX_PIO_WR_1, len | TX_INDICATE); while (m) { if (m->m_len > 3) bus_space_write_multi_4(sc->vx_bst, sc->vx_bsh, VX_W1_TX_PIO_WR_1, (u_int32_t *)mtod(m, caddr_t), m->m_len / 4); if (m->m_len & 3) bus_space_write_multi_1(sc->vx_bst, sc->vx_bsh, VX_W1_TX_PIO_WR_1, mtod(m, caddr_t) + (m->m_len & ~3), m->m_len & 3); m = m_free(m); } while (pad--) CSR_WRITE_1(sc, VX_W1_TX_PIO_WR_1, 0); /* Padding */ if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); sc->vx_timer = 1; readcheck: if ((CSR_READ_2(sc, VX_W1_RX_STATUS) & ERR_INCOMPLETE) == 0) { /* We received a complete packet. 
*/ if ((CSR_READ_2(sc, VX_STATUS) & S_INTR_LATCH) == 0) { /* * No interrupt, read the packet and continue * Is this supposed to happen? Is my motherboard * completely busted? */ vx_read(sc); } else /* * Got an interrupt, return so that it gets * serviced. */ return; } else { /* Check if we are stuck and reset [see XXX comment] */ if (vx_status(sc)) { if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "adapter reset\n"); vx_reset(sc); } } goto startagain; } /* * XXX: The 3c509 card can get in a mode where both the fifo status bit * FIFOS_RX_OVERRUN and the status bit ERR_INCOMPLETE are set * We detect this situation and we reset the adapter. * It happens at times when there is a lot of broadcast traffic * on the cable (once in a blue moon). */ static int vx_status(struct vx_softc *sc) { struct ifnet *ifp; int fifost; VX_LOCK_ASSERT(sc); /* * Check the FIFO status and act accordingly */ GO_WINDOW(4); fifost = CSR_READ_2(sc, VX_W4_FIFO_DIAG); GO_WINDOW(1); ifp = sc->vx_ifp; if (fifost & FIFOS_RX_UNDERRUN) { if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "RX underrun\n"); vx_reset(sc); return 0; } if (fifost & FIFOS_RX_STATUS_OVERRUN) { if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "RX Status overrun\n"); return 1; } if (fifost & FIFOS_RX_OVERRUN) { if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "RX overrun\n"); return 1; } if (fifost & FIFOS_TX_OVERRUN) { if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "TX overrun\n"); vx_reset(sc); return 0; } return 0; } static void vx_txstat(struct vx_softc *sc) { struct ifnet *ifp; int i; VX_LOCK_ASSERT(sc); /* * We need to read+write TX_STATUS until we get a 0 status * in order to turn off the interrupt flag. */ ifp = sc->vx_ifp; while ((i = CSR_READ_1(sc, VX_W1_TX_STATUS)) & TXS_COMPLETE) { CSR_WRITE_1(sc, VX_W1_TX_STATUS, 0x0); if (i & TXS_JABBER) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "jabber (%x)\n", i); vx_reset(sc); } else if (i & TXS_UNDERRUN) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "fifo underrun (%x) @%d\n", i, sc->vx_tx_start_thresh); if (sc->vx_tx_succ_ok < 100) sc->vx_tx_start_thresh = min(ETHER_MAX_LEN, sc->vx_tx_start_thresh + 20); sc->vx_tx_succ_ok = 0; vx_reset(sc); } else if (i & TXS_MAX_COLLISION) { if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 1); CSR_WRITE_2(sc, VX_COMMAND, TX_ENABLE); ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; } else sc->vx_tx_succ_ok = (sc->vx_tx_succ_ok + 1) & 127; } } void vx_intr(void *voidsc) { short status; struct vx_softc *sc = voidsc; struct ifnet *ifp = sc->vx_ifp; VX_LOCK(sc); for (;;) { CSR_WRITE_2(sc, VX_COMMAND, C_INTR_LATCH); status = CSR_READ_2(sc, VX_STATUS); if ((status & (S_TX_COMPLETE | S_TX_AVAIL | S_RX_COMPLETE | S_CARD_FAILURE)) == 0) break; /* * Acknowledge any interrupts. It's important that we do this * first, since there would otherwise be a race condition. * Due to the i386 interrupt queueing, we may get spurious * interrupts occasionally. 
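* Acknowledging with ACK_INTR | status before servicing also means that an * event which fires while we are working simply re-latches and is picked up * on the next pass of this loop rather than being lost.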
*/ CSR_WRITE_2(sc, VX_COMMAND, ACK_INTR | status); if (status & S_RX_COMPLETE) vx_read(sc); if (status & S_TX_AVAIL) { sc->vx_timer = 0; sc->vx_ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; vx_start_locked(sc->vx_ifp); } if (status & S_CARD_FAILURE) { if_printf(ifp, "adapter failure (%x)\n", status); sc->vx_timer = 0; vx_reset(sc); break; } if (status & S_TX_COMPLETE) { sc->vx_timer = 0; vx_txstat(sc); vx_start_locked(ifp); } } VX_UNLOCK(sc); /* no more interrupts */ return; } static void vx_read(struct vx_softc *sc) { struct ifnet *ifp = sc->vx_ifp; struct mbuf *m; struct ether_header *eh; u_int len; VX_LOCK_ASSERT(sc); len = CSR_READ_2(sc, VX_W1_RX_STATUS); again: if (ifp->if_flags & IFF_DEBUG) { int err = len & ERR_MASK; char *s = NULL; if (len & ERR_INCOMPLETE) s = "incomplete packet"; else if (err == ERR_OVERRUN) s = "packet overrun"; else if (err == ERR_RUNT) s = "runt packet"; else if (err == ERR_ALIGNMENT) s = "bad alignment"; else if (err == ERR_CRC) s = "bad crc"; else if (err == ERR_OVERSIZE) s = "oversized packet"; else if (err == ERR_DRIBBLE) s = "dribble bits"; if (s) if_printf(ifp, "%s\n", s); } if (len & ERR_INCOMPLETE) return; if (len & ERR_RX) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); goto abort; } len &= RX_BYTES_MASK; /* Lower 11 bits = RX bytes. */ /* Pull packet off interface. */ m = vx_get(sc, len); if (m == NULL) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); goto abort; } if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); { struct mbuf *m0; m0 = m_devget(mtod(m, char *), m->m_pkthdr.len, ETHER_ALIGN, ifp, NULL); if (m0 == NULL) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); goto abort; } m_freem(m); m = m0; } /* We assume the header fit entirely in one mbuf. */ eh = mtod(m, struct ether_header *); /* * XXX: Some cards seem to be in promiscuous mode all the time. * we need to make sure we only get our own stuff always. * bleah! */ if (!(ifp->if_flags & IFF_PROMISC) && (eh->ether_dhost[0] & 1) == 0 /* !mcast and !bcast */ && bcmp(eh->ether_dhost, IF_LLADDR(sc->vx_ifp), ETHER_ADDR_LEN) != 0) { m_freem(m); return; } VX_UNLOCK(sc); (*ifp->if_input)(ifp, m); VX_LOCK(sc); /* * In periods of high traffic we can actually receive enough * packets so that the fifo overrun bit will be set at this point, * even though we just read a packet. In this case we * are not going to receive any more interrupts. We check for * this condition and read again until the fifo is not full. * We could simplify this test by not using vx_status(), but * rechecking the RX_STATUS register directly. This test could * result in unnecessary looping in cases where there is a new * packet but the fifo is not full, but it will not fix the * stuck behavior. * * Even with this improvement, we still get packet overrun errors * which are hurting performance. Maybe when I get some more time * I'll modify vx_read() so that it can handle RX_EARLY interrupts. */ if (vx_status(sc)) { len = CSR_READ_2(sc, VX_W1_RX_STATUS); /* Check if we are stuck and reset [see XXX comment] */ if (len & ERR_INCOMPLETE) { if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "adapter reset\n"); vx_reset(sc); return; } goto again; } return; abort: CSR_WRITE_2(sc, VX_COMMAND, RX_DISCARD_TOP_PACK); } static struct mbuf * vx_get(struct vx_softc *sc, u_int totlen) { struct ifnet *ifp = sc->vx_ifp; struct mbuf *top, **mp, *m; int len; VX_LOCK_ASSERT(sc); m = sc->vx_mb[sc->vx_next_mb]; sc->vx_mb[sc->vx_next_mb] = NULL; if (m == NULL) { MGETHDR(m, M_NOWAIT, MT_DATA); if (m == NULL) return NULL; } else { /* If the queue is no longer full, refill. 
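The vx_mb[] pool is topped up by vx_mbuf_fill() from a callout re-armed every hz / 100 ticks (roughly 10 ms), so the receive path rarely has to fall back to MGETHDR() at interrupt time.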
*/ if (sc->vx_last_mb == sc->vx_next_mb && sc->vx_buffill_pending == 0) { callout_reset(&sc->vx_callout, hz / 100, vx_mbuf_fill, sc); sc->vx_buffill_pending = 1; } /* Convert one of our saved mbufs. */ sc->vx_next_mb = (sc->vx_next_mb + 1) % MAX_MBS; m->m_data = m->m_pktdat; m->m_flags = M_PKTHDR; bzero(&m->m_pkthdr, sizeof(m->m_pkthdr)); } m->m_pkthdr.rcvif = ifp; m->m_pkthdr.len = totlen; len = MHLEN; top = NULL; mp = &top; /* * We read the packet at splhigh() so that an interrupt from another * device doesn't cause the card's buffer to overflow while we're * reading it. We may still lose packets at other times. * * XXX: Can't enforce this anymore. */ /* * Since we don't set allowLargePackets bit in MacControl register, * we can assume that totlen <= 1500 bytes. * The while loop will be performed iff we have a packet with * MLEN < m_len < MINCLSIZE. */ while (totlen > 0) { if (top) { m = sc->vx_mb[sc->vx_next_mb]; sc->vx_mb[sc->vx_next_mb] = NULL; if (m == NULL) { MGET(m, M_NOWAIT, MT_DATA); if (m == NULL) { m_freem(top); return NULL; } } else { sc->vx_next_mb = (sc->vx_next_mb + 1) % MAX_MBS; } len = MLEN; } if (totlen >= MINCLSIZE) { if (MCLGET(m, M_NOWAIT)) len = MCLBYTES; } len = min(totlen, len); if (len > 3) bus_space_read_multi_4(sc->vx_bst, sc->vx_bsh, VX_W1_RX_PIO_RD_1, mtod(m, u_int32_t *), len / 4); if (len & 3) { bus_space_read_multi_1(sc->vx_bst, sc->vx_bsh, VX_W1_RX_PIO_RD_1, mtod(m, u_int8_t *) + (len & ~3), len & 3); } m->m_len = len; totlen -= len; *mp = m; mp = &m->m_next; } CSR_WRITE_2(sc, VX_COMMAND, RX_DISCARD_TOP_PACK); return top; } static int vx_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data) { struct vx_softc *sc = ifp->if_softc; struct ifreq *ifr = (struct ifreq *) data; int error = 0; switch (cmd) { case SIOCSIFFLAGS: VX_LOCK(sc); if ((ifp->if_flags & IFF_UP) == 0 && (ifp->if_drv_flags & IFF_DRV_RUNNING) != 0) { /* * If interface is marked down and it is running, then * stop it. */ vx_stop(sc); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; } else if ((ifp->if_flags & IFF_UP) != 0 && (ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { /* * If interface is marked up and it is stopped, then * start it. */ vx_init_locked(sc); } else { /* * deal with flags changes: * IFF_MULTICAST, IFF_PROMISC, * IFF_LINK0, IFF_LINK1, */ vx_setfilter(sc); vx_setlink(sc); } VX_UNLOCK(sc); break; case SIOCSIFMTU: /* * Set the interface MTU. */ VX_LOCK(sc); if (ifr->ifr_mtu > ETHERMTU) { error = EINVAL; } else { ifp->if_mtu = ifr->ifr_mtu; } VX_UNLOCK(sc); break; case SIOCADDMULTI: case SIOCDELMULTI: /* * Multicast list has changed; set the hardware filter * accordingly.
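* The 3c59x receive filter is all-or-nothing for multicast (FIL_MULTICAST), * so the stop/reinit cycle in vx_reset() below is enough to reprogram it; * no per-address hash is maintained.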
*/ VX_LOCK(sc); vx_reset(sc); VX_UNLOCK(sc); error = 0; break; default: error = ether_ioctl(ifp, cmd, data); break; } return (error); } static void vx_reset(struct vx_softc *sc) { VX_LOCK_ASSERT(sc); vx_stop(sc); vx_init_locked(sc); } static void vx_watchdog(void *arg) { struct vx_softc *sc; struct ifnet *ifp; sc = arg; VX_LOCK_ASSERT(sc); callout_reset(&sc->vx_watchdog, hz, vx_watchdog, sc); if (sc->vx_timer == 0 || --sc->vx_timer > 0) return; ifp = sc->vx_ifp; if (ifp->if_flags & IFF_DEBUG) if_printf(ifp, "device timeout\n"); ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; vx_start_locked(ifp); vx_intr(sc); } void vx_stop(struct vx_softc *sc) { VX_LOCK_ASSERT(sc); sc->vx_timer = 0; callout_stop(&sc->vx_watchdog); CSR_WRITE_2(sc, VX_COMMAND, RX_DISABLE); CSR_WRITE_2(sc, VX_COMMAND, RX_DISCARD_TOP_PACK); VX_BUSY_WAIT; CSR_WRITE_2(sc, VX_COMMAND, TX_DISABLE); CSR_WRITE_2(sc, VX_COMMAND, STOP_TRANSCEIVER); DELAY(800); CSR_WRITE_2(sc, VX_COMMAND, RX_RESET); VX_BUSY_WAIT; CSR_WRITE_2(sc, VX_COMMAND, TX_RESET); VX_BUSY_WAIT; CSR_WRITE_2(sc, VX_COMMAND, C_INTR_LATCH); CSR_WRITE_2(sc, VX_COMMAND, SET_RD_0_MASK); CSR_WRITE_2(sc, VX_COMMAND, SET_INTR_MASK); CSR_WRITE_2(sc, VX_COMMAND, SET_RX_FILTER); vx_mbuf_empty(sc); } int vx_busy_eeprom(struct vx_softc *sc) { int j, i = 100; while (i--) { j = CSR_READ_2(sc, VX_W0_EEPROM_COMMAND); if (j & EEPROM_BUSY) DELAY(100); else break; } if (!i) { if_printf(sc->vx_ifp, "eeprom failed to come ready\n"); return (1); } return (0); } static void vx_mbuf_fill(void *sp) { struct vx_softc *sc = (struct vx_softc *)sp; int i; VX_LOCK_ASSERT(sc); i = sc->vx_last_mb; do { if (sc->vx_mb[i] == NULL) MGET(sc->vx_mb[i], M_NOWAIT, MT_DATA); if (sc->vx_mb[i] == NULL) break; i = (i + 1) % MAX_MBS; } while (i != sc->vx_next_mb); sc->vx_last_mb = i; /* If the queue was not filled, try again. */ if (sc->vx_last_mb != sc->vx_next_mb) { callout_reset(&sc->vx_callout, hz / 100, vx_mbuf_fill, sc); sc->vx_buffill_pending = 1; } else { sc->vx_buffill_pending = 0; } } static void vx_mbuf_empty(struct vx_softc *sc) { int i; VX_LOCK_ASSERT(sc); for (i = 0; i < MAX_MBS; i++) { if (sc->vx_mb[i]) { m_freem(sc->vx_mb[i]); sc->vx_mb[i] = NULL; } } sc->vx_last_mb = sc->vx_next_mb = 0; if (sc->vx_buffill_pending != 0) callout_stop(&sc->vx_callout); } Index: stable/12/sys/dev/wb/if_wb.c =================================================================== --- stable/12/sys/dev/wb/if_wb.c (revision 339734) +++ stable/12/sys/dev/wb/if_wb.c (revision 339735) @@ -1,1637 +1,1639 @@ /*- * SPDX-License-Identifier: BSD-4-Clause * * Copyright (c) 1997, 1998 * Bill Paul . All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Bill Paul. * 4. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * Winbond fast ethernet PCI NIC driver * * Supports various cheap network adapters based on the Winbond W89C840F * fast ethernet controller chip. This includes adapters manufactured by * Winbond itself and some made by Linksys. * * Written by Bill Paul * Electrical Engineering Department * Columbia University, New York City */ /* * The Winbond W89C840F chip is a bus master; in some ways it resembles * a DEC 'tulip' chip, only not as complicated. Unfortunately, it has * one major difference which is that while the registers do many of * the same things as a tulip adapter, the offsets are different: where * tulip registers are typically spaced 8 bytes apart, the Winbond * registers are spaced 4 bytes apart. The receiver filter is also * programmed differently. * * Like the tulip, the Winbond chip uses small descriptors containing * a status word, a control word and 32-bit areas that can either be used * to point to two external data blocks, or to point to a single block * and another descriptor in a linked list. Descriptors can be grouped * together in blocks to form fixed length rings or can be chained * together in linked lists. A single packet may be spread out over * several descriptors if necessary. * * For the receive ring, this driver uses a linked list of descriptors, * each pointing to a single mbuf cluster buffer, which is large enough * to hold an entire packet. The linked list is looped back to create a * closed ring. * * For transmission, the driver creates a linked list of 'super descriptors' * which each contain several individual descriptors linked together. * Each 'super descriptor' contains WB_MAXFRAGS descriptors, which we * abuse as fragment pointers. This allows us to use a buffer management * scheme very similar to that used in the ThunderLAN and Etherlink XL * drivers. * * Autonegotiation is performed using the external PHY via the MII bus. * The sample boards I have all use a Davicom PHY. * * Note: the author of the Linux driver for the Winbond chip alludes * to some sort of flaw in the chip's design that seems to mandate some * drastic workaround which significantly impairs transmit performance. * I have no idea what he's on about: transmit performance with all * three of my test boards seems fine. */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* for vtophys */ #include /* for vtophys */ #include #include #include #include #include #include #include #include #include /* "device miibus" required. See GENERIC if you get errors here.
*/ #include "miibus_if.h" #define WB_USEIOSPACE #include MODULE_DEPEND(wb, pci, 1, 1, 1); MODULE_DEPEND(wb, ether, 1, 1, 1); MODULE_DEPEND(wb, miibus, 1, 1, 1); /* * Various supported device vendors/types and their names. */ static const struct wb_type wb_devs[] = { { WB_VENDORID, WB_DEVICEID_840F, "Winbond W89C840F 10/100BaseTX" }, { CP_VENDORID, CP_DEVICEID_RL100, "Compex RL100-ATX 10/100baseTX" }, { 0, 0, NULL } }; static int wb_probe(device_t); static int wb_attach(device_t); static int wb_detach(device_t); static void wb_bfree(struct mbuf *); static int wb_newbuf(struct wb_softc *, struct wb_chain_onefrag *, struct mbuf *); static int wb_encap(struct wb_softc *, struct wb_chain *, struct mbuf *); static void wb_rxeof(struct wb_softc *); static void wb_rxeoc(struct wb_softc *); static void wb_txeof(struct wb_softc *); static void wb_txeoc(struct wb_softc *); static void wb_intr(void *); static void wb_tick(void *); static void wb_start(struct ifnet *); static void wb_start_locked(struct ifnet *); static int wb_ioctl(struct ifnet *, u_long, caddr_t); static void wb_init(void *); static void wb_init_locked(struct wb_softc *); static void wb_stop(struct wb_softc *); static void wb_watchdog(struct wb_softc *); static int wb_shutdown(device_t); static int wb_ifmedia_upd(struct ifnet *); static void wb_ifmedia_sts(struct ifnet *, struct ifmediareq *); static void wb_eeprom_putbyte(struct wb_softc *, int); static void wb_eeprom_getword(struct wb_softc *, int, u_int16_t *); static void wb_read_eeprom(struct wb_softc *, caddr_t, int, int, int); static void wb_setcfg(struct wb_softc *, u_int32_t); static void wb_setmulti(struct wb_softc *); static void wb_reset(struct wb_softc *); static void wb_fixmedia(struct wb_softc *); static int wb_list_rx_init(struct wb_softc *); static int wb_list_tx_init(struct wb_softc *); static int wb_miibus_readreg(device_t, int, int); static int wb_miibus_writereg(device_t, int, int, int); static void wb_miibus_statchg(device_t); /* * MII bit-bang glue */ static uint32_t wb_mii_bitbang_read(device_t); static void wb_mii_bitbang_write(device_t, uint32_t); static const struct mii_bitbang_ops wb_mii_bitbang_ops = { wb_mii_bitbang_read, wb_mii_bitbang_write, { WB_SIO_MII_DATAOUT, /* MII_BIT_MDO */ WB_SIO_MII_DATAIN, /* MII_BIT_MDI */ WB_SIO_MII_CLK, /* MII_BIT_MDC */ WB_SIO_MII_DIR, /* MII_BIT_DIR_HOST_PHY */ 0, /* MII_BIT_DIR_PHY_HOST */ } }; #ifdef WB_USEIOSPACE #define WB_RES SYS_RES_IOPORT #define WB_RID WB_PCI_LOIO #else #define WB_RES SYS_RES_MEMORY #define WB_RID WB_PCI_LOMEM #endif static device_method_t wb_methods[] = { /* Device interface */ DEVMETHOD(device_probe, wb_probe), DEVMETHOD(device_attach, wb_attach), DEVMETHOD(device_detach, wb_detach), DEVMETHOD(device_shutdown, wb_shutdown), /* MII interface */ DEVMETHOD(miibus_readreg, wb_miibus_readreg), DEVMETHOD(miibus_writereg, wb_miibus_writereg), DEVMETHOD(miibus_statchg, wb_miibus_statchg), DEVMETHOD_END }; static driver_t wb_driver = { "wb", wb_methods, sizeof(struct wb_softc) }; static devclass_t wb_devclass; DRIVER_MODULE(wb, pci, wb_driver, wb_devclass, 0, 0); DRIVER_MODULE(miibus, wb, miibus_driver, miibus_devclass, 0, 0); #define WB_SETBIT(sc, reg, x) \ CSR_WRITE_4(sc, reg, \ CSR_READ_4(sc, reg) | (x)) #define WB_CLRBIT(sc, reg, x) \ CSR_WRITE_4(sc, reg, \ CSR_READ_4(sc, reg) & ~(x)) #define SIO_SET(x) \ CSR_WRITE_4(sc, WB_SIO, \ CSR_READ_4(sc, WB_SIO) | (x)) #define SIO_CLR(x) \ CSR_WRITE_4(sc, WB_SIO, \ CSR_READ_4(sc, WB_SIO) & ~(x)) /* * Send a read command and address to the EEPROM, check for 
ACK. */ static void wb_eeprom_putbyte(sc, addr) struct wb_softc *sc; int addr; { int d, i; d = addr | WB_EECMD_READ; /* * Feed in each bit and strobe the clock. */ for (i = 0x400; i; i >>= 1) { if (d & i) { SIO_SET(WB_SIO_EE_DATAIN); } else { SIO_CLR(WB_SIO_EE_DATAIN); } DELAY(100); SIO_SET(WB_SIO_EE_CLK); DELAY(150); SIO_CLR(WB_SIO_EE_CLK); DELAY(100); } } /* * Read a word of data stored in the EEPROM at address 'addr.' */ static void wb_eeprom_getword(sc, addr, dest) struct wb_softc *sc; int addr; u_int16_t *dest; { int i; u_int16_t word = 0; /* Enter EEPROM access mode. */ CSR_WRITE_4(sc, WB_SIO, WB_SIO_EESEL|WB_SIO_EE_CS); /* * Send address of word we want to read. */ wb_eeprom_putbyte(sc, addr); CSR_WRITE_4(sc, WB_SIO, WB_SIO_EESEL|WB_SIO_EE_CS); /* * Start reading bits from EEPROM. */ for (i = 0x8000; i; i >>= 1) { SIO_SET(WB_SIO_EE_CLK); DELAY(100); if (CSR_READ_4(sc, WB_SIO) & WB_SIO_EE_DATAOUT) word |= i; SIO_CLR(WB_SIO_EE_CLK); DELAY(100); } /* Turn off EEPROM access mode. */ CSR_WRITE_4(sc, WB_SIO, 0); *dest = word; } /* * Read a sequence of words from the EEPROM. */ static void wb_read_eeprom(sc, dest, off, cnt, swap) struct wb_softc *sc; caddr_t dest; int off; int cnt; int swap; { int i; u_int16_t word = 0, *ptr; for (i = 0; i < cnt; i++) { wb_eeprom_getword(sc, off + i, &word); ptr = (u_int16_t *)(dest + (i * 2)); if (swap) *ptr = ntohs(word); else *ptr = word; } } /* * Read the MII serial port for the MII bit-bang module. */ static uint32_t wb_mii_bitbang_read(device_t dev) { struct wb_softc *sc; uint32_t val; sc = device_get_softc(dev); val = CSR_READ_4(sc, WB_SIO); CSR_BARRIER(sc, WB_SIO, 4, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); return (val); } /* * Write the MII serial port for the MII bit-bang module. */ static void wb_mii_bitbang_write(device_t dev, uint32_t val) { struct wb_softc *sc; sc = device_get_softc(dev); CSR_WRITE_4(sc, WB_SIO, val); CSR_BARRIER(sc, WB_SIO, 4, BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE); } static int wb_miibus_readreg(dev, phy, reg) device_t dev; int phy, reg; { return (mii_bitbang_readreg(dev, &wb_mii_bitbang_ops, phy, reg)); } static int wb_miibus_writereg(dev, phy, reg, data) device_t dev; int phy, reg, data; { mii_bitbang_writereg(dev, &wb_mii_bitbang_ops, phy, reg, data); return(0); } static void wb_miibus_statchg(dev) device_t dev; { struct wb_softc *sc; struct mii_data *mii; sc = device_get_softc(dev); mii = device_get_softc(sc->wb_miibus); wb_setcfg(sc, mii->mii_media_active); } /* * Program the 64-bit multicast hash filter.
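* Each address is hashed to the top six bits of the inverted big-endian * CRC-32 of its octets; bit (h % 32) of WB_MAR0 (h < 32) or WB_MAR1 * (h >= 32) is then set. For example, a hash value of 37 would set bit 5 * of WB_MAR1 (illustrative value).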
*/ static void wb_setmulti(sc) struct wb_softc *sc; { struct ifnet *ifp; int h = 0; u_int32_t hashes[2] = { 0, 0 }; struct ifmultiaddr *ifma; u_int32_t rxfilt; int mcnt = 0; ifp = sc->wb_ifp; rxfilt = CSR_READ_4(sc, WB_NETCFG); if (ifp->if_flags & IFF_ALLMULTI || ifp->if_flags & IFF_PROMISC) { rxfilt |= WB_NETCFG_RX_MULTI; CSR_WRITE_4(sc, WB_NETCFG, rxfilt); CSR_WRITE_4(sc, WB_MAR0, 0xFFFFFFFF); CSR_WRITE_4(sc, WB_MAR1, 0xFFFFFFFF); return; } /* first, zot all the existing hash bits */ CSR_WRITE_4(sc, WB_MAR0, 0); CSR_WRITE_4(sc, WB_MAR1, 0); /* now program new ones */ if_maddr_rlock(ifp); CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; h = ~ether_crc32_be(LLADDR((struct sockaddr_dl *) ifma->ifma_addr), ETHER_ADDR_LEN) >> 26; if (h < 32) hashes[0] |= (1 << h); else hashes[1] |= (1 << (h - 32)); mcnt++; } if_maddr_runlock(ifp); if (mcnt) rxfilt |= WB_NETCFG_RX_MULTI; else rxfilt &= ~WB_NETCFG_RX_MULTI; CSR_WRITE_4(sc, WB_MAR0, hashes[0]); CSR_WRITE_4(sc, WB_MAR1, hashes[1]); CSR_WRITE_4(sc, WB_NETCFG, rxfilt); } /* * The Winbond manual states that in order to fiddle with the * 'full-duplex' and '100Mbps' bits in the netconfig register, we * first have to put the transmit and/or receive logic in the idle state. */ static void wb_setcfg(sc, media) struct wb_softc *sc; u_int32_t media; { int i, restart = 0; if (CSR_READ_4(sc, WB_NETCFG) & (WB_NETCFG_TX_ON|WB_NETCFG_RX_ON)) { restart = 1; WB_CLRBIT(sc, WB_NETCFG, (WB_NETCFG_TX_ON|WB_NETCFG_RX_ON)); for (i = 0; i < WB_TIMEOUT; i++) { DELAY(10); if ((CSR_READ_4(sc, WB_ISR) & WB_ISR_TX_IDLE) && (CSR_READ_4(sc, WB_ISR) & WB_ISR_RX_IDLE)) break; } if (i == WB_TIMEOUT) device_printf(sc->wb_dev, "failed to force tx and rx to idle state\n"); } if (IFM_SUBTYPE(media) == IFM_10_T) WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_100MBPS); else WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_100MBPS); if ((media & IFM_GMASK) == IFM_FDX) WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_FULLDUPLEX); else WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_FULLDUPLEX); if (restart) WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_TX_ON|WB_NETCFG_RX_ON); } static void wb_reset(sc) struct wb_softc *sc; { int i; struct mii_data *mii; struct mii_softc *miisc; CSR_WRITE_4(sc, WB_NETCFG, 0); CSR_WRITE_4(sc, WB_BUSCTL, 0); CSR_WRITE_4(sc, WB_TXADDR, 0); CSR_WRITE_4(sc, WB_RXADDR, 0); WB_SETBIT(sc, WB_BUSCTL, WB_BUSCTL_RESET); WB_SETBIT(sc, WB_BUSCTL, WB_BUSCTL_RESET); for (i = 0; i < WB_TIMEOUT; i++) { DELAY(10); if (!(CSR_READ_4(sc, WB_BUSCTL) & WB_BUSCTL_RESET)) break; } if (i == WB_TIMEOUT) device_printf(sc->wb_dev, "reset never completed!\n"); /* Wait a little while for the chip to get its brains in order. */ DELAY(1000); if (sc->wb_miibus == NULL) return; mii = device_get_softc(sc->wb_miibus); LIST_FOREACH(miisc, &mii->mii_phys, mii_list) PHY_RESET(miisc); } static void wb_fixmedia(sc) struct wb_softc *sc; { struct mii_data *mii = NULL; struct ifnet *ifp; u_int32_t media; mii = device_get_softc(sc->wb_miibus); ifp = sc->wb_ifp; mii_pollstat(mii); if (IFM_SUBTYPE(mii->mii_media_active) == IFM_10_T) { media = mii->mii_media_active & ~IFM_10_T; media |= IFM_100_TX; } else if (IFM_SUBTYPE(mii->mii_media_active) == IFM_100_TX) { media = mii->mii_media_active & ~IFM_100_TX; media |= IFM_10_T; } else return; ifmedia_set(&mii->mii_media, media); } /* * Probe for a Winbond chip. Check the PCI vendor and device * IDs against our list and return a device name if we find a match. 
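 * (A short sketch of the idle-wait pattern used above sits in between.)
 */

/*
 * Illustrative sketch -- hypothetical code, not part of this commit: the
 * bounded-poll pattern wb_setcfg() uses above, which must see both the
 * transmit and receive engines idle before the duplex/speed bits may be
 * changed.  The names sketch_wait()/SKETCH_TIMEOUT are invented; the
 * driver open-codes this with CSR_READ_4() of WB_ISR and DELAY(10).
 */
#define SKETCH_TIMEOUT	1000

static int
sketch_wait(int (*idle)(void *), void *arg)
{
	int i;

	for (i = 0; i < SKETCH_TIMEOUT; i++) {
		/* The driver spins with DELAY(10) between polls. */
		if (idle(arg))
			return (0);	/* both engines reached idle */
	}
	return (-1);	/* timed out; wb_setcfg() prints a warning */
}

/*
 * wb_probe() -- check the PCI vendor/device IDs against wb_devs[]: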
*/ static int wb_probe(dev) device_t dev; { const struct wb_type *t; t = wb_devs; while(t->wb_name != NULL) { if ((pci_get_vendor(dev) == t->wb_vid) && (pci_get_device(dev) == t->wb_did)) { device_set_desc(dev, t->wb_name); return (BUS_PROBE_DEFAULT); } t++; } return(ENXIO); } /* * Attach the interface. Allocate softc structures, do ifmedia * setup and ethernet/BPF attach. */ static int wb_attach(dev) device_t dev; { u_char eaddr[ETHER_ADDR_LEN]; struct wb_softc *sc; struct ifnet *ifp; int error = 0, rid; sc = device_get_softc(dev); sc->wb_dev = dev; mtx_init(&sc->wb_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&sc->wb_stat_callout, &sc->wb_mtx, 0); /* * Map control/status registers. */ pci_enable_busmaster(dev); rid = WB_RID; sc->wb_res = bus_alloc_resource_any(dev, WB_RES, &rid, RF_ACTIVE); if (sc->wb_res == NULL) { device_printf(dev, "couldn't map ports/memory\n"); error = ENXIO; goto fail; } /* Allocate interrupt */ rid = 0; sc->wb_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->wb_irq == NULL) { device_printf(dev, "couldn't map interrupt\n"); error = ENXIO; goto fail; } /* Save the cache line size. */ sc->wb_cachesize = pci_read_config(dev, WB_PCI_CACHELEN, 4) & 0xFF; /* Reset the adapter. */ wb_reset(sc); /* * Get station address from the EEPROM. */ wb_read_eeprom(sc, (caddr_t)&eaddr, 0, 3, 0); sc->wb_ldata = contigmalloc(sizeof(struct wb_list_data) + 8, M_DEVBUF, M_NOWAIT, 0, 0xffffffff, PAGE_SIZE, 0); if (sc->wb_ldata == NULL) { device_printf(dev, "no memory for list buffers!\n"); error = ENXIO; goto fail; } bzero(sc->wb_ldata, sizeof(struct wb_list_data)); ifp = sc->wb_ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { device_printf(dev, "can not if_alloc()\n"); error = ENOSPC; goto fail; } ifp->if_softc = sc; if_initname(ifp, device_get_name(dev), device_get_unit(dev)); ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = wb_ioctl; ifp->if_start = wb_start; ifp->if_init = wb_init; ifp->if_snd.ifq_maxlen = WB_TX_LIST_CNT - 1; /* * Do MII setup. */ error = mii_attach(dev, &sc->wb_miibus, ifp, wb_ifmedia_upd, wb_ifmedia_sts, BMSR_DEFCAPMASK, MII_PHY_ANY, MII_OFFSET_ANY, 0); if (error != 0) { device_printf(dev, "attaching PHYs failed\n"); goto fail; } /* * Call MI attach routine. */ ether_ifattach(ifp, eaddr); /* Hook interrupt last to avoid having to lock softc */ error = bus_setup_intr(dev, sc->wb_irq, INTR_TYPE_NET | INTR_MPSAFE, NULL, wb_intr, sc, &sc->wb_intrhand); if (error) { device_printf(dev, "couldn't set up irq\n"); ether_ifdetach(ifp); goto fail; } + gone_by_fcp101_dev(dev); + fail: if (error) wb_detach(dev); return(error); } /* * Shutdown hardware and free up resources. This can be called any * time after the mutex has been initialized. It is called in both * the error case in attach and the normal detach case so it needs * to be careful about only freeing resources that have actually been * allocated. */ static int wb_detach(dev) device_t dev; { struct wb_softc *sc; struct ifnet *ifp; sc = device_get_softc(dev); KASSERT(mtx_initialized(&sc->wb_mtx), ("wb mutex not initialized")); ifp = sc->wb_ifp; /* * Delete any miibus and phy devices attached to this interface. * This should only be done if attach succeeded. 
*/ if (device_is_attached(dev)) { ether_ifdetach(ifp); WB_LOCK(sc); wb_stop(sc); WB_UNLOCK(sc); callout_drain(&sc->wb_stat_callout); } if (sc->wb_miibus) device_delete_child(dev, sc->wb_miibus); bus_generic_detach(dev); if (sc->wb_intrhand) bus_teardown_intr(dev, sc->wb_irq, sc->wb_intrhand); if (sc->wb_irq) bus_release_resource(dev, SYS_RES_IRQ, 0, sc->wb_irq); if (sc->wb_res) bus_release_resource(dev, WB_RES, WB_RID, sc->wb_res); if (ifp) if_free(ifp); if (sc->wb_ldata) { contigfree(sc->wb_ldata, sizeof(struct wb_list_data) + 8, M_DEVBUF); } mtx_destroy(&sc->wb_mtx); return(0); } /* * Initialize the transmit descriptors. */ static int wb_list_tx_init(sc) struct wb_softc *sc; { struct wb_chain_data *cd; struct wb_list_data *ld; int i; cd = &sc->wb_cdata; ld = sc->wb_ldata; for (i = 0; i < WB_TX_LIST_CNT; i++) { cd->wb_tx_chain[i].wb_ptr = &ld->wb_tx_list[i]; if (i == (WB_TX_LIST_CNT - 1)) { cd->wb_tx_chain[i].wb_nextdesc = &cd->wb_tx_chain[0]; } else { cd->wb_tx_chain[i].wb_nextdesc = &cd->wb_tx_chain[i + 1]; } } cd->wb_tx_free = &cd->wb_tx_chain[0]; cd->wb_tx_tail = cd->wb_tx_head = NULL; return(0); } /* * Initialize the RX descriptors and allocate mbufs for them. Note that * we arrange the descriptors in a closed ring, so that the last descriptor * points back to the first. */ static int wb_list_rx_init(sc) struct wb_softc *sc; { struct wb_chain_data *cd; struct wb_list_data *ld; int i; cd = &sc->wb_cdata; ld = sc->wb_ldata; for (i = 0; i < WB_RX_LIST_CNT; i++) { cd->wb_rx_chain[i].wb_ptr = (struct wb_desc *)&ld->wb_rx_list[i]; cd->wb_rx_chain[i].wb_buf = (void *)&ld->wb_rxbufs[i]; if (wb_newbuf(sc, &cd->wb_rx_chain[i], NULL) == ENOBUFS) return(ENOBUFS); if (i == (WB_RX_LIST_CNT - 1)) { cd->wb_rx_chain[i].wb_nextdesc = &cd->wb_rx_chain[0]; ld->wb_rx_list[i].wb_next = vtophys(&ld->wb_rx_list[0]); } else { cd->wb_rx_chain[i].wb_nextdesc = &cd->wb_rx_chain[i + 1]; ld->wb_rx_list[i].wb_next = vtophys(&ld->wb_rx_list[i + 1]); } } cd->wb_rx_head = &cd->wb_rx_chain[0]; return(0); } static void wb_bfree(struct mbuf *m) { } /* * Initialize an RX descriptor and attach an MBUF cluster. */ static int wb_newbuf(sc, c, m) struct wb_softc *sc; struct wb_chain_onefrag *c; struct mbuf *m; { struct mbuf *m_new = NULL; if (m == NULL) { MGETHDR(m_new, M_NOWAIT, MT_DATA); if (m_new == NULL) return(ENOBUFS); m_new->m_pkthdr.len = m_new->m_len = WB_BUFBYTES; m_extadd(m_new, c->wb_buf, WB_BUFBYTES, wb_bfree, NULL, NULL, 0, EXT_NET_DRV); } else { m_new = m; m_new->m_len = m_new->m_pkthdr.len = WB_BUFBYTES; m_new->m_data = m_new->m_ext.ext_buf; } m_adj(m_new, sizeof(u_int64_t)); c->wb_mbuf = m_new; c->wb_ptr->wb_data = vtophys(mtod(m_new, caddr_t)); c->wb_ptr->wb_ctl = WB_RXCTL_RLINK | 1536; c->wb_ptr->wb_status = WB_RXSTAT; return(0); } /* * A frame has been uploaded: pass the resulting mbuf chain up to * the higher level protocols. 
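 * (First, a sketch of the ring layout built by the list-init routines
 * above.)
 */

/*
 * Illustrative sketch -- hypothetical code, not part of this commit: the
 * closed-ring linkage that wb_list_rx_init()/wb_list_tx_init() build
 * above, shorn of mbufs and physical addresses.  The struct and all
 * names are invented for illustration.
 */
#define SKETCH_RING_CNT	4

struct sketch_desc {
	struct sketch_desc	*next;
	int			 idx;
};

static void
sketch_ring_init(struct sketch_desc ring[SKETCH_RING_CNT])
{
	int i;

	for (i = 0; i < SKETCH_RING_CNT; i++) {
		ring[i].idx = i;
		/* The last descriptor wraps to the first, closing the ring. */
		ring[i].next = (i == SKETCH_RING_CNT - 1) ?
		    &ring[0] : &ring[i + 1];
	}
}

/*
 * wb_rxeof() -- pass a received frame up to the higher level protocols: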
*/ static void wb_rxeof(sc) struct wb_softc *sc; { struct mbuf *m = NULL; struct ifnet *ifp; struct wb_chain_onefrag *cur_rx; int total_len = 0; u_int32_t rxstat; WB_LOCK_ASSERT(sc); ifp = sc->wb_ifp; while(!((rxstat = sc->wb_cdata.wb_rx_head->wb_ptr->wb_status) & WB_RXSTAT_OWN)) { struct mbuf *m0 = NULL; cur_rx = sc->wb_cdata.wb_rx_head; sc->wb_cdata.wb_rx_head = cur_rx->wb_nextdesc; m = cur_rx->wb_mbuf; if ((rxstat & WB_RXSTAT_MIIERR) || (WB_RXBYTES(cur_rx->wb_ptr->wb_status) < WB_MIN_FRAMELEN) || (WB_RXBYTES(cur_rx->wb_ptr->wb_status) > 1536) || !(rxstat & WB_RXSTAT_LASTFRAG) || !(rxstat & WB_RXSTAT_RXCMP)) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); wb_newbuf(sc, cur_rx, m); device_printf(sc->wb_dev, "receiver babbling: possible chip bug," " forcing reset\n"); wb_fixmedia(sc); wb_reset(sc); wb_init_locked(sc); return; } if (rxstat & WB_RXSTAT_RXERR) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); wb_newbuf(sc, cur_rx, m); break; } /* No errors; receive the packet. */ total_len = WB_RXBYTES(cur_rx->wb_ptr->wb_status); /* * XXX The Winbond chip includes the CRC with every * received frame, and there's no way to turn this * behavior off (at least, I can't find anything in * the manual that explains how to do it) so we have * to trim off the CRC manually. */ total_len -= ETHER_CRC_LEN; m0 = m_devget(mtod(m, char *), total_len, ETHER_ALIGN, ifp, NULL); wb_newbuf(sc, cur_rx, m); if (m0 == NULL) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); break; } m = m0; if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); WB_UNLOCK(sc); (*ifp->if_input)(ifp, m); WB_LOCK(sc); } } static void wb_rxeoc(sc) struct wb_softc *sc; { wb_rxeof(sc); WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_RX_ON); CSR_WRITE_4(sc, WB_RXADDR, vtophys(&sc->wb_ldata->wb_rx_list[0])); WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_RX_ON); if (CSR_READ_4(sc, WB_ISR) & WB_RXSTATE_SUSPEND) CSR_WRITE_4(sc, WB_RXSTART, 0xFFFFFFFF); } /* * A frame was downloaded to the chip. It's safe for us to clean up * the list buffers. */ static void wb_txeof(sc) struct wb_softc *sc; { struct wb_chain *cur_tx; struct ifnet *ifp; ifp = sc->wb_ifp; /* Clear the timeout timer. */ sc->wb_timer = 0; if (sc->wb_cdata.wb_tx_head == NULL) return; /* * Go through our tx list and free mbufs for those * frames that have been transmitted. */ while(sc->wb_cdata.wb_tx_head->wb_mbuf != NULL) { u_int32_t txstat; cur_tx = sc->wb_cdata.wb_tx_head; txstat = WB_TXSTATUS(cur_tx); if ((txstat & WB_TXSTAT_OWN) || txstat == WB_UNSENT) break; if (txstat & WB_TXSTAT_TXERR) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if (txstat & WB_TXSTAT_ABORT) if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 1); if (txstat & WB_TXSTAT_LATECOLL) if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 1); } if_inc_counter(ifp, IFCOUNTER_COLLISIONS, (txstat & WB_TXSTAT_COLLCNT) >> 3); if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1); m_freem(cur_tx->wb_mbuf); cur_tx->wb_mbuf = NULL; if (sc->wb_cdata.wb_tx_head == sc->wb_cdata.wb_tx_tail) { sc->wb_cdata.wb_tx_head = NULL; sc->wb_cdata.wb_tx_tail = NULL; break; } sc->wb_cdata.wb_tx_head = cur_tx->wb_nextdesc; } } /* * TX 'end of channel' interrupt handler. 
*/ static void wb_txeoc(sc) struct wb_softc *sc; { struct ifnet *ifp; ifp = sc->wb_ifp; sc->wb_timer = 0; if (sc->wb_cdata.wb_tx_head == NULL) { ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; sc->wb_cdata.wb_tx_tail = NULL; } else { if (WB_TXOWN(sc->wb_cdata.wb_tx_head) == WB_UNSENT) { WB_TXOWN(sc->wb_cdata.wb_tx_head) = WB_TXSTAT_OWN; sc->wb_timer = 5; CSR_WRITE_4(sc, WB_TXSTART, 0xFFFFFFFF); } } } static void wb_intr(arg) void *arg; { struct wb_softc *sc; struct ifnet *ifp; u_int32_t status; sc = arg; WB_LOCK(sc); ifp = sc->wb_ifp; if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) { WB_UNLOCK(sc); return; } /* Disable interrupts. */ CSR_WRITE_4(sc, WB_IMR, 0x00000000); for (;;) { status = CSR_READ_4(sc, WB_ISR); if (status) CSR_WRITE_4(sc, WB_ISR, status); if ((status & WB_INTRS) == 0) break; if ((status & WB_ISR_RX_NOBUF) || (status & WB_ISR_RX_ERR)) { if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); wb_reset(sc); if (status & WB_ISR_RX_ERR) wb_fixmedia(sc); wb_init_locked(sc); continue; } if (status & WB_ISR_RX_OK) wb_rxeof(sc); if (status & WB_ISR_RX_IDLE) wb_rxeoc(sc); if (status & WB_ISR_TX_OK) wb_txeof(sc); if (status & WB_ISR_TX_NOBUF) wb_txeoc(sc); if (status & WB_ISR_TX_IDLE) { wb_txeof(sc); if (sc->wb_cdata.wb_tx_head != NULL) { WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_TX_ON); CSR_WRITE_4(sc, WB_TXSTART, 0xFFFFFFFF); } } if (status & WB_ISR_TX_UNDERRUN) { if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); wb_txeof(sc); WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_TX_ON); /* Jack up TX threshold */ sc->wb_txthresh += WB_TXTHRESH_CHUNK; WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_TX_THRESH); WB_SETBIT(sc, WB_NETCFG, WB_TXTHRESH(sc->wb_txthresh)); WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_TX_ON); } if (status & WB_ISR_BUS_ERR) { wb_reset(sc); wb_init_locked(sc); } } /* Re-enable interrupts. */ CSR_WRITE_4(sc, WB_IMR, WB_INTRS); if (ifp->if_snd.ifq_head != NULL) { wb_start_locked(ifp); } WB_UNLOCK(sc); } static void wb_tick(xsc) void *xsc; { struct wb_softc *sc; struct mii_data *mii; sc = xsc; WB_LOCK_ASSERT(sc); mii = device_get_softc(sc->wb_miibus); mii_tick(mii); if (sc->wb_timer > 0 && --sc->wb_timer == 0) wb_watchdog(sc); callout_reset(&sc->wb_stat_callout, hz, wb_tick, sc); } /* * Encapsulate an mbuf chain in a descriptor by coupling the mbuf data * pointers to the fragment pointers. */ static int wb_encap(sc, c, m_head) struct wb_softc *sc; struct wb_chain *c; struct mbuf *m_head; { int frag = 0; struct wb_desc *f = NULL; int total_len; struct mbuf *m; /* * Start packing the mbufs in this chain into * the fragment pointers. Stop when we run out * of fragments or hit the end of the mbuf chain. */ m = m_head; total_len = 0; for (m = m_head, frag = 0; m != NULL; m = m->m_next) { if (m->m_len != 0) { if (frag == WB_MAXFRAGS) break; total_len += m->m_len; f = &c->wb_ptr->wb_frag[frag]; f->wb_ctl = WB_TXCTL_TLINK | m->m_len; if (frag == 0) { f->wb_ctl |= WB_TXCTL_FIRSTFRAG; f->wb_status = 0; } else f->wb_status = WB_TXSTAT_OWN; f->wb_next = vtophys(&c->wb_ptr->wb_frag[frag + 1]); f->wb_data = vtophys(mtod(m, vm_offset_t)); frag++; } } /* * Handle special case: we used up all 16 fragments, * but we have more mbufs left in the chain. Copy the * data into an mbuf cluster. Note that we don't * bother clearing the values in the other fragment * pointers/counters; it wouldn't gain us anything, * and would waste cycles. 
*/ if (m != NULL) { struct mbuf *m_new = NULL; MGETHDR(m_new, M_NOWAIT, MT_DATA); if (m_new == NULL) return(1); if (m_head->m_pkthdr.len > MHLEN) { if (!(MCLGET(m_new, M_NOWAIT))) { m_freem(m_new); return(1); } } m_copydata(m_head, 0, m_head->m_pkthdr.len, mtod(m_new, caddr_t)); m_new->m_pkthdr.len = m_new->m_len = m_head->m_pkthdr.len; m_freem(m_head); m_head = m_new; f = &c->wb_ptr->wb_frag[0]; f->wb_status = 0; f->wb_data = vtophys(mtod(m_new, caddr_t)); f->wb_ctl = total_len = m_new->m_len; f->wb_ctl |= WB_TXCTL_TLINK|WB_TXCTL_FIRSTFRAG; frag = 1; } if (total_len < WB_MIN_FRAMELEN) { f = &c->wb_ptr->wb_frag[frag]; f->wb_ctl = WB_MIN_FRAMELEN - total_len; f->wb_data = vtophys(&sc->wb_cdata.wb_pad); f->wb_ctl |= WB_TXCTL_TLINK; f->wb_status = WB_TXSTAT_OWN; frag++; } c->wb_mbuf = m_head; c->wb_lastdesc = frag - 1; WB_TXCTL(c) |= WB_TXCTL_LASTFRAG; WB_TXNEXT(c) = vtophys(&c->wb_nextdesc->wb_ptr->wb_frag[0]); return(0); } /* * Main transmit routine. To avoid having to do mbuf copies, we put pointers * to the mbuf data regions directly in the transmit lists. We also save a * copy of the pointers since the transmit list fragment pointers are * physical addresses. */ static void wb_start(ifp) struct ifnet *ifp; { struct wb_softc *sc; sc = ifp->if_softc; WB_LOCK(sc); wb_start_locked(ifp); WB_UNLOCK(sc); } static void wb_start_locked(ifp) struct ifnet *ifp; { struct wb_softc *sc; struct mbuf *m_head = NULL; struct wb_chain *cur_tx = NULL, *start_tx; sc = ifp->if_softc; WB_LOCK_ASSERT(sc); /* * Check for an available queue slot. If there are none, * punt. */ if (sc->wb_cdata.wb_tx_free->wb_mbuf != NULL) { ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } start_tx = sc->wb_cdata.wb_tx_free; while(sc->wb_cdata.wb_tx_free->wb_mbuf == NULL) { IF_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; /* Pick a descriptor off the free list. */ cur_tx = sc->wb_cdata.wb_tx_free; sc->wb_cdata.wb_tx_free = cur_tx->wb_nextdesc; /* Pack the data into the descriptor. */ wb_encap(sc, cur_tx, m_head); if (cur_tx != start_tx) WB_TXOWN(cur_tx) = WB_TXSTAT_OWN; /* * If there's a BPF listener, bounce a copy of this frame * to him. */ BPF_MTAP(ifp, cur_tx->wb_mbuf); } /* * If there are no packets queued, bail. */ if (cur_tx == NULL) return; /* * Place the request for the upload interrupt * in the last descriptor in the chain. This way, if * we're chaining several packets at once, we'll only * get an interrupt once for the whole chain rather than * once for each packet. */ WB_TXCTL(cur_tx) |= WB_TXCTL_FINT; cur_tx->wb_ptr->wb_frag[0].wb_ctl |= WB_TXCTL_FINT; sc->wb_cdata.wb_tx_tail = cur_tx; if (sc->wb_cdata.wb_tx_head == NULL) { sc->wb_cdata.wb_tx_head = start_tx; WB_TXOWN(start_tx) = WB_TXSTAT_OWN; CSR_WRITE_4(sc, WB_TXSTART, 0xFFFFFFFF); } else { /* * We need to distinguish between the case where * the own bit is clear because the chip cleared it * and where the own bit is clear because we haven't * set it yet. The magic value WB_UNSENT is just some * randomly chosen number which doesn't have the own * bit set. When we actually transmit the frame, the * status word will have _only_ the own bit set, so * the txeoc handler will be able to tell if it needs * to initiate another transmission to flush out pending * frames. */ WB_TXOWN(start_tx) = WB_UNSENT; } /* * Set a timeout in case the chip goes out to lunch.
*/ sc->wb_timer = 5; } static void wb_init(xsc) void *xsc; { struct wb_softc *sc = xsc; WB_LOCK(sc); wb_init_locked(sc); WB_UNLOCK(sc); } static void wb_init_locked(sc) struct wb_softc *sc; { struct ifnet *ifp = sc->wb_ifp; int i; struct mii_data *mii; WB_LOCK_ASSERT(sc); mii = device_get_softc(sc->wb_miibus); /* * Cancel pending I/O and free all RX/TX buffers. */ wb_stop(sc); wb_reset(sc); sc->wb_txthresh = WB_TXTHRESH_INIT; /* * Set cache alignment and burst length. */ #ifdef foo CSR_WRITE_4(sc, WB_BUSCTL, WB_BUSCTL_CONFIG); WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_TX_THRESH); WB_SETBIT(sc, WB_NETCFG, WB_TXTHRESH(sc->wb_txthresh)); #endif CSR_WRITE_4(sc, WB_BUSCTL, WB_BUSCTL_MUSTBEONE|WB_BUSCTL_ARBITRATION); WB_SETBIT(sc, WB_BUSCTL, WB_BURSTLEN_16LONG); switch(sc->wb_cachesize) { case 32: WB_SETBIT(sc, WB_BUSCTL, WB_CACHEALIGN_32LONG); break; case 16: WB_SETBIT(sc, WB_BUSCTL, WB_CACHEALIGN_16LONG); break; case 8: WB_SETBIT(sc, WB_BUSCTL, WB_CACHEALIGN_8LONG); break; case 0: default: WB_SETBIT(sc, WB_BUSCTL, WB_CACHEALIGN_NONE); break; } /* This doesn't tend to work too well at 100Mbps. */ WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_TX_EARLY_ON); /* Init our MAC address */ for (i = 0; i < ETHER_ADDR_LEN; i++) { CSR_WRITE_1(sc, WB_NODE0 + i, IF_LLADDR(sc->wb_ifp)[i]); } /* Init circular RX list. */ if (wb_list_rx_init(sc) == ENOBUFS) { device_printf(sc->wb_dev, "initialization failed: no memory for rx buffers\n"); wb_stop(sc); return; } /* Init TX descriptors. */ wb_list_tx_init(sc); /* If we want promiscuous mode, set the allframes bit. */ if (ifp->if_flags & IFF_PROMISC) { WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_RX_ALLPHYS); } else { WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_RX_ALLPHYS); } /* * Set capture broadcast bit to capture broadcast frames. */ if (ifp->if_flags & IFF_BROADCAST) { WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_RX_BROAD); } else { WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_RX_BROAD); } /* * Program the multicast filter, if necessary. */ wb_setmulti(sc); /* * Load the address of the RX list. */ WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_RX_ON); CSR_WRITE_4(sc, WB_RXADDR, vtophys(&sc->wb_ldata->wb_rx_list[0])); /* * Enable interrupts. */ CSR_WRITE_4(sc, WB_IMR, WB_INTRS); CSR_WRITE_4(sc, WB_ISR, 0xFFFFFFFF); /* Enable receiver and transmitter. */ WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_RX_ON); CSR_WRITE_4(sc, WB_RXSTART, 0xFFFFFFFF); WB_CLRBIT(sc, WB_NETCFG, WB_NETCFG_TX_ON); CSR_WRITE_4(sc, WB_TXADDR, vtophys(&sc->wb_ldata->wb_tx_list[0])); WB_SETBIT(sc, WB_NETCFG, WB_NETCFG_TX_ON); mii_mediachg(mii); ifp->if_drv_flags |= IFF_DRV_RUNNING; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&sc->wb_stat_callout, hz, wb_tick, sc); } /* * Set media options. */ static int wb_ifmedia_upd(ifp) struct ifnet *ifp; { struct wb_softc *sc; sc = ifp->if_softc; WB_LOCK(sc); if (ifp->if_flags & IFF_UP) wb_init_locked(sc); WB_UNLOCK(sc); return(0); } /* * Report current media status. 
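 * (A sketch of the cache-line mapping from wb_init_locked() precedes it.)
 */

/*
 * Illustrative sketch -- hypothetical code, not part of this commit: the
 * cachesize-to-alignment mapping wb_init_locked() applies above.  The PCI
 * cache line size register counts 32-bit words, hence the "LONG" suffixes;
 * the enum values stand in for the real WB_CACHEALIGN_* register bits.
 */
enum sketch_align {
	SKETCH_ALIGN_NONE,	/* unknown size: no alignment hint */
	SKETCH_ALIGN_8LONG,
	SKETCH_ALIGN_16LONG,
	SKETCH_ALIGN_32LONG
};

static enum sketch_align
sketch_cachealign(int cachesize)
{
	switch (cachesize) {
	case 32:
		return (SKETCH_ALIGN_32LONG);
	case 16:
		return (SKETCH_ALIGN_16LONG);
	case 8:
		return (SKETCH_ALIGN_8LONG);
	case 0:
	default:
		return (SKETCH_ALIGN_NONE);
	}
}

/*
 * wb_ifmedia_sts() -- report current media status: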
*/ static void wb_ifmedia_sts(ifp, ifmr) struct ifnet *ifp; struct ifmediareq *ifmr; { struct wb_softc *sc; struct mii_data *mii; sc = ifp->if_softc; WB_LOCK(sc); mii = device_get_softc(sc->wb_miibus); mii_pollstat(mii); ifmr->ifm_active = mii->mii_media_active; ifmr->ifm_status = mii->mii_media_status; WB_UNLOCK(sc); } static int wb_ioctl(ifp, command, data) struct ifnet *ifp; u_long command; caddr_t data; { struct wb_softc *sc = ifp->if_softc; struct mii_data *mii; struct ifreq *ifr = (struct ifreq *) data; int error = 0; switch(command) { case SIOCSIFFLAGS: WB_LOCK(sc); if (ifp->if_flags & IFF_UP) { wb_init_locked(sc); } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) wb_stop(sc); } WB_UNLOCK(sc); error = 0; break; case SIOCADDMULTI: case SIOCDELMULTI: WB_LOCK(sc); wb_setmulti(sc); WB_UNLOCK(sc); error = 0; break; case SIOCGIFMEDIA: case SIOCSIFMEDIA: mii = device_get_softc(sc->wb_miibus); error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command); break; default: error = ether_ioctl(ifp, command, data); break; } return(error); } static void wb_watchdog(sc) struct wb_softc *sc; { struct ifnet *ifp; WB_LOCK_ASSERT(sc); ifp = sc->wb_ifp; if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); if_printf(ifp, "watchdog timeout\n"); #ifdef foo if (!(wb_phy_readreg(sc, PHY_BMSR) & PHY_BMSR_LINKSTAT)) if_printf(ifp, "no carrier - transceiver cable problem?\n"); #endif wb_stop(sc); wb_reset(sc); wb_init_locked(sc); if (ifp->if_snd.ifq_head != NULL) wb_start_locked(ifp); } /* * Stop the adapter and free any mbufs allocated to the * RX and TX lists. */ static void wb_stop(sc) struct wb_softc *sc; { int i; struct ifnet *ifp; WB_LOCK_ASSERT(sc); ifp = sc->wb_ifp; sc->wb_timer = 0; callout_stop(&sc->wb_stat_callout); WB_CLRBIT(sc, WB_NETCFG, (WB_NETCFG_RX_ON|WB_NETCFG_TX_ON)); CSR_WRITE_4(sc, WB_IMR, 0x00000000); CSR_WRITE_4(sc, WB_TXADDR, 0x00000000); CSR_WRITE_4(sc, WB_RXADDR, 0x00000000); /* * Free data in the RX lists. */ for (i = 0; i < WB_RX_LIST_CNT; i++) { if (sc->wb_cdata.wb_rx_chain[i].wb_mbuf != NULL) { m_freem(sc->wb_cdata.wb_rx_chain[i].wb_mbuf); sc->wb_cdata.wb_rx_chain[i].wb_mbuf = NULL; } } bzero((char *)&sc->wb_ldata->wb_rx_list, sizeof(sc->wb_ldata->wb_rx_list)); /* * Free the TX list buffers. */ for (i = 0; i < WB_TX_LIST_CNT; i++) { if (sc->wb_cdata.wb_tx_chain[i].wb_mbuf != NULL) { m_freem(sc->wb_cdata.wb_tx_chain[i].wb_mbuf); sc->wb_cdata.wb_tx_chain[i].wb_mbuf = NULL; } } bzero((char *)&sc->wb_ldata->wb_tx_list, sizeof(sc->wb_ldata->wb_tx_list)); ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); } /* * Stop all chip I/O so that the kernel's probe routines don't * get confused by errant DMAs when rebooting. */ static int wb_shutdown(dev) device_t dev; { struct wb_softc *sc; sc = device_get_softc(dev); WB_LOCK(sc); wb_stop(sc); WB_UNLOCK(sc); return (0); } Index: stable/12/sys/dev/xe/if_xe.c =================================================================== --- stable/12/sys/dev/xe/if_xe.c (revision 339734) +++ stable/12/sys/dev/xe/if_xe.c (revision 339735) @@ -1,2076 +1,2078 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD AND BSD-3-Clause * * Copyright (c) 1998, 1999, 2003 Scott Mitchell * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /*- * Portions of this software were derived from Werner Koch's xirc2ps driver * for Linux under the terms of the following license (from v1.30 of the * xirc2ps driver): * * Copyright (c) 1997 by Werner Koch (dd9jn) * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, and the entire permission notice in its entirety, * including the disclaimer of warranties. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote * products derived from this software without specific prior * written permission. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * FreeBSD device driver for Xircom CreditCard PCMCIA Ethernet adapters. The * following cards are currently known to work with the driver: * Xircom CreditCard 10/100 (CE3) * Xircom CreditCard Ethernet + Modem 28 (CEM28) * Xircom CreditCard Ethernet 10/100 + Modem 56 (CEM56) * Xircom RealPort Ethernet 10 * Xircom RealPort Ethernet 10/100 * Xircom RealPort Ethernet 10/100 + Modem 56 (REM56, REM56G) * Intel EtherExpress Pro/100 PC Card Mobile Adapter 16 (Pro/100 M16A) * Compaq Netelligent 10/100 PC Card (CPQ-10/100) * * Some other cards *should* work, but support for them is either broken or in * an unknown state at the moment. I'm always interested in hearing from * people who own any of these cards: * Xircom CreditCard 10Base-T (PS-CE2-10) * Xircom CreditCard Ethernet + ModemII (CEM2) * Xircom CEM28 and CEM33 Ethernet/Modem cards (may be variants of CEM2?) 
* * Thanks to all who assisted with the development and testing of the driver, * especially: Werner Koch, Duke Kamstra, Duncan Barclay, Jason George, Dru * Nelson, Mike Kephart, Bill Rainey and Douglas Rand. Apologies if I've left * out anyone who deserves a mention here. * * Special thanks to Ade Lovett for both hosting the mailing list and doing * the CEM56/REM56 support code; and the FreeBSD UK Users' Group for hosting * the web pages. * * Author email: * Driver web page: http://ukug.uk.freebsd.org/~scott/xe_drv/ */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* * MII command structure */ struct xe_mii_frame { uint8_t mii_stdelim; uint8_t mii_opcode; uint8_t mii_phyaddr; uint8_t mii_regaddr; uint8_t mii_turnaround; uint16_t mii_data; }; /* * Media autonegotiation progress constants */ #define XE_AUTONEG_NONE 0 /* No autonegotiation in progress */ #define XE_AUTONEG_WAITING 1 /* Waiting for transmitter to go idle */ #define XE_AUTONEG_STARTED 2 /* Waiting for autonegotiation to complete */ #define XE_AUTONEG_100TX 3 /* Trying to force 100baseTX link */ #define XE_AUTONEG_FAIL 4 /* Autonegotiation failed */ /* * Prototypes start here */ static void xe_init(void *xscp); static void xe_init_locked(struct xe_softc *scp); static void xe_start(struct ifnet *ifp); static void xe_start_locked(struct ifnet *ifp); static int xe_ioctl(struct ifnet *ifp, u_long command, caddr_t data); static void xe_watchdog(void *arg); static void xe_intr(void *xscp); static void xe_txintr(struct xe_softc *scp, uint8_t txst1); static void xe_macintr(struct xe_softc *scp, uint8_t rst0, uint8_t txst0, uint8_t txst1); static void xe_rxintr(struct xe_softc *scp, uint8_t rst0); static int xe_media_change(struct ifnet *ifp); static void xe_media_status(struct ifnet *ifp, struct ifmediareq *mrp); static void xe_setmedia(void *arg); static void xe_reset(struct xe_softc *scp); static void xe_enable_intr(struct xe_softc *scp); static void xe_disable_intr(struct xe_softc *scp); static void xe_set_multicast(struct xe_softc *scp); static void xe_set_addr(struct xe_softc *scp, uint8_t* addr, unsigned idx); static void xe_mchash(struct xe_softc *scp, const uint8_t *addr); static int xe_pio_write_packet(struct xe_softc *scp, struct mbuf *mbp); /* * MII functions */ static void xe_mii_sync(struct xe_softc *scp); static int xe_mii_init(struct xe_softc *scp); static void xe_mii_send(struct xe_softc *scp, uint32_t bits, int cnt); static int xe_mii_readreg(struct xe_softc *scp, struct xe_mii_frame *frame); static int xe_mii_writereg(struct xe_softc *scp, struct xe_mii_frame *frame); static uint16_t xe_phy_readreg(struct xe_softc *scp, uint16_t reg); static void xe_phy_writereg(struct xe_softc *scp, uint16_t reg, uint16_t data); /* * Debugging functions */ static void xe_mii_dump(struct xe_softc *scp); #if 0 static void xe_reg_dump(struct xe_softc *scp); #endif /* * Debug logging levels - set with hw.xe.debug sysctl * 0 = None * 1 = More hardware details, probe/attach progress * 2 = Most function calls, ioctls and media selection progress * 3 = Everything - interrupts, packets in/out and multicast address setup */ #define XE_DEBUG #ifdef XE_DEBUG /* sysctl vars */ static SYSCTL_NODE(_hw, OID_AUTO, xe, CTLFLAG_RD, 0, "if_xe parameters"); int xe_debug = 0; SYSCTL_INT(_hw_xe, OID_AUTO, debug, CTLFLAG_RW, &xe_debug, 0, "if_xe debug level"); 
#define DEVPRINTF(level, arg) if (xe_debug >= (level)) device_printf arg #define DPRINTF(level, arg) if (xe_debug >= (level)) printf arg #define XE_MII_DUMP(scp) if (xe_debug >= 3) xe_mii_dump(scp) #if 0 #define XE_REG_DUMP(scp) if (xe_debug >= 3) xe_reg_dump(scp) #endif #else #define DEVPRINTF(level, arg) #define DPRINTF(level, arg) #define XE_MII_DUMP(scp) #if 0 #define XE_REG_DUMP(scp) #endif #endif /* * Attach a device. */ int xe_attach(device_t dev) { struct xe_softc *scp = device_get_softc(dev); int err; DEVPRINTF(2, (dev, "attach\n")); /* Initialise stuff... */ scp->dev = dev; scp->ifp = if_alloc(IFT_ETHER); if (scp->ifp == NULL) return (ENOSPC); scp->ifm = &scp->ifmedia; scp->autoneg_status = XE_AUTONEG_NONE; mtx_init(&scp->lock, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&scp->wdog_timer, &scp->lock, 0); /* Initialise the ifnet structure */ scp->ifp->if_softc = scp; if_initname(scp->ifp, device_get_name(dev), device_get_unit(dev)); scp->ifp->if_flags = (IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST); scp->ifp->if_linkmib = &scp->mibdata; scp->ifp->if_linkmiblen = sizeof(scp->mibdata); scp->ifp->if_start = xe_start; scp->ifp->if_ioctl = xe_ioctl; scp->ifp->if_init = xe_init; scp->ifp->if_baudrate = 100000000; IFQ_SET_MAXLEN(&scp->ifp->if_snd, ifqmaxlen); /* Initialise the ifmedia structure */ ifmedia_init(scp->ifm, 0, xe_media_change, xe_media_status); callout_init_mtx(&scp->media_timer, &scp->lock, 0); /* Add supported media types */ if (scp->mohawk) { ifmedia_add(scp->ifm, IFM_ETHER|IFM_100_TX, 0, NULL); ifmedia_add(scp->ifm, IFM_ETHER|IFM_10_T|IFM_FDX, 0, NULL); ifmedia_add(scp->ifm, IFM_ETHER|IFM_10_T|IFM_HDX, 0, NULL); } ifmedia_add(scp->ifm, IFM_ETHER|IFM_10_T, 0, NULL); if (scp->ce2) ifmedia_add(scp->ifm, IFM_ETHER|IFM_10_2, 0, NULL); ifmedia_add(scp->ifm, IFM_ETHER|IFM_AUTO, 0, NULL); /* Default is to autoselect best supported media type */ ifmedia_set(scp->ifm, IFM_ETHER|IFM_AUTO); /* Get the hardware into a known state */ XE_LOCK(scp); xe_reset(scp); XE_UNLOCK(scp); /* Get hardware version numbers */ XE_SELECT_PAGE(4); scp->version = XE_INB(XE_BOV); if (scp->mohawk) scp->srev = (XE_INB(XE_BOV) & 0x70) >> 4; else scp->srev = (XE_INB(XE_BOV) & 0x30) >> 4; /* Print some useful information */ device_printf(dev, "version 0x%02x/0x%02x%s%s\n", scp->version, scp->srev, scp->mohawk ? ", 100Mbps capable" : "", scp->modem ? ", with modem" : ""); if (scp->mohawk) { XE_SELECT_PAGE(0x10); DEVPRINTF(1, (dev, "DingoID=0x%04x, RevisionID=0x%04x, VendorID=0x%04x\n", XE_INW(XE_DINGOID), XE_INW(XE_RevID), XE_INW(XE_VendorID))); } if (scp->ce2) { XE_SELECT_PAGE(0x45); DEVPRINTF(1, (dev, "CE2 version = 0x%02x\n", XE_INB(XE_REV))); } /* Attach the interface */ ether_ifattach(scp->ifp, scp->enaddr); err = bus_setup_intr(dev, scp->irq_res, INTR_TYPE_NET | INTR_MPSAFE, NULL, xe_intr, scp, &scp->intrhand); if (err) { ether_ifdetach(scp->ifp); mtx_destroy(&scp->lock); return (err); } + gone_by_fcp101_dev(dev); + /* Done */ return (0); } /* * Complete hardware initialisation and enable output. Exits without doing * anything if there's no address assigned to the card, or if media selection * is in progress (the latter implies we've already run this function).
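 * (A note on the debug macros above precedes the code.)
 */

/*
 * Worked example -- hypothetical call, not part of this commit: what the
 * DEVPRINTF()/DPRINTF() macros above expand to.  With hw.xe.debug set to
 * 2 (via loader.conf(5) or sysctl(8)), a call such as
 *
 *	DEVPRINTF(2, (dev, "attach\n"));
 *
 * expands to
 *
 *	if (xe_debug >= (2)) device_printf (dev, "attach\n");
 *
 * i.e. the extra parentheses at the call site carry a whole argument
 * list through a single macro parameter.
 */

/*
 * xe_init() -- complete hardware initialisation and enable output: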
*/ static void xe_init(void *xscp) { struct xe_softc *scp = xscp; XE_LOCK(scp); xe_init_locked(scp); XE_UNLOCK(scp); } static void xe_init_locked(struct xe_softc *scp) { unsigned i; if (scp->autoneg_status != XE_AUTONEG_NONE) return; DEVPRINTF(2, (scp->dev, "init\n")); /* Reset transmitter flags */ scp->tx_queued = 0; scp->tx_tpr = 0; scp->tx_timeouts = 0; scp->tx_thres = 64; scp->tx_min = ETHER_MIN_LEN - ETHER_CRC_LEN; scp->tx_timeout = 0; /* Soft reset the card */ XE_SELECT_PAGE(0); XE_OUTB(XE_CR, XE_CR_SOFT_RESET); DELAY(40000); XE_OUTB(XE_CR, 0); DELAY(40000); if (scp->mohawk) { /* * set GP1 and GP2 as outputs (bits 2 & 3) * set GP1 low to power on the ML6692 (bit 0) * set GP2 high to power on the 10Mhz chip (bit 1) */ XE_SELECT_PAGE(4); XE_OUTB(XE_GPR0, XE_GPR0_GP2_SELECT | XE_GPR0_GP1_SELECT | XE_GPR0_GP2_OUT); } /* Shut off interrupts */ xe_disable_intr(scp); /* Wait for everything to wake up */ DELAY(500000); /* Check for PHY */ if (scp->mohawk) scp->phy_ok = xe_mii_init(scp); /* Disable 'source insertion' (not sure what that means) */ XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC0, XE_SWC0_NO_SRC_INSERT); /* Set 8K/24K Tx/Rx buffer split */ if (scp->srev != 1) { XE_SELECT_PAGE(2); XE_OUTW(XE_RBS, 0x2000); } /* Enable early transmit mode on Mohawk/Dingo */ if (scp->mohawk) { XE_SELECT_PAGE(0x03); XE_OUTW(XE_TPT, scp->tx_thres); XE_SELECT_PAGE(0x01); XE_OUTB(XE_ECR, XE_INB(XE_ECR) | XE_ECR_EARLY_TX); } /* Put MAC address in first 'individual address' register */ XE_SELECT_PAGE(0x50); for (i = 0; i < ETHER_ADDR_LEN; i++) XE_OUTB(0x08 + i, IF_LLADDR(scp->ifp)[scp->mohawk ? 5 - i : i]); /* Set up multicast addresses */ xe_set_multicast(scp); /* Fix the receive data offset -- reset can leave it off-by-one */ XE_SELECT_PAGE(0); XE_OUTW(XE_DO, 0x2000); /* Set interrupt masks */ XE_SELECT_PAGE(1); XE_OUTB(XE_IMR0, XE_IMR0_TX_PACKET | XE_IMR0_MAC_INTR | XE_IMR0_RX_PACKET); /* Set MAC interrupt masks */ XE_SELECT_PAGE(0x40); XE_OUTB(XE_RX0Msk, ~(XE_RX0M_RX_OVERRUN | XE_RX0M_CRC_ERROR | XE_RX0M_ALIGN_ERROR | XE_RX0M_LONG_PACKET)); XE_OUTB(XE_TX0Msk, ~(XE_TX0M_SQE_FAIL | XE_TX0M_LATE_COLLISION | XE_TX0M_TX_UNDERRUN | XE_TX0M_16_COLLISIONS | XE_TX0M_NO_CARRIER)); /* Clear MAC status registers */ XE_SELECT_PAGE(0x40); XE_OUTB(XE_RST0, 0x00); XE_OUTB(XE_TXST0, 0x00); /* Enable receiver and put MAC online */ XE_SELECT_PAGE(0x40); XE_OUTB(XE_CMD0, XE_CMD0_RX_ENABLE|XE_CMD0_ONLINE); /* Set up IMR, enable interrupts */ xe_enable_intr(scp); /* Start media selection */ xe_setmedia(scp); /* Enable output */ scp->ifp->if_drv_flags |= IFF_DRV_RUNNING; scp->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; callout_reset(&scp->wdog_timer, hz, xe_watchdog, scp); } /* * Start output on interface. Should be called at splimp() priority. Check * that the output is idle (ie, IFF_DRV_OACTIVE is not set) before calling this * function. If media selection is in progress we set IFF_DRV_OACTIVE ourselves * and return immediately. */ static void xe_start(struct ifnet *ifp) { struct xe_softc *scp = ifp->if_softc; XE_LOCK(scp); xe_start_locked(ifp); XE_UNLOCK(scp); } static void xe_start_locked(struct ifnet *ifp) { struct xe_softc *scp = ifp->if_softc; struct mbuf *mbp; if (scp->autoneg_status != XE_AUTONEG_NONE) { ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } DEVPRINTF(3, (scp->dev, "start\n")); /* * Loop while there are packets to be sent, and space to send * them. 
*/ for (;;) { /* Suck a packet off the send queue */ IF_DEQUEUE(&ifp->if_snd, mbp); if (mbp == NULL) { /* * We are using the !OACTIVE flag to indicate * to the outside world that we can accept an * additional packet rather than that the * transmitter is _actually_ active. Indeed, * the transmitter may be active, but if we * haven't filled all the buffers with data * then we still want to accept more. */ ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; return; } if (xe_pio_write_packet(scp, mbp) != 0) { /* Push the packet back onto the queue */ IF_PREPEND(&ifp->if_snd, mbp); ifp->if_drv_flags |= IFF_DRV_OACTIVE; return; } /* Tap off here if there is a bpf listener */ BPF_MTAP(ifp, mbp); /* In case we don't hear from the card again... */ scp->tx_timeout = 5; scp->tx_queued++; m_freem(mbp); } } /* * Process an ioctl request. Adapted from the ed driver. */ static int xe_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct xe_softc *scp; int error; scp = ifp->if_softc; error = 0; switch (command) { case SIOCSIFFLAGS: DEVPRINTF(2, (scp->dev, "ioctl: SIOCSIFFLAGS: 0x%04x\n", ifp->if_flags)); /* * If the interface is marked up and stopped, then * start it. If it is marked down and running, then * stop it. */ XE_LOCK(scp); if (ifp->if_flags & IFF_UP) { if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) { xe_reset(scp); xe_init_locked(scp); } } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) xe_stop(scp); } /* handle changes to PROMISC/ALLMULTI flags */ xe_set_multicast(scp); XE_UNLOCK(scp); error = 0; break; case SIOCADDMULTI: case SIOCDELMULTI: DEVPRINTF(2, (scp->dev, "ioctl: SIOC{ADD,DEL}MULTI\n")); /* * Multicast list has (maybe) changed; set the * hardware filters accordingly. */ XE_LOCK(scp); xe_set_multicast(scp); XE_UNLOCK(scp); error = 0; break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: DEVPRINTF(3, (scp->dev, "ioctl: bounce to ifmedia_ioctl\n")); /* * Someone wants to get/set media options. */ error = ifmedia_ioctl(ifp, (struct ifreq *)data, &scp->ifmedia, command); break; default: DEVPRINTF(3, (scp->dev, "ioctl: bounce to ether_ioctl\n")); error = ether_ioctl(ifp, command, data); } return (error); } /* * Card interrupt handler. * * This function is probably more complicated than it needs to be, as it * attempts to deal with the case where multiple packets get sent between * interrupts. This is especially annoying when working out the collision * stats. Not sure whether this case ever really happens or not (maybe on a * slow/heavily loaded machine?) so it's probably best to leave this like it * is. * * Note that the crappy PIO used to get packets on and off the card means that * you will spend a lot of time in this routine -- I can get my P150 to spend * 90% of its time servicing interrupts if I really hammer the network. Could * fix this, but then you'd start dropping/losing packets. The moral of this * story? If you want good network performance _and_ some cycles left over to * get your work done, don't buy a Xircom card. 
Or convince them to tell me * how to do memory-mapped I/O :) */ static void xe_txintr(struct xe_softc *scp, uint8_t txst1) { struct ifnet *ifp; uint8_t tpr, sent, coll; ifp = scp->ifp; /* Update packet count, accounting for rollover */ tpr = XE_INB(XE_TPR); sent = -scp->tx_tpr + tpr; /* Update statistics if we actually sent anything */ if (sent > 0) { coll = txst1 & XE_TXST1_RETRY_COUNT; scp->tx_tpr = tpr; scp->tx_queued -= sent; if_inc_counter(ifp, IFCOUNTER_OPACKETS, sent); if_inc_counter(ifp, IFCOUNTER_COLLISIONS, coll); /* * According to the Xircom manual, Dingo will * sometimes manage to transmit a packet without * triggering an interrupt. If this happens, we have * sent > 1 and the collision count only reflects * collisions on the last packet sent (the one that * triggered the interrupt). Collision stats might * therefore be a bit low, but there doesn't seem to * be anything we can do about that. */ switch (coll) { case 0: break; case 1: scp->mibdata.dot3StatsSingleCollisionFrames++; scp->mibdata.dot3StatsCollFrequencies[0]++; break; default: scp->mibdata.dot3StatsMultipleCollisionFrames++; scp->mibdata.dot3StatsCollFrequencies[coll-1]++; } } scp->tx_timeout = 0; ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; } /* Handle most MAC interrupts */ static void xe_macintr(struct xe_softc *scp, uint8_t rst0, uint8_t txst0, uint8_t txst1) { struct ifnet *ifp; ifp = scp->ifp; #if 0 /* Carrier sense lost -- only in 10Mbit HDX mode */ if (txst0 & XE_TXST0_NO_CARRIER || !(txst1 & XE_TXST1_LINK_STATUS)) { /* XXX - Need to update media status here */ device_printf(scp->dev, "no carrier\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); scp->mibdata.dot3StatsCarrierSenseErrors++; } #endif /* Excessive collisions -- try sending again */ if (txst0 & XE_TXST0_16_COLLISIONS) { if_inc_counter(ifp, IFCOUNTER_COLLISIONS, 16); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); scp->mibdata.dot3StatsExcessiveCollisions++; scp->mibdata.dot3StatsMultipleCollisionFrames++; scp->mibdata.dot3StatsCollFrequencies[15]++; XE_OUTB(XE_CR, XE_CR_RESTART_TX); } /* Transmit underrun -- increase early transmit threshold */ if (txst0 & XE_TXST0_TX_UNDERRUN && scp->mohawk) { DEVPRINTF(1, (scp->dev, "transmit underrun")); if (scp->tx_thres < ETHER_MAX_LEN) { if ((scp->tx_thres += 64) > ETHER_MAX_LEN) scp->tx_thres = ETHER_MAX_LEN; DPRINTF(1, (": increasing transmit threshold to %u", scp->tx_thres)); XE_SELECT_PAGE(0x3); XE_OUTW(XE_TPT, scp->tx_thres); XE_SELECT_PAGE(0x0); } DPRINTF(1, ("\n")); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); scp->mibdata.dot3StatsInternalMacTransmitErrors++; } /* Late collision -- just complain about it */ if (txst0 & XE_TXST0_LATE_COLLISION) { device_printf(scp->dev, "late collision\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); scp->mibdata.dot3StatsLateCollisions++; } /* SQE test failure -- just complain about it */ if (txst0 & XE_TXST0_SQE_FAIL) { device_printf(scp->dev, "SQE test failure\n"); if_inc_counter(ifp, IFCOUNTER_OERRORS, 1); scp->mibdata.dot3StatsSQETestErrors++; } /* Packet too long -- what happens to these */ if (rst0 & XE_RST0_LONG_PACKET) { device_printf(scp->dev, "received giant packet\n"); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); scp->mibdata.dot3StatsFrameTooLongs++; } /* CRC error -- packet dropped */ if (rst0 & XE_RST0_CRC_ERROR) { device_printf(scp->dev, "CRC error\n"); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); scp->mibdata.dot3StatsFCSErrors++; } } static void xe_rxintr(struct xe_softc *scp, uint8_t rst0) { struct ifnet *ifp; uint8_t esr, rsr; ifp = scp->ifp; /* Handle received packet(s)
*/ while ((esr = XE_INB(XE_ESR)) & XE_ESR_FULL_PACKET_RX) { rsr = XE_INB(XE_RSR); DEVPRINTF(3, (scp->dev, "intr: ESR=0x%02x, RSR=0x%02x\n", esr, rsr)); /* Make sure packet is a good one */ if (rsr & XE_RSR_RX_OK) { struct ether_header *ehp; struct mbuf *mbp; uint16_t len; len = XE_INW(XE_RBC) - ETHER_CRC_LEN; DEVPRINTF(3, (scp->dev, "intr: receive length = %d\n", len)); if (len == 0) { if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); continue; } /* * Allocate mbuf to hold received packet. If * the mbuf header isn't big enough, we attach * an mbuf cluster to hold the packet. Note * the +=2 to align the packet data on a * 32-bit boundary, and the +3 to allow for * the possibility of reading one more byte * than the actual packet length (we always * read 16-bit words). XXX - Surely there's a * better way to do this alignment? */ MGETHDR(mbp, M_NOWAIT, MT_DATA); if (mbp == NULL) { if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); continue; } if (len + 3 > MHLEN) { if (!(MCLGET(mbp, M_NOWAIT))) { m_freem(mbp); if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1); continue; } } mbp->m_data += 2; ehp = mtod(mbp, struct ether_header *); /* * Now get the packet in PIO mode, including * the Ethernet header but omitting the * trailing CRC. */ /* * Work around a bug in CE2 cards. There * seems to be a problem with duplicated and * extraneous bytes in the receive buffer, but * without any real documentation for the CE2 * it's hard to tell for sure. XXX - Needs * testing on CE2 hardware */ if (scp->srev == 0) { u_short rhs; XE_SELECT_PAGE(5); rhs = XE_INW(XE_RHSA); XE_SELECT_PAGE(0); rhs += 3; /* Skip control info */ if (rhs >= 0x8000) rhs = 0; if (rhs + len > 0x8000) { int i; for (i = 0; i < len; i++, rhs++) { ((char *)ehp)[i] = XE_INB(XE_EDP); if (rhs == 0x8000) { rhs = 0; i--; } } } else bus_read_multi_2(scp->port_res, XE_EDP, (uint16_t *)ehp, (len + 1) >> 1); } else bus_read_multi_2(scp->port_res, XE_EDP, (uint16_t *)ehp, (len + 1) >> 1); /* Deliver packet to upper layers */ mbp->m_pkthdr.rcvif = ifp; mbp->m_pkthdr.len = mbp->m_len = len; XE_UNLOCK(scp); (*ifp->if_input)(ifp, mbp); XE_LOCK(scp); if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1); } else if (rsr & XE_RSR_ALIGN_ERROR) { /* Packet alignment error -- drop packet */ device_printf(scp->dev, "alignment error\n"); scp->mibdata.dot3StatsAlignmentErrors++; if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); } /* Skip to next packet, if there is one */ XE_OUTW(XE_DO, 0x8000); } /* Clear receiver overruns now we have some free buffer space */ if (rst0 & XE_RST0_RX_OVERRUN) { DEVPRINTF(1, (scp->dev, "receive overrun\n")); if_inc_counter(ifp, IFCOUNTER_IERRORS, 1); scp->mibdata.dot3StatsInternalMacReceiveErrors++; XE_OUTB(XE_CR, XE_CR_CLEAR_OVERRUN); } } static void xe_intr(void *xscp) { struct xe_softc *scp = (struct xe_softc *) xscp; struct ifnet *ifp; uint8_t psr, isr, rst0, txst0, txst1; ifp = scp->ifp; XE_LOCK(scp); /* Disable interrupts */ if (scp->mohawk) XE_OUTB(XE_CR, 0); /* Cache current register page */ psr = XE_INB(XE_PR); /* Read ISR to see what caused this interrupt */ while ((isr = XE_INB(XE_ISR)) != 0) { /* 0xff might mean the card is no longer around */ if (isr == 0xff) { DEVPRINTF(3, (scp->dev, "intr: interrupt received for missing card?\n")); break; } /* Read other status registers */ XE_SELECT_PAGE(0x40); rst0 = XE_INB(XE_RST0); XE_OUTB(XE_RST0, 0); txst0 = XE_INB(XE_TXST0); txst1 = XE_INB(XE_TXST1); XE_OUTB(XE_TXST0, 0); XE_OUTB(XE_TXST1, 0); XE_SELECT_PAGE(0); DEVPRINTF(3, (scp->dev, "intr: ISR=0x%02x, RST=0x%02x, TXT=0x%02x%02x\n", isr, rst0, txst1, txst0)); if 
(isr & XE_ISR_TX_PACKET) xe_txintr(scp, txst1); if (isr & XE_ISR_MAC_INTR) xe_macintr(scp, rst0, txst0, txst1); xe_rxintr(scp, rst0); } /* Restore saved page */ XE_SELECT_PAGE(psr); /* Re-enable interrupts */ XE_OUTB(XE_CR, XE_CR_ENABLE_INTR); XE_UNLOCK(scp); } /* * Device timeout/watchdog routine. Called automatically if we queue a packet * for transmission but don't get an interrupt within a specified timeout * (usually 5 seconds). When this happens we assume the worst and reset the * card. */ static void xe_watchdog(void *arg) { struct xe_softc *scp = arg; XE_ASSERT_LOCKED(scp); if (scp->tx_timeout && --scp->tx_timeout == 0) { device_printf(scp->dev, "watchdog timeout: resetting card\n"); scp->tx_timeouts++; if_inc_counter(scp->ifp, IFCOUNTER_OERRORS, scp->tx_queued); xe_stop(scp); xe_reset(scp); xe_init_locked(scp); } callout_reset(&scp->wdog_timer, hz, xe_watchdog, scp); } /* * Change media selection. */ static int xe_media_change(struct ifnet *ifp) { struct xe_softc *scp = ifp->if_softc; DEVPRINTF(2, (scp->dev, "media_change\n")); XE_LOCK(scp); if (IFM_TYPE(scp->ifm->ifm_media) != IFM_ETHER) { XE_UNLOCK(scp); return(EINVAL); } /* * Some card/media combos aren't always possible -- filter * those out here. */ if ((IFM_SUBTYPE(scp->ifm->ifm_media) == IFM_AUTO || IFM_SUBTYPE(scp->ifm->ifm_media) == IFM_100_TX) && !scp->phy_ok) { XE_UNLOCK(scp); return (EINVAL); } xe_setmedia(scp); XE_UNLOCK(scp); return (0); } /* * Return current media selection. */ static void xe_media_status(struct ifnet *ifp, struct ifmediareq *mrp) { struct xe_softc *scp = ifp->if_softc; DEVPRINTF(3, (scp->dev, "media_status\n")); /* XXX - This is clearly wrong. Will fix once I have CE2 working */ XE_LOCK(scp); mrp->ifm_status = IFM_AVALID | IFM_ACTIVE; mrp->ifm_active = ((struct xe_softc *)ifp->if_softc)->media; XE_UNLOCK(scp); } /* * Select active media. */ static void xe_setmedia(void *xscp) { struct xe_softc *scp = xscp; uint16_t bmcr, bmsr, anar, lpar; DEVPRINTF(2, (scp->dev, "setmedia\n")); XE_ASSERT_LOCKED(scp); /* Cancel any pending timeout */ callout_stop(&scp->media_timer); xe_disable_intr(scp); /* Select media */ scp->media = IFM_ETHER; switch (IFM_SUBTYPE(scp->ifm->ifm_media)) { case IFM_AUTO: /* Autoselect media */ scp->media = IFM_ETHER|IFM_AUTO; /* * Autoselection is really awful. It goes something like this: * * Wait until the transmitter goes idle (2sec timeout). 
* Reset card * IF a 100Mbit PHY exists * Start NWAY autonegotiation (3.5sec timeout) * IF that succeeds * Select 100baseTX or 10baseT, whichever was detected * ELSE * Reset card * IF a 100Mbit PHY exists * Try to force a 100baseTX link (3sec timeout) * IF that succeeds * Select 100baseTX * ELSE * Disable the PHY * ENDIF * ENDIF * ENDIF * ENDIF * IF nothing selected so far * IF a 100Mbit PHY exists * Select 10baseT * ELSE * Select 10baseT or 10base2, whichever is connected * ENDIF * ENDIF */ switch (scp->autoneg_status) { case XE_AUTONEG_NONE: DEVPRINTF(2, (scp->dev, "Waiting for idle transmitter\n")); scp->ifp->if_drv_flags |= IFF_DRV_OACTIVE; scp->autoneg_status = XE_AUTONEG_WAITING; /* FALL THROUGH */ case XE_AUTONEG_WAITING: if (scp->tx_queued != 0) { callout_reset(&scp->media_timer, hz / 2, xe_setmedia, scp); return; } if (scp->phy_ok) { DEVPRINTF(2, (scp->dev, "Starting autonegotiation\n")); bmcr = xe_phy_readreg(scp, PHY_BMCR); bmcr &= ~(PHY_BMCR_AUTONEGENBL); xe_phy_writereg(scp, PHY_BMCR, bmcr); anar = xe_phy_readreg(scp, PHY_ANAR); anar &= ~(PHY_ANAR_100BT4 | PHY_ANAR_100BTXFULL | PHY_ANAR_10BTFULL); anar |= PHY_ANAR_100BTXHALF | PHY_ANAR_10BTHALF; xe_phy_writereg(scp, PHY_ANAR, anar); bmcr |= PHY_BMCR_AUTONEGENBL | PHY_BMCR_AUTONEGRSTR; xe_phy_writereg(scp, PHY_BMCR, bmcr); scp->autoneg_status = XE_AUTONEG_STARTED; callout_reset(&scp->media_timer, hz * 7/2, xe_setmedia, scp); return; } else { scp->autoneg_status = XE_AUTONEG_FAIL; } break; case XE_AUTONEG_STARTED: bmsr = xe_phy_readreg(scp, PHY_BMSR); lpar = xe_phy_readreg(scp, PHY_LPAR); if (bmsr & (PHY_BMSR_AUTONEGCOMP | PHY_BMSR_LINKSTAT)) { DEVPRINTF(2, (scp->dev, "Autonegotiation complete!\n")); /* * XXX - Shouldn't have to do this, * but (on my hub at least) the * transmitter won't work after a * successful autoneg. So we see what * the negotiation result was and * force that mode. I'm sure there is * an easy fix for this. */ if (lpar & PHY_LPAR_100BTXHALF) { xe_phy_writereg(scp, PHY_BMCR, PHY_BMCR_SPEEDSEL); XE_MII_DUMP(scp); XE_SELECT_PAGE(2); XE_OUTB(XE_MSR, XE_INB(XE_MSR) | 0x08); scp->media = IFM_ETHER | IFM_100_TX; scp->autoneg_status = XE_AUTONEG_NONE; } else { /* * XXX - Bit of a hack going * on in here. This is * derived from Ken Hughes * patch to the Linux driver * to make it work with 10Mbit * _autonegotiated_ links on * CE3B cards. What's a CE3B * and how's it differ from a * plain CE3? these are the * things we need to find out. */ xe_phy_writereg(scp, PHY_BMCR, 0x0000); XE_SELECT_PAGE(2); /* BEGIN HACK */ XE_OUTB(XE_MSR, XE_INB(XE_MSR) | 0x08); XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC1, 0x80); scp->media = IFM_ETHER | IFM_10_T; scp->autoneg_status = XE_AUTONEG_NONE; /* END HACK */ #if 0 /* Display PHY? 
*/ XE_OUTB(XE_MSR, XE_INB(XE_MSR) & ~0x08); scp->autoneg_status = XE_AUTONEG_FAIL; #endif } } else { DEVPRINTF(2, (scp->dev, "Autonegotiation failed; trying 100baseTX\n")); XE_MII_DUMP(scp); if (scp->phy_ok) { xe_phy_writereg(scp, PHY_BMCR, PHY_BMCR_SPEEDSEL); scp->autoneg_status = XE_AUTONEG_100TX; callout_reset(&scp->media_timer, hz * 3, xe_setmedia, scp); return; } else { scp->autoneg_status = XE_AUTONEG_FAIL; } } break; case XE_AUTONEG_100TX: (void)xe_phy_readreg(scp, PHY_BMSR); bmsr = xe_phy_readreg(scp, PHY_BMSR); if (bmsr & PHY_BMSR_LINKSTAT) { DEVPRINTF(2, (scp->dev, "Got 100baseTX link!\n")); XE_MII_DUMP(scp); XE_SELECT_PAGE(2); XE_OUTB(XE_MSR, XE_INB(XE_MSR) | 0x08); scp->media = IFM_ETHER | IFM_100_TX; scp->autoneg_status = XE_AUTONEG_NONE; } else { DEVPRINTF(2, (scp->dev, "Autonegotiation failed; disabling PHY\n")); XE_MII_DUMP(scp); xe_phy_writereg(scp, PHY_BMCR, 0x0000); XE_SELECT_PAGE(2); /* Disable PHY? */ XE_OUTB(XE_MSR, XE_INB(XE_MSR) & ~0x08); scp->autoneg_status = XE_AUTONEG_FAIL; } break; } /* * If we got down here _and_ autoneg_status is * XE_AUTONEG_FAIL, then either autonegotiation * failed, or never got started to begin with. In * either case, select a suitable 10Mbit media and * hope it works. We don't need to reset the card * again, since it will have been done already by the * big switch above. */ if (scp->autoneg_status == XE_AUTONEG_FAIL) { DEVPRINTF(2, (scp->dev, "Selecting 10baseX\n")); if (scp->mohawk) { XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC1, 0x80); scp->media = IFM_ETHER | IFM_10_T; scp->autoneg_status = XE_AUTONEG_NONE; } else { XE_SELECT_PAGE(4); XE_OUTB(XE_GPR0, 4); DELAY(50000); XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC1, (XE_INB(XE_ESR) & XE_ESR_MEDIA_SELECT) ? 0x80 : 0xc0); scp->media = IFM_ETHER | ((XE_INB(XE_ESR) & XE_ESR_MEDIA_SELECT) ? IFM_10_T : IFM_10_2); scp->autoneg_status = XE_AUTONEG_NONE; } } break; /* * If a specific media has been requested, we just reset the * card and select it (one small exception -- if 100baseTX is * requested but there is no PHY, we fall back to 10baseT * operation). */ case IFM_100_TX: /* Force 100baseTX */ if (scp->phy_ok) { DEVPRINTF(2, (scp->dev, "Selecting 100baseTX\n")); XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC1, 0); xe_phy_writereg(scp, PHY_BMCR, PHY_BMCR_SPEEDSEL); XE_SELECT_PAGE(2); XE_OUTB(XE_MSR, XE_INB(XE_MSR) | 0x08); scp->media |= IFM_100_TX; break; } /* FALLTHROUGH */ case IFM_10_T: /* Force 10baseT */ DEVPRINTF(2, (scp->dev, "Selecting 10baseT\n")); if (scp->phy_ok) { xe_phy_writereg(scp, PHY_BMCR, 0x0000); XE_SELECT_PAGE(2); /* Disable PHY */ XE_OUTB(XE_MSR, XE_INB(XE_MSR) & ~0x08); } XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC1, 0x80); scp->media |= IFM_10_T; break; case IFM_10_2: DEVPRINTF(2, (scp->dev, "Selecting 10base2\n")); XE_SELECT_PAGE(0x42); XE_OUTB(XE_SWC1, 0xc0); scp->media |= IFM_10_2; break; } /* * Finally, the LEDs are set to match whatever media was * chosen and the transmitter is unblocked. */ DEVPRINTF(2, (scp->dev, "Setting LEDs\n")); XE_SELECT_PAGE(2); switch (IFM_SUBTYPE(scp->media)) { case IFM_100_TX: case IFM_10_T: XE_OUTB(XE_LED, 0x3b); if (scp->dingo) XE_OUTB(0x0b, 0x04); /* 100Mbit LED */ break; case IFM_10_2: XE_OUTB(XE_LED, 0x3a); break; } /* Restart output? */ xe_enable_intr(scp); scp->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE; xe_start_locked(scp->ifp); } /* * Hard reset (power cycle) the card. 
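 * (A sketch of the autonegotiation state machine above precedes it.)
 */

/*
 * Illustrative sketch -- hypothetical code, not part of this commit: the
 * shape of the callout-driven autonegotiation in xe_setmedia() above,
 * simplified (the PHY-less branches are omitted).  Each pass either
 * advances the state or re-arms a timer and returns; the names and enum
 * are invented, and the timeouts noted mirror the hz/2, hz * 7/2 and
 * hz * 3 values used above.
 */
enum sketch_state {
	SKETCH_NONE,		/* XE_AUTONEG_NONE */
	SKETCH_WAITING,		/* XE_AUTONEG_WAITING */
	SKETCH_STARTED,		/* XE_AUTONEG_STARTED */
	SKETCH_100TX,		/* XE_AUTONEG_100TX */
	SKETCH_FAIL		/* XE_AUTONEG_FAIL */
};

static enum sketch_state
sketch_autoneg_step(enum sketch_state state, int tx_idle, int nway_done,
    int link_up)
{
	switch (state) {
	case SKETCH_NONE:
		/* Block transmit, then wait for the transmitter to drain. */
		return (SKETCH_WAITING);
	case SKETCH_WAITING:
		if (!tx_idle)
			return (SKETCH_WAITING);	/* re-arm hz/2 */
		/* Kick off NWAY autonegotiation, arm hz * 7/2 timeout. */
		return (SKETCH_STARTED);
	case SKETCH_STARTED:
		if (nway_done)
			return (SKETCH_NONE);	/* select negotiated media */
		/* Try to force 100baseTX instead, arm hz * 3 timeout. */
		return (SKETCH_100TX);
	case SKETCH_100TX:
		return (link_up ? SKETCH_NONE : SKETCH_FAIL);
	default:
		return (SKETCH_FAIL);	/* fall back to a 10Mbit medium */
	}
}

/*
 * xe_reset() -- hard reset (power cycle) the card: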
/*
 * Hard reset (power cycle) the card.
 */
static void
xe_reset(struct xe_softc *scp)
{

	DEVPRINTF(2, (scp->dev, "reset\n"));

	XE_ASSERT_LOCKED(scp);

	/* Power down */
	XE_SELECT_PAGE(4);
	XE_OUTB(XE_GPR1, 0);
	DELAY(40000);

	/* Power up again */
	if (scp->mohawk)
		XE_OUTB(XE_GPR1, XE_GPR1_POWER_DOWN);
	else
		XE_OUTB(XE_GPR1, XE_GPR1_POWER_DOWN | XE_GPR1_AIC);
	DELAY(40000);

	XE_SELECT_PAGE(0);
}

/*
 * Take interface offline.  This is done by powering down the device, which I
 * assume means just shutting down the transceiver and Ethernet logic.  This
 * requires a _hard_ reset to recover from, as we need to power up again.
 */
void
xe_stop(struct xe_softc *scp)
{

	DEVPRINTF(2, (scp->dev, "stop\n"));

	XE_ASSERT_LOCKED(scp);

	/*
	 * Shut off interrupts.
	 */
	xe_disable_intr(scp);

	/*
	 * Power down.
	 */
	XE_SELECT_PAGE(4);
	XE_OUTB(XE_GPR1, 0);
	XE_SELECT_PAGE(0);
	if (scp->mohawk) {
		/*
		 * Set GP1 and GP2 as outputs (bits 2 & 3).
		 * Set GP1 high to power on the ML6692 (bit 0).
		 * Set GP2 low to power on the 10MHz chip (bit 1).
		 */
		XE_SELECT_PAGE(4);
		XE_OUTB(XE_GPR0, XE_GPR0_GP2_SELECT | XE_GPR0_GP1_SELECT |
		    XE_GPR0_GP1_OUT);
	}

	/*
	 * ~IFF_DRV_RUNNING == interface down.
	 */
	scp->ifp->if_drv_flags &= ~IFF_DRV_RUNNING;
	scp->ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
	scp->tx_timeout = 0;
	callout_stop(&scp->wdog_timer);
	callout_stop(&scp->media_timer);
}

/*
 * Enable interrupts from the card.
 */
static void
xe_enable_intr(struct xe_softc *scp)
{

	DEVPRINTF(2, (scp->dev, "enable_intr\n"));

	XE_SELECT_PAGE(0);
	XE_OUTB(XE_CR, XE_CR_ENABLE_INTR);	/* Enable interrupts */
	if (scp->modem && !scp->dingo) {	/* This bit is just magic */
		if (!(XE_INB(0x10) & 0x01)) {
			XE_OUTB(0x10, 0x11);	/* Unmask master int enable */
		}
	}
}

/*
 * Disable interrupts from the card.
 */
static void
xe_disable_intr(struct xe_softc *scp)
{

	DEVPRINTF(2, (scp->dev, "disable_intr\n"));

	XE_SELECT_PAGE(0);
	XE_OUTB(XE_CR, 0);			/* Disable interrupts */
	if (scp->modem && !scp->dingo) {	/* More magic */
		XE_OUTB(0x10, 0x10);		/* Mask the master int enable */
	}
}

/*
 * Set up multicast filter and promiscuous modes.
 */
static void
xe_set_multicast(struct xe_softc *scp)
{
	struct ifnet *ifp;
	struct ifmultiaddr *maddr;
	unsigned count, i;

	DEVPRINTF(2, (scp->dev, "set_multicast\n"));

	ifp = scp->ifp;
	XE_SELECT_PAGE(0x42);

	/* Handle PROMISC flag */
	if (ifp->if_flags & IFF_PROMISC) {
		XE_OUTB(XE_SWC1, XE_INB(XE_SWC1) | XE_SWC1_PROMISCUOUS);
		return;
	} else
		XE_OUTB(XE_SWC1, XE_INB(XE_SWC1) & ~XE_SWC1_PROMISCUOUS);

	/* Handle ALLMULTI flag */
	if (ifp->if_flags & IFF_ALLMULTI) {
		XE_OUTB(XE_SWC1, XE_INB(XE_SWC1) | XE_SWC1_ALLMULTI);
		return;
	} else
		XE_OUTB(XE_SWC1, XE_INB(XE_SWC1) & ~XE_SWC1_ALLMULTI);

	/* Iterate over multicast address list */
	count = 0;
	if_maddr_rlock(ifp);
	CK_STAILQ_FOREACH(maddr, &ifp->if_multiaddrs, ifma_link) {
		if (maddr->ifma_addr->sa_family != AF_LINK)
			continue;

		count++;

		if (count < 10)
			/*
			 * First 9 use Individual Addresses for exact
			 * matching.
			 */
			xe_set_addr(scp,
			    LLADDR((struct sockaddr_dl *)maddr->ifma_addr),
			    count);
		else if (scp->mohawk)
			/* Use hash filter on Mohawk and Dingo */
			xe_mchash(scp,
			    LLADDR((struct sockaddr_dl *)maddr->ifma_addr));
		else
			/* Nowhere else to put them on CE2 */
			break;
	}
	if_maddr_runlock(ifp);

	DEVPRINTF(2, (scp->dev, "set_multicast: count = %u\n", count));

	/* Now do some cleanup and enable multicast handling as needed */
	if (count == 0) {
		/* Disable all multicast handling */
		XE_SELECT_PAGE(0x42);
		XE_OUTB(XE_SWC1, XE_INB(XE_SWC1) &
		    ~(XE_SWC1_IA_ENABLE | XE_SWC1_ALLMULTI));
		if (scp->mohawk) {
			XE_SELECT_PAGE(0x02);
			XE_OUTB(XE_MSR, XE_INB(XE_MSR) & ~XE_MSR_HASH_TABLE);
		}
	} else if (count < 10) {
		/*
		 * Fill in any unused Individual Addresses with our
		 * MAC address.
		 */
		for (i = count + 1; i < 10; i++)
			xe_set_addr(scp, IF_LLADDR(scp->ifp), i);

		/* Enable Individual Address matching only */
		XE_SELECT_PAGE(0x42);
		XE_OUTB(XE_SWC1, (XE_INB(XE_SWC1) & ~XE_SWC1_ALLMULTI) |
		    XE_SWC1_IA_ENABLE);
		if (scp->mohawk) {
			XE_SELECT_PAGE(0x02);
			XE_OUTB(XE_MSR, XE_INB(XE_MSR) & ~XE_MSR_HASH_TABLE);
		}
	} else if (scp->mohawk) {
		/* Check whether hash table is full */
		XE_SELECT_PAGE(0x58);
		for (i = 0x08; i < 0x10; i++)
			if (XE_INB(i) != 0xff)
				break;
		if (i == 0x10) {
			/*
			 * Hash table full - enable promiscuous multicast
			 * matching
			 */
			XE_SELECT_PAGE(0x42);
			XE_OUTB(XE_SWC1, (XE_INB(XE_SWC1) &
			    ~XE_SWC1_IA_ENABLE) | XE_SWC1_ALLMULTI);
			XE_SELECT_PAGE(0x02);
			XE_OUTB(XE_MSR, XE_INB(XE_MSR) & ~XE_MSR_HASH_TABLE);
		} else {
			/* Enable hash table and Individual Address matching */
			XE_SELECT_PAGE(0x42);
			XE_OUTB(XE_SWC1, (XE_INB(XE_SWC1) &
			    ~XE_SWC1_ALLMULTI) | XE_SWC1_IA_ENABLE);
			XE_SELECT_PAGE(0x02);
			XE_OUTB(XE_MSR, XE_INB(XE_MSR) | XE_MSR_HASH_TABLE);
		}
	} else {
		/* Enable promiscuous multicast matching */
		XE_SELECT_PAGE(0x42);
		XE_OUTB(XE_SWC1, (XE_INB(XE_SWC1) & ~XE_SWC1_IA_ENABLE) |
		    XE_SWC1_ALLMULTI);
	}

	XE_SELECT_PAGE(0);
}

/*
 * Copy the Ethernet multicast address in addr to the on-chip registers for
 * Individual Address idx.  Assumes that addr is really a multicast address
 * and that idx > 0 (slot 0 is always used for the card MAC address).
 */
static void
xe_set_addr(struct xe_softc *scp, uint8_t *addr, unsigned idx)
{
	uint8_t page, reg;
	unsigned i;

	/*
	 * Individual Addresses are stored in registers 8-F of pages
	 * 0x50-0x57.  IA1 therefore starts at register 0xE on page 0x50.
	 * The expressions below compute the starting page and register for
	 * any IA index > 0.
	 */
	--idx;
	page = 0x50 + idx % 4 + idx / 4 * 3;
	reg = 0x0e - 2 * (idx % 4);
	DEVPRINTF(3, (scp->dev,
	    "set_addr: idx = %u, page = 0x%02x, reg = 0x%02x\n", idx + 1,
	    page, reg));

	/*
	 * Copy the IA bytes.  Note that the byte order is reversed for
	 * Mohawk and Dingo wrt. CE2 hardware.
	 */
	XE_SELECT_PAGE(page);
	for (i = 0; i < ETHER_ADDR_LEN; i++) {
		if (i > 0) {
			DPRINTF(3, (":%02x", addr[i]));
		} else {
			DEVPRINTF(3, (scp->dev, "set_addr: %02x", addr[0]));
		}
		XE_OUTB(reg, addr[scp->mohawk ? 5 - i : i]);
		if (++reg == 0x10) {
			reg = 0x08;
			XE_SELECT_PAGE(++page);
		}
	}
	DPRINTF(3, ("\n"));
}
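/*
 * Editor's note: a worked example of the layout computed above.  Each IA is
 * 6 bytes but a page holds only 8 IA registers (0x08-0x0f), so four slots
 * consume exactly 24 bytes = three pages, hence the "idx / 4 * 3" term, and
 * most slots straddle a page boundary (the copy loop wraps accordingly).
 * A hypothetical sketch that prints each slot's starting page/register:
 */
#if 0
	unsigned n, i;

	for (n = 1; n <= 9; n++) {
		i = n - 1;
		printf("IA%u starts at page 0x%02x, reg 0x%02x\n", n,
		    0x50 + i % 4 + i / 4 * 3, 0x0e - 2 * (i % 4));
	}
	/*
	 * Output: IA1: 0x50/0x0e, IA2: 0x51/0x0c, IA3: 0x52/0x0a,
	 * IA4: 0x53/0x08, IA5: 0x53/0x0e, ..., IA9: 0x56/0x0e.
	 */
#endif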
/*
 * Set the appropriate bit in the multicast hash table for the supplied
 * Ethernet multicast address addr.  Assumes that addr is really a multicast
 * address.
 */
static void
xe_mchash(struct xe_softc *scp, const uint8_t *addr)
{
	int bit;
	uint8_t byte, hash;

	hash = ether_crc32_le(addr, ETHER_ADDR_LEN) & 0x3F;

	/*
	 * Top 3 bits of hash give register - 8, bottom 3 give bit within
	 * register.
	 */
	byte = hash >> 3 | 0x08;
	bit = 0x01 << (hash & 0x07);

	DEVPRINTF(3, (scp->dev,
	    "set_hash: hash = 0x%02x, byte = 0x%02x, bit = 0x%02x\n", hash,
	    byte, bit));

	XE_SELECT_PAGE(0x58);
	XE_OUTB(byte, XE_INB(byte) | bit);
}
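/*
 * Editor's note: a self-contained sketch of the same mapping, assuming
 * ether_crc32_le() is the usual bit-reflected CRC-32 (polynomial
 * 0xedb88320, initial value 0xffffffff, no final XOR); the sketch_* names
 * are hypothetical.  The low 6 CRC bits select one of the 64 filter bits
 * spread over registers 0x08-0x0f of page 0x58.
 */
#if 0
static uint32_t
sketch_crc32_le(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xffffffff;
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (bit = 0; bit < 8; bit++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xedb88320 : crc >> 1;
	}
	return (crc);
}

static void
sketch_hash_position(const uint8_t *addr, uint8_t *regp, uint8_t *bitp)
{
	uint8_t hash = sketch_crc32_le(addr, ETHER_ADDR_LEN) & 0x3f;

	*regp = 0x08 + (hash >> 3);	/* register within page 0x58 */
	*bitp = 1 << (hash & 0x07);	/* bit within that register */
}
#endif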
/*
 * Write an outgoing packet to the card using programmed I/O.
 */
static int
xe_pio_write_packet(struct xe_softc *scp, struct mbuf *mbp)
{
	unsigned len, pad;
	unsigned char wantbyte;
	uint8_t *data;
	uint8_t savebyte[2];

	/* Get total packet length */
	if (mbp->m_flags & M_PKTHDR)
		len = mbp->m_pkthdr.len;
	else {
		struct mbuf *mbp2 = mbp;

		for (len = 0; mbp2 != NULL;
		    len += mbp2->m_len, mbp2 = mbp2->m_next)
			;
	}

	DEVPRINTF(3, (scp->dev, "pio_write_packet: len = %u\n", len));

	/* Packets < minimum length may need to be padded out */
	pad = 0;
	if (len < scp->tx_min) {
		pad = scp->tx_min - len;
		len = scp->tx_min;
	}

	/* Check transmit buffer space */
	XE_SELECT_PAGE(0);
	XE_OUTW(XE_TRS, len + 2);	/* Only effective on rev. 1 CE2 cards */
	if ((XE_INW(XE_TSO) & 0x7fff) <= len + 2)
		return (1);

	/* Send packet length to card */
	XE_OUTW(XE_EDP, len);

	/*
	 * Write packet to card using PIO (code stolen from the ed driver)
	 */
	wantbyte = 0;
	while (mbp != NULL) {
		len = mbp->m_len;
		if (len > 0) {
			data = mtod(mbp, caddr_t);
			if (wantbyte) {		/* Finish the last word */
				savebyte[1] = *data;
				XE_OUTW(XE_EDP, *(u_short *)savebyte);
				data++;
				len--;
				wantbyte = 0;
			}
			if (len > 1) {		/* Output contiguous words */
				bus_write_multi_2(scp->port_res, XE_EDP,
				    (uint16_t *)data, len >> 1);
				data += len & ~1;
				len &= 1;
			}
			if (len == 1) {		/* Save last byte, if needed */
				savebyte[0] = *data;
				wantbyte = 1;
			}
		}
		mbp = mbp->m_next;
	}

	/*
	 * Send last byte of odd-length packets
	 */
	if (wantbyte)
		XE_OUTB(XE_EDP, savebyte[0]);

	/*
	 * Can just tell CE3 cards to send; short packets will be padded out
	 * with random cruft automatically.  For CE2, manually pad the
	 * packet with garbage; it will be sent when the required number of
	 * bytes have been delivered to the card.
	 */
	if (scp->mohawk)
		XE_OUTB(XE_CR, XE_CR_TX_PACKET | XE_CR_RESTART_TX |
		    XE_CR_ENABLE_INTR);
	else if (pad > 0) {
		if (pad & 0x01)
			XE_OUTB(XE_EDP, 0xaa);
		pad >>= 1;
		while (pad > 0) {
			XE_OUTW(XE_EDP, 0xdead);
			pad--;
		}
	}

	return (0);
}
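/*
 * Editor's note: a hedged, self-contained sketch of the odd-byte carry used
 * above.  A chain of byte buffers is emitted as 16-bit little-endian words;
 * a dangling byte is held back until the next buffer supplies its other
 * half, and is flushed on its own at the end.  emit_word() and emit_byte()
 * are hypothetical stand-ins for the XE_OUTW/XE_OUTB data-port writes.
 */
#if 0
static void
pio_copy_sketch(const uint8_t **bufs, const size_t *lens, int nbufs)
{
	uint16_t carry = 0;
	int have_carry = 0, b;

	for (b = 0; b < nbufs; b++) {
		const uint8_t *p = bufs[b];
		size_t n = lens[b];

		if (have_carry && n > 0) {	/* finish the last word */
			emit_word(carry | (uint16_t)*p << 8);
			p++, n--;
			have_carry = 0;
		}
		for (; n > 1; n -= 2, p += 2)	/* contiguous words */
			emit_word(p[0] | (uint16_t)p[1] << 8);
		if (n == 1) {			/* save the odd byte */
			carry = *p;
			have_carry = 1;
		}
	}
	if (have_carry)				/* odd-length data */
		emit_byte((uint8_t)carry);
}
#endif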
/**************************************************************
 *                                                            *
 *                  M I I  F U N C T I O N S                  *
 *                                                            *
 **************************************************************/

/*
 * Alternative MII/PHY handling code adapted from the xl driver.  It doesn't
 * seem to work any better than the xirc2_ps stuff, but it's cleaner code.
 * XXX - this stuff shouldn't be here.  It should all be abstracted off to
 * XXX - some kind of common MII-handling code, shared by all drivers.  But
 * XXX - that's a whole other mission.
 */
#define	XE_MII_SET(x)	XE_OUTB(XE_GPR2, (XE_INB(XE_GPR2) | 0x04) | (x))
#define	XE_MII_CLR(x)	XE_OUTB(XE_GPR2, (XE_INB(XE_GPR2) | 0x04) & ~(x))

/*
 * Sync the PHYs by setting data bit and strobing the clock 32 times.
 */
static void
xe_mii_sync(struct xe_softc *scp)
{
	int i;

	XE_SELECT_PAGE(2);
	XE_MII_SET(XE_MII_DIR|XE_MII_WRD);

	for (i = 0; i < 32; i++) {
		XE_MII_SET(XE_MII_CLK);
		DELAY(1);
		XE_MII_CLR(XE_MII_CLK);
		DELAY(1);
	}
}

/*
 * Look for a MII-compliant PHY.  If we find one, reset it.
 */
static int
xe_mii_init(struct xe_softc *scp)
{
	uint16_t status;

	status = xe_phy_readreg(scp, PHY_BMSR);
	if ((status & 0xff00) != 0x7800) {
		DEVPRINTF(2, (scp->dev, "no PHY found, %0x\n", status));
		return (0);
	} else {
		DEVPRINTF(2, (scp->dev, "PHY OK!\n"));

		/* Reset the PHY */
		xe_phy_writereg(scp, PHY_BMCR, PHY_BMCR_RESET);
		DELAY(500);
		while (xe_phy_readreg(scp, PHY_BMCR) & PHY_BMCR_RESET)
			;	/* nothing */
		XE_MII_DUMP(scp);
		return (1);
	}
}

/*
 * Clock a series of bits through the MII.
 */
static void
xe_mii_send(struct xe_softc *scp, uint32_t bits, int cnt)
{
	int i;

	XE_SELECT_PAGE(2);
	XE_MII_CLR(XE_MII_CLK);

	for (i = (0x1 << (cnt - 1)); i; i >>= 1) {
		if (bits & i) {
			XE_MII_SET(XE_MII_WRD);
		} else {
			XE_MII_CLR(XE_MII_WRD);
		}
		DELAY(1);
		XE_MII_CLR(XE_MII_CLK);
		DELAY(1);
		XE_MII_SET(XE_MII_CLK);
	}
}

/*
 * Read a PHY register through the MII.
 */
static int
xe_mii_readreg(struct xe_softc *scp, struct xe_mii_frame *frame)
{
	int i, ack;

	XE_ASSERT_LOCKED(scp);

	/*
	 * Set up frame for RX.
	 */
	frame->mii_stdelim = XE_MII_STARTDELIM;
	frame->mii_opcode = XE_MII_READOP;
	frame->mii_turnaround = 0;
	frame->mii_data = 0;

	XE_SELECT_PAGE(2);
	XE_OUTB(XE_GPR2, 0);

	/*
	 * Turn on data xmit.
	 */
	XE_MII_SET(XE_MII_DIR);

	xe_mii_sync(scp);

	/*
	 * Send command/address info.
	 */
	xe_mii_send(scp, frame->mii_stdelim, 2);
	xe_mii_send(scp, frame->mii_opcode, 2);
	xe_mii_send(scp, frame->mii_phyaddr, 5);
	xe_mii_send(scp, frame->mii_regaddr, 5);

	/* Idle bit */
	XE_MII_CLR((XE_MII_CLK|XE_MII_WRD));
	DELAY(1);
	XE_MII_SET(XE_MII_CLK);
	DELAY(1);

	/* Turn off xmit. */
	XE_MII_CLR(XE_MII_DIR);

	/* Check for ack */
	XE_MII_CLR(XE_MII_CLK);
	DELAY(1);
	ack = XE_INB(XE_GPR2) & XE_MII_RDD;
	XE_MII_SET(XE_MII_CLK);
	DELAY(1);

	/*
	 * Now try reading data bits.  If the ack failed, we still need to
	 * clock through 16 cycles to keep the PHY(s) in sync.
	 */
	if (ack) {
		for (i = 0; i < 16; i++) {
			XE_MII_CLR(XE_MII_CLK);
			DELAY(1);
			XE_MII_SET(XE_MII_CLK);
			DELAY(1);
		}
		goto fail;
	}

	for (i = 0x8000; i; i >>= 1) {
		XE_MII_CLR(XE_MII_CLK);
		DELAY(1);
		if (!ack) {
			if (XE_INB(XE_GPR2) & XE_MII_RDD)
				frame->mii_data |= i;
			DELAY(1);
		}
		XE_MII_SET(XE_MII_CLK);
		DELAY(1);
	}

fail:
	XE_MII_CLR(XE_MII_CLK);
	DELAY(1);
	XE_MII_SET(XE_MII_CLK);
	DELAY(1);

	if (ack)
		return (1);
	return (0);
}

/*
 * Write to a PHY register through the MII.
 */
static int
xe_mii_writereg(struct xe_softc *scp, struct xe_mii_frame *frame)
{

	XE_ASSERT_LOCKED(scp);

	/*
	 * Set up frame for TX.
	 */
	frame->mii_stdelim = XE_MII_STARTDELIM;
	frame->mii_opcode = XE_MII_WRITEOP;
	frame->mii_turnaround = XE_MII_TURNAROUND;

	XE_SELECT_PAGE(2);

	/*
	 * Turn on data output.
	 */
	XE_MII_SET(XE_MII_DIR);

	xe_mii_sync(scp);

	xe_mii_send(scp, frame->mii_stdelim, 2);
	xe_mii_send(scp, frame->mii_opcode, 2);
	xe_mii_send(scp, frame->mii_phyaddr, 5);
	xe_mii_send(scp, frame->mii_regaddr, 5);
	xe_mii_send(scp, frame->mii_turnaround, 2);
	xe_mii_send(scp, frame->mii_data, 16);

	/* Idle bit. */
	XE_MII_SET(XE_MII_CLK);
	DELAY(1);
	XE_MII_CLR(XE_MII_CLK);
	DELAY(1);

	/*
	 * Turn off xmit.
	 */
	XE_MII_CLR(XE_MII_DIR);

	return (0);
}

/*
 * Read a register from the PHY.
 */
static uint16_t
xe_phy_readreg(struct xe_softc *scp, uint16_t reg)
{
	struct xe_mii_frame frame;

	bzero((char *)&frame, sizeof(frame));

	frame.mii_phyaddr = 0;
	frame.mii_regaddr = reg;
	xe_mii_readreg(scp, &frame);

	return (frame.mii_data);
}

/*
 * Write to a PHY register.
 */
static void
xe_phy_writereg(struct xe_softc *scp, uint16_t reg, uint16_t data)
{
	struct xe_mii_frame frame;

	bzero((char *)&frame, sizeof(frame));

	frame.mii_phyaddr = 0;
	frame.mii_regaddr = reg;
	frame.mii_data = data;
	xe_mii_writereg(scp, &frame);
}
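/*
 * Editor's note: the write path above clocks out a standard IEEE 802.3
 * clause 22 management frame, MSB first, with field widths 2+2+5+5+2+16.
 * A sketch of the 32-bit frame a write to register reg of PHY 0 assembles,
 * assuming the XE_MII_* constants carry the usual two-bit field values
 * (start 01, write opcode 01, turnaround 10):
 */
#if 0
static uint32_t
mii_write_frame_sketch(uint8_t reg, uint16_t data)
{
	uint32_t frame = 0;

	frame |= (uint32_t)XE_MII_STARTDELIM << 30;	/* start delimiter */
	frame |= (uint32_t)XE_MII_WRITEOP << 28;	/* write opcode */
	frame |= (uint32_t)0 << 23;			/* PHY address 0 */
	frame |= (uint32_t)(reg & 0x1f) << 18;		/* register address */
	frame |= (uint32_t)XE_MII_TURNAROUND << 16;	/* turnaround */
	frame |= data;					/* 16 data bits */
	return (frame);
}
#endif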
/*
 * A bit of debugging code.
 */
static void
xe_mii_dump(struct xe_softc *scp)
{
	int i;

	device_printf(scp->dev, "MII registers: ");
	for (i = 0; i < 2; i++) {
		printf(" %d:%04x", i, xe_phy_readreg(scp, i));
	}
	for (i = 4; i < 7; i++) {
		printf(" %d:%04x", i, xe_phy_readreg(scp, i));
	}
	printf("\n");
}

#if 0
void
xe_reg_dump(struct xe_softc *scp)
{
	int page, i;

	device_printf(scp->dev, "Common registers: ");
	for (i = 0; i < 8; i++) {
		printf(" %2.2x", XE_INB(i));
	}
	printf("\n");

	for (page = 0; page <= 8; page++) {
		device_printf(scp->dev, "Register page %2.2x: ", page);
		XE_SELECT_PAGE(page);
		for (i = 8; i < 16; i++) {
			printf(" %2.2x", XE_INB(i));
		}
		printf("\n");
	}

	for (page = 0x10; page < 0x5f; page++) {
		if ((page >= 0x11 && page <= 0x3f) || (page == 0x41) ||
		    (page >= 0x43 && page <= 0x4f) || (page >= 0x59))
			continue;
		device_printf(scp->dev, "Register page %2.2x: ", page);
		XE_SELECT_PAGE(page);
		for (i = 8; i < 16; i++) {
			printf(" %2.2x", XE_INB(i));
		}
		printf("\n");
	}
}
#endif

int
xe_activate(device_t dev)
{
	struct xe_softc *sc = device_get_softc(dev);
	int start, i;

	DEVPRINTF(2, (dev, "activate\n"));

	if (!sc->modem) {
		sc->port_rid = 0;	/* 0 is managed by pccard */
		sc->port_res = bus_alloc_resource_anywhere(dev,
		    SYS_RES_IOPORT, &sc->port_rid, 16, RF_ACTIVE);
	} else if (sc->dingo) {
		/*
		 * Find a 16 byte aligned ioport for the card.
		 */
		DEVPRINTF(1, (dev, "Finding an aligned port for RealPort\n"));
		sc->port_rid = 1;	/* 0 is managed by pccard */
		start = 0x100;
		do {
			sc->port_res = bus_alloc_resource(dev,
			    SYS_RES_IOPORT, &sc->port_rid, start, 0x3ff, 16,
			    RF_ACTIVE);
			if (sc->port_res == NULL)
				break;
			if ((rman_get_start(sc->port_res) & 0xf) == 0)
				break;
			bus_release_resource(dev, SYS_RES_IOPORT,
			    sc->port_rid, sc->port_res);
			start = (rman_get_start(sc->port_res) + 15) & ~0xf;
		} while (1);
		DEVPRINTF(1, (dev, "RealPort port 0x%0jx, size 0x%0jx\n",
		    bus_get_resource_start(dev, SYS_RES_IOPORT, sc->port_rid),
		    bus_get_resource_count(dev, SYS_RES_IOPORT,
		    sc->port_rid)));
	} else if (sc->ce2) {
		/*
		 * Find contiguous I/O port for the Ethernet function on
		 * CEM2 and CEM3 cards.  We allocate window 0 wherever
		 * pccard has decided it should be, then find an available
		 * window adjacent to it for the second function.  Not sure
		 * that both windows are actually needed.
		 */
		DEVPRINTF(1, (dev, "Finding I/O port for CEM2/CEM3\n"));
		sc->ce2_port_rid = 0;	/* 0 is managed by pccard */
		sc->ce2_port_res = bus_alloc_resource_anywhere(dev,
		    SYS_RES_IOPORT, &sc->ce2_port_rid, 8, RF_ACTIVE);
		if (sc->ce2_port_res == NULL) {
			DEVPRINTF(1, (dev,
			    "Cannot allocate I/O port for modem\n"));
			xe_deactivate(dev);
			return (ENOMEM);
		}

		sc->port_rid = 1;
		start = bus_get_resource_start(dev, SYS_RES_IOPORT,
		    sc->ce2_port_rid);
		for (i = 0; i < 2; i++) {
			start += (i == 0 ? 8 : -24);
			sc->port_res = bus_alloc_resource(dev,
			    SYS_RES_IOPORT, &sc->port_rid, start, start + 15,
			    16, RF_ACTIVE);
			if (sc->port_res == NULL)
				continue;
			if (bus_get_resource_start(dev, SYS_RES_IOPORT,
			    sc->port_rid) == start)
				break;

			bus_release_resource(dev, SYS_RES_IOPORT,
			    sc->port_rid, sc->port_res);
			sc->port_res = NULL;
		}
		DEVPRINTF(1, (dev, "CEM2/CEM3 port 0x%0jx, size 0x%0jx\n",
		    bus_get_resource_start(dev, SYS_RES_IOPORT, sc->port_rid),
		    bus_get_resource_count(dev, SYS_RES_IOPORT,
		    sc->port_rid)));
	}

	if (!sc->port_res) {
		DEVPRINTF(1, (dev, "Cannot allocate ioport\n"));
		xe_deactivate(dev);
		return (ENOMEM);
	}

	sc->irq_rid = 0;
	sc->irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &sc->irq_rid,
	    RF_ACTIVE);
	if (sc->irq_res == NULL) {
		DEVPRINTF(1, (dev, "Cannot allocate irq\n"));
		xe_deactivate(dev);
		return (ENOMEM);
	}

	return (0);
}

void
xe_deactivate(device_t dev)
{
	struct xe_softc *sc = device_get_softc(dev);

	DEVPRINTF(2, (dev, "deactivate\n"));

	if (sc->intrhand)
		bus_teardown_intr(dev, sc->irq_res, sc->intrhand);
	sc->intrhand = NULL;

	if (sc->port_res)
		bus_release_resource(dev, SYS_RES_IOPORT, sc->port_rid,
		    sc->port_res);
	sc->port_res = NULL;

	if (sc->ce2_port_res)
		bus_release_resource(dev, SYS_RES_IOPORT, sc->ce2_port_rid,
		    sc->ce2_port_res);
	sc->ce2_port_res = NULL;

	if (sc->irq_res)
		bus_release_resource(dev, SYS_RES_IRQ, sc->irq_rid,
		    sc->irq_res);
	sc->irq_res = NULL;

	if (sc->ifp)
		if_free(sc->ifp);
	sc->ifp = NULL;
}
Index: stable/12/sys/sys/systm.h
===================================================================
--- stable/12/sys/sys/systm.h	(revision 339734)
+++ stable/12/sys/sys/systm.h	(revision 339735)
@@ -1,549 +1,552 @@
/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1982, 1988, 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)systm.h	8.7 (Berkeley) 3/29/95
 * $FreeBSD$
 */

#ifndef _SYS_SYSTM_H_
#define	_SYS_SYSTM_H_

#include <machine/atomic.h>
#include <machine/cpufunc.h>
#include <sys/callout.h>
#include <sys/cdefs.h>
#include <sys/queue.h>
#include <sys/stdint.h>		/* for people using printf mainly */

__NULLABILITY_PRAGMA_PUSH

extern int cold;		/* nonzero if we are doing a cold boot */
extern int suspend_blocked;	/* block suspend due to pending shutdown */
extern int rebooting;		/* kern_reboot() has been called. */
extern const char *panicstr;	/* panic message */
extern char version[];		/* system version */
extern char compiler_version[];	/* compiler version */
extern char copyright[];	/* system copyright */
extern int kstack_pages;	/* number of kernel stack pages */

extern u_long pagesizes[];	/* supported page sizes */
extern long physmem;		/* physical memory */
extern long realmem;		/* 'real' memory */

extern char *rootdevnames[2];	/* names of possible root devices */

extern int boothowto;		/* reboot flags, from console subsystem */
extern int bootverbose;		/* nonzero to print verbose messages */

extern int maxusers;		/* system tune hint */
extern int ngroups_max;		/* max # of supplemental groups */
extern int vm_guest;		/* Running as virtual machine guest? */

/*
 * Detected virtual machine guest types.  The intention is to expand
 * and/or add to the VM_GUEST_VM type if specific VM functionality is
 * ever implemented (e.g. vendor-specific paravirtualization features).
 * Keep in sync with vm_guest_sysctl_names[].
 */
enum VM_GUEST { VM_GUEST_NO = 0, VM_GUEST_VM, VM_GUEST_XEN, VM_GUEST_HV,
		VM_GUEST_VMWARE, VM_GUEST_KVM, VM_GUEST_BHYVE, VM_LAST };

/*
 * These functions need to be declared before the KASSERT macro is invoked in
 * !KASSERT_PANIC_OPTIONAL builds, so their declarations are sort of out of
 * place compared to other function definitions in this header.  On the other
 * hand, this header is a bit disorganized anyway.
 */
void	panic(const char *, ...) __dead2 __printflike(1, 2);
void	vpanic(const char *, __va_list) __dead2 __printflike(1, 0);

#if defined(WITNESS) || defined(INVARIANT_SUPPORT)
#ifdef KASSERT_PANIC_OPTIONAL
void	kassert_panic(const char *fmt, ...) __printflike(1, 2);
#else
#define	kassert_panic	panic
#endif
#endif

#ifdef	INVARIANTS		/* The option is always available */
#define	KASSERT(exp,msg) do {						\
	if (__predict_false(!(exp)))					\
		kassert_panic msg;					\
} while (0)
#define	VNASSERT(exp, vp, msg) do {					\
	if (__predict_false(!(exp))) {					\
		vn_printf(vp, "VNASSERT failed\n");			\
		kassert_panic msg;					\
	}								\
} while (0)
#else
#define	KASSERT(exp,msg) do { \
} while (0)

#define	VNASSERT(exp, vp, msg) do { \
} while (0)
#endif

#ifndef CTASSERT	/* Allow lint to override */
#define	CTASSERT(x)	_Static_assert(x, "compile-time assertion failed")
#endif

#if defined(_KERNEL)
#include <sys/param.h>		/* MAXCPU */
#include <sys/pcpu.h>		/* curthread */
#include <sys/kpilite.h>
#endif

/*
 * Assert that a pointer can be loaded from memory atomically.
 *
 * This assertion enforces stronger alignment than necessary.  For example,
 * on some architectures, atomicity for unaligned loads will depend on
 * whether or not the load spans multiple cache lines.
 */
#define	ASSERT_ATOMIC_LOAD_PTR(var, msg)				\
	KASSERT(sizeof(var) == sizeof(void *) &&			\
	    ((uintptr_t)&(var) & (sizeof(void *) - 1)) == 0, msg)

/*
 * Assert that a thread is in critical(9) section.
 */
#define	CRITICAL_ASSERT(td)						\
	KASSERT((td)->td_critnest >= 1, ("Not in critical section"));
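/*
 * Editor's example: the message argument to KASSERT is itself
 * parenthesized, because the macro expands to "kassert_panic msg":
 *
 *	KASSERT(m != NULL, ("%s: NULL mbuf", __func__));
 *
 * Under INVARIANTS a false condition calls kassert_panic(); otherwise the
 * whole statement compiles away.
 */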
/*
 * If we have already panic'd and this is the thread that called
 * panic(), then don't block on any mutexes but silently succeed.
 * Otherwise, the kernel will deadlock since the scheduler isn't
 * going to run the thread that holds any lock we need.
 */
#define	SCHEDULER_STOPPED_TD(td)  ({					\
	MPASS((td) == curthread);					\
	__predict_false((td)->td_stopsched);				\
})
#define	SCHEDULER_STOPPED() SCHEDULER_STOPPED_TD(curthread)

/*
 * Align variables.
 */
#define	__read_mostly		__section(".data.read_mostly")
#define	__read_frequently	__section(".data.read_frequently")
#define	__exclusive_cache_line	__aligned(CACHE_LINE_SIZE) \
				    __section(".data.exclusive_cache_line")

/*
 * XXX the hints declarations are even more misplaced than most declarations
 * in this file, since they are needed in one file (per arch) and only used
 * in two files.
 * XXX most of these variables should be const.
 */
extern int osreldate;
extern bool dynamic_kenv;
extern struct mtx kenv_lock;
extern char *kern_envp;
extern char *md_envp;
extern char static_env[];
extern char static_hints[];	/* by config for now */

extern char **kenvp;

extern const void *zero_region;	/* address space maps to a zeroed page */

extern int unmapped_buf_allowed;

#ifdef __LP64__
#define	IOSIZE_MAX		iosize_max()
#define	DEVFS_IOSIZE_MAX	devfs_iosize_max()
#else
#define	IOSIZE_MAX		SSIZE_MAX
#define	DEVFS_IOSIZE_MAX	SSIZE_MAX
#endif

/*
 * General function declarations.
 */

struct inpcb;
struct lock_object;
struct malloc_type;
struct mtx;
struct proc;
struct socket;
struct thread;
struct tty;
struct ucred;
struct uio;
struct _jmp_buf;
struct trapframe;
struct eventtimer;

int	setjmp(struct _jmp_buf *) __returns_twice;
void	longjmp(struct _jmp_buf *, int) __dead2;
int	dumpstatus(vm_offset_t addr, off_t count);
int	nullop(void);
int	eopnotsupp(void);
int	ureadc(int, struct uio *);
void	hashdestroy(void *, struct malloc_type *, u_long);
void	*hashinit(int count, struct malloc_type *type, u_long *hashmask);
void	*hashinit_flags(int count, struct malloc_type *type,
    u_long *hashmask, int flags);
#define	HASH_NOWAIT	0x00000001
#define	HASH_WAITOK	0x00000002

void	*phashinit(int count, struct malloc_type *type, u_long *nentries);
void	*phashinit_flags(int count, struct malloc_type *type,
    u_long *nentries, int flags);
void	g_waitidle(void);

void	cpu_boot(int);
void	cpu_flush_dcache(void *, size_t);
void	cpu_rootconf(void);
void	critical_enter_KBI(void);
void	critical_exit_KBI(void);
void	critical_exit_preempt(void);
void	init_param1(void);
void	init_param2(long physpages);
void	init_static_kenv(char *, size_t);
void	tablefull(const char *);

#if defined(KLD_MODULE) || defined(KTR_CRITICAL) || !defined(_KERNEL) || defined(GENOFFSET)
#define	critical_enter() critical_enter_KBI()
#define	critical_exit() critical_exit_KBI()
#else
static __inline void
critical_enter(void)
{
	struct thread_lite *td;

	td = (struct thread_lite *)curthread;
	td->td_critnest++;
	__compiler_membar();
}

static __inline void
critical_exit(void)
{
	struct thread_lite *td;

	td = (struct thread_lite *)curthread;
	KASSERT(td->td_critnest != 0,
	    ("critical_exit: td_critnest == 0"));
	__compiler_membar();
	td->td_critnest--;
	__compiler_membar();
	if (__predict_false(td->td_owepreempt))
		critical_exit_preempt();
}
#endif
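/*
 * Editor's sketch of typical use: critical sections bracket code that must
 * not be preempted on the current CPU; they nest by bumping td_critnest,
 * and the final critical_exit() honours any preemption owed while the
 * section was held (td_owepreempt).
 */
#if 0
	critical_enter();
	/* ... touch per-CPU state without risk of preemption ... */
	critical_exit();
#endif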
#ifdef	EARLY_PRINTF
typedef void early_putc_t(int ch);
extern early_putc_t *early_putc;
#endif
int	kvprintf(char const *, void (*)(int, void*), void *, int,
	    __va_list) __printflike(1, 0);
void	log(int, const char *, ...) __printflike(2, 3);
void	log_console(struct uio *);
void	vlog(int, const char *, __va_list) __printflike(2, 0);
int	asprintf(char **ret, struct malloc_type *mtp, const char *format,
	    ...) __printflike(3, 4);
int	printf(const char *, ...) __printflike(1, 2);
int	snprintf(char *, size_t, const char *, ...) __printflike(3, 4);
int	sprintf(char *buf, const char *, ...) __printflike(2, 3);
int	uprintf(const char *, ...) __printflike(1, 2);
int	vprintf(const char *, __va_list) __printflike(1, 0);
int	vasprintf(char **ret, struct malloc_type *mtp, const char *format,
	    __va_list ap) __printflike(3, 0);
int	vsnprintf(char *, size_t, const char *, __va_list) __printflike(3, 0);
int	vsnrprintf(char *, size_t, int, const char *, __va_list) __printflike(4, 0);
int	vsprintf(char *buf, const char *, __va_list) __printflike(2, 0);
int	ttyprintf(struct tty *, const char *, ...) __printflike(2, 3);
int	sscanf(const char *, char const * _Nonnull, ...) __scanflike(2, 3);
int	vsscanf(const char * _Nonnull, char const * _Nonnull, __va_list)
	    __scanflike(2, 0);
long	strtol(const char *, char **, int);
u_long	strtoul(const char *, char **, int);
quad_t	strtoq(const char *, char **, int);
u_quad_t strtouq(const char *, char **, int);
void	tprintf(struct proc *p, int pri, const char *, ...) __printflike(3, 4);
void	vtprintf(struct proc *, int, const char *, __va_list) __printflike(3, 0);
void	hexdump(const void *ptr, int length, const char *hdr, int flags);
#define	HD_COLUMN_MASK	0xff
#define	HD_DELIM_MASK	0xff00
#define	HD_OMIT_COUNT	(1 << 16)
#define	HD_OMIT_HEX	(1 << 17)
#define	HD_OMIT_CHARS	(1 << 18)

#define	ovbcopy(f, t, l) bcopy((f), (t), (l))
void	bcopy(const void * _Nonnull from, void * _Nonnull to, size_t len);
#define	bcopy(from, to, len) __builtin_memmove((to), (from), (len))
void	bzero(void * _Nonnull buf, size_t len);
#define	bzero(buf, len) __builtin_memset((buf), 0, (len))
void	explicit_bzero(void * _Nonnull, size_t);
int	bcmp(const void *b1, const void *b2, size_t len);
#define	bcmp(b1, b2, len) __builtin_memcmp((b1), (b2), (len))

void	*memset(void * _Nonnull buf, int c, size_t len);
#define	memset(buf, c, len) __builtin_memset((buf), (c), (len))
void	*memcpy(void * _Nonnull to, const void * _Nonnull from, size_t len);
#define	memcpy(to, from, len) __builtin_memcpy((to), (from), (len))
void	*memmove(void * _Nonnull dest, const void * _Nonnull src, size_t n);
#define	memmove(dest, src, n) __builtin_memmove((dest), (src), (n))
int	memcmp(const void *b1, const void *b2, size_t len);
#define	memcmp(b1, b2, len) __builtin_memcmp((b1), (b2), (len))

void	*memset_early(void * _Nonnull buf, int c, size_t len);
#define	bzero_early(buf, len) memset_early((buf), 0, (len))
void	*memcpy_early(void * _Nonnull to, const void * _Nonnull from, size_t len);
void	*memmove_early(void * _Nonnull dest, const void * _Nonnull src, size_t n);
#define	bcopy_early(from, to, len) memmove_early((to), (from), (len))

int	copystr(const void * _Nonnull __restrict kfaddr,
	    void * _Nonnull __restrict kdaddr, size_t len,
	    size_t * __restrict lencopied);
int	copyinstr(const void * __restrict udaddr,
	    void * _Nonnull __restrict kaddr, size_t len,
	    size_t * __restrict lencopied);
int	copyin(const void * __restrict udaddr,
	    void * _Nonnull __restrict kaddr, size_t len);
int	copyin_nofault(const void * __restrict udaddr,
	    void * _Nonnull __restrict kaddr, size_t len);
int	copyout(const void * _Nonnull __restrict kaddr,
	    void * __restrict udaddr, size_t len);
int	copyout_nofault(const void * _Nonnull __restrict kaddr,
	    void * __restrict udaddr, size_t len);
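/*
 * Editor's sketch: copyin()/copyout() move data across the user/kernel
 * boundary and return 0 on success or EFAULT on a bad user address.  A
 * hypothetical ioctl-style round trip (struct example_args and uaddr are
 * illustrative only):
 */
#if 0
	struct example_args a;
	int error;

	error = copyin(uaddr, &a, sizeof(a));		/* user -> kernel */
	if (error == 0) {
		a.result = 0;				/* ... operate on a ... */
		error = copyout(&a, uaddr, sizeof(a));	/* kernel -> user */
	}
#endif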
int	fubyte(volatile const void *base);
long	fuword(volatile const void *base);
int	fuword16(volatile const void *base);
int32_t	fuword32(volatile const void *base);
int64_t	fuword64(volatile const void *base);
int	fueword(volatile const void *base, long *val);
int	fueword32(volatile const void *base, int32_t *val);
int	fueword64(volatile const void *base, int64_t *val);
int	subyte(volatile void *base, int byte);
int	suword(volatile void *base, long word);
int	suword16(volatile void *base, int word);
int	suword32(volatile void *base, int32_t word);
int	suword64(volatile void *base, int64_t word);
uint32_t casuword32(volatile uint32_t *base, uint32_t oldval,
	    uint32_t newval);
u_long	casuword(volatile u_long *p, u_long oldval, u_long newval);
int	casueword32(volatile uint32_t *base, uint32_t oldval,
	    uint32_t *oldvalp, uint32_t newval);
int	casueword(volatile u_long *p, u_long oldval, u_long *oldvalp,
	    u_long newval);

void	realitexpire(void *);

int	sysbeep(int hertz, int period);

void	hardclock(int cnt, int usermode);
void	hardclock_sync(int cpu);
void	softclock(void *);
void	statclock(int cnt, int usermode);
void	profclock(int cnt, int usermode, uintfptr_t pc);

int	hardclockintr(void);

void	startprofclock(struct proc *);
void	stopprofclock(struct proc *);
void	cpu_startprofclock(void);
void	cpu_stopprofclock(void);
void	suspendclock(void);
void	resumeclock(void);
sbintime_t 	cpu_idleclock(void);
void	cpu_activeclock(void);
void	cpu_new_callout(int cpu, sbintime_t bt, sbintime_t bt_opt);
void	cpu_et_frequency(struct eventtimer *et, uint64_t newfreq);
extern int	cpu_disable_c2_sleep;
extern int	cpu_disable_c3_sleep;

char	*kern_getenv(const char *name);
void	freeenv(char *env);
int	getenv_int(const char *name, int *data);
int	getenv_uint(const char *name, unsigned int *data);
int	getenv_long(const char *name, long *data);
int	getenv_ulong(const char *name, unsigned long *data);
int	getenv_string(const char *name, char *data, int size);
int	getenv_int64(const char *name, int64_t *data);
int	getenv_uint64(const char *name, uint64_t *data);
int	getenv_quad(const char *name, quad_t *data);
int	kern_setenv(const char *name, const char *value);
int	kern_unsetenv(const char *name);
int	testenv(const char *name);

int	getenv_array(const char *name, void *data, int size, int *psize,
    int type_size, bool allow_signed);
#define	GETENV_UNSIGNED	false	/* negative numbers not allowed */
#define	GETENV_SIGNED	true	/* negative numbers allowed */
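/*
 * Editor's sketch: kernel environment (tunable) lookups.  getenv_int()
 * parses straight into the caller's variable and returns nonzero on
 * success; strings from kern_getenv() must be released with freeenv().
 * The tunable names below are illustrative only.
 */
#if 0
	int disable;
	char *mode;

	if (getenv_int("hw.example.msi_disable", &disable) != 0)
		printf("msi_disable tunable set to %d\n", disable);
	if ((mode = kern_getenv("hw.example.mode")) != NULL) {
		printf("mode tunable is %s\n", mode);
		freeenv(mode);
	}
#endif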
typedef uint64_t (cpu_tick_f)(void);
void set_cputicker(cpu_tick_f *func, uint64_t freq, unsigned var);
extern cpu_tick_f *cpu_ticks;
uint64_t cpu_tickrate(void);
uint64_t cputick2usec(uint64_t tick);

#ifdef APM_FIXUP_CALLTODO
struct timeval;
void	adjust_timeout_calltodo(struct timeval *time_change);
#endif /* APM_FIXUP_CALLTODO */

#include <sys/libkern.h>

/* Initialize the world */
void	consinit(void);
void	cpu_initclocks(void);
void	cpu_initclocks_bsp(void);
void	cpu_initclocks_ap(void);
void	usrinfoinit(void);

/* Finalize the world */
void	kern_reboot(int) __dead2;
void	shutdown_nice(int);

/* Timeouts */
typedef void timeout_t(void *);	/* timeout function type */
#define	CALLOUT_HANDLE_INITIALIZER(handle)	\
	{ NULL }

void	callout_handle_init(struct callout_handle *);
struct	callout_handle timeout(timeout_t *, void *, int);
void	untimeout(timeout_t *, void *, struct callout_handle);

/* Stubs for obsolete functions that used to be for interrupt management */
static __inline intrmask_t	splbio(void)		{ return 0; }
static __inline intrmask_t	splcam(void)		{ return 0; }
static __inline intrmask_t	splclock(void)		{ return 0; }
static __inline intrmask_t	splhigh(void)		{ return 0; }
static __inline intrmask_t	splimp(void)		{ return 0; }
static __inline intrmask_t	splnet(void)		{ return 0; }
static __inline intrmask_t	spltty(void)		{ return 0; }
static __inline void		splx(intrmask_t ipl __unused)	{ return; }

/*
 * Common `proc' functions are declared here so that proc.h can be included
 * less often.
 */
int	_sleep(void * _Nonnull chan, struct lock_object *lock, int pri,
	   const char *wmesg, sbintime_t sbt, sbintime_t pr, int flags);
#define	msleep(chan, mtx, pri, wmesg, timo)				\
	_sleep((chan), &(mtx)->lock_object, (pri), (wmesg),		\
	    tick_sbt * (timo), 0, C_HARDCLOCK)
#define	msleep_sbt(chan, mtx, pri, wmesg, bt, pr, flags)		\
	_sleep((chan), &(mtx)->lock_object, (pri), (wmesg), (bt), (pr),	\
	    (flags))
int	msleep_spin_sbt(void * _Nonnull chan, struct mtx *mtx,
	    const char *wmesg, sbintime_t sbt, sbintime_t pr, int flags);
#define	msleep_spin(chan, mtx, wmesg, timo)				\
	msleep_spin_sbt((chan), (mtx), (wmesg), tick_sbt * (timo),	\
	    0, C_HARDCLOCK)
int	pause_sbt(const char *wmesg, sbintime_t sbt, sbintime_t pr,
	    int flags);
#define	pause(wmesg, timo)						\
	pause_sbt((wmesg), tick_sbt * (timo), 0, C_HARDCLOCK)
#define	pause_sig(wmesg, timo)						\
	pause_sbt((wmesg), tick_sbt * (timo), 0, C_HARDCLOCK | C_CATCH)
#define	tsleep(chan, pri, wmesg, timo)					\
	_sleep((chan), NULL, (pri), (wmesg), tick_sbt * (timo),		\
	    0, C_HARDCLOCK)
#define	tsleep_sbt(chan, pri, wmesg, bt, pr, flags)			\
	_sleep((chan), NULL, (pri), (wmesg), (bt), (pr), (flags))
void	wakeup(void * chan);
void	wakeup_one(void * chan);

/*
 * Common `struct cdev *' stuff are declared here to avoid #include
 * poisoning
 */
struct cdev;
dev_t dev2udev(struct cdev *x);
const char *devtoname(struct cdev *cdev);

#ifdef __LP64__
size_t	devfs_iosize_max(void);
size_t	iosize_max(void);
#endif

int poll_no_poll(int events);

/* XXX: Should be void nanodelay(u_int nsec); */
void	DELAY(int usec);

/* Root mount holdback API */
struct root_hold_token;

struct root_hold_token *root_mount_hold(const char *identifier);
void root_mount_rel(struct root_hold_token *h);
int root_mounted(void);

/*
 * Unit number allocation API. (kern/subr_unit.c)
 */
struct unrhdr;
struct unrhdr *new_unrhdr(int low, int high, struct mtx *mutex);
void init_unrhdr(struct unrhdr *uh, int low, int high, struct mtx *mutex);
void delete_unrhdr(struct unrhdr *uh);
void clear_unrhdr(struct unrhdr *uh);
void clean_unrhdr(struct unrhdr *uh);
void clean_unrhdrl(struct unrhdr *uh);
int alloc_unr(struct unrhdr *uh);
int alloc_unr_specific(struct unrhdr *uh, u_int item);
int alloc_unrl(struct unrhdr *uh);
void free_unr(struct unrhdr *uh, u_int item);
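/*
 * Editor's sketch: the unit-number allocator hands out integers from a
 * [low, high] range.  alloc_unr() returns -1 when the space is exhausted,
 * and passing a NULL mutex to new_unrhdr() selects a shared internal one.
 */
#if 0
	struct unrhdr *units;
	int unit;

	units = new_unrhdr(0, 255, NULL);
	if ((unit = alloc_unr(units)) != -1)
		free_unr(units, unit);
	delete_unrhdr(units);	/* allocator must be empty again */
#endif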
void	intr_prof_stack_use(struct thread *td, struct trapframe *frame);

void counted_warning(unsigned *counter, const char *msg);

/*
 * APIs to manage deprecation and obsolescence.
 */
struct device;
void _gone_in(int major, const char *msg);
void _gone_in_dev(struct device *dev, int major, const char *msg);
#ifdef NO_OBSOLETE_CODE
#define	__gone_ok(m, msg)					\
	_Static_assert(m < P_OSREL_MAJOR(__FreeBSD_version),	\
	    "Obsolete code" msg)
#else
#define	__gone_ok(m, msg)
#endif
#define	gone_in(major, msg)		__gone_ok(major, msg) _gone_in(major, msg)
#define	gone_in_dev(dev, major, msg)	__gone_ok(major, msg) _gone_in_dev(dev, major, msg)
+#define	gone_by_fcp101_dev(dev)						\
+	gone_in_dev((dev), 13,						\
+	    "see https://github.com/freebsd/fcp/blob/master/fcp-0101.md")

__NULLABILITY_PRAGMA_POP

#endif /* !_SYS_SYSTM_H_ */
Index: stable/12
===================================================================
--- stable/12	(revision 339734)
+++ stable/12	(revision 339735)

Property changes on: stable/12
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
   Merged /head:r339703
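/*
 * Editor's note: the gone_by_fcp101_dev() macro added above is what the
 * driver changes merged in this revision invoke, so each boot logs that the
 * device will be removed in FreeBSD 13 and points at FCP-101.  A hedged
 * sketch of the per-driver pattern; the exact placement within each
 * driver's attach routine may differ.
 */
#if 0
static int
xe_attach(device_t dev)
{
	/* ... normal resource and ifnet setup ... */
	gone_by_fcp101_dev(dev);
	return (0);
}
#endif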