Index: head/en_US.ISO8859-1/books/handbook/advanced-networking/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/advanced-networking/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/advanced-networking/chapter.xml (revision 46272) @@ -1,5217 +1,5217 @@ Advanced Networking Synopsis This chapter covers a number of advanced networking topics. After reading this chapter, you will know: The basics of gateways and routes. How to set up USB tethering. How to set up &ieee; 802.11 and &bluetooth; devices. How to make &os; act as a bridge. How to set up network PXE booting. How to set up IPv6 on a &os; machine. How to enable and utilize the features of the Common Address Redundancy Protocol (CARP) in &os;. Before reading this chapter, you should: Understand the basics of the /etc/rc scripts. Be familiar with basic network terminology. Know how to configure and install a new &os; kernel (). Know how to install additional third-party software (). Gateways and Routes Coranth Gryphon Contributed by routing gateway subnet Routing is the mechanism that allows a system to find the network path to another system. A route is a defined pair of addresses which represent the destination and a gateway. The route indicates that when trying to get to the specified destination, send the packets through the specified gateway. There are three types of destinations: individual hosts, subnets, and default. The default route is used if no other routes apply. There are also three types of gateways: individual hosts, interfaces, also called links, and Ethernet hardware (MAC) addresses. Known routes are stored in a routing table. This section provides an overview of routing basics. It then demonstrates how to configure a &os; system as a router and offers some troubleshooting tips. Routing Basics To view the routing table of a &os; system, use &man.netstat.1;: &prompt.user; netstat -r Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default outside-gw UGS 37 418 em0 localhost localhost UH 0 181 lo0 test0 0:e0:b5:36:cf:4f UHLW 5 63288 re0 77 10.20.30.255 link#1 UHLW 1 2421 example.com link#1 UC 0 0 host1 0:e0:a8:37:8:1e UHLW 3 4601 lo0 host2 0:e0:a8:37:8:1e UHLW 0 5 lo0 => host2.example.com link#1 UC 0 0 224 link#1 UC 0 0 The entries in this example are as follows: default The first route in this table specifies the default route. When the local system needs to make a connection to a remote host, it checks the routing table to determine if a known path exists. If the remote host matches an entry in the table, the system checks to see if it can connect using the interface specified in that entry. If the destination does not match an entry, or if all known paths fail, the system uses the entry for the default route. For hosts on a local area network, the Gateway field in the default route is set to the system which has a direct connection to the Internet. When reading this entry, verify that the Flags column indicates that the gateway is usable (UG). The default route for a machine which itself is functioning as the gateway to the outside world will be the gateway machine at the Internet Service Provider (ISP). localhost The second route is the localhost route. The interface specified in the Netif column for localhost is lo0, also known as the loopback device. This indicates that all traffic for this destination should be internal, rather than sending it out over the network. 
MAC address The addresses beginning with 0:e0: are MAC addresses. &os; will automatically identify any hosts, test0 in the example, on the local Ethernet and add a route for that host over the Ethernet interface, re0. This type of route has a timeout, seen in the Expire column, which is used if the host does not respond in a specific amount of time. When this happens, the route to this host will be automatically deleted. These hosts are identified using the Routing Information Protocol (RIP), which calculates routes to local hosts based upon a shortest path determination. subnet &os; will automatically add subnet routes for the local subnet. In this example, 10.20.30.255 is the broadcast address for the subnet 10.20.30 and example.com is the domain name associated with that subnet. The designation link#1 refers to the first Ethernet card in the machine. Local network hosts and local subnets have their routes automatically configured by a daemon called &man.routed.8;. If it is not running, only routes which are statically defined by the administrator will exist. host The host1 line refers to the host by its Ethernet address. Since it is the sending host, &os; knows to use the loopback interface (lo0) rather than the Ethernet interface. The two host2 lines represent aliases which were created using &man.ifconfig.8;. The => symbol after the lo0 interface says that an alias has been set in addition to the loopback address. Such routes only show up on the host that supports the alias and all other hosts on the local network will have a link#1 line for such routes. 224 The final line (destination subnet 224) deals with multicasting. Various attributes of each route can be seen in the Flags column. summarizes some of these flags and their meanings: Commonly Seen Routing Table Flags Command Purpose U The route is active (up). H The route destination is a single host. G Send anything for this destination on to this gateway, which will figure out from there where to send it. S This route was statically configured. C Clones a new route based upon this route for machines to connect to. This type of route is normally used for local networks. W The route was auto-configured based upon a local area network (clone) route. L Route involves references to Ethernet (link) hardware.
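To see which table entry will actually be used for a particular destination, together with its flags and interface, the routing table can be queried directly with &man.route.8;. This is a minimal sketch using the gateway address from the example table above; any reachable destination address can be substituted:
&prompt.root; route -n get 10.20.30.1
The output reports the matching destination, the gateway, the flags, and the interface that will carry the traffic.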
On a &os; system, the default route can be defined in /etc/rc.conf by specifying the IP address of the default gateway: defaultrouter="10.20.30.1" It is also possible to manually add the route using route: &prompt.root; route add default 10.20.30.1 Note that manually added routes will not survive a reboot. For more information on manual manipulation of network routing tables, refer to &man.route.8;.
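If the default gateway changes while the system is running, the manually added route can be removed or replaced without a reboot. The following sketch deletes the current default route and adds a new one; 10.20.30.2 is a hypothetical alternate gateway used only for illustration:
&prompt.root; route delete default
&prompt.root; route add default 10.20.30.2
Remember to update defaultrouter in /etc/rc.conf as well so that the change survives the next boot.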
Configuring a Router with Static Routes Al Hoang Contributed by dual homed hosts A &os; system can be configured as the default gateway, or router, for a network if it is a dual-homed system. A dual-homed system is a host which resides on at least two different networks. Typically, each network is connected to a separate network interface, though IP aliasing can be used to bind multiple addresses, each on a different subnet, to one physical interface. router In order for the system to forward packets between interfaces, &os; must be configured as a router. Internet standards and good engineering practice prevent the &os; Project from enabling this feature by default, but it can be configured to start at boot by adding this line to /etc/rc.conf: gateway_enable="YES" # Set to YES if this host will be a gateway To enable routing now, set the &man.sysctl.8; variable net.inet.ip.forwarding to 1. To stop routing, reset this variable to 0. BGP RIP OSPF The routing table of a router needs additional routes so it knows how to reach other networks. Routes can be either added manually using static routes or routes can be automatically learned using a routing protocol. Static routes are appropriate for small networks and this section describes how to add a static routing entry for a small network. For large networks, static routes quickly become unscalable. &os; comes with the standard BSD routing daemon &man.routed.8;, which provides the routing protocols RIP, versions 1 and 2, and IRDP. Support for the BGP and OSPF routing protocols can be installed using the net/zebra package or port. Consider the following network: INTERNET | (10.0.0.1/24) Default Router to Internet | |Interface xl0 |10.0.0.10/24 +------+ | | RouterA | | (FreeBSD gateway) +------+ | Interface xl1 | 192.168.1.1/24 | +--------------------------------+ Internal Net 1 | 192.168.1.2/24 | +------+ | | RouterB | | +------+ | 192.168.2.1/24 | Internal Net 2 In this scenario, RouterA is a &os; machine that is acting as a router to the rest of the Internet. It has a default route set to 10.0.0.1 which allows it to connect with the outside world. RouterB is already configured to use 192.168.1.1 as its default gateway. Before adding any static routes, the routing table on RouterA looks like this: &prompt.user; netstat -nr Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default 10.0.0.1 UGS 0 49378 xl0 127.0.0.1 127.0.0.1 UH 0 6 lo0 10.0.0.0/24 link#1 UC 0 0 xl0 192.168.1.0/24 link#2 UC 0 0 xl1 With the current routing table, RouterA does not have a route to the 192.168.2.0/24 network. The following command adds the Internal Net 2 network to RouterA's routing table using 192.168.1.2 as the next hop: &prompt.root; route add -net 192.168.2.0/24 192.168.1.2 Now, RouterA can reach any host on the 192.168.2.0/24 network. However, the routing information will not persist if the &os; system reboots. If a static route needs to be persistent, add it to /etc/rc.conf: # Add Internal Net 2 as a persistent static route static_routes="internalnet2" route_internalnet2="-net 192.168.2.0/24 192.168.1.2" The static_routes configuration variable is a list of strings separated by a space, where each string references a route name. The variable route_internalnet2 contains the static route for that route name. Using more than one string in static_routes creates multiple static routes. 
The following shows an example of adding static routes for the 192.168.0.0/24 and 192.168.1.0/24 networks: static_routes="net1 net2" route_net1="-net 192.168.0.0/24 192.168.0.1" route_net2="-net 192.168.1.0/24 192.168.1.1" Troubleshooting When an address space is assigned to a network, the service provider configures their routing tables so that all traffic for the network will be sent to the link for the site. But how do external sites know to send their packets to the network's ISP? There is a system that keeps track of all assigned address spaces and defines their point of connection to the Internet backbone, or the main trunk lines that carry Internet traffic across the country and around the world. Each backbone machine has a copy of a master set of tables, which direct traffic for a particular network to a specific backbone carrier, and from there down the chain of service providers until it reaches a particular network. It is the task of the service provider to advertise to the backbone sites that they are the point of connection, and thus the path inward, for a site. This is known as route propagation. &man.traceroute.8; Sometimes, there is a problem with route propagation and some sites are unable to connect. Perhaps the most useful command for trying to figure out where routing is breaking down is traceroute. It is useful when ping fails. When using traceroute, include the address of the remote host to connect to. The output will show the gateway hosts along the path of the attempt, eventually either reaching the target host, or terminating because of a lack of connection. For more information, refer to &man.traceroute.8;. Multicast Considerations multicast routing kernel options MROUTING &os; natively supports both multicast applications and multicast routing. Multicast applications do not require any special configuration in order to run on &os;. Support for multicast routing requires that the following option be compiled into a custom kernel: options MROUTING The multicast routing daemon, mrouted can be installed using the net/mrouted package or port. This daemon implements the DVMRP multicast routing protocol and is configured by editing /usr/local/etc/mrouted.conf in order to set up the tunnels and DVMRP. The installation of mrouted also installs map-mbone and mrinfo, as well as their associated man pages. Refer to these for configuration examples. DVMRP has largely been replaced by the PIM protocol in many multicast installations. Refer to &man.pim.4; for more information.
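If mrouted is still required for a DVMRP deployment, it can be installed from a package and enabled at boot. This sketch assumes the rc script installed by the net/mrouted port; check the port's documentation if the service name differs:
&prompt.root; pkg install mrouted
&prompt.root; sysrc mrouted_enable="YES"
After editing /usr/local/etc/mrouted.conf, start the daemon with service mrouted start.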
Wireless Networking Loader Marc Fonvieille Murray Stokely wireless networking 802.11 wireless networking Wireless Networking Basics Most wireless networks are based on the &ieee; 802.11 standards. A basic wireless network consists of multiple stations communicating with radios that broadcast in either the 2.4GHz or 5GHz band, though this varies according to the locale and is also changing to enable communication in the 2.3GHz and 4.9GHz ranges. 802.11 networks are organized in two ways. In infrastructure mode, one station acts as a master with all the other stations associating to it, the network is known as a BSS, and the master station is termed an access point (AP). In a BSS, all communication passes through the AP; even when one station wants to communicate with another wireless station, messages must go through the AP. In the second form of network, there is no master and stations communicate directly. This form of network is termed an IBSS and is commonly known as an ad-hoc network. 802.11 networks were first deployed in the 2.4GHz band using protocols defined by the &ieee; 802.11 and 802.11b standard. These specifications include the operating frequencies and the MAC layer characteristics, including framing and transmission rates, as communication can occur at various rates. Later, the 802.11a standard defined operation in the 5GHz band, including different signaling mechanisms and higher transmission rates. Still later, the 802.11g standard defined the use of 802.11a signaling and transmission mechanisms in the 2.4GHz band in such a way as to be backwards compatible with 802.11b networks. Separate from the underlying transmission techniques, 802.11 networks have a variety of security mechanisms. The original 802.11 specifications defined a simple security protocol called WEP. This protocol uses a fixed pre-shared key and the RC4 cryptographic cipher to encode data transmitted on a network. Stations must all agree on the fixed key in order to communicate. This scheme was shown to be easily broken and is now rarely used except to discourage transient users from joining networks. Current security practice is given by the &ieee; 802.11i specification that defines new cryptographic ciphers and an additional protocol to authenticate stations to an access point and exchange keys for data communication. Cryptographic keys are periodically refreshed and there are mechanisms for detecting and countering intrusion attempts. Another security protocol specification commonly used in wireless networks is termed WPA, which was a precursor to 802.11i. WPA specifies a subset of the requirements found in 802.11i and is designed for implementation on legacy hardware. Specifically, WPA requires only the TKIP cipher that is derived from the original WEP cipher. 802.11i permits use of TKIP but also requires support for a stronger cipher, AES-CCM, for encrypting data. The AES cipher was not required in WPA because it was deemed too computationally costly to be implemented on legacy hardware. The other standard to be aware of is 802.11e. It defines protocols for deploying multimedia applications, such as streaming video and voice over IP (VoIP), in an 802.11 network. Like 802.11i, 802.11e also has a precursor specification termed WME (later renamed WMM) that has been defined by an industry group as a subset of 802.11e that can be deployed now to enable multimedia applications while waiting for the final ratification of 802.11e. 
The most important thing to know about 802.11e and WME/WMM is that it enables prioritized traffic over a wireless network through Quality of Service (QoS) protocols and enhanced media access protocols. Proper implementation of these protocols enables high speed bursting of data and prioritized traffic flow. &os; supports networks that operate using 802.11a, 802.11b, and 802.11g. The WPA and 802.11i security protocols are likewise supported (in conjunction with any of 11a, 11b, and 11g) and QoS and traffic prioritization required by the WME/WMM protocols are supported for a limited set of wireless devices. Quick Start Connecting a computer to an existing wireless network is a very common situation. This procedure shows the steps required. Obtain the SSID (Service Set Identifier) and PSK (Pre-Shared Key) for the wireless network from the network administrator. Identify the wireless adapter. The &os; GENERIC kernel includes drivers for many common wireless adapters. If the wireless adapter is one of those models, it will be shown in the output from &man.ifconfig.8;: &prompt.user; ifconfig | grep -B3 -i wireless If a wireless adapter is not listed, an additional kernel module might be required, or it might be a model not supported by &os;. This example shows the Atheros ath0 wireless adapter. Add an entry for this network to /etc/wpa_supplicant.conf. If the file does not exist, create it. Replace myssid and mypsk with the SSID and PSK provided by the network administrator. network={ ssid="myssid" psk="mypsk" } Add entries to /etc/rc.conf to configure the network on startup: wlans_ath0="wlan0" ifconfig_wlan0="WPA SYNCDHCP" Restart the computer, or restart the network service to connect to the network: &prompt.root; service netif restart Basic Setup Kernel Configuration To use wireless networking, a wireless networking card is needed and the kernel needs to be configured with the appropriate wireless networking support. The kernel is separated into multiple modules so that only the required support needs to be configured. The most commonly used wireless devices are those that use parts made by Atheros. These devices are supported by &man.ath.4; and require the following line to be added to /boot/loader.conf: if_ath_load="YES" The Atheros driver is split up into three separate pieces: the driver (&man.ath.4;), the hardware support layer that handles chip-specific functions (&man.ath.hal.4;), and an algorithm for selecting the rate for transmitting frames. When this support is loaded as kernel modules, any dependencies are automatically handled. To load support for a different type of wireless device, specify the module for that device. This example is for devices based on the Intersil Prism parts (&man.wi.4;) driver: if_wi_load="YES" The examples in this section use an &man.ath.4; device and the device name in the examples must be changed according to the configuration. A list of available wireless drivers and supported adapters can be found in the &os; Hardware Notes, available on the Release Information page of the &os; website. If a native &os; driver for the wireless device does not exist, it may be possible to use the &windows; driver with the help of the NDIS driver wrapper. In addition, the modules that implement cryptographic support for the security protocols to use must be loaded. These are intended to be dynamically loaded on demand by the &man.wlan.4; module, but for now they must be manually configured. 
The following modules are available: &man.wlan.wep.4;, &man.wlan.ccmp.4;, and &man.wlan.tkip.4;. The &man.wlan.ccmp.4; and &man.wlan.tkip.4; drivers are only needed when using the WPA or 802.11i security protocols. If the network does not use encryption, &man.wlan.wep.4; support is not needed. To load these modules at boot time, add the following lines to /boot/loader.conf: wlan_wep_load="YES" wlan_ccmp_load="YES" wlan_tkip_load="YES" Once this information has been added to /boot/loader.conf, reboot the &os; box. Alternately, load the modules by hand using &man.kldload.8;. For users who do not want to use modules, it is possible to compile these drivers into the kernel by adding the following lines to a custom kernel configuration file: device wlan # 802.11 support device wlan_wep # 802.11 WEP support device wlan_ccmp # 802.11 CCMP support device wlan_tkip # 802.11 TKIP support device wlan_amrr # AMRR transmit rate control algorithm device ath # Atheros pci/cardbus NIC's device ath_hal # pci/cardbus chip support options AH_SUPPORT_AR5416 # enable AR5416 tx/rx descriptors device ath_rate_sample # SampleRate tx rate control for ath With this information in the kernel configuration file, recompile the kernel and reboot the &os; machine. Information about the wireless device should appear in the boot messages, like this: ath0: <Atheros 5212> mem 0x88000000-0x8800ffff irq 11 at device 0.0 on cardbus1 ath0: [ITHREAD] ath0: AR2413 mac 7.9 RF2413 phy 4.5 Infrastructure Mode Infrastructure (BSS) mode is the mode that is typically used. In this mode, a number of wireless access points are connected to a wired network. Each wireless network has its own name, called the SSID. Wireless clients connect to the wireless access points. &os; Clients How to Find Access Points To scan for available networks, use &man.ifconfig.8;. This request may take a few moments to complete as it requires the system to switch to each available wireless frequency and probe for available access points. Only the superuser can initiate a scan: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 up scan SSID/MESH ID BSSID CHAN RATE S:N INT CAPS dlinkap 00:13:46:49:41:76 11 54M -90:96 100 EPS WPA WME freebsdap 00:11:95:c3:0d:ac 1 54M -83:96 100 EPS WPA The interface must be up before it can scan. Subsequent scan requests do not require the interface to be marked as up again. The output of a scan request lists each BSS/IBSS network found. Besides listing the name of the network, the SSID, the output also shows the BSSID, which is the MAC address of the access point. The CAPS field identifies the type of each network and the capabilities of the stations operating there: Station Capability Codes Capability Code Meaning E Extended Service Set (ESS). Indicates that the station is part of an infrastructure network rather than an IBSS/ad-hoc network. I IBSS/ad-hoc network. Indicates that the station is part of an ad-hoc network rather than an ESS network. P Privacy. Encryption is required for all data frames exchanged within the BSS using cryptographic means such as WEP, TKIP or AES-CCMP. S Short Preamble. Indicates that the network is using short preambles, defined in 802.11b High Rate/DSSS PHY, and utilizes a 56 bit sync field rather than the 128 bit field used in long preamble mode. s Short slot time. Indicates that the 802.11g network is using a short slot time because there are no legacy (802.11b) stations present.
One can also display the current list of known networks with: &prompt.root; ifconfig wlan0 list scan This information may be updated automatically by the adapter or manually with a request. Old data is automatically removed from the cache, so over time this list may shrink unless more scans are done.
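Other list requests of &man.ifconfig.8; are useful when deciding what to scan for. For example, the channels the adapter is allowed to use in the current regulatory domain can be displayed, which is handy before restricting the scan to specific channels or bands as described in the next section:
&prompt.root; ifconfig wlan0 list channels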
Basic Settings This section provides a simple example of how to make the wireless network adapter work in &os; without encryption. Once familiar with these concepts, it is strongly recommended to use WPA to set up the wireless network. There are three basic steps to configure a wireless network: select an access point, authenticate the station, and configure an IP address. The following sections discuss each step. Selecting an Access Point Most of the time, it is sufficient to let the system choose an access point using the built-in heuristics. This is the default behaviour when an interface is marked as up or it is listed in /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="DHCP" If there are multiple access points, a specific one can be selected by its SSID: wlans_ath0="wlan0" ifconfig_wlan0="ssid your_ssid_here DHCP" In an environment where there are multiple access points with the same SSID, which is often done to simplify roaming, it may be necessary to associate to one specific device. In this case, the BSSID of the access point can be specified, with or without the SSID: wlans_ath0="wlan0" ifconfig_wlan0="ssid your_ssid_here bssid xx:xx:xx:xx:xx:xx DHCP" There are other ways to constrain the choice of an access point, such as limiting the set of frequencies the system will scan on. This may be useful for a multi-band wireless card as scanning all the possible channels can be time-consuming. To limit operation to a specific band, use the mode parameter: wlans_ath0="wlan0" ifconfig_wlan0="mode 11g ssid your_ssid_here DHCP" This example will force the card to operate in 802.11g, which is defined only for 2.4GHz frequencies so any 5GHz channels will not be considered. This can also be achieved with the channel parameter, which locks operation to one specific frequency, and the chanlist parameter, to specify a list of channels for scanning. More information about these parameters can be found in &man.ifconfig.8;. Authentication Once an access point is selected, the station needs to authenticate before it can pass data. Authentication can happen in several ways. The most common scheme, open authentication, allows any station to join the network and communicate. This is the authentication to use for test purposes the first time a wireless network is set up. Other schemes require cryptographic handshakes to be completed before data traffic can flow, either using pre-shared keys or secrets, or more complex schemes that involve backend services such as RADIUS. Open authentication is the default setting. The next most common setup is WPA-PSK, also known as WPA Personal, which is described in . If using an &apple; &airport; Extreme base station for an access point, shared-key authentication together with a WEP key needs to be configured. This can be configured in /etc/rc.conf or by using &man.wpa.supplicant.8;. For a single &airport; base station, access can be configured with: wlans_ath0="wlan0" ifconfig_wlan0="authmode shared wepmode on weptxkey 1 wepkey 01234567 DHCP" In general, shared key authentication should be avoided because it uses the WEP key material in a highly-constrained manner, making it even easier to crack the key. If WEP must be used for compatibility with legacy devices, it is better to use WEP with open authentication. More information regarding WEP can be found in . Getting an <acronym>IP</acronym> Address with <acronym>DHCP</acronym> Once an access point is selected and the authentication parameters are set, an IP address must be obtained in order to communicate.
Most of the time, the IP address is obtained via DHCP. To achieve that, edit /etc/rc.conf and add DHCP to the configuration for the device: wlans_ath0="wlan0" ifconfig_wlan0="DHCP" The wireless interface is now ready to bring up: &prompt.root; service netif start Once the interface is running, use &man.ifconfig.8; to see the status of the interface ath0: &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255 media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g status: associated ssid dlinkap channel 11 (2462 Mhz 11g) bssid 00:13:46:49:41:76 country US ecm authmode OPEN privacy OFF txpower 21.5 bmiss 7 scanvalid 60 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst The status: associated line means that it is connected to the wireless network. The bssid 00:13:46:49:41:76 is the MAC address of the access point and authmode OPEN indicates that the communication is not encrypted. Static <acronym>IP</acronym> Address If an IP address cannot be obtained from a DHCP server, set a fixed IP address. Replace the DHCP keyword shown above with the address information. Be sure to retain any other parameters for selecting the access point: wlans_ath0="wlan0" ifconfig_wlan0="inet 192.168.1.100 netmask 255.255.255.0 ssid your_ssid_here" <acronym>WPA</acronym> Wi-Fi Protected Access (WPA) is a security protocol used together with 802.11 networks to address the lack of proper authentication and the weakness of WEP. WPA leverages the 802.1X authentication protocol and uses one of several ciphers instead of WEP for data integrity. The only cipher required by WPA is the Temporary Key Integrity Protocol (TKIP). TKIP is a cipher that extends the basic RC4 cipher used by WEP by adding integrity checking, tamper detection, and measures for responding to detected intrusions. TKIP is designed to work on legacy hardware with only software modification. It represents a compromise that improves security but is still not entirely immune to attack. WPA also specifies the AES-CCMP cipher as an alternative to TKIP, and that is preferred when possible. For this specification, the term WPA2 or RSN is commonly used. WPA defines authentication and encryption protocols. Authentication is most commonly done using one of two techniques: by 802.1X and a backend authentication service such as RADIUS, or by a minimal handshake between the station and the access point using a pre-shared secret. The former is commonly termed WPA Enterprise and the latter is known as WPA Personal. Since most people will not set up a RADIUS backend server for their wireless network, WPA-PSK is by far the most commonly encountered configuration for WPA. The control of the wireless connection and the key negotiation or authentication with a server is done using &man.wpa.supplicant.8;. This program requires a configuration file, /etc/wpa_supplicant.conf, to run. More information regarding this file can be found in &man.wpa.supplicant.conf.5;. <acronym>WPA-PSK</acronym> WPA-PSK, also known as WPA Personal, is based on a pre-shared key (PSK) which is generated from a given password and used as the master key in the wireless network. This means every wireless user will share the same key. WPA-PSK is intended for small networks where the use of an authentication server is not possible or desired.
Always use strong passwords that are sufficiently long and made from a rich alphabet so that they will not be easily guessed or attacked. The first step is the configuration of /etc/wpa_supplicant.conf with the SSID and the pre-shared key of the network: network={ ssid="freebsdap" psk="freebsdmall" } Then, in /etc/rc.conf, indicate that the wireless device configuration will be done with WPA and the IP address will be obtained with DHCP: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" Then, bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5 DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6 DHCPOFFER from 192.168.0.1 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 DHCPACK from 192.168.0.1 bound to 192.168.0.254 -- renewal in 300 seconds. wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL Or, try to configure the interface manually using the information in /etc/wpa_supplicant.conf: &prompt.root; wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf Trying to associate with 00:11:95:c3:0d:ac (SSID='freebsdap' freq=2412 MHz) Associated with 00:11:95:c3:0d:ac WPA: Key negotiation completed with 00:11:95:c3:0d:ac [PTK=CCMP GTK=CCMP] CTRL-EVENT-CONNECTED - Connection to 00:11:95:c3:0d:ac completed (auth) [id=0 id_str=] The next operation is to launch &man.dhclient.8; to get the IP address from the DHCP server: &prompt.root; dhclient wlan0 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 DHCPACK from 192.168.0.1 bound to 192.168.0.254 -- renewal in 300 seconds. &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL If /etc/rc.conf has an ifconfig_wlan0="DHCP" entry, &man.dhclient.8; will be launched automatically after &man.wpa.supplicant.8; associates with the access point. 
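Storing the passphrase in plain text in /etc/wpa_supplicant.conf is not strictly necessary. As a sketch, the wpa_passphrase utility shipped alongside &man.wpa.supplicant.8; can precompute the 256-bit pre-shared key from the SSID and passphrase:
&prompt.user; wpa_passphrase freebsdap freebsdmall
The command prints a network block in which psk= is set to a 64-character hexadecimal key; that block can be pasted into the configuration file in place of the quoted passphrase. Keep the file readable only by root in either case, since both forms are sufficient to join the network.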
If DHCP is not possible or desired, set a static IP address after &man.wpa.supplicant.8; has authenticated the station: &prompt.root; ifconfig wlan0 inet 192.168.0.100 netmask 255.255.255.0 &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.100 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL When DHCP is not used, the default gateway and the nameserver also have to be manually set: &prompt.root; route add default your_default_router &prompt.root; echo "nameserver your_DNS_server" >> /etc/resolv.conf <acronym>WPA</acronym> with <acronym>EAP-TLS</acronym> The second way to use WPA is with an 802.1X backend authentication server. In this case, WPA is called WPA Enterprise to differentiate it from the less secure WPA Personal. Authentication in WPA Enterprise is based on the Extensible Authentication Protocol (EAP). EAP does not come with an encryption method. Instead, EAP is embedded inside an encrypted tunnel. There are many EAP authentication methods, but EAP-TLS, EAP-TTLS, and EAP-PEAP are the most common. EAP with Transport Layer Security (EAP-TLS) is a well-supported wireless authentication protocol since it was the first EAP method to be certified by the Wi-Fi Alliance. EAP-TLS requires three certificates to run: the certificate of the Certificate Authority (CA) installed on all machines, the server certificate for the authentication server, and one client certificate for each wireless client. In this EAP method, both the authentication server and wireless client authenticate each other by presenting their respective certificates, and then verify that these certificates were signed by the organization's CA. As previously, the configuration is done via /etc/wpa_supplicant.conf: network={ ssid="freebsdap" proto=RSN key_mgmt=WPA-EAP eap=TLS identity="loader" ca_cert="/etc/certs/cacert.pem" client_cert="/etc/certs/clientcert.pem" private_key="/etc/certs/clientkey.pem" private_key_passwd="freebsdmallclient" } This field indicates the network name (SSID). This example uses the RSN &ieee; 802.11i protocol, also known as WPA2. The key_mgmt line refers to the key management protocol to use. In this example, it is WPA using EAP authentication. This field indicates the EAP method for the connection. The identity field contains the identity string for EAP. The ca_cert field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate. The client_cert line gives the pathname to the client certificate file. This certificate is unique to each wireless client of the network. The private_key field is the pathname to the client certificate private key file. The private_key_passwd field contains the passphrase for the private key. Then, add the following lines to /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" The next step is to bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15 DHCPACK from 192.168.0.20 bound to 192.168.0.254 -- renewal in 300 seconds. 
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL It is also possible to bring up the interface manually using &man.wpa.supplicant.8; and &man.ifconfig.8;. <acronym>WPA</acronym> with <acronym>EAP-TTLS</acronym> With EAP-TLS, both the authentication server and the client need a certificate. With EAP-TTLS, a client certificate is optional. This method is similar to a web server which creates a secure SSL tunnel even if visitors do not have client-side certificates. EAP-TTLS uses an encrypted TLS tunnel for safe transport of the authentication data. The required configuration can be added to /etc/wpa_supplicant.conf: network={ ssid="freebsdap" proto=RSN key_mgmt=WPA-EAP eap=TTLS identity="test" password="test" ca_cert="/etc/certs/cacert.pem" phase2="auth=MD5" } This field specifies the EAP method for the connection. The identity field contains the identity string for EAP authentication inside the encrypted TLS tunnel. The password field contains the passphrase for the EAP authentication. The ca_cert field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate. This field specifies the authentication method used in the encrypted TLS tunnel. In this example, EAP with MD5-Challenge is used. The inner authentication phase is often called phase2. Next, add the following lines to /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" The next step is to bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 21 DHCPACK from 192.168.0.20 bound to 192.168.0.254 -- renewal in 300 seconds. wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL <acronym>WPA</acronym> with <acronym>EAP-PEAP</acronym> PEAPv0/EAP-MSCHAPv2 is the most common PEAP method. In this chapter, the term PEAP is used to refer to that method. Protected EAP (PEAP) is designed as an alternative to EAP-TTLS and is the most used EAP standard after EAP-TLS. In a network with mixed operating systems, PEAP should be the most supported standard after EAP-TLS. PEAP is similar to EAP-TTLS as it uses a server-side certificate to authenticate clients by creating an encrypted TLS tunnel between the client and the authentication server, which protects the ensuing exchange of authentication information. PEAP authentication differs from EAP-TTLS as it broadcasts the username in the clear and only the password is sent in the encrypted TLS tunnel.
EAP-TTLS will use the TLS tunnel for both the username and password. Add the following lines to /etc/wpa_supplicant.conf to configure the EAP-PEAP related settings: network={ ssid="freebsdap" proto=RSN key_mgmt=WPA-EAP eap=PEAP identity="test" password="test" ca_cert="/etc/certs/cacert.pem" phase1="peaplabel=0" phase2="auth=MSCHAPV2" } This field specifies the EAP method for the connection. The identity field contains the identity string for EAP authentication inside the encrypted TLS tunnel. The password field contains the passphrase for the EAP authentication. The ca_cert field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate. This field contains the parameters for the first phase of authentication, the TLS tunnel. According to the authentication server used, specify a specific label for authentication. Most of the time, the label will be client EAP encryption which is set by using peaplabel=0. More information can be found in &man.wpa.supplicant.conf.5;. This field specifies the authentication protocol used in the encrypted TLS tunnel. In the case of PEAP, it is auth=MSCHAPV2. Add the following to /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" Then, bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 21 DHCPACK from 192.168.0.20 bound to 192.168.0.254 -- renewal in 300 seconds. wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL <acronym>WEP</acronym> Wired Equivalent Privacy (WEP) is part of the original 802.11 standard. There is no authentication mechanism, only a weak form of access control which is easily cracked. WEP can be set up using &man.ifconfig.8;: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 inet 192.168.1.100 netmask 255.255.255.0 \ ssid my_net wepmode on weptxkey 3 wepkey 3:0x3456789012 The weptxkey specifies which WEP key will be used in the transmission. This example uses the third key. This must match the setting on the access point. When unsure which key is used by the access point, try 1 (the first key) for this value. The wepkey selects one of the WEP keys. It should be in the format index:key. Key 1 is used by default; the index only needs to be set when using a key other than the first key. Replace the 0x3456789012 with the key configured for use on the access point. Refer to &man.ifconfig.8; for further information. The &man.wpa.supplicant.8; facility can be used to configure a wireless interface with WEP. The example above can be set up by adding the following lines to /etc/wpa_supplicant.conf: network={ ssid="my_net" key_mgmt=NONE wep_key3=3456789012 wep_tx_keyidx=3 } Then: &prompt.root; wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf Trying to associate with 00:13:46:49:41:76 (SSID='dlinkap' freq=2437 MHz) Associated with 00:13:46:49:41:76
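As in the earlier examples, associating with the access point does not by itself configure an address. Once &man.wpa.supplicant.8; reports the association, obtain a lease or set a static address on the interface. A minimal sketch using DHCP:
&prompt.root; dhclient wlan0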
Ad-hoc Mode IBSS mode, also called ad-hoc mode, is designed for point to point connections. For example, to establish an ad-hoc network between the machines A and B, choose two IP addresses and a SSID. On A: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode adhoc &prompt.root; ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:c3:0d:ac inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <adhoc> status: running ssid freebsdap channel 2 (2417 Mhz 11g) bssid 02:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60 protmode CTS wme burst The adhoc parameter indicates that the interface is running in IBSS mode. B should now be able to detect A: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode adhoc &prompt.root; ifconfig wlan0 up scan SSID/MESH ID BSSID CHAN RATE S:N INT CAPS freebsdap 02:11:95:c3:0d:ac 2 54M -64:-96 100 IS WME The I in the output confirms that A is in ad-hoc mode. Now, configure B with a different IP address: &prompt.root; ifconfig wlan0 inet 192.168.0.2 netmask 255.255.255.0 ssid freebsdap &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.2 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <adhoc> status: running ssid freebsdap channel 2 (2417 Mhz 11g) bssid 02:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60 protmode CTS wme burst Both A and B are now ready to exchange information. &os; Host Access Points &os; can act as an Access Point (AP) which eliminates the need to buy a hardware AP or run an ad-hoc network. This can be particularly useful when a &os; machine is acting as a gateway to another network such as the Internet. Basic Settings Before configuring a &os; machine as an AP, the kernel must be configured with the appropriate networking support for the wireless card as well as the security protocols being used. For more details, see . The NDIS driver wrapper for &windows; drivers does not currently support AP operation. Only native &os; wireless drivers support AP mode. Once wireless networking support is loaded, check if the wireless device supports the host-based access point mode, also known as hostap mode: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 list caps drivercaps=6f85edc1<STA,FF,TURBOP,IBSS,HOSTAP,AHDEMO,TXPMGT,SHSLOT,SHPREAMBLE,MONITOR,MBSS,WPA1,WPA2,BURST,WME,WDS,BGSCAN,TXFRAG> cryptocaps=1f<WEP,TKIP,AES,AES_CCM,TKIPMIC> This output displays the card's capabilities. The HOSTAP word confirms that this wireless card can act as an AP. Various supported ciphers are also listed: WEP, TKIP, and AES. This information indicates which security protocols can be used on the AP. 
The wireless device can only be put into hostap mode during the creation of the network pseudo-device, so a previously created device must be destroyed first: &prompt.root; ifconfig wlan0 destroy then regenerated with the correct option before setting the other parameters: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode hostap &prompt.root; ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap mode 11g channel 1 Use &man.ifconfig.8; again to see the status of the wlan0 interface: &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:c3:0d:ac inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap> status: running ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60 protmode CTS wme burst dtimperiod 1 -dfs The hostap parameter indicates the interface is running in the host-based access point mode. The interface configuration can be done automatically at boot time by adding the following lines to /etc/rc.conf: wlans_ath0="wlan0" create_args_wlan0="wlanmode hostap" ifconfig_wlan0="inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap mode 11g channel 1" Host-based Access Point Without Authentication or Encryption Although it is not recommended to run an AP without any authentication or encryption, this is a simple way to check if the AP is working. This configuration is also important for debugging client issues. Once the AP is configured, initiate a scan from another wireless machine to find the AP: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 up scan SSID/MESH ID BSSID CHAN RATE S:N INT CAPS freebsdap 00:11:95:c3:0d:ac 1 54M -66:-96 100 ES WME The client machine found the AP and can be associated with it: &prompt.root; ifconfig wlan0 inet 192.168.0.2 netmask 255.255.255.0 ssid freebsdap &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.2 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 bmiss 7 scanvalid 60 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst <acronym>WPA</acronym> Host-based Access Point This section focuses on setting up a &os; AP using the WPA security protocol. More details regarding WPA and the configuration of WPA-based wireless clients can be found in . The &man.hostapd.8; daemon is used to deal with client authentication and key management on the WPA-enabled AP. The following configuration operations are performed on the &os; machine acting as the AP. Once the AP is correctly working, &man.hostapd.8; should be automatically enabled at boot with the following line in /etc/rc.conf: hostapd_enable="YES" Before trying to configure &man.hostapd.8;, first configure the basic settings introduced in . <acronym>WPA-PSK</acronym> WPA-PSK is intended for small networks where the use of a backend authentication server is not possible or desired. 
The configuration is done in /etc/hostapd.conf: interface=wlan0 debug=1 ctrl_interface=/var/run/hostapd ctrl_interface_group=wheel ssid=freebsdap wpa=1 wpa_passphrase=freebsdmall wpa_key_mgmt=WPA-PSK wpa_pairwise=CCMP TKIP This field indicates the wireless interface used for the AP. This field sets the level of verbosity during the execution of &man.hostapd.8;. A value of 1 represents the minimal level. The ctrl_interface field gives the pathname of the directory used by &man.hostapd.8; to store its domain socket files for the communication with external programs such as &man.hostapd.cli.8;. The default value is used in this example. The ctrl_interface_group line sets the group which is allowed to access the control interface files. This field sets the network name. The wpa field enables WPA and specifies which WPA authentication protocol will be required. A value of 1 configures the AP for WPA-PSK. The wpa_passphrase field contains the ASCII passphrase for WPA authentication. Always use strong passwords that are sufficiently long and made from a rich alphabet so that they will not be easily guessed or attacked. The wpa_key_mgmt line refers to the key management protocol to use. This example sets WPA-PSK. The wpa_pairwise field indicates the set of accepted encryption algorithms by the AP. In this example, both TKIP (WPA) and CCMP (WPA2) ciphers are accepted. The CCMP cipher is an alternative to TKIP and is strongly preferred when possible. TKIP should be used solely for stations incapable of doing CCMP. The next step is to start &man.hostapd.8;: &prompt.root; service hostapd forcestart &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2290 inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 inet6 fe80::211:95ff:fec3:dac%ath0 prefixlen 64 scopeid 0x4 ether 00:11:95:c3:0d:ac media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap> status: associated ssid freebsdap channel 1 bssid 00:11:95:c3:0d:ac authmode WPA2/802.11i privacy MIXED deftxkey 2 TKIP 2:128-bit txpowmax 36 protmode CTS dtimperiod 1 bintval 100 Once the AP is running, the clients can associate with it. See for more details. It is possible to see the stations associated with the AP using ifconfig wlan0 list sta. <acronym>WEP</acronym> Host-based Access Point It is not recommended to use WEP for setting up an AP since there is no authentication mechanism and the encryption is easily cracked. Some legacy wireless cards only support WEP and these cards will only support an AP without authentication or encryption. The wireless device can now be put into hostap mode and configured with the correct SSID and IP address: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode hostap &prompt.root; ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 \ ssid freebsdap wepmode on weptxkey 3 wepkey 3:0x3456789012 mode 11g The weptxkey indicates which WEP key will be used in the transmission. This example uses the third key as key numbering starts with 1. This parameter must be specified in order to encrypt the data. The wepkey sets the selected WEP key. It should be in the format index:key. If the index is not given, key 1 is set. The index needs to be set when using keys other than the first key. 
Use &man.ifconfig.8; to see the status of the wlan0 interface: &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:c3:0d:ac inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap> status: running ssid freebsdap channel 4 (2427 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode OPEN privacy ON deftxkey 3 wepkey 3:40-bit txpower 21.5 scanvalid 60 protmode CTS wme burst dtimperiod 1 -dfs From another wireless machine, it is now possible to initiate a scan to find the AP: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 up scan SSID BSSID CHAN RATE S:N INT CAPS freebsdap 00:11:95:c3:0d:ac 1 54M 22:1 100 EPS In this example, the client machine found the AP and can associate with it using the correct parameters. See for more details. Using Both Wired and Wireless Connections A wired connection provides better performance and reliability, while a wireless connection provides flexibility and mobility. Laptop users typically want to roam seamlessly between the two types of connections. On &os;, it is possible to combine two or even more network interfaces together in a failover fashion. This type of configuration uses the most preferred and available connection from a group of network interfaces, and the operating system switches automatically when the link state changes. Link aggregation and failover is covered in and an example for using both wired and wireless connections is provided at . Troubleshooting This section describes a number of steps to help troubleshoot common wireless networking problems. If the access point is not listed when scanning, check that the configuration has not restricted the wireless device to a limited set of channels. If the device cannot associate with an access point, verify that the configuration matches the settings on the access point. This includes the authentication scheme and any security protocols. Simplify the configuration as much as possible. If using a security protocol such as WPA or WEP, configure the access point for open authentication and no security to see if traffic will pass. Debugging support is provided by &man.wpa.supplicant.8;. Try running this utility manually with the -dd option and look at the system logs. Once the system can associate with the access point, diagnose the network configuration using tools like &man.ping.8;. There are many lower-level debugging tools. Debugging messages can be enabled in the 802.11 protocol support layer using &man.wlandebug.8;. On a &os; system prior to &os; 9.1, this program can be found in /usr/src/tools/tools/net80211. For example, to enable console messages related to scanning for access points and the 802.11 protocol handshakes required to arrange communication: &prompt.root; wlandebug -i ath0 +scan+auth+debug+assoc net.wlan.0.debug: 0 => 0xc80000<assoc,auth,scan> Many useful statistics are maintained by the 802.11 layer and wlanstats, found in /usr/src/tools/tools/net80211, will dump this information. These statistics should display all errors identified by the 802.11 layer. However, some errors are identified in the device drivers that lie below the 802.11 layer so they may not show up. To diagnose device-specific problems, refer to the drivers' documentation.
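Packet-level inspection can also help when the tools above are not enough. Assuming the adapter and driver support monitor mode, a separate wlan instance can be created for capturing raw 802.11 frames with tcpdump; this is only a sketch, and the channel and link type shown here are illustrative:
&prompt.root; ifconfig wlan1 create wlandev ath0 wlanmode monitor channel 1
&prompt.root; ifconfig wlan1 up
&prompt.root; tcpdump -i wlan1 -y IEEE802_11_RADIO
Destroy the monitor interface with ifconfig wlan1 destroy when finished.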
If the above information does not help to clarify the problem, submit a problem report and include output from the above tools.
USB Tethering tether Many cellphones provide the option to share their data connection over USB (often called "tethering"). This feature uses either the RNDIS, CDC or a custom &apple; &iphone;/&ipad; protocol. &android; devices generally use the &man.urndis.4; driver. &apple; devices use the &man.ipheth.4; driver. Older devices will often use the &man.cdce.4; driver. Before attaching a device, load the appropriate driver into the kernel: &prompt.root; kldload if_urndis &prompt.root; kldload if_cdce &prompt.root; kldload if_ipheth Once the device is attached ue0 will be available for use like a normal network device. Be sure that the USB tethering option is enabled on the device. Bluetooth Pav Lucistnik Written by pav@FreeBSD.org Bluetooth Bluetooth is a wireless technology for creating personal networks operating in the 2.4 GHz unlicensed band, with a range of 10 meters. Networks are usually formed ad-hoc from portable devices such as cellular phones, handhelds, and laptops. Unlike Wi-Fi wireless technology, Bluetooth offers higher level service profiles, such as FTP-like file servers, file pushing, voice transport, serial line emulation, and more. This section describes the use of a USB Bluetooth dongle on a &os; system. It then describes the various Bluetooth protocols and utilities. Loading Bluetooth Support The Bluetooth stack in &os; is implemented using the &man.netgraph.4; framework. A broad variety of Bluetooth USB dongles is supported by &man.ng.ubt.4;. Broadcom BCM2033 based Bluetooth devices are supported by the &man.ubtbcmfw.4; and &man.ng.ubt.4; drivers. The 3Com Bluetooth PC Card 3CRWB60-A is supported by the &man.ng.bt3c.4; driver. Serial and UART based Bluetooth devices are supported by &man.sio.4;, &man.ng.h4.4;, and &man.hcseriald.8;. Before attaching a device, determine which of the above drivers it uses, then load the driver. For example, if the device uses the &man.ng.ubt.4; driver: &prompt.root; kldload ng_ubt If the Bluetooth device will be attached to the system during system startup, the system can be configured to load the module at boot time by adding the driver to /boot/loader.conf: ng_ubt_load="YES" Once the driver is loaded, plug in the USB dongle. If the driver load was successful, output similar to the following should appear on the console and in /var/log/messages: ubt0: vendor 0x0a12 product 0x0001, rev 1.10/5.25, addr 2 ubt0: Interface 0 endpoints: interrupt=0x81, bulk-in=0x82, bulk-out=0x2 ubt0: Interface 1 (alt.config 5) endpoints: isoc-in=0x83, isoc-out=0x3, wMaxPacketSize=49, nframes=6, buffer size=294 To start and stop the Bluetooth stack, use its startup script. It is a good idea to stop the stack before unplugging the device. When starting the stack, the output should be similar to the following: &prompt.root; service bluetooth start ubt0 BD_ADDR: 00:02:72:00:d4:1a Features: 0xff 0xff 0xf 00 00 00 00 00 <3-Slot> <5-Slot> <Encryption> <Slot offset> <Timing accuracy> <Switch> <Hold mode> <Sniff mode> <Park mode> <RSSI> <Channel quality> <SCO link> <HV2 packets> <HV3 packets> <u-law log> <A-law log> <CVSD> <Paging scheme> <Power control> <Transparent SCO data> Max. ACL packet size: 192 bytes Number of ACL packets: 8 Max. SCO packet size: 64 bytes Number of SCO packets: 8 Finding Other Bluetooth Devices HCI The Host Controller Interface (HCI) provides a uniform method for accessing Bluetooth baseband capabilities. In &os;, a netgraph HCI node is created for each Bluetooth device. For more details, refer to &man.ng.hci.4;. 
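Because the stack is built on &man.netgraph.4;, the HCI node created for the dongle can be verified with the ngctl utility. A quick check, assuming the device attached as ubt0, is to list the existing netgraph nodes and look for one named after the device:
&prompt.root; ngctl list
The node should appear with a name such as ubt0hci, which is the same name passed to &man.hccontrol.8; in the examples that follow.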
One of the most common tasks is discovery of Bluetooth devices within RF proximity. This operation is called inquiry. Inquiry and other HCI related operations are done using &man.hccontrol.8;. The example below shows how to find out which Bluetooth devices are in range. The list of devices should be displayed in a few seconds. Note that a remote device will only answer the inquiry if it is set to discoverable mode. &prompt.user; hccontrol -n ubt0hci inquiry Inquiry result, num_responses=1 Inquiry result #0 BD_ADDR: 00:80:37:29:19:a4 Page Scan Rep. Mode: 0x1 Page Scan Period Mode: 00 Page Scan Mode: 00 Class: 52:02:04 Clock offset: 0x78ef Inquiry complete. Status: No error [00] The BD_ADDR is the unique address of a Bluetooth device, similar to the MAC address of a network card. This address is needed for further communication with a device and it is possible to assign a human readable name to a BD_ADDR. Information regarding the known Bluetooth hosts is contained in /etc/bluetooth/hosts. The following example shows how to obtain the human readable name that was assigned to the remote device: &prompt.user; hccontrol -n ubt0hci remote_name_request 00:80:37:29:19:a4 BD_ADDR: 00:80:37:29:19:a4 Name: Pav's T39 If an inquiry is performed on a remote Bluetooth device, it will find the computer as your.host.name (ubt0). The name assigned to the local device can be changed at any time. The Bluetooth system provides a point-to-point connection between two Bluetooth units, or a point-to-multipoint connection which is shared among several Bluetooth devices. The following example shows how to obtain the list of active baseband connections for the local device: &prompt.user; hccontrol -n ubt0hci read_connection_list Remote BD_ADDR Handle Type Mode Role Encrypt Pending Queue State 00:80:37:29:19:a4 41 ACL 0 MAST NONE 0 0 OPEN A connection handle is useful when termination of the baseband connection is required, though it is normally not required to do this by hand. The stack will automatically terminate inactive baseband connections. &prompt.root; hccontrol -n ubt0hci disconnect 41 Connection handle: 41 Reason: Connection terminated by local host [0x16] Type hccontrol help for a complete listing of available HCI commands. Most of the HCI commands do not require superuser privileges. Device Pairing By default, Bluetooth communication is not authenticated, and any device can talk to any other device. A Bluetooth device, such as a cellular phone, may choose to require authentication to provide a particular service. Bluetooth authentication is normally done with a PIN code, an ASCII string up to 16 characters in length. The user is required to enter the same PIN code on both devices. Once the user has entered the PIN code, both devices will generate a link key. After that, the link key can be stored either in the devices or in a persistent storage. Next time, both devices will use the previously generated link key. This procedure is called pairing. Note that if the link key is lost by either device, the pairing must be repeated. The &man.hcsecd.8; daemon is responsible for handling Bluetooth authentication requests. The default configuration file is /etc/bluetooth/hcsecd.conf. An example section for a cellular phone with the PIN code set to 1234 is shown below: device { bdaddr 00:80:37:29:19:a4; name "Pav's T39"; key nokey; pin "1234"; } The only limitation on PIN codes is length. Some devices, such as Bluetooth headsets, may have a fixed PIN code built in. 
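To watch the pairing exchange as it happens, &man.hcsecd.8; can be started by hand in the foreground. This is only a sketch; the daemon reads /etc/bluetooth/hcsecd.conf by default:
&prompt.root; hcsecd -d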
The -d switch forces &man.hcsecd.8; to stay in the foreground, so it is easy to see what is happening. Set the remote device to receive pairing and initiate the Bluetooth connection to the remote device. The remote device should indicate that pairing was accepted and request the PIN code. Enter the same PIN code listed in hcsecd.conf. Now the computer and the remote device are paired. Alternatively, pairing can be initiated on the remote device. The following line can be added to /etc/rc.conf to configure &man.hcsecd.8; to start automatically on system start: hcsecd_enable="YES" The following is a sample of the &man.hcsecd.8; daemon output: hcsecd[16484]: Got Link_Key_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4 hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', link key doesn't exist hcsecd[16484]: Sending Link_Key_Negative_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4 hcsecd[16484]: Got PIN_Code_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4 hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', PIN code exists hcsecd[16484]: Sending PIN_Code_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4 Network Access with <acronym>PPP</acronym> Profiles A Dial-Up Networking (DUN) profile can be used to configure a cellular phone as a wireless modem for connecting to a dial-up Internet access server. It can also be used to configure a computer to receive data calls from a cellular phone. Network access with a PPP profile can be used to provide LAN access for a single Bluetooth device or multiple Bluetooth devices. It can also provide PC to PC connection using PPP networking over serial cable emulation. In &os;, these profiles are implemented with &man.ppp.8; and the &man.rfcomm.pppd.8; wrapper which converts a Bluetooth connection into something PPP can use. Before a profile can be used, a new PPP label must be created in /etc/ppp/ppp.conf. Consult &man.rfcomm.pppd.8; for examples. In this example, &man.rfcomm.pppd.8; is used to open a connection to a remote device with a BD_ADDR of 00:80:37:29:19:a4 on a DUN RFCOMM channel: &prompt.root; rfcomm_pppd -a 00:80:37:29:19:a4 -c -C dun -l rfcomm-dialup The actual channel number will be obtained from the remote device using the SDP protocol. It is possible to specify the RFCOMM channel by hand, and in this case &man.rfcomm.pppd.8; will not perform the SDP query. Use &man.sdpcontrol.8; to find out the RFCOMM channel on the remote device. In order to provide network access with the PPP LAN service, &man.sdpd.8; must be running and a new entry for LAN clients must be created in /etc/ppp/ppp.conf. Consult &man.rfcomm.pppd.8; for examples. Finally, start the RFCOMM PPP server on a valid RFCOMM channel number. The RFCOMM PPP server will automatically register the Bluetooth LAN service with the local SDP daemon. The example below shows how to start the RFCOMM PPP server. &prompt.root; rfcomm_pppd -s -C 7 -l rfcomm-server Bluetooth Protocols This section provides an overview of the various Bluetooth protocols, their function, and associated utilities. Logical Link Control and Adaptation Protocol (<acronym>L2CAP</acronym>) L2CAP The Logical Link Control and Adaptation Protocol (L2CAP) provides connection-oriented and connectionless data services to upper layer protocols. L2CAP permits higher level protocols and applications to transmit and receive L2CAP data packets up to 64 kilobytes in length. L2CAP is based around the concept of channels. 
A channel is a logical connection on top of a baseband connection, where each channel is bound to a single protocol in a many-to-one fashion. Multiple channels can be bound to the same protocol, but a channel cannot be bound to multiple protocols. Each L2CAP packet received on a channel is directed to the appropriate higher level protocol. Multiple channels can share the same baseband connection. In &os;, a netgraph L2CAP node is created for each Bluetooth device. This node is normally connected to the downstream Bluetooth HCI node and upstream Bluetooth socket nodes. The default name for the L2CAP node is devicel2cap. For more details refer to &man.ng.l2cap.4;. A useful command is &man.l2ping.8;, which can be used to ping other devices. Some Bluetooth implementations might not return all of the data sent to them, so 0 bytes in the following example is normal. &prompt.root; l2ping -a 00:80:37:29:19:a4 0 bytes from 0:80:37:29:19:a4 seq_no=0 time=48.633 ms result=0 0 bytes from 0:80:37:29:19:a4 seq_no=1 time=37.551 ms result=0 0 bytes from 0:80:37:29:19:a4 seq_no=2 time=28.324 ms result=0 0 bytes from 0:80:37:29:19:a4 seq_no=3 time=46.150 ms result=0 The &man.l2control.8; utility is used to perform various operations on L2CAP nodes. This example shows how to obtain the list of logical connections (channels) and the list of baseband connections for the local device: &prompt.user; l2control -a 00:02:72:00:d4:1a read_channel_list L2CAP channels: Remote BD_ADDR SCID/ DCID PSM IMTU/ OMTU State 00:07:e0:00:0b:ca 66/ 64 3 132/ 672 OPEN &prompt.user; l2control -a 00:02:72:00:d4:1a read_connection_list L2CAP connections: Remote BD_ADDR Handle Flags Pending State 00:07:e0:00:0b:ca 41 O 0 OPEN Another diagnostic tool is &man.btsockstat.1;. It is similar to &man.netstat.1;, but for Bluetooth network-related data structures. The example below shows the same logical connection as &man.l2control.8; above. &prompt.user; btsockstat Active L2CAP sockets PCB Recv-Q Send-Q Local address/PSM Foreign address CID State c2afe900 0 0 00:02:72:00:d4:1a/3 00:07:e0:00:0b:ca 66 OPEN Active RFCOMM sessions L2PCB PCB Flag MTU Out-Q DLCs State c2afe900 c2b53380 1 127 0 Yes OPEN Active RFCOMM sockets PCB Recv-Q Send-Q Local address Foreign address Chan DLCI State c2e8bc80 0 250 00:02:72:00:d4:1a 00:07:e0:00:0b:ca 3 6 OPEN Radio Frequency Communication (<acronym>RFCOMM</acronym>) The RFCOMM protocol provides emulation of serial ports over the L2CAP protocol. RFCOMM is a simple transport protocol, with additional provisions for emulating the 9 circuits of RS-232 (EIATIA-232-E) serial ports. It supports up to 60 simultaneous connections (RFCOMM channels) between two Bluetooth devices. For the purposes of RFCOMM, a complete communication path involves two applications running on the communication endpoints with a communication segment between them. RFCOMM is intended to cover applications that make use of the serial ports of the devices in which they reside. The communication segment is a direct connect Bluetooth link from one device to another. RFCOMM is only concerned with the connection between the devices in the direct connect case, or between the device and a modem in the network case. RFCOMM can support other configurations, such as modules that communicate via Bluetooth wireless technology on one side and provide a wired interface on the other side. In &os;, RFCOMM is implemented at the Bluetooth sockets layer. 
Service Discovery Protocol (<acronym>SDP</acronym>) SDP The Service Discovery Protocol (SDP) provides the means for client applications to discover the existence of services provided by server applications as well as the attributes of those services. The attributes of a service include the type or class of service offered and the mechanism or protocol information needed to utilize the service. SDP involves communication between a SDP server and a SDP client. The server maintains a list of service records that describe the characteristics of services associated with the server. Each service record contains information about a single service. A client may retrieve information from a service record maintained by the SDP server by issuing a SDP request. If the client, or an application associated with the client, decides to use a service, it must open a separate connection to the service provider in order to utilize the service. SDP provides a mechanism for discovering services and their attributes, but it does not provide a mechanism for utilizing those services. Normally, a SDP client searches for services based on some desired characteristics of the services. However, there are times when it is desirable to discover which types of services are described by an SDP server's service records without any prior information about the services. This process of looking for any offered services is called browsing. The Bluetooth SDP server, &man.sdpd.8;, and command line client, &man.sdpcontrol.8;, are included in the standard &os; installation. The following example shows how to perform a SDP browse query. &prompt.user; sdpcontrol -a 00:01:03:fc:6e:ec browse Record Handle: 00000000 Service Class ID List: Service Discovery Server (0x1000) Protocol Descriptor List: L2CAP (0x0100) Protocol specific parameter #1: u/int/uuid16 1 Protocol specific parameter #2: u/int/uuid16 1 Record Handle: 0x00000001 Service Class ID List: Browse Group Descriptor (0x1001) Record Handle: 0x00000002 Service Class ID List: LAN Access Using PPP (0x1102) Protocol Descriptor List: L2CAP (0x0100) RFCOMM (0x0003) Protocol specific parameter #1: u/int8/bool 1 Bluetooth Profile Descriptor List: LAN Access Using PPP (0x1102) ver. 1.0 Note that each service has a list of attributes, such as the RFCOMM channel. Depending on the service, the user might need to make note of some of the attributes. Some Bluetooth implementations do not support service browsing and may return an empty list. In this case, it is possible to search for the specific service. The example below shows how to search for the OBEX Object Push (OPUSH) service: &prompt.user; sdpcontrol -a 00:01:03:fc:6e:ec search OPUSH Offering services on &os; to Bluetooth clients is done with the &man.sdpd.8; server. The following line can be added to /etc/rc.conf: sdpd_enable="YES" Then the &man.sdpd.8; daemon can be started with: &prompt.root; service sdpd start The local server application that wants to provide a Bluetooth service to remote clients will register the service with the local SDP daemon. An example of such an application is &man.rfcomm.pppd.8;. Once started, it will register the Bluetooth LAN service with the local SDP daemon. The list of services registered with the local SDP server can be obtained by issuing a SDP browse query via the local control channel: &prompt.root; sdpcontrol -l browse <acronym>OBEX</acronym> Object Push (<acronym>OPUSH</acronym>) OBEX Object Exchange (OBEX) is a widely used protocol for simple file transfers between mobile devices. 
Its main use is in infrared communication, where it is used for generic file transfers between notebooks or PDAs, and for sending business cards or calendar entries between cellular phones and other devices with Personal Information Manager (PIM) applications. The OBEX server and client are implemented by obexapp, which can be installed using the comms/obexapp package or port. The OBEX client is used to push and/or pull objects from the OBEX server. An example object is a business card or an appointment. The OBEX client can obtain the RFCOMM channel number from the remote device via SDP. This can be done by specifying the service name instead of the RFCOMM channel number. Supported service names are: IrMC, FTRN, and OPUSH. It is also possible to specify the RFCOMM channel as a number. Below is an example of an OBEX session where the device information object is pulled from the cellular phone, and a new object, the business card, is pushed into the phone's directory. &prompt.user; obexapp -a 00:80:37:29:19:a4 -C IrMC obex> get telecom/devinfo.txt devinfo-t39.txt Success, response: OK, Success (0x20) obex> put new.vcf Success, response: OK, Success (0x20) obex> di Success, response: OK, Success (0x20) In order to provide the OPUSH service, &man.sdpd.8; must be running and a root folder, where all incoming objects will be stored, must be created. The default path to the root folder is /var/spool/obex. Finally, start the OBEX server on a valid RFCOMM channel number. The OBEX server will automatically register the OPUSH service with the local SDP daemon. The example below shows how to start the OBEX server. &prompt.root; obexapp -s -C 10 Serial Port Profile (<acronym>SPP</acronym>) The Serial Port Profile (SPP) allows Bluetooth devices to perform serial cable emulation. This profile allows legacy applications to use Bluetooth as a cable replacement, through a virtual serial port abstraction. In &os;, &man.rfcomm.sppd.1; implements SPP and a pseudo tty is used as a virtual serial port abstraction. The example below shows how to connect to a remote device's serial port service. A RFCOMM channel does not have to be specified as &man.rfcomm.sppd.1; can obtain it from the remote device via SDP. To override this, specify a RFCOMM channel on the command line. &prompt.root; rfcomm_sppd -a 00:07:E0:00:0B:CA -t rfcomm_sppd[94692]: Starting on /dev/pts/6... /dev/pts/6 Once connected, the pseudo tty can be used as serial port: &prompt.root; cu -l /dev/pts/6 The pseudo tty is printed on stdout and can be read by wrapper scripts: PTS=`rfcomm_sppd -a 00:07:E0:00:0B:CA -t` cu -l $PTS Troubleshooting By default, when &os; is accepting a new connection, it tries to perform a role switch and become master. Some older Bluetooth devices which do not support role switching will not be able to connect. Since role switching is performed when a new connection is being established, it is not possible to ask the remote device if it supports role switching. However, there is a HCI option to disable role switching on the local side: &prompt.root; hccontrol -n ubt0hci write_node_role_switch 0 To display Bluetooth packets, use the third-party package hcidump, which can be installed using the comms/hcidump package or port. This utility is similar to &man.tcpdump.1; and can be used to display the contents of Bluetooth packets on the terminal and to dump the Bluetooth packets to a file. 
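For example, once installed, hcidump can print packets in hex on the terminal or write a raw capture to a file for later analysis. These invocations are a sketch and the file name is arbitrary:
&prompt.root; hcidump -x
&prompt.root; hcidump -w /tmp/bluetooth.dump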
Bridging Andrew Thompson Written by IP subnet bridge It is sometimes useful to divide a network, such as an Ethernet segment, into network segments without having to create IP subnets and use a router to connect the segments together. A device that connects two networks together in this fashion is called a bridge. A bridge works by learning the MAC addresses of the devices on each of its network interfaces. It forwards traffic between networks only when the source and destination MAC addresses are on different networks. In many respects, a bridge is like an Ethernet switch with very few ports. A &os; system with multiple network interfaces can be configured to act as a bridge. Bridging can be useful in the following situations: Connecting Networks The basic operation of a bridge is to join two or more network segments. There are many reasons to use a host-based bridge instead of networking equipment, such as cabling constraints or firewalling. A bridge can also connect a wireless interface running in hostap mode to a wired network and act as an access point. Filtering/Traffic Shaping Firewall A bridge can be used when firewall functionality is needed without routing or Network Address Translation (NAT). An example is a small company that is connected via DSL or ISDN to an ISP. There are thirteen public IP addresses from the ISP and ten computers on the network. In this situation, using a router-based firewall is difficult because of subnetting issues. A bridge-based firewall can be configured without any IP addressing issues. Network Tap A bridge can join two network segments in order to inspect all Ethernet frames that pass between them using &man.bpf.4; and &man.tcpdump.1; on the bridge interface or by sending a copy of all frames out an additional interface known as a span port. Layer 2 VPN Two Ethernet networks can be joined across an IP link by bridging the networks to an EtherIP tunnel or a &man.tap.4; based solution such as OpenVPN. Layer 2 Redundancy A network can be connected together with multiple links and use the Spanning Tree Protocol (STP) to block redundant paths. This section describes how to configure a &os; system as a bridge using &man.if.bridge.4;. A netgraph bridging driver is also available, and is described in &man.ng.bridge.4;. Packet filtering can be used with any firewall package that hooks into the &man.pfil.9; framework. The bridge can be used as a traffic shaper with &man.altq.4; or &man.dummynet.4;. Enabling the Bridge In &os;, &man.if.bridge.4; is a kernel module which is automatically loaded by &man.ifconfig.8; when creating a bridge interface. It is also possible to compile bridge support into a custom kernel by adding device if_bridge to the custom kernel configuration file. The bridge is created using interface cloning. To create the bridge interface: &prompt.root; ifconfig bridge create bridge0 &prompt.root; ifconfig bridge0 bridge0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 96:3d:4b:f1:79:7a id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 root id 00:00:00:00:00:00 priority 0 ifcost 0 port 0 When a bridge interface is created, it is automatically assigned a randomly generated Ethernet address. The maxaddr and timeout parameters control how many MAC addresses the bridge will keep in its forwarding table and how many seconds before each entry is removed after it is last seen. The other parameters control how STP operates. 
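If the defaults are not suitable, both values can be changed with &man.ifconfig.8;. The numbers below are only an illustration, raising the address cache to 500 entries and the timeout to 600 seconds:
&prompt.root; ifconfig bridge0 maxaddr 500 timeout 600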
Next, specify which network interfaces to add as members of the bridge. For the bridge to forward packets, all member interfaces and the bridge need to be up: &prompt.root; ifconfig bridge0 addm fxp0 addm fxp1 up &prompt.root; ifconfig fxp0 up &prompt.root; ifconfig fxp1 up The bridge can now forward Ethernet frames between fxp0 and fxp1. Add the following lines to /etc/rc.conf so the bridge is created at startup: cloned_interfaces="bridge0" ifconfig_bridge0="addm fxp0 addm fxp1 up" ifconfig_fxp0="up" ifconfig_fxp1="up" If the bridge host needs an IP address, set it on the bridge interface, not on the member interfaces. The address can be set statically or via DHCP. This example sets a static IP address: &prompt.root; ifconfig bridge0 inet 192.168.0.1/24 It is also possible to assign an IPv6 address to a bridge interface. To make the changes permanent, add the addressing information to /etc/rc.conf. When packet filtering is enabled, bridged packets will pass through the filter inbound on the originating interface on the bridge interface, and outbound on the appropriate interfaces. Either stage can be disabled. When direction of the packet flow is important, it is best to firewall on the member interfaces rather than the bridge itself. The bridge has several configurable settings for passing non-IP and IP packets, and layer2 firewalling with &man.ipfw.8;. See &man.if.bridge.4; for more information. Enabling Spanning Tree For an Ethernet network to function properly, only one active path can exist between two devices. The STP protocol detects loops and puts redundant links into a blocked state. Should one of the active links fail, STP calculates a different tree and enables one of the blocked paths to restore connectivity to all points in the network. The Rapid Spanning Tree Protocol (RSTP or 802.1w) provides backwards compatibility with legacy STP. RSTP provides faster convergence and exchanges information with neighboring switches to quickly transition to forwarding mode without creating loops. &os; supports RSTP and STP as operating modes, with RSTP being the default mode. STP can be enabled on member interfaces using &man.ifconfig.8;. For a bridge with fxp0 and fxp1 as the current interfaces, enable STP with: &prompt.root; ifconfig bridge0 stp fxp0 stp fxp1 bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether d6:cf:d5:a0:94:6d id 00:01:02:4b:d4:50 priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 root id 00:01:02:4b:d4:50 priority 32768 ifcost 0 port 0 member: fxp0 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 3 priority 128 path cost 200000 proto rstp role designated state forwarding member: fxp1 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 4 priority 128 path cost 200000 proto rstp role designated state forwarding This bridge has a spanning tree ID of 00:01:02:4b:d4:50 and a priority of 32768. As the root id is the same, it indicates that this is the root bridge for the tree. 
Another bridge on the network also has STP enabled: bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 96:3d:4b:f1:79:7a id 00:13:d4:9a:06:7a priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 root id 00:01:02:4b:d4:50 priority 32768 ifcost 400000 port 4 member: fxp0 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 4 priority 128 path cost 200000 proto rstp role root state forwarding member: fxp1 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 5 priority 128 path cost 200000 proto rstp role designated state forwarding The line root id 00:01:02:4b:d4:50 priority 32768 ifcost 400000 port 4 shows that the root bridge is 00:01:02:4b:d4:50 and has a path cost of 400000 from this bridge. The path to the root bridge is via port 4, which is fxp0. Bridge Interface Parameters Several ifconfig parameters are unique to bridge interfaces. This section summarizes some common uses for these parameters. The complete list of available parameters is described in &man.ifconfig.8;. private A private interface does not forward any traffic to any other port that is also designated as a private interface. The traffic is blocked unconditionally so no Ethernet frames will be forwarded, including ARP packets. If traffic needs to be selectively blocked, a firewall should be used instead. span A span port transmits a copy of every Ethernet frame received by the bridge. The number of span ports configured on a bridge is unlimited, but if an interface is designated as a span port, it cannot also be used as a regular bridge port. This is most useful for snooping a bridged network passively on another host connected to one of the span ports of the bridge. For example, to send a copy of all frames out the interface named fxp4: &prompt.root; ifconfig bridge0 span fxp4 sticky If a bridge member interface is marked as sticky, dynamically learned address entries are treated as static entries in the forwarding cache. Sticky entries are never aged out of the cache or replaced, even if the address is seen on a different interface. This gives the benefit of static address entries without the need to pre-populate the forwarding table. Clients learned on a particular segment of the bridge cannot roam to another segment. An example of using sticky addresses is to combine the bridge with VLANs in order to isolate customer networks without wasting IP address space. Consider that CustomerA is on vlan100, CustomerB is on vlan101, and the bridge has the address 192.168.0.1: &prompt.root; ifconfig bridge0 addm vlan100 sticky vlan100 addm vlan101 sticky vlan101 &prompt.root; ifconfig bridge0 inet 192.168.0.1/24 In this example, both clients see 192.168.0.1 as their default gateway. Since the bridge cache is sticky, one host cannot spoof the MAC address of the other customer in order to intercept their traffic. Any communication between the VLANs can be blocked using a firewall or, as seen in this example, private interfaces: &prompt.root; ifconfig bridge0 private vlan100 private vlan101 The customers are completely isolated from each other and the full /24 address range can be allocated without subnetting. The number of unique source MAC addresses behind an interface can be limited. Once the limit is reached, packets with unknown source addresses are dropped until an existing host cache entry expires or is removed. 
The following example sets the maximum number of Ethernet devices for CustomerA on vlan100 to 10: &prompt.root; ifconfig bridge0 ifmaxaddr vlan100 10 Bridge interfaces also support monitor mode, where the packets are discarded after &man.bpf.4; processing and are not processed or forwarded further. This can be used to multiplex the input of two or more interfaces into a single &man.bpf.4; stream. This is useful for reconstructing the traffic for network taps that transmit the RX/TX signals out through two separate interfaces. For example, to read the input from four network interfaces as one stream: &prompt.root; ifconfig bridge0 addm fxp0 addm fxp1 addm fxp2 addm fxp3 monitor up &prompt.root; tcpdump -i bridge0 <acronym>SNMP</acronym> Monitoring The bridge interface and STP parameters can be monitored via &man.bsnmpd.1;, which is included in the &os; base system. The exported bridge MIBs conform to IETF standards, so any SNMP client or monitoring package can be used to retrieve the data. To enable monitoring on the bridge, uncomment this line in /etc/snmpd.config by removing the beginning # symbol: begemotSnmpdModulePath."bridge" = "/usr/lib/snmp_bridge.so" Other configuration settings, such as community names and access lists, may need to be modified in this file. See &man.bsnmpd.1; and &man.snmp.bridge.3; for more information. Once these edits are saved, add this line to /etc/rc.conf: bsnmpd_enable="YES" Then, start &man.bsnmpd.1;: &prompt.root; service bsnmpd start The following examples use the Net-SNMP software (net-mgmt/net-snmp) to query a bridge from a client system. The net-mgmt/bsnmptools port can also be used. From the SNMP client which is running Net-SNMP, add the following lines to $HOME/.snmp/snmp.conf in order to import the bridge MIB definitions: mibdirs +/usr/share/snmp/mibs mibs +BRIDGE-MIB:RSTP-MIB:BEGEMOT-MIB:BEGEMOT-BRIDGE-MIB To monitor a single bridge using the IETF BRIDGE-MIB (RFC4188): &prompt.user; snmpwalk -v 2c -c public bridge1.example.com mib-2.dot1dBridge BRIDGE-MIB::dot1dBaseBridgeAddress.0 = STRING: 66:fb:9b:6e:5c:44 BRIDGE-MIB::dot1dBaseNumPorts.0 = INTEGER: 1 ports BRIDGE-MIB::dot1dStpTimeSinceTopologyChange.0 = Timeticks: (189959) 0:31:39.59 centi-seconds BRIDGE-MIB::dot1dStpTopChanges.0 = Counter32: 2 BRIDGE-MIB::dot1dStpDesignatedRoot.0 = Hex-STRING: 80 00 00 01 02 4B D4 50 ... BRIDGE-MIB::dot1dStpPortState.3 = INTEGER: forwarding(5) BRIDGE-MIB::dot1dStpPortEnable.3 = INTEGER: enabled(1) BRIDGE-MIB::dot1dStpPortPathCost.3 = INTEGER: 200000 BRIDGE-MIB::dot1dStpPortDesignatedRoot.3 = Hex-STRING: 80 00 00 01 02 4B D4 50 BRIDGE-MIB::dot1dStpPortDesignatedCost.3 = INTEGER: 0 BRIDGE-MIB::dot1dStpPortDesignatedBridge.3 = Hex-STRING: 80 00 00 01 02 4B D4 50 BRIDGE-MIB::dot1dStpPortDesignatedPort.3 = Hex-STRING: 03 80 BRIDGE-MIB::dot1dStpPortForwardTransitions.3 = Counter32: 1 RSTP-MIB::dot1dStpVersion.0 = INTEGER: rstp(2) The dot1dStpTopChanges.0 value is two, indicating that the STP bridge topology has changed twice. A topology change means that one or more links in the network have changed or failed and a new tree has been calculated. The dot1dStpTimeSinceTopologyChange.0 value will show when this happened. 
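To read just these two values without walking the whole subtree, snmpget from the same Net-SNMP package can be used. This sketch reuses the community string and hostname from the example above:
&prompt.user; snmpget -v 2c -c public bridge1.example.com BRIDGE-MIB::dot1dStpTopChanges.0 BRIDGE-MIB::dot1dStpTimeSinceTopologyChange.0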
To monitor multiple bridge interfaces, the private BEGEMOT-BRIDGE-MIB can be used: &prompt.user; snmpwalk -v 2c -c public bridge1.example.com enterprises.fokus.begemot.begemotBridge BEGEMOT-BRIDGE-MIB::begemotBridgeBaseName."bridge0" = STRING: bridge0 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseName."bridge2" = STRING: bridge2 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseAddress."bridge0" = STRING: e:ce:3b:5a:9e:13 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseAddress."bridge2" = STRING: 12:5e:4d:74:d:fc BEGEMOT-BRIDGE-MIB::begemotBridgeBaseNumPorts."bridge0" = INTEGER: 1 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseNumPorts."bridge2" = INTEGER: 1 ... BEGEMOT-BRIDGE-MIB::begemotBridgeStpTimeSinceTopologyChange."bridge0" = Timeticks: (116927) 0:19:29.27 centi-seconds BEGEMOT-BRIDGE-MIB::begemotBridgeStpTimeSinceTopologyChange."bridge2" = Timeticks: (82773) 0:13:47.73 centi-seconds BEGEMOT-BRIDGE-MIB::begemotBridgeStpTopChanges."bridge0" = Counter32: 1 BEGEMOT-BRIDGE-MIB::begemotBridgeStpTopChanges."bridge2" = Counter32: 1 BEGEMOT-BRIDGE-MIB::begemotBridgeStpDesignatedRoot."bridge0" = Hex-STRING: 80 00 00 40 95 30 5E 31 BEGEMOT-BRIDGE-MIB::begemotBridgeStpDesignatedRoot."bridge2" = Hex-STRING: 80 00 00 50 8B B8 C6 A9 To change the bridge interface being monitored via the mib-2.dot1dBridge subtree: &prompt.user; snmpset -v 2c -c private bridge1.example.com BEGEMOT-BRIDGE-MIB::begemotBridgeDefaultBridgeIf.0 s bridge2 Link Aggregation and Failover Andrew Thompson Written by lagg failover FEC LACP loadbalance roundrobin &os; provides the &man.lagg.4; interface which can be used to aggregate multiple network interfaces into one virtual interface in order to provide failover and link aggregation. Failover allows traffic to continue to flow as long as at least one aggregated network interface has an established link. Link aggregation works best on switches which support LACP, as this protocol distributes traffic bi-directionally while responding to the failure of individual links. The aggregation protocols supported by the lagg interface determine which ports are used for outgoing traffic and whether or not a specific port accepts incoming traffic. The following protocols are supported by &man.lagg.4;: failover This mode sends and receives traffic only through the master port. If the master port becomes unavailable, the next active port is used. The first interface added to the virtual interface is the master port and all subsequently added interfaces are used as failover devices. If failover to a non-master port occurs, the original port becomes master once it becomes available again. fec / loadbalance &cisco; Fast &etherchannel; (FEC) is found on older &cisco; switches. It provides a static setup and does not negotiate aggregation with the peer or exchange frames to monitor the link. If the switch supports LACP, that should be used instead. lacp The &ieee; 802.3ad Link Aggregation Control Protocol (LACP) negotiates a set of aggregable links with the peer into one or more Link Aggregated Groups (LAGs). Each LAG is composed of ports of the same speed, set to full-duplex operation, and traffic is balanced across the ports in the LAG with the greatest total speed. Typically, there is only one LAG which contains all the ports. In the event of changes in physical connectivity, LACP will quickly converge to a new configuration. LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. 
The hash includes the Ethernet source and destination address and, if available, the VLAN tag, and the IPv4 or IPv6 source and destination address. roundrobin This mode distributes outgoing traffic using a round-robin scheduler through all active ports and accepts incoming traffic from any active port. Since this mode violates Ethernet frame ordering, it should be used with caution. Configuration Examples This section demonstrates how to configure a &cisco; switch and a &os; system for LACP load balancing. It then shows how to configure two Ethernet interfaces in failover mode as well as how to configure failover mode between an Ethernet and a wireless interface. <acronym>LACP</acronym> Aggregation with a &cisco; Switch This example connects two &man.fxp.4; Ethernet interfaces on a &os; machine to the first two Ethernet ports on a &cisco; switch as a single load balanced and fault tolerant link. More interfaces can be added to increase throughput and fault tolerance. Replace the names of the &cisco; ports, Ethernet devices, channel group number, and IP address shown in the example to match the local configuration. Frame ordering is mandatory on Ethernet links and any traffic between two stations always flows over the same physical link, limiting the maximum speed to that of one interface. The transmit algorithm attempts to use as much information as it can to distinguish different traffic flows and balance the flows across the available interfaces. On the &cisco; switch, add the FastEthernet0/1 and FastEthernet0/2 interfaces to channel group 1: interface FastEthernet0/1 channel-group 1 mode active channel-protocol lacp ! interface FastEthernet0/2 channel-group 1 mode active channel-protocol lacp On the &os; system, create the &man.lagg.4; interface using the physical interfaces fxp0 and fxp1 and bring the interfaces up with an IP address of 10.0.0.3/24: &prompt.root; ifconfig fxp0 up &prompt.root; ifconfig fxp1 up &prompt.root; ifconfig lagg0 create &prompt.root; ifconfig lagg0 up laggproto lacp laggport fxp0 laggport fxp1 10.0.0.3/24 Next, verify the status of the virtual interface: &prompt.root; ifconfig lagg0 lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:05:5d:71:8d:b8 media: Ethernet autoselect status: active laggproto lacp laggport: fxp1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> laggport: fxp0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> Ports marked as ACTIVE are part of the LAG that has been negotiated with the remote switch. Traffic will be transmitted and received through these active ports. Add -v to the above command to view the LAG identifiers. To see the port status on the &cisco; switch: switch# show lacp neighbor Flags: S - Device is requesting Slow LACPDUs F - Device is requesting Fast LACPDUs A - Device is in Active mode P - Device is in Passive mode Channel group 1 neighbors Partner's information: LACP port Oper Port Port Port Flags Priority Dev ID Age Key Number State Fa0/1 SA 32768 0005.5d71.8db8 29s 0x146 0x3 0x3D Fa0/2 SA 32768 0005.5d71.8db8 29s 0x146 0x4 0x3D For more detail, type show lacp neighbor detail. To retain this configuration across reboots, add the following entries to /etc/rc.conf on the &os; system: ifconfig_fxp0="up" ifconfig_fxp1="up" cloned_interfaces="lagg0" ifconfig_lagg0="laggproto lacp laggport fxp0 laggport fxp1 10.0.0.3/24" Failover Mode Failover mode can be used to switch over to a secondary interface if the link is lost on the master interface. 
To configure failover, make sure that the underlying physical interfaces are up, then create the &man.lagg.4; interface. In this example, fxp0 is the master interface, fxp1 is the secondary interface, and the virtual interface is assigned an IP address of 10.0.0.15/24: &prompt.root; ifconfig fxp0 up &prompt.root; ifconfig fxp1 up &prompt.root; ifconfig lagg0 create &prompt.root; ifconfig lagg0 up laggproto failover laggport fxp0 laggport fxp1 10.0.0.15/24 The virtual interface should look something like this: &prompt.root; ifconfig lagg0 lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:05:5d:71:8d:b8 inet 10.0.0.15 netmask 0xffffff00 broadcast 10.0.0.255 media: Ethernet autoselect status: active laggproto failover laggport: fxp1 flags=0<> laggport: fxp0 flags=5<MASTER,ACTIVE> Traffic will be transmitted and received on fxp0. If the link is lost on fxp0, fxp1 will become the active link. If the link is restored on the master interface, it will once again become the active link. To retain this configuration across reboots, add the following entries to /etc/rc.conf: ifconfig_fxp0="up" ifconfig_fxp1="up" cloned_interfaces="lagg0" ifconfig_lagg0="laggproto failover laggport fxp0 laggport fxp1 10.0.0.15/24" Failover Mode Between Ethernet and Wireless Interfaces For laptop users, it is usually desirable to configure the wireless device as a secondary which is only used when the Ethernet connection is not available. With &man.lagg.4;, it is possible to configure a failover which prefers the Ethernet connection for both performance and security reasons, while maintaining the ability to transfer data over the wireless connection. This is achieved by overriding the physical wireless interface's MAC address with that of the Ethernet interface. In this example, the Ethernet interface, bge0, is the master and the wireless interface, wlan0, is the failover. The wlan0 device was created from iwn0 wireless interface, which will be configured with the MAC address of the Ethernet interface. First, determine the MAC address of the Ethernet interface: &prompt.root; ifconfig bge0 bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4> ether 00:21:70:da:ae:37 inet6 fe80::221:70ff:feda:ae37%bge0 prefixlen 64 scopeid 0x2 nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> media: Ethernet autoselect (1000baseT <full-duplex>) status: active Replace bge0 to match the system's Ethernet interface name. The ether line will contain the MAC address of the specified interface. 
Now, change the MAC address of the underlying wireless interface: &prompt.root; ifconfig iwn0 ether 00:21:70:da:ae:37 Bring the wireless interface up, but do not set an IP address: &prompt.root; ifconfig wlan0 create wlandev iwn0 ssid my_router up Make sure the bge0 interface is up, then create the &man.lagg.4; interface with bge0 as master with failover to wlan0: &prompt.root; ifconfig bge0 up &prompt.root; ifconfig lagg0 create &prompt.root; ifconfig lagg0 up laggproto failover laggport bge0 laggport wlan0 The virtual interface should look something like this: &prompt.root; ifconfig lagg0 lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:21:70:da:ae:37 media: Ethernet autoselect status: active laggproto failover laggport: wlan0 flags=0<> laggport: bge0 flags=5<MASTER,ACTIVE> Then, start the DHCP client to obtain an IP address: &prompt.root; dhclient lagg0 To retain this configuration across reboots, add the following entries to /etc/rc.conf: ifconfig_bge0="up" ifconfig_iwn0="ether 00:21:70:da:ae:37" wlans_iwn0="wlan0" ifconfig_wlan0="WPA" cloned_interfaces="lagg0" ifconfig_lagg0="laggproto failover laggport bge0 laggport wlan0 DHCP" Diskless Operation with <acronym>PXE</acronym> Jean-François Dockès Updated by Alex Dupre Reorganized and enhanced by diskless workstation diskless operation The &intel; Preboot eXecution Environment (PXE) allows an operating system to boot over the network. For example, a &os; system can boot over the network and operate without a local disk, using file systems mounted from an NFS server. PXE support is usually available in the BIOS. To use PXE when the machine starts, select the Boot from network option in the BIOS setup or type a function key during system initialization. In order to provide the files needed for an operating system to boot over the network, a PXE setup also requires properly configured DHCP, TFTP, and NFS servers, where: Initial parameters, such as an IP address, executable boot filename and location, server name, and root path are obtained from the DHCP server. The operating system loader file is booted using TFTP. The file systems are loaded using NFS. When a computer PXE boots, it receives information over DHCP about where to obtain the initial boot loader file. After the host computer receives this information, it downloads the boot loader via TFTP and then executes the boot loader. In &os;, the boot loader file is /boot/pxeboot. After /boot/pxeboot executes, the &os; kernel is loaded and the rest of the &os; bootup sequence proceeds, as described in . This section describes how to configure these services on a &os; system so that other systems can PXE boot into &os;. Refer to &man.diskless.8; for more information. As described, the system providing these services is insecure. It should live in a protected area of a network and be untrusted by other hosts. Setting Up the <acronym>PXE</acronym> Environment Craig Rodrigues
rodrigc@FreeBSD.org
Written by
The steps shown in this section configure the built-in NFS and TFTP servers. The next section demonstrates how to install and configure the DHCP server. In this example, the directory which will contain the files used by PXE users is /b/tftpboot/FreeBSD/install. It is important that this directory exists and that the same directory name is set in both /etc/inetd.conf and /usr/local/etc/dhcpd.conf. Create the root directory which will contain a &os; installation to be NFS mounted: &prompt.root; export NFSROOTDIR=/b/tftpboot/FreeBSD/install &prompt.root; mkdir -p ${NFSROOTDIR} Enable the NFS server by adding this line to /etc/rc.conf: nfs_server_enable="YES" Export the diskless root directory via NFS by adding the following to /etc/exports: /b -ro -alldirs Start the NFS server: &prompt.root; service nfsd start Enable &man.inetd.8; by adding the following line to /etc/rc.conf: inetd_enable="YES" Uncomment the following line in /etc/inetd.conf by making sure it does not start with a # symbol: tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /b/tftpboot Some PXE versions require the TCP version of TFTP. In this case, uncomment the second tftp line which contains stream tcp. Start &man.inetd.8;: &prompt.root; service inetd start Rebuild the &os; kernel and userland (refer to for more detailed instructions): &prompt.root; cd /usr/src &prompt.root; make buildworld &prompt.root; make buildkernel Install &os; into the directory mounted over NFS: &prompt.root; make installworld DESTDIR=${NFSROOTDIR} &prompt.root; make installkernel DESTDIR=${NFSROOTDIR} &prompt.root; make distribution DESTDIR=${NFSROOTDIR} Test that the TFTP server works and can download the boot loader which will be obtained via PXE: &prompt.root; tftp localhost tftp> get FreeBSD/install/boot/pxeboot Received 264951 bytes in 0.1 seconds Edit ${NFSROOTDIR}/etc/fstab and create an entry to mount the root file system over NFS: # Device Mountpoint FSType Options Dump Pass myhost.example.com:/b/tftpboot/FreeBSD/install / nfs ro 0 0 Replace myhost.example.com with the hostname or IP address of the NFS server. In this example, the root file system is mounted read-only in order to prevent NFS clients from potentially deleting the contents of the root file system. Set the root password in the PXE environment for client machines which are PXE booting : &prompt.root; chroot ${NFSROOTDIR} &prompt.root; passwd If needed, enable &man.ssh.1; root logins for client machines which are PXE booting by editing ${NFSROOTDIR}/etc/ssh/sshd_config and enabling PermitRootLogin. This option is documented in &man.sshd.config.5;. Perform any other needed customizations of the PXE environment in ${NFSROOTDIR}. These customizations could include things like installing packages or editing the password file with &man.vipw.8;. When booting from an NFS root volume, /etc/rc detects the NFS boot and runs /etc/rc.initdiskless. In this case, /etc and /var need to be memory backed file systems so that these directories are writable but the NFS root directory is read-only: &prompt.root; chroot ${NFSROOTDIR} &prompt.root; mkdir -p conf/base &prompt.root; tar -c -v -f conf/base/etc.cpio.gz --format cpio --gzip etc &prompt.root; tar -c -v -f conf/base/var.cpio.gz --format cpio --gzip var When the system boots, memory file systems for /etc and /var will be created and mounted and the contents of the cpio.gz files will be copied into them.
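As a quick, optional check, the contents of the archives can be listed with &man.tar.1;, which detects the cpio format and gzip compression automatically. These commands assume the working directory is still the chrooted ${NFSROOTDIR}, as in the steps above:
&prompt.root; tar -t -f conf/base/etc.cpio.gz
&prompt.root; tar -t -f conf/base/var.cpio.gz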
Configuring the <acronym>DHCP</acronym> Server DHCP diskless operation The DHCP server does not need to be the same machine as the TFTP and NFS server, but it needs to be accessible in the network. DHCP is not part of the &os; base system but can be installed using the net/isc-dhcp42-server port or package. Once installed, edit the configuration file, /usr/local/etc/dhcpd.conf. Configure the next-server, filename, and root-path settings as seen in this example: subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.2 192.168.0.3 ; option subnet-mask 255.255.255.0 ; option routers 192.168.0.1 ; option broadcast-address 192.168.0.255 ; option domain-name-servers 192.168.35.35, 192.168.35.36 ; option domain-name "example.com"; # IP address of TFTP server next-server 192.168.0.1 ; # path of boot loader obtained via tftp filename "FreeBSD/install/boot/pxeboot" ; # pxeboot boot loader will try to NFS mount this directory for root FS option root-path "192.168.0.1:/b/tftpboot/FreeBSD/install/" ; } The next-server directive is used to specify the IP address of the TFTP server. The filename directive defines the path to /boot/pxeboot. A relative filename is used, meaning that /b/tftpboot is not included in the path. The root-path option defines the path to the NFS root file system. Once the edits are saved, enable DHCP at boot time by adding the following line to /etc/rc.conf: dhcpd_enable="YES" Then start the DHCP service: &prompt.root; service isc-dhcpd start Debugging <acronym>PXE</acronym> Problems Once all of the services are configured and started, PXE clients should be able to automatically load &os; over the network. If a particular client is unable to connect, when that client machine boots up, enter the BIOS configuration menu and confirm that it is set to boot from the network. This section describes some troubleshooting tips for isolating the source of the configuration problem should no clients be able to PXE boot. Use the net/wireshark package or port to debug the network traffic involved during the PXE booting process, which is illustrated in the diagram below.
<acronym>PXE</acronym> Booting Process with <acronym>NFS</acronym> Root Mount Client broadcasts a DHCPDISCOVER message. The DHCP server responds with the IP address, next-server, filename, and root-path values. The client sends a TFTP request to next-server, asking to retrieve filename. The TFTP server responds and sends filename to client. The client executes filename, which is &man.pxeboot.8;, which then loads the kernel. When the kernel executes, the root file system specified by root-path is mounted over NFS.
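If a graphical capture tool is inconvenient, the same exchange can be observed with &man.tcpdump.1; from the base system. This sketch assumes the server's network interface is em0; ports 67 and 68 carry DHCP and port 69 carries TFTP:
&prompt.root; tcpdump -ni em0 port 67 or port 68 or port 69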
On the TFTP server, read /var/log/xferlog to ensure that pxeboot is being retrieved from the correct location. To test this example configuration: &prompt.root; tftp 192.168.0.1 tftp> get FreeBSD/install/boot/pxeboot Received 264951 bytes in 0.1 seconds The BUGS sections in &man.tftpd.8; and &man.tftp.1; document some limitations with TFTP. Make sure that the root file system can be mounted via NFS. To test this example configuration: &prompt.root; mount -t nfs 192.168.0.1:/b/tftpboot/FreeBSD/install /mnt
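The export list can also be checked without mounting by querying the server with &man.showmount.8;, and the test mount should be removed afterwards. Both commands assume the example server address of 192.168.0.1:
&prompt.root; showmount -e 192.168.0.1
&prompt.root; umount /mnt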
<acronym>IPv6</acronym> Aaron Kaplan Originally Written by Tom Rhodes Restructured and Added by Brad Davis Extended by IPv6 is the new version of the well known IP protocol, also known as IPv4. IPv6 provides several advantages over IPv4 as well as many new features: Its 128-bit address space allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. This addresses the IPv4 address shortage and eventual IPv4 address exhaustion. Routers only store network aggregation addresses in their routing tables, thus reducing the average space of a routing table to 8192 entries. This addresses the scalability issues associated with IPv4, which required every allocated block of IPv4 addresses to be exchanged between Internet routers, causing their routing tables to become too large to allow efficient routing. Address autoconfiguration (RFC2462). Mandatory multicast addresses. Built-in IPsec (IP security). Simplified header structure. Support for mobile IP. IPv6-to-IPv4 transition mechanisms. &os; includes the http://www.kame.net/ IPv6 reference implementation and comes with everything needed to use IPv6. This section focuses on getting IPv6 configured and running. Background on <acronym>IPv6</acronym> Addresses There are three different types of IPv6 addresses: Unicast A packet sent to a unicast address arrives at the interface belonging to the address. Anycast These addresses are syntactically indistinguishable from unicast addresses but they address a group of interfaces. The packet destined for an anycast address will arrive at the nearest router interface. Anycast addresses are only used by routers. Multicast These addresses identify a group of interfaces. A packet destined for a multicast address will arrive at all interfaces belonging to the multicast group. The IPv4 broadcast address, usually xxx.xxx.xxx.255, is expressed by multicast addresses in IPv6. When reading an IPv6 address, the canonical form is represented as x:x:x:x:x:x:x:x, where each x represents a 16 bit hex value. An example is FEBC:A574:382B:23C1:AA49:4592:4EFE:9982. Often, an address will have long substrings of all zeros. A :: (double colon) can be used to replace one substring per address. Also, up to three leading 0s per hex value can be omitted. For example, fe80::1 corresponds to the canonical form fe80:0000:0000:0000:0000:0000:0000:0001. A third form is to write the last 32 bits using the well known IPv4 notation. For example, 2002::10.0.0.1 corresponds to the hexadecimal canonical representation 2002:0000:0000:0000:0000:0000:0a00:0001, which in turn is equivalent to 2002::a00:1. To view a &os; system's IPv6 address, use &man.ifconfig.8;: &prompt.root; ifconfig rl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 inet 10.0.0.10 netmask 0xffffff00 broadcast 10.0.0.255 inet6 fe80::200:21ff:fe03:8e1%rl0 prefixlen 64 scopeid 0x1 ether 00:00:21:03:08:e1 media: Ethernet autoselect (100baseTX ) status: active In this example, fe80::200:21ff:fe03:8e1%rl0 is an auto-configured link-local address which was automatically generated from the MAC address. Some IPv6 addresses are reserved. A summary of these reserved addresses is seen in : Reserved <acronym>IPv6</acronym> Addresses IPv6 address Prefixlength (Bits) Description Notes :: 128 bits unspecified Equivalent to 0.0.0.0 in IPv4. ::1 128 bits loopback address Equivalent to 127.0.0.1 in IPv4. ::00:xx:xx:xx:xx 96 bits embedded IPv4 The lower 32 bits are the compatible IPv4 address. 
::ff:xx:xx:xx:xx 96 bits IPv4 mapped IPv6 address The lower 32 bits are the IPv4 address for hosts which do not support IPv6. fe80::/10 10 bits link-local Equivalent to 169.254.0.0/16 in IPv4. fc00::/7 7 bits unique-local Unique local addresses are intended for local communication and are only routable within a set of cooperating sites. ff00:: 8 bits multicast   2000::-3fff:: 3 bits global unicast All global unicast addresses are assigned from this pool. The first 3 bits are 001.
For further information on the structure of IPv6 addresses, refer to RFC3513.
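A quick way to confirm that the local IPv6 stack is responding is to ping the loopback address and the interface's link-local address with &man.ping6.8;. This sketch reuses the rl0 address shown above; note that link-local addresses require the %rl0 scope identifier:
&prompt.user; ping6 -c 4 ::1
&prompt.user; ping6 -c 4 fe80::200:21ff:fe03:8e1%rl0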
Configuring <acronym>IPv6</acronym> To configure a &os; system as an IPv6 client, add these two lines to rc.conf: ifconfig_em0_ipv6="inet6 accept_rtadv" rtsold_enable="YES" The first line enables the specified interface to receive router solicitation messages. The second line enables the router solicitation daemon, &man.rtsol.8;. For &os; 8.x, add a third line: ipv6_enable="YES" If the interface needs a statically assigned IPv6 address, add an entry to specify the static address and associated prefix length: ifconfig_fxp0_ipv6="inet6 2001:db8:4672:6565:2026:5043:2d42:5344 prefixlen 64" On a &os; 8.x system, that line uses this format instead: ipv6_ifconfig_fxp0="2001:db8:4672:6565:2026:5043:2d42:5344" To assign a default router, specify its address: ipv6_defaultrouter="2001:db8:4672:6565::1" Connecting to a Provider In order to connect to other IPv6 networks, one must have a provider or a tunnel that supports IPv6: Contact an Internet Service Provider to see if they offer IPv6. SixXS offers tunnels with end-points all around the globe. Hurricane Electric offers tunnels with end-points all around the globe. Install the net/freenet6 package or port for a dial-up connection. This section demonstrates how to take the directions from a tunnel provider and convert them into /etc/rc.conf settings that will persist through reboots. The first /etc/rc.conf entry creates the generic tunneling interface gif0: gif_interfaces="gif0" Next, configure that interface with the IPv4 addresses of the local and remote endpoints. Replace MY_IPv4_ADDR and REMOTE_IPv4_ADDR with the actual IPv4 addresses: gifconfig_gif0="MY_IPv4_ADDR REMOTE_IPv4_ADDR" To apply the IPv6 address that has been assigned for use as the IPv6 tunnel endpoint, add this line, replacing MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR with the assigned address: ifconfig_gif0_ipv6="inet6 MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR" For &os; 8.x, that line should instead use this format: ipv6_ifconfig_gif0="MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR" Then, set the default route for the other side of the IPv6 tunnel. Replace MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR with the default gateway address assigned by the provider: ipv6_defaultrouter="MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR" If the &os; system will route IPv6 packets between the rest of the network and the world, enable the gateway using this line: ipv6_gateway_enable="YES" Router Advertisement and Host Auto Configuration This section demonstrates how to set up &man.rtadvd.8; to advertise the IPv6 default route. To enable &man.rtadvd.8;, add the following to /etc/rc.conf: rtadvd_enable="YES" It is important to specify the interface on which to do IPv6 router solicitation. For example, to tell &man.rtadvd.8; to use fxp0: rtadvd_interfaces="fxp0" Next, create the configuration file, /etc/rtadvd.conf, as seen in this example: fxp0:\ :addrs#1:addr="2001:471:1f11:246::":prefixlen#64:tc=ether: Replace fxp0 with the interface to be used and 2001:471:1f11:246:: with the prefix of the allocation. For a dedicated /64 subnet, nothing else needs to be changed. Otherwise, change the prefixlen# to the correct value. <acronym>IPv6</acronym> and <acronym>IPv4</acronym> Address Mapping When IPv6 is enabled on a server, there may be a need to enable IPv4 mapped IPv6 address communication. This compatibility option allows for IPv4 addresses to be represented as IPv6 addresses. Permitting IPv6 applications to communicate with IPv4 and vice versa may be a security issue. 
This option may not be required in most cases and is available only for compatibility. It allows IPv6-only applications to work with IPv4 in a dual-stack environment. This is most useful for third-party applications which may not support an IPv6-only environment. To enable this feature, add the following to /etc/rc.conf: ipv6_ipv4mapping="YES" Reviewing the information in RFC 3493, sections 3.6 and 3.7, as well as RFC 4038, section 4.2, may be useful to some administrators.
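To check that the new settings have taken effect without rebooting, restart the network scripts and inspect the interface. The following is a minimal sketch assuming the em0 interface from the example above; the output is abridged and the addresses shown are placeholders. A link-local fe80:: address together with a global address marked autoconf indicates that router advertisements are being received:

&prompt.root; service netif restart && service rtsold restart
&prompt.root; ifconfig em0
	inet6 fe80::2e0:b5ff:fe36:cf4f%em0 prefixlen 64 scopeid 0x1
	inet6 2001:db8:4672:6565:2e0:b5ff:fe36:cf4f prefixlen 64 autoconf
&prompt.root; ping6 -c 2 2001:db8:4672:6565::1

The last command pings the default router from the static configuration example and should show replies if the link and gateway are working.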
Common Address Redundancy Protocol (<acronym>CARP</acronym>) Tom Rhodes Contributed by Allan Jude Updated by CARP Common Address Redundancy Protocol The Common Address Redundancy Protocol (CARP) allows multiple hosts to share the same IP address and Virtual Host ID (VHID) in order to provide high availability for one or more services. This means that one or more hosts can fail, and the other hosts will transparently take over so that users do not see a service failure. In addition to the shared IP address, each host has its own IP address for management and configuration. All of the machines that share an IP address have the same VHID. The VHID for each virtual IP address must be unique across the broadcast domain of the network interface. High availability using CARP is built into &os;, though the steps to configure it vary slightly depending upon the &os; version. This section provides the same example configuration for versions before &os; 10 and for &os; 10 and later. This example configures failover support with three hosts, all with unique IP addresses, but providing the same web content. It has two different masters named hosta.example.org and hostb.example.org, with a shared backup named hostc.example.org. These machines are load balanced with a Round Robin DNS configuration. The master and backup machines are configured identically except for their hostnames and management IP addresses. These servers must have the same configuration and run the same services. When the failover occurs, requests to the service on the shared IP address can only be answered correctly if the backup server has access to the same content. The backup machine has two additional CARP interfaces, one for each of the master content server's IP addresses. When a failure occurs, the backup server will pick up the failed master machine's IP address. Using <acronym>CARP</acronym> on &os; 10 and Later Enable boot-time support for CARP by adding an entry for the carp.ko kernel module in /boot/loader.conf: carp_load="YES" To load the module now without rebooting: &prompt.root; kldload carp For users who prefer to use a custom kernel, include the following line in the custom kernel configuration file and compile the kernel as described in : device carp The hostname, management IP address and subnet mask, shared IP address, and VHID are all set by adding entries to /etc/rc.conf. This example is for hosta.example.org: hostname="hosta.example.org" ifconfig_em0="inet 192.168.1.3 netmask 255.255.255.0" ifconfig_em0_alias0="vhid 1 pass testpass alias 192.168.1.50/32" The next set of entries is for hostb.example.org. Since it represents a second master, it uses a different shared IP address and VHID. However, the passwords specified with pass must be identical, as CARP will only listen to and accept advertisements from machines with the correct password. hostname="hostb.example.org" ifconfig_em0="inet 192.168.1.4 netmask 255.255.255.0" ifconfig_em0_alias0="vhid 2 pass testpass alias 192.168.1.51/32" The third machine, hostc.example.org, is configured to handle failover from either master. This machine is configured with two CARP VHIDs, one to handle the virtual IP address for each of the master hosts. The CARP advertising skew, advskew, is set to ensure that the backup host advertises later than the master, since advskew controls the order of precedence when there are multiple backup servers.
hostname="hostc.example.org" ifconfig_em0="inet 192.168.1.5 netmask 255.255.255.0" ifconfig_em0_alias0="vhid 1 advskew 100 pass testpass alias 192.168.1.50/32" ifconfig_em0_alias1="vhid 2 advskew 100 pass testpass alias 192.168.1.51/32" Having two CARP VHIDs configured means that hostc.example.org will notice if either of the master servers becomes unavailable. If a master fails to advertise before the backup server, the backup server will pick up the shared IP address until the master becomes available again. Preemption is disabled by default. If preemption has been enabled, hostc.example.org might not release the virtual IP address back to the original master server. The administrator can force the backup server to return the IP address to the master with the command: &prompt.root; ifconfig em0 vhid 1 state backup Once the configuration is complete, either restart networking or reboot each system. High availability is now enabled. CARP functionality can be controlled via several &man.sysctl.8; variables documented in the &man.carp.4; manual pages. Other actions can be triggered from CARP events by using &man.devd.8;. Using <acronym>CARP</acronym> on &os; 9 and Earlier The configuration for these versions of &os; is similar to the one described in the previous section, except that a CARP device must first be created and referred to in the configuration. Enable boot-time support for CARP by loading the if_carp.ko kernel module in /boot/loader.conf: if_carp_load="YES" To load the module now without rebooting: &prompt.root; kldload carp For users who prefer to use a custom kernel, include the following line in the custom kernel configuration file and compile the kernel as described in : device carp Next, on each host, create a CARP device: &prompt.root; ifconfig carp0 create Set the hostname, management IP address, the shared IP address, and VHID by adding the required lines to /etc/rc.conf. Since a virtual CARP device is used instead of an alias, the actual subnet mask of /24 is used instead of /32. Here are the entries for hosta.example.org: hostname="hosta.example.org" ifconfig_fxp0="inet 192.168.1.3 netmask 255.255.255.0" cloned_interfaces="carp0" ifconfig_carp0="vhid 1 pass testpass 192.168.1.50/24" On hostb.example.org: hostname="hostb.example.org" ifconfig_fxp0="inet 192.168.1.4 netmask 255.255.255.0" cloned_interfaces="carp0" ifconfig_carp0="vhid 2 pass testpass 192.168.1.51/24" The third machine, hostc.example.org, is configured to handle failover from either of the master hosts: hostname="hostc.example.org" ifconfig_fxp0="inet 192.168.1.5 netmask 255.255.255.0" cloned_interfaces="carp0 carp1" ifconfig_carp0="vhid 1 advskew 100 pass testpass 192.168.1.50/24" ifconfig_carp1="vhid 2 advskew 100 pass testpass 192.168.1.51/24" Preemption is disabled in the GENERIC &os; kernel. If preemption has been enabled with a custom kernel, hostc.example.org may not release the IP address back to the original content server. The administrator can force the backup server to return the IP address to the master with the command: &prompt.root; ifconfig carp0 down && ifconfig carp0 up This should be done on the carp interface which corresponds to the correct host. Once the configuration is complete, either restart networking or reboot each system. High availability is now enabled.
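As an example of triggering actions from CARP events with &man.devd.8;, the following /etc/devd.conf entry, intended for the &os; 10 and later configuration shown above, runs a script whenever this host becomes master for a VHID. This is a minimal sketch: the matched strings follow the CARP notification format described in &man.carp.4;, and /usr/local/sbin/carp-up.sh is a hypothetical script supplied by the administrator.

notify 0 {
	match "system"		"CARP";
	match "subsystem"	"[0-9]+@[a-z0-9]+";
	match "type"		"MASTER";
	action "/usr/local/sbin/carp-up.sh $subsystem";
};

A second entry matching type BACKUP could stop services or send an alert when master status is lost. Restart &man.devd.8; with service devd restart after editing the file.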
Index: head/en_US.ISO8859-1/books/handbook/basics/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/basics/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/basics/chapter.xml (revision 46272) @@ -1,3438 +1,3438 @@ UNIX Basics Synopsis This chapter covers the basic commands and functionality of the &os; operating system. Much of this material is relevant for any &unix;-like operating system. New &os; users are encouraged to read through this chapter carefully. After reading this chapter, you will know: How to use and configure virtual consoles. How to create and manage users and groups on &os;. How &unix; file permissions and &os; file flags work. The default &os; file system layout. The &os; disk organization. How to mount and unmount file systems. What processes, daemons, and signals are. What a shell is, and how to change the default login environment. How to use basic text editors. What devices and device nodes are. How to read manual pages for more information. Virtual Consoles and Terminals virtual consoles terminals console Unless &os; has been configured to automatically start a graphical environment during startup, the system will boot into a command line login prompt, as seen in this example: FreeBSD/amd64 (pc3.example.org) (ttyv0) login: The first line contains some information about the system. The amd64 indicates that the system in this example is running a 64-bit version of &os;. The hostname is pc3.example.org, and ttyv0 indicates that this is the system console. The second line is the login prompt. Since &os; is a multiuser system, it needs some way to distinguish between different users. This is accomplished by requiring every user to log into the system before gaining access to the programs on the system. Every user has a unique name username and a personal password. To log into the system console, type the username that was configured during system installation, as described in , and press Enter. Then enter the password associated with the username and press Enter. The password is not echoed for security reasons. Once the correct password is input, the message of the day (MOTD) will be displayed followed by a command prompt. Depending upon the shell that was selected when the user was created, this prompt will be a #, $, or % character. The prompt indicates that the user is now logged into the &os; system console and ready to try the available commands. Virtual Consoles While the system console can be used to interact with the system, a user working from the command line at the keyboard of a &os; system will typically instead log into a virtual console. This is because system messages are configured by default to display on the system console. These messages will appear over the command or file that the user is working on, making it difficult to concentrate on the work at hand. By default, &os; is configured to provide several virtual consoles for inputting commands. Each virtual console has its own login prompt and shell and it is easy to switch between virtual consoles. This essentially provides the command line equivalent of having several windows open at the same time in a graphical environment. The key combinations AltF1 through AltF8 have been reserved by &os; for switching between virtual consoles. Use AltF1 to switch to the system console (ttyv0), AltF2 to access the first virtual console (ttyv1), AltF3 to access the second virtual console (ttyv2), and so on. 
When switching from one console to the next, &os; manages the screen output. The result is an illusion of having multiple virtual screens and keyboards that can be used to type commands for &os; to run. The programs that are launched in one virtual console do not stop running when the user switches to a different virtual console. Refer to &man.syscons.4;, &man.atkbd.4;, &man.vidcontrol.1; and &man.kbdcontrol.1; for a more technical description of the &os; console and its keyboard drivers. In &os;, the number of available virtual consoles is configured in this section of /etc/ttys: # name getty type status comments # ttyv0 "/usr/libexec/getty Pc" xterm on secure # Virtual terminals ttyv1 "/usr/libexec/getty Pc" xterm on secure ttyv2 "/usr/libexec/getty Pc" xterm on secure ttyv3 "/usr/libexec/getty Pc" xterm on secure ttyv4 "/usr/libexec/getty Pc" xterm on secure ttyv5 "/usr/libexec/getty Pc" xterm on secure ttyv6 "/usr/libexec/getty Pc" xterm on secure ttyv7 "/usr/libexec/getty Pc" xterm on secure ttyv8 "/usr/X11R6/bin/xdm -nodaemon" xterm off secure To disable a virtual console, put a comment symbol (#) at the beginning of the line representing that virtual console. For example, to reduce the number of available virtual consoles from eight to four, put a # in front of the last four lines representing virtual consoles ttyv5 through ttyv8. Do not comment out the line for the system console ttyv0. Note that the last virtual console (ttyv8) is used to access the graphical environment if &xorg; has been installed and configured as described in . For a detailed description of every column in this file and the available options for the virtual consoles, refer to &man.ttys.5;. Single User Mode The &os; boot menu provides an option labelled as Boot Single User. If this option is selected, the system will boot into a special mode known as single user mode. This mode is typically used to repair a system that will not boot or to reset the root password when it is not known. While in single user mode, networking and other virtual consoles are not available. However, full root access to the system is available, and by default, the root password is not needed. For these reasons, physical access to the keyboard is needed to boot into this mode and determining who has physical access to the keyboard is something to consider when securing a &os; system. The settings which control single user mode are found in this section of /etc/ttys: # name getty type status comments # # If console is marked "insecure", then init will ask for the root password # when going to single-user mode. console none unknown off secure By default, the status is set to secure. This assumes that who has physical access to the keyboard is either not important or it is controlled by a physical security policy. If this setting is changed to insecure, the assumption is that the environment itself is insecure because anyone can access the keyboard. When this line is changed to insecure, &os; will prompt for the root password when a user selects to boot into single user mode. Be careful when changing this setting to insecure! If the root password is forgotten, booting into single user mode is still possible, but may be difficult for someone who is not familiar with the &os; booting process. Changing Console Video Modes The &os; console default video mode may be adjusted to 1024x768, 1280x1024, or any other size supported by the graphics chip and monitor. 
To use a different video mode load the VESA module: &prompt.root; kldload vesa To determine which video modes are supported by the hardware, use &man.vidcontrol.1;. To get a list of supported video modes issue the following: &prompt.root; vidcontrol -i mode The output of this command lists the video modes that are supported by the hardware. To select a new video mode, specify the mode using &man.vidcontrol.1; as the root user: &prompt.root; vidcontrol MODE_279 If the new video mode is acceptable, it can be permanently set on boot by adding it to /etc/rc.conf: allscreens_flags="MODE_279" Users and Basic Account Management &os; allows multiple users to use the computer at the same time. While only one user can sit in front of the screen and use the keyboard at any one time, any number of users can log in to the system through the network. To use the system, each user should have their own user account. This chapter describes: The different types of user accounts on a &os; system. How to add, remove, and modify user accounts. How to set limits to control the resources that users and groups are allowed to access. How to create groups and add users as members of a group. Account Types Since all access to the &os; system is achieved using accounts and all processes are run by users, user and account management is important. There are three main types of accounts: system accounts, user accounts, and the superuser account. System Accounts accounts system System accounts are used to run services such as DNS, mail, and web servers. The reason for this is security; if all services ran as the superuser, they could act without restriction. accounts daemon accounts operator Examples of system accounts are daemon, operator, bind, news, and www. accounts nobody nobody is the generic unprivileged system account. However, the more services that use nobody, the more files and processes that user will become associated with, and hence the more privileged that user becomes. User Accounts accounts user User accounts are assigned to real people and are used to log in and use the system. Every person accessing the system should have a unique user account. This allows the administrator to find out who is doing what and prevents users from clobbering the settings of other users. Each user can set up their own environment to accommodate their use of the system, by configuring their default shell, editor, key bindings, and language settings. Every user account on a &os; system has certain information associated with it: User name The user name is typed at the login: prompt. Each user must have a unique user name. There are a number of rules for creating valid user names which are documented in &man.passwd.5;. It is recommended to use user names that consist of eight or fewer, all lower case characters in order to maintain backwards compatibility with applications. Password Each account has an associated password. User ID (UID) The User ID (UID) is a number used to uniquely identify the user to the &os; system. Commands that allow a user name to be specified will first convert it to the UID. It is recommended to use a UID less than 65535, since higher values may cause compatibility issues with some software. Group ID (GID) The Group ID (GID) is a number used to uniquely identify the primary group that the user belongs to. Groups are a mechanism for controlling access to resources based on a user's GID rather than their UID. 
This can significantly reduce the size of some configuration files and allows users to be members of more than one group. It is recommended to use a GID of 65535 or lower as higher GIDs may break some software. Login class Login classes are an extension to the group mechanism that provide additional flexibility when tailoring the system to different users. Login classes are discussed further in . Password change time By default, passwords do not expire. However, password expiration can be enabled on a per-user basis, forcing some or all users to change their passwords after a certain amount of time has elapsed. Account expiry time By default, &os; does not expire accounts. When creating accounts that need a limited lifespan, such as student accounts in a school, specify the account expiry date using &man.pw.8;. After the expiry time has elapsed, the account cannot be used to log in to the system, although the account's directories and files will remain. User's full name The user name uniquely identifies the account to &os;, but does not necessarily reflect the user's real name. Similar to a comment, this information can contain spaces, uppercase characters, and be more than 8 characters long. Home directory The home directory is the full path to a directory on the system. This is the user's starting directory when the user logs in. A common convention is to put all user home directories under /home/username or /usr/home/username. Each user stores their personal files and subdirectories in their own home directory. User shell The shell provides the user's default environment for interacting with the system. There are many different kinds of shells and experienced users will have their own preferences, which can be reflected in their account settings. The Superuser Account accounts superuser (root) The superuser account, usually called root, is used to manage the system with no limitations on privileges. For this reason, it should not be used for day-to-day tasks like sending and receiving mail, general exploration of the system, or programming. The superuser, unlike other user accounts, can operate without limits, and misuse of the superuser account may result in spectacular disasters. User accounts are unable to destroy the operating system by mistake, so it is recommended to login as a user account and to only become the superuser when a command requires extra privilege. Always double and triple-check any commands issued as the superuser, since an extra space or missing character can mean irreparable data loss. There are several ways to gain superuser privilege. While one can log in as root, this is highly discouraged. Instead, use &man.su.1; to become the superuser. If - is specified when running this command, the user will also inherit the root user's environment. The user running this command must be in the wheel group or else the command will fail. The user must also know the password for the root user account. In this example, the user only becomes superuser in order to run make install as this step requires superuser privilege. Once the command completes, the user types exit to leave the superuser account and return to the privilege of their user account. Install a Program As the Superuser &prompt.user; configure &prompt.user; make &prompt.user; su - Password: &prompt.root; make install &prompt.root; exit &prompt.user; The built-in &man.su.1; framework works well for single systems or small networks with just one system administrator.
An alternative is to install the security/sudo package or port. This software provides activity logging and allows the administrator to configure which users can run which commands as the superuser. Managing Accounts accounts modifying &os; provides a variety of different commands to manage user accounts. The most common commands are summarized in , followed by some examples of their usage. See the manual page for each utility for more details and usage examples. Utilities for Managing User Accounts Command Summary &man.adduser.8; The recommended command-line application for adding new users. &man.rmuser.8; The recommended command-line application for removing users. &man.chpass.1; A flexible tool for changing user database information. &man.passwd.1; The command-line tool to change user passwords. &man.pw.8; A powerful and flexible tool for modifying all aspects of user accounts.
<command>adduser</command> accounts adding adduser /usr/share/skel skeleton directory The recommended program for adding new users is &man.adduser.8;. When a new user is added, this program automatically updates /etc/passwd and /etc/group. It also creates a home directory for the new user, copies in the default configuration files from /usr/share/skel, and can optionally mail the new user a welcome message. This utility must be run as the superuser. The &man.adduser.8; utility is interactive and walks through the steps for creating a new user account. As seen in , either input the required information or press Return to accept the default value shown in square brackets. In this example, the user has been invited into the wheel group, allowing them to become the superuser with &man.su.1;. When finished, the utility will prompt to either create another user or to exit. Adding a User on &os; &prompt.root; adduser Username: jru Full name: J. Random User Uid (Leave empty for default): Login group [jru]: Login group is jru. Invite jru into other groups? []: wheel Login class [default]: Shell (sh csh tcsh zsh nologin) [sh]: zsh Home directory [/home/jru]: Home directory permissions (Leave empty for default): Use password-based authentication? [yes]: Use an empty password? (yes/no) [no]: Use a random password? (yes/no) [no]: Enter password: Enter password again: Lock out the account after creation? [no]: Username : jru Password : **** Full Name : J. Random User Uid : 1001 Class : Groups : jru wheel Home : /home/jru Shell : /usr/local/bin/zsh Locked : no OK? (yes/no): yes adduser: INFO: Successfully added (jru) to the user database. Add another user? (yes/no): no Goodbye! &prompt.root; Since the password is not echoed when typed, be careful to not mistype the password when creating the user account. <command>rmuser</command> rmuser accounts removing To completely remove a user from the system, run &man.rmuser.8; as the superuser. This command performs the following steps: Removes the user's &man.crontab.1; entry, if one exists. Removes any &man.at.1; jobs belonging to the user. Kills all processes owned by the user. Removes the user from the system's local password file. Optionally removes the user's home directory, if it is owned by the user. Removes the incoming mail files belonging to the user from /var/mail. Removes all files owned by the user from temporary file storage areas such as /tmp. Finally, removes the username from all groups to which it belongs in /etc/group. If a group becomes empty and the group name is the same as the username, the group is removed. This complements the per-user unique groups created by &man.adduser.8;. &man.rmuser.8; cannot be used to remove superuser accounts since that is almost always an indication of massive destruction. By default, an interactive mode is used, as shown in the following example. <command>rmuser</command> Interactive Account Removal &prompt.root; rmuser jru Matching password entry: jru:*:1001:1001::0:0:J. Random User:/home/jru:/usr/local/bin/zsh Is this the entry you wish to remove? y Remove user's home directory (/home/jru)? y Removing user (jru): mailspool home passwd. &prompt.root; <command>chpass</command> chpass Any user can use &man.chpass.1; to change their default shell and personal information associated with their user account. The superuser can use this utility to change additional account information for any user. When passed no options, aside from an optional username, &man.chpass.1; displays an editor containing user information. 
When the user exits from the editor, the user database is updated with the new information. This utility will prompt for the user's password when exiting the editor, unless the utility is run as the superuser. In , the superuser has typed chpass jru and is now viewing the fields that can be changed for this user. If jru runs this command instead, only the last six fields will be displayed and available for editing. This is shown in . Using <command>chpass</command> as Superuser #Changing user database information for jru. Login: jru Password: * Uid [#]: 1001 Gid [# or name]: 1001 Change [month day year]: Expire [month day year]: Class: Home directory: /home/jru Shell: /usr/local/bin/zsh Full Name: J. Random User Office Location: Office Phone: Home Phone: Other information: Using <command>chpass</command> as Regular User #Changing user database information for jru. Shell: /usr/local/bin/zsh Full Name: J. Random User Office Location: Office Phone: Home Phone: Other information: The commands &man.chfn.1; and &man.chsh.1; are links to &man.chpass.1;, as are &man.ypchpass.1;, &man.ypchfn.1;, and &man.ypchsh.1;. Since NIS support is automatic, specifying the yp before the command is not necessary. How to configure NIS is covered in . <command>passwd</command> passwd accounts changing password Any user can easily change their password using &man.passwd.1;. To prevent accidental or unauthorized changes, this command will prompt for the user's original password before a new password can be set: Changing Your Password &prompt.user; passwd Changing local password for jru. Old password: New password: Retype new password: passwd: updating the database... passwd: done The superuser can change any user's password by specifying the username when running &man.passwd.1;. When this utility is run as the superuser, it will not prompt for the user's current password. This allows the password to be changed when a user cannot remember the original password. Changing Another User's Password as the Superuser &prompt.root; passwd jru Changing local password for jru. New password: Retype new password: passwd: updating the database... passwd: done As with &man.chpass.1;, &man.yppasswd.1; is a link to &man.passwd.1;, so NIS works with either command. <command>pw</command> pw The &man.pw.8; utility can create, remove, modify, and display users and groups. It functions as a front end to the system user and group files. &man.pw.8; has a very powerful set of command line options that make it suitable for use in shell scripts, but new users may find it more complicated than the other commands presented in this section.
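For example, the jru account created interactively with &man.adduser.8; earlier in this section could instead be created non-interactively with &man.pw.8;. The following is only a sketch rather than a full equivalent of the &man.adduser.8; run: -m creates the home directory, -G adds the account to wheel, -s sets the login shell, and -c sets the full name, while the password is set separately with &man.passwd.1;:

&prompt.root; pw useradd jru -m -G wheel -s /usr/local/bin/zsh -c "J. Random User"
&prompt.root; passwd jru

A single field can later be changed with pw usermod, for example pw usermod jru -s /bin/csh to switch the login shell.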
Managing Groups groups /etc/groups accounts groups A group is a list of users. A group is identified by its group name and GID. In &os;, the kernel uses the UID of a process, and the list of groups it belongs to, to determine what the process is allowed to do. Most of the time, the GID of a user or process usually means the first group in the list. The group name to GID mapping is listed in /etc/group. This is a plain text file with four colon-delimited fields. The first field is the group name, the second is the encrypted password, the third the GID, and the fourth the comma-delimited list of members. For a more complete description of the syntax, refer to &man.group.5;. The superuser can modify /etc/group using a text editor. Alternatively, &man.pw.8; can be used to add and edit groups. For example, to add a group called teamtwo and then confirm that it exists: Adding a Group Using &man.pw.8; &prompt.root; pw groupadd teamtwo &prompt.root; pw groupshow teamtwo teamtwo:*:1100: In this example, 1100 is the GID of teamtwo. Right now, teamtwo has no members. This command will add jru as a member of teamtwo. Adding User Accounts to a New Group Using &man.pw.8; &prompt.root; pw groupmod teamtwo -M jru &prompt.root; pw groupshow teamtwo teamtwo:*:1100:jru The argument to is a comma-delimited list of users to be added to a new (empty) group or to replace the members of an existing group. To the user, this group membership is different from (and in addition to) the user's primary group listed in the password file. This means that the user will not show up as a member when using with &man.pw.8;, but will show up when the information is queried via &man.id.1; or a similar tool. When &man.pw.8; is used to add a user to a group, it only manipulates /etc/group and does not attempt to read additional data from /etc/passwd. Adding a New Member to a Group Using &man.pw.8; &prompt.root; pw groupmod teamtwo -m db &prompt.root; pw groupshow teamtwo teamtwo:*:1100:jru,db In this example, the argument to is a comma-delimited list of users who are to be added to the group. Unlike the previous example, these users are appended to the group and do not replace existing users in the group. Using &man.id.1; to Determine Group Membership &prompt.user; id jru uid=1001(jru) gid=1001(jru) groups=1001(jru), 1100(teamtwo) In this example, jru is a member of the groups jru and teamtwo. For more information about this command and the format of /etc/group, refer to &man.pw.8; and &man.group.5;.
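Since the argument to -M replaces the member list outright, the same option can be used to remove a member by listing only the users who should remain. Continuing the teamtwo example, the following sketch drops db from the group; the groupshow output assumes the group state from the examples above:

&prompt.root; pw groupmod teamtwo -M jru
&prompt.root; pw groupshow teamtwo
teamtwo:*:1100:jru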
Permissions UNIX In &os;, every file and directory has an associated set of permissions and several utilities are available for viewing and modifying these permissions. Understanding how permissions work is necessary to make sure that users are able to access the files that they need and are unable to improperly access the files used by the operating system or owned by other users. This section discusses the traditional &unix; permissions used in &os;. For finer grained file system access control, refer to . In &unix;, basic permissions are assigned using three types of access: read, write, and execute. These access types are used to determine file access to the file's owner, group, and others (everyone else). The read, write, and execute permissions can be represented as the letters r, w, and x. They can also be represented as binary numbers as each permission is either on or off (0). When represented as a number, the order is always read as rwx, where r has an on value of 4, w has an on value of 2 and x has an on value of 1. Table 4.1 summarizes the possible numeric and alphabetic possibilities. When reading the Directory Listing column, a - is used to represent a permission that is set to off. permissions file permissions &unix; Permissions Value Permission Directory Listing 0 No read, no write, no execute --- 1 No read, no write, execute --x 2 No read, write, no execute -w- 3 No read, write, execute -wx 4 Read, no write, no execute r-- 5 Read, no write, execute r-x 6 Read, write, no execute rw- 7 Read, write, execute rwx
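As a worked example of combining these values, read, write, and execute for the owner (4+2+1=7), read and execute for the group (4+1=5), and no access for everyone else (0) produce a mode of 750. The file name and listing output below are hypothetical, but show how the resulting permissions appear:

&prompt.user; chmod 750 script.sh
&prompt.user; ls -l script.sh
-rwxr-x---  1 jru  jru  512 Sep  5 12:31 script.sh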
&man.ls.1; directories Use the argument to &man.ls.1; to view a long directory listing that includes a column of information about a file's permissions for the owner, group, and everyone else. For example, a ls -l in an arbitrary directory may show: &prompt.user; ls -l total 530 -rw-r--r-- 1 root wheel 512 Sep 5 12:31 myfile -rw-r--r-- 1 root wheel 512 Sep 5 12:31 otherfile -rw-r--r-- 1 root wheel 7680 Sep 5 12:31 email.txt The first (leftmost) character in the first column indicates whether this file is a regular file, a directory, a special character device, a socket, or any other special pseudo-file device. In this example, the - indicates a regular file. The next three characters, rw- in this example, give the permissions for the owner of the file. The next three characters, r--, give the permissions for the group that the file belongs to. The final three characters, r--, give the permissions for the rest of the world. A dash means that the permission is turned off. In this example, the permissions are set so the owner can read and write to the file, the group can read the file, and the rest of the world can only read the file. According to the table above, the permissions for this file would be 644, where each digit represents the three parts of the file's permission. How does the system control permissions on devices? &os; treats most hardware devices as a file that programs can open, read, and write data to. These special device files are stored in /dev/. Directories are also treated as files. They have read, write, and execute permissions. The executable bit for a directory has a slightly different meaning than that of files. When a directory is marked executable, it means it is possible to change into that directory using &man.cd.1;. This also means that it is possible to access the files within that directory, subject to the permissions on the files themselves. In order to perform a directory listing, the read permission must be set on the directory. In order to delete a file that one knows the name of, it is necessary to have write and execute permissions to the directory containing the file. There are more permission bits, but they are primarily used in special circumstances such as setuid binaries and sticky directories. For more information on file permissions and how to set them, refer to &man.chmod.1;. Symbolic Permissions Tom Rhodes Contributed by permissions symbolic Symbolic permissions use characters instead of octal values to assign permissions to files or directories. Symbolic permissions use the syntax of (who) (action) (permissions), where the following values are available: Option Letter Represents (who) u User (who) g Group owner (who) o Other (who) a All (world) (action) + Adding permissions (action) - Removing permissions (action) = Explicitly set permissions (permissions) r Read (permissions) w Write (permissions) x Execute (permissions) t Sticky bit (permissions) s Set UID or GID These values are used with &man.chmod.1;, but with letters instead of numbers. For example, the following command would block other users from accessing FILE: &prompt.user; chmod go= FILE A comma separated list can be provided when more than one set of changes to a file must be made. For example, the following command removes the group and world write permission on FILE, and adds the execute permissions for everyone: &prompt.user; chmod go-w,a+x FILE &os; File Flags Tom Rhodes Contributed by In addition to file permissions, &os; supports the use of file flags. 
These flags add an additional level of security and control over files, but not directories. With file flags, even root can be prevented from removing or altering files. File flags are modified using &man.chflags.1;. For example, to enable the system undeletable flag on the file file1, issue the following command: &prompt.root; chflags sunlink file1 To disable the system undeletable flag, put a no in front of the : &prompt.root; chflags nosunlink file1 To view the flags of a file, use with &man.ls.1;: &prompt.root; ls -lo file1 -rw-r--r-- 1 trhodes trhodes sunlnk 0 Mar 1 05:54 file1 Several file flags may only be added or removed by the root user. In other cases, the file owner may set its file flags. Refer to &man.chflags.1; and &man.chflags.2; for more information. The <literal>setuid</literal>, <literal>setgid</literal>, and <literal>sticky</literal> Permissions Tom Rhodes Contributed by Other than the permissions already discussed, there are three other specific settings that all administrators should know about. They are the setuid, setgid, and sticky permissions. These settings are important for some &unix; operations as they provide functionality not normally granted to normal users. To understand them, the difference between the real user ID and effective user ID must be noted. The real user ID is the UID who owns or starts the process. The effective UID is the user ID the process runs as. As an example, &man.passwd.1; runs with the real user ID when a user changes their password. However, in order to update the password database, the command runs as the effective ID of the root user. This allows users to change their passwords without seeing a Permission Denied error. The setuid permission may be set by prefixing a permission set with the number four (4) as shown in the following example: &prompt.root; chmod 4755 suidexample.sh The permissions on suidexample.sh now look like the following: -rwsr-xr-x 1 trhodes trhodes 63 Aug 29 06:36 suidexample.sh Note that a s is now part of the permission set designated for the file owner, replacing the executable bit. This allows utilities which need elevated permissions, such as &man.passwd.1;. The nosuid &man.mount.8; option will cause such binaries to silently fail without alerting the user. That option is not completely reliable as a nosuid wrapper may be able to circumvent it. To view this in real time, open two terminals. On one, type passwd as a normal user. While it waits for a new password, check the process table and look at the user information for &man.passwd.1;: In terminal A: Changing local password for trhodes Old Password: In terminal B: &prompt.root; ps aux | grep passwd trhodes 5232 0.0 0.2 3420 1608 0 R+ 2:10AM 0:00.00 grep passwd root 5211 0.0 0.2 3620 1724 2 I+ 2:09AM 0:00.01 passwd Although &man.passwd.1; is run as a normal user, it is using the effective UID of root. The setgid permission performs the same function as the setuid permission; except that it alters the group settings. When an application or utility executes with this setting, it will be granted the permissions based on the group that owns the file, not the user who started the process. 
To set the setgid permission on a file, provide &man.chmod.1; with a leading two (2): &prompt.root; chmod 2755 sgidexample.sh In the following listing, notice that the s is now in the field designated for the group permission settings: -rwxr-sr-x 1 trhodes trhodes 44 Aug 31 01:49 sgidexample.sh In these examples, even though the shell script in question is an executable file, it will not run with a different EUID or effective user ID. This is because shell scripts may not access the &man.setuid.2; system calls. The setuid and setgid permission bits may lower system security, by allowing for elevated permissions. The third special permission, the sticky bit, can strengthen the security of a system. When the sticky bit is set on a directory, it allows file deletion only by the file owner. This is useful to prevent file deletion in public directories, such as /tmp, by users who do not own the file. To utilize this permission, prefix the permission set with a one (1): &prompt.root; chmod 1777 /tmp The sticky bit permission will display as a t at the very end of the permission set: &prompt.root; ls -al / | grep tmp drwxrwxrwt 10 root wheel 512 Aug 31 01:49 tmp
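To see the sticky bit in action, the following sketch creates a world-writable directory with the sticky bit set, one user creates a file in it, and a different unprivileged user then tries to delete that file. The directory name, file name, and exact error text are hypothetical, but the refusal illustrates the behavior described above:

&prompt.root; mkdir -m 1777 /shared
&prompt.user; touch /shared/notes.txt
&prompt.user; rm -f /shared/notes.txt
rm: /shared/notes.txt: Operation not permitted

The rm command here is run by a user who does not own notes.txt; the file owner, or root, can still remove it.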
Directory Structure directory hierarchy The &os; directory hierarchy is fundamental to obtaining an overall understanding of the system. The most important directory is root, or /. This directory is the first one mounted at boot time and it contains the base system necessary to prepare the operating system for multi-user operation. The root directory also contains mount points for other file systems that are mounted during the transition to multi-user operation. A mount point is a directory where additional file systems can be grafted onto a parent file system (usually the root file system). This is further described in . Standard mount points include /usr/, /var/, /tmp/, /mnt/, and /cdrom/. These directories are usually referenced to entries in /etc/fstab. This file is a table of various file systems and mount points and is read by the system. Most of the file systems in /etc/fstab are mounted automatically at boot time from the script &man.rc.8; unless their entry includes . Details can be found in . A complete description of the file system hierarchy is available in &man.hier.7;. The following table provides a brief overview of the most common directories. Directory Description / Root directory of the file system. /bin/ User utilities fundamental to both single-user and multi-user environments. /boot/ Programs and configuration files used during operating system bootstrap. /boot/defaults/ Default boot configuration files. Refer to &man.loader.conf.5; for details. /dev/ Device nodes. Refer to &man.intro.4; for details. /etc/ System configuration files and scripts. /etc/defaults/ Default system configuration files. Refer to &man.rc.8; for details. /etc/mail/ Configuration files for mail transport agents such as &man.sendmail.8;. /etc/namedb/ &man.named.8; configuration files. /etc/periodic/ Scripts that run daily, weekly, and monthly, via &man.cron.8;. Refer to &man.periodic.8; for details. /etc/ppp/ &man.ppp.8; configuration files. /mnt/ Empty directory commonly used by system administrators as a temporary mount point. /proc/ Process file system. Refer to &man.procfs.5;, &man.mount.procfs.8; for details. /rescue/ Statically linked programs for emergency recovery as described in &man.rescue.8;. /root/ Home directory for the root account. /sbin/ System programs and administration utilities fundamental to both single-user and multi-user environments. /tmp/ Temporary files which are usually not preserved across a system reboot. A memory-based file system is often mounted at /tmp. This can be automated using the tmpmfs-related variables of &man.rc.conf.5; or with an entry in /etc/fstab; refer to &man.mdmfs.8; for details. /usr/ The majority of user utilities and applications. /usr/bin/ Common utilities, programming tools, and applications. /usr/include/ Standard C include files. /usr/lib/ Archive libraries. /usr/libdata/ Miscellaneous utility data files. /usr/libexec/ System daemons and system utilities executed by other programs. /usr/local/ Local executables and libraries. Also used as the default destination for the &os; ports framework. Within /usr/local, the general layout sketched out by &man.hier.7; for /usr should be used. Exceptions are the man directory, which is directly under /usr/local rather than under /usr/local/share, and the ports documentation is in share/doc/port. /usr/obj/ Architecture-specific target tree produced by building the /usr/src tree.
/usr/ports/ The &os; Ports Collection (optional). /usr/sbin/ System daemons and system utilities executed by users. /usr/share/ Architecture-independent files. /usr/src/ BSD and/or local source files. /var/ Multi-purpose log, temporary, transient, and spool files. A memory-based file system is sometimes mounted at /var. This can be automated using the varmfs-related variables in &man.rc.conf.5; or with an entry in /etc/fstab; refer to &man.mdmfs.8; for details. /var/log/ Miscellaneous system log files. /var/mail/ User mailbox files. /var/spool/ Miscellaneous printer and mail system spooling directories. /var/tmp/ Temporary files which are usually preserved across a system reboot, unless /var is a memory-based file system. /var/yp/ NIS maps. Disk Organization The smallest unit of organization that &os; uses to find files is the filename. Filenames are case-sensitive, which means that readme.txt and README.TXT are two separate files. &os; does not use the extension of a file to determine whether the file is a program, document, or some other form of data. Files are stored in directories. A directory may contain no files, or it may contain many hundreds of files. A directory can also contain other directories, allowing a hierarchy of directories within one another in order to organize data. Files and directories are referenced by giving the file or directory name, followed by a forward slash, /, followed by any other directory names that are necessary. For example, if the directory foo contains a directory bar which contains the file readme.txt, the full name, or path, to the file is foo/bar/readme.txt. Note that this is different from &windows; which uses \ to separate file and directory names. &os; does not use drive letters, or other drive names in the path. For example, one would not type c:\foo\bar\readme.txt on &os;. Directories and files are stored in a file system. Each file system contains exactly one directory at the very top level, called the root directory for that file system. This root directory can contain other directories. One file system is designated the root file system or /. Every other file system is mounted under the root file system. No matter how many disks are on the &os; system, every directory appears to be part of the same disk. Consider three file systems, called A, B, and C. Each file system has one root directory, which contains two other directories, called A1, A2 (and likewise B1, B2 and C1, C2). Call A the root file system. If &man.ls.1; is used to view the contents of this directory, it will show two subdirectories, A1 and A2. The directory tree looks like this: / | +--- A1 | `--- A2 A file system must be mounted on to a directory in another file system. When mounting file system B on to the directory A1, the root directory of B replaces A1, and the directories in B appear accordingly: / | +--- A1 | | | +--- B1 | | | `--- B2 | `--- A2 Any files that are in the B1 or B2 directories can be reached with the path /A1/B1 or /A1/B2 as necessary. Any files that were in /A1 have been temporarily hidden. They will reappear if B is unmounted from A. If B had been mounted on A2 then the diagram would look like this: / | +--- A1 | `--- A2 | +--- B1 | `--- B2 and the paths would be /A2/B1 and /A2/B2 respectively. File systems can be mounted on top of one another. 
Continuing the last example, the C file system could be mounted on top of the B1 directory in the B file system, leading to this arrangement: / | +--- A1 | `--- A2 | +--- B1 | | | +--- C1 | | | `--- C2 | `--- B2 Or C could be mounted directly on to the A file system, under the A1 directory: / | +--- A1 | | | +--- C1 | | | `--- C2 | `--- A2 | +--- B1 | `--- B2 It is entirely possible to have one large root file system, and not need to create any others. There are some drawbacks to this approach, and one advantage. Benefits of Multiple File Systems Different file systems can have different mount options. For example, the root file system can be mounted read-only, making it impossible for users to inadvertently delete or edit a critical file. Separating user-writable file systems, such as /home, from other file systems allows them to be mounted nosuid. This option prevents the suid/guid bits on executables stored on the file system from taking effect, possibly improving security. &os; automatically optimizes the layout of files on a file system, depending on how the file system is being used. So a file system that contains many small files that are written frequently will have a different optimization to one that contains fewer, larger files. By having one big file system this optimization breaks down. &os;'s file systems are robust if power is lost. However, a power loss at a critical point could still damage the structure of the file system. By splitting data over multiple file systems it is more likely that the system will still come up, making it easier to restore from backup as necessary. Benefit of a Single File System File systems are a fixed size. If you create a file system when you install &os; and give it a specific size, you may later discover that you need to make the partition bigger. This is not easily accomplished without backing up, recreating the file system with the new size, and then restoring the backed up data. &os; features the &man.growfs.8; command, which makes it possible to increase the size of file system on the fly, removing this limitation. File systems are contained in partitions. This does not have the same meaning as the common usage of the term partition (for example, &ms-dos; partition), because of &os;'s &unix; heritage. Each partition is identified by a letter from a through to h. Each partition can contain only one file system, which means that file systems are often described by either their typical mount point in the file system hierarchy, or the letter of the partition they are contained in. &os; also uses disk space for swap space to provide virtual memory. This allows your computer to behave as though it has much more memory than it actually does. When &os; runs out of memory, it moves some of the data that is not currently being used to the swap space, and moves it back in (moving something else out) when it needs it. Some partitions have certain conventions associated with them. Partition Convention a Normally contains the root file system. b Normally contains swap space. c Normally the same size as the enclosing slice. This allows utilities that need to work on the entire slice, such as a bad block scanner, to work on the c partition. A file system would not normally be created on this partition. d Partition d used to have a special meaning associated with it, although that is now gone and d may work as any normal partition. Disks in &os; are divided into slices, referred to in &windows; as partitions, which are numbered from 1 to 4. 
These are then divided into partitions, which contain file systems, and are labeled using letters. slices partitions dangerously dedicated Slice numbers follow the device name, prefixed with an s, starting at 1. So da0s1 is the first slice on the first SCSI drive. There can only be four physical slices on a disk, but there can be logical slices inside physical slices of the appropriate type. These extended slices are numbered starting at 5, so ada0s5 is the first extended slice on the first SATA disk. These devices are used by file systems that expect to occupy a slice. Slices, dangerously dedicated physical drives, and other drives contain partitions, which are represented as letters from a to h. This letter is appended to the device name, so da0a is the a partition on the first da drive, which is dangerously dedicated. ada1s3e is the fifth partition in the third slice of the second SATA disk drive. Finally, each disk on the system is identified. A disk name starts with a code that indicates the type of disk, and then a number, indicating which disk it is. Unlike slices, disk numbering starts at 0. Common codes are listed in . When referring to a partition, include the disk name, s, the slice number, and then the partition letter. Examples are shown in . shows a conceptual model of a disk layout. When installing &os;, configure the disk slices, create partitions within the slice to be used for &os;, create a file system or swap space in each partition, and decide where each file system will be mounted. Disk Device Names Drive Type Drive Device Name SATA and IDE hard drives ada or ad SCSI hard drives and USB storage devices da SATA and IDE CD-ROM drives cd or acd SCSI CD-ROM drives cd Floppy drives fd Assorted non-standard CD-ROM drives mcd for Mitsumi CD-ROM and scd for Sony CD-ROM devices SCSI tape drives sa IDE tape drives ast RAID drives Examples include aacd for &adaptec; AdvancedRAID, mlxd and mlyd for &mylex;, amrd for AMI &megaraid;, idad for Compaq Smart RAID, twed for &tm.3ware; RAID.
Sample Disk, Slice, and Partition Names Name Meaning ada0s1a The first partition (a) on the first slice (s1) on the first IDE disk (ada0). da1s2e The fifth partition (e) on the second slice (s2) on the second SCSI disk (da1). Conceptual Model of a Disk This diagram shows &os;'s view of the first IDE disk attached to the system. Assume that the disk is 4 GB in size, and contains two 2 GB slices (&ms-dos; partitions). The first slice contains a &ms-dos; disk, C:, and the second slice contains a &os; installation. This example &os; installation has three data partitions, and a swap partition. The three partitions will each hold a file system. Partition a will be used for the root file system, e for the /var/ directory hierarchy, and f for the /usr/ directory hierarchy. .-----------------. --. | | | | DOS / Windows | | : : > First slice, ad0s1 : : | | | | :=================: ==: --. | | | Partition a, mounted as / | | | > referred to as ad0s2a | | | | | :-----------------: ==: | | | | Partition b, used as swap | | | > referred to as ad0s2b | | | | | :-----------------: ==: | Partition c, no | | | Partition e, used as /var > file system, all | | > referred to as ad0s2e | of FreeBSD slice, | | | | ad0s2c :-----------------: ==: | | | | | : : | Partition f, used as /usr | : : > referred to as ad0s2f | : : | | | | | | | | --' | `-----------------' --'
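Putting the naming scheme together, a disk laid out like the conceptual model above exposes device nodes for the whole disk, for each slice, and for each partition inside the &os; slice. The listing below is a hypothetical sketch using the newer ada driver name rather than ad:

&prompt.root; ls /dev/ada0*
/dev/ada0	/dev/ada0s1	/dev/ada0s2	/dev/ada0s2a
/dev/ada0s2b	/dev/ada0s2c	/dev/ada0s2e	/dev/ada0s2f

Here ada0 is the entire disk, ada0s1 is the &ms-dos; slice, and ada0s2a through ada0s2f are the &os; partitions within the second slice.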
Mounting and Unmounting File Systems The file system is best visualized as a tree, rooted, as it were, at /. /dev, /usr, and the other directories in the root directory are branches, which may have their own branches, such as /usr/local, and so on. root file system There are various reasons to house some of these directories on separate file systems. /var contains the directories log/, spool/, and various types of temporary files, and as such, may get filled up. Filling up the root file system is not a good idea, so splitting /var from / is often favorable. Another common reason to contain certain directory trees on other file systems is if they are to be housed on separate physical disks, or are separate virtual disks, such as Network File System mounts, described in , or CDROM drives. The <filename>fstab</filename> File file systems mounted with fstab During the boot process (), file systems listed in /etc/fstab are automatically mounted except for the entries containing . This file contains entries in the following format: device /mount-point fstype options dumpfreq passno device An existing device name as explained in . mount-point An existing directory on which to mount the file system. fstype The file system type to pass to &man.mount.8;. The default &os; file system is ufs. options Either for read-write file systems, or for read-only file systems, followed by any other options that may be needed. A common option is for file systems not normally mounted during the boot sequence. Other options are listed in &man.mount.8;. dumpfreq Used by &man.dump.8; to determine which file systems require dumping. If the field is missing, a value of zero is assumed. passno Determines the order in which file systems should be checked. File systems that should be skipped should have their passno set to zero. The root file system needs to be checked before everything else and should have its passno set to one. The other file systems should be set to values greater than one. If more than one file system has the same passno, &man.fsck.8; will attempt to check file systems in parallel if possible. Refer to &man.fstab.5; for more information on the format of /etc/fstab and its options. Using &man.mount.8; file systems mounting File systems are mounted using &man.mount.8;. The most basic syntax is as follows: &prompt.root; mount device mountpoint This command provides many options which are described in &man.mount.8;, The most commonly used options include: Mount Options Mount all the file systems listed in /etc/fstab, except those marked as noauto, excluded by the flag, or those that are already mounted. Do everything except for the actual mount system call. This option is useful in conjunction with the flag to determine what &man.mount.8; is actually trying to do. Force the mount of an unclean file system (dangerous), or the revocation of write access when downgrading a file system's mount status from read-write to read-only. Mount the file system read-only. This is identical to using . fstype Mount the specified file system type or mount only file systems of the given type, if is included. ufs is the default file system type. Update mount options on the file system. Be verbose. Mount the file system read-write. The following options can be passed to as a comma-separated list: nosuid Do not interpret setuid or setgid flags on the file system. This is also a useful security option. Using &man.umount.8; file systems unmounting To unmount a file system use &man.umount.8;. 
Using &man.mount.8; file systems mounting File systems are mounted using &man.mount.8;. The most basic syntax is as follows: &prompt.root; mount device mountpoint This command provides many options which are described in &man.mount.8;. The most commonly used options include: Mount Options -a Mount all the file systems listed in /etc/fstab, except those marked as noauto, excluded by the -t flag, or those that are already mounted. -d Do everything except for the actual mount system call. This option is useful in conjunction with the -v flag to determine what &man.mount.8; is actually trying to do. -f Force the mount of an unclean file system (dangerous), or the revocation of write access when downgrading a file system's mount status from read-write to read-only. -r Mount the file system read-only. This is identical to using -o ro. -t fstype Mount the specified file system type or mount only file systems of the given type, if -a is included. ufs is the default file system type. -u Update mount options on the file system. -v Be verbose. -w Mount the file system read-write. The following options can be passed to -o as a comma-separated list: nosuid Do not interpret setuid or setgid flags on the file system. This is also a useful security option. Using &man.umount.8; file systems unmounting To unmount a file system use &man.umount.8;. This command takes one parameter which can be a mountpoint, device name, -a, or -A. All forms take -f to force unmounting, and -v for verbosity. Be warned that -f is not generally a good idea as it might crash the computer or damage data on the file system. To unmount all mounted file systems, or just the file system types listed after -t, use -a or -A. Note that -A does not attempt to unmount the root file system. Processes and Daemons &os; is a multi-tasking operating system. Each program running at any one time is called a process. Every running command starts at least one new process and there are a number of system processes that are run by &os;. Each process is uniquely identified by a number called a process ID (PID). Similar to files, each process has one owner and group, and the owner and group permissions are used to determine which files and devices the process can open. Most processes also have a parent process that started them. For example, the shell is a process, and any command started in the shell is a process which has the shell as its parent process. The exception is a special process called &man.init.8; which is always the first process to start at boot time and which always has a PID of 1. Some programs are not designed to be run with continuous user input and disconnect from the terminal at the first opportunity. For example, a web server responds to web requests, rather than user input. Mail servers are another example of this type of application. These types of programs are known as daemons. The term daemon comes from Greek mythology and represents an entity that is neither good nor evil, and which invisibly performs useful tasks. This is why the BSD mascot is the cheerful-looking daemon with sneakers and a pitchfork. There is a convention to name programs that normally run as daemons with a trailing d. For example, BIND is the Berkeley Internet Name Domain, but the actual program that executes is called named. The Apache web server program is httpd and the line printer spooling daemon is lpd. This is only a naming convention. For example, the main mail daemon for the Sendmail application is sendmail, and not maild. Viewing Processes To see the processes running on the system, use &man.ps.1; or &man.top.1;. To display a static list of the currently running processes, their PIDs, how much memory they are using, and the command they were started with, use &man.ps.1;. To display all the running processes and update the display every few seconds in order to interactively see what the computer is doing, use &man.top.1;. By default, &man.ps.1; only shows the commands that are running and owned by the user. For example: &prompt.user; ps PID TT STAT TIME COMMAND 8203 0 Ss 0:00.59 /bin/csh 8895 0 R+ 0:00.00 ps The output from &man.ps.1; is organized into a number of columns. The PID column displays the process ID. PIDs are assigned starting at 1, go up to 99999, then wrap around back to the beginning. However, a PID is not reassigned if it is already in use. The TT column shows the tty the program is running on and STAT shows the program's state. TIME is the amount of time the program has been running on the CPU. This is usually not the elapsed time since the program was started, as most programs spend a lot of time waiting for things to happen before they need to spend time on the CPU. Finally, COMMAND is the command that was used to start the program. A number of different options are available to change the information that is displayed. One of the most useful sets is auxww, where a displays information about all the running processes of all users, u displays the username and memory usage of the process' owner, x displays information about daemon processes, and ww causes &man.ps.1; to display the full command line for each process, rather than truncating it once it gets too long to fit on the screen.
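Since the full listing can be long, it is common to pipe it through &man.grep.1; to find one particular process. For example, to look up the PID of a hypothetical running sshd:
&prompt.user; ps auxww | grep sshd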
The output from &man.top.1; is similar: &prompt.user; top last pid: 9609; load averages: 0.56, 0.45, 0.36 up 0+00:20:03 10:21:46 107 processes: 2 running, 104 sleeping, 1 zombie CPU: 6.2% user, 0.1% nice, 8.2% system, 0.4% interrupt, 85.1% idle Mem: 541M Active, 450M Inact, 1333M Wired, 4064K Cache, 1498M Free ARC: 992M Total, 377M MFU, 589M MRU, 250K Anon, 5280K Header, 21M Other Swap: 2048M Total, 2048M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 557 root 1 -21 r31 136M 42296K select 0 2:20 9.96% Xorg 8198 dru 2 52 0 449M 82736K select 3 0:08 5.96% kdeinit4 8311 dru 27 30 0 1150M 187M uwait 1 1:37 0.98% firefox 431 root 1 20 0 14268K 1728K select 0 0:06 0.98% moused 9551 dru 1 21 0 16600K 2660K CPU3 3 0:01 0.98% top 2357 dru 4 37 0 718M 141M select 0 0:21 0.00% kdeinit4 8705 dru 4 35 0 480M 98M select 2 0:20 0.00% kdeinit4 8076 dru 6 20 0 552M 113M uwait 0 0:12 0.00% soffice.bin 2623 root 1 30 10 12088K 1636K select 3 0:09 0.00% powerd 2338 dru 1 20 0 440M 84532K select 1 0:06 0.00% kwin 1427 dru 5 22 0 605M 86412K select 1 0:05 0.00% kdeinit4 The output is split into two sections. The header (the first five or six lines) shows the PID of the last process to run, the system load averages (which are a measure of how busy the system is), the system uptime (time since the last reboot) and the current time. The other figures in the header relate to how many processes are running, how much memory and swap space has been used, and how much time the system is spending in different CPU states. If the ZFS file system module has been loaded, an ARC line indicates how much data was read from the memory cache instead of from disk. Below the header is a series of columns containing similar information to the output from &man.ps.1;, such as the PID, username, amount of CPU time, and the command that started the process. By default, &man.top.1; also displays the amount of memory space taken by the process. This is split into two columns: one for total size and one for resident size. Total size is how much memory the application has needed and the resident size is how much it is actually using now. &man.top.1; automatically updates the display every two seconds. A different interval can be specified with -s.
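As a small example of that option, the following starts &man.top.1; with a five second refresh instead of the default:
&prompt.user; top -s 5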
Killing Processes One way to communicate with any running process or daemon is to send a signal using &man.kill.1;. There are a number of different signals; some have a specific meaning while others are described in the application's documentation. A user can only send a signal to a process they own and sending a signal to someone else's process will result in a permission denied error. The exception is the root user, who can send signals to anyone's processes. The operating system can also send a signal to a process. If an application is badly written and tries to access memory that it is not supposed to, &os; will send the process the Segmentation Violation signal (SIGSEGV). If an application has been written to use the &man.alarm.3; system call to be alerted after a period of time has elapsed, it will be sent the Alarm signal (SIGALRM). Two signals can be used to stop a process: SIGTERM and SIGKILL. SIGTERM is the polite way to kill a process as the process can read the signal, close any log files it may have open, and attempt to finish what it is doing before shutting down. In some cases, a process may ignore SIGTERM if it is in the middle of some task that can not be interrupted. SIGKILL can not be ignored by a process. Sending a SIGKILL to a process will usually stop that process there and then. There are a few tasks that can not be interrupted. For example, if the process is trying to read from a file that is on another computer on the network, and the other computer is unavailable, the process is said to be uninterruptible. Eventually the process will time out, typically after two minutes. As soon as this time out occurs the process will be killed. Other commonly used signals are SIGHUP, SIGUSR1, and SIGUSR2. Since these are general purpose signals, different applications will respond differently. For example, after changing a web server's configuration file, the web server needs to be told to re-read its configuration. Restarting httpd would result in a brief outage period on the web server. Instead, send the daemon the SIGHUP signal. Be aware that different daemons will have different behavior, so refer to the documentation for the daemon to determine if SIGHUP will achieve the desired results. Sending a Signal to a Process This example shows how to send a signal to &man.inetd.8;. The &man.inetd.8; configuration file is /etc/inetd.conf, and &man.inetd.8; will re-read this configuration file when it is sent a SIGHUP. Find the PID of the process to send the signal to using &man.pgrep.1;. In this example, the PID for &man.inetd.8; is 198: &prompt.user; pgrep -l inetd 198 inetd -wW Use &man.kill.1; to send the signal. Because &man.inetd.8; is owned by root, use &man.su.1; to become root first. &prompt.user; su Password: &prompt.root; /bin/kill -s HUP 198 Like most &unix; commands, &man.kill.1; will not print any output if it is successful. If a signal is sent to a process not owned by that user, the message kill: PID: Operation not permitted will be displayed. Mistyping the PID will either send the signal to the wrong process, which could have negative results, or will send the signal to a PID that is not currently in use, resulting in the error kill: PID: No such process. Why Use <command>/bin/kill</command>? Many shells provide kill as a built-in command, meaning that the shell will send the signal directly, rather than running /bin/kill. Be aware that different shells have a different syntax for specifying the name of the signal to send. Rather than try to learn all of them, it can be simpler to specify /bin/kill. When sending other signals, substitute TERM or KILL with the name of the signal. Killing a random process on the system is a bad idea. In particular, &man.init.8;, PID 1, is special. Running /bin/kill -s KILL 1 is a quick, and unrecommended, way to shut down the system. Always double check the arguments to &man.kill.1; before pressing Return. Shells shells command line A shell provides a command line interface for interacting with the operating system. A shell receives commands from the input channel and executes them. Many shells provide built-in functions to help with everyday tasks such as file management, file globbing, command line editing, command macros, and environment variables. &os; comes with several shells, including the Bourne shell (&man.sh.1;) and the extended C shell (&man.tcsh.1;).
Other shells are available from the &os; Ports Collection, such as zsh and bash. The shell that is used is really a matter of taste. A C programmer might feel more comfortable with a C-like shell such as &man.tcsh.1;. A &linux; user might prefer bash. Each shell has unique properties that may or may not work with a user's preferred working environment, which is why there is a choice of which shell to use. One common shell feature is filename completion. After a user types the first few letters of a command or filename and presses Tab, the shell completes the rest of the command or filename. Consider two files called foobar and football. To delete foobar, the user might type rm foo and press Tab to complete the filename. But the shell only shows rm foo. It was unable to complete the filename because both foobar and football start with foo. Some shells sound a beep or show all the choices if more than one name matches. The user must then type more characters to identify the desired filename. Typing a t and pressing Tab again is enough to let the shell determine which filename is desired and fill in the rest. environment variables Another feature of the shell is the use of environment variables. Environment variables are variable/value pairs stored in the shell's environment. This environment can be read by any program invoked by the shell, and thus contains a lot of program configuration. provides a list of common environment variables and their meanings. Note that the names of environment variables are always in uppercase. Common Environment Variables Variable Description USER Current logged in user's name. PATH Colon-separated list of directories to search for binaries. DISPLAY Network name of the &xorg; display to connect to, if available. SHELL The current shell. TERM The name of the user's type of terminal. Used to determine the capabilities of the terminal. TERMCAP Database entry of the terminal escape codes to perform various terminal functions. OSTYPE Type of operating system. MACHTYPE The system's CPU architecture. EDITOR The user's preferred text editor. PAGER The user's preferred utility for viewing text one page at a time. MANPATH Colon-separated list of directories to search for manual pages.
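All of the variables currently set in the shell's environment, including those listed in this table, can be displayed by running &man.env.1; with no arguments:
&prompt.user; env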
Bourne shells How to set an environment variable differs between shells. In &man.tcsh.1; and &man.csh.1;, use setenv to set environment variables. In &man.sh.1; and bash, use export to set the current environment variables. This example sets the default EDITOR to /usr/local/bin/emacs for the &man.tcsh.1; shell: &prompt.user; setenv EDITOR /usr/local/bin/emacs The equivalent command for bash would be: &prompt.user; export EDITOR="/usr/local/bin/emacs" To expand an environment variable in order to see its current setting, type a $ character in front of its name on the command line. For example, echo $TERM displays the current $TERM setting. Shells treat special characters, known as meta-characters, as special representations of data. The most common meta-character is *, which represents any number of characters in a filename. Meta-characters can be used to perform filename globbing. For example, echo * is equivalent to ls because the shell takes all the files that match * and echo lists them on the command line. To prevent the shell from interpreting a special character, escape it from the shell by starting it with a backslash (\). For example, echo $TERM prints the terminal setting whereas echo \$TERM literally prints the string $TERM. Changing the Shell The easiest way to permanently change the default shell is to use chsh. Running this command will open the editor that is configured in the EDITOR environment variable, which by default is set to &man.vi.1;. Change the Shell: line to the full path of the new shell. Alternately, use chsh -s which will set the specified shell without opening an editor. For example, to change the shell to bash: &prompt.user; chsh -s /usr/local/bin/bash The new shell must be present in /etc/shells. If the shell was installed from the &os; Ports Collection as described in , it should be automatically added to this file. If it is missing, add it using this command, replacing the path with the path of the shell: &prompt.root; echo /usr/local/bin/bash >> /etc/shells Then, rerun &man.chsh.1;. Advanced Shell Techniques Tom Rhodes Written by The &unix; shell is not just a command interpreter; it acts as a powerful tool which allows users to execute commands, redirect their output, redirect their input and chain commands together to improve the final command output. When this functionality is mixed with built-in commands, the user is provided with an environment that can maximize efficiency. Shell redirection is the action of sending the output or the input of a command into another command or into a file. To capture the output of the &man.ls.1; command, for example, into a file, simply redirect the output: &prompt.user; ls > directory_listing.txt The directory_listing.txt file will now contain the directory contents. Some commands allow you to read input in a similar way, such as &man.sort.1;. To sort this listing, redirect the input: &prompt.user; sort < directory_listing.txt The input will be sorted and placed on the screen. To redirect that input into another file, one could redirect the output of &man.sort.1; by mixing the direction: &prompt.user; sort < directory_listing.txt > sorted.txt In all of the previous examples, the commands are performing redirection using file descriptors. Every &unix; system has file descriptors; however, here we will focus on three, named Standard Input, Standard Output, and Standard Error. Each one has a purpose, where input could be a keyboard or a mouse, something that provides input.
Output could be a screen or paper in a printer, for example, and error would be anything that is used for diagnostic or error messages. All three are considered I/O based file descriptors and are sometimes referred to as streams. Through the use of these descriptors, known by the short names stdin, stdout, and stderr, the shell allows output and input to be passed around through various commands and redirected to or from a file. Another method of redirection is the pipe operator. The &unix; pipe operator, |, allows the output of one command to be directly passed, or directed, to another program. Basically, a pipe allows the standard output of a command to be passed as standard input to another command, for example: &prompt.user; cat directory_listing.txt | sort | less In that example, the contents of directory_listing.txt will be sorted and the output passed to &man.less.1;. This allows the user to scroll through the output at their own pace and prevent it from scrolling off the screen.
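Standard error can also be redirected separately from standard output. In a Bourne-style shell such as &man.sh.1;, file descriptor 2 refers to stderr, so a hypothetical command that produces both normal output and an error message can keep the two apart; the file names here are arbitrary:
&prompt.user; ls /etc /nonexistent > list.txt 2> errors.txt
The &man.csh.1; and &man.tcsh.1; shells use a different syntax for redirecting stderr; refer to the shell's manual page for details.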
Text Editors text editors editors Most &os; configuration is done by editing text files. Because of this, it is a good idea to become familiar with a text editor. &os; comes with a few as part of the base system, and many more are available in the Ports Collection. ee editors &man.ee.1; A simple editor to learn is &man.ee.1;, which stands for easy editor. To start this editor, type ee filename where filename is the name of the file to be edited. Once inside the editor, all of the commands for manipulating the editor's functions are listed at the top of the display. The caret (^) represents Ctrl, so ^e expands to Ctrl+e. To leave &man.ee.1;, press Esc, then choose the leave editor option from the main menu. The editor will prompt to save any changes if the file has been modified. vi editors emacs &os; also comes with more powerful text editors, such as &man.vi.1;, as part of the base system. Other editors, like editors/emacs and editors/vim, are part of the &os; Ports Collection. These editors offer more functionality at the expense of being more complicated to learn. Learning a more powerful editor such as vim or Emacs can save more time in the long run. Many applications which modify files or require typed input will automatically open a text editor. To change the default editor, set the EDITOR environment variable as described in . Devices and Device Nodes A device is a term used mostly for hardware-related activities in a system, including disks, printers, graphics cards, and keyboards. When &os; boots, the majority of the boot messages refer to devices being detected. A copy of the boot messages is saved to /var/run/dmesg.boot. Each device has a device name and number. For example, ada0 is the first SATA hard drive, while kbd0 represents the keyboard. Most devices in &os; must be accessed through special files called device nodes, which are located in /dev. Manual Pages manual pages The most comprehensive documentation on &os; is in the form of manual pages. Nearly every program on the system comes with a short reference manual explaining the basic operation and available arguments. These manuals can be viewed using man: &prompt.user; man command where command is the name of the command to learn about. For example, to learn more about &man.ls.1;, type: &prompt.user; man ls Manual pages are divided into sections which represent the type of topic. In &os;, the following sections are available: 1. User commands. 2. System calls and error numbers. 3. Functions in the C libraries. 4. Device drivers. 5. File formats. 6. Games and other diversions. 7. Miscellaneous information. 8. System maintenance and operation commands. 9. System kernel interfaces. In some cases, the same topic may appear in more than one section of the online manual. For example, there is a chmod user command and a chmod() system call. To tell &man.man.1; which section to display, specify the section number: &prompt.user; man 1 chmod This will display the manual page for the user command &man.chmod.1;. References to a particular section of the online manual are traditionally placed in parentheses in written documentation, so &man.chmod.1; refers to the user command and &man.chmod.2; refers to the system call. If the name of the manual page is unknown, use man -k to search for keywords in the manual page descriptions: &prompt.user; man -k mail This command displays a list of commands that have the keyword mail in their descriptions. This is equivalent to using &man.apropos.1;.
To read the descriptions for all of the commands in /usr/bin, type: &prompt.user; cd /usr/bin &prompt.user; man -f * | more or &prompt.user; cd /usr/bin &prompt.user; whatis * |more GNU Info Files Free Software Foundation &os; includes several applications and utilities produced by the Free Software Foundation (FSF). In addition to manual pages, these programs may include hypertext documents called info files. These can be viewed using &man.info.1; or, if editors/emacs is installed, the info mode of emacs. To use &man.info.1;, type: &prompt.user; info For a brief introduction, type h. For a quick command reference, type ?.
Index: head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml (revision 46272) @@ -1,2286 +1,2286 @@ Updating and Upgrading &os; Jim Mock Restructured, reorganized, and parts updated by Jordan Hubbard Original work by Poul-Henning Kamp John Polstra Nik Clayton Synopsis &os; is under constant development between releases. Some people prefer to use the officially released versions, while others prefer to keep in sync with the latest developments. However, even official releases are often updated with security and other critical fixes. Regardless of the version used, &os; provides all the necessary tools to keep the system updated, and allows for easy upgrades between versions. This chapter describes how to track the development system and the basic tools for keeping a &os; system up-to-date. After reading this chapter, you will know: How to keep a &os; system up-to-date with freebsd-update, Subversion, or CTM. How to compare the state of an installed system against a known pristine copy. How to keep the installed documentation up-to-date with Subversion or documentation ports. The difference between the two development branches: &os.stable; and &os.current;. How to rebuild and reinstall the entire base system. Before reading this chapter, you should: Properly set up the network connection (). Know how to install additional third-party software (). Throughout this chapter, svn is used to obtain and update &os; sources. To use it, first install the devel/subversion port or package. &os; Update Tom Rhodes Written by Colin Percival Based on notes provided by Updating and Upgrading freebsd-update updating-upgrading Applying security patches in a timely manner and upgrading to a newer release of an operating system are important aspects of ongoing system administration. &os; includes a utility called freebsd-update which can be used to perform both these tasks. This utility supports binary security and errata updates to &os;, without the need to manually compile and install the patch or a new kernel. Binary updates are available for all architectures and releases currently supported by the security team. The list of supported releases and their estimated end-of-life dates are listed at http://www.FreeBSD.org/security/. This utility also supports operating system upgrades to minor point releases as well as upgrades to another release branch. Before upgrading to a new release, review its release announcement as it contains important information pertinent to the release. Release announcements are available from http://www.FreeBSD.org/releases/. If a crontab utilizing the features of &man.freebsd-update.8; exists, it must be disabled before upgrading the operating system. This section describes the configuration file used by freebsd-update, demonstrates how to apply a security patch and how to upgrade to a minor or major operating system release, and discusses some of the considerations when upgrading the operating system. The Configuration File The default configuration file for freebsd-update works as-is. Some users may wish to tweak the default configuration in /etc/freebsd-update.conf, allowing better control of the process. 
The comments in this file explain the available options, but the following may require a bit more explanation: # Components of the base system which should be kept updated. Components world kernel This parameter controls which parts of &os; will be kept up-to-date. The default is to update the entire base system and the kernel. Individual components can instead be specified, such as src/base or src/sys. However, the best option is to leave this at the default as changing it to include specific items requires every needed item to be listed. Over time, this could have disastrous consequences as source code and binaries may become out of sync. # Paths which start with anything matching an entry in an IgnorePaths # statement will be ignored. IgnorePaths /boot/kernel/linker.hints To leave specified directories, such as /bin or /sbin, untouched during the update process, add their paths to this statement. This option may be used to prevent freebsd-update from overwriting local modifications. # Paths which start with anything matching an entry in an UpdateIfUnmodified # statement will only be updated if the contents of the file have not been # modified by the user (unless changes are merged; see below). UpdateIfUnmodified /etc/ /var/ /root/ /.cshrc /.profile This option will only update unmodified configuration files in the specified directories. Any changes made by the user will prevent the automatic updating of these files. There is another option, KeepModifiedMetadata, which will instruct freebsd-update to save the changes during the merge. # When upgrading to a new &os; release, files which match MergeChanges # will have any local changes merged into the version from the new release. MergeChanges /etc/ /var/named/etc/ /boot/device.hints List of directories with configuration files that freebsd-update should attempt to merge. The file merge process is a series of &man.diff.1; patches similar to &man.mergemaster.8;, but with fewer options. Merges are either accepted, open an editor, or cause freebsd-update to abort. When in doubt, back up /etc and just accept the merges. See for more information about mergemaster. # Directory in which to store downloaded updates and temporary # files used by &os; Update. # WorkDir /var/db/freebsd-update This directory is where all patches and temporary files are placed. In cases where the user is doing a version upgrade, this location should have at least a gigabyte of disk space available. # When upgrading between releases, should the list of Components be # read strictly (StrictComponents yes) or merely as a list of components # which *might* be installed of which &os; Update should figure out # which actually are installed and upgrade those (StrictComponents no)? # StrictComponents no When this option is set to yes, freebsd-update will assume that the Components list is complete and will not attempt to make changes outside of the list. Effectively, freebsd-update will attempt to update every file which belongs to the Components list. Applying Security Patches The process of applying &os; security patches has been simplified, allowing an administrator to keep a system fully patched using freebsd-update. More information about &os; security advisories can be found in . &os; security patches may be downloaded and installed using the following commands. The first command will determine if any outstanding patches are available, and if so, will list the files that will be modified if the patches are applied. The second command will apply the patches.
&prompt.root; freebsd-update fetch &prompt.root; freebsd-update install If the update applies any kernel patches, the system will need a reboot in order to boot into the patched kernel. If the patch was applied to any running binaries, the affected applications should be restarted so that the patched version of the binary is used. The system can be configured to automatically check for updates once every day by adding this entry to /etc/crontab: @daily root freebsd-update cron If patches exist, they will automatically be downloaded but will not be applied. The root user will be sent an email so that the patches may be reviewed and manually installed with freebsd-update install. If anything goes wrong, freebsd-update has the ability to roll back the last set of changes with the following command: &prompt.root; freebsd-update rollback Uninstalling updates... done. Again, the system should be restarted if the kernel or any kernel modules were modified and any affected binaries should be restarted. Only the GENERIC kernel can be automatically updated by freebsd-update. If a custom kernel is installed, it will have to be rebuilt and reinstalled after freebsd-update finishes installing the updates. However, freebsd-update will detect and update the GENERIC kernel if /boot/GENERIC exists, even if it is not the current running kernel of the system. Always keep a copy of the GENERIC kernel in /boot/GENERIC. It will be helpful in diagnosing a variety of problems and in performing version upgrades. Refer to either or for instructions on how to get a copy of the GENERIC kernel. Unless the default configuration in /etc/freebsd-update.conf has been changed, freebsd-update will install the updated kernel sources along with the rest of the updates. Rebuilding and reinstalling a new custom kernel can then be performed in the usual way. The updates distributed by freebsd-update do not always involve the kernel. It is not necessary to rebuild a custom kernel if the kernel sources have not been modified by freebsd-update install. However, freebsd-update will always update /usr/src/sys/conf/newvers.sh. The current patch level, as indicated by the -p number reported by uname -r, is obtained from this file. Rebuilding a custom kernel, even if nothing else changed, allows uname to accurately report the current patch level of the system. This is particularly helpful when maintaining multiple systems, as it allows for a quick assessment of the updates installed in each one. Performing Major and Minor Version Upgrades Upgrades from one minor version of &os; to another, like from &os; 9.0 to &os; 9.1, are called minor version upgrades. Major version upgrades occur when &os; is upgraded from one major version to another, like from &os; 9.X to &os; 10.X. Both types of upgrades can be performed by providing freebsd-update with a release version target. If the system is running a custom kernel, make sure that a copy of the GENERIC kernel exists in /boot/GENERIC before starting the upgrade. Refer to either or for instructions on how to get a copy of the GENERIC kernel. The following command, when run on a &os; 9.0 system, will upgrade it to &os; 9.1: &prompt.root; freebsd-update -r 9.1-RELEASE upgrade After the command has been received, freebsd-update will evaluate the configuration file and current system in an attempt to gather the information necessary to perform the upgrade. A screen listing will display which components have and have not been detected. For example: Looking up update.FreeBSD.org mirrors... 
1 mirrors found. Fetching metadata signature for 9.0-RELEASE from update1.FreeBSD.org... done. Fetching metadata index... done. Inspecting system... done. The following components of FreeBSD seem to be installed: kernel/smp src/base src/bin src/contrib src/crypto src/etc src/games src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin world/base world/info world/lib32 world/manpages The following components of FreeBSD do not seem to be installed: kernel/generic world/catpages world/dict world/doc world/games world/proflibs Does this look reasonable (y/n)? y At this point, freebsd-update will attempt to download all files required for the upgrade. In some cases, the user may be prompted with questions regarding what to install or how to proceed. When using a custom kernel, the above step will produce a warning similar to the following: WARNING: This system is running a "MYKERNEL" kernel, which is not a kernel configuration distributed as part of FreeBSD 9.0-RELEASE. This kernel will not be updated: you MUST update the kernel manually before running "/usr/sbin/freebsd-update install" This warning may be safely ignored at this point. The updated GENERIC kernel will be used as an intermediate step in the upgrade process. Once all the patches have been downloaded to the local system, they will be applied. This process may take a while, depending on the speed and workload of the machine. Configuration files will then be merged. The merging process requires some user intervention as a file may be merged or an editor may appear on screen for a manual merge. The results of every successful merge will be shown to the user as the process continues. A failed or ignored merge will cause the process to abort. Users may wish to make a backup of /etc and manually merge important files, such as master.passwd or group at a later time. The system is not being altered yet as all patching and merging is happening in another directory. Once all patches have been applied successfully, all configuration files have been merged and it seems the process will go smoothly, the changes can be committed to disk by the user using the following command: &prompt.root; freebsd-update install The kernel and kernel modules will be patched first. If the system is running with a custom kernel, use &man.nextboot.8; to set the kernel for the next boot to the updated /boot/GENERIC: &prompt.root; nextboot -k GENERIC Before rebooting with the GENERIC kernel, make sure it contains all the drivers required for the system to boot properly and connect to the network, if the machine being updated is accessed remotely. In particular, if the running custom kernel contains built-in functionality usually provided by kernel modules, make sure to temporarily load these modules into the GENERIC kernel using the /boot/loader.conf facility. It is recommended to disable non-essential services as well as any disk and network mounts until the upgrade process is complete. The machine should now be restarted with the updated kernel: &prompt.root; shutdown -r now Once the system has come back online, restart freebsd-update using the following command. Since the state of the process has been saved, freebsd-update will not start from the beginning, but will instead move on to the next phase and remove all old shared libraries and object files. 
&prompt.root; freebsd-update install Depending upon whether any library version numbers were bumped, there may only be two install phases instead of three. The upgrade is now complete. If this was a major version upgrade, reinstall all ports and packages as described in . Custom Kernels with &os; 9.X and Later Before using freebsd-update, ensure that a copy of the GENERIC kernel exists in /boot/GENERIC. If a custom kernel has only been built once, the kernel in /boot/kernel.old is the GENERIC kernel. Simply rename this directory to /boot/kernel. If a custom kernel has been built more than once or if it is unknown how many times the custom kernel has been built, obtain a copy of the GENERIC kernel that matches the current version of the operating system. If physical access to the system is available, a copy of the GENERIC kernel can be installed from the installation media: &prompt.root; mount /cdrom &prompt.root; cd /cdrom/usr/freebsd-dist &prompt.root; tar -C/ -xvf kernel.txz boot/kernel/kernel Alternately, the GENERIC kernel may be rebuilt and installed from source: &prompt.root; cd /usr/src &prompt.root; make kernel __MAKE_CONF=/dev/null SRCCONF=/dev/null For this kernel to be identified as the GENERIC kernel by freebsd-update, the GENERIC configuration file must not have been modified in any way. It is also suggested that the kernel be built without any other special options. Rebooting into the GENERIC kernel is not required as freebsd-update only needs /boot/GENERIC to exist. Custom Kernels with &os; 8.X On an &os; 8.X system, the instructions for obtaining or building a GENERIC kernel differ slightly. Assuming physical access to the machine is possible, a copy of the GENERIC kernel can be installed from the installation media using the following commands: &prompt.root; mount /cdrom &prompt.root; cd /cdrom/X.Y-RELEASE/kernels &prompt.root; ./install.sh GENERIC Replace X.Y-RELEASE with the version of the release being used. The GENERIC kernel will be installed in /boot/GENERIC by default. To instead build the GENERIC kernel from source: &prompt.root; cd /usr/src &prompt.root; env DESTDIR=/boot/GENERIC make kernel __MAKE_CONF=/dev/null SRCCONF=/dev/null &prompt.root; mv /boot/GENERIC/boot/kernel/* /boot/GENERIC &prompt.root; rm -rf /boot/GENERIC/boot For this kernel to be picked up as GENERIC by freebsd-update, the GENERIC configuration file must not have been modified in any way. It is also suggested that it be built without any other special options. Rebooting into the GENERIC kernel is not required. Upgrading Packages After a Major Version Upgrade Generally, installed applications will continue to work without problems after minor version upgrades. Major versions use different Application Binary Interfaces (ABIs), which will break most third-party applications. After a major version upgrade, all installed packages and ports need to be upgraded. Packages can be upgraded using pkg upgrade. To upgrade installed ports, use a utility such as ports-mgmt/portmaster. A forced upgrade of all installed packages will replace the packages with fresh versions from the repository even if the version number has not increased. This is required because of the ABI version change when upgrading between major versions of &os;.
The forced upgrade can be accomplished by performing: &prompt.root; pkg-static upgrade -f A rebuild of all installed applications can be accomplished with this command: &prompt.root; portmaster -af This command will display the configuration screens for each application that has configurable options and wait for the user to interact with those screens. To prevent this behavior, and use only the default options, include -G in the above command. Once the software upgrades are complete, finish the upgrade process with a final call to freebsd-update in order to tie up all the loose ends in the upgrade process: &prompt.root; freebsd-update install If the GENERIC kernel was temporarily used, this is the time to build and install a new custom kernel using the instructions in . Reboot the machine into the new &os; version. The upgrade process is now complete. System State Comparison The state of the installed &os; version can be tested against a known good copy using freebsd-update IDS. This command evaluates the current version of system utilities, libraries, and configuration files and can be used as a built-in Intrusion Detection System (IDS). This command is not a replacement for a real IDS such as security/snort. As freebsd-update stores data on disk, the possibility of tampering is evident. While this possibility may be reduced using kern.securelevel and by storing the freebsd-update data on a read-only file system when not in use, a better solution would be to compare the system against a secure disk, such as a DVD or securely stored external USB disk device. An alternative method for providing IDS functionality using a built-in utility is described in . To begin the comparison, specify the output file to save the results to: &prompt.root; freebsd-update IDS >> outfile.ids The system will now be inspected and a lengthy listing of files, along with the SHA256 hash values for both the known value in the release and the current installation, will be sent to the specified output file. The entries in the listing are extremely long, but the output format may be easily parsed. For instance, to obtain a list of all files which differ from those in the release, issue the following command: &prompt.root; cat outfile.ids | awk '{ print $1 }' | more /etc/master.passwd /etc/motd /etc/passwd /etc/pf.conf This sample output has been truncated as many more files exist. Some files have natural modifications. For example, /etc/passwd will be modified if users have been added to the system. Kernel modules may differ as freebsd-update may have updated them. To exclude specific files or directories, add them to the IDSIgnorePaths option in /etc/freebsd-update.conf. Updating the Documentation Set Updating and Upgrading Documentation Updating and Upgrading Documentation is an integral part of the &os; operating system. While an up-to-date version of the &os; documentation is always available on the &os; web site (http://www.freebsd.org/doc/), it can be handy to have an up-to-date, local copy of the &os; website, handbooks, FAQ, and articles. This section describes how to use either source or the &os; Ports Collection to keep a local copy of the &os; documentation up-to-date. For information on editing and submitting corrections to the documentation, refer to the &os; Documentation Project Primer for New Contributors (http://www.freebsd.org/doc/en_US.ISO8859-1/books/fdp-primer/).
Updating Documentation from Source Rebuilding the &os; documentation from source requires a collection of tools which are not part of the &os; base system. The required tools, including svn, can be installed from the textproc/docproj package or port developed by the &os; Documentation Project. Once installed, use svn to fetch a clean copy of the documentation source. Replace https://svn0.us-west.FreeBSD.org with the address of the closest geographic mirror from : &prompt.root; svn checkout https://svn0.us-west.FreeBSD.org/doc/head /usr/doc The initial download of the documentation sources may take a while. Let it run until it completes. Future updates of the documentation sources may be fetched by running: &prompt.root; svn update /usr/doc Once an up-to-date snapshot of the documentation sources has been fetched to /usr/doc, everything is ready for an update of the installed documentation. A full update of all available languages may be performed by typing: &prompt.root; cd /usr/doc &prompt.root; make install clean If an update of only a specific language is desired, make can be invoked in a language-specific subdirectory of /usr/doc: &prompt.root; cd /usr/doc/en_US.ISO8859-1 &prompt.root; make install clean An alternative way of updating the documentation is to run this command from /usr/doc or the desired language-specific subdirectory: &prompt.root; make update The output formats that will be installed may be specified by setting FORMATS: &prompt.root; cd /usr/doc &prompt.root; make FORMATS='html html-split' install clean Several options are available to ease the process of updating only parts of the documentation, or the build of specific translations. These options can be set either as system-wide options in /etc/make.conf, or as command-line options passed to make. The options include: DOC_LANG The list of languages and encodings to build and install, such as en_US.ISO8859-1 for English documentation. FORMATS A single format or a list of output formats to be built. Currently, html, html-split, txt, ps, and pdf are supported. DOCDIR Where to install the documentation. It defaults to /usr/share/doc. For more make variables supported as system-wide options in &os;, refer to &man.make.conf.5;. Updating Documentation from Ports Marc Fonvieille Based on the work of Updating and Upgrading documentation package Updating and Upgrading The previous section presented a method for updating the &os; documentation from sources. This section describes an alternative method which uses the Ports Collection and makes it possible to: Install pre-built packages of the documentation, without having to locally build anything or install the documentation toolchain. Build the documentation sources through the ports framework, making the checkout and build steps a bit easier. This method of updating the &os; documentation is supported by a set of documentation ports and packages which are updated by the &a.doceng; on a monthly basis. These are listed in the &os; Ports Collection, under the docs category (http://www.freshports.org/docs/). Organization of the documentation ports is as follows: The misc/freebsd-doc-en package or port installs all of the English documentation. The misc/freebsd-doc-all meta-package or port installs all documentation in all available languages. There is a package and port for each translation, such as misc/freebsd-doc-hu for the Hungarian documentation. When binary packages are used, the &os; documentation will be installed in all available formats for the given language. 
For example, the following command will install the latest package of the Hungarian documentation: &prompt.root; pkg install hu-freebsd-doc Packages use a format that differs from the corresponding port's name: lang-freebsd-doc, where lang is the short format of the language code, such as hu for Hungarian, or zh_cn for Simplified Chinese. To specify the format of the documentation, build the port instead of installing the package. For example, to build and install the English documentation: &prompt.root; cd /usr/ports/misc/freebsd-doc-en &prompt.root; make install clean The port provides a configuration menu where the format to build and install can be specified. By default, split HTML, similar to the format used on http://www.FreeBSD.org, and PDF are selected. Alternately, several make options can be specified when building a documentation port, including: WITH_HTML Builds the HTML format with a single HTML file per document. The formatted documentation is saved to a file called article.html, or book.html. WITH_PDF The formatted documentation is saved to a file called article.pdf or book.pdf. DOCBASE Specifies where to install the documentation. It defaults to /usr/local/share/doc/freebsd. This example uses variables to install the Hungarian documentation as a PDF in the specified directory: &prompt.root; cd /usr/ports/misc/freebsd-doc-hu &prompt.root; make -DWITH_PDF DOCBASE=share/doc/freebsd/hu install clean Documentation packages or ports can be updated using the instructions in . For example, the following command updates the installed Hungarian documentation using ports-mgmt/portmaster by using packages only: &prompt.root; portmaster -PP hu-freebsd-doc Tracking a Development Branch -CURRENT -STABLE &os; has two development branches: &os.current; and &os.stable;. This section provides an explanation of each branch and its intended audience, as well as how to keep a system up-to-date with each respective branch. Using &os.current; &os.current; is the bleeding edge of &os; development and &os.current; users are expected to have a high degree of technical skill. Less technical users who wish to track a development branch should track &os.stable; instead. &os.current; is the very latest source code for &os; and includes works in progress, experimental changes, and transitional mechanisms that might or might not be present in the next official release. While many &os; developers compile the &os.current; source code daily, there are short periods of time when the source may not be buildable. These problems are resolved as quickly as possible, but whether or not &os.current; brings disaster or new functionality can be a matter of when the source code was synced. &os.current; is made available for three primary interest groups: Members of the &os; community who are actively working on some part of the source tree. Members of the &os; community who are active testers. They are willing to spend time solving problems, making topical suggestions on changes and the general direction of &os;, and submitting patches. Users who wish to keep an eye on things, use the current source for reference purposes, or make the occasional comment or code contribution. &os.current; should not be considered a fast-track to getting new features before the next release as pre-release features are not yet fully tested and most likely contain bugs. It is not a quick way of getting bug fixes as any given commit is just as likely to introduce new bugs as to fix existing ones. 
&os.current; is not in any way officially supported. -CURRENT using To track &os.current;: Join the &a.current.name; and the &a.svn-src-head.name; lists. This is essential in order to see the comments that people are making about the current state of the system and to receive important bulletins about the current state of &os.current;. The &a.svn-src-head.name; list records the commit log entry for each change as it is made, along with any pertinent information on possible side effects. To join these lists, go to &a.mailman.lists.link;, click on the list to subscribe to, and follow the instructions. In order to track changes to the whole source tree, not just the changes to &os.current;, subscribe to the &a.svn-src-all.name; list. Synchronize with the &os.current; sources. Typically, svn is used to check out the -CURRENT code from the head branch of one of the Subversion mirror sites listed in . Users with very slow or limited Internet connectivity can instead use CTM as described in , but it is not as reliable as svn and svn is the recommended method for synchronizing source. Due to the size of the repository, some users choose to only synchronize the sections of source that interest them or which they are contributing patches to. However, users that plan to compile the operating system from source must download all of &os.current;, not just selected portions. Before compiling &os.current; -CURRENT compiling , read /usr/src/Makefile very carefully and follow the instructions in . Read the &a.current; and /usr/src/UPDATING to stay up-to-date on other bootstrapping procedures that sometimes become necessary on the road to the next release. Be active! &os.current; users are encouraged to submit their suggestions for enhancements or bug fixes. Suggestions with accompanying code are always welcome. Using &os.stable; &os.stable; is the development branch from which major releases are made. Changes go into this branch at a slower pace and with the general assumption that they have first been tested in &os.current;. This is still a development branch and, at any given time, the sources for &os.stable; may or may not be suitable for general use. It is simply another engineering development track, not a resource for end-users. Users who do not have the resources to perform testing should instead run the most recent release of &os;. Those interested in tracking or contributing to the &os; development process, especially as it relates to the next release of &os;, should consider following &os.stable;. While the &os.stable; branch should compile and run at all times, this cannot be guaranteed. Since more people run &os.stable; than &os.current;, it is inevitable that bugs and corner cases will sometimes be found in &os.stable; that were not apparent in &os.current;. For this reason, one should not blindly track &os.stable;. It is particularly important not to update any production servers to &os.stable; without thoroughly testing the code in a development or testing environment. To track &os.stable;: -STABLE using Join the &a.stable.name; list in order to stay informed of build dependencies that may appear in &os.stable; or any other issues requiring special attention. Developers will also make announcements in this mailing list when they are contemplating some controversial fix or update, giving the users a chance to respond if they have any issues to raise concerning the proposed change. Join the relevant svn list for the branch being tracked. 
For example, users tracking the 9-STABLE branch should join the &a.svn-src-stable-9.name; list. This list records the commit log entry for each change as it is made, along with any pertinent information on possible side effects. To join these lists, go to &a.mailman.lists.link;, click on the list to subscribe to, and follow the instructions. In order to track changes for the whole source tree, subscribe to &a.svn-src-all.name;. To install a new &os.stable; system, install the most recent &os.stable; release from the &os; mirror sites or use a monthly snapshot built from &os.stable;. Refer to www.freebsd.org/snapshots for more information about snapshots. To compile or upgrade an existing &os; system to &os.stable;, use svn Subversion to check out the source for the desired branch. Branch names, such as stable/9, are listed at www.freebsd.org/releng. CTM () can be used if a reliable Internet connection is not available. Before compiling or upgrading to &os.stable; -STABLE compiling , read /usr/src/Makefile carefully and follow the instructions in . Read &a.stable; and /usr/src/UPDATING to keep up-to-date on other bootstrapping procedures that sometimes become necessary on the road to the next release. Synchronizing Source There are various methods for staying up-to-date with the &os; sources. This section compares the primary services, Subversion and CTM. While it is possible to update only parts of the source tree, the only supported update procedure is to update the entire tree and recompile all the programs that run in user space, such as those in /bin and /sbin, and kernel sources. Updating only part of the source tree, only the kernel, or only the userland programs will often result in problems ranging from compile errors to kernel panics or data corruption. Subversion Subversion uses the pull model of updating sources. The user, or a cron script, invokes the svn program which updates the local version of the source. Subversion is the preferred method for updating local source trees as updates are up-to-the-minute and the user controls when updates are downloaded. It is easy to restrict updates to specific files or directories and the requested updates are generated on the fly by the server. How to synchronize source using Subversion is described in . CTM CTM does not interactively compare the local sources with those on the master archive or otherwise pull them across. Instead, a script which identifies changes in files since its previous run is executed several times a day on the master CTM machine. Any detected changes are compressed, stamped with a sequence-number, and encoded for transmission over email in printable ASCII only. Once downloaded, these deltas can be run through ctm.rmail which will automatically decode, verify, and apply the changes to the user's copy of the sources. This process is more efficient than Subversion and places less strain on server resources since it is a push, rather than a pull, model. Instructions for using CTM to synchronize source can be found at . If a user inadvertently wipes out portions of the local archive, Subversion will detect and rebuild the damaged portions. CTM will not, and if a user deletes some portion of the source tree and does not have a backup, they will have to start from scratch from the most recent base delta and rebuild it all with CTM.
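As a concrete sketch of the Subversion method, a hypothetical system tracking the stable/9 branch could check out and later refresh its source tree as follows; the mirror and branch shown are examples only and should be replaced with the nearest mirror and the desired branch:
&prompt.root; svn checkout https://svn0.us-west.FreeBSD.org/base/stable/9 /usr/src
&prompt.root; svn update /usr/src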
Rebuilding World Rebuilding world Once the local source tree is synchronized against a particular version of &os; such as &os.stable; or &os.current;, the source tree can be used to rebuild the system. This process is known as rebuilding world. Before rebuilding world, be sure to perform the following tasks: Perform These Tasks <emphasis>Before</emphasis> Building World Backup all important data to another system or removable media, verify the integrity of the backup, and have a bootable installation media at hand. It cannot be stressed enough how important it is to make a backup of the system before rebuilding the system. While rebuilding world is an easy task, there will inevitably be times when mistakes in the source tree render the system unbootable. You will probably never have to use the backup, but it is better to be safe than sorry! mailing list Review the recent &a.stable.name; or &a.current.name; entries, depending upon the branch being tracked. Be aware of any known problems and which systems are affected. If a known issue affects the version of synchronized code, wait for an all clear announcement to be posted stating that the problem has been solved. Resynchronize the sources to ensure that the local version of source has the needed fix. Read /usr/src/UPDATING for any extra steps necessary for that version of the source. This file contains important information about potential problems and may specify the order to run certain commands. Many upgrades require specific additional steps such as renaming or deleting specific files prior to installing the new world. These will be listed at the end of this file where the currently recommended upgrade sequence is explicitly spelled out. If UPDATING contradicts any steps in this chapter, the instructions in UPDATING take precedence and should be followed. Do Not Use <command>make world</command> Some older documentation recommends using make world. However, that command skips some important steps and should only be used by experts. For almost all circumstances make world is the wrong thing to do, and the procedure described here should be used instead. Overview of Process The build world process assumes an upgrade from an older &os; version using the source of a newer version that was obtained using the instructions in . In &os;, the term world includes the kernel, core system binaries, libraries, programming files, and built-in compiler. The order in which these components are built and installed is important. For example, the old compiler might have a bug and not be able to compile the new kernel. Since the new kernel should be built with the new compiler, the new compiler must be built, but not necessarily installed, before the new kernel is built. The new world might rely on new kernel features, so the new kernel must be installed before the new world is installed. The old world might not run correctly on the new kernel, so the new world must be installed immediately upon installing the new kernel. Some configuration changes must be made before the new world is installed, but others might break the old world. Hence, two different configuration upgrade steps are used. For the most part, the update process only replaces or adds files and existing old files are not deleted. Since this can cause problems, /usr/src/UPDATING will indicate if any files need to be manually deleted and at which step to do so. These concerns have led to the recommended upgrade sequence described in the following procedure. 
It is a good idea to save the output from running make to a file. If something goes wrong, a copy of the error message can be posted to one of the &os; mailing lists. The easiest way to do this is to use script with a parameter that specifies the name of the file to save all output to. Do not save the output to /tmp as this directory may be cleared at next reboot. A better place to save the file is /var/tmp. Run this command immediately before rebuilding the world, and then type exit when the process has finished: &prompt.root; script /var/tmp/mw.out Script started, output file is /var/tmp/mw.out Overview of Build World Process The commands used in the build world process should be run in the order specified here. This section summarizes the function of each command. If the build world process has previously been run on this system, a copy of the previous build may still exist in /usr/obj. To speed up the new build world process, and possibly save some dependency headaches, remove this directory if it already exists: &prompt.root; chflags -R noschg /usr/obj/* &prompt.root; rm -rf /usr/obj Compile the new compiler and a few related tools, then use the new compiler to compile the rest of the new world. The result is saved to /usr/obj. &prompt.root; cd /usr/src &prompt.root; make buildworld Use the new compiler residing in /usr/obj to build the new kernel, in order to protect against compiler-kernel mismatches. This is necessary, as certain memory structures may have changed, and programs like ps and top will fail to work if the kernel and source code versions are not the same. &prompt.root; make buildkernel Install the new kernel and kernel modules, making it possible to boot with the newly updated kernel. If kern.securelevel has been raised above 1 and noschg or similar flags have been set on the kernel binary, drop the system into single-user mode first. Otherwise, this command can be run from multi-user mode without problems. See &man.init.8; for details about kern.securelevel and &man.chflags.1; for details about the various file flags. &prompt.root; make installkernel Drop the system into single-user mode in order to minimize problems from updating any binaries that are already running. It also minimizes any problems from running the old world on a new kernel. &prompt.root; shutdown now Once in single-user mode, run these commands if the system is formatted with UFS: &prompt.root; mount -u / &prompt.root; mount -a -t ufs &prompt.root; swapon -a If the system is instead formatted with ZFS, run these two commands. This example assumes a zpool name of zroot: &prompt.root; zfs set readonly=off zroot &prompt.root; zfs mount -a Optional: If a keyboard mapping other than the default US English is desired, it can be changed with &man.kbdmap.1;: &prompt.root; kbdmap Then, for either file system, if the CMOS clock is set to local time (this is true if the output of &man.date.1; does not show the correct time and zone), run: &prompt.root; adjkerntz -i Remaking the world will not update certain directories, such as /etc, /var and /usr, with new or changed configuration files. The next step is to perform some initial configuration file updates to /etc in preparation for the new world. The following command compares only those files that are essential for the success of installworld. For instance, this step may add new groups, system accounts, or startup scripts which have been added to &os; since the last update.
This is necessary so that the installworld step will be able to use any new system accounts, groups, and scripts. Refer to for more detailed instructions about this command: &prompt.root; mergemaster -p Install the new world and system binaries from /usr/obj. &prompt.root; cd /usr/src &prompt.root; make installworld Update any remaining configuration files. &prompt.root; mergemaster -iF Delete any obsolete files. This is important as they may cause problems if left on the disk. &prompt.root; make delete-old A full reboot is now needed to load the new kernel and new world with the new configuration files. &prompt.root; reboot Make sure that all installed ports have first been rebuilt before old libraries are removed using the instructions in . When finished, remove any obsolete libraries to avoid conflicts with newer ones. For a more detailed description of this step, refer to . &prompt.root; make delete-old-libs single-user mode If the system can have a window of down-time, consider compiling the system in single-user mode instead of compiling the system in multi-user mode, and then dropping into single-user mode for the installation. Reinstalling the system touches a lot of important system files, all the standard system binaries, libraries, and include files. Changing these on a running system, particularly one with active users, is asking for trouble. Configuration Files make.conf This build world process uses several configuration files. The Makefile located in /usr/src describes how the programs that comprise &os; should be built and the order in which they should be built. The options available to make are described in &man.make.conf.5; and some common examples are included in /usr/share/examples/etc/make.conf. Any options which are added to /etc/make.conf will control how make runs and builds programs. These options take effect every time make is used, including compiling applications from the Ports Collection, compiling custom C programs, or building the &os; operating system. Changes to some settings can have far-reaching and potentially surprising effects. Read the comments in both locations and keep in mind that the defaults have been chosen for a combination of performance and safety. src.conf How the operating system is built from source code is controlled by /etc/src.conf. Unlike /etc/make.conf, the contents of /etc/src.conf only take effect when the &os; operating system itself is being built. Descriptions of the many options available for this file are shown in &man.src.conf.5;. Be cautious about disabling seemingly unneeded kernel modules and build options. Sometimes there are unexpected or subtle interactions. Variables and Targets The general format for using make is as follows: &prompt.root; make -x -DVARIABLE target In this example, -x is an option passed to make. Refer to &man.make.1; for examples of the available options. To pass a variable, specify the variable name with -D. The behavior of the Makefile is controlled by variables. These can either be set in /etc/make.conf or they can be specified when using make. For example, this variable specifies that profiled libraries should not be built: &prompt.root; make -DNO_PROFILE target It corresponds with this setting in /etc/make.conf: NO_PROFILE= true # Avoid compiling profiled libraries The target tells make what to do and the Makefile defines the available targets. Some targets are used by the build process to break out the steps necessary to rebuild the system into a number of sub-steps.
Having separate options is useful for two reasons. First, it allows for a build that does not affect any components of a running system. Because of this, buildworld can be safely run on a machine running in multi-user mode. It is still recommended that installworld be run in part in single-user mode, though. Secondly, it allows NFS mounts to be used to upgrade multiple machines on a network, as described in . It is possible to specify which will cause make to spawn several simultaneous processes. Since much of the compiling process is I/O-bound rather than CPU-bound, this is useful on both single CPU and multi-CPU machines. On a single-CPU machine, run the following command to have up to 4 processes running at any one time. Empirical evidence posted to the mailing lists shows this generally gives the best performance benefit. &prompt.root; make -j4 buildworld On a multi-CPU machine, try values between 6 and 10 to see how they speed things up. rebuilding world timings If any variables were specified to make buildworld, specify the same variables to make installworld. However, must never be used with installworld. For example, if this command was used: &prompt.root; make -DNO_PROFILE buildworld Install the results with: &prompt.root; make -DNO_PROFILE installworld Otherwise, the second command will try to install profiled libraries that were not built during the make buildworld phase. Merging Configuration Files Tom Rhodes Contributed by mergemaster &os; provides the &man.mergemaster.8; Bourne script to aid in determining the differences between the configuration files in /etc, and the configuration files in /usr/src/etc. This is the recommended solution for keeping the system configuration files up to date with those located in the source tree. Before using mergemaster, it is recommended to first copy the existing /etc somewhere safe. Include which does a recursive copy and which preserves times and the ownerships on files: &prompt.root; cp -Rp /etc /etc.old When run, mergemaster builds a temporary root environment, from / down, and populates it with various system configuration files. Those files are then compared to the ones currently installed in the system. Files that differ will be shown in &man.diff.1; format, with the sign representing added or modified lines, and representing lines that will be either removed completely or replaced with a new file. Refer to &man.diff.1; for more information about how file differences are shown. Next, mergemaster will display each file that differs, and present options to: delete the new file, referred to as the temporary file, install the temporary file in its unmodified state, merge the temporary file with the currently installed file, or view the results again. Choosing to delete the temporary file will tell mergemaster to keep the current file unchanged and to delete the new version. This option is not recommended. To get help at any time, type ? at the mergemaster prompt. If the user chooses to skip a file, it will be presented again after all other files have been dealt with. Choosing to install the unmodified temporary file will replace the current file with the new one. For most unmodified files, this is the best option. Choosing to merge the file will present a text editor, and the contents of both files. The files can be merged by reviewing both files side by side on the screen, and choosing parts from both to create a finished product. 
When the files are compared side by side, l selects the left contents and r selects contents from the right. The final output will be a file consisting of both parts, which can then be installed. This option is customarily used for files where settings have been modified by the user. Choosing to view the results again will redisplay the file differences. After mergemaster is done with the system files, it will prompt for other options. It may prompt to rebuild the password file and will finish up with an option to remove left-over temporary files. Deleting Obsolete Files and Libraries Anton Shterenlikht Based on notes provided by Deleting obsolete files and directories As a part of the &os; development lifecycle, files and their contents occasionally become obsolete. This may be because functionality is implemented elsewhere, the version number of the library has changed, or it was removed from the system entirely. These obsoleted files, libraries, and directories should be removed when updating the system. This ensures that the system is not cluttered with old files which take up unnecessary space on the storage and backup media. Additionally, if the old library has a security or stability issue, the system should be updated to the newer library to keep it safe and to prevent crashes caused by the old library. Files, directories, and libraries which are considered obsolete are listed in /usr/src/ObsoleteFiles.inc. The following instructions should be used to remove obsolete files during the system upgrade process. After the make installworld and the subsequent mergemaster have finished successfully, check for obsolete files and libraries: &prompt.root; cd /usr/src &prompt.root; make check-old If any obsolete files are found, they can be deleted using the following command: &prompt.root; make delete-old A prompt is displayed before deleting each obsolete file. To skip the prompt and let the system remove these files automatically, use BATCH_DELETE_OLD_FILES: &prompt.root; make -DBATCH_DELETE_OLD_FILES delete-old The same goal can be achieved by piping these commands through yes: &prompt.root; yes|make delete-old Warning Deleting obsolete files will break applications that still depend on those obsolete files. This is especially true for old libraries. In most cases, the programs, ports, or libraries that used the old library need to be recompiled before make delete-old-libs is executed. Utilities for checking shared library dependencies include sysutils/libchk and sysutils/bsdadminscripts. Obsolete shared libraries can conflict with newer libraries, causing messages like these: /usr/bin/ld: warning: libz.so.4, needed by /usr/local/lib/libtiff.so, may conflict with libz.so.5 /usr/bin/ld: warning: librpcsvc.so.4, needed by /usr/local/lib/libXext.so, may conflict with librpcsvc.so.5 To solve these problems, determine which port installed the library: &prompt.root; pkg which /usr/local/lib/libtiff.so /usr/local/lib/libtiff.so was installed by package tiff-3.9.4 &prompt.root; pkg which /usr/local/lib/libXext.so /usr/local/lib/libXext.so was installed by package libXext-1.1.1,1 Then deinstall, rebuild, and reinstall the port. To automate this process, ports-mgmt/portmaster can be used. After all ports are rebuilt and no longer use the old libraries, delete the old libraries using the following command: &prompt.root; make delete-old-libs If something goes wrong, it is easy to rebuild a particular piece of the system. 
For example, if /etc/magic was accidentally deleted as part of the upgrade or merge of /etc, file will stop working. To fix this, run: &prompt.root; cd /usr/src/usr.bin/file &prompt.root; make all install Common Questions Do I need to re-make the world for every change? It depends upon the nature of the change. For example, if svn only shows the following files as being updated: src/games/cribbage/instr.c src/games/sail/pl_main.c src/release/sysinstall/config.c src/release/sysinstall/media.c src/share/mk/bsd.port.mk it probably is not worth rebuilding the entire world. Instead, go into the appropriate sub-directories and run make all install. But if something major changes, such as src/lib/libc/stdlib, consider rebuilding world. Some users rebuild world every fortnight and let changes accumulate over that fortnight. Others only re-make those things that have changed and are careful to spot all the dependencies. It all depends on how often a user wants to upgrade and whether they are tracking &os.stable; or &os.current;. What would cause a compile to fail with lots of signal 11 signal 11 (or other signal number) errors? This normally indicates a hardware problem. Building world is an effective way to stress test hardware, especially memory. A sure indicator of a hardware issue is when make is restarted and it dies at a different point in the process. To resolve this error, swap out the components in the machine, starting with RAM, to determine which component is failing. Can /usr/obj be removed when finished? This directory contains all the object files that were produced during the compilation phase. Normally, one of the first steps in the make buildworld process is to remove this directory and start afresh. Keeping /usr/obj around when finished makes little sense, and its removal frees up approximately 2GB of disk space. Can interrupted builds be resumed? This depends on how far into the process the problem occurs. In general, make buildworld builds new copies of essential tools and the system libraries. These tools and libraries are then installed, used to rebuild themselves, and are installed again. The rest of the system is then rebuilt with the new system tools. During the last stage, it is fairly safe to run these commands as they will not undo the work of the previous make buildworld: &prompt.root; cd /usr/src &prompt.root; make -DNO_CLEAN all If this message appears: -------------------------------------------------------------- Building everything.. -------------------------------------------------------------- in the make buildworld output, it is probably fairly safe to do so. If that message is not displayed, it is always better to be safe than sorry and to restart the build from scratch. Is it possible to speed up making the world? Several actions can speed up the build world process. For example, the entire process can be run from single-user mode. However, this will prevent users from having access to the system until the process is complete. Careful file system design or the use of ZFS datasets can make a difference. Consider putting /usr/src and /usr/obj on separate file systems. If possible, place the file systems on separate disks on separate disk controllers. When mounting /usr/src, use noatime, which prevents the file system from recording the file access time. If /usr/src is not on its own file system, consider remounting /usr with noatime.
The file system holding /usr/obj can be mounted or remounted with async so that disk writes happen asynchronously. The write completes immediately, and the data is written to the disk a few seconds later. This allows writes to be clustered together, and can provide a dramatic performance boost. Keep in mind that this option makes the file system more fragile. With this option, there is an increased chance that, should power fail, the file system will be in an unrecoverable state when the machine restarts. If /usr/obj is the only directory on this file system, this is not a problem. If you have other, valuable data on the same file system, ensure that there are verified backups before enabling this option. Turn off profiling by setting NO_PROFILE=true in /etc/make.conf. Pass -j to &man.make.1; to run multiple processes in parallel. This usually helps on both single- and multi-processor machines. What if something goes wrong? First, make absolutely sure that the environment has no extraneous cruft from earlier builds: &prompt.root; chflags -R noschg /usr/obj/usr &prompt.root; rm -rf /usr/obj/usr &prompt.root; cd /usr/src &prompt.root; make cleandir &prompt.root; make cleandir Yes, make cleandir really should be run twice. Then, restart the whole process, starting with make buildworld. If problems persist, send the error and the output of uname -a to &a.questions;. Be prepared to answer other questions about the setup! Tracking for Multiple Machines Mike Meyer Contributed by NFS installing multiple machines When multiple machines need to track the same source tree, it is a waste of disk space, network bandwidth, and CPU cycles to have each system download the sources and rebuild everything. The solution is to have one machine do most of the work, while the rest of the machines mount that work via NFS. This section outlines a method of doing so. For more information about using NFS, refer to . First, identify a set of machines which will run the same set of binaries, known as a build set. Each machine can have a custom kernel, but will run the same userland binaries. From that set, choose a machine to be the build machine that the world and kernel are built on. Ideally, this is a fast machine that has sufficient spare CPU to run make buildworld and make buildkernel. Select a machine to be the test machine, which will test software updates before they are put into production. This must be a machine that can afford to be down for an extended period of time. It can be the build machine, but need not be. All the machines in this build set need to mount /usr/obj and /usr/src from the build machine via NFS. For multiple build sets, /usr/src should be on one build machine, and NFS mounted on the rest. Ensure that /etc/make.conf and /etc/src.conf on all the machines in the build set agree with the build machine. That means that the build machine must build all the parts of the base system that any machine in the build set is going to install. Also, each machine should have its kernel name set with KERNCONF in /etc/make.conf, and the build machine should list them all in its KERNCONF, listing its own kernel first. The build machine must have the kernel configuration files for each machine in its /usr/src/sys/arch/conf. On the build machine, build the kernel and world as described in , but do not install anything on the build machine. Instead, install the built kernel on the test machine. On the test machine, mount /usr/src and /usr/obj via NFS.
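For illustration, assuming the build machine is reachable under the hypothetical name buildhost, the corresponding /etc/fstab entries on the test machine might look like this, where the mount options are only an example: buildhost:/usr/src /usr/src nfs rw 0 0 buildhost:/usr/obj /usr/obj nfs rw 0 0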
Then, run shutdown now to go to single-user mode in order to install the new kernel and world and run mergemaster as usual. When done, reboot to return to normal multi-user operations. After verifying that everything on the test machine is working properly, use the same procedure to install the new software on each of the other machines in the build set. The same methodology can be used for the ports tree. The first step is to share /usr/ports via NFS to all the machines in the build set. To configure /etc/make.conf to share distfiles, set DISTDIR to a common shared directory that is writable by whichever user root is mapped to by the NFS mount. Each machine should set WRKDIRPREFIX to a local build directory, if ports are to be built locally. Alternately, if the build system is to build and distribute packages to the machines in the build set, set PACKAGES on the build system to a directory similar to DISTDIR. Index: head/en_US.ISO8859-1/books/handbook/disks/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/disks/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/disks/chapter.xml (revision 46272) @@ -1,3575 +1,3575 @@ Storage Synopsis This chapter covers the use of disks and storage media in &os;. This includes SCSI and IDE disks, CD and DVD media, memory-backed disks, and USB storage devices. After reading this chapter, you will know: How to add additional hard disks to a &os; system. How to grow the size of a disk's partition on &os;. How to configure &os; to use USB storage devices. How to use CD and DVD media on a &os; system. How to use the backup programs available under &os;. How to set up memory disks. What file system snapshots are and how to use them efficiently. How to use quotas to limit disk space usage. How to encrypt disks and swap to secure them against attackers. How to configure a highly available storage network. Before reading this chapter, you should: Know how to configure and install a new &os; kernel. Adding Disks David O'Brien Originally contributed by disks adding This section describes how to add a new SATA disk to a machine that currently only has a single drive. First, turn off the computer and install the drive in the computer following the instructions of the computer, controller, and drive manufacturers. Reboot the system and become root. Inspect /var/run/dmesg.boot to ensure the new disk was found. In this example, the newly added SATA drive will appear as ada1. partitions gpart For this example, a single large partition will be created on the new disk. The GPT partitioning scheme will be used in preference to the older and less versatile MBR scheme. If the disk to be added is not blank, old partition information can be removed with gpart delete. See &man.gpart.8; for details. The partition scheme is created, and then a single partition is added: &prompt.root; gpart create -s GPT ada1 &prompt.root; gpart add -t freebsd-ufs ada1 Depending on use, several smaller partitions may be desired. See &man.gpart.8; for options to create partitions smaller than a whole disk. 
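As a hypothetical sketch of that note, the new disk could instead be split into a fixed-size partition and a second partition taking the remaining space; the 20G figure is only an example: &prompt.root; gpart add -t freebsd-ufs -s 20G ada1 &prompt.root; gpart add -t freebsd-ufs ada1 The rest of this example assumes the single whole-disk partition created above.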
A file system is created on the new blank disk: &prompt.root; newfs -U /dev/ada1p1 An empty directory is created as a mountpoint, a location for mounting the new disk in the original disk's file system: &prompt.root; mkdir /newdisk Finally, an entry is added to /etc/fstab so the new disk will be mounted automatically at startup: /dev/ada1p1 /newdisk ufs rw 2 2 The new disk can be mounted manually, without restarting the system: &prompt.root; mount /newdisk Resizing and Growing Disks Allan Jude Originally contributed by disks resizing A disk's capacity can increase without any changes to the data already present. This happens commonly with virtual machines, when the virtual disk turns out to be too small and is enlarged. Sometimes a disk image is written to a USB memory stick, but does not use the full capacity. Here we describe how to resize or grow disk contents to take advantage of increased capacity. Determine the device name of the disk to be resized by inspecting /var/run/dmesg.boot. In this example, there is only one SATA disk in the system, so the drive will appear as ada0. partitions gpart List the partitions on the disk to see the current configuration: &prompt.root; gpart show ada0 => 34 83886013 ada0 GPT (48G) [CORRUPT] 34 128 1 freebsd-boot (64k) 162 79691648 2 freebsd-ufs (38G) 79691810 4194236 3 freebsd-swap (2G) 83886046 1 - free - (512B) If the disk was formatted with the GPT partitioning scheme, it may show as corrupted because the GPT backup partition table is no longer at the end of the drive. Fix the backup partition table with gpart: &prompt.root; gpart recover ada0 ada0 recovered Now the additional space on the disk is available for use by a new partition, or an existing partition can be expanded: &prompt.root; gpart show ada0 => 34 102399933 ada0 GPT (48G) 34 128 1 freebsd-boot (64k) 162 79691648 2 freebsd-ufs (38G) 79691810 4194236 3 freebsd-swap (2G) 83886046 18513921 - free - (8.8G) Partitions can only be resized into contiguous free space. Here, the last partition on the disk is the swap partition, but the second partition is the one that needs to be resized. Swap partitions only contain temporary data, so it can safely be unmounted, deleted, and then recreated after resizing other partitions. &prompt.root; swapoff /dev/ada0p3 &prompt.root; gpart delete -i 3 ada0 ada0p3 deleted &prompt.root; gpart show ada0 => 34 102399933 ada0 GPT (48G) 34 128 1 freebsd-boot (64k) 162 79691648 2 freebsd-ufs (38G) 79691810 22708157 - free - (10G) There is risk of data loss when modifying the partition table of a mounted file system. It is best to perform the following steps on an unmounted file system while running from a live CD-ROM or USB device. However, if absolutely necessary, a mounted file system can be resized after disabling GEOM safety features: &prompt.root; sysctl kern.geom.debugflags=16 Resize the partition, leaving room to recreate a swap partition of the desired size. This only modifies the size of the partition. The file system in the partition will be expanded in a separate step. 
&prompt.root; gpart resize -i 2 -a 4k -s 47G ada0 ada0p2 resized &prompt.root; gpart show ada0 => 34 102399933 ada0 GPT (48G) 34 128 1 freebsd-boot (64k) 162 98566144 2 freebsd-ufs (47G) 98566306 3833661 - free - (1.8G) Recreate the swap partition: &prompt.root; gpart add -t freebsd-swap -a 4k ada0 ada0p3 added &prompt.root; gpart show ada0 => 34 102399933 ada0 GPT (48G) 34 128 1 freebsd-boot (64k) 162 98566144 2 freebsd-ufs (47G) 98566306 3833661 3 freebsd-swap (1.8G) &prompt.root; swapon /dev/ada0p3 Grow the UFS file system to use the new capacity of the resized partition: Growing a live UFS file system is only possible in &os; 10.0-RELEASE and later. For earlier versions, the file system must not be mounted. &prompt.root; growfs /dev/ada0p2 Device is mounted read-write; resizing will result in temporary write suspension for /. It's strongly recommended to make a backup before growing the file system. OK to grow file system on /dev/ada0p2, mounted on /, from 38GB to 47GB? [Yes/No] Yes super-block backups (for fsck -b #) at: 80781312, 82063552, 83345792, 84628032, 85910272, 87192512, 88474752, 89756992, 91039232, 92321472, 93603712, 94885952, 96168192, 97450432 Both the partition and the file system on it have now been resized to use the newly-available disk space. <acronym>USB</acronym> Storage Devices Marc Fonvieille Contributed by USB disks Many external storage solutions, such as hard drives, USB thumbdrives, and CD and DVD burners, use the Universal Serial Bus (USB). &os; provides support for USB 1.x, 2.0, and 3.0 devices. USB 3.0 support is not compatible with some hardware, including Haswell (Lynx point) chipsets. If &os; boots with a failed with error 19 message, disable xHCI/USB3 in the system BIOS. Support for USB storage devices is built into the GENERIC kernel. For a custom kernel, be sure that the following lines are present in the kernel configuration file: device scbus # SCSI bus (required for ATA/SCSI) device da # Direct Access (disks) device pass # Passthrough device (direct ATA/SCSI access) device uhci # provides USB 1.x support device ohci # provides USB 1.x support device ehci # provides USB 2.0 support device xhci # provides USB 3.0 support device usb # USB Bus (required) device umass # Disks/Mass storage - Requires scbus and da device cd # needed for CD and DVD burners &os; uses the &man.umass.4; driver which uses the SCSI subsystem to access USB storage devices. Since any USB device will be seen as a SCSI device by the system, if the USB device is a CD or DVD burner, do not include in a custom kernel configuration file. The rest of this section demonstrates how to verify that a USB storage device is recognized by &os; and how to configure the device so that it can be used. Device Configuration To test the USB configuration, plug in the USB device. Use dmesg to confirm that the drive appears in the system message buffer. It should look something like this: umass0: <STECH Simple Drive, class 0/0, rev 2.00/1.04, addr 3> on usbus0 umass0: SCSI over Bulk-Only; quirks = 0x0100 umass0:4:0:-1: Attached to scbus4 da0 at umass-sim0 bus 0 scbus4 target 0 lun 0 da0: <STECH Simple Drive 1.04> Fixed Direct Access SCSI-4 device da0: Serial Number WD-WXE508CAN263 da0: 40.000MB/s transfers da0: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C) da0: quirks=0x2<NO_6_BYTE> The brand, device node (da0), speed, and size will differ according to the device. 
Since the USB device is seen as a SCSI one, camcontrol can be used to list the USB storage devices attached to the system: &prompt.root; camcontrol devlist <STECH Simple Drive 1.04> at scbus4 target 0 lun 0 (pass3,da0) Alternately, usbconfig can be used to list the device. Refer to &man.usbconfig.8; for more information about this command. &prompt.root; usbconfig ugen0.3: <Simple Drive STECH> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (2mA) If the device has not been formatted, refer to for instructions on how to format and create partitions on the USB drive. If the drive comes with a file system, it can be mounted by root using the instructions in . Allowing untrusted users to mount arbitrary media, by enabling vfs.usermount as described below, should not be considered safe from a security point of view. Most file systems were not built to safeguard against malicious devices. To make the device mountable as a normal user, one solution is to make all users of the device a member of the operator group using &man.pw.8;. Next, ensure that operator is able to read and write the device by adding these lines to /etc/devfs.rules: [localrules=5] add path 'da*' mode 0660 group operator If internal SCSI disks are also installed in the system, change the second line as follows: add path 'da[3-9]*' mode 0660 group operator This will exclude the first three SCSI disks (da0 to da2) from belonging to the operator group. Replace 3 with the number of internal SCSI disks. Refer to &man.devfs.rules.5; for more information about this file. Next, enable the ruleset in /etc/rc.conf: devfs_system_ruleset="localrules" Then, instruct the system to allow regular users to mount file systems by adding the following line to /etc/sysctl.conf: vfs.usermount=1 Since this only takes effect after the next reboot, use sysctl to set this variable now: &prompt.root; sysctl vfs.usermount=1 vfs.usermount: 0 -> 1 The final step is to create a directory where the file system is to be mounted. This directory needs to be owned by the user that is to mount the file system. One way to do that is for root to create a subdirectory owned by that user as /mnt/username. In the following example, replace username with the login name of the user and usergroup with the user's primary group: &prompt.root; mkdir /mnt/username &prompt.root; chown username:usergroup /mnt/username Suppose a USB thumbdrive is plugged in, and a device /dev/da0s1 appears. If the device is formatted with a FAT file system, the user can mount it using: &prompt.user; mount -t msdosfs -o -m=644,-M=755 /dev/da0s1 /mnt/username Before the device can be unplugged, it must be unmounted first: &prompt.user; umount /mnt/username After device removal, the system message buffer will show messages similar to the following: umass0: at uhub3, port 2, addr 3 (disconnected) da0 at umass-sim0 bus 0 scbus4 target 0 lun 0 da0: <STECH Simple Drive 1.04> s/n WD-WXE508CAN263 detached (da0:umass-sim0:0:0:0): Periph destroyed Creating and Using <acronym>CD</acronym> Media Mike Meyer Contributed by CD-ROMs creating Compact Disc (CD) media provide a number of features that differentiate them from conventional disks. They are designed so that they can be read continuously without delays to move the head between tracks. While CD media do have tracks, these refer to a section of data to be read continuously, and not a physical property of the disk. The ISO 9660 file system was designed to deal with these differences.
ISO 9660 file systems ISO 9660 CD burner ATAPI The &os; Ports Collection provides several utilities for burning and duplicating audio and data CDs. This chapter demonstrates the use of several command line utilities. For CD burning software with a graphical utility, consider installing the sysutils/xcdroast or sysutils/k3b packages or ports. Supported Devices Marc Fonvieille Contributed by CD burner ATAPI/CAM driver The GENERIC kernel provides support for SCSI, USB, and ATAPI CD readers and burners. If a custom kernel is used, the options that need to be present in the kernel configuration file vary by the type of device. For a SCSI burner, make sure these options are present: device scbus # SCSI bus (required for ATA/SCSI) device da # Direct Access (disks) device pass # Passthrough device (direct ATA/SCSI access) device cd # needed for CD and DVD burners For a USB burner, make sure these options are present: device scbus # SCSI bus (required for ATA/SCSI) device da # Direct Access (disks) device pass # Passthrough device (direct ATA/SCSI access) device cd # needed for CD and DVD burners device uhci # provides USB 1.x support device ohci # provides USB 1.x support device ehci # provides USB 2.0 support device xhci # provides USB 3.0 support device usb # USB Bus (required) device umass # Disks/Mass storage - Requires scbus and da For an ATAPI burner, make sure these options are present: device ata # Legacy ATA/SATA controllers device scbus # SCSI bus (required for ATA/SCSI) device pass # Passthrough device (direct ATA/SCSI access) device cd # needed for CD and DVD burners On &os; versions prior to 10.x, this line is also needed in the kernel configuration file if the burner is an ATAPI device: device atapicam Alternately, this driver can be loaded at boot time by adding the following line to /boot/loader.conf: atapicam_load="YES" This will require a reboot of the system as this driver can only be loaded at boot time. To verify that &os; recognizes the device, run dmesg and look for an entry for the device. On systems prior to 10.x, the device name in the first line of the output will be acd0 instead of cd0. &prompt.user; dmesg | grep cd cd0 at ahcich1 bus 0 scbus1 target 0 lun 0 cd0: <HL-DT-ST DVDRAM GU70N LT20> Removable CD-ROM SCSI-0 device cd0: Serial Number M3OD3S34152 cd0: 150.000MB/s transfers (SATA 1.x, UDMA6, ATAPI 12bytes, PIO 8192bytes) cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closed Burning a <acronym>CD</acronym> In &os;, cdrecord can be used to burn CDs. This command is installed with the sysutils/cdrtools package or port. &os; 8.x includes the built-in burncd utility for burning CDs using an ATAPI CD burner. Refer to the manual page for burncd for usage examples. While cdrecord has many options, basic usage is simple. 
Specify the name of the ISO file to burn and, if the system has multiple burner devices, specify the name of the device to use: &prompt.root; cdrecord dev=device imagefile.iso To determine the device name of the burner, use which might produce results like this: CD-ROMs burning &prompt.root; cdrecord -scanbus ProDVD-ProBD-Clone 3.00 (amd64-unknown-freebsd10.0) Copyright (C) 1995-2010 Jörg Schilling Using libscg version 'schily-0.9' scsibus0: 0,0,0 0) 'SEAGATE ' 'ST39236LW ' '0004' Disk 0,1,0 1) 'SEAGATE ' 'ST39173W ' '5958' Disk 0,2,0 2) * 0,3,0 3) 'iomega ' 'jaz 1GB ' 'J.86' Removable Disk 0,4,0 4) 'NEC ' 'CD-ROM DRIVE:466' '1.26' Removable CD-ROM 0,5,0 5) * 0,6,0 6) * 0,7,0 7) * scsibus1: 1,0,0 100) * 1,1,0 101) * 1,2,0 102) * 1,3,0 103) * 1,4,0 104) * 1,5,0 105) 'YAMAHA ' 'CRW4260 ' '1.0q' Removable CD-ROM 1,6,0 106) 'ARTEC ' 'AM12S ' '1.06' Scanner 1,7,0 107) * Locate the entry for the CD burner and use the three numbers separated by commas as the value for . In this case, the Yamaha burner device is 1,5,0, so the appropriate input to specify that device is . Refer to the manual page for cdrecord for other ways to specify this value and for information on writing audio tracks and controlling the write speed. Alternately, run the following command to get the device address of the burner: &prompt.root; camcontrol devlist <MATSHITA CDRW/DVD UJDA740 1.00> at scbus1 target 0 lun 0 (cd0,pass0) Use the numeric values for scbus, target, and lun. For this example, 1,0,0 is the device name to use. Writing Data to an <acronym>ISO</acronym> File System In order to produce a data CD, the data files that are going to make up the tracks on the CD must be prepared before they can be burned to the CD. In &os;, sysutils/cdrtools installs mkisofs, which can be used to produce an ISO 9660 file system that is an image of a directory tree within a &unix; file system. The simplest usage is to specify the name of the ISO file to create and the path to the files to place into the ISO 9660 file system: &prompt.root; mkisofs -o imagefile.iso /path/to/tree file systems ISO 9660 This command maps the file names in the specified path to names that fit the limitations of the standard ISO 9660 file system, and will exclude files that do not meet the standard for ISO file systems. file systems Joliet A number of options are available to overcome the restrictions imposed by the standard. In particular, enables the Rock Ridge extensions common to &unix; systems and enables Joliet extensions used by µsoft; systems. For CDs that are going to be used only on &os; systems, can be used to disable all filename restrictions. When used with , it produces a file system image that is identical to the specified &os; tree, even if it violates the ISO 9660 standard. CD-ROMs creating bootable The last option of general use is . This is used to specify the location of a boot image for use in producing an El Torito bootable CD. This option takes an argument which is the path to a boot image from the top of the tree being written to the CD. By default, mkisofs creates an ISO image in floppy disk emulation mode, and thus expects the boot image to be exactly 1200, 1440 or 2880 KB in size. Some boot loaders, like the one used by the &os; distribution media, do not use emulation mode. In this case, should be used. 
So, if /tmp/myboot holds a bootable &os; system with the boot image in /tmp/myboot/boot/cdboot, this command would produce /tmp/bootable.iso: &prompt.root; mkisofs -R -no-emul-boot -b boot/cdboot -o /tmp/bootable.iso /tmp/myboot The resulting ISO image can be mounted as a memory disk with: &prompt.root; mdconfig -a -t vnode -f /tmp/bootable.iso -u 0 &prompt.root; mount -t cd9660 /dev/md0 /mnt One can then verify that /mnt and /tmp/myboot are identical. There are many other options available for mkisofs to fine-tune its behavior. Refer to &man.mkisofs.8; for details. It is possible to copy a data CD to an image file that is functionally equivalent to the image file created with mkisofs. To do so, use dd with the device name as the input file and the name of the ISO to create as the output file: &prompt.root; dd if=/dev/cd0 of=file.iso bs=2048 The resulting image file can be burned to CD as described in . Using Data <acronym>CD</acronym>s Once an ISO has been burned to a CD, it can be mounted by specifying the file system type, the name of the device containing the CD, and an existing mount point: &prompt.root; mount -t cd9660 /dev/cd0 /mnt Since mount assumes that a file system is of type ufs, a Incorrect super block error will occur if -t cd9660 is not included when mounting a data CD. While any data CD can be mounted this way, disks with certain ISO 9660 extensions might behave oddly. For example, Joliet disks store all filenames in two-byte Unicode characters. If some non-English characters show up as question marks, specify the local charset with . For more information, refer to &man.mount.cd9660.8;. In order to do this character conversion with the help of , the kernel requires the cd9660_iconv.ko module to be loaded. This can be done either by adding this line to loader.conf: cd9660_iconv_load="YES" and then rebooting the machine, or by directly loading the module with kldload. Occasionally, Device not configured will be displayed when trying to mount a data CD. This usually means that the CD drive thinks that there is no disk in the tray, or that the drive is not visible on the bus. It can take a couple of seconds for a CD drive to realize that a media is present, so be patient. Sometimes, a SCSI CD drive may be missed because it did not have enough time to answer the bus reset. To resolve this, a custom kernel can be created which increases the default SCSI delay. Add the following option to the custom kernel configuration file and rebuild the kernel using the instructions in : options SCSI_DELAY=15000 This tells the SCSI bus to pause 15 seconds during boot, to give the CD drive every possible chance to answer the bus reset. It is possible to burn a file directly to CD, without creating an ISO 9660 file system. This is known as burning a raw data CD and some people do this for backup purposes. This type of disk can not be mounted as a normal data CD. In order to retrieve the data burned to such a CD, the data must be read from the raw device node. For example, this command will extract a compressed tar file located on the second CD device into the current working directory: &prompt.root; tar xzvf /dev/cd1 In order to mount a data CD, the data must be written using mkisofs. Duplicating Audio <acronym>CD</acronym>s To duplicate an audio CD, extract the audio data from the CD to a series of files, then write these files to a blank CD. describes how to duplicate and burn an audio CD. 
If the &os; version is less than 10.0 and the device is ATAPI, the module must be first loaded using the instructions in . Duplicating an Audio <acronym>CD</acronym> The sysutils/cdrtools package or port installs cdda2wav. This command can be used to extract all of the audio tracks, with each track written to a separate WAV file in the current working directory: &prompt.user; cdda2wav -vall -B -Owav A device name does not need to be specified if there is only one CD device on the system. Refer to the cdda2wav manual page for instructions on how to specify a device and to learn more about the other options available for this command. Use cdrecord to write the .wav files: &prompt.user; cdrecord -v dev=2,0 -dao -useinfo *.wav Make sure that 2,0 is set appropriately, as described in . Creating and Using <acronym>DVD</acronym> Media Marc Fonvieille Contributed by Andy Polyakov With inputs from DVD burning Compared to the CD, the DVD is the next generation of optical media storage technology. The DVD can hold more data than any CD and is the standard for video publishing. Five physical recordable formats can be defined for a recordable DVD: DVD-R: This was the first DVD recordable format available. The DVD-R standard is defined by the DVD Forum. This format is write once. DVD-RW: This is the rewritable version of the DVD-R standard. A DVD-RW can be rewritten about 1000 times. DVD-RAM: This is a rewritable format which can be seen as a removable hard drive. However, this media is not compatible with most DVD-ROM drives and DVD-Video players as only a few DVD writers support the DVD-RAM format. Refer to for more information on DVD-RAM use. DVD+RW: This is a rewritable format defined by the DVD+RW Alliance. A DVD+RW can be rewritten about 1000 times. DVD+R: This format is the write once variation of the DVD+RW format. A single layer recordable DVD can hold up to 4,700,000,000 bytes which is actually 4.38 GB or 4485 MB as 1 kilobyte is 1024 bytes. A distinction must be made between the physical media and the application. For example, a DVD-Video is a specific file layout that can be written on any recordable DVD physical media such as DVD-R, DVD+R, or DVD-RW. Before choosing the type of media, ensure that both the burner and the DVD-Video player are compatible with the media under consideration. Configuration To perform DVD recording, use &man.growisofs.1;. This command is part of the sysutils/dvd+rw-tools utilities which support all DVD media types. These tools use the SCSI subsystem to access the devices, therefore ATAPI/CAM support must be loaded or statically compiled into the kernel. This support is not needed if the burner uses the USB interface. Refer to for more details on USB device configuration. DMA access must also be enabled for ATAPI devices, by adding the following line to /boot/loader.conf: hw.ata.atapi_dma="1" Before attempting to use dvd+rw-tools, consult the Hardware Compatibility Notes. For a graphical user interface, consider using sysutils/k3b which provides a user friendly interface to &man.growisofs.1; and many other burning tools. Burning Data <acronym>DVD</acronym>s Since &man.growisofs.1; is a front-end to mkisofs, it will invoke &man.mkisofs.8; to create the file system layout and perform the write on the DVD. This means that an image of the data does not need to be created before the burning process. 
To burn to a DVD+R or a DVD-R the data in /path/to/data, use the following command: &prompt.root; growisofs -dvd-compat -Z /dev/cd0 -J -R /path/to/data In this example, is passed to &man.mkisofs.8; to create an ISO 9660 file system with Joliet and Rock Ridge extensions. Refer to &man.mkisofs.8; for more details. For the initial session recording, is used for both single and multiple sessions. Replace /dev/cd0, with the name of the DVD device. Using indicates that the disk will be closed and that the recording will be unappendable. This should also provide better media compatibility with DVD-ROM drives. To burn a pre-mastered image, such as imagefile.iso, use: &prompt.root; growisofs -dvd-compat -Z /dev/cd0=imagefile.iso The write speed should be detected and automatically set according to the media and the drive being used. To force the write speed, use . Refer to &man.growisofs.1; for example usage. In order to support working files larger than 4.38GB, an UDF/ISO-9660 hybrid file system must be created by passing to &man.mkisofs.8; and all related programs, such as &man.growisofs.1;. This is required only when creating an ISO image file or when writing files directly to a disk. Since a disk created this way must be mounted as an UDF file system with &man.mount.udf.8;, it will be usable only on an UDF aware operating system. Otherwise it will look as if it contains corrupted files. To create this type of ISO file: &prompt.user; mkisofs -R -J -udf -iso-level 3 -o imagefile.iso /path/to/data To burn files directly to a disk: &prompt.root; growisofs -dvd-compat -udf -iso-level 3 -Z /dev/cd0 -J -R /path/to/data When an ISO image already contains large files, no additional options are required for &man.growisofs.1; to burn that image on a disk. Be sure to use an up-to-date version of sysutils/cdrtools, which contains &man.mkisofs.8;, as an older version may not contain large files support. If the latest version does not work, install sysutils/cdrtools-devel and read its &man.mkisofs.8;. Burning a <acronym>DVD</acronym>-Video DVD DVD-Video A DVD-Video is a specific file layout based on the ISO 9660 and micro-UDF (M-UDF) specifications. Since DVD-Video presents a specific data structure hierarchy, a particular program such as multimedia/dvdauthor is needed to author the DVD. If an image of the DVD-Video file system already exists, it can be burned in the same way as any other image. If dvdauthor was used to make the DVD and the result is in /path/to/video, the following command should be used to burn the DVD-Video: &prompt.root; growisofs -Z /dev/cd0 -dvd-video /path/to/video is passed to &man.mkisofs.8; to instruct it to create a DVD-Video file system layout. This option implies the &man.growisofs.1; option. Using a <acronym>DVD+RW</acronym> DVD DVD+RW Unlike CD-RW, a virgin DVD+RW needs to be formatted before first use. It is recommended to let &man.growisofs.1; take care of this automatically whenever appropriate. However, it is possible to use dvd+rw-format to format the DVD+RW: &prompt.root; dvd+rw-format /dev/cd0 Only perform this operation once and keep in mind that only virgin DVD+RW medias need to be formatted. Once formatted, the DVD+RW can be burned as usual. To burn a totally new file system and not just append some data onto a DVD+RW, the media does not need to be blanked first. Instead, write over the previous recording like this: &prompt.root; growisofs -Z /dev/cd0 -J -R /path/to/newdata The DVD+RW format supports appending data to a previous recording. 
This operation consists of merging a new session to the existing one as it is not considered to be multi-session writing. &man.growisofs.1; will grow the ISO 9660 file system present on the media. For example, to append data to a DVD+RW, use the following: &prompt.root; growisofs -M /dev/cd0 -J -R /path/to/nextdata The same &man.mkisofs.8; options used to burn the initial session should be used during next writes. Use for better media compatibility with DVD-ROM drives. When using DVD+RW, this option will not prevent the addition of data. To blank the media, use: &prompt.root; growisofs -Z /dev/cd0=/dev/zero Using a <acronym>DVD-RW</acronym> DVD DVD-RW A DVD-RW accepts two disc formats: incremental sequential and restricted overwrite. By default, DVD-RW discs are in sequential format. A virgin DVD-RW can be directly written without being formatted. However, a non-virgin DVD-RW in sequential format needs to be blanked before writing a new initial session. To blank a DVD-RW in sequential mode: &prompt.root; dvd+rw-format -blank=full /dev/cd0 A full blanking using will take about one hour on a 1x media. A fast blanking can be performed using , if the DVD-RW will be recorded in Disk-At-Once (DAO) mode. To burn the DVD-RW in DAO mode, use the command: &prompt.root; growisofs -use-the-force-luke=dao -Z /dev/cd0=imagefile.iso Since &man.growisofs.1; automatically attempts to detect fast blanked media and engage DAO write, should not be required. One should instead use restricted overwrite mode with any DVD-RW as this format is more flexible than the default of incremental sequential. To write data on a sequential DVD-RW, use the same instructions as for the other DVD formats: &prompt.root; growisofs -Z /dev/cd0 -J -R /path/to/data To append some data to a previous recording, use with &man.growisofs.1;. However, if data is appended on a DVD-RW in incremental sequential mode, a new session will be created on the disc and the result will be a multi-session disc. A DVD-RW in restricted overwrite format does not need to be blanked before a new initial session. Instead, overwrite the disc with . It is also possible to grow an existing ISO 9660 file system written on the disc with . The result will be a one-session DVD. To put a DVD-RW in restricted overwrite format, the following command must be used: &prompt.root; dvd+rw-format /dev/cd0 To change back to sequential format, use: &prompt.root; dvd+rw-format -blank=full /dev/cd0 Multi-Session Few DVD-ROM drives support multi-session DVDs and most of the time only read the first session. DVD+R, DVD-R and DVD-RW in sequential format can accept multiple sessions. The notion of multiple sessions does not exist for the DVD+RW and the DVD-RW restricted overwrite formats. Using the following command after an initial non-closed session on a DVD+R, DVD-R, or DVD-RW in sequential format, will add a new session to the disc: &prompt.root; growisofs -M /dev/cd0 -J -R /path/to/nextdata Using this command with a DVD+RW or a DVD-RW in restricted overwrite mode will append data while merging the new session to the existing one. The result will be a single-session disc. Use this method to add data after an initial write on these types of media. Since some space on the media is used between each session to mark the end and start of sessions, one should add sessions with a large amount of data to optimize media space. The number of sessions is limited to 154 for a DVD+R, about 2000 for a DVD-R, and 127 for a DVD+R Double Layer. 
For More Information To obtain more information about a DVD, use dvd+rw-mediainfo /dev/cd0 while the disc is in the specified drive. More information about dvd+rw-tools can be found in &man.growisofs.1;, on the dvd+rw-tools web site, and in the cdwrite mailing list archives. When creating a problem report related to the use of dvd+rw-tools, always include the output of dvd+rw-mediainfo. Using a <acronym>DVD-RAM</acronym> DVD DVD-RAM DVD-RAM writers can use either a SCSI or ATAPI interface. For ATAPI devices, DMA access has to be enabled by adding the following line to /boot/loader.conf: hw.ata.atapi_dma="1" A DVD-RAM can be seen as a removable hard drive. Like any other hard drive, the DVD-RAM must be formatted before it can be used. In this example, the whole disk space will be formatted with a standard UFS2 file system: &prompt.root; dd if=/dev/zero of=/dev/acd0 bs=2k count=1 &prompt.root; bsdlabel -Bw acd0 &prompt.root; newfs /dev/acd0 The DVD device, acd0, must be changed according to the configuration. Once the DVD-RAM has been formatted, it can be mounted as a normal hard drive: &prompt.root; mount /dev/acd0 /mnt Once mounted, the DVD-RAM will be both readable and writeable. Creating and Using Floppy Disks This section explains how to format a 3.5 inch floppy disk in &os;. Steps to Format a Floppy A floppy disk needs to be low-level formatted before it can be used. This is usually done by the vendor, but formatting is a good way to check media integrity. To low-level format the floppy disk on &os;, use &man.fdformat.1;. When using this utility, make note of any error messages, as these can help determine if the disk is good or bad. To format the floppy, insert a new 3.5 inch floppy disk into the first floppy drive and issue: &prompt.root; /usr/sbin/fdformat -f 1440 /dev/fd0 After low-level formatting the disk, create a disk label as it is needed by the system to determine the size of the disk and its geometry. The supported geometry values are listed in /etc/disktab. To write the disk label, use &man.bsdlabel.8;: &prompt.root; /sbin/bsdlabel -B -w /dev/fd0 fd1440 The floppy is now ready to be high-level formatted with a file system. The floppy's file system can be either UFS or FAT, where FAT is generally a better choice for floppies. To format the floppy with FAT, issue: &prompt.root; /sbin/newfs_msdos /dev/fd0 The disk is now ready for use. To use the floppy, mount it with &man.mount.msdosfs.8;. One can also install and use emulators/mtools from the Ports Collection. Backup Basics Implementing a backup plan is essential in order to have the ability to recover from disk failure, accidental file deletion, random file corruption, or complete machine destruction, including destruction of on-site backups. The backup type and schedule will vary, depending upon the importance of the data, the granularity needed for file restores, and the amount of acceptable downtime. Some possible backup techniques include: Archives of the whole system, backed up onto permanent, off-site media. This provides protection against all of the problems listed above, but is slow and inconvenient to restore from, especially for non-privileged users. File system snapshots, which are useful for restoring deleted files or previous versions of files. Copies of whole file systems or disks which are synchronized with another system on the network using a scheduled net/rsync. Hardware or software RAID, which minimizes or avoids downtime when a disk fails. Typically, a mix of backup techniques is used.
For example, one could create a schedule to automate a weekly, full system backup that is stored off-site and to supplement this backup with hourly ZFS snapshots. In addition, one could make a manual backup of individual directories or files before making file edits or deletions. This section describes some of the utilities which can be used to create and manage backups on a &os; system. File System Backups backup software dump / restore dump restore The traditional &unix; programs for backing up a file system are &man.dump.8;, which creates the backup, and &man.restore.8;, which restores the backup. These utilities work at the disk block level, below the abstractions of the files, links, and directories that are created by file systems. Unlike other backup software, dump backs up an entire file system and is unable to back up only part of a file system or a directory tree that spans multiple file systems. Instead of writing files and directories, dump writes the raw data blocks that comprise files and directories. If dump is used on the root directory, it will not back up /home, /usr or many other directories since these are typically mount points for other file systems or symbolic links into those file systems. When used to restore data, restore stores temporary files in /tmp/ by default. When using a recovery disk with a small /tmp, set TMPDIR to a directory with more free space in order for the restore to succeed. When using dump, be aware that some quirks remain from its early days in Version 6 of AT&T &unix;, circa 1975. The default parameters assume a backup to a 9-track tape, rather than to another type of media or to the high-density tapes available today. These defaults must be overridden on the command line. .rhosts It is possible to back up a file system across the network to another system or to a tape drive attached to another computer. While the &man.rdump.8; and &man.rrestore.8; utilities can be used for this purpose, they are not considered to be secure. Instead, one can use dump and restore in a more secure fashion over an SSH connection. This example creates a full, compressed backup of /usr and sends the backup file to the specified host over an SSH connection. Using <command>dump</command> over <application>ssh</application> &prompt.root; /sbin/dump -0uan -f - /usr | gzip -2 | ssh -c blowfish \ targetuser@targetmachine.example.com dd of=/mybigfiles/dump-usr-l0.gz This example sets RSH in order to write the backup to a tape drive on a remote system over an SSH connection: Using <command>dump</command> over <application>ssh</application> with <envar>RSH</envar> Set &prompt.root; env RSH=/usr/bin/ssh /sbin/dump -0uan -f targetuser@targetmachine.example.com:/dev/sa0 /usr Directory Backups backup software tar Several built-in utilities are available for backing up and restoring specified files and directories as needed. A good choice for making a backup of all of the files in a directory is &man.tar.1;. This utility dates back to Version 6 of AT&T &unix; and by default assumes a recursive backup to a local tape device. Switches can be used to instead specify the name of a backup file. tar This example creates a compressed backup of the current directory and saves it to /tmp/mybackup.tgz. When creating a backup file, make sure that the backup is not saved to the same directory that is being backed up. Backing Up the Current Directory with <command>tar</command> &prompt.root; tar czvf /tmp/mybackup.tgz .
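Before restoring, the contents of an archive can be listed without extracting anything, using the same example file: &prompt.root; tar tzvf /tmp/mybackup.tgz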
To restore the entire backup, cd into the directory to restore into and specify the name of the backup. Note that this will overwrite any newer versions of files in the restore directory. When in doubt, restore to a temporary directory or specify the name of the file within the backup to restore. Restoring the Current Directory with <command>tar</command> &prompt.root; tar xzvf /tmp/mybackup.tgz There are dozens of available switches which are described in &man.tar.1;. This utility also supports the use of exclude patterns to specify which files should not be included when backing up the specified directory or restoring files from a backup. backup software cpio To create a backup using a specified list of files and directories, &man.cpio.1; is a good choice. Unlike tar, cpio does not know how to walk the directory tree and it must be provided the list of files to back up. For example, a list of files can be created using ls or find. This example creates a recursive listing of the current directory which is then piped to cpio in order to create an output backup file named /tmp/mybackup.cpio. Using <command>ls</command> and <command>cpio</command> to Make a Recursive Backup of the Current Directory &prompt.root; ls -R | cpio -ovF /tmp/mybackup.cpio backup software pax pax POSIX IEEE A backup utility which tries to bridge the features provided by tar and cpio is &man.pax.1;. Over the years, the various versions of tar and cpio became slightly incompatible. &posix; created pax which attempts to read and write many of the various cpio and tar formats, plus new formats of its own. The pax equivalent to the previous examples would be: Backing Up the Current Directory with <command>pax</command> &prompt.root; pax -wf /tmp/mybackup.pax . Using Data Tapes for Backups tape media While tape technology has continued to evolve, modern backup systems tend to combine off-site backups with local removable media. &os; supports any tape drive that uses SCSI, such as LTO or DAT. There is limited support for SATA and USB tape drives. For SCSI tape devices, &os; uses the &man.sa.4; driver and the /dev/sa0, /dev/nsa0, and /dev/esa0 devices. The physical device name is /dev/sa0. When /dev/nsa0 is used, the backup application will not rewind the tape after writing a file, which allows writing more than one file to a tape. Using /dev/esa0 ejects the tape after the device is closed. In &os;, mt is used to control operations of the tape drive, such as seeking through files on a tape or writing tape control marks to the tape. For example, the first three files on a tape can be preserved by skipping past them before writing a new file: &prompt.root; mt -f /dev/nsa0 fsf 3 This utility supports many operations. Refer to &man.mt.1; for details. To write a single file to tape using tar, specify the name of the tape device and the file to back up: &prompt.root; tar cvf /dev/sa0 file To recover files from a tar archive on tape into the current directory: &prompt.root; tar xvf /dev/sa0 To back up a UFS file system, use dump. This example backs up /usr without rewinding the tape when finished: &prompt.root; dump -0aL -b64 -f /dev/nsa0 /usr To interactively restore files from a dump file on tape into the current directory: &prompt.root; restore -i -f /dev/nsa0 Third-Party Backup Utilities backup software The &os; Ports Collection provides many third-party utilities which can be used to schedule the creation of backups, simplify tape backup, and make backups easier and more convenient.
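Most of these utilities can be installed as binary packages. For example, to install one of them (the package name here is only an illustration): &prompt.root; pkg install rsync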
Many of these applications are client/server based and can be used to automate the backups of a single system or all of the computers in a network. Popular utilities include Amanda, Bacula, rsync, and duplicity. Emergency Recovery In addition to regular backups, it is recommended to perform the following steps as part of an emergency preparedness plan. bsdlabel Create a print copy of the output of the following commands: gpart show more /etc/fstab dmesg livefs CD Store this printout and a copy of the installation media in a secure location. Should an emergency restore be needed, boot into the installation media and select Live CD to access a rescue shell. This rescue mode can be used to view the current state of the system, and if needed, to reformat disks and restore data from backups. The installation media for &os;/&arch.i386; &rel2.current;-RELEASE does not include a rescue shell. For this version, instead download and burn a Livefs CD image from ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/&arch.i386;/ISO-IMAGES/&rel2.current;/&os;-&rel2.current;-RELEASE-&arch.i386;-livefs.iso. Next, test the rescue shell and the backups. Make notes of the procedure. Store these notes with the media, the printouts, and the backups. These notes may prevent the inadvertent destruction of the backups while under the stress of performing an emergency recovery. For an added measure of security, store the latest backup at a remote location which is physically separated from the computers and disk drives by a significant distance. Memory Disks Marc Fonvieille Reorganized and enhanced by In addition to physical disks, &os; also supports the creation and use of memory disks. One possible use for a memory disk is to access the contents of an ISO file system without the overhead of first burning it to a CD or DVD, then mounting the CD/DVD media. In &os;, the &man.md.4; driver is used to provide support for memory disks. The GENERIC kernel includes this driver. When using a custom kernel configuration file, ensure it includes this line: device md Attaching and Detaching Existing Images disks memory To mount an existing file system image, use mdconfig to specify the name of the ISO file and a free unit number. Then, refer to that unit number to mount it on an existing mount point. Once mounted, the files in the ISO will appear in the mount point. This example attaches diskimage.iso to the memory device /dev/md0 then mounts that memory device on /mnt: &prompt.root; mdconfig -f diskimage.iso -u 0 &prompt.root; mount /dev/md0 /mnt If a unit number is not specified with , mdconfig will automatically allocate an unused memory device and output the name of the allocated unit, such as md4. Refer to &man.mdconfig.8; for more details about this command and its options. disks detaching a memory disk When a memory disk is no longer in use, its resources should be released back to the system. First, unmount the file system, then use mdconfig to detach the disk from the system and release its resources. To continue this example: &prompt.root; umount /mnt &prompt.root; mdconfig -d -u 0 To determine if any memory disks are still attached to the system, type mdconfig -l. Creating a File- or Memory-Backed Memory Disk disks memory file system &os; also supports memory disks where the storage to use is allocated from either a hard disk or an area of memory. The first method is commonly referred to as a file-backed file system and the second method as a memory-backed file system. Both types can be created using mdconfig. 
To create a new memory-backed file system, specify a type of swap and the size of the memory disk to create. Then, format the memory disk with a file system and mount as usual. This example creates a 5M memory disk on unit 1. That memory disk is then formatted with the UFS file system before it is mounted: &prompt.root; mdconfig -a -t swap -s 5m -u 1 &prompt.root; newfs -U md1 /dev/md1: 5.0MB (10240 sectors) block size 16384, fragment size 2048 using 4 cylinder groups of 1.27MB, 81 blks, 192 inodes. with soft updates super-block backups (for fsck -b #) at: 160, 2752, 5344, 7936 &prompt.root; mount /dev/md1 /mnt &prompt.root; df /mnt Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/md1 4718 4 4338 0% /mnt To create a new file-backed memory disk, first allocate an area of disk to use. This example creates an empty 5K file named newimage: &prompt.root; dd if=/dev/zero of=newimage bs=1k count=5k 5120+0 records in 5120+0 records out Next, attach that file to a memory disk, label the memory disk and format it with the UFS file system, mount the memory disk, and verify the size of the file-backed disk: &prompt.root; mdconfig -f newimage -u 0 &prompt.root; bsdlabel -w md0 auto &prompt.root; newfs md0a /dev/md0a: 5.0MB (10224 sectors) block size 16384, fragment size 2048 using 4 cylinder groups of 1.25MB, 80 blks, 192 inodes. super-block backups (for fsck -b #) at: 160, 2720, 5280, 7840 &prompt.root; mount /dev/md0a /mnt &prompt.root; df /mnt Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/md0a 4710 4 4330 0% /mnt It takes several commands to create a file- or memory-backed file system using mdconfig. &os; also comes with mdmfs which automatically configures a memory disk, formats it with the UFS file system, and mounts it. For example, after creating newimage with dd, this one command is equivalent to running the bsdlabel, newfs, and mount commands shown above: &prompt.root; mdmfs -F newimage -s 5m md0 /mnt To instead create a new memory-based memory disk with mdmfs, use this one command: &prompt.root; mdmfs -s 5m md1 /mnt If the unit number is not specified, mdmfs will automatically select an unused memory device. For more details about mdmfs, refer to &man.mdmfs.8;. File System Snapshots Tom Rhodes Contributed by file systems snapshots &os; offers a feature in conjunction with Soft Updates: file system snapshots. UFS snapshots allow a user to create images of specified file systems, and treat them as a file. Snapshot files must be created in the file system that the action is performed on, and a user may create no more than 20 snapshots per file system. Active snapshots are recorded in the superblock so they are persistent across unmount and remount operations along with system reboots. When a snapshot is no longer required, it can be removed using &man.rm.1;. While snapshots may be removed in any order, all the used space may not be acquired because another snapshot will possibly claim some of the released blocks. The un-alterable file flag is set by &man.mksnap.ffs.8; after initial creation of a snapshot file. &man.unlink.1; makes an exception for snapshot files since it allows them to be removed. Snapshots are created using &man.mount.8;. 
To place a snapshot of /var in the file /var/snapshot/snap, use the following command: &prompt.root; mount -u -o snapshot /var/snapshot/snap /var Alternatively, use &man.mksnap.ffs.8; to create the snapshot: &prompt.root; mksnap_ffs /var /var/snapshot/snap One can find snapshot files on a file system, such as /var, using &man.find.1;: &prompt.root; find /var -flags snapshot Once a snapshot has been created, it has several uses: Some administrators will use a snapshot file for backup purposes, because the snapshot can be transferred to CDs or tape. The file system integrity checker, &man.fsck.8;, may be run on the snapshot. Assuming that the file system was clean when it was mounted, this should always provide a clean and unchanging result. Running &man.dump.8; on the snapshot will produce a dump file that is consistent with the file system and the timestamp of the snapshot. &man.dump.8; can also take a snapshot, create a dump image, and then remove the snapshot in one command by using . The snapshot can be mounted as a frozen image of the file system. To &man.mount.8; the snapshot /var/snapshot/snap run: &prompt.root; mdconfig -a -t vnode -o readonly -f /var/snapshot/snap -u 4 &prompt.root; mount -r /dev/md4 /mnt The frozen /var is now available through /mnt. Everything will initially be in the same state it was during the snapshot creation time. The only exception is that any earlier snapshots will appear as zero length files. To unmount the snapshot, use: &prompt.root; umount /mnt &prompt.root; mdconfig -d -u 4 For more information about and file system snapshots, including technical papers, visit Marshall Kirk McKusick's website at http://www.mckusick.com/. Disk Quotas accounting disk space disk quotas Disk quotas can be used to limit the amount of disk space or the number of files a user or members of a group may allocate on a per-file system basis. This prevents one user or group of users from consuming all of the available disk space. This section describes how to configure disk quotas for the UFS file system. To configure quotas on the ZFS file system, refer to Enabling Disk Quotas To determine if the &os; kernel provides support for disk quotas: &prompt.user; sysctl kern.features.ufs_quota kern.features.ufs_quota: 1 In this example, the 1 indicates quota support. If the value is instead 0, add the following line to a custom kernel configuration file and rebuild the kernel using the instructions in : options QUOTA Next, enable disk quotas in /etc/rc.conf: quota_enable="YES" disk quotas checking Normally on bootup, the quota integrity of each file system is checked by &man.quotacheck.8;. This program insures that the data in the quota database properly reflects the data on the file system. This is a time consuming process that will significantly affect the time the system takes to boot. To skip this step, add this variable to /etc/rc.conf: check_quotas="NO" Finally, edit /etc/fstab to enable disk quotas on a per-file system basis. To enable per-user quotas on a file system, add to the options field in the /etc/fstab entry for the file system to enable quotas on. For example: /dev/da1s2g /home ufs rw,userquota 1 2 To enable group quotas, use instead. To enable both user and group quotas, separate the options with a comma: /dev/da1s2g /home ufs rw,userquota,groupquota 1 2 By default, quota files are stored in the root directory of the file system as quota.user and quota.group. Refer to &man.fstab.5; for more information. 
Specifying an alternate location for the quota files is not recommended. Once the configuration is complete, reboot the system and /etc/rc will automatically run the appropriate commands to create the initial quota files for all of the quotas enabled in /etc/fstab. In the normal course of operations, there should be no need to manually run &man.quotacheck.8;, &man.quotaon.8;, or &man.quotaoff.8;. However, one should read these manual pages to be familiar with their operation. Setting Quota Limits disk quotas limits To verify that quotas are enabled, run: &prompt.root; quota -v There should be a one-line summary of disk usage and current quota limits for each file system that quotas are enabled on. The system is now ready to be assigned quota limits with edquota. Several options are available to enforce limits on the amount of disk space a user or group may allocate, and how many files they may create. Allocations can be limited based on disk space (block quotas), number of files (inode quotas), or a combination of both. Each limit is further broken down into two categories: hard and soft limits. hard limit A hard limit may not be exceeded. Once a user reaches a hard limit, no further allocations can be made on that file system by that user. For example, if the user has a hard limit of 500 kbytes on a file system and is currently using 490 kbytes, the user can only allocate an additional 10 kbytes. Attempting to allocate an additional 11 kbytes will fail. soft limit Soft limits can be exceeded for a limited amount of time, known as the grace period, which is one week by default. If a user stays over their limit longer than the grace period, the soft limit turns into a hard limit and no further allocations are allowed. When the user drops back below the soft limit, the grace period is reset. In the following example, the quota for the test account is being edited. When edquota is invoked, the editor specified by EDITOR is opened in order to edit the quota limits. The default editor is set to vi. &prompt.root; edquota -u test Quotas for user test: /usr: kbytes in use: 65, limits (soft = 50, hard = 75) inodes in use: 7, limits (soft = 50, hard = 60) /usr/var: kbytes in use: 0, limits (soft = 50, hard = 75) inodes in use: 0, limits (soft = 50, hard = 60) There are normally two lines for each file system that has quotas enabled. One line represents the block limits and the other represents the inode limits. Change the value to modify the quota limit. For example, to raise the block limit on /usr to a soft limit of 500 and a hard limit of 600, change the values in that line as follows: /usr: kbytes in use: 65, limits (soft = 500, hard = 600) The new quota limits take effect upon exiting the editor. Sometimes it is desirable to set quota limits on a range of users. This can be done by first assigning the desired quota limit to a user. Then, use to duplicate that quota to a specified range of user IDs (UIDs). The following command will duplicate those quota limits for UIDs 10,000 through 19,999: &prompt.root; edquota -p test 10000-19999 For more information, refer to &man.edquota.8;. Checking Quota Limits and Disk Usage disk quotas checking To check individual user or group quotas and disk usage, use &man.quota.1;. A user may only examine their own quota and the quota of a group they are a member of. Only the superuser may view all user and group quotas. To get a summary of all quotas and disk usage for file systems with quotas enabled, use &man.repquota.8;.
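For example, to summarize usage and limits for every file system with quotas enabled in /etc/fstab: &prompt.root; repquota -a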
Normally, file systems that the user is not using any disk space on will not show in the output of quota, even if the user has a quota limit assigned for that file system. Use to display those file systems. The following is sample output from quota -v for a user that has quota limits on two file systems. Disk quotas for user test (uid 1002): Filesystem usage quota limit grace files quota limit grace /usr 65* 50 75 5days 7 50 60 /usr/var 0 50 75 0 50 60 grace period In this example, the user is currently 15 kbytes over the soft limit of 50 kbytes on /usr and has 5 days of grace period left. The asterisk * indicates that the user is currently over the quota limit. Quotas over NFS NFS Quotas are enforced by the quota subsystem on the NFS server. The &man.rpc.rquotad.8; daemon makes quota information available to quota on NFS clients, allowing users on those machines to see their quota statistics. On the NFS server, enable rpc.rquotad by removing the # from this line in /etc/inetd.conf: rquotad/1 dgram rpc/udp wait root /usr/libexec/rpc.rquotad rpc.rquotad Then, restart inetd: &prompt.root; service inetd restart Encrypting Disk Partitions Lucky Green Contributed by
shamrock@cypherpunks.to
disks encrypting &os; offers excellent online protections against unauthorized data access. File permissions and Mandatory Access Control (MAC) help prevent unauthorized users from accessing data while the operating system is active and the computer is powered up. However, the permissions enforced by the operating system are irrelevant if an attacker has physical access to a computer and can move the computer's hard drive to another system to copy and analyze the data. Regardless of how an attacker may have come into possession of a hard drive or powered-down computer, the GEOM-based cryptographic subsystems built into &os; are able to protect the data on the computer's file systems against even highly-motivated attackers with significant resources. Unlike encryption methods that encrypt individual files, the built-in gbde and geli utilities can be used to transparently encrypt entire file systems. No cleartext ever touches the hard drive's platter. This chapter demonstrates how to create an encrypted file system on &os;. It first demonstrates the process using gbde and then demonstrates the same example using geli. Disk Encryption with <application>gbde</application> The objective of the &man.gbde.4; facility is to provide a formidable challenge for an attacker to gain access to the contents of a cold storage device. However, if the computer is compromised while up and running and the storage device is actively attached, or the attacker has access to a valid passphrase, it offers no protection to the contents of the storage device. Thus, it is important to provide physical security while the system is running and to protect the passphrase used by the encryption mechanism. This facility provides several barriers to protect the data stored in each disk sector. It encrypts the contents of a disk sector using 128-bit AES in CBC mode. Each sector on the disk is encrypted with a different AES key. For more information on the cryptographic design, including how the sector keys are derived from the user-supplied passphrase, refer to &man.gbde.4;. &os; provides a kernel module for gbde which can be loaded with this command: &prompt.root; kldload geom_bde If using a custom kernel configuration file, ensure it contains this line: options GEOM_BDE The following example demonstrates adding a new hard drive to a system that will hold a single encrypted partition that will be mounted as /private. Encrypting a Partition with <application>gbde</application> Add the New Hard Drive Install the new drive to the system as explained in . For the purposes of this example, a new hard drive partition has been added as /dev/ad4s1c and /dev/ad0s1* represents the existing standard &os; partitions. &prompt.root; ls /dev/ad* /dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1 /dev/ad0s1 /dev/ad0s1c /dev/ad0s1f /dev/ad4s1c /dev/ad0s1a /dev/ad0s1d /dev/ad4 Create a Directory to Hold <command>gbde</command> Lock Files &prompt.root; mkdir /etc/gbde The gbde lock file contains information that gbde requires to access encrypted partitions. Without access to the lock file, gbde will not be able to decrypt the data contained in the encrypted partition without significant manual intervention which is not supported by the software. Each encrypted partition uses a separate lock file. Initialize the <command>gbde</command> Partition A gbde partition must be initialized before it can be used. This initialization needs to be performed only once. 
This command will open the default editor, in order to set various configuration options in a template. For use with the UFS file system, set the sector_size to 2048: &prompt.root; gbde init /dev/ad4s1c -i -L /etc/gbde/ad4s1c.lock# $FreeBSD: src/sbin/gbde/template.txt,v 1.1.36.1 2009/08/03 08:13:06 kensmith Exp $ # # Sector size is the smallest unit of data which can be read or written. # Making it too small decreases performance and decreases available space. # Making it too large may prevent filesystems from working. 512 is the # minimum and always safe. For UFS, use the fragment size # sector_size = 2048 [...] Once the edit is saved, the user will be asked twice to type the passphrase used to secure the data. The passphrase must be the same both times. The ability of gbde to protect data depends entirely on the quality of the passphrase. For tips on how to select a secure passphrase that is easy to remember, see http://world.std.com/~reinhold/diceware.htm. This initialization creates a lock file for the gbde partition. In this example, it is stored as /etc/gbde/ad4s1c.lock. Lock files must end in .lock in order to be correctly detected by the /etc/rc.d/gbde start up script. Lock files must be backed up together with the contents of any encrypted partitions. Without the lock file, the legitimate owner will be unable to access the data on the encrypted partition. Attach the Encrypted Partition to the Kernel &prompt.root; gbde attach /dev/ad4s1c -l /etc/gbde/ad4s1c.lock This command will prompt to input the passphrase that was selected during the initialization of the encrypted partition. The new encrypted device will appear in /dev as /dev/device_name.bde: &prompt.root; ls /dev/ad* /dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1 /dev/ad0s1 /dev/ad0s1c /dev/ad0s1f /dev/ad4s1c /dev/ad0s1a /dev/ad0s1d /dev/ad4 /dev/ad4s1c.bde Create a File System on the Encrypted Device Once the encrypted device has been attached to the kernel, a file system can be created on the device. This example creates a UFS file system with soft updates enabled. Be sure to specify the partition which has a *.bde extension: &prompt.root; newfs -U /dev/ad4s1c.bde Mount the Encrypted Partition Create a mount point and mount the encrypted file system: &prompt.root; mkdir /private &prompt.root; mount /dev/ad4s1c.bde /private Verify That the Encrypted File System is Available The encrypted file system should now be visible and available for use: &prompt.user; df -H Filesystem Size Used Avail Capacity Mounted on /dev/ad0s1a 1037M 72M 883M 8% / /devfs 1.0K 1.0K 0B 100% /dev /dev/ad0s1f 8.1G 55K 7.5G 0% /home /dev/ad0s1e 1037M 1.1M 953M 0% /tmp /dev/ad0s1d 6.1G 1.9G 3.7G 35% /usr /dev/ad4s1c.bde 150G 4.1K 138G 0% /private After each boot, any encrypted file systems must be manually re-attached to the kernel, checked for errors, and mounted, before the file systems can be used. To configure these steps, add the following lines to /etc/rc.conf: gbde_autoattach_all="YES" gbde_devices="ad4s1c" gbde_lockdir="/etc/gbde" This requires that the passphrase be entered at the console at boot time. After typing the correct passphrase, the encrypted partition will be mounted automatically. Additional gbde boot options are available and listed in &man.rc.conf.5;. sysinstall is incompatible with gbde-encrypted devices. All *.bde devices must be detached from the kernel before starting sysinstall or it will crash during its initial probing for devices. 
To detach the encrypted device used in the example, use the following command: &prompt.root; gbde detach /dev/ad4s1c Disk Encryption with <command>geli</command> Daniel Gerzo Contributed by An alternative cryptographic GEOM class is available using geli. This control utility adds some features and uses a different scheme for doing cryptographic work. It provides the following features: Utilizes the &man.crypto.9; framework and automatically uses cryptographic hardware when it is available. Supports multiple cryptographic algorithms such as AES, Blowfish, and 3DES. Allows the root partition to be encrypted. The passphrase used to access the encrypted root partition will be requested during system boot. Allows the use of two independent keys. It is fast as it performs simple sector-to-sector encryption. Allows backup and restore of master keys. If a user destroys their keys, it is still possible to get access to the data by restoring keys from the backup. Allows a disk to attach with a random, one-time key which is useful for swap partitions and temporary file systems. More features and usage examples can be found in &man.geli.8;. The following example describes how to generate a key file which will be used as part of the master key for the encrypted provider mounted under /private. The key file will provide some random data used to encrypt the master key. The master key will also be protected by a passphrase. The provider's sector size will be 4kB. The example describes how to attach to the geli provider, create a file system on it, mount it, work with it, and finally, how to detach it. Encrypting a Partition with <command>geli</command> Load <command>geli</command> Support Support for geli is available as a loadable kernel module. To configure the system to automatically load the module at boot time, add the following line to /boot/loader.conf: geom_eli_load="YES" To load the kernel module now: &prompt.root; kldload geom_eli For a custom kernel, ensure the kernel configuration file contains these lines: options GEOM_ELI device crypto Generate the Master Key The following commands generate a master key (/root/da2.key) that is protected with a passphrase. The data source for the key file is /dev/random and the sector size of the provider (/dev/da2.eli) is 4kB as a bigger sector size provides better performance: &prompt.root; dd if=/dev/random of=/root/da2.key bs=64 count=1 &prompt.root; geli init -s 4096 -K /root/da2.key /dev/da2 Enter new passphrase: Reenter new passphrase: It is not mandatory to use both a passphrase and a key file as either method of securing the master key can be used in isolation. If the key file is given as -, standard input will be used. 
For example, this command generates three key files: &prompt.root; cat keyfile1 keyfile2 keyfile3 | geli init -K - /dev/da2 Attach the Provider with the Generated Key To attach the provider, specify the key file, the name of the disk, and the passphrase: &prompt.root; geli attach -k /root/da2.key /dev/da2 Enter passphrase: This creates a new device with an .eli extension: &prompt.root; ls /dev/da2* /dev/da2 /dev/da2.eli Create the New File System Next, format the device with the UFS file system and mount it on an existing mount point: &prompt.root; dd if=/dev/random of=/dev/da2.eli bs=1m &prompt.root; newfs /dev/da2.eli &prompt.root; mount /dev/da2.eli /private The encrypted file system should now be available for use: &prompt.root; df -H Filesystem Size Used Avail Capacity Mounted on /dev/ad0s1a 248M 89M 139M 38% / /devfs 1.0K 1.0K 0B 100% /dev /dev/ad0s1f 7.7G 2.3G 4.9G 32% /usr /dev/ad0s1d 989M 1.5M 909M 0% /tmp /dev/ad0s1e 3.9G 1.3G 2.3G 35% /var /dev/da2.eli 150G 4.1K 138G 0% /private Once the work on the encrypted partition is done, and the /private partition is no longer needed, it is prudent to put the device into cold storage by unmounting and detaching the geli encrypted partition from the kernel: &prompt.root; umount /private &prompt.root; geli detach da2.eli A rc.d script is provided to simplify the mounting of geli-encrypted devices at boot time. For this example, add these lines to /etc/rc.conf: geli_devices="da2" geli_da2_flags="-p -k /root/da2.key" This configures /dev/da2 as a geli provider with a master key of /root/da2.key. The system will automatically detach the provider from the kernel before the system shuts down. During the startup process, the script will prompt for the passphrase before attaching the provider. Other kernel messages might be shown before and after the password prompt. If the boot process seems to stall, look carefully for the password prompt among the other messages. Once the correct passphrase is entered, the provider is attached. The file system is then mounted, typically by an entry in /etc/fstab. Refer to for instructions on how to configure a file system to mount at boot time.
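As a sketch, an /etc/fstab entry for the provider used in this example might look like the following; the mount point and fsck pass number are illustrative, and the exact options depend on when the provider is attached during boot: /dev/da2.eli /private ufs rw 2 2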
Encrypting Swap Christian Brüffer Written by swap encrypting Like the encryption of disk partitions, encryption of swap space is used to protect sensitive information. Consider an application that deals with passwords. As long as these passwords stay in physical memory, they are not written to disk and will be cleared after a reboot. However, if &os; starts swapping out memory pages to free space, the passwords may be written to the disk unencrypted. Encrypting swap space can be a solution for this scenario. This section demonstrates how to configure an encrypted swap partition using &man.gbde.8; or &man.geli.8; encryption. It assumes a UFS file system where /dev/ad0s1b is the swap partition. Configuring Encrypted Swap Swap partitions are not encrypted by default and should be cleared of any sensitive data before continuing. To overwrite the current swap partition with random garbage, execute the following command: &prompt.root; dd if=/dev/random of=/dev/ad0s1b bs=1m To encrypt the swap partition using &man.gbde.8;, add the .bde suffix to the swap line in /etc/fstab: # Device Mountpoint FStype Options Dump Pass# /dev/ad0s1b.bde none swap sw 0 0 To instead encrypt the swap partition using &man.geli.8;, use the .eli suffix: # Device Mountpoint FStype Options Dump Pass# /dev/ad0s1b.eli none swap sw 0 0 By default, &man.geli.8; uses the AES algorithm with a key length of 128 bit. These defaults can be altered by using geli_swap_flags in /etc/rc.conf. The following flags configure encryption using the Blowfish algorithm with a key length of 128 bits and a sectorsize of 4 kilobytes, and sets detach on last close: geli_swap_flags="-e blowfish -l 128 -s 4096 -d" Refer to the description of onetime in &man.geli.8; for a list of possible options. Encrypted Swap Verification Once the system has rebooted, proper operation of the encrypted swap can be verified using swapinfo. If &man.gbde.8; is being used: &prompt.user; swapinfo Device 1K-blocks Used Avail Capacity /dev/ad0s1b.bde 542720 0 542720 0% If &man.geli.8; is being used: &prompt.user; swapinfo Device 1K-blocks Used Avail Capacity /dev/ad0s1b.eli 542720 0 542720 0% Highly Available Storage (<acronym>HAST</acronym>) Daniel Gerzo Contributed by Freddie Cash With inputs from Pawel Jakub Dawidek Michael W. Lucas Viktor Petersson HAST high availability High availability is one of the main requirements in serious business applications and highly-available storage is a key component in such environments. In &os;, the Highly Available STorage (HAST) framework allows transparent storage of the same data across several physically separated machines connected by a TCP/IP network. HAST can be understood as a network-based RAID1 (mirror), and is similar to the DRBD® storage system used in the GNU/&linux; platform. In combination with other high-availability features of &os; like CARP, HAST makes it possible to build a highly-available storage cluster that is resistant to hardware failures. The following are the main features of HAST: Can be used to mask I/O errors on local hard drives. File system agnostic as it works with any file system supported by &os;. Efficient and quick resynchronization as only the blocks that were modified during the downtime of a node are synchronized. Can be used in an already deployed environment to add additional redundancy. Together with CARP, Heartbeat, or other tools, it can be used to build a robust and durable storage system. 
After reading this section, you will know: What HAST is, how it works, and which features it provides. How to set up and use HAST on &os;. How to integrate CARP and &man.devd.8; to build a robust storage system. Before reading this section, you should: Understand &unix; and &os; basics (). Know how to configure network interfaces and other core &os; subsystems (). Have a good understanding of &os; networking (). The HAST project was sponsored by The &os; Foundation with support from http://www.omc.net/ and http://www.transip.nl/. HAST Operation HAST provides synchronous block-level replication between two physical machines: the primary, also known as the master node, and the secondary, or slave node. These two machines together are referred to as a cluster. Since HAST works in a primary-secondary configuration, it allows only one of the cluster nodes to be active at any given time. The primary node, also called active, is the one which will handle all the I/O requests to HAST-managed devices. The secondary node is automatically synchronized from the primary node. The physical components of the HAST system are the local disk on primary node, and the disk on the remote, secondary node. HAST operates synchronously on a block level, making it transparent to file systems and applications. HAST provides regular GEOM providers in /dev/hast/ for use by other tools or applications. There is no difference between using HAST-provided devices and raw disks or partitions. Each write, delete, or flush operation is sent to both the local disk and to the remote disk over TCP/IP. Each read operation is served from the local disk, unless the local disk is not up-to-date or an I/O error occurs. In such cases, the read operation is sent to the secondary node. HAST tries to provide fast failure recovery. For this reason, it is important to reduce synchronization time after a node's outage. To provide fast synchronization, HAST manages an on-disk bitmap of dirty extents and only synchronizes those during a regular synchronization, with an exception of the initial sync. There are many ways to handle synchronization. HAST implements several replication modes to handle different synchronization methods: memsync: This mode reports a write operation as completed when the local write operation is finished and when the remote node acknowledges data arrival, but before actually storing the data. The data on the remote node will be stored directly after sending the acknowledgement. This mode is intended to reduce latency, but still provides good reliability. fullsync: This mode reports a write operation as completed when both the local write and the remote write complete. This is the safest and the slowest replication mode. This mode is the default. async: This mode reports a write operation as completed when the local write completes. This is the fastest and the most dangerous replication mode. It should only be used when replicating to a distant node where latency is too high for other modes. HAST Configuration The HAST framework consists of several components: The &man.hastd.8; daemon which provides data synchronization. When this daemon is started, it will automatically load geom_gate.ko. The userland management utility, &man.hastctl.8;. The &man.hast.conf.5; configuration file. This file must exist before starting hastd. 
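Once hastd is running, one informal way to confirm that geom_gate.ko has been loaded is: &prompt.root; kldstat | grep geom_gate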
Users who prefer to statically build GEOM_GATE support into the kernel should add this line to the custom kernel configuration file, then rebuild the kernel using the instructions in : options GEOM_GATE The following example describes how to configure two nodes in master-slave/primary-secondary operation using HAST to replicate the data between the two. The nodes will be called hasta, with an IP address of 172.16.0.1, and hastb, with an IP address of 172.16.0.2. Both nodes will have a dedicated hard drive /dev/ad6 of the same size for HAST operation. The HAST pool, sometimes referred to as a resource or the GEOM provider in /dev/hast/, will be called test. Configuration of HAST is done using /etc/hast.conf. This file should be identical on both nodes. The simplest configuration is: resource test { on hasta { local /dev/ad6 remote 172.16.0.2 } on hastb { local /dev/ad6 remote 172.16.0.1 } } For more advanced configuration, refer to &man.hast.conf.5;. It is also possible to use host names in the remote statements if the hosts are resolvable and defined either in /etc/hosts or in the local DNS. Once the configuration exists on both nodes, the HAST pool can be created. Run these commands on both nodes to place the initial metadata onto the local disk and to start &man.hastd.8;: &prompt.root; hastctl create test &prompt.root; service hastd onestart It is not possible to use GEOM providers with an existing file system or to convert an existing storage to a HAST-managed pool. This procedure needs to store some metadata on the provider and there will not be enough required space available on an existing provider. A HAST node's primary or secondary role is selected by an administrator, or software like Heartbeat, using &man.hastctl.8;. On the primary node, hasta, issue this command: &prompt.root; hastctl role primary test Run this command on the secondary node, hastb: &prompt.root; hastctl role secondary test Verify the result by running hastctl on each node: &prompt.root; hastctl status test Check the status line in the output. If it says degraded, something is wrong with the configuration file. It should say complete on each node, meaning that the synchronization between the nodes has started. The synchronization completes when hastctl status reports 0 bytes of dirty extents. The next step is to create a file system on the GEOM provider and mount it. This must be done on the primary node. Creating the file system can take a few minutes, depending on the size of the hard drive. This example creates a UFS file system on /dev/hast/test: &prompt.root; newfs -U /dev/hast/test &prompt.root; mkdir /hast/test &prompt.root; mount /dev/hast/test /hast/test Once the HAST framework is configured properly, the final step is to make sure that HAST is started automatically during system boot. Add this line to /etc/rc.conf: hastd_enable="YES" Failover Configuration The goal of this example is to build a robust storage system which is resistant to the failure of any given node. If the primary node fails, the secondary node is there to take over seamlessly, check and mount the file system, and continue to work without missing a single bit of data. To accomplish this task, the Common Address Redundancy Protocol (CARP) is used to provide for automatic failover at the IP layer. CARP allows multiple hosts on the same network segment to share an IP address. Set up CARP on both nodes of the cluster according to the documentation available in .
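On &os; 10 and later, a minimal sketch of such a CARP configuration in /etc/rc.conf might look like the following, where the interface name, vhid, and password are placeholders: ifconfig_em0_alias0="inet vhid 1 pass hastpass alias 172.16.0.254/32"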
In this example, each node will have its own management IP address and a shared IP address of 172.16.0.254. The primary HAST node of the cluster must be the master CARP node. The HAST pool created in the previous section is now ready to be exported to the other hosts on the network. This can be accomplished by exporting it through NFS or Samba, using the shared IP address 172.16.0.254. The only problem which remains unresolved is an automatic failover should the primary node fail. In the event of CARP interfaces going up or down, the &os; operating system generates a &man.devd.8; event, making it possible to watch for state changes on the CARP interfaces. A state change on the CARP interface is an indication that one of the nodes failed or came back online. These state change events make it possible to run a script which will automatically handle the HAST failover. To catch state changes on the CARP interfaces, add this configuration to /etc/devd.conf on each node: notify 30 { match "system" "IFNET"; match "subsystem" "carp0"; match "type" "LINK_UP"; action "/usr/local/sbin/carp-hast-switch master"; }; notify 30 { match "system" "IFNET"; match "subsystem" "carp0"; match "type" "LINK_DOWN"; action "/usr/local/sbin/carp-hast-switch slave"; }; If the systems are running &os; 10 or higher, replace carp0 with the name of the CARP-configured interface. Restart &man.devd.8; on both nodes to put the new configuration into effect: &prompt.root; service devd restart When the specified interface state changes by going up or down , the system generates a notification, allowing the &man.devd.8; subsystem to run the specified automatic failover script, /usr/local/sbin/carp-hast-switch. For further clarification about this configuration, refer to &man.devd.conf.5;. Here is an example of an automated failover script: #!/bin/sh # Original script by Freddie Cash <fjwcash@gmail.com> # Modified by Michael W. Lucas <mwlucas@BlackHelicopters.org> # and Viktor Petersson <vpetersson@wireload.net> # The names of the HAST resources, as listed in /etc/hast.conf resources="test" # delay in mounting HAST resource after becoming master # make your best guess delay=3 # logging log="local0.debug" name="carp-hast" # end of user configurable stuff case "$1" in master) logger -p $log -t $name "Switching to primary provider for ${resources}." sleep ${delay} # Wait for any "hastd secondary" processes to stop for disk in ${resources}; do while $( pgrep -lf "hastd: ${disk} \(secondary\)" > /dev/null 2>&1 ); do sleep 1 done # Switch role for each disk hastctl role primary ${disk} if [ $? -ne 0 ]; then logger -p $log -t $name "Unable to change role to primary for resource ${disk}." exit 1 fi done # Wait for the /dev/hast/* devices to appear for disk in ${resources}; do for I in $( jot 60 ); do [ -c "/dev/hast/${disk}" ] && break sleep 0.5 done if [ ! -c "/dev/hast/${disk}" ]; then logger -p $log -t $name "GEOM provider /dev/hast/${disk} did not appear." exit 1 fi done logger -p $log -t $name "Role for HAST resources ${resources} switched to primary." logger -p $log -t $name "Mounting disks." for disk in ${resources}; do mkdir -p /hast/${disk} fsck -p -y -t ufs /dev/hast/${disk} mount /dev/hast/${disk} /hast/${disk} done ;; slave) logger -p $log -t $name "Switching to secondary provider for ${resources}." # Switch roles for the HAST resources for disk in ${resources}; do if ! mount | grep -q "^/dev/hast/${disk} on " then else umount -f /hast/${disk} fi sleep $delay hastctl role secondary ${disk} 2>&1 if [ $? 
-ne 0 ]; then logger -p $log -t $name "Unable to switch role to secondary for resource ${disk}." exit 1 fi logger -p $log -t $name "Role switched to secondary for resource ${disk}." done ;; esac In a nutshell, the script takes these actions when a node becomes master: Promotes the HAST pool to primary on the other node. Checks the file system under the HAST pool. Mounts the pool. When a node becomes secondary: Unmounts the HAST pool. Degrades the HAST pool to secondary. This is just an example script which serves as a proof of concept. It does not handle all the possible scenarios and can be extended or altered in any way, for example, to start or stop required services. For this example, a standard UFS file system was used. To reduce the time needed for recovery, a journal-enabled UFS or ZFS file system can be used instead. More detailed information with additional examples can be found at http://wiki.FreeBSD.org/HAST. Troubleshooting HAST should generally work without issues. However, as with any other software product, there may be times when it does not work as expected. The sources of the problems may be different, but the rule of thumb is to ensure that the time is synchronized between the nodes of the cluster. When troubleshooting HAST, the debugging level of &man.hastd.8; should be increased by starting hastd with -d. This argument may be specified multiple times to further increase the debugging level. Consider also using -F, which starts hastd in the foreground. Recovering from the Split-brain Condition Split-brain occurs when the nodes of the cluster are unable to communicate with each other, and both are configured as primary. This is a dangerous condition because it allows both nodes to make incompatible changes to the data. This problem must be corrected manually by the system administrator. The administrator must decide which node has more important changes or merge them manually. Then, let HAST perform full synchronization of the node which has the broken data. To do this, issue these commands on the node which needs to be resynchronized: &prompt.root; hastctl role init test &prompt.root; hastctl create test &prompt.root; hastctl role secondary test
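After these commands complete, the node performs a full resynchronization from its peer. Progress can be followed with the same status command shown earlier: &prompt.root; hastctl status test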
Index: head/en_US.ISO8859-1/books/handbook/firewalls/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/firewalls/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/firewalls/chapter.xml (revision 46272) @@ -1,3758 +1,3758 @@ Firewalls Joseph J. Barbish Contributed by Brad Davis Converted to SGML and updated by firewall security firewalls Synopsis Firewalls make it possible to filter the incoming and outgoing traffic that flows through a system. A firewall can use one or more sets of rules to inspect network packets as they come in or go out of network connections and either allows the traffic through or blocks it. The rules of a firewall can inspect one or more characteristics of the packets such as the protocol type, source or destination host address, and source or destination port. Firewalls can enhance the security of a host or a network. They can be used to do one or more of the following: Protect and insulate the applications, services, and machines of an internal network from unwanted traffic from the public Internet. Limit or disable access from hosts of the internal network to services of the public Internet. Support network address translation (NAT), which allows an internal network to use private IP addresses and share a single connection to the public Internet using either a single IP address or a shared pool of automatically assigned public addresses. &os; has three firewalls built into the base system: PF, IPFW, and IPFILTER, also known as IPF. &os; also provides two traffic shapers for controlling bandwidth usage: &man.altq.4; and &man.dummynet.4;. ALTQ has traditionally been closely tied with PF and dummynet with IPFW. Each firewall uses rules to control the access of packets to and from a &os; system, although they go about it in different ways and each has a different rule syntax. &os; provides multiple firewalls in order to meet the different requirements and preferences for a wide variety of users. Each user should evaluate which firewall best meets their needs. After reading this chapter, you will know: How to define packet filtering rules. The differences between the firewalls built into &os;. How to use and configure the PF firewall. How to use and configure the IPFW firewall. How to use and configure the IPFILTER firewall. Before reading this chapter, you should: Understand basic &os; and Internet concepts. Since all firewalls are based on inspecting the values of selected packet control fields, the creator of the firewall ruleset must have an understanding of how TCP/IP works, what the different values in the packet control fields are, and how these values are used in a normal session conversation. For a good introduction, refer to Daryl's TCP/IP Primer. Firewall Concepts firewall rulesets A ruleset contains a group of rules which pass or block packets based on the values contained in the packet. The bi-directional exchange of packets between hosts comprises a session conversation. The firewall ruleset processes both the packets arriving from the public Internet, as well as the packets produced by the system as a response to them. Each TCP/IP service is predefined by its protocol and listening port. Packets destined for a specific service originate from the source address using an unprivileged port and target the specific service port on the destination address. All the above parameters can be used as selection criteria to create rules which will pass or block services. 
To lookup unknown port numbers, refer to /etc/services. Alternatively, visit http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers and do a port number lookup to find the purpose of a particular port number. Check out this link for port numbers used by Trojans http://www.sans.org/security-resources/idfaq/oddports.php. FTP has two modes: active mode and passive mode. The difference is in how the data channel is acquired. Passive mode is more secure as the data channel is acquired by the ordinal ftp session requester. For a good explanation of FTP and the different modes, see http://www.slacksite.com/other/ftp.html. A firewall ruleset can be either exclusive or inclusive. An exclusive firewall allows all traffic through except for the traffic matching the ruleset. An inclusive firewall does the reverse as it only allows traffic matching the rules through and blocks everything else. An inclusive firewall offers better control of the outgoing traffic, making it a better choice for systems that offer services to the public Internet. It also controls the type of traffic originating from the public Internet that can gain access to a private network. All traffic that does not match the rules is blocked and logged. Inclusive firewalls are generally safer than exclusive firewalls because they significantly reduce the risk of allowing unwanted traffic. Unless noted otherwise, all configuration and example rulesets in this chapter create inclusive firewall rulesets. Security can be tightened further using a stateful firewall. This type of firewall keeps track of open connections and only allows traffic which either matches an existing connection or opens a new, allowed connection. Stateful filtering treats traffic as a bi-directional exchange of packets comprising a session. When state is specified on a matching rule the firewall dynamically generates internal rules for each anticipated packet being exchanged during the session. It has sufficient matching capabilities to determine if a packet is valid for a session. Any packets that do not properly fit the session template are automatically rejected. When the session completes, it is removed from the dynamic state table. Stateful filtering allows one to focus on blocking/passing new sessions. If the new session is passed, all its subsequent packets are allowed automatically and any impostor packets are automatically rejected. If a new session is blocked, none of its subsequent packets are allowed. Stateful filtering provides advanced matching abilities capable of defending against the flood of different attack methods employed by attackers. NAT stands for Network Address Translation. NAT function enables the private LAN behind the firewall to share a single ISP-assigned IP address, even if that address is dynamically assigned. NAT allows each computer in the LAN to have Internet access, without having to pay the ISP for multiple Internet accounts or IP addresses. NAT will automatically translate the private LAN IP address for each system on the LAN to the single public IP address as packets exit the firewall bound for the public Internet. It also performs the reverse translation for returning packets. According to RFC 1918, the following IP address ranges are reserved for private networks which will never be routed directly to the public Internet, and therefore are available for use with NAT: 10.0.0.0/8. 172.16.0.0/12. 192.168.0.0/16. When working with the firewall rules, be very careful. 
Some configurations can lock the administrator out of the server. To be on the safe side, consider performing the initial firewall configuration from the local console rather than doing it remotely over ssh. PF John Ferrell Revised and updated by firewall PF Since &os; 5.3, a ported version of OpenBSD's PF firewall has been included as an integrated part of the base system. PF is a complete, full-featured firewall that has optional support for ALTQ (Alternate Queuing), which provides Quality of Service (QoS). The OpenBSD Project maintains the definitive reference for PF in the PF FAQ. Peter Hansteen maintains a thorough PF tutorial at http://home.nuug.no/~peter/pf/. When reading the PF FAQ, keep in mind that different versions of &os; contain different versions of PF. &os; 8.X uses the same version of PF as OpenBSD 4.1 and &os; 9.X and later uses the same version of PF as OpenBSD 4.5. The &a.pf; is a good place to ask questions about configuring and running the PF firewall. Check the mailing list archives before asking a question as it may have already been answered. More information about porting PF to &os; can be found at http://pf4freebsd.love2party.net/. This section of the Handbook focuses on PF as it pertains to &os;. It demonstrates how to enable PF and ALTQ. It then provides several examples for creating rulesets on a &os; system. Enabling <application>PF</application> In order to use PF, its kernel module must be first loaded. This section describes the entries that can be added to /etc/rc.conf in order to enable PF. Start by adding the following line to /etc/rc.conf: pf_enable="YES" Additional options, described in &man.pfctl.8;, can be passed to PF when it is started. Add this entry to /etc/rc.conf and specify any required flags between the two quotes (""): pf_flags="" # additional flags for pfctl startup PF will not start if it cannot find its ruleset configuration file. The default ruleset is already created and is named /etc/pf.conf. If a custom ruleset has been saved somewhere else, add a line to /etc/rc.conf which specifies the full path to the file: pf_rules="/path/to/pf.conf" Logging support for PF is provided by &man.pflog.4;. To enable logging support, add this line to /etc/rc.conf: pflog_enable="YES" The following lines can also be added in order to change the default location of the log file or to specify any additional flags to pass to &man.pflog.4; when it is started: pflog_logfile="/var/log/pflog" # where pflogd should store the logfile pflog_flags="" # additional flags for pflogd startup Finally, if there is a LAN behind the firewall and packets need to be forwarded for the computers on the LAN, or NAT is required, add the following option: gateway_enable="YES" # Enable as LAN gateway After saving the needed edits, PF can be started with logging support by typing: &prompt.root; service pf start &prompt.root; service pflog start By default, PF reads its configuration rules from /etc/pf.conf and modifies, drops, or passes packets according to the rules or definitions specified in this file. The &os; installation includes several sample files located in /usr/share/examples/pf/. Refer to the PF FAQ for complete coverage of PF rulesets. To control PF, use pfctl. summarizes some useful options to this command. Refer to &man.pfctl.8; for a description of all available options: Useful <command>pfctl</command> Options Command Purpose pfctl -e Enable PF. pfctl -d Disable PF. pfctl -F all -f /etc/pf.conf Flush all NAT, filter, state, and table rules and reload /etc/pf.conf. 
pfctl -s [ rules | nat | states ] Report on the filter rules, NAT rules, or state table. pfctl -vnf /etc/pf.conf Check /etc/pf.conf for errors, but do not load the ruleset.
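As a convenience, the /etc/rc.conf entries described in this section might be combined into a single stanza like the following sketch; the values shown are the defaults discussed above: pf_enable="YES" # Enable PF and load the kernel module if needed pf_rules="/etc/pf.conf" # Path to the ruleset file pf_flags="" # Additional flags passed to pfctl at startup pflog_enable="YES" # Start pflogd for logging support pflog_logfile="/var/log/pflog" # Where pflogd stores the log file pflog_flags="" # Additional flags passed to pflogd at startup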
security/sudo is useful for running commands like pfctl that require elevated privileges. It can be installed from the Ports Collection. To keep an eye on the traffic that passes through the PF firewall, consider installing the sysutils/pftop package or port. Once installed, pftop can be run to view a running snapshot of traffic in a format which is similar to &man.top.1;.
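For example, assuming the package is used, pftop can be installed and started with two commands: &prompt.root; pkg install pftop &prompt.root; pftop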
Enabling <application>ALTQ</application> On &os;, ALTQ can be used with PF to provide Quality of Service (QOS). Once ALTQ is enabled, queues can be defined in the ruleset which determine the processing priority of outbound packets. Before enabling ALTQ, refer to &man.altq.4; to determine if the drivers for the network cards installed on the system support it. ALTQ is not available as a loadable kernel module. If the system's interfaces support ALTQ, create a custom kernel using the instructions in . The following kernel options are available. The first is needed to enable ALTQ. At least one of the other options is necessary to specify the queueing scheduler algorithm: options ALTQ options ALTQ_CBQ # Class Based Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) The following scheduler algorithms are available: CBQ Class Based Queuing (CBQ) is used to divide a connection's bandwidth into different classes or queues to prioritize traffic based on filter rules. RED Random Early Detection (RED) is used to avoid network congestion by measuring the length of the queue and comparing it to the minimum and maximum thresholds for the queue. When the queue is over the maximum, all new packets are randomly dropped. RIO In Random Early Detection In and Out (RIO) mode, RED maintains multiple average queue lengths and multiple threshold values, one for each QOS level. HFSC Hierarchical Fair Service Curve Packet Scheduler (HFSC) is described in http://www-2.cs.cmu.edu/~hzhang/HFSC/main.html. PRIQ Priority Queuing (PRIQ) always passes traffic that is in a higher queue first. More information about the scheduling algorithms and example rulesets are available at http://www.openbsd.org/faq/pf/queueing.html. <application>PF</application> Rulesets Peter Hansteen N. M. Contributed by This section demonstrates how to create a customized ruleset. It starts with the simplest of rulesets and builds upon its concepts using several examples to demonstrate real-world usage of PF's many features. The simplest possible ruleset is for a single machine that does not run any services and which needs access to one network, which may be the Internet. To create this minimal ruleset, edit /etc/pf.conf so it looks like this: block in all pass out all keep state The first rule denies all incoming traffic by default. The second rule allows connections created by this system to pass out, while retaining state information on those connections. This state information allows return traffic for those connections to pass back and should only be used on machines that can be trusted. The ruleset can be loaded with: &prompt.root; pfctl -e ; pfctl -f /etc/pf.conf In addition to keeping state, PF provides lists and macros which can be defined for use when creating rules. Macros can include lists and need to be defined before use. As an example, insert these lines at the very top of the ruleset: tcp_services = "{ ssh, smtp, domain, www, pop3, auth, pop3s }" udp_services = "{ domain }" PF understands port names as well as port numbers, as long as the names are listed in /etc/services. This example creates two macros. The first is a list of seven TCP port names and the second is one UDP port name. Once defined, macros can be used in rules. 
In this example, all traffic is blocked except for the connections initiated by this system for the seven specified TCP services and the one specified UDP service: tcp_services = "{ ssh, smtp, domain, www, pop3, auth, pop3s }" udp_services = "{ domain }" block all pass out proto tcp to any port $tcp_services keep state pass proto udp to any port $udp_services keep state Even though UDP is considered to be a stateless protocol, PF is able to track some state information. For example, when a UDP request is passed which asks a name server about a domain name, PF will watch for the response in order to pass it back. Whenever an edit is made to a ruleset, the new rules must be loaded so they can be used: &prompt.root; pfctl -f /etc/pf.conf If there are no syntax errors, pfctl will not output any messages during the rule load. Rules can also be tested before attempting to load them: &prompt.root; pfctl -nf /etc/pf.conf Including -n causes the rules to be interpreted only, but not loaded. This provides an opportunity to correct any errors. At all times, the last valid ruleset loaded will be enforced until either PF is disabled or a new ruleset is loaded. Adding -v to a pfctl ruleset verify or load will display the fully parsed rules exactly the way they will be loaded. This is extremely useful when debugging rules. A Simple Gateway with NAT This section demonstrates how to configure a &os; system running PF to act as a gateway for at least one other machine. The gateway needs at least two network interfaces, each connected to a separate network. In this example, xl0 is connected to the Internet and xl1 is connected to the internal network. First, enable the gateway in order to let the machine forward the network traffic it receives on one interface to another interface. This sysctl setting will forward IPv4 packets: &prompt.root; sysctl net.inet.ip.forwarding=1 To forward IPv6 traffic, use: &prompt.root; sysctl net.inet6.ip6.forwarding=1 To enable these settings at system boot, add the following to /etc/rc.conf: gateway_enable="YES" #for ipv4 ipv6_gateway_enable="YES" #for ipv6 Verify with ifconfig that both of the interfaces are up and running. Next, create the PF rules to allow the gateway to pass traffic. While the following rule allows stateful traffic from hosts on the internal network to pass in toward the Internet, the to keyword does not guarantee passage all the way from source to destination: pass in on xl1 from xl1:network to xl0:network port $ports keep state That rule only lets the traffic pass in to the gateway on the internal interface. To let the packets go further, a matching rule is needed: pass out on xl0 from xl1:network to xl0:network port $ports keep state While these two rules will work, rules this specific are rarely needed. For a busy network admin, a readable ruleset is a safer ruleset. The remainder of this section demonstrates how to keep the rules as simple as possible for readability. For example, those two rules could be replaced with one rule: pass from xl1:network to any port $ports keep state The interface:network notation can be replaced with a macro to make the ruleset even more readable. For example, a $localnet macro could be defined as the network directly attached to the internal interface ($xl1:network). Alternatively, the definition of $localnet could be changed to an IP address/netmask notation to denote a network, such as 192.168.100.1/24 for a subnet of private addresses. If required, $localnet could even be defined as a list of networks.
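As an illustration, a hypothetical site with more than one internal subnet might define the macro as a list; the addresses here are placeholders only: localnet = "{ 192.168.100.0/24, 192.168.200.0/24 }"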
Whatever the specific needs, a sensible $localnet definition could be used in a typical pass rule as follows: pass from $localnet to any port $ports keep state The following sample ruleset allows all traffic initiated by machines on the internal network. It first defines two macros to represent the external and internal 3COM interfaces of the gateway. For dialup users, the external interface will use tun0. For an ADSL connection, specifically those using PPP over Ethernet (PPPoE), the correct external interface is tun0, not the physical Ethernet interface. ext_if = "xl0" # macro for external interface - use tun0 for PPPoE int_if = "xl1" # macro for internal interface localnet = $int_if:network # ext_if IP address could be dynamic, hence ($ext_if) nat on $ext_if from $localnet to any -> ($ext_if) block all pass from { lo0, $localnet } to any keep state This ruleset introduces the nat rule which is used to handle the network address translation from the non-routable addresses inside the internal network to the IP address assigned to the external interface. The parentheses surrounding the last part of the nat rule ($ext_if) is included when the IP address of the external interface is dynamically assigned. It ensures that network traffic runs without serious interruptions even if the external IP address changes. Note that this ruleset probably allows more traffic to pass out of the network than is needed. One reasonable setup could create this macro: client_out = "{ ftp-data, ftp, ssh, domain, pop3, auth, nntp, http, \ https, cvspserver, 2628, 5999, 8000, 8080 }" to use in the main pass rule: pass inet proto tcp from $localnet to any port $client_out \ flags S/SA keep state A few other pass rules may be needed. This one enables SSH on the external interface:: pass in inet proto tcp to $ext_if port ssh This macro definition and rule allows DNS and NTP for internal clients: udp_services = "{ domain, ntp }" pass quick inet proto { tcp, udp } to any port $udp_services keep state Note the quick keyword in this rule. Since the ruleset consists of several rules, it is important to understand the relationships between the rules in a ruleset. Rules are evaluated from top to bottom, in the sequence they are written. For each packet or connection evaluated by PF, the last matching rule in the ruleset is the one which is applied. However, when a packet matches a rule which contains the quick keyword, the rule processing stops and the packet is treated according to that rule. This is very useful when an exception to the general rules is needed. Creating an <acronym>FTP</acronym> Proxy Configuring working FTP rules can be problematic due to the nature of the FTP protocol. FTP pre-dates firewalls by several decades and is insecure in its design. The most common points against using FTP include: Passwords are transferred in the clear. The protocol demands the use of at least two TCP connections (control and data) on separate ports. When a session is established, data is communicated using randomly selected ports. All of these points present security challenges, even before considering any potential security weaknesses in client or server software. More secure alternatives for file transfer exist, such as &man.sftp.1; or &man.scp.1;, which both feature authentication and data transfer over encrypted connections.. For those situations when FTP is required, PF provides redirection of FTP traffic to a small proxy program called &man.ftp-proxy.8;, which is included in the base system of &os;. 
The role of the proxy is to dynamically insert and delete rules in the ruleset, using a set of anchors, in order to correctly handle FTP traffic. To enable the FTP proxy, add this line to /etc/rc.conf: ftpproxy_enable="YES" Then start the proxy by running service ftp-proxy start. For a basic configuration, three elements need to be added to /etc/pf.conf. First, the anchors which the proxy will use to insert the rules it generates for the FTP sessions: nat-anchor "ftp-proxy/*" rdr-anchor "ftp-proxy/*" Second, a pass rule is needed to allow FTP traffic in to the proxy. Third, redirection and NAT rules need to be defined before the filtering rules. Insert this rdr rule immediately after the nat rule: rdr pass on $int_if proto tcp from any to any port ftp -> 127.0.0.1 port 8021 Finally, allow the redirected traffic to pass: pass out proto tcp from $proxy to any port ftp where $proxy expands to the address the proxy daemon is bound to. Save /etc/pf.conf, load the new rules, and verify from a client that FTP connections are working: &prompt.root; pfctl -f /etc/pf.conf This example covers a basic setup where the clients in the local network need to contact FTP servers elsewhere. This basic configuration should work well with most combinations of FTP clients and servers. As shown in &man.ftp-proxy.8;, the proxy's behavior can be changed in various ways by adding options to the ftpproxy_flags= line. Some clients or servers may have specific quirks that must be compensated for in the configuration, or there may be a need to integrate the proxy in specific ways such as assigning FTP traffic to a specific queue. For ways to run an FTP server protected by PF and &man.ftp-proxy.8;, configure a separate ftp-proxy in reverse mode, using -R, on a separate port with its own redirecting pass rule. Managing <acronym>ICMP</acronym> Many of the tools used for debugging or troubleshooting a TCP/IP network rely on the Internet Control Message Protocol (ICMP), which was designed specifically with debugging in mind. The ICMP protocol sends and receives control messages between hosts and gateways, mainly to provide feedback to a sender about any unusual or difficult conditions en route to the target host. Routers use ICMP to negotiate packet sizes and other transmission parameters in a process often referred to as path MTU discovery. From a firewall perspective, some ICMP control messages are vulnerable to known attack vectors. Also, letting all diagnostic traffic pass unconditionally makes debugging easier, but it also makes it easier for others to extract information about the network. For these reasons, the following rule may not be optimal: pass inet proto icmp from any to any One solution is to let all ICMP traffic from the local network through while stopping all probes from outside the network: pass inet proto icmp from $localnet to any keep state pass inet proto icmp from any to $ext_if keep state Additional options are available which demonstrate some of PF's flexibility. For example, rather than allowing all ICMP messages, one can specify the messages used by &man.ping.8; and &man.traceroute.8;. Start by defining a macro for that type of message: icmp_types = "echoreq" and a rule which uses the macro: pass inet proto icmp all icmp-type $icmp_types keep state If other types of ICMP packets are needed, expand icmp_types to a list of those packet types. Type more /usr/src/contrib/pf/pfctl/pfctl_parser.c to see the list of ICMP message types supported by PF.
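For example, a hypothetical macro which also accepts destination unreachable and time exceeded messages could be written as a list and used with the same pass rule; the chosen types are only an illustration: icmp_types = "{ echoreq, unreach, timex }" pass inet proto icmp all icmp-type $icmp_types keep state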
Refer to http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml for an explanation of each message type. Since Unix traceroute uses UDP by default, another rule is needed to allow Unix traceroute: # allow out the default range for traceroute(8): pass out on $ext_if inet proto udp from any to any port 33433 >< 33626 keep state Since TRACERT.EXE on Microsoft Windows systems uses ICMP echo request messages, only the first rule is needed to allow network traces from those systems. Unix traceroute can be instructed to use other protocols as well, and will use ICMP echo request messages if -I is used. Check the &man.traceroute.8; man page for details. Path <acronym>MTU</acronym> Discovery Internet protocols are designed to be device independent, and one consequence of device independence is that the optimal packet size for a given connection cannot always be predicted reliably. The main constraint on packet size is the Maximum Transmission Unit (MTU) which sets the upper limit on the packet size for an interface. Type ifconfig to view the MTUs for a system's network interfaces. TCP/IP uses a process known as path MTU discovery to determine the right packet size for a connection. This process sends packets of varying sizes with the Do not fragment flag set, expecting an ICMP return packet of type 3, code 4 when the upper limit has been reached. Type 3 means destination unreachable, and code 4 is short for fragmentation needed, but the do-not-fragment flag is set. To allow path MTU discovery in order to support connections between hosts with different MTUs, add the destination unreachable type to the icmp_types macro: icmp_types = "{ echoreq, unreach }" Since the pass rule already uses that macro, it does not need to be modified in order to support the new ICMP type: pass inet proto icmp all icmp-type $icmp_types keep state PF allows filtering on all variations of ICMP types and codes. The list of possible types and codes is documented in &man.icmp.4; and &man.icmp6.4;. Using Tables Some types of data are relevant to filtering and redirection at a given time, but their definition is too long to be included in the ruleset file. PF supports the use of tables, which are defined lists that can be manipulated without needing to reload the entire ruleset, and which can provide fast lookups. Table names are always enclosed within < >, like this: table <clients> { 192.168.2.0/24, !192.168.2.5 } In this example, the 192.168.2.0/24 network is part of the table, except for the address 192.168.2.5, which is excluded using the ! operator. It is also possible to load tables from files where each item is on a separate line, as seen in this example /etc/clients: 192.168.2.0/24 !192.168.2.5 To refer to the file, define the table like this: table <clients> persist file "/etc/clients" Once the table is defined, it can be referenced by a rule: pass inet proto tcp from <clients> to any port $client_out flags S/SA keep state A table's contents can be manipulated live, using pfctl. This example adds another network to the table: &prompt.root; pfctl -t clients -T add 192.168.1.0/16 Note that any changes made this way will take effect immediately, making them ideal for testing, but will not survive a power failure or reboot. To make the changes permanent, modify the definition of the table in the ruleset or edit the file that the table refers to.
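Other -T commands follow the same pattern. For example, the entry added above can be removed again, or the current contents of the table listed: &prompt.root; pfctl -t clients -T delete 192.168.1.0/16 &prompt.root; pfctl -t clients -T show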
One can maintain the on-disk copy of the table using a &man.cron.8; job which dumps the table's contents to disk at regular intervals, using a command such as pfctl -t clients -T show >/etc/clients. Alternatively, /etc/clients can be updated with the in-memory table contents: &prompt.root; pfctl -t clients -T replace -f /etc/clients Using Overload Tables to Protect <acronym>SSH</acronym> Those who run SSH on an external interface have probably seen something like this in the authentication logs: Sep 26 03:12:34 skapet sshd[25771]: Failed password for root from 200.72.41.31 port 40992 ssh2 Sep 26 03:12:34 skapet sshd[5279]: Failed password for root from 200.72.41.31 port 40992 ssh2 Sep 26 03:12:35 skapet sshd[5279]: Received disconnect from 200.72.41.31: 11: Bye Bye Sep 26 03:12:44 skapet sshd[29635]: Invalid user admin from 200.72.41.31 Sep 26 03:12:44 skapet sshd[24703]: input_userauth_request: invalid user admin Sep 26 03:12:44 skapet sshd[24703]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2 This is indicative of a brute force attack where somebody or some program is trying to discover the user name and password which will let them into the system. If external SSH access is needed for legitimate users, changing the default port used by SSH can offer some protection. However, PF provides a more elegant solution. Pass rules can contain limits on what connecting hosts can do and violators can be banished to a table of addresses which are denied some or all access. It is even possible to drop all existing connections from machines which overreach the limits. To configure this, create this table in the tables section of the ruleset: table <bruteforce> persist Then, somewhere early in the ruleset, add rules to block brute access while allowing legitimate access: block quick from <bruteforce> pass inet proto tcp from any to $localnet port $tcp_services \ flags S/SA keep state \ (max-src-conn 100, max-src-conn-rate 15/5, \ overload <bruteforce> flush global) The part in parentheses defines the limits and the numbers should be changed to meet local requirements. It can be read as follows: max-src-conn is the number of simultaneous connections allowed from one host. max-src-conn-rate is the rate of new connections allowed from any single host (15) per number of seconds (5). overload <bruteforce> means that any host which exceeds these limits gets its address added to the bruteforce table. The ruleset blocks all traffic from addresses in the bruteforce table. Finally, flush global says that when a host reaches the limit, that all (global) of that host's connections will be terminated (flush). These rules will not block slow bruteforcers, as described in http://home.nuug.no/~peter/hailmary2013/. This example ruleset is intended mainly as an illustration. For example, if a generous number of connections in general are wanted, but the desire is to be more restrictive when it comes to ssh, supplement the rule above with something like the one below, early on in the rule set: pass quick proto { tcp, udp } from any to any port ssh \ flags S/SA keep state \ (max-src-conn 15, max-src-conn-rate 5/3, \ overload <bruteforce> flush global) It May Not be Necessary to Block All Overloaders It is worth noting that the overload mechanism is a general technique which does not apply exclusively to SSH, and it is not always optimal to entirely block all traffic from offenders. 
For example, an overload rule could be used to protect a mail service or a web service, and the overload table could be used in a rule to assign offenders to a queue with a minimal bandwidth allocation or to redirect to a specific web page. Over time, tables will be filled by overload rules and their size will grow incrementally, taking up more memory. Sometimes an IP address that is blocked is a dynamically assigned one, which has since been assigned to a host which has a legitimate reason to communicate with hosts in the local network. For situations like these, pfctl provides the ability to expire table entries. For example, this command will remove <bruteforce> table entries which have not been referenced for 86400 seconds: &prompt.root; pfctl -t bruteforce -T expire 86400 Similar functionality is provided by security/expiretable, which removes table entries which have not been accessed for a specified period of time. Once installed, expiretable can be run to remove <bruteforce> table entries older than a specified age. This example removes all entries older than 24 hours: /usr/local/sbin/expiretable -v -d -t 24h bruteforce Protecting Against <acronym>SPAM</acronym> Not to be confused with the spamd daemon which comes bundled with spamassassin, mail/spamd can be configured with PF to provide an outer defense against SPAM. This spamd hooks into the PF configuration using a set of redirections. Spammers tend to send a large number of messages, and SPAM is mainly sent from a few spammer-friendly networks and a large number of hijacked machines, both of which are reported to blacklists fairly quickly. When an SMTP connection from an address in a blacklist is received, spamd presents its banner and immediately switches to a mode where it answers SMTP traffic one byte at a time. This technique, which is intended to waste as much time as possible on the spammer's end, is called tarpitting. The specific implementation which uses one byte SMTP replies is often referred to as stuttering. This example demonstrates the basic procedure for setting up spamd with automatically updated blacklists. Refer to the man pages which are installed with mail/spamd for more information. Configuring <application>spamd</application> Install the mail/spamd package or port. In order to use spamd's greylisting features, &man.fdescfs.5; must be mounted at /dev/fd. Add the following line to /etc/fstab: fdescfs /dev/fd fdescfs rw 0 0 Then, mount the filesystem: &prompt.root; mount fdescfs Next, edit the PF ruleset to include: table <spamd> persist table <spamd-white> persist rdr pass on $ext_if inet proto tcp from <spamd> to \ { $ext_if, $localnet } port smtp -> 127.0.0.1 port 8025 rdr pass on $ext_if inet proto tcp from !<spamd-white> to \ { $ext_if, $localnet } port smtp -> 127.0.0.1 port 8025 The two tables <spamd> and <spamd-white> are essential. SMTP traffic from an address listed in <spamd> but not in <spamd-white> is redirected to the spamd daemon listening at port 8025. The next step is to configure spamd in /usr/local/etc/spamd.conf and to add some rc.conf parameters. The installation of mail/spamd includes a sample configuration file (/usr/local/etc/spamd.conf.sample) and a man page for spamd.conf. Refer to these for additional configuration options beyond those shown in this example.
One of the first lines in the configuration file that does not begin with a # comment sign contains the block which defines the all list, which specifies the lists to use: all:\ :traplist:whitelist: This entry adds the desired blacklists, separated by colons (:). To use a whitelist to subtract addresses from a blacklist, add the name of the whitelist immediately after the name of that blacklist. For example: :blacklist:whitelist:. This is followed by the specified blacklist's definition: traplist:\ :black:\ :msg="SPAM. Your address %A has sent spam within the last 24 hours":\ :method=http:\ :file=www.openbsd.org/spamd/traplist.gz where the first line is the name of the blacklist and the second line specifies the list type. The msg field contains the message to display to blacklisted senders during the SMTP dialogue. The method field specifies how spamd-setup fetches the list data; supported methods are http, ftp, from a file in a mounted file system, and via exec of an external program. Finally, the file field specifies the name of the file spamd expects to receive. The definition of the specified whitelist is similar, but omits the msg field since a message is not needed: whitelist:\ :white:\ :method=file:\ :file=/var/mail/whitelist.txt Choose Data Sources with Care Using all the blacklists in the sample spamd.conf will blacklist large blocks of the Internet. Administrators need to edit the file to create an optimal configuration which uses applicable data sources and, when necessary, uses custom lists. Next, add this entry to /etc/rc.conf. Additional flags are described in the man page specified by the comment: spamd_flags="-v" # use "" and see spamd-setup(8) for flags When finished, reload the ruleset, start spamd by typing service obspamd start, and complete the configuration using spamd-setup. Finally, create a &man.cron.8; job which calls spamd-setup to update the tables at reasonable intervals. On a typical gateway in front of a mail server, hosts will start getting trapped within a few seconds to several minutes. PF also supports greylisting, which temporarily rejects messages from unknown hosts with 45n codes. Messages from greylisted hosts which try again within a reasonable time are let through. Traffic from senders which are set up to behave within the limits set by RFC 1123 and RFC 2821 is immediately let through. More information about greylisting as a technique can be found at the greylisting.org web site. The most amazing thing about greylisting, apart from its simplicity, is that it still works. Spammers and malware writers have been very slow to adapt in order to bypass this technique. The basic procedure for configuring greylisting is as follows: Configuring Greylisting Make sure that &man.fdescfs.5; is mounted as described in Step 1 of the previous Procedure. To run spamd in greylisting mode, add this line to /etc/rc.conf: spamd_grey="YES" # use spamd greylisting if YES Refer to the spamd man page for descriptions of additional related parameters. To complete the greylisting setup: &prompt.root; service obspamd restart &prompt.root; service spamlogd start Behind the scenes, the spamdb database tool and the spamlogd whitelist updater perform essential functions for the greylisting feature. spamdb is the administrator's main interface to managing the black, grey, and white lists via the contents of the /var/db/spamdb database. Network Hygiene This section describes how block-policy, scrub, and antispoof can be used to make the ruleset behave sanely.
The block-policy is an option which can be set in the options part of the ruleset, which precedes the redirection and filtering rules. This option determines which feedback, if any, PF sends to hosts that are blocked by a rule. The option has two possible values: drop drops blocked packets with no feedback, and return returns a status code such as Connection refused. If not set, the default policy is drop. To change the block-policy, specify the desired value: set block-policy return In PF, scrub is a keyword which enables network packet normalization. This process reassembles fragmented packets and drops TCP packets that have invalid flag combinations. Enabling scrub provides a measure of protection against certain kinds of attacks based on incorrect handling of packet fragments. A number of options are available, but the simplest form is suitable for most configurations: scrub in all Some services, such as NFS, require specific fragment handling options. Refer to http://www.openbsd.gr/faq/pf/scrub.html for more information. This example reassembles fragments, clears the do not fragment bit, and sets the maximum segment size to 1440 bytes: scrub in all fragment reassemble no-df max-mss 1440 The antispoof mechanism protects against activity from spoofed or forged IP addresses, mainly by blocking packets appearing on interfaces and in directions which are logically not possible. These rules weed out spoofed traffic coming in from the rest of the world as well as any spoofed packets which originate in the local network: antispoof for $ext_if antispoof for $int_if Handling Non-Routable Addresses Even with a properly configured gateway to handle network address translation, one may have to compensate for other people's misconfigurations. A common misconfiguration is to let traffic with non-routable addresses out to the Internet. Since traffic from non-routeable addresses can play a part in several DoS attack techniques, consider explicitly blocking traffic from non-routeable addresses from entering the network through the external interface. In this example, a macro containing non-routable addresses is defined, then used in blocking rules. Traffic to and from these addresses is quietly dropped on the gateway's external interface. martians = "{ 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, \ 10.0.0.0/8, 169.254.0.0/16, 192.0.2.0/24, \ 0.0.0.0/8, 240.0.0.0/4 }" block drop in quick on $ext_if from $martians to any block drop out quick on $ext_if from any to $martians
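To show how the pieces discussed in this section fit together, the following sketch arranges them in the order PF expects: options and scrub first, then translation, then the filtering rules. The interface names are the same illustrative xl0 and xl1 used earlier, and a real ruleset would add the macros, tables, and pass rules appropriate for the site: ext_if = "xl0" # illustrative external interface int_if = "xl1" # illustrative internal interface localnet = $int_if:network set block-policy return scrub in all nat on $ext_if from $localnet to any -> ($ext_if) antispoof for $ext_if antispoof for $int_if block all pass from { lo0, $localnet } to any keep state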
<application>IPFW</application> firewall IPFW IPFW is a stateful firewall written for &os; which supports both IPv4 and IPv6. It is comprised of several components: the kernel firewall filter rule processor and its integrated packet accounting facility, the logging facility, NAT, the &man.dummynet.4; traffic shaper, a forward facility, a bridge facility, and an ipstealth facility. &os; provides a sample ruleset in /etc/rc.firewall which defines several firewall types for common scenarios to assist novice users in generating an appropriate ruleset. IPFW provides a powerful syntax which advanced users can use to craft customized rulesets that meet the security requirements of a given environment. This section describes how to enable IPFW, provides an overview of its rule syntax, and demonstrates several rulesets for common configuration scenarios. Enabling <application>IPFW</application> IPFW enabling IPFW is included in the basic &os; install as a kernel loadable module, meaning that a custom kernel is not needed in order to enable IPFW. kernel options IPFIREWALL kernel options IPFIREWALL_VERBOSE kernel options IPFIREWALL_VERBOSE_LIMIT IPFW kernel options For those users who wish to statically compile IPFW support into a custom kernel, refer to the instructions in . The following options are available for the custom kernel configuration file: options IPFIREWALL # enables IPFW options IPFIREWALL_VERBOSE # enables logging for rules with log keyword options IPFIREWALL_VERBOSE_LIMIT=5 # limits number of logged packets per-entry options IPFIREWALL_DEFAULT_TO_ACCEPT # sets default policy to pass what is not explicitly denied options IPDIVERT # enables NAT To configure the system to enable IPFW at boot time, add the following entry to /etc/rc.conf: firewall_enable="YES" To use one of the default firewall types provided by &os;, add another line which specifies the type: firewall_type="open" The available types are: open: passes all traffic. client: protects only this machine. simple: protects the whole network. closed: entirely disables IP traffic except for the loopback interface. workstation: protects only this machine using stateful rules. UNKNOWN: disables the loading of firewall rules. filename: full path of the file containing the firewall ruleset. If firewall_type is set to either client or simple, modify the default rules found in /etc/rc.firewall to fit the configuration of the system. Note that the filename type is used to load a custom ruleset. An alternate way to load a custom ruleset is to set the firewall_script variable to the absolute path of an executable script that includes IPFW commands. The examples used in this section assume that the firewall_script is set to /etc/ipfw.rules: firewall_script="/etc/ipfw.rules" To enable logging, include this line: firewall_logging="YES" There is no /etc/rc.conf variable to set logging limits. To limit the number of times a rule is logged per connection attempt, specify the number using this line in /etc/sysctl.conf: net.inet.ip.fw.verbose_limit=5 After saving the needed edits, start the firewall. To enable logging limits now, also set the sysctl value specified above: &prompt.root; service ipfw start &prompt.root; sysctl net.inet.ip.fw.verbose_limit=5 <application>IPFW</application> Rule Syntax IPFW rule processing order When a packet enters the IPFW firewall, it is compared against the first rule in the ruleset and progresses one rule at a time, moving from top to bottom in sequence. 
When the packet matches the selection parameters of a rule, the rule's action is executed and the search of the ruleset terminates for that packet. This is referred to as first match wins. If the packet does not match any of the rules, it gets caught by the mandatory IPFW default rule number 65535, which denies all packets and silently discards them. However, if the packet matches a rule that contains the count, skipto, or tee keywords, the search continues. Refer to &man.ipfw.8; for details on how these keywords affect rule processing. IPFW rule syntax When creating an IPFW rule, keywords must be written in the following order. Some keywords are mandatory while other keywords are optional. The words shown in uppercase represent a variable and the words shown in lowercase must precede the variable that follows it. The # symbol is used to mark the start of a comment and may appear at the end of a rule or on its own line. Blank lines are ignored. CMD RULE_NUMBER set SET_NUMBER ACTION log LOG_AMOUNT PROTO from SRC SRC_PORT to DST DST_PORT OPTIONS This section provides an overview of these keywords and their options. It is not an exhaustive list of every possible option. Refer to &man.ipfw.8; for a complete description of the rule syntax that can be used when creating IPFW rules. CMD Every rule must start with ipfw add. RULE_NUMBER Each rule is associated with a number from 1 to 65534. The number is used to indicate the order of rule processing. Multiple rules can have the same number, in which case they are applied according to the order in which they have been added. SET_NUMBER Each rule is associated with a set number from 0 to 31. Sets can be individually disabled or enabled, making it possible to quickly add or delete a set of rules. If a SET_NUMBER is not specified, the rule will be added to set 0. ACTION A rule can be associated with one of the following actions. The specified action will be executed when the packet matches the selection criterion of the rule. allow | accept | pass | permit: these keywords are equivalent and allow packets that match the rule. check-state: checks the packet against the dynamic state table. If a match is found, execute the action associated with the rule which generated this dynamic rule, otherwise move to the next rule. A check-state rule does not have selection criterion. If no check-state rule is present in the ruleset, the dynamic rules table is checked at the first keep-state or limit rule. count: updates counters for all packets that match the rule. The search continues with the next rule. deny | drop: either word silently discards packets that match this rule. Additional actions are available. Refer to &man.ipfw.8; for details. LOG_AMOUNT When a packet matches a rule with the log keyword, a message will be logged to &man.syslogd.8; with a facility name of SECURITY. Logging only occurs if the number of packets logged for that particular rule does not exceed a specified LOG_AMOUNT. If no LOG_AMOUNT is specified, the limit is taken from the value of net.inet.ip.fw.verbose_limit. A value of zero removes the logging limit. Once the limit is reached, logging can be re-enabled by clearing the logging counter or the packet counter for that rule, using ipfw reset log. Logging is done after all other packet matching conditions have been met, and before performing the final action on the packet. The administrator decides which rules to enable logging on. PROTO This optional value can be used to specify any protocol name or number found in /etc/protocols. 
SRC The from keyword must be followed by the source address or a keyword that represents the source address. An address can be represented by any, me (any address configured on an interface on this system), me6, (any IPv6 address configured on an interface on this system), or table followed by the number of a lookup table which contains a list of addresses. When specifying an IP address, it can be optionally followed by its CIDR mask or subnet mask. For example, 1.2.3.4/25 or 1.2.3.4:255.255.255.128. SRC_PORT An optional source port can be specified using the port number or name from /etc/services. DST The to keyword must be followed by the destination address or a keyword that represents the destination address. The same keywords and addresses described in the SRC section can be used to describe the destination. DST_PORT An optional destination port can be specified using the port number or name from /etc/services. OPTIONS Several keywords can follow the source and destination. As the name suggests, OPTIONS are optional. Commonly used options include in or out, which specify the direction of packet flow, icmptypes followed by the type of ICMP message, and keep-state. When a keep-state rule is matched, the firewall will create a dynamic rule which matches bidirectional traffic between the source and destination addresses and ports using the same protocol. The dynamic rules facility is vulnerable to resource depletion from a SYN-flood attack which would open a huge number of dynamic rules. To counter this type of attack with IPFW, use limit. This option limits the number of simultaneous sessions by checking the open dynamic rules, counting the number of times this rule and IP address combination occurred. If this count is greater than the value specified by limit, the packet is discarded. Dozens of OPTIONS are available. Refer to &man.ipfw.8; for a description of each available option. Example Ruleset This section demonstrates how to create an example stateful firewall ruleset script named /etc/ipfw.rules. In this example, all connection rules use in or out to clarify the direction. They also use via interface-name to specify the interface the packet is traveling over. When first creating or testing a firewall ruleset, consider temporarily setting this tunable: net.inet.ip.fw.default_to_accept="1" This sets the default policy of &man.ipfw.8; to be more permissive than the default deny ip from any to any, making it slightly more difficult to get locked out of the system right after a reboot. The firewall script begins by indicating that it is a Bourne shell script and flushes any existing rules. It then creates the cmd variable so that ipfw add does not have to be typed at the beginning of every rule. It also defines the pif variable which represents the name of the interface that is attached to the Internet. #!/bin/sh # Flush out the list before we begin. 
ipfw -q -f flush # Set rules command prefix cmd="ipfw -q add" pif="dc0" # interface name of NIC attached to Internet The first two rules allow all traffic on the trusted internal interface and on the loopback interface: # Change xl0 to LAN NIC interface name $cmd 00005 allow all from any to any via xl0 # No restrictions on Loopback Interface $cmd 00010 allow all from any to any via lo0 The next rule allows the packet through if it matches an existing entry in the dynamic rules table: $cmd 00101 check-state The next set of rules defines which stateful connections internal systems can create to hosts on the Internet: # Allow access to public DNS # Replace x.x.x.x with the IP address of a public DNS server # and repeat for each DNS server in /etc/resolv.conf $cmd 00110 allow tcp from any to x.x.x.x 53 out via $pif setup keep-state $cmd 00111 allow udp from any to x.x.x.x 53 out via $pif keep-state # Allow access to ISP's DHCP server for cable/DSL configurations. # Use the first rule and check log for IP address. # Then, uncomment the second rule, input the IP address, and delete the first rule $cmd 00120 allow log udp from any to any 67 out via $pif keep-state #$cmd 00120 allow udp from any to x.x.x.x 67 out via $pif keep-state # Allow outbound HTTP and HTTPS connections $cmd 00200 allow tcp from any to any 80 out via $pif setup keep-state $cmd 00220 allow tcp from any to any 443 out via $pif setup keep-state # Allow outbound email connections $cmd 00230 allow tcp from any to any 25 out via $pif setup keep-state $cmd 00231 allow tcp from any to any 110 out via $pif setup keep-state # Allow outbound ping $cmd 00250 allow icmp from any to any out via $pif keep-state # Allow outbound NTP $cmd 00260 allow tcp from any to any 37 out via $pif setup keep-state # Allow outbound SSH $cmd 00280 allow tcp from any to any 22 out via $pif setup keep-state # deny and log all other outbound connections $cmd 00299 deny log all from any to any out via $pif The next set of rules controls connections from Internet hosts to the internal network. It starts by denying packets typically associated with attacks and then explicitly allows specific types of connections. All the authorized services that originate from the Internet use limit to prevent flooding. # Deny all inbound traffic from non-routable reserved address spaces $cmd 00300 deny all from 192.168.0.0/16 to any in via $pif #RFC 1918 private IP $cmd 00301 deny all from 172.16.0.0/12 to any in via $pif #RFC 1918 private IP $cmd 00302 deny all from 10.0.0.0/8 to any in via $pif #RFC 1918 private IP $cmd 00303 deny all from 127.0.0.0/8 to any in via $pif #loopback $cmd 00304 deny all from 0.0.0.0/8 to any in via $pif #loopback $cmd 00305 deny all from 169.254.0.0/16 to any in via $pif #DHCP auto-config $cmd 00306 deny all from 192.0.2.0/24 to any in via $pif #reserved for docs $cmd 00307 deny all from 204.152.64.0/23 to any in via $pif #Sun cluster interconnect $cmd 00308 deny all from 224.0.0.0/3 to any in via $pif #Class D & E multicast # Deny public pings $cmd 00310 deny icmp from any to any in via $pif # Deny ident $cmd 00315 deny tcp from any to any 113 in via $pif # Deny all Netbios services. 
$cmd 00320 deny tcp from any to any 137 in via $pif $cmd 00321 deny tcp from any to any 138 in via $pif $cmd 00322 deny tcp from any to any 139 in via $pif $cmd 00323 deny tcp from any to any 81 in via $pif # Deny fragments $cmd 00330 deny all from any to any frag in via $pif # Deny ACK packets that did not match the dynamic rule table $cmd 00332 deny tcp from any to any established in via $pif # Allow traffic from ISP's DHCP server. # Replace x.x.x.x with the same IP address used in rule 00120. #$cmd 00360 allow udp from any to x.x.x.x 67 in via $pif keep-state # Allow HTTP connections to internal web server $cmd 00400 allow tcp from any to me 80 in via $pif setup limit src-addr 2 # Allow inbound SSH connections $cmd 00410 allow tcp from any to me 22 in via $pif setup limit src-addr 2 # Reject and log all other incoming connections $cmd 00499 deny log all from any to any in via $pif The last rule logs all packets that do not match any of the rules in the ruleset: # Everything else is denied and logged $cmd 00999 deny log all from any to any Configuring <acronym>NAT</acronym> Chern Lee Contributed by NAT and IPFW &os;'s built-in NAT daemon, &man.natd.8;, works in conjunction with IPFW to provide network address translation. This can be used to provide an Internet Connection Sharing solution so that several internal computers can connect to the Internet using a single IP address. To do this, the &os; machine connected to the Internet must act as a gateway. This system must have two NICs, where one is connected to the Internet and the other is connected to the internal LAN. Each machine connected to the LAN should be assigned an IP address in the private network space, as defined by RFC 1918, and have the default gateway set to the &man.natd.8; system's internal IP address. Some additional configuration is needed in order to activate the NAT function of IPFW. If the system has a custom kernel, the kernel configuration file needs to include options IPDIVERT along with the other IPFIREWALL options described in . To enable NAT support at boot time, the following must be in /etc/rc.conf: gateway_enable="YES" # enables the gateway natd_enable="YES" # enables NAT natd_interface="rl0" # specify interface name of NIC attached to Internet natd_flags="-dynamic -m" # -m = preserve port numbers; additional options are listed in &man.natd.8; It is also possible to specify a configuration file which contains the options to pass to &man.natd.8;: natd_flags="-f /etc/natd.conf" The specified file must contain a list of configuration options, one per line. For example: redirect_port tcp 192.168.0.2:6667 6667 redirect_port tcp 192.168.0.3:80 80 For more information about this configuration file, consult &man.natd.8;. Next, add the NAT rules to the firewall ruleset. When the ruleset contains stateful rules, the positioning of the NAT rules is critical and the skipto action is used. The skipto action requires a rule number so that it knows which rule to jump to. The following example builds upon the firewall ruleset shown in the previous section. It adds some additional entries and modifies some existing rules in order to configure the firewall for NAT.
It starts by adding some additional variables which represent the rule number to skip to, the keep-state option, and a list of TCP ports which will be used to reduce the number of rules: #!/bin/sh ipfw -q -f flush cmd="ipfw -q add" skip="skipto 500" pif=dc0 ks="keep-state" good_tcpo="22,25,37,53,80,443,110" The inbound NAT rule is inserted after the two rules which allow all traffic on the trusted internal interface and on the loopback interface and before the check-state rule. It is important that the rule number selected for this NAT rule, in this example 100, is higher than the first two rules and lower than the check-state rule: $cmd 005 allow all from any to any via xl0 # exclude LAN traffic $cmd 010 allow all from any to any via lo0 # exclude loopback traffic $cmd 100 divert natd ip from any to any in via $pif # NAT any inbound packets # Allow the packet through if it has an existing entry in the dynamic rules table $cmd 101 check-state The outbound rules are modified to replace the allow action with the $skip variable, indicating that rule processing will continue at rule 500. The seven tcp rules have been replaced by rule 125 as the $good_tcpo variable contains the seven allowed outbound ports. # Authorized outbound packets $cmd 120 $skip udp from any to x.x.x.x 53 out via $pif $ks $cmd 121 $skip udp from any to x.x.x.x 67 out via $pif $ks $cmd 125 $skip tcp from any to any $good_tcpo out via $pif setup $ks $cmd 130 $skip icmp from any to any out via $pif $ks The inbound rules remain the same, except for the very last rule which removes the via $pif in order to catch both inbound and outbound rules. The NAT rule must follow this last outbound rule, must have a higher number than that last rule, and the rule number must be referenced by the skipto action. In this ruleset, rule number 500 diverts all packets which match the outbound rules to &man.natd.8; for NAT processing. The next rule allows any packet which has undergone NAT processing to pass. $cmd 499 deny log all from any to any $cmd 500 divert natd ip from any to any out via $pif # skipto location for outbound stateful rules $cmd 510 allow ip from any to any In this example, rules 100, 101, 125, 500, and 510 control the address translation of the outbound and inbound packets so that the entries in the dynamic state table always register the private LAN IP address. Consider an internal web browser which initializes a new outbound HTTP session over port 80. When the first outbound packet enters the firewall, it does not match rule 100 because it is headed out rather than in. It passes rule 101 because this is the first packet and it has not been posted to the dynamic state table yet. The packet finally matches rule 125 as it is outbound on an allowed port and has a source IP address from the internal LAN. On matching this rule, two actions take place. First, the keep-state action adds an entry to the dynamic state table and the specified action, skipto rule 500, is executed. Next, the packet undergoes NAT and is sent out to the Internet. This packet makes its way to the destination web server, where a response packet is generated and sent back. This new packet enters the top of the ruleset. It matches rule 100 and has its destination IP address mapped back to the original internal address. It then is processed by the check-state rule, is found in the table as an existing session, and is released to the LAN. On the inbound side, the ruleset has to deny bad packets and allow only authorized services. 
A packet which matches an inbound rule is posted to the dynamic state table and the packet is released to the LAN. The packet generated as a response is recognized by the check-state rule as belonging to an existing session. It is then sent to rule 500 to undergo NAT before being released to the outbound interface. Port Redirection The drawback with &man.natd.8; is that the LAN clients are not accessible from the Internet. Clients on the LAN can make outgoing connections to the world but cannot receive incoming ones. This presents a problem if trying to run Internet services on one of the LAN client machines. A simple way around this is to redirect selected Internet ports on the &man.natd.8; machine to a LAN client. For example, an IRC server runs on client A and a web server runs on client B. For this to work properly, connections received on ports 6667 (IRC) and 80 (HTTP) must be redirected to the respective machines. The syntax for -redirect_port is as follows: -redirect_port proto targetIP:targetPORT[-targetPORT] [aliasIP:]aliasPORT[-aliasPORT] [remoteIP[:remotePORT[-remotePORT]]] In the above example, the argument should be: -redirect_port tcp 192.168.0.2:6667 6667 -redirect_port tcp 192.168.0.3:80 80 This redirects the proper TCP ports to the LAN client machines. Port ranges, rather than individual ports, can also be redirected. For example, tcp 192.168.0.2:2000-3000 2000-3000 would redirect all connections received on ports 2000 to 3000 to ports 2000 to 3000 on client A. These options can be used when directly running &man.natd.8;, placed within the natd_flags="" option in /etc/rc.conf, or passed via a configuration file. For further configuration options, consult &man.natd.8;. Address Redirection address redirection Address redirection is useful if more than one IP address is available. Each LAN client can be assigned its own external IP address by &man.natd.8;, which will then rewrite outgoing packets from the LAN clients with the proper external IP address and redirect all traffic incoming on that particular IP address back to the specific LAN client. This is also known as static NAT. For example, if IP addresses 128.1.1.1, 128.1.1.2, and 128.1.1.3 are available, 128.1.1.1 can be used as the &man.natd.8; machine's external IP address, while 128.1.1.2 and 128.1.1.3 are forwarded back to LAN clients A and B. The syntax is as follows: -redirect_address localIP publicIP localIP The internal IP address of the LAN client. publicIP The external IP address corresponding to the LAN client. In the example, this argument would read: -redirect_address 192.168.0.2 128.1.1.2 -redirect_address 192.168.0.3 128.1.1.3 Like -redirect_port, these arguments are placed within the natd_flags="" option of /etc/rc.conf, or passed via a configuration file. With address redirection, there is no need for port redirection since all data received on a particular IP address is redirected. The external IP addresses on the &man.natd.8; machine must be active and aliased to the external interface. Refer to &man.rc.conf.5; for details. The <application>IPFW</application> Command ipfw ipfw can be used to make manual, single rule additions or deletions to the active firewall while it is running. The problem with using this method is that all the changes are lost when the system reboots. It is recommended to instead write all the rules in a file and to use that file to load the rules at boot time and to replace the currently running firewall rules whenever that file changes.
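As a quick sketch of such a one-off change, a rule can be added by hand and then removed again by its number; the rule number and address below are arbitrary examples: &prompt.root; ipfw add 2000 deny tcp from 203.0.113.5 to any &prompt.root; ipfw delete 2000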
ipfw can be used to display the running firewall rules on the console. The IPFW accounting facility dynamically creates a counter for each rule that counts each packet that matches the rule. During the process of testing a rule, listing the rule with its counter is one way to determine if the rule is functioning as expected. To list all the running rules in sequence: &prompt.root; ipfw list To list all the running rules with a time stamp of the last time the rule was matched: &prompt.root; ipfw -t list The next example lists accounting information and the packet count for matched rules along with the rules themselves. The first column is the rule number, followed by the number of matched packets and bytes, followed by the rule itself. &prompt.root; ipfw -a list To list dynamic rules in addition to static rules: &prompt.root; ipfw -d list To also show the expired dynamic rules: &prompt.root; ipfw -d -e list To zero the counters: &prompt.root; ipfw zero To zero the counters for just the rule with number NUM: &prompt.root; ipfw zero NUM Logging Firewall Messages IPFW logging Even with the logging facility enabled, IPFW will not generate any rule logging on its own. The firewall administrator decides which rules in the ruleset will be logged, and adds the log keyword to those rules. Normally only deny rules are logged. It is customary to duplicate the ipfw default deny everything rule with the log keyword included as the last rule in the ruleset. This way, it is possible to see all the packets that did not match any of the rules in the ruleset. Logging is a two-edged sword. If one is not careful, an overabundance of log data or a DoS attack can fill the disk with log files. Log messages are not only written to syslogd, but also are displayed on the root console screen and soon become annoying. The IPFIREWALL_VERBOSE_LIMIT=5 kernel option limits the number of consecutive messages sent to &man.syslogd.8;, concerning the packet matching of a given rule. When this option is enabled in the kernel, the number of consecutive messages concerning a particular rule is capped at the number specified. There is nothing to be gained from 200 identical log messages. With this option set to five, five consecutive messages concerning a particular rule would be logged to syslogd and the remaining identical consecutive messages would be counted and posted to syslogd with a phrase like the following: last message repeated 45 times All logged packet messages are written by default to /var/log/security, which is defined in /etc/syslog.conf. Building a Rule Script Most experienced IPFW users create a file containing the rules and code them in a manner compatible with running them as a script. The major benefit of doing this is that the firewall rules can be refreshed en masse without the need to reboot the system to activate them. This method is convenient when testing new rules as the procedure can be executed as many times as needed. Being a script, symbolic substitution can be used for frequently used values to be substituted into multiple rules. This example script is compatible with the syntax used by the &man.sh.1;, &man.csh.1;, and &man.tcsh.1; shells. When a symbolic field is referenced in a rule, it is prefixed with a dollar sign ($). The field itself is defined without the $ prefix. The value to populate the symbolic field must be enclosed in double quotes ("").
Start the rules file like this: ############### start of example ipfw rules script ############# # ipfw -q -f flush # Delete all rules # Set defaults oif="tun0" # out interface odns="192.0.2.11" # ISP's DNS server IP address cmd="ipfw -q add " # build rule prefix ks="keep-state" # just too lazy to key this each time $cmd 00500 check-state $cmd 00502 deny all from any to any frag $cmd 00501 deny tcp from any to any established $cmd 00600 allow tcp from any to any 80 out via $oif setup $ks $cmd 00610 allow tcp from any to $odns 53 out via $oif setup $ks $cmd 00611 allow udp from any to $odns 53 out via $oif $ks ################### End of example ipfw rules script ############ The rules are not important as the focus of this example is how the symbolic substitution fields are populated. If the above example was in /etc/ipfw.rules, the rules could be reloaded by the following command: &prompt.root; sh /etc/ipfw.rules /etc/ipfw.rules can be located anywhere and the file can have any name. The same thing could be accomplished by running these commands by hand: &prompt.root; ipfw -q -f flush &prompt.root; ipfw -q add check-state &prompt.root; ipfw -q add deny all from any to any frag &prompt.root; ipfw -q add deny tcp from any to any established &prompt.root; ipfw -q add allow tcp from any to any 80 out via tun0 setup keep-state &prompt.root; ipfw -q add allow tcp from any to 192.0.2.11 53 out via tun0 setup keep-state &prompt.root; ipfw -q add 00611 allow udp from any to 192.0.2.11 53 out via tun0 keep-state IPFILTER (IPF) firewall IPFILTER IPFILTER, also known as IPF, is a cross-platform, open source firewall which has been ported to several operating systems, including &os;, NetBSD, OpenBSD, and &solaris;. IPFILTER is a kernel-side firewall and NAT mechanism that can be controlled and monitored by userland programs. Firewall rules can be set or deleted using ipf, NAT rules can be set or deleted using ipnat, run-time statistics for the kernel parts of IPFILTER can be printed using ipfstat, and ipmon can be used to log IPFILTER actions to the system log files. IPF was originally written using a rule processing logic of the last matching rule wins and only used stateless rules. Since then, IPF has been enhanced to include the quick and keep state options. For a detailed explanation of the legacy rules processing method, refer to http://coombs.anu.edu.au/~avalon/ip-filter.html. The IPF FAQ is at http://www.phildev.net/ipf/index.html. A searchable archive of the IPFilter mailing list is available at http://marc.info/?l=ipfilter. This section of the Handbook focuses on IPF as it pertains to FreeBSD. It provides examples of rules that contain the quick and keep state options. Enabling <application>IPF</application> IPFILTER enabling IPF is included in the basic &os; install as a kernel loadable module, meaning that a custom kernel is not needed in order to enable IPF. kernel options IPFILTER kernel options IPFILTER_LOG kernel options IPFILTER_DEFAULT_BLOCK IPFILTER kernel options For users who prefer to statically compile IPF support into a custom kernel, refer to the instructions in . 
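Since IPF ships as a loadable module, its presence can be checked by hand before relying on the rc.conf entries shown below. This is only an illustrative check and assumes the default module name, ipl:
&prompt.root; kldload ipl
&prompt.root; kldstat | grep ipl
If IPF support is already loaded, or is compiled statically into the kernel, the first command simply reports that fact and the message can be ignored.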
The following kernel options are available: options IPFILTER options IPFILTER_LOG options IPFILTER_LOOKUP options IPFILTER_DEFAULT_BLOCK where options IPFILTER enables support for IPFILTER, options IPFILTER_LOG enables IPF logging using the ipl packet logging pseudo-device for every rule that has the log keyword, IPFILTER_LOOKUP enables IP pools in order to speed up IP lookups, and options IPFILTER_DEFAULT_BLOCK changes the default behavior so that any packet not matching a firewall pass rule gets blocked. To configure the system to enable IPF at boot time, add the following entries to /etc/rc.conf. These entries will also enable logging and a default policy of pass all. To change the default policy to block all without compiling a custom kernel, remember to add a block all rule at the end of the ruleset. ipfilter_enable="YES" # Start ipf firewall ipfilter_rules="/etc/ipf.rules" # loads rules definition text file ipmon_enable="YES" # Start IP monitor log ipmon_flags="-Ds" # D = start as daemon # s = log to syslog # v = log tcp window, ack, seq # n = map IP & port to names If NAT functionality is needed, also add these lines: gateway_enable="YES" # Enable as LAN gateway ipnat_enable="YES" # Start ipnat function ipnat_rules="/etc/ipnat.rules" # rules definition file for ipnat Then, to start IPF now: &prompt.root; service ipfilter start To load the firewall rules, specify the name of the ruleset file using ipf. The following command can be used to replace the currently running firewall rules: &prompt.root; ipf -Fa -f /etc/ipf.rules where -Fa flushes all the internal rules tables and -f specifies the file containing the rules to load. This provides the ability to make changes to a custom ruleset and update the running firewall with a fresh copy of the rules without having to reboot the system. This method is convenient for testing new rules as the procedure can be executed as many times as needed. Refer to &man.ipf.8; for details on the other flags available with this command. <application>IPF</application> Rule Syntax IPFILTER rule syntax This section describes the IPF rule syntax used to create stateful rules. When creating rules, keep in mind that unless the quick keyword appears in a rule, every rule is read in order, with the last matching rule being the one that is applied. This means that even if the first rule to match a packet is a pass, if there is a later matching rule that is a block, the packet will be dropped. Sample rulesets can be found in /usr/share/examples/ipfilter. When creating rules, a # character is used to mark the start of a comment and may appear at the end of a rule, to explain that rule's function, or on its own line. Any blank lines are ignored. The keywords which are used in rules must be written in a specific order, from left to right. Some keywords are mandatory while others are optional. Some keywords have sub-options which may be keywords themselves and also include more sub-options. The keyword order is as follows, where the words shown in uppercase represent a variable and the words shown in lowercase must precede the variable that follows it: ACTION DIRECTION OPTIONS proto PROTO_TYPE from SRC_ADDR SRC_PORT to DST_ADDR DST_PORT TCP_FLAG|ICMP_TYPE keep state STATE This section describes each of these keywords and their options. It is not an exhaustive list of every possible option. Refer to &man.ipf.5; for a complete description of the rule syntax that can be used when creating IPF rules and examples for using each keyword.
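As a point of reference before each keyword is described, here is one hypothetical rule broken down against that keyword order; the interface dc0 and port 22 are chosen purely for illustration:
pass in quick on dc0 proto tcp from any to any port = 22 flags S keep state
Here, pass is the ACTION, in is the DIRECTION, quick on dc0 are OPTIONS, proto tcp gives the PROTO_TYPE, from any and to any port = 22 supply the source and destination with a DST_PORT, flags S is the TCP_FLAG, and keep state is the STATE keyword.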
ACTION The action keyword indicates what to do with the packet if it matches that rule. Every rule must have an action. The following actions are recognized: block: drops the packet. pass: allows the packet. log: generates a log record. count: counts the number of packets and bytes which can provide an indication of how often a rule is used. auth: queues the packet for further processing by another program. call: provides access to functions built into IPF that allow more complex actions. decapsulate: removes any headers in order to process the contents of the packet. DIRECTION Next, each rule must explicitly state the direction of traffic using one of these keywords: in: the rule is applied against an inbound packet. out: the rule is applied against an outbound packet. all: the rule applies to either direction. If the system has multiple interfaces, the interface can be specified along with the direction. An example would be in on fxp0. OPTIONS Options are optional. However, if multiple options are specified, they must be used in the order shown here. log: when performing the specified ACTION, the contents of the packet's headers will be written to the &man.ipl.4; packet log pseudo-device. quick: if a packet matches this rule, the ACTION specified by the rule occurs and no further processing of any following rules will occur for this packet. on: must be followed by the interface name as displayed by &man.ifconfig.8;. The rule will only match if the packet is going through the specified interface in the specified direction. When using the log keyword, the following qualifiers may be used in this order: body: indicates that the first 128 bytes of the packet contents will be logged after the headers. first: if the log keyword is being used in conjunction with a keep state option, this option is recommended so that only the triggering packet is logged and not every packet which matches the stateful connection. Additional options are available to specify error return messages. Refer to &man.ipf.5; for more details. PROTO_TYPE The protocol type is optional. However, it is mandatory if the rule needs to specify a SRC_PORT or a DST_PORT as it defines the type of protocol. When specifying the type of protocol, use the proto keyword followed by either a protocol number or name from /etc/protocols. Example protocol names include tcp, udp, or icmp. If PROTO_TYPE is specified but no SRC_PORT or DST_PORT is specified, all port numbers for that protocol will match that rule. SRC_ADDR The from keyword is mandatory and is followed by a keyword which represents the source of the packet. The source can be a hostname, an IP address followed by the CIDR mask, an address pool, or the keyword all. Refer to &man.ipf.5; for examples. There is no way to match ranges of IP addresses which do not express themselves easily using the dotted numeric form / mask-length notation. The net-mgmt/ipcalc package or port may be used to ease the calculation of the CIDR mask. Additional information is available at the utility's web page: http://jodies.de/ipcalc. SRC_PORT The port number of the source is optional. However, if it is used, it requires PROTO_TYPE to be first defined in the rule. The port number must also be preceded by the proto keyword. A number of different comparison operators are supported: = (equal to), != (not equal to), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to). 
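For instance, a hypothetical rule using one of these operators to match any source port below 1024 could be written as follows; the interface name dc0 is only an example:
block in quick on dc0 proto tcp from any port < 1024 to any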
To specify port ranges, place the two port numbers between <> (less than and greater than), >< (greater than and less than), or : (greater than or equal to and less than or equal to). DST_ADDR The to keyword is mandatory and is followed by a keyword which represents the destination of the packet. Similar to SRC_ADDR, it can be a hostname, an IP address followed by the CIDR mask, an address pool, or the keyword all. DST_PORT Similar to SRC_PORT, the port number of the destination is optional. However, if it is used, it requires PROTO_TYPE to be first defined in the rule. The port number must also be preceded by the proto keyword. TCP_FLAG|ICMP_TYPE If tcp is specified as the PROTO_TYPE, flags can be specified as letters, where each letter represents one of the possible TCP flags used to determine the state of a connection. Possible values are: S (SYN), A (ACK), P (PSH), F (FIN), U (URG), R (RST), C (CWR), and E (ECN). If icmp is specified as the PROTO_TYPE, the ICMP type to match can be specified. Refer to &man.ipf.5; for the allowable types. STATE If a pass rule contains keep state, IPF will add an entry to its dynamic state table and allow subsequent packets that match the connection. IPF can track state for TCP, UDP, and ICMP sessions. Any packet that IPF can be certain is part of an active session, even if it is a different protocol, will be allowed. In IPF, packets destined to go out through the interface connected to the public Internet are first checked against the dynamic state table. If the packet matches the next expected packet comprising an active session conversation, it exits the firewall and the state of the session conversation flow is updated in the dynamic state table. Packets that do not belong to an already active session are checked against the outbound ruleset. Packets coming in from the interface connected to the public Internet are first checked against the dynamic state table. If the packet matches the next expected packet comprising an active session, it exits the firewall and the state of the session conversation flow is updated in the dynamic state table. Packets that do not belong to an already active session are checked against the inbound ruleset. Several keywords can be added after keep state. If used, these keywords set various options that control stateful filtering, such as setting connection limits or connection age. Refer to &man.ipf.5; for the list of available options and their descriptions. Example Ruleset This section demonstrates how to create an example ruleset which only allows services matching pass rules and blocks all others. &os; uses the loopback interface (lo0) and the IP address 127.0.0.1 for internal communication. The firewall ruleset must contain rules to allow free movement of these internally used packets: # no restrictions on loopback interface pass in quick on lo0 all pass out quick on lo0 all The public interface connected to the Internet is used to authorize and control access of all outbound and inbound connections. If one or more interfaces are cabled to private networks, those internal interfaces may require rules to allow packets originating from the LAN to flow between the internal networks or to the interface attached to the Internet. The ruleset should be organized into three major sections: any trusted internal interfaces, outbound connections through the public interface, and inbound connections through the public interface.
These two rules allow all traffic to pass through a trusted LAN interface named xl0: # no restrictions on inside LAN interface for private network pass out quick on xl0 all pass in quick on xl0 all The rules for the public interface's outbound and inbound sections should have the most frequently matched rules placed before less commonly matched rules, with the last rule in the section blocking and logging all packets for that interface and direction. This set of rules defines the outbound section of the public interface named dc0. These rules keep state and identify the specific services that internal systems are authorized to access on the public Internet. All the rules use quick and specify the appropriate port numbers and, where applicable, destination addresses. # interface facing Internet (outbound) # Matches session start requests originating from or behind the # firewall, destined for the Internet. # Allow outbound access to public DNS servers. # Replace x.x.x.x with the address listed in /etc/resolv.conf. # Repeat for each DNS server. pass out quick on dc0 proto tcp from any to x.x.x.x port = 53 flags S keep state pass out quick on dc0 proto udp from any to x.x.x.x port = 53 keep state # Allow access to ISP's specified DHCP server for cable or DSL networks. # Use the first rule, then check log for the IP address of DHCP server. # Then, uncomment the second rule, replace z.z.z.z with the IP address, # and comment out the first rule pass out log quick on dc0 proto udp from any to any port = 67 keep state #pass out quick on dc0 proto udp from any to z.z.z.z port = 67 keep state # Allow HTTP and HTTPS pass out quick on dc0 proto tcp from any to any port = 80 flags S keep state pass out quick on dc0 proto tcp from any to any port = 443 flags S keep state # Allow email pass out quick on dc0 proto tcp from any to any port = 110 flags S keep state pass out quick on dc0 proto tcp from any to any port = 25 flags S keep state # Allow the time service (note that NTP itself uses UDP port 123) pass out quick on dc0 proto tcp from any to any port = 37 flags S keep state # Allow FTP pass out quick on dc0 proto tcp from any to any port = 21 flags S keep state # Allow SSH pass out quick on dc0 proto tcp from any to any port = 22 flags S keep state # Allow ping pass out quick on dc0 proto icmp from any to any icmp-type 8 keep state # Block and log everything else block out log first quick on dc0 all This example of the rules in the inbound section of the public interface blocks all undesirable packets first. This reduces the number of packets that are logged by the last rule.
# interface facing Internet (inbound) # Block all inbound traffic from non-routable or reserved address spaces block in quick on dc0 from 192.168.0.0/16 to any #RFC 1918 private IP block in quick on dc0 from 172.16.0.0/12 to any #RFC 1918 private IP block in quick on dc0 from 10.0.0.0/8 to any #RFC 1918 private IP block in quick on dc0 from 127.0.0.0/8 to any #loopback block in quick on dc0 from 0.0.0.0/8 to any #loopback block in quick on dc0 from 169.254.0.0/16 to any #DHCP auto-config block in quick on dc0 from 192.0.2.0/24 to any #reserved for docs block in quick on dc0 from 204.152.64.0/23 to any #Sun cluster interconnect block in quick on dc0 from 224.0.0.0/3 to any #Class D & E multicast # Block fragments and too short tcp packets block in quick on dc0 all with frags block in quick on dc0 proto tcp all with short # block source routed packets block in quick on dc0 all with opt lsrr block in quick on dc0 all with opt ssrr # Block OS fingerprint attempts and log first occurrence block in log first quick on dc0 proto tcp from any to any flags FUP # Block anything with special options block in quick on dc0 all with ipopts # Block public pings and ident block in quick on dc0 proto icmp all icmp-type 8 block in quick on dc0 proto tcp from any to any port = 113 # Block incoming Netbios services block in log first quick on dc0 proto tcp/udp from any to any port = 137 block in log first quick on dc0 proto tcp/udp from any to any port = 138 block in log first quick on dc0 proto tcp/udp from any to any port = 139 block in log first quick on dc0 proto tcp/udp from any to any port = 81 Any time there are logged messages on a rule with the log first option, run ipfstat -hio to evaluate how many times the rule has been matched. A large number of matches may indicate that the system is under attack. The rest of the rules in the inbound section define which connections are allowed to be initiated from the Internet. The last rule denies all connections which were not explicitly allowed by previous rules in this section. # Allow traffic in from ISP's DHCP server. Replace z.z.z.z with # the same IP address used in the outbound section. pass in quick on dc0 proto udp from z.z.z.z to any port = 68 keep state # Allow public connections to specified internal web server pass in quick on dc0 proto tcp from any to x.x.x.x port = 80 flags S keep state # Block and log only first occurrence of all remaining traffic. block in log first quick on dc0 all Configuring <acronym>NAT</acronym> NAT IP masquerading NAT network address translation NAT ipnat To enable NAT, add these statements to /etc/rc.conf and specify the name of the file containing the NAT rules: gateway_enable="YES" ipnat_enable="YES" ipnat_rules="/etc/ipnat.rules" NAT rules are flexible and can accomplish many different things to fit the needs of both commercial and home users. The rule syntax presented here has been simplified to demonstrate common usage. For a complete rule syntax description, refer to &man.ipnat.5;. The basic syntax for a NAT rule is as follows, where map starts the rule and IF should be replaced with the name of the external interface: map IF LAN_IP_RANGE -> PUBLIC_ADDRESS The LAN_IP_RANGE is the range of IP addresses used by internal clients. Usually, it is a private address range such as 192.168.1.0/24. The PUBLIC_ADDRESS can either be the static external IP address or the keyword 0/32 which represents the IP address assigned to IF. 
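As a concrete example, a minimal rule for a gateway whose external interface is dc0 and whose clients use the 192.168.1.0/24 range could look like this; both values are placeholders consistent with the other examples in this section:
map dc0 192.168.1.0/24 -> 0/32
With this single line, every outgoing packet from the private range is rewritten to whatever address is currently assigned to dc0.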
In IPF, when a packet arrives at the firewall from the LAN with a public destination, it first passes through the outbound rules of the firewall ruleset. Then, the packet is passed to the NAT ruleset which is read from the top down, where the first matching rule wins. IPF tests each NAT rule against the packet's interface name and source IP address. When a packet's interface name matches a NAT rule, the packet's source IP address in the private LAN is checked to see if it falls within the IP address range specified in LAN_IP_RANGE. On a match, the packet has its source IP address rewritten with the public IP address specified by PUBLIC_ADDRESS. IPF posts an entry in its internal NAT table so that when the packet returns from the Internet, it can be mapped back to its original private IP address before being passed to the firewall rules for further processing. For networks that have large numbers of internal systems or multiple subnets, the process of funneling every private IP address into a single public IP address becomes a resource problem. Two methods are available to relieve this issue. The first method is to assign a range of ports to use as source ports. By adding the portmap keyword, NAT can be directed to only use source ports in the specified range: map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp 20000:60000 Alternately, use the auto keyword which tells NAT to determine the ports that are available for use: map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp auto The second method is to use a pool of public addresses. This is useful when there are too many LAN addresses to fit into a single public address and a block of public IP addresses is available. These public addresses can be used as a pool from which NAT selects an IP address as a packet's address is mapped on its way out. The range of public IP addresses can be specified using a netmask or CIDR notation. These two rules are equivalent: map dc0 192.168.1.0/24 -> 204.134.75.0/255.255.255.0 map dc0 192.168.1.0/24 -> 204.134.75.0/24 A common practice is to have a publically accessible web server or mail server segregated to an internal network segment. The traffic from these servers still has to undergo NAT, but port redirection is needed to direct inbound traffic to the correct server. For example, to map a web server using the internal address 10.0.10.25 to its public IP address of 20.20.20.5, use this rule: rdr dc0 20.20.20.5/32 port 80 -> 10.0.10.25 port 80 If it is the only web server, this rule would also work as it redirects all external HTTP requests to 10.0.10.25: rdr dc0 0.0.0.0/0 port 80 -> 10.0.10.25 port 80 IPF has a built in FTP proxy which can be used with NAT. It monitors all outbound traffic for active or passive FTP connection requests and dynamically creates temporary filter rules containing the port number used by the FTP data channel. This eliminates the need to open large ranges of high order ports for FTP connections. In this example, the first rule calls the proxy for outbound FTP traffic from the internal LAN. The second rule passes the FTP traffic from the firewall to the Internet, and the third rule handles all non-FTP traffic from the internal LAN: map dc0 10.0.10.0/29 -> 0/32 proxy port 21 ftp/tcp map dc0 0.0.0.0/0 -> 0/32 proxy port 21 ftp/tcp map dc0 10.0.10.0/29 -> 0/32 The FTP map rules go before the NAT rule so that when a packet matches an FTP rule, the FTP proxy creates temporary filter rules to let the FTP session packets pass and undergo NAT. 
All LAN packets that are not FTP will not match the FTP rules but will undergo NAT if they match the third rule. Without the FTP proxy, the following firewall rules would instead be needed. Note that without the proxy, all ports above 1024 need to be allowed: # Allow out LAN PC client FTP to public Internet # Active and passive modes pass out quick on rl0 proto tcp from any to any port = 21 flags S keep state # Allow out passive mode data channel high order port numbers pass out quick on rl0 proto tcp from any to any port > 1024 flags S keep state # Active mode let data channel in from FTP server pass in quick on rl0 proto tcp from any to any port = 20 flags S keep state Whenever the file containing the NAT rules is edited, run ipnat with -CF to delete the current NAT rules and flush the contents of the dynamic translation table. Include -f and specify the name of the NAT ruleset to load: &prompt.root; ipnat -CF -f /etc/ipnat.rules To display the NAT statistics: &prompt.root; ipnat -s To list the NAT table's current mappings: &prompt.root; ipnat -l To turn verbose mode on and display information relating to rule processing and active rules and table entries: &prompt.root; ipnat -v Viewing <application>IPF</application> Statistics ipfstat IPFILTER statistics IPF includes &man.ipfstat.8;, which can be used to retrieve and display statistics which are gathered as packets match rules as they go through the firewall. Statistics are accumulated since the firewall was last started or since the last time they were reset to zero using ipf -Z. The default ipfstat output looks like this: input packets: blocked 99286 passed 1255609 nomatch 14686 counted 0 output packets: blocked 4200 passed 1284345 nomatch 14687 counted 0 input packets logged: blocked 99286 passed 0 output packets logged: blocked 0 passed 0 packets logged: input 0 output 0 log failures: input 3898 output 0 fragment state(in): kept 0 lost 0 fragment state(out): kept 0 lost 0 packet state(in): kept 169364 lost 0 packet state(out): kept 431395 lost 0 ICMP replies: 0 TCP RSTs sent: 0 Result cache hits(in): 1215208 (out): 1098963 IN Pullups succeeded: 2 failed: 0 OUT Pullups succeeded: 0 failed: 0 Fastroute successes: 0 failures: 0 TCP cksum fails(in): 0 (out): 0 Packet log flags set: (0) Several options are available. When supplied with either -i for inbound or -o for outbound, the command will retrieve and display the appropriate list of filter rules currently installed and in use by the kernel. To also see the rule numbers, include -n. For example, ipfstat -on displays the outbound rules table with rule numbers: @1 pass out on xl0 from any to any @2 block out on dc0 from any to any @3 pass out quick on dc0 proto tcp/udp from any to any keep state Include -h to prefix each rule with a count of how many times the rule was matched. For example, ipfstat -oh displays the outbound internal rules table, prefixing each rule with its usage count: 2451423 pass out on xl0 from any to any 354727 block out on dc0 from any to any 430918 pass out quick on dc0 proto tcp/udp from any to any keep state To display the state table in a format similar to &man.top.1;, use ipfstat -t. When the firewall is under attack, this option provides the ability to identify and see the attacking packets. The optional sub-flags give the ability to select the destination or source IP, port, or protocol to be monitored in real time. Refer to &man.ipfstat.8; for details.
<application>IPF</application> Logging ipmon IPFILTER logging IPF provides ipmon, which can be used to write the firewall's logging information in a human-readable format. It requires that options IPFILTER_LOG be first added to a custom kernel using the instructions in . This command is typically run in daemon mode in order to provide a continuous system log file so that logging of past events may be reviewed. Since &os; has a built-in &man.syslogd.8; facility to automatically rotate system logs, the default rc.conf ipmon_flags statement uses -Ds: ipmon_flags="-Ds" # D = start as daemon # s = log to syslog # v = log tcp window, ack, seq # n = map IP & port to names Logging provides the ability to review, after the fact, information such as which packets were dropped, what addresses they came from, and where they were going. This information is useful in tracking down attackers. Once the logging facility is enabled in rc.conf and started with service ipmon start, IPF will only log the rules which contain the log keyword. The firewall administrator decides which rules in the ruleset should be logged and normally only deny rules are logged. It is customary to include the log keyword in the last rule in the ruleset. This makes it possible to see all the packets that did not match any of the rules in the ruleset. By default, ipmon -Ds mode uses local0 as the logging facility. The following logging levels can be used to further segregate the logged data: LOG_INFO - packets logged using the "log" keyword as the action rather than pass or block. LOG_NOTICE - packets logged which are also passed. LOG_WARNING - packets logged which are also blocked. LOG_ERR - packets which have been logged and which can be considered short due to an incomplete header. In order to set up IPF to log all data to /var/log/ipfilter.log, first create the empty file: &prompt.root; touch /var/log/ipfilter.log Then, to write all logged messages to the specified file, add the following statement to /etc/syslog.conf: local0.* /var/log/ipfilter.log To activate the changes and instruct &man.syslogd.8; to read the modified /etc/syslog.conf, run service syslogd reload. Do not forget to edit /etc/newsyslog.conf to rotate the new log file. Messages generated by ipmon consist of data fields separated by white space. Fields common to all messages are: The date of packet receipt. The time of packet receipt. This is in the form HH:MM:SS.F, for hours, minutes, seconds, and fractions of a second. The name of the interface that processed the packet. The group and rule number of the rule in the format @0:17. The action: p for passed, b for blocked, S for a short packet, n for did not match any rules, and L for a log rule. The addresses written as three fields: the source address and port separated by a comma, the -> symbol, and the destination address and port. For example: 209.53.17.22,80 -> 198.73.220.17,1722. PR followed by the protocol name or number: for example, PR tcp. len followed by the header length and total length of the packet: for example, len 20 40. If the packet is a TCP packet, there will be an additional field starting with a hyphen followed by letters corresponding to any flags that were set. Refer to &man.ipf.5; for a list of letters and their flags. If the packet is an ICMP packet, there will be two fields at the end: the first always being icmp and the next being the ICMP message and sub-message type, separated by a slash. For example: icmp 3/3 for a port unreachable message.
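As a follow-up to the log rotation reminder above, one possible /etc/newsyslog.conf entry is sketched here. The mode, count, size, and compression flag are arbitrary choices and should be adjusted to local policy:
/var/log/ipfilter.log    600  7     100  *     Z
This example rotates the log once it reaches roughly 100 kB, keeps seven compressed archives, and creates each new log file with mode 600.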
Index: head/en_US.ISO8859-1/books/handbook/install/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/install/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/install/chapter.xml (revision 46272) @@ -1,4625 +1,4625 @@ Installing &os; 8.<replaceable>X</replaceable> JimMockRestructured, reorganized, and parts rewritten by RandyPrattThe sysinstall walkthrough, screenshots, and general copy by Synopsis installation &os; provides a text-based, easy to use installation program. &os; 9.0-RELEASE and later use the installation program known as &man.bsdinstall.8; while &os; 8.X uses &man.sysinstall.8;. This chapter describes how to use &man.sysinstall.8;. The use of &man.bsdinstall.8; is covered in . After reading this chapter, you will know: How to create the &os; installation media. How &os; refers to and subdivides hard disks. How to start &man.sysinstall.8;. The questions &man.sysinstall.8; asks, what they mean, and how to answer them. Before reading this chapter, you should: Read the supported hardware list that shipped with the version of &os; to install, and verify that the system's hardware is supported. In general, these installation instructions are written for the &i386; and &os;/&arch.amd64; architectures. Where applicable, instructions specific to other platforms will be listed. There may be minor differences between the installer and what is shown here. This chapter should be used as a general guide rather than a literal installation manual. Hardware Requirements Minimal Configuration The minimal configuration to install &os; varies with the &os; version and the hardware architecture. A summary of this information is given in the following sections. Depending on the method chosen to install &os;, a floppy drive, CDROM drive, or network adapter may be needed. Instructions on how to prepare the installation media can be found in . &os;/&arch.i386; and &os;/&arch.pc98; Both &os;/&arch.i386; and &os;/&arch.pc98; require a 486 or better processor, at least 24 MB of RAM, and at least 150 MB of free hard drive space for the most minimal installation. In the case of older hardware, installing more RAM and more hard drive space is often more important than a faster processor. &os;/&arch.amd64; There are two classes of processors capable of running &os;/&arch.amd64;. The first are AMD64 processors, including the &amd.athlon;64, &amd.athlon;64-FX, and &amd.opteron; or better processors. The second class of processors includes those using the &intel; EM64T architecture. Examples of these processors include the &intel; &core; 2 Duo, Quad, Extreme processor families, and the &intel; &xeon; 3000, 5000, and 7000 sequences of processors. If the machine is based on an nVidia nForce3 Pro-150, the BIOS setup must be used to disable the IO APIC. If this option does not exist, disable ACPI instead as there are bugs in the Pro-150 chipset. &os;/&arch.sparc64; To install &os;/&arch.sparc64;, use a supported platform (see ). A dedicated disk is needed for &os;/&arch.sparc64; as it is not possible to share a disk with another operating system at this time. Supported Hardware A list of supported hardware is provided with each &os; release in the &os; Hardware Notes. This document can usually be found in a file named HARDWARE.TXT, in the top-level directory of a CDROM or FTP distribution, or in &man.sysinstall.8;'s documentation menu. 
It lists, for a given architecture, which hardware devices are known to be supported by each release of &os;. Copies of the supported hardware list for various releases and architectures can also be found on the Release Information page of the &os; website. Pre-installation Tasks Inventory the Computer Before installing &os;, it is recommended to inventory the components in the computer. The &os; installation routines will show components such as hard disks, network cards, and CDROM drives with their model number and manufacturer. &os; will also attempt to determine the correct configuration for these devices, including information about IRQ and I/O port usage. Due to the vagaries of computer hardware, this process is not always completely successful, and &os; may need some manual configuration. If another operating system is already installed, use the facilities provided by that operating system to view the hardware configuration. If the settings of an expansion card are not obvious, check if they are printed on the card itself. Popular IRQ numbers are 3, 5, and 7, and I/O port addresses are normally written as hexadecimal numbers, such as 0x330. It is recommended to print or write down this information before installing &os;. It may help to use a table, as seen in this example: Sample Device Inventory Device Name IRQ I/O port(s) Notes First hard disk N/A N/A 40 GB, made by Seagate, first IDE master CDROM N/A N/A First IDE slave Second hard disk N/A N/A 20 GB, made by IBM, second IDE master First IDE controller 14 0x1f0 Network card N/A N/A &intel; 10/100 Modem N/A N/A &tm.3com; 56K faxmodem, on COM1
Once the inventory of the components in the computer is complete, check if it matches the hardware requirements of the &os; release to install.
Make a Backup If the computer contains valuable data, ensure it is backed up, and that the backup has been tested before installing &os;. The &os; installer will prompt before writing any data to disk, but once that process has started, it cannot be undone. Decide Where to Install &os; If &os; is to be installed on the entire hard disk, skip this section. However, if &os; will co-exist with other operating systems, a rough understanding of how data is laid out on the disk is useful. Disk Layouts for &os;/&arch.i386; A PC disk can be divided into discrete chunks known as partitions. Since &os; also has partitions, naming can quickly become confusing. Therefore, these disk chunks are referred to as slices in &os;. For example, the &os; version of &man.fdisk.8; refers to slices instead of partitions. By design, the PC only supports four partitions per disk. These partitions are called primary partitions. To work around this limitation and allow more than four partitions, a new partition type was created, the extended partition. A disk may contain only one extended partition. Special partitions, called logical partitions, can be created inside this extended partition. Each partition has a partition ID, which is a number used to identify the type of data on the partition. &os; partitions have the partition ID of 165. In general, each operating system will identify partitions in a particular way. For example, &windows;, assigns each primary and logical partition a drive letter, starting with C:. &os; must be installed into a primary partition. If there are multiple disks, a &os; partition can be created on all, or some, of them. When &os; is installed, at least one partition must be available. This might be a blank partition or it might be an existing partition whose data can be overwritten. If all the partitions on all the disks are in use, free one of them for &os; using the tools provided by an existing operating system, such as &windows; fdisk. If there is a spare partition, use that. If it is too small, shrink one or more existing partitions to create more available space. A minimal installation of &os; takes as little as 100 MB of disk space. However, that is a very minimal install, leaving almost no space for files. A more realistic minimum is 250 MB without a graphical environment, and 350 MB or more for a graphical user interface. If other third-party software will be installed, even more space is needed. You can use a tool such as GParted to resize your partitions and make space for &os;. GParted is known to work on NTFS and is available on a number of Live CD Linux distributions, such as SystemRescueCD. Incorrect use of a shrinking tool can delete the data on the disk. Always have a recent, working backup before using this type of tool. Using an Existing Partition Unchanged Consider a computer with a single 4 GB disk that already has a version of &windows; installed, where the disk has been split into two drive letters, C: and D:, each of which is 2 GB in size. There is 1 GB of data on C:, and 0.5 GB of data on D:. This disk has two partitions, one per drive letter. Copy all existing data from D: to C:, which will free up the second partition, ready for &os;. Shrinking an Existing Partition Consider a computer with a single 4 GB disk that already has a version of &windows; installed. When &windows; was installed, it created one large partition, a C: drive that is 4 GB in size. Currently, 1.5 GB of space is used, and &os; should have 2 GB of space. 
In order to install &os;, either: Back up the &windows; data and then reinstall &windows;, asking for a 2 GB partition at install time. Use one of the tools described above to shrink your &windows; partition. Collect the Network Configuration Details Before installing from an FTP site or an NFS server, make note of the network configuration. The installer will prompt for this information so that it can connect to the network to complete the installation. Connecting to an Ethernet Network or Cable/DSL Modem If using an Ethernet network or an Internet connection using an Ethernet adapter via cable or DSL, the following information is needed: IP address IP address of the default gateway Hostname DNS server IP addresses Subnet Mask If this information is unknown, ask the system administrator or service provider. Make note if this information is assigned automatically using DHCP. Connecting Using a Modem If using a dialup modem, &os; can still be installed over the Internet, but it will take a very long time. You will need to know: The phone number to dial the Internet Service Provider (ISP) The COM: port the modem is connected to The username and password for the ISP account Check for &os; Errata Although the &os; Project strives to ensure that each release of &os; is as stable as possible, bugs do occasionally creep into the process. On rare occasions those bugs affect the installation process. As these problems are discovered and fixed, they are noted in the &os; Errata, which is found on the &os; website. Check the errata before installing to make sure that there are no late-breaking problems to be aware of. Information about all releases, including the errata for each release, can be found on the release information section of the &os; website. Obtain the &os; Installation Files The &os; installer can install &os; from files located in any of the following places: Local Media A CDROM or DVD A USB Memory Stick A &ms-dos; partition on the same computer Floppy disks (&os;/&arch.pc98; only) Network An FTP site through a firewall or using an HTTP proxy An NFS server A dedicated parallel or serial connection If installing from a purchased &os; CD/DVD, skip ahead to . To obtain the &os; installation files, skip ahead to which explains how to prepare the installation media. After reading that section, come back here and read on to . Prepare the Boot Media The &os; installation process is started by booting the computer into the &os; installer. It is not a program that can be run within another operating system. The computer normally boots using the operating system installed on the hard disk, but it can also be configured to boot from a CDROM or from a USB disk. If installing from a CD/DVD to a computer whose BIOS supports booting from the CD/DVD, skip this section. The &os; CD/DVD images are bootable and can be used to install &os; without any other special preparation. To create a bootable memory stick, follow these steps: Acquire the Memory Stick Image Memory stick images for &os; 8.X can be downloaded from the ISO-IMAGES/ directory at ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/arch/ISO-IMAGES/version/&os;-version-RELEASE-arch-memstick.img. Replace arch and version with the architecture and the version number to install. For example, the memory stick images for &os;/&arch.i386; &rel2.current;-RELEASE are available from ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/&arch.i386;/ISO-IMAGES/&rel2.current;/&os;-&rel2.current;-RELEASE-&arch.i386;-memstick.img.
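On an existing &os; system, the image can be fetched from the command line with &man.fetch.1;. A minimal example, reusing the &arch.i386; &rel2.current;-RELEASE URL shown above:
&prompt.user; fetch ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/&arch.i386;/ISO-IMAGES/&rel2.current;/&os;-&rel2.current;-RELEASE-&arch.i386;-memstick.img
Any other FTP client or a web browser works just as well.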
A different directory path is used for &os; 9.0-RELEASE and later versions. How to download and install &os; 9.X is covered in . The memory stick image has a .img extension. The ISO-IMAGES/ directory contains a number of different images and the one to use depends on the version of &os; and the type of media supported by the hardware being installed to. Before proceeding, back up the data on the USB stick, as this procedure will erase it. Write the Image File to the Memory Stick Using &os; to Write the Image The example below lists /dev/da0 as the target device where the image will be written. Be very careful that you have the correct device as the output target, or you may destroy your existing data. Writing the Image with &man.dd.1; The .img file is not a regular file that can just be copied to the memory stick. It is an image of the complete contents of the disk. This means that &man.dd.1; must be used to write the image directly to the disk: &prompt.root; dd if=&os;-&rel2.current;-RELEASE-&arch.i386;-memstick.img of=/dev/da0 bs=64k If an Operation not permitted error is displayed, make certain that the target device is not in use, mounted, or being automounted by another program. Then try again. Using &windows; to Write the Image Make sure to use the correct drive letter as the output target, as this command will overwrite and destroy any existing data on the specified device. Obtaining <application>Image Writer for Windows</application> Image Writer for Windows is a free application that can correctly write an image file to a memory stick. Download it from https://launchpad.net/win32-image-writer/ and extract it into a folder. Writing the Image with Image Writer Double-click the Win32DiskImager icon to start the program. Verify that the drive letter shown under Device is the drive with the memory stick. Click the folder icon and select the image to be written to the memory stick. Click Save to accept the image file name. Verify that everything is correct, and that no folders on the memory stick are open in other windows. Finally, click Write to write the image file to the drive. To create the boot floppy images for a &os;/&arch.pc98; installation, follow these steps: Acquire the Boot Floppy Images The &os;/&arch.pc98; boot disks can be downloaded from the floppies directory, ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/pc98/version-RELEASE/floppies/. Replace version with the version number to install. The floppy images have a .flp extension. floppies/ contains a number of different images. Download boot.flp as well as the number of files associated with the type of installation, such as kern.small* or kern*. The FTP program must use binary mode to download these disk images. Some web browsers use text or ASCII mode, which will be apparent if the disks are not bootable. Prepare the Floppy Disks Prepare one floppy disk per downloaded image file. It is imperative that these disks are free from defects. The easiest way to test this is to reformat the disks. Do not trust pre-formatted floppies. The format utility in &windows; will not report the presence of bad blocks; it simply marks them as bad and ignores them. It is advised to use brand new floppies. If the installer crashes, freezes, or otherwise misbehaves, one of the first things to suspect is the floppies. Write the floppy image files to new disks and try again. Write the Image Files to the Floppy Disks The .flp files are not regular files that can be copied to the disk.
They are images of the complete contents of the disk. Specific tools must be used to write the images directly to the disk. DOS &os; provides a tool called rawrite for creating the floppies on a computer running &windows;. This tool can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/pc98/ version-RELEASE/tools/ on the &os; FTP site. Download this tool, insert a floppy, then specify the filename to write to the floppy drive: C:\> rawrite boot.flp A: Repeat this command for each .flp file, replacing the floppy disk each time, being sure to label the disks with the name of the file. Adjust the command line as necessary, depending on where the .flp files are located. When writing the floppies on a &unix;-like system, such as another &os; system, use &man.dd.1; to write the image files directly to disk. On &os;, run: &prompt.root; dd if=boot.flp of=/dev/fd0 On &os;, /dev/fd0 refers to the first floppy disk. Other &unix; variants might have different names for the floppy disk device, so check the documentation for the system as necessary. You are now ready to start installing &os;.
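Before booting from the newly prepared media, it may be worth confirming that the downloaded images are intact. If a CHECKSUM.MD5 file is published in the same download directory for the release, the value it lists can be compared against the local file; a brief example on a &os; system, reusing the memory stick image name from the earlier examples:
&prompt.user; md5 &os;-&rel2.current;-RELEASE-&arch.i386;-memstick.img
If the values do not match, download the image again before writing it to the installation media.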
Starting the Installation By default, the installer will not make any changes to the disk(s) until after the following message: Last Chance: Are you SURE you want continue the installation? If you're running this on a disk with data you wish to save then WE STRONGLY ENCOURAGE YOU TO MAKE PROPER BACKUPS before proceeding! We can take no responsibility for lost disk contents! The install can be exited at any time prior to this final warning without changing the contents of the hard drive. If there is a concern that something is configured incorrectly, turn the computer off before this point, and no damage will be done. Booting Booting for the &i386; Turn on the computer. As it starts it should display an option to enter the system set up menu, or BIOS, commonly reached by keys like F2, F10, Del, or Alt S . Use whichever keystroke is indicated on screen. In some cases the computer may display a graphic while it starts. Typically, pressing Esc will dismiss the graphic and display the boot messages. Find the setting that controls which devices the system boots from. This is usually labeled as the Boot Order and commonly shown as a list of devices, such as Floppy, CDROM, First Hard Disk, and so on. If booting from the CD/DVD, make sure that the CDROM drive is selected. If booting from a USB disk, make sure that it is selected instead. When in doubt, consult the manual that came with the computer or its motherboard. Make the change, then save and exit. The computer should now restart. If using a prepared a bootable USB stick, as described in , plug in the USB stick before turning on the computer. If booting from CD/DVD, turn on the computer, and insert the CD/DVD at the first opportunity. For &os;/&arch.pc98;, installation boot floppies are available and can be prepared as described in . The first floppy disc will contain boot.flp. Put this floppy in the floppy drive to boot into the installer. If the computer starts up as normal and loads the existing operating system, then either: The disks were not inserted early enough in the boot process. Leave them in, and try restarting the computer. The BIOS changes did not work correctly. Redo that step until the right option is selected. That particular BIOS does not support booting from the desired media. &os; will start to boot. If booting from CD/DVD, messages will be displayed, similar to these: Booting from CD-Rom... 645MB medium detected CD Loader 1.2 Building the boot loader arguments Looking up /BOOT/LOADER... Found Relocating the loader and the BTX Starting the BTX loader BTX loader 1.00 BTX version is 1.02 Consoles: internal video/keyboard BIOS CD is cd0 BIOS drive C: is disk0 BIOS drive D: is disk1 BIOS 636kB/261056kB available memory FreeBSD/i386 bootstrap loader, Revision 1.1 Loading /boot/defaults/loader.conf /boot/kernel/kernel text=0x64daa0 data=0xa4e80+0xa9e40 syms=[0x4+0x6cac0+0x4+0x88e9d] \ If booting from floppy disc, a display similar to this will be shown: Booting from Floppy... Uncompressing ... done BTX loader 1.00 BTX version is 1.01 Console: internal video/keyboard BIOS drive A: is disk0 BIOS drive C: is disk1 BIOS 639kB/261120kB available memory FreeBSD/i386 bootstrap loader, Revision 1.1 Loading /boot/defaults/loader.conf /kernel text=0x277391 data=0x3268c+0x332a8 | Insert disk labelled "Kernel floppy 1" and press any key... Remove the boot.flp floppy, insert the next floppy, and press Enter. When prompted, insert the other disks as required. The boot process will then display the &os; boot loader menu:
&os; Boot Loader Menu
Either wait ten seconds, or press Enter.
Booting for &sparc64; Most &sparc64; systems are set to boot automatically from disk. To install &os;, boot over the network or from a CD/DVD and wait until the boot message appears. The message depends on the model, but should look similar to: Sun Blade 100 (UltraSPARC-IIe), Keyboard Present Copyright 1998-2001 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.2, 128 MB memory installed, Serial #51090132. Ethernet address 0:3:ba:b:92:d4, Host ID: 830b92d4. If the system proceeds to boot from disk, press L1A or StopA on the keyboard, or send a BREAK over the serial console using ~# in &man.tip.1; or &man.cu.1; to get to the PROM prompt. It looks like this: ok ok {0} This is the prompt used on systems with just one CPU. This is the prompt used on SMP systems and the digit indicates the number of the active CPU. At this point, place the CD/DVD into the drive and from the PROM prompt, type boot cdrom.
Reviewing the Device Probe Results The last few hundred lines that have been displayed on screen are stored and can be reviewed. To review this buffer, press Scroll Lock to turn on scrolling in the display. Use the arrow keys or PageUp and PageDown to view the results. Press Scroll Lock again to stop scrolling. Do this now, to review the text that scrolled off the screen when the kernel was carrying out the device probes. Text similar to will be displayed, although it will differ depending on the devices in the computer.
Typical Device Probe Results avail memory = 253050880 (247120K bytes) Preloaded elf kernel "kernel" at 0xc0817000. Preloaded mfs_root "/mfsroot" at 0xc0817084. md0: Preloaded image </mfsroot> 4423680 bytes at 0xc03ddcd4 md1: Malloc disk Using $PIR table, 4 entries at 0xc00fde60 npx0: <math processor> on motherboard npx0: INT 16 interface pcib0: <Host to PCI bridge> on motherboard pci0: <PCI bus> on pcib0 pcib1:<VIA 82C598MVP (Apollo MVP3) PCI-PCI (AGP) bridge> at device 1.0 on pci0 pci1: <PCI bus> on pcib1 pci1: <Matrox MGA G200 AGP graphics accelerator> at 0.0 irq 11 isab0: <VIA 82C586 PCI-ISA bridge> at device 7.0 on pci0 isa0: <iSA bus> on isab0 atapci0: <VIA 82C586 ATA33 controller> port 0xe000-0xe00f at device 7.1 on pci0 ata0: at 0x1f0 irq 14 on atapci0 ata1: at 0x170 irq 15 on atapci0 uhci0 <VIA 83C572 USB controller> port 0xe400-0xe41f irq 10 at device 7.2 on pci 0 usb0: <VIA 83572 USB controller> on uhci0 usb0: USB revision 1.0 uhub0: VIA UHCI root hub, class 9/0, rev 1.00/1.00, addr1 uhub0: 2 ports with 2 removable, self powered pci0: <unknown card> (vendor=0x1106, dev=0x3040) at 7.3 dc0: <ADMtek AN985 10/100BaseTX> port 0xe800-0xe8ff mem 0xdb000000-0xeb0003ff ir q 11 at device 8.0 on pci0 dc0: Ethernet address: 00:04:5a:74:6b:b5 miibus0: <MII bus> on dc0 ukphy0: <Generic IEEE 802.3u media interface> on miibus0 ukphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto ed0: <NE2000 PCI Ethernet (RealTek 8029)> port 0xec00-0xec1f irq 9 at device 10. 0 on pci0 ed0 address 52:54:05:de:73:1b, type NE2000 (16 bit) isa0: too many dependant configs (8) isa0: unexpected small tag 14 orm0: <Option ROM> at iomem 0xc0000-0xc7fff on isa0 fdc0: <NEC 72065B or clone> at port 0x3f0-0x3f5,0x3f7 irq 6 drq2 on isa0 fdc0: FIFO enabled, 8 bytes threshold fd0: <1440-KB 3.5” drive> on fdc0 drive 0 atkbdc0: <Keyboard controller (i8042)> at port 0x60,0x64 on isa0 atkbd0: <AT Keyboard> flags 0x1 irq1 on atkbdc0 kbd0 at atkbd0 psm0: <PS/2 Mouse> irq 12 on atkbdc0 psm0: model Generic PS/@ mouse, device ID 0 vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 sc0: <System console> at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> sio0 at port 0x3f8-0x3ff irq 4 flags 0x10 on isa0 sio0: type 16550A sio1 at port 0x2f8-0x2ff irq 3 on isa0 sio1: type 16550A ppc0: <Parallel port> at port 0x378-0x37f irq 7 on isa0 pppc0: SMC-like chipset (ECP/EPP/PS2/NIBBLE) in COMPATIBLE mode ppc0: FIFO with 16/16/15 bytes threshold plip0: <PLIP network interface> on ppbus0 ad0: 8063MB <IBM-DHEA-38451> [16383/16/63] at ata0-master UDMA33 acd0: CD-RW <LITE-ON LTR-1210B> at ata1-slave PIO4 Mounting root from ufs:/dev/md0c /stand/sysinstall running as init on vty0
Check the probe results carefully to make sure that &os; found all the devices. If a device was not found, it will not be listed. A custom kernel can be used to add in support for devices which are not in the GENERIC kernel. After the device probe, the menu shown in will be displayed. Use the arrow key to choose a country, region, or group. Then press Enter to set the country.
Selecting Country Menu
If United States is selected as the country, the standard American keyboard map will be used. If a different country is chosen, the following menu will be displayed. Use the arrow keys to choose the correct keyboard map and press Enter.
Selecting Keyboard Menu
After the country selection, the &man.sysinstall.8; main menu will display.
Introducing &man.sysinstall.8; The &os; 8.X installer, &man.sysinstall.8;, is console based and is divided into a number of menus and screens that can be used to configure and control the installation process. This menu system is controlled by the arrow keys, Enter, Tab, Space, and other keys. To view a detailed description of these keys and what they do, ensure that the Usage entry is highlighted and that the [Select] button is selected, as shown in , then press Enter. The instructions for using the menu system will be displayed. After reviewing them, press Enter to return to the Main Menu.
Selecting Usage from Sysinstall Main Menu
Selecting the Documentation Menu From the Main Menu, select Doc with the arrow keys and press Enter.
Selecting Documentation Menu
This will display the Documentation Menu.
Sysinstall Documentation Menu
It is important to read the documents provided. To view a document, select it with the arrow keys and press Enter. When finished reading a document, press Enter to return to the Documentation Menu. To return to the Main Installation Menu, select Exit with the arrow keys and press Enter.
Selecting the Keymap Menu To change the keyboard mapping, use the arrow keys to select Keymap from the menu and press Enter. This is only required when using a non-standard or non-US keyboard.
Sysinstall Main Menu
A different keyboard mapping may be chosen by selecting the menu item using the up and down arrow keys and pressing Space. Pressing Space again will unselect the item. When finished, choose the &gui.ok; using the arrow keys and press Enter. Only a partial list is shown in this screen representation. Selecting &gui.cancel; by pressing Tab will use the default keymap and return to the Main Install Menu.
Sysinstall Keymap Menu
Installation Options Screen Select Options and press Enter.
Sysinstall Main Menu
Sysinstall Options
The default values are usually fine for most users and do not need to be changed. The release name will vary according to the version being installed. The description of the selected item will appear at the bottom of the screen highlighted in blue. Notice that one of the options is Use Defaults to reset all values to startup defaults. Press F1 to read the help screen about the various options. Press Q to return to the Main Install menu.
Begin a Standard Installation The Standard installation is the option recommended for those new to &unix; or &os;. Use the arrow keys to select Standard and then press Enter to start the installation.
Begin Standard Installation
Allocating Disk Space The first task is to allocate disk space for &os;, and label that space so that &man.sysinstall.8; can prepare it. In order to do this you need to know how &os; expects to find information on the disk. BIOS Drive Numbering Before installing and configuring &os; it is important to be aware of how &os; deals with BIOS drive mappings. MS-DOS Microsoft Windows In a PC running a BIOS-dependent operating system such as &microsoft.windows;, the BIOS is able to abstract the normal disk drive order and the operating system goes along with the change. This allows the user to boot from a disk drive other than the "primary master". This is especially convenient for users who buy an identical second hard drive and perform routine copies of the first drive to the second drive. If the first drive fails, is attacked by a virus, or is scribbled upon by an operating system defect, they can easily recover by instructing the BIOS to logically swap the drives. It is like switching the cables on the drives, without having to open the case. SCSI BIOS Systems with SCSI controllers often include BIOS extensions which allow the SCSI drives to be re-ordered in a similar fashion for up to seven drives. A user who is accustomed to taking advantage of these features may be surprised when the results with &os; are not as expected. &os; does not use the BIOS, and does not know the logical BIOS drive mapping. This can lead to perplexing situations, especially when drives are physically identical in geometry and have been made as data clones of one another. When using &os;, always restore the BIOS to natural drive numbering before installing &os;, and then leave it that way. If drives need to be switched around, take the time to open the case and move the jumpers and cables. An Illustration from the Files of Bill and Fred's Exceptional Adventures: Bill breaks down an older Wintel box to make another &os; box for Fred. Bill installs a single SCSI drive as SCSI unit zero and installs &os; on it. Fred begins using the system, but after several days notices that the older SCSI drive is reporting numerous errors. To address the situation, Bill grabs an identical SCSI drive and installs this drive as SCSI unit four and makes an image copy from drive zero to drive four. Now that the new drive is installed and functioning, Bill decides to start using it, so he uses features in the SCSI BIOS to re-order the disk drives so that the system boots from SCSI unit four. &os; boots and runs just fine. Fred continues his work and soon decides that it is time to upgrade to a newer version of &os;. Bill removes SCSI unit zero because it was a bit flaky and replaces it with another identical disk drive. Bill then installs the new version of &os; onto the new SCSI unit zero and the installation goes well. Fred uses the new version of &os; for a few days, and certifies that it is good enough for use in the engineering department. It is time to copy all of his work from the old version, so Fred mounts SCSI unit four which should contain the latest copy of the older &os; version. Fred is dismayed to find that none of his work is present on SCSI unit four. It turns out that when Bill made an image copy of the original SCSI unit zero onto SCSI unit four, unit four became the new clone. When Bill re-ordered the SCSI BIOS so that he could boot from SCSI unit four, &os; was still running on SCSI unit zero. Making this kind of BIOS change causes some or all of the boot and loader code to be fetched from the selected BIOS drive.
But when the &os; kernel drivers take over, the BIOS drive numbering is ignored, and &os; transitions back to normal drive numbering. In this example, the system continued to operate on the original SCSI unit zero, and all of Fred's data was there, not on SCSI unit four. The fact that the system appeared to be running on SCSI unit four was simply an artifact of human expectations. Fortunately, the older SCSI unit zero was retrieved and all of Fred's work was restored. Although SCSI drives were used in this illustration, the concepts apply equally to IDE drives. Creating Slices Using FDisk After choosing to begin a standard installation in &man.sysinstall.8;, this message will appear: Message In the next menu, you will need to set up a DOS-style ("fdisk") partitioning scheme for your hard disk. If you simply wish to devote all disk space to FreeBSD (overwriting anything else that might be on the disk(s) selected) then use the (A)ll command to select the default partitioning scheme followed by a (Q)uit. If you wish to allocate only free space to FreeBSD, move to a partition marked "unused" and use the (C)reate command. [ OK ] [ Press enter or space ] Press Enter and a list of all the hard drives that the kernel found when it carried out the device probes will be displayed. shows an example from a system with two IDE disks called ad0 and ad2.
Select Drive for FDisk
Note that ad1 is not listed here. Consider two IDE hard disks where one is the master on the first IDE controller and one is the master on the second IDE controller. If &os; numbered these as ad0 and ad1, everything would work. But if a third disk is later added as the slave device on the first IDE controller, it would now be ad1, and the previous ad1 would become ad2. Because device names are used to find filesystems, some filesystems may no longer appear correctly, requiring a change to the &os; configuration. To work around this, the kernel can be configured to name IDE disks based on where they are and not the order in which they were found. With this scheme, the master disk on the second IDE controller will always be ad2, even if there are no ad0 or ad1 devices. This configuration is the default for the &os; kernel, which is why the display in this example shows ad0 and ad2. The machine on which this screenshot was taken had IDE disks on both master channels of the IDE controllers and no disks on the slave channels. Select the disk on which to install &os;, and then press &gui.ok;. FDisk will start, with a display similar to that shown in . The FDisk display is broken into three sections. The first section, covering the first two lines of the display, shows details about the currently selected disk, including its &os; name, the disk geometry, and the total size of the disk. The second section shows the slices that are currently on the disk, where they start and end, how large they are, the name &os; gives them, and their description and sub-type. This example shows two small unused slices which are artifacts of disk layout schemes on the PC. It also shows one large FAT slice, which appears as C: in &windows;, and an extended slice, which may contain other drive letters in &windows;. The third section shows the commands that are available in FDisk.
Typical Default <application>FDisk</application> Partitions
This step varies, depending on how the disk is to be sliced. To install &os; to the entire disk, which will delete all the other data on this disk, press A, which corresponds to the Use Entire Disk option. The existing slices will be removed and replaced with a small area flagged as unused and one large slice for &os;. Then, select the newly created &os; slice using the arrow keys and press S to mark the slice as being bootable. The screen will then look similar to . Note the A in the Flags column, which indicates that this slice is active, and will be booted from. If an existing slice needs to be deleted to make space for &os;, select the slice using the arrow keys and press D. Then, press C to be prompted for the size of the slice to create. Enter the appropriate value and press Enter. The default value in this box represents the largest possible slice to make, which could be the largest contiguous block of unallocated space or the size of the entire hard disk. If you have already made space for &os; then you can press C to create a new slice. Again, you will be prompted for the size of slice you would like to create.
Fdisk Partition Using Entire Disk
When finished, press Q. Any changes will be saved in &man.sysinstall.8;, but will not yet be written to disk.
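Once the system is installed and running, the slice table written here can be checked at any time with &man.fdisk.8;. As a minimal sketch, assuming the first IDE disk is ad0:
&prompt.root; fdisk ad0
Run with only a disk name, &man.fdisk.8; prints the slice table for that disk, which is a convenient way to confirm that the layout created during the installation was written as expected.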
Install a Boot Manager The next menu provides the option to install a boot manager. In general, install the &os; boot manager if: There is more than one drive and &os; will be installed onto a drive other than the first one. &os; will be installed alongside another operating system on the same disk, and you want to choose whether to start &os; or the other operating system when the computer starts. If &os; is going to be the only operating system on this machine, installed on the first hard disk, then the Standard boot manager will suffice. Choose None if using a third-party boot manager capable of booting &os;. Make a selection and press Enter.
Sysinstall Boot Manager Menu
The help screen, reached by pressing F1, discusses the problems that can be encountered when trying to share the hard disk between operating systems.
Creating Slices on Another Drive If there is more than one drive, it will return to the Select Drives screen after the boot manager selection. To install &os; on to more than one disk, select another disk and repeat the slice process using FDisk. If installing &os; on a drive other than the first drive, the &os; boot manager needs to be installed on both drives.
Exit Select Drive
Use Tab to toggle between the last drive selected, &gui.ok;, and &gui.cancel;. Press Tab once to toggle to &gui.ok;, then press Enter to continue with the installation.
Creating Partitions Using <application>Disklabel</application> Next, create some partitions inside each slice. Remember that each partition is lettered, from a through to h, and that partitions b, c, and d have conventional meanings that should be adhered to. Certain applications can benefit from particular partition schemes, especially when laying out partitions across more than one disk. However, for a first &os; installation, do not give too much thought to how to partition the disk. It is more important to install &os; and start learning how to use it. You can always re-install &os; to change the partition scheme after becoming more familiar with the operating system. The following scheme features four partitions: one for swap space and three for filesystems. Partition Layout for First Disk Partition Filesystem Size Description a / 1 GB This is the root filesystem. Every other filesystem will be mounted somewhere under this one. 1 GB is a reasonable size for this filesystem as user files should not be stored here and a regular &os; install will put about 128 MB of data here. b N/A 2-3 x RAM The system's swap space is kept on the b partition. Choosing the right amount of swap space can be a bit of an art. A good rule of thumb is that swap space should be two or three times as much as the available physical memory (RAM). There should be at least 64 MB of swap, so if there is less than 32 MB of RAM in the computer, set the swap amount to 64 MB. If there is more than one disk, swap space can be put on each disk. &os; will then use each disk for swap, which effectively speeds up the act of swapping. In this case, calculate the total amount of swap needed and divide this by the number of disks to give the amount of swap to put on each disk. e /var 512 MB to 4096 MB /var contains files that are constantly varying, such as log files and other administrative files. Many of these files are read from or written to extensively during &os;'s day-to-day running. Putting these files on another filesystem allows &os; to optimize the access of these files without affecting other files in other directories that do not have the same access pattern. f /usr Rest of disk (at least 8 GB) All other files will typically be stored in /usr and its subdirectories.
The values above are given as an example and should be used by experienced users only. Users are encouraged to use the automatic partition layout called Auto Defaults by the &os; partition editor. If installing &os; onto more than one disk, create partitions in the other configured slices. The easiest way to do this is to create two partitions on each disk, one for the swap space, and one for a filesystem. Partition Layout for Subsequent Disks Partition Filesystem Size Description b N/A See description Swap space can be split across each disk. Even though the a partition is free, convention dictates that swap space stays on the b partition. e /diskn Rest of disk The rest of the disk is taken up with one big partition. This could easily be put on the a partition, instead of the e partition. However, convention says that the a partition on a slice is reserved for the filesystem that will be the root (/) filesystem. Following this convention is not necessary, but &man.sysinstall.8; uses it, so following it makes the installation slightly cleaner. This filesystem can be mounted anywhere; this example mounts it as /diskn, where n is a number that changes for each disk.
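To make the layout above more concrete, the following is a rough sketch of the /etc/fstab entries that would correspond to the first-disk example, assuming the disk is ad0 and the &os; slice is ad0s1; &man.sysinstall.8; writes this file automatically, so the exact entries on a real system may differ:
/dev/ad0s1b   none   swap   sw   0   0
/dev/ad0s1a   /      ufs    rw   1   1
/dev/ad0s1e   /var   ufs    rw   2   2
/dev/ad0s1f   /usr   ufs    rw   2   2
Each line gives the partition device, mount point, filesystem type, mount options, and the dump and &man.fsck.8; pass numbers, as described in &man.fstab.5;.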
Having chosen the partition layout, create it using &man.sysinstall.8;. Message Now, you need to create BSD partitions inside of the fdisk partition(s) just created. If you have a reasonable amount of disk space (1GB or more) and don't have any special requirements, simply use the (A)uto command to allocate space automatically. If you have more specific needs or just don't care for the layout chosen by (A)uto, press F1 for more information on manual layout. [ OK ] [ Press enter or space ] Press Enter to start the &os; partition editor, called Disklabel. shows the display when Disklabel starts. The display is divided into three sections. The first few lines show the name of the disk being worked on and the slice that contains the partitions to create. At this point, Disklabel calls this the Partition name rather than slice name. This display also shows the amount of free space within the slice; that is, space that was set aside in the slice, but that has not yet been assigned to a partition. The middle of the display shows the partitions that have been created, the name of the filesystem that each partition contains, their size, and some options pertaining to the creation of the filesystem. The bottom third of the screen shows the keystrokes that are valid in Disklabel.
Sysinstall Disklabel Editor
Disklabel can automatically create partitions and assign them default sizes. The default sizes are calculated with the help of an internal partition sizing algorithm based on the disk size. Press A to see a display similar to that shown in . Depending on the size of the disk, the defaults may or may not be appropriate. The default partitioning assigns /tmp its own partition instead of being part of the / partition. This helps avoid filling the / partition with temporary files.
Sysinstall Disklabel Editor with Auto Defaults
To replace the default partitions, use the arrow keys to select the first partition and press D to delete it. Repeat this to delete all the suggested partitions. To create the first partition, a, mounted as /, make sure the proper disk slice at the top of the screen is selected and press C. A dialog box will appear, prompting for the size of the new partition, as shown in . The size can be entered as the number of disk blocks to use or as a number followed by either M for megabytes, G for gigabytes, or C for cylinders.
Free Space for Root Partition
The default size shown will create a partition that takes up the rest of the slice. If using the partition sizes described in the earlier example, delete the existing figure using Backspace, and then type in 512M, as shown in . Then press &gui.ok;.
Edit Root Partition Size
After choosing the partition's size, the installer will ask whether this partition will contain a filesystem or swap space. The dialog box is shown in . This first partition will contain a filesystem, so check that FS is selected and press Enter.
Choose the Root Partition Type
Finally, tell Disklabel where the filesystem will be mounted. The dialog box is shown in . Type /, and then press Enter.
Choose the Root Mount Point
The display will then update to show the newly created partition. Repeat this procedure for the other partitions. When creating the swap partition, it will not prompt for the filesystem mount point. When creating the final partition, /usr, leave the suggested size as is to use the rest of the slice. The final &os; DiskLabel Editor screen will appear similar to , although the values chosen may be different. Press Q to finish.
Sysinstall Disklabel Editor
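After the first boot, the partition layout created with Disklabel can be reviewed from the command line with &man.bsdlabel.8;. A minimal example, assuming the &os; slice is ad0s1:
&prompt.root; bsdlabel ad0s1
This prints the offset, size, and filesystem type of each partition within the slice, which should match what was entered in the Disklabel editor.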
Choosing What to Install Select the Distribution Set Deciding which distribution set to install will depend largely on the intended use of the system and the amount of disk space available. The predefined options range from installing the smallest possible configuration to everything. Those who are new to &unix; or &os; should select one of these canned options. Customizing a distribution set is typically for the more experienced user. Press F1 for more information on the distribution set options and what they contain. When finished reviewing the help, press Enter to return to the Select Distributions Menu. If a graphical user interface is desired, the configuration of &xorg; and selection of a default desktop must be done after the installation of &os;. More information regarding the installation and configuration of &xorg; can be found in . If compiling a custom kernel is anticipated, select an option which includes the source code. For more information on why a custom kernel should be built or how to build a custom kernel, see . The most versatile system is one that includes everything. If there is adequate disk space, select All, as shown in , by using the arrow keys and pressing Enter. If there is a concern about disk space, consider using an option that is more suitable for the situation. Do not fret over the perfect choice, as other distributions can be added after installation.
Choose Distributions
Installing the Ports Collection After selecting the desired distribution, an opportunity to install the &os; Ports Collection is presented. The Ports Collection is an easy and convenient way to install software as it provides a collection of files that automate the downloading, compiling, and installation of third-party software packages. discusses how to use the Ports Collection. The installation program does not check to see if you have adequate space. Select this option only if you have adequate hard disk space. As of &os; &rel.current;, the &os; Ports Collection takes up about &ports.size; of disk space. You can safely assume a larger value for more recent versions of &os;. User Confirmation Requested Would you like to install the FreeBSD ports collection? This will give you ready access to over &os.numports; ported software packages, at a cost of around &ports.size; of disk space when "clean" and possibly much more than that if a lot of the distribution tarballs are loaded (unless you have the extra CDs from a FreeBSD CD/DVD distribution available and can mount it on /cdrom, in which case this is far less of a problem). The Ports Collection is a very valuable resource and well worth having on your /usr partition, so it is advisable to say Yes to this option. For more information on the Ports Collection & the latest ports, visit: http://www.FreeBSD.org/ports [ Yes ] No Select &gui.yes; with the arrow keys to install the Ports Collection or &gui.no; to skip this option. Press Enter to continue. The Choose Distributions menu will redisplay.
Confirm Distributions
Once satisfied with the options, select Exit with the arrow keys, ensure that &gui.ok; is highlighted, and press Enter to continue.
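If the Ports Collection was selected above, third-party software can be built from it once the system is running. As an illustrative example only, assuming the ports tree is installed in the default /usr/ports location, the shells/bash port could be built and installed with:
&prompt.root; cd /usr/ports/shells/bash
&prompt.root; make install clean
The ports framework automatically fetches the source, builds it, and installs any dependencies; the build can take some time on slower machines.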
Choosing the Installation Media If installing from a CD/DVD, use the arrow keys to highlight Install from a &os; CD/DVD. Ensure that &gui.ok; is highlighted, then press Enter to proceed with the installation. For other methods of installation, select the appropriate option and follow the instructions. Press F1 to display the Online Help for installation media. Press Enter to return to the media selection menu.
Choose Installation Media
FTP Installation Modes installation network FTP There are three FTP installation modes to choose from: active FTP, passive FTP, or via a HTTP proxy. FTP Active: Install from an FTP server This option makes all FTP transfers use Active mode. This will not work through firewalls, but will often work with older FTP servers that do not support passive mode. If the connection hangs with passive mode (the default), try using active mode. FTP Passive: Install from an FTP server through a firewall This option instructs &man.sysinstall.8; to use passive mode FTP passive mode for all FTP operations. This allows the user to pass through firewalls that do not allow incoming connections on random TCP ports. FTP via a HTTP proxy: Install from an FTP server through a http proxy This option instructs &man.sysinstall.8; to use the HTTP protocol to connect to a proxy for all FTP operations. The proxy will translate the requests and send them to the FTP server. This allows the user to pass through firewalls that do not allow FTP, but offer a HTTP proxy FTP via a HTTP proxy . In this case, specify the proxy in addition to the FTP server. For a proxy FTP server, give the name of the server as part of the username, after an @ sign. The proxy server then fakes the real server. For example, to install from ftp.FreeBSD.org, using the proxy FTP server foo.example.com, listening on port 1234, go to the options menu, set the FTP username to ftp@ftp.FreeBSD.org and the password to an email address. As the installation media, specify FTP (or passive FTP, if the proxy supports it), and the URL ftp://foo.example.com:1234/pub/FreeBSD. Since /pub/FreeBSD from ftp.FreeBSD.org is proxied under foo.example.com, the proxy will fetch the files from ftp.FreeBSD.org as the installer requests them.
Committing to the Installation The installation can now proceed if desired. This is also the last chance for aborting the installation to prevent changes to the hard drive. User Confirmation Requested Last Chance! Are you SURE you want to continue the installation? If you're running this on a disk with data you wish to save then WE STRONGLY ENCOURAGE YOU TO MAKE PROPER BACKUPS before proceeding! We can take no responsibility for lost disk contents! [ Yes ] No Select &gui.yes; and press Enter to proceed. The installation time will vary according to the distribution chosen, installation media, and the speed of the computer. There will be a series of messages displayed, indicating the status. The installation is complete when the following message is displayed: Message Congratulations! You now have FreeBSD installed on your system. We will now move on to the final configuration questions. For any option you do not wish to configure, simply select No. If you wish to re-enter this utility after the system is up, you may do so by typing: /usr/sbin/sysinstall. [ OK ] [ Press enter or space ] Press Enter to proceed with post-installation configurations. Selecting &gui.no; and pressing Enter will abort the installation so no changes will be made to the system. The following message will appear: Message Installation complete with some errors. You may wish to scroll through the debugging messages on VTY1 with the scroll-lock feature. You can also choose "No" at the next prompt and go back into the installation menus to retry whichever operations have failed. [ OK ] This message is generated because nothing was installed. Pressing Enter will return to the Main Installation Menu to exit the installation. Post-installation Configuration of various options can be performed after a successful installation. An option can be configured by re-entering the configuration menus before booting the new &os; system or after boot using &man.sysinstall.8; and then selecting the Configure menu. Network Device Configuration If PPP was previously configured for an FTP install, this screen will not display and can be configured after boot as described above. For detailed information on Local Area Networks and configuring &os; as a gateway/router refer to the Advanced Networking chapter. User Confirmation Requested Would you like to configure any Ethernet or PPP network devices? [ Yes ] No To configure a network device, select &gui.yes; and press Enter. Otherwise, select &gui.no; to continue.
Selecting an Ethernet Device
Select the interface to be configured with the arrow keys and press Enter. User Confirmation Requested Do you want to try IPv6 configuration of the interface? Yes [ No ] In this private local area network, the current Internet type protocol (IPv4) was sufficient and &gui.no; was selected with the arrow keys and Enter pressed. If connected to an existing IPv6 network with an RA server, choose &gui.yes; and press Enter. It will take several seconds to scan for RA servers. User Confirmation Requested Do you want to try DHCP configuration of the interface? Yes [ No ] If Dynamic Host Configuration Protocol (DHCP) is not required, select &gui.no; with the arrow keys and press Enter. Selecting &gui.yes; will execute &man.dhclient.8; and, if successful, will fill in the network configuration information automatically. Refer to for more information. The following Network Configuration screen shows the configuration of the Ethernet device for a system that will act as the gateway for a Local Area Network.
Set Network Configuration for <replaceable>ed0</replaceable>
Use Tab to select the information fields and fill in appropriate information: Host The fully-qualified hostname, such as k6-2.example.com in this case. Domain The name of the domain that the machine is in, such as example.com for this case. IPv4 Gateway IP address of host forwarding packets to non-local destinations. This must be filled in if the machine is a node on the network. Leave this field blank if the machine is the gateway to the Internet for the network. The IPv4 Gateway is also known as the default gateway or default route. Name server IP address of the local DNS server. There is no local DNS server on this private local area network so the IP address of the provider's DNS server (208.163.10.2) was used. IPv4 address The IP address to be used for this interface was 192.168.0.1 Netmask The address block being used for this local area network is 192.168.0.0 - 192.168.0.255 with a netmask of 255.255.255.0. Extra options to &man.ifconfig.8; Any additional interface-specific options to &man.ifconfig.8;. There were none in this case. Use Tab to select &gui.ok; when finished and press Enter. User Confirmation Requested Would you like to bring the ed0 interface up right now? [ Yes ] No Choosing &gui.yes; and pressing Enter will bring the machine up on the network so it is ready for use. However, this does not accomplish much during installation, since the machine still needs to be rebooted.
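These values are ultimately stored in /etc/rc.conf so that they are applied on every boot. As a rough sketch only, the gateway configuration shown in this example for ed0 would correspond to lines along these lines; the exact set written by &man.sysinstall.8; may differ:
hostname="k6-2.example.com"
ifconfig_ed0="inet 192.168.0.1 netmask 255.255.255.0"
On a machine that is not itself the gateway, a defaultrouter="..." entry would also be present, and the name server address is written to /etc/resolv.conf rather than to /etc/rc.conf. These entries can be edited by hand later and the interface reconfigured with &man.ifconfig.8; or by rebooting.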
Configure Gateway User Confirmation Requested Do you want this machine to function as a network gateway? [ Yes ] No If the machine will be acting as the gateway for a local area network and forwarding packets between other machines, select &gui.yes; and press Enter. If the machine is a node on a network, select &gui.no; and press Enter to continue. Configure Internet Services User Confirmation Requested Do you want to configure inetd and the network services that it provides? Yes [ No ] If &gui.no; is selected, various services will not be enabled. These services can be enabled after installation by editing /etc/inetd.conf with a text editor. See for more information. Otherwise, select &gui.yes; to configure these services during install. An additional confirmation will display: User Confirmation Requested The Internet Super Server (inetd) allows a number of simple Internet services to be enabled, including finger, ftp and telnetd. Enabling these services may increase risk of security problems by increasing the exposure of your system. With this in mind, do you wish to enable inetd? [ Yes ] No Select &gui.yes; to continue. User Confirmation Requested inetd(8) relies on its configuration file, /etc/inetd.conf, to determine which of its Internet services will be available. The default FreeBSD inetd.conf(5) leaves all services disabled by default, so they must be specifically enabled in the configuration file before they will function, even once inetd(8) is enabled. Note that services for IPv6 must be separately enabled from IPv4 services. Select [Yes] now to invoke an editor on /etc/inetd.conf, or [No] to use the current settings. [ Yes ] No Selecting &gui.yes; allows services to be enabled by deleting the # at the beginning of the lines representing those services.
Editing <filename>inetd.conf</filename>
Once the edits are complete, press Esc to display a menu which will exit the editor and save the changes.
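For reference, each service in /etc/inetd.conf occupies a single line, and enabling a service is simply a matter of removing the leading #. As a sketch, the stock ftpd entry looks roughly like this before and after being enabled; the exact fields can vary between &os; versions:
#ftp    stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l
The comments at the top of /etc/inetd.conf and &man.inetd.8; describe the meaning of each field.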
Enabling SSH Login SSH sshd User Confirmation Requested Would you like to enable SSH login? Yes [ No ] Selecting &gui.yes; will enable &man.sshd.8;, the daemon for OpenSSH. This allows secure remote access to the machine. For more information about OpenSSH, see . Anonymous FTP FTP anonymous User Confirmation Requested Do you want to have anonymous FTP access to this machine? Yes [ No ] Deny Anonymous FTP Selecting the default &gui.no; and pressing Enter will still allow users who have accounts with passwords to use FTP to access the machine. Allow Anonymous FTP Anyone can access the machine if anonymous FTP connections are allowed. The security implications should be considered before enabling this option. For more information about security, see . To allow anonymous FTP, use the arrow keys to select &gui.yes; and press Enter. An additional confirmation will display: User Confirmation Requested Anonymous FTP permits un-authenticated users to connect to the system FTP server, if FTP service is enabled. Anonymous users are restricted to a specific subset of the file system, and the default configuration provides a drop-box incoming directory to which uploads are permitted. You must separately enable both inetd(8), and enable ftpd(8) in inetd.conf(5) for FTP services to be available. If you did not do so earlier, you will have the opportunity to enable inetd(8) again later. If you want the server to be read-only you should leave the upload directory option empty and add the -r command-line option to ftpd(8) in inetd.conf(5) Do you wish to continue configuring anonymous FTP? [ Yes ] No This message indicates that the FTP service will also have to be enabled in /etc/inetd.conf to allow anonymous FTP connections. Select &gui.yes; and press Enter to continue. The following screen will display:
Default Anonymous FTP Configuration
Use Tab to select the information fields and fill in appropriate information: UID The user ID to assign to the anonymous FTP user. All files uploaded will be owned by this ID. Group Which group to place the anonymous FTP user into. Comment String describing this user in /etc/passwd. FTP Root Directory Where files available for anonymous FTP will be kept. Upload Subdirectory Where files uploaded by anonymous FTP users will go. The FTP root directory will be put in /var by default. If there is not enough room there for the anticipated FTP needs, use /usr instead by setting the FTP root directory to /usr/ftp. Once satisfied with the values, press Enter to continue. User Confirmation Requested Create a welcome message file for anonymous FTP users? [ Yes ] No If &gui.yes; is selected, press Enter and the &man.ee.1; editor will automatically start.
Edit the FTP Welcome Message
Use the instructions to change the message. Note the file name location at the bottom of the editor screen. Press Esc and a pop-up menu will default to a) leave editor. Press Enter to exit and continue. Press Enter again to save any changes.
Configure the Network File System The Network File System (NFS) allows sharing of files across a network. A machine can be configured as a server, a client, or both. Refer to for more information. NFS Server User Confirmation Requested Do you want to configure this machine as an NFS server? Yes [ No ] If there is no need for a NFS server, select &gui.no; and press Enter. If &gui.yes; is chosen, a message will pop-up indicating that /etc/exports must be created. Message Operating as an NFS server means that you must first configure an /etc/exports file to indicate which hosts are allowed certain kinds of access to your local filesystems. Press [Enter] now to invoke an editor on /etc/exports [ OK ] Press Enter to continue. A text editor will start, allowing /etc/exports to be edited.
Editing <filename>exports</filename>
Use the instructions to add the exported filesystems. Note the file name location at the bottom of the editor screen. Press Esc and a pop-up menu will default to a) leave editor. Press Enter to exit and continue.
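As a small sketch of the format, each line in /etc/exports names a local filesystem, optional export options, and the hosts allowed to mount it. For example, using placeholder hostnames:
/usr/home   -ro   client1.example.com client2.example.com
This would export /usr/home read-only to the two named clients. The full syntax, including options such as -maproot and -alldirs, is documented in &man.exports.5;.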
<acronym>NFS</acronym> Client The NFS client allows the machine to access NFS servers. User Confirmation Requested Do you want to configure this machine as an NFS client? Yes [ No ] With the arrow keys, select &gui.yes; or &gui.no; as appropriate and press Enter.
System Console Settings There are several options available to customize the system console. User Confirmation Requested Would you like to customize your system console settings? [ Yes ] No To view and configure the options, select &gui.yes; and press Enter.
System Console Configuration Options
A commonly used option is the screen saver. Use the arrow keys to select Saver and then press Enter.
Screen Saver Options
Select the desired screen saver using the arrow keys and then press Enter. The System Console Configuration menu will redisplay. The default time interval is 300 seconds. To change the time interval, select Saver again. At the Screen Saver Options menu, select Timeout using the arrow keys and press Enter. A pop-up menu will appear:
Screen Saver Timeout
The value can be changed, then select &gui.ok; and press Enter to return to the System Console Configuration menu.
System Console Configuration Exit
Select Exit and press Enter to continue with the post-installation configuration.
Setting the Time Zone Setting the time zone allows the system to automatically correct for any regional time changes and perform other time zone related functions properly. The example shown is for a machine located in the Eastern time zone of the United States. The selections will vary according to the geographic location. User Confirmation Requested Would you like to set this machine's time zone now? [ Yes ] No Select &gui.yes; and press Enter to set the time zone. User Confirmation Requested Is this machine's CMOS clock set to UTC? If it is set to local time or you don't know, please choose NO here! Yes [ No ] Select &gui.yes; or &gui.no; according to how the machine's clock is configured, then press Enter.
Select the Region
The appropriate region is selected using the arrow keys and then pressing Enter.
Select the Country
Select the appropriate country using the arrow keys and press Enter.
Select the Time Zone
The appropriate time zone is selected using the arrow keys and pressing Enter. Confirmation Does the abbreviation 'EDT' look reasonable? [ Yes ] No Confirm that the abbreviation for the time zone is correct. If it looks okay, press Enter to continue with the post-installation configuration.
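The time zone can also be changed at any point after the installation. A minimal example, run as root:
&prompt.root; tzsetup
&man.tzsetup.8; walks through the same region, country, and time zone menus shown here and updates /etc/localtime accordingly.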
Mouse Settings This option allows cut and paste in the console and user programs using a 3-button mouse. If using a 2-button mouse, refer to &man.moused.8; for details on emulating the 3-button style. This example depicts a non-USB mouse configuration: User Confirmation Requested Does this system have a PS/2, serial, or bus mouse? [ Yes ] No Select &gui.yes; for a PS/2, serial, or bus mouse, or &gui.no; for a USB mouse, then press Enter.
Select Mouse Protocol Type
Use the arrow keys to select Type and press Enter.
Set Mouse Protocol
The mouse used in this example is a PS/2 type, so the default Auto is appropriate. To change the mouse protocol, use the arrow keys to select another option. Ensure that &gui.ok; is highlighted and press Enter to exit this menu.
Configure Mouse Port
Use the arrow keys to select Port and press Enter.
Setting the Mouse Port
This system had a PS/2 mouse, so the default PS/2 is appropriate. To change the port, use the arrow keys and then press Enter.
Enable the Mouse Daemon
Last, use the arrow keys to select Enable, and press Enter to enable and test the mouse daemon.
Test the Mouse Daemon
Move the mouse around the screen to verify that the cursor responds properly. If it does, select &gui.yes; and press Enter. If not, the mouse has not been configured correctly. Select &gui.no; and try using different configuration options. Select Exit with the arrow keys and press Enter to continue with the post-installation configuration.
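The mouse daemon settings chosen in these menus are stored in /etc/rc.conf. As a sketch, a typical PS/2 configuration corresponds to entries similar to the following; the exact values depend on the hardware:
moused_enable="YES"
moused_type="auto"
moused_port="/dev/psm0"
After editing these by hand, the daemon can be restarted with /etc/rc.d/moused restart. The supported protocol types are listed in &man.moused.8;.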
Install Packages Packages are pre-compiled binaries and are a convenient way to install software. Installation of one package is shown for purposes of illustration. Additional packages can also be added at this time if desired. After installation, &man.sysinstall.8; can be used to add additional packages. User Confirmation Requested The FreeBSD package collection is a collection of hundreds of ready-to-run applications, from text editors to games to WEB servers and more. Would you like to browse the collection now? [ Yes ] No Select &gui.yes; and press Enter to be presented with the Package Selection screens:
Select Package Category
Only packages on the current installation media are available for installation at any given time. All packages available will be displayed if All is selected. Otherwise, select a particular category. Highlight the selection with the arrow keys and press Enter. A menu will display showing all the packages available for the selection made:
Select Packages
The bash shell is shown as selected. Select as many packages as desired by highlighting the package and pressing Space. A short description of each package will appear in the lower left corner of the screen. Press Tab to toggle between the last selected package, &gui.ok;, and &gui.cancel;. Once finished marking the packages for installation, press Tab once to toggle to &gui.ok; and press Enter to return to the Package Selection menu. The left and right arrow keys will also toggle between &gui.ok; and &gui.cancel;. This method can also be used to select &gui.ok; and press Enter to return to the Package Selection menu.
Install Packages
Use the Tab and arrow keys to select [ Install ] and press Enter to see the installation confirmation message:
Confirm Package Installation
Select &gui.ok; and press Enter to start the package installation. Installation messages will appear until all of the installations have completed. Make note if there are any error messages. The final configuration continues after packages are installed. If no packages are selected, select Install to return to the final configuration.
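Packages can also be added at any time after the installation. On &os; 8.X, for example, the bash package selected above could instead be fetched and installed over the network with:
&prompt.root; pkg_add -r bash
&man.pkg.add.1; downloads the package built for the running release from the &os; package servers and installs it along with any required dependencies.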
Add Users/Groups Add at least one user during the installation so that the system can be used without logging in as root. The root partition is generally small and running applications as root can quickly fill it. A bigger danger is noted below: User Confirmation Requested Would you like to add any initial user accounts to the system? Adding at least one account for yourself at this stage is suggested since working as the "root" user is dangerous (it is easy to do things which adversely affect the entire system). [ Yes ] No Select &gui.yes; and press Enter to continue with adding a user.
Select User
Select User with the arrow keys and press Enter.
Add User Information
The following descriptions will appear in the lower part of the screen as the items are selected with Tab to assist with entering the required information: Login ID The login name of the new user (mandatory). UID The numerical ID for this user (leave blank for automatic choice). Group The login group name for this user (leave blank for automatic choice). Password The password for this user (enter this field with care!). Full name The user's full name (comment). Member groups The groups this user belongs to. Home directory The user's home directory (leave blank for default). Login shell The user's login shell (leave blank for default of /bin/sh). In this example, the login shell was changed from /bin/sh to /usr/local/bin/bash to use the bash shell that was previously installed as a package. Do not use a shell that does not exist or the user will not be able to login. The most common shell used in &os; is the C shell, /bin/tcsh. The user was also added to the wheel group to be able to become a superuser with root privileges. Once satisfied, press &gui.ok; and the User and Group Management menu will redisplay:
Exit User and Group Management
Groups can also be added at this time. Otherwise, this menu may be accessed using &man.sysinstall.8; at a later time. When finished adding users, select Exit with the arrow keys and press Enter to continue the installation.
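Accounts can also be created from the command line after the installation. As an illustration only, the following would add a hypothetical user jru with a home directory, membership in wheel, and the bash shell installed earlier:
&prompt.root; pw useradd jru -m -G wheel -s /usr/local/bin/bash
&prompt.root; passwd jru
&man.pw.8; is the non-interactive tool; the &man.adduser.8; script prompts for the same information as this menu.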
Set the <systemitem class="username">root</systemitem> Password Message Now you must set the system manager's password. This is the password you'll use to log in as "root". [ OK ] [ Press enter or space ] Press Enter to set the root password. The password will need to be typed in twice correctly. Do not forget this password. Notice that the typed password is not echoed, nor are asterisks displayed. New password: Retype new password : The installation will continue after the password is successfully entered. Exiting Install A message will ask if configuration is complete: User Confirmation Requested Visit the general configuration menu for a chance to set any last options? Yes [ No ] Select &gui.no; with the arrow keys and press Enter to return to the Main Installation Menu.
Exit Install
Select [X Exit Install] with the arrow keys and press Enter. The installer will prompt to confirm exiting the installation: User Confirmation Requested Are you sure you wish to exit? The system will reboot. [ Yes ] No Select &gui.yes;. If booting from the CDROM drive, the following message will remind you to remove the disk: Message Be sure to remove the media from the drive. [ OK ] [ Press enter or space ] The CDROM drive is locked until the machine starts to reboot, then the disk can quickly be removed from the drive. Press &gui.ok; to reboot. The system will reboot so watch for any error messages that may appear, see for more details.
Configure Additional Network Services TomRhodesContributed by Configuring network services can be a daunting task for users that lack previous knowledge in this area. Since networking and the Internet are critical to all modern operating systems, it is useful to have some understanding of &os;'s extensive networking capabilities. Network services are programs that accept input from anywhere on the network. Since there have been cases where bugs in network services have been exploited by attackers, it is important to only enable needed network services. If in doubt, do not enable a network service until it is needed. Services can be enabled with &man.sysinstall.8; or by editing /etc/rc.conf. Selecting the Networking option will display a menu similar to the one below:
Network Configuration Upper-level
The first option, Interfaces, is covered in . Selecting the AMD option adds support for &man.amd.8;. This is usually used in conjunction with NFS for automatically mounting remote filesystems. Next is the AMD Flags option. When selected, a menu will pop up where specific AMD flags can be entered. The menu already contains a set of default options: -a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map The -a option sets the default mount location, which is specified here as /.amd_mnt. The -l option specifies the default log file; however, when &man.syslogd.8; is used, all log activity will be sent to the system log daemon. /host is used to mount an exported file system from a remote host, while /net is used to mount an exported filesystem from an IP address. The default options for AMD exports are defined in /etc/amd.map. FTP anonymous The Anon FTP option permits anonymous FTP connections. Select this option to make this machine an anonymous FTP server. Be aware of the security risks involved with this option. Another menu will be displayed to explain the security risks and configuration in depth. The Gateway menu will configure the machine to be a gateway. This menu can also be used to unset the Gateway option if it was accidentally selected during installation. The Inetd option can be used to configure or completely disable &man.inetd.8;. The Mail option is used to configure the system's default Mail Transfer Agent (MTA). Selecting this option will bring up the following menu:
Select a Default MTA
This menu offers a choice as to which MTA to install and set as the default. An MTA is a mail server which delivers email to users on the system or the Internet. Select Sendmail to install Sendmail as the default MTA. Select Sendmail local to set Sendmail as the default MTA, but disable its ability to receive incoming email from the Internet. The other options, Postfix and Exim, provide alternatives to Sendmail. The next menu after the MTA menu is NFS client. This menu is used to configure the system to communicate with a NFS server which in turn is used to make filesystems available to other machines on the network over the NFS protocol. See for more information about client and server configuration. Below that option is the NFS server option, for setting the system up as an NFS server. This adds the required information to start up the Remote Procedure Call RPC services. RPC is used to coordinate connections between hosts and programs. Next in line is the Ntpdate option, which deals with time synchronization. When selected, a menu like the one below shows up:
Ntpdate Configuration
From this menu, select the server which is geographically closest. This will make the time synchronization more accurate as a farther server may have more connection latency. The next option is the PCNFSD selection. This option will install the net/pcnfsd package from the Ports Collection. This is a useful utility which provides NFS authentication services for systems which are unable to provide their own, such as Microsoft's &ms-dos; operating system. Now, scroll down a bit to see the other options:
Network Configuration Lower-level
RPC communication between NFS servers and clients is managed by &man.rpcbind.8; which is required for NFS servers to operate correctly. Status monitoring is provided by &man.rpc.statd.8; and the reported status is usually held in /var/db/statd.status. The next option is for &man.rpc.lockd.8; which provides file locking services. This is usually used with &man.rpc.statd.8; to monitor which hosts are requesting locks and how frequently they request them. While these last two options are useful for debugging, they are not required for NFS servers and clients to operate correctly. The next menu, Routed, configures the routing daemon. &man.routed.8;, manages network routing tables, discovers multicast routers, and supplies a copy of the routing tables to any physically connected host on the network upon request. This is mainly used for machines which act as a gateway for the local network. If selected, a menu will request the default location of the utility. To accept the default location, press Enter. Yet another menu will ask for the flags to pass to &man.routed.8;. The default of should appear on the screen. The next menu, Rwhod, starts &man.rwhod.8; during system initialization. This utility broadcasts system messages across the network periodically, or collects them when in consumer mode. More information can be found in &man.ruptime.1; and &man.rwho.1;. The next to last option in the list is for &man.sshd.8;, the secure shell server for OpenSSH. It is highly recommended over the standard &man.telnetd.8; and &man.ftpd.8; servers as it is used to create a secure, encrypted connection from one host to another. The final option is TCP Extensions which are defined in RFC 1323 and RFC 1644. While on many hosts this can speed up connections, it can also cause some connections to be dropped. It is not recommended for servers, but may be beneficial for stand alone machines. Once the network services are configured, scroll up to the very top item which is X Exit and continue on to the next configuration item or simply exit &man.sysinstall.8; by selecting X Exit twice then [X Exit Install].
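The choices made in this menu are recorded as variables in /etc/rc.conf, so they can also be changed later with a text editor. As a rough sketch, enabling the SSH server, the NFS server, and boot-time time synchronization would result in entries along these lines; the variable names come from &man.rc.conf.5;, and the exact set written by &man.sysinstall.8; may differ (the time server name here is only a placeholder):
sshd_enable="YES"
nfs_server_enable="YES"
rpcbind_enable="YES"
ntpdate_enable="YES"
ntpdate_hosts="pool.ntp.org"
Editing these variables and rebooting, or running the corresponding scripts under /etc/rc.d, has the same effect as toggling the menu entries.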
&os; Bootup &os;/&arch.i386; Bootup If everything went well, messages will scroll along the screen and a login prompt will appear. To view these messages, press Scroll-Lock then use PgUp and PgDn. Press Scroll-Lock again to return to the prompt. All of the messages may not display due to buffer limitations, but they can be read after logging in using &man.dmesg.8;. Login using the username and password which were set during installation. Avoid logging in as root except when necessary. Typical boot messages (version information omitted): Copyright (c) 1992-2002 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. Timecounter "i8254" frequency 1193182 Hz CPU: AMD-K6(tm) 3D processor (300.68-MHz 586-class CPU) Origin = "AuthenticAMD" Id = 0x580 Stepping = 0 Features=0x8001bf<FPU,VME,DE,PSE,TSC,MSR,MCE,CX8,MMX> AMD Features=0x80000800<SYSCALL,3DNow!> real memory = 268435456 (262144K bytes) config> di sn0 config> di lnc0 config> di le0 config> di ie0 config> di fe0 config> di cs0 config> di bt0 config> di aic0 config> di aha0 config> di adv0 config> q avail memory = 256311296 (250304K bytes) Preloaded elf kernel "kernel" at 0xc0491000. Preloaded userconfig_script "/boot/kernel.conf" at 0xc049109c. md0: Malloc disk Using $PIR table, 4 entries at 0xc00fde60 npx0: <math processor> on motherboard npx0: INT 16 interface pcib0: <Host to PCI bridge> on motherboard pci0: <PCI bus> on pcib0 pcib1: <VIA 82C598MVP (Apollo MVP3) PCI-PCI (AGP) bridge> at device 1.0 on pci0 pci1: <PCI bus> on pcib1 pci1: <Matrox MGA G200 AGP graphics accelerator> at 0.0 irq 11 isab0: <VIA 82C586 PCI-ISA bridge> at device 7.0 on pci0 isa0: <ISA bus> on isab0 atapci0: <VIA 82C586 ATA33 controller> port 0xe000-0xe00f at device 7.1 on pci0 ata0: at 0x1f0 irq 14 on atapci0 ata1: at 0x170 irq 15 on atapci0 uhci0: <VIA 83C572 USB controller> port 0xe400-0xe41f irq 10 at device 7.2 on pci0 usb0: <VIA 83C572 USB controller> on uhci0 usb0: USB revision 1.0 uhub0: VIA UHCI root hub, class 9/0, rev 1.00/1.00, addr 1 uhub0: 2 ports with 2 removable, self powered chip1: <VIA 82C586B ACPI interface> at device 7.3 on pci0 ed0: <NE2000 PCI Ethernet (RealTek 8029)> port 0xe800-0xe81f irq 9 at device 10.0 on pci0 ed0: address 52:54:05:de:73:1b, type NE2000 (16 bit) isa0: too many dependant configs (8) isa0: unexpected small tag 14 fdc0: <NEC 72065B or clone> at port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on isa0 fdc0: FIFO enabled, 8 bytes threshold fd0: <1440-KB 3.5" drive> on fdc0 drive 0 atkbdc0: <Keyboard controller (i8042)> at port 0x60-0x64 on isa0 atkbd0: <AT Keyboard> flags 0x1 irq 1 on atkbdc0 kbd0 at atkbd0 psm0: <PS/2 Mouse> irq 12 on atkbdc0 psm0: model Generic PS/2 mouse, device ID 0 vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 sc0: <System console> at flags 0x1 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> sio0 at port 0x3f8-0x3ff irq 4 flags 0x10 on isa0 sio0: type 16550A sio1 at port 0x2f8-0x2ff irq 3 on isa0 sio1: type 16550A ppc0: <Parallel port> at port 0x378-0x37f irq 7 on isa0 ppc0: SMC-like chipset (ECP/EPP/PS2/NIBBLE) in COMPATIBLE mode ppc0: FIFO with 16/16/15 bytes threshold ppbus0: IEEE1284 device found /NIBBLE Probing for PnP devices on ppbus0: plip0: <PLIP network interface> on ppbus0 lpt0: <Printer> on ppbus0 lpt0: Interrupt-driven port ppi0: <Parallel I/O> on ppbus0 ad0: 8063MB <IBM-DHEA-38451> [16383/16/63] at ata0-master using UDMA33 ad2: 8063MB <IBM-DHEA-38451> [16383/16/63] at 
ata1-master using UDMA33 acd0: CDROM <DELTA OTC-H101/ST3 F/W by OIPD> at ata0-slave using PIO4 Mounting root from ufs:/dev/ad0s1a swapon: adding /dev/ad0s1b as swap device Automatic boot in progress... /dev/ad0s1a: FILESYSTEM CLEAN; SKIPPING CHECKS /dev/ad0s1a: clean, 48752 free (552 frags, 6025 blocks, 0.9% fragmentation) /dev/ad0s1f: FILESYSTEM CLEAN; SKIPPING CHECKS /dev/ad0s1f: clean, 128997 free (21 frags, 16122 blocks, 0.0% fragmentation) /dev/ad0s1g: FILESYSTEM CLEAN; SKIPPING CHECKS /dev/ad0s1g: clean, 3036299 free (43175 frags, 374073 blocks, 1.3% fragmentation) /dev/ad0s1e: filesystem CLEAN; SKIPPING CHECKS /dev/ad0s1e: clean, 128193 free (17 frags, 16022 blocks, 0.0% fragmentation) Doing initial network setup: hostname. ed0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 inet6 fe80::5054::5ff::fede:731b%ed0 prefixlen 64 tentative scopeid 0x1 ether 52:54:05:de:73:1b lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8 inet6 ::1 prefixlen 128 inet 127.0.0.1 netmask 0xff000000 Additional routing options: IP gateway=YES TCP keepalive=YES routing daemons:. additional daemons: syslogd. Doing additional network setup:. Starting final network daemons: creating ssh RSA host key Generating public/private rsa1 key pair. Your identification has been saved in /etc/ssh/ssh_host_key. Your public key has been saved in /etc/ssh/ssh_host_key.pub. The key fingerprint is: cd:76:89:16:69:0e:d0:6e:f8:66:d0:07:26:3c:7e:2d root@k6-2.example.com creating ssh DSA host key Generating public/private dsa key pair. Your identification has been saved in /etc/ssh/ssh_host_dsa_key. Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub. The key fingerprint is: f9:a1:a9:47:c4:ad:f9:8d:52:b8:b8:ff:8c:ad:2d:e6 root@k6-2.example.com. setting ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout starting standard daemons: inetd cron sshd usbd sendmail. Initial rc.i386 initialization:. rc.i386 configuring syscons: blank_time screensaver moused. Additional ABI support: linux. Local package initialization:. Additional TCP options:. FreeBSD/i386 (k6-2.example.com) (ttyv0) login: rpratt Password: Generating the RSA and DSA keys may take some time on slower machines. This happens only on the initial boot-up of a new installation. Subsequent boots will be faster. If &xorg; has been configured and a default desktop chosen, it can be started by typing startx at the command line. &os; Shutdown It is important to properly shutdown the operating system. Do not just turn off the power. First, become the superuser using &man.su.1; and entering the root password. This will work only if the user is a member of wheel. Otherwise, login as root. To shutdown the system, type shutdown -h now. The operating system has halted. Please press any key to reboot. It is safe to turn off the power after the shutdown command has been issued and the message Please press any key to reboot appears. If any key is pressed instead of turning off the power switch, the system will reboot. The Ctrl Alt Del key combination can also be used to reboot the system; however, this is not recommended.
Troubleshooting installation troubleshooting This section covers basic installation troubleshooting of common problems. There are also a few questions and answers for people wishing to dual-boot &os; with &windows;. If Something Goes Wrong Due to various limitations of the PC architecture, it is impossible for device probing to be 100% reliable. However, there are a few things to try if it fails. Check the Hardware Notes document for the version of &os; to make sure the hardware is supported. If the hardware is supported but still experiences lock-ups or other problems, build a custom kernel to add in support for devices which are not present in the GENERIC kernel. The default kernel assumes that most hardware devices are in their factory default configuration in terms of IRQs, I/O addresses, and DMA channels. If the hardware has been reconfigured, create a custom kernel configuration file and recompile to tell &os; where to find things. It is also possible that a probe for a device not present will cause a later probe for another device that is present to fail. In that case, the probes for the conflicting driver(s) should be disabled. Some installation problems can be avoided or alleviated by updating the firmware on various hardware components, most notably the motherboard BIOS. Most motherboard and computer manufacturers have a website where upgrade information may be located. Most manufacturers strongly advise against upgrading the motherboard BIOS unless there is a good reason for doing so, such as a critical update. The upgrade process can go wrong, causing permanent damage to the BIOS chip. Using &windows; Filesystems At this time, &os; does not support file systems compressed with the Double Space™ application. Therefore the file system will need to be uncompressed before &os; can access the data. This can be done by running the Compression Agent located in the Start> Programs > System Tools menu. &os; can support &ms-dos; file systems (sometimes called FAT file systems). The &man.mount.msdosfs.8; command grafts such file systems onto the existing directory hierarchy, allowing the file system's contents to be accessed. The &man.mount.msdosfs.8; program is not usually invoked directly; instead, it is called by the system through a line in /etc/fstab or by using &man.mount.8; with the appropriate parameters. A typical line in /etc/fstab is: /dev/ad0sN /dos msdosfs rw 0 0 /dos must already exist for this to work. For details about the format of /etc/fstab, see &man.fstab.5;. A typical call to &man.mount.8; for a FAT filesystem looks like: &prompt.root; mount -t msdosfs /dev/ad0s1 /mnt In this example, the FAT filesystem is located on the first partition of the primary hard disk. The output from &man.dmesg.8; and &man.mount.8; should produce enough information to give an idea of the partition layout. &os; may number FAT partitions differently than other operating systems. In particular, extended partitions are usually given higher slice numbers than primary partitions. Use &man.fdisk.8; to help determine which slices belong to &os; and which belong to other operating systems. Troubleshooting Questions and Answers My system hangs while probing hardware during boot or it behaves strangely during install. &os; makes extensive use of the system ACPI service on the i386, amd64, and ia64 platforms to aid in system configuration if it is detected during boot. Unfortunately, some bugs still exist in the ACPI driver and various system motherboards. 
The use of ACPI can be disabled by setting hint.acpi.0.disabled in the third stage boot loader: set hint.acpi.0.disabled="1" This is reset each time the system is booted, so it is necessary to add hint.acpi.0.disabled="1" to /boot/loader.conf to make this change permanent. More information about the boot loader can be found in . When booting from the hard disk for the first time after installing &os;, the kernel loads and probes hardware, but stops with messages like: changing root device to ad1s1a panic: cannot mount root What is wrong? This can occur when the boot disk is not the first disk in the system. The BIOS uses a different numbering scheme to &os;, and working out which numbers correspond to which is difficult to get right. If this occurs, tell &os; where the root filesystem is by specifying the BIOS disk number, the disk type, and the &os; disk number for that type. Consider two IDE disks, each configured as the master on their respective IDE bus, where &os; should be booted from the second disk. The BIOS sees these as disk 0 and disk 1, while &os; sees them as ad0 and ad2. If &os; is on BIOS disk 1, of type ad and the &os; disk number is 2, this is the correct value: 1:ad(2,a)kernel Note that if there is a slave on the primary bus, the above is not necessary and is effectively wrong. The second situation involves booting from a SCSI disk when there are one or more IDE disks in the system. In this case, the &os; disk number is lower than the BIOS disk number. For two IDE disks and a SCSI disk, where the SCSI disk is BIOS disk 2, type da, and &os; disk number 0, the correct value is: 2:da(0,a)kernel This tells &os; to boot from BIOS disk 2, which is the first SCSI disk in the system. If there is only one IDE disk, use 1: instead. Once the correct value to use is determined, put the command in /boot.config using a text editor. Unless instructed otherwise, &os; will use the contents of this file as the default response to the boot: prompt. When booting from the hard disk for the first time after installing &os;, the Boot Manager prompt just prints F? at the boot menu and the boot will not go any further. The hard disk geometry was set incorrectly in the partition editor when &os; was installed. Go back into the partition editor and specify the actual geometry of the hard disk. &os; must be reinstalled from the beginning with the correct geometry. For a dedicated &os; system that does not need future compatibility with another operating system, use the entire disk by selecting A in the installer's partition editor. The system finds the &man.ed.4; network card but continuously displays device timeout errors. The card is probably on a different IRQ from what is specified in /boot/device.hints. The &man.ed.4; driver does not use software configuration by default, but it will if -1 is specified in the hints for the interface. Either move the jumper on the card to the configuration setting or specify the IRQ as -1 by setting the hint hint.ed.0.irq="-1". This tells the kernel to use the software configuration. Another possibility is that the card is at IRQ 9, which is shared by IRQ 2 and frequently a cause of problems, especially if a VGA card is using IRQ 2. Do not use IRQ 2 or 9 if at all possible. When &man.sysinstall.8; is used in an &xorg; terminal, the yellow font is difficult to read against the light gray background. Is there a way to provide higher contrast for this application? 
If the default colors chosen by &man.sysinstall.8; make text illegible while using x11/xterm or x11/rxvt, add the following to ~/.Xdefaults to get a darker background gray: XTerm*color7: #c0c0c0 Advanced Installation Guide Valentino Vaschetto Contributed by Marc Fonvieille Updated by This section describes how to install &os; in exceptional cases. Installing &os; on a System Without a Monitor or Keyboard installation headless (serial console) serial console This type of installation is called a headless install because the machine to be installed does not have either an attached monitor or a VGA output. This type of installation is possible using a serial console, another machine which acts as the main display and keyboard. To do this, follow the steps to create an installation USB stick, explained in , or download the correct installation ISO image as described in . To modify the installation media to boot into a serial console, follow these steps. If using CD/DVD media, skip the first step: Enabling the Installation USB Stick to Boot into a Serial Console &man.mount.8; By default, booting into the USB stick boots into the installer. To instead boot into a serial console, mount the USB disk onto a &os; system using &man.mount.8;: &prompt.root; mount /dev/da0a /mnt Adapt the device node and the mount point to the situation. Once the USB stick is mounted, set it to boot into a serial console. Add this line to /boot/loader.conf on the USB stick: &prompt.root; echo 'console="comconsole"' >> /mnt/boot/loader.conf Now that the USB stick is configured correctly, unmount the disk using &man.umount.8;: &prompt.root; umount /mnt Now, unplug the USB stick and jump directly to the third step of this procedure. Enabling the Installation CD/DVD to Boot into a Serial Console &man.mount.8; By default, when booting into the installation CD/DVD, &os; boots into its normal install mode. To instead boot into a serial console, extract, modify, and regenerate the ISO image before burning it to the CD/DVD media. From the &os; system with the saved installation ISO image, use &man.tar.1; to extract all the files: &prompt.root; mkdir /path/to/headless-iso &prompt.root; tar -C /path/to/headless-iso -pxvf &os;-&rel.current;-RELEASE-i386-disc1.iso Next, set the installation media to boot into a serial console. Add this line to the /boot/loader.conf of the extracted ISO image: &prompt.root; echo 'console="comconsole"' >> /path/to/headless-iso/boot/loader.conf Then, create a new ISO image from the modified tree. This example uses &man.mkisofs.8; from the sysutils/cdrtools package or port: &prompt.root; mkisofs -v -b boot/cdboot -no-emul-boot -r -J -V "Headless_install" \ -o Headless-&os;-&rel2.current;-RELEASE-i386-disc1.iso /path/to/headless-iso Now that the ISO image is configured correctly, burn it to CD/DVD media using a burning application. Connecting the Null-modem Cable null-modem cable Connect a null-modem cable to the serial ports of the two machines. A normal serial cable will not work. A null-modem cable is required. Booting Up for the Install It is now time to go ahead and start the install. Plug in the USB stick or insert the CD/DVD media in the headless install machine and power it on. Connecting to the Headless Machine &man.cu.1; Next, connect to that machine with &man.cu.1;: &prompt.root; cu -l /dev/cuau0 The headless machine can now be controlled using &man.cu.1;. It will load the kernel and then display a selection of which type of terminal to use. 
Select the &os; color console and proceed with the installation. Preparing Custom Installation Media Some situations may require a customized &os; installation media and/or source. This might be physical media or a source that &man.sysinstall.8; can use to retrieve the installation files. Some example situations include: A local network with many machines has a private FTP server hosting the &os; installation files which the machines should use for installation. &os; does not recognize the CD/DVD drive but &windows; does. In this case, copy the &os; installation files to a &windows; partition on the same computer, and then install &os; using those files. The computer to install does not have a CD/DVD drive or a network card, but can be connected using a null-printer cable to a computer that does. A tape will be used to install &os;. Creating an Installation ISO As part of each release, the &os; Project provides ISO images for each supported architecture. These images can be written (burned) to CD or DVD media using a burning application, and then used to install &os;. If a CD/DVD writer is available, this is the easiest way to install &os;. Download the Correct ISO Images The ISO images for each release can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/ISO-IMAGES-arch/version or the closest mirror. Substitute arch and version as appropriate. An image directory normally contains the following images: &os; ISO Image Names and Meanings Filename Contents &os;-version-RELEASE-arch-bootonly.iso This CD image starts the installation process by booting from a CD-ROM drive but it does not contain the support for installing &os; from the CD itself. Perform a network based install, such as from an FTP server, after booting from this CD. &os;-version-RELEASE-arch-dvd1.iso.gz This DVD image contains everything necessary to install the base &os; operating system, a collection of pre-built packages, and the documentation. It also supports booting into a livefs based rescue mode. &os;-version-RELEASE-arch-memstick.img This image can be written to a USB memory stick in order to install machines capable of booting from USB drives. It also supports booting into a livefs based rescue mode. The only included package is the documentation package. &os;-version-RELEASE-arch-disc1.iso This image can be written to a USB memory stick in order to install machines capable of booting from USB drives. Similar to the bootonly.iso image, it does not contain the distribution sets on the medium itself, but does support network-based installations (for example, via ftp). &os;-version-RELEASE-arch-disc1.iso This CD image contains the base &os; operating system and the documentation package but no other packages. &os;-version-RELEASE-arch-disc2.iso A CD image with as many third-party packages as would fit on the disc. This image is not available for &os; 9.X. &os;-version-RELEASE-arch-disc3.iso Another CD image with as many third-party packages as would fit on the disc. This image is not available for &os; 9.X. &os;-version-RELEASE-arch-livefs.iso This CD image contains support for booting into a livefs based rescue mode but does not support doing an install from the CD itself.
When performing a CD installation, download either the bootonly ISO image or disc1. Do not download both, since disc1 contains everything that the bootonly ISO image contains. Use the bootonly ISO to perform a network install over the Internet. Additional software can be installed as needed using the Ports Collection as described in . Use dvd1 to install &os; and a selection of third-party packages from the disc.
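As a sketch of the download step, the image can be fetched directly from the directory layout described above and its checksum compared against the CHECKSUM.SHA256 file normally published alongside it; the i386 architecture and the bootonly image are used here purely as an illustration, so substitute the architecture, version, and image name that match the planned installation: &prompt.root; fetch ftp://ftp.FreeBSD.org/pub/FreeBSD/ISO-IMAGES-i386/&rel2.current;/&os;-&rel2.current;-RELEASE-i386-bootonly.iso &prompt.root; sha256 &os;-&rel2.current;-RELEASE-i386-bootonly.iso If the computed digest does not match the published one, download the image again before writing it to media.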
Burn the Media Next, write the downloaded image(s) to disc. If using another &os; system, refer to for instructions. If using another platform, use any burning utility that exists for that platform. The images are in the standard ISO format which most CD writing applications support.
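The memstick image is written to a USB stick rather than burned. The following is a minimal sketch assuming the stick appears as /dev/da0 on a &os; system; verify the device node with &man.dmesg.8; first, because &man.dd.1; will overwrite whatever device it is pointed at: &prompt.root; dd if=&os;-&rel2.current;-RELEASE-i386-memstick.img of=/dev/da0 bs=64k Any data already on the USB stick is destroyed by this command.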
To build a customized release of &os;, refer to the Release Engineering Article.
Creating a Local FTP Site with a &os; Disc installation network FTP &os; discs are laid out in the same way as the FTP site. This makes it easy to create a local FTP site that can be used by other machines on a network to install &os;. On the &os; computer that will host the FTP site, ensure that the CD/DVD is in the drive and mounted: &prompt.root; mount /cdrom Create an account for anonymous FTP. Use &man.vipw.8; to insert this line: ftp:*:99:99::0:0:FTP:/cdrom:/nonexistent Ensure that the FTP service is enabled in /etc/inetd.conf. Anyone with network connectivity to the machine can now choose a media type of FTP and type in ftp://your machine after picking Other in the FTP sites menu during the install. If the boot media for the FTP clients is not precisely the same version as that provided by the local FTP site, &man.sysinstall.8; will not complete the installation. To override this, go into the Options menu and change the distribution name to any. This approach is acceptable for a machine on the local network which is protected by a firewall. Offering anonymous FTP services to other machines over the Internet exposes the computer to increased security risks. It is strongly recommended to follow good security practices when providing services over the Internet. Installing from a &windows; Partition installation from &windows; To prepare for an installation from a &windows; partition, copy the files from the distribution into a directory in the root directory of the partition, such as c:\freebsd. Since the directory structure must be reproduced, it is recommended to use robocopy when copying from a CD/DVD. For example, to prepare for a minimal installation of &os;: C:\> md c:\freebsd C:\> robocopy e:\bin c:\freebsd\bin\ /s C:\> robocopy e:\manpages c:\freebsd\manpages\ /s This example assumes that C: has enough free space and E: is where the CD/DVD is mounted. Alternatively, download the distribution from ftp.FreeBSD.org. Each distribution is in its own directory; for example, the base distribution can be found in the &rel2.current;/base/ directory. Copy the distributions to install from a &windows; partition to c:\freebsd. Both the base and kernel distributions are needed for the most minimal installation. Before Installing over a Network installation network serial (PPP) installation network parallel (PLIP) installation network Ethernet There are three types of network installations available: Ethernet, PPP, and PLIP. For the fastest possible network installation, use an Ethernet adapter. &os; supports most common Ethernet cards. A list of supported cards is provided in the Hardware Notes for each release of &os;. If using a supported PCMCIA Ethernet card, be sure that it is plugged in before the system is powered on as &os; does not support hot insertion of PCMCIA cards during installation. Make note of the system's IP address, subnet mask, hostname, default gateway address, and DNS server addresses if these values are statically assigned. If installing by FTP through an HTTP proxy, make note of the proxy's address. If you do not know these values, ask the system administrator or ISP before trying this type of installation. If using a dialup modem, have the service provider's PPP information handy as it is needed early in the installation process. If PAP or CHAP are used to connect to the ISP without using a script, type dial at the &os; ppp prompt. 
Otherwise, know how to dial the ISP using the AT commands specific to the modem, as the PPP dialer provides only a simple terminal emulator. Refer to and &url.books.faq;/ppp.html for further information. Logging can be directed to the screen using set log local .... If a hard-wired connection to another &os; machine is available, the installation can occur over a null-printer parallel port cable. The data rate over the parallel port is higher than what is typically possible over a serial line. Before Installing via NFS installation network NFS To perform an NFS installation, copy the needed &os; distribution files to an NFS server and then point the installer's NFS media selection to it. If the server supports only a privileged port, set the option NFS Secure in the Options menu so that the installation can proceed. If using a poor quality Ethernet card which suffers from slow transfer rates, toggle the NFS Slow flag to on. In order for an NFS installation to work, the server must support subdir mounts. For example, if the &os; &rel.current; distribution lives on ziggy:/usr/archive/stuff/FreeBSD, ziggy will have to allow the direct mounting of /usr/archive/stuff/FreeBSD, not just /usr or /usr/archive/stuff. In &os;, this is controlled by using -alldirs in /etc/exports. Other NFS servers may have different conventions. If the server is displaying permission denied messages, it is likely that this is not enabled properly.
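To make the subdir mount requirement concrete, here is a sketch of the matching configuration on the NFS server from the example above; the read-only flag and the export of a single directory are assumptions, and the path must point at the actual location of the distribution files. The line in /etc/exports on ziggy would look like: /usr/archive/stuff/FreeBSD -alldirs -ro After editing /etc/exports, have &man.mountd.8; reread the file: &prompt.root; service mountd reload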
Index: head/en_US.ISO8859-1/books/handbook/jails/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/jails/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/jails/chapter.xml (revision 46272) @@ -1,1625 +1,1625 @@ Jails Matteo Riondato Contributed by jails Synopsis Since system administration is a difficult task, many tools have been developed to make life easier for the administrator. These tools often enhance the way systems are installed, configured, and maintained. One of the tools which can be used to enhance the security of a &os; system is jails. Jails have been available since &os; 4.X and continue to be enhanced in their usefulness, performance, reliability, and security. Jails build upon the &man.chroot.2; concept, which is used to change the root directory of a set of processes, creating a safe environment, separate from the rest of the system. Processes created in the chrooted environment cannot access files or resources outside of it. For that reason, compromising a service running in a chrooted environment should not allow the attacker to compromise the entire system. However, a chroot has several limitations. It is suited to easy tasks which do not require much flexibility or complex, advanced features. Over time many ways have been found to escape from a chrooted environment, making it a less than ideal solution for securing services. Jails improve on the concept of the traditional chroot environment in several ways. In a traditional chroot environment, processes are only limited in the part of the file system they can access. The rest of the system resources, system users, running processes, and the networking subsystem are shared by the chrooted processes and the processes of the host system. Jails expand this model by virtualizing access to the file system, the set of users, and the networking subsystem. More fine-grained controls are available for tuning the access of a jailed environment. Jails can be considered as a type of operating system-level virtualization. A jail is characterized by four elements: A directory subtree: the starting point from which a jail is entered. Once inside the jail, a process is not permitted to escape outside of this subtree. A hostname: which will be used by the jail. An IP address: which is assigned to the jail. The IP address of a jail is often an alias address for an existing network interface. A command: the path name of an executable to run inside the jail. The path is relative to the root directory of the jail environment. Jails have their own set of users and their own root account which are limited to the jail environment. The root account of a jail is not allowed to perform operations on the system outside of the associated jail environment. This chapter provides an overview of jail terminology and how to use &os; jails. Jails are a powerful tool for system administrators, but their basic usage can also be useful for advanced users. After reading this chapter, you will know: What a jail is and what purpose it may serve in &os; installations. How to build, start, and stop a jail. The basics of jail administration, both from inside and outside the jail. Jails are a powerful tool, but they are not a security panacea. 
While it is not possible for a jailed process to break out on its own, there are several ways in which an unprivileged user outside the jail can cooperate with a privileged user inside the jail to obtain elevated privileges in the host environment. Most of these attacks can be mitigated by ensuring that the jail root is not accessible to unprivileged users in the host environment. As a general rule, untrusted users with privileged access to a jail should not be given access to the host environment. Terms Related to Jails To facilitate better understanding of parts of the &os; system related to jails, their internals and the way they interact with the rest of &os;, the following terms are used further in this chapter: &man.chroot.8; (command) Utility, which uses &man.chroot.2; &os; system call to change the root directory of a process and all its descendants. &man.chroot.2; (environment) The environment of processes running in a chroot. This includes resources such as the part of the file system which is visible, user and group IDs which are available, network interfaces and other IPC mechanisms, etc. &man.jail.8; (command) The system administration utility which allows launching of processes within a jail environment. host (system, process, user, etc.) The controlling system of a jail environment. The host system has access to all the hardware resources available, and can control processes both outside of and inside a jail environment. One of the important differences of the host system from a jail is that the limitations which apply to superuser processes inside a jail are not enforced for processes of the host system. hosted (system, process, user, etc.) A process, user or other entity, whose access to resources is restricted by a &os; jail. Creating and Controlling Jails Some administrators divide jails into the following two types: complete jails, which resemble a real &os; system, and service jails, dedicated to one application or service, possibly running with privileges. This is only a conceptual division and the process of building a jail is not affected by it. When creating a complete jail there are two options for the source of the userland: use prebuilt binaries (such as those supplied on an install media) or build from source. To install the userland from installation media, first create the root directory for the jail. This can be done by setting the DESTDIR variable to the proper location. The command to use depends on which shell is being used. When using &man.sh.1;: &prompt.root; export DESTDIR=/here/is/the/jail If csh/tcsh is used, execute this instead: &prompt.root; setenv DESTDIR /here/is/the/jail Mount the install media as covered in &man.mdconfig.8; when using the install ISO: &prompt.root; mount -t cd9660 /dev/`mdconfig -f cdimage.iso` /mnt Extract the binaries from the tarballs on the install media into the declared destination. Minimally, only the base set needs to be extracted, but a complete install can be performed when preferred. 
To install just the base system, run the next command when using &os; 9.x or newer: &prompt.root; tar -xf /mnt/freebsd_install/usr/freebsd_dist/base.txz -C $DESTDIR On &os; 8.x systems, use this command instead: &prompt.root; /mnt/8.4-RELEASE/base/install.sh To install everything but the kernel, choose the command that matches the shell and &os; version in use. When using &man.sh.1; on &os; 9.x and newer, issue this command: &prompt.root; for sets in BASE DOC GAMES PORTS; do (tar -xf /mnt/FREEBSD_INSTALL/USR/FREEBSD_DIST/$sets.TXZ -C $DESTDIR) ; done When using &os; 8.x, run this: &prompt.root; cd /mnt/8.4-RELEASE; for dir in base catpages dict doc games info manpages ports; do (cd $dir; ./install.sh) ; done If csh/tcsh is used on &os; 9.x and newer, execute this command: &prompt.root; foreach sets ( BASE DOC GAMES PORTS ) tar -xf /mnt/FREEBSD_INSTALL/USR/FREEBSD_DIST/$sets.TXZ -C $DESTDIR done On &os; 8.x, run this command: &prompt.root; foreach dir ( base catpages dict doc games info manpages ports ) cd /mnt/8.4-RELEASE/$dir; ./install.sh done The &man.jail.8; manual page explains the procedure for building a jail: &prompt.root; setenv D /here/is/the/jail &prompt.root; mkdir -p $D &prompt.root; cd /usr/src &prompt.root; make buildworld &prompt.root; make installworld DESTDIR=$D &prompt.root; make distribution DESTDIR=$D &prompt.root; mount -t devfs devfs $D/dev Selecting a location for a jail is the best starting point. This is where the jail will physically reside within the file system of the jail's host. A good choice can be /usr/jail/jailname, where jailname is the hostname identifying the jail. The /usr/ file system usually has enough space for the jail file system, which for complete jails is, essentially, a replication of every file present in a default installation of the &os; base system. If you have already rebuilt your userland using make world or make buildworld, you can skip this step and install your existing userland into the new jail. This command will populate the directory subtree chosen as the jail's physical location on the file system with the necessary binaries, libraries, manual pages and so on. The distribution target for make installs every needed configuration file. In simple words, it installs every installable file of /usr/src/etc/ to the /etc directory of the jail environment: $D/etc/. Mounting the &man.devfs.8; file system inside a jail is not required. On the other hand, any, or almost any application requires access to at least one device, depending on the purpose of the given application. It is very important to control access to devices from inside a jail, as improper settings could permit an attacker to do nasty things in the jail. Control over &man.devfs.8; is managed through rulesets which are described in the &man.devfs.8; and &man.devfs.conf.5; manual pages. Once a jail is installed, it can be started by using the &man.jail.8; utility. The &man.jail.8; utility takes four mandatory arguments which are described in the . Other arguments may be specified too, e.g., to run the jailed process with the credentials of a specific user. The argument depends on the type of the jail; for a virtual system, /etc/rc is a good choice, since it will replicate the startup sequence of a real &os; system. For a service jail, it depends on the service or application that will run within the jail. Jails are often started at boot time and the &os; rc mechanism provides an easy way to do this. 
A list of the jails which are enabled to start at boot time should be added to the &man.rc.conf.5; file: jail_enable="YES" # Set to NO to disable starting of any jails jail_list="www" # Space separated list of names of jails Jail names in jail_list should contain alphanumeric characters only. For each jail listed in jail_list, a group of &man.rc.conf.5; settings, which describe the particular jail, should be added: jail_www_rootdir="/usr/jail/www" # jail's root directory jail_www_hostname="www.example.org" # jail's hostname jail_www_ip="192.168.0.10" # jail's IP address jail_www_devfs_enable="YES" # mount devfs in the jail The default startup of jails configured in &man.rc.conf.5;, will run the /etc/rc script of the jail, which assumes the jail is a complete virtual system. For service jails, the default startup command of the jail should be changed, by setting the jail_jailname_exec_start option appropriately. For a full list of available options, please see the &man.rc.conf.5; manual page. &man.service.8; can be used to start or stop a jail by hand, if an entry for it exists in rc.conf: &prompt.root; service jail start www &prompt.root; service jail stop www A clean way to shut down a &man.jail.8; is not available at the moment. This is because commands normally used to accomplish a clean system shutdown cannot be used inside a jail. The best way to shut down a jail is to run the following command from within the jail itself or using the &man.jexec.8; utility from outside the jail: &prompt.root; sh /etc/rc.shutdown More information about this can be found in the &man.jail.8; manual page. Fine Tuning and Administration There are several options which can be set for any jail, and various ways of combining a host &os; system with jails, to produce higher level applications. This section presents: Some of the options available for tuning the behavior and security restrictions implemented by a jail installation. Some of the high-level applications for jail management, which are available through the &os; Ports Collection, and can be used to implement overall jail-based solutions. System Tools for Jail Tuning in &os; Fine tuning of a jail's configuration is mostly done by setting &man.sysctl.8; variables. A special subtree of sysctl exists as a basis for organizing all the relevant options: the security.jail.* hierarchy of &os; kernel options. Here is a list of the main jail-related sysctls, complete with their default value. Names should be self-explanatory, but for more information about them, please refer to the &man.jail.8; and &man.sysctl.8; manual pages. security.jail.set_hostname_allowed: 1 security.jail.socket_unixiproute_only: 1 security.jail.sysvipc_allowed: 0 security.jail.enforce_statfs: 2 security.jail.allow_raw_sockets: 0 security.jail.chflags_allowed: 0 security.jail.jailed: 0 These variables can be used by the system administrator of the host system to add or remove some of the limitations imposed by default on the root user. Note that there are some limitations which cannot be removed. The root user is not allowed to mount or unmount file systems from within a &man.jail.8;. The root inside a jail may not load or unload &man.devfs.8; rulesets, set firewall rules, or do many other administrative tasks which require modifications of in-kernel data, such as setting the securelevel of the kernel. The base system of &os; contains a basic set of tools for viewing information about the active jails, and attaching to a jail to run administrative commands. 
The &man.jls.8; and &man.jexec.8; commands are part of the base &os; system, and can be used to perform the following simple tasks: Print a list of active jails and their corresponding jail identifier (JID), IP address, hostname and path. Attach to a running jail, from its host system, and run a command inside the jail or perform administrative tasks inside the jail itself. This is especially useful when the root user wants to cleanly shut down a jail. The &man.jexec.8; utility can also be used to start a shell in a jail to do administration in it; for example: &prompt.root; jexec 1 tcsh High-Level Administrative Tools in the &os; Ports Collection Among the many third-party utilities for jail administration, one of the most complete and useful is sysutils/ezjail. It is a set of scripts that contribute to &man.jail.8; management. Please refer to the handbook section on ezjail for more information. Keeping Jails Patched and up to Date Jails should be kept up to date from the host operating system, as attempting to patch the userland from within a jail is likely to fail: the default behaviour in FreeBSD is to disallow the use of &man.chflags.1; in a jail, which prevents the replacement of some files. It is possible to change this behavior but it is recommended to use &man.freebsd-update.8; to maintain jails instead. Use -b to specify the path of the jail to be updated. &prompt.root; freebsd-update -b /here/is/the/jail fetch &prompt.root; freebsd-update -b /here/is/the/jail install Updating Multiple Jails Daniel Gerzo Contributed by Simon L. B. Nielsen Based upon an idea presented by Ken Tom And an article written by The management of multiple jails can become problematic because every jail has to be rebuilt from scratch whenever it is upgraded. This can be time consuming and tedious if a lot of jails are created and manually updated. This section demonstrates one method to resolve this issue by safely sharing as much as is possible between jails using read-only &man.mount.nullfs.8; mounts, so that updating is simpler. This makes it more attractive to put single services, such as HTTP, DNS, and SMTP, into individual jails. Additionally, it provides a simple way to add, remove, and upgrade jails. Simpler solutions exist, such as ezjail, which provides an easier method of administering &os; jails but is less versatile than this setup. ezjail is covered in more detail in . The goals of the setup described in this section are: Create a simple and easy to understand jail structure that does not require running a full installworld on each and every jail. Make it easy to add new jails or remove existing ones. Make it easy to update or upgrade existing jails. Make it possible to run a customized &os; branch. Be paranoid about security, reducing as much as possible the possibility of compromise. Save space and inodes, as much as possible. This design relies on a single, read-only master template which is mounted into each jail and one read-write device per jail. A device can be a separate physical disc, a partition, or a vnode backed memory device. This example uses read-write nullfs mounts. The file system layout is as follows: The jails are based under the /home partition. Each jail will be mounted under the /home/j directory. The template for each jail and the read-only partition for all of the jails is /home/j/mroot. A blank directory will be created for each jail under the /home/j directory. Each jail will have a /s directory that will be linked to the read-write portion of the system. 
Each jail will have its own read-write system that is based upon /home/j/skel. The read-write portion of each jail will be created in /home/js. Creating the Template This section describes the steps needed to create the master template. It is recommended to first update the host &os; system to the latest -RELEASE branch using the instructions in . Additionally, this template uses the sysutils/cpdup package or port and portsnap will be used to download the &os; Ports Collection. First, create a directory structure for the read-only file system which will contain the &os; binaries for the jails. Then, change directory to the &os; source tree and install the read-only file system to the jail template: &prompt.root; mkdir /home/j /home/j/mroot &prompt.root; cd /usr/src &prompt.root; make installworld DESTDIR=/home/j/mroot Next, prepare a &os; Ports Collection for the jails as well as a &os; source tree, which is required for mergemaster: &prompt.root; cd /home/j/mroot &prompt.root; mkdir usr/ports &prompt.root; portsnap -p /home/j/mroot/usr/ports fetch extract &prompt.root; cpdup /usr/src /home/j/mroot/usr/src Create a skeleton for the read-write portion of the system: &prompt.root; mkdir /home/j/skel /home/j/skel/home /home/j/skel/usr-X11R6 /home/j/skel/distfiles &prompt.root; mv etc /home/j/skel &prompt.root; mv usr/local /home/j/skel/usr-local &prompt.root; mv tmp /home/j/skel &prompt.root; mv var /home/j/skel &prompt.root; mv root /home/j/skel Use mergemaster to install missing configuration files. Then, remove the extra directories that mergemaster creates: &prompt.root; mergemaster -t /home/j/skel/var/tmp/temproot -D /home/j/skel -i &prompt.root; cd /home/j/skel &prompt.root; rm -R bin boot lib libexec mnt proc rescue sbin sys usr dev Now, symlink the read-write file system to the read-only file system. Ensure that the symlinks are created in the correct s/ locations as the creation of directories in the wrong locations will cause the installation to fail. &prompt.root; cd /home/j/mroot &prompt.root; mkdir s &prompt.root; ln -s s/etc etc &prompt.root; ln -s s/home home &prompt.root; ln -s s/root root &prompt.root; ln -s s/usr-local usr/local &prompt.root; ln -s s/usr-X11R6 usr/X11R6 &prompt.root; ln -s s/distfiles usr/ports/distfiles &prompt.root; ln -s s/tmp tmp &prompt.root; ln -s s/var var As a last step, create a generic /home/j/skel/etc/make.conf containing this line: WRKDIRPREFIX?= /s/portbuild This makes it possible to compile &os; ports inside each jail. Remember that the ports directory is part of the read-only system. The custom path for WRKDIRPREFIX allows builds to be done in the read-write portion of every jail. Creating Jails The jail template can now be used to setup and configure the jails in /etc/rc.conf. This example demonstrates the creation of 3 jails: NS, MAIL and WWW. Add the following lines to /etc/fstab, so that the read-only template for the jails and the read-write space will be available in the respective jails: /home/j/mroot /home/j/ns nullfs ro 0 0 /home/j/mroot /home/j/mail nullfs ro 0 0 /home/j/mroot /home/j/www nullfs ro 0 0 /home/js/ns /home/j/ns/s nullfs rw 0 0 /home/js/mail /home/j/mail/s nullfs rw 0 0 /home/js/www /home/j/www/s nullfs rw 0 0 To prevent fsck from checking nullfs mounts during boot and dump from backing up the read-only nullfs mounts of the jails, the last two columns are both set to 0. 
Configure the jails in /etc/rc.conf: jail_enable="YES" jail_set_hostname_allow="NO" jail_list="ns mail www" jail_ns_hostname="ns.example.org" jail_ns_ip="192.168.3.17" jail_ns_rootdir="/usr/home/j/ns" jail_ns_devfs_enable="YES" jail_mail_hostname="mail.example.org" jail_mail_ip="192.168.3.18" jail_mail_rootdir="/usr/home/j/mail" jail_mail_devfs_enable="YES" jail_www_hostname="www.example.org" jail_www_ip="62.123.43.14" jail_www_rootdir="/usr/home/j/www" jail_www_devfs_enable="YES" The jail_name_rootdir variable is set to /usr/home instead of /home because the physical path of /home on a default &os; installation is /usr/home. The jail_name_rootdir variable must not be set to a path which includes a symbolic link, otherwise the jails will refuse to start. Create the required mount points for the read-only file system of each jail: &prompt.root; mkdir /home/j/ns /home/j/mail /home/j/www Install the read-write template into each jail using sysutils/cpdup: &prompt.root; mkdir /home/js &prompt.root; cpdup /home/j/skel /home/js/ns &prompt.root; cpdup /home/j/skel /home/js/mail &prompt.root; cpdup /home/j/skel /home/js/www In this phase, the jails are built and prepared to run. First, mount the required file systems for each jail, and then start them: &prompt.root; mount -a &prompt.root; service jail start The jails should be running now. To check if they have started correctly, use jls. Its output should be similar to the following: &prompt.root; jls JID IP Address Hostname Path 3 192.168.3.17 ns.example.org /home/j/ns 2 192.168.3.18 mail.example.org /home/j/mail 1 62.123.43.14 www.example.org /home/j/www At this point, it should be possible to log onto each jail, add new users, or configure daemons. The JID column indicates the jail identification number of each running jail. Use the following command to perform administrative tasks in the jail whose JID is 3: &prompt.root; jexec 3 tcsh Upgrading The design of this setup provides an easy way to upgrade existing jails while minimizing their downtime. Also, it provides a way to roll back to the older version should a problem occur. The first step is to upgrade the host system. Then, create a new temporary read-only template in /home/j/mroot2. &prompt.root; mkdir /home/j/mroot2 &prompt.root; cd /usr/src &prompt.root; make installworld DESTDIR=/home/j/mroot2 &prompt.root; cd /home/j/mroot2 &prompt.root; cpdup /usr/src usr/src &prompt.root; mkdir s The installworld creates a few unnecessary directories, which should be removed: &prompt.root; chflags -R 0 var &prompt.root; rm -R etc var root usr/local tmp Recreate the read-write symlinks for the master file system: &prompt.root; ln -s s/etc etc &prompt.root; ln -s s/root root &prompt.root; ln -s s/home home &prompt.root; ln -s ../s/usr-local usr/local &prompt.root; ln -s ../s/usr-X11R6 usr/X11R6 &prompt.root; ln -s s/tmp tmp &prompt.root; ln -s s/var var Next, stop the jails: &prompt.root; service jail stop Unmount the original file systems as the read-write systems are attached to the read-only system (/s): &prompt.root; umount /home/j/ns/s &prompt.root; umount /home/j/ns &prompt.root; umount /home/j/mail/s &prompt.root; umount /home/j/mail &prompt.root; umount /home/j/www/s &prompt.root; umount /home/j/www Move the old read-only file system and replace it with the new one. This will serve as a backup and archive of the old read-only file system should something go wrong. 
The naming convention used here records the date on which a new read-only file system was created. Move the original &os; Ports Collection over to the new file system to save some space and inodes: &prompt.root; cd /home/j &prompt.root; mv mroot mroot.20060601 &prompt.root; mv mroot2 mroot &prompt.root; mv mroot.20060601/usr/ports mroot/usr At this point the new read-only template is ready, so the only remaining task is to remount the file systems and start the jails: &prompt.root; mount -a &prompt.root; service jail start Use jls to check if the jails started correctly. Run mergemaster in each jail to update the configuration files. Managing Jails with ezjail Warren Block Originally contributed by Creating and managing multiple jails can quickly become tedious and error-prone. Dirk Engling's ezjail automates and greatly simplifies many jail tasks. A basejail is created as a template. Additional jails use &man.mount.nullfs.8; to share many of the basejail directories without using additional disk space. Each additional jail takes only a few megabytes of disk space before applications are installed. Upgrading the copy of the userland in the basejail automatically upgrades all of the other jails. Additional benefits and features are described in detail on the ezjail web site, . Installing ezjail Installing ezjail consists of adding a loopback interface for use in jails, installing the port or package, and enabling the service. To keep jail loopback traffic off the host's loopback network interface lo0, a second loopback interface is created by adding an entry to /etc/rc.conf: cloned_interfaces="${cloned_interfaces} lo1" The second loopback interface lo1 will be created when the system starts. It can also be created manually without a restart: &prompt.root; service netif cloneup Created clone interfaces: lo1. Jails can be allowed to use aliases of this secondary loopback interface without interfering with the host. Inside a jail, access to the loopback address 127.0.0.1 is redirected to the first IP address assigned to the jail. To make the jail loopback correspond with the new lo1 interface, that interface must be specified first in the list of interfaces and IP addresses given when creating a new jail. Give each jail a unique loopback address in the 127.0.0.0/8 netblock. Install sysutils/ezjail: &prompt.root; cd /usr/ports/sysutils/ezjail &prompt.root; make install clean Enable ezjail by adding this line to /etc/rc.conf: ezjail_enable="YES" The service will automatically start on system boot. It can be started immediately for the current session: &prompt.root; service ezjail start Initial Setup With ezjail installed, the basejail directory structure can be created and populated. This step is only needed once on the jail host computer. In both of these examples, -p causes the ports tree to be retrieved with &man.portsnap.8; into the basejail. That single copy of the ports directory will be shared by all the jails. Using a separate copy of the ports directory for jails isolates them from the host. The ezjail FAQ explains this in more detail: . To Populate the Jail with &os;-RELEASE For a basejail based on the &os; RELEASE matching that of the host computer, use install. 
For example, on a host computer running &os; 10-STABLE, the latest RELEASE version of &os;-10 will be installed in the jail: &prompt.root; ezjail-admin install -p To Populate the Jail with installworld The basejail can be installed from binaries created by buildworld on the host with ezjail-admin update. In this example, &os; 10-STABLE has been built from source. The jail directories are created. Then installworld is executed, installing the host's /usr/obj into the basejail. &prompt.root; ezjail-admin update -i -p The host's /usr/src is used by default. A different source directory on the host can be specified with a command line option and a path, or set with ezjail_sourcetree in /usr/local/etc/ezjail.conf. The basejail's ports tree is shared by other jails. However, downloaded distfiles are stored in the jail that downloaded them. By default, these files are stored in /var/ports/distfiles within each jail. /var/ports inside each jail is also used as a work directory when building ports. Creating and Starting a New Jail New jails are created with ezjail-admin create. In these examples, the lo1 loopback interface is used as described above. Create and Start a New Jail Create the jail, specifying a name and the loopback and network interfaces to use, along with their IP addresses. In this example, the jail is named dnsjail. &prompt.root; ezjail-admin create dnsjail 'lo1|127.0.1.1,em0|192.168.1.50' Most network services run in jails without problems. A few network services, most notably &man.ping.8;, use raw network sockets. In jails, raw network sockets are disabled by default for security. Services that require them will not work. Occasionally, a jail genuinely needs raw sockets. For example, network monitoring applications often use &man.ping.8; to check the availability of other computers. When raw network sockets are actually needed in a jail, they can be enabled by editing the ezjail configuration file for the individual jail, /usr/local/etc/ezjail/jailname. Modify the parameters entry: export jail_jailname_parameters="allow.raw_sockets=1" Do not enable raw network sockets unless services in the jail actually require them. Start the jail: &prompt.root; ezjail-admin start dnsjail Use a console on the jail: &prompt.root; ezjail-admin console dnsjail The jail is operating and additional configuration can be completed. Typical settings added at this point include: Set the root Password Connect to the jail and set the root user's password: &prompt.root; ezjail-admin console dnsjail &prompt.root; passwd Changing local password for root New Password: Retype New Password: Time Zone Configuration The jail's time zone can be set with &man.tzsetup.8;. To avoid spurious error messages, the &man.adjkerntz.8; entry in /etc/crontab can be commented or removed. This job attempts to update the computer's hardware clock with time zone changes, but jails are not allowed to access that hardware. DNS Servers Enter domain name server lines in /etc/resolv.conf so DNS works in the jail. Edit /etc/hosts Change the address and add the jail name to the localhost entries in /etc/hosts. Configure /etc/rc.conf Enter configuration settings in /etc/rc.conf. This is much like configuring a full computer. The host name and IP address are not set here. Those values are already provided by the jail configuration. With the jail configured, the applications for which the jail was created can be installed. 
Some ports must be built with special options to be used in a jail. For example, both of the network monitoring plugin packages net-mgmt/nagios-plugins and net-mgmt/monitoring-plugins have a JAIL option which must be enabled for them to work correctly inside a jail. Updating Jails Updating the Operating System Because the basejail's copy of the userland is shared by the other jails, updating the basejail automatically updates all of the other jails. Either source or binary updates can be used. To build the world from source on the host, then install it in the basejail, use: &prompt.root; ezjail-admin update -b If the world has already been compiled on the host, install it in the basejail with: &prompt.root; ezjail-admin update -i Binary updates use &man.freebsd-update.8;. These updates have the same limitations as if &man.freebsd-update.8; were being run directly. The most important one is that only -RELEASE versions of &os; are available with this method. To update the basejail to the latest patched release of the version of &os; on the host computer, use: &prompt.root; ezjail-admin update -r After updating the basejail, &man.mergemaster.8; can be run to update each jail's configuration files. How to use &man.mergemaster.8; depends on the purpose and trustworthiness of a jail. If a jail's services or users are not trusted, then &man.mergemaster.8; should only be run from within that jail: &man.mergemaster.8; on Untrusted Jail Delete the link from the jail's /usr/src into the basejail and create a new /usr/src in the jail as a mountpoint. Mount the host computer's /usr/src read-only on the jail's new /usr/src mountpoint: &prompt.root; rm /usr/jails/jailname/usr/src &prompt.root; mkdir /usr/jails/jailname/usr/src &prompt.root; mount -t nullfs -o ro /usr/src /usr/jails/jailname/usr/src Get a console in the jail: &prompt.root; ezjail-admin console jailname Inside the jail, run mergemaster. Then exit the jail console: &prompt.root; cd /usr/src &prompt.root; mergemaster -U &prompt.root; exit Finally, unmount the jail's /usr/src: &prompt.root; umount /usr/jails/jailname/usr/src &man.mergemaster.8; on Trusted Jail If the users and services in a jail are trusted, &man.mergemaster.8; can be run from the host: &prompt.root; mergemaster -U -D /usr/jails/jailname Updating Ports The ports tree in the basejail is shared by the other jails. Updating that copy of the ports tree gives the other jails the updated version also. The basejail ports tree is updated with &man.portsnap.8;: &prompt.root; ezjail-admin update -P Controlling Jails Stopping and Starting Jails ezjail automatically starts jails when the computer is started. Jails can be manually stopped and restarted with stop and start: &prompt.root; ezjail-admin stop sambajail Stopping jails: sambajail. By default, jails are started automatically when the host computer starts. Autostarting can be disabled with config: &prompt.root; ezjail-admin config -r norun seldomjail This takes effect the next time the host computer is started. A jail that is already running will not be stopped. Enabling autostart is very similar: &prompt.root; ezjail-admin config -r run oftenjail Archiving and Restoring Jails Use archive to create a .tar.gz archive of a jail. The file name is composed from the name of the jail and the current date. Archive files are written to the archive directory, /usr/jails/ezjail_archives. A different archive directory can be chosen by setting ezjail_archivedir in the configuration file. 
The archive file can be copied elsewhere as a backup, or an existing jail can be restored from it with restore. A new jail can be created from the archive, providing a convenient way to clone existing jails. Stop and archive a jail named wwwserver: &prompt.root; ezjail-admin stop wwwserver Stopping jails: wwwserver. &prompt.root; ezjail-admin archive wwwserver &prompt.root; ls /usr/jails/ezjail_archives/ wwwserver-201407271153.13.tar.gz Create a new jail named wwwserver-clone from the archive created in the previous step. Use the em1 interface and assign a new IP address to avoid conflict with the original: &prompt.root; ezjail-admin create -a /usr/jails/ezjail_archives/wwwserver-201407271153.13.tar.gz wwwserver-clone 'lo1|127.0.3.1,em1|192.168.1.51' Full Example: BIND in a Jail Putting the BIND DNS server in a jail improves security by isolating it. This example creates a simple caching-only name server. The jail will be called dns1. The jail will use IP address 192.168.1.240 on the host's re0 interface. The upstream ISP's DNS servers are at 10.0.0.62 and 10.0.0.61. The basejail has already been created and a ports tree installed. Running BIND in a Jail Create a cloned loopback interface by adding a line to /etc/rc.conf: cloned_interfaces="${cloned_interfaces} lo1" Immediately create the new loopback interface: &prompt.root; service netif cloneup Created clone interfaces: lo1. Create the jail: &prompt.root; ezjail-admin create dns1 'lo1|127.0.2.1,re0|192.168.1.240' Start the jail, connect to a console running on it, and perform some basic configuration: &prompt.root; ezjail-admin start dns1 &prompt.root; ezjail-admin console dns1 &prompt.root; passwd Changing local password for root New Password: Retype New Password: &prompt.root; tzsetup &prompt.root; sed -i .bak -e '/adjkerntz/ s/^/#/' /etc/crontab &prompt.root; sed -i .bak -e 's/127.0.0.1/127.0.2.1/g; s/localhost.my.domain/dns1.my.domain dns1/' /etc/hosts Temporarily set the upstream DNS servers in /etc/resolv.conf so ports can be downloaded: nameserver 10.0.0.62 nameserver 10.0.0.61 Still using the jail console, install dns/bind99. &prompt.root; cd /usr/ports/dns/bind99 &prompt.root; make -C /usr/ports/dns/bind99 install clean Configure the name server by editing /usr/local/etc/namedb/named.conf. Create an Access Control List (ACL) of addresses and networks that are permitted to send DNS queries to this name server. This section is added just before the options section already in the file: ... // or cause huge amounts of useless Internet traffic. acl "trusted" { 192.168.1.0/24; localhost; localnets; }; options { ... Use the jail IP address in the listen-on setting to accept DNS queries from other computers on the network: listen-on { 192.168.1.240; }; A simple caching-only DNS name server is created by changing the forwarders section. The original file contains: /* forwarders { 127.0.0.1; }; */ Uncomment the section by removing the /* and */ lines. Enter the IP addresses of the upstream DNS servers. Immediately after the forwarders section, add references to the trusted ACL defined earlier: forwarders { 10.0.0.62; 10.0.0.61; }; allow-query { any; }; allow-recursion { trusted; }; allow-query-cache { trusted; }; Enable the service in /etc/rc.conf: named_enable="YES" Start and test the name server: &prompt.root; service named start wrote key file "/usr/local/etc/namedb/rndc.key" Starting named. 
&prompt.root; /usr/local/bin/dig @192.168.1.240 freebsd.org A response that includes ;; Got answer; shows that the new DNS server is working. A long delay followed by a response including ;; connection timed out; no servers could be reached shows a problem. Check the configuration settings and make sure any local firewalls allow the new DNS access to the upstream DNS servers. The new DNS server can use itself for local name resolution, just like other local computers. Set the address of the DNS server in the client computer's /etc/resolv.conf: nameserver 192.168.1.240 A local DHCP server can be configured to provide this address for a local DNS server, providing automatic configuration on DHCP clients. Index: head/en_US.ISO8859-1/books/handbook/linuxemu/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/linuxemu/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/linuxemu/chapter.xml (revision 46272) @@ -1,1239 +1,1239 @@ &linux; Binary Compatibility Jim Mock Restructured and parts updated by Brian N. Handy Originally contributed by Rich Murphey Synopsis Linux binary compatibility binary compatibility Linux &os; provides 32-bit binary compatibility with &linux;, allowing users to install and run most 32-bit &linux; binaries on a &os; system without having to first modify the binary. It has even been reported that, in some situations, 32-bit &linux; binaries perform better on &os; than they do on &linux;. However, some &linux;-specific operating system features are not supported under &os;. For example, &linux; binaries will not work on &os; if they overly use &i386; specific calls, such as enabling virtual 8086 mode. In addition, 64-bit &linux; binaries are not supported at this time. After reading this chapter, you will know: How to enable &linux; binary compatibility on a &os; system. How to install additional &linux; shared libraries. How to install &linux; applications on a &os; system. The implementation details of &linux; compatibility in &os;. Before reading this chapter, you should: Know how to install additional third-party software. Configuring &linux; Binary Compatibility Ports Collection By default, &linux; libraries are not installed and &linux; binary compatibility is not enabled. &linux; libraries can either be installed manually or from the &os; Ports Collection. Before attempting to build the port, load the &linux; kernel module, otherwise the build will fail: &prompt.root; kldload linux To verify that the module is loaded: &prompt.user; kldstat Id Refs Address Size Name 1 2 0xc0100000 16bdb8 kernel 7 1 0xc24db000 d000 linux.ko The emulators/linux_base-c6 package or port is the easiest way to install a base set of &linux; libraries and binaries on a &os; system. To install the port: &prompt.root; cd /usr/ports/emulators/linux_base-c6 &prompt.root; make install distclean For &linux; compatibility to be enabled at boot time, add this line to /etc/rc.conf: linux_enable="YES" kernel options COMPAT_LINUX Users who prefer to statically link &linux; binary compatibility into a custom kernel should add options COMPAT_LINUX to their custom kernel configuration file. Compile and install the new kernel as described in . Installing Additional Libraries Manually shared libraries If a &linux; application complains about missing shared libraries after configuring &linux; binary compatibility, determine which shared libraries the &linux; binary needs and install them manually. 
From a &linux; system, ldd can be used to determine which shared libraries the application needs. For example, to check which shared libraries linuxdoom needs, run this command from a &linux; system that has Doom installed: &prompt.user; ldd linuxdoom libXt.so.3 (DLL Jump 3.1) => /usr/X11/lib/libXt.so.3.1.0 libX11.so.3 (DLL Jump 3.1) => /usr/X11/lib/libX11.so.3.1.0 libc.so.4 (DLL Jump 4.5pl26) => /lib/libc.so.4.6.29 symbolic links Then, copy all the files in the last column of the output from the &linux; system into /compat/linux on the &os; system. Once copied, create symbolic links to the names in the first column. This example will result in the following files on the &os; system: /compat/linux/usr/X11/lib/libXt.so.3.1.0 /compat/linux/usr/X11/lib/libXt.so.3 -> libXt.so.3.1.0 /compat/linux/usr/X11/lib/libX11.so.3.1.0 /compat/linux/usr/X11/lib/libX11.so.3 -> libX11.so.3.1.0 /compat/linux/lib/libc.so.4.6.29 /compat/linux/lib/libc.so.4 -> libc.so.4.6.29 If a &linux; shared library already exists with a matching major revision number to the first column of the ldd output, it does not need to be copied to the file named in the last column, as the existing library should work. It is advisable to copy the shared library if it is a newer version, though. The old one can be removed, as long as the symbolic link points to the new one. For example, these libraries already exist on the &os; system: /compat/linux/lib/libc.so.4.6.27 /compat/linux/lib/libc.so.4 -> libc.so.4.6.27 and ldd indicates that a binary requires a later version: libc.so.4 (DLL Jump 4.5pl26) -> libc.so.4.6.29 Since the existing library is only one or two versions out of date in the last digit, the program should still work with the slightly older version. However, it is safe to replace the existing libc.so with the newer version: /compat/linux/lib/libc.so.4.6.29 /compat/linux/lib/libc.so.4 -> libc.so.4.6.29 Generally, one will need to look for the shared libraries that &linux; binaries depend on only the first few times that a &linux; program is installed on &os;. After a while, there will be a sufficient set of &linux; shared libraries on the system to be able to run newly installed &linux; binaries without any extra work. Installing &linux; <acronym>ELF</acronym> Binaries Linux ELF binaries ELF binaries sometimes require an extra step. When an unbranded ELF binary is executed, it will generate an error message: &prompt.user; ./my-linux-elf-binary ELF binary type not known Abort To help the &os; kernel distinguish between a &os; ELF binary and a &linux; binary, use &man.brandelf.1;: &prompt.user; brandelf -t Linux my-linux-elf-binary GNU toolchain Since the GNU toolchain places the appropriate branding information into ELF binaries automatically, this step is usually not necessary. Installing a &linux; <acronym>RPM</acronym> Based Application In order to install a &linux; RPM-based application, first install the archivers/rpm package or port. Once installed, root can use this command to install a .rpm: &prompt.root; cd /compat/linux &prompt.root; rpm2cpio < /path/to/linux.archive.rpm | cpio -id If necessary, brandelf the installed ELF binaries. Note that this will prevent a clean uninstall. Configuring the Hostname Resolver If DNS does not work or this error appears: resolv+: "bind" is an invalid keyword resolv+: "hosts" is an invalid keyword configure /compat/linux/etc/host.conf as follows: order hosts, bind multi on This specifies that /etc/hosts is searched first and DNS is searched second. 
When /compat/linux/etc/host.conf does not exist, &linux; applications use /etc/host.conf and complain about the incompatible &os; syntax. Remove bind if a name server is not configured using /etc/resolv.conf. Boris Hollas Updated for Mathematica 5.X by Installing &mathematica; applications Mathematica This section describes the process of installing the &linux; version of &mathematica; 9.X onto a &os; system. &mathematica; is a commercial, computational software program used in scientific, engineering, and mathematical fields. A 30 day trial version is available for download from wolfram.com/mathematica. Running the &mathematica; Installer Before installing &mathematica;, make sure that the textproc/linux-c6-aspell package or port is installed and that the &man.linprocfs.5; file system is mounted. &prompt.root; sysctl kern.fallback_elf_brand=3 &os; will now assume that unbranded ELF binaries use the &linux; ABI which should allow the installer to execute from the CDROM. The downloaded file will be saved to /tmp/Mathematica_9.0.1_LINUX.sh. Become the superuser and run this installer file: &prompt.root; sh /tmp/Mathematica_9.0.1_LINUX.sh Mathematica Secured 9.0.1 for LINUX Installer Archive Verifying archive integrity. Extracting installer. ... Wolfram Mathematica 9 Installer Copyright (c) 1988-2013 Wolfram Research, Inc. All rights reserved. WARNING: Wolfram Mathematica is protected by copyright law and international treaties. Unauthorized reproduction or distribution may result in severe civil and criminal penalties and will be prosecuted to the maximum extent possible under law. Enter the installation directory, or press ENTER to select /usr/local/Wolfram/Mathematica/9.0: > Now installing... *********************** Installation complete. Running the &mathematica; Frontend over a Network &mathematica; uses some special fonts to display characters not present in any of the standard font sets. Xorg requires these fonts to be installed locally. This means that these fonts need to be copied from the CDROM or from a host with &mathematica; installed to the local machine. These fonts are normally stored in /cdrom/Unix/Files/SystemFiles/Fonts on the CDROM, or /usr/local/mathematica/SystemFiles/Fonts on the hard drive. The actual fonts are in the subdirectories Type1 and X. There are several ways to use them, as described below. The first way is to copy the fonts into one of the existing font directories in /usr/local/lib/X11/fonts, then run &man.mkfontdir.1; within the directory containing the new fonts. The second way to do this is to copy the directories to /usr/local/lib/X11/fonts: &prompt.root; cd /usr/local/lib/X11/fonts &prompt.root; mkdir X &prompt.root; mkdir MathType1 &prompt.root; cd /cdrom/Unix/Files/SystemFiles/Fonts &prompt.root; cp X/* /usr/local/lib/X11/fonts/X &prompt.root; cp Type1/* /usr/local/lib/X11/fonts/MathType1 &prompt.root; cd /usr/local/lib/X11/fonts/X &prompt.root; mkfontdir &prompt.root; cd ../MathType1 &prompt.root; mkfontdir Now add the new font directories to the font path: &prompt.root; xset fp+ /usr/local/lib/X11/fonts/X &prompt.root; xset fp+ /usr/local/lib/X11/fonts/MathType1 &prompt.root; xset fp rehash When using the &xorg; server, these font directories can be loaded automatically by adding them to /etc/X11/xorg.conf.
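For example, a Files section similar to the following could be added, assuming the font directories created above (adjust the paths if the fonts were copied elsewhere):
Section "Files"
    FontPath "/usr/local/lib/X11/fonts/X/"
    FontPath "/usr/local/lib/X11/fonts/MathType1/"
EndSection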
fonts If /usr/local/lib/X11/fonts/Type1 does not already exist, change the name of the MathType1 directory in the example above to Type1. Network Servers Synopsis This chapter covers some of the more frequently used network services on &unix; systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference. By the end of this chapter, readers will know: How to manage the inetd daemon. How to set up the Network File System (NFS). How to set up the Network Information Server (NIS) for centralizing and sharing user accounts. How to set &os; up to act as an LDAP server or client. How to set up automatic network settings using DHCP. How to set up a Domain Name Server (DNS). How to set up the Apache HTTP Server. How to set up a File Transfer Protocol (FTP) server. How to set up a file and print server for &windows; clients using Samba. How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). How to set up iSCSI. This chapter assumes a basic knowledge of: /etc/rc scripts. Network terminology. Installation of additional third-party software (). The <application>inetd</application> Super-Server The &man.inetd.8; daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. Configuration File Configuration of inetd is done by editing /etc/inetd.conf. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (#), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the # at the beginning of the line for that application. After saving your edits, configure inetd to start at system boot by editing /etc/rc.conf: inetd_enable="YES" To start inetd now, so that it listens for the service you configured, type: &prompt.root; service inetd start Once inetd is started, it needs to be notified whenever a modification is made to /etc/inetd.conf: Reloading the <application>inetd</application> Configuration File &prompt.root; service inetd reload Typically, the default entry for an application does not need to be edited beyond removing the #. In some situations, it may be appropriate to edit the default entry. As an example, this is the default entry for &man.ftpd.8; over IPv4: ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l The seven columns in an entry are as follows: service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments where: service-name The service name of the daemon to start.
It must correspond to a service listed in /etc/services. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to /etc/services. socket-type Either stream, dgram, raw, or seqpacket. Use stream for TCP connections and dgram for UDP services. protocol Use one of the following protocol names: Protocol Name Explanation tcp or tcp4 TCP IPv4 udp or udp4 UDP IPv4 tcp6 TCP IPv6 udp6 UDP IPv6 tcp46 Both TCP IPv4 and IPv6 udp46 Both UDP IPv4 and IPv6 {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] In this field, wait or nowait must be specified. max-child, max-connections-per-ip-per-minute and max-child-per-ip are optional. wait|nowait indicates whether or not the service is able to handle its own socket. dgram socket types must use wait while stream daemons, which are usually multi-threaded, should use nowait. wait usually hands off multiple sockets to a single daemon, while nowait spawns a child daemon for each new socket. The maximum number of child daemons inetd may spawn is set by max-child. For example, to limit ten instances of the daemon, place a /10 after nowait. Specifying /0 allows an unlimited number of children. max-connections-per-ip-per-minute limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of /10 would limit any particular IP address to ten connection attempts per minute. max-child-per-ip limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. An example can be seen in the default settings for &man.fingerd.8;: finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s user The username the daemon will run as. Daemons typically run as root, daemon, or nobody. server-program The full path to the daemon. If the daemon is a service provided by inetd internally, use internal. server-program-arguments Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use internal. Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its behaviour. By default, inetd is started with -wW -C 60. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for inetd_flags in /etc/rc.conf. If inetd is already running, restart it with service inetd restart. The available rate limiting options are: -c maximum Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using max-child in /etc/inetd.conf. -C rate Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using max-connections-per-ip-per-minute in /etc/inetd.conf. -R rate Specify the maximum number of times a service can be invoked in one minute, where the default is 256. A rate of 0 allows an unlimited number. -s maximum Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using max-child-per-ip in /etc/inetd.conf. Additional options are available. Refer to &man.inetd.8; for the full list of options.
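For example, to keep the default TCP wrappers behaviour but relax the per-address limit to 120 requests per minute, a line like the following could be added to /etc/rc.conf (the value shown is only illustrative):
inetd_flags="-wW -C 120"
Then apply the change:
&prompt.root; service inetd restart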
Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. max-connections-per-ip-per-minute, max-child and max-child-per-ip can be used to limit such attacks. By default, TCP wrappers is enabled. Consult &man.hosts.access.5; for more information on placing TCP restrictions on various inetd invoked daemons. Network File System (NFS) Tom Rhodes Reorganized and enhanced by Bill Swingle Written by NFS &os; supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. Some of the more common uses include: Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. Several clients may need access to the /usr/ports/distfiles directory. Sharing that directory allows for quick access to the source files without having to download them to each client. On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from a centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: NFS server file server UNIX clients rpcbind mountd nfsd Daemon Description nfsd The NFS daemon which services requests from NFS clients. mountd The NFS mount daemon which carries out requests received from nfsd. rpcbind This daemon allows NFS clients to discover which port the NFS server is using. Running &man.nfsiod.8; on the client can improve performance, but is not required. Configuring the Server NFS configuration The file systems which the NFS server will share are specified in /etc/exports. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system. NFS export examples The following /etc/exports entries demonstrate how to export file systems. The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See &man.exports.5; for the full list of options. 
This example shows how to export /cdrom to three hosts named alpha, bravo, and charlie: /cdrom -ro alpha bravo charlie The -ro flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in /etc/hosts. Refer to &man.hosts.5; if the network does not have a DNS server. The next example exports /home to three clients by IP address. This can be useful for networks without DNS or /etc/hosts entries. The -alldirs flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. /home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 This next example exports /a so that two clients from different domains may access that file system. The allows root on the remote system to write data on the exported file system as root. If -maproot=root is not specified, the client's root user will be mapped to the server's nobody account and will be subject to the access limitations defined for nobody. /a -maproot=root host.example.com box.example.org A client can only be specified once per file system. For example, if /usr is a single file system, these entries would be invalid as both entries specify the same host: # Invalid when /usr is one file system /usr/src client /usr/ports client The correct format for this situation is to use one entry: /usr/src /usr/ports client The following is an example of a valid export list, where /usr and /exports are local file systems: # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro To enable the processes required by the NFS server at boot time, add these options to /etc/rc.conf: rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" The server can be started now by running this command: &prompt.root; service nfsd start Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads /etc/exports when it is started. To make subsequent /etc/exports edits take effect immediately, force mountd to reread it: &prompt.root; service mountd reload Configuring the Client To enable NFS clients, set this option in each client's /etc/rc.conf: nfs_client_enable="YES" Then, run this command on each NFS client: &prompt.root; service nfsclient start The client now has everything it needs to mount a remote file system. In these examples, the server's name is server and the client's name is client. To mount /home on server to the /mnt mount point on client: NFS mounting &prompt.root; mount server:/home /mnt The files and directories in /home will now be available on client, in the /mnt directory. To mount a remote file system each time the client boots, add it to /etc/fstab: server:/home /mnt nfs rw 0 0 Refer to &man.fstab.5; for a description of all available options. Locking Some applications require file locking to operate correctly. 
To enable locking, add these lines to /etc/rc.conf on both the client and server: rpc_lockd_enable="YES" rpc_statd_enable="YES" Then start the applications: &prompt.root; service lockd start &prompt.root; service statd start If locking is not required on the server, the NFS client can be configured to lock locally by including when running mount. Refer to &man.mount.nfs.8; for further details. Automating Mounts with &man.amd.8; Wylie Stilwell Contributed by Chern Lee Rewritten by amd automatic mounter daemon The automatic mounter daemon, amd, automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will be automatically unmounted by amd. This daemon provides an alternative to modifying /etc/fstab to list every client. It operates by attaching itself as an NFS server to the /host and /net directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. /net is used to mount an exported file system from an IP address while /host is used to mount an export from a remote hostname. For instance, an attempt to access a file within /host/foobar/usr would tell amd to mount the /usr export on the host foobar. Mounting an Export with <application>amd</application> In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /host/foobar/usr The output from showmount shows /usr as an export. When changing directories to /host/foobar/usr, amd intercepts the request and attempts to resolve the hostname foobar. If successful, amd automatically mounts the desired export. To enable amd at boot time, add this line to /etc/rc.conf: amd_enable="YES" To start amd now: &prompt.root; service amd start Custom flags can be passed to amd from the amd_flags environment variable. By default, amd_flags is set to: amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" The default options with which exports are mounted are defined in /etc/amd.map. Some of the more advanced features of amd are defined in /etc/amd.conf. Consult &man.amd.8; and &man.amd.conf.5; for more information. Automating Mounts with &man.autofs.5; The &man.autofs.5; automount facility is supported starting with &os; 10.1-RELEASE. To use the automounter functionality in older versions of &os;, use &man.amd.8; instead. This chapter only describes the &man.autofs.5; automounter. autofs automounter subsystem The &man.autofs.5; facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, &man.autofs.5;, and several userspace applications: &man.automount.8;, &man.automountd.8; and &man.autounmountd.8;. It serves as an alternative for &man.amd.8; from previous &os; releases. Amd is still provided for backward compatibility purposes, as the two use different map format; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, MacOS X, and Linux. The &man.autofs.5; virtual filesystem is mounted on specified mountpoints by &man.automount.8;, usually invoked during boot. 
Whenever a process attempts to access a file within the &man.autofs.5; mountpoint, the kernel will notify the &man.automountd.8; daemon and pause the triggering process. The &man.automountd.8; daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, then signal the kernel to release the blocked process. The &man.autounmountd.8; daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is /etc/auto_master. It assigns individual maps to top-level mounts. For an explanation of auto_master and the map syntax, refer to &man.auto.master.5;. There is a special automounter map mounted on /net. When a file is accessed within this directory, &man.autofs.5; looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within /net/foobar/usr would tell &man.automountd.8; to mount the /usr export from the host foobar. Mounting an Export with &man.autofs.5; In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /net/foobar/usr The output from showmount shows /usr as an export. When changing directories to /net/foobar/usr, &man.automountd.8; intercepts the request and attempts to resolve the hostname foobar. If successful, &man.automountd.8; automatically mounts the desired export. To enable &man.autofs.5; at boot time, add this line to /etc/rc.conf: autofs_enable="YES" Then &man.autofs.5; can be started by running: &prompt.root; service automount start &prompt.root; service automountd start &prompt.root; service autounmountd start Since the &man.autofs.5; map format is the same as in other operating systems, it might be desirable to consult information from other operating systems, such as the Mac OS X documentation. Consult the &man.automount.8;, &man.automountd.8;, &man.autounmountd.8;, and &man.auto.master.5; manual pages for more information. Network Information System (<acronym>NIS</acronym>) NIS Solaris HP-UX AIX Linux NetBSD OpenBSD yellow pages NIS Network Information System (NIS) is designed to centralize administration of &unix;-like systems such as &solaris;, HP-UX, &aix;, Linux, NetBSD, OpenBSD, and &os;. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with yp. NIS domains NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. &os; uses version 2 of the NIS protocol. <acronym>NIS</acronym> Terms and Processes Table 28.1 summarizes the terms and important processes used by NIS: rpcbind portmap <acronym>NIS</acronym> Terminology Term Description NIS domain name NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS. &man.rpcbind.8; This service enables RPC and must be running in order to run an NIS server or act as an NIS client. &man.ypbind.8; This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server.
It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server. &man.ypserv.8; This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-&os; clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients. &man.rpc.yppasswdd.8; This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to login to the NIS master server and change their passwords there.
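Once a client has been configured, as described later in this section, &man.domainname.1; prints the NIS domain name in use and &man.ypwhich.1; reports which server ypbind is currently bound to. In the example environment used below, the output would look like this:
&prompt.user; domainname
test-domain
&prompt.user; ypwhich
ellington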
Machine Types NIS master server NIS slave server NIS client There are three types of hosts in an NIS environment: NIS master server This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The passwd, group, and other various files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment. NIS slave servers NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first. NIS clients NIS clients authenticate against the NIS server during log on. Information in many files can be shared using NIS. The master.passwd, group, and hosts files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead. Planning Considerations This section describes a sample NIS environment which consists of 15 &os; machines with no centralized point of administration. Each machine has its own /etc/passwd and /etc/master.passwd. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines. The configuration of the lab will be as follows: Machine name IP address Machine role ellington 10.0.0.2 NIS master coltrane 10.0.0.3 NIS slave basie 10.0.0.4 Faculty workstation bird 10.0.0.5 Client machine cli[1-11] 10.0.0.[6-17] Other client machines If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process. Choosing a <acronym>NIS</acronym> Domain Name NIS domain name When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts. Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the acme-art NIS domain. This example will use the domain name test-domain. However, some non-&os; operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name must be used as the NIS domain name. Physical Server Requirements There are several things to keep in mind when choosing a machine to use as a NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a stand alone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. 
However, if the NIS server becomes unavailable, it will adversely affect all NIS clients. Configuring the <acronym>NIS</acronym> Master Server The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In &os;, these maps are stored in /var/yp/[domainname] where [domainname] is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps. NIS master and slave servers handle all NIS requests through &man.ypserv.8;. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client. NIS server configuration Setting up a master NIS server can be relatively straight forward, depending on environmental needs. Since &os; provides built-in NIS support, it only needs to be enabled by adding the following lines to /etc/rc.conf: nisdomainname="test-domain" nis_server_enable="YES" nis_yppasswdd_enable="YES" This line sets the NIS domain name to test-domain. This automates the start up of the NIS server processes when the system boots. This enables the &man.rpc.yppasswdd.8; daemon so that users can change their NIS password from a client machine. Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again. A server that is also a client can be forced to bind to a particular server by adding these additional lines to /etc/rc.conf: nis_client_enable="YES" # run client stuff as well nis_client_flags="-S NIS domain,server" After saving the edits, type /etc/netstart to restart the network and apply the values defined in /etc/rc.conf. Before initializing the NIS maps, start &man.ypserv.8;: &prompt.root; service ypserv start Initializing the <acronym>NIS</acronym> Maps NIS maps NIS maps are generated from the configuration files in /etc on the NIS master, with one exception: /etc/master.passwd. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files: &prompt.root; cp /etc/master.passwd /var/yp/master.passwd &prompt.root; cd /var/yp &prompt.root; vi master.passwd It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as the root and any other administrative accounts. Ensure that the /var/yp/master.passwd is neither group or world readable by setting its permissions to 600. After completing this task, initialize the NIS maps. &os; includes the &man.ypinit.8; script to do this. When generating maps for the master server, include and specify the NIS domain name: ellington&prompt.root; ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. 
Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a <control D>. master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. This will create /var/yp/Makefile from /var/yp/Makefile.dist. By default, this file assumes that the environment has a single NIS server with only &os; clients. Since test-domain has a slave server, edit this line in /var/yp/Makefile so that it begins with a comment (#): NOPUSH = "True" Adding New Users Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to login anywhere except on the NIS master. For example, to add the new user jsmith to the test-domain domain, run these commands on the master server: &prompt.root; pw useradd jsmith &prompt.root; cd /var/yp &prompt.root; make test-domain The user could also be added using adduser jsmith instead of pw useradd smith. Setting up a <acronym>NIS</acronym> Slave Server NIS slave server To set up an NIS slave server, log on to the slave server and edit /etc/rc.conf as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running ypinit on the slave server, use (for slave) instead of (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example: coltrane&prompt.root; ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... 
ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. This will generate a directory on the slave server called /var/yp/test-domain which contains copies of the NIS master server's maps. Adding these /etc/crontab entries on each slave server will force the slaves to sync their maps with the maps on the master server: 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete. To finish the configuration, run /etc/netstart on the slave server in order to start the NIS services. Setting Up an <acronym>NIS</acronym> Client An NIS client binds to an NIS server using &man.ypbind.8;. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server. NIS client configuration To configure a &os; machine to be an NIS client: Edit /etc/rc.conf and add the following lines in order to set the NIS domain name and start &man.ypbind.8; during network startup: nisdomainname="test-domain" nis_client_enable="YES" To import all possible password entries from the NIS server, use vipw to remove all user accounts except one from /etc/master.passwd. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of wheel. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file: +::::::::: This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in . For more detailed reading, refer to the book Managing NFS and NIS, published by O'Reilly Media. 
To import all possible group entries from the NIS server, add this line to /etc/group: +:*:: To start the NIS client immediately, execute the following commands as the superuser: &prompt.root; /etc/netstart &prompt.root; service ypbind start After completing these steps, running ypcat passwd on the client should show the server's passwd map. <acronym>NIS</acronym> Security Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, &man.ypserv.8; supports a feature called securenets which can be used to restrict access to a given set of hosts. By default, this information is stored in /var/yp/securenets, unless &man.ypserv.8; is started with and an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with # are considered to be comments. A sample securenets might look like this: # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 If &man.ypserv.8; receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the securenets does not exist, ypserv will allow connections from any host. is an alternate mechanism for providing access control instead of securenets. While either access control mechanism adds some security, they are both vulnerable to IP spoofing attacks. All NIS-related traffic should be blocked at the firewall. Servers using securenets may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of securenets. TCP Wrapper The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves. Barring Some Users In this example, the basie system is a faculty workstation within the NIS domain. The passwd map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins. To prevent specified users from logging on to a system, even if they are present in the NIS database, use vipw to add -username with the correct number of colons towards the end of /etc/master.passwd on the client, where username is the username of a user to bar from logging in. The line with the blocked user must be before the + line that allows NIS users. 
In this example, bill is barred from logging on to basie: basie&prompt.root; cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin operator:*:2:5::0:0:System &:/:/sbin/nologin bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin news:*:8:8::0:0:News Subsystem:/:/sbin/nologin man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/sbin/nologin bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin -bill::::::::: +::::::::: basie&prompt.root; Using Netgroups netgroups Barring specified users from logging on to individual systems becomes unscaleable on larger networks and quickly loses the main benefit of NIS: centralized administration. Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to &unix; groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups. To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3: Additional Users User Name(s) Description alpha, beta IT department employees charlie, delta IT department apprentices echo, foxtrott, golf, ... employees able, baker, ... interns
Additional Systems Machine Name(s) Description war, death, famine, pollution Only IT employees are allowed to log onto these servers. pride, greed, envy, wrath, lust, sloth All members of the IT department are allowed to login onto these servers. one, two, three, four, ... Ordinary workstations used by employees. trashcan A very old machine without any critical data. Even interns are allowed to use this system.
When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines. The first step is the initialization of the NIS netgroup map. In &os;, this map is not created by default. On the NIS master server, use an editor to create a map named /var/yp/netgroup. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns: IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) USERS (,echo,test-domain) (,foxtrott,test-domain) \ (,golf,test-domain) INTERNS (,able,test-domain) (,baker,test-domain) Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of brackets represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent: The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts. The name of the account that belongs to this netgroup. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup. If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See &man.netgroup.5; for details. netgroups Netgroup names longer than 8 characters should not be The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine and netgroup names. Some non-&os; NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example: BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...] BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 Repeat this process if more than 225 (15 times 15) users exist within a single netgroup. To activate and distribute the new NIS map: ellington&prompt.root; cd /var/yp ellington&prompt.root; make This will generate the three NIS maps netgroup, netgroup.byhost and netgroup.byuser. Use the map key option of &man.ypcat.1; to check if the new NIS maps are available: ellington&prompt.user; ypcat -k netgroup ellington&prompt.user; ypcat -k netgroup.byhost ellington&prompt.user; ypcat -k netgroup.byuser The output of the first command should resemble the contents of /var/yp/netgroup. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user. To configure a client, use &man.vipw.8; to specify the name of the netgroup. For example, on the server named war, replace this line: +::::::::: with +@IT_EMP::::::::: This specifies that only the users defined in the netgroup IT_EMP will be imported into this system's password database and only those users are allowed to login to this system. This configuration also applies to the ~ function of the shell and all routines which convert between user names and numerical user IDs. 
In other words, cd ~user will not work, ls -l will show the numerical ID instead of the username, and find . -user joe -print will fail with the message No such user. To fix this, import all user entries without allowing them to login into the servers. This can be achieved by adding an extra line: +:::::::::/sbin/nologin This line configures the client to import all entries but to replace the shell in those entries with /sbin/nologin. Make sure that extra line is placed after +@IT_EMP:::::::::. Otherwise, all user accounts imported from NIS will have /sbin/nologin as their login shell and no one will be able to login to the system. To configure the less important servers, replace the old +::::::::: on the servers with these lines: +@IT_EMP::::::::: +@IT_APP::::::::: +:::::::::/sbin/nologin The corresponding lines for the workstations would be: +@IT_EMP::::::::: +@USERS::::::::: +:::::::::/sbin/nologin NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called BIGSRV to define the login restrictions for the important servers, another netgroup called SMALLSRV for the less important servers, and a third netgroup called USERBOX for the workstations. Each of these netgroups contains the netgroups that are allowed to login onto these machines. The new entries for the NIS netgroup map would look like this: BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required. Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the /etc/master.passwd of each system contains two lines starting with +. The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other accounts with /sbin/nologin as shell. It is recommended to use the ALL-CAPS version of the hostname as the name of the netgroup: +@BOXNAME::::::::: +:::::::::/sbin/nologin Once this task is completed on all the machines, there is no longer a need to modify the local versions of /etc/master.passwd ever again. All further changes can be handled by modifying the NIS map. 
Here is an example of a possible netgroup map for this scenario: # Define groups of users first IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) DEPT1 (,echo,test-domain) (,foxtrott,test-domain) DEPT2 (,golf,test-domain) (,hotel,test-domain) DEPT3 (,india,test-domain) (,juliet,test-domain) ITINTERN (,kilo,test-domain) (,lima,test-domain) D_INTERNS (,able,test-domain) (,baker,test-domain) # # Now, define some groups based on roles USERS DEPT1 DEPT2 DEPT3 BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS # # And a groups for a special tasks # Allow echo and golf to access our anti-virus-machine SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain) # # machine-based netgroups # Our main servers WAR BIGSRV FAMINE BIGSRV # User india needs access to this server POLLUTION BIGSRV (,india,test-domain) # # This one is really important and needs more access restrictions DEATH IT_EMP # # The anti-virus-machine mentioned above ONE SECURITY # # Restrict a machine to a single user TWO (,hotel,test-domain) # [...more groups to follow] It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.
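As with the initial netgroup map, any later change to /var/yp/netgroup only takes effect after the maps are rebuilt and pushed from the master server:
ellington&prompt.root; cd /var/yp
ellington&prompt.root; make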
Password Formats NIS password formats NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard. To check which format a server or client is using, look at this section of /etc/login.conf: default:\ :passwd_format=des:\ :copyright=/etc/COPYRIGHT:\ [Further entries elided] In this example, the system is using the DES format. Other possible values are blf for Blowfish and md5 for MD5 encrypted passwords. If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change: &prompt.root; cap_mkdb /etc/login.conf The format of passwords for existing user accounts will not be updated until each user changes their password after the login capability database is rebuilt.
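For example, to switch a host from DES to Blowfish hashes in order to match an NIS domain standardized on that format (only appropriate if every host in the domain supports blf), change the passwd_format line in /etc/login.conf and rebuild the database:
:passwd_format=blf:\
&prompt.root; cap_mkdb /etc/login.conf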
Lightweight Directory Access Protocol (<acronym>LDAP</acronym>) Tom Rhodes Written by LDAP The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access to several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base. This section provides a quick start guide for configuring an LDAP server on a &os; system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access. <acronym>LDAP</acronym> Terminology and Structure LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of attributes. Each of these attribute sets contains a unique identifier known as a Distinguished Name (DN) which is normally built from several other attributes such as the common or Relative Distinguished Name (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path. An example LDAP entry looks like the following. This example searches for the entry for the specified user account (uid), organizational unit (ou), and organization (o): &prompt.user; ldapsearch -xb "uid=trhodes,ou=users,o=example.com" # extended LDIF # # LDAPv3 # base <uid=trhodes,ou=users,o=example.com> with scope subtree # filter: (objectclass=*) # requesting: ALL # # trhodes, users, example.com dn: uid=trhodes,ou=users,o=example.com mail: trhodes@example.com cn: Tom Rhodes uid: trhodes telephoneNumber: (123) 456-7890 # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This example entry shows the values for the dn, mail, cn, uid, and telephoneNumber attributes. The cn attribute is the RDN. More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html. Configuring an <acronym>LDAP</acronym> Server LDAP Server &os; does not provide a built-in LDAP server. Begin the configuration by installing the net/openldap24-server package or port. Since the port has many configurable options, it is recommended that the default options are reviewed to see if the package is sufficient, and to instead compile the port if any options should be changed. In most cases, the defaults are fine. However, if SQL support is needed, this option must be enabled and the port compiled using the instructions in . Next, create the directories to hold the data and to store the certificates: &prompt.root; mkdir /var/db/openldap-data &prompt.root; mkdir /usr/local/etc/openldap/private Copy over the database configuration file: &prompt.root; cp /usr/local/etc/openldap/DB_CONFIG.example /var/db/openldap-data/DB_CONFIG The next phase is to configure the certificate authority. The following commands must be executed from /usr/local/etc/openldap/private. This is important as the file permissions need to be restrictive and users should not have access to these files. 
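For example, change into the directory and, as a precaution, make it readable only by root (the exact mode shown is a suggestion rather than a requirement of OpenLDAP):
&prompt.root; cd /usr/local/etc/openldap/private
&prompt.root; chmod 700 /usr/local/etc/openldap/private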
To create the certificate authority, start with this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt The entries for the prompts may be generic except for the Common Name. This entry must be different from the system hostname. If this will be a self-signed certificate, prefix the hostname with CA for certificate authority. The next task is to create a certificate signing request and a private key. Input this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -keyout server.key -out server.csr During the certificate generation process, be sure to correctly set the Common Name attribute. Once complete, sign the key: &prompt.root; openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial The final part of the certificate generation process is to generate and sign the client certificates: &prompt.root; openssl req -days 365 -nodes -new -keyout client.key -out client.csr &prompt.root; openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key Remember to use the same Common Name attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated by the preceding commands. If so, the next step is to edit /usr/local/etc/openldap/slapd.conf and add the following options: TLSCipherSuite HIGH:MEDIUM:+SSLv3 TLSCertificateFile /usr/local/etc/openldap/server.crt TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key TLSCACertificateFile /usr/local/etc/openldap/ca.crt Then, edit /usr/local/etc/openldap/ldap.conf and add the following lines: TLS_CACERT /usr/local/etc/openldap/ca.crt TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3 While editing this file, uncomment the BASE, URI, SIZELIMIT, and TIMELIMIT entries and set them to the desired values. Set the URI to contain ldap:// and ldaps://. Then, add two entries pointing to the certificate authority. When finished, the entries should look similar to the following: BASE dc=example,dc=com URI ldap:// ldaps:// SIZELIMIT 12 TIMELIMIT 15 TLS_CACERT /usr/local/etc/openldap/ca.crt TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3 The default password for the server should then be changed: &prompt.root; slappasswd -h "{SHA}" >> /usr/local/etc/openldap/slapd.conf This command will prompt for the password and, if the process does not fail, a password hash will be added to the end of slapd.conf. Several hashing formats are supported. Refer to the manual page for slappasswd for more information. Next, edit /usr/local/etc/openldap/slapd.conf and add the following lines: password-hash {sha} allow bind_v2 The suffix in this file must be updated to match the BASE used in /usr/local/etc/openldap/ldap.conf, and rootdn should also be set. A recommended value for rootdn is something like cn=Manager,dc=example,dc=com. Before saving this file, place rootpw in front of the password hash output by slappasswd and delete the old rootpw entry.
The end result should look similar to this: TLSCipherSuite HIGH:MEDIUM:+SSLv3 TLSCertificateFile /usr/local/etc/openldap/server.crt TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key TLSCACertificateFile /usr/local/etc/openldap/ca.crt rootpw {SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g= Finally, enable the OpenLDAP service in /etc/rc.conf and set the URI: slapd_enable="YES" slapd_flags="-4 -h ldaps:///" At this point the server can be started and tested: &prompt.root; service slapd start If everything is configured correctly, a search of the directory should show a successful connection with a single response as in this example: &prompt.root; ldapsearch -Z # extended LDIF # # LDAPv3 # base <dc=example,dc=com> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # search result search: 3 result: 32 No such object # numResponses: 1 If the command fails and the configuration looks correct, stop the slapd service and restart it with debugging options: &prompt.root; service slapd stop &prompt.root; /usr/local/libexec/slapd -d -1 Once the service is responding, the directory can be populated using ldapadd. In this example, a file containing this list of users is first created. Each user should use the following format: dn: dc=example,dc=com objectclass: dcObject objectclass: organization o: Example dc: Example dn: cn=Manager,dc=example,dc=com objectclass: organizationalRole cn: Manager To import this file, specify the file name. The following command will prompt for the password specified earlier and the output should look something like this: &prompt.root; ldapadd -Z -D "cn=Manager,dc=example,dc=com" -W -f import.ldif Enter LDAP Password: adding new entry "dc=example,dc=com" adding new entry "cn=Manager,dc=example,dc=com" Verify the data was added by issuing a search on the server using ldapsearch: &prompt.user; ldapsearch -Z # extended LDIF # # LDAPv3 # base <dc=example,dc=com> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # example.com dn: dc=example,dc=com objectClass: dcObject objectClass: organization o: Example dc: Example # Manager, example.com dn: cn=Manager,dc=example,dc=com objectClass: organizationalRole cn: Manager # search result search: 3 result: 0 Success # numResponses: 3 # numEntries: 2 At this point, the server should be configured and functioning properly. Dynamic Host Configuration Protocol (<acronym>DHCP</acronym>) Dynamic Host Configuration Protocol DHCP Internet Systems Consortium (ISC) The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. &os; includes the OpenBSD version of dhclient which is used by the client to obtain the addressing information. &os; does not install a DHCP server, but several servers are available in the &os; Ports Collection. The DHCP protocol is fully described in RFC 2131. Informational resources are also available at isc.org/downloads/dhcp/. This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server. In &os;, the &man.bpf.4; device is needed by both the DHCP server and DHCP client. This device is included in the GENERIC kernel that is installed with &os;. Users who prefer to create a custom kernel need to keep this device if DHCP is used. It should be noted that bpf also allows privileged users to run network packet sniffers on that system. 
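For reference, keeping bpf in a custom kernel only requires carrying over the corresponding line from the GENERIC configuration file:

device		bpf		# Berkeley packet filter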
Configuring a <acronym>DHCP</acronym> Client DHCP client support is included in the &os; installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to for examples of network configuration. UDP When dhclient is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP lease and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in &man.dhcp-options.5;. By default, when a &os; system boots, its DHCP client runs in the background, or asynchronously. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup. Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in synchronous mode prevents this problem as it pauses startup until the DHCP configuration has completed. This line in /etc/rc.conf is used to configure background or asynchronous mode: ifconfig_fxp0="DHCP" This line may already exist if the system was configured to use DHCP during installation. Replace the fxp0 shown in these examples with the name of the interface to be dynamically configured, as described in . To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use SYNCDHCP: ifconfig_fxp0="SYNCDHCP" Additional client options are available. Search for dhclient in &man.rc.conf.5; for details. DHCP configuration files The DHCP client uses the following files: /etc/dhclient.conf The configuration file used by dhclient. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in &man.dhclient.conf.5;. /sbin/dhclient More information about the command itself can be found in &man.dhclient.8;. /sbin/dhclient-script The &os;-specific DHCP client configuration script. It is described in &man.dhclient-script.8;, but should not need any user modification to function properly. /var/db/dhclient.leases.interface The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in &man.dhclient.leases.5;. Installing and Configuring a <acronym>DHCP</acronym> Server This section demonstrates how to configure a &os; system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the net/isc-dhcp42-server package or port. DHCP server DHCP installation The installation of net/isc-dhcp42-server installs a sample configuration file. Copy /usr/local/etc/dhcpd.conf.example to /usr/local/etc/dhcpd.conf and make any edits to this new file. DHCP dhcpd.conf The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients. 
For example, these lines configure the following: option domain-name "example.org"; option domain-name-servers ns1.example.org; option subnet-mask 255.255.255.0; default-lease-time 600; max-lease-time 72400; ddns-update-style none; subnet 10.254.239.0 netmask 255.255.255.224 { range 10.254.239.10 10.254.239.20; option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org; } host fantasia { hardware ethernet 08:00:07:26:c0:a5; fixed-address fantasia.fugue.com; } This option specifies the default search domain that will be provided to clients. Refer to &man.resolv.conf.5; for more information. This option specifies a comma separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses. The subnet mask that will be provided to clients. The default lease expiry time in seconds. A client can be configured to override this value. The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for max-lease-time. The default of disables dynamic DNS updates. Changing this to configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS. This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line. Declares the default gateway that is valid for the network or subnet specified before the opening { bracket. Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request. Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information. This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples. Once the configuration of dhcpd.conf is complete, enable the DHCP server in /etc/rc.conf: dhcpd_enable="YES" dhcpd_ifaces="dc0" Replace the dc0 with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests. Start the server by issuing the following command: &prompt.root; service isc-dhcpd start Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using &man.service.8;. The DHCP server uses the following files. Note that the manual pages are installed with the server software. DHCP configuration files /usr/local/sbin/dhcpd More information about the dhcpd server can be found in dhcpd(8). /usr/local/etc/dhcpd.conf The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in dhcpd.conf(5). /var/db/dhcpd.leases The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description. /usr/local/sbin/dhcrelay This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. 
If this functionality is required, install the net/isc-dhcp42-relay package or port. The installation includes dhcrelay(8) which provides more detail. Domain Name System (<acronym>DNS</acronym>) DNS Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system. BIND In &os; 10, the Berkeley Internet Name Domain (BIND) has been removed from the base system and replaced with Unbound. Unbound as configured in the &os; Base is a local caching resolver. BIND is still available from The Ports Collection as dns/bind99 or dns/bind98. In &os; 9 and lower, BIND is included in &os; Base. The &os; version provides enhanced security features, a new file system layout, and automated &man.chroot.8; configuration. BIND is maintained by the Internet Systems Consortium. resolver reverse DNS root zone The following table describes some of the terms associated with DNS: <acronym>DNS</acronym> Terminology Term Definition Forward DNS Mapping of hostnames to IP addresses. Origin Refers to the domain covered in a particular zone file. named, BIND Common names for the BIND name server package within &os;. Resolver A system process through which a machine queries a name server for zone information. Reverse DNS Mapping of IP addresses to hostnames. Root zone The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory. Zone An individual domain, subdomain, or portion of the DNS administered by the same authority.
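To see forward and reverse DNS in practice, the system resolver can be queried with drill, the utility used elsewhere in this chapter. In this sketch, 8.8.8.8 is only an arbitrary example address, and the output depends on the configured name servers:

&prompt.user; drill www.FreeBSD.org
&prompt.user; drill -x 8.8.8.8

The first command performs a forward lookup (hostname to IP address); the second, with -x, performs the corresponding reverse lookup (IP address to hostname).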
zones examples Examples of zones: . is how the root zone is usually referred to in documentation. org. is a Top Level Domain (TLD) under the root zone. example.org. is a zone under the org. TLD. 1.168.192.in-addr.arpa is a zone referencing all IP addresses which fall under the 192.168.1.* IP address space. As one can see, the more specific part of a hostname appears to its left. For example, example.org. is more specific than org., as org. is more specific than the root zone. The layout of each part of a hostname is much like a file system: the /dev directory falls within the root, and so on. Reasons to Run a Name Server Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers. An authoritative name server is needed when: One wants to serve DNS information to the world, replying authoritatively to queries. A domain, such as example.org, is registered and IP addresses need to be assigned to hostnames under it. An IP address block requires reverse DNS entries (IP to hostname). A backup or second name server, called a slave, will reply to queries. A caching name server is needed when: A local DNS server may cache and respond more quickly than querying an outside name server. When one queries for www.FreeBSD.org, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally. <acronym>DNS</acronym> Server Configuration in &os; 10.0 and Later In &os; 10.0, BIND has been replaced with Unbound. Unbound is a validating caching resolver only. If an authoritative server is needed, many are available from the Ports Collection. Unbound is provided in the &os; base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the &os; Ports Collection. To enable Unbound, add the following to /etc/rc.conf: local_unbound_enable="YES" Any existing nameservers in /etc/resolv.conf will be configured as forwarders in the new Unbound configuration. If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail. Be sure to test each nameserver and remove any that fail the test. The following command will show the trust tree or a failure for a nameserver running on 192.168.1.1: &prompt.user; drill -S FreeBSD.org @192.168.1.1 Once each nameserver is confirmed to support DNSSEC, start Unbound: &prompt.root; service local_unbound onestart This will take care of updating /etc/resolv.conf so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree: &prompt.user; drill -S FreeBSD.org ;; Number of trusted keys: 1 ;; Chasing: freebsd.org. A DNSSEC Trust tree: freebsd.org. (A) |---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256) |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257) |---freebsd.org. (DS keytag: 32659 digest type: 2) |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256) |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257) |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257) |---org. (DS keytag: 21366 digest type: 1) | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) | |---. 
(DNSKEY keytag: 19036 alg: 8 flags: 257) |---org. (DS keytag: 21366 digest type: 2) |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) ;; Chase successful DNS Server Configuration in &os; 9.<replaceable>X</replaceable> and Earlier In &os;, the BIND daemon is called named. File Description &man.named.8; The BIND daemon. &man.rndc.8; Name server control utility. /etc/namedb Directory where BIND zone information resides. /etc/namedb/named.conf Configuration file of the daemon. Depending on how a given zone is configured on the server, the files related to that zone can be found in the master, slave, or dynamic subdirectories of the /etc/namedb directory. These files contain the DNS information that will be given out by the name server in response to queries. Starting BIND BIND starting Since BIND is installed by default, configuring it is relatively simple. The default named configuration is that of a basic resolving name server, running in a &man.chroot.8; environment, and restricted to listening on the local IPv4 loopback address (127.0.0.1). To start the server one time with this configuration, use the following command: &prompt.root; service named onestart To ensure the named daemon is started at boot each time, put the following line into the /etc/rc.conf: named_enable="YES" There are many configuration options for /etc/namedb/named.conf that are beyond the scope of this document. Other startup options for named on &os; can be found in the named_* flags in /etc/defaults/rc.conf and in &man.rc.conf.5;. The section is also a good read. Configuration Files BIND configuration files Configuration files for named currently reside in /etc/namedb directory and will need modification before use unless all that is needed is a simple resolver. This is where most of the configuration will be performed. <filename>/etc/namedb/named.conf</filename> // $FreeBSD$ // // Refer to the named.conf(5) and named(8) man pages, and the documentation // in /usr/share/doc/bind9 for more details. // // If you are going to set up an authoritative server, make sure you // understand the hairy details of how DNS works. Even with // simple mistakes, you can break connectivity for affected parties, // or cause huge amounts of useless Internet traffic. options { // All file and path names are relative to the chroot directory, // if any, and should be fully qualified. directory "/etc/namedb/working"; pid-file "/var/run/named/pid"; dump-file "/var/dump/named_dump.db"; statistics-file "/var/stats/named.stats"; // If named is being used only as a local resolver, this is a safe default. // For named to be accessible to the network, comment this option, specify // the proper IP address, or delete this option. listen-on { 127.0.0.1; }; // If you have IPv6 enabled on this system, uncomment this option for // use as a local resolver. To give access to the network, specify // an IPv6 address, or the keyword "any". // listen-on-v6 { ::1; }; // These zones are already covered by the empty zones listed below. // If you remove the related empty zones below, comment these lines out. disable-empty-zone "255.255.255.255.IN-ADDR.ARPA"; disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; // If you've got a DNS server around at your upstream provider, enter // its IP address here, and enable the line below. 
This will make you // benefit from its cache, thus reduce overall DNS traffic in the Internet. /* forwarders { 127.0.0.1; }; */ // If the 'forwarders' clause is not empty the default is to 'forward first' // which will fall back to sending a query from your local server if the name // servers in 'forwarders' do not have the answer. Alternatively you can // force your name server to never initiate queries of its own by enabling the // following line: // forward only; // If you wish to have forwarding configured automatically based on // the entries in /etc/resolv.conf, uncomment the following line and // set named_auto_forward=yes in /etc/rc.conf. You can also enable // named_auto_forward_only (the effect of which is described above). // include "/etc/namedb/auto_forward.conf"; Just as the comment says, to benefit from an uplink's cache, forwarders can be enabled here. Under normal circumstances, a name server will recursively query the Internet looking at certain name servers until it finds the answer it is looking for. Having this enabled will have it query the uplink's name server (or name server provided) first, taking advantage of its cache. If the uplink name server in question is a heavily trafficked, fast name server, enabling this may be worthwhile. 127.0.0.1 will not work here. Change this IP address to a name server at the uplink. /* Modern versions of BIND use a random UDP port for each outgoing query by default in order to dramatically reduce the possibility of cache poisoning. All users are strongly encouraged to utilize this feature, and to configure their firewalls to accommodate it. AS A LAST RESORT in order to get around a restrictive firewall policy you can try enabling the option below. Use of this option will significantly reduce your ability to withstand cache poisoning attacks, and should be avoided if at all possible. Replace NNNNN in the example with a number between 49160 and 65530. */ // query-source address * port NNNNN; }; // If you enable a local name server, don't forget to enter 127.0.0.1 // first in your /etc/resolv.conf so this server will be queried. // Also, make sure to enable it in /etc/rc.conf. // The traditional root hints mechanism. Use this, OR the slave zones below. zone "." { type hint; file "/etc/namedb/named.root"; }; /* Slaving the following zones from the root name servers has some significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots 3. Greater resilience to any potential root server failure/DDoS On the other hand, this method requires more monitoring than the hints file to be sure that an unexpected failure mode has not incapacitated your server. Name servers that are serving a lot of clients will benefit more from this approach than individual hosts. Use with caution. To use this mechanism, uncomment the entries below, and comment the hint zone above. As documented at http://dns.icann.org/services/axfr/ these zones: "." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET are available for AXFR from these servers on IPv4 and IPv6: xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org */ /* zone "." { type slave; file "/etc/namedb/slave/root.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; zone "arpa" { type slave; file "/etc/namedb/slave/arpa.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. 
}; notify no; }; */ /* Serving the following zones locally will prevent any queries for these zones leaving your network and going to the root name servers. This has two significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots */ // RFCs 1912 and 5735 (and BCP 32 for localhost) zone "localhost" { type master; file "/etc/namedb/master/localhost-forward.db"; }; zone "127.in-addr.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; zone "255.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // RFC 1912-style zone for IPv6 localhost address zone "0.ip6.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; // "This" Network (RFCs 1912 and 5735) zone "0.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Private Use Networks (RFCs 1918 and 5735) zone "10.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "21.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Link-local/APIPA (RFCs 3927 and 5735) zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IETF protocol assignments (RFCs 5735 and 5736) zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // TEST-NET-[1-3] for Documentation (RFCs 5735 and 5737) zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Range for Documentation (RFC 3849) zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Domain Names for Documentation and Testing (BCP 32) zone "test" { type master; file "/etc/namedb/master/empty.db"; }; zone "example" { type master; file "/etc/namedb/master/empty.db"; }; zone "invalid" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.com" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.net" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.org" { type master; file "/etc/namedb/master/empty.db"; }; // Router Benchmark Testing 
(RFCs 2544 and 5735) zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IANA Reserved - Old Class E Space (RFC 5735) zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "244.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Unassigned Addresses (RFC 4291) zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.e.f.ip6.arpa" { type master; file 
"/etc/namedb/master/empty.db"; }; zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 ULA (RFC 4193) zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Link Local (RFC 4291) zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Deprecated Site-Local Addresses (RFC 3879) zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IP6.INT is Deprecated (RFC 4159) zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; }; // NB: Do not use the IP addresses below, they are faked, and only // serve demonstration/documentation purposes! // // Example slave zone config entries. It can be convenient to become // a slave at least for the zone your own domain is in. Ask // your network administrator for the IP address of the responsible // master name server. // // Do not forget to include the reverse lookup zone! // This is named after the first bytes of the IP address, in reverse // order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6. // // Before starting to set up a master zone, make sure you fully // understand how DNS and BIND work. There are sometimes // non-obvious pitfalls. Setting up a slave zone is usually simpler. // // NB: Don't blindly enable the examples below. :-) Use actual names // and addresses instead. /* An example dynamic zone key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "/etc/namedb/dynamic/example.org"; }; */ /* Example of a slave reverse zone zone "1.168.192.in-addr.arpa" { type slave; file "/etc/namedb/slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ In named.conf, these are examples of slave entries for a forward and reverse zone. For each new zone served, a new zone entry must be added to named.conf. For example, the simplest zone entry for example.org can look like: zone "example.org" { type master; file "master/example.org"; }; The zone is a master, as indicated by the statement, holding its zone information in /etc/namedb/master/example.org indicated by the statement. zone "example.org" { type slave; file "slave/example.org"; }; In the slave case, the zone information is transferred from the master name server for the particular zone, and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will have the transferred zone information and will be able to serve it. Zone Files BIND zone files An example master zone file for example.org (existing within /etc/namedb/master/example.org) is as follows: $TTL 3600 ; 1 hour default TTL example.org. IN SOA ns1.example.org. admin.example.org. 
( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ; Negative Response TTL ) ; DNS Servers IN NS ns1.example.org. IN NS ns2.example.org. ; MX Records IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Machine Names localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 ; Aliases www IN CNAME example.org. Note that every hostname ending in a . is an exact hostname, whereas everything without a trailing . is relative to the origin. For example, ns1 is translated into ns1.example.org. The format of a zone file follows: recordname IN recordtype value DNS records The most commonly used DNS records: SOA start of zone authority NS an authoritative name server A a host address CNAME the canonical name for an alias MX mail exchanger PTR a domain name pointer (used in reverse DNS) example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh after 3 hours 3600 ; Retry after 1 hour 604800 ; Expire after 1 week 300 ) ; Negative Response TTL example.org. the domain name, also the origin for this zone file. ns1.example.org. the primary/authoritative name server for this zone. admin.example.org. the responsible person for this zone, email address with @ replaced. (admin@example.org becomes admin.example.org) 2006051501 the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many admins prefer a yyyymmddrr format for the serial number. 2006051501 would mean last modified 05/15/2006, the latter 01 being the first time the zone file has been modified this day. The serial number is important as it alerts slave name servers for a zone when it is updated. IN NS ns1.example.org. This is an NS entry. Every name server that is going to reply authoritatively for the zone must have one of these entries. localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 The A record indicates machine names. As seen above, ns1.example.org would resolve to 192.168.1.2. IN A 192.168.1.1 This line assigns IP address 192.168.1.1 to the current origin, in this case example.org. www IN CNAME @ The canonical name record is usually used for giving aliases to a machine. In the example, www is aliased to the master machine whose name happens to be the same as the domain name example.org (192.168.1.1). CNAMEs can never be used together with another kind of record for the same hostname. MX record IN MX 10 mail.example.org. The MX record indicates which mail servers are responsible for handling incoming mail for the zone. mail.example.org is the hostname of a mail server, and 10 is the priority of that mail server. One can have several mail servers, with priorities of 10, 20 and so on. A mail server attempting to deliver to example.org would first try the highest priority MX (the record with the lowest priority number), then the second highest, etc, until the mail can be properly delivered. For in-addr.arpa zone files (reverse DNS), the same format is used, except with PTR entries instead of A or CNAME. $TTL 3600 1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ) ; Negative Response TTL IN NS ns1.example.org. IN NS ns2.example.org. 1 IN PTR example.org. 2 IN PTR ns1.example.org. 3 IN PTR ns2.example.org. 4 IN PTR mx.example.org. 5 IN PTR mail.example.org. 
This file gives the proper IP address to hostname mappings for the above fictitious domain. It is worth noting that all names on the right side of a PTR record need to be fully qualified (i.e., end in a .). Caching Name Server BIND caching name server A caching name server is a name server whose primary role is to resolve recursive queries. It simply asks queries of its own, and remembers the answers for later use. <acronym role="Domain Name Security Extensions">DNSSEC</acronym> BIND DNS security extensions Domain Name System Security Extensions, or DNSSEC for short, is a suite of specifications to protect resolving name servers from forged DNS data, such as spoofed DNS records. By using digital signatures, a resolver can verify the integrity of the record. Note that DNSSEC only provides integrity via digitally signing the Resource Records (RRs). It provides neither confidentiality nor protection against false end-user assumptions. This means that it cannot protect against people going to example.net instead of example.com. The only thing DNSSEC does is authenticate that the data has not been compromised in transit. The security of DNS is an important step in securing the Internet in general. For more in-depth details of how DNSSEC works, the relevant RFCs are a good place to start. See the list in . The following sections will demonstrate how to enable DNSSEC for an authoritative DNS server and a recursive (or caching) DNS server running BIND 9. While all versions of BIND 9 support DNSSEC, it is necessary to have at least version 9.6.2 in order to be able to use the signed root zone when validating DNS queries. This is because earlier versions lack the required algorithms to enable validation using the root zone key. It is strongly recommended to use the latest version of BIND 9.7 or later to take advantage of automatic key updating for the root key, as well as other features to automatically keep zones signed and signatures up to date. Where configurations differ between 9.6.2 and 9.7 and later, differences will be pointed out. Recursive <acronym>DNS</acronym> Server Configuration Enabling DNSSEC validation of queries performed by a recursive DNS server requires a few changes to named.conf. Before making these changes the root zone key, or trust anchor, must be acquired. Currently the root zone key is not available in a file format BIND understands, so it has to be manually converted into the proper format. The key itself can be obtained by querying the root zone for it using dig. By running &prompt.user; dig +multi +noall +answer DNSKEY . > root.dnskey the key will end up in root.dnskey. The contents should look something like this: . 93910 IN DNSKEY 257 3 8 ( AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh /RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3 LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0= ) ; key id = 19036 . 93910 IN DNSKEY 256 3 8 ( AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt ) ; key id = 34525 Do not be alarmed if the obtained keys differ from this example. They might have changed since these instructions were last updated. This output actually contains two keys. 
The first key in the listing, with the value 257 after the DNSKEY record type, is the one needed. This value indicates that this is a Secure Entry Point (SEP), commonly known as a Key Signing Key (KSK). The second key, with value 256, is a subordinate key, commonly called a Zone Signing Key (ZSK). More on the different key types later in . Now the key must be verified and formatted so that BIND can use it. To verify the key, generate a DS RR set. Create a file containing these RRs with &prompt.user; dnssec-dsfromkey -f root.dnskey . > root.ds These records use SHA-1 and SHA-256 respectively, and should look similar to the following example, where the longer is using SHA-256. . IN DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E . IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5 The SHA-256 RR can now be compared to the digest in https://data.iana.org/root-anchors/root-anchors.xml. To be absolutely sure that the key has not been tampered with the data in the XML file can be verified using the PGP signature in https://data.iana.org/root-anchors/root-anchors.asc. Next, the key must be formatted properly. This differs a little between BIND versions 9.6.2 and 9.7 and later. In version 9.7 support was added to automatically track changes to the key and update it as necessary. This is done using managed-keys as seen in the example below. When using the older version, the key is added using a trusted-keys statement and updates must be done manually. For BIND 9.6.2 the format should look like: trusted-keys { "." 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0="; }; For 9.7 the format will instead be: managed-keys { "." initial-key 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0="; }; The root key can now be added to named.conf either directly or by including a file containing the key. After these steps, configure BIND to do DNSSEC validation on queries by editing named.conf and adding the following to the options directive: dnssec-enable yes; dnssec-validation yes; To verify that it is actually working use dig to make a query for a signed zone using the resolver just configured. A successful reply will contain the AD flag to indicate the data was authenticated. Running a query such as &prompt.user; dig @resolver +dnssec se ds should return the DS RR for the .se zone. In the flags: section the AD flag should be set, as seen in: ... ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 ... The resolver is now capable of authenticating DNS queries. Authoritative <acronym>DNS</acronym> Server Configuration In order to get an authoritative name server to serve a DNSSEC signed zone a little more work is required. A zone is signed using cryptographic keys which must be generated. It is possible to use only one key for this. 
The preferred method however is to have a strong well-protected Key Signing Key (KSK) that is not rotated very often and a Zone Signing Key (ZSK) that is rotated more frequently. Information on recommended operational practices can be found in RFC 4641: DNSSEC Operational Practices. Practices regarding the root zone can be found in DNSSEC Practice Statement for the Root Zone KSK operator and DNSSEC Practice Statement for the Root Zone ZSK operator. The KSK is used to build a chain of authority to the data in need of validation and as such is also called a Secure Entry Point (SEP) key. A message digest of this key, called a Delegation Signer (DS) record, must be published in the parent zone to establish the trust chain. How this is accomplished depends on the parent zone owner. The ZSK is used to sign the zone, and only needs to be published there. To enable DNSSEC for the example.com zone depicted in previous examples, the first step is to use dnssec-keygen to generate the KSK and ZSK key pair. This key pair can utilize different cryptographic algorithms. It is recommended to use RSA/SHA256 for the keys and 2048 bits key length should be enough. To generate the KSK for example.com, run &prompt.user; dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com and to generate the ZSK, run &prompt.user; dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com dnssec-keygen outputs two files, the public and the private keys in files named similar to Kexample.com.+005+nnnnn.key (public) and Kexample.com.+005+nnnnn.private (private). The nnnnn part of the file name is a five digit key ID. Keep track of which key ID belongs to which key. This is especially important when having more than one key in a zone. It is also possible to rename the keys. For each KSK file do: &prompt.user; mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key &prompt.user; mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.private For the ZSK files, substitute KSK for ZSK as necessary. The files can now be included in the zone file, using the $include statement. It should look something like this: $include Kexample.com.+005+nnnnn.KSK.key ; KSK $include Kexample.com.+005+nnnnn.ZSK.key ; ZSK Finally, sign the zone and tell BIND to use the signed zone file. To sign a zone dnssec-signzone is used. The command to sign the zone example.com, located in example.com.db would look similar to &prompt.user; dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key The key supplied to the argument is the KSK and the other key file is the ZSK that should be used in the signing. It is possible to supply more than one KSK and ZSK, which will result in the zone being signed with all supplied keys. This can be needed to supply zone data signed using more than one algorithm. The output of dnssec-signzone is a zone file with all RRs signed. This output will end up in a file with the extension .signed, such as example.com.db.signed. The DS records will also be written to a separate file dsset-example.com. To use this signed zone just modify the zone directive in named.conf to use example.com.db.signed. By default, the signatures are only valid 30 days, meaning that the zone needs to be resigned in about 15 days to be sure that resolvers are not caching records with stale signatures. It is possible to make a script and a cron job to do this. See relevant manuals for details. Be sure to keep private keys confidential, as with all cryptographic keys. 
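As a rough illustration of such a script and cron job, the following re-signs the zone twice a month and tells named to reload it. The paths, key file names, and script location are placeholders carried over from the earlier examples and must be adapted to the actual environment:

#!/bin/sh
# resign-example.com.sh -- re-sign example.com with the existing KSK and ZSK,
# then ask named to pick up the freshly signed zone file.
cd /etc/namedb/master || exit 1
dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK \
    example.com.db Kexample.com.+005+nnnnn.ZSK.key && \
    rndc reload example.com

A matching entry in /etc/crontab could then run the script at 03:00 on the 1st and 15th of each month:

# minute	hour	mday	month	wday	who	command
0	3	1,15	*	*	root	/root/bin/resign-example.com.sh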
When changing a key it is best to include the new key in the zone, while still signing with the old one, and then move over to using the new key to sign. After these steps are done the old key can be removed from the zone. Failure to do this might render the DNS data unavailable for a time, until the new key has propagated through the DNS hierarchy. For more information on key rollovers and other DNSSEC operational issues, see RFC 4641: DNSSEC Operational Practices. Automation Using <acronym>BIND</acronym> 9.7 or Later Beginning with BIND version 9.7, a new feature called Smart Signing was introduced. This feature aims to make the key management and signing process simpler by automating parts of the task. By putting the keys into a directory called a key repository, and using the new option auto-dnssec, it is possible to create a dynamic zone which will be resigned as needed. To update this zone, use nsupdate with its -l (local) option. rndc has also grown the ability to sign zones with keys in the key repository. To tell BIND to use this automatic signing and zone updating for example.com, add the following to named.conf: zone example.com { type master; key-directory "/etc/named/keys"; update-policy local; auto-dnssec maintain; file "/etc/named/dynamic/example.com.zone"; }; After making these changes, generate keys for the zone as explained earlier in this section, put those keys in the key repository given as the argument to the key-directory in the zone configuration, and the zone will be signed automatically. Updates to a zone configured this way must be done using nsupdate, which will take care of re-signing the zone with the new data added. For further details, see the references listed below and the BIND documentation. Security Although BIND is the most common implementation of DNS, there is always the issue of security. Possible and exploitable security holes are sometimes found. While &os; automatically drops named into a &man.chroot.8; environment, there are several other security mechanisms in place which can help fend off possible DNS service attacks. It is always a good idea to read CERT's security advisories and to subscribe to the &a.security-notifications; to stay up to date with the current Internet and &os; security issues. If a problem arises, keeping sources up to date and having a fresh build of named may help. Further Reading BIND/named manual pages: &man.rndc.8; &man.named.8; &man.named.conf.5; &man.nsupdate.1; &man.dnssec-signzone.8; &man.dnssec-keygen.8; Official ISC BIND Page Official ISC BIND Forum O'Reilly DNS and BIND 5th Edition Root DNSSEC DNSSEC Trust Anchor Publication for the Root Zone RFC1034 - Domain Names - Concepts and Facilities RFC1035 - Domain Names - Implementation and Specification RFC4033 - DNS Security Introduction and Requirements RFC4034 - Resource Records for the DNS Security Extensions RFC4035 - Protocol Modifications for the DNS Security Extensions RFC4641 - DNSSEC Operational Practices RFC 5011 - Automated Updates of DNS Security (DNSSEC Trust Anchors)
Apache HTTP Server Murray Stokely Contributed by web servers setting up Apache The open source Apache HTTP Server is the most widely used web server. &os; does not install this web server by default, but it can be installed from the www/apache24 package or port. This section summarizes how to configure and start version 2.x of the Apache HTTP Server on &os;. For more detailed information about Apache 2.X and its configuration directives, refer to httpd.apache.org. Configuring and Starting Apache Apache configuration file In &os;, the main Apache HTTP Server configuration file is installed as /usr/local/etc/apache2x/httpd.conf, where x represents the version number. This ASCII text file begins comment lines with a #. The most frequently modified directives are: ServerRoot "/usr/local" Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the bin and sbin subdirectories of the server root and configuration files are stored in the etc/apache2x subdirectory. ServerAdmin you@example.com Change this to the email address where problems with the server should be reported. This address also appears on some server-generated pages, such as error documents. ServerName www.example.com:80 Allows an administrator to set a hostname which is sent back to clients for the server. For example, www can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change 80 to the alternate port number. DocumentRoot "/usr/local/www/apache2x/data" The directory where documents will be served from. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using apachectl. Running apachectl configtest should return Syntax OK. Apache starting or stopping To launch Apache at system startup, add the following line to /etc/rc.conf: apache24_enable="YES" If Apache should be started with non-default options, the following line may be added to /etc/rc.conf to specify the needed flags: apache24_flags="" If apachectl does not report configuration errors, start httpd now: &prompt.root; service apache24 start The httpd service can be tested by entering http://localhost in a web browser, replacing localhost with the fully-qualified domain name of the machine running httpd. The default web page that is displayed is /usr/local/www/apache24/data/index.html. The Apache configuration can be tested for errors after making subsequent configuration changes while httpd is running using the following command: &prompt.root; service apache24 configtest It is important to note that configtest is not an &man.rc.8; standard, and should not be expected to work for all startup scripts. Virtual Hosting Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be IP-based or name-based. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the client's HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address. To set up Apache to use name-based virtual hosting, add a VirtualHost block for each website.
For example, for the webserver named www.domain.tld with a virtual domain of www.someotherdomain.tld, add the following entries to httpd.conf: <VirtualHost *> ServerName www.domain.tld DocumentRoot /www/domain.tld </VirtualHost> <VirtualHost *> ServerName www.someotherdomain.tld DocumentRoot /www/someotherdomain.tld </VirtualHost> For each virtual host, replace the values for ServerName and DocumentRoot with the values to be used. For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/. Apache Modules Apache modules Apache uses modules to augment the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/ for a complete listing of and the configuration details for the available modules. In &os;, some modules can be compiled with the www/apache24 port. Type make config within /usr/ports/www/apache24 to see which modules are available and which are enabled by default. If the module is not compiled with the port, the &os; Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules. <filename>mod_ssl</filename> web servers secure SSL cryptography The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSLv3) and Transport Layer Security (TLSv1) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate signing authority to run a secure web server on &os;. In &os;, mod_ssl module is enabled by default in both the package and the port. The available configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html. <filename>mod_perl</filename> mod_perl Perl The mod_perl module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time. The mod_perl can be installed using the www/mod_perl2 package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html. <filename>mod_php</filename> Tom Rhodes Written by mod_php PHP PHP: Hypertext Preprocessor (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, &java;, and Perl with the intention of allowing web developers to write dynamically generated webpages quickly. To gain support for PHP5 for the Apache web server, install the www/mod_php5 package or port. This will install and configure the modules required to support dynamic PHP applications. The installation will automatically add this line to /usr/local/etc/apache24/httpd.conf: LoadModule php5_module libexec/apache24/libphp5.so Then, perform a graceful restart to load the PHP module: &prompt.root; apachectl graceful The PHP support provided by www/mod_php5 is limited. Additional support can be installed using the lang/php5-extensions port which provides a menu driven interface to the available PHP extensions. Alternatively, individual extensions can be installed using the appropriate port. For instance, to add PHP support for the MySQL database server, install databases/php5-mysql. 
After installing an extension, the Apache server must be reloaded to pick up the new configuration changes: &prompt.root; apachectl graceful Dynamic Websites web servers dynamic In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails. Django Python Django Django is a BSD-licensed framework designed to allow developers to write high performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation. Django depends on mod_python, and an SQL database engine. In &os;, the www/py-django port automatically installs mod_python and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type make config within /usr/ports/www/py-django, then install the port. Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site. To configure Apache to pass requests for certain URLs to the web application, add the following to httpd.conf, specifying the full path to the project directory: <Location "/"> SetHandler python-program PythonPath "['/dir/to/the/django/packages/'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonAutoReload On PythonDebug On </Location> Refer to https://docs.djangoproject.com/en/1.6/ for more information on how to use Django. Ruby on Rails Ruby on Rails Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On &os;, it can be installed using the www/rubygem-rails package or port. Refer to http://rubyonrails.org/documentation for more information on how to use Ruby on Rails. File Transfer Protocol (<acronym>FTP</acronym>) FTP servers The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. &os; includes FTP server software, ftpd, in the base system. &os; provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to &man.ftpd.8; for more details about the built-in FTP server. Configuration The most important configuration step is deciding which accounts will be allowed access to the FTP server. A &os; system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in /etc/ftpusers. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added. In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished be creating /etc/ftpchroot as described in &man.ftpchroot.5;. This file lists users and groups subject to FTP access restrictions. FTP anonymous To enable anonymous FTP access to the server, create a user named ftp on the &os; system. Users will then be able to log on to the FTP server with a username of ftp or anonymous. 
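As a hedged example of that step, the ftp account might be created with &man.pw.8;, assuming /var/ftp as the anonymous FTP area and a non-interactive shell; both are local policy choices, and &man.ftpd.8; should be consulted for the shell requirements that apply:

&prompt.root; pw useradd ftp -c "Anonymous FTP" -d /var/ftp -s /usr/sbin/nologin -m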
When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call &man.chroot.2; when an anonymous user logs in, to restrict access to only the home directory of the ftp user. There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of /etc/ftpwelcome will be displayed to users before they reach the login prompt. After a successful login, the contents of /etc/ftpmotd will be displayed. Note that the path to this file is relative to the login environment, so the contents of ~ftp/etc/ftpmotd would be displayed for anonymous users. Once the FTP server has been configured, set the appropriate variable in /etc/rc.conf to start the service during boot: ftpd_enable="YES" To start the service now: &prompt.root; service ftpd start Test the connection to the FTP server by typing: &prompt.user; ftp localhost syslog log files FTP The ftpd daemon uses &man.syslog.3; to log messages. By default, the system log daemon will write messages related to FTP in /var/log/xferlog. The location of the FTP log can be modified by changing the following line in /etc/syslog.conf: ftp.info /var/log/xferlog FTP anonymous Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files. It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files can not be read by other anonymous users until they have been reviewed by an administrator. File and Print Services for µsoft.windows; Clients (Samba) Samba server Microsoft Windows file server Windows clients print server Windows clients Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into µsoft.windows; systems. It can be added to non-µsoft.windows; systems by installing the Samba client libraries. The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive and shared printers can be used as if they were local printers. On &os;, the Samba client libraries can be installed using the net/samba-smbclient port or package. The client provides the ability for a &os; system to access SMB/CIFS shares in a µsoft.windows; network. A &os; system can also be configured to act as a Samba server. This allows the administrator to create SMB/CIFS shares on the &os; system which can be accessed by clients running µsoft.windows; or the Samba client libraries. In order to configure a Samba server on &os;, the net/samba36 port or package must first be installed. The rest of this section provides an overview of how to configure a Samba server on &os;. Configuration A default Samba configuration file is installed as /usr/local/share/examples/samba36/smb.conf.default. This file must be copied to /usr/local/etc/smb.conf and customized before Samba can be used. Runtime configuration information for Samba is found in smb.conf, such as definitions of the printers and file system shares that will be shared with &windows; clients. The Samba package includes a web based tool called swat which provides a simple way for configuring smb.conf. Using the Samba Web Administration Tool (SWAT) The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. 
Therefore, inetd must be enabled as shown in . To enable swat, uncomment the following line in /etc/inetd.conf: swat stream tcp nowait/400 root /usr/local/sbin/swat swat As explained in , the inetd configuration must be reloaded after this configuration file is changed. Once swat has been enabled, use a web browser to connect to http://localhost:901. At first login, enter the credentials for root. Once logged in, the main Samba configuration page and the system documentation will be available. Begin configuration by clicking on the Globals tab. The Globals section corresponds to the variables that are set in the [global] section of /usr/local/etc/smb.conf. Global Settings Whether swat is used or /usr/local/etc/smb.conf is edited directly, the first directives encountered when configuring Samba are: workgroup The domain name or workgroup name for the computers that will be accessing this server. netbios name The NetBIOS name by which a Samba server is known. By default it is the same as the first component of the host's DNS name. server string The string that will be displayed in the output of net view and some other networking tools that seek to display descriptive text about the server. Security Settings Two of the most important settings in /usr/local/etc/smb.conf are the security model and the backend password format for client users. The following directives control these options: security The two most common options are security = share and security = user. If the clients use usernames that are the same as their usernames on the &os; machine, user level security should be used. This is the default security policy and it requires clients to first log on before they can access shared resources. In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba. passdb backend NIS+ LDAP SQL database Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is smbpasswd, and that is all that will be covered here. Assuming that the default smbpasswd backend is used, /usr/local/etc/samba/smbpasswd must be created to allow Samba to authenticate clients. To provide &unix; user accounts access from &windows; clients, use the following command to add each required user to that file: &prompt.root; smbpasswd -a username The recommended backend is now tdbsam. If this backend is selected, use the following command to add user accounts: &prompt.root; pdbedit -a -u username This section has only mentioned the most commonly used settings. Refer to the Official Samba HOWTO for additional information about the available configuration options. Starting <application>Samba</application> To enable Samba at boot time, add the following line to /etc/rc.conf: samba_enable="YES" Alternately, its services can be started separately: nmbd_enable="YES" smbd_enable="YES" To start Samba now: &prompt.root; service samba start Starting SAMBA: removing stale tdbs : Starting nmbd. Starting smbd. Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by samba_enable. If winbind name resolution services are enabled in smb.conf, the winbindd daemon is started as well. 
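Once the daemons are running, one quick way to confirm that the server is answering requests is to list its shares with smbclient from net/samba-smbclient. This is only a suggested check, and the shares returned depend entirely on what the local smb.conf defines:

&prompt.user; smbclient -L localhost -U username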
Samba may be stopped at any time by typing: &prompt.root; service samba stop Samba is a complex software suite with functionality that allows broad integration with µsoft.windows; networks. For more information about functionality beyond the basic configuration described here, refer to http://www.samba.org. Clock Synchronization with NTP NTP ntpd Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. The Network Time Protocol (NTP) is one way to provide clock accuracy in a network. &os; includes &man.ntpd.8; which can be configured to query other NTP servers in order to synchronize the clock on that machine or to provide time services to other computers in the network. The servers which are queried can be local to the network or provided by an ISP. In addition, an online list of publicly accessible NTP servers is available. When choosing a public NTP server, select one that is geographically close and review its usage policy. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. This section describes how to configure ntpd on &os;. Further documentation can be found in /usr/share/doc/ntp/ in HTML format. <acronym>NTP</acronym> Configuration NTP ntp.conf On &os;, the built-in ntpd can be used to synchronize a system's clock. To enable ntpd at boot time, add ntpd_enable="YES" to /etc/rc.conf. Additional variables can be specified in /etc/rc.conf. Refer to &man.rc.conf.5; and &man.ntpd.8; for details. This application reads /etc/ntp.conf to determine which NTP servers to query. Here is a simple example of an /etc/ntp.conf: Sample <filename>/etc/ntp.conf</filename> server ntplocal.example.com prefer server timeserver.example.org server ntp2a.example.net driftfile /var/db/ntp.drift The format of this file is described in &man.ntp.conf.5;. The server option specifies which servers to query, with one server listed on each line. If a server entry includes prefer, that server is preferred over other servers. A response from a preferred server will be discarded if it differs significantly from other servers' responses; otherwise it will be used. The prefer argument should only be used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware. The driftfile entry specifies which file is used to store the system clock's frequency offset. ntpd uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time. This file also stores information about previous responses from NTP servers. Since this file contains internal information for NTP, it should not be modified. By default, an NTP server is accessible to any network host. The restrict option in /etc/ntp.conf can be used to control which systems can access the server. For example, to deny all machines from accessing the NTP server, add the following line to /etc/ntp.conf: restrict default ignore This will also prevent access from other NTP servers. If there is a need to synchronize with an external NTP server, allow only that specific server. Refer to &man.ntp.conf.5; for more information. 
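As a sketch of that approach, the following /etc/ntp.conf lines ignore all hosts by default but lift the restrictions for a single upstream server; the address 203.0.113.5 is only a placeholder for the real server's address:

restrict default ignore
restrict 203.0.113.5
server 203.0.113.5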
To allow machines within the network to synchronize their clocks with the server, but ensure they are not allowed to configure the server or be used as peers to synchronize against, instead use: restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap where 192.168.1.0 is the local network address and 255.255.255.0 is the network's subnet mask. Multiple restrict entries are supported. For more details, refer to the Access Control Support subsection of &man.ntp.conf.5;. Once ntpd_enable="YES" has been added to /etc/rc.conf, ntpd can be started now without rebooting the system by typing: &prompt.root; service ntpd start Using <acronym>NTP</acronym> with a <acronym>PPP</acronym> Connection ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with filter directives in /etc/ppp/ppp.conf. For example: set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 For more details, refer to the PACKET FILTERING section in &man.ppp.8; and the examples in /usr/share/examples/ppp/. Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine. <acronym>iSCSI</acronym> Initiator and Target Configuration iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level. In iSCSI terminology, the system that shares the storage is known as the target. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage. The clients which access the iSCSI storage are called initiators. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in /dev/ and the device must be separately formatted and mounted. Beginning with 10.0-RELEASE, &os; provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a &os; system as a target or an initiator. Configuring an <acronym>iSCSI</acronym> Target The native iSCSI target is supported starting with &os; 10.0-RELEASE. To use iSCSI in older versions of &os;, install a userspace target from the Ports Collection, such as net/istgt. This chapter only describes the native target. To configure an iSCSI target, create the /etc/ctl.conf configuration file, add a line to /etc/rc.conf to make sure the &man.ctld.8; daemon is automatically started at boot, and then start the daemon. The following is an example of a simple /etc/ctl.conf configuration file. Refer to &man.ctl.conf.5; for a more complete description of this file's available options. portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group no-authentication portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The first entry defines the pg0 portal group. Portal groups define which network addresses the &man.ctld.8; daemon will listen on. 
The discovery-auth-group no-authentication entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure &man.ctld.8; to listen on all IPv4 (listen 0.0.0.0) and IPv6 (listen [::]) addresses on the default port of 3260. It is not necessary to define a portal group as there is a built-in portal group called default. In this case, the difference between default and pg0 is that with default, target discovery is always denied, while with pg0, it is always allowed. The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where iqn.2012-06.com.example:target0 is the target name. This target name is suitable for testing purposes. For actual use, change com.example to the real domain name, reversed. The 2012-06 represents the year and month of acquiring control of that domain name, and target0 can be any value. Any number of targets can be defined in this configuration file. The auth-group no-authentication line allows all initiators to connect to the specified target and portal-group pg0 makes the target reachable through the pg0 portal group. The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The path /data/target0-0 line defines the full path to a file or zvol backing the LUN. That path must exist before starting &man.ctld.8;. The second line is optional and specifies the size of the LUN. Next, to make sure the &man.ctld.8; daemon is started at boot, add this line to /etc/rc.conf: ctld_enable="YES" To start &man.ctld.8; now, run this command: &prompt.root; service ctld start As the &man.ctld.8; daemon is started, it reads /etc/ctl.conf. If this file is edited after the daemon starts, use this command so that the changes take effect immediately: &prompt.root; service ctld reload Authentication The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows: auth-group ag0 { chap username1 secretsecret chap username2 anothersecret } portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group ag0 portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The auth-group section defines username and password pairs. An initiator trying to connect to iqn.2012-06.com.example:target0 must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set discovery-auth-group to a defined auth-group name instead of no-authentication. It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry: target iqn.2012-06.com.example:target0 { portal-group pg0 chap username1 secretsecret lun 0 { path /data/target0-0 size 4G } } Configuring an <acronym>iSCSI</acronym> Initiator The iSCSI initiator described in this section is supported starting with &os; 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to &man.iscontrol.8;. The iSCSI initiator requires that the &man.iscsid.8; daemon is running. 
This daemon does not use a configuration file. To start it automatically at boot, add this line to /etc/rc.conf: iscsid_enable="YES" To start &man.iscsid.8; now, run this command: &prompt.root; service iscsid start Connecting to a target can be done with or without an /etc/iscsi.conf configuration file. This section demonstrates both types of connections. Connecting to a Target Without a Configuration File To connect an initiator to a single target, specify the IP address of the portal and the name of the target: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 To verify if the connection succeeded, run iscsictl without any arguments. The output should look similar to this: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0 In this example, the iSCSI session was successfully established, with /dev/da0 representing the attached LUN. If the iqn.2012-06.com.example:target0 target exports more than one LUN, multiple device nodes will be shown in that section of the output: Connected: da0 da1 da2. Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the &man.iscsid.8; daemon is not running: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8) The following message suggests a networking problem, such as a wrong IP address or port: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.11 Connection refused This message means that the specified target name is wrong: Target name Target portal State iqn.2012-06.com.example:atrget0 10.10.10.10 Not found This message means that the target requires authentication: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed To specify a CHAP username and secret, use this syntax: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret Connecting to a Target with a Configuration File To connect using a configuration file, create /etc/iscsi.conf with contents like this: t0 { TargetAddress = 10.10.10.10 TargetName = iqn.2012-06.com.example:target0 AuthMethod = CHAP chapIName = user chapSecret = secretsecret } The t0 specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The TargetAddress and TargetName are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown. To connect to the defined target, specify the nickname: &prompt.root; iscsictl -An t0 Alternately, to connect to all targets defined in the configuration file, use: &prompt.root; iscsictl -Aa To make the initiator automatically connect to all targets in /etc/iscsi.conf, add the following to /etc/rc.conf: iscsictl_enable="YES" iscsictl_flags="-Aa"
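Because the attached LUN is presented as a raw, unformatted disk, it still needs a file system before it can hold data. As a minimal sketch, assuming the LUN appeared as /dev/da0 and that a plain UFS file system on the whole device is acceptable:

&prompt.root; newfs /dev/da0
&prompt.root; mkdir -p /mnt/iscsi
&prompt.root; mount /dev/da0 /mnt/iscsi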
Index: head/en_US.ISO8859-1/books/handbook/ports/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/ports/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/ports/chapter.xml (revision 46272) @@ -1,1889 +1,1889 @@ Installing Applications: Packages and Ports Synopsis ports packages &os; is bundled with a rich collection of system tools as part of the base system. In addition, &os; provides two complementary technologies for installing third-party software: the &os; Ports Collection, for installing from source, and packages, for installing from pre-built binaries. Either method may be used to install software from local media or from the network. After reading this chapter, you will know: The difference between binary packages and ports. How to find third-party software that has been ported to &os;. How to manage binary packages using pkg. How to build third-party software from source using the Ports Collection. How to find the files installed with the application for post-installation configuration. What to do if a software installation fails. Overview of Software Installation The typical steps for installing third-party software on a &unix; system include: Find and download the software, which might be distributed in source code format or as a binary. Unpack the software from its distribution format. This is typically a tarball compressed with &man.compress.1;, &man.gzip.1;, or &man.bzip2.1;. Locate the documentation in INSTALL, README or some file in a doc/ subdirectory and read up on how to install the software. If the software was distributed in source format, compile it. This may involve editing a Makefile or running a configure script. Test and install the software. If the software package was not deliberately ported, or tested to work, on &os;, the source code may need editing in order for it to install and run properly. At the time of this writing, over &os.numports; third-party applications have been ported to &os;. A &os; package contains pre-compiled copies of all the commands for an application, as well as any configuration files and documentation. A package can be manipulated with the pkg commands, such as pkg install. A &os; port is a collection of files designed to automate the process of compiling an application from source code. The files that comprise a port contain all the necessary information to automatically download, extract, patch, compile, and install the application. The ports system can also be used to generate packages which can be manipulated with the &os; package management commands. Both packages and ports understand dependencies. If a package or port is used to install an application and a dependent library is not already installed, the library will automatically be installed first. While the two technologies are similar, packages and ports each have their own strengths. Select the technology that meets your requirements for installing a particular application. Package Benefits A compressed package tarball is typically smaller than the compressed tarball containing the source code for the application. Packages do not require compilation time. For large applications, such as Mozilla, KDE, or GNOME, this can be important on a slow system. Packages do not require any understanding of the process involved in compiling software on &os;. Port Benefits Packages are normally compiled with conservative options because they have to run on the maximum number of systems. 
By compiling from the port, one can change the compilation options. Some applications have compile-time options relating to which features are installed. For example, Apache can be configured with a wide variety of different built-in options. In some cases, multiple packages will exist for the same application to specify certain settings. For example, Ghostscript is available as a ghostscript package and a ghostscript-nox11 package, depending on whether or not Xorg is installed. Creating multiple packages rapidly becomes impossible if an application has more than one or two different compile-time options. The licensing conditions of some software forbid binary distribution. Such software must be distributed as source code which must be compiled by the end-user. Some people do not trust binary distributions or prefer to read through source code in order to look for potential problems. Source code is needed in order to apply custom patches. To keep track of updated ports, subscribe to the &a.ports; and the &a.ports-bugs;. Before installing any application, check http://vuxml.freebsd.org/ for security issues related to the application or install ports-mgmt/portaudit. Once installed, type portaudit -F -a to check all installed applications for known vulnerabilities. When pkg is being used the audit functionality is built in. Execute pkg audit -F to get a report on vulnerable packages. The remainder of this chapter explains how to use packages and ports to install and manage third-party software on &os;. Finding Software &os;'s list of available applications is growing all the time. There are a number of ways to find software to install: The &os; web site maintains an up-to-date searchable list of all the available applications, at http://www.FreeBSD.org/ports/. The ports can be searched by application name or by software category. FreshPorts Dan Langille maintains FreshPorts.org which provides a comprehensive search utility and also tracks changes to the applications in the Ports Collection. Registered users can create a customized watch list in order to receive an automated email when their watched ports are updated. SourceForge If finding a particular application becomes challenging, try searching a site like SourceForge.net or GitHub.com then check back at the &os; site to see if the application has been ported. pkg search To search the binary package repository for an application: &prompt.root; pkg search subversion git-subversion-1.9.2 java-subversion-1.8.8_2 p5-subversion-1.8.8_2 py27-hgsubversion-1.6 py27-subversion-1.8.8_2 ruby-subversion-1.8.8_2 subversion-1.8.8_2 subversion-book-4515 subversion-static-1.8.8_2 subversion16-1.6.23_4 subversion17-1.7.16_2 Package names include the version number and in case of ports based on python, the version number of the version of python the package was built with. Some ports also have multiple versions available. In case of subversion there are different versions available, as well as different compile options. In this case, the staticly linked version of subversion. When indicating which package to install, it is best to specify the application by the port origin, which is the path in the ports tree. 
Repeat the pkg search with to list the origin of each package: &prompt.root; pkg search -o subversion devel/git-subversion java/java-subversion devel/p5-subversion devel/py-hgsubversion devel/py-subversion devel/ruby-subversion devel/subversion16 devel/subversion17 devel/subversion devel/subversion-book devel/subversion-static Searching by shell globs, regular expressions, exact match, by description, or any other field in the repository database is also supported by pkg search. After installing ports-mgmt/pkg or ports-mgmt/pkg-devel, see &man.pkg-search.8; for more details. If the Ports Collection is already installed, there are several methods to query the local version of the ports tree. To find out which category a port is in, type whereis file, where file is the program to be installed: &prompt.root; whereis lsof lsof: /usr/ports/sysutils/lsof Alternately, an &man.echo.1; statement can be used: &prompt.root; echo /usr/ports/*/*lsof* /usr/ports/sysutils/lsof Note that this will also return any matched files downloaded into the /usr/ports/distfiles directory. Another way to find software is by using the Ports Collection's built-in search mechanism. To use the search feature, cd to /usr/ports then run make search name=program-name where program-name is the name of the software. For example, to search for lsof: &prompt.root; cd /usr/ports &prompt.root; make search name=lsof Port: lsof-4.88.d,8 Path: /usr/ports/sysutils/lsof Info: Lists information about open files (similar to fstat(1)) Maint: ler@lerctr.org Index: sysutils B-deps: R-deps: The built-in search mechanism uses a file of index information. If a message indicates that the INDEX is required, run make fetchindex to download the current index file. With the INDEX present, make search will be able to perform the requested search. The Path: line indicates where to find the port. To receive less information, use the quicksearch feature: &prompt.root; cd /usr/ports &prompt.root; make quicksearch name=lsof Port: lsof-4.88.d,8 Path: /usr/ports/sysutils/lsof Info: Lists information about open files (similar to fstat(1)) For more in-depth searching, use make search key=string or make quicksearch key=string, where string is some text to search for. The text can be in comments, descriptions, or dependencies in order to find ports which relate to a particular subject when the name of the program is unknown. When using search or quicksearch, the search string is case-insensitive. Searching for LSOF will yield the same results as searching for lsof. Using <application>pkg</application> for Binary Package Management pkg is the next generation replacement for the traditional &os; package management tools, offering many features that make dealing with binary packages faster and easier. pkg is not a replacement for port management tools like ports-mgmt/portmaster or ports-mgmt/portupgrade. These tools can be used to install third-party software from both binary packages and the Ports Collection, while pkg installs only binary packages. Getting Started with <application>pkg</application> &os; 8.4 and later includes a bootstrap utility which can be used to download and install pkg, along with its manual pages. To bootstrap the system, run: &prompt.root; /usr/sbin/pkg For earlier &os; versions, pkg must instead be installed from the Ports Collection or as a binary package. 
To install the port, run: &prompt.root; cd /usr/ports/ports-mgmt/pkg &prompt.root; make &prompt.root; make install clean When upgrading an existing system that originally used the older package system, the database must be converted to the new format, so that the new tools are aware of the already installed packages. Once pkg has been installed, the package database must be converted from the traditional format to the new format by running this command: &prompt.root; pkg2ng This step is not required for new installations that do not yet have any third-party software installed. This step is not reversible. Once the package database has been converted to the pkg format, the traditional pkg_* tools should no longer be used. The package database conversion may emit errors as the contents are converted to the new version. Generally, these errors can be safely ignored. However, a list of third-party software that was not successfully converted will be listed after pkg2ng has finished and these applications must be manually reinstalled. To ensure that the &os; Ports Collection registers new software with pkg, and not the traditional packages format, &os; versions earlier than 10.X require this line in /etc/make.conf: WITH_PKGNG= yes The pkg package management system uses a package repository for most operations. The default package repository location is defined in /usr/local/etc/pkg.conf or by the PACKAGESITE environment variable, which overrides the configuration file. Additional pkg configuration options are described in pkg.conf(5). Usage information for pkg is available in pkg(8) or by running pkg without additional arguments. Each pkg command argument is documented in a command-specific manual page. To read the manual page for pkg install, for example, run either of these commands: &prompt.root; pkg help install &prompt.root; man pkg-install The rest of this section demonstrates common binary package management tasks which can be performed using pkg. Each demonstrated command provides many switches to customize its use. Refer to a command's help or man page for details and more examples. Obtaining Information About Installed Packages Information about the packages installed on a system can be viewed by running pkg info which, when run without any switches, will list the package version for either all installed packages or the specified package. For example, to see which version of pkg is installed, run: &prompt.root; pkg info pkg pkg-1.1.4_1 Installing and Removing Packages To install a binary package use the following command, where packagename is the name of the package to install: &prompt.root; pkg install packagename This command uses repository data to determine which version of the software to install and if it has any uninstalled dependencies. For example, to install curl: &prompt.root; pkg install curl Updating repository catalogue /usr/local/tmp/All/curl-7.31.0_1.txz 100% of 1181 kB 1380 kBps 00m01s /usr/local/tmp/All/ca_root_nss-3.15.1_1.txz 100% of 288 kB 1700 kBps 00m00s Updating repository catalogue The following 2 packages will be installed: Installing ca_root_nss: 3.15.1_1 Installing curl: 7.31.0_1 The installation will require 3 MB more space 0 B to be downloaded Proceed with installing packages [y/N]: y Checking integrity... done [1/2] Installing ca_root_nss-3.15.5_1... done [2/2] Installing curl-7.31.0_1... 
done Cleaning up cache files...Done The new package and any additional packages that were installed as dependencies can be seen in the installed packages list: &prompt.root; pkg info ca_root_nss-3.15.5_1 The root certificate bundle from the Mozilla Project curl-7.31.0_1 Non-interactive tool to get files from FTP, GOPHER, HTTP(S) servers pkg-1.1.4_6 New generation package manager Packages that are no longer needed can be removed with pkg delete. For example: &prompt.root; pkg delete curl The following packages will be deleted: curl-7.31.0_1 The deletion will free 3 MB Proceed with deleting packages [y/N]: y [1/1] Deleting curl-7.31.0_1... done Upgrading Installed Packages Packages that are outdated can be found with pkg version. If a local ports tree does not exist, pkg-version(8) will use the remote repository catalogue. Otherwise, the local ports tree will be used to identify package versions. Installed packages can be upgraded to their latest versions by typing pkg upgrade. This command will compare the installed versions with those available in the repository catalogue. When finished, it will list the applications that have newer versions. Type y to proceed with the upgrade or n to cancel the upgrade. Auditing Installed Packages Occasionally, software vulnerabilities may be discovered in third-party applications. To address this, pkg includes a built-in auditing mechanism. To determine if there are any known vulnerabilities for the software installed on the system, run: &prompt.root; pkg audit -F Automatically Removing Leaf Dependencies Removing a package may leave behind dependencies which are no longer required. Unneeded packages that were installed as dependencies can be automatically detected and removed using: &prompt.root; pkg autoremove Packages to be autoremoved: ca_root_nss-3.13.5 The autoremoval will free 723 kB Proceed with autoremoval of packages [y/N]: y Deinstalling ca_root_nss-3.15.1_1... done Backing Up the Package Database Unlike the traditional package management system, pkg includes its own package database backup mechanism. To manually back up the contents of the package database, run the following command, replacing pkgng.db with a suitable file name: &prompt.root; pkg backup -d pkgng.db Additionally, pkg includes a &man.periodic.8; script to automatically perform a daily back up of the package database. This functionality is enabled if daily_backup_pkgdb_enable is set to YES in &man.periodic.conf.5;. To disable the periodic script from backing up the package database, set daily_backup_pkgdb_enable to NO in &man.periodic.conf.5;. To restore the contents of a previous package database backup, run: &prompt.root; pkg backup -r /path/to/pkgng.db Removing Stale Packages By default, pkg stores binary packages in a cache directory defined by PKG_CACHEDIR in pkg.conf(5). When upgrading packages with pkg upgrade, old versions of the upgraded packages are not automatically removed. To remove these outdated binary packages, run: &prompt.root; pkg clean Modifying Package Metadata Software within the &os; Ports Collection can undergo major version number changes. To address this, pkg has a built-in command to update package origins. This can be useful, for example, if lang/php5 is renamed to lang/php53 so that lang/php5 can now represent version 5.4. 
To change the package origin for the above example, run: &prompt.root; pkg set -o lang/php5:lang/php53 As another example, to update lang/ruby18 to lang/ruby19, run: &prompt.root; pkg set -o lang/ruby18:lang/ruby19 As a final example, to change the origin of the libglut shared libraries from graphics/libglut to graphics/freeglut, run: &prompt.root; pkg set -o graphics/libglut:graphics/freeglut When changing package origins, it is important to reinstall packages that are dependent on the package with the modified origin. To force a reinstallation of dependent packages, run: &prompt.root; pkg install -Rf graphics/freeglut Using the Ports Collection The Ports Collection is a set of Makefiles, patches, and description files stored in /usr/ports. This set of files is used to compile and install applications on &os;. Before an application can be compiled using a port, the Ports Collection must first be installed. If it was not installed during the installation of &os;, use one of the following methods to install it: Portsnap Method The base system of &os; includes Portsnap. This is a fast and user-friendly tool for retrieving the Ports Collection and is the recommended choice for most users. This utility connects to a &os; site, verifies the secure key, and downloads a new copy of the Ports Collection. The key is used to verify the integrity of all downloaded files. To download a compressed snapshot of the Ports Collection into /var/db/portsnap: &prompt.root; portsnap fetch When running Portsnap for the first time, extract the snapshot into /usr/ports: &prompt.root; portsnap extract After the first use of Portsnap has been completed as shown above, /usr/ports can be updated as needed by running: &prompt.root; portsnap fetch &prompt.root; portsnap update When using fetch, the extract or the update operation may be run consecutively, like so: &prompt.root; portsnap fetch update Subversion Method If more control over the ports tree is needed or if local changes need to be maintained, Subversion can be used to obtain the Ports Collection. Refer to the Subversion Primer for a detailed description of Subversion. Subversion must be installed before it can be used to check out the ports tree. If a copy of the ports tree is already present, install Subversion like this: &prompt.root; cd /usr/ports/devel/subversion &prompt.root; make install clean If the ports tree is not available, or pkg is being used to manage packages, Subversion can be installed as a package: &prompt.root; pkg install subversion Check out a copy of the ports tree. For better performance, replace svn0.us-east.FreeBSD.org with a Subversion mirror close to your geographic location: &prompt.root; svn checkout https://svn0.us-east.FreeBSD.org/ports/head /usr/ports As needed, update /usr/ports after the initial Subversion checkout: &prompt.root; svn update /usr/ports The Ports Collection installs a series of directories representing software categories with each category having a subdirectory for each application. Each subdirectory, also referred to as a ports skeleton, contains a set of files that tell &os; how to compile and install that program. Each port skeleton includes these files and directories: Makefile: contains statements that specify how the application should be compiled and where its components should be installed. distinfo: contains the names and checksums of the files that must be downloaded to build the port. files/: this directory contains any patches needed for the program to compile and install on &os;. 
This directory may also contain other files used to build the port. pkg-descr: provides a more detailed description of the program. pkg-plist: a list of all the files that will be installed by the port. It also tells the ports system which files to remove upon deinstallation. Some ports include pkg-message or other files to handle special situations. For more details on these files, and on ports in general, refer to the &os; Porter's Handbook. The port does not include the actual source code, also known as a distfile. The extract portion of building a port will automatically save the downloaded source to /usr/ports/distfiles. Installing Ports ports installing This section provides basic instructions on using the Ports Collection to install or remove software. The detailed description of available make targets and environment variables is available in &man.ports.7;. Before compiling any port, be sure to update the Ports Collection as described in the previous section. Since the installation of any third-party software can introduce security vulnerabilities, it is recommended to first check http://vuxml.freebsd.org/ for known security issues related to the port. Alternately, if ports-mgmt/portaudit is installed, run portaudit -F before installing a new port. This command can be configured to automatically perform a security audit and an update of the vulnerability database during the daily security system check. For more information, refer to the manual page for portaudit and &man.periodic.8;. Using the Ports Collection assumes a working Internet connection. It also requires superuser privilege. Some third-party DVD products such as the &os; Toolkit from freebsdmall.com contain distfiles which can be used to install ports without an Internet connection. Mount the DVD on /cdrom. If you use a different mount point, set the CD_MOUNTPTS make variable. The needed distfiles will be automatically used if they are present on the disk. However, the licenses of a few ports do not allow their inclusion on the DVD. This could be because a registration form needs to be filled out before downloading or redistribution is not allowed. In order to install a port not included on the DVD, a connection to the Internet will still be required. To compile and install the port, change to the directory of the port to be installed, then type make install at the prompt. Messages will indicate the progress: &prompt.root; cd /usr/ports/sysutils/lsof &prompt.root; make install >> lsof_4.88D.freebsd.tar.gz doesn't seem to exist in /usr/ports/distfiles/. >> Attempting to fetch from ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/. ===> Extracting for lsof-4.88 ... [extraction output snipped] ... >> Checksum OK for lsof_4.88D.freebsd.tar.gz. ===> Patching for lsof-4.88.d,8 ===> Applying FreeBSD patches for lsof-4.88.d,8 ===> Configuring for lsof-4.88.d,8 ... [configure output snipped] ... ===> Building for lsof-4.88.d,8 ... [compilation output snipped] ... ===> Installing for lsof-4.88.d,8 ... [installation output snipped] ... ===> Generating temporary packing list ===> Compressing manual pages for lsof-4.88.d,8 ===> Registering installation for lsof-4.88.d,8 ===> SECURITY NOTE: This port has installed the following binaries which execute with increased privileges. /usr/local/sbin/lsof &prompt.root; Since lsof is a program that runs with increased privileges, a security warning is displayed as it is installed. Once the installation is complete, the prompt will be returned. 
Some shells keep a cache of the commands that are available in the directories listed in the PATH environment variable, to speed up lookup operations for the executable file of these commands. Users of the tcsh shell should type rehash so that a newly installed command can be used without specifying its full path. Use hash -r instead for the sh shell. Refer to the documentation for the shell for more information. During installation, a working subdirectory is created which contains all the temporary files used during compilation. Removing this directory saves disk space and minimizes the chance of problems later when upgrading to the newer version of the port: &prompt.root; make clean ===> Cleaning for lsof-88.d,8 &prompt.root; To save this extra step, instead use make install clean when compiling the port. Customizing Ports Installation Some ports provide build options which can be used to enable or disable application components, provide security options, or allow for other customizations. Examples include www/firefox, security/gpgme, and mail/sylpheed-claws. If the port depends upon other ports which have configurable options, it may pause several times for user interaction as the default behavior is to prompt the user to select options from a menu. To avoid this, run make config-recursive within the port skeleton to do this configuration in one batch. Then, run make install [clean] to compile and install the port. When using config-recursive, the list of ports to configure are gathered by the all-depends-list target. It is recommended to run make config-recursive until all dependent ports options have been defined, and ports options screens no longer appear, to be certain that all dependency options have been configured. There are several ways to revisit a port's build options menu in order to add, remove, or change these options after a port has been built. One method is to cd into the directory containing the port and type make config. Another option is to use make showconfig. Another option is to execute make rmconfig which will remove all selected options and allow you to start over. All of these options, and others, are explained in great detail in &man.ports.7;. The ports system uses &man.fetch.1; to download the source files, which supports various environment variables. The FTP_PASSIVE_MODE, FTP_PROXY, and FTP_PASSWORD variables may need to be set if the &os; system is behind a firewall or FTP/HTTP proxy. See &man.fetch.3; for the complete list of supported variables. For users who cannot be connected to the Internet all the time, make fetch can be run within /usr/ports, to fetch all distfiles, or within a category, such as /usr/ports/net, or within the specific port skeleton. Note that if a port has any dependencies, running this command in a category or ports skeleton will not fetch the distfiles of ports from another category. Instead, use make fetch-recursive to also fetch the distfiles for all the dependencies of a port. In rare cases, such as when an organization has a local distfiles repository, the MASTER_SITES variable can be used to override the download locations specified in the Makefile. When using, specify the alternate location: &prompt.root; cd /usr/ports/directory &prompt.root; make MASTER_SITE_OVERRIDE= \ ftp://ftp.organization.org/pub/FreeBSD/ports/distfiles/ fetch The WRKDIRPREFIX and PREFIX variables can override the default working and target directories. 
For example: &prompt.root; make WRKDIRPREFIX=/usr/home/example/ports install will compile the port in /usr/home/example/ports and install everything under /usr/local. &prompt.root; make PREFIX=/usr/home/example/local install will compile the port in /usr/ports and install it in /usr/home/example/local. And: &prompt.root; make WRKDIRPREFIX=../ports PREFIX=../local install will combine the two. These can also be set as environmental variables. Refer to the manual page for your shell for instructions on how to set an environmental variable. Removing Installed Ports ports removing Installed ports can be uninstalled using pkg delete. Examples for using this command can be found in . Alternately, make deinstall can be run in the port's directory: &prompt.root; cd /usr/ports/sysutils/lsof make deinstall ===> Deinstalling for sysutils/lsof ===> Deinstalling Deinstallation has been requested for the following 1 packages: lsof-4.88.d,8 The deinstallation will free 229 kB [1/1] Deleting lsof-4.88.d,8... done It is recommended to read the messages as the port is uninstalled. If the port has any applications that depend upon it, this information will be displayed but the uninstallation will proceed. In such cases, it may be better to reinstall the application in order to prevent broken dependencies. Upgrading Ports ports upgrading Over time, newer versions of software become available in the Ports Collection. This section describes how to determine which software can be upgraded and how to perform the upgrade. To determine if newer versions of installed ports are available, ensure that the latest version of the ports tree is installed, using the updating command described in either or . On &os; 10 and later, or if the system has been converted to pkg, the following command will list the installed ports which are out of date: &prompt.root; pkg version -l "<" For &os; 9.X and lower, the following command will list the installed ports that are out of date: &prompt.root; pkg_version -l "<" Before attempting an upgrade, read /usr/ports/UPDATING from the top of the file to the date closest to the last time ports were upgraded or the system was installed. This file describes various issues and additional steps users may encounter and need to perform when updating a port, including such things as file format changes, changes in locations of configuration files, or any incompatibilities with previous versions. Make note of any instructions which match any of the ports that need upgrading and follow these instructions when performing the upgrade. To perform the actual upgrade, use either Portmaster or Portupgrade. Upgrading Ports Using <application>Portmaster</application> portmaster The ports-mgmt/portmaster package or port is the recommended tool for upgrading installed ports as it is designed to use the tools installed with &os; without depending upon other ports. It uses the information in /var/db/pkg/ to determine which ports to upgrade. To install this utility as a port: &prompt.root; cd /usr/ports/ports-mgmt/portmaster &prompt.root; make install clean Portmaster defines four categories of ports: Root port: has no dependencies and is not a dependency of any other ports. Trunk port: has no dependencies, but other ports depend upon it. Branch port: has dependencies and other ports depend upon it. Leaf port: has dependencies but no other ports depend upon it. 
To list these categories and search for updates: &prompt.root; portmaster -L ===>>> Root ports (No dependencies, not depended on) ===>>> ispell-3.2.06_18 ===>>> screen-4.0.3 ===>>> New version available: screen-4.0.3_1 ===>>> tcpflow-0.21_1 ===>>> 7 root ports ... ===>>> Branch ports (Have dependencies, are depended on) ===>>> apache22-2.2.3 ===>>> New version available: apache22-2.2.8 ... ===>>> Leaf ports (Have dependencies, not depended on) ===>>> automake-1.9.6_2 ===>>> bash-3.1.17 ===>>> New version available: bash-3.2.33 ... ===>>> 32 leaf ports ===>>> 137 total installed ports ===>>> 83 have new versions available This command is used to upgrade all outdated ports: &prompt.root; portmaster -a By default, Portmaster will make a backup package before deleting the existing port. If the installation of the new version is successful, Portmaster will delete the backup. Using will instruct Portmaster not to automatically delete the backup. Adding will start Portmaster in interactive mode, prompting for confirmation before upgrading each port. Many other options are available. Read through the manual page for portmaster(8) for details regarding their usage. If errors are encountered during the upgrade process, add to upgrade and rebuild all ports: &prompt.root; portmaster -af Portmaster can also be used to install new ports on the system, upgrading all dependencies before building and installing the new port. To use this function, specify the location of the port in the Ports Collection: &prompt.root; portmaster shells/bash Upgrading Ports Using Portupgrade portupgrade Another utility that can be used to upgrade ports is Portupgrade, which is available as the ports-mgmt/portupgrade package or port. This utility installs a suite of applications which can be used to manage ports. However, it is dependent upon Ruby. To install the port: &prompt.root; cd /usr/ports/ports-mgmt/portupgrade &prompt.root; make install clean Before performing an upgrade using this utility, it is recommended to scan the list of installed ports using pkgdb -F and to fix all the inconsistencies it reports. To upgrade all the outdated ports installed on the system, use portupgrade -a. Alternately, include to be asked for confirmation of every individual upgrade: &prompt.root; portupgrade -ai To upgrade only a specified application instead of all available ports, use portupgrade pkgname. It is very important to include to first upgrade all the ports required by the given application: &prompt.root; portupgrade -R firefox If is included, Portupgrade searches for available packages in the local directories listed in PKG_PATH. If none are available locally, it then fetches packages from a remote site. If packages can not be found locally or fetched remotely, Portupgrade will use ports. To avoid using ports entirely, specify . This last set of options tells Portupgrade to abort if no packages are available: &prompt.root; portupgrade -PP gnome2 To just fetch the port distfiles, or packages, if is specified, without building or installing anything, use . For further information on all of the available switches, refer to the manual page for portupgrade. Ports and Disk Space ports disk-space Using the Ports Collection will use up disk space over time. After building and installing a port, running make clean within the ports skeleton will clean up the temporary work directory. If Portmaster is used to install a port, it will automatically remove this directory unless is specified. 
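Work directories left over from many builds can also be removed without any extra tools by running the clean target from the top of the ports tree, although walking every port skeleton this way can take some time:

&prompt.root; cd /usr/ports
&prompt.root; make clean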
If Portupgrade is installed, this command will remove all work directories found within the local copy of the Ports Collection: &prompt.root; portsclean -C In addition, a lot of outdated source distribution files will collect in /usr/ports/distfiles over time. If Portupgrade is installed, this command will delete all the distfiles that are no longer referenced by any ports: &prompt.root; portsclean -D To use Portupgrade to remove all distfiles not referenced by any port currently installed on the system: &prompt.root; portsclean -DD If Portmaster is installed, use: &prompt.root; portmaster --clean-distfiles By default, this command is interactive and will prompt the user to confirm if a distfile should be deleted. In addition to these commands, the ports-mgmt/pkg_cutleaves package or port automates the task of removing installed ports that are no longer needed. Building Packages with Poudriere Poudriere is a BSD-licensed utility for creating and testing &os; packages. It uses &os; jails to set up isolated compilation environments. These jails can be used to build packages for versions of &os; that are different from the system on which it is installed, and also to build packages for i386 if the host is an &arch.amd64; system. Once the packages are built, they are in a layout identical to the official mirrors. These packages are usable by &man.pkg.8; and other package management tools. Poudriere is installed using the ports-mgmt/poudriere package or port. The installation includes a sample configuration file /usr/local/etc/poudriere.conf.sample. Copy this file to /usr/local/etc/poudriere.conf. Edit the copied file to suit the local configuration. While ZFS is not required on the system running poudriere, it is beneficial. When ZFS is used, ZPOOL must be specified in /usr/local/etc/poudriere.conf and FREEBSD_HOST should be set to a nearby mirror. Defining CCACHE_DIR enables the use of devel/ccache to cache compilation and reduce build times for frequently-compiled code. It may be convenient to put poudriere datasets in an isolated tree mounted at /poudriere. Defaults for the other configuration values are adequate. The number of processor cores detected is used to define how many builds should run in parallel. Supply enough virtual memory, either with RAM or swap space. If virtual memory runs out, compiling jails will stop and be torn down, resulting in weird error messages. Initialize Jails and Port Trees After configuration, initialize poudriere so that it installs a jail with the required &os; tree and a ports tree. Specify a name for the jail using -j and the &os; version with -v. On systems running &os;/&arch.amd64;, the architecture can be set with -a to either i386 or amd64. The default is the architecture shown by uname. &prompt.root; poudriere jail -c -j 10amd64 -v 10.0-RELEASE ====>> Creating 10amd64 fs... done ====>> Fetching base.txz for FreeBSD 10.0-RELEASE amd64 /poudriere/jails/10amd64/fromftp/base.txz 100% of 59 MB 1470 kBps 00m42s ====>> Extracting base.txz... done ====>> Fetching src.txz for FreeBSD 10.0-RELEASE amd64 /poudriere/jails/10amd64/fromftp/src.txz 100% of 107 MB 1476 kBps 01m14s ====>> Extracting src.txz... done ====>> Fetching games.txz for FreeBSD 10.0-RELEASE amd64 /poudriere/jails/10amd64/fromftp/games.txz 100% of 865 kB 734 kBps 00m01s ====>> Extracting games.txz...
done ====>> Fetching lib32.txz for FreeBSD 10.0-RELEASE amd64 /poudriere/jails/10amd64/fromftp/lib32.txz 100% of 14 MB 1316 kBps 00m12s ====>> Extracting lib32.txz... done ====>> Cleaning up... done ====>> Jail 10amd64 10.0-RELEASE amd64 is ready to be used &prompt.root; poudriere ports -c -p local ====>> Creating local fs... done ====>> Extracting portstree "local"... Looking up portsnap.FreeBSD.org mirrors... 7 mirrors found. Fetching public key from ec2-eu-west-1.portsnap.freebsd.org... done. Fetching snapshot tag from ec2-eu-west-1.portsnap.freebsd.org... done. Fetching snapshot metadata... done. Fetching snapshot generated at Tue Feb 11 01:07:15 CET 2014: 94a3431f0ce567f6452ffde4fd3d7d3c6e1da143efec76 100% of 69 MB 1246 kBps 00m57s Extracting snapshot... done. Verifying snapshot integrity... done. Fetching snapshot tag from ec2-eu-west-1.portsnap.freebsd.org... done. Fetching snapshot metadata... done. Updating from Tue Feb 11 01:07:15 CET 2014 to Tue Feb 11 16:05:20 CET 2014. Fetching 4 metadata patches... done. Applying metadata patches... done. Fetching 0 metadata files... done. Fetching 48 patches. (48/48) 100.00% done. done. Applying patches... done. Fetching 1 new ports or files... done. /poudriere/ports/tester/CHANGES /poudriere/ports/tester/COPYRIGHT [...] Building new INDEX files... done. On a single computer, poudriere can build ports with multiple configurations, in multiple jails, and from different port trees. Custom configurations for these combinations are called sets. See the CUSTOMIZATION section of &man.poudriere.8; for details after ports-mgmt/poudriere or ports-mgmt/poudriere-devel is installed. The basic configuration shown here puts a single jail-, port-, and set-specific make.conf in /usr/local/etc/poudriere.d. The filename in this example is created by combining the jail name, port name, and set name: 10amd64-local-workstation-make.conf. The system make.conf and this new file are combined at build time to create the make.conf used by the build jail. Packages to be built are entered in 10amd64-local-workstation-pkglist: editors/emacs devel/git ports-mgmt/pkg ... Options and dependencies for the specified ports are configured: &prompt.root; poudriere options -j 10amd64 -p local -z workstation -f 10amd64-local-workstation-pkglist Finally, packages are built and a package repository is created: &prompt.root; poudriere bulk -j 10amd64 -p local -z workstation -f 10amd64-local-workstation-pkglist Ctrl+t displays the current state of the build. Poudriere also builds files in /poudriere/logs/bulk/jailname that can be used with a web server to display build information. Packages are now available for installation from the poudriere repository. For more information on using poudriere, see &man.poudriere.8; and the main web site. Post-Installation Considerations Regardless of whether the software was installed from a binary package or port, most third-party applications require some level of configuration after installation. The following commands and locations can be used to help determine what was installed with the application. Most applications install at least one default configuration file in /usr/local/etc. In the case where an application has a large number of configuration files, a subdirectory will be created to hold them. Often, sample configuration files are installed which end with a suffix such as .sample. The configuration files should be reviewed and possibly edited to meet the system's needs.
To edit a sample file, first copy it without the .sample extension. Applications which provide documentation will install it into /usr/local/share/doc and many applications also install manual pages. This documentation should be consulted before continuing. Some applications run services which must be added to /etc/rc.conf before starting the application. These applications usually install a startup script in /usr/local/etc/rc.d. See Starting Services for more information. Users of &man.csh.1; should run rehash to rebuild the known binary list in the shells PATH. Use pkg info to determine which files, man pages, and binaries were installed with the application. Dealing with Broken Ports When a port does not build or install, try the following: Search to see if there is a fix pending for the port in the Problem Report database. If so, implementing the proposed fix may fix the issue. Ask the maintainer of the port for help. Type make maintainer in the ports skeleton or read the port's Makefile to find the maintainer's email address. Remember to include the $FreeBSD: line from the port's Makefile and the output leading up to the error in the email to the maintainer. Some ports are not maintained by an individual but instead by a mailing list. Many, but not all, of these addresses look like freebsd-listname@FreeBSD.org. Take this into account when sending an email. In particular, ports shown as maintained by ports@FreeBSD.org are not maintained by a specific individual. Instead, any fixes and support come from the general community who subscribe to that mailing list. More volunteers are always needed! If there is no response to the email, use &man.send-pr.1; to submit a bug report using the instructions in Writing &os; Problem Reports. Fix it! The Porter's Handbook includes detailed information on the ports infrastructure so that you can fix the occasional broken port or even submit your own! Install the package instead of the port using the instructions in . Index: head/en_US.ISO8859-1/books/handbook/ppp-and-slip/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/ppp-and-slip/chapter.xml (revision 46271) +++ head/en_US.ISO8859-1/books/handbook/ppp-and-slip/chapter.xml (revision 46272) @@ -1,1679 +1,1679 @@ <acronym>PPP</acronym> Synopsis PPP &os; supports the Point-to-Point (PPP) protocol which can be used to establish a network or Internet connection using a dial-up modem. This chapter describes how to configure modem-based communication services in &os;. After reading this chapter, you will know: How to configure, use, and troubleshoot a PPP connection. How to set up PPP over Ethernet (PPPoE). How to set up PPP over ATM (PPPoA). PPP PPP over Ethernet Before reading this chapter, you should: Be familiar with basic network terminology. Understand the basics and purpose of a dial-up connection and PPP. Configuring <acronym>PPP</acronym> &os; provides built-in support for managing dial-up PPP connections using &man.ppp.8;. The default &os; kernel provides support for tun which is used to interact with a modem hardware. Configuration is performed by editing at least one configuration file, and configuration files containing examples are provided. Finally, ppp is used to start and manage connections. In order to use a PPP connection, the following items are needed: A dial-up account with an Internet Service Provider (ISP). A dial-up modem. The dial-up number for the ISP. The login name and password assigned by the ISP. 
The IP address of one or more DNS servers. Normally, the ISP provides these addresses. If it did not, &os; can be configured to use DNS negotiation. If any of the required information is missing, contact the ISP. The following information may be supplied by the ISP, but is not necessary: The IP address of the default gateway. If this information is unknown, the ISP will automatically provide the correct value during connection setup. When configuring PPP on &os;, this address is referred to as HISADDR. The subnet mask. If the ISP has not provided one, 255.255.255.255 will be used in the &man.ppp.8; configuration file. static IP address If the ISP has assigned a static IP address and hostname, it should be input into the configuration file. Otherwise, this information will be automatically provided during connection setup. The rest of this section demonstrates how to configure &os; for common PPP connection scenarios. The required configuration file is /etc/ppp/ppp.conf and additional files and examples are available in /usr/share/examples/ppp/. Throughout this section, many of the file examples display line numbers. These line numbers have been added to make it easier to follow the discussion and are not meant to be placed in the actual file. When editing a configuration file, proper indentation is important. Lines that end in a : start in the first column (beginning of the line) while all other lines should be indented as shown using spaces or tabs. Basic Configuration PPP with static IP addresses In order to configure a PPP connection, first edit /etc/ppp/ppp.conf with the dial-in information for the ISP. This file is described as follows: 1 default: 2 set log Phase Chat LCP IPCP CCP tun command 3 ident user-ppp VERSION 4 set device /dev/cuau0 5 set speed 115200 6 set dial "ABORT BUSY ABORT NO\\sCARRIER TIMEOUT 5 \ 7 \"\" AT OK-AT-OK ATE1Q0 OK \\dATDT\\T TIMEOUT 40 CONNECT" 8 set timeout 180 9 enable dns 10 11 provider: 12 set phone "(123) 456 7890" 13 set authname foo 14 set authkey bar 15 set timeout 300 16 set ifaddr x.x.x.x/0 y.y.y.y/0 255.255.255.255 0.0.0.0 17 add default HISADDR Line 1: Identifies the default entry. Commands in this entry (lines 2 through 9) are executed automatically when ppp is run. Line 2: Enables verbose logging parameters for testing the connection. Once the configuration is working satisfactorily, this line should be reduced to: set log phase tun Line 3: Displays the version of &man.ppp.8; to the PPP software running on the other side of the connection. Line 4: Identifies the device to which the modem is connected, where COM1 is /dev/cuau0 and COM2 is /dev/cuau1. Line 5: Sets the connection speed. If 115200 does not work on an older modem, try 38400 instead. Lines 6 & 7: The dial string written as an expect-send syntax. Refer to &man.chat.8; for more information. Note that this command continues onto the next line for readability. Any command in ppp.conf may do this if the last character on the line is \. Line 8: Sets the idle timeout for the link in seconds. Line 9: Instructs the peer to confirm the DNS settings. If the local network is running its own DNS server, this line should be commented out, by adding a # at the beginning of the line, or removed. Line 10: A blank line for readability. Blank lines are ignored by &man.ppp.8;. Line 11: Identifies an entry called provider. This could be changed to the name of the ISP so that can be used to start the connection. Line 12: Use the phone number for the ISP. 
Multiple phone numbers may be specified using the colon (:) or pipe character (|) as a separator. To rotate through the numbers, use a colon. To always attempt to dial the first number first and only use the other numbers if the first number fails, use the pipe character. Always enclose the entire set of phone numbers between quotation marks (") to prevent dialing failures. Lines 13 & 14: Use the user name and password for the ISP. Line 15: Sets the default idle timeout in seconds for the connection. In this example, the connection will be closed automatically after 300 seconds of inactivity. To prevent a timeout, set this value to zero. Line 16: Sets the interface addresses. The values used depend upon whether a static IP address has been obtained from the ISP or if it instead negotiates a dynamic IP address during connection. If the ISP has allocated a static IP address and default gateway, replace x.x.x.x with the static IP address and replace y.y.y.y with the IP address of the default gateway. If the ISP has only provided a static IP address without a gateway address, replace y.y.y.y with 10.0.0.2/0. If the IP address changes whenever a connection is made, change this line to the following value. This tells &man.ppp.8; to use the IP Configuration Protocol (IPCP) to negotiate a dynamic IP address: set ifaddr 10.0.0.1/0 10.0.0.2/0 255.255.255.255 0.0.0.0 Line 17: Keep this line as-is as it adds a default route to the gateway. The HISADDR will automatically be replaced with the gateway address specified on line 16. It is important that this line appears after line 16. Depending upon whether &man.ppp.8; is started manually or automatically, an /etc/ppp/ppp.linkup file may also need to be created which contains the following lines. This file is required when running ppp in -auto mode. It is used after the connection has been established. At this point, the IP address will have been assigned and it is now possible to add the routing table entries. When creating this file, make sure that provider matches the value demonstrated in line 11 of ppp.conf. provider: add default HISADDR This file is also needed when the default gateway address is guessed in a static IP address configuration. In this case, remove line 17 from ppp.conf and create /etc/ppp/ppp.linkup with the above two lines. More examples for this file can be found in /usr/share/examples/ppp/. By default, the ppp command must be run as the root user. To change this default, add the account of the user who should run ppp to the network group in /etc/group. Then, give the user access to one or more entries in /etc/ppp/ppp.conf using the allow command. For example, to give fred and mary permission to only the provider: entry, add this line to the provider: section: allow users fred mary To give the specified users access to all entries, put that line in the default section instead. Receiving Incoming Calls PPP receiving incoming calls When configuring &man.ppp.8; to receive incoming calls on a machine connected to a Local Area Network (LAN), decide if packets should be forwarded to the LAN. If so, allocate the connecting system an IP address from the LAN's subnet, and add the enable proxy line to /etc/ppp/ppp.conf. Also, confirm that /etc/rc.conf contains the following line: gateway_enable="YES" Refer to &man.ppp.8; and /usr/share/examples/ppp/ppp.conf.sample for more details. The following steps will also be required: Create an entry in /etc/passwd (using the &man.vipw.8; program).
Create a profile in this user's home directory that runs ppp -direct direct-server or similar. Create an entry in /etc/ppp/ppp.conf. The direct-server example should suffice. Create an entry in /etc/ppp/ppp.linkup. PPP Shells for Dynamic IP Users PPP shells Create a file called /etc/ppp/ppp-shell containing the following: #!/bin/sh IDENT=`echo $0 | sed -e 's/^.*-\(.*\)$/\1/'` CALLEDAS="$IDENT" TTY=`tty` if [ x$IDENT = xdialup ]; then IDENT=`basename $TTY` fi echo "PPP for $CALLEDAS on $TTY" echo "Starting PPP for $IDENT" exec /usr/sbin/ppp -direct $IDENT This script should be executable. Now make a symbolic link called ppp-dialup to this script using the following commands: &prompt.root; ln -s ppp-shell /etc/ppp/ppp-dialup Use this script as the shell for all dial-up users. This is an example from /etc/passwd for a dial-up PPP user: pchilds:*:1011:300:Peter Childs PPP:/home/ppp:/etc/ppp/ppp-dialup Create a /home/ppp directory that is world readable containing the following 0 byte files: -r--r--r-- 1 root wheel 0 May 27 02:23 .hushlogin -r--r--r-- 1 root wheel 0 May 27 02:22 .rhosts which prevents /etc/motd from being displayed. PPP Shells for Static IP Users PPP shells Create the ppp-shell file as above, and for each account with statically assigned IPs create a symbolic link to ppp-shell. For example, to route /24 CIDR networks for the dial-up customers fred, sam, and mary, type: &prompt.root; ln -s /etc/ppp/ppp-shell /etc/ppp/ppp-fred &prompt.root; ln -s /etc/ppp/ppp-shell /etc/ppp/ppp-sam &prompt.root; ln -s /etc/ppp/ppp-shell /etc/ppp/ppp-mary Each of these users' dial-up accounts should have their shell set to the symbolic link created above (for example, mary's shell should be /etc/ppp/ppp-mary). Setting Up ppp.conf for Dynamic IP Users The /etc/ppp/ppp.conf file should contain something along the lines of: default: set debug phase lcp chat set timeout 0 ttyu0: set ifaddr 203.14.100.1 203.14.100.20 255.255.255.255 enable proxy ttyu1: set ifaddr 203.14.100.1 203.14.100.21 255.255.255.255 enable proxy The indenting is important. The default: section is loaded for each session. For each dial-up line enabled in /etc/ttys create an entry similar to the one for ttyu0: above. Each line should get a unique IP address from the pool of IP addresses for dynamic users. Setting Up ppp.conf for Static IP Users Along with the contents of the sample /usr/share/examples/ppp/ppp.conf above, add a section for each of the statically assigned dial-up users: fred: set ifaddr 203.14.100.1 203.14.101.1 255.255.255.255 sam: set ifaddr 203.14.100.1 203.14.102.1 255.255.255.255 mary: set ifaddr 203.14.100.1 203.14.103.1 255.255.255.255 The file /etc/ppp/ppp.linkup should also contain routing information for each static IP user if required. The line below would add a route for the 203.14.101.0/24 network via the client's ppp link. fred: add 203.14.101.0 netmask 255.255.255.0 HISADDR sam: add 203.14.102.0 netmask 255.255.255.0 HISADDR mary: add 203.14.103.0 netmask 255.255.255.0 HISADDR Advanced Configuration DNS NetBIOS PPP Microsoft extensions It is possible to configure PPP to supply DNS and NetBIOS nameserver addresses on demand. To enable these extensions with PPP version 1.x, the following lines might be added to the relevant section of /etc/ppp/ppp.conf.
enable msext set ns 203.14.100.1 203.14.100.2 set nbns 203.14.100.5 And for PPP version 2 and above: accept dns set dns 203.14.100.1 203.14.100.2 set nbns 203.14.100.5 This will tell the clients the primary and secondary name server addresses, and a NetBIOS nameserver host. In version 2 and above, if the set dns line is omitted, PPP will use the values found in /etc/resolv.conf. PAP and CHAP Authentication PAP CHAP Some ISPs set their system up so that the authentication part of the connection is done using either the PAP or CHAP authentication mechanisms. If this is the case, the ISP will not give a login: prompt at connection, but will start talking PPP immediately. PAP is less secure than CHAP, but security is not normally an issue here as passwords, although being sent as plain text with PAP, are being transmitted down a serial line only. There is not much room for crackers to eavesdrop. The following alterations must be made: 13 set authname MyUserName 14 set authkey MyPassword 15 set login Line 13: This line specifies the PAP/CHAP user name. Insert the correct value for MyUserName. Line 14: This line specifies the PAP/CHAP password. Insert the correct value for MyPassword. You may want to add an additional line, such as: 16 accept PAP or 16 accept CHAP to make it obvious that this is the intention, but PAP and CHAP are both accepted by default. Line 15: The ISP will not normally require a login to the server when using PAP or CHAP. Therefore, disable the set login string. Using PPP Network Address Translation Capability PPP NAT PPP has the ability to use internal NAT without kernel diverting capabilities. This functionality may be enabled by the following line in /etc/ppp/ppp.conf: nat enable yes Alternatively, NAT may be enabled by the command-line option -nat. There is also an /etc/rc.conf knob named ppp_nat, which is enabled by default. When using this feature, it may be useful to include the following /etc/ppp/ppp.conf options to enable forwarding of incoming connections: nat port tcp 10.0.0.2:ftp ftp nat port tcp 10.0.0.2:http http or, to not trust the outside at all: nat deny_incoming yes Final System Configuration PPP configuration While ppp is now configured, some edits still need to be made to /etc/rc.conf. Working from the top down in this file, make sure the hostname= line is set: hostname="foo.example.com" If the ISP has supplied a static IP address and name, use this name as the host name. Look for the network_interfaces variable. To configure the system to dial the ISP on demand, make sure the tun0 device is added to the list, otherwise remove it. network_interfaces="lo0 tun0" ifconfig_tun0= The ifconfig_tun0 variable should be empty, and a file called /etc/start_if.tun0 should be created. This file should contain the line: ppp -auto mysystem This script is executed at network configuration time, starting the ppp daemon in automatic mode. If this machine acts as a gateway, consider including -nat. Refer to the manual page for further details. Make sure that the router program is set to NO with the following line in /etc/rc.conf: router_enable="NO" routed It is important that the routed daemon is not started, as routed tends to delete the default routing table entries created by ppp. It is probably a good idea to ensure that the sendmail_flags line does not include the -q option, otherwise sendmail will attempt to do a network lookup every now and then, possibly causing your machine to dial out.
You may try: sendmail_flags="-bd" sendmail The downside is that sendmail is forced to re-examine the mail queue whenever the ppp link is brought up. To automate this, include !bg in ppp.linkup: 1 provider: 2 delete ALL 3 add 0 0 HISADDR 4 !bg sendmail -bd -q30m SMTP An alternative is to set up a dfilter to block SMTP traffic. Refer to the sample files for further details. Using ppp All that is left is to reboot the machine. After rebooting, either type: &prompt.root; ppp and then dial provider to start the PPP session, or, to configure ppp to establish sessions automatically when there is outbound traffic and start_if.tun0 does not exist, type: &prompt.root; ppp -auto provider It is possible to talk to the ppp program while it is running in the background, but only if a suitable diagnostic port has been set up. To do this, add the following line to the configuration: set server /var/run/ppp-tun%d DiagnosticPassword 0177 This will tell PPP to listen to the specified &unix; domain socket, asking clients for the specified password before allowing access. The %d in the name is replaced with the tun device number that is in use. Once a socket has been set up, the &man.pppctl.8; program may be used in scripts that wish to manipulate the running program. Configuring Dial-in Services mgetty AutoPPP LCP A good description of enabling dial-up services using &man.getty.8; can be found elsewhere in this Handbook. An alternative to getty is the comms/mgetty+sendfax port, a smarter version of getty designed with dial-up lines in mind. The advantage of using mgetty is that it actively talks to modems, meaning that if a port is turned off in /etc/ttys then the modem will not answer the phone. Later versions of mgetty (from 0.99beta onwards) also support the automatic detection of PPP streams, allowing clients scriptless access to the server. Refer to http://mgetty.greenie.net/doc/mgetty_toc.html for more information on mgetty. By default the comms/mgetty+sendfax port comes with the AUTO_PPP option enabled, allowing mgetty to detect the LCP phase of PPP connections and automatically spawn off a ppp shell. However, since the default login/password sequence does not occur, it is necessary to authenticate users using either PAP or CHAP. This section assumes the user has successfully compiled and installed the comms/mgetty+sendfax port on his system. Ensure that /usr/local/etc/mgetty+sendfax/login.config has the following: /AutoPPP/ - - /etc/ppp/ppp-pap-dialup This tells mgetty to run ppp-pap-dialup for detected PPP connections. Create an executable file called /etc/ppp/ppp-pap-dialup containing the following: #!/bin/sh exec /usr/sbin/ppp -direct pap$IDENT For each dial-up line enabled in /etc/ttys, create a corresponding entry in /etc/ppp/ppp.conf. This will happily co-exist with the definitions we created above. pap: enable pap set ifaddr 203.14.100.1 203.14.100.20-203.14.100.40 enable proxy Each user logging in with this method will need to have a username/password in the /etc/ppp/ppp.secret file, or alternatively add the following option to authenticate users via PAP from the /etc/passwd file. enable passwdauth To assign some users a static IP number, specify the number as the third argument in /etc/ppp/ppp.secret. See /usr/share/examples/ppp/ppp.secret.sample for examples. Troubleshooting PPP Connections PPP troubleshooting This section covers a few issues which may arise when using PPP over a modem connection. Some ISPs present the ssword prompt while others present password.
If the ppp script is not written accordingly, the login attempt will fail. The most common way to debug ppp connections is by connecting manually as described in this section. Check the Device Nodes When using a custom kernel, make sure to include the following line in the kernel configuration file: device uart The uart device is already included in the GENERIC kernel, so no additional steps are necessary in this case. Just check the dmesg output for the modem device with: &prompt.root; dmesg | grep uart This should display some pertinent output about the uart devices. These are the COM ports we need. If the modem acts like a standard serial port, it should be listed on uart1, or COM2. If so, a kernel rebuild is not required. When matching up, if the modem is on uart1, the modem device would be /dev/cuau1. Connecting Manually Connecting to the Internet by manually controlling ppp is quick, easy, and a great way to debug a connection or just get information on how the ISP treats ppp client connections. Let's start PPP from the command line. Note that in all of our examples we will use example as the hostname of the machine running PPP. To start ppp: &prompt.root; ppp ppp ON example> set device /dev/cuau1 This second command sets the modem device to cuau1. ppp ON example> set speed 115200 This sets the connection speed to 115,200 bps. ppp ON example> enable dns This tells ppp to configure the resolver and add the nameserver lines to /etc/resolv.conf. If ppp cannot determine the hostname, it can manually be set later. ppp ON example> term This switches to terminal mode in order to manually control the modem. deflink: Entering terminal mode on /dev/cuau1 type '~h' for help at OK atdt123456789 Use at to initialize the modem, then use atdt and the number for the ISP to begin the dial-in process. CONNECT Confirmation of the connection. If there are going to be any connection problems unrelated to hardware, this is where we will attempt to resolve them. ISP Login:myusername At this prompt, reply with the username that was provided by the ISP. ISP Pass:mypassword At this prompt, reply with the password that was provided by the ISP. Just like logging into &os;, the password will not echo. Shell or PPP:ppp Depending on the ISP, this prompt might not appear. If it does, it is asking whether to use a shell on the provider or to start ppp. In this example, ppp was selected in order to establish an Internet connection. Ppp ON example> Notice that in this example the first p has been capitalized. This shows that we have successfully connected to the ISP. PPp ON example> We have successfully authenticated with our ISP and are waiting for the assigned IP address. PPP ON example> We have made an agreement on an IP address and successfully completed our connection. PPP ON example> add default HISADDR Here we add our default route. We need to do this before we can talk to the outside world as currently the only established connection is with the peer. If this fails due to existing routes, put a bang character ! in front of the add. Alternatively, set this before making the actual connection and it will negotiate a new route accordingly. If everything went well we should now have an active connection to the Internet, which could be thrown into the background using Ctrl+z. If PPP returns to ppp then the connection has been lost. This is good to know because it shows the connection status. Capital P's represent a connection to the ISP and lowercase p's show that the connection has been lost.
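Before moving on to debugging, a quick sanity check of the new link is to look at the addresses negotiated on the tunnel interface and to ping a host by IP address. This is only a sketch; tun0 is assumed to be the interface created for this session, and 203.0.113.1 is a placeholder for any address known to be reachable: &prompt.user; ifconfig tun0 &prompt.user; ping -c 3 203.0.113.1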
Debugging If a connection cannot be established, turn hardware flow CTS/RTS to off using set ctsrts off. This is mainly the case when connected to some PPP-capable terminal servers, where PPP hangs when it tries to write data to the communication link, and waits for a Clear To Send (CTS) signal which may never come. When using this option, include set accmap as it may be required to defeat hardware dependent on passing certain characters from end to end, most of the time XON/XOFF. Refer to &man.ppp.8; for more information on this option and how it is used. An older modem may need set parity even. Parity is set at none by default, but is used for error checking, with a large increase in traffic, on older modems. PPP may not return to the command mode, which is usually a negotiation error where the ISP is waiting for negotiation to begin. At this point, using ~p will force ppp to start sending the configuration information. If a login prompt never appears, PAP or CHAP authentication is most likely required. To use PAP or CHAP, add the following options to PPP before going into terminal mode: ppp ON example> set authname myusername Where myusername should be replaced with the username that was assigned by the ISP. ppp ON example> set authkey mypassword Where mypassword should be replaced with the password that was assigned by the ISP. If a connection is established but no domain names seem to resolve, try to &man.ping.8; an IP address. If there is 100 percent (100%) packet loss, it is likely that a default route was not assigned. Double check that add default HISADDR was set during the connection. If a connection can be made to a remote IP address, it is possible that a resolver address has not been added to /etc/resolv.conf. This file should look like: domain example.com nameserver x.x.x.x nameserver y.y.y.y Where x.x.x.x and y.y.y.y should be replaced with the IP addresses of the ISP's DNS servers. To configure &man.syslog.3; to provide logging for the PPP connection, make sure this line exists in /etc/syslog.conf: !ppp *.* /var/log/ppp.log
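As a hedged follow-up, syslogd will typically only write to a log file that already exists, and it needs to re-read its configuration after /etc/syslog.conf is edited. Assuming the daemon is managed with &man.service.8;, something like the following should take care of both: &prompt.root; touch /var/log/ppp.log &prompt.root; service syslogd restart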
The profile name (service tag) will be used in the PPPoE configuration entry in ppp.conf as the provider part of the set device command (see the &man.ppp.8; manual page for full details). It should look like this: set device PPPoE:xl1:ISP Do not forget to change xl1 to the proper device for the Ethernet card. Do not forget to change ISP to the profile. For additional information, refer to Cheaper Broadband with &os; on DSL by Renaud Waldura. PPPoE with a &tm.3com; <trademark class="registered">HomeConnect</trademark> ADSL Modem Dual Link This modem does not follow the PPPoE specification defined in RFC 2516. In order to make &os; capable of communicating with this device, a sysctl must be set. This can be done automatically at boot time by updating /etc/sysctl.conf: net.graph.nonstandard_pppoe=1 or can be done immediately with the command: &prompt.root; sysctl net.graph.nonstandard_pppoe=1 Unfortunately, because this is a system-wide setting, it is not possible to talk to a normal PPPoE client or server and a &tm.3com; HomeConnect ADSL Modem at the same time. Using <application>PPP</application> over <acronym>ATM</acronym> (PPPoA) PPP over ATM PPPoA The following describes how to set up PPP over ATM (PPPoA). PPPoA is a popular choice among European DSL providers. Using mpd The mpd application can be used to connect to a variety of services, in particular PPTP services. It can be installed using the net/mpd5 package or port. Many ADSL modems require that a PPTP tunnel is created between the modem and computer. Once installed, configure mpd to suit the provider's settings. The port places a set of sample configuration files which are well documented in /usr/local/etc/mpd/. A complete guide to configure mpd is available in HTML format in /usr/ports/share/doc/mpd/. Here is a sample configuration for connecting to an ADSL service with mpd. The configuration is spread over two files, first the mpd.conf: This example of the mpd.conf file only works with mpd 4.x. default: load adsl adsl: new -i ng0 adsl adsl set bundle authname username set bundle password password set bundle disable multilink set link no pap acfcomp protocomp set link disable chap set link accept chap set link keep-alive 30 10 set ipcp no vjcomp set ipcp ranges 0.0.0.0/0 0.0.0.0/0 set iface route default set iface disable on-demand set iface enable proxy-arp set iface idle 0 open The username used to authenticate with your ISP. The password used to authenticate with your ISP. The mpd.links file contains information about the link, or links, to establish. An example mpd.links to accompany the above example is given beneath: adsl: set link type pptp set pptp mode active set pptp enable originate outcall set pptp self 10.0.0.1 set pptp peer 10.0.0.138 The IP address of &os; computer running mpd. The IP address of the ADSL modem. The Alcatel &speedtouch; Home defaults to 10.0.0.138. It is possible to initialize the connection easily by issuing the following command as root: &prompt.root; mpd -b adsl To view the status of the connection: &prompt.user; ifconfig ng0 ng0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> mtu 1500 inet 216.136.204.117 --> 204.152.186.171 netmask 0xffffffff Using mpd is the recommended way to connect to an ADSL service with &os;. Using pptpclient It is also possible to use &os; to connect to other PPPoA services using net/pptpclient. To use net/pptpclient to connect to a DSL service, install the port or package, then edit /etc/ppp/ppp.conf. An example section of ppp.conf is given below. 
For further information on ppp.conf options consult &man.ppp.8;. adsl: set log phase chat lcp ipcp ccp tun command set timeout 0 enable dns set authname username set authkey password set ifaddr 0 0 add default HISADDR The username for the DSL provider. The password for your account. Since the account's password is added to ppp.conf in plain text form, make sure nobody can read the contents of this file: &prompt.root; chown root:wheel /etc/ppp/ppp.conf &prompt.root; chmod 600 /etc/ppp/ppp.conf This will open a tunnel for a PPP session to the DSL router. Ethernet DSL modems have a preconfigured LAN IP address to connect to. In the case of the Alcatel &speedtouch; Home, this address is 10.0.0.138. The router's documentation should list the address the device uses. To open the tunnel and start a PPP session: &prompt.root; pptp address adsl If an ampersand (&) is added to the end of this command, pptp will return the prompt. A tun virtual tunnel device will be created for interaction between the pptp and ppp processes. Once the prompt is returned, or the pptp process has confirmed a connection, examine the tunnel: &prompt.user; ifconfig tun0 tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1500 inet 216.136.204.21 --> 204.152.186.171 netmask 0xffffff00 Opened by PID 918 If the connection fails, check the configuration of the router, which is usually accessible using a web browser. Also, examine the output of pptp and the contents of the log file, /var/log/ppp.log, for clues.
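As a final hedged sketch of that troubleshooting step, the routing table and the PPP log can be inspected while the tunnel is brought up. The commands below only narrow the output down to the default route and follow the log as it grows; they assume the log file location configured earlier in this chapter: &prompt.user; netstat -rn | grep default &prompt.root; tail -f /var/log/ppp.log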