UNIX Basics Synopsis This chapter covers the basic commands and functionality of the &os; operating system. Much of this material is relevant for any &unix;-like operating system. New &os; users are encouraged to read through this chapter carefully. After reading this chapter, you will know: How to use and configure virtual consoles. How to create and manage users and groups on &os;. How &unix; file permissions and &os; file flags work. The default &os; file system layout. The &os; disk organization. How to mount and unmount file systems. What processes, daemons, and signals are. What a shell is, and how to change the default login environment. How to use basic text editors. What devices and device nodes are. How to read manual pages for more information. Virtual Consoles and Terminals virtual consoles terminals console Unless &os; has been configured to automatically start a graphical environment during startup, the system will boot into a command line login prompt, as seen in this example: FreeBSD/amd64 (pc3.example.org) (ttyv0) login: The first line contains some information about the system. The amd64 indicates that the system in this example is running a 64-bit version of &os;. The hostname is pc3.example.org, and ttyv0 indicates that this is the system console. The second line is the login prompt. Since &os; is a multiuser system, it needs some way to distinguish between different users. This is accomplished by requiring every user to log into the system before gaining access to the programs on the system. Every user has a unique name (the username) and a personal password. To log into the system console, type the username that was configured during system installation, as described in , and press Enter. Then enter the password associated with the username and press Enter. The password is not echoed for security reasons. Once the correct password is input, the message of the day (MOTD) will be displayed followed by a command prompt. Depending upon the shell that was selected when the user was created, this prompt will be a #, $, or % character. The prompt indicates that the user is now logged into the &os; system console and ready to try the available commands. Virtual Consoles While the system console can be used to interact with the system, a user working from the command line at the keyboard of a &os; system will typically instead log into a virtual console. This is because system messages are configured by default to display on the system console. These messages will appear over the command or file that the user is working on, making it difficult to concentrate on the work at hand. By default, &os; is configured to provide several virtual consoles for inputting commands. Each virtual console has its own login prompt and shell and it is easy to switch between virtual consoles. This essentially provides the command line equivalent of having several windows open at the same time in a graphical environment. The key combinations AltF1 through AltF8 have been reserved by &os; for switching between virtual consoles. Use AltF1 to switch to the system console (ttyv0), AltF2 to access the first virtual console (ttyv1), AltF3 to access the second virtual console (ttyv2), and so on.
When switching from one console to the next, &os; manages the screen output. The result is an illusion of having multiple virtual screens and keyboards that can be used to type commands for &os; to run. The programs that are launched in one virtual console do not stop running when the user switches to a different virtual console. Refer to &man.syscons.4;, &man.atkbd.4;, &man.vidcontrol.1; and &man.kbdcontrol.1; for a more technical description of the &os; console and its keyboard drivers. In &os;, the number of available virtual consoles is configured in this section of /etc/ttys: # name getty type status comments # ttyv0 "/usr/libexec/getty Pc" xterm on secure # Virtual terminals ttyv1 "/usr/libexec/getty Pc" xterm on secure ttyv2 "/usr/libexec/getty Pc" xterm on secure ttyv3 "/usr/libexec/getty Pc" xterm on secure ttyv4 "/usr/libexec/getty Pc" xterm on secure ttyv5 "/usr/libexec/getty Pc" xterm on secure ttyv6 "/usr/libexec/getty Pc" xterm on secure ttyv7 "/usr/libexec/getty Pc" xterm on secure ttyv8 "/usr/X11R6/bin/xdm -nodaemon" xterm off secure To disable a virtual console, put a comment symbol (#) at the beginning of the line representing that virtual console. For example, to reduce the number of available virtual consoles from eight to four, put a # in front of the last four lines representing virtual consoles ttyv5 through ttyv8. Do not comment out the line for the system console ttyv0. Note that the last virtual console (ttyv8) is used to access the graphical environment if &xorg; has been installed and configured as described in . For a detailed description of every column in this file and the available options for the virtual consoles, refer to &man.ttys.5;. Single User Mode The &os; boot menu provides an option labelled as Boot Single User. If this option is selected, the system will boot into a special mode known as single user mode. This mode is typically used to repair a system that will not boot or to reset the root password when it is not known. While in single user mode, networking and other virtual consoles are not available. However, full root access to the system is available, and by default, the root password is not needed. For these reasons, physical access to the keyboard is needed to boot into this mode and determining who has physical access to the keyboard is something to consider when securing a &os; system. The settings which control single user mode are found in this section of /etc/ttys: # name getty type status comments # # If console is marked "insecure", then init will ask for the root password # when going to single-user mode. console none unknown off secure By default, the status is set to secure. This assumes that who has physical access to the keyboard is either not important or it is controlled by a physical security policy. If this setting is changed to insecure, the assumption is that the environment itself is insecure because anyone can access the keyboard. When this line is changed to insecure, &os; will prompt for the root password when a user selects to boot into single user mode. Be careful when changing this setting to insecure! If the root password is forgotten, booting into single user mode is still possible, but may be difficult for someone who is not familiar with the &os; booting process. Changing Console Video Modes The &os; console default video mode may be adjusted to 1024x768, 1280x1024, or any other size supported by the graphics chip and monitor. 
To use a different video mode load the VESA module: &prompt.root; kldload vesa To determine which video modes are supported by the hardware, use &man.vidcontrol.1;. To get a list of supported video modes issue the following: &prompt.root; vidcontrol -i mode The output of this command lists the video modes that are supported by the hardware. To select a new video mode, specify the mode using &man.vidcontrol.1; as the root user: &prompt.root; vidcontrol MODE_279 If the new video mode is acceptable, it can be permanently set on boot by adding it to /etc/rc.conf: allscreens_flags="MODE_279" Users and Basic Account Management &os; allows multiple users to use the computer at the same time. While only one user can sit in front of the screen and use the keyboard at any one time, any number of users can log in to the system through the network. To use the system, each user should have their own user account. This chapter describes: The different types of user accounts on a &os; system. How to add, remove, and modify user accounts. How to set limits to control the resources that users and groups are allowed to access. How to create groups and add users as members of a group. Account Types Since all access to the &os; system is achieved using accounts and all processes are run by users, user and account management is important. There are three main types of accounts: system accounts, user accounts, and the superuser account. System Accounts accounts system System accounts are used to run services such as DNS, mail, and web servers. The reason for this is security; if all services ran as the superuser, they could act without restriction. accounts daemon accounts operator Examples of system accounts are daemon, operator, bind, news, and www. accounts nobody nobody is the generic unprivileged system account. However, the more services that use nobody, the more files and processes that user will become associated with, and hence the more privileged that user becomes. User Accounts accounts user User accounts are assigned to real people and are used to log in and use the system. Every person accessing the system should have a unique user account. This allows the administrator to find out who is doing what and prevents users from clobbering the settings of other users. Each user can set up their own environment to accommodate their use of the system, by configuring their default shell, editor, key bindings, and language settings. Every user account on a &os; system has certain information associated with it: User name The user name is typed at the login: prompt. Each user must have a unique user name. There are a number of rules for creating valid user names which are documented in &man.passwd.5;. It is recommended to use user names that consist of eight or fewer, all lower case characters in order to maintain backwards compatibility with applications. Password Each account has an associated password. User ID (UID) The User ID (UID) is a number used to uniquely identify the user to the &os; system. Commands that allow a user name to be specified will first convert it to the UID. It is recommended to use a UID less than 65535, since higher values may cause compatibility issues with some software. Group ID (GID) The Group ID (GID) is a number used to uniquely identify the primary group that the user belongs to. Groups are a mechanism for controlling access to resources based on a user's GID rather than their UID. 
This can significantly reduce the size of some configuration files and allows users to be members of more than one group. It is recommended to use a GID of 65535 or lower as higher GIDs may break some software. Login class Login classes are an extension to the group mechanism that provide additional flexibility when tailoring the system to different users. Login classes are discussed further in . Password change time By default, passwords do not expire. However, password expiration can be enabled on a per-user basis, forcing some or all users to change their passwords after a certain amount of time has elapsed. Account expiry time By default, &os; does not expire accounts. When creating accounts that need a limited lifespan, such as student accounts in a school, specify the account expiry date using &man.pw.8;. After the expiry time has elapsed, the account cannot be used to log in to the system, although the account's directories and files will remain. User's full name The user name uniquely identifies the account to &os;, but does not necessarily reflect the user's real name. Similar to a comment, this information can contain spaces, uppercase characters, and be more than 8 characters long. Home directory The home directory is the full path to a directory on the system. This is the user's starting directory when the user logs in. A common convention is to put all user home directories under /home/username or /usr/home/username. Each user stores their personal files and subdirectories in their own home directory. User shell The shell provides the user's default environment for interacting with the system. There are many different kinds of shells and experienced users will have their own preferences, which can be reflected in their account settings. The Superuser Account accounts superuser (root) The superuser account, usually called root, is used to manage the system with no limitations on privileges. For this reason, it should not be used for day-to-day tasks like sending and receiving mail, general exploration of the system, or programming. The superuser, unlike other user accounts, can operate without limits, and misuse of the superuser account may result in spectacular disasters. User accounts are unable to destroy the operating system by mistake, so it is recommended to login as a user account and to only become the superuser when a command requires extra privilege. Always double and triple-check any commands issued as the superuser, since an extra space or missing character can mean irreparable data loss. There are several ways to gain superuser privilege. While one can log in as root, this is highly discouraged. Instead, use &man.su.1; to become the superuser. If - is specified when running this command, the user will also inherit the root user's environment. The user running this command must be in the wheel group or else the command will fail. The user must also know the password for the root user account. In this example, the user only becomes superuser in order to run make install as this step requires superuser privilege. Once the command completes, the user types exit to leave the superuser account and return to the privilege of their user account. Install a Program As the Superuser &prompt.user; configure &prompt.user; make &prompt.user; su - Password: &prompt.root; make install &prompt.root; exit &prompt.user; The built-in &man.su.1; framework works well for single systems or small networks with just one system administrator. 
An alternative is to install the security/sudo package or port. This software provides activity logging and allows the administrator to configure which users can run which commands as the superuser. Managing Accounts accounts modifying &os; provides a variety of different commands to manage user accounts. The most common commands are summarized in , followed by some examples of their usage. See the manual page for each utility for more details and usage examples. Utilities for Managing User Accounts Command Summary &man.adduser.8; The recommended command-line application for adding new users. &man.rmuser.8; The recommended command-line application for removing users. &man.chpass.1; A flexible tool for changing user database information. &man.passwd.1; The command-line tool to change user passwords. &man.pw.8; A powerful and flexible tool for modifying all aspects of user accounts.
<command>adduser</command> accounts adding adduser /usr/share/skel skeleton directory The recommended program for adding new users is &man.adduser.8;. When a new user is added, this program automatically updates /etc/passwd and /etc/group. It also creates a home directory for the new user, copies in the default configuration files from /usr/share/skel, and can optionally mail the new user a welcome message. This utility must be run as the superuser. The &man.adduser.8; utility is interactive and walks through the steps for creating a new user account. As seen in , either input the required information or press Return to accept the default value shown in square brackets. In this example, the user has been invited into the wheel group, allowing them to become the superuser with &man.su.1;. When finished, the utility will prompt to either create another user or to exit. Adding a User on &os; &prompt.root; adduser Username: jru Full name: J. Random User Uid (Leave empty for default): Login group [jru]: Login group is jru. Invite jru into other groups? []: wheel Login class [default]: Shell (sh csh tcsh zsh nologin) [sh]: zsh Home directory [/home/jru]: Home directory permissions (Leave empty for default): Use password-based authentication? [yes]: Use an empty password? (yes/no) [no]: Use a random password? (yes/no) [no]: Enter password: Enter password again: Lock out the account after creation? [no]: Username : jru Password : **** Full Name : J. Random User Uid : 1001 Class : Groups : jru wheel Home : /home/jru Shell : /usr/local/bin/zsh Locked : no OK? (yes/no): yes adduser: INFO: Successfully added (jru) to the user database. Add another user? (yes/no): no Goodbye! &prompt.root; Since the password is not echoed when typed, be careful to not mistype the password when creating the user account. <command>rmuser</command> rmuser accounts removing To completely remove a user from the system, run &man.rmuser.8; as the superuser. This command performs the following steps: Removes the user's &man.crontab.1; entry, if one exists. Removes any &man.at.1; jobs belonging to the user. Kills all processes owned by the user. Removes the user from the system's local password file. Optionally removes the user's home directory, if it is owned by the user. Removes the incoming mail files belonging to the user from /var/mail. Removes all files owned by the user from temporary file storage areas such as /tmp. Finally, removes the username from all groups to which it belongs in /etc/group. If a group becomes empty and the group name is the same as the username, the group is removed. This complements the per-user unique groups created by &man.adduser.8;. &man.rmuser.8; cannot be used to remove superuser accounts since that is almost always an indication of massive destruction. By default, an interactive mode is used, as shown in the following example. <command>rmuser</command> Interactive Account Removal &prompt.root; rmuser jru Matching password entry: jru:*:1001:1001::0:0:J. Random User:/home/jru:/usr/local/bin/zsh Is this the entry you wish to remove? y Remove user's home directory (/home/jru)? y Removing user (jru): mailspool home passwd. &prompt.root; <command>chpass</command> chpass Any user can use &man.chpass.1; to change their default shell and personal information associated with their user account. The superuser can use this utility to change additional account information for any user. When passed no options, aside from an optional username, &man.chpass.1; displays an editor containing user information. 
When the user exits from the editor, the user database is updated with the new information. This utility will prompt for the user's password when exiting the editor, unless the utility is run as the superuser. In , the superuser has typed chpass jru and is now viewing the fields that can be changed for this user. If jru runs this command instead, only the last six fields will be displayed and available for editing. This is shown in . Using <command>chpass</command> as Superuser #Changing user database information for jru. Login: jru Password: * Uid [#]: 1001 Gid [# or name]: 1001 Change [month day year]: Expire [month day year]: Class: Home directory: /home/jru Shell: /usr/local/bin/zsh Full Name: J. Random User Office Location: Office Phone: Home Phone: Other information: Using <command>chpass</command> as Regular User #Changing user database information for jru. Shell: /usr/local/bin/zsh Full Name: J. Random User Office Location: Office Phone: Home Phone: Other information: The commands &man.chfn.1; and &man.chsh.1; are links to &man.chpass.1;, as are &man.ypchpass.1;, &man.ypchfn.1;, and &man.ypchsh.1;. Since NIS support is automatic, specifying the yp before the command is not necessary. How to configure NIS is covered in . <command>passwd</command> passwd accounts changing password Any user can easily change their password using &man.passwd.1;. To prevent accidental or unauthorized changes, this command will prompt for the user's original password before a new password can be set: Changing Your Password &prompt.user; passwd Changing local password for jru. Old password: New password: Retype new password: passwd: updating the database... passwd: done The superuser can change any user's password by specifying the username when running &man.passwd.1;. When this utility is run as the superuser, it will not prompt for the user's current password. This allows the password to be changed when a user cannot remember the original password. Changing Another User's Password as the Superuser &prompt.root; passwd jru Changing local password for jru. New password: Retype new password: passwd: updating the database... passwd: done As with &man.chpass.1;, &man.yppasswd.1; is a link to &man.passwd.1;, so NIS works with either command. <command>pw</command> pw The &man.pw.8; utility can create, remove, modify, and display users and groups. It functions as a front end to the system user and group files. &man.pw.8; has a very powerful set of command line options that make it suitable for use in shell scripts, but new users may find it more complicated than the other commands presented in this section.
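As a brief illustration of that scripting-oriented design, the commands below sketch how an account might be created, modified, and removed non-interactively with &man.pw.8;. The username jsmith, the shell, and the comment are placeholders only, and the exact options should be confirmed in &man.pw.8; before use:

&prompt.root; pw useradd jsmith -m -G wheel -s /bin/tcsh -c "Example User"
&prompt.root; pw usermod jsmith -s /bin/sh
&prompt.root; pw userdel jsmith -r

In this hypothetical sketch, -m creates the home directory, -G invites the account into the wheel group, and -r removes the home directory together with the account. Unlike &man.adduser.8;, &man.pw.8; does not prompt for a password, so one would still be set separately, for example with &man.passwd.1;.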
Managing Groups groups /etc/groups accounts groups A group is a list of users. A group is identified by its group name and GID. In &os;, the kernel uses the UID of a process, and the list of groups it belongs to, to determine what the process is allowed to do. Most of the time, the GID of a user or process usually means the first group in the list. The group name to GID mapping is listed in /etc/group. This is a plain text file with four colon-delimited fields. The first field is the group name, the second is the encrypted password, the third the GID, and the fourth the comma-delimited list of members. For a more complete description of the syntax, refer to &man.group.5;. The superuser can modify /etc/group using a text editor. Alternatively, &man.pw.8; can be used to add and edit groups. For example, to add a group called teamtwo and then confirm that it exists: Adding a Group Using &man.pw.8; &prompt.root; pw groupadd teamtwo &prompt.root; pw groupshow teamtwo teamtwo:*:1100: In this example, 1100 is the GID of teamtwo. Right now, teamtwo has no members. This command will add jru as a member of teamtwo. Adding User Accounts to a New Group Using &man.pw.8; &prompt.root; pw groupmod teamtwo -M jru &prompt.root; pw groupshow teamtwo teamtwo:*:1100:jru The argument to is a comma-delimited list of users to be added to a new (empty) group or to replace the members of an existing group. To the user, this group membership is different from (and in addition to) the user's primary group listed in the password file. This means that the user will not show up as a member when using with &man.pw.8;, but will show up when the information is queried via &man.id.1; or a similar tool. When &man.pw.8; is used to add a user to a group, it only manipulates /etc/group and does not attempt to read additional data from /etc/passwd. Adding a New Member to a Group Using &man.pw.8; &prompt.root; pw groupmod teamtwo -m db &prompt.root; pw groupshow teamtwo teamtwo:*:1100:jru,db In this example, the argument to is a comma-delimited list of users who are to be added to the group. Unlike the previous example, these users are appended to the group and do not replace existing users in the group. Using &man.id.1; to Determine Group Membership &prompt.user; id jru uid=1001(jru) gid=1001(jru) groups=1001(jru), 1100(teamtwo) In this example, jru is a member of the groups jru and teamtwo. For more information about this command and the format of /etc/group, refer to &man.pw.8; and &man.group.5;.
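Removing a member again, or removing the whole group, follows the same pattern. The following sketch continues the teamtwo example from above; the option shown for dropping a member should be verified against the installed &man.pw.8; before relying on it:

&prompt.root; pw groupmod teamtwo -d db
&prompt.root; pw groupshow teamtwo
teamtwo:*:1100:jru
&prompt.root; pw groupdel teamtwo

Here, -d removes only the listed users from the group, leaving the remaining members untouched, and pw groupdel deletes the group entry itself from /etc/group.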
Permissions UNIX In &os;, every file and directory has an associated set of permissions and several utilities are available for viewing and modifying these permissions. Understanding how permissions work is necessary to make sure that users are able to access the files that they need and are unable to improperly access the files used by the operating system or owned by other users. This section discusses the traditional &unix; permissions used in &os;. For finer grained file system access control, refer to . In &unix;, basic permissions are assigned using three types of access: read, write, and execute. These access types are used to determine file access to the file's owner, group, and others (everyone else). The read, write, and execute permissions can be represented as the letters r, w, and x. They can also be represented as binary numbers as each permission is either on or off (0). When represented as a number, the order is always read as rwx, where r has an on value of 4, w has an on value of 2 and x has an on value of 1. Table 4.1 summarizes the possible numeric and alphabetic possibilities. When reading the Directory Listing column, a - is used to represent a permission that is set to off. permissions file permissions &unix; Permissions Value Permission Directory Listing 0 No read, no write, no execute --- 1 No read, no write, execute --x 2 No read, write, no execute -w- 3 No read, write, execute -wx 4 Read, no write, no execute r-- 5 Read, no write, execute r-x 6 Read, write, no execute rw- 7 Read, write, execute rwx
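As a quick sketch of how these octal values are used in practice, the file and directory names below are placeholders; the values are simply the owner, group, and other digits from the table combined into a single argument to &man.chmod.1;:

&prompt.user; chmod 644 document.txt
&prompt.user; chmod 755 scripts

The first command makes document.txt readable and writable by its owner (6) and read-only for the group and everyone else (4 and 4); the second gives the owner full access to the scripts directory (7) while allowing the group and others to read and search it (5 and 5).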
&man.ls.1; directories Use the argument to &man.ls.1; to view a long directory listing that includes a column of information about a file's permissions for the owner, group, and everyone else. For example, a ls -l in an arbitrary directory may show: &prompt.user; ls -l total 530 -rw-r--r-- 1 root wheel 512 Sep 5 12:31 myfile -rw-r--r-- 1 root wheel 512 Sep 5 12:31 otherfile -rw-r--r-- 1 root wheel 7680 Sep 5 12:31 email.txt The first (leftmost) character in the first column indicates whether this file is a regular file, a directory, a special character device, a socket, or any other special pseudo-file device. In this example, the - indicates a regular file. The next three characters, rw- in this example, give the permissions for the owner of the file. The next three characters, r--, give the permissions for the group that the file belongs to. The final three characters, r--, give the permissions for the rest of the world. A dash means that the permission is turned off. In this example, the permissions are set so the owner can read and write to the file, the group can read the file, and the rest of the world can only read the file. According to the table above, the permissions for this file would be 644, where each digit represents the three parts of the file's permission. How does the system control permissions on devices? &os; treats most hardware devices as a file that programs can open, read, and write data to. These special device files are stored in /dev/. Directories are also treated as files. They have read, write, and execute permissions. The executable bit for a directory has a slightly different meaning than that of files. When a directory is marked executable, it means it is possible to change into that directory using &man.cd.1;. This also means that it is possible to access the files within that directory, subject to the permissions on the files themselves. In order to perform a directory listing, the read permission must be set on the directory. In order to delete a file that one knows the name of, it is necessary to have write and execute permissions to the directory containing the file. There are more permission bits, but they are primarily used in special circumstances such as setuid binaries and sticky directories. For more information on file permissions and how to set them, refer to &man.chmod.1;. Symbolic Permissions Tom Rhodes Contributed by permissions symbolic Symbolic permissions use characters instead of octal values to assign permissions to files or directories. Symbolic permissions use the syntax of (who) (action) (permissions), where the following values are available: Option Letter Represents (who) u User (who) g Group owner (who) o Other (who) a All (world) (action) + Adding permissions (action) - Removing permissions (action) = Explicitly set permissions (permissions) r Read (permissions) w Write (permissions) x Execute (permissions) t Sticky bit (permissions) s Set UID or GID These values are used with &man.chmod.1;, but with letters instead of numbers. For example, the following command would block other users from accessing FILE: &prompt.user; chmod go= FILE A comma separated list can be provided when more than one set of changes to a file must be made. For example, the following command removes the group and world write permission on FILE, and adds the execute permissions for everyone: &prompt.user; chmod go-w,a+x FILE &os; File Flags Tom Rhodes Contributed by In addition to file permissions, &os; supports the use of file flags. 
These flags add an additional level of security and control over files, but not directories. With file flags, even root can be prevented from removing or altering files. File flags are modified using &man.chflags.1;. For example, to enable the system undeletable flag on the file file1, issue the following command: &prompt.root; chflags sunlink file1 To disable the system undeletable flag, put a no in front of the : &prompt.root; chflags nosunlink file1 To view the flags of a file, use with &man.ls.1;: &prompt.root; ls -lo file1 -rw-r--r-- 1 trhodes trhodes sunlnk 0 Mar 1 05:54 file1 Several file flags may only be added or removed by the root user. In other cases, the file owner may set its file flags. Refer to &man.chflags.1; and &man.chflags.2; for more information. The <literal>setuid</literal>, <literal>setgid</literal>, and <literal>sticky</literal> Permissions Tom Rhodes Contributed by Other than the permissions already discussed, there are three other specific settings that all administrators should know about. They are the setuid, setgid, and sticky permissions. These settings are important for some &unix; operations as they provide functionality not normally granted to normal users. To understand them, the difference between the real user ID and effective user ID must be noted. The real user ID is the UID who owns or starts the process. The effective UID is the user ID the process runs as. As an example, &man.passwd.1; runs with the real user ID when a user changes their password. However, in order to update the password database, the command runs as the effective ID of the root user. This allows users to change their passwords without seeing a Permission Denied error. The setuid permission may be set by prefixing a permission set with the number four (4) as shown in the following example: &prompt.root; chmod 4755 suidexample.sh The permissions on suidexample.sh now look like the following: -rwsr-xr-x 1 trhodes trhodes 63 Aug 29 06:36 suidexample.sh Note that a s is now part of the permission set designated for the file owner, replacing the executable bit. This allows utilities which need elevated permissions, such as &man.passwd.1;. The nosuid &man.mount.8; option will cause such binaries to silently fail without alerting the user. That option is not completely reliable as a nosuid wrapper may be able to circumvent it. To view this in real time, open two terminals. On one, type passwd as a normal user. While it waits for a new password, check the process table and look at the user information for &man.passwd.1;: In terminal A: Changing local password for trhodes Old Password: In terminal B: &prompt.root; ps aux | grep passwd trhodes 5232 0.0 0.2 3420 1608 0 R+ 2:10AM 0:00.00 grep passwd root 5211 0.0 0.2 3620 1724 2 I+ 2:09AM 0:00.01 passwd Although &man.passwd.1; is run as a normal user, it is using the effective UID of root. The setgid permission performs the same function as the setuid permission; except that it alters the group settings. When an application or utility executes with this setting, it will be granted the permissions based on the group that owns the file, not the user who started the process. 
To set the setgid permission on a file, provide &man.chmod.1; with a leading two (2): &prompt.root; chmod 2755 sgidexample.sh In the following listing, notice that the s is now in the field designated for the group permission settings: -rwxr-sr-x 1 trhodes trhodes 44 Aug 31 01:49 sgidexample.sh In these examples, even though the shell script in question is an executable file, it will not run with a different EUID or effective user ID. This is because shell scripts may not access the &man.setuid.2; system calls. The setuid and setgid permission bits may lower system security, by allowing for elevated permissions. The third special permission, the sticky bit, can strengthen the security of a system. When the sticky bit is set on a directory, it allows file deletion only by the file owner. This is useful to prevent file deletion in public directories, such as /tmp, by users who do not own the file. To utilize this permission, prefix the permission set with a one (1): &prompt.root; chmod 1777 /tmp The sticky bit permission will display as a t at the very end of the permission set: &prompt.root; ls -al / | grep tmp drwxrwxrwt 10 root wheel 512 Aug 31 01:49 tmp
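Since setuid and setgid binaries run with elevated privileges, administrators sometimes audit which ones exist on a system. The &man.find.1; invocation below is one possible sketch of such a check; it lists regular files that have either bit set and can take a while to complete on a large file system:

&prompt.root; find / -type f \( -perm -4000 -o -perm -2000 \) -exec ls -l {} \;

Each line of output can then be reviewed to confirm that the file is expected to carry the permission, using the same -rwsr-xr-x style of notation described earlier in this section.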
Directory Structure directory hierarchy The &os; directory hierarchy is fundamental to obtaining an overall understanding of the system. The most important directory is root or, /. This directory is the first one mounted at boot time and it contains the base system necessary to prepare the operating system for multi-user operation. The root directory also contains mount points for other file systems that are mounted during the transition to multi-user operation. A mount point is a directory where additional file systems can be grafted onto a parent file system (usually the root file system). This is further described in . Standard mount points include /usr/, /var/, /tmp/, /mnt/, and /cdrom/. These directories are usually referenced to entries in /etc/fstab. This file is a table of various file systems and mount points and is read by the system. Most of the file systems in /etc/fstab are mounted automatically at boot time from the script &man.rc.8; unless their entry includes . Details can be found in . A complete description of the file system hierarchy is available in &man.hier.7;. The following table provides a brief overview of the most common directories. Directory Description / Root directory of the file system. /bin/ User utilities fundamental to both single-user and multi-user environments. /boot/ Programs and configuration files used during operating system bootstrap. /boot/defaults/ Default boot configuration files. Refer to &man.loader.conf.5; for details. /dev/ Device nodes. Refer to &man.intro.4; for details. /etc/ System configuration files and scripts. /etc/defaults/ Default system configuration files. Refer to &man.rc.8; for details. /etc/mail/ Configuration files for mail transport agents such as &man.sendmail.8;. /etc/namedb/ &man.named.8; configuration files. /etc/periodic/ Scripts that run daily, weekly, and monthly, via &man.cron.8;. Refer to &man.periodic.8; for details. /etc/ppp/ &man.ppp.8; configuration files. /mnt/ Empty directory commonly used by system administrators as a temporary mount point. /proc/ Process file system. Refer to &man.procfs.5;, &man.mount.procfs.8; for details. /rescue/ Statically linked programs for emergency recovery as described in &man.rescue.8;. /root/ Home directory for the root account. /sbin/ System programs and administration utilities fundamental to both single-user and multi-user environments. /tmp/ Temporary files which are usually not preserved across a system reboot. A memory-based file system is often mounted at /tmp. This can be automated using the tmpmfs-related variables of &man.rc.conf.5; or with an entry in /etc/fstab; refer to &man.mdmfs.8; for details. /usr/ The majority of user utilities and applications. /usr/bin/ Common utilities, programming tools, and applications. /usr/include/ Standard C include files. /usr/lib/ Archive libraries. /usr/libdata/ Miscellaneous utility data files. /usr/libexec/ System daemons and system utilities executed by other programs. /usr/local/ Local executables and libraries. Also used as the default destination for the &os; ports framework. Within /usr/local, the general layout sketched out by &man.hier.7; for /usr should be used. Exceptions are the man directory, which is directly under /usr/local rather than under /usr/local/share, and the ports documentation is in share/doc/port. /usr/obj/ Architecture-specific target tree produced by building the /usr/src tree. /usr/ports/ The &os; Ports Collection (optional). /usr/sbin/ System daemons and system utilities executed by users. 
/usr/share/ Architecture-independent files. /usr/src/ BSD and/or local source files. /var/ Multi-purpose log, temporary, transient, and spool files. A memory-based file system is sometimes mounted at /var. This can be automated using the varmfs-related variables in &man.rc.conf.5; or with an entry in /etc/fstab; refer to &man.mdmfs.8; for details. /var/log/ Miscellaneous system log files. /var/mail/ User mailbox files. /var/spool/ Miscellaneous printer and mail system spooling directories. /var/tmp/ Temporary files which are usually preserved across a system reboot, unless /var is a memory-based file system. /var/yp/ NIS maps. Disk Organization The smallest unit of organization that &os; uses to find files is the filename. Filenames are case-sensitive, which means that readme.txt and README.TXT are two separate files. &os; does not use the extension of a file to determine whether the file is a program, document, or some other form of data. Files are stored in directories. A directory may contain no files, or it may contain many hundreds of files. A directory can also contain other directories, allowing a hierarchy of directories within one another in order to organize data. Files and directories are referenced by giving the file or directory name, followed by a forward slash, /, followed by any other directory names that are necessary. For example, if the directory foo contains a directory bar which contains the file readme.txt, the full name, or path, to the file is foo/bar/readme.txt. Note that this is different from &windows; which uses \ to separate file and directory names. &os; does not use drive letters, or other drive names in the path. For example, one would not type c:\foo\bar\readme.txt on &os;. Directories and files are stored in a file system. Each file system contains exactly one directory at the very top level, called the root directory for that file system. This root directory can contain other directories. One file system is designated the root file system or /. Every other file system is mounted under the root file system. No matter how many disks are on the &os; system, every directory appears to be part of the same disk. Consider three file systems, called A, B, and C. Each file system has one root directory, which contains two other directories, called A1, A2 (and likewise B1, B2 and C1, C2). Call A the root file system. If &man.ls.1; is used to view the contents of this directory, it will show two subdirectories, A1 and A2. The directory tree looks like this: / | +--- A1 | `--- A2 A file system must be mounted on to a directory in another file system. When mounting file system B on to the directory A1, the root directory of B replaces A1, and the directories in B appear accordingly: / | +--- A1 | | | +--- B1 | | | `--- B2 | `--- A2 Any files that are in the B1 or B2 directories can be reached with the path /A1/B1 or /A1/B2 as necessary. Any files that were in /A1 have been temporarily hidden. They will reappear if B is unmounted from A. If B had been mounted on A2 then the diagram would look like this: / | +--- A1 | `--- A2 | +--- B1 | `--- B2 and the paths would be /A2/B1 and /A2/B2 respectively. File systems can be mounted on top of one another. 
Continuing the last example, the C file system could be mounted on top of the B1 directory in the B file system, leading to this arrangement: / | +--- A1 | `--- A2 | +--- B1 | | | +--- C1 | | | `--- C2 | `--- B2 Or C could be mounted directly on to the A file system, under the A1 directory: / | +--- A1 | | | +--- C1 | | | `--- C2 | `--- A2 | +--- B1 | `--- B2 It is entirely possible to have one large root file system, and not need to create any others. There are some drawbacks to this approach, and one advantage. Benefits of Multiple File Systems Different file systems can have different mount options. For example, the root file system can be mounted read-only, making it impossible for users to inadvertently delete or edit a critical file. Separating user-writable file systems, such as /home, from other file systems allows them to be mounted nosuid. This option prevents the suid/guid bits on executables stored on the file system from taking effect, possibly improving security. &os; automatically optimizes the layout of files on a file system, depending on how the file system is being used. So a file system that contains many small files that are written frequently will have a different optimization to one that contains fewer, larger files. By having one big file system this optimization breaks down. &os;'s file systems are robust if power is lost. However, a power loss at a critical point could still damage the structure of the file system. By splitting data over multiple file systems it is more likely that the system will still come up, making it easier to restore from backup as necessary. Benefit of a Single File System File systems are a fixed size. If you create a file system when you install &os; and give it a specific size, you may later discover that you need to make the partition bigger. This is not easily accomplished without backing up, recreating the file system with the new size, and then restoring the backed up data. &os; features the &man.growfs.8; command, which makes it possible to increase the size of file system on the fly, removing this limitation. File systems are contained in partitions. This does not have the same meaning as the common usage of the term partition (for example, &ms-dos; partition), because of &os;'s &unix; heritage. Each partition is identified by a letter from a through to h. Each partition can contain only one file system, which means that file systems are often described by either their typical mount point in the file system hierarchy, or the letter of the partition they are contained in. &os; also uses disk space for swap space to provide virtual memory. This allows your computer to behave as though it has much more memory than it actually does. When &os; runs out of memory, it moves some of the data that is not currently being used to the swap space, and moves it back in (moving something else out) when it needs it. Some partitions have certain conventions associated with them. Partition Convention a Normally contains the root file system. b Normally contains swap space. c Normally the same size as the enclosing slice. This allows utilities that need to work on the entire slice, such as a bad block scanner, to work on the c partition. A file system would not normally be created on this partition. d Partition d used to have a special meaning associated with it, although that is now gone and d may work as any normal partition. Disks in &os; are divided into slices, referred to in &windows; as partitions, which are numbered from 1 to 4. 
These are then divided into partitions, which contain file systems, and are labeled using letters. slices partitions dangerously dedicated Slice numbers follow the device name, prefixed with an s, starting at 1. So da0s1 is the first slice on the first SCSI drive. There can only be four physical slices on a disk, but there can be logical slices inside physical slices of the appropriate type. These extended slices are numbered starting at 5, so ada0s5 is the first extended slice on the first SATA disk. These devices are used by file systems that expect to occupy a slice. Slices, dangerously dedicated physical drives, and other drives contain partitions, which are represented as letters from a to h. This letter is appended to the device name, so da0a is the a partition on the first da drive, which is dangerously dedicated. ada1s3e is the fifth partition in the third slice of the second SATA disk drive. Finally, each disk on the system is identified. A disk name starts with a code that indicates the type of disk, and then a number, indicating which disk it is. Unlike slices, disk numbering starts at 0. Common codes are listed in . When referring to a partition, include the disk name, s, the slice number, and then the partition letter. Examples are shown in . shows a conceptual model of a disk layout. When installing &os;, configure the disk slices, create partitions within the slice to be used for &os;, create a file system or swap space in each partition, and decide where each file system will be mounted. Disk Device Names Drive Type Drive Device Name SATA and IDE hard drives ada or ad SCSI hard drives and USB storage devices da SATA and IDE CD-ROM drives cd or acd SCSI CD-ROM drives cd Floppy drives fd Assorted non-standard CD-ROM drives mcd for Mitsumi CD-ROM and scd for Sony CD-ROM devices SCSI tape drives sa IDE tape drives ast RAID drives Examples include aacd for &adaptec; AdvancedRAID, mlxd and mlyd for &mylex;, amrd for AMI &megaraid;, idad for Compaq Smart RAID, twed for &tm.3ware; RAID.
Sample Disk, Slice, and Partition Names Name Meaning ada0s1a The first partition (a) on the first slice (s1) on the first IDE disk (ada0). da1s2e The fifth partition (e) on the second slice (s2) on the second SCSI disk (da1). Conceptual Model of a Disk This diagram shows &os;'s view of the first IDE disk attached to the system. Assume that the disk is 4 GB in size, and contains two 2 GB slices (&ms-dos; partitions). The first slice contains a &ms-dos; disk, C:, and the second slice contains a &os; installation. This example &os; installation has three data partitions, and a swap partition. The three partitions will each hold a file system. Partition a will be used for the root file system, e for the /var/ directory hierarchy, and f for the /usr/ directory hierarchy. .-----------------. --. | | | | DOS / Windows | | : : > First slice, ad0s1 : : | | | | :=================: ==: --. | | | Partition a, mounted as / | | | > referred to as ad0s2a | | | | | :-----------------: ==: | | | | Partition b, used as swap | | | > referred to as ad0s2b | | | | | :-----------------: ==: | Partition c, no | | | Partition e, used as /var > file system, all | | > referred to as ad0s2e | of FreeBSD slice, | | | | ad0s2c :-----------------: ==: | | | | | : : | Partition f, used as /usr | : : > referred to as ad0s2f | : : | | | | | | | | --' | `-----------------' --'
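To see how this naming maps onto an actual machine, &man.gpart.8; can print the slices and partitions of an attached disk. The device name ada0 below is only an assumption; substitute the name of a real disk, and note that the output depends entirely on how that disk is laid out:

&prompt.root; gpart show ada0
&prompt.root; gpart show -p ada0

The first form lists each slice and partition with its offset, size, and type, while -p shows the full provider names, such as ada0s2a, that match the naming scheme described above.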
Mounting and Unmounting File Systems The file system is best visualized as a tree, rooted, as it were, at /. /dev, /usr, and the other directories in the root directory are branches, which may have their own branches, such as /usr/local, and so on. root file system There are various reasons to house some of these directories on separate file systems. /var contains the directories log/, spool/, and various types of temporary files, and as such, may get filled up. Filling up the root file system is not a good idea, so splitting /var from / is often favorable. Another common reason to contain certain directory trees on other file systems is if they are to be housed on separate physical disks, or are separate virtual disks, such as Network File System mounts, described in , or CDROM drives. The <filename>fstab</filename> File file systems mounted with fstab During the boot process (), file systems listed in /etc/fstab are automatically mounted except for the entries containing . This file contains entries in the following format: device /mount-point fstype options dumpfreq passno device An existing device name as explained in . mount-point An existing directory on which to mount the file system. fstype The file system type to pass to &man.mount.8;. The default &os; file system is ufs. options Either for read-write file systems, or for read-only file systems, followed by any other options that may be needed. A common option is for file systems not normally mounted during the boot sequence. Other options are listed in &man.mount.8;. dumpfreq Used by &man.dump.8; to determine which file systems require dumping. If the field is missing, a value of zero is assumed. passno Determines the order in which file systems should be checked. File systems that should be skipped should have their passno set to zero. The root file system needs to be checked before everything else and should have its passno set to one. The other file systems should be set to values greater than one. If more than one file system has the same passno, &man.fsck.8; will attempt to check file systems in parallel if possible. Refer to &man.fstab.5; for more information on the format of /etc/fstab and its options. Using &man.mount.8; file systems mounting File systems are mounted using &man.mount.8;. The most basic syntax is as follows: &prompt.root; mount device mountpoint This command provides many options which are described in &man.mount.8;, The most commonly used options include: Mount Options Mount all the file systems listed in /etc/fstab, except those marked as noauto, excluded by the flag, or those that are already mounted. Do everything except for the actual mount system call. This option is useful in conjunction with the flag to determine what &man.mount.8; is actually trying to do. Force the mount of an unclean file system (dangerous), or the revocation of write access when downgrading a file system's mount status from read-write to read-only. Mount the file system read-only. This is identical to using . fstype Mount the specified file system type or mount only file systems of the given type, if is included. ufs is the default file system type. Update mount options on the file system. Be verbose. Mount the file system read-write. The following options can be passed to as a comma-separated list: nosuid Do not interpret setuid or setgid flags on the file system. This is also a useful security option. Using &man.umount.8; file systems unmounting To unmount a file system use &man.umount.8;. 
This command takes one parameter which can be a mountpoint, device name, or . All forms take to force unmounting, and for verbosity. Be warned that is not generally a good idea as it might crash the computer or damage data on the file system. To unmount all mounted file systems, or just the file system types listed after , use or . Note that does not attempt to unmount the root file system. Processes and Daemons &os; is a multi-tasking operating system. Each program running at any one time is called a process. Every running command starts at least one new process and there are a number of system processes that are run by &os;. Each process is uniquely identified by a number called a process ID (PID). Similar to files, each process has one owner and group, and the owner and group permissions are used to determine which files and devices the process can open. Most processes also have a parent process that started them. For example, the shell is a process, and any command started in the shell is a process which has the shell as its parent process. The exception is a special process called &man.init.8; which is always the first process to start at boot time and which always has a PID of 1. Some programs are not designed to be run with continuous user input and disconnect from the terminal at the first opportunity. For example, a web server responds to web requests, rather than user input. Mail servers are another example of this type of application. These types of programs are known as daemons. The term daemon comes from Greek mythology and represents an entity that is neither good nor evil, and which invisibly performs useful tasks. This is why the BSD mascot is the cheerful-looking daemon with sneakers and a pitchfork. There is a convention to name programs that normally run as daemons with a trailing d. For example, BIND is the Berkeley Internet Name Domain, but the actual program that executes is named. The Apache web server program is httpd and the line printer spooling daemon is lpd. This is only a naming convention. For example, the main mail daemon for the Sendmail application is sendmail, and not maild. Viewing Processes To see the processes running on the system, use &man.ps.1; or &man.top.1;. To display a static list of the currently running processes, their PIDs, how much memory they are using, and the command they were started with, use &man.ps.1;. To display all the running processes and update the display every few seconds in order to interactively see what the computer is doing, use &man.top.1;. By default, &man.ps.1; only shows the commands that are running and owned by the user. For example: &prompt.user; ps PID TT STAT TIME COMMAND 8203 0 Ss 0:00.59 /bin/csh 8895 0 R+ 0:00.00 ps The output from &man.ps.1; is organized into a number of columns. The PID column displays the process ID. PIDs are assigned starting at 1, go up to 99999, then wrap around back to the beginning. However, a PID is not reassigned if it is already in use. The TT column shows the tty the program is running on and STAT shows the program's state. TIME is the amount of time the program has been running on the CPU. This is usually not the elapsed time since the program was started, as most programs spend a lot of time waiting for things to happen before they need to spend time on the CPU. Finally, COMMAND is the command that was used to start the program. A number of different options are available to change the information that is displayed. 
One of the most useful sets is auxww, where displays information about all the running processes of all users, displays the username and memory usage of the process' owner, displays information about daemon processes, and causes &man.ps.1; to display the full command line for each process, rather than truncating it once it gets too long to fit on the screen. The output from &man.top.1; is similar: &prompt.user; top last pid: 9609; load averages: 0.56, 0.45, 0.36 up 0+00:20:03 10:21:46 107 processes: 2 running, 104 sleeping, 1 zombie CPU: 6.2% user, 0.1% nice, 8.2% system, 0.4% interrupt, 85.1% idle Mem: 541M Active, 450M Inact, 1333M Wired, 4064K Cache, 1498M Free ARC: 992M Total, 377M MFU, 589M MRU, 250K Anon, 5280K Header, 21M Other Swap: 2048M Total, 2048M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 557 root 1 -21 r31 136M 42296K select 0 2:20 9.96% Xorg 8198 dru 2 52 0 449M 82736K select 3 0:08 5.96% kdeinit4 8311 dru 27 30 0 1150M 187M uwait 1 1:37 0.98% firefox 431 root 1 20 0 14268K 1728K select 0 0:06 0.98% moused 9551 dru 1 21 0 16600K 2660K CPU3 3 0:01 0.98% top 2357 dru 4 37 0 718M 141M select 0 0:21 0.00% kdeinit4 8705 dru 4 35 0 480M 98M select 2 0:20 0.00% kdeinit4 8076 dru 6 20 0 552M 113M uwait 0 0:12 0.00% soffice.bin 2623 root 1 30 10 12088K 1636K select 3 0:09 0.00% powerd 2338 dru 1 20 0 440M 84532K select 1 0:06 0.00% kwin 1427 dru 5 22 0 605M 86412K select 1 0:05 0.00% kdeinit4 The output is split into two sections. The header (the first five or six lines) shows the PID of the last process to run, the system load averages (which are a measure of how busy the system is), the system uptime (time since the last reboot) and the current time. The other figures in the header relate to how many processes are running, how much memory and swap space has been used, and how much time the system is spending in different CPU states. If the ZFS file system module has been loaded, an ARC line indicates how much data was read from the memory cache instead of from disk. Below the header is a series of columns containing similar information to the output from &man.ps.1;, such as the PID, username, amount of CPU time, and the command that started the process. By default, &man.top.1; also displays the amount of memory space taken by the process. This is split into two columns: one for total size and one for resident size. Total size is how much memory the application has needed and the resident size is how much it is actually using now. &man.top.1; automatically updates the display every two seconds. A different interval can be specified with . Killing Processes One way to communicate with any running process or daemon is to send a signal using &man.kill.1;. There are a number of different signals; some have a specific meaning while others are described in the application's documentation. A user can only send a signal to a process they own and sending a signal to someone else's process will result in a permission denied error. The exception is the root user, who can send signals to anyone's processes. The operating system can also send a signal to a process. If an application is badly written and tries to access memory that it is not supposed to, &os; will send the process the Segmentation Violation signal (SIGSEGV). If an application has been written to use the &man.alarm.3; system call to be alerted after a period of time has elapsed, it will be sent the Alarm signal (SIGALRM). Two signals can be used to stop a process: SIGTERM and SIGKILL. 
SIGTERM is the polite way to kill a process as the process can read the signal, close any log files it may have open, and attempt to finish what it is doing before shutting down. In some cases, a process may ignore SIGTERM if it is in the middle of some task that cannot be interrupted. SIGKILL cannot be ignored by a process. Sending a SIGKILL to a process will usually stop that process there and then. There are a few tasks that cannot be interrupted. For example, if the process is trying to read from a file that is on another computer on the network, and the other computer is unavailable, the process is said to be uninterruptible. Eventually the process will time out, typically after two minutes. As soon as this time out occurs, the process will be killed. Other commonly used signals are SIGHUP, SIGUSR1, and SIGUSR2. Since these are general-purpose signals, different applications will respond differently. For example, after changing a web server's configuration file, the web server needs to be told to re-read its configuration. Restarting httpd would result in a brief outage period on the web server. Instead, send the daemon the SIGHUP signal. Be aware that different daemons will have different behavior, so refer to the documentation for the daemon to determine if SIGHUP will achieve the desired results. Sending a Signal to a Process This example shows how to send a signal to &man.inetd.8;. The &man.inetd.8; configuration file is /etc/inetd.conf, and &man.inetd.8; will re-read this configuration file when it is sent a SIGHUP. Find the PID of the process to send the signal to using &man.pgrep.1;. In this example, the PID for &man.inetd.8; is 198:

&prompt.user; pgrep -l inetd
198  inetd -wW

Use &man.kill.1; to send the signal. Because &man.inetd.8; is owned by root, use &man.su.1; to become root first.

&prompt.user; su
Password:
&prompt.root; /bin/kill -s HUP 198

Like most &unix; commands, &man.kill.1; will not print any output if it is successful. If a signal is sent to a process not owned by that user, the message kill: PID: Operation not permitted will be displayed. Mistyping the PID will either send the signal to the wrong process, which could have negative results, or will send the signal to a PID that is not currently in use, resulting in the error kill: PID: No such process. Why Use <command>/bin/kill</command>? Many shells provide kill as a built-in command, meaning that the shell will send the signal directly, rather than running /bin/kill. Be aware that different shells have a different syntax for specifying the name of the signal to send. Rather than try to learn all of them, it can be simpler to specify /bin/kill. When sending other signals, substitute TERM or KILL with the name of the signal. Killing a random process on the system is a bad idea. In particular, &man.init.8;, PID 1, is special. Running /bin/kill -s KILL 1 is a quick, and unrecommended, way to shut down the system. Always double-check the arguments to &man.kill.1; before pressing Return. Shells shells command line A shell provides a command line interface for interacting with the operating system. A shell receives commands from the input channel and executes them. Many shells provide built-in functions to help with everyday tasks such as file management, file globbing, command line editing, command macros, and environment variables. &os; comes with several shells, including the Bourne shell (&man.sh.1;) and the extended C shell (&man.tcsh.1;).
Other shells are available from the &os; Ports Collection, such as zsh and bash. The shell that is used is really a matter of taste. A C programmer might feel more comfortable with a C-like shell such as &man.tcsh.1;. A &linux; user might prefer bash. Each shell has unique properties that may or may not work with a user's preferred working environment, which is why there is a choice of which shell to use. One common shell feature is filename completion. After a user types the first few letters of a command or filename and presses Tab, the shell completes the rest of the command or filename. Consider two files called foobar and football. To delete foobar, the user might type rm foo and press Tab to complete the filename. But the shell only shows rm foo. It was unable to complete the filename because both foobar and football start with foo. Some shells sound a beep or show all the choices if more than one name matches. The user must then type more characters to identify the desired filename. Typing a t and pressing Tab again is enough to let the shell determine which filename is desired and fill in the rest. environment variables Another feature of the shell is the use of environment variables. Environment variables are name/value pairs stored in the shell's environment. This environment can be read by any program invoked by the shell, and thus contains a lot of program configuration. The table below lists common environment variables and their meanings. Note that the names of environment variables are always in uppercase.

Common Environment Variables

Variable    Description
USER        Current logged-in user's name.
PATH        Colon-separated list of directories to search for binaries.
DISPLAY     Network name of the &xorg; display to connect to, if available.
SHELL       The current shell.
TERM        The name of the user's type of terminal. Used to determine the capabilities of the terminal.
TERMCAP     Database entry of the terminal escape codes to perform various terminal functions.
OSTYPE      Type of operating system.
MACHTYPE    The system's CPU architecture.
EDITOR      The user's preferred text editor.
PAGER       The user's preferred utility for viewing text one page at a time.
MANPATH     Colon-separated list of directories to search for manual pages.
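To list all of the environment variables that are currently set in the shell, along with their values, &man.env.1; may be used:

&prompt.user; env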
Bourne shells How to set an environment variable differs between shells. In &man.tcsh.1; and &man.csh.1;, use setenv to set environment variables. In &man.sh.1; and bash, use export to set the current environment variables. This example sets the default EDITOR to /usr/local/bin/emacs for the &man.tcsh.1; shell:

&prompt.user; setenv EDITOR /usr/local/bin/emacs

The equivalent command for bash would be:

&prompt.user; export EDITOR="/usr/local/bin/emacs"

To expand an environment variable in order to see its current setting, type a $ character in front of its name on the command line. For example, echo $TERM displays the current $TERM setting. Shells treat special characters, known as meta-characters, as special representations of data. The most common meta-character is *, which represents any number of characters in a filename. Meta-characters can be used to perform filename globbing. For example, echo * is equivalent to ls because the shell takes all the files that match * and echo lists them on the command line. To prevent the shell from interpreting a special character, escape it from the shell by starting it with a backslash (\). For example, echo $TERM prints the terminal setting whereas echo \$TERM literally prints the string $TERM. Changing the Shell The easiest way to permanently change the default shell is to use chsh. Running this command will open the editor that is configured in the EDITOR environment variable, which by default is set to &man.vi.1;. Change the Shell: line to the full path of the new shell. Alternatively, use chsh -s, which will set the specified shell without opening an editor. For example, to change the shell to bash:

&prompt.user; chsh -s /usr/local/bin/bash

The new shell must be present in /etc/shells. If the shell was installed from the &os; Ports Collection as described in , it should be automatically added to this file. If it is missing, add it using this command, replacing the path with the path of the shell:

&prompt.root; echo /usr/local/bin/bash >> /etc/shells

Then, rerun &man.chsh.1;. Advanced Shell Techniques Tom Rhodes Written by The &unix; shell is not just a command interpreter; it acts as a powerful tool which allows users to execute commands, redirect their output, redirect their input and chain commands together to improve the final command output. When this functionality is mixed with built-in commands, the user is provided with an environment that can maximize efficiency. Shell redirection is the action of sending the output or the input of a command into another command or into a file. To capture the output of the &man.ls.1; command, for example, into a file, simply redirect the output:

&prompt.user; ls > directory_listing.txt

The directory_listing.txt file will now contain the directory contents. Some commands read their input in a similar manner, such as &man.sort.1;. To sort this listing, redirect the input:

&prompt.user; sort < directory_listing.txt

The input will be sorted and placed on the screen. To redirect that input into another file, one could redirect the output of &man.sort.1; by mixing the direction:

&prompt.user; sort < directory_listing.txt > sorted.txt

In all of the previous examples, the commands are performing redirection using file descriptors. Every &unix; system has file descriptors; however, here we will focus on three, named Standard Input, Standard Output, and Standard Error. Each one has a purpose: input could be a keyboard or a mouse, something that provides input.
Output could be a screen or paper in a printer, for example, and error would be anything that is used for diagnostic or error messages. All three are considered I/O-based file descriptors and are sometimes referred to as streams. Through the use of these descriptors, known in short as stdin, stdout, and stderr, the shell allows output and input to be passed around through various commands and redirected to or from a file. Another method of redirection is the pipe operator. The &unix; pipe operator, |, allows the output of one command to be passed directly to another program. Basically, a pipe allows the standard output of a command to be passed as standard input to another command, for example:

&prompt.user; cat directory_listing.txt | sort | less

In that example, the contents of directory_listing.txt will be sorted and the output passed to &man.less.1;. This allows the user to scroll through the output at their own pace and prevents it from scrolling off the screen.
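As a further illustration of combining a pipe with file redirection, the following command counts the entries in /usr/bin and appends the result to a file; the file name used here is only a placeholder:

&prompt.user; ls /usr/bin | wc -l >> command_count.txt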
Text Editors text editors editors Most &os; configuration is done by editing text files. Because of this, it is a good idea to become familiar with a text editor. &os; comes with a few as part of the base system, and many more are available in the Ports Collection. ee editors &man.ee.1; A simple editor to learn is &man.ee.1;, which stands for easy editor. To start this editor, type ee filename where filename is the name of the file to be edited. Once inside the editor, all of the commands for manipulating the editor's functions are listed at the top of the display. The caret (^) represents Ctrl, so ^e expands to Ctrl+e. To leave &man.ee.1;, press Esc, then choose the leave editor option from the main menu. The editor will prompt to save any changes if the file has been modified. vi editors emacs &os; also comes with more powerful text editors, such as &man.vi.1;, as part of the base system. Other editors, like editors/emacs and editors/vim, are part of the &os; Ports Collection. These editors offer more functionality at the expense of being more complicated to learn. Learning a more powerful editor such as vim or Emacs can save more time in the long run. Many applications which modify files or require typed input will automatically open a text editor. To change the default editor, set the EDITOR environment variable as described in . Devices and Device Nodes The term device is used mostly for hardware-related items in a system, including disks, printers, graphics cards, and keyboards. When &os; boots, the majority of the boot messages refer to devices being detected. A copy of the boot messages is saved to /var/run/dmesg.boot. Each device has a device name and number. For example, ada0 is the first SATA hard drive, while kbd0 represents the keyboard. Most devices in &os; must be accessed through special files called device nodes, which are located in /dev. Manual Pages manual pages The most comprehensive documentation on &os; is in the form of manual pages. Nearly every program on the system comes with a short reference manual explaining the basic operation and available arguments. These manuals can be viewed using man:

&prompt.user; man command

where command is the name of the command to learn about. For example, to learn more about &man.ls.1;, type:

&prompt.user; man ls

Manual pages are divided into sections which represent the type of topic. In &os;, the following sections are available:

1. User commands.
2. System calls and error numbers.
3. Functions in the C libraries.
4. Device drivers.
5. File formats.
6. Games and other diversions.
7. Miscellaneous information.
8. System maintenance and operation commands.
9. System kernel interfaces.

In some cases, the same topic may appear in more than one section of the online manual. For example, there is a chmod user command and a chmod() system call. To tell &man.man.1; which section to display, specify the section number:

&prompt.user; man 1 chmod

This will display the manual page for the user command &man.chmod.1;. References to a particular section of the online manual are traditionally placed in parentheses in written documentation, so &man.chmod.1; refers to the user command and &man.chmod.2; refers to the system call. If the name of the manual page is unknown, use man -k to search for keywords in the manual page descriptions:

&prompt.user; man -k mail

This command displays a list of commands that have the keyword mail in their descriptions. This is equivalent to using &man.apropos.1;.
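For example, the same search could be performed with:

&prompt.user; apropos mail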
To read the descriptions for all of the commands in /usr/bin, type: &prompt.user; cd /usr/bin &prompt.user; man -f * | more or &prompt.user; cd /usr/bin &prompt.user; whatis * |more GNU Info Files Free Software Foundation - &os; includes several applications and utilities produced by - the Free Software Foundation (FSF). In addition to manual + &os; includes several applications and utilities produced + by the Free Software Foundation (FSF). In addition to manual pages, these programs may include hypertext documents called info files. These can be viewed using &man.info.1; or, if editors/emacs is installed, the info mode of emacs. To use &man.info.1;, type: &prompt.user; info For a brief introduction, type h. For a quick command reference, type ?.
Index: head/en_US.ISO8859-1/books/handbook/config/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/config/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/config/chapter.xml (revision 46049) @@ -1,3501 +1,3503 @@ Configuration and Tuning Chern Lee Written by Mike Smith Based on a tutorial written by Matt Dillon Also based on tuning(7) written by Synopsis system configuration system optimization One of the important aspects of &os; is proper system configuration. This chapter explains much of the &os; configuration process, including some of the parameters which can be set to tune a &os; system. After reading this chapter, you will know: The basics of rc.conf configuration and /usr/local/etc/rc.d startup scripts. How to configure and test a network card. How to configure virtual hosts on network devices. How to use the various configuration files in /etc. How to tune &os; using &man.sysctl.8; variables. How to tune disk performance and modify kernel limitations. Before reading this chapter, you should: Understand &unix; and &os; basics (). Be familiar with the basics of kernel configuration and compilation (). Starting Services Tom Rhodes Contributed by services Many users install third-party software on &os; from the Ports Collection and require the installed services to be started upon system initialization. Services such as mail/postfix or www/apache22 are just two of the many software packages which may be started during system initialization. This section explains the procedures available for starting third-party software. In &os;, most included services, such as &man.cron.8;, are started through the system startup scripts. Extended Application Configuration Now that &os; includes rc.d, configuration of application startup is easier and provides more features. Using the key words discussed in , applications can be set to start after certain other services and extra flags can be passed through /etc/rc.conf in place of hard-coded flags in the startup script. A basic script may look similar to the following:

#!/bin/sh
#
# PROVIDE: utility
# REQUIRE: DAEMON
# KEYWORD: shutdown

. /etc/rc.subr

name=utility
rcvar=utility_enable

command="/usr/local/sbin/utility"

load_rc_config $name

#
# DO NOT CHANGE THESE DEFAULT VALUES HERE
# SET THEM IN THE /etc/rc.conf FILE
#
utility_enable=${utility_enable-"NO"}
pidfile=${utility_pidfile-"/var/run/utility.pid"}

run_rc_command "$1"

This script will ensure that the provided utility will be started after the DAEMON pseudo-service. It also provides a method for setting and tracking the process ID (PID). This application could then have the following line placed in /etc/rc.conf:

utility_enable="YES"

This method allows for easier manipulation of command line arguments, inclusion of the default functions provided in /etc/rc.subr, compatibility with &man.rcorder.8;, and provides for easier configuration via rc.conf. Using Services to Start Services Other services can be started using &man.inetd.8;. Working with &man.inetd.8; and its configuration is described in depth in . In some cases, it may make more sense to use &man.cron.8; to start system services. This approach has a number of advantages as &man.cron.8; runs these processes as the owner of the &man.crontab.5;. This allows regular users to start and maintain their own applications. The @reboot feature of &man.cron.8; may be used in place of the time specification.
This causes the job to run when &man.cron.8; is started, normally during system initialization. Configuring &man.cron.8; Tom Rhodes Contributed by cron configuration One of the most useful utilities in &os; is cron. This utility runs in the background and regularly checks /etc/crontab for tasks to execute and searches /var/cron/tabs for custom crontab files. These files are used to schedule tasks which cron runs at the specified times. Each entry in a crontab defines a task to run and is known as a cron job. Two different types of configuration files are used: the system crontab, which should not be modified, and user crontabs, which can be created and edited as needed. The format used by these files is documented in &man.crontab.5;. The format of the system crontab, /etc/crontab includes a who column which does not exist in user crontabs. In the system crontab, cron runs the command as the user specified in this column. In a user crontab, all commands run as the user who created the crontab. User crontabs allow individual users to schedule their own tasks. The root user can also have a user crontab which can be used to schedule tasks that do not exist in the system crontab. Here is a sample entry from the system crontab, /etc/crontab: # /etc/crontab - root's crontab for FreeBSD # # $FreeBSD$ # SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin # #minute hour mday month wday who command # */5 * * * * root /usr/libexec/atrun Lines that begin with the # character are comments. A comment can be placed in the file as a reminder of what and why a desired action is performed. Comments cannot be on the same line as a command or else they will be interpreted as part of the command; they must be on a new line. Blank lines are ignored. The equals (=) character is used to define any environment settings. In this example, it is used to define the SHELL and PATH. If the SHELL is omitted, cron will use the default Bourne shell. If the PATH is omitted, the full path must be given to the command or script to run. This line defines the seven fields used in a system crontab: minute, hour, mday, month, wday, who, and command. The minute field is the time in minutes when the specified command will be run, the hour is the hour when the specified command will be run, the mday is the day of the month, month is the month, and wday is the day of the week. These fields must be numeric values, representing the twenty-four hour clock, or a *, representing all values for that field. The who field only exists in the system crontab and specifies which user the command should be run as. The last field is the command to be executed. This entry defines the values for this cron job. The */5, followed by several more * characters, specifies that /usr/libexec/atrun is invoked by root every five minutes of every hour, of every day and day of the week, of every month. Commands can include any number of switches. However, commands which extend to multiple lines need to be broken with the backslash \ continuation character. Creating a User Crontab To create a user crontab, invoke crontab in editor mode: &prompt.user; crontab -e This will open the user's crontab using the default text editor. The first time a user runs this command, it will open an empty file. Once a user creates a crontab, this command will open that file for editing. 
It is useful to add these lines to the top of the crontab file in order to set the environment variables and to remember the meanings of the fields in the crontab:

SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin
# Order of crontab fields
# minute  hour  mday  month  wday  command

Then add a line for each command or script to run, specifying the time to run the command. This example runs the specified custom Bourne shell script every day at two in the afternoon. Since the path to the script is not specified in PATH, the full path to the script is given:

0 14 * * * /usr/home/dru/bin/mycustomscript.sh

Before using a custom script, make sure it is executable and test it with the limited set of environment variables set by cron. To replicate the environment that would be used to run the above cron entry, use:

env -i SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin HOME=/home/dru LOGNAME=dru /usr/home/dru/bin/mycustomscript.sh

The environment set by cron is discussed in &man.crontab.5;. Checking that scripts operate correctly in a cron environment is especially important if they include any commands that delete files using wildcards. When finished editing the crontab, save the file. It will automatically be installed and cron will read the crontab and run its cron jobs at their specified times. To list the cron jobs in a crontab, use this command:

&prompt.user; crontab -l
0 14 * * * /usr/home/dru/bin/mycustomscript.sh

To remove all of the cron jobs in a user crontab:

&prompt.user; crontab -r
remove crontab for dru? y

Managing Services in &os; Tom Rhodes Contributed by &os; uses the &man.rc.8; system of startup scripts during system initialization and for managing services. The scripts listed in /etc/rc.d provide basic services which can be controlled with the start, stop, and restart options to &man.service.8;. For instance, &man.sshd.8; can be restarted with the following command:

&prompt.root; service sshd restart

This procedure can be used to start services on a running system. Services will be started automatically at boot time as specified in &man.rc.conf.5;. For example, to enable &man.natd.8; at system startup, add the following line to /etc/rc.conf:

natd_enable="YES"

If a line is already present, change the NO to YES. The &man.rc.8; scripts will automatically load any dependent services during the next boot, as described below. Since the &man.rc.8; system is primarily intended to start and stop services at system startup and shutdown time, the start, stop, and restart options will only perform their action if the appropriate /etc/rc.conf variable is set. For instance, sshd restart will only work if sshd_enable is set to YES in /etc/rc.conf. To start, stop, or restart a service regardless of the settings in /etc/rc.conf, these commands should be prefixed with one. For instance, to restart &man.sshd.8; regardless of the current /etc/rc.conf setting, execute the following command:

&prompt.root; service sshd onerestart

To check if a service is enabled in /etc/rc.conf, run the appropriate &man.rc.8; script with rcvar. This example checks to see if &man.sshd.8; is enabled in /etc/rc.conf:

&prompt.root; service sshd rcvar # sshd # sshd_enable="YES" # (default: "")

The # sshd line is output from the above command, not a root console. To determine whether or not a service is running, use status. For instance, to verify that &man.sshd.8; is running:

&prompt.root; service sshd status
sshd is running as pid 433.

In some cases, it is also possible to reload a service.
This attempts to send a signal to an individual service, forcing the service to reload its configuration files. In most cases, this means sending the service a SIGHUP signal. Support for this feature is not included for every service. The &man.rc.8; system is used for network services and it also contributes to most of the system initialization. For instance, when the /etc/rc.d/bgfsck script is executed, it prints out the following message: Starting background file system checks in 60 seconds. This script is used for background file system checks, which occur only during system initialization. Many system services depend on other services to function properly. For example, &man.yp.8; and other RPC-based services may fail to start until after the &man.rpcbind.8; service has started. To resolve this issue, information about dependencies and other meta-data is included in the comments at the top of each startup script. The &man.rcorder.8; program is used to parse these comments during system initialization to determine the order in which system services should be invoked to satisfy the dependencies. The following key word must be included in all startup scripts as it is required by &man.rc.subr.8; to enable the startup script: PROVIDE: Specifies the services this file provides. The following key words may be included at the top of each startup script. They are not strictly necessary, but are useful as hints to &man.rcorder.8;: REQUIRE: Lists services which are required for this service. The script containing this key word will run after the specified services. BEFORE: Lists services which depend on this service. The script containing this key word will run before the specified services. By carefully setting these keywords for each startup script, an administrator has a fine-grained level of control of the startup order of the scripts, without the need for runlevels used by some &unix; operating systems. Additional information can be found in &man.rc.8; and &man.rc.subr.8;. Refer to this article for instructions on how to create custom &man.rc.8; scripts. Managing System-Specific Configuration rc files rc.conf The principal location for system configuration information is /etc/rc.conf. This file contains a wide range of configuration information and it is read at system startup to configure the system. It provides the configuration information for the rc* files. The entries in /etc/rc.conf override the default settings in /etc/defaults/rc.conf. The file containing the default settings should not be edited. Instead, all system-specific changes should be made to /etc/rc.conf. A number of strategies may be applied in clustered applications to separate site-wide configuration from system-specific configuration in order to reduce administration overhead. The recommended approach is to place system-specific configuration into /etc/rc.conf.local. For example, these entries in /etc/rc.conf apply to all systems: sshd_enable="YES" keyrate="fast" defaultrouter="10.1.1.254" Whereas these entries in /etc/rc.conf.local apply to this system only: hostname="node1.example.org" ifconfig_fxp0="inet 10.1.1.1/8" Distribute /etc/rc.conf to every system using an application such as rsync or puppet, while /etc/rc.conf.local remains unique. Upgrading the system will not overwrite /etc/rc.conf, so system configuration information will not be lost. Both /etc/rc.conf and /etc/rc.conf.local are parsed by &man.sh.1;. This allows system operators to create complex configuration scenarios. 
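For example, because these files are processed by &man.sh.1;, a shared configuration could enable a service only on machines where its configuration file is present. The service and file names below are hypothetical:

# Enable the utility service only where its configuration file exists.
if [ -f /usr/local/etc/utility.conf ]; then
    utility_enable="YES"
fi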
Refer to &man.rc.conf.5; for further information on this topic. Setting Up Network Interface Cards Marc Fonvieille Contributed by network cards configuration Adding and configuring a network interface card (NIC) is a common task for any &os; administrator. Locating the Correct Driver network cards driver First, determine the model of the NIC and the chip it uses. &os; supports a wide variety of NICs. Check the Hardware Compatibility List for the &os; release to see if the NIC is supported. If the NIC is supported, determine the name of the &os; driver for the NIC. Refer to /usr/src/sys/conf/NOTES and /usr/src/sys/arch/conf/NOTES for the list of NIC drivers with some information about the supported chipsets. When in doubt, read the manual page of the driver as it will provide more information about the supported hardware and any known limitations of the driver. The drivers for common NICs are already present in the GENERIC kernel, meaning the NIC should be probed during boot. The system's boot messages can be viewed by typing more /var/run/dmesg.boot and using the spacebar to scroll through the text. In this example, two Ethernet NICs using the &man.dc.4; driver are present on the system:

dc0: <82c169 PNIC 10/100BaseTX> port 0xa000-0xa0ff mem 0xd3800000-0xd38000ff irq 15 at device 11.0 on pci0
miibus0: <MII bus> on dc0
bmtphy0: <BCM5201 10/100baseTX PHY> PHY 1 on miibus0
bmtphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc0: Ethernet address: 00:a0:cc:da:da:da
dc0: [ITHREAD]
dc1: <82c169 PNIC 10/100BaseTX> port 0x9800-0x98ff mem 0xd3000000-0xd30000ff irq 11 at device 12.0 on pci0
miibus1: <MII bus> on dc1
bmtphy1: <BCM5201 10/100baseTX PHY> PHY 1 on miibus1
bmtphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc1: Ethernet address: 00:a0:cc:da:da:db
dc1: [ITHREAD]

If the driver for the NIC is not present in GENERIC, but a driver is available, the driver will need to be loaded before the NIC can be configured and used. This may be accomplished in one of two ways: The easiest way is to load a kernel module for the NIC using &man.kldload.8;. To also automatically load the driver at boot time, add the appropriate line to /boot/loader.conf. Not all NIC drivers are available as modules. Alternatively, statically compile support for the NIC into a custom kernel. Refer to /usr/src/sys/conf/NOTES, /usr/src/sys/arch/conf/NOTES and the manual page of the driver to determine which line to add to the custom kernel configuration file. For more information about recompiling the kernel, refer to . If the NIC was detected at boot, the kernel does not need to be recompiled. Using &windows; <acronym>NDIS</acronym> Drivers NDIS NDISulator &windows; drivers &microsoft.windows; device drivers KLD (kernel loadable object) Unfortunately, there are still many vendors that do not provide schematics for their drivers to the open source community because they regard such information as trade secrets. Consequently, the developers of &os; and other operating systems are left with two choices: develop the drivers by a long and painstaking process of reverse engineering or use the existing driver binaries available for &microsoft.windows; platforms. &os; provides native support for the Network Driver Interface Specification (NDIS). It includes &man.ndisgen.8;, which can be used to convert a &windowsxp; driver into a format that can be used on &os;. Because the &man.ndis.4; driver uses a &windowsxp; binary, it only runs on &i386; and amd64 systems. PCI, CardBus, PCMCIA, and USB devices are supported.
To use &man.ndisgen.8;, three things are needed: &os; kernel sources. A &windowsxp; driver binary with a .SYS extension. A &windowsxp; driver configuration file with a .INF extension. Download the .SYS and .INF files for the specific NIC. Generally, these can be found on the driver CD or at the vendor's website. The following examples use W32DRIVER.SYS and W32DRIVER.INF. The driver bit width must match the version of &os;. For &os;/i386, use a &windows; 32-bit driver. For &os;/amd64, a &windows; 64-bit driver is needed. The next step is to compile the driver binary into a loadable kernel module. As root, use &man.ndisgen.8;: &prompt.root; ndisgen /path/to/W32DRIVER.INF /path/to/W32DRIVER.SYS This command is interactive and prompts for any extra information it requires. A new kernel module will be generated in the current directory. Use &man.kldload.8; to load the new module: &prompt.root; kldload ./W32DRIVER_SYS.ko In addition to the generated kernel module, the ndis.ko and if_ndis.ko modules must be loaded. This should happen automatically when any module that depends on &man.ndis.4; is loaded. If not, load them manually, using the following commands: &prompt.root; kldload ndis &prompt.root; kldload if_ndis The first command loads the &man.ndis.4; miniport driver wrapper and the second loads the generated NIC driver. Check &man.dmesg.8; to see if there were any load errors. If all went well, the output should be similar to the following: ndis0: <Wireless-G PCI Adapter> mem 0xf4100000-0xf4101fff irq 3 at device 8.0 on pci1 ndis0: NDIS API version: 5.0 ndis0: Ethernet address: 0a:b1:2c:d3:4e:f5 ndis0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps ndis0: 11g rates: 6Mbps 9Mbps 12Mbps 18Mbps 36Mbps 48Mbps 54Mbps From here, ndis0 can be configured like any other NIC. To configure the system to load the &man.ndis.4; modules at boot time, copy the generated module, W32DRIVER_SYS.ko, to /boot/modules. Then, add the following line to /boot/loader.conf: W32DRIVER_SYS_load="YES" Configuring the Network Card network cards configuration Once the right driver is loaded for the NIC, the card needs to be configured. It may have been configured at installation time by &man.sysinstall.8;. To display the NIC configuration, enter the following command: &prompt.user; ifconfig dc0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=80008<VLAN_MTU,LINKSTATE> ether 00:a0:cc:da:da:da inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255 media: Ethernet autoselect (100baseTX <full-duplex>) status: active dc1: flags=8802<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=80008<VLAN_MTU,LINKSTATE> ether 00:a0:cc:da:da:db inet 10.0.0.1 netmask 0xffffff00 broadcast 10.0.0.255 media: Ethernet 10baseT/UTP status: no carrier lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 options=3<RXCSUM,TXCSUM> inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4 inet6 ::1 prefixlen 128 inet 127.0.0.1 netmask 0xff000000 nd6 options=3<PERFORMNUD,ACCEPT_RTADV> In this example, the following devices were displayed: dc0: The first Ethernet interface. dc1: The second Ethernet interface. lo0: The loopback device. &os; uses the driver name followed by the order in which the card is detected at boot to name the NIC. For example, sis2 is the third NIC on the system using the &man.sis.4; driver. In this example, dc0 is up and running. The key indicators are: UP means that the card is configured and ready. The card has an Internet (inet) address, 192.168.1.3. 
It has a valid subnet mask (netmask), where 0xffffff00 is the same as 255.255.255.0. It has a valid broadcast address, 192.168.1.255. The MAC address of the card (ether) is 00:a0:cc:da:da:da. The physical media selection is on autoselection mode (media: Ethernet autoselect (100baseTX <full-duplex>)). In this example, dc1 is configured to run with 10baseT/UTP media. For more information on available media types for a driver, refer to its manual page. The status of the link (status) is active, indicating that the carrier signal is detected. For dc1, the status: no carrier status is normal when an Ethernet cable is not plugged into the card. If the &man.ifconfig.8; output had shown something similar to: dc0: flags=8843<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=80008<VLAN_MTU,LINKSTATE> ether 00:a0:cc:da:da:da media: Ethernet autoselect (100baseTX <full-duplex>) status: active it would indicate the card has not been configured. The card must be configured as root. The NIC configuration can be performed from the command line with &man.ifconfig.8; but will not persist after a reboot unless the configuration is also added to /etc/rc.conf. Add a line for each NIC present on the system, as seen in this example: ifconfig_dc0="inet 192.168.1.3 netmask 255.255.255.0" ifconfig_dc1="inet 10.0.0.1 netmask 255.255.255.0 media 10baseT/UTP" Replace dc0 and dc1 and the IP address information with the correct values for the system. Refer to the man page for the driver, &man.ifconfig.8;, and &man.rc.conf.5; for more details about the allowed options and the syntax of /etc/rc.conf. If the network was configured during installation, some entries for the NIC(s) may be already present. Double check /etc/rc.conf before adding any lines. If the network is not using DNS, edit /etc/hosts to add the names and IP addresses of the hosts on the LAN, if they are not already there. For more information, refer to &man.hosts.5; and to /usr/share/examples/etc/hosts. If there is no DHCP server and access to the Internet is needed, manually configure the default gateway and the nameserver: &prompt.root; echo 'defaultrouter="your_default_router"' >> /etc/rc.conf &prompt.root; echo 'nameserver your_DNS_server' >> /etc/resolv.conf Testing and Troubleshooting Once the necessary changes to /etc/rc.conf are saved, a reboot can be used to test the network configuration and to verify that the system restarts without any configuration errors. Alternatively, apply the settings to the networking system with this command: &prompt.root; service netif restart If a default gateway has been set in /etc/rc.conf, also issue this command: &prompt.root; service routing restart Once the networking system has been relaunched, test the NICs. 
Testing the Ethernet Card network cards testing To verify that an Ethernet card is configured correctly, &man.ping.8; the interface itself, and then &man.ping.8; another machine on the LAN: &prompt.user; ping -c5 192.168.1.3 PING 192.168.1.3 (192.168.1.3): 56 data bytes 64 bytes from 192.168.1.3: icmp_seq=0 ttl=64 time=0.082 ms 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.074 ms 64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.076 ms 64 bytes from 192.168.1.3: icmp_seq=3 ttl=64 time=0.108 ms 64 bytes from 192.168.1.3: icmp_seq=4 ttl=64 time=0.076 ms --- 192.168.1.3 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 0.074/0.083/0.108/0.013 ms &prompt.user; ping -c5 192.168.1.2 PING 192.168.1.2 (192.168.1.2): 56 data bytes 64 bytes from 192.168.1.2: icmp_seq=0 ttl=64 time=0.726 ms 64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.766 ms 64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.700 ms 64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.747 ms 64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.704 ms --- 192.168.1.2 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 0.700/0.729/0.766/0.025 ms To test network resolution, use the host name instead of the IP address. If there is no DNS server on the network, /etc/hosts must first be configured. Troubleshooting network cards troubleshooting When troubleshooting hardware and software configurations, check the simple things first. Is the network cable plugged in? Are the network services properly configured? Is the firewall configured correctly? Is the NIC supported by &os;? Before sending a bug report, always check the Hardware Notes, update the version of &os; to the latest STABLE version, check the mailing list archives, and search the Internet. If the card works, yet performance is poor, read through &man.tuning.7;. Also, check the network configuration as incorrect network settings can cause slow connections. Some users experience one or two device timeout messages, which is normal for some cards. If they continue, or are bothersome, determine if the device is conflicting with another device. Double check the cable connections. Consider trying another card. To resolve watchdog timeout errors, first check the network cable. Many cards require a PCI slot which supports bus mastering. On some old motherboards, only one PCI slot allows it, usually slot 0. Check the NIC and the motherboard documentation to determine if that may be the problem. No route to host messages occur if the system is unable to route a packet to the destination host. This can happen if no default route is specified or if a cable is unplugged. Check the output of netstat -rn and make sure there is a valid route to the host. If there is not, read . ping: sendto: Permission denied error messages are often caused by a misconfigured firewall. If a firewall is enabled on &os; but no rules have been defined, the default policy is to deny all traffic, even &man.ping.8;. Refer to for more information. Sometimes performance of the card is poor or below average. In these cases, try setting the media selection mode from autoselect to the correct media selection. While this works for most hardware, it may or may not resolve the issue. Again, check all the network settings, and refer to &man.tuning.7;. Virtual Hosts virtual hosts IP aliases A common use of &os; is virtual site hosting, where one server appears to the network as many servers. 
This is achieved by assigning multiple network addresses to a single interface. A given network interface has one real address, and may have any number of alias addresses. These aliases are normally added by placing alias entries in /etc/rc.conf, as seen in this example: ifconfig_fxp0_alias0="inet xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx" Alias entries must start with alias0 using a sequential number such as alias0, alias1, and so on. The configuration process will stop at the first missing number. The calculation of alias netmasks is important. For a given interface, there must be one address which correctly represents the network's netmask. Any other addresses which fall within this network must have a netmask of all 1s, expressed as either 255.255.255.255 or 0xffffffff. For example, consider the case where the fxp0 interface is connected to two networks: 10.1.1.0 with a netmask of 255.255.255.0 and 202.0.75.16 with a netmask of 255.255.255.240. The system is to be configured to appear in the ranges 10.1.1.1 through 10.1.1.5 and 202.0.75.17 through 202.0.75.20. Only the first address in a given network range should have a real netmask. All the rest (10.1.1.2 through 10.1.1.5 and 202.0.75.18 through 202.0.75.20) must be configured with a netmask of 255.255.255.255. The following /etc/rc.conf entries configure the adapter correctly for this scenario: ifconfig_fxp0="inet 10.1.1.1 netmask 255.255.255.0" ifconfig_fxp0_alias0="inet 10.1.1.2 netmask 255.255.255.255" ifconfig_fxp0_alias1="inet 10.1.1.3 netmask 255.255.255.255" ifconfig_fxp0_alias2="inet 10.1.1.4 netmask 255.255.255.255" ifconfig_fxp0_alias3="inet 10.1.1.5 netmask 255.255.255.255" ifconfig_fxp0_alias4="inet 202.0.75.17 netmask 255.255.255.240" ifconfig_fxp0_alias5="inet 202.0.75.18 netmask 255.255.255.255" ifconfig_fxp0_alias6="inet 202.0.75.19 netmask 255.255.255.255" ifconfig_fxp0_alias7="inet 202.0.75.20 netmask 255.255.255.255" Configuring System Logging Niclas Zeising Contributed by system logging syslog &man.syslogd.8; Generating and reading system logs is an important aspect of system administration. The information in system logs can be used to detect hardware and software issues as well as application and system configuration errors. This information also plays an important role in security auditing and incident response. Most system daemons and applications will generate log entries. &os; provides a system logger, syslogd, to manage logging. By default, syslogd is started when the system boots. This is controlled by the variable syslogd_enable in /etc/rc.conf. There are numerous application arguments that can be set using syslogd_flags in /etc/rc.conf. Refer to &man.syslogd.8; for more information on the available arguments. This section describes how to configure the &os; system logger for both local and remote logging and how to perform log rotation and log management. Configuring Local Logging syslog.conf The configuration file, /etc/syslog.conf, controls what syslogd does with log entries as they are received. There are several parameters to control the handling of incoming events. The facility describes which subsystem generated the message, such as the kernel or a daemon, and the level describes the severity of the event that occurred. This makes it possible to configure if and where a log message is logged, depending on the facility and level. 
It is also possible to take action depending on the application that sent the message, and in the case of remote logging, the hostname of the machine generating the logging event. This configuration file contains one line per action, where the syntax for each line is a selector field followed by an action field. The syntax of the selector field is facility.level which will match log messages from facility at level level or higher. It is also possible to add an optional comparison flag before the level to specify more precisely what is logged. Multiple selector fields can be used for the same action, and are separated with a semicolon (;). Using * will match everything. The action field denotes where to send the log message, such as to a file or remote log host. As an example, here is the default syslog.conf from &os;:

# $&os;$
#
# Spaces ARE valid field separators in this file. However,
# other *nix-like systems still insist on using tabs as field
# separators. If you are sharing this file between systems, you
# may want to use only tabs as field separators here.
# Consult the syslog.conf(5) manpage.
*.err;kern.warning;auth.notice;mail.crit              /dev/console
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err  /var/log/messages
security.*                                            /var/log/security
auth.info;authpriv.info                               /var/log/auth.log
mail.info                                             /var/log/maillog
lpr.info                                              /var/log/lpd-errs
ftp.info                                              /var/log/xferlog
cron.*                                                /var/log/cron
!-devd
*.=debug                                              /var/log/debug.log
*.emerg                                               *
# uncomment this to log all writes to /dev/console to /var/log/console.log
#console.info                                         /var/log/console.log
# uncomment this to enable logging of all log messages to /var/log/all.log
# touch /var/log/all.log and chmod it to mode 600 before it will work
#*.*                                                  /var/log/all.log
# uncomment this to enable logging to a remote loghost named loghost
#*.*                                                  @loghost
# uncomment these if you're running inn
# news.crit                                           /var/log/news/news.crit
# news.err                                            /var/log/news/news.err
# news.notice                                         /var/log/news/news.notice
# Uncomment this if you wish to see messages produced by devd
# !devd
# *.>=info
!ppp
*.*                                                   /var/log/ppp.log
!*

In this example: Line 8 matches all messages with a level of err or higher, as well as kern.warning, auth.notice and mail.crit, and sends these log messages to the console (/dev/console). Line 12 matches all messages from the mail facility at level info or above and logs the messages to /var/log/maillog. Line 17 uses a comparison flag (=) to only match messages at level debug and logs them to /var/log/debug.log. Line 33 is an example usage of a program specification. This makes the rules following it only valid for the specified program. In this case, only the messages generated by ppp are logged to /var/log/ppp.log. The available levels, in order from most to least critical are emerg, alert, crit, err, warning, notice, info, and debug. The facilities, in no particular order, are auth, authpriv, console, cron, daemon, ftp, kern, lpr, mail, mark, news, security, syslog, user, uucp, and local0 through local7. Be aware that other operating systems might have different facilities. To log everything of level notice and higher to /var/log/daemon.log, add the following entry:

daemon.notice                                         /var/log/daemon.log

For more information about the different levels and facilities, refer to &man.syslog.3; and &man.syslogd.8;. For more information about /etc/syslog.conf, its syntax, and more advanced usage examples, see &man.syslog.conf.5;.
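After editing /etc/syslog.conf, create any new log files referenced by the added entries and restart syslogd so that the changes take effect. For the daemon.notice entry above, that could be done with:

&prompt.root; touch /var/log/daemon.log
&prompt.root; service syslogd restart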
Log Management and Rotation newsyslog newsyslog.conf log rotation log management Log files can grow quickly, taking up disk space and making it more difficult to locate useful information. Log management attempts to mitigate this. In &os;, newsyslog is used to manage log files. This built-in program periodically rotates and compresses log files, and optionally creates missing log files and signals programs when log files are moved. The log files may be generated by syslogd or by any other program which generates log files. While newsyslog is normally run from &man.cron.8;, it is not a system daemon. In the default configuration, it runs every hour. To know which actions to take, newsyslog reads its configuration file, /etc/newsyslog.conf. This file contains one line for each log file that newsyslog manages. Each line states the file owner, permissions, when to rotate that file, optional flags that affect log rotation, such as compression, and programs to signal when the log is rotated. Here is the default configuration in &os;:

# configuration file for newsyslog
# $FreeBSD$
#
# Entries which do not specify the '/pid_file' field will cause the
# syslogd process to be signalled when that log file is rotated. This
# action is only appropriate for log files which are written to by the
# syslogd process (ie, files listed in /etc/syslog.conf). If there
# is no process which needs to be signalled when a given log file is
# rotated, then the entry for that file should include the 'N' flag.
#
# The 'flags' field is one or more of the letters: BCDGJNUXZ or a '-'.
#
# Note: some sites will want to select more restrictive protections than the
# defaults. In particular, it may be desirable to switch many of the 644
# entries to 640 or 600. For example, some sites will consider the
# contents of maillog, messages, and lpd-errs to be confidential. In the
# future, these defaults may change to more conservative ones.
#
# logfilename          [owner:group]    mode count size when  flags [/pid_file] [sig_num]
/var/log/all.log                        600  7     *    @T00  J
/var/log/amd.log                        644  7     100  *     J
/var/log/auth.log                       600  7     100  @0101T JC
/var/log/console.log                    600  5     100  *     J
/var/log/cron                           600  3     100  *     JC
/var/log/daily.log                      640  7     *    @T00  JN
/var/log/debug.log                      600  7     100  *     JC
/var/log/kerberos.log                   600  7     100  *     J
/var/log/lpd-errs                       644  7     100  *     JC
/var/log/maillog                        640  7     *    @T00  JC
/var/log/messages                       644  5     100  @0101T JC
/var/log/monthly.log                    640  12    *    $M1D0 JN
/var/log/pflog                          600  3     100  *     JB    /var/run/pflogd.pid
/var/log/ppp.log        root:network    640  3     100  *     JC
/var/log/devd.log                       644  3     100  *     JC
/var/log/security                       600  10    100  *     JC
/var/log/sendmail.st                    640  10    *    168   B
/var/log/utx.log                        644  3     *    @01T05 B
/var/log/weekly.log                     640  5     1    $W6D0 JN
/var/log/xferlog                        600  7     100  *     JC

Each line starts with the name of the log to be rotated, optionally followed by an owner and group for both rotated and newly created files. The mode field sets the permissions on the log file and count denotes how many rotated log files should be kept. The size and when fields tell newsyslog when to rotate the file. A log file is rotated when either its size is larger than the size field or when the time in the when field has passed. An asterisk (*) means that this field is ignored. The flags field gives further instructions, such as how to compress the rotated file or to create the log file if it is missing. The last two fields are optional and specify the name of the Process ID (PID) file of a process and a signal number to send to that process when the file is rotated.
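As an illustration of this format, an entry for a hypothetical application log that keeps seven compressed rotations, rotates the file whenever it grows beyond 100 kilobytes, and creates it if it is missing could look like this; the file name is only an example:

/var/log/utility.log                    640  7     100  *     JC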
For more information on all fields, valid flags, and how to specify the rotation time, refer to &man.newsyslog.conf.5;. Since newsyslog is run from &man.cron.8;, it cannot rotate files more often than it is scheduled to run from &man.cron.8;. Configuring Remote Logging Tom Rhodes Contributed by Monitoring the log files of multiple hosts can become unwieldy as the number of systems increases. Configuring centralized logging can reduce some of the administrative burden of log file administration. In &os;, centralized log file aggregation, merging, and rotation can be configured using syslogd and newsyslog. This section demonstrates an example configuration, where host A, named logserv.example.com, will collect logging information for the local network. Host B, named logclient.example.com, will be configured to pass logging information to the logging server. Log Server Configuration A log server is a system that has been configured to accept logging information from other hosts. Before configuring a log server, check the following: If there is a firewall between the logging server and any logging clients, ensure that the firewall ruleset allows UDP port 514 for both the clients and the server. The logging server and all client machines must have forward and reverse entries in the local DNS. If the network does not have a DNS server, create entries in each system's /etc/hosts. Proper name resolution is required so that log entries are not rejected by the logging server. On the log server, edit /etc/syslog.conf to specify the name of the client to receive log entries from, the logging facility to be used, and the name of the log to store the host's log entries. This example adds the hostname of B, logs all facilities, and stores the log entries in /var/log/logclient.log. Sample Log Server Configuration

+logclient.example.com
*.*     /var/log/logclient.log

When adding multiple log clients, add a similar two-line entry for each client. More information about the available facilities may be found in &man.syslog.conf.5;. Next, configure /etc/rc.conf:

syslogd_enable="YES"
syslogd_flags="-a logclient.example.com -v -v"

The first entry starts syslogd at system boot. The second entry allows log entries from the specified client. The -v -v increases the verbosity of logged messages. This is useful for tweaking facilities as administrators are able to see what type of messages are being logged under each facility. Multiple -a options may be specified to allow logging from multiple clients. IP addresses and whole netblocks may also be specified. Refer to &man.syslogd.8; for a full list of possible options. Finally, create the log file:

&prompt.root; touch /var/log/logclient.log

At this point, syslogd should be restarted and verified:

&prompt.root; service syslogd restart
&prompt.root; pgrep syslog

If a PID is returned, the server restarted successfully, and client configuration can begin. If the server did not restart, consult /var/log/messages for the error. Log Client Configuration A logging client sends log entries to a logging server on the network. The client also keeps a local copy of its own logs. Once a logging server has been configured, edit /etc/rc.conf on the logging client:

syslogd_enable="YES"
syslogd_flags="-s -v -v"

The first entry enables syslogd on boot up. The second entry prevents logs from being accepted by this client from other hosts (-s) and increases the verbosity of logged messages. Next, define the logging server in the client's /etc/syslog.conf.
In this example, all logged facilities are sent to a remote system, denoted by the @ symbol, with the specified hostname: *.* @logserv.example.com After saving the edit, restart syslogd for the changes to take effect: &prompt.root; service syslogd restart To test that log messages are being sent across the network, use &man.logger.1; on the client to send a message to syslogd: &prompt.root; logger "Test message from logclient" This message should now exist both in /var/log/messages on the client and /var/log/logclient.log on the log server. Debugging Log Servers If no messages are being received on the log server, the cause is most likely a network connectivity issue, a hostname resolution issue, or a typo in a configuration file. To isolate the cause, ensure that both the logging server and the logging client are able to ping each other using the hostname specified in their /etc/rc.conf. If this fails, check the network cabling, the firewall ruleset, and the hostname entries in the DNS server or /etc/hosts on both the logging server and clients. Repeat until the ping is successful from both hosts. If the ping succeeds on both hosts but log messages are still not being received, temporarily increase logging verbosity to narrow down the configuration issue. In the following example, /var/log/logclient.log on the logging server is empty and /var/log/messages on the logging client does not indicate a reason for the failure. To increase debugging output, edit the syslogd_flags entry on the logging server and issue a restart: syslogd_flags="-d -a logclien.example.com -v -v" &prompt.root; service syslogd restart Debugging data similar to the following will flash on the console immediately after the restart: logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel Logging to FILE /var/log/messages syslogd: kernel boot file is /boot/kernel/kernel cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; rejected in rule 0 due to name mismatch. In this example, the log messages are being rejected due to a typo which results in a hostname mismatch. The client's hostname should be logclient, not logclien. Fix the typo, issue a restart, and verify the results: &prompt.root; service syslogd restart logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel syslogd: kernel boot file is /boot/kernel/kernel logmsg: pri 166, flags 17, from logserv.example.com, msg Dec 10 20:55:02 <syslog.err> logserv.example.com syslogd: exiting on signal 2 cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; accepted in rule 0. logmsg: pri 15, flags 0, from logclient.example.com, msg Dec 11 02:01:28 trhodes: Test message 2 Logging to FILE /var/log/logclient.log Logging to FILE /var/log/messages At this point, the messages are being properly received and placed in the correct file. Security Considerations As with any network service, security requirements should be considered before implementing a logging server. Log files may contain sensitive data about services enabled on the local host, user accounts, and configuration data. Network data sent from the client to the server will not be encrypted or password protected. 
If a need for encryption exists, consider using security/stunnel, which will transmit the logging data over an encrypted tunnel. Local security is also an issue. Log files are not encrypted during use or after log rotation. Local users may access log files to gain additional insight into system configuration. Setting proper permissions on log files is critical. The built-in log rotator, newsyslog, supports setting permissions on newly created and rotated log files. Setting log files to mode 600 should prevent unwanted access by local users. Refer to &man.newsyslog.conf.5; for additional information. Configuration Files <filename>/etc</filename> Layout There are a number of directories in which configuration information is kept. These include: /etc Generic system-specific configuration information. /etc/defaults Default versions of system configuration files. /etc/mail Extra &man.sendmail.8; configuration and other MTA configuration files. /etc/ppp Configuration for both user- and kernel-ppp programs. /etc/namedb Default location for &man.named.8; data. Normally named.conf and zone files are stored here. /usr/local/etc Configuration files for installed applications. May contain per-application subdirectories. /usr/local/etc/rc.d &man.rc.8; scripts for installed applications. /var/db Automatically generated system-specific database files, such as the package database and the &man.locate.1; database. Hostnames hostname DNS <filename>/etc/resolv.conf</filename> resolv.conf How a &os; system accesses the Internet Domain Name System (DNS) is controlled by &man.resolv.conf.5;. The most common entries to /etc/resolv.conf are: nameserver The IP address of a name server the resolver should query. The servers are queried in the order listed with a maximum of three. search Search list for hostname lookup. This is normally determined by the domain of the local hostname. domain The local domain name. A typical /etc/resolv.conf looks like this: search example.com nameserver 147.11.1.11 nameserver 147.11.100.30 Only one of the search and domain options should be used. When using DHCP, &man.dhclient.8; usually rewrites /etc/resolv.conf with information received from the DHCP server. <filename>/etc/hosts</filename> hosts /etc/hosts is a simple text database which works in conjunction with DNS and NIS to provide host name to IP address mappings. Entries for local computers connected via a LAN can be added to this file for simplistic naming purposes instead of setting up a &man.named.8; server. Additionally, /etc/hosts can be used to provide a local record of Internet names, reducing the need to query external DNS servers for commonly accessed names. # $&os;$ # # # Host Database # # This file should contain the addresses and aliases for local hosts that # share this file. Replace 'my.domain' below with the domainname of your # machine. # # In the presence of the domain name service or NIS, this file may # not be consulted at all; see /etc/nsswitch.conf for the resolution order. # # ::1 localhost localhost.my.domain 127.0.0.1 localhost localhost.my.domain # # Imaginary network. #10.0.0.2 myname.my.domain myname #10.0.0.3 myfriend.my.domain myfriend # # According to RFC 1918, you can use the following IP networks for # private nets which will never be connected to the Internet: # # 10.0.0.0 - 10.255.255.255 # 172.16.0.0 - 172.31.255.255 # 192.168.0.0 - 192.168.255.255 # # In case you want to be able to connect to the Internet, you need # real official assigned numbers. 
Do not try to invent your own network # numbers but instead get one from your network provider (if any) or # from your regional registry (ARIN, APNIC, LACNIC, RIPE NCC, or AfriNIC.) # The format of /etc/hosts is as follows: [Internet address] [official hostname] [alias1] [alias2] ... For example: 10.0.0.1 myRealHostname.example.com myRealHostname foobar1 foobar2 Consult &man.hosts.5; for more information. Tuning with &man.sysctl.8; sysctl tuning with sysctl &man.sysctl.8; is used to make changes to a running &os; system. This includes many advanced options of the TCP/IP stack and virtual memory system that can dramatically improve performance for an experienced system administrator. Over five hundred system variables can be read and set using &man.sysctl.8;. At its core, &man.sysctl.8; serves two functions: to read and to modify system settings. To view all readable variables: &prompt.user; sysctl -a To read a particular variable, specify its name: &prompt.user; sysctl kern.maxproc kern.maxproc: 1044 To set a particular variable, use the variable=value syntax: &prompt.root; sysctl kern.maxfiles=5000 kern.maxfiles: 2088 -> 5000 Settings of sysctl variables are usually either strings, numbers, or booleans, where a boolean is 1 for yes or 0 for no. To automatically set some variables each time the machine boots, add them to /etc/sysctl.conf. For more information, refer to &man.sysctl.conf.5; and . <filename>sysctl.conf</filename> sysctl.conf sysctl The configuration file for &man.sysctl.8;, /etc/sysctl.conf, looks much like /etc/rc.conf. Values are set in a variable=value form. The specified values are set after the system goes into multi-user mode. Not all variables are settable in this mode. For example, to turn off logging of fatal signal exits and prevent users from seeing processes started by other users, the following tunables can be set in /etc/sysctl.conf: # Do not log fatal signal exits (e.g., sig 11) kern.logsigexit=0 # Prevent users from seeing information about processes that # are being run under another UID. security.bsd.see_other_uids=0 &man.sysctl.8; Read-only Tom Rhodes Contributed by In some cases it may be desirable to modify read-only &man.sysctl.8; values, which will require a reboot of the system. For instance, on some laptop models the &man.cardbus.4; device will not probe memory ranges and will fail with errors similar to: cbb0: Could not map register memory device_probe_and_attach: cbb0 attach returned 12 The fix requires the modification of a read-only &man.sysctl.8; setting by adding the appropriate tunable to /boot/loader.conf and rebooting. Now &man.cardbus.4; should work properly. Tuning Disks The following section will discuss various tuning mechanisms and options which may be applied to disk devices. In many cases, disks with mechanical parts, such as SCSI drives, will be the bottleneck driving down the overall system performance. While a solution is to install a drive without mechanical parts, such as a solid state drive, mechanical drives are not going away anytime in the near future. When tuning disks, it is advisable to utilize the features of the &man.iostat.8; command to test various changes to the system. This command will allow the user to obtain valuable information on system I/O. Sysctl Variables <varname>vfs.vmiodirenable</varname> vfs.vmiodirenable The vfs.vmiodirenable &man.sysctl.8; variable may be set to either 0 (off) or 1 (on). It is set to 1 by default. This variable controls how directories are cached by the system.
Most directories are small, using just a single fragment (typically 1 K) in the file system and typically 512 bytes in the buffer cache. With this variable turned off, the buffer cache will only cache a fixed number of directories, even if the system has a huge amount of memory. When turned on, this &man.sysctl.8; allows the buffer cache to use the VM page cache to cache the directories, making all the memory available for caching directories. However, the minimum in-core memory used to cache a directory is the physical page size (typically 4 K) rather than 512 bytes. Keeping this option enabled is recommended if the system is running any services which manipulate large numbers of files. Such services can include web caches, large mail systems, and news systems. Keeping this option on will generally not reduce performance, even with the wasted memory, but one should experiment to find out. <varname>vfs.write_behind</varname> vfs.write_behind The vfs.write_behind &man.sysctl.8; variable defaults to 1 (on). This tells the file system to issue media writes as full clusters are collected, which typically occurs when writing large sequential files. This avoids saturating the buffer cache with dirty buffers when it would not benefit I/O performance. However, this may stall processes and under certain circumstances should be turned off. <varname>vfs.hirunningspace</varname> vfs.hirunningspace The vfs.hirunningspace &man.sysctl.8; variable determines how much outstanding write I/O may be queued to disk controllers system-wide at any given instant. The default is usually sufficient, but on machines with many disks, try bumping it up to four or five megabytes. Setting too high a value which exceeds the buffer cache's write threshold can lead to bad clustering performance. Do not set this value arbitrarily high as higher write values may add latency to reads occurring at the same time. There are various other buffer cache and VM page cache related &man.sysctl.8; values. Modifying these values is not recommended as the VM system does a good job of automatically tuning itself. <varname>vm.swap_idle_enabled</varname> vm.swap_idle_enabled The vm.swap_idle_enabled &man.sysctl.8; variable is useful in large multi-user systems with many active login users and lots of idle processes. Such systems tend to generate continuous pressure on free memory reserves. Turning this feature on and tweaking the swapout hysteresis (in idle seconds) via vm.swap_idle_threshold1 and vm.swap_idle_threshold2 depresses the priority of memory pages associated with idle processes more quickly than the normal pageout algorithm. This gives a helping hand to the pageout daemon. Only turn this option on if needed, because the tradeoff is essentially to pre-page memory sooner rather than later, which consumes more swap and disk bandwidth. In a small system this option will have a determinable effect, but in a large system that is already doing moderate paging, this option allows the VM system to stage whole processes into and out of memory easily. <varname>hw.ata.wc</varname> hw.ata.wc Turning off IDE write caching reduces write bandwidth to IDE disks, but may sometimes be necessary due to data consistency issues introduced by hard drive vendors. The problem is that some IDE drives lie about when a write completes. With IDE write caching turned on, IDE hard drives write data to disk out of order and will sometimes delay writing some blocks indefinitely when under heavy disk load.
A crash or power failure may cause serious file system corruption. Check the default on the system by observing the hw.ata.wc &man.sysctl.8; variable. If IDE write caching is turned off, one can set this read-only variable to 1 in /boot/loader.conf in order to enable it at boot time. For more information, refer to &man.ata.4;. <literal>SCSI_DELAY</literal> (<varname>kern.cam.scsi_delay</varname>) kern.cam.scsi_delay kernel options SCSI DELAY The SCSI_DELAY kernel configuration option may be used to reduce system boot times. The defaults are fairly high and can be responsible for 15 seconds of delay in the boot process. Reducing it to 5 seconds usually works with modern drives. The kern.cam.scsi_delay boot time tunable should be used. The tunable and kernel configuration option accept values in terms of milliseconds and not seconds. Soft Updates Soft Updates &man.tunefs.8; To fine-tune a file system, use &man.tunefs.8;. This program has many different options. To toggle Soft Updates on and off, use: &prompt.root; tunefs -n enable /filesystem &prompt.root; tunefs -n disable /filesystem A file system cannot be modified with &man.tunefs.8; while it is mounted. A good time to enable Soft Updates is before any partitions have been mounted, in single-user mode. Soft Updates is recommended for UFS file systems as it drastically improves meta-data performance, mainly file creation and deletion, through the use of a memory cache. There are two downsides to Soft Updates to be aware of. First, Soft Updates guarantee file system consistency in the case of a crash, but could easily be several seconds or even a minute behind updating the physical disk. If the system crashes, unwritten data may be lost. Secondly, Soft Updates delay the freeing of file system blocks. If the root file system is almost full, performing a major update, such as make installworld, can cause the file system to run out of space and the update to fail. More Details About Soft Updates Soft Updates details Meta-data updates are updates to non-content data like inodes or directories. There are two traditional approaches to writing a file system's meta-data back to disk. Historically, the default behavior was to write out meta-data updates synchronously. If a directory changed, the system waited until the change was actually written to disk. The file data buffers (file contents) were passed through the buffer cache and backed up to disk later on asynchronously. The advantage of this implementation is that it operates safely. If there is a failure during an update, meta-data is always in a consistent state. A file is either created completely or not at all. If the data blocks of a file did not find their way out of the buffer cache onto the disk by the time of the crash, &man.fsck.8; recognizes this and repairs the file system by setting the file length to 0. Additionally, the implementation is clear and simple. The disadvantage is that meta-data changes are slow. For example, rm -r touches all the files in a directory sequentially, but each directory change will be written synchronously to the disk. This includes updates to the directory itself, to the inode table, and possibly to indirect blocks allocated by the file. Similar considerations apply for unrolling large hierarchies using tar -x. The second approach is to use asynchronous meta-data updates. This is the default for a UFS file system mounted with mount -o async. 
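As a minimal illustration of that mount option, asynchronous meta-data updates can be requested on the command line or in /etc/fstab. The device name and mount point below are hypothetical and are not taken from this chapter:

&prompt.root; mount -o async /dev/ada0s1d /mnt
/dev/ada0s1d    /mnt    ufs    rw,async    2    2

The remainder of this discussion explains why this mode trades consistency guarantees for speed.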
Since all meta-data updates are also passed through the buffer cache, they will be intermixed with the updates of the file content data. The advantage of this implementation is that there is no need to wait until each meta-data update has been written to disk, so all operations which cause huge amounts of meta-data updates work much faster than in the synchronous case. This implementation is still clear and simple, so there is a low risk for bugs creeping into the code. The disadvantage is that there is no guarantee for a consistent state of the file system. If there is a failure during an operation that updated large amounts of meta-data, like a power failure or someone pressing the reset button, the file system will be left in an unpredictable state. There is no opportunity to examine the state of the file system when the system comes up again as the data blocks of a file could already have been written to the disk while the updates of the inode table or the associated directory were not. It is impossible to implement a &man.fsck.8; which is able to clean up the resulting chaos because the necessary information is not available on the disk. If the file system has been damaged beyond repair, the only choice is to reformat it and restore from backup. The usual solution for this problem is to implement dirty region logging, which is also referred to as journaling. Meta-data updates are still written synchronously, but only into a small region of the disk. Later on, they are moved to their proper location. Because the logging area is a small, contiguous region on the disk, there are no long distances for the disk heads to move, even during heavy operations, so these operations are quicker than synchronous updates. Additionally, the complexity of the implementation is limited, so the risk of bugs being present is low. A disadvantage is that all meta-data is written twice, once into the logging region and once to the proper location, so a performance penalty might result. On the other hand, in case of a crash, all pending meta-data operations can be either quickly rolled back or completed from the logging area after the system comes up again, resulting in a fast file system startup. Kirk McKusick, the developer of Berkeley FFS, solved this problem with Soft Updates. All pending meta-data updates are kept in memory and written out to disk in a sorted sequence (ordered meta-data updates). This has the effect that, in case of heavy meta-data operations, later updates to an item catch the earlier ones which are still in memory and have not already been written to disk. All operations are generally performed in memory before the update is written to disk and the data blocks are sorted according to their position so that they will not be on the disk ahead of their meta-data. If the system crashes, an implicit log rewind causes all operations which were not written to the disk to appear as if they never happened. A consistent file system state is maintained that appears to be the one of 30 to 60 seconds earlier. The algorithm used guarantees that all resources in use are marked as such in their blocks and inodes. After a crash, the only resource allocation error that occurs is that resources are marked as used which are actually free. &man.fsck.8; recognizes this situation and frees the resources that are no longer used. It is safe to ignore the dirty state of the file system after a crash by forcibly mounting it with mount -f.
In order to free resources that may be unused, &man.fsck.8; needs to be run at a later time. This is the idea behind the background &man.fsck.8;: at system startup time, only a snapshot of the file system is recorded and &man.fsck.8; is run afterwards. All file systems can then be mounted dirty, so the system startup proceeds in multi-user mode. Then, background &man.fsck.8; is scheduled for all file systems where this is required, to free resources that may be unused. File systems that do not use Soft Updates still need the usual foreground &man.fsck.8;. The advantage is that meta-data operations are nearly as fast as asynchronous updates and are faster than logging, which has to write the meta-data twice. The disadvantages are the complexity of the code, a higher memory consumption, and some idiosyncrasies. After a crash, the state of the file system appears to be somewhat older. In situations where the standard synchronous approach would have caused some zero-length files to remain after the &man.fsck.8;, these files do not exist at all with Soft Updates because neither the meta-data nor the file contents have been written to disk. Disk space is not released until the updates have been written to disk, which may take place some time after running &man.rm.1;. This may cause problems when installing large amounts of data on a file system that does not have enough free space to hold all the files twice. Tuning Kernel Limits tuning kernel limits File/Process Limits <varname>kern.maxfiles</varname> kern.maxfiles The kern.maxfiles &man.sysctl.8; variable can be raised or lowered based upon system requirements. This variable indicates the maximum number of file descriptors on the system. When the file descriptor table is full, file: table is full will show up repeatedly in the system message buffer, which can be viewed using &man.dmesg.8;. Each open file, socket, or fifo uses one file descriptor. A large-scale production server may easily require many thousands of file descriptors, depending on the kind and number of services running concurrently. In older &os; releases, the default value of kern.maxfiles is derived from the maxusers option in the kernel configuration file. kern.maxfiles grows proportionally to the value of maxusers. When compiling a custom kernel, consider setting this kernel configuration option according to the use of the system. From this number, the kernel is given most of its pre-defined limits. Even though a production machine may not have 256 concurrent users, the resources needed may be similar to a high-scale web server. The read-only &man.sysctl.8; variable kern.maxusers is automatically sized at boot based on the amount of memory available in the system, and may be determined at run-time by inspecting the value of kern.maxusers. Some systems require larger or smaller values of kern.maxusers and values of 64, 128, and 256 are not uncommon. Going above 256 is not recommended unless a huge number of file descriptors is needed. Many of the tunable values set to their defaults by kern.maxusers may be individually overridden at boot-time or run-time in /boot/loader.conf. Refer to &man.loader.conf.5; and /boot/defaults/loader.conf for more details and some hints. In older releases, the system will auto-tune maxusers if it is set to 0. The auto-tuning algorithm sets maxusers equal to the amount of memory in the system, with a minimum of 32, and a maximum of 384. When setting this option, set maxusers to at least 4, especially if the system runs &xorg; or is used to compile software.
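On releases where maxusers is exposed as a loader tunable, it can also be overridden without rebuilding the kernel. The following /boot/loader.conf sketch is illustrative only; the value is an assumption, not a recommendation:

kern.maxusers="128"   # hypothetical value; size according to the expected workload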
The most important table set by maxusers is the maximum number of processes, which is set to 20 + 16 * maxusers. If maxusers is set to 1, there can only be 36 simultaneous processes, including the 18 or so that the system starts up at boot time and the 15 or so used by &xorg;. Even a simple task like reading a manual page will start up nine processes to filter, decompress, and view it. Setting maxusers to 64 allows up to 1044 simultaneous processes, which should be enough for nearly all uses. If, however, the proc table full error is displayed when trying to start another program, or a server is running with a large number of simultaneous users, increase the number and rebuild. maxusers does not limit the number of users which can log into the machine. It instead sets various table sizes to reasonable values considering the maximum number of users on the system and how many processes each user will be running. <varname>kern.ipc.somaxconn</varname> kern.ipc.somaxconn The kern.ipc.somaxconn &man.sysctl.8; variable limits the size of the listen queue for accepting new TCP connections. The default value of 128 is typically too low for robust handling of new connections on a heavily loaded web server. For such environments, it is recommended to increase this value to 1024 or higher. A service such as &man.sendmail.8; or Apache may itself limit the listen queue size, but will often have a directive in its configuration file to adjust the queue size. Large listen queues do a better job of avoiding Denial of Service (DoS) attacks. Network Limits The NMBCLUSTERS kernel configuration option dictates the number of network Mbufs available to the system. On a heavily-trafficked server, a low number of Mbufs will hinder performance. Each cluster represents approximately 2 K of memory, so a value of 1024 represents 2 megabytes of kernel memory reserved for network buffers. A simple calculation can be done to figure out how many are needed. A web server which maxes out at 1000 simultaneous connections, where each connection uses a 16 K receive and a 16 K send buffer, requires approximately 32 MB worth of network buffers to cover the web server. A good rule of thumb is to multiply by 2, so 2 x 32 MB / 2 KB = 64 MB / 2 KB = 32768. Values between 4096 and 32768 are recommended for machines with greater amounts of memory. Never specify an arbitrarily high value for this parameter as it could lead to a boot time crash. To observe network cluster usage, use -m with &man.netstat.1;. The kern.ipc.nmbclusters loader tunable should be used to tune this at boot time. Only older versions of &os; will require the use of the NMBCLUSTERS kernel &man.config.8; option. For busy servers that make extensive use of the &man.sendfile.2; system call, it may be necessary to increase the number of &man.sendfile.2; buffers via the NSFBUFS kernel configuration option or by setting its value in /boot/loader.conf (see &man.loader.8; for details). A common indicator that this parameter needs to be adjusted is when processes are seen in the sfbufa state. The &man.sysctl.8; variable kern.ipc.nsfbufs is read-only. This parameter nominally scales with kern.maxusers; however, it may need to be tuned individually. Even though a socket has been marked as non-blocking, calling &man.sendfile.2; on the non-blocking socket may result in the &man.sendfile.2; call blocking until enough struct sf_buf's are made available.
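As a rough sketch of the boot-time tuning mentioned above, both loader tunables can be placed in /boot/loader.conf. The numbers below are placeholders for illustration, not recommended values:

kern.ipc.nmbclusters="32768"   # network mbuf clusters, per the discussion above
kern.ipc.nsfbufs="6656"        # sendfile(2) buffers; illustrative value only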
<varname>net.inet.ip.portrange.*</varname> net.inet.ip.portrange.* The net.inet.ip.portrange.* &man.sysctl.8; variables control the port number ranges automatically bound to TCP and UDP sockets. There are three ranges: a low range, a default range, and a high range. Most network programs use the default range which is controlled by net.inet.ip.portrange.first and net.inet.ip.portrange.last, which default to 1024 and 5000, respectively. Bound port ranges are used for outgoing connections, and it is possible to run the system out of ports under certain circumstances. This most commonly occurs when running a heavily loaded web proxy. The port range is not an issue when running a server which handles mainly incoming connections, such as a web server, or has a limited number of outgoing connections, such as a mail relay. For situations where there is a shortage of ports, it is recommended to increase net.inet.ip.portrange.last modestly. A value of 10000, 20000, or 30000 may be reasonable. Consider firewall effects when changing the port range. Some firewalls may block large ranges of ports, usually low-numbered ports, and expect systems to use higher ranges of ports for outgoing connections. For this reason, it is not recommended that the value of net.inet.ip.portrange.first be lowered. <literal>TCP</literal> Bandwidth Delay Product TCP Bandwidth Delay Product Limiting net.inet.tcp.inflight.enable TCP bandwidth delay product limiting can be enabled by setting the net.inet.tcp.inflight.enable &man.sysctl.8; variable to 1. This instructs the system to attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput. This feature is useful when serving data over modems, Gigabit Ethernet, high speed WAN links, or any other link with a high bandwidth delay product, especially when also using window scaling or when a large send window has been configured. When enabling this option, also set net.inet.tcp.inflight.debug to 0 to disable debugging. For production use, setting net.inet.tcp.inflight.min to at least 6144 may be beneficial. Setting high minimums may effectively disable bandwidth limiting, depending on the link. The limiting feature reduces the amount of data built up in intermediate route and switch packet queues and reduces the amount of data built up in the local host's interface queue. With fewer queued packets, interactive connections, especially over slow modems, will operate with lower Round Trip Times. This feature only affects server-side data transmission, such as uploading. It has no effect on data reception or downloading. Adjusting net.inet.tcp.inflight.stab is not recommended. This parameter defaults to 20, representing 2 maximal packets added to the bandwidth delay product window calculation. The additional window is required to stabilize the algorithm and improve responsiveness to changing conditions, but it can also result in higher &man.ping.8; times over slow links, though still much lower than without the inflight algorithm. In such cases, try reducing this parameter to 15, 10, or 5 and reducing net.inet.tcp.inflight.min to a value such as 3500 to get the desired effect. Reducing these parameters should be done as a last resort only. Virtual Memory <varname>kern.maxvnodes</varname> A vnode is the internal representation of a file or directory. Increasing the number of vnodes available to the operating system reduces disk I/O.
Normally, this is handled by the operating system and does not need to be changed. In some cases where disk I/O is a bottleneck and the system is running out of vnodes, this setting needs to be increased. The amount of inactive and free RAM will need to be taken into account. To see the current number of vnodes in use: &prompt.root; sysctl vfs.numvnodes vfs.numvnodes: 91349 To see the maximum vnodes: &prompt.root; sysctl kern.maxvnodes kern.maxvnodes: 100000 If the current vnode usage is near the maximum, try increasing kern.maxvnodes by a value of 1000. Keep an eye on the number of vfs.numvnodes. If it climbs up to the maximum again, kern.maxvnodes will need to be increased further. Otherwise, a shift in memory usage as reported by &man.top.1; should be visible and more memory should be active. Adding Swap Space Sometimes a system requires more swap space. This section describes two methods to increase swap space: adding swap to an existing partition or new hard drive, and creating a swap file on an existing partition. For information on how to encrypt swap space, which options exist, and why it should be done, refer to . Swap on a New Hard Drive or Existing Partition Adding a new hard drive for swap gives better performance than using a partition on an existing drive. Setting up partitions and hard drives is explained in while discusses partition layouts and swap partition size considerations. Use swapon to add a swap partition to the system. For example: &prompt.root; swapon /dev/ada1s1b It is possible to use any partition not currently mounted, even if it already contains data. Using swapon on a partition that contains data will overwrite and destroy that data. Make sure that the partition to be added as swap is really the intended partition before running swapon. To automatically add this swap partition on boot, add an entry to /etc/fstab: /dev/ada1s1b none swap sw 0 0 See &man.fstab.5; for an explanation of the entries in /etc/fstab. More information about swapon can be found in &man.swapon.8;. Creating a Swap File These examples create a 64M swap file called /usr/swap0 instead of using a partition. Using swap files requires that the module needed by &man.md.4; has either been built into the kernel or has been loaded before swap is enabled. See for information about building a custom kernel. - Creating a Swap File on &os; 10.<replaceable>X</replaceable> and Later + Creating a Swap File on + &os; 10.<replaceable>X</replaceable> and Later Create the swap file: &prompt.root; dd if=/dev/zero of=/usr/swap0 bs=1m count=64 Set the proper permissions on the new file: &prompt.root; chmod 0600 /usr/swap0 Inform the system about the swap file by adding a line to /etc/fstab: md99 none swap sw,file=/usr/swap0 0 0 The &man.md.4; device md99 is used, leaving lower device numbers available for interactive use. Swap space will be added on system startup. To add swap space immediately, use &man.swapon.8;: &prompt.root; swapon -aq - Creating a Swap File on &os; 9.<replaceable>X</replaceable> and Earlier + Creating a Swap File on + &os; 9.<replaceable>X</replaceable> and Earlier Create the swap file, /usr/swap0: &prompt.root; dd if=/dev/zero of=/usr/swap0 bs=1m count=64 Set the proper permissions on /usr/swap0: &prompt.root; chmod 0600 /usr/swap0 Enable the swap file in /etc/rc.conf: swapfile="/usr/swap0" # Set to name of swap file Swap space will be added on system startup. To enable the swap file immediately, specify a free memory device. Refer to for more information about memory devices. 
&prompt.root; mdconfig -a -t vnode -f /usr/swap0 -u 0 && swapon /dev/md0 Power and Resource Management Hiten Pandya Written by Tom Rhodes It is important to utilize hardware resources in an efficient manner. Power and resource management allows the operating system to monitor system limits and to possibly provide an alert if the system temperature increases unexpectedly. An early specification for providing power management was the Advanced Power Management (APM) facility. APM controls the power usage of a system based on its activity. However, it was difficult and inflexible for operating systems to manage the power usage and thermal properties of a system. The hardware was managed by the BIOS and the user had limited configurability and visibility into the power management settings. The APM BIOS is supplied by the vendor and is specific to the hardware platform. An APM driver in the operating system mediates access to the APM Software Interface, which allows management of power levels. There are four major problems in APM. First, power management is done by the vendor-specific BIOS, separate from the operating system. For example, the user can set idle-time values for a hard drive in the APM BIOS so that, when exceeded, the BIOS spins down the hard drive without the consent of the operating system. Second, the APM logic is embedded in the BIOS, and it operates outside the scope of the operating system. This means that users can only fix problems in the APM BIOS by flashing a new one into the ROM, which is a dangerous procedure with the potential to leave the system in an unrecoverable state if it fails. Third, APM is a vendor-specific technology, meaning that there is a lot of duplication of effort, and bugs found in one vendor's BIOS may not be solved in others. Lastly, the APM BIOS did not have enough room to implement a sophisticated power policy or one that can adapt well to the purpose of the machine. The Plug and Play BIOS (PNPBIOS) was unreliable in many situations. PNPBIOS is 16-bit technology, so the operating system has to use 16-bit emulation in order to interface with PNPBIOS methods. &os; provides an APM driver, as APM should still be used for systems manufactured at or before the year 2000. The driver is documented in &man.apm.4;. ACPI APM The successor to APM is the Advanced Configuration and Power Interface (ACPI). ACPI is a standard written by an alliance of vendors to provide an interface for hardware resources and power management. It is a key element in Operating System-directed configuration and Power Management as it provides more control and flexibility to the operating system. This chapter demonstrates how to configure ACPI on &os;. It then offers some tips on how to debug ACPI and how to submit a problem report containing debugging information so that developers can diagnose and fix ACPI issues. Configuring <acronym>ACPI</acronym> In &os;, the &man.acpi.4; driver is loaded by default at system boot and should not be compiled into the kernel. This driver cannot be unloaded after boot because the system bus uses it for various hardware interactions. However, if the system is experiencing problems, ACPI can be disabled altogether by rebooting after setting hint.acpi.0.disabled="1" in /boot/loader.conf or by setting this variable at the loader prompt, as described in . ACPI and APM cannot coexist and should be used separately. The last one to load will terminate if the driver notices the other is running.
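For reference, the disable procedure described above amounts to a single /boot/loader.conf line; remove it and reboot to return to the default of loading &man.acpi.4;:

hint.acpi.0.disabled="1"   # boot with ACPI disabled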
ACPI can be used to put the system into a sleep mode with acpiconf, the -s flag, and a number from 1 to 5. Most users only need 1 (quick suspend to RAM) or 3 (suspend to RAM). Option 5 performs a soft-off which is the same as running halt -p. Other options are available using sysctl. Refer to &man.acpi.4; and &man.acpiconf.8; for more information. Common Problems ACPI ACPI is present in all modern computers that conform to the ia32 (x86), ia64 (Itanium), and amd64 (AMD) architectures. The full standard has many features including CPU performance management, power planes control, thermal zones, various battery systems, embedded controllers, and bus enumeration. Most systems implement less than the full standard. For instance, a desktop system usually only implements bus enumeration while a laptop might have cooling and battery management support as well. Laptops also have suspend and resume, with their own associated complexity. An ACPI-compliant system has various components. The BIOS and chipset vendors provide various fixed tables, such as FADT, in memory that specify things like the APIC map (used for SMP), config registers, and simple configuration values. Additionally, a bytecode table, the Differentiated System Description Table (DSDT), specifies a tree-like name space of devices and methods. The ACPI driver must parse the fixed tables, implement an interpreter for the bytecode, and modify device drivers and the kernel to accept information from the ACPI subsystem. For &os;, &intel; has provided an interpreter (ACPI-CA) that is shared with &linux; and NetBSD. The path to the ACPI-CA source code is src/sys/contrib/dev/acpica. The glue code that allows ACPI-CA to work on &os; is in src/sys/dev/acpica/Osd. Finally, drivers that implement various ACPI devices are found in src/sys/dev/acpica. ACPI problems For ACPI to work correctly, all the parts have to work correctly. Here are some common problems, in order of frequency of appearance, and some possible workarounds or fixes. If a fix does not resolve the issue, refer to for instructions on how to submit a bug report. Mouse Issues In some cases, resuming from a suspend operation will cause the mouse to fail. A known workaround is to add hint.psm.0.flags="0x3000" to /boot/loader.conf. Suspend/Resume ACPI has three suspend to RAM (STR) states, S1-S3, and one suspend to disk state (STD), called S4. STD can be implemented in two separate ways. The S4BIOS is a BIOS-assisted suspend to disk and S4OS is implemented entirely by the operating system. The normal state the system is in when plugged in but not powered up is soft off (S5). Use sysctl hw.acpi to check for the suspend-related items. These example results are from a Thinkpad: hw.acpi.supported_sleep_state: S3 S4 S5 hw.acpi.s4bios: 0 Use acpiconf -s to test S3, S4, and S5. A hw.acpi.s4bios value of one (1) indicates S4BIOS support instead of S4 operating system support. When testing suspend/resume, start with S1, if supported. This state is most likely to work since it does not require much driver support. No one has implemented S2, which is similar to S1. Next, try S3. This is the deepest STR state and requires a lot of driver support to properly reinitialize the hardware. A common problem with suspend/resume is that many device drivers do not save, restore, or reinitialize their firmware, registers, or device memory properly.
As a first attempt at debugging the problem, try: &prompt.root; sysctl debug.bootverbose=1 &prompt.root; sysctl debug.acpi.suspend_bounce=1 &prompt.root; acpiconf -s 3 This test emulates the suspend/resume cycle of all device drivers without actually going into S3 state. In some cases, problems such as losing firmware state, device watchdog timeouts, and retrying forever can be captured with this method. Note that the system will not really enter S3 state, which means devices may not lose power, and many will work fine even if suspend/resume methods are totally missing, unlike real S3 state. Harder cases require additional hardware, such as a serial port and cable for debugging through a serial console, a Firewire port and cable for using &man.dcons.4;, and kernel debugging skills. To help isolate the problem, unload as many drivers as possible. If it works, narrow down which driver is the problem by loading drivers until it fails again. Typically, binary drivers like nvidia.ko, display drivers, and USB will have the most problems while Ethernet interfaces usually work fine. If drivers can be properly loaded and unloaded, automate this by putting the appropriate commands in /etc/rc.suspend and /etc/rc.resume. Try setting hw.acpi.reset_video to 0 if the display is messed up after resume. Trying longer or shorter values for the sleep delay tunables described in &man.acpi.4; may also help. Try loading a recent &linux; distribution to see if suspend/resume works on the same hardware. If it works on &linux;, it is likely a &os; driver problem. Narrowing down which driver causes the problem will assist developers in fixing the problem. Since the ACPI maintainers rarely maintain other drivers, such as sound or ATA, any driver problems should also be posted to the &a.current.name; list and mailed to the driver maintainer. Advanced users can include debugging &man.printf.3;s in a problematic driver to track down where in its resume function it hangs. Finally, try disabling ACPI and enabling APM instead. If suspend/resume works with APM, stick with APM, especially on older hardware (pre-2000). It took vendors a while to get ACPI support correct and older hardware is more likely to have BIOS problems with ACPI. System Hangs Most system hangs are a result of lost interrupts or an interrupt storm. Chipset problems may depend on how the BIOS configures interrupts before boot, on the correctness of the APIC (MADT) table, and on the routing of the System Control Interrupt (SCI). interrupt storms Interrupt storms can be distinguished from lost interrupts by checking the output of vmstat -i and looking at the line that has acpi0. If the counter is increasing at more than a couple per second, there is an interrupt storm. If the system appears hung, try breaking to DDB (CTRL ALT ESC on the console) and type show interrupts. APIC disabling When dealing with interrupt problems, try disabling APIC support with hint.apic.0.disabled="1" in /boot/loader.conf. Panics Panics are relatively rare for ACPI and are the top priority to be fixed. The first step is to isolate the steps to reproduce the panic, if possible, and get a backtrace. Follow the advice for enabling options DDB and setting up a serial console in or setting up a dump partition. To get a backtrace in DDB, use tr. When handwriting the backtrace, get at least the last five and the top five lines in the trace. Then, try to isolate the problem by booting with ACPI disabled. If that works, isolate the ACPI subsystem by using various values of debug.acpi.disabled. See &man.acpi.4; for some examples.
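As a sketch of the test boot described above, ACPI can also be disabled for a single boot from the loader prompt, so no files need to be edited while isolating a panic; the way to reach the prompt varies by release:

OK set hint.acpi.0.disabled="1"
OK boot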
System Powers Up After Suspend or Shutdown First, try setting hw.acpi.disable_on_poweroff="0" in /boot/loader.conf. This keeps ACPI from disabling various events during the shutdown process. Some systems need this value set to 1 (the default) for the same reason. This usually fixes the problem of a system powering up spontaneously after a suspend or poweroff. BIOS Contains Buggy Bytecode ACPI ASL Some BIOS vendors provide incorrect or buggy bytecode. This is usually manifested by kernel console messages like this: ACPI-1287: *** Error: Method execution failed [\\_SB_.PCI0.LPC0.FIGD._STA] \\ (Node 0xc3f6d160), AE_NOT_FOUND Often, these problems may be resolved by updating the BIOS to the latest revision. Most console messages are harmless, but if there are other problems, like the battery status not working, these messages are a good place to start looking for problems. Overriding the Default <acronym>AML</acronym> The BIOS bytecode, known as ACPI Machine Language (AML), is compiled from a source language called ACPI Source Language (ASL). The AML is found in the table known as the Differentiated System Description Table (DSDT). ACPI ASL The goal of &os; is for everyone to have working ACPI without any user intervention. Workarounds are still being developed for common mistakes made by BIOS vendors. The µsoft; interpreter (acpi.sys and acpiec.sys) does not strictly check for adherence to the standard, and thus many BIOS vendors who only test ACPI under &windows; never fix their ASL. &os; developers continue to identify and document which non-standard behavior is allowed by µsoft;'s interpreter and replicate it so that &os; can work without forcing users to fix the ASL. To help identify buggy behavior and possibly fix it manually, a copy can be made of the system's ASL. To copy the system's ASL to a specified file name, use acpidump with -t, to show the contents of the fixed tables, and -d, to disassemble the AML: &prompt.root; acpidump -td > my.asl Some AML versions assume the user is running &windows;. To override this, set hw.acpi.osname="Windows 2009" in /boot/loader.conf, using the most recent &windows; version listed in the ASL. Other workarounds may require my.asl to be customized. If this file is edited, compile the new ASL using the following command. Warnings can usually be ignored, but errors are bugs that will usually prevent ACPI from working correctly. &prompt.root; iasl -f my.asl Including -f forces creation of the AML, even if there are errors during compilation. Some errors, such as missing return statements, are automatically worked around by the &os; interpreter. The default output filename for iasl is DSDT.aml. Load this file instead of the BIOS's buggy copy, which is still present in flash memory, by editing /boot/loader.conf as follows: acpi_dsdt_load="YES" acpi_dsdt_name="/boot/DSDT.aml" Be sure to copy DSDT.aml to /boot, then reboot the system. If this fixes the problem, send a &man.diff.1; of the old and new ASL to &a.acpi.name; so that developers can work around the buggy behavior in acpica. Getting and Submitting Debugging Info Nate Lawson Written by Peter Schultz With contributions from Tom Rhodes ACPI problems ACPI debugging The ACPI driver has a flexible debugging facility. A set of subsystems and the level of verbosity can be specified. The subsystems to debug are specified as layers and are broken down into components (ACPI_ALL_COMPONENTS) and ACPI hardware support (ACPI_ALL_DRIVERS).
The verbosity of debugging output is specified as the level and ranges from just report errors (ACPI_LV_ERROR) to everything (ACPI_LV_VERBOSE). The level is a bitmask so multiple options can be set at once, separated by spaces. In practice, a serial console should be used to log the output so it is not lost as the console message buffer flushes. A full list of the individual layers and levels is found in &man.acpi.4;. Debugging output is not enabled by default. To enable it, add options ACPI_DEBUG to the custom kernel configuration file if ACPI is compiled into the kernel. Add ACPI_DEBUG=1 to /etc/make.conf to enable it globally. If a module is used instead of a custom kernel, recompile just the acpi.ko module as follows: &prompt.root; cd /sys/modules/acpi/acpi && make clean && make ACPI_DEBUG=1 Copy the compiled acpi.ko to /boot/kernel and add the desired level and layer to /boot/loader.conf. The entries in this example enable debug messages for all ACPI components and hardware drivers and output error messages at the least verbose level: debug.acpi.layer="ACPI_ALL_COMPONENTS ACPI_ALL_DRIVERS" debug.acpi.level="ACPI_LV_ERROR" If the required information is triggered by a specific event, such as a suspend and then resume, do not modify /boot/loader.conf. Instead, use sysctl to specify the layer and level after booting and preparing the system for the specific event. The variables which can be set using sysctl are named the same as the tunables in /boot/loader.conf. ACPI problems Once the debugging information is gathered, it can be sent to &a.acpi.name; so that it can be used by the &os; ACPI maintainers to identify the root cause of the problem and to develop a solution. Before submitting debugging information to this mailing list, ensure the latest BIOS version is installed and, if available, the embedded controller firmware version. When submitting a problem report, include the following information: Description of the buggy behavior, including system type, model, and anything that causes the bug to appear. Note as accurately as possible when the bug began occurring if it is new. The output of dmesg after running boot -v, including any error messages generated by the bug. The dmesg output from boot -v with ACPI disabled, if disabling ACPI helps to fix the problem. Output from sysctl hw.acpi. This lists which features the system offers. The URL to a pasted version of the system's ASL. Do not send the ASL directly to the list as it can be very large. Generate a copy of the ASL by running this command: &prompt.root; acpidump -dt > name-system.asl Substitute the login name for name and manufacturer/model for system. For example, use njl-FooCo6000.asl. Most &os; developers watch the &a.current;, but one should submit problems to &a.acpi.name; to be sure it is seen. Be patient when waiting for a response. If the bug is not immediately apparent, submit a PR using &man.send-pr.1;. When entering a PR, include the same information as requested above. This helps developers to track the problem and resolve it. Do not send a PR without emailing &a.acpi.name; first as it is likely that the problem has been reported before. 
References More information about ACPI may be found in the following locations: The &os; ACPI Mailing List Archives (http://lists.freebsd.org/pipermail/freebsd-acpi/) The ACPI 2.0 Specification (http://acpi.info/spec.htm) &man.acpi.4;, &man.acpi.thermal.4;, &man.acpidump.8;, &man.iasl.8;, and &man.acpidb.8; Index: head/en_US.ISO8859-1/books/handbook/eresources/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/eresources/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/eresources/chapter.xml (revision 46049) @@ -1,2396 +1,2398 @@ Resources on the Internet The rapid pace of &os; progress makes print media impractical as a means of following the latest developments. Electronic resources are the best, if not often the only, way to stay informed of the latest advances. Since &os; is a volunteer effort, the user community itself also generally serves as a technical support department of sorts, with electronic mail, web forums, and USENET news being the most effective way of reaching that community. The most important points of contact with the &os; user community are outlined below. Please send other resources not mentioned here to the &a.doc; so that they may also be included. Mailing Lists The mailing lists are the most direct way of addressing questions or opening a technical discussion to a concentrated &os; audience. There are a wide variety of lists on a number of different &os; topics. Sending questions to the most appropriate mailing list will invariably assure a faster and more accurate response. The charters for the various lists are given at the bottom of this document. Please read the charter before joining or sending mail to any list. Most list subscribers receive many hundreds of &os; related messages every day, and the charters and rules for use are meant to keep the signal-to-noise ratio of the lists high. To do less would see the mailing lists ultimately fail as an effective communications medium for the Project. To test the ability to send email to &os; lists, send a test message to &a.test.name;. Please do not send test messages to any other list. When in doubt about what list to post a question to, see How to get best results from the FreeBSD-questions mailing list. Before posting to any list, please learn about how to best use the mailing lists, such as how to help avoid frequently-repeated discussions, by reading the Mailing List Frequently Asked Questions (FAQ) document. Archives are kept for all of the mailing lists and can be searched using the &os; World Wide Web server. The keyword searchable archive offers an excellent way of finding answers to frequently asked questions and should be consulted before posting a question. Note that this also means that messages sent to &os; mailing lists are archived in perpetuity. When protecting privacy is a concern, consider using a disposable secondary email address and posting only public information. 
List Summary General lists: The following are general lists which anyone is free (and encouraged) to join: List Purpose &a.advocacy.name; &os; Evangelism &a.announce.name; Important events and Project milestones (moderated) &a.arch.name; Architecture and design discussions &a.bugbusters.name; Discussions pertaining to the maintenance of the &os; problem report database and related tools &a.bugs.name; Bug reports &a.chat.name; Non-technical items related to the &os; community &a.chromium.name; &os;-specific Chromium issues &a.current.name; Discussion concerning the use of &os.current; &a.isp.name; Issues for Internet Service Providers using &os; &a.jobs.name; &os; employment and consulting opportunities &a.questions.name; User questions and technical support &a.security-notifications.name; Security notifications (moderated) &a.stable.name; Discussion concerning the use of &os.stable; &a.test.name; Where to send test messages instead of to one of the actual lists Technical lists: The following lists are for technical discussion. Read the charter for each list carefully before joining or sending mail to one as there are firm guidelines for their use and content. List Purpose &a.acpi.name; ACPI and power management development &a.afs.name; Porting AFS to &os; &a.aic7xxx.name; Developing drivers for the &adaptec; AIC 7xxx &a.amd64.name; Porting &os; to AMD64 systems (moderated) &a.apache.name; Discussion about Apache related ports &a.arm.name; Porting &os; to &arm; processors &a.atm.name; Using ATM networking with &os; &a.bluetooth.name; Using &bluetooth; technology in &os; &a.cluster.name; Using &os; in a clustered environment &a.database.name; Discussing database use and development under &os; &a.desktop.name; Using and improving &os; on the desktop &a.doc.name; Creating &os; related documents &a.drivers.name; Writing device drivers for &os; &a.dtrace.name; Using and working on DTrace in &os; &a.eclipse.name; &os; users of Eclipse IDE, tools, rich client applications and ports. &a.embedded.name; Using &os; in embedded applications &a.eol.name; Peer support of &os;-related software that is no longer supported by the &os; Project. 
&a.emulation.name; Emulation of other systems such as Linux/&ms-dos;/&windows; &a.enlightenment.name; - Porting Enlightenment and - Enlightenment applications + Porting Enlightenment + and Enlightenment + applications &a.firewire.name; &os; &firewire; (iLink, IEEE 1394) technical discussion &a.fortran.name; Fortran on &os; &a.fs.name; File systems &a.games.name; Support for Games on &os; &a.gecko.name; Gecko Rendering Engine issues &a.geom.name; GEOM-specific discussions and implementations &a.git.name; Discussion of git use in the &os; project &a.gnome.name; Porting GNOME and GNOME applications &a.hackers.name; General technical discussion &a.hardware.name; General discussion of hardware for running &os; &a.i18n.name; &os; Internationalization &a.ia32.name; &os; on the IA-32 (&intel; x86) platform &a.ia64.name; Porting &os; to &intel;'s upcoming IA64 systems &a.infiniband.name; Infiniband on &os; &a.ipfw.name; Technical discussion concerning the redesign of the IP firewall code &a.isdn.name; ISDN developers &a.jail.name; Discussion about the &man.jail.8; facility &a.java.name; &java; developers and people porting &jdk;s to &os; &a.lfs.name; Porting LFS to &os; &a.mips.name; Porting &os; to &mips; &a.mobile.name; Discussions about mobile computing &a.mono.name; Mono and C# applications on &os; &a.multimedia.name; Multimedia applications &a.newbus.name; Technical discussions about bus architecture &a.net.name; Networking discussion and TCP/IP source code &a.numerics.name; Discussions of high quality implementation of libm functions &a.office.name; Office applications on &os; &a.performance.name; Performance tuning questions for high performance/load installations &a.perl.name; Maintenance of a number of Perl-related ports &a.pf.name; Discussion and questions about the packet filter firewall system &a.pkg.name; Binary package management and package tools discussion &a.pkg-fallout.name; Fallout logs from package building &a.platforms.name; Concerning ports to non &intel; architecture platforms &a.ports.name; Discussion of the Ports Collection &a.ports-announce.name; Important news and instructions about the Ports Collection (moderated) &a.ports-bugs.name; Discussion of the ports bugs/PRs &a.ppc.name; Porting &os; to the &powerpc; &a.proliant.name; Technical discussion of &os; on HP ProLiant server platforms &a.python.name; &os;-specific Python issues &a.rc.name; Discussion related to the rc.d system and its development &a.realtime.name; Development of realtime extensions to &os; &a.ruby.name; &os;-specific Ruby discussions &a.scsi.name; The SCSI subsystem &a.security.name; Security issues affecting &os; &a.small.name; Using &os; in embedded applications (obsolete; use &a.embedded.name; instead) &a.snapshots.name; &os; Development Snapshot Announcements &a.sparc.name; Porting &os; to &sparc; based systems &a.standards.name; &os;'s conformance to the C99 and the &posix; standards &a.sysinstall.name; &man.sysinstall.8; development &a.tcltk.name; &os;-specific Tcl/Tk discussions &a.testing.name; Testing on &os; &a.tex.name; Porting TeX and its applications to &os; &a.threads.name; Threading in &os; &a.tilera.name; Porting &os; to the Tilera family of CPUs &a.tokenring.name; Support Token Ring in &os; &a.toolchain.name; Maintenance of &os;'s integrated toolchain &a.usb.name; Discussing &os; support for USB &a.virtualization.name; Discussion of various virtualization techniques supported by &os; &a.vuxml.name; Discussion on VuXML infrastructure &a.x11.name; Maintenance and support of X11 on &os; 
&a.xen.name; Discussion of the &os; port to &xen; — implementation and usage &a.xfce.name; XFCE for &os; — porting and maintaining &a.zope.name; Zope for &os; — porting and maintaining Limited lists: The following lists are for more specialized (and demanding) audiences and are probably not of interest to the general public. It is also a good idea to establish a presence in the technical lists before joining one of these limited lists in order to understand the communications etiquette involved. List Purpose &a.hubs.name; People running mirror sites (infrastructural support) &a.usergroups.name; User group coordination &a.wip-status.name; &os; Work-In-Progress Status &a.wireless.name; Discussions of 802.11 stack, tools, device driver development Digest lists: All of the above lists are available in a digest format. Once subscribed to a list, the digest options can be changed in the account options section. SVN lists: The following lists are for people interested in seeing the log messages for changes to various areas of the source tree. They are Read-Only lists and should not have mail sent to them. List Source area Area Description (source for) &a.svn-doc-all.name; /usr/doc All changes to the doc Subversion repository (except for user, projects and translations) &a.svn-doc-head.name; /usr/doc All changes to the head branch of the doc Subversion repository &a.svn-doc-projects.name; /usr/doc/projects All changes to the projects area of the doc Subversion repository &a.svn-doc-svnadmin.name; /usr/doc All changes to the administrative scripts, hooks, and other configuration data of the doc Subversion repository &a.svn-ports-all.name; /usr/ports All changes to the ports Subversion repository &a.svn-ports-head.name; /usr/ports All changes to the head branch of the ports Subversion repository &a.svn-ports-svnadmin.name; /usr/ports All changes to the administrative scripts, hooks, and other configuration data of the ports Subversion repository &a.svn-src-all.name; /usr/src All changes to the src Subversion repository (except for user and projects) &a.svn-src-head.name; /usr/src All changes to the head branch of the src Subversion repository (the &os;-CURRENT branch) &a.svn-src-projects.name; /usr/projects All changes to the projects area of the src Subversion repository &a.svn-src-release.name; /usr/src All changes to the releases area of the src Subversion repository &a.svn-src-releng.name; /usr/src All changes to the releng branches of the src Subversion repository (the security / release engineering branches) &a.svn-src-stable.name; /usr/src All changes to the all stable branches of the src Subversion repository &a.svn-src-stable-6.name; /usr/src All changes to the stable/6 branch of the src Subversion repository &a.svn-src-stable-7.name; /usr/src All changes to the stable/7 branch of the src Subversion repository &a.svn-src-stable-8.name; /usr/src All changes to the stable/8 branch of the src Subversion repository &a.svn-src-stable-9.name; /usr/src All changes to the stable/9 branch of the src Subversion repository &a.svn-src-stable-10.name; /usr/src All changes to the stable/10 branch of the src Subversion repository &a.svn-src-stable-other.name; /usr/src All changes to the older stable branches of the src Subversion repository &a.svn-src-svnadmin.name; /usr/src All changes to the administrative scripts, hooks, and other configuration data of the src Subversion repository &a.svn-src-user.name; /usr/src All changes to the experimental user area of the src Subversion repository 
&a.svn-src-vendor.name; /usr/src All changes to the vendor work area of the src Subversion repository How to Subscribe To subscribe to a list, click the list name at &a.mailman.lists.link;. The page that is displayed should contain all of the necessary subscription instructions for that list. To actually post to a given list, send mail to listname@FreeBSD.org. It will then be redistributed to mailing list members world-wide. To unsubscribe from a list, click on the URL found at the bottom of every email received from the list. It is also possible to send an email to listname-unsubscribe@FreeBSD.org to unsubscribe. It is important to keep discussion in the technical mailing lists on a technical track. To only receive important announcements, instead join the &a.announce;, which is intended for infrequent traffic. List Charters All &os; mailing lists have certain basic rules which must be adhered to by anyone using them. Failure to comply with these guidelines will result in two (2) written warnings from the &os; Postmaster postmaster@FreeBSD.org, after which, on a third offense, the poster will be removed from all &os; mailing lists and filtered from further posting to them. We regret that such rules and measures are necessary at all, but today's Internet is a pretty harsh environment, it would seem, and many fail to appreciate just how fragile some of its mechanisms are. Rules of the road: The topic of any posting should adhere to the basic charter of the list it is posted to. If the list is about technical issues, the posting should contain technical discussion. Ongoing irrelevant chatter or flaming only detracts from the value of the mailing list for everyone on it and will not be tolerated. For free-form discussion on no particular topic, the &a.chat; is freely available and should be used instead. No posting should be made to more than 2 mailing lists, and only to 2 when a clear and obvious need to post to both lists exists. For most lists, there is already a great deal of subscriber overlap and except for the most esoteric mixes (say -stable & -scsi), there really is no reason to post to more than one list at a time. If a message is received with multiple mailing lists on the Cc line, trim the Cc line before replying. The person who replies is still responsible for cross-posting, no matter who the originator might have been. Personal attacks and profanity (in the context of an argument) are not allowed, and that includes users and developers alike. Gross breaches of netiquette, like excerpting or reposting private mail when permission to do so was not and would not be forthcoming, are frowned upon but not specifically enforced. However, there are also very few cases where such content would fit within the charter of a list and it would therefore probably rate a warning (or ban) on that basis alone. Advertising of non-&os; related products or services is strictly prohibited and will result in an immediate ban if it is clear that the offender is advertising by spam. Individual list charters: &a.acpi.name; ACPI and power management development &a.afs.name; Andrew File System This list is for discussion on porting and using AFS from CMU/Transarc &a.announce.name; Important events / milestones This is the mailing list for people interested only in occasional announcements of significant &os; events. This includes announcements about snapshots and other releases. It contains announcements of new &os; capabilities. It may contain calls for volunteers etc.
This is a low volume, strictly moderated mailing list. &a.arch.name; Architecture and design discussions This list is for discussion of the &os; architecture. Messages will mostly be kept strictly technical in nature. Examples of suitable topics are: How to re-vamp the build system to have several customized builds running at the same time. What needs to be fixed with VFS to make Heidemann layers work. How do we change the device driver interface to be able to use the same drivers cleanly on many buses and architectures. How to write a network driver. &a.bluetooth.name; &bluetooth; in &os; This is the forum where &os;'s &bluetooth; users congregate. Design issues, implementation details, patches, bug reports, status reports, feature requests, and all matters related to &bluetooth; are fair game. &a.bugbusters.name; Coordination of the Problem Report handling effort The purpose of this list is to serve as a coordination and discussion forum for the Bugmeister, his Bugbusters, and any other parties who have a genuine interest in the PR database. This list is not for discussions about specific bugs, patches or PRs. &a.bugs.name; Bug reports This is the mailing list for reporting bugs in &os;. Whenever possible, bugs should be submitted using the &man.send-pr.1; command or the WEB interface to it. &a.chat.name; Non technical items related to the &os; community This list contains the overflow from the other lists about non-technical, social information. It includes discussion about whether Jordan looks like a toon ferret or not, whether or not to type in capitals, who is drinking too much coffee, where the best beer is brewed, who is brewing beer in their basement, and so on. Occasional announcements of important events (such as upcoming parties, weddings, births, new jobs, etc) can be made to the technical lists, but the follow ups should be directed to this -chat list. &a.chromium.name; &os;-specific Chromium issues This is a list for the discussion of Chromium support for &os;. This is a technical list to discuss development and installation of Chromium. &a.core.name; &os; core team This is an internal mailing list for use by the core members. Messages can be sent to it when a serious &os;-related matter requires arbitration or high-level scrutiny. &a.current.name; Discussions about the use of &os.current; This is the mailing list for users of &os.current;. It includes warnings about new features coming out in -CURRENT that will affect the users, and instructions on steps that must be taken to remain -CURRENT. Anyone running CURRENT must subscribe to this list. This is a technical mailing list for which strictly technical content is expected. &a.desktop.name; Using and improving &os; on the desktop This is a forum for discussion of &os; on the desktop. It is primarily a place for desktop porters and users to discuss issues and improve &os;'s desktop support. &a.doc.name; Documentation Project This mailing list is for the discussion of issues and projects related to the creation of documentation for &os;. The members of this mailing list are collectively referred to as The &os; Documentation Project. It is an open list; feel free to join and contribute! &a.drivers.name; Writing device drivers for &os; This is a forum for technical discussions related to device drivers on &os;. It is primarily a place for device driver writers to ask questions about how to write device drivers using the APIs in the &os; kernel.
&a.dtrace.name; Using and working on DTrace in &os; DTrace is an integrated component of &os; that provides a framework for understanding the kernel as well as user space programs at run time. The mailing list is an archived discussion for developers of the code as well as those using it. &a.eclipse.name; &os; users of Eclipse IDE, tools, rich client applications and ports. The intention of this list is to provide mutual support for everything to do with choosing, installing, using, developing and maintaining the Eclipse IDE, tools, rich client applications on the &os; platform and assisting with the porting of Eclipse IDE and plugins to the &os; environment. The intention is also to facilitate exchange of information between the Eclipse community and the &os; community to the mutual benefit of both. Although this list is focused primarily on the needs of Eclipse users it will also provide a forum for those who would like to develop &os; specific applications using the Eclipse framework. &a.embedded.name; Using &os; in embedded applications This list discusses topics related to using &os; in embedded systems. This is a technical mailing list for which strictly technical content is expected. For the purpose of this list, embedded systems are those computing devices which are not desktops and which usually serve a single purpose as opposed to being general computing environments. Examples include, but are not limited to, all kinds of phone handsets, network equipment such as routers, switches and PBXs, remote measuring equipment, PDAs, Point Of Sale systems, and so on. &a.emulation.name; Emulation of other systems such as Linux/&ms-dos;/&windows; This is a forum for technical discussions related to running programs written for other operating systems on &os;. &a.enlightenment.name; Enlightenment Discussions concerning the Enlightenment Desktop Environment for &os; systems. This is a technical mailing list for which strictly technical content is expected. &a.eol.name; Peer support of &os;-related software that is no longer supported by the &os; Project. This list is for those interested in providing or making use of peer support of &os;-related software for which the &os; Project no longer provides official support in the form of security advisories and patches. &a.firewire.name; &firewire; (iLink, IEEE 1394) This is a mailing list for discussion of the design and implementation of a &firewire; (aka IEEE 1394 aka iLink) subsystem for &os;. Relevant topics specifically include the standards, bus devices and their protocols, adapter boards/cards/chips sets, and the architecture and implementation of code for their proper support. &a.fortran.name; Fortran on &os; This is the mailing list for discussion of Fortran related ports on &os;: compilers, libraries, scientific and engineering applications from laptops to HPC clusters. &a.fs.name; File systems Discussions concerning &os; filesystems. This is a technical mailing list for which strictly technical content is expected. &a.games.name; Games on &os; This is a technical list for discussions related to bringing games to &os;. It is for individuals actively working on porting games to &os;, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome.
&a.gecko.name; Gecko Rendering Engine This is a forum about Gecko applications using &os;. Discussion centers around Gecko Ports applications, their installation, their development and their support within &os;. &a.geom.name; GEOM Discussions specific to GEOM and related implementations. This is a technical mailing list for which strictly technical content is expected. &a.git.name; Use of git in the &os; project Discussions of how to use git in &os; infrastructure including the github mirror and other uses of git for project collaboration. Discussion area for people using git against the &os; github mirror. People wanting to get started with the mirror or git in general on &os; can ask here. &a.gnome.name; GNOME Discussions concerning The GNOME Desktop Environment for &os; systems. This is a technical mailing list for which strictly technical content is expected. &a.infiniband.name; Infiniband on &os; Technical mailing list discussing Infiniband, OFED, and OpenSM on &os;. &a.ipfw.name; IP Firewall This is the forum for technical discussions concerning the redesign of the IP firewall code in &os;. This is a technical mailing list for which strictly technical content is expected. &a.ia64.name; Porting &os; to IA64 This is a technical mailing list for individuals actively working on porting &os; to the IA-64 platform from &intel;, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. &a.isdn.name; ISDN Communications This is the mailing list for people discussing the development of ISDN support for &os;. &a.java.name; &java; Development This is the mailing list for people discussing the development of significant &java; applications for &os; and the porting and maintenance of &jdk;s. &a.jobs.name; Jobs offered and sought This is a forum for posting employment notices specifically related to &os; and resumes from those seeking &os;-related employment. This is not a mailing list for general employment issues since adequate forums for that already exist elsewhere. Note that this list, like other FreeBSD.org mailing lists, is distributed worldwide. Be clear about the geographic location and the extent to which telecommuting or assistance with relocation is available. Email should use open formats only — preferably plain text, but basic Portable Document Format (PDF), HTML, and a few others are acceptable to many readers. Closed formats such as µsoft; Word (.doc) will be rejected by the mailing list server. &a.kde.name; KDE Discussions concerning KDE on &os; systems. This is a technical mailing list for which strictly technical content is expected. &a.hackers.name; Technical discussions This is a forum for technical discussions related to &os;. This is the primary technical mailing list. It is for individuals actively working on &os;, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. This is a technical mailing list for which strictly technical content is expected. &a.hardware.name; General discussion of &os; hardware General discussion about the types of hardware that &os; runs on, various problems and suggestions concerning what to buy or avoid. &a.hubs.name; Mirror sites Announcements and discussion for people who run &os; mirror sites. &a.isp.name; Issues for Internet Service Providers This mailing list is for discussing topics relevant to Internet Service Providers (ISPs) using &os;. 
This is a technical mailing list for which strictly technical content is expected. &a.mono.name; Mono and C# applications on &os; This is a list for discussions related to the Mono development framework on &os;. This is a technical mailing list. It is for individuals actively working on porting Mono or C# applications to &os;, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. &a.office.name; Office applications on &os; Discussion centers around office applications, their installation, their development and their support within &os;. &a.ops-announce.name; Project Infrastructure Announcements This is the mailing list for people interested in changes and issues related to the FreeBSD.org Project infrastructure. This moderated list is strictly for announcements: no replies, requests, discussions, or opinions. &a.performance.name; Discussions about tuning or speeding up &os; This mailing list exists to provide a place for hackers, administrators, and/or concerned parties to discuss performance related topics pertaining to &os;. Acceptable topics includes talking about &os; installations that are either under high load, are experiencing performance problems, or are pushing the limits of &os;. Concerned parties that are willing to work toward improving the performance of &os; are highly encouraged to subscribe to this list. This is a highly technical list ideally suited for experienced &os; users, hackers, or administrators interested in keeping &os; fast, robust, and scalable. This list is not a question-and-answer list that replaces reading through documentation, but it is a place to make contributions or inquire about unanswered performance related topics. &a.pf.name; Discussion and questions about the packet filter firewall system Discussion concerning the packet filter (pf) firewall system in terms of &os;. Technical discussion and user questions are both welcome. This list is also a place to discuss the ALTQ QoS framework. &a.pkg.name; Binary package management and package tools discussion Discussion of all aspects of managing &os; systems by using binary packages to install software, including binary package toolkits and formats, their development and support within &os;, package repository management, and third party packages. Note that discussion of ports which fail to generate packages correctly should generally be considered as ports problems, and so inappropriate for this list. &a.pkg-fallout.name; Fallout logs from package building All packages building failures logs from the package building clusters &a.platforms.name; Porting to Non &intel; platforms Cross-platform &os; issues, general discussion and proposals for non &intel; &os; ports. This is a technical mailing list for which strictly technical content is expected. &a.ports.name; Discussion of ports Discussions concerning &os;'s ports collection (/usr/ports), ports infrastructure, and general ports coordination efforts. This is a technical mailing list for which strictly technical content is expected. &a.ports-announce.name; Important news and instructions about the &os; Ports Collection Important news for developers, porters, and users of the Ports Collection (/usr/ports), including architecture/infrastructure changes, new capabilities, critical upgrade instructions, and release engineering information. This is a low-volume mailing list, intended for announcements. 
&a.ports-bugs.name; Discussion of ports bugs Discussions concerning problem reports for &os;'s ports collection (/usr/ports), proposed ports, or modifications to ports. This is a technical mailing list for which strictly technical content is expected. &a.proliant.name; Technical discussion of &os; on HP ProLiant server platforms This mailing list is to be used for the technical discussion of the usage of &os; on HP ProLiant servers, including the discussion of ProLiant-specific drivers, management software, configuration tools, and BIOS updates. As such, this is the primary place to discuss the hpasmd, hpasmcli, and hpacucli modules. &a.python.name; Python on &os; This is a list for discussions related to improving Python-support on &os;. This is a technical mailing list. It is for individuals working on porting Python, its third party modules and Zope stuff to &os;. Individuals interested in following the technical discussion are also welcome. &a.questions.name; User questions This is the mailing list for questions about &os;. Do not send how to questions to the technical lists unless the question is quite technical. &a.ruby.name; &os;-specific Ruby discussions This is a list for discussions related to the Ruby support on &os;. This is a technical mailing list. It is for individuals working on Ruby ports, third party libraries and frameworks. Individuals interested in the technical discussion are also welcome. &a.scsi.name; SCSI subsystem This is the mailing list for people working on the SCSI subsystem for &os;. This is a technical mailing list for which strictly technical content is expected. &a.security.name; Security issues &os; computer security issues (DES, Kerberos, known security holes and fixes, etc). This is a technical mailing list for which strictly technical discussion is expected. Note that this is not a question-and-answer list, but that contributions (BOTH question AND answer) to the FAQ are welcome. &a.security-notifications.name; Security Notifications Notifications of &os; security problems and fixes. This is not a discussion list. The discussion list is FreeBSD-security. &a.small.name; Using &os; in embedded applications This list discusses topics related to unusually small and embedded &os; installations. This is a technical mailing list for which strictly technical content is expected. This list has been obsoleted by &a.embedded.name;. &a.snapshots.name; &os; Development Snapshot Announcements This list provides notifications about the availability of new &os; development snapshots for the head/ and stable/ branches. &a.stable.name; Discussions about the use of &os.stable; This is the mailing list for users of &os.stable;. It includes warnings about new features coming out in -STABLE that will affect the users, and instructions on steps that must be taken to remain -STABLE. Anyone running STABLE should subscribe to this list. This is a technical mailing list for which strictly technical content is expected. &a.standards.name; C99 & POSIX Conformance This is a forum for technical discussions related to &os; Conformance to the C99 and the POSIX standards. &a.testing.name; Testing on &os; Technical mailing list discussing testing on &os;, including ATF/Kyua, test build infrastructure, port tests to &os; from other operating systems (NetBSD, ...), etc. &a.tex.name; Porting TeX and its applications to &os; This is a technical mailing list for discussions related to TeX and its applications on &os;. 
It is for individuals actively working on porting TeX to FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. &a.toolchain.name; Maintenance of &os;'s integrated toolchain This is the mailing list for discussions related to the maintenance of the toolchain shipped with &os;. This could include the state of Clang and GCC, but also pieces of software such as assemblers, linkers and debuggers. &a.usb.name; Discussing &os; support for USB This is a mailing list for technical discussions related to &os; support for USB. &a.usergroups.name; User Group Coordination List This is the mailing list for the coordinators from each of the local area Users Groups to discuss matters with each other and a designated individual from the Core Team. This mail list should be limited to meeting synopsis and coordination of projects that span User Groups. &a.virtualization.name; Discussion of various virtualization techniques supported by &os; A list to discuss the various virtualization techniques supported by &os;. On one hand the focus will be on the implementation of the basic functionality as well as adding new features. On the other hand users will have a forum to ask for help in case of problems or to discuss their use cases. &a.wip-status.name; &os; Work-In-Progress Status This mailing list can be used by developers to announce the creation and progress of &os; related work. Messages will be moderated. It is suggested to send the message "To:" a more topical &os; list and only "BCC:" this list. This way the WIP can also be discussed on the topical list, as no discussion is allowed on this list. Look inside the archives for examples of suitable messages. An editorial digest of the messages to this list might be posted to the &os; website every few months as part of the Status Reports http://www.freebsd.org/news/status/. Past reports are archived. &a.wireless.name; Discussions of 802.11 stack, tools device driver development The FreeBSD-wireless list focuses on 802.11 stack (sys/net80211), device driver and tools development. This includes bugs, new features and maintenance. &a.xen.name; Discussion of the &os; port to &xen; — implementation and usage A list that focuses on the &os; &xen; port. The anticipated traffic level is small enough that it is intended as a forum for both technical discussions of the implementation and design details as well as administrative deployment issues. &a.xfce.name; XFCE This is a forum for discussions related to bring the XFCE environment to &os;. This is a technical mailing list. It is for individuals actively working on porting XFCE to &os;, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. &a.zope.name; Zope This is a forum for discussions related to bring the Zope environment to &os;. This is a technical mailing list. It is for individuals actively working on porting Zope to &os;, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. Filtering on the Mailing Lists The &os; mailing lists are filtered in multiple ways to avoid the distribution of spam, viruses, and other unwanted emails. The filtering actions described in this section do not include all those used to protect the mailing lists. Only certain types of attachments are allowed on the mailing lists. 
All attachments with a MIME content type not found in the list below will be stripped before an email is distributed on the mailing lists. application/octet-stream application/pdf application/pgp-signature application/x-pkcs7-signature message/rfc822 multipart/alternative multipart/related multipart/signed text/html text/plain text/x-diff text/x-patch Some of the mailing lists might allow attachments of other MIME content types, but the above list should be applicable for most of the mailing lists. If an email contains both an HTML and a plain text version, the HTML version will be removed. If an email contains only an HTML version, it will be converted to plain text. Usenet Newsgroups In addition to two &os; specific newsgroups, there are many others in which &os; is discussed or are otherwise relevant to &os; users. BSD Specific Newsgroups comp.unix.bsd.freebsd.announce comp.unix.bsd.freebsd.misc de.comp.os.unix.bsd (German) fr.comp.os.bsd (French) it.comp.os.freebsd (Italian) Other &unix; Newsgroups of Interest comp.unix comp.unix.questions comp.unix.admin comp.unix.programmer comp.unix.shell comp.unix.user-friendly comp.security.unix comp.sources.unix comp.unix.advocacy comp.unix.misc comp.unix.bsd X Window System comp.windows.x.i386unix comp.windows.x comp.windows.x.apps comp.windows.x.announce comp.windows.x.intrinsics comp.windows.x.motif comp.windows.x.pex comp.emulators.ms-windows.wine World Wide Web Servers Forums, Blogs, and Social Networks The &os; Forums provide a web based discussion forum for &os; questions and technical discussion. Planet &os; offers an aggregation feed of dozens of blogs written by &os; developers. Many developers use this to post quick notes about what they are working on, new patches, and other works in progress. The BSDConferences YouTube Channel provides a collection of high quality videos from BSD Conferences around the world. This is a great way to watch key developers give presentations about new work in &os;. Official Mirrors &chap.eresources.www.index.inc; &chap.mirrors.lastmod.inc; &chap.eresources.www.inc; Index: head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml (revision 46049) @@ -1,218 +1,218 @@ Other File Systems Tom Rhodes Written by Synopsis File Systems File Systems Support File Systems File systems are an integral part of any operating system. They allow users to upload and store files, provide access to data, and make hard drives useful. Different operating systems differ in their native file system. Traditionally, the native &os; file system has been the Unix File System UFS which has been modernized as UFS2. Since &os; 7.0, the Z File System (ZFS) is also available as a native file system. See for more information. In addition to its native file systems, &os; supports a multitude of other file systems so that data from other operating systems can be accessed locally, such as data stored on locally attached USB storage devices, flash drives, and hard disks.
This includes support for the &linux; Extended File System (EXT) and the Reiser file system. There are different levels of &os; support for the various file systems. Some require a kernel module to be loaded and others may require a toolset to be installed. Some non-native file system support is full read-write while others are read-only. After reading this chapter, you will know: The difference between native and supported file systems. Which file systems are supported by &os;. How to enable, configure, access, and make use of non-native file systems. Before reading this chapter, you should: Understand &unix; and &os; basics. Be familiar with the basics of kernel configuration and compilation. Feel comfortable installing software in &os;. Have some familiarity with disks, storage, and device names in &os;. &linux; File Systems &os; provides built-in support for several &linux; file systems. This section demonstrates how to load support for and how to mount the supported &linux; file systems. <acronym>ext2</acronym> Kernel support for ext2 file systems has been available since &os; 2.2. In &os; 8.x and earlier, the code is licensed under the GPL. Since &os; 9.0, the code has been rewritten and is now BSD licensed. The &man.ext2fs.5; driver allows the &os; kernel to both read and write to ext2 file systems. This driver can also be used to access ext3 and ext4 file systems. However, ext3 journaling, extended attributes, and inodes greater than 128-bytes are not supported. Support for ext4 is read-only. To access an ext file system, first load the kernel loadable module: &prompt.root; kldload ext2fs Then, mount the ext volume by specifying its &os; partition name and an existing mount point. This example mounts /dev/ad1s1 on /mnt: &prompt.root; mount -t ext2fs /dev/ad1s1 /mnt XFS A &os; kernel can be configured to provide read-only support for XFS file systems. To compile in XFS support, add the following option to a custom kernel configuration file and recompile the kernel using the instructions in : options XFS Then, to mount an XFS volume located on /dev/ad1s1: &prompt.root; mount -t xfs /dev/ad1s1 /mnt The sysutils/xfsprogs package or port provides additional utilities, with man pages, for using, analyzing, and repairing XFS file systems. ReiserFS &os; provides read-only support for The Reiser file system, ReiserFS. To load the &man.reiserfs.5; driver: &prompt.root; kldload reiserfs Then, to mount a ReiserFS volume located on /dev/ad1s1: &prompt.root; mount -t reiserfs /dev/ad1s1 /mnt Index: head/en_US.ISO8859-1/books/handbook/firewalls/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/firewalls/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/firewalls/chapter.xml (revision 46049) @@ -1,3757 +1,3758 @@ Firewalls Joseph J. Barbish Contributed by Brad Davis Converted to SGML and updated by firewall security firewalls Synopsis Firewalls make it possible to filter the incoming and outgoing traffic that flows through a system. A firewall can use one or more sets of rules to inspect network packets as they come in or go out of network connections and either allows the traffic through or blocks it. The rules of a firewall can inspect one or more characteristics of the packets such as the protocol type, source or destination host address, and source or destination port. Firewalls can enhance the security of a host or a network. 
They can be used to do one or more of the following: Protect and insulate the applications, services, and machines of an internal network from unwanted traffic from the public Internet. Limit or disable access from hosts of the internal network to services of the public Internet. Support network address translation (NAT), which allows an internal network to use private IP addresses and share a single connection to the public Internet using either a single IP address or a shared pool of automatically assigned public addresses. &os; has three firewalls built into the base system: PF, IPFW, and IPFILTER, also known as IPF. &os; also provides two traffic shapers for controlling bandwidth usage: &man.altq.4; and &man.dummynet.4;. ALTQ has traditionally been closely tied with PF and dummynet with IPFW. Each firewall uses rules to control the access of packets to and from a &os; system, although they go about it in different ways and each has a different rule syntax. &os; provides multiple firewalls in order to meet the different requirements and preferences for a wide variety of users. Each user should evaluate which firewall best meets their needs. After reading this chapter, you will know: How to define packet filtering rules. The differences between the firewalls built into &os;. How to use and configure the PF firewall. How to use and configure the IPFW firewall. How to use and configure the IPFILTER firewall. Before reading this chapter, you should: Understand basic &os; and Internet concepts. Since all firewalls are based on inspecting the values of selected packet control fields, the creator of the firewall ruleset must have an understanding of how TCP/IP works, what the different values in the packet control fields are, and how these values are used in a normal session conversation. For a good introduction, refer to Daryl's TCP/IP Primer. Firewall Concepts firewall rulesets A ruleset contains a group of rules which pass or block packets based on the values contained in the packet. The bi-directional exchange of packets between hosts comprises a session conversation. The firewall ruleset processes both the packets arriving from the public Internet, as well as the packets produced by the system as a response to them. Each TCP/IP service is predefined by its protocol and listening port. Packets destined for a specific service originate from the source address using an unprivileged port and target the specific service port on the destination address. All the above parameters can be used as selection criteria to create rules which will pass or block services. To look up unknown port numbers, refer to /etc/services. Alternatively, visit http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers and do a port number lookup to find the purpose of a particular port number. Check out this link for port numbers used by Trojans http://www.sans.org/security-resources/idfaq/oddports.php. FTP has two modes: active mode and passive mode. The difference is in how the data channel is acquired. Passive mode is more secure as the data channel is acquired by the original ftp session requester. For a good explanation of FTP and the different modes, see http://www.slacksite.com/other/ftp.html. A firewall ruleset can be either exclusive or inclusive. An exclusive firewall allows all traffic through except for the traffic matching the ruleset. An inclusive firewall does the reverse as it only allows traffic matching the rules through and blocks everything else.
An inclusive firewall offers better control of the outgoing traffic, making it a better choice for systems that offer services to the public Internet. It also controls the type of traffic originating from the public Internet that can gain access to a private network. All traffic that does not match the rules is blocked and logged. Inclusive firewalls are generally safer than exclusive firewalls because they significantly reduce the risk of allowing unwanted traffic. Unless noted otherwise, all configuration and example rulesets in this chapter create inclusive firewall rulesets. Security can be tightened further using a stateful firewall. This type of firewall keeps track of open connections and only allows traffic which either matches an existing connection or opens a new, allowed connection. Stateful filtering treats traffic as a bi-directional exchange of packets comprising a session. When state is specified on a matching rule the firewall dynamically generates internal rules for each anticipated packet being exchanged during the session. It has sufficient matching capabilities to determine if a packet is valid for a session. Any packets that do not properly fit the session template are automatically rejected. When the session completes, it is removed from the dynamic state table. Stateful filtering allows one to focus on blocking/passing new sessions. If the new session is passed, all its subsequent packets are allowed automatically and any impostor packets are automatically rejected. If a new session is blocked, none of its subsequent packets are allowed. Stateful filtering provides advanced matching abilities capable of defending against the flood of different attack methods employed by attackers. NAT stands for Network Address Translation. NAT function enables the private LAN behind the firewall to share a single ISP-assigned IP address, even if that address is dynamically assigned. NAT allows each computer in the LAN to have Internet access, without having to pay the ISP for multiple Internet accounts or IP addresses. NAT will automatically translate the private LAN IP address for each system on the LAN to the single public IP address as packets exit the firewall bound for the public Internet. It also performs the reverse translation for returning packets. According to RFC 1918, the following IP address ranges are reserved for private networks which will never be routed directly to the public Internet, and therefore are available for use with NAT: 10.0.0.0/8. 172.16.0.0/12. 192.168.0.0/16. When working with the firewall rules, be very careful. Some configurations can lock the administrator out of the server. To be on the safe side, consider performing the initial firewall configuration from the local console rather than doing it remotely over ssh. PF John Ferrell Revised and updated by firewall PF Since &os; 5.3, a ported version of OpenBSD's PF firewall has been included as an integrated part of the base system. PF is a complete, full-featured firewall that has optional support for ALTQ (Alternate Queuing), which provides Quality of Service (QoS). The OpenBSD Project maintains the definitive reference for PF in the PF FAQ. Peter Hansteen maintains a thorough PF tutorial at http://home.nuug.no/~peter/pf/. When reading the PF FAQ, keep in mind that different versions of &os; contain different versions of PF. &os; 8.X uses the same version of PF as OpenBSD 4.1 and &os; 9.X and later uses the same version of PF as OpenBSD 4.5. 
The &a.pf; is a good place to ask questions about configuring and running the PF firewall. Check the mailing list archives before asking a question as it may have already been answered. More information about porting PF to &os; can be found at http://pf4freebsd.love2party.net/. This section of the Handbook focuses on PF as it pertains to &os;. It demonstrates how to enable PF and ALTQ. It then provides several examples for creating rulesets on a &os; system. Enabling <application>PF</application> In order to use PF, its kernel module must be first loaded. This section describes the entries that can be added to /etc/rc.conf in order to enable PF. Start by adding the following line to /etc/rc.conf: pf_enable="YES" Additional options, described in &man.pfctl.8;, can be passed to PF when it is started. Add this entry to /etc/rc.conf and specify any required flags between the two quotes (""): pf_flags="" # additional flags for pfctl startup PF will not start if it cannot find its ruleset configuration file. The default ruleset is already created and is named /etc/pf.conf. If a custom ruleset has been saved somewhere else, add a line to /etc/rc.conf which specifies the full path to the file: pf_rules="/path/to/pf.conf" Logging support for PF is provided by &man.pflog.4;. To enable logging support, add this line to /etc/rc.conf: pflog_enable="YES" The following lines can also be added in order to change the default location of the log file or to specify any additional flags to pass to &man.pflog.4; when it is started: pflog_logfile="/var/log/pflog" # where pflogd should store the logfile pflog_flags="" # additional flags for pflogd startup Finally, if there is a LAN behind the firewall and packets need to be forwarded for the computers on the LAN, or NAT is required, add the following option: gateway_enable="YES" # Enable as LAN gateway After saving the needed edits, PF can be started with logging support by typing: &prompt.root; service pf start &prompt.root; service pflog start By default, PF reads its configuration rules from /etc/pf.conf and modifies, drops, or passes packets according to the rules or definitions specified in this file. The &os; installation includes several sample files located in /usr/share/examples/pf/. Refer to the PF FAQ for complete coverage of PF rulesets. To control PF, use pfctl. summarizes some useful options to this command. Refer to &man.pfctl.8; for a description of all available options: Useful <command>pfctl</command> Options Command Purpose pfctl -e Enable PF. pfctl -d Disable PF. pfctl -F all -f /etc/pf.conf Flush all NAT, filter, state, and table rules and reload /etc/pf.conf. pfctl -s [ rules | nat state ] Report on the filter rules, NAT rules, or state table. pfctl -vnf /etc/pf.conf Check /etc/pf.conf for errors, but do not load ruleset.
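As a quick illustration of these options, the following commands report whether PF is enabled and list the currently loaded rules and the contents of the state table. The exact output will vary from system to system; see &man.pfctl.8; for the full description of the -s modifiers:

&prompt.root; pfctl -s info
&prompt.root; pfctl -s rules
&prompt.root; pfctl -s states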
security/sudo is useful for running commands like pfctl that require elevated privileges. It can be installed from the Ports Collection. To keep an eye on the traffic that passes through the PF firewall, consider installing the sysutils/pftop package or port. Once installed, pftop can be run to view a running snapshot of traffic in a format which is similar to &man.top.1;.
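The log written by &man.pflog.4; is stored in binary pcap format rather than as plain text, so it cannot be read directly with a pager. A common way to inspect it is with &man.tcpdump.1;, either by reading the log file (assuming the default location shown above) or by listening on the pflog0 interface for a live view:

&prompt.root; tcpdump -n -e -ttt -r /var/log/pflog
&prompt.root; tcpdump -n -e -ttt -i pflog0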
Enabling <application>ALTQ</application> On &os;, ALTQ can be used with PF to provide Quality of Service (QOS). Once ALTQ is enabled, queues can be defined in the ruleset which determine the processing priority of outbound packets. Before enabling ALTQ, refer to &man.altq.4; to determine if the drivers for the network cards installed on the system support it. ALTQ is not available as a loadable kernel module. If the system's interfaces support ALTQ, create a custom kernel using the instructions in . The following kernel options are available. The first is needed to enable ALTQ. At least one of the other options is necessary to specify the queueing scheduler algorithm: options ALTQ options ALTQ_CBQ # Class Based Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) The following scheduler algorithms are available: CBQ Class Based Queuing (CBQ) is used to divide a connection's bandwidth into different classes or queues to prioritize traffic based on filter rules. RED Random Early Detection (RED) is used to avoid network congestion by measuring the length of the queue and comparing it to the minimum and maximum thresholds for the queue. When the queue is over the maximum, all new packets are randomly dropped. RIO In Random Early Detection In and Out (RIO) mode, RED maintains multiple average queue lengths and multiple threshold values, one for each QOS level. HFSC Hierarchical Fair Service Curve Packet Scheduler (HFSC) is described in http://www-2.cs.cmu.edu/~hzhang/HFSC/main.html. PRIQ Priority Queuing (PRIQ) always passes traffic that is in a higher queue first. More information about the scheduling algorithms and example rulesets are available at http://www.openbsd.org/faq/pf/queueing.html. <application>PF</application> Rulesets Peter Hansteen N. M. Contributed by This section demonstrates how to create a customized ruleset. It starts with the simplest of rulesets and builds upon its concepts using several examples to demonstrate real-world usage of PF's many features. The simplest possible ruleset is for a single machine that does not run any services and which needs access to one network, which may be the Internet. To create this minimal ruleset, edit /etc/pf.conf so it looks like this: block in all pass out all keep state The first rule denies all incoming traffic by default. The second rule allows connections created by this system to pass out, while retaining state information on those connections. This state information allows return traffic for those connections to pass back and should only be used on machines that can be trusted. The ruleset can be loaded with: &prompt.root; pfctl -e ; pfctl -f /etc/pf.conf In addition to keeping state, PF provides lists and macros which can be defined for use when creating rules. Macros can include lists and need to be defined before use. As an example, insert these lines at the very top of the ruleset: tcp_services = "{ ssh, smtp, domain, www, pop3, auth, pop3s }" udp_services = "{ domain }" PF understands port names as well as port numbers, as long as the names are listed in /etc/services. This example creates two macros. The first is a list of seven TCP port names and the second is one UDP port name. Once defined, macros can be used in rules. 
In this example, all traffic is blocked except for the connections initiated by this system for the seven specified TCP services and the one specified UDP service: tcp_services = "{ ssh, smtp, domain, www, pop3, auth, pop3s }" udp_services = "{ domain }" block all pass out proto tcp to any port $tcp_services keep state pass proto udp to any port $udp_services keep state Even though UDP is considered to be a stateless protocol, PF is able to track some state information. For example, when a UDP request is passed which asks a name server about a domain name, PF will watch for the response in order to pass it back. Whenever an edit is made to a ruleset, the new rules must be loaded so they can be used: &prompt.root; pfctl -f /etc/pf.conf If there are no syntax errors, pfctl will not output any messages during the rule load. Rules can also be tested before attempting to load them: &prompt.root; pfctl -nf /etc/pf.conf Including -n causes the rules to be interpreted only, but not loaded. This provides an opportunity to correct any errors. At all times, the last valid ruleset loaded will be enforced until either PF is disabled or a new ruleset is loaded. Adding -v to a pfctl ruleset verify or load will display the fully parsed rules exactly the way they will be loaded. This is extremely useful when debugging rules. A Simple Gateway with NAT This section demonstrates how to configure a &os; system running PF to act as a gateway for at least one other machine. The gateway needs at least two network interfaces, each connected to a separate network. In this example, xl1 is connected to the Internet and xl0 is connected to the internal network. First, enable the gateway in order to let the machine forward the network traffic it receives on one interface to another interface. This sysctl setting will forward IPv4 packets: &prompt.root; sysctl net.inet.ip.forwarding=1 To forward IPv6 traffic, use: &prompt.root; sysctl net.inet6.ip6.forwarding=1 To enable these settings at system boot, add the following to /etc/rc.conf: gateway_enable="YES" #for ipv4 ipv6_gateway_enable="YES" #for ipv6 Verify with ifconfig that both of the interfaces are up and running. Next, create the PF rules to allow the gateway to pass traffic. While the following rule allows stateful traffic to pass from the Internet to hosts on the network, the to keyword does not guarantee passage all the way from source to destination: pass in on xl1 from xl1:network to xl0:network port $ports keep state That rule only lets the traffic pass in to the gateway on the internal interface. To let the packets go further, a matching rule is needed: pass out on xl0 from xl1:network to xl0:network port $ports keep state While these two rules will work, rules this specific are rarely needed. For a busy network admin, a readable ruleset is a safer ruleset. The remainder of this section demonstrates how to keep the rules as simple as possible for readability. For example, those two rules could be replaced with one rule: pass from xl1:network to any port $ports keep state The interface:network notation can be replaced with a macro to make the ruleset even more readable. For example, a $localnet macro could be defined as the network directly attached to the internal interface ($xl1:network). Alternatively, the definition of $localnet could be changed to an IP address/netmask notation to denote a network, such as 192.168.100.1/24 for a subnet of private addresses. If required, $localnet could even be defined as a list of networks.
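For example, a purely illustrative list-style definition, using two private subnets that stand in for whatever networks are actually attached, might look like this:

localnet = "{ 192.168.100.0/24, 192.168.200.0/24 }"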
Whatever the specific needs, a sensible $localnet definition could be used in a typical pass rule as follows: pass from $localnet to any port $ports keep state The following sample ruleset allows all traffic initiated by machines on the internal network. It first defines two macros to represent the external and internal 3COM interfaces of the gateway. For dialup users, the external interface will use tun0. For an ADSL connection, specifically those using PPP over Ethernet (PPPoE), the correct external interface is tun0, not the physical Ethernet interface. ext_if = "xl0" # macro for external interface - use tun0 for PPPoE int_if = "xl1" # macro for internal interface localnet = $int_if:network # ext_if IP address could be dynamic, hence ($ext_if) nat on $ext_if from $localnet to any -> ($ext_if) block all pass from { lo0, $localnet } to any keep state This ruleset introduces the nat rule which is used to handle the network address translation from the non-routable addresses inside the internal network to the IP address assigned to the external interface. The parentheses surrounding the last part of the nat rule ($ext_if) is included when the IP address of the external interface is dynamically assigned. It ensures that network traffic runs without serious interruptions even if the external IP address changes. Note that this ruleset probably allows more traffic to pass out of the network than is needed. One reasonable setup could create this macro: client_out = "{ ftp-data, ftp, ssh, domain, pop3, auth, nntp, http, \ https, cvspserver, 2628, 5999, 8000, 8080 }" to use in the main pass rule: pass inet proto tcp from $localnet to any port $client_out \ flags S/SA keep state A few other pass rules may be needed. This one enables SSH on the external interface: pass in inet proto tcp to $ext_if port ssh This macro definition and rule allow DNS and NTP for internal clients: udp_services = "{ domain, ntp }" pass quick inet proto { tcp, udp } to any port $udp_services keep state Note the quick keyword in this rule. Since the ruleset consists of several rules, it is important to understand the relationships between the rules in a ruleset. Rules are evaluated from top to bottom, in the sequence they are written. For each packet or connection evaluated by PF, the last matching rule in the ruleset is the one which is applied. However, when a packet matches a rule which contains the quick keyword, the rule processing stops and the packet is treated according to that rule. This is very useful when an exception to the general rules is needed. Creating an FTP Proxy Configuring working FTP rules can be problematic due to the nature of the FTP protocol. FTP pre-dates firewalls by several decades and is insecure in its design. The most common points against using FTP include: Passwords are transferred in the clear. The protocol demands the use of at least two TCP connections (control and data) on separate ports. When a session is established, data is communicated using randomly selected ports. All of these points present security challenges, even before considering any potential security weaknesses in client or server software. More secure alternatives for file transfer exist, such as &man.sftp.1; or &man.scp.1;, which both feature authentication and data transfer over encrypted connections. For those situations when FTP is required, PF provides redirection of FTP traffic to a small proxy program called &man.ftp-proxy.8;, which is included in the base system of &os;.
The role of the proxy is to dynamically insert and delete rules in the ruleset, using a set of anchors, in order to correctly handle FTP traffic. To enable the FTP proxy, add this line to /etc/rc.conf: ftpproxy_enable="YES" Then start the proxy by running service ftp-proxy start. For a basic configuration, three elements need to be added to /etc/pf.conf. First, the anchors which the proxy will use to insert the rules it generates for the FTP sessions: nat-anchor "ftp-proxy/*" rdr-anchor "ftp-proxy/*" Second, a pass rule is needed to allow FTP traffic in to the proxy. Third, redirection and NAT rules need to be defined before the filtering rules. Insert this rdr rule immediately after the nat rule: rdr pass on $int_if proto tcp from any to any port ftp -> 127.0.0.1 port 8021 Finally, allow the redirected traffic to pass: pass out proto tcp from $proxy to any port ftp where $proxy expands to the address the proxy daemon is bound to. Save /etc/pf.conf, load the new rules, and verify from a client that FTP connections are working: &prompt.root; pfctl -f /etc/pf.conf This example covers a basic setup where the clients in the local network need to contact FTP servers elsewhere. This basic configuration should work well with most combinations of FTP clients and servers. As shown in &man.ftp-proxy.8;, the proxy's behavior can be changed in various ways by adding options to the ftpproxy_flags= line. Some clients or servers may have specific quirks that must be compensated for in the configuration, or there may be a need to integrate the proxy in specific ways such as assigning FTP traffic to a specific queue. For ways to run an FTP server protected by PF and &man.ftp-proxy.8;, configure a separate ftp-proxy in reverse mode, using -R, on a separate port with its own redirecting pass rule. Managing ICMP Many of the tools used for debugging or troubleshooting a TCP/IP network rely on the Internet Control Message Protocol (ICMP), which was designed specifically with debugging in mind. The ICMP protocol sends and receives control messages between hosts and gateways, mainly to provide feedback to a sender about any unusual or difficult conditions en route to the target host. Routers use ICMP to negotiate packet sizes and other transmission parameters in a process often referred to as path MTU discovery. From a firewall perspective, some ICMP control messages are vulnerable to known attack vectors. Also, letting all diagnostic traffic pass unconditionally makes debugging easier, but it also makes it easier for others to extract information about the network. For these reasons, the following rule may not be optimal: pass inet proto icmp from any to any One solution is to let all ICMP traffic from the local network through while stopping all probes from outside the network: pass inet proto icmp from $localnet to any keep state pass inet proto icmp from any to $ext_if keep state Additional options are available which demonstrate some of PF's flexibility. For example, rather than allowing all ICMP messages, one can specify the messages used by &man.ping.8; and &man.traceroute.8;. Start by defining a macro for that type of message: icmp_types = "echoreq" and a rule which uses the macro: pass inet proto icmp all icmp-type $icmp_types keep state If other types of ICMP packets are needed, expand icmp_types to a list of those packet types. Type more /usr/src/contrib/pf/pfctl/pfctl_parser.c to see the list of ICMP message types supported by PF.
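For example, to also match time exceeded and parameter problem messages, the macro could be expanded with the names used in that file. This is only an illustration; include whichever types the local network actually needs:

icmp_types = "{ echoreq, timex, paramprob }"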
Refer to http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml for an explanation of each message type. Since Unix traceroute uses UDP by default, another rule is needed to allow Unix traceroute: # allow out the default range for traceroute(8): pass out on $ext_if inet proto udp from any to any port 33433 >< 33626 keep state Since TRACERT.EXE on Microsoft Windows systems uses ICMP echo request messages, only the first rule is needed to allow network traces from those systems. Unix traceroute can be instructed to use other protocols as well, including ICMP echo request messages. Check the &man.traceroute.8; man page for details. Path <acronym>MTU</acronym> Discovery Internet protocols are designed to be device independent, and one consequence of device independence is that the optimal packet size for a given connection cannot always be predicted reliably. The main constraint on packet size is the Maximum Transmission Unit (MTU) which sets the upper limit on the packet size for an interface. Type ifconfig to view the MTUs for a system's network interfaces. TCP/IP uses a process known as path MTU discovery to determine the right packet size for a connection. This process sends packets of varying sizes with the Do not fragment flag set, expecting an ICMP return packet of type 3, code 4 when the upper limit has been reached. Type 3 means destination unreachable, and code 4 is short for fragmentation needed, but the do-not-fragment flag is set. To allow path MTU discovery in order to support connections to other MTUs, add the destination unreachable type to the icmp_types macro: icmp_types = "{ echoreq, unreach }" Since the pass rule already uses that macro, it does not need to be modified in order to support the new ICMP type: pass inet proto icmp all icmp-type $icmp_types keep state PF allows filtering on all variations of ICMP types and codes. The list of possible types and codes is documented in &man.icmp.4; and &man.icmp6.4;. Using Tables Some types of data are relevant to filtering and redirection at a given time, but their definition is too long to be included in the ruleset file. PF supports the use of tables, which are defined lists that can be manipulated without needing to reload the entire ruleset, and which can provide fast lookups. Table names are always enclosed within < >, like this: table <clients> { 192.168.2.0/24, !192.168.2.5 } In this example, the 192.168.2.0/24 network is part of the table, except for the address 192.168.2.5, which is excluded using the ! operator. It is also possible to load tables from files where each item is on a separate line, as seen in this example /etc/clients: 192.168.2.0/24 !192.168.2.5 To refer to the file, define the table like this: table <clients> persist file "/etc/clients" Once the table is defined, it can be referenced by a rule: pass inet proto tcp from <clients> to any port $client_out flags S/SA keep state A table's contents can be manipulated live, using pfctl. This example adds another network to the table: &prompt.root; pfctl -t clients -T add 192.168.1.0/16 Note that any changes made this way will take effect immediately, making them ideal for testing, but will not survive a power failure or reboot. To make the changes permanent, modify the definition of the table in the ruleset or edit the file that the table refers to.
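Entries added for testing can be inspected and removed live in the same way. For example, the following commands should list the current contents of the <clients> table and then delete the network that was added above:
&prompt.root; pfctl -t clients -T show
&prompt.root; pfctl -t clients -T delete 192.168.1.0/16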
One can maintain the on-disk copy of the table using a &man.cron.8; job which dumps the table's contents to disk at regular intervals, using a command such as pfctl -t clients -T show >/etc/clients. Alternatively, /etc/clients can be updated with the in-memory table contents: &prompt.root; pfctl -t clients -T replace -f /etc/clients Using Overload Tables to Protect <acronym>SSH</acronym> Those who run SSH on an external interface have probably seen something like this in the authentication logs: Sep 26 03:12:34 skapet sshd[25771]: Failed password for root from 200.72.41.31 port 40992 ssh2 Sep 26 03:12:34 skapet sshd[5279]: Failed password for root from 200.72.41.31 port 40992 ssh2 Sep 26 03:12:35 skapet sshd[5279]: Received disconnect from 200.72.41.31: 11: Bye Bye Sep 26 03:12:44 skapet sshd[29635]: Invalid user admin from 200.72.41.31 Sep 26 03:12:44 skapet sshd[24703]: input_userauth_request: invalid user admin Sep 26 03:12:44 skapet sshd[24703]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2 This is indicative of a brute force attack where somebody or some program is trying to discover the user name and password which will let them into the system. If external SSH access is needed for legitimate users, changing the default port used by SSH can offer some protection. However, PF provides a more elegant solution. Pass rules can contain limits on what connecting hosts can do and violators can be banished to a table of addresses which are denied some or all access. It is even possible to drop all existing connections from machines which overreach the limits. To configure this, create this table in the tables section of the ruleset: table <bruteforce> persist Then, somewhere early in the ruleset, add rules to block brute force access while allowing legitimate access: block quick from <bruteforce> pass inet proto tcp from any to $localnet port $tcp_services \ flags S/SA keep state \ (max-src-conn 100, max-src-conn-rate 15/5, \ overload <bruteforce> flush global) The part in parentheses defines the limits and the numbers should be changed to meet local requirements. It can be read as follows: max-src-conn is the number of simultaneous connections allowed from one host. max-src-conn-rate is the rate of new connections allowed from any single host (15) per number of seconds (5). overload <bruteforce> means that any host which exceeds these limits gets its address added to the bruteforce table. The ruleset blocks all traffic from addresses in the bruteforce table. Finally, flush global says that when a host reaches the limit, all (global) of that host's connections will be terminated (flush). These rules will not block slow bruteforcers, as described in http://home.nuug.no/~peter/hailmary2013/. This example ruleset is intended mainly as an illustration. For example, if a generous number of connections in general are wanted, but the desire is to be more restrictive when it comes to ssh, supplement the rule above with something like the one below, early on in the ruleset: pass quick proto { tcp, udp } from any to any port ssh \ flags S/SA keep state \ (max-src-conn 15, max-src-conn-rate 5/3, \ overload <bruteforce> flush global) It May Not be Necessary to Block All Overloaders It is worth noting that the overload mechanism is a general technique which does not apply exclusively to SSH, and it is not always optimal to entirely block all traffic from offenders.
For example, an overload rule could be used to protect a mail service or a web service, and the overload table could be used in a rule to assign offenders to a queue with a minimal bandwidth allocation or to redirect to a specific web page. Over time, tables will be filled by overload rules and their size will grow incrementally, taking up more memory. Sometimes an IP address that is blocked is a dynamically assigned one, which has since been assigned to a host that has a legitimate reason to communicate with hosts in the local network. For situations like these, pfctl provides the ability to expire table entries. For example, this command will remove <bruteforce> table entries which have not been referenced for 86400 seconds: &prompt.root; pfctl -t bruteforce -T expire 86400 Similar functionality is provided by security/expiretable, which removes table entries which have not been accessed for a specified period of time. Once installed, expiretable can be run to remove <bruteforce> table entries older than a specified age. This example removes all entries older than 24 hours: /usr/local/sbin/expiretable -v -d -t 24h bruteforce Protecting Against <acronym>SPAM</acronym> Not to be confused with the spamd daemon which comes bundled with spamassassin, mail/spamd can be configured with PF to provide an outer defense against SPAM. This spamd hooks into the PF configuration using a set of redirections. Spammers tend to send a large number of messages, and SPAM is mainly sent from a few spammer-friendly networks and a large number of hijacked machines, both of which are reported to blacklists fairly quickly. When an SMTP connection from an address in a blacklist is received, spamd presents its banner and immediately switches to a mode where it answers SMTP traffic one byte at a time. This technique, which is intended to waste as much time as possible on the spammer's end, is called tarpitting. The specific implementation which uses one-byte SMTP replies is often referred to as stuttering. This example demonstrates the basic procedure for setting up spamd with automatically updated blacklists. Refer to the man pages which are installed with mail/spamd for more information. Configuring <application>spamd</application> Install the mail/spamd package or port. In order to use spamd's greylisting features, &man.fdescfs.5; must be mounted at /dev/fd. Add the following line to /etc/fstab: fdescfs /dev/fd fdescfs rw 0 0 Then, mount the filesystem: &prompt.root; mount fdescfs Next, edit the PF ruleset to include: table <spamd> persist table <spamd-white> persist rdr pass on $ext_if inet proto tcp from <spamd> to \ { $ext_if, $localnet } port smtp -> 127.0.0.1 port 8025 rdr pass on $ext_if inet proto tcp from !<spamd-white> to \ { $ext_if, $localnet } port smtp -> 127.0.0.1 port 8025 The two tables <spamd> and <spamd-white> are essential. SMTP traffic from an address listed in <spamd> but not in <spamd-white> is redirected to the spamd daemon listening at port 8025. The next step is to configure spamd in /usr/local/etc/spamd.conf and to add some rc.conf parameters. The installation of mail/spamd includes a sample configuration file (/usr/local/etc/spamd.conf.sample) and a man page for spamd.conf. Refer to these for additional configuration options beyond those shown in this example.
One of the first lines in the configuration file that does not begin with a # comment sign contains the block which defines the all list, which specifies the lists to use: all:\ :traplist:whitelist: This entry adds the desired blacklists, separated by colons (:). To use a whitelist to subtract addresses from a blacklist, add the name of the whitelist immediately after the name of that blacklist. For example: :blacklist:whitelist:. This is followed by the specified blacklist's definition: traplist:\ :black:\ :msg="SPAM. Your address %A has sent spam within the last 24 hours":\ :method=http:\ :file=www.openbsd.org/spamd/traplist.gz where the first line is the name of the blacklist and the second line specifies the list type. The msg field contains the message to display to blacklisted senders during the SMTP dialogue. The method field specifies how spamd-setup fetches the list data; supported methods are http, ftp, from a file in a mounted file system, and via exec of an external program. Finally, the file field specifies the name of the file spamd expects to receive. The definition of the specified whitelist is similar, but omits the msg field since a message is not needed: whitelist:\ :white:\ :method=file:\ :file=/var/mail/whitelist.txt Choose Data Sources with Care Using all the blacklists in the sample spamd.conf will blacklist large blocks of the Internet. Administrators need to edit the file to create an optimal configuration which uses applicable data sources and, when necessary, uses custom lists. Next, add this entry to /etc/rc.conf. Additional flags are described in the man page specified by the comment: spamd_flags="-v" # use "" and see spamd-setup(8) for flags When finished, reload the ruleset, start spamd by typing service obspamd start, and complete the configuration using spamd-setup. Finally, create a &man.cron.8; job which calls spamd-setup to update the tables at reasonable intervals. On a typical gateway in front of a mail server, hosts will start getting trapped within a few seconds to several minutes. PF also supports greylisting, which temporarily rejects messages from unknown hosts with 45n codes. Messages from greylisted hosts which try again within a reasonable time are let through. Traffic from senders which are set up to behave within the limits set by RFC 1123 and RFC 2821 is immediately let through. More information about greylisting as a technique can be found at the greylisting.org web site. The most amazing thing about greylisting, apart from its simplicity, is that it still works. Spammers and malware writers have been very slow to adapt in order to bypass this technique. The basic procedure for configuring greylisting is as follows: Configuring Greylisting Make sure that &man.fdescfs.5; is mounted as described in Step 1 of the previous Procedure. To run spamd in greylisting mode, add this line to /etc/rc.conf: spamd_grey="YES" # use spamd greylisting if YES Refer to the spamd man page for descriptions of additional related parameters. To complete the greylisting setup: &prompt.root; service obspamd restart &prompt.root; service spamlogd start Behind the scenes, the spamdb database tool and the spamlogd whitelist updater perform essential functions for the greylisting feature. spamdb is the administrator's main interface to managing the black, grey, and white lists via the contents of the /var/db/spamdb database. Network Hygiene This section describes how block-policy, scrub, and antispoof can be used to make the ruleset behave sanely.
The block-policy is an option which can be set in the options part of the ruleset, which precedes the redirection and filtering rules. This option determines which feedback, if any, PF sends to hosts that are blocked by a rule. The option has two possible values: drop drops blocked packets with no feedback, and return returns a status code such as Connection refused. If not set, the default policy is drop. To change the block-policy, specify the desired value: set block-policy return In PF, scrub is a keyword which enables network packet normalization. This process reassembles fragmented packets and drops TCP packets that have invalid flag combinations. Enabling scrub provides a measure of protection against certain kinds of attacks based on incorrect handling of packet fragments. A number of options are available, but the simplest form is suitable for most configurations: scrub in all Some services, such as NFS, require specific fragment handling options. Refer to http://www.openbsd.gr/faq/pf/scrub.html for more information. This example reassembles fragments, clears the do not fragment bit, and sets the maximum segment size to 1440 bytes: scrub in all fragment reassemble no-df max-mss 1440 The antispoof mechanism protects against activity from spoofed or forged IP addresses, mainly by blocking packets appearing on interfaces and in directions which are logically not possible. These rules weed out spoofed traffic coming in from the rest of the world as well as any spoofed packets which originate in the local network: antispoof for $ext_if antispoof for $int_if Handling Non-Routable Addresses Even with a properly configured gateway to handle network address translation, one may have to compensate for other people's misconfigurations. A common misconfiguration is to let traffic with non-routable addresses out to the Internet. Since traffic from non-routable addresses can play a part in several DoS attack techniques, consider explicitly blocking traffic from non-routable addresses from entering the network through the external interface. In this example, a macro containing non-routable addresses is defined, then used in blocking rules. Traffic to and from these addresses is quietly dropped on the gateway's external interface. martians = "{ 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, \ 10.0.0.0/8, 169.254.0.0/16, 192.0.2.0/24, \ 0.0.0.0/8, 240.0.0.0/4 }" block drop in quick on $ext_if from $martians to any block drop out quick on $ext_if from any to $martians
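Once a ruleset along these lines has been assembled, it is worth checking it for syntax errors before loading it. Assuming the rules have been saved to /etc/pf.conf, a sequence like the following parses the file without loading it, loads it, and then displays the rules and states that are actually in effect:
&prompt.root; pfctl -nf /etc/pf.conf
&prompt.root; pfctl -f /etc/pf.conf
&prompt.root; pfctl -s rules
&prompt.root; pfctl -s states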
<application>IPFW</application> firewall IPFW IPFW is a stateful firewall written for &os; which supports both IPv4 and IPv6. It is comprised of several components: the kernel firewall filter rule processor and its integrated packet accounting facility, the logging facility, NAT, the &man.dummynet.4; traffic shaper, a forward facility, a bridge facility, and an ipstealth facility. &os; provides a sample ruleset in /etc/rc.firewall which defines several firewall types for common scenarios to assist novice users in generating an appropriate ruleset. IPFW provides a powerful syntax which advanced users can use to craft customized rulesets that meet the security requirements of a given environment. This section describes how to enable IPFW, provides an overview of its rule syntax, and demonstrates several rulesets for common configuration scenarios. Enabling <application>IPFW</application> IPFW enabling IPFW is included in the basic &os; install as a kernel loadable module, meaning that a custom kernel is not needed in order to enable IPFW. kernel options IPFIREWALL kernel options IPFIREWALL_VERBOSE kernel options IPFIREWALL_VERBOSE_LIMIT IPFW kernel options For those users who wish to statically compile IPFW support into a custom kernel, refer to the instructions in . The following options are available for the custom kernel configuration file: options IPFIREWALL # enables IPFW options IPFIREWALL_VERBOSE # enables logging for rules with log keyword options IPFIREWALL_VERBOSE_LIMIT=5 # limits number of logged packets per-entry options IPFIREWALL_DEFAULT_TO_ACCEPT # sets default policy to pass what is not explicitly denied options IPDIVERT # enables NAT To configure the system to enable IPFW at boot time, add the following entry to /etc/rc.conf: firewall_enable="YES" To use one of the default firewall types provided by &os;, add another line which specifies the type: firewall_type="open" The available types are: open: passes all traffic. client: protects only this machine. simple: protects the whole network. closed: entirely disables IP traffic except for the loopback interface. workstation: protects only this machine using stateful rules. UNKNOWN: disables the loading of firewall rules. filename: full path of the file containing the firewall ruleset. If firewall_type is set to either client or simple, modify the default rules found in /etc/rc.firewall to fit the configuration of the system. Note that the filename type is used to load a custom ruleset. An alternate way to load a custom ruleset is to set the firewall_script variable to the absolute path of an executable script that includes IPFW commands. The examples used in this section assume that the firewall_script is set to /etc/ipfw.rules: firewall_script="/etc/ipfw.rules" To enable logging, include this line: firewall_logging="YES" There is no /etc/rc.conf variable to set logging limits. To limit the number of times a rule is logged per connection attempt, specify the number using this line in /etc/sysctl.conf: net.inet.ip.fw.verbose_limit=5 After saving the needed edits, start the firewall.
To enable logging limits now, also set the sysctl value specified above: &prompt.root; service ipfw start &prompt.root; sysctl net.inet.ip.fw.verbose_limit=5 <application>IPFW</application> Rule Syntax IPFW rule processing order When a packet enters the IPFW firewall, it is compared against the first rule in the ruleset and progresses one rule at a time, moving from top to bottom in sequence. When the packet matches the selection parameters of a rule, the rule's action is executed and the search of the ruleset terminates for that packet. This is referred to as first match wins. If the packet does not match any of the rules, it gets caught by the mandatory IPFW default rule number 65535, which denies all packets and silently discards them. However, if the packet matches a rule that contains the count, skipto, or tee keywords, the search continues. Refer to &man.ipfw.8; for details on how these keywords affect rule processing. IPFW rule syntax When creating an IPFW rule, keywords must be written in the following order. Some keywords are mandatory while other keywords are optional. The words shown in uppercase represent a variable and the words shown in lowercase must precede the variable that follows it. The # symbol is used to mark the start of a comment and may appear at the end of a rule or on its own line. Blank lines are ignored. CMD RULE_NUMBER set SET_NUMBER ACTION log LOG_AMOUNT PROTO from SRC SRC_PORT to DST DST_PORT OPTIONS This section provides an overview of these keywords and their options. It is not an exhaustive list of every possible option. Refer to &man.ipfw.8; for a complete description of the rule syntax that can be used when creating IPFW rules. CMD Every rule must start with ipfw add. RULE_NUMBER Each rule is associated with a number from 1 to 65534. The number is used to indicate the order of rule processing. Multiple rules can have the same number, in which case they are applied according to the order in which they have been added. SET_NUMBER Each rule is associated with a set number from 0 to 31. Sets can be individually disabled or enabled, making it possible to quickly add or delete a set of rules. If a SET_NUMBER is not specified, the rule will be added to set 0. ACTION A rule can be associated with one of the following actions. The specified action will be executed when the packet matches the selection criterion of the rule. allow | accept | pass | permit: these keywords are equivalent and allow packets that match the rule. check-state: checks the packet against the dynamic state table. If a match is found, execute the action associated with the rule which generated this dynamic rule, otherwise move to the next rule. A check-state rule does not have selection criterion. If no check-state rule is present in the ruleset, the dynamic rules table is checked at the first keep-state or limit rule. count: updates counters for all packets that match the rule. The search continues with the next rule. deny | drop: either word silently discards packets that match this rule. Additional actions are available. Refer to &man.ipfw.8; for details. LOG_AMOUNT When a packet matches a rule with the log keyword, a message will be logged to &man.syslogd.8; with a facility name of SECURITY. Logging only occurs if the number of packets logged for that particular rule does not exceed a specified LOG_AMOUNT. If no LOG_AMOUNT is specified, the limit is taken from the value of net.inet.ip.fw.verbose_limit. A value of zero removes the logging limit. 
Once the limit is reached, logging can be re-enabled by clearing the logging counter or the packet counter for that rule, using ipfw resetlog. Logging is done after all other packet matching conditions have been met, and before performing the final action on the packet. The administrator decides which rules to enable logging on. PROTO This optional value can be used to specify any protocol name or number found in /etc/protocols. SRC The from keyword must be followed by the source address or a keyword that represents the source address. An address can be represented by any, me (any address configured on an interface on this system), me6 (any IPv6 address configured on an interface on this system), or table followed by the number of a lookup table which contains a list of addresses. When specifying an IP address, it can be optionally followed by its CIDR mask or subnet mask. For example, 1.2.3.4/25 or 1.2.3.4:255.255.255.128. SRC_PORT An optional source port can be specified using the port number or name from /etc/services. DST The to keyword must be followed by the destination address or a keyword that represents the destination address. The same keywords and addresses described in the SRC section can be used to describe the destination. DST_PORT An optional destination port can be specified using the port number or name from /etc/services. OPTIONS Several keywords can follow the source and destination. As the name suggests, OPTIONS are optional. Commonly used options include in or out, which specify the direction of packet flow, icmptypes followed by the type of ICMP message, and keep-state. When a keep-state rule is matched, the firewall will create a dynamic rule which matches bidirectional traffic between the source and destination addresses and ports using the same protocol. The dynamic rules facility is vulnerable to resource depletion from a SYN-flood attack which would open a huge number of dynamic rules. To counter this type of attack with IPFW, use limit. This option limits the number of simultaneous sessions by checking the open dynamic rules, counting the number of times this rule and IP address combination occurred. If this count is greater than the value specified by limit, the packet is discarded. Dozens of OPTIONS are available. Refer to &man.ipfw.8; for a description of each available option. Example Ruleset This section demonstrates how to create an example stateful firewall ruleset script named /etc/ipfw.rules. In this example, all connection rules use in or out to clarify the direction. They also use via interface-name to specify the interface the packet is traveling over. When first creating or testing a firewall ruleset, consider temporarily setting this tunable: net.inet.ip.fw.default_to_accept="1" This sets the default policy of &man.ipfw.8; to be more permissive than the default deny ip from any to any, making it slightly more difficult to get locked out of the system right after a reboot. The firewall script begins by indicating that it is a Bourne shell script and flushes any existing rules. It then creates the cmd variable so that ipfw add does not have to be typed at the beginning of every rule. It also defines the pif variable which represents the name of the interface that is attached to the Internet. #!/bin/sh # Flush out the list before we begin.
ipfw -q -f flush # Set rules command prefix cmd="ipfw -q add" pif="dc0" # interface name of NIC attached to Internet The first two rules allow all traffic on the trusted internal interface and on the loopback interface: # Change xl0 to LAN NIC interface name $cmd 00005 allow all from any to any via xl0 # No restrictions on Loopback Interface $cmd 00010 allow all from any to any via lo0 The next rule allows the packet through if it matches an existing entry in the dynamic rules table: $cmd 00101 check-state The next set of rules defines which stateful connections internal systems can create to hosts on the Internet: # Allow access to public DNS # Replace x.x.x.x with the IP address of a public DNS server # and repeat for each DNS server in /etc/resolv.conf $cmd 00110 allow tcp from any to x.x.x.x 53 out via $pif setup keep-state $cmd 00111 allow udp from any to x.x.x.x 53 out via $pif keep-state # Allow access to ISP's DHCP server for cable/DSL configurations. # Use the first rule and check log for IP address. # Then, uncomment the second rule, input the IP address, and delete the first rule $cmd 00120 allow log udp from any to any 67 out via $pif keep-state #$cmd 00120 allow udp from any to x.x.x.x 67 out via $pif keep-state # Allow outbound HTTP and HTTPS connections $cmd 00200 allow tcp from any to any 80 out via $pif setup keep-state $cmd 00220 allow tcp from any to any 443 out via $pif setup keep-state # Allow outbound email connections $cmd 00230 allow tcp from any to any 25 out via $pif setup keep-state $cmd 00231 allow tcp from any to any 110 out via $pif setup keep-state # Allow outbound ping $cmd 00250 allow icmp from any to any out via $pif keep-state # Allow outbound NTP $cmd 00260 allow tcp from any to any 37 out via $pif setup keep-state # Allow outbound SSH $cmd 00280 allow tcp from any to any 22 out via $pif setup keep-state # deny and log all other outbound connections $cmd 00299 deny log all from any to any out via $pif The next set of rules controls connections from Internet hosts to the internal network. It starts by denying packets typically associated with attacks and then explicitly allows specific types of connections. All the authorized services that originate from the Internet use limit to prevent flooding. # Deny all inbound traffic from non-routable reserved address spaces $cmd 00300 deny all from 192.168.0.0/16 to any in via $pif #RFC 1918 private IP $cmd 00301 deny all from 172.16.0.0/12 to any in via $pif #RFC 1918 private IP $cmd 00302 deny all from 10.0.0.0/8 to any in via $pif #RFC 1918 private IP $cmd 00303 deny all from 127.0.0.0/8 to any in via $pif #loopback $cmd 00304 deny all from 0.0.0.0/8 to any in via $pif #loopback $cmd 00305 deny all from 169.254.0.0/16 to any in via $pif #DHCP auto-config $cmd 00306 deny all from 192.0.2.0/24 to any in via $pif #reserved for docs $cmd 00307 deny all from 204.152.64.0/23 to any in via $pif #Sun cluster interconnect $cmd 00308 deny all from 224.0.0.0/3 to any in via $pif #Class D & E multicast # Deny public pings $cmd 00310 deny icmp from any to any in via $pif # Deny ident $cmd 00315 deny tcp from any to any 113 in via $pif # Deny all Netbios services. 
$cmd 00320 deny tcp from any to any 137 in via $pif $cmd 00321 deny tcp from any to any 138 in via $pif $cmd 00322 deny tcp from any to any 139 in via $pif $cmd 00323 deny tcp from any to any 81 in via $pif # Deny fragments $cmd 00330 deny all from any to any frag in via $pif # Deny ACK packets that did not match the dynamic rule table $cmd 00332 deny tcp from any to any established in via $pif # Allow traffic from ISP's DHCP server. # Replace x.x.x.x with the same IP address used in rule 00120. #$cmd 00360 allow udp from any to x.x.x.x 67 in via $pif keep-state # Allow HTTP connections to internal web server $cmd 00400 allow tcp from any to me 80 in via $pif setup limit src-addr 2 # Allow inbound SSH connections $cmd 00410 allow tcp from any to me 22 in via $pif setup limit src-addr 2 # Reject and log all other incoming connections $cmd 00499 deny log all from any to any in via $pif The last rule logs all packets that do not match any of the rules in the ruleset: # Everything else is denied and logged $cmd 00999 deny log all from any to any Configuring <acronym>NAT</acronym> Contributed by Chern Lee NAT and IPFW &os;'s built-in NAT daemon, &man.natd.8;, works in conjunction with IPFW to provide network address translation. This can be used to provide an Internet Connection Sharing solution so that several internal computers can connect to the Internet using a single IP address. To do this, the &os; machine connected to the Internet must act as a gateway. This system must have two NICs, where one is connected to the Internet and the other is connected to the internal LAN. Each machine connected to the LAN should be assigned an IP address in the private network space, as defined by RFC 1918, and have the default gateway set to the &man.natd.8; system's internal IP address. Some additional configuration is needed in order to activate the NAT function of IPFW. If the system has a custom kernel, the kernel configuration file needs to include options IPDIVERT along with the other IPFIREWALL options described in . To enable NAT support at boot time, the following must be in /etc/rc.conf: gateway_enable="YES" # enables the gateway natd_enable="YES" # enables NAT natd_interface="rl0" # specify interface name of NIC attached to Internet natd_flags="-dynamic -m" # -m = preserve port numbers; additional options are listed in &man.natd.8; It is also possible to specify a configuration file which contains the options to pass to &man.natd.8;: natd_flags="-f /etc/natd.conf" The specified file must contain a list of configuration options, one per line. For example: redirect_port tcp 192.168.0.2:6667 6667 redirect_port tcp 192.168.0.3:80 80 For more information about this configuration file, consult &man.natd.8;. Next, add the NAT rules to the firewall ruleset. When the ruleset contains stateful rules, the positioning of the NAT rules is critical and the skipto action is used. The skipto action requires a rule number so that it knows which rule to jump to. The following example builds upon the firewall ruleset shown in the previous section. It adds some additional entries and modifies some existing rules in order to configure the firewall for NAT.
It starts by adding some additional variables which represent the rule number to skip to, the keep-state option, and a list of TCP ports which will be used to reduce the number of rules: #!/bin/sh ipfw -q -f flush cmd="ipfw -q add" skip="skipto 500" pif=dc0 ks="keep-state" good_tcpo="22,25,37,53,80,443,110" The inbound NAT rule is inserted after the two rules which allow all traffic on the trusted internal interface and on the loopback interface and before the check-state rule. It is important that the rule number selected for this NAT rule, in this example 100, is higher than the first two rules and lower than the check-state rule: $cmd 005 allow all from any to any via xl0 # exclude LAN traffic $cmd 010 allow all from any to any via lo0 # exclude loopback traffic $cmd 100 divert natd ip from any to any in via $pif # NAT any inbound packets # Allow the packet through if it has an existing entry in the dynamic rules table $cmd 101 check-state The outbound rules are modified to replace the allow action with the $skip variable, indicating that rule processing will continue at rule 500. The seven tcp rules have been replaced by rule 125 as the $good_tcpo variable contains the seven allowed outbound ports. # Authorized outbound packets $cmd 120 $skip udp from any to x.x.x.x 53 out via $pif $ks $cmd 121 $skip udp from any to x.x.x.x 67 out via $pif $ks $cmd 125 $skip tcp from any to any $good_tcpo out via $pif setup $ks $cmd 130 $skip icmp from any to any out via $pif $ks The inbound rules remain the same, except for the very last rule which removes the via $pif in order to catch both inbound and outbound rules. The NAT rule must follow this last outbound rule, must have a higher number than that last rule, and the rule number must be referenced by the skipto action. In this ruleset, rule number 500 diverts all packets which match the outbound rules to &man.natd.8; for NAT processing. The next rule allows any packet which has undergone NAT processing to pass. $cmd 499 deny log all from any to any $cmd 500 divert natd ip from any to any out via $pif # skipto location for outbound stateful rules $cmd 510 allow ip from any to any In this example, rules 100, 101, 125, 500, and 510 control the address translation of the outbound and inbound packets so that the entries in the dynamic state table always register the private LAN IP address. Consider an internal web browser which initializes a new outbound HTTP session over port 80. When the first outbound packet enters the firewall, it does not match rule 100 because it is headed out rather than in. It passes rule 101 because this is the first packet and it has not been posted to the dynamic state table yet. The packet finally matches rule 125 as it is outbound on an allowed port and has a source IP address from the internal LAN. On matching this rule, two actions take place. First, the keep-state action adds an entry to the dynamic state table and the specified action, skipto rule 500, is executed. Next, the packet undergoes NAT and is sent out to the Internet. This packet makes its way to the destination web server, where a response packet is generated and sent back. This new packet enters the top of the ruleset. It matches rule 100 and has its destination IP address mapped back to the original internal address. It then is processed by the check-state rule, is found in the table as an existing session, and is released to the LAN. On the inbound side, the ruleset has to deny bad packets and allow only authorized services. 
A packet which matches an inbound rule is posted to the dynamic state table and the packet is released to the LAN. The packet generated as a response is recognized by the check-state rule as belonging to an existing session. It is then sent to rule 500 to undergo NAT before being released to the outbound interface. Port Redirection The drawback with &man.natd.8; is that the LAN clients are not accessible from the Internet. Clients on the LAN can make outgoing connections to the world but cannot receive incoming ones. This presents a problem if trying to run Internet services on one of the LAN client machines. A simple way around this is to redirect selected Internet ports on the &man.natd.8; machine to a LAN client. For example, an IRC server runs on client A and a web server runs on client B. For this to work properly, connections received on ports 6667 (IRC) and 80 (HTTP) must be redirected to the respective machines. The syntax for -redirect_port is as follows: -redirect_port proto targetIP:targetPORT[-targetPORT] [aliasIP:]aliasPORT[-aliasPORT] [remoteIP[:remotePORT[-remotePORT]]] In the above example, the arguments should be: -redirect_port tcp 192.168.0.2:6667 6667 -redirect_port tcp 192.168.0.3:80 80 This redirects the proper TCP ports to the LAN client machines. Port ranges over individual ports can also be indicated with -redirect_port. For example, tcp 192.168.0.2:2000-3000 2000-3000 would redirect all connections received on ports 2000 to 3000 to ports 2000 to 3000 on client A. These options can be used when directly running &man.natd.8;, placed within the natd_flags="" option in /etc/rc.conf, or passed via a configuration file. For further configuration options, consult &man.natd.8;. Address Redirection address redirection Address redirection is useful if more than one IP address is available. Each LAN client can be assigned its own external IP address by &man.natd.8;, which will then rewrite outgoing packets from the LAN clients with the proper external IP address and redirect all traffic incoming on that particular IP address back to the specific LAN client. This is also known as static NAT. For example, if IP addresses 128.1.1.1, 128.1.1.2, and 128.1.1.3 are available, 128.1.1.1 can be used as the &man.natd.8; machine's external IP address, while 128.1.1.2 and 128.1.1.3 are forwarded back to LAN clients A and B. The syntax is as follows: -redirect_address localIP publicIP localIP The internal IP address of the LAN client. publicIP The external IP address corresponding to the LAN client. In the example, this argument would read: -redirect_address 192.168.0.2 128.1.1.2 -redirect_address 192.168.0.3 128.1.1.3 Like -redirect_port, these arguments are placed within the natd_flags="" option of /etc/rc.conf, or passed via a configuration file. With address redirection, there is no need for port redirection since all data received on a particular IP address is redirected. The external IP addresses on the &man.natd.8; machine must be active and aliased to the external interface. Refer to &man.rc.conf.5; for details. The <application>IPFW</application> Command ipfw ipfw can be used to make manual, single rule additions or deletions to the active firewall while it is running. The problem with using this method is that all the changes are lost when the system reboots. It is recommended to instead write all the rules in a file and to use that file to load the rules at boot time and to replace the currently running firewall rules whenever that file changes.
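That said, a single rule can still be added and removed by hand for a quick test. For example, the following hypothetical commands temporarily block one host and then delete that rule again; the rule number 2000 and the address are placeholders, and the number only needs to avoid colliding with existing rules:
&prompt.root; ipfw add 2000 deny tcp from 203.0.113.5 to any
&prompt.root; ipfw delete 2000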
ipfw is a useful way to display the running firewall rules to the console screen. The IPFW accounting facility dynamically creates a counter for each rule that counts each packet that matches the rule. During the process of testing a rule, listing the rule with its counter is one way to determine if the rule is functioning as expected. To list all the running rules in sequence: &prompt.root; ipfw list To list all the running rules with a time stamp of the last time the rule was matched: &prompt.root; ipfw -t list The next example lists accounting information and the packet count for matched rules along with the rules themselves. The first column is the rule number, followed by the number of matched packets and bytes, followed by the rule itself. &prompt.root; ipfw -a list To list dynamic rules in addition to static rules: &prompt.root; ipfw -d list To also show the expired dynamic rules: &prompt.root; ipfw -d -e list To zero the counters: &prompt.root; ipfw zero To zero the counters for just the rule with number NUM: &prompt.root; ipfw zero NUM Logging Firewall Messages IPFW logging Even with the logging facility enabled, IPFW will not generate any rule logging on its own. The firewall administrator decides which rules in the ruleset will be logged, and adds the log keyword to those rules. Normally only deny rules are logged. It is customary to duplicate the ipfw default deny everything rule with the log keyword included as the last rule in the ruleset. This way, it is possible to see all the packets that did not match any of the rules in the ruleset. Logging is a two-edged sword. If one is not careful, an overabundance of log data or a DoS attack can fill the disk with log files. Log messages are not only written to syslogd, but also are displayed on the root console screen and soon become annoying. The IPFIREWALL_VERBOSE_LIMIT=5 kernel option limits the number of consecutive messages sent to &man.syslogd.8;, concerning the packet matching of a given rule. When this option is enabled in the kernel, the number of consecutive messages concerning a particular rule is capped at the number specified. There is nothing to be gained from 200 identical log messages. With this option set to five, five consecutive messages concerning a particular rule would be logged to syslogd and the remaining identical consecutive messages would be counted and posted to syslogd with a phrase like the following: last message repeated 45 times All logged packet messages are written by default to /var/log/security, which is defined in /etc/syslog.conf. Building a Rule Script Most experienced IPFW users create a file containing the rules and code them in a manner compatible with running them as a script. The major benefit of doing this is the firewall rules can be refreshed en masse without needing to reboot the system to activate them. This method is convenient for testing new rules as the procedure can be executed as many times as needed. Being a script, symbolic substitution can be used for frequently used values to be substituted into multiple rules. This example script is compatible with the syntax used by the &man.sh.1;, &man.csh.1;, and &man.tcsh.1; shells. Symbolic substitution fields are prefixed with a dollar sign ($) when they are referenced in a rule. The fields themselves are defined without the $ prefix. The value to populate the symbolic field must be enclosed in double quotes ("").
Start the rules file like this: ############### start of example ipfw rules script ############# # ipfw -q -f flush # Delete all rules # Set defaults oif="tun0" # out interface odns="192.0.2.11" # ISP's DNS server IP address cmd="ipfw -q add " # build rule prefix ks="keep-state" # just too lazy to key this each time $cmd 00500 check-state $cmd 00502 deny all from any to any frag $cmd 00501 deny tcp from any to any established $cmd 00600 allow tcp from any to any 80 out via $oif setup $ks $cmd 00610 allow tcp from any to $odns 53 out via $oif setup $ks $cmd 00611 allow udp from any to $odns 53 out via $oif $ks ################### End of example ipfw rules script ############ The rules are not important as the focus of this example is how the symbolic substitution fields are populated. If the above example was in /etc/ipfw.rules, the rules could be reloaded by the following command: &prompt.root; sh /etc/ipfw.rules /etc/ipfw.rules can be located anywhere and the file can have any name. The same thing could be accomplished by running these commands by hand: &prompt.root; ipfw -q -f flush &prompt.root; ipfw -q add check-state &prompt.root; ipfw -q add deny all from any to any frag &prompt.root; ipfw -q add deny tcp from any to any established &prompt.root; ipfw -q add allow tcp from any to any 80 out via tun0 setup keep-state &prompt.root; ipfw -q add allow tcp from any to 192.0.2.11 53 out via tun0 setup keep-state &prompt.root; ipfw -q add 00611 allow udp from any to 192.0.2.11 53 out via tun0 keep-state IPFILTER (IPF) firewall IPFILTER IPFILTER, also known as IPF, is a cross-platform, open source firewall which has been ported to several operating systems, including &os;, NetBSD, OpenBSD, and &solaris;. IPFILTER is a kernel-side firewall and NAT mechanism that can be controlled and monitored by userland programs. Firewall rules can be set or deleted using ipf, NAT rules can be set or deleted using ipnat, run-time statistics for the kernel parts of IPFILTER can be printed using ipfstat, and ipmon can be used to log IPFILTER actions to the system log files. IPF was originally written using a rule processing logic of the last matching rule wins and only used stateless rules. Since then, IPF has been enhanced to include the quick and keep state options. For a detailed explanation of the legacy rules processing method, refer to http://coombs.anu.edu.au/~avalon/ip-filter.html. The IPF FAQ is at http://www.phildev.net/ipf/index.html. A searchable archive of the IPFilter mailing list is available at http://marc.info/?l=ipfilter. This section of the Handbook focuses on IPF as it pertains to FreeBSD. It provides examples of rules that contain the quick and keep state options. Enabling <application>IPF</application> IPFILTER enabling IPF is included in the basic &os; install as a kernel loadable module, meaning that a custom kernel is not needed in order to enable IPF. kernel options IPFILTER kernel options IPFILTER_LOG kernel options IPFILTER_DEFAULT_BLOCK IPFILTER kernel options For users who prefer to statically compile IPF support into a custom kernel, refer to the instructions in . 
The following kernel options are available: options IPFILTER options IPFILTER_LOG options IPFILTER_LOOKUP options IPFILTER_DEFAULT_BLOCK where options IPFILTER enables support for IPFILTER, options IPFILTER_LOG enables IPF logging using the ipl packet logging pseudo-device for every rule that has the log keyword, IPFILTER_LOOKUP enables IP pools in order to speed up IP lookups, and options IPFILTER_DEFAULT_BLOCK changes the default behavior so that any packet not matching a firewall pass rule gets blocked. To configure the system to enable IPF at boot time, add the following entries to /etc/rc.conf. These entries will also enable logging and set the default policy to pass all. To change the default policy to block all without compiling a custom kernel, remember to add a block all rule at the end of the ruleset. ipfilter_enable="YES" # Start ipf firewall ipfilter_rules="/etc/ipf.rules" # loads rules definition text file ipmon_enable="YES" # Start IP monitor log ipmon_flags="-Ds" # D = start as daemon # s = log to syslog # v = log tcp window, ack, seq # n = map IP & port to names If NAT functionality is needed, also add these lines: gateway_enable="YES" # Enable as LAN gateway ipnat_enable="YES" # Start ipnat function ipnat_rules="/etc/ipnat.rules" # rules definition file for ipnat Then, to start IPF now: &prompt.root; service ipfilter start To load the firewall rules, specify the name of the ruleset file using ipf. The following command can be used to replace the currently running firewall rules: &prompt.root; ipf -Fa -f /etc/ipf.rules where -Fa flushes all the internal rules tables and -f specifies the file containing the rules to load. This provides the ability to make changes to a custom ruleset and update the running firewall with a fresh copy of the rules without having to reboot the system. This method is convenient for testing new rules as the procedure can be executed as many times as needed. Refer to &man.ipf.8; for details on the other flags available with this command. <application>IPF</application> Rule Syntax IPFILTER rule syntax This section describes the IPF rule syntax used to create stateful rules. When creating rules, keep in mind that unless the quick keyword appears in a rule, every rule is read in order, with the last matching rule being the one that is applied. This means that even if the first rule to match a packet is a pass, if there is a later matching rule that is a block, the packet will be dropped. Sample rulesets can be found in /usr/share/examples/ipfilter. When creating rules, a # character is used to mark the start of a comment and may appear at the end of a rule, to explain that rule's function, or on its own line. Any blank lines are ignored. The keywords which are used in rules must be written in a specific order, from left to right. Some keywords are mandatory while others are optional. Some keywords have sub-options which may be keywords themselves and also include more sub-options. The keyword order is as follows, where the words shown in uppercase represent a variable and the words shown in lowercase must precede the variable that follows it: ACTION DIRECTION OPTIONS proto PROTO_TYPE from SRC_ADDR SRC_PORT to DST_ADDR DST_PORT TCP_FLAG|ICMP_TYPE keep state STATE This section describes each of these keywords and their options. It is not an exhaustive list of every possible option. Refer to &man.ipf.5; for a complete description of the rule syntax that can be used when creating IPF rules and examples for using each keyword.
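As a point of reference while reading the keyword descriptions that follow, one possible rule that fills in most of these positions looks like this; the interface name dc0 and the port number are only placeholders:
pass in quick on dc0 proto tcp from any to any port = 22 flags S keep state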
ACTION The action keyword indicates what to do with the packet if it matches that rule. Every rule must have an action. The following actions are recognized: block: drops the packet. pass: allows the packet. log: generates a log record. count: counts the number of packets and bytes which can provide an indication of how often a rule is used. auth: queues the packet for further processing by another program. call: provides access to functions built into IPF that allow more complex actions. decapsulate: removes any headers in order to process the contents of the packet. DIRECTION Next, each rule must explicitly state the direction of traffic using one of these keywords: in: the rule is applied against an inbound packet. out: the rule is applied against an outbound packet. all: the rule applies to either direction. If the system has multiple interfaces, the interface can be specified along with the direction. An example would be in on fxp0. OPTIONS Options are optional. However, if multiple options are specified, they must be used in the order shown here. log: when performing the specified ACTION, the contents of the packet's headers will be written to the &man.ipl.4; packet log pseudo-device. quick: if a packet matches this rule, the ACTION specified by the rule occurs and no further processing of any following rules will occur for this packet. on: must be followed by the interface name as displayed by &man.ifconfig.8;. The rule will only match if the packet is going through the specified interface in the specified direction. When using the log keyword, the following qualifiers may be used in this order: body: indicates that the first 128 bytes of the packet contents will be logged after the headers. first: if the log keyword is being used in conjunction with a keep state option, this option is recommended so that only the triggering packet is logged and not every packet which matches the stateful connection. Additional options are available to specify error return messages. Refer to &man.ipf.5; for more details. PROTO_TYPE The protocol type is optional. However, it is mandatory if the rule needs to specify a SRC_PORT or a DST_PORT as it defines the type of protocol. When specifying the type of protocol, use the proto keyword followed by either a protocol number or name from /etc/protocols. Example protocol names include tcp, udp, or icmp. If PROTO_TYPE is specified but no SRC_PORT or DST_PORT is specified, all port numbers for that protocol will match that rule. SRC_ADDR The from keyword is mandatory and is followed by a keyword which represents the source of the packet. The source can be a hostname, an IP address followed by the CIDR mask, an address pool, or the keyword all. Refer to &man.ipf.5; for examples. There is no way to match ranges of IP addresses which do not express themselves easily using the dotted numeric form / mask-length notation. The net-mgmt/ipcalc package or port may be used to ease the calculation of the CIDR mask. Additional information is available at the utility's web page: http://jodies.de/ipcalc. SRC_PORT The port number of the source is optional. However, if it is used, it requires PROTO_TYPE to be first defined in the rule. The port number must also be preceded by the proto keyword. A number of different comparison operators are supported: = (equal to), != (not equal to), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to). 
To specify port ranges, place the two port numbers between <> (less than and greater than), >< (greater than and less than), or : (greater than or equal to and less than or equal to). DST_ADDR The to keyword is mandatory and is followed by a keyword which represents the destination of the packet. Similar to SRC_ADDR, it can be a hostname, an IP address followed by the CIDR mask, an address pool, or the keyword all. DST_PORT Similar to SRC_PORT, the port number of the destination is optional. However, if it is used, it requires PROTO_TYPE to be first defined in the rule. The port number must also be preceded by the proto keyword. TCP_FLAG|ICMP_TYPE If tcp is specified as the PROTO_TYPE, flags can be specified as letters, where each letter represents one of the possible TCP flags used to determine the state of a connection. Possible values are: S (SYN), A (ACK), P (PSH), F (FIN), U (URG), R (RST), C (CWN), and E (ECN). If icmp is specified as the PROTO_TYPE, the ICMP type to match can be specified. Refer to &man.ipf.5; for the allowable types. STATE If a pass rule contains keep state, IPF will add an entry to its dynamic state table and allow subsequent packets that match the connection. IPF can track state for TCP, UDP, and ICMP sessions. Any packet that IPF can be certain is part of an active session, even if it is a different protocol, will be allowed. In IPF, packets destined to go out through the interface connected to the public Internet are first checked against the dynamic state table. If the packet matches the next expected packet comprising an active session conversation, it exits the firewall and the state of the session conversation flow is updated in the dynamic state table. Packets that do not belong to an already active session are checked against the outbound ruleset. Packets coming in from the interface connected to the public Internet are first checked against the dynamic state table. If the packet matches the next expected packet comprising an active session, it exits the firewall and the state of the session conversation flow is updated in the dynamic state table. Packets that do not belong to an already active session are checked against the inbound ruleset. Several keywords can be added after keep state. If used, these keywords set various options that control stateful filtering, such as setting connection limits or connection age. Refer to &man.ipf.5; for the list of available options and their descriptions. Example Ruleset This section demonstrates how to create an example ruleset which only allows services matching pass rules and blocks all others. &os; uses the loopback interface (lo0) and the IP address 127.0.0.1 for internal communication. The firewall ruleset must contain rules to allow free movement of these internally used packets: # no restrictions on loopback interface pass in quick on lo0 all pass out quick on lo0 all The public interface connected to the Internet is used to authorize and control access of all outbound and inbound connections. If one or more interfaces are cabled to private networks, those internal interfaces may require rules to allow packets originating from the LAN to flow between the internal networks or to the interface attached to the Internet. The ruleset should be organized into three major sections: any trusted internal interfaces, outbound connections through the public interface, and inbound connections through the public interface.
These two rules allow all traffic to pass through a trusted LAN interface named xl0: # no restrictions on inside LAN interface for private network pass out quick on xl0 all pass in quick on xl0 all The rules for the public interface's outbound and inbound sections should have the most frequently matched rules placed before less commonly matched rules, with the last rule in the section blocking and logging all packets for that interface and direction. This set of rules defines the outbound section of the public interface named dc0. These rules keep state and identify the specific services on the public Internet that internal systems are authorized to access. All the rules use quick and specify the appropriate port numbers and, where applicable, destination addresses. # interface facing Internet (outbound) # Matches session start requests originating from or behind the # firewall, destined for the Internet. # Allow outbound access to public DNS servers. # Replace x.x.x.x with the address listed in /etc/resolv.conf. # Repeat for each DNS server. pass out quick on dc0 proto tcp from any to x.x.x.x port = 53 flags S keep state pass out quick on dc0 proto udp from any to x.x.x.x port = 53 keep state # Allow access to ISP's specified DHCP server for cable or DSL networks. # Use the first rule, then check log for the IP address of DHCP server. # Then, uncomment the second rule, replace z.z.z.z with the IP address, # and comment out the first rule pass out log quick on dc0 proto udp from any to any port = 67 keep state #pass out quick on dc0 proto udp from any to z.z.z.z port = 67 keep state # Allow HTTP and HTTPS pass out quick on dc0 proto tcp from any to any port = 80 flags S keep state pass out quick on dc0 proto tcp from any to any port = 443 flags S keep state # Allow email pass out quick on dc0 proto tcp from any to any port = 110 flags S keep state pass out quick on dc0 proto tcp from any to any port = 25 flags S keep state # Allow NTP pass out quick on dc0 proto tcp from any to any port = 37 flags S keep state # Allow FTP pass out quick on dc0 proto tcp from any to any port = 21 flags S keep state # Allow SSH pass out quick on dc0 proto tcp from any to any port = 22 flags S keep state # Allow ping pass out quick on dc0 proto icmp from any to any icmp-type 8 keep state # Block and log everything else block out log first quick on dc0 all This example of the rules in the inbound section of the public interface blocks all undesirable packets first. This reduces the number of packets that are logged by the last rule.
# interface facing Internet (inbound)
# Block all inbound traffic from non-routable or reserved address spaces
block in quick on dc0 from 192.168.0.0/16 to any  #RFC 1918 private IP
block in quick on dc0 from 172.16.0.0/12 to any   #RFC 1918 private IP
block in quick on dc0 from 10.0.0.0/8 to any      #RFC 1918 private IP
block in quick on dc0 from 127.0.0.0/8 to any     #loopback
block in quick on dc0 from 0.0.0.0/8 to any       #this network (unroutable)
block in quick on dc0 from 169.254.0.0/16 to any  #DHCP auto-config
block in quick on dc0 from 192.0.2.0/24 to any    #reserved for docs
block in quick on dc0 from 204.152.64.0/23 to any #Sun cluster interconnect
block in quick on dc0 from 224.0.0.0/3 to any     #Class D & E multicast

# Block fragments and too short tcp packets
block in quick on dc0 all with frags
block in quick on dc0 proto tcp all with short

# Block source routed packets
block in quick on dc0 all with opt lsrr
block in quick on dc0 all with opt ssrr

# Block OS fingerprint attempts and log first occurrence
block in log first quick on dc0 proto tcp from any to any flags FUP

# Block anything with special options
block in quick on dc0 all with ipopts

# Block public pings and ident
block in quick on dc0 proto icmp all icmp-type 8
block in quick on dc0 proto tcp from any to any port = 113

# Block incoming Netbios services
block in log first quick on dc0 proto tcp/udp from any to any port = 137
block in log first quick on dc0 proto tcp/udp from any to any port = 138
block in log first quick on dc0 proto tcp/udp from any to any port = 139
block in log first quick on dc0 proto tcp/udp from any to any port = 81
Any time there are logged messages on a rule with the log first option, run ipfstat -hio to evaluate how many times the rule has been matched. A large number of matches may indicate that the system is under attack. The rest of the rules in the inbound section define which connections are allowed to be initiated from the Internet. The last rule denies all connections which were not explicitly allowed by previous rules in this section.
# Allow traffic in from ISP's DHCP server. Replace z.z.z.z with
# the same IP address used in the outbound section.
pass in quick on dc0 proto udp from z.z.z.z to any port = 68 keep state

# Allow public connections to specified internal web server
pass in quick on dc0 proto tcp from any to x.x.x.x port = 80 flags S keep state

# Block and log only first occurrence of all remaining traffic.
block in log first quick on dc0 all
Configuring <acronym>NAT</acronym> NAT IP masquerading NAT network address translation NAT ipnat To enable NAT, add these statements to /etc/rc.conf and specify the name of the file containing the NAT rules:
gateway_enable="YES"
ipnat_enable="YES"
ipnat_rules="/etc/ipnat.rules"
NAT rules are flexible and can accomplish many different things to fit the needs of both commercial and home users. The rule syntax presented here has been simplified to demonstrate common usage. For a complete rule syntax description, refer to &man.ipnat.5;. The basic syntax for a NAT rule is as follows, where map starts the rule and IF should be replaced with the name of the external interface:
map IF LAN_IP_RANGE -> PUBLIC_ADDRESS
The LAN_IP_RANGE is the range of IP addresses used by internal clients. Usually, it is a private address range such as 192.168.1.0/24. The PUBLIC_ADDRESS can either be the static external IP address or the keyword 0/32 which represents the IP address assigned to IF.
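As a minimal sketch of this syntax, assuming the external interface is named dc0 and the internal clients use the 192.168.1.0/24 range, a single rule translating all LAN traffic to whatever address is currently assigned to dc0 could look like this:
map dc0 192.168.1.0/24 -> 0/32
The interface name and address range here are placeholders and should be replaced with the values used on the actual network.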
In IPF, when a packet arrives at the firewall from the LAN with a public destination, it first passes through the outbound rules of the firewall ruleset. Then, the packet is passed to the NAT ruleset which is read from the top down, where the first matching rule wins. IPF tests each NAT rule against the packet's interface name and source IP address. When a packet's interface name matches a NAT rule, the packet's source IP address in the private LAN is checked to see if it falls within the IP address range specified in LAN_IP_RANGE. On a match, the packet has its source IP address rewritten with the public IP address specified by PUBLIC_ADDRESS. IPF posts an entry in its internal NAT table so that when the packet returns from the Internet, it can be mapped back to its original private IP address before being passed to the firewall rules for further processing. For networks that have large numbers of internal systems or multiple subnets, the process of funneling every private IP address into a single public IP address becomes a resource problem. Two methods are available to relieve this issue. The first method is to assign a range of ports to use as source ports. By adding the portmap keyword, NAT can be directed to only use source ports in the specified range:
map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp 20000:60000
Alternately, use the auto keyword which tells NAT to determine the ports that are available for use:
map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp auto
The second method is to use a pool of public addresses. This is useful when there are too many LAN addresses to fit into a single public address and a block of public IP addresses is available. These public addresses can be used as a pool from which NAT selects an IP address as a packet's address is mapped on its way out. The range of public IP addresses can be specified using a netmask or CIDR notation. These two rules are equivalent:
map dc0 192.168.1.0/24 -> 204.134.75.0/255.255.255.0
map dc0 192.168.1.0/24 -> 204.134.75.0/24
A common practice is to have a publicly accessible web server or mail server segregated to an internal network segment. The traffic from these servers still has to undergo NAT, but port redirection is needed to direct inbound traffic to the correct server. For example, to map a web server using the internal address 10.0.10.25 to its public IP address of 20.20.20.5, use this rule:
rdr dc0 20.20.20.5/32 port 80 -> 10.0.10.25 port 80
If it is the only web server, this rule would also work as it redirects all external HTTP requests to 10.0.10.25:
rdr dc0 0.0.0.0/0 port 80 -> 10.0.10.25 port 80
IPF has a built-in FTP proxy which can be used with NAT. It monitors all outbound traffic for active or passive FTP connection requests and dynamically creates temporary filter rules containing the port number used by the FTP data channel. This eliminates the need to open large ranges of high order ports for FTP connections. In this example, the first rule calls the proxy for outbound FTP traffic from the internal LAN. The second rule passes the FTP traffic from the firewall to the Internet, and the third rule handles all non-FTP traffic from the internal LAN:
map dc0 10.0.10.0/29 -> 0/32 proxy port 21 ftp/tcp
map dc0 0.0.0.0/0 -> 0/32 proxy port 21 ftp/tcp
map dc0 10.0.10.0/29 -> 0/32
The FTP map rules go before the NAT rule so that when a packet matches an FTP rule, the FTP proxy creates temporary filter rules to let the FTP session packets pass and undergo NAT.
All LAN packets that are not FTP will not match the FTP rules but will undergo NAT if they match the third rule. Without the FTP proxy, the following firewall rules would instead be needed. Note that without the proxy, all ports above 1024 need to be allowed:
# Allow out LAN PC client FTP to public Internet
# Active and passive modes
pass out quick on rl0 proto tcp from any to any port = 21 flags S keep state

# Allow out passive mode data channel high order port numbers
pass out quick on rl0 proto tcp from any to any port > 1024 flags S keep state

# Active mode let data channel in from FTP server
pass in quick on rl0 proto tcp from any to any port = 20 flags S keep state
Whenever the file containing the NAT rules is edited, run ipnat with -CF to delete the current NAT rules and flush the contents of the dynamic translation table. Include -f and specify the name of the NAT ruleset to load:
&prompt.root; ipnat -CF -f /etc/ipnat.rules
To display the NAT statistics:
&prompt.root; ipnat -s
To list the NAT table's current mappings:
&prompt.root; ipnat -l
To turn verbose mode on and display information relating to rule processing and active rules and table entries:
&prompt.root; ipnat -v
Viewing <application>IPF</application> Statistics ipfstat IPFILTER statistics IPF includes &man.ipfstat.8; which can be used to retrieve and display statistics which are gathered as packets match rules as they go through the firewall. Statistics are accumulated since the firewall was last started or since the last time they were reset to zero using ipf -Z. The default ipfstat output looks like this:
input packets: blocked 99286 passed 1255609 nomatch 14686 counted 0
output packets: blocked 4200 passed 1284345 nomatch 14687 counted 0
input packets logged: blocked 99286 passed 0
output packets logged: blocked 0 passed 0
packets logged: input 0 output 0
log failures: input 3898 output 0
fragment state(in): kept 0 lost 0
fragment state(out): kept 0 lost 0
packet state(in): kept 169364 lost 0
packet state(out): kept 431395 lost 0
ICMP replies: 0 TCP RSTs sent: 0
Result cache hits(in): 1215208 (out): 1098963
IN Pullups succeeded: 2 failed: 0
OUT Pullups succeeded: 0 failed: 0
Fastroute successes: 0 failures: 0
TCP cksum fails(in): 0 (out): 0
Packet log flags set: (0)
Several options are available. When supplied with either -i for inbound or -o for outbound, the command will retrieve and display the appropriate list of filter rules currently installed and in use by the kernel. To also see the rule numbers, include -n. For example, ipfstat -on displays the outbound rules table with rule numbers:
@1 pass out on xl0 from any to any
@2 block out on dc0 from any to any
@3 pass out quick on dc0 proto tcp/udp from any to any keep state
Include -h to prefix each rule with a count of how many times the rule was matched. For example, ipfstat -oh displays the outbound internal rules table, prefixing each rule with its usage count:
2451423 pass out on xl0 from any to any
354727 block out on dc0 from any to any
430918 pass out quick on dc0 proto tcp/udp from any to any keep state
To display the state table in a format similar to &man.top.1;, use ipfstat -t. When the firewall is under attack, this option provides the ability to identify and see the attacking packets. The optional sub-flags give the ability to select the destination or source IP, port, or protocol to be monitored in real time. Refer to &man.ipfstat.8; for details.
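These flags can also be combined. For example, a quick way to review the inbound ruleset with rule numbers and per-rule match counts might be a command like the following; which flags are worth combining depends on what is being investigated:
&prompt.root; ipfstat -hin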
<application>IPF</application> Logging ipmon IPFILTER logging IPF provides ipmon, which can be used to write the firewall's logging information in a human-readable format. It requires that options IPFILTER_LOG be first added to a custom kernel using the instructions in . This command is typically run in daemon mode in order to provide a continuous system log file so that logging of past events may be reviewed. Since &os; has a built-in &man.syslogd.8; facility to automatically rotate system logs, the default rc.conf ipmon_flags statement uses -Ds:
ipmon_flags="-Ds" # D = start as daemon
                  # s = log to syslog
                  # v = log tcp window, ack, seq
                  # n = map IP & port to names
Logging provides the ability to review, after the fact, information such as which packets were dropped, what addresses they came from, and where they were going. This information is useful in tracking down attackers. Once the logging facility is enabled in rc.conf and started with service ipmon start, IPF will only log the rules which contain the log keyword. The firewall administrator decides which rules in the ruleset should be logged and normally only deny rules are logged. It is customary to include the log keyword in the last rule in the ruleset. This makes it possible to see all the packets that did not match any of the rules in the ruleset. By default, ipmon -Ds mode uses local0 as the logging facility. The following logging levels can be used to further segregate the logged data:
LOG_INFO - packets logged using the "log" keyword as the action rather than pass or block.
LOG_NOTICE - packets logged which are also passed.
LOG_WARNING - packets logged which are also blocked.
LOG_ERR - packets which have been logged and which can be considered short due to an incomplete header.
In order to set up IPF to log all data to /var/log/ipfilter.log, first create the empty file:
&prompt.root; touch /var/log/ipfilter.log
Then, to write all logged messages to the specified file, add the following statement to /etc/syslog.conf:
local0.* /var/log/ipfilter.log
To activate the changes and instruct &man.syslogd.8; to read the modified /etc/syslog.conf, run service syslogd reload. Do not forget to edit /etc/newsyslog.conf to rotate the new log file. Messages generated by ipmon consist of data fields separated by white space. Fields common to all messages are: The date of packet receipt. The time of packet receipt. This is in the form HH:MM:SS.F, for hours, minutes, seconds, and fractions of a second. The name of the interface that processed the packet. The group and rule number of the rule in the format @0:17. The action: p for passed, b for blocked, S for a short packet, n for a packet that did not match any rules, and L for a log rule. The addresses written as three fields: the source address and port separated by a comma, the -> symbol, and the destination address and port. For example: 209.53.17.22,80 -> 198.73.220.17,1722. PR followed by the protocol name or number: for example, PR tcp. len followed by the header length and total length of the packet: for example, len 20 40. If the packet is a TCP packet, there will be an additional field starting with a hyphen followed by letters corresponding to any flags that were set. Refer to &man.ipf.5; for a list of letters and their flags. If the packet is an ICMP packet, there will be two fields at the end: the first always being icmp and the next being the ICMP message and sub-message type, separated by a slash. For example: icmp 3/3 for a port unreachable message.
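Putting these fields together, a logged TCP packet might look something like the following line. The timestamp, interface, rule number, and addresses here are invented for illustration, and the exact layout can differ slightly between versions:
12/07/2014 15:42:10.312847 dc0 @0:17 b 198.51.100.23,49152 -> 203.0.113.7,22 PR tcp len 20 60 -S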
Index: head/en_US.ISO8859-1/books/handbook/l10n/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/l10n/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/l10n/chapter.xml (revision 46049) @@ -1,1039 +1,1040 @@ Localization - <acronym>i18n</acronym>/<acronym>L10n</acronym> Usage and Setup AndreyChernovContributed by Michael C.WuRewritten by Synopsis &os; is a distributed project with users and contributors located all over the world. As such, &os; supports localization into many languages, allowing users to view, input, or process data in non-English languages. One can choose from most of the major languages, including, but not limited to: Chinese, German, Japanese, Korean, French, Russian, and Vietnamese. internationalization localization localization The term internationalization has been shortened to i18n, which represents the number of letters between the first and the last letters of internationalization. L10n uses the same naming scheme, but from localization. The i18n/L10n methods, protocols, and applications allow users to use languages of their choice. This chapter discusses the internationalization and localization features of &os;. After reading this chapter, you will know: How locale names are constructed. How to set the locale for a login shell. How to configure the console for non-English languages. How to configure Xorg for different languages. How to find i18n-compliant applications. Where to find more information for configuring specific languages. Before reading this chapter, you should: Know how to install additional third-party applications. Using Localization locale Localization settings are based on three components: the language code, country code, and encoding. Locale names are constructed from these parts as follows: LanguageCode_CountryCode.Encoding language codes country codes The LanguageCode and CountryCode are used to determine the country and the specific language variation. provides some examples of LanguageCode_CountryCode: Common Language and Country Codes LanguageCode_Country Code Description en_US English, United States ru_RU Russian, Russia zh_TW Traditional Chinese, Taiwan
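As a concrete illustration of this construction, here are a few complete locale names built by combining a language code and country code with an encoding; the encodings shown are common choices rather than the only valid ones:
en_US.UTF-8
ru_RU.KOI8-R
zh_TW.Big5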
A complete listing of available locales can be found by typing: &prompt.user; locale -a | more To determine the current locale setting: &prompt.user; locale encodings ASCII Language specific character sets, such as ISO8859-1, ISO8859-15, KOI8-R, and CP437, are described in &man.multibyte.3;. The active list of character sets can be found at the IANA Registry. Some languages, such as Chinese or Japanese, cannot be represented using ASCII characters and require an extended language encoding using either wide or multibyte characters. Examples of wide or multibyte encodings include EUC and Big5. Older applications may mistake these encodings for control characters while newer applications usually recognize these characters. Depending on the implementation, users may be required to compile an application with wide or multibyte character support, or to configure it correctly. &os; uses Xorg-compatible locale encodings. The rest of this section describes the various methods for configuring the locale on a &os; system. The next section will discuss the considerations for finding and compiling applications with i18n support. Setting Locale for Login Shell Locale settings are configured either in a user's ~/.login_conf or in the startup file of the user's shell: ~/.profile, ~/.bashrc, or ~/.cshrc. Two environment variables should be set: LANG, which sets the locale POSIX MIME MM_CHARSET, which sets the MIME character set used by applications In addition to the user's shell configuration, these variables should also be set for specific application configuration and Xorg configuration. locale login class Two methods are available for making the needed variable assignments: the login class method, which is the recommended method, and the startup file method. The next two sections demonstrate how to use both methods. Login Classes Method This first method is the recommended method as it assigns the required environment variables for locale name and MIME character sets for every possible shell. This setup can either be performed by each user or it can be configured for all users by the superuser. This minimal example sets both variables for Latin-1 encoding in the .login_conf of an individual user's home directory: me:\ :charset=ISO-8859-1:\ :lang=de_DE.ISO8859-1: Traditional Chinese BIG-5 encoding Here is an example of a user's ~/.login_conf that sets the variables for Traditional Chinese in BIG-5 encoding. More variables are needed because some applications do not correctly respect locale variables for Chinese, Japanese, and Korean: #Users who do not wish to use monetary units or time formats #of Taiwan can manually change each variable me:\ :lang=zh_TW.Big5:\ :setenv=LC_ALL=zh_TW.Big5:\ :setenv=LC_COLLATE=zh_TW.Big5:\ :setenv=LC_CTYPE=zh_TW.Big5:\ :setenv=LC_MESSAGES=zh_TW.Big5:\ :setenv=LC_MONETARY=zh_TW.Big5:\ :setenv=LC_NUMERIC=zh_TW.Big5:\ :setenv=LC_TIME=zh_TW.Big5:\ :charset=big5:\ :xmodifiers="@im=gcin": #Set gcin as the XIM Input Server Alternately, the superuser can configure all users of the system for localization. The following variables in /etc/login.conf are used to set the locale and MIME character set: language_name|Account Type Description:\ :charset=MIME_charset:\ :lang=locale_name:\ :tc=default: So, the previous Latin-1 example would look like this: german|German Users Accounts:\ :charset=ISO-8859-1:\ :lang=de_DE.ISO8859-1:\ :tc=default: See &man.login.conf.5; for more details about these variables. 
Whenever /etc/login.conf is edited, remember to execute the following command to update the capability database:
&prompt.root; cap_mkdb /etc/login.conf
Utilities Which Change Login Classes vipw In addition to manually editing /etc/login.conf, several utilities are available for setting the locale for newly created users. When using vipw to add new users, specify the language to set the locale:
user:password:1111:11:language:0:0:User Name:/home/user:/bin/sh
adduser login class When using adduser to add new users, the default language can be pre-configured for all new users or specified for an individual user. If all new users use the same language, set defaultclass = language in /etc/adduser.conf. To override this setting when creating a user, either input the required locale at this prompt:
Enter login class: default []:
or specify the locale to set when invoking adduser:
&prompt.root; adduser -class language
pw If pw is used to add new users, specify the locale as follows:
&prompt.root; pw useradd user_name -L language
Shell Startup File Method This second method is not recommended as each shell that is used requires manual configuration, where each shell has a different configuration file and differing syntax. As an example, to set the German language for the sh shell, these lines could be added to ~/.profile to set the shell for that user only. These lines could also be added to /etc/profile or /usr/share/skel/dot.profile to set that shell for all users:
LANG=de_DE.ISO8859-1; export LANG
MM_CHARSET=ISO-8859-1; export MM_CHARSET
However, the name of the configuration file and the syntax used differs for the csh shell. These are the equivalent settings for ~/.csh.login, /etc/csh.login, or /usr/share/skel/dot.login:
setenv LANG de_DE.ISO8859-1
setenv MM_CHARSET ISO-8859-1
To complicate matters, the syntax needed to configure Xorg in ~/.xinitrc also depends upon the shell. The first example is for the sh shell and the second is for the csh shell:
LANG=de_DE.ISO8859-1; export LANG
setenv LANG de_DE.ISO8859-1
Console Setup Several localized fonts are available for the console. To see a listing of available fonts, type ls /usr/share/syscons/fonts. To configure the console font, specify the font_name, without the .fnt suffix, in /etc/rc.conf:
font8x16=font_name
font8x14=font_name
font8x8=font_name
keymap screenmap The keymap and screenmap can be set by adding the following to /etc/rc.conf:
scrnmap=screenmap_name
keymap=keymap_name
keychange="fkey_number sequence"
To see the list of available screenmaps, type ls /usr/share/syscons/scrnmaps. Do not include the .scm suffix when specifying screenmap_name. A screenmap with a corresponding mapped font is usually needed as a workaround for expanding bit 8 to bit 9 on a VGA adapter's font character matrix so that letters are moved out of the pseudographics area if the screen font uses a bit 8 column. To see the list of available keymaps, type ls /usr/share/syscons/keymaps. When specifying the keymap_name, do not include the .kbd suffix. To test keymaps without rebooting, use &man.kbdmap.1;. The keychange entry is usually needed to program function keys to match the selected terminal type because function key sequences cannot be defined in the keymap. Next, set the correct console terminal type in /etc/ttys for all virtual terminal entries.
summarizes the available terminal types.
Defined Terminal Types for Character Sets
Character Set            Terminal Type
ISO8859-1 or ISO8859-15  cons25l1
ISO8859-2                cons25l2
ISO8859-7                cons25l7
KOI8-R                   cons25r
KOI8-U                   cons25u
CP437 (VGA default)      cons25
US-ASCII                 cons25w
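For instance, on a system configured for the KOI8-R character set, each virtual terminal entry in /etc/ttys would use cons25r in its terminal type field. A sketch of one such entry, assuming the stock getty settings, might look like this:
ttyv1   "/usr/libexec/getty Pc"   cons25r   on  secure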
moused For languages with wide or multibyte characters, install a console for that language from the &os; Ports Collection. The available ports are summarized in . Once installed, refer to the port's pkg-message or man pages for configuration and usage instructions. Available Console From Ports Collection Language Port Location Traditional Chinese (BIG-5) chinese/big5con Chinese/Japanese/Korean chinese/cce Chinese/Japanese/Korean chinese/zhcon Japanese chinese/kon2 Japanese japanese/kon2-14dot Japanese japanese/kon2-16dot
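For example, the Traditional Chinese console listed above could be installed from the Ports Collection in the usual way; any of the other entries in the table can be installed by substituting its port location:
&prompt.root; cd /usr/ports/chinese/big5con
&prompt.root; make install clean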
If moused is enabled in /etc/rc.conf, additional configuration may be required. By default, the mouse cursor of the &man.syscons.4; driver occupies the 0xd0-0xd3 range in the character set. If the language uses this range, move the cursor's range by adding the following line to /etc/rc.conf: mousechar_start=3
Xorg Setup describes how to install and configure Xorg. When configuring Xorg for localization, additional fonts and input methods are available from the &os; Ports Collection. Application specific i18n settings such as fonts and menus can be tuned in ~/.Xresources and should allow users to view their selected language in graphical application menus. X Input Method (XIM) The X Input Method (XIM) protocol is an Xorg standard for inputting non-English characters. summarizes the input method applications which are available in the &os; Ports Collection. Additional Fcitx and Uim applications are also available. Available Input Methods Language Input Method Chinese chinese/gcin Chinese chinese/ibus-chewing Chinese chinese/ibus-pinyin Chinese chinese/oxim Chinese chinese/scim-fcitx Chinese chinese/scim-pinyin Chinese chinese/scim-tables Japanese japanese/ibus-anthy Japanese japanese/ibus-mozc Japanese japanese/ibus-skk Japanese japanese/im-ja Japanese japanese/kinput2 Japanese japanese/scim-anthy Japanese japanese/scim-canna Japanese japanese/scim-honoka Japanese japanese/scim-honoka-plugin-romkan Japanese japanese/scim-honoka-plugin-wnn Japanese japanese/scim-prime Japanese japanese/scim-skk Japanese japanese/scim-tables Japanese japanese/scim-tomoe Japanese japanese/scim-uim Japanese japanese/skkinput Japanese japanese/skkinput3 Japanese japanese/uim-anthy Korean korean/ibus-hangul Korean korean/imhangul Korean korean/nabi Korean korean/scim-hangul Korean korean/scim-tables Vietnamese vietnamese/xvnkb Vietnamese vietnamese/x-unikey
Finding <acronym>i18n</acronym> Applications i18n applications are programmed using i18n kits under libraries. These allow developers to write a simple file and translate displayed menus and texts to each language. The &os; Ports Collection contains many applications with built-in support for wide or multibyte characters for several languages. Such applications include i18n in their names for easy identification. However, they do not always support the language needed. Some applications can be compiled with the specific charset. This is usually done in the port's Makefile or by passing a value to configure. Refer to the i18n documentation in the respective &os; port's source for more information on how to determine the needed configure value or the port's Makefile to determine which compile options to use when building the port. Locale Configuration for Specific Languages This section provides configuration examples for localizing a &os; system for the Russian language. It then provides some additional resources for localizing other languages. Russian Language (KOI8-R Encoding) AndreyChernovOriginally contributed by localization Russian This section shows the specific settings needed to localize a &os; system for the Russian language. Refer to Using Localization for a more complete description of each type of setting. To set this locale for the login shell, add the following lines to each user's ~/.login_conf: me:My Account:\ :charset=KOI8-R:\ :lang=ru_RU.KOI8-R: To configure the console, add the following lines to /etc/rc.conf: keymap="ru.koi8-r" scrnmap="koi8-r2cp866" font8x16="cp866b-8x16" font8x14="cp866-8x14" font8x8="cp866-8x8" mousechar_start=3 For each ttyv entry in /etc/ttys, use cons25r as the terminal type. printers To configure printing, a special output filter is needed to convert from KOI8-R to CP866 since most printers with Russian characters come with hardware code page CP866. &os; includes a default filter for this purpose, /usr/libexec/lpr/ru/koi2alt. To use this filter, add this entry to /etc/printcap: lp|Russian local line printer:\ :sh:of=/usr/libexec/lpr/ru/koi2alt:\ :lp=/dev/lpt0:sd=/var/spool/output/lpd:lf=/var/log/lpd-errs: Refer to &man.printcap.5; for a more detailed explanation. To configure support for Russian filenames in mounted &ms-dos; file systems, include and the locale name when adding an entry to /etc/fstab: /dev/ad0s2 /dos/c msdos rw,-Lru_RU.KOI8-R 0 0 Refer to &man.mount.msdosfs.8; for more details. To configure Russian fonts for &xorg;, install the x11-fonts/xorg-fonts-cyrillic package. Then, check the "Files" section in /etc/X11/xorg.conf. The following line must be added before any other FontPath entries: FontPath "/usr/local/lib/X11/fonts/cyrillic" Additional Cyrillic fonts are available in the Ports Collection. To activate a Russian keyboard, add the following to the "Keyboard" section of /etc/xorg.conf: Option "XkbLayout" "us,ru" Option "XkbOptions" "grp:toggle" Make sure that XkbDisable is commented out in that file. For grp:toggle use Right Alt, for grp:ctrl_shift_toggle use CtrlShift. For grp:caps_toggle use CapsLock. The old CapsLock function is still available in LAT mode only using ShiftCapsLock. grp:caps_toggle does not work in &xorg; for some unknown reason. If the keyboard has &windows; keys, and some non-alphabetical keys are mapped incorrectly, add the following line to /etc/xorg.conf: Option "XkbVariant" ",winkeys" The Russian XKB keyboard may not work with non-localized applications. 
Minimally localized applications should call the XtSetLanguageProc(NULL, NULL, NULL) function early in the program. See http://koi8.pp.ru/xwin.html for more instructions on localizing Xorg applications. For more general information about KOI8-R encoding, refer to http://koi8.pp.ru/. Additional Language-Specific Resources This section lists some additional resources for configuring other locales. localization Traditional Chinese localization German localization Greek localization Japanese localization Korean Traditional Chinese for Taiwan The &os;-Taiwan Project has a Chinese HOWTO for &os; at http://netlab.cse.yzu.edu.tw/~statue/freebsd/zh-tut/. German Language Localization for All ISO 8859-1 Languages A tutorial on using umlauts on &os; is available in German at http://user.cs.tu-berlin.de/~eserte/FreeBSD/doc/umlaute/umlaute.html. Greek Language Localization A complete article on Greek support in &os; is available here, in Greek only, as part of the official &os; Greek documentation. Japanese and Korean Language Localization For Japanese, refer to http://www.jp.FreeBSD.org/, and for Korean, refer to http://www.kr.FreeBSD.org/. Non-English &os; Documentation Some &os; contributors have translated parts of the &os; documentation into other languages. They are available through links on the &os; web site or in /usr/share/doc.
Index: head/en_US.ISO8859-1/books/handbook/mail/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/mail/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/mail/chapter.xml (revision 46049) @@ -1,1913 +1,1914 @@ Electronic Mail BillLloydOriginal work by JimMockRewritten by Synopsis email Electronic Mail, better known as email, is one of the most widely used forms of communication today. This chapter provides a basic introduction to running a mail server on &os;, as well as an introduction to sending and receiving email using &os;. For more complete coverage of this subject, refer to the books listed in . After reading this chapter, you will know: Which software components are involved in sending and receiving electronic mail. Where basic sendmail configuration files are located in &os;. The difference between remote and local mailboxes. How to block spammers from illegally using a mail server as a relay. How to install and configure an alternate Mail Transfer Agent, replacing sendmail. How to troubleshoot common mail server problems. How to set up the system to send mail only. How to use mail with a dialup connection. How to configure SMTP authentication for added security. How to install and use a Mail User Agent, such as mutt, to send and receive email. How to download mail from a remote POP or IMAP server. How to automatically apply filters and rules to incoming email. Before reading this chapter, you should: Properly set up a network connection (). Properly set up the DNS information for a mail host (). Know how to install additional third-party software (). Mail Components POP IMAP DNS mail server daemons Sendmail mail server daemons Postfix mail server daemons qmail mail server daemons Exim email receiving MX record mail host There are five major parts involved in an email exchange: the Mail User Agent (MUA), the Mail Transfer Agent (MTA), a mail host, a remote or local mailbox, and DNS. This section provides an overview of these components. Mail User Agent (MUA) The Mail User Agent (MUA) is an application which is used to compose, send, and receive emails. This application can be a command line program, such as the built-in mail utility or a third-party application from the Ports Collection, such as mutt, alpine, or elm. Dozens of graphical programs are also available in the Ports Collection, including Claws Mail, Evolution, and Thunderbird. Some organizations provide a web mail program which can be accessed through a web browser. More information about installing and using a MUA on &os; can be found in . Mail Transfer Agent (MTA) The Mail Transfer Agent (MTA) is responsible for receiving incoming mail and delivering outgoing mail. &os; ships with Sendmail as the default MTA, but it also supports numerous other mail server daemons, including Exim, Postfix, and qmail. Sendmail configuration is described in . If another MTA is installed using the Ports Collection, refer to its post-installation message for &os;-specific configuration details and the application's website for more general configuration instructions. Mail Host and Mailboxes The mail host is a server that is responsible for delivering and receiving mail for a host or a network. The mail host collects all mail sent to the domain and stores it either in the default mbox or the alternative Maildir format, depending on the configuration. 
Once mail has been stored, it may either be read locally using a MUA or remotely accessed and collected using protocols such as POP or IMAP. If mail is read locally, a POP or IMAP server does not need to be installed. To access mailboxes remotely, a POP or IMAP server is required as these protocols allow users to connect to their mailboxes from remote locations. IMAP offers several advantages over POP. These include the ability to store a copy of messages on a remote server after they are downloaded and concurrent updates. IMAP can be useful over low-speed links as it allows users to fetch the structure of messages without downloading them. It can also perform tasks such as searching on the server in order to minimize data transfer between clients and servers. Several POP and IMAP servers are available in the Ports Collection. These include mail/qpopper, mail/imap-uw, mail/courier-imap, and mail/dovecot2. It should be noted that both POP and IMAP transmit information, including username and password credentials, in clear-text. To secure the transmission of information across these protocols, consider tunneling sessions over &man.ssh.1; () or using SSL (). Domain Name System (DNS) The Domain Name System (DNS) and its daemon named play a large role in the delivery of email. In order to deliver mail from one site to another, the MTA will look up the remote site in DNS to determine which host will receive mail for the destination. This process also occurs when mail is sent from a remote host to the MTA. In addition to mapping hostnames to IP addresses, DNS is responsible for storing information specific to mail delivery, known as Mail eXchanger MX records. The MX record specifies which hosts will receive mail for a particular domain. To view the MX records for a domain, specify the type of record. Refer to &man.host.1; for more details about this command:
&prompt.user; host -t mx FreeBSD.org
FreeBSD.org mail is handled by 10 mx1.FreeBSD.org
Refer to for more information about DNS and its configuration. <application>Sendmail</application> Configuration Files ChristopherShumwayContributed by Sendmail Sendmail is the default MTA installed with &os;. It accepts mail from MUAs and delivers it to the appropriate mail host, as defined by its configuration. Sendmail can also accept network connections and deliver mail to local mailboxes or to another program. The configuration files for Sendmail are located in /etc/mail. This section describes these files in more detail.
/etc/mail/access
/etc/mail/aliases
/etc/mail/local-host-names
/etc/mail/mailer.conf
/etc/mail/mailertable
/etc/mail/sendmail.cf
/etc/mail/virtusertable
/etc/mail/access This access database file defines which hosts or IP addresses have access to the local mail server and what kind of access they have. Hosts listed as OK, which is the default option, are allowed to send mail to this host as long as the mail's final destination is the local machine. Hosts listed as REJECT are rejected for all mail connections. Hosts listed as RELAY are allowed to send mail for any destination using this mail server. Hosts listed as ERROR will have their mail returned with the specified mail error. If a host is listed as SKIP, Sendmail will abort the current search for this entry without accepting or rejecting the mail. Hosts listed as QUARANTINE will have their messages held and will receive the specified text as the reason for the hold.
Examples of using these options for both IPv4 and IPv6 addresses can be found in the &os; sample configuration, /etc/mail/access.sample: # $FreeBSD$ # # Mail relay access control list. Default is to reject mail unless the # destination is local, or listed in /etc/mail/local-host-names # ## Examples (commented out for safety) #From:cyberspammer.com ERROR:"550 We don't accept mail from spammers" #From:okay.cyberspammer.com OK #Connect:sendmail.org RELAY #To:sendmail.org RELAY #Connect:128.32 RELAY #Connect:128.32.2 SKIP #Connect:IPv6:1:2:3:4:5:6:7 RELAY #Connect:suspicious.example.com QUARANTINE:Mail from suspicious host #Connect:[127.0.0.3] OK #Connect:[IPv6:1:2:3:4:5:6:7:8] OK To configure the access database, use the format shown in the sample to make entries in /etc/mail/access, but do not put a comment symbol (#) in front of the entries. Create an entry for each host or network whose access should be configured. Mail senders that match the left side of the table are affected by the action on the right side of the table. Whenever this file is updated, update its database and restart Sendmail: &prompt.root; makemap hash /etc/mail/access < /etc/mail/access &prompt.root; service sendmail restart /etc/mail/aliases This database file contains a list of virtual mailboxes that are expanded to users, files, programs, or other aliases. Here are a few entries to illustrate the file format: root: localuser ftp-bugs: joe,eric,paul bit.bucket: /dev/null procmail: "|/usr/local/bin/procmail" The mailbox name on the left side of the colon is expanded to the target(s) on the right. The first entry expands the root mailbox to the localuser mailbox, which is then looked up in the /etc/mail/aliases database. If no match is found, the message is delivered to localuser. The second entry shows a mail list. Mail to ftp-bugs is expanded to the three local mailboxes joe, eric, and paul. A remote mailbox could be specified as user@example.com. The third entry shows how to write mail to a file, in this case /dev/null. The last entry demonstrates how to send mail to a program, /usr/local/bin/procmail, through a &unix; pipe. Refer to &man.aliases.5; for more information about the format of this file. Whenever this file is updated, run newaliases to update and initialize the aliases database. /etc/mail/sendmail.cf This is the master configuration file for Sendmail. It controls the overall behavior of Sendmail, including everything from rewriting email addresses to printing rejection messages to remote mail servers. Accordingly, this configuration file is quite complex. Fortunately, this file rarely needs to be changed for standard mail servers. The master Sendmail configuration file can be built from &man.m4.1; macros that define the features and behavior of Sendmail. Refer to /usr/src/contrib/sendmail/cf/README for some of the details. Whenever changes to this file are made, Sendmail needs to be restarted for the changes to take effect. /etc/mail/virtusertable This database file maps mail addresses for virtual domains and users to real mailboxes. These mailboxes can be local, remote, aliases defined in /etc/mail/aliases, or files. This allows multiple virtual domains to be hosted on one machine. &os; provides a sample configuration file in /etc/mail/virtusertable.sample to further demonstrate its format. 
The following example demonstrates how to create custom entries using that format:
root@example.com                root
postmaster@example.com          postmaster@noc.example.net
@example.com                    joe
This file is processed in a first match order. When an email address matches the address on the left, it is mapped to the local mailbox listed on the right. The format of the first entry in this example maps a specific email address to a local mailbox, whereas the format of the second entry maps a specific email address to a remote mailbox. Finally, any email address from example.com which has not matched any of the previous entries will match the last mapping and be sent to the local mailbox joe. When creating custom entries, use this format and add them to /etc/mail/virtusertable. Whenever this file is edited, update its database and restart Sendmail:
&prompt.root; makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
&prompt.root; service sendmail restart
/etc/mail/relay-domains In a default &os; installation, Sendmail is configured to only send mail from the host it is running on. For example, if a POP server is available, users will be able to check mail from remote locations but they will not be able to send outgoing emails from outside locations. Typically, a few moments after the attempt, an email will be sent from MAILER-DAEMON with a 5.7 Relaying Denied message. The most straightforward solution is to add the ISP's FQDN to /etc/mail/relay-domains. If multiple addresses are needed, add them one per line:
your.isp.example.com
other.isp.example.net
users-isp.example.org
www.example.org
After creating or editing this file, restart Sendmail with service sendmail restart. Now any mail sent through the system by any host in this list, provided the user has an account on the system, will succeed. This allows users to send mail from the system remotely without opening the system up to relaying SPAM from the Internet. Changing the Mail Transfer Agent AndrewBoothmanWritten by GregoryNeil ShapiroInformation taken from emails written by email change mta &os; comes with Sendmail already installed as the MTA which is in charge of outgoing and incoming mail. However, the system administrator can change the system's MTA. A wide choice of alternative MTAs is available from the mail category of the &os; Ports Collection. Once a new MTA is installed, configure and test the new software before replacing Sendmail. Refer to the documentation of the new MTA for information on how to configure the software. Once the new MTA is working, use the instructions in this section to disable Sendmail and configure &os; to use the replacement MTA. Disable <application>Sendmail</application> If Sendmail's outgoing mail service is disabled, it is important that it is replaced with an alternative mail delivery system. Otherwise, system functions such as &man.periodic.8; will be unable to deliver their results by email. Many parts of the system expect a functional MTA. If applications continue to use Sendmail's binaries to try to send email after they are disabled, mail could go into an inactive Sendmail queue and never be delivered.
In order to completely disable Sendmail, add or edit the following lines in /etc/rc.conf: sendmail_enable="NO" sendmail_submit_enable="NO" sendmail_outbound_enable="NO" sendmail_msp_queue_enable="NO" To only disable Sendmail's incoming mail service, use only this entry in /etc/rc.conf: sendmail_enable="NO" More information on Sendmail's startup options is available in &man.rc.sendmail.8;. Replace the Default <acronym>MTA</acronym> When a new MTA is installed using the Ports Collection, its startup script is also installed and startup instructions are mentioned in its package message. Before starting the new MTA, stop the running Sendmail processes. This example stops all of these services, then starts the Postfix service: &prompt.root; service sendmail stop &prompt.root; service postfix start To start the replacement MTA at system boot, add its configuration line to /etc/rc.conf. This entry enables the Postfix MTA: postfix_enable="YES" Some extra configuration is needed as Sendmail is so ubiquitous that some software assumes it is already installed and configured. Check /etc/periodic.conf and make sure that these values are set to NO. If this file does not exist, create it with these entries: daily_clean_hoststat_enable="NO" daily_status_mail_rejects_enable="NO" daily_status_include_submit_mailq="NO" daily_submit_queuerun="NO" Some alternative MTAs provide their own compatible implementations of the Sendmail command-line interface in order to facilitate using them as drop-in replacements for Sendmail. However, some MUAs may try to execute standard Sendmail binaries instead of the new MTA's binaries. &os; uses /etc/mail/mailer.conf to map the expected Sendmail binaries to the location of the new binaries. More information about this mapping can be found in &man.mailwrapper.8;. The default /etc/mail/mailer.conf looks like this: # $FreeBSD$ # # Execute the "real" sendmail program, named /usr/libexec/sendmail/sendmail # sendmail /usr/libexec/sendmail/sendmail send-mail /usr/libexec/sendmail/sendmail mailq /usr/libexec/sendmail/sendmail newaliases /usr/libexec/sendmail/sendmail hoststat /usr/libexec/sendmail/sendmail purgestat /usr/libexec/sendmail/sendmail When any of the commands listed on the left are run, the system actually executes the associated command shown on the right. This system makes it easy to change what binaries are executed when these default binaries are invoked. Some MTAs, when installed using the Ports Collection, will prompt to update this file for the new binaries. For example, Postfix will update the file like this: # # Execute the Postfix sendmail program, named /usr/local/sbin/sendmail # sendmail /usr/local/sbin/sendmail send-mail /usr/local/sbin/sendmail mailq /usr/local/sbin/sendmail newaliases /usr/local/sbin/sendmail If the installation of the MTA does not automatically update /etc/mail/mailer.conf, edit this file in a text editor so that it points to the new binaries. This example points to the binaries installed by mail/ssmtp: sendmail /usr/local/sbin/ssmtp send-mail /usr/local/sbin/ssmtp mailq /usr/libexec/sendmail/sendmail newaliases /usr/libexec/sendmail/sendmail hoststat /usr/libexec/sendmail/sendmail purgestat /usr/libexec/sendmail/sendmail Once everything is configured, it is recommended to reboot the system. Rebooting provides the opportunity to ensure that the system is correctly configured to start the new MTA automatically on boot. Troubleshooting email troubleshooting Why do I have to use the FQDN for hosts on my site? 
The host may actually be in a different domain. For example, in order for a host in foo.bar.edu to reach a host called mumble in the bar.edu domain, refer to it by the Fully-Qualified Domain Name FQDN, mumble.bar.edu, instead of just mumble. This is because the version of BIND BIND which ships with &os; no longer provides default abbreviations for non-FQDNs other than the local domain. An unqualified host such as mumble must either be found as mumble.foo.bar.edu, or it will be searched for in the root domain. In older versions of BIND, the search continued across mumble.bar.edu, and mumble.edu. RFC 1535 details why this is considered bad practice or even a security hole. As a good workaround, place the line: search foo.bar.edu bar.edu instead of the previous: domain foo.bar.edu into /etc/resolv.conf. However, make sure that the search order does not go beyond the boundary between local and public administration, as RFC 1535 calls it. How can I run a mail server on a dial-up PPP host? Connect to a &os; mail gateway on the LAN. The PPP connection is non-dedicated. One way to do this is to get a full-time Internet server to provide secondary MX MX record services for the domain. In this example, the domain is example.com and the ISP has configured example.net to provide secondary MX services to the domain: example.com. MX 10 example.com. MX 20 example.net. Only one host should be specified as the final recipient. For Sendmail, add Cw example.com in /etc/mail/sendmail.cf on example.com. When the sending MTA attempts to deliver mail, it will try to connect to the system, example.com, over the PPP link. This will time out if the destination is offline. The MTA will automatically deliver it to the secondary MX site at the Internet Service Provider (ISP), example.net. The secondary MX site will periodically try to connect to the primary MX host, example.com. Use something like this as a login script: #!/bin/sh # Put me in /usr/local/bin/pppmyisp ( sleep 60 ; /usr/sbin/sendmail -q ) & /usr/sbin/ppp -direct pppmyisp When creating a separate login script for users, instead use sendmail -qRexample.com in the script above. This will force all mail in the queue for example.com to be processed immediately. A further refinement of the situation can be seen from this example from the &a.isp;: > we provide the secondary MX for a customer. The customer connects to > our services several times a day automatically to get the mails to > his primary MX (We do not call his site when a mail for his domains > arrived). Our sendmail sends the mailqueue every 30 minutes. At the > moment he has to stay 30 minutes online to be sure that all mail is > gone to the primary MX. > > Is there a command that would initiate sendmail to send all the mails > now? The user has not root-privileges on our machine of course. In the privacy flags section of sendmail.cf, there is a definition Opgoaway,restrictqrun Remove restrictqrun to allow non-root users to start the queue processing. You might also like to rearrange the MXs. We are the 1st MX for our customers like this, and we have defined: # If we are the best MX for a host, try directly instead of generating # local config error. OwTrue That way a remote site will deliver straight to you, without trying the customer connection. You then send to your customer. Only works for hosts, so you need to get your customer to name their mail machine customer.com as well as hostname.customer.com in the DNS. Just put an A record in the DNS for customer.com. 
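In zone file terms, that suggestion amounts to giving the bare domain its own A record in addition to the mail host's, so that customer.com itself resolves and can receive mail directly. A rough sketch, using a placeholder address, might be:
customer.com.            IN  A  203.0.113.10
hostname.customer.com.   IN  A  203.0.113.10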
Advanced Topics This section covers more involved topics such as mail configuration and setting up mail for an entire domain. Basic Configuration email configuration Out of the box, one can send email to external hosts as long as /etc/resolv.conf is configured or the network has access to a configured DNS server. To have email delivered to the MTA on the &os; host, do one of the following: Run a DNS server for the domain. Get mail delivered directly to the FQDN for the machine. SMTP In order to have mail delivered directly to a host, it must have a permanent static IP address, not a dynamic IP address. If the system is behind a firewall, it must be configured to allow SMTP traffic. To receive mail directly at a host, one of these two must be configured: Make sure that the lowest-numbered MX record in DNS points to the host's static IP address. Make sure there is no MX entry in the DNS for the host. Either of the above will allow mail to be received directly at the host. Try this:
&prompt.root; hostname
example.FreeBSD.org
&prompt.root; host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX
In this example, mail sent directly to yourlogin@example.FreeBSD.org should work without problems, assuming Sendmail is running correctly on example.FreeBSD.org. For this example:
&prompt.root; host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX
example.FreeBSD.org mail is handled (pri=10) by hub.FreeBSD.org
All mail sent to example.FreeBSD.org will be collected on hub under the same username instead of being sent directly to your host. The above information is handled by the DNS server. The DNS record that carries mail routing information is the MX entry. If no MX record exists, mail will be delivered directly to the host by way of its IP address. The MX entry for freefall.FreeBSD.org at one time looked like this:
freefall    MX  30  mail.crl.net
freefall    MX  40  agora.rdrop.com
freefall    MX  10  freefall.FreeBSD.org
freefall    MX  20  who.cdrom.com
freefall had many MX entries. The lowest MX number is the host that receives mail directly, if available. If it is not accessible for some reason, the other hosts will accept messages temporarily, and pass them along when a lower-numbered host becomes available. Alternate MX sites should have separate Internet connections in order to be most useful. Your ISP can provide this service. Mail for a Domain When configuring a MTA for a network, any mail sent to hosts in its domain should be diverted to the MTA so that users can receive their mail on the master mail server. DNS To make life easier, a user account with the same username should exist on both the MTA and the system with the MUA. Use &man.adduser.8; to create the user accounts. The MTA must be the designated mail exchanger for each workstation on the network. This is done in the DNS configuration with an MX record:
example.FreeBSD.org  A   204.216.27.XX       ; Workstation
                     MX  10 hub.FreeBSD.org  ; Mailhost
This will redirect mail for the workstation to the MTA no matter where the A record points. The mail is sent to the MX host. This must be configured on a DNS server. If the network does not run its own DNS server, talk to the ISP or DNS provider. The following is an example of virtual email hosting. Consider a customer with the domain customer1.org, where all the mail for customer1.org should be sent to mail.myhost.com.
The DNS entry should look like this:
customer1.org MX 10 mail.myhost.com
An A record is not needed for customer1.org in order to only handle email for that domain. However, running ping against customer1.org will not work unless an A record exists for it. Tell the MTA which domains and/or hostnames it should accept mail for. Either of the following will work for Sendmail: Add the hosts to /etc/mail/local-host-names when using the FEATURE(use_cw_file). For versions of Sendmail earlier than 8.10, edit /etc/sendmail.cw instead. Add a Cwyour.host.com line to /etc/sendmail.cf. For Sendmail 8.10 or higher, add that line to /etc/mail/sendmail.cf. Setting Up to Send Only BillMoranContributed by There are many instances where one may only want to send mail through a relay. Some examples are: The computer is a desktop machine that needs to use programs such as &man.send-pr.1;, using the ISP's mail relay. The computer is a server that does not handle mail locally, but needs to pass off all mail to a relay for processing. While any MTA is capable of filling this particular niche, it can be difficult to properly configure a full-featured MTA just to handle offloading mail. Programs such as Sendmail and Postfix are overkill for this use. Additionally, a typical Internet access service agreement may forbid one from running a mail server. The easiest way to fulfill those needs is to install the mail/ssmtp port:
&prompt.root; cd /usr/ports/mail/ssmtp
&prompt.root; make install replace clean
Once installed, mail/ssmtp can be configured with /usr/local/etc/ssmtp/ssmtp.conf:
root=yourrealemail@example.com
mailhub=mail.example.com
rewriteDomain=example.com
hostname=_HOSTNAME_
Use the real email address for root. Enter the ISP's outgoing mail relay in place of mail.example.com. Some ISPs call this the outgoing mail server or SMTP server. Make sure to disable Sendmail, including the outgoing mail service. See for details. mail/ssmtp has some other options available. Refer to the examples in /usr/local/etc/ssmtp or the manual page of ssmtp for more information. Setting up ssmtp in this manner allows any software on the computer that needs to send mail to function properly, while not violating the ISP's usage policy or allowing the computer to be hijacked for spamming. Using Mail with a Dialup Connection When using a static IP address, one should not need to adjust the default configuration. Set the hostname to the assigned Internet name and Sendmail will do the rest. When using a dynamically assigned IP address and a dialup PPP connection to the Internet, one usually has a mailbox on the ISP's mail server. In this example, the ISP's domain is example.net, the user name is user, the hostname is bsd.home, and the ISP has allowed relay.example.net as a mail relay. In order to retrieve mail from the ISP's mailbox, install a retrieval agent from the Ports Collection. mail/fetchmail is a good choice as it supports many different protocols. Usually, the ISP will provide POP. When using user PPP, email can be automatically fetched when an Internet connection is established with the following entry in /etc/ppp/ppp.linkup:
MYADDR:
  !bg su user -c fetchmail
When using Sendmail to deliver mail to non-local accounts, configure Sendmail to process the mail queue as soon as the Internet connection is established. To do this, add this line after the above fetchmail entry in /etc/ppp/ppp.linkup:
  !bg su user -c "sendmail -q"
In this example, there is an account for user on bsd.home.
In the home directory of user on bsd.home, create a .fetchmailrc which contains this line: poll example.net protocol pop3 fetchall pass MySecret This file should not be readable by anyone except user as it contains the password MySecret. In order to send mail with the correct from: header, configure Sendmail to use user@example.net rather than user@bsd.home and to send all mail via relay.example.net, allowing quicker mail transmission. The following .mc file should suffice: VERSIONID(`bsd.home.mc version 1.0') OSTYPE(bsd4.4)dnl FEATURE(nouucp)dnl MAILER(local)dnl MAILER(smtp)dnl Cwlocalhost Cwbsd.home MASQUERADE_AS(`example.net')dnl FEATURE(allmasquerade)dnl FEATURE(masquerade_envelope)dnl FEATURE(nocanonify)dnl FEATURE(nodns)dnl define(`SMART_HOST', `relay.example.net') Dmbsd.home define(`confDOMAIN_NAME',`bsd.home')dnl define(`confDELIVERY_MODE',`deferred')dnl Refer to the previous section for details of how to convert this file into the sendmail.cf format. Do not forget to restart Sendmail after updating sendmail.cf. SMTP Authentication JamesGorhamWritten by Configuring SMTP authentication on the MTA provides a number of benefits. SMTP authentication adds a layer of security to Sendmail, and provides mobile users who switch hosts the ability to use the same MTA without the need to reconfigure their mail client's settings each time. Install security/cyrus-sasl2 from the Ports Collection. This port supports a number of compile-time options. For the SMTP authentication method demonstrated in this example, make sure that is not disabled. After installing security/cyrus-sasl2, edit /usr/local/lib/sasl2/Sendmail.conf, or create it if it does not exist, and add the following line: pwcheck_method: saslauthd Next, install security/cyrus-sasl2-saslauthd and add the following line to /etc/rc.conf: saslauthd_enable="YES" Finally, start the saslauthd daemon: &prompt.root; service saslauthd start This daemon serves as a broker for sendmail to authenticate against the &os; &man.passwd.5; database. This saves the trouble of creating a new set of usernames and passwords for each user that needs to use SMTP authentication, and keeps the login and mail password the same. Next, edit /etc/make.conf and add the following lines: SENDMAIL_CFLAGS=-I/usr/local/include/sasl -DSASL SENDMAIL_LDFLAGS=-L/usr/local/lib SENDMAIL_LDADD=-lsasl2 These lines provide Sendmail the proper configuration options for linking to cyrus-sasl2 at compile time. Make sure that cyrus-sasl2 has been installed before recompiling Sendmail. Recompile Sendmail by executing the following commands: &prompt.root; cd /usr/src/lib/libsmutil &prompt.root; make cleandir && make obj && make &prompt.root; cd /usr/src/lib/libsm &prompt.root; make cleandir && make obj && make &prompt.root; cd /usr/src/usr.sbin/sendmail &prompt.root; make cleandir && make obj && make && make install This compile should not have any problems if /usr/src has not changed extensively and the shared libraries it needs are available. After Sendmail has been compiled and reinstalled, edit /etc/mail/freebsd.mc or the local .mc file. Many administrators choose to use the output from &man.hostname.1; as the name of the .mc file for uniqueness. Add these lines: dnl set SASL options TRUST_AUTH_MECH(`GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN')dnl define(`confAUTH_MECHANISMS', `GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN')dnl These options configure the different methods available to Sendmail for authenticating users. To use a method other than pwcheck, refer to the Sendmail documentation. 
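Before rebuilding the configuration files, it can be worth confirming that saslauthd itself accepts system credentials. The testsaslauthd utility installed alongside security/cyrus-sasl2-saslauthd can be used for a quick check; the username and password shown here are placeholders, not part of the original procedure: &prompt.root; testsaslauthd -u someuser -p somepassword If the daemon is running and the credentials are valid, the command should report success; otherwise, recheck the saslauthd_enable line in /etc/rc.conf and the Sendmail.conf created earlier. 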
Finally, run &man.make.1; while in /etc/mail. That will run the new .mc and create a .cf named either freebsd.cf or the name used for the local .mc. Then, run make install restart, which will copy the file to sendmail.cf, and properly restart Sendmail. For more information about this process, refer to /etc/mail/Makefile. To test the configuration, use a MUA to send a test message. For further investigation, set the of Sendmail to 13 and watch /var/log/maillog for any errors. For more information, refer to SMTP authentication. Mail User Agents MarcSilverContributed by Mail User Agents A MUA is an application that is used to send and receive email. As email evolves and becomes more complex, MUAs are becoming increasingly powerful and provide users increased functionality and flexibility. The mail category of the &os; Ports Collection contains numerous MUAs. These include graphical email clients such as Evolution or Balsa and console-based clients such as mutt or alpine. <command>mail</command> &man.mail.1; is the default MUA installed with &os;. It is a console-based MUA that offers the basic functionality required to send and receive text-based email. It provides limited attachment support and can only access local mailboxes. Although mail does not natively support interaction with POP or IMAP servers, these mailboxes may be downloaded to a local mbox using an application such as fetchmail. In order to send and receive email, run mail: &prompt.user; mail The contents of the user's mailbox in /var/mail are automatically read by mail. Should the mailbox be empty, the utility exits with a message indicating that no mail could be found. If mail exists, the application interface starts, and a list of messages will be displayed. Messages are automatically numbered, as can be seen in the following example: Mail version 8.1 6/6/93. Type ? for help. "/var/mail/marcs": 3 messages 3 new >N 1 root@localhost Mon Mar 8 14:05 14/510 "test" N 2 root@localhost Mon Mar 8 14:05 14/509 "user account" N 3 root@localhost Mon Mar 8 14:05 14/509 "sample" Messages can now be read by typing t followed by the message number. This example reads the first email: & t 1 Message 1: From root@localhost Mon Mar 8 14:05:52 2004 X-Original-To: marcs@localhost Delivered-To: marcs@localhost To: marcs@localhost Subject: test Date: Mon, 8 Mar 2004 14:05:52 +0200 (SAST) From: root@localhost (Charlie Root) This is a test message, please reply if you receive it. As seen in this example, the message will be displayed with full headers. To display the list of messages again, press h. If the email requires a reply, press either R or r. R instructs mail to reply only to the sender of the email, while r replies to all other recipients of the message. These commands can be suffixed with the number of the message to reply to. After typing the response, the end of the message should be marked by a single . on its own line. An example can be seen below: & R 1 To: root@localhost Subject: Re: test Thank you, I did get your email. . EOT In order to send a new email, press m, followed by the recipient email address. Multiple recipients may be specified by separating each address with the , delimiter. The subject of the message may then be entered, followed by the message contents. The end of the message should be specified by putting a single . on its own line. & mail root@localhost Subject: I mastered mail Now I can send and receive email using mail ... :) . EOT While using mail, press ? to display help at any time. 
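mail can also be used non-interactively, which is handy in scripts or for a quick test of the local mail system. As a simple illustration (the address and wording are only examples), a one-line message can be piped into mail with the subject set by -s: &prompt.user; echo "Nightly backup completed." | mail -s "backup report" root@localhost The message is handed to the local MTA for delivery just as if it had been composed interactively. 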
Refer to &man.mail.1; for more help on how to use mail. &man.mail.1; was not designed to handle attachments and thus deals with them poorly. Newer MUAs handle attachments in a more intelligent way. Users who prefer to use mail may find the converters/mpack port to be of considerable use. <application>mutt</application> mutt is a powerful MUA, with many features, including: The ability to thread messages. PGP support for digital signing and encryption of email. MIME support. Maildir support. Highly customizable. Refer to http://www.mutt.org for more information on mutt. mutt may be installed using the mail/mutt port. After the port has been installed, mutt can be started by issuing the following command: &prompt.user; mutt mutt will automatically read and display the contents of the user mailbox in /var/mail. If no mail is found, mutt will wait for commands from the user. The example below shows mutt displaying a list of messages: To read an email, select it using the cursor keys and press Enter. An example of mutt displaying email can be seen below: Similar to &man.mail.1;, mutt can be used to reply only to the sender of the message as well as to all recipients. To reply only to the sender of the email, press r. To send a group reply to the original sender as well as all the message recipients, press g. By default, mutt uses the &man.vi.1; editor for creating and replying to emails. Each user can customize this by creating or editing the .muttrc in their home directory and setting the editor variable or by setting the EDITOR environment variable. Refer to http://www.mutt.org/ for more information about configuring mutt. To compose a new mail message, press m. After a valid subject has been given, mutt will start &man.vi.1; so the email can be written. Once the contents of the email are complete, save and quit from vi. mutt will resume, displaying a summary screen of the mail that is to be delivered. In order to send the mail, press y. An example of the summary screen can be seen below: mutt contains extensive help which can be accessed from most of the menus by pressing ?. The top line also displays the keyboard shortcuts where appropriate. <application>alpine</application> alpine is aimed at beginner users, but also includes some advanced features. alpine has had several remote vulnerabilities discovered in the past, which allowed remote attackers to execute arbitrary code as users on the local system, by the action of sending a specially-prepared email. While known problems have been fixed, alpine code is written in an insecure style and the &os; Security Officer believes there are likely to be other undiscovered vulnerabilities. Users install alpine at their own risk. The current version of alpine may be installed using the mail/alpine port. Once the port has been installed, alpine can be started by issuing the following command: &prompt.user; alpine The first time alpine runs, it displays a greeting page with a brief introduction, as well as a request from the alpine development team to send an anonymous email message allowing them to judge how many users are using their client. To send this anonymous message, press Enter. Alternatively, press E to exit the greeting without sending an anonymous message. An example of the greeting page is shown below: The main menu is then presented, which can be navigated using the cursor keys. This main menu provides shortcuts for composing new mail, browsing mail directories, and administering address book entries. 
Below the main menu, relevant keyboard shortcuts to perform functions specific to the task at hand are shown. The default directory opened by alpine is inbox. To view the message index, press I, or select the MESSAGE INDEX option shown below: The message index shows messages in the current directory and can be navigated by using the cursor keys. Highlighted messages can be read by pressing Enter. In the screenshot below, a sample message is displayed by alpine. Contextual keyboard shortcuts are displayed at the bottom of the screen. An example of one such shortcut is r, which tells the MUA to reply to the current message being displayed. Replying to an email in alpine is done using the pico editor, which is installed by default with alpine. pico makes it easy to navigate the message and is easier for novice users to use than &man.vi.1; or &man.mail.1;. Once the reply is complete, the message can be sent by pressing CtrlX . alpine will ask for confirmation before sending the message. alpine can be customized using the SETUP option from the main menu. Consult http://www.washington.edu/alpine/ for more information. Using <application>fetchmail</application> MarcSilverContributed by fetchmail fetchmail is a full-featured IMAP and POP client. It allows users to automatically download mail from remote IMAP and POP servers and save it into local mailboxes where it can be accessed more easily. fetchmail can be installed using the mail/fetchmail port, and offers various features, including: Support for the POP3, APOP, KPOP, IMAP, ETRN and ODMR protocols. Ability to forward mail using SMTP, which allows filtering, forwarding, and aliasing to function normally. May be run in daemon mode to check periodically for new messages. Can retrieve multiple mailboxes and forward them, based on configuration, to different local users. This section explains some of the basic features of fetchmail. This utility requires a .fetchmailrc configuration file in the user's home directory in order to run correctly. This file includes server information as well as login credentials. Due to the sensitive nature of the contents of this file, it is advisable to make it readable only by the user, with the following command: &prompt.user; chmod 600 .fetchmailrc The following .fetchmailrc serves as an example for downloading a single user mailbox using POP. It tells fetchmail to connect to example.com using a username of joesoap and a password of XXX. This example assumes that the user joesoap exists on the local system. poll example.com protocol pop3 username "joesoap" password "XXX" The next example connects to multiple POP and IMAP servers and redirects to different local usernames where applicable: poll example.com proto pop3: user "joesoap", with password "XXX", is "jsoap" here; user "andrea", with password "XXXX"; poll example2.net proto imap: user "john", with password "XXXXX", is "myth" here; fetchmail can be run in daemon mode by running it with , followed by the interval (in seconds) that fetchmail should poll servers listed in .fetchmailrc. The following example configures fetchmail to poll every 600 seconds: &prompt.user; fetchmail -d 600 More information on fetchmail can be found at http://www.fetchmail.info/. Using <application>procmail</application> MarcSilverContributed by procmail procmail is a powerful application used to filter incoming mail. It allows users to define rules which can be matched to incoming mails to perform specific functions or to reroute mail to alternative mailboxes or email addresses. 
procmail can be installed using the mail/procmail port. Once installed, it can be directly integrated into most MTAs. Consult the MTA documentation for more information. Alternatively, procmail can be integrated by adding the following line to a .forward in the home directory of the user: "|exec /usr/local/bin/procmail || exit 75" The following section displays some basic procmail rules, as well as brief descriptions of what they do. Rules must be inserted into a .procmailrc, which must reside in the user's home directory. The majority of these rules can be found in &man.procmailex.5;. To forward all mail from user@example.com to an external address of goodmail@example2.com: :0 * ^From.*user@example.com ! goodmail@example2.com To forward all mails shorter than 1000 bytes to an external address of goodmail@example2.com: :0 * < 1000 ! goodmail@example2.com To send all mail sent to alternate@example.com to a mailbox called alternate: :0 * ^TOalternate@example.com alternate To send all mail with a subject of Spam to /dev/null: :0 ^Subject:.*Spam /dev/null A useful recipe that parses incoming &os;.org mailing lists and places each list in its own mailbox: :0 * ^Sender:.owner-freebsd-\/[^@]+@FreeBSD.ORG { LISTNAME=${MATCH} :0 * LISTNAME??^\/[^@]+ FreeBSD-${MATCH} } Index: head/en_US.ISO8859-1/books/handbook/mirrors/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/mirrors/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/mirrors/chapter.xml (revision 46049) @@ -1,933 +1,933 @@ Obtaining &os; <acronym>CD</acronym> and <acronym>DVD</acronym> Sets &os; CD and DVD sets are available from several online retailers:
&os; Mall, Inc. 2420 Sand Creek Rd C-1 #347 Brentwood, CA 94513 USA Phone: +1 925 240-6652 Fax: +1 925 674-0821 Email: info@freebsdmall.com WWW: http://www.freebsdmall.com/
Getlinux 78 Rue de la Croix Rochopt Épinay-sous-Sénart 91860 France Email: contact@getlinux.fr WWW: http://www.getlinux.fr/
Dr. Hinner EDV Kochelseestr. 11 D-81371 München Germany Phone: (0177) 428 419 0 Email: info@hinner.de WWW: http://www.hinner.de/linux/freebsd.html
Linux Center Galernaya Street, 55 Saint-Petersburg 190000 Russia Phone: +7-812-309-06-86 Email: info@linuxcenter.ru WWW: http://linuxcenter.ru/shop/freebsd
<acronym>FTP</acronym> Sites The official sources for &os; are available via anonymous FTP from a worldwide set of mirror sites. The site ftp://ftp.FreeBSD.org/pub/FreeBSD/ is well connected and allows a large number of connections to it, but you are probably better off finding a closer mirror site (especially if you decide to set up some sort of mirror site). Additionally, &os; is available via anonymous FTP from the following mirror sites. If you choose to obtain &os; via anonymous FTP, please try to use a site near you. The mirror sites listed as Primary Mirror Sites typically have the entire &os; archive (all the currently available versions for each of the architectures) but you will probably have faster download times from a site that is in your country or region. The regional sites carry the most recent versions for the most popular architecture(s) but might not carry the entire &os; archive. All sites provide access via anonymous FTP but some sites also provide access via other methods. The access methods available for each site are provided in parentheses after the hostname. &chap.mirrors.ftp.index.inc; &chap.mirrors.lastmod.inc; &chap.mirrors.ftp.inc; Using CTM CTM CTM is a method for keeping a remote directory tree in sync with a central one. It is built into &os; and can be used to synchronize a system with &os;'s source repositories. It supports synchronization of an entire repository or just a specified set of branches. CTM is specifically designed for use on lousy or non-existent TCP/IP connections and provides the ability for changes to be automatically sent by email. It requires the user to obtain up to three deltas per day for the most active branches. Update sizes are always kept as small as possible and are typically less than 5K. About one in every ten updates is 10-50K in size, and there will occasionally be an update larger than 100K. When using CTM to track &os; development, refer to the caveats related to working directly from the development sources rather than a pre-packaged release. These are discussed in Tracking a Development Branch. Little documentation exists on the process of creating deltas or using CTM for other purposes. Contact the &a.ctm-users.name; mailing list for answers to questions on using CTM. Getting Deltas The deltas used by CTM can be obtained either through anonymous FTP or email. FTP deltas can be obtained from the following mirror sites. When using anonymous FTP to obtain CTM deltas, select a mirror that is geographically nearby. In case of problems, contact the &a.ctm-users.name; mailing list. California, Bay Area, official source ftp://ftp.FreeBSD.org/pub/FreeBSD/development/CTM/ ftp://ftp.FreeBSD.org/pub/FreeBSD/CTM/ South Africa, backup server for old deltas ftp://ftp.za.FreeBSD.org/pub/FreeBSD/CTM/ Taiwan/R.O.C. ftp://ctm.tw.FreeBSD.org/pub/FreeBSD/development/CTM/ ftp://ctm2.tw.FreeBSD.org/pub/FreeBSD/development/CTM/ ftp://ctm3.tw.FreeBSD.org/pub/FreeBSD/development/CTM/ To instead receive deltas through email, subscribe to one of the ctm-src distribution lists available from http://lists.freebsd.org/mailman/listinfo. For example, &a.ctm-src-cur.name; supports the head development branch and &a.ctm-src-9.name; supports the 9.X release branch. As CTM updates arrive through email, use ctm_rmail to unpack and apply them. This command can be run directly from an entry in /etc/aliases in order to automate this process. Refer to &man.ctm.rmail.1; for more details. 
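When using the FTP method instead, deltas can be downloaded with any FTP client. For example, &man.fetch.1; can retrieve a delta from the primary mirror listed above; the subdirectory layout and the delta file name shown here are only placeholders, so browse the mirror to find the exact path and the next delta number needed by the local tree: &prompt.root; fetch ftp://ftp.FreeBSD.org/pub/FreeBSD/development/CTM/src-cur/src-cur.1234.gz 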
Regardless of the method which is used to get deltas, CTM users should subscribe to the &a.ctm-announce.name; mailing list as this is the only mechanism by which CTM announcements are posted. <application>CTM</application> Usage Before CTM deltas can be used for the first time, a starting point must be produced. One method is to apply a starter delta to an empty directory. A starter delta can be recognized by the XEmpty in its name, such as src-cur.3210XEmpty.gz. The designation following the X corresponds to the origin of the initial seed, where Empty is an empty directory. As a rule, a base transition from Empty is produced every 100 deltas. Be aware that starter deltas are large and 70 to 80 Megabytes of gzip'd data is common for the XEmpty deltas. Another method is to copy or extract an initial source from a RELEASE media as this can save a significant transfer of data from the Internet. Once a base delta has been created, apply all deltas with higher numbers. To apply the deltas: &prompt.root; cd /directory/to/store/the/stuff &prompt.root; ctm -v -v /directory/which/stores/the/deltas/src-xxx.* Multiple deltas can be applied with a single command as they will be processed one at a time and any deltas that are already applied will be ignored. CTM understands gzip compressed deltas, which saves disk space. To verify a delta without applying it, include in the command line. CTM will not actually modify the local tree but will instead verify the integrity of the delta to see if it would apply cleanly. Refer to &man.ctm.1; for more information about available options and an overview of the process CTM uses when applying deltas. To keep the local source tree up-to-date, every time a new delta becomes available, apply it through CTM. Once applied, it is recommended to not delete the deltas if it is a burden to download them again. This way, a local copy is available in case it is needed for future disaster recovery. Keeping Local Changes Developers often experiment with and change files in their local source tree. CTM supports local modifications in a limited way: before checking for the presence of a file, it first looks for a file with the same name and a .ctm extension. If this file exists, CTM will operate on it instead of the original filename. This behavior provides a simple way to maintain local changes. Before modifying a file, make a copy with a .ctm suffix. Make any changes to the original filename, knowing that CTM will only apply updates to the file with the .ctm suffix. Other <application>CTM</application> Options Finding Out Exactly What Would Be Touched by an Update To determine the list of changes that CTM will make to the local source repository, use . This option is useful for creating logs of the changes or when performing pre- or post-processing on any of the modified files. Making Backups Before Updating To backup all of the files that would be changed by a CTM update, specify . This option tells CTM to backup all files touched by the applied CTM delta to backup-file. Restricting the Files Touched by an Update To restrict the scope of a given CTM update, or to extract just a few files from a sequence of deltas, filtering regular expressions can be specified using , which specifies which files to process, or , which specifies which files to ignore. 
For example, to extract an up-to-date copy of lib/libc/Makefile from a collection of saved CTM deltas: &prompt.root; cd /directory/to/extract/to/ &prompt.root; ctm -e '^lib/libc/Makefile' /directory/which/stores/the/deltas/src-xxx.* For every file specified in a CTM delta, and are applied in the order given on the command line. A file is processed by CTM only if it is marked as eligible after all and options are applied. Using <application>Subversion</application> Subversion Introduction As of July 2012, &os; uses Subversion as the primary version control system for storing all of &os;'s source code, documentation, and the Ports Collection. Subversion is generally a developer tool. Most users should use freebsd-update () to update the &os; base system, and portsnap () to update the &os; Ports Collection. This chapter demonstrates how to install Subversion on a &os; system and then use it to create a local copy of a &os; repository. It includes a list of the available &os; Subversion mirrors and resources to additional information on how to use Subversion. Installation Subversion must be installed before it can be used to check out the contents of any of the repositories. If a copy of the ports tree is already present, one can install Subversion like this: &prompt.root; cd /usr/ports/devel/subversion &prompt.root; make install clean If the ports tree is not available, Subversion can be installed as a package: &prompt.root; pkg install devel/subversion Running <application>Subversion</application> The svn command is used to fetch a clean copy of the sources into a local directory. The files in this directory are called a local working copy. Move or delete the local directory before using checkout. Checkout over an existing non-svn directory can cause conflicts between the existing files and those brought in from the repository. Subversion uses URLs to designate a repository, taking the form of protocol://hostname/path. Mirrors may support different protocols as specified below. The first component of the path is the &os; repository to access. There are three different repositories, base for the &os; base system source code, ports for the Ports Collection, and doc for documentation. For example, the URL svn://svn0.us-east.FreeBSD.org/ports/head/ specifies the main branch of the ports repository on the svn0.us-east.FreeBSD.org mirror, using the svn protocol. A checkout from a given repository is performed with a command like this: &prompt.root; svn checkout svn-mirror/repository/branch lwcdir where: svn-mirror is a URL for one of the Subversion mirror sites. repository is one of the Project repositories, i.e., base, ports, or doc. branch depends on the repository used. ports and doc are mostly updated in the head branch, while base maintains the latest version of -CURRENT under head and the respective latest versions of the -STABLE branches under stable/8 (for 8.x), stable/9 (9.x) and stable/10 (10.x). lwcdir is the target directory where the contents of the specified branch should be placed. This is usually /usr/ports for ports, /usr/src for base, and /usr/doc for doc. This example checks out the Ports Collection from the western US repository using the HTTPS protocol, placing the local working copy in /usr/ports. If /usr/ports is already present but was not created by svn, remember to rename or delete it before the checkout. 
&prompt.root; svn checkout https://svn0.us-west.FreeBSD.org/ports/head /usr/ports Because the initial checkout has to download the full branch of the remote repository, it can take a while. Please be patient. After the initial checkout, the local working copy can be updated by running: &prompt.root; svn update lwcdir To update /usr/ports created in the example above, use: &prompt.root; svn update /usr/ports The update is much quicker than a checkout, only transferring files that have changed. An alternate way of updating the local working copy after checkout is provided by the Makefile in the /usr/ports, /usr/src, and /usr/doc directories. Set SVN_UPDATE and use the update target. For example, to update /usr/src: &prompt.root; cd /usr/src &prompt.root; make update SVN_UPDATE=yes <application>Subversion</application> Mirror Sites Subversion Repository Mirror Sites All mirrors carry all repositories. The master &os; Subversion server, svn.FreeBSD.org, is publicly accessible, read-only. That may change in the future, so users are encouraged to use one of the official mirrors. To view the &os; Subversion repositories through a browser, use http://svnweb.FreeBSD.org/. The &os; Subversion mirror network is still in its early days, and will likely change. Do not count on this list of mirrors being static. In particular, the SSL certificates of the servers will likely change at some point. Name Protocols Location SSL Fingerprint svn0.us-west.FreeBSD.org svn, http, https USA, California SHA1 1C:BD:85:95:11:9F:EB:75:A5:4B:C8:A3:FE:08:E4:02:73:06:1E:61 svn0.us-east.FreeBSD.org svn, http, https, rsync USA, New Jersey SHA1 1C:BD:85:95:11:9F:EB:75:A5:4B:C8:A3:FE:08:E4:02:73:06:1E:61 svn0.eu.FreeBSD.org svn, http, https, rsync Europe, UK SHA1 39:B0:53:35:CE:60:C7:BB:00:54:96:96:71:10:94:BB:CE:1C:07:A7 svn0.ru.FreeBSD.org svn, http, https, rsync Russia, Moscow SHA1 F6:44:AA:B9:03:89:0E:3E:8C:4D:4D:14:F0:27:E6:C7:C1:8B:17:C5 HTTPS is the preferred protocol, providing protection against another computer pretending to be the &os; mirror (commonly known as a man in the middle attack) or otherwise trying to send bad content to the end user. - On the first connection to an HTTPS - mirror, the user will be asked to verify the server - fingerprint: + On the first connection + to an HTTPS mirror, the user will be asked + to verify the server fingerprint: Error validating server certificate for 'https://svn0.us-west.freebsd.org:443': - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually! - The certificate hostname does not match. Certificate information: - Hostname: svnmir.ysv.FreeBSD.org - Valid: from Jul 29 22:01:21 2013 GMT until Dec 13 22:01:21 2040 GMT - Issuer: clusteradm, FreeBSD.org, (null), CA, US (clusteradm@FreeBSD.org) - Fingerprint: 1C:BD:85:95:11:9F:EB:75:A5:4B:C8:A3:FE:08:E4:02:73:06:1E:61 (R)eject, accept (t)emporarily or accept (p)ermanently? Compare the fingerprint shown to those listed in the table above. If the fingerprint matches, the server security certificate can be accepted temporarily or permanently. A temporary certificate will expire after a single session with the server, and the verification step will be repeated on the next connection. Accepting the certificate permanently will store the authentication credentials in ~/.subversion/auth/ and the user will not be asked to verify the fingerprint again until the certificate expires. 
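After a checkout has completed, the standard svn info subcommand shows which mirror URL and revision a local working copy is tracking. This example assumes the /usr/ports working copy created above: &prompt.user; svn info /usr/ports The URL line in the output should point at the chosen mirror, and Revision shows the last revision the working copy was updated to. 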
If https cannot be used due to firewall or other problems, svn is the next choice, with slightly faster transfers. When neither can be used, use http. For More Information For other information about using Subversion, please see the Subversion Book, titled Version Control with Subversion, or the Subversion Documentation. Using <application>rsync</application> The following sites make &os; available through the rsync protocol. The rsync utility works in much the same way as the &man.rcp.1; command, but has more options and uses the rsync remote-update protocol which transfers only the differences between two sets of files, thus greatly speeding up the synchronization over the network. This is most useful if you are a mirror site for the &os; FTP server, or the CVS repository. The rsync suite is available for many operating systems, on &os;, see the net/rsync port or use the package. Czech Republic rsync://ftp.cz.FreeBSD.org/ Available collections: ftp: A partial mirror of the &os; FTP server. &os;: A full mirror of the &os; FTP server. Netherlands rsync://ftp.nl.FreeBSD.org/ Available collections: &os;: A full mirror of the &os; FTP server. Russia rsync://ftp.mtu.ru/ Available collections: &os;: A full mirror of the &os; FTP server. &os;-Archive: The mirror of &os; Archive FTP server. Sweden rsync://ftp4.se.freebsd.org/ Available collections: &os;: A full mirror of the &os; FTP server. Taiwan rsync://ftp.tw.FreeBSD.org/ rsync://ftp2.tw.FreeBSD.org/ rsync://ftp6.tw.FreeBSD.org/ Available collections: &os;: A full mirror of the &os; FTP server. United Kingdom rsync://rsync.mirrorservice.org/ Available collections: ftp.freebsd.org: A full mirror of the &os; FTP server. United States of America rsync://ftp-master.FreeBSD.org/ This server may only be used by &os; primary mirror sites. Available collections: &os;: The master archive of the &os; FTP server. acl: The &os; master ACL list. rsync://ftp13.FreeBSD.org/ Available collections: &os;: A full mirror of the &os; FTP server.
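As a rough illustration of how one of these mirrors might be used, the following command mirrors a collection into a local directory with rsync; the module name and local path are examples only and should be adjusted to match the chosen server's collection list: &prompt.user; rsync -avz --delete rsync://ftp.nl.FreeBSD.org/FreeBSD/ /local/FreeBSD-mirror/ Here -a preserves permissions and timestamps, -z compresses the transfer, and --delete removes local files that no longer exist on the server, keeping the copy an exact mirror. 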
Index: head/en_US.ISO8859-1/books/handbook/multimedia/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/multimedia/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/multimedia/chapter.xml (revision 46049) @@ -1,1620 +1,1620 @@ Multimedia Ross Lippert Edited by Synopsis &os; supports a wide variety of sound cards, allowing users to enjoy high fidelity output from a &os; system. This includes the ability to record and playback audio in the MPEG Audio Layer 3 (MP3), Waveform Audio File (WAV), Ogg Vorbis, and other formats. The &os; Ports Collection contains many applications for editing recorded audio, adding sound effects, and controlling attached MIDI devices. &os; also supports the playback of video files and DVDs. The &os; Ports Collection contains applications to encode, convert, and playback various video media. This chapter describes how to configure sound cards, video playback, TV tuner cards, and scanners on &os;. It also describes some of the applications which are available for using these devices. After reading this chapter, you will know how to: Configure a sound card on &os;. Troubleshoot the sound setup. Playback and encode MP3s and other audio. Prepare a &os; system for video playback. Play DVDs, .mpg, and .avi files. Rip CD and DVD content into files. Configure a TV card. Install and setup MythTV on &os; Configure an image scanner. Before reading this chapter, you should: Know how to install applications as described in . Setting Up the Sound Card Moses Moore Contributed by Marc Fonvieille Enhanced by PCI sound cards Before beginning the configuration, determine the model of the sound card and the chip it uses. &os; supports a wide variety of sound cards. Check the supported audio devices list of the Hardware Notes to see if the card is supported and which &os; driver it uses. kernel configuration In order to use the sound device, its device driver must be loaded. The easiest way is to load a kernel module for the sound card with &man.kldload.8;. This example loads the driver for a built-in audio chipset based on the Intel specification: &prompt.root; kldload snd_hda To automate the loading of this driver at boot time, add the driver to /boot/loader.conf. The line for this driver is: snd_hda_load="YES" Other available sound modules are listed in /boot/defaults/loader.conf. When unsure which driver to use, load the snd_driver module: &prompt.root; kldload snd_driver This is a metadriver which loads all of the most common sound drivers and can be used to speed up the search for the correct driver. It is also possible to load all sound drivers by adding the metadriver to /boot/loader.conf. To determine which driver was selected for the sound card after loading the snd_driver metadriver, type cat /dev/sndstat. Configuring a Custom Kernel with Sound Support This section is for users who prefer to statically compile in support for the sound card in a custom kernel. For more information about recompiling a kernel, refer to . When using a custom kernel to provide sound support, make sure that the audio framework driver exists in the custom kernel configuration file: device sound Next, add support for the sound card. To continue the example of the built-in audio chipset based on the Intel specification from the previous section, use the following line in the custom kernel configuration file: device snd_hda Be sure to read the manual page of the driver for the device name to use for the driver. 
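If it is unclear which audio chipset a machine actually has, &man.pciconf.8; can help identify it before choosing a driver. This is a general identification tip rather than part of the original procedure: &prompt.root; pciconf -lv | grep -B3 -i audio The vendor and device strings in the output usually make it clear which snd_* driver the Hardware Notes recommend for the chip. 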
Non-PnP ISA sound cards may require the IRQ and I/O port settings of the card to be added to /boot/device.hints. During the boot process, &man.loader.8; reads this file and passes the settings to the kernel. For example, an old Creative &soundblaster; 16 ISA non-PnP card will use the &man.snd.sbc.4; driver in conjunction with snd_sb16. For this card, the following lines must be added to the kernel configuration file: device snd_sbc device snd_sb16 If the card uses the 0x220 I/O port and IRQ 5, these lines must also be added to /boot/device.hints: hint.sbc.0.at="isa" hint.sbc.0.port="0x220" hint.sbc.0.irq="5" hint.sbc.0.drq="1" hint.sbc.0.flags="0x15" In this case, the card uses the 0x220 I/O port and the IRQ 5. The syntax used in /boot/device.hints is described in &man.sound.4; and the manual page for the driver of the sound card. The settings shown above are the defaults. In some cases, the IRQ or other settings may need to be changed to match the card. Refer to &man.snd.sbc.4; for more information about this card. Testing Sound After loading the required module or rebooting into the custom kernel, the sound card should be detected. To confirm, run dmesg | grep pcm. This example is from a system with a built-in Conexant CX20590 chipset: pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 5 on hdaa0 pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 6 on hdaa0 pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> at nid 31,25 and 35,27 on hdaa1 The status of the sound card may also be checked using this command: &prompt.root; cat /dev/sndstat FreeBSD Audio Driver (newpcm: 64bit 2009061500/amd64) Installed devices: pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play) pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play) pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> (play/rec) default The output will vary depending upon the sound card. If no pcm devices are listed, double-check that the correct device driver was loaded or compiled into the kernel. The next section lists some common problems and their solutions. If all goes well, the sound card should now work in &os;. If the CD or DVD drive is properly connected to the sound card, one can insert an audio CD in the drive and play it with &man.cdcontrol.1;: &prompt.user; cdcontrol -f /dev/acd0 play 1 Audio CDs have specialized encodings which means that they should not be mounted using &man.mount.8;. Various applications, such as audio/workman, provide a friendlier interface. The audio/mpg123 port can be installed to listen to MP3 audio files. Another quick way to test the card is to send data to /dev/dsp: &prompt.user; cat filename > /dev/dsp where filename can be any type of file. This command should produce some noise, confirming that the sound card is working. The /dev/dsp* device nodes will be created automatically as needed. When not in use, they do not exist and will not appear in the output of &man.ls.1;. Troubleshooting Sound device nodes I/O port IRQ DSP Table 8.1 lists some common error messages and their solutions: Common Error Messages Error Solution sb_dspwr(XX) timed out The I/O port is not set correctly. bad irq XX The IRQ is set incorrectly. Make sure that the set IRQ and the sound IRQ are the same. xxx: gus pcm not attached, out of memory There is not enough available memory to use the device. xxx: can't open /dev/dsp! Type fstat | grep dsp to check if another application is holding the device open. Noteworthy troublemakers are esound and KDE's sound support.
Modern graphics cards often come with their own sound driver for use with HDMI. This sound device is sometimes enumerated before the sound card meaning that the sound card will not be used as the default playback device. To check if this is the case, run dmesg and look for pcm. The output looks something like this: ... hdac0: HDA Driver Revision: 20100226_0142 hdac1: HDA Driver Revision: 20100226_0142 hdac0: HDA Codec #0: NVidia (Unknown) hdac0: HDA Codec #1: NVidia (Unknown) hdac0: HDA Codec #2: NVidia (Unknown) hdac0: HDA Codec #3: NVidia (Unknown) pcm0: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 0 nid 1 on hdac0 pcm1: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 1 nid 1 on hdac0 pcm2: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 2 nid 1 on hdac0 pcm3: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 3 nid 1 on hdac0 hdac1: HDA Codec #2: Realtek ALC889 pcm4: <HDA Realtek ALC889 PCM #0 Analog> at cad 2 nid 1 on hdac1 pcm5: <HDA Realtek ALC889 PCM #1 Analog> at cad 2 nid 1 on hdac1 pcm6: <HDA Realtek ALC889 PCM #2 Digital> at cad 2 nid 1 on hdac1 pcm7: <HDA Realtek ALC889 PCM #3 Digital> at cad 2 nid 1 on hdac1 ... In this example, the graphics card (NVidia) has been enumerated before the sound card (Realtek ALC889). To use the sound card as the default playback device, change hw.snd.default_unit to the unit that should be used for playback: &prompt.root; sysctl hw.snd.default_unit=n where n is the number of the sound device to use. In this example, it should be 4. Make this change permanent by adding the following line to /etc/sysctl.conf: hw.snd.default_unit=4
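To confirm which unit is currently the default, the same OID can simply be read back: &prompt.root; sysctl hw.snd.default_unit The number reported should match the pcm device of the sound card, as listed in the dmesg output above. 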
Utilizing Multiple Sound Sources Munish Chopra Contributed by It is often desirable to have multiple sources of sound that are able to play simultaneously. &os; uses Virtual Sound Channels to multiplex the sound card's playback by mixing sound in the kernel. Three &man.sysctl.8; knobs are available for configuring virtual channels: &prompt.root; sysctl dev.pcm.0.play.vchans=4 &prompt.root; sysctl dev.pcm.0.rec.vchans=4 &prompt.root; sysctl hw.snd.maxautovchans=4 This example allocates four virtual channels, which is a practical number for everyday use. Both dev.pcm.0.play.vchans=4 and dev.pcm.0.rec.vchans=4 are configurable after a device has been attached and represent the number of virtual channels pcm0 has for playback and recording. Since the pcm module can be loaded independently of the hardware drivers, hw.snd.maxautovchans indicates how many virtual channels will be given to an audio device when it is attached. Refer to &man.pcm.4; for more information. The number of virtual channels for a device cannot be changed while it is in use. First, close any programs using the device, such as music players or sound daemons. The correct pcm device will automatically be allocated transparently to a program that requests /dev/dsp0. Setting Default Values for Mixer Channels Josef El-Rayes Contributed by The default values for the different mixer channels are hardcoded in the source code of the &man.pcm.4; driver. While sound card mixer levels can be changed using &man.mixer.8; or third-party applications and daemons, this is not a permanent solution. To instead set default mixer values at the driver level, define the appropriate values in /boot/device.hints, as seen in this example: hint.pcm.0.vol="50" This will set the volume channel to a default value of 50 when the &man.pcm.4; module is loaded.
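The hints above only set the initial values. At run time, the levels on the active device can be inspected and changed with &man.mixer.8;; the values shown here are just examples: &prompt.user; mixer vol 85:85 Running mixer with no arguments prints the current settings for all channels. 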
MP3 Audio Chern Lee Contributed by This section describes some MP3 players available for &os;, how to rip audio CD tracks, and how to encode and decode MP3s. MP3 Players A popular graphical MP3 player is XMMS. It supports Winamp skins and additional plugins. The interface is intuitive, with a playlist, graphic equalizer, and more. Those familiar with Winamp will find XMMS simple to use. On &os;, XMMS can be installed from the multimedia/xmms port or package. The audio/mpg123 package or port provides an alternative, command-line MP3 player. Once installed, specify the MP3 file to play on the command line. If the system has multiple audio devices, the sound device can also be specified: &prompt.root; mpg123 -a /dev/dsp1.0 Foobar-GreatestHits.mp3 High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3 version 1.18.1; written and copyright by Michael Hipp and others free software (LGPL) without any warranty but with best wishes Playing MPEG stream from Foobar-GreatestHits.mp3 ... MPEG 1.0 layer III, 128 kbit/s, 44100 Hz joint-stereo Additional MP3 players are available in the &os; Ports Collection. Ripping <acronym>CD</acronym> Audio Tracks Before encoding a CD or CD track to MP3, the audio data on the CD must be ripped to the hard drive. This is done by copying the raw CD Digital Audio (CDDA) data to WAV files. The cdda2wav tool, which is installed with the sysutils/cdrtools suite, can be used to rip audio information from CDs. With the audio CD in the drive, the following command can be issued as root to rip an entire CD into individual, per track, WAV files: &prompt.root; cdda2wav -D 0,1,0 -B In this example, the indicates the SCSI device 0,1,0 containing the CD to rip. Use cdrecord -scanbus to determine the correct device parameters for the system. To rip individual tracks, use to specify the track: &prompt.root; cdda2wav -D 0,1,0 -t 7 To rip a range of tracks, such as track one to seven, specify a range: &prompt.root; cdda2wav -D 0,1,0 -t 1+7 To rip from an ATAPI (IDE) CDROM drive, specify the device name in place of the SCSI unit numbers. For example, to rip track 7 from an IDE drive: &prompt.root; cdda2wav -D /dev/acd0 -t 7 Alternately, dd can be used to extract audio tracks on ATAPI drives, as described in . Encoding and Decoding MP3s Lame is a popular MP3 encoder which can be installed from the audio/lame port. Due to patent issues, a package is not available. The following command will convert the ripped WAV file audio01.wav to audio01.mp3: &prompt.root; lame -h -b 128 --tt "Foo Song Title" --ta "FooBar Artist" --tl "FooBar Album" \ --ty "2014" --tc "Ripped and encoded by Foo" --tg "Genre" audio01.wav audio01.mp3 The specified 128 kbits is a standard MP3 bitrate while the 160 and 192 bitrates provide higher quality. The higher the bitrate, the larger the size of the resulting MP3. The turns on the higher quality but a little slower mode. The options beginning with indicate ID3 tags, which usually contain song information, to be embedded within the MP3 file. Additional encoding options can be found in the lame manual page. In order to burn an audio CD from MP3s, they must first be converted to a non-compressed file format. XMMS can be used to convert to the WAV format, while mpg123 can be used to convert to the raw Pulse-Code Modulation (PCM) audio data format. 
To convert audio01.mp3 using mpg123, specify the name of the PCM file: &prompt.root; mpg123 -s audio01.mp3 > audio01.pcm To use XMMS to convert a MP3 to WAV format, use these steps: Converting to <acronym>WAV</acronym> Format in <application>XMMS</application> Launch XMMS. Right-click the window to bring up the XMMS menu. Select Preferences under Options. Change the Output Plugin to Disk Writer Plugin. Press Configure. Enter or browse to a directory to write the uncompressed files to. Load the MP3 file into XMMS as usual, with volume at 100% and EQ settings turned off. Press Play. The XMMS will appear as if it is playing the MP3, but no music will be heard. It is actually playing the MP3 to a file. When finished, be sure to set the default Output Plugin back to what it was before in order to listen to MP3s again. Both the WAV and PCM - formats can be used with cdrecord. When using - WAV files, there will be a small tick sound - at the beginning of each track. This sound is the header of - the WAV file. The + formats can be used with cdrecord. + When using WAV files, there will be a small + tick sound at the beginning of each track. This sound is the + header of the WAV file. The audio/sox port or package can be used to remove the header: &prompt.user; sox -t wav -r 44100 -s -w -c 2 track.wav track.raw Refer to for more information on using a CD burner in &os;. Video Playback Ross Lippert Contributed by Before configuring video playback, determine the model and chipset of the video card. While &xorg; supports a wide variety of video cards, not all provide good playback performance. To obtain a list of extensions supported by the &xorg; server using the card, run xdpyinfo while &xorg; is running. It is a good idea to have a short MPEG test file for evaluating various players and options. Since some DVD applications look for DVD media in /dev/dvd by default, or have this device name hardcoded in them, it might be useful to make a symbolic link to the proper device: &prompt.root; ln -sf /dev/cd0 /dev/dvd Due to the nature of &man.devfs.5;, manually created links will not persist after a system reboot. In order to recreate the symbolic link automatically when the system boots, add the following line to /etc/devfs.conf: link cd0 dvd DVD decryption invokes certain functions that require write permission to the DVD device. To enhance the shared memory &xorg; interface, it is recommended to increase the values of these &man.sysctl.8; variables: kern.ipc.shmmax=67108864 kern.ipc.shmall=32768 Determining Video Capabilities XVideo SDL DGA There are several possible ways to display video under &xorg; and what works is largely hardware dependent. Each method described below will have varying quality across different hardware. Common video interfaces include: &xorg;: normal output using shared memory. XVideo: an extension to the &xorg; interface which allows video to be directly displayed in drawable objects through a special acceleration. This extension provides good quality playback even on low-end machines. The next section describes how to determine if this extension is running. SDL: the Simple Directmedia Layer is a porting layer for many operating systems, allowing cross-platform applications to be developed which make efficient use of sound and graphics. SDL provides a low-level abstraction to the hardware which can sometimes be more efficient than the &xorg; interface. On &os;, SDL can be installed using the devel/sdl20 package or port. 
DGA: the Direct Graphics Access is an &xorg; extension which allows a program to bypass the &xorg; server and directly alter the framebuffer. Because it relies on a low level memory mapping, programs using it must be run as root. The DGA extension can be tested and benchmarked using &man.dga.1;. When dga is running, it changes the colors of the display whenever a key is pressed. To quit, press q. SVGAlib: a low level console graphics layer. XVideo To check whether this extension is running, use xvinfo: &prompt.user; xvinfo XVideo is supported for the card if the result is similar to: X-Video Extension version 2.2 screen #0 Adaptor #0: "Savage Streams Engine" number of ports: 1 port base: 43 operations supported: PutImage supported visuals: depth 16, visualID 0x22 depth 16, visualID 0x23 number of attributes: 5 "XV_COLORKEY" (range 0 to 16777215) client settable attribute client gettable attribute (current value is 2110) "XV_BRIGHTNESS" (range -128 to 127) client settable attribute client gettable attribute (current value is 0) "XV_CONTRAST" (range 0 to 255) client settable attribute client gettable attribute (current value is 128) "XV_SATURATION" (range 0 to 255) client settable attribute client gettable attribute (current value is 128) "XV_HUE" (range -180 to 180) client settable attribute client gettable attribute (current value is 0) maximum XvImage size: 1024 x 1024 Number of image formats: 7 id: 0x32595559 (YUY2) guid: 59555932-0000-0010-8000-00aa00389b71 bits per pixel: 16 number of planes: 1 type: YUV (packed) id: 0x32315659 (YV12) guid: 59563132-0000-0010-8000-00aa00389b71 bits per pixel: 12 number of planes: 3 type: YUV (planar) id: 0x30323449 (I420) guid: 49343230-0000-0010-8000-00aa00389b71 bits per pixel: 12 number of planes: 3 type: YUV (planar) id: 0x36315652 (RV16) guid: 52563135-0000-0000-0000-000000000000 bits per pixel: 16 number of planes: 1 type: RGB (packed) depth: 0 red, green, blue masks: 0x1f, 0x3e0, 0x7c00 id: 0x35315652 (RV15) guid: 52563136-0000-0000-0000-000000000000 bits per pixel: 16 number of planes: 1 type: RGB (packed) depth: 0 red, green, blue masks: 0x1f, 0x7e0, 0xf800 id: 0x31313259 (Y211) guid: 59323131-0000-0010-8000-00aa00389b71 bits per pixel: 6 number of planes: 3 type: YUV (packed) id: 0x0 guid: 00000000-0000-0000-0000-000000000000 bits per pixel: 0 number of planes: 0 type: RGB (packed) depth: 1 red, green, blue masks: 0x0, 0x0, 0x0 The formats listed, such as YUV2 and YUV12, are not present with every implementation of XVideo and their absence may hinder some players. If the result instead looks like: X-Video Extension version 2.2 screen #0 no adaptors present XVideo is probably not supported for the card. This means that it will be more difficult for the display to meet the computational demands of rendering video, depending on the video card and processor. Ports and Packages Dealing with Video video ports video packages This section introduces some of the software available from the &os; Ports Collection which can be used for video playback. <application>MPlayer</application> and <application>MEncoder</application> MPlayer is a command-line video player with an optional graphical interface which aims to provide speed and flexibility. Other graphical front-ends to MPlayer are available from the &os; Ports Collection. MPlayer MPlayer can be installed using the multimedia/mplayer package or port. Several compile options are available and a variety of hardware checks occur during the build process. 
For these reasons, some users prefer to build the port rather than install the package. When compiling the port, the menu options should be reviewed to determine the type of support to compile into the port. If an option is not selected, MPlayer will not be able to display that type of video format. Use the arrow keys and spacebar to select the required formats. When finished, press Enter to continue the port compile and installation. By default, the package or port will build the mplayer command line utility and the gmplayer graphical utility. To encode videos, compile the multimedia/mencoder port. Due to licensing restrictions, a package is not available for MEncoder. The first time MPlayer is run, it will create ~/.mplayer in the user's home directory. This subdirectory contains default versions of the user-specific configuration files. This section describes only a few common uses. Refer to mplayer(1) for a complete description of its numerous options. To play the file testfile.avi, specify the video interfaces with , as seen in the following examples: &prompt.user; mplayer -vo xv testfile.avi &prompt.user; mplayer -vo sdl testfile.avi &prompt.user; mplayer -vo x11 testfile.avi &prompt.root; mplayer -vo dga testfile.avi &prompt.root; mplayer -vo 'sdl:dga' testfile.avi It is worth trying all of these options, as their relative performance depends on many factors and will vary significantly with hardware. To play a DVD, replace testfile.avi with , where N is the title number to play and DEVICE is the device node for the DVD. For example, to play title 3 from /dev/dvd: &prompt.root; mplayer -vo xv dvd://3 -dvd-device /dev/dvd The default DVD device can be defined during the build of the MPlayer port by including the WITH_DVD_DEVICE=/path/to/desired/device option. By default, the device is /dev/cd0. More details can be found in the port's Makefile.options. To stop, pause, advance, and so on, use a keybinding. To see the list of keybindings, run mplayer -h or read mplayer(1). Additional playback options include , which engages fullscreen mode, and , which helps performance. Each user can add commonly used options to their ~/.mplayer/config like so: vo=xv fs=yes zoom=yes mplayer can be used to rip a DVD title to a .vob. To dump the second title from a DVD: &prompt.root; mplayer -dumpstream -dumpfile out.vob dvd://2 -dvd-device /dev/dvd The output file, out.vob, will be in MPEG format. Anyone wishing to obtain a high level of expertise with &unix; video should consult mplayerhq.hu/DOCS as it is technically informative. This documentation should be considered as required reading before submitting any bug reports. mencoder Before using mencoder, it is a good idea to become familiar with the options described at mplayerhq.hu/DOCS/HTML/en/mencoder.html. There are innumerable ways to improve quality, lower bitrate, and change formats, and some of these options may make the difference between good or bad performance. Improper combinations of command line options can yield output files that are unplayable even by mplayer. Here is an example of a simple copy: &prompt.user; mencoder input.avi -oac copy -ovc copy -o output.avi To rip to a file, use with mplayer. To convert input.avi to the MPEG4 codec with MPEG3 audio encoding, first install the audio/lame port. Due to licensing restrictions, a package is not available. 
Once installed, type: &prompt.user; mencoder input.avi -oac mp3lame -lameopts br=192 \ -ovc lavc -lavcopts vcodec=mpeg4:vhq -o output.avi This will produce output playable by applications such as mplayer and xine. input.avi can be replaced with and run as root to re-encode a DVD title directly. Since it may take a few tries to get the desired result, it is recommended to instead dump the title to a file and to work on the file. The <application>xine</application> Video Player xine is a video player with a reusable base library and a modular executable which can be extended with plugins. It can be installed using the multimedia/xine package or port. In practice, xine requires either a fast CPU with a fast video card, or support for the XVideo extension. The xine video player performs best on XVideo interfaces. By default, the xine player starts a graphical user interface. The menus can then be used to open a specific file. Alternatively, xine may be invoked from the command line by specifying the name of the file to play: &prompt.user; xine -g -p mymovie.avi Refer to xine-project.org/faq for more information and troubleshooting tips. The <application>Transcode</application> Utilities Transcode provides a suite of tools for re-encoding video and audio files. Transcode can be used to merge video files or repair broken files using command line tools with stdin/stdout stream interfaces. In &os;, Transcode can be installed using the multimedia/transcode package or port. Many users prefer to compile the port as it provides a menu of compile options for specifying the support and codecs to compile in. If an option is not selected, Transcode will not be able to encode that format. Use the arrow keys and spacebar to select the required formats. When finished, press Enter to continue the port compile and installation. This example demonstrates how to convert a DivX file into a PAL MPEG-1 file (PAL VCD): &prompt.user; transcode -i input.avi -V --export_prof vcd-pal -o output_vcd &prompt.user; mplex -f 1 -o output_vcd.mpg output_vcd.m1v output_vcd.mpa The resulting MPEG file, output_vcd.mpg, is ready to be played with MPlayer. The file can be burned on a CD media to create a video CD using a utility such as multimedia/vcdimager or sysutils/cdrdao. In addition to the manual page for transcode, refer to transcoding.org/cgi-bin/transcode for further information and examples. TV Cards Josef El-Rayes Original contribution by Marc Fonvieille Enhanced and adapted by TV cards TV cards can be used to watch broadcast or cable TV on a computer. Most cards accept composite video via an RCA or S-video input and some cards include a FM radio tuner. &os; provides support for PCI-based TV cards using a Brooktree Bt848/849/878/879 video capture chip with the &man.bktr.4; driver. This driver supports most Pinnacle PCTV video cards. Before purchasing a TV card, consult &man.bktr.4; for a list of supported tuners. Loading the Driver In order to use the card, the &man.bktr.4; driver must be loaded. To automate this at boot time, add the following line to /boot/loader.conf: bktr_load="YES" Alternatively, one can statically compile support for the TV card into a custom kernel. In that case, add the following lines to the custom kernel configuration file: device bktr device iicbus device iicbb device smbus These additional devices are necessary as the card components are interconnected via an I2C bus. Then, build and install a new kernel. To test that the tuner is correctly detected, reboot the system. 
The TV card should appear in the boot messages, as seen in this example: bktr0: <BrookTree 848A> mem 0xd7000000-0xd7000fff irq 10 at device 10.0 on pci0 iicbb0: <I2C bit-banging driver> on bti2c0 iicbus0: <Philips I2C bus> on iicbb0 master-only iicbus1: <Philips I2C bus> on iicbb0 master-only smbus0: <System Management Bus> on bti2c0 bktr0: Pinnacle/Miro TV, Philips SECAM tuner. The messages will differ according to the hardware. If necessary, it is possible to override some of the detected parameters using &man.sysctl.8; or custom kernel configuration options. For example, to force the tuner to a Philips SECAM tuner, add the following line to a custom kernel configuration file: options OVERRIDE_TUNER=6 or, use &man.sysctl.8;: &prompt.root; sysctl hw.bt848.tuner=6 Refer to &man.bktr.4; for a description of the available &man.sysctl.8; parameters and kernel options. Useful Applications To use the TV card, install one of the following applications: multimedia/fxtv provides TV-in-a-window and image/audio/video capture capabilities. multimedia/xawtv is another TV application with similar features. audio/xmradio provides an application for using the FM radio tuner of a TV card. More applications are available in the &os; Ports Collection. Troubleshooting If any problems are encountered with the TV card, check that the video capture chip and the tuner are supported by &man.bktr.4; and that the right configuration options were used. For more support or to ask questions about supported TV cards, refer to the &a.multimedia.name; mailing list. MythTV MythTV is a popular, open source Personal Video Recorder (PVR) application. This section demonstrates how to install and setup MythTV on &os;. Refer to mythtv.org/wiki for more information on how to use MythTV. MythTV requires a frontend and a backend. These components can either be installed on the same system or on different machines. The frontend can be installed on &os; using the multimedia/mythtv-frontend package or port. &xorg; must also be installed and configured as described in . Ideally, this system has a video card that supports X-Video Motion Compensation (XvMC) and, optionally, a Linux Infrared Remote Control (LIRC)-compatible remote. To install both the backend and the frontend on &os;, use the multimedia/mythtv package or port. A &mysql; database server is also required and should automatically be installed as a dependency. Optionally, this system should have a tuner card and sufficient storage to hold recorded data. Hardware MythTV uses Video for Linux (V4L) to access video input devices such as encoders and tuners. In &os;, MythTV works best with USB DVB-S/C/T cards as they are well supported by the multimedia/webcamd package or port which provides a V4L userland application. Any Digital Video Broadcasting (DVB) card supported by webcamd should work with MythTV. A list of known working cards can be found at wiki.freebsd.org/WebcamCompat. Drivers are also available for Hauppauge cards in the multimedia/pvr250 and multimedia/pvrxxx ports, but they provide a non-standard driver interface that does not work with versions of MythTV greater than 0.23. Due to licensing restrictions, no packages are available and these two ports must be compiled. The wiki.freebsd.org/HTPC page contains a list of all available DVB drivers. 
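As a sketch of the webcamd approach for a USB tuner, the daemon can be installed and enabled like any other service. The package name below corresponds to the multimedia/webcamd port; any device-specific flags needed by a particular tuner are not shown here:

&prompt.root; pkg install webcamd
&prompt.root; echo 'webcamd_enable="YES"' >> /etc/rc.conf
&prompt.root; service webcamd start

Once webcamd has attached to the tuner, it exposes the usual V4L and DVB device nodes for MythTV to use.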
Setting up the MythTV Backend To install MythTV using the port: &prompt.root; cd /usr/ports/multimedia/mythtv &prompt.root; make install Once installed, set up the MythTV database: &prompt.root; mysql -uroot -p < /usr/local/share/mythtv/database/mc.sql Then, configure the backend: &prompt.root; mythtv-setup Finally, start the backend: &prompt.root; echo 'mythbackend_enable="YES"' >> /etc/rc.conf &prompt.root; service mythbackend start Image Scanners Marc Fonvieille Written by image scanners In &os;, access to image scanners is provided by SANE (Scanner Access Now Easy), which is available in the &os; Ports Collection. SANE will also use some &os; device drivers to provide access to the scanner hardware. &os; supports both SCSI and USB scanners. Depending upon the scanner interface, different device drivers are required. Be sure the scanner is supported by SANE prior to performing any configuration. Refer to http://www.sane-project.org/sane-supported-devices.html for more information about supported scanners. This chapter describes how to determine if the scanner has been detected by &os;. It then provides an overview of how to configure and use SANE on a &os; system. Checking the Scanner The GENERIC kernel includes the device drivers needed to support USB scanners. Users with a custom kernel should ensure that the following lines are present in the custom kernel configuration file: device usb device uhci device ohci device ehci To determine if the USB scanner is detected, plug it in and use dmesg to determine whether the scanner appears in the system message buffer. If it does, it should display a message similar to this: ugen0.2: <EPSON> at usbus0 In this example, an &epson.perfection; 1650 USB scanner was detected on /dev/ugen0.2. If the scanner uses a SCSI interface, it is important to know which SCSI controller board it will use. Depending upon the SCSI chipset, a custom kernel configuration file may be needed. The GENERIC kernel supports the most common SCSI controllers. Refer to /usr/src/sys/conf/NOTES to determine the correct line to add to a custom kernel configuration file. In addition to the SCSI adapter driver, the following lines are needed in a custom kernel configuration file: device scbus device pass Verify that the device is displayed in the system message buffer: pass2 at aic0 bus 0 target 2 lun 0 pass2: <AGFA SNAPSCAN 600 1.10> Fixed Scanner SCSI-2 device pass2: 3.300MB/s transfers If the scanner was not powered-on at system boot, it is still possible to manually force detection by performing a SCSI bus scan with camcontrol: &prompt.root; camcontrol rescan all Re-scan of bus 0 was successful Re-scan of bus 1 was successful Re-scan of bus 2 was successful Re-scan of bus 3 was successful The scanner should now appear in the SCSI devices list: &prompt.root; camcontrol devlist <IBM DDRS-34560 S97B> at scbus0 target 5 lun 0 (pass0,da0) <IBM DDRS-34560 S97B> at scbus0 target 6 lun 0 (pass1,da1) <AGFA SNAPSCAN 600 1.10> at scbus1 target 2 lun 0 (pass3) <PHILIPS CDD3610 CD-R/RW 1.00> at scbus2 target 0 lun 0 (pass2,cd0) Refer to &man.scsi.4; and &man.camcontrol.8; for more details about SCSI devices on &os;. <application>SANE</application> Configuration The SANE system is split in two parts: the backends (graphics/sane-backends) and the frontends (graphics/sane-frontends or graphics/xsane). The backends provide access to the scanner. Refer to http://www.sane-project.org/sane-supported-devices.html to determine which backend supports the scanner. 
The frontends provide the graphical scanning interface. graphics/sane-frontends installs xscanimage while graphics/xsane installs xsane. After installing the graphics/sane-backends port or package, use sane-find-scanner to check the scanner detection by the SANE system: &prompt.root; sane-find-scanner -q found SCSI scanner "AGFA SNAPSCAN 600 1.10" at /dev/pass3 The output should show the interface type of the scanner and the device node used to attach the scanner to the system. The vendor and the product model may or may not appear. Some USB scanners require firmware to be loaded. Refer to sane-find-scanner(1) and sane(7) for details. Next, check if the scanner will be identified by a scanning frontend. The SANE backends include scanimage which can be used to list the devices and perform an image acquisition. Use to list the scanner devices. The first example is for a SCSI scanner and the second is for a USB scanner: &prompt.root; scanimage -L device `snapscan:/dev/pass3' is a AGFA SNAPSCAN 600 flatbed scanner &prompt.root; scanimage -L device 'epson2:libusb:/dev/usb:/dev/ugen0.2' is a Epson GT-8200 flatbed scanner In this second example, 'epson2:libusb:/dev/usb:/dev/ugen0.2' is the backend name (epson2) and /dev/ugen0.2 is the device node used by the scanner. If scanimage is unable to identify the scanner, this message will appear: &prompt.root; scanimage -L No scanners were identified. If you were expecting something different, check that the scanner is plugged in, turned on and detected by the sane-find-scanner tool (if appropriate). Please read the documentation which came with this software (README, FAQ, manpages). If this happens, edit the backend configuration file in /usr/local/etc/sane.d/ and define the scanner device used. For example, if the undetected scanner model is an &epson.perfection; 1650 and it uses the epson2 backend, edit /usr/local/etc/sane.d/epson2.conf. When editing, add a line specifying the interface and the device node used. In this case, add the following line: usb /dev/ugen0.2 Save the edits and verify that the scanner is identified with the right backend name and the device node: &prompt.root; scanimage -L device 'epson2:libusb:/dev/usb:/dev/ugen0.2' is a Epson GT-8200 flatbed scanner Once scanimage -L sees the scanner, the configuration is complete and the scanner is now ready to use. While scanimage can be used to perform an image acquisition from the command line, it is often preferable to use a graphical interface to perform image scanning. The graphics/sane-frontends package or port installs a simple but efficient graphical interface, xscanimage. Alternately, xsane, which is installed with the graphics/xsane package or port, is another popular graphical scanning frontend. It offers advanced features such as various scanning modes, color correction, and batch scans. Both of these applications are usable as a GIMP plugin. Scanner Permissions In order to have access to the scanner, a user needs read and write permissions to the device node used by the scanner. In the previous example, the USB scanner uses the device node /dev/ugen0.2 which is really a symlink to the real device node /dev/usb/0.2.0. The symlink and the device node are owned, respectively, by the wheel and operator groups. While adding the user to these groups will allow access to the scanner, it is considered insecure to add a user to wheel. A better solution is to create a group and make the scanner device accessible to members of this group. 
This example creates a group called usb: &prompt.root; pw groupadd usb Then, make the /dev/ugen0.2 symlink and the /dev/usb/0.2.0 device node accessible to the usb group with write permissions of 0660 by adding the following lines to /etc/devfs.rules: [system=5] add path ugen0.2 mode 0660 group usb add path usb/0.2.0 mode 0660 group usb Finally, add the users to usb in order to allow access to the scanner: &prompt.root; pw groupmod usb -m joe For more details refer to &man.pw.8;.
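Rules added to /etc/devfs.rules only take effect when the ruleset is actually applied. One way to apply the example ruleset at boot, assuming the ruleset name system from the [system=5] line above, is to reference it in /etc/rc.conf and then restart the devfs service:

devfs_system_ruleset="system"

&prompt.root; service devfs restart

After this, the ownership and permissions set by the rules are reapplied at every boot and to device nodes created later, such as when the scanner is re-attached.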
Index: head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 46049) @@ -1,5790 +1,5790 @@ Network Servers Synopsis This chapter covers some of the more frequently used network services on &unix; systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference. By the end of this chapter, readers will know: How to manage the inetd daemon. How to set up the Network File System (NFS). How to set up the Network Information Server (NIS) for centralizing and sharing user accounts. How to set &os; up to act as an LDAP server or client How to set up automatic network settings using DHCP. How to set up a Domain Name Server (DNS). How to set up the Apache HTTP Server. How to set up a File Transfer Protocol (FTP) server. How to set up a file and print server for &windows; clients using Samba. How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). How to set up iSCSI. This chapter assumes a basic knowledge of: /etc/rc scripts. Network terminology. Installation of additional third-party software (). The <application>inetd</application> Super-Server The &man.inetd.8; daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. Configuration File Configuration of inetd is done by editing /etc/inetd.conf. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (#), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the # at the beginning of the line for that application. After saving your edits, configure inetd to start at system boot by editing /etc/rc.conf: inetd_enable="YES" To start inetd now, so that it listens for the service you configured, type: &prompt.root; service inetd start Once inetd is started, it needs to be notified whenever a modification is made to /etc/inetd.conf: Reloading the <application>inetd</application> Configuration File &prompt.root; service inetd reload Typically, the default entry for an application does not need to be edited beyond removing the #. In some situations, it may be appropriate to edit the default entry. 
As an example, this is the default entry for &man.ftpd.8; over IPv4: ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l The seven columns in an entry are as follows: service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments where: service-name The service name of the daemon to start. It must correspond to a service listed in /etc/services. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to /etc/services. socket-type Either stream, dgram, raw, or seqpacket. Use stream for TCP connections and dgram for UDP services. protocol Use one of the following protocol names: Protocol Name Explanation tcp or tcp4 TCP IPv4 udp or udp4 UDP IPv4 tcp6 TCP IPv6 udp6 UDP IPv6 tcp46 Both TCP IPv4 and IPv6 udp46 Both UDP IPv4 and IPv6 {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] In this field, wait or nowait must be specified. max-child, max-connections-per-ip-per-minute, and max-child-per-ip are optional. wait or nowait indicates whether or not the service is able to handle its own socket. dgram socket types must use wait, while stream daemons, which are usually multi-threaded, should use nowait. wait usually hands off multiple sockets to a single daemon, while nowait spawns a child daemon for each new socket. The maximum number of child daemons inetd may spawn is set by max-child. For example, to limit the daemon to ten instances, place a /10 after nowait. Specifying /0 allows an unlimited number of children. max-connections-per-ip-per-minute limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of /10 would limit any particular IP address to ten connection attempts per minute. max-child-per-ip limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. An example can be seen in the default settings for &man.fingerd.8;: finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s user The username the daemon will run as. Daemons typically run as root, daemon, or nobody. server-program The full path to the daemon. If the daemon is a service provided by inetd internally, use internal. server-program-arguments Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use internal. Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its behaviour. By default, inetd is started with -wW -C 60. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for inetd_flags in /etc/rc.conf. If inetd is already running, restart it with service inetd restart. The available rate limiting options are: -c maximum Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using max-child in /etc/inetd.conf. -C rate Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using max-connections-per-ip-per-minute in /etc/inetd.conf. -R rate Specify the maximum number of times a service can be invoked in one minute, where the default is 256.
A rate of 0 allows an unlimited number. -s maximum Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using in /etc/inetd.conf. Additional options are available. Refer to &man.inetd.8; for the full list of options. Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. max-connections-per-ip-per-minute, max-child and max-child-per-ip can be used to limit such attacks. By default, TCP wrappers is enabled. Consult &man.hosts.access.5; for more information on placing TCP restrictions on various inetd invoked daemons. Network File System (NFS) Tom Rhodes Reorganized and enhanced by Bill Swingle Written by NFS &os; supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. Some of the more common uses include: Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. Several clients may need access to the /usr/ports/distfiles directory. Sharing that directory allows for quick access to the source files without having to download them to each client. On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from a centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: NFS server file server UNIX clients rpcbind mountd nfsd Daemon Description nfsd The NFS daemon which services requests from NFS clients. mountd The NFS mount daemon which carries out requests received from nfsd. rpcbind This daemon allows NFS clients to discover which port the NFS server is using. Running &man.nfsiod.8; on the client can improve performance, but is not required. Configuring the Server NFS configuration The file systems which the NFS server will share are specified in /etc/exports. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system. NFS export examples The following /etc/exports entries demonstrate how to export file systems. 
The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See &man.exports.5; for the full list of options. This example shows how to export /cdrom to three hosts named alpha, bravo, and charlie: /cdrom -ro alpha bravo charlie The -ro flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in /etc/hosts. Refer to &man.hosts.5; if the network does not have a DNS server. The next example exports /home to three clients by IP address. This can be useful for networks without DNS or /etc/hosts entries. The -alldirs flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. /home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 This next example exports /a so that two clients from different domains may access that file system. The allows root on the remote system to write data on the exported file system as root. If -maproot=root is not specified, the client's root user will be mapped to the server's nobody account and will be subject to the access limitations defined for nobody. /a -maproot=root host.example.com box.example.org A client can only be specified once per file system. For example, if /usr is a single file system, these entries would be invalid as both entries specify the same host: # Invalid when /usr is one file system /usr/src client /usr/ports client The correct format for this situation is to use one entry: /usr/src /usr/ports client The following is an example of a valid export list, where /usr and /exports are local file systems: # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro To enable the processes required by the NFS server at boot time, add these options to /etc/rc.conf: rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" The server can be started now by running this command: &prompt.root; service nfsd start Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads /etc/exports when it is started. To make subsequent /etc/exports edits take effect immediately, force mountd to reread it: &prompt.root; service mountd reload Configuring the Client To enable NFS clients, set this option in each client's /etc/rc.conf: nfs_client_enable="YES" Then, run this command on each NFS client: &prompt.root; service nfsclient start The client now has everything it needs to mount a remote file system. In these examples, the server's name is server and the client's name is client. To mount /home on server to the /mnt mount point on client: NFS mounting &prompt.root; mount server:/home /mnt The files and directories in /home will now be available on client, in the /mnt directory. To mount a remote file system each time the client boots, add it to /etc/fstab: server:/home /mnt nfs rw 0 0 Refer to &man.fstab.5; for a description of all available options. Locking Some applications require file locking to operate correctly. 
To enable locking, add these lines to /etc/rc.conf on both the client and server: rpc_lockd_enable="YES" rpc_statd_enable="YES" Then start the applications: &prompt.root; service lockd start &prompt.root; service statd start If locking is not required on the server, the NFS client can be configured to lock locally by including when running mount. Refer to &man.mount.nfs.8; for further details. Automating Mounts With &man.amd.8; Wylie Stilwell Contributed by Chern Lee Rewritten by amd automatic mounter daemon The automatic mounter daemon, amd, automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will be automatically unmounted by amd. This daemon provides an alternative to modifying /etc/fstab to list every client. It operates by attaching itself as an NFS server to the /host and /net directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. /net is used to mount an exported file system from an IP address while /host is used to mount an export from a remote hostname. For instance, an attempt to access a file within /host/foobar/usr would tell amd to mount the /usr export on the host foobar. Mounting an Export with <application>amd</application> In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /host/foobar/usr The output from showmount shows /usr as an export. When changing directories to /host/foobar/usr, amd intercepts the request and attempts to resolve the hostname foobar. If successful, amd automatically mounts the desired export. To enable amd at boot time, add this line to /etc/rc.conf: amd_enable="YES" To start amd now: &prompt.root; service amd start Custom flags can be passed to amd from the amd_flags environment variable. By default, amd_flags is set to: amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" The default options with which exports are mounted are defined in /etc/amd.map. Some of the more advanced features of amd are defined in /etc/amd.conf. Consult &man.amd.8; and &man.amd.conf.5; for more information. Automating Mounts with &man.autofs.5; The &man.autofs.5; automount facility is supported starting with &os; 10.1-RELEASE. To use the automounter functionality in older versions of &os;, use &man.amd.8; instead. This chapter only describes the &man.autofs.5; automounter. autofs automounter subsystem The &man.autofs.5; facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, &man.autofs.5;, and several userspace applications: &man.automount.8;, &man.automountd.8; and &man.autounmountd.8;. It serves as an alternative for &man.amd.8; from previous &os; releases. Amd is still provided for backward compatibility purposes, as the two use different map format; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, MacOS X, and Linux. - The &man.autofs.5; virtual filesystem is mounted on specified - mountpoints by &man.automount.8;, usually invoked during - boot. + The &man.autofs.5; virtual filesystem is mounted on + specified mountpoints by &man.automount.8;, usually invoked + during boot. 
Whenever a process attempts to access a file within the &man.autofs.5; mountpoint, the kernel will notify the &man.automountd.8; daemon and pause the triggering process. The &man.automountd.8; daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, then signal the kernel to release the blocked process. The &man.autounmountd.8; daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is /etc/auto_master. It assigns individual maps to top-level mounts. For an explanation of auto_master and the map syntax, refer to &man.auto.master.5;. There is a special automounter map mounted on /net. When a file is accessed within this directory, &man.autofs.5; looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within /net/foobar/usr would tell &man.automountd.8; to mount the /usr export from the host foobar. Mounting an Export With &man.autofs.5; In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /net/foobar/usr The output from showmount shows /usr as an export. When changing directories to /net/foobar/usr, &man.automountd.8; intercepts the request and attempts to resolve the hostname foobar. If successful, &man.automountd.8; automatically mounts the desired export. To enable &man.autofs.5; at boot time, add this line to /etc/rc.conf: autofs_enable="YES" Then &man.autofs.5; can be started by running: &prompt.root; service automount start &prompt.root; service automountd start &prompt.root; service autounmountd start Since the &man.autofs.5; map format is the same as in other operating systems, it might be desirable to consult information from other operating systems, such as the Mac OS X documentation. Consult the &man.automount.8;, &man.automountd.8;, &man.autounmountd.8;, and &man.auto.master.5; manual pages for more information. Network Information System (<acronym>NIS</acronym>) NIS Solaris HP-UX AIX Linux NetBSD OpenBSD yellow pages NIS Network Information System (NIS) is designed to centralize administration of &unix;-like systems such as &solaris;, HP-UX, &aix;, Linux, NetBSD, OpenBSD, and &os;. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with yp. NIS domains NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. &os; uses version 2 of the NIS protocol. <acronym>NIS</acronym> Terms and Processes Table 28.1 summarizes the terms and important processes used by NIS: rpcbind portmap <acronym>NIS</acronym> Terminology Term Description NIS domain name NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS. &man.rpcbind.8; This service enables RPC and must be running in order to run an NIS server or act as an NIS client. &man.ypbind.8; This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment.
If this service is not running on a client machine, it will not be able to access the NIS server. &man.ypserv.8; This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-&os; clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients. &man.rpc.yppasswdd.8; This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to login to the NIS master server and change their passwords there.
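Because each of these components registers itself with rpcbind, a quick way to check which NIS services a given host is currently offering is to query it with rpcinfo. The host name in this example is only a placeholder:

&prompt.user; rpcinfo -p nisserver

The output lists the registered RPC programs; entries such as ypserv or yppasswdd indicate that the corresponding daemons are running on that host.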
Machine Types NIS master server NIS slave server NIS client There are three types of hosts in an NIS environment: NIS master server This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The passwd, group, and other various files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment. NIS slave servers NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first. NIS clients NIS clients authenticate against the NIS server during log on. Information in many files can be shared using NIS. The master.passwd, group, and hosts files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead. Planning Considerations This section describes a sample NIS environment which consists of 15 &os; machines with no centralized point of administration. Each machine has its own /etc/passwd and /etc/master.passwd. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines. The configuration of the lab will be as follows: Machine name IP address Machine role ellington 10.0.0.2 NIS master coltrane 10.0.0.3 NIS slave basie 10.0.0.4 Faculty workstation bird 10.0.0.5 Client machine cli[1-11] 10.0.0.[6-17] Other client machines If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process. Choosing a <acronym>NIS</acronym> Domain Name NIS domain name When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts. Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the acme-art NIS domain. This example will use the domain name test-domain. However, some non-&os; operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name must be used as the NIS domain name. Physical Server Requirements There are several things to keep in mind when choosing a machine to use as a NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a stand alone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. 
However, if the NIS server becomes unavailable, it will adversely affect all NIS clients. Configuring the <acronym>NIS</acronym> Master Server The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In &os;, these maps are stored in /var/yp/[domainname] where [domainname] is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps. NIS master and slave servers handle all NIS requests through &man.ypserv.8;. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client. NIS server configuration Setting up a master NIS server can be relatively straight forward, depending on environmental needs. Since &os; provides built-in NIS support, it only needs to be enabled by adding the following lines to /etc/rc.conf: nisdomainname="test-domain" This line sets the NIS domain name to test-domain. nis_server_enable="YES" This automates the start up of the NIS server processes when the system boots. nis_yppasswdd_enable="YES" This enables the &man.rpc.yppasswdd.8; daemon so that users can change their NIS password from a client machine. Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again. A server that is also a client can be forced to bind to a particular server by adding these additional lines to /etc/rc.conf: nis_client_enable="YES" # run client stuff as well nis_client_flags="-S NIS domain,server" After saving the edits, type /etc/netstart to restart the network and apply the values defined in /etc/rc.conf. Before initializing the NIS maps, start &man.ypserv.8;: &prompt.root; service ypserv start Initializing the <acronym>NIS</acronym> Maps NIS maps NIS maps are generated from the configuration files in /etc on the NIS master, with one exception: /etc/master.passwd. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files: &prompt.root; cp /etc/master.passwd /var/yp/master.passwd &prompt.root; cd /var/yp &prompt.root; vi master.passwd It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as the root and any other administrative accounts. Ensure that the /var/yp/master.passwd is neither group or world readable by setting its permissions to 600. After completing this task, initialize the NIS maps. &os; includes the &man.ypinit.8; script to do this. When generating maps for the master server, include and specify the NIS domain name: ellington&prompt.root; ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. 
Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a <control D>. master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. This will create /var/yp/Makefile from /var/yp/Makefile.dist. By default, this file assumes that the environment has a single NIS server with only &os; clients. Since test-domain has a slave server, edit this line in /var/yp/Makefile so that it begins with a comment (#): NOPUSH = "True" Adding New Users Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to login anywhere except on the NIS master. For example, to add the new user jsmith to the test-domain domain, run these commands on the master server: &prompt.root; pw useradd jsmith &prompt.root; cd /var/yp &prompt.root; make test-domain The user could also be added using adduser jsmith instead of pw useradd smith. Setting up a <acronym>NIS</acronym> Slave Server NIS slave server To set up an NIS slave server, log on to the slave server and edit /etc/rc.conf as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running ypinit on the slave server, use (for slave) instead of (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example: coltrane&prompt.root; ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... 
ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. This will generate a directory on the slave server called /var/yp/test-domain which contains copies of the NIS master server's maps. Adding these /etc/crontab entries on each slave server will force the slaves to sync their maps with the maps on the master server: 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete. To finish the configuration, run /etc/netstart on the slave server in order to start the NIS services. Setting Up an <acronym>NIS</acronym> Client An NIS client binds to an NIS server using &man.ypbind.8;. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server. NIS client configuration To configure a &os; machine to be an NIS client: Edit /etc/rc.conf and add the following lines in order to set the NIS domain name and start &man.ypbind.8; during network startup: nisdomainname="test-domain" nis_client_enable="YES" To import all possible password entries from the NIS server, use vipw to remove all user accounts except one from /etc/master.passwd. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of wheel. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file: +::::::::: This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in . For more detailed reading, refer to the book Managing NFS and NIS, published by O'Reilly Media. 
To import all possible group entries from the NIS server, add this line to /etc/group: +:*:: To start the NIS client immediately, execute the following commands as the superuser: &prompt.root; /etc/netstart &prompt.root; service ypbind start After completing these steps, running ypcat passwd on the client should show the server's passwd map. <acronym>NIS</acronym> Security Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, &man.ypserv.8; supports a feature called securenets which can be used to restrict access to a given set of hosts. By default, this information is stored in /var/yp/securenets, unless &man.ypserv.8; is started with and an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with # are considered to be comments. A sample securenets might look like this: # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 If &man.ypserv.8; receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the securenets does not exist, ypserv will allow connections from any host. is an alternate mechanism for providing access control instead of securenets. While either access control mechanism adds some security, they are both vulnerable to IP spoofing attacks. All NIS-related traffic should be blocked at the firewall. Servers using securenets may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of securenets. TCP Wrapper The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves. Barring Some Users In this example, the basie system is a faculty workstation within the NIS domain. The passwd map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins. To prevent specified users from logging on to a system, even if they are present in the NIS database, use vipw to add -username with the correct number of colons towards the end of /etc/master.passwd on the client, where username is the username of a user to bar from logging in. The line with the blocked user must be before the + line that allows NIS users. 
In this example, bill is barred from logging on to basie: basie&prompt.root; cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin operator:*:2:5::0:0:System &:/:/sbin/nologin bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin news:*:8:8::0:0:News Subsystem:/:/sbin/nologin man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/sbin/nologin bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin -bill::::::::: +::::::::: basie&prompt.root; Using Netgroups netgroups Barring specified users from logging on to individual systems becomes unscaleable on larger networks and quickly loses the main benefit of NIS: centralized administration. Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to &unix; groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups. To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3: Additional Users User Name(s) Description alpha, beta IT department employees charlie, delta IT department apprentices echo, foxtrott, golf, ... employees able, baker, ... interns
Additional Systems Machine Name(s) Description war, death, famine, pollution Only IT employees are allowed to log onto these servers. pride, greed, envy, wrath, lust, sloth All members of the IT department are allowed to login onto these servers. one, two, three, four, ... Ordinary workstations used by employees. trashcan A very old machine without any critical data. Even interns are allowed to use this system.
When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines. The first step is the initialization of the NIS netgroup map. In &os;, this map is not created by default. On the NIS master server, use an editor to create a map named /var/yp/netgroup. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns: IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) USERS (,echo,test-domain) (,foxtrott,test-domain) \ (,golf,test-domain) INTERNS (,able,test-domain) (,baker,test-domain) Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of brackets represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent: The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts. The name of the account that belongs to this netgroup. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup. If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See &man.netgroup.5; for details. netgroups Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine, and netgroup names. Some non-&os; NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example: BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...] BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 Repeat this process if more than 225 (15 times 15) users exist within a single netgroup. To activate and distribute the new NIS map: ellington&prompt.root; cd /var/yp ellington&prompt.root; make This will generate the three NIS maps netgroup, netgroup.byhost and netgroup.byuser. Use the map key option of &man.ypcat.1; to check if the new NIS maps are available: ellington&prompt.user; ypcat -k netgroup ellington&prompt.user; ypcat -k netgroup.byhost ellington&prompt.user; ypcat -k netgroup.byuser The output of the first command should resemble the contents of /var/yp/netgroup. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user. To configure a client, use &man.vipw.8; to specify the name of the netgroup. For example, on the server named war, replace this line: +::::::::: with +@IT_EMP::::::::: This specifies that only the users defined in the netgroup IT_EMP will be imported into this system's password database and only those users are allowed to login to this system. This configuration also applies to the ~ function of the shell and all routines which convert between user names and numerical user IDs.
In other words, cd ~user will not work, ls -l will show the numerical ID instead of the username, and find . -user joe -print will fail with the message No such user. To fix this, import all user entries without allowing them to login into the servers. This can be achieved by adding an extra line: +:::::::::/sbin/nologin This line configures the client to import all entries but to replace the shell in those entries with /sbin/nologin. Make sure that extra line is placed after +@IT_EMP:::::::::. Otherwise, all user accounts imported from NIS will have /sbin/nologin as their login shell and no one will be able to login to the system. To configure the less important servers, replace the old +::::::::: on the servers with these lines: +@IT_EMP::::::::: +@IT_APP::::::::: +:::::::::/sbin/nologin The corresponding lines for the workstations would be: +@IT_EMP::::::::: +@USERS::::::::: +:::::::::/sbin/nologin NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called BIGSRV to define the login restrictions for the important servers, another netgroup called SMALLSRV for the less important servers, and a third netgroup called USERBOX for the workstations. Each of these netgroups contains the netgroups that are allowed to login onto these machines. The new entries for the NIS netgroup map would look like this: BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required. Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the /etc/master.passwd of each system contains two lines starting with +. The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other accounts with /sbin/nologin as shell. It is recommended to use the ALL-CAPS version of the hostname as the name of the netgroup: +@BOXNAME::::::::: +:::::::::/sbin/nologin Once this task is completed on all the machines, there is no longer a need to modify the local versions of /etc/master.passwd ever again. All further changes can be handled by modifying the NIS map. 
Here is an example of a possible netgroup map for this scenario: # Define groups of users first IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) DEPT1 (,echo,test-domain) (,foxtrott,test-domain) DEPT2 (,golf,test-domain) (,hotel,test-domain) DEPT3 (,india,test-domain) (,juliet,test-domain) ITINTERN (,kilo,test-domain) (,lima,test-domain) D_INTERNS (,able,test-domain) (,baker,test-domain) # # Now, define some groups based on roles USERS DEPT1 DEPT2 DEPT3 BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS # # And a group for special tasks # Allow echo and golf to access our anti-virus-machine SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain) # # machine-based netgroups # Our main servers WAR BIGSRV FAMINE BIGSRV # User india needs access to this server POLLUTION BIGSRV (,india,test-domain) # # This one is really important and needs more access restrictions DEATH IT_EMP # # The anti-virus-machine mentioned above ONE SECURITY # # Restrict a machine to a single user TWO (,hotel,test-domain) # [...more groups to follow] It may not always be advisable to use machine-based netgroups. When deploying dozens or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.
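Following the convention described above, the two + lines in each machine's /etc/master.passwd then reference the matching ALL-CAPS netgroup. For the server war from the map above, a minimal sketch of these entries would be:

+@WAR:::::::::
+:::::::::/sbin/nologin

The netgroup line must come first so that the members of WAR keep their real login shells, while every other imported account falls through to /sbin/nologin.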
Password Formats NIS password formats NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard. To check which format a server or client is using, look at this section of /etc/login.conf: default:\ :passwd_format=des:\ :copyright=/etc/COPYRIGHT:\ [Further entries elided] In this example, the system is using the DES format. Other possible values are blf for Blowfish and md5 for MD5 encrypted passwords. If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change: &prompt.root; cap_mkdb /etc/login.conf The format of passwords for existing user accounts will not be updated until each user changes their password after the login capability database is rebuilt.
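For example, to switch a host from DES to Blowfish hashes so that it matches the rest of an NIS domain, change the passwd_format entry and rebuild the database. This is only a sketch of the relevant lines; the remainder of the default entry stays as shipped:

default:\
	:passwd_format=blf:\
	:copyright=/etc/COPYRIGHT:\
[Further entries elided]

&prompt.root; cap_mkdb /etc/login.conf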
Lightweight Directory Access Protocol (<acronym>LDAP</acronym>) Tom Rhodes Written by LDAP The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base. This section provides a quick start guide for configuring an LDAP server on a &os; system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access. <acronym>LDAP</acronym> Terminology and Structure LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of attributes. Each of these attribute sets contains a unique identifier known as a Distinguished Name (DN) which is normally built from several other attributes such as the common or Relative Distinguished Name (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path. An example LDAP entry looks like the following. This example searches for the entry for the specified user account (uid), organizational unit (ou), and organization (o): &prompt.user; ldapsearch -xb "uid=trhodes,ou=users,o=example.com" # extended LDIF # # LDAPv3 # base <uid=trhodes,ou=users,o=example.com> with scope subtree # filter: (objectclass=*) # requesting: ALL # # trhodes, users, example.com dn: uid=trhodes,ou=users,o=example.com mail: trhodes@example.com cn: Tom Rhodes uid: trhodes telephoneNumber: (123) 456-7890 # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This example entry shows the values for the dn, mail, cn, uid, and telephoneNumber attributes. The cn attribute is the RDN. More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html. Configuring an <acronym>LDAP</acronym> Server LDAP Server &os; does not provide a built-in LDAP server. Begin the configuration by installing the net/openldap24-server package or port. Since the port has many configurable options, it is recommended that the default options are reviewed to see if the package is sufficient, and to instead compile the port if any options should be changed. In most cases, the defaults are fine. However, if SQL support is needed, this option must be enabled and the port compiled using the instructions in . Next, create the directories to hold the data and to store the certificates: &prompt.root; mkdir /var/db/openldap-data &prompt.root; mkdir /usr/local/etc/openldap/private Copy over the database configuration file: &prompt.root; cp /usr/local/etc/openldap/DB_CONFIG.example /var/db/openldap-data/DB_CONFIG The next phase is to configure the certificate authority. The following commands must be executed from /usr/local/etc/openldap/private. This is important as the file permissions need to be restrictive and users should not have access to these files.
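One way to meet this requirement, assuming the directories created above, is to restrict the private directory to root before generating any keys, then change into it:

&prompt.root; chmod 700 /usr/local/etc/openldap/private
&prompt.root; cd /usr/local/etc/openldap/private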
To create the certificate authority, start with this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt The entries for the prompts may be generic except for the Common Name. This entry must be different from the system hostname. If this will be a self-signed certificate, prefix the hostname with CA for certificate authority. The next task is to create a certificate signing request and a private key. Input this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -keyout server.key -out server.csr During the certificate generation process, be sure to correctly set the Common Name attribute. Once complete, sign the key: &prompt.root; openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial The final part of the certificate generation process is to generate and sign the client certificates: &prompt.root; openssl req -days 365 -nodes -new -keyout client.key -out client.csr &prompt.root; openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key Remember to use the same Common Name attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated through the preceding commands. If so, the next step is to edit /usr/local/etc/openldap/slapd.conf and add the following options: TLSCipherSuite HIGH:MEDIUM:+SSLv3 TLSCertificateFile /usr/local/etc/openldap/server.crt TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key TLSCACertificateFile /usr/local/etc/openldap/ca.crt Then, edit /usr/local/etc/openldap/ldap.conf and add the following lines: TLS_CACERT /usr/local/etc/openldap/ca.crt TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3 While editing this file, uncomment the following entries and set them to the desired values: BASE, SIZELIMIT, and TIMELIMIT. Set the URI to contain ldap:// and ldaps://. Then, add two entries pointing to the certificate authority. When finished, the entries should look similar to the following: BASE dc=example,dc=com URI ldap:// ldaps:// SIZELIMIT 12 TIMELIMIT 15 TLS_CACERT /usr/local/etc/openldap/ca.crt TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3 The default password for the server should then be changed: &prompt.root; slappasswd -h "{SHA}" >> /usr/local/etc/openldap/slapd.conf This command will prompt for the password and, if the process does not fail, a password hash will be added to the end of slapd.conf. Several hashing formats are supported. Refer to the manual page for slappasswd for more information. Next, edit /usr/local/etc/openldap/slapd.conf and add the following lines: password-hash {sha} allow bind_v2 The suffix in this file must be updated to match the BASE used in /usr/local/etc/openldap/ldap.conf and rootdn should also be set. A recommended value for rootdn is something like cn=Manager,dc=example,dc=com. Before saving this file, place rootpw in front of the password output from slappasswd and delete the old rootpw entry.
The end result should look similar to this: TLSCipherSuite HIGH:MEDIUM:+SSLv3 TLSCertificateFile /usr/local/etc/openldap/server.crt TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key TLSCACertificateFile /usr/local/etc/openldap/ca.crt rootpw {SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g= Finally, enable the OpenLDAP service in /etc/rc.conf and set the URI: slapd_enable="YES" slapd_flags="-4 -h ldaps:///" At this point the server can be started and tested: &prompt.root; service slapd start If everything is configured correctly, a search of the directory should show a successful connection with a single response as in this example: &prompt.root; ldapsearch -Z # extended LDIF # # LDAPv3 # base <dc=example,dc=com> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # search result search: 3 result: 32 No such object # numResponses: 1 If the command fails and the configuration looks correct, stop the slapd service and restart it with debugging options: &prompt.root; service slapd stop &prompt.root; /usr/local/libexec/slapd -d -1 Once the service is responding, the directory can be populated using ldapadd. In this example, a file containing this list of users is first created. Each user should use the following format: dn: dc=example,dc=com objectclass: dcObject objectclass: organization o: Example dc: Example dn: cn=Manager,dc=example,dc=com objectclass: organizationalRole cn: Manager To import this file, specify the file name. The following command will prompt for the password specified earlier and the output should look something like this: &prompt.root; ldapadd -Z -D "cn=Manager,dc=example,dc=com" -W -f import.ldif Enter LDAP Password: adding new entry "dc=example,dc=com" adding new entry "cn=Manager,dc=example,dc=com" Verify the data was added by issuing a search on the server using ldapsearch: &prompt.user; ldapsearch -Z # extended LDIF # # LDAPv3 # base <dc=example,dc=com> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # example.com dn: dc=example,dc=com objectClass: dcObject objectClass: organization o: Example dc: Example # Manager, example.com dn: cn=Manager,dc=example,dc=com objectClass: organizationalRole cn: Manager # search result search: 3 result: 0 Success # numResponses: 3 # numEntries: 2 At this point, the server should be configured and functioning properly. Dynamic Host Configuration Protocol (<acronym>DHCP</acronym>) Dynamic Host Configuration Protocol DHCP Internet Systems Consortium (ISC) The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. &os; includes the OpenBSD version of dhclient which is used by the client to obtain the addressing information. &os; does not install a DHCP server, but several servers are available in the &os; Ports Collection. The DHCP protocol is fully described in RFC 2131. Informational resources are also available at isc.org/downloads/dhcp/. This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server. In &os;, the &man.bpf.4; device is needed by both the DHCP server and DHCP client. This device is included in the GENERIC kernel that is installed with &os;. Users who prefer to create a custom kernel need to keep this device if DHCP is used. It should be noted that bpf also allows privileged users to run network packet sniffers on that system. 
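For reference, bpf is provided by the following line in a kernel configuration file. It is already present in GENERIC, so it only needs to be kept, not added, when building a custom kernel:

device bpf # Berkeley packet filter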
Configuring a <acronym>DHCP</acronym> Client DHCP client support is included in the &os; installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to for examples of network configuration. UDP When dhclient is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP lease and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in &man.dhcp-options.5;. By default, when a &os; system boots, its DHCP client runs in the background, or asynchronously. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup. Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in synchronous mode prevents this problem as it pauses startup until the DHCP configuration has completed. This line in /etc/rc.conf is used to configure background or asynchronous mode: ifconfig_fxp0="DHCP" This line may already exist if the system was configured to use DHCP during installation. Replace the fxp0 shown in these examples with the name of the interface to be dynamically configured, as described in . To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use SYNCDHCP: ifconfig_fxp0="SYNCDHCP" Additional client options are available. Search for dhclient in &man.rc.conf.5; for details. DHCP configuration files The DHCP client uses the following files: /etc/dhclient.conf The configuration file used by dhclient. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in &man.dhclient.conf.5;. /sbin/dhclient More information about the command itself can be found in &man.dhclient.8;. /sbin/dhclient-script The &os;-specific DHCP client configuration script. It is described in &man.dhclient-script.8;, but should not need any user modification to function properly. /var/db/dhclient.leases.interface The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in &man.dhclient.leases.5;. Installing and Configuring a <acronym>DHCP</acronym> Server This section demonstrates how to configure a &os; system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the net/isc-dhcp42-server package or port. DHCP server DHCP installation The installation of net/isc-dhcp42-server installs a sample configuration file. Copy /usr/local/etc/dhcpd.conf.example to /usr/local/etc/dhcpd.conf and make any edits to this new file. DHCP dhcpd.conf The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients. 
For example, these lines configure the following: option domain-name "example.org"; option domain-name-servers ns1.example.org; option subnet-mask 255.255.255.0; default-lease-time 600; max-lease-time 72400; ddns-update-style none; subnet 10.254.239.0 netmask 255.255.255.224 { range 10.254.239.10 10.254.239.20; option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org; } host fantasia { hardware ethernet 08:00:07:26:c0:a5; fixed-address fantasia.fugue.com; } This option specifies the default search domain that will be provided to clients. Refer to &man.resolv.conf.5; for more information. This option specifies a comma-separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses. The subnet mask that will be provided to clients. The default lease expiry time in seconds. A client can be configured to override this value. The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for max-lease-time. The default of none disables dynamic DNS updates. Changing this to interim configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS. This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line. Declares the default gateway that is valid for the network or subnet specified before the opening { bracket. Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request. Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information. This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples. Once the configuration of dhcpd.conf is complete, enable the DHCP server in /etc/rc.conf: dhcpd_enable="YES" dhcpd_ifaces="dc0" Replace dc0 with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests. Start the server by issuing the following command: &prompt.root; service isc-dhcpd start Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using &man.service.8;. The DHCP server uses the following files. Note that the manual pages are installed with the server software. DHCP configuration files /usr/local/sbin/dhcpd More information about the dhcpd server can be found in dhcpd(8). /usr/local/etc/dhcpd.conf The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in dhcpd.conf(5). /var/db/dhcpd.leases The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description. /usr/local/sbin/dhcrelay This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network.
If this functionality is required, install the net/isc-dhcp42-relay package or port. The installation includes dhcrelay(8) which provides more detail. Domain Name System (<acronym>DNS</acronym>) DNS Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system. BIND In &os; 10, the Berkeley Internet Name Domain (BIND) has been removed from the base system and replaced with Unbound. Unbound as configured in the &os; Base is a local caching resolver. BIND is still available from The Ports Collection as dns/bind99 or dns/bind98. In &os; 9 and lower, BIND is included in &os; Base. The &os; version provides enhanced security features, a new file system layout, and automated &man.chroot.8; configuration. BIND is maintained by the Internet Systems Consortium. resolver reverse DNS root zone The following table describes some of the terms associated with DNS: <acronym>DNS</acronym> Terminology Term Definition Forward DNS Mapping of hostnames to IP addresses. Origin Refers to the domain covered in a particular zone file. named, BIND Common names for the BIND name server package within &os;. Resolver A system process through which a machine queries a name server for zone information. Reverse DNS Mapping of IP addresses to hostnames. Root zone The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory. Zone An individual domain, subdomain, or portion of the DNS administered by the same authority.
zones examples Examples of zones: . is how the root zone is usually referred to in documentation. org. is a Top Level Domain (TLD) under the root zone. example.org. is a zone under the org. TLD. 1.168.192.in-addr.arpa is a zone referencing all IP addresses which fall under the 192.168.1.* IP address space. As one can see, the more specific part of a hostname appears to its left. For example, example.org. is more specific than org., as org. is more specific than the root zone. The layout of each part of a hostname is much like a file system: the /dev directory falls within the root, and so on. Reasons to Run a Name Server Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers. An authoritative name server is needed when: One wants to serve DNS information to the world, replying authoritatively to queries. A domain, such as example.org, is registered and IP addresses need to be assigned to hostnames under it. An IP address block requires reverse DNS entries (IP to hostname). A backup or second name server, called a slave, will reply to queries. A caching name server is needed when: A local DNS server may cache and respond more quickly than querying an outside name server. When one queries for www.FreeBSD.org, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally. <acronym>DNS</acronym> Server Configuration in &os; 10.0 and Later In &os; 10.0, BIND has been replaced with Unbound. Unbound is a validating caching resolver only. If an authoritative server is needed, many are available from the Ports Collection. Unbound is provided in the &os; base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the &os; Ports Collection. To enable Unbound, add the following to /etc/rc.conf: local_unbound_enable="YES" Any existing nameservers in /etc/resolv.conf will be configured as forwarders in the new Unbound configuration. If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail. Be sure to test each nameserver and remove any that fail the test. The following command will show the trust tree or a failure for a nameserver running on 192.168.1.1: &prompt.user; drill -S FreeBSD.org @192.168.1.1 Once each nameserver is confirmed to support DNSSEC, start Unbound: &prompt.root; service local_unbound onestart This will take care of updating /etc/resolv.conf so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree: &prompt.user; drill -S FreeBSD.org ;; Number of trusted keys: 1 ;; Chasing: freebsd.org. A DNSSEC Trust tree: freebsd.org. (A) |---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256) |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257) |---freebsd.org. (DS keytag: 32659 digest type: 2) |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256) |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257) |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257) |---org. (DS keytag: 21366 digest type: 1) | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) | |---. 
(DNSKEY keytag: 19036 alg: 8 flags: 257) |---org. (DS keytag: 21366 digest type: 2) |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) ;; Chase successful DNS Server Configuration in &os; 9.<replaceable>X</replaceable> and Earlier In &os;, the BIND daemon is called named. File Description &man.named.8; The BIND daemon. &man.rndc.8; Name server control utility. /etc/namedb Directory where BIND zone information resides. /etc/namedb/named.conf Configuration file of the daemon. Depending on how a given zone is configured on the server, the files related to that zone can be found in the master, slave, or dynamic subdirectories of the /etc/namedb directory. These files contain the DNS information that will be given out by the name server in response to queries. Starting BIND BIND starting Since BIND is installed by default, configuring it is relatively simple. The default named configuration is that of a basic resolving name server, running in a &man.chroot.8; environment, and restricted to listening on the local IPv4 loopback address (127.0.0.1). To start the server one time with this configuration, use the following command: &prompt.root; service named onestart To ensure the named daemon is started at boot each time, put the following line into the /etc/rc.conf: named_enable="YES" There are many configuration options for /etc/namedb/named.conf that are beyond the scope of this document. Other startup options for named on &os; can be found in the named_* flags in /etc/defaults/rc.conf and in &man.rc.conf.5;. The section is also a good read. Configuration Files BIND configuration files Configuration files for named currently reside in /etc/namedb directory and will need modification before use unless all that is needed is a simple resolver. This is where most of the configuration will be performed. <filename>/etc/namedb/named.conf</filename> // $FreeBSD$ // // Refer to the named.conf(5) and named(8) man pages, and the documentation // in /usr/share/doc/bind9 for more details. // // If you are going to set up an authoritative server, make sure you // understand the hairy details of how DNS works. Even with // simple mistakes, you can break connectivity for affected parties, // or cause huge amounts of useless Internet traffic. options { // All file and path names are relative to the chroot directory, // if any, and should be fully qualified. directory "/etc/namedb/working"; pid-file "/var/run/named/pid"; dump-file "/var/dump/named_dump.db"; statistics-file "/var/stats/named.stats"; // If named is being used only as a local resolver, this is a safe default. // For named to be accessible to the network, comment this option, specify // the proper IP address, or delete this option. listen-on { 127.0.0.1; }; // If you have IPv6 enabled on this system, uncomment this option for // use as a local resolver. To give access to the network, specify // an IPv6 address, or the keyword "any". // listen-on-v6 { ::1; }; // These zones are already covered by the empty zones listed below. // If you remove the related empty zones below, comment these lines out. disable-empty-zone "255.255.255.255.IN-ADDR.ARPA"; disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; // If you've got a DNS server around at your upstream provider, enter // its IP address here, and enable the line below. 
This will make you // benefit from its cache, thus reduce overall DNS traffic in the Internet. /* forwarders { 127.0.0.1; }; */ // If the 'forwarders' clause is not empty the default is to 'forward first' // which will fall back to sending a query from your local server if the name // servers in 'forwarders' do not have the answer. Alternatively you can // force your name server to never initiate queries of its own by enabling the // following line: // forward only; // If you wish to have forwarding configured automatically based on // the entries in /etc/resolv.conf, uncomment the following line and // set named_auto_forward=yes in /etc/rc.conf. You can also enable // named_auto_forward_only (the effect of which is described above). // include "/etc/namedb/auto_forward.conf"; Just as the comment says, to benefit from an uplink's cache, forwarders can be enabled here. Under normal circumstances, a name server will recursively query the Internet looking at certain name servers until it finds the answer it is looking for. Having this enabled will have it query the uplink's name server (or name server provided) first, taking advantage of its cache. If the uplink name server in question is a heavily trafficked, fast name server, enabling this may be worthwhile. 127.0.0.1 will not work here. Change this IP address to a name server at the uplink. /* Modern versions of BIND use a random UDP port for each outgoing query by default in order to dramatically reduce the possibility of cache poisoning. All users are strongly encouraged to utilize this feature, and to configure their firewalls to accommodate it. AS A LAST RESORT in order to get around a restrictive firewall policy you can try enabling the option below. Use of this option will significantly reduce your ability to withstand cache poisoning attacks, and should be avoided if at all possible. Replace NNNNN in the example with a number between 49160 and 65530. */ // query-source address * port NNNNN; }; // If you enable a local name server, don't forget to enter 127.0.0.1 // first in your /etc/resolv.conf so this server will be queried. // Also, make sure to enable it in /etc/rc.conf. // The traditional root hints mechanism. Use this, OR the slave zones below. zone "." { type hint; file "/etc/namedb/named.root"; }; /* Slaving the following zones from the root name servers has some significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots 3. Greater resilience to any potential root server failure/DDoS On the other hand, this method requires more monitoring than the hints file to be sure that an unexpected failure mode has not incapacitated your server. Name servers that are serving a lot of clients will benefit more from this approach than individual hosts. Use with caution. To use this mechanism, uncomment the entries below, and comment the hint zone above. As documented at http://dns.icann.org/services/axfr/ these zones: "." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET are available for AXFR from these servers on IPv4 and IPv6: xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org */ /* zone "." { type slave; file "/etc/namedb/slave/root.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; zone "arpa" { type slave; file "/etc/namedb/slave/arpa.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. 
}; notify no; }; */ /* Serving the following zones locally will prevent any queries for these zones leaving your network and going to the root name servers. This has two significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots */ // RFCs 1912 and 5735 (and BCP 32 for localhost) zone "localhost" { type master; file "/etc/namedb/master/localhost-forward.db"; }; zone "127.in-addr.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; zone "255.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // RFC 1912-style zone for IPv6 localhost address zone "0.ip6.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; // "This" Network (RFCs 1912 and 5735) zone "0.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Private Use Networks (RFCs 1918 and 5735) zone "10.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "21.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Link-local/APIPA (RFCs 3927 and 5735) zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IETF protocol assignments (RFCs 5735 and 5736) zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // TEST-NET-[1-3] for Documentation (RFCs 5735 and 5737) zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Range for Documentation (RFC 3849) zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Domain Names for Documentation and Testing (BCP 32) zone "test" { type master; file "/etc/namedb/master/empty.db"; }; zone "example" { type master; file "/etc/namedb/master/empty.db"; }; zone "invalid" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.com" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.net" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.org" { type master; file "/etc/namedb/master/empty.db"; }; // Router Benchmark Testing 
(RFCs 2544 and 5735) zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IANA Reserved - Old Class E Space (RFC 5735) zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "244.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Unassigned Addresses (RFC 4291) zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.e.f.ip6.arpa" { type master; file 
"/etc/namedb/master/empty.db"; }; zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 ULA (RFC 4193) zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Link Local (RFC 4291) zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Deprecated Site-Local Addresses (RFC 3879) zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IP6.INT is Deprecated (RFC 4159) zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; }; // NB: Do not use the IP addresses below, they are faked, and only // serve demonstration/documentation purposes! // // Example slave zone config entries. It can be convenient to become // a slave at least for the zone your own domain is in. Ask // your network administrator for the IP address of the responsible // master name server. // // Do not forget to include the reverse lookup zone! // This is named after the first bytes of the IP address, in reverse // order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6. // // Before starting to set up a master zone, make sure you fully // understand how DNS and BIND work. There are sometimes // non-obvious pitfalls. Setting up a slave zone is usually simpler. // // NB: Don't blindly enable the examples below. :-) Use actual names // and addresses instead. /* An example dynamic zone key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "/etc/namedb/dynamic/example.org"; }; */ /* Example of a slave reverse zone zone "1.168.192.in-addr.arpa" { type slave; file "/etc/namedb/slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ In named.conf, these are examples of slave entries for a forward and reverse zone. For each new zone served, a new zone entry must be added to named.conf. For example, the simplest zone entry for example.org can look like: zone "example.org" { type master; file "master/example.org"; }; The zone is a master, as indicated by the statement, holding its zone information in /etc/namedb/master/example.org indicated by the statement. zone "example.org" { type slave; file "slave/example.org"; }; In the slave case, the zone information is transferred from the master name server for the particular zone, and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will have the transferred zone information and will be able to serve it. Zone Files BIND zone files An example master zone file for example.org (existing within /etc/namedb/master/example.org) is as follows: $TTL 3600 ; 1 hour default TTL example.org. IN SOA ns1.example.org. admin.example.org. 
( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ; Negative Response TTL ) ; DNS Servers IN NS ns1.example.org. IN NS ns2.example.org. ; MX Records IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Machine Names localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 ; Aliases www IN CNAME example.org. Note that every hostname ending in a . is an exact hostname, whereas everything without a trailing . is relative to the origin. For example, ns1 is translated into ns1.example.org. The format of a zone file follows: recordname IN recordtype value DNS records The most commonly used DNS records: SOA start of zone authority NS an authoritative name server A a host address CNAME the canonical name for an alias MX mail exchanger PTR a domain name pointer (used in reverse DNS) example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh after 3 hours 3600 ; Retry after 1 hour 604800 ; Expire after 1 week 300 ) ; Negative Response TTL example.org. the domain name, also the origin for this zone file. ns1.example.org. the primary/authoritative name server for this zone. admin.example.org. the responsible person for this zone, email address with @ replaced. (admin@example.org becomes admin.example.org) 2006051501 the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many admins prefer a yyyymmddrr format for the serial number. 2006051501 would mean last modified 05/15/2006, the latter 01 being the first time the zone file has been modified this day. The serial number is important as it alerts slave name servers for a zone when it is updated. IN NS ns1.example.org. This is an NS entry. Every name server that is going to reply authoritatively for the zone must have one of these entries. localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 The A record indicates machine names. As seen above, ns1.example.org would resolve to 192.168.1.2. IN A 192.168.1.1 This line assigns IP address 192.168.1.1 to the current origin, in this case example.org. www IN CNAME @ The canonical name record is usually used for giving aliases to a machine. In the example, www is aliased to the master machine whose name happens to be the same as the domain name example.org (192.168.1.1). CNAMEs can never be used together with another kind of record for the same hostname. MX record IN MX 10 mail.example.org. The MX record indicates which mail servers are responsible for handling incoming mail for the zone. mail.example.org is the hostname of a mail server, and 10 is the priority of that mail server. One can have several mail servers, with priorities of 10, 20 and so on. A mail server attempting to deliver to example.org would first try the highest priority MX (the record with the lowest priority number), then the second highest, etc, until the mail can be properly delivered. For in-addr.arpa zone files (reverse DNS), the same format is used, except with PTR entries instead of A or CNAME. $TTL 3600 1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ) ; Negative Response TTL IN NS ns1.example.org. IN NS ns2.example.org. 1 IN PTR example.org. 2 IN PTR ns1.example.org. 3 IN PTR ns2.example.org. 4 IN PTR mx.example.org. 5 IN PTR mail.example.org. 
This file gives the proper IP address to hostname mappings for the above fictitious domain. It is worth noting that all names on the right side of a PTR record need to be fully qualified (i.e., end in a .). Caching Name Server BIND caching name server A caching name server is a name server whose primary role is to resolve recursive queries. It simply asks queries of its own, and remembers the answers for later use. <acronym role="Domain Name Security Extensions">DNSSEC</acronym> BIND DNS security extensions Domain Name System Security Extensions, or DNSSEC for short, is a suite of specifications to protect resolving name servers from forged DNS data, such as spoofed DNS records. By using digital signatures, a resolver can verify the integrity of the record. Note that DNSSEC only provides integrity via digitally signing the Resource Records (RRs). It provides neither confidentiality nor protection against false end-user assumptions. This means that it cannot protect against people going to example.net instead of example.com. The only thing DNSSEC does is authenticate that the data has not been compromised in transit. The security of DNS is an important step in securing the Internet in general. For more in-depth details of how DNSSEC works, the relevant RFCs are a good place to start. See the list in . The following sections will demonstrate how to enable DNSSEC for an authoritative DNS server and a recursive (or caching) DNS server running BIND 9. While all versions of BIND 9 support DNSSEC, it is necessary to have at least version 9.6.2 in order to be able to use the signed root zone when validating DNS queries. This is because earlier versions lack the required algorithms to enable validation using the root zone key. It is strongly recommended to use the latest version of BIND 9.7 or later to take advantage of automatic key updating for the root key, as well as other features to automatically keep zones signed and signatures up to date. Where configurations differ between 9.6.2 and 9.7 and later, differences will be pointed out. Recursive <acronym>DNS</acronym> Server Configuration Enabling DNSSEC validation of queries performed by a recursive DNS server requires a few changes to named.conf. Before making these changes the root zone key, or trust anchor, must be acquired. Currently the root zone key is not available in a file format BIND understands, so it has to be manually converted into the proper format. The key itself can be obtained by querying the root zone for it using dig. By running &prompt.user; dig +multi +noall +answer DNSKEY . > root.dnskey the key will end up in root.dnskey. The contents should look something like this: . 93910 IN DNSKEY 257 3 8 ( AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh /RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3 LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0= ) ; key id = 19036 . 93910 IN DNSKEY 256 3 8 ( AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt ) ; key id = 34525 Do not be alarmed if the obtained keys differ from this example. They might have changed since these instructions were last updated. This output actually contains two keys. 
The first key in the listing, with the value 257 after the DNSKEY record type, is the one needed. This value indicates that this is a Secure Entry Point (SEP), commonly known as a Key Signing Key (KSK). The second key, with value 256, is a subordinate key, commonly called a Zone Signing Key (ZSK). More on the different key types later in . Now the key must be verified and formatted so that BIND can use it. To verify the key, generate a DS RR set. Create a file containing these RRs with &prompt.user; dnssec-dsfromkey -f root.dnskey . > root.ds These records use SHA-1 and SHA-256 respectively, and should look similar to the following example, where the longer is using SHA-256. . IN DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E . IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5 The SHA-256 RR can now be compared to the digest in https://data.iana.org/root-anchors/root-anchors.xml. To be absolutely sure that the key has not been tampered with the data in the XML file can be verified using the PGP signature in https://data.iana.org/root-anchors/root-anchors.asc. Next, the key must be formatted properly. This differs a little between BIND versions 9.6.2 and 9.7 and later. In version 9.7 support was added to automatically track changes to the key and update it as necessary. This is done using managed-keys as seen in the example below. When using the older version, the key is added using a trusted-keys statement and updates must be done manually. For BIND 9.6.2 the format should look like: trusted-keys { "." 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0="; }; For 9.7 the format will instead be: managed-keys { "." initial-key 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0="; }; The root key can now be added to named.conf either directly or by including a file containing the key. After these steps, configure BIND to do DNSSEC validation on queries by editing named.conf and adding the following to the options directive: dnssec-enable yes; dnssec-validation yes; To verify that it is actually working use dig to make a query for a signed zone using the resolver just configured. A successful reply will contain the AD flag to indicate the data was authenticated. Running a query such as &prompt.user; dig @resolver +dnssec se ds should return the DS RR for the .se zone. In the flags: section the AD flag should be set, as seen in: ... ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 ... The resolver is now capable of authenticating DNS queries. Authoritative <acronym>DNS</acronym> Server Configuration In order to get an authoritative name server to serve a DNSSEC signed zone a little more work is required. A zone is signed using cryptographic keys which must be generated. It is possible to use only one key for this. 
The preferred method however is to have a strong well-protected Key Signing Key (KSK) that is not rotated very often and a Zone Signing Key (ZSK) that is rotated more frequently. Information on recommended operational practices can be found in RFC 4641: DNSSEC Operational Practices. Practices regarding the root zone can be found in DNSSEC Practice Statement for the Root Zone KSK operator and DNSSEC Practice Statement for the Root Zone ZSK operator. The KSK is used to build a chain of authority to the data in need of validation and as such is also called a Secure Entry Point (SEP) key. A message digest of this key, called a Delegation Signer (DS) record, must be published in the parent zone to establish the trust chain. How this is accomplished depends on the parent zone owner. The ZSK is used to sign the zone, and only needs to be published there. To enable DNSSEC for the example.com zone depicted in previous examples, the first step is to use dnssec-keygen to generate the KSK and ZSK key pair. This key pair can utilize different cryptographic algorithms. It is recommended to use RSA/SHA256 for the keys and 2048 bits key length should be enough. To generate the KSK for example.com, run &prompt.user; dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com and to generate the ZSK, run &prompt.user; dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com dnssec-keygen outputs two files, the public and the private keys in files named similar to Kexample.com.+005+nnnnn.key (public) and Kexample.com.+005+nnnnn.private (private). The nnnnn part of the file name is a five digit key ID. Keep track of which key ID belongs to which key. This is especially important when having more than one key in a zone. It is also possible to rename the keys. For each KSK file do: &prompt.user; mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key &prompt.user; mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.private For the ZSK files, substitute KSK for ZSK as necessary. The files can now be included in the zone file, using the $include statement. It should look something like this: $include Kexample.com.+005+nnnnn.KSK.key ; KSK $include Kexample.com.+005+nnnnn.ZSK.key ; ZSK Finally, sign the zone and tell BIND to use the signed zone file. To sign a zone dnssec-signzone is used. The command to sign the zone example.com, located in example.com.db would look similar to &prompt.user; dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key The key supplied to the argument is the KSK and the other key file is the ZSK that should be used in the signing. It is possible to supply more than one KSK and ZSK, which will result in the zone being signed with all supplied keys. This can be needed to supply zone data signed using more than one algorithm. The output of dnssec-signzone is a zone file with all RRs signed. This output will end up in a file with the extension .signed, such as example.com.db.signed. The DS records will also be written to a separate file dsset-example.com. To use this signed zone just modify the zone directive in named.conf to use example.com.db.signed. By default, the signatures are only valid 30 days, meaning that the zone needs to be resigned in about 15 days to be sure that resolvers are not caching records with stale signatures. It is possible to make a script and a cron job to do this. See relevant manuals for details. Be sure to keep private keys confidential, as with all cryptographic keys. 
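As noted above, the periodic re-signing can be handled by a cron job. The following root crontab entry is only a sketch: the zone file location, key file names, and tool paths are assumptions that must be adapted to the actual setup.

# Re-sign example.com on the 1st and 15th of each month, then reload the zone
0 3 1,15 * * cd /etc/namedb/master && /usr/sbin/dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key && /usr/sbin/rndc reload example.com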
When changing a key it is best to include the new key in the zone, while still signing with the old one, and then move over to using the new key to sign. After these steps are done the old key can be removed from the zone. Failure to do this might render the DNS data unavailable for a time, until the new key has propagated through the DNS hierarchy. For more information on key rollovers and other DNSSEC operational issues, see RFC 4641: DNSSEC Operational practices. Automation Using <acronym>BIND</acronym> 9.7 or Later Beginning with BIND version 9.7 a new feature called Smart Signing was introduced. This feature aims to make the key management and signing process simpler by automating parts of the task. By putting the keys into a directory called a key repository, and using the new option auto-dnssec, it is possible to create a dynamic zone which will be resigned as needed. To update this zone use nsupdate with the new option . rndc has also grown the ability to sign zones with keys in the key repository, using the option . To tell BIND to use this automatic signing and zone updating for example.com, add the following to named.conf: zone example.com { type master; key-directory "/etc/named/keys"; update-policy local; auto-dnssec maintain; file "/etc/named/dynamic/example.com.zone"; }; After making these changes, generate keys for the zone as explained in , put those keys in the key repository given as the argument to the key-directory in the zone configuration and the zone will be signed automatically. Updates to a zone configured this way must be done using nsupdate, which will take care of re-signing the zone with the new data added. For further details, see and the BIND documentation. Security Although BIND is the most common implementation of DNS, there is always the issue of security. Possible and exploitable security holes are sometimes found. While &os; automatically drops named into a &man.chroot.8; environment, there are several other security mechanisms in place which can help to fend off possible DNS service attacks. It is always a good idea to read CERT's security advisories and to subscribe to the &a.security-notifications; to stay up to date with the current Internet and &os; security issues. If a problem arises, keeping sources up to date and having a fresh build of named may help. Further Reading BIND/named manual pages: &man.rndc.8; &man.named.8; &man.named.conf.5; &man.nsupdate.1; &man.dnssec-signzone.8; &man.dnssec-keygen.8; Official ISC BIND Page Official ISC BIND Forum O'Reilly DNS and BIND 5th Edition Root DNSSEC DNSSEC Trust Anchor Publication for the Root Zone RFC1034 - Domain Names - Concepts and Facilities RFC1035 - Domain Names - Implementation and Specification RFC4033 - DNS Security Introduction and Requirements RFC4034 - Resource Records for the DNS Security Extensions RFC4035 - Protocol Modifications for the DNS Security Extensions RFC4641 - DNSSEC Operational Practices RFC 5011 - Automated Updates of DNS Security (DNSSEC) Trust Anchors
Apache HTTP Server Murray Stokely Contributed by web servers setting up Apache The open source Apache HTTP Server is the most widely used web server. &os; does not install this web server by default, but it can be installed from the www/apache24 package or port. This section summarizes how to configure and start version 2.x of the Apache HTTP Server on &os;. For more detailed information about Apache 2.X and its configuration directives, refer to httpd.apache.org. Configuring and Starting Apache Apache configuration file In &os;, the main Apache HTTP Server configuration file is installed as /usr/local/etc/apache2x/httpd.conf, where x represents the version number. In this ASCII text file, comment lines begin with a #. The most frequently modified directives are: ServerRoot "/usr/local" Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the bin and sbin subdirectories of the server root, and configuration files are stored in the etc/apache2x subdirectory. ServerAdmin you@example.com Change this to the email address which will receive reports of problems with the server. This address also appears on some server-generated pages, such as error documents. ServerName www.example.com:80 Allows an administrator to set a hostname which is sent back to clients for the server. For example, www can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change 80 to the alternate port number. DocumentRoot "/usr/local/www/apache2x/data" The directory where documents will be served from. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using apachectl. Running apachectl configtest should return Syntax OK. Apache starting or stopping To launch Apache at system startup, add the following line to /etc/rc.conf: apache24_enable="YES" If Apache should be started with non-default options, the following line may be added to /etc/rc.conf to specify the needed flags: apache24_flags="" If apachectl does not report configuration errors, start httpd now: &prompt.root; service apache24 start The httpd service can be tested by entering http://localhost in a web browser, replacing localhost with the fully-qualified domain name of the machine running httpd. The default web page that is displayed is /usr/local/www/apache24/data/index.html. After making subsequent configuration changes while httpd is running, the Apache configuration can be tested for errors using the following command: &prompt.root; service apache24 configtest It is important to note that configtest is not an &man.rc.8; standard, and should not be expected to work for all startup scripts. Virtual Hosting Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be IP-based or name-based. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the client's HTTP/1.1 headers to determine the hostname, which allows the websites to share the same IP address. To set up Apache to use name-based virtual hosting, add a VirtualHost block for each website.
For example, for the webserver named www.domain.tld with a virtual domain of www.someotherdomain.tld, add the following entries to httpd.conf: <VirtualHost *> ServerName www.domain.tld DocumentRoot /www/domain.tld </VirtualHost> <VirtualHost *> ServerName www.someotherdomain.tld DocumentRoot /www/someotherdomain.tld </VirtualHost> For each virtual host, replace the values for ServerName and DocumentRoot with the values to be used. For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/. Apache Modules Apache modules Apache uses modules to augment the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/ for a complete listing of the available modules and their configuration details. In &os;, some modules can be compiled with the www/apache24 port. Type make config within /usr/ports/www/apache24 to see which modules are available and which are enabled by default. If the module is not compiled with the port, the &os; Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules. <filename>mod_ssl</filename> web servers secure SSL cryptography The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSLv3) and Transport Layer Security (TLSv1) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate signing authority to run a secure web server on &os;. In &os;, the mod_ssl module is enabled by default in both the package and the port. The available configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html. <filename>mod_perl</filename> mod_perl Perl The mod_perl module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time. mod_perl can be installed using the www/mod_perl2 package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html. <filename>mod_php</filename> Tom Rhodes Written by mod_php PHP PHP: Hypertext Preprocessor (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, &java;, and Perl with the intention of allowing web developers to write dynamically generated webpages quickly. To gain support for PHP5 for the Apache web server, install the www/mod_php5 package or port. This will install and configure the modules required to support dynamic PHP applications. The installation will automatically add this line to /usr/local/etc/apache24/httpd.conf: LoadModule php5_module libexec/apache24/libphp5.so Then, perform a graceful restart to load the PHP module: &prompt.root; apachectl graceful The PHP support provided by www/mod_php5 is limited. Additional support can be installed using the lang/php5-extensions port, which provides a menu-driven interface to the available PHP extensions. Alternatively, individual extensions can be installed using the appropriate port. For instance, to add PHP support for the MySQL database server, install databases/php5-mysql.
After installing an extension, the Apache server must be reloaded to pick up the new configuration changes: &prompt.root; apachectl graceful Dynamic Websites web servers dynamic In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails. Django Python Django Django is a BSD-licensed framework designed to allow developers to write high-performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation. Django depends on mod_python and an SQL database engine. In &os;, the www/py-django port automatically installs mod_python and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type make config within /usr/ports/www/py-django, then install the port. Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site. To configure Apache to pass requests for certain URLs to the web application, add the following to httpd.conf, specifying the full path to the project directory: <Location "/"> SetHandler python-program PythonPath "['/dir/to/the/django/packages/'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonAutoReload On PythonDebug On </Location> Refer to https://docs.djangoproject.com/en/1.6/ for more information on how to use Django. Ruby on Rails Ruby on Rails Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On &os;, it can be installed using the www/rubygem-rails package or port. Refer to http://rubyonrails.org/documentation for more information on how to use Ruby on Rails. File Transfer Protocol (<acronym>FTP</acronym>) FTP servers The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. &os; includes FTP server software, ftpd, in the base system. &os; provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to &man.ftpd.8; for more details about the built-in FTP server. Configuration The most important configuration step is deciding which accounts will be allowed access to the FTP server. A &os; system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in /etc/ftpusers. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added. In some cases, it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished by creating /etc/ftpchroot as described in &man.ftpchroot.5;. This file lists users and groups subject to FTP access restrictions. FTP anonymous To enable anonymous FTP access to the server, create a user named ftp on the &os; system. Users will then be able to log on to the FTP server with a username of ftp or anonymous.
When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call &man.chroot.2; when an anonymous user logs in, to restrict access to only the home directory of the ftp user. There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of /etc/ftpwelcome will be displayed to users before they reach the login prompt. After a successful login, the contents of /etc/ftpmotd will be displayed. Note that the path to this file is relative to the login environment, so the contents of ~ftp/etc/ftpmotd would be displayed for anonymous users. Once the FTP server has been configured, set the appropriate variable in /etc/rc.conf to start the service during boot: ftpd_enable="YES" To start the service now: &prompt.root; service ftpd start Test the connection to the FTP server by typing: &prompt.user; ftp localhost syslog log files FTP The ftpd daemon uses &man.syslog.3; to log messages. By default, the system log daemon will write messages related to FTP in /var/log/xferlog. The location of the FTP log can be modified by changing the following line in /etc/syslog.conf: ftp.info /var/log/xferlog FTP anonymous Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files. It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files can not be read by other anonymous users until they have been reviewed by an administrator. File and Print Services for µsoft.windows; Clients (Samba) Samba server Microsoft Windows file server Windows clients print server Windows clients Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into µsoft.windows; systems. It can be added to non-µsoft.windows; systems by installing the Samba client libraries. The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive and shared printers can be used as if they were local printers. On &os;, the Samba client libraries can be installed using the net/samba-smbclient port or package. The client provides the ability for a &os; system to access SMB/CIFS shares in a µsoft.windows; network. A &os; system can also be configured to act as a Samba server. This allows the administrator to create SMB/CIFS shares on the &os; system which can be accessed by clients running µsoft.windows; or the Samba client libraries. In order to configure a Samba server on &os;, the net/samba36 port or package must first be installed. The rest of this section provides an overview of how to configure a Samba server on &os;. Configuration A default Samba configuration file is installed as /usr/local/share/examples/samba36/smb.conf.default. This file must be copied to /usr/local/etc/smb.conf and customized before Samba can be used. Runtime configuration information for Samba is found in smb.conf, such as definitions of the printers and file system shares that will be shared with &windows; clients. The Samba package includes a web based tool called swat which provides a simple way for configuring smb.conf. Using the Samba Web Administration Tool (SWAT) The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. 
Therefore, inetd must be enabled as shown in . To enable swat, uncomment the following line in /etc/inetd.conf: swat stream tcp nowait/400 root /usr/local/sbin/swat swat As explained in , the inetd configuration must be reloaded after this configuration file is changed. Once swat has been enabled, use a web browser to connect to http://localhost:901. At first login, enter the credentials for root. Once logged in, the main Samba configuration page and the system documentation will be available. Begin configuration by clicking on the Globals tab. The Globals section corresponds to the variables that are set in the [global] section of /usr/local/etc/smb.conf. Global Settings Whether swat is used or /usr/local/etc/smb.conf is edited directly, the first directives encountered when configuring Samba are: workgroup The domain name or workgroup name for the computers that will be accessing this server. netbios name The NetBIOS name by which a Samba server is known. By default it is the same as the first component of the host's DNS name. server string The string that will be displayed in the output of net view and some other networking tools that seek to display descriptive text about the server. Security Settings Two of the most important settings in /usr/local/etc/smb.conf are the security model and the backend password format for client users. The following directives control these options: security The two most common options are security = share and security = user. If the clients use usernames that are the same as their usernames on the &os; machine, user level security should be used. This is the default security policy and it requires clients to first log on before they can access shared resources. In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba. passdb backend NIS+ LDAP SQL database Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is smbpasswd, and that is all that will be covered here. Assuming that the default smbpasswd backend is used, /usr/local/etc/samba/smbpasswd must be created to allow Samba to authenticate clients. To provide &unix; user accounts access from &windows; clients, use the following command to add each required user to that file: &prompt.root; smbpasswd -a username The recommended backend is now tdbsam. If this backend is selected, use the following command to add user accounts: &prompt.root; pdbedit -a -u username This section has only mentioned the most commonly used settings. Refer to the Official Samba HOWTO for additional information about the available configuration options. Starting <application>Samba</application> To enable Samba at boot time, add the following line to /etc/rc.conf: samba_enable="YES" Alternately, its services can be started separately: nmbd_enable="YES" smbd_enable="YES" To start Samba now: &prompt.root; service samba start Starting SAMBA: removing stale tdbs : Starting nmbd. Starting smbd. Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by samba_enable. If winbind name resolution services are enabled in smb.conf, the winbindd daemon is started as well. 
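For reference, pulling together the directives discussed above, a minimal [global] section of /usr/local/etc/smb.conf might look like the following sketch. The workgroup, NetBIOS name, and descriptive string are placeholder values used only for illustration:

[global]
   workgroup = WORKGROUP
   netbios name = FILESERVER
   server string = Samba server on FreeBSD
   security = user
   passdb backend = tdbsam

With the tdbsam backend shown here, user accounts would be added with pdbedit -a -u username as described above.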
Samba may be stopped at any time by typing: &prompt.root; service samba stop Samba is a complex software suite with functionality that allows broad integration with µsoft.windows; networks. For more information about functionality beyond the basic configuration described here, refer to http://www.samba.org. Clock Synchronization with NTP NTP ntpd Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. The Network Time Protocol (NTP) is one way to provide clock accuracy in a network. &os; includes &man.ntpd.8; which can be configured to query other NTP servers in order to synchronize the clock on that machine or to provide time services to other computers in the network. The servers which are queried can be local to the network or provided by an ISP. In addition, an online list of publicly accessible NTP servers is available. When choosing a public NTP server, select one that is geographically close and review its usage policy. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. This section describes how to configure ntpd on &os;. Further documentation can be found in /usr/share/doc/ntp/ in HTML format. <acronym>NTP</acronym> Configuration NTP ntp.conf On &os;, the built-in ntpd can be used to synchronize a system's clock. To enable ntpd at boot time, add ntpd_enable="YES" to /etc/rc.conf. Additional variables can be specified in /etc/rc.conf. Refer to &man.rc.conf.5; and &man.ntpd.8; for details. This application reads /etc/ntp.conf to determine which NTP servers to query. Here is a simple example of an /etc/ntp.conf: Sample <filename>/etc/ntp.conf</filename> server ntplocal.example.com prefer server timeserver.example.org server ntp2a.example.net driftfile /var/db/ntp.drift The format of this file is described in &man.ntp.conf.5;. The server option specifies which servers to query, with one server listed on each line. If a server entry includes prefer, that server is preferred over other servers. A response from a preferred server will be discarded if it differs significantly from other servers' responses; otherwise it will be used. The prefer argument should only be used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware. The driftfile entry specifies which file is used to store the system clock's frequency offset. ntpd uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time. This file also stores information about previous responses from NTP servers. Since this file contains internal information for NTP, it should not be modified. By default, an NTP server is accessible to any network host. The restrict option in /etc/ntp.conf can be used to control which systems can access the server. For example, to deny all machines from accessing the NTP server, add the following line to /etc/ntp.conf: restrict default ignore This will also prevent access from other NTP servers. If there is a need to synchronize with an external NTP server, allow only that specific server. Refer to &man.ntp.conf.5; for more information. 
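As an example of that approach, the following /etc/ntp.conf fragment keeps the default-deny policy while exempting a single upstream server. The address 203.0.113.5 is a hypothetical NTP server used only for illustration:

server 203.0.113.5
restrict default ignore
# the upstream server is exempted; a restrict line with no flags imposes no restrictions
restrict 203.0.113.5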
To allow machines within the network to synchronize their clocks with the server, but ensure they are not allowed to configure the server or be used as peers to synchronize against, instead use: restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap where 192.168.1.0 is the local network address and 255.255.255.0 is the network's subnet mask. Multiple restrict entries are supported. For more details, refer to the Access Control Support subsection of &man.ntp.conf.5;. Once ntpd_enable="YES" has been added to /etc/rc.conf, ntpd can be started now without rebooting the system by typing: &prompt.root; service ntpd start Using <acronym>NTP</acronym> with a <acronym>PPP</acronym> Connection ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with filter directives in /etc/ppp/ppp.conf. For example: set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 For more details, refer to the PACKET FILTERING section in &man.ppp.8; and the examples in /usr/share/examples/ppp/. Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine. <acronym>iSCSI</acronym> Initiator and Target Configuration iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level. In iSCSI terminology, the system that shares the storage is known as the target. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage. The clients which access the iSCSI storage are called initiators. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in /dev/ and the device must be separately formatted and mounted. Beginning with 10.0-RELEASE, &os; provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a &os; system as a target or an initiator. Configuring an <acronym>iSCSI</acronym> Target The native iSCSI target is supported starting with &os; 10.0-RELEASE. To use iSCSI in older versions of &os;, install a userspace target from the Ports Collection, such as net/istgt. This chapter only describes the native target. To configure an iSCSI target, create the /etc/ctl.conf configuration file, add a line to /etc/rc.conf to make sure the &man.ctld.8; daemon is automatically started at boot, and then start the daemon. The following is an example of a simple /etc/ctl.conf configuration file. Refer to &man.ctl.conf.5; for a more complete description of this file's available options. portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group no-authentication portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The first entry defines the pg0 portal group. Portal groups define which network addresses the &man.ctld.8; daemon will listen on. 
The discovery-auth-group no-authentication entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure &man.ctld.8; to listen on all IPv4 (listen 0.0.0.0) and IPv6 (listen [::]) addresses on the default port of 3260. It is not necessary to define a portal group as there is a built-in portal group called default. In this case, the difference between default and pg0 is that with default, target discovery is always denied, while with pg0, it is always allowed. The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where iqn.2012-06.com.example:target0 is the target name. This target name is suitable for testing purposes. For actual use, change com.example to the real domain name, reversed. The 2012-06 represents the year and month of acquiring control of that domain name, and target0 can be any value. Any number of targets can be defined in this configuration file. The auth-group no-authentication line allows all initiators to connect to the specified target and portal-group pg0 makes the target reachable through the pg0 portal group. The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The path /data/target0-0 line defines the full path to a file or zvol backing the LUN. That path must exist before starting &man.ctld.8;. The second line is optional and specifies the size of the LUN. Next, to make sure the &man.ctld.8; daemon is started at boot, add this line to /etc/rc.conf: ctld_enable="YES" To start &man.ctld.8; now, run this command: &prompt.root; service ctld start As the &man.ctld.8; daemon is started, it reads /etc/ctl.conf. If this file is edited after the daemon starts, use this command so that the changes take effect immediately: &prompt.root; service ctld reload Authentication The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows: auth-group ag0 { chap username1 secretsecret chap username2 anothersecret } portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group ag0 portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The auth-group section defines username and password pairs. An initiator trying to connect to iqn.2012-06.com.example:target0 must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set discovery-auth-group to a defined auth-group name instead of no-authentication. It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry: target iqn.2012-06.com.example:target0 { portal-group pg0 chap username1 secretsecret lun 0 { path /data/target0-0 size 4G } } Configuring an <acronym>iSCSI</acronym> Initiator The iSCSI initiator described in this section is supported starting with &os; 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to &man.iscontrol.8;. The iSCSI initiator requires that the &man.iscsid.8; daemon is running. 
This daemon does not use a configuration file. To start it automatically at boot, add this line to /etc/rc.conf: iscsid_enable="YES" To start &man.iscsid.8; now, run this command: &prompt.root; service iscsid start Connecting to a target can be done with or without an /etc/iscsi.conf configuration file. This section demonstrates both types of connections. Connecting to a Target Without a Configuration File To connect an initiator to a single target, specify the IP address of the portal and the name of the target: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 To verify if the connection succeeded, run iscsictl without any arguments. The output should look similar to this: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0 In this example, the iSCSI session was successfully established, with /dev/da0 representing the attached LUN. If the iqn.2012-06.com.example:target0 target exports more than one LUN, multiple device nodes will be shown in that section of the output: Connected: da0 da1 da2. Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the &man.iscsid.8; daemon is not running: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8) The following message suggests a networking problem, such as a wrong IP address or port: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.11 Connection refused This message means that the specified target name is wrong: Target name Target portal State iqn.2012-06.com.example:atrget0 10.10.10.10 Not found This message means that the target requires authentication: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed To specify a CHAP username and secret, use this syntax: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret Connecting to a Target with a Configuration File To connect using a configuration file, create /etc/iscsi.conf with contents like this: t0 { TargetAddress = 10.10.10.10 TargetName = iqn.2012-06.com.example:target0 AuthMethod = CHAP chapIName = user chapSecret = secretsecret } The t0 specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The TargetAddress and TargetName are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown. To connect to the defined target, specify the nickname: &prompt.root; iscsictl -An t0 Alternately, to connect to all targets defined in the configuration file, use: &prompt.root; iscsictl -Aa To make the initiator automatically connect to all targets in /etc/iscsi.conf, add the following to /etc/rc.conf: iscsictl_enable="YES" iscsictl_flags="-Aa"
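As a final step on the initiator, the attached LUN can be formatted and mounted like any local disk. For example, assuming the device name /dev/da0 shown in the earlier output, and that any existing data on the LUN may be destroyed, a UFS file system could be created and mounted with:

&prompt.root; newfs /dev/da0
&prompt.root; mount /dev/da0 /mnt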
Index: head/en_US.ISO8859-1/books/handbook/ppp-and-slip/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/ppp-and-slip/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/ppp-and-slip/chapter.xml (revision 46049) @@ -1,1667 +1,1679 @@ <acronym>PPP</acronym> Synopsis PPP &os; supports the Point-to-Point (PPP) protocol which can be used to establish a network or Internet connection using a dial-up modem. This chapter describes how to configure modem-based communication services in &os;. After reading this chapter, you will know: How to configure, use, and troubleshoot a PPP connection. How to set up PPP over Ethernet (PPPoE). How to set up PPP over ATM (PPPoA). PPP PPP over Ethernet Before reading this chapter, you should: Be familiar with basic network terminology. Understand the basics and purpose of a dial-up connection and PPP. Configuring <acronym>PPP</acronym> &os; provides built-in support for managing dial-up PPP connections using &man.ppp.8;. The default &os; kernel provides support for tun which is used to interact with a modem hardware. Configuration is performed by editing at least one configuration file, and configuration files containing examples are provided. Finally, ppp is used to start and manage connections. In order to use a PPP connection, the following items are needed: A dial-up account with an Internet Service Provider (ISP). A dial-up modem. The dial-up number for the ISP. The login name and password assigned by the ISP. The IP address of one or more DNS servers. Normally, the ISP provides these addresses. If it did not, &os; can be configured to use DNS negotiation. If any of the required information is missing, contact the ISP. The following information may be supplied by the ISP, but is not necessary: The IP address of the default gateway. If this information is unknown, the ISP will automatically provide the correct value during connection setup. When configuring PPP on &os;, this address is referred to as HISADDR. The subnet mask. If the ISP has not - provided one, 255.255.255.255 will be used in - the &man.ppp.8; configuration file. + provided one, 255.255.255.255 will be used + in the &man.ppp.8; configuration file. static IP address If the ISP has assigned a static IP address and hostname, it should be input into the configuration file. Otherwise, this information will be automatically provided during connection setup. The rest of this section demonstrates how to configure &os; for common PPP connection scenarios. The required configuration file is /etc/ppp/ppp.conf and additional files and - examples are available in /usr/share/examples/ppp/. + examples are available in + /usr/share/examples/ppp/. Throughout this section, many of the file examples display line numbers. These line numbers have been added to make it easier to follow the discussion and are not meant to be placed in the actual file. When editing a configuration file, proper indentation is important. Lines that end in a : start in the first column (beginning of the line) while all other lines should be indented as shown using spaces or tabs. Basic Configuration PPP with static IP addresses In order to configure a PPP connection, first edit /etc/ppp/ppp.conf with the dial-in information for the ISP. 
This file is described as follows: 1 default: 2 set log Phase Chat LCP IPCP CCP tun command 3 ident user-ppp VERSION 4 set device /dev/cuau0 5 set speed 115200 6 set dial "ABORT BUSY ABORT NO\\sCARRIER TIMEOUT 5 \ 7 \"\" AT OK-AT-OK ATE1Q0 OK \\dATDT\\T TIMEOUT 40 CONNECT" 8 set timeout 180 9 enable dns 10 11 provider: 12 set phone "(123) 456 7890" 13 set authname foo 14 set authkey bar 15 set timeout 300 16 set ifaddr x.x.x.x/0 y.y.y.y/0 255.255.255.255 0.0.0.0 17 add default HISADDR Line 1: Identifies the default entry. Commands in this entry (lines 2 through 9) are executed automatically when ppp is run. Line 2: Enables verbose logging parameters for testing the connection. Once the configuration is working satisfactorily, this line should be reduced to: set log phase tun Line 3: Displays the version of &man.ppp.8; to the PPP software running on the other side of the connection. Line 4: Identifies the device to which the modem is connected, where COM1 is - /dev/cuau0 - and COM2 is /dev/cuau1. + /dev/cuau0 and + COM2 is + /dev/cuau1. Line 5: Sets the connection speed. If 115200 does not work on an older modem, try 38400 instead. Lines 6 & 7: The dial string written as an expect-send syntax. Refer to &man.chat.8; for more information. Note that this command continues onto the next line for readability. Any command in ppp.conf may do this if the last character on the line is \. Line 8: Sets the idle timeout for the link in seconds. Line 9: Instructs the peer to confirm the DNS settings. If the local network is running its own DNS server, this line should be commented out, by adding a # at the beginning of the line, or removed. Line 10: A blank line for readability. Blank lines are ignored by &man.ppp.8;. Line 11: Identifies an entry called provider. This could be changed to the name of the ISP so that can be used to start the connection. Line 12: Use the phone number for the ISP. Multiple phone numbers may be specified using the colon (:) or pipe character (|) as a separator. To rotate through the numbers, use a colon. To always attempt to dial the first number first and only use the other numbers if the first number fails, use the pipe character. Always enclose the entire set of phone numbers between quotation marks (") to prevent dialing failures. Lines 13 & 14: Use the user name and password for the ISP. Line 15: Sets the default idle timeout in seconds for the connection. In this example, the connection will be closed automatically after 300 seconds of inactivity. To prevent a timeout, set this value to zero. Line 16: Sets the interface addresses. The values used depend upon whether a static IP address has been obtained from the ISP or if it instead negotiates a dynamic IP address during connection. If the ISP has allocated a static IP address and default gateway, replace x.x.x.x with the static IP address and replace y.y.y.y with the IP address of the default gateway. If the ISP has only provided a static IP address without a gateway address, replace - y.y.y.y with 10.0.0.2/0. + y.y.y.y with 10.0.0.2/0. If the IP address changes whenever a connection is made, change this line to the following value. This tells &man.ppp.8; to use the IP Configuration Protocol (IPCP) to negotiate a dynamic IP address: set ifaddr 10.0.0.1/0 10.0.0.2/0 255.255.255.255 0.0.0.0 Line 17: Keep this line as-is as it adds a default route to the gateway. The HISADDR will automatically be replaced with the gateway address specified on line 16. It is important that this line appears after line 16. 
Depending upon whether &man.ppp.8; is started manually or automatically, a /etc/ppp/ppp.linkup may also need to be created which contains the following lines. This file is required when running ppp in mode. This file is used after the connection has been established. At this point, the IP address will have been assigned and it is now possible to add the routing table entries. When creating this file, make sure that provider matches the value demonstrated in line 11 of ppp.conf. provider: add default HISADDR This file is also needed when the default gateway address is guessed in a static IP address configuration. In this case, remove line 17 from ppp.conf and create /etc/ppp/ppp.linkup with the above two lines. More examples for this file can be found in /usr/share/examples/ppp/. By default, the ppp command must be run as the root user. To change this default, add the account of the user - who should run ppp to the - network group in + who should run ppp to the network group in /etc/group. Then, give the user access to one or more entries in /etc/ppp/ppp.conf using the allow command. For example, to give fred and mary permission to only the provider: entry, add this line to the provider: section: allow users fred mary To give the specified users access to all entries, put that line in the default section instead. Receiving Incoming Calls PPP receiving incoming calls When configuring &man.ppp.8; to receive incoming calls on a machine connected to a Local Area Network (LAN), decide if packets should be forwarded to the LAN. If so, allocate the connecting system an IP address from the LAN's subnet, and add the enable proxy line to /etc/ppp/ppp.conf. Also, confirm that /etc/rc.conf contains the following line: gateway_enable="YES" Refer to &man.ppp.8; and /usr/share/examples/ppp/ppp.conf.sample for more details. The following steps will also be required: Create an entry in /etc/passwd (using the &man.vipw.8; program). Create a profile in this user's home directory that runs ppp -direct direct-server or similar. Create an entry in /etc/ppp/ppp.conf. The direct-server example should suffice. Create an entry in /etc/ppp/ppp.linkup. <acronym>PPP</acronym> Shells for Dynamic <acronym>IP</acronym> Users PPP shells Create a file called /etc/ppp/ppp-shell containing the following: #!/bin/sh IDENT=`echo $0 | sed -e 's/^.*-\(.*\)$/\1/'` CALLEDAS="$IDENT" TTY=`tty` if [ x$IDENT = xdialup ]; then IDENT=`basename $TTY` fi echo "PPP for $CALLEDAS on $TTY" echo "Starting PPP for $IDENT" exec /usr/sbin/ppp -direct $IDENT This script should be executable. Now make a symbolic link called ppp-dialup to this script using the following commands: &prompt.root; ln -s ppp-shell /etc/ppp/ppp-dialup Use this script as the shell for all dial-up users. This is an example entry from /etc/passwd for a dial-up PPP user: pchilds:*:1011:300:Peter Childs PPP:/home/ppp:/etc/ppp/ppp-dialup Create a /home/ppp directory that is world-readable containing the following 0 byte files: -r--r--r-- 1 root wheel 0 May 27 02:23 .hushlogin -r--r--r-- 1 root wheel 0 May 27 02:22 .rhosts which prevents /etc/motd from being displayed. <acronym>PPP</acronym> Shells for Static <acronym>IP</acronym> Users PPP shells Create the ppp-shell file as above, and for each account with a statically assigned IP, create a symbolic link to ppp-shell.
For example, to route /24 CIDR networks for the dial-up customers fred, sam, and mary, type: &prompt.root; ln -s /etc/ppp/ppp-shell /etc/ppp/ppp-fred &prompt.root; ln -s /etc/ppp/ppp-shell /etc/ppp/ppp-sam &prompt.root; ln -s /etc/ppp/ppp-shell /etc/ppp/ppp-mary Each of these users' dial-up accounts should have their shell set to the symbolic link created above (for example, mary's shell should be /etc/ppp/ppp-mary). Setting Up <filename>ppp.conf</filename> for Dynamic <acronym>IP</acronym> Users The /etc/ppp/ppp.conf file should contain something along the lines of: default: set debug phase lcp chat set timeout 0 ttyu0: set ifaddr 203.14.100.1 203.14.100.20 255.255.255.255 enable proxy ttyu1: set ifaddr 203.14.100.1 203.14.100.21 255.255.255.255 enable proxy The indenting is important. The default: section is loaded for each session. For each dial-up line enabled in /etc/ttys, create an entry similar to the one for ttyu0: above. Each line should get a unique IP address from the pool of IP addresses for dynamic users. Setting Up <filename>ppp.conf</filename> for Static <acronym>IP</acronym> Users Along with the contents of the sample /usr/share/examples/ppp/ppp.conf above, add a section for each of the statically assigned dial-up users: fred: set ifaddr 203.14.100.1 203.14.101.1 255.255.255.255 sam: set ifaddr 203.14.100.1 203.14.102.1 255.255.255.255 mary: set ifaddr 203.14.100.1 203.14.103.1 255.255.255.255 The file /etc/ppp/ppp.linkup should also contain routing information for each static IP user if required. The lines below would add routes for the 203.14.101.0/24, 203.14.102.0/24, and 203.14.103.0/24 networks via each client's ppp link. fred: add 203.14.101.0 netmask 255.255.255.0 HISADDR sam: add 203.14.102.0 netmask 255.255.255.0 HISADDR mary: add 203.14.103.0 netmask 255.255.255.0 HISADDR Advanced Configuration DNS NetBIOS PPP Microsoft extensions It is possible to configure PPP to supply DNS and NetBIOS nameserver addresses on demand. To enable these extensions with PPP version 1.x, the following lines might be added to the relevant section of /etc/ppp/ppp.conf. enable msext set ns 203.14.100.1 203.14.100.2 set nbns 203.14.100.5 And for PPP version 2 and above: accept dns set dns 203.14.100.1 203.14.100.2 set nbns 203.14.100.5 This will tell the clients the primary and secondary name server addresses, and a NetBIOS nameserver host. In version 2 and above, if the set dns line is omitted, PPP will use the values found in /etc/resolv.conf. PAP and CHAP Authentication PAP CHAP Some ISPs set their system up so that the authentication part of the connection is done using either the PAP or CHAP authentication mechanism. If this is the case, the ISP will not give a login: prompt at connection, but will start talking PPP immediately. PAP is less secure than CHAP, but security is not normally an issue here as passwords, although being sent as plain text with PAP, are being transmitted down a serial line only. There is not much room for crackers to eavesdrop. The following alterations must be made: 13 set authname MyUserName 14 set authkey MyPassword 15 set login Line 13: This line specifies the PAP/CHAP user name. Insert the correct value for MyUserName. Line 14: This line specifies the PAP/CHAP password. Insert the correct value for MyPassword. You may want to add an additional line, such as: 16 accept PAP or 16 accept CHAP to make it obvious that this is the intention, but PAP and CHAP are both accepted by default. Line 15: The ISP will not normally require a login to the server when using PAP or CHAP.
Therefore, disable the set login string. Using <acronym>PPP</acronym> Network Address Translation Capability PPPNAT PPP has the ability to use internal NAT without kernel diverting capabilities. This functionality may be enabled by the following line in /etc/ppp/ppp.conf: nat enable yes Alternatively, NAT may be enabled with the command-line option -nat. There is also an /etc/rc.conf knob named ppp_nat, which is enabled by default. When using this feature, it may be useful to include the following /etc/ppp/ppp.conf options to enable forwarding of incoming connections: nat port tcp 10.0.0.2:ftp ftp nat port tcp 10.0.0.2:http http or, to not trust the outside at all: nat deny_incoming yes Final System Configuration PPPconfiguration While ppp is now configured, some edits still need to be made to /etc/rc.conf. Working from the top down in this file, make sure the hostname= line is set: hostname="foo.example.com" If the ISP has supplied a static IP address and name, use this name as the host name. Look for the network_interfaces variable. To configure the system to dial the ISP on demand, make sure the tun0 device is added to the list, otherwise remove it. network_interfaces="lo0 tun0" ifconfig_tun0= The ifconfig_tun0 variable should be empty, and a file called /etc/start_if.tun0 should be created. This file should contain the line: ppp -auto mysystem This script is executed at network configuration time, starting the ppp daemon in automatic mode. If this machine acts as a gateway, consider including . Refer to the manual page for further details. Make sure that the router program is set to NO with the following line in /etc/rc.conf: router_enable="NO" routed It is important that the routed daemon is not started, as routed tends to delete the default routing table entries created by ppp. It is probably a good idea to ensure that the sendmail_flags line does not include the option, otherwise sendmail will attempt to do a network lookup every now and then, possibly causing your machine to dial out. You may try: sendmail_flags="-bd" sendmail The downside is that sendmail is forced to re-examine the mail queue whenever the ppp link is brought up. To automate this, include !bg in ppp.linkup: 1 provider: 2 delete ALL 3 add 0 0 HISADDR 4 !bg sendmail -bd -q30m SMTP An alternative is to set up a dfilter to block SMTP traffic. Refer to the sample files for further details. Using <command>ppp</command> All that is left is to reboot the machine. After rebooting, either type: &prompt.root; ppp and then dial provider to start the PPP session, or, to configure ppp to establish sessions automatically when there is outbound traffic and start_if.tun0 does not exist, type: &prompt.root; ppp -auto provider It is possible to talk to the ppp program while it is running in the background, but only if a suitable diagnostic port has been set up. To do this, add the following line to the configuration: set server /var/run/ppp-tun%d DiagnosticPassword 0177 This will tell PPP to listen to the specified &unix; domain socket, asking clients for the specified password before allowing access. The %d in the name is replaced with the tun device number that is in use. Once a socket has been set up, the &man.pppctl.8; program may be used in scripts that wish to manipulate the running program. Configuring Dial-in Services mgetty AutoPPP LCP provides a good description on enabling dial-up services using &man.getty.8;. An alternative to getty is the comms/mgetty+sendfax port, a smarter version of getty designed with dial-up lines in mind.
The advantage of using mgetty is that it actively talks to modems, meaning that if a port is turned off in /etc/ttys then the modem will not answer the phone. Later versions of mgetty (from 0.99beta onwards) also support the automatic detection of PPP streams, allowing clients scriptless access to the server. - Refer to http://mgetty.greenie.net/doc/mgetty_toc.html - for more - information on mgetty. + Refer to http://mgetty.greenie.net/doc/mgetty_toc.html + for more information on mgetty. - By default the comms/mgetty+sendfax port - comes with the AUTO_PPP option enabled - allowing mgetty to detect the LCP - phase of PPP connections and + By default the comms/mgetty+sendfax + port comes with the AUTO_PPP option + enabled allowing mgetty to detect the + LCP phase of PPP connections and automatically spawn off a ppp shell. However, since the default login/password sequence does not occur, it is necessary to authenticate users using either PAP or CHAP. This section assumes the user has successfully - compiled, and installed the comms/mgetty+sendfax port on - his system. + compiled, and installed the + comms/mgetty+sendfax port on his + system. Ensure that /usr/local/etc/mgetty+sendfax/login.config has the following: /AutoPPP/ - - /etc/ppp/ppp-pap-dialup This tells mgetty to run ppp-pap-dialup for detected PPP connections. Create an executable file called /etc/ppp/ppp-pap-dialup containing the following: #!/bin/sh exec /usr/sbin/ppp -direct pap$IDENT For each dial-up line enabled in /etc/ttys, create a corresponding entry in /etc/ppp/ppp.conf. This will happily co-exist with the definitions we created above. pap: enable pap set ifaddr 203.14.100.1 203.14.100.20-203.14.100.40 enable proxy Each user logging in with this method will need to have a username/password in the /etc/ppp/ppp.secret file, or alternatively add the following option to authenticate users via PAP from the /etc/passwd file. enable passwdauth To assign some users a static IP number, specify the number as the third argument in /etc/ppp/ppp.secret. See /usr/share/examples/ppp/ppp.secret.sample for examples. Troubleshooting <acronym>PPP</acronym> Connections PPP troubleshooting This section covers a few issues which may arise when using PPP over a modem connection. Some ISPs present the ssword prompt, while others present password. If the ppp script is not written accordingly, the login attempt will fail. The most common way to debug ppp connections is by connecting manually as described in this section. Check the Device Nodes When using a custom kernel, make sure to include the following line in the kernel configuration file: device uart The uart device is already included in the GENERIC kernel, so no additional steps are necessary in this case. Just check the dmesg output for the modem device with: &prompt.root; dmesg | grep uart This should display some pertinent output about the uart devices. These are the COM ports we need. If the modem acts like a standard serial port, it should be listed on uart1, or COM2. If so, a kernel rebuild is not required. When matching up, if the modem is on uart1, the modem device would be /dev/cuau1. Connecting Manually Connecting to the Internet by manually controlling ppp is quick, easy, and a great way to debug a connection or just get information on how the ISP treats ppp client connections. Let's start PPP from the command line. Note that in all of our examples we will use example as the hostname of the machine running PPP.
To start ppp: &prompt.root; ppp ppp ON example> set device /dev/cuau1 This second command sets the modem device to cuau1. ppp ON example> set speed 115200 This sets the connection speed to 115,200 bps. ppp ON example> enable dns This tells ppp to configure the resolver and add the nameserver lines to /etc/resolv.conf. If ppp cannot determine the hostname, it can be set manually later. ppp ON example> term This switches to terminal mode in order to manually control the modem. deflink: Entering terminal mode on /dev/cuau1 type '~h' for help at OK atdt123456789 Use at to initialize the modem, then use atdt and the number for the ISP to begin the dial-in process. CONNECT Confirmation of the connection. If we are going to have any connection problems unrelated to hardware, here is where we will attempt to resolve them. ISP Login:myusername At this prompt, reply with the username that was provided by the ISP. ISP Pass:mypassword At this prompt, reply with the password that was provided by the ISP. Just like logging into &os;, the password will not echo. Shell or PPP:ppp Depending on the ISP, this prompt might not appear. If it does, it is asking whether to use a shell on the provider or to start ppp. In this example, ppp was selected in order to establish an Internet connection. Ppp ON example> Notice that in this example the first p has been capitalized. This shows that we have successfully connected to the ISP. PPp ON example> We have successfully authenticated with our ISP and are waiting for the assigned IP address. PPP ON example> We have made an agreement on an IP address and successfully completed our connection. PPP ON example>add default HISADDR Here we add our default route. This is needed before we can talk to the outside world, as currently the only established connection is with the peer. If this fails due to existing routes, put a bang character ! in front of the . Alternatively, set this before making the actual connection and it will negotiate a new route accordingly. If everything went well, we should now have an active connection to the Internet, which could be thrown into the - background using CTRL - z If - PPP returns to ppp then - the connection has been lost. This is good to know because it - shows the connection status. Capital P's represent a - connection to the ISP and lowercase p's - show that the connection has been lost. + background using CTRL + z If PPP + returns to ppp then the connection has been + lost. This is good to know because it shows the connection + status. Capital P's represent a connection to the + ISP and lowercase p's show that the + connection has been lost. Debugging If a connection cannot be established, turn hardware flow CTS/RTS to off using . This is mainly the case when connected to some PPP-capable terminal servers, where PPP hangs when it tries to write data to the communication link, and waits for a Clear To Send (CTS) signal which may never come. When using this option, include as it may be required to defeat hardware dependent on passing certain characters from end to end, most of the time XON/XOFF. Refer to &man.ppp.8; for more information on this option and how it is used. An older modem may need . Parity is set to none by default, but is used for error checking, with a large increase in traffic, on older modems. PPP may not return to the command mode, which is usually a negotiation error where the ISP is waiting for negotiating to begin. At this point, using ~p will force ppp to start sending the configuration information.
If a login prompt never appears, PAP or CHAP authentication is most likely required. To use PAP or CHAP, add the following options to PPP before going into terminal mode: ppp ON example> set authname myusername Where myusername should be replaced with the username that was assigned by the ISP. ppp ON example> set authkey mypassword Where mypassword should be replaced with the password that was assigned by the ISP. If a connection is established, but cannot seem to find any domain name, try to &man.ping.8; an IP address. If there is 100 percent (100%) packet loss, it is likely that a default route was not assigned. Double check that was set during the connection. If a connection can be made to a remote IP address, it is possible that a resolver address has not been added to /etc/resolv.conf. This file should look like: domain example.com nameserver x.x.x.x nameserver y.y.y.y Where x.x.x.x and y.y.y.y should be replaced with the IP address of the ISP's DNS servers. To configure &man.syslog.3; to provide logging for the PPP connection, make sure this line exists in /etc/syslog.conf: !ppp *.* /var/log/ppp.log Using <acronym>PPP</acronym> over Ethernet (PPPoE) PPP over Ethernet This section describes how to set up PPP over Ethernet (PPPoE). Here is an example of a working ppp.conf: default: set log Phase tun command # you can add more detailed logging if you wish set ifaddr 10.0.0.1/0 10.0.0.2/0 name_of_service_provider: set device PPPoE:xl1 # replace xl1 with your Ethernet device set authname YOURLOGINNAME set authkey YOURPASSWORD set dial set login add default HISADDR - As root, run: + As root, + run: &prompt.root; ppp -ddial name_of_service_provider Add the following to /etc/rc.conf: ppp_enable="YES" ppp_mode="ddial" ppp_nat="YES" # if you want to enable nat for your local network, otherwise NO ppp_profile="name_of_service_provider" Using a PPPoE Service Tag Sometimes it will be necessary to use a service tag to establish the connection. Service tags are used to distinguish between different PPPoE servers attached to a given network. Any required service tag information should be in the documentation provided by the ISP. - As a last resort, one could try installing the net/rr-pppoe package or port. - Bear in mind however, this may de-program your modem and - render it useless, so think twice before doing it. Simply - install the program shipped with the modem. Then, access the + As a last resort, one could try installing the + net/rr-pppoe package or port. Bear in mind + however, this may de-program your modem and render it useless, + so think twice before doing it. Simply install the program + shipped with the modem. Then, access the System menu from the program. The name of the profile should be listed there. It is usually ISP. The profile name (service tag) will be used in the PPPoE configuration entry in ppp.conf as the provider part of the set device command (see the &man.ppp.8; manual page for full details). It should look like this: set device PPPoE:xl1:ISP Do not forget to change xl1 to the proper device for the Ethernet card. Do not forget to change ISP to the profile. - For additional information, refer to Cheaper - Broadband with &os; on DSL by Renaud - Waldura. + For additional information, refer to Cheaper + Broadband with &os; on DSL by Renaud Waldura. PPPoE with a &tm.3com; <trademark class="registered">HomeConnect</trademark> ADSL Modem Dual Link This modem does not follow the PPPoE specification defined - in RFC + in RFC 2516. 
In order to make &os; capable of communicating with this device, a sysctl must be set. This can be done automatically at boot time by updating /etc/sysctl.conf: net.graph.nonstandard_pppoe=1 or can be done immediately with the command: &prompt.root; sysctl net.graph.nonstandard_pppoe=1 Unfortunately, because this is a system-wide setting, it - is not possible to talk to a normal PPPoE client or server - and a &tm.3com; HomeConnect ADSL Modem at - the same time. + is not possible to talk to a normal PPPoE client or server and + a &tm.3com; HomeConnect ADSL Modem at the + same time. Using <application>PPP</application> over <acronym>ATM</acronym> (PPPoA) PPP over ATM PPPoA The following describes how to set up PPP over ATM (PPPoA). PPPoA is a popular choice among European DSL providers. Using mpd The mpd application can be used to connect to a variety of services, in particular PPTP - services. It can be installed using the net/mpd5 package or port. Many - ADSL modems require that a PPTP tunnel is created between the - modem and computer. + services. It can be installed using the + net/mpd5 package or port. Many ADSL modems + require that a PPTP tunnel is created between the modem and + computer. Once installed, configure mpd to suit the provider's settings. The port places a set of sample configuration files which are well documented in - /usr/local/etc/mpd/. - A complete guide to configure mpd - is available in HTML format in /usr/ports/share/doc/mpd/. + /usr/local/etc/mpd/. A complete guide to + configure mpd is available in HTML + format in /usr/ports/share/doc/mpd/. Here is a sample configuration for connecting to an ADSL service with mpd. The configuration is spread over two files, first the mpd.conf: This example of the mpd.conf file only works with mpd 4.x. default: load adsl adsl: new -i ng0 adsl adsl set bundle authname username set bundle password password set bundle disable multilink set link no pap acfcomp protocomp set link disable chap set link accept chap set link keep-alive 30 10 set ipcp no vjcomp set ipcp ranges 0.0.0.0/0 0.0.0.0/0 set iface route default set iface disable on-demand set iface enable proxy-arp set iface idle 0 open The username used to authenticate with your ISP. The password used to authenticate with your ISP. The mpd.links file contains information about the link, or links, to establish. An example mpd.links to accompany the above example is given beneath: adsl: set link type pptp set pptp mode active set pptp enable originate outcall set pptp self 10.0.0.1 set pptp peer 10.0.0.138 The IP address of &os; computer running mpd. The IP address of the ADSL modem. - The Alcatel &speedtouch; Home defaults to 10.0.0.138. + The Alcatel &speedtouch; Home defaults to 10.0.0.138. It is possible to initialize the connection easily by issuing the following command as root: &prompt.root; mpd -b adsl To view the status of the connection: &prompt.user; ifconfig ng0 ng0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> mtu 1500 inet 216.136.204.117 --> 204.152.186.171 netmask 0xffffffff Using mpd is the recommended way to connect to an ADSL service with &os;. Using pptpclient It is also possible to use &os; to connect to other PPPoA services using net/pptpclient. To use net/pptpclient to connect to a DSL service, install the port or package, then edit /etc/ppp/ppp.conf. An example section of ppp.conf is given below. For further information on ppp.conf options consult &man.ppp.8;. 
adsl: set log phase chat lcp ipcp ccp tun command set timeout 0 enable dns set authname username set authkey password set ifaddr 0 0 add default HISADDR The username for the DSL provider. The password for your account. Since the account's password is added to ppp.confin plain text form, make sure nobody can read the contents of this file: &prompt.root; chown root:wheel /etc/ppp/ppp.conf &prompt.root; chmod 600 /etc/ppp/ppp.conf This will open a tunnel for a PPP session to the DSL router. Ethernet DSL modems have a preconfigured LAN IP address to connect to. In the case of the Alcatel &speedtouch; Home, this address is - 10.0.0.138. The router's - documentation should list the address the device uses. To - open the tunnel and start a PPP + 10.0.0.138. The + router's documentation should list the address the device + uses. To open the tunnel and start a PPP session: &prompt.root; pptp address adsl If an ampersand (&) is added to the end of this command, pptp will return the prompt. A tun virtual tunnel device will be created for interaction between the pptp and ppp processes. Once the prompt is returned, or the pptp process has confirmed a connection, examine the tunnel: &prompt.user; ifconfig tun0 tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1500 inet 216.136.204.21 --> 204.152.186.171 netmask 0xffffff00 Opened by PID 918 If the connection fails, check the configuration of the router, which is usually accessible using a web browser. Also, examine the output of pptp and the contents of the log file, /var/log/ppp.log for clues. Index: head/en_US.ISO8859-1/books/handbook/preface/preface.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/preface/preface.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/preface/preface.xml (revision 46049) @@ -1,679 +1,686 @@ Preface Intended Audience The &os; newcomer will find that the first section of this book guides the user through the &os; installation process and gently introduces the concepts and conventions that underpin &unix;. Working through this section requires little more than the desire to explore, and the ability to take on board new concepts as they are introduced. Once you have traveled this far, the second, far larger, section of the Handbook is a comprehensive reference to all manner of topics of interest to &os; system administrators. Some of these chapters may recommend that you do some prior reading, and this is noted in the synopsis at the beginning of each chapter. For a list of additional sources of information, please see . Changes from the Third Edition The current online version of the Handbook represents the cumulative effort of many hundreds of contributors over the past 10 years. The following are some of the significant changes since the two volume third edition was published in 2004: has been added with information about the powerful &dtrace; performance analysis tool. has been added with information about non-native file systems in &os;, such as ZFS from &sun;. has been added to cover the new auditing capabilities in &os; and explain its use. has been added with information about installing &os; on virtualization software. has been added to cover installation of &os; using the new installation utility, bsdinstall. Changes from the Second Edition (2004) The third edition was the culmination of over two years of work by the dedicated members of the &os; Documentation Project. 
The printed edition grew to such a size that it was necessary to publish as two separate volumes. The following are the major changes in this new edition: has been expanded with new information about the ACPI power and resource management, the cron system utility, and more kernel tuning options. has been expanded with new information about virtual private networks (VPNs), file system access control lists (ACLs), and security advisories. is a new chapter with this edition. It explains what MAC is and how this mechanism can be used to secure a &os; system. has been expanded with new information about USB storage devices, file system snapshots, file system quotas, file and network backed filesystems, and encrypted disk partitions. - A troubleshooting section has been added to . - + A troubleshooting section has been added to . has been expanded with new information about using alternative transport agents, SMTP authentication, UUCP, fetchmail, procmail, and other advanced topics. is all new with this edition. This chapter includes information about setting up the Apache HTTP Server, ftpd, and setting up a server for µsoft; &windows; clients with - Samba. Some sections from were moved here to improve + Samba. Some sections from were moved here to improve the presentation. has been expanded with new information about using &bluetooth; devices with &os;, setting up wireless networks, and Asynchronous Transfer Mode (ATM) networking. A glossary has been added to provide a central location for the definitions of technical terms used throughout the book. A number of aesthetic improvements have been made to the tables and figures throughout the book. - Changes from the - First Edition (2001) + Changes from + the First Edition (2001) The second edition was the culmination of over two years of work by the dedicated members of the &os; Documentation Project. The following were the major changes in this edition: A complete Index has been added. All ASCII figures have been replaced by graphical diagrams. A standard synopsis has been added to each chapter to give a quick summary of what information the chapter contains, and what the reader is expected to know. The content has been logically reorganized into three parts: Getting Started, System Administration, and Appendices. was completely rewritten with many screenshots to make it much easier for new users to grasp the text. has been expanded to contain additional information about processes, daemons, and signals. has been expanded to contain additional information about binary package management. has been completely rewritten with an emphasis on using modern desktop technologies such as KDE and GNOME on &xfree86; 4.X. has been expanded. has been written from what used to be two separate chapters on Disks and Backups. We feel that the topics are easier to comprehend when presented as a single chapter. A section on RAID (both hardware and software) has also been added. has been completely reorganized and updated for &os; 4.X/5.X. has been substantially updated. - Many new sections have been added to . - + Many new sections have been added to . has been expanded to include more information about configuring sendmail. has been expanded to include information about installing &oracle; and &sap.r3;. The following new topics are covered in this second edition: . . - Organization of - This Book + Organization of This Book This book is split into five logically distinct sections. The first section, Getting Started, covers the installation and basic usage of &os;. 
It is expected that the reader will follow these chapters in sequence, possibly skipping chapters covering familiar topics. The second section, Common Tasks, covers some frequently used features of &os;. This section, and all subsequent sections, can be read out of order. Each chapter begins with a succinct synopsis that describes what the chapter covers and what the reader is expected to already know. This is meant to allow the casual reader to skip around to find chapters of interest. The third section, System Administration, covers administration topics. The fourth section, Network Communication, covers networking and server topics. The fifth section contains appendices of reference information. Introduces &os; to a new user. It describes the history of the &os; Project, its goals and development model. Walks a user through the entire installation process of &os; 9.x and later using bsdinstall. Walks a user through the entire installation process of &os; 8.x and earlier using sysinstall. Some advanced installation topics, such as installing through a serial console, are also covered. Covers the basic commands and functionality of the &os; operating system. If you are familiar with &linux; or another flavor of &unix; then you can probably skip this chapter. Covers the installation of third-party software with both &os;'s innovative Ports Collection and standard binary packages. - Describes the X Window System in general and using - X11 on &os; in particular. Also describes common - desktop environments such as KDE and + Describes the X Window System in general and using X11 + on &os; in particular. Also describes common desktop + environments such as KDE and GNOME. Lists some common desktop applications, such as web browsers and productivity suites, and describes how to install them on &os;. Shows how to set up sound and video playback support for your system. Also describes some sample audio and video applications. Explains why you might need to configure a new kernel and provides detailed instructions for configuring, building, and installing a custom kernel. Describes managing printers on &os;, including information about banner pages, printer accounting, and initial setup. Describes the &linux; compatibility features of &os;. Also provides detailed installation instructions for many popular &linux; applications such as &oracle; and &mathematica;. - - + Describes the parameters available for system administrators to tune a &os; system for optimum performance. Also describes the various configuration files used in &os; and where to find them. Describes the &os; boot process and explains how to control this process with configuration options. Describes many different tools available to help keep your &os; system secure, including Kerberos, IPsec and OpenSSH. Describes the jails framework, and the improvements of jails over the traditional chroot support of &os;. Explains what Mandatory Access Control (MAC) is and how this mechanism can be used to secure a &os; system. Describes what &os; Event Auditing is, how it can be installed, configured, and how audit trails can be inspected or monitored. Describes how to manage storage media and filesystems with &os;. This includes physical disks, RAID arrays, optical and tape media, memory-backed disks, and network filesystems. Describes what the GEOM framework in &os; is and how to configure various supported RAID levels. Examines support of non-native file systems in &os;, like the Z File System from &sun;. 
- - + Describes what virtualization systems offer, and how they can be used with &os;. Describes how to use &os; in languages other than English. Covers both system and application level localization. - + Explains the differences between &os;-STABLE, &os;-CURRENT, and &os; releases. Describes which users would benefit from tracking a development system and outlines that process. Covers the methods users may take to update their system to the latest security release. Describes how to configure and use the &dtrace; tool from &sun; in &os;. Dynamic tracing can help locate performance issues, by performing real time system analysis. Explains how to connect terminals and modems to your &os; system for both dial in and dial out connections. Describes how to use PPP to connect to remote systems with &os;. Explains the different components of an email server and dives into simple configuration topics for the most popular mail server software: sendmail. - - + Provides detailed instructions and example configuration files to set up your &os; machine as a network filesystem server, domain name server, network information system server, or time synchronization server. Explains the philosophy behind software-based firewalls and provides detailed information about the configuration of the different firewalls available for &os;. - + Describes many networking topics, including sharing an Internet connection with other computers on your LAN, advanced routing topics, wireless networking, &bluetooth;, ATM, IPv6, and much more. Lists different sources for obtaining &os; media on CDROM or DVD as well as different sites on the Internet that allow you to download and install &os;. This book touches on many different subjects that may leave you hungry for a more detailed explanation. The bibliography lists many excellent books that are referenced in the text. Describes the many forums available for &os; users to post questions and engage in technical conversations about &os;. Lists the PGP fingerprints of several &os; Developers. Conventions used in this book To provide a consistent and easy to read text, several conventions are followed throughout the book. - Typographic - Conventions + Typographic Conventions Italic An italic font is used for filenames, URLs, emphasized text, and the first usage of technical terms. Monospace A monospaced font is used for error messages, commands, environment variables, names of ports, hostnames, user names, group names, device names, variables, and code fragments. Bold A bold font is used for applications, commands, and keys. - User Input + User + Input Keys are shown in bold to stand out from other text. Key combinations that are meant to be typed simultaneously are shown with `+' between the keys, such as: Ctrl Alt Del Meaning the user should type the Ctrl, Alt, and Del keys at the same time. Keys that are meant to be typed in sequence will be separated with commas, for example: Ctrl X , Ctrl S Would mean that the user is expected to type the Ctrl and X keys simultaneously and then to type the Ctrl and S keys simultaneously. - Examples + Examples Examples starting with C:\> indicate a &ms-dos; command. Unless otherwise noted, these commands may be executed from a Command Prompt window in a modern µsoft.windows; environment. E:\> tools\fdimage floppies\kern.flp A: Examples starting with &prompt.root; indicate a command that must be invoked as the superuser in &os;. 
You can login as root to type the command, or login as your normal account and use &man.su.1; to gain superuser privileges. &prompt.root; dd if=kern.flp of=/dev/fd0 Examples starting with &prompt.user; indicate a command that should be invoked from a normal user account. Unless otherwise noted, C-shell syntax is used for setting environment variables and other shell commands. &prompt.user; top - Acknowledgments + Acknowledgments The book you are holding represents the efforts of many hundreds of people around the world. Whether they sent in fixes for typos, or submitted complete chapters, all the contributions have been useful. Several companies have supported the development of this document by paying authors to work on it full-time, paying for publication, etc. In particular, BSDi (subsequently acquired by - Wind River Systems) - paid members of the &os; Documentation Project to work on - improving this book full time leading up to the publication of the - first printed edition in March 2000 (ISBN 1-57176-241-8). Wind - River Systems then paid several additional authors to make a - number of improvements to the print-output infrastructure and - to add additional chapters to the text. This work culminated in - the publication of the second printed edition in November 2001 - (ISBN 1-57176-303-1). In 2003-2004, &os; Mall, Inc, paid - several contributors to improve the Handbook in preparation for - the third printed edition. + Wind River + Systems) paid members of the &os; Documentation Project + to work on improving this book full time leading up to the + publication of the first printed edition in March 2000 (ISBN + 1-57176-241-8). Wind River Systems then paid several additional + authors to make a number of improvements to the print-output + infrastructure and to add additional chapters to the text. This + work culminated in the publication of the second printed edition + in November 2001 (ISBN 1-57176-303-1). In 2003-2004, &os; Mall, Inc, + paid several contributors to improve the Handbook in preparation + for the third printed edition. Index: head/en_US.ISO8859-1/books/handbook/security/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 46049) @@ -1,3927 +1,3927 @@ Security Tom Rhodes Rewritten by security Synopsis Security, whether physical or virtual, is a topic so broad that an entire industry has grown up around it. Hundreds of standard practices have been authored about how to secure systems and networks, and as a user of &os;, understanding how to protect against attacks and intruders is a must. In this chapter, several fundamentals and techniques will be discussed. The &os; system comes with multiple layers of security, and many more third party utilities may be added to enhance security. After reading this chapter, you will know: Basic &os; system security concepts. The various crypt mechanisms available in &os;. How to set up one-time password authentication. How to configure TCP Wrapper for use with &man.inetd.8;. How to set up Kerberos on &os;. How to configure IPsec and create a VPN. How to configure and use OpenSSH on &os;. How to use file system ACLs. How to use portaudit to audit third party software packages installed from the Ports Collection. How to utilize &os; security advisories. What Process Accounting is and how to enable it on &os;. 
How to control user resources using login classes or the resource limits database. Before reading this chapter, you should: Understand basic &os; and Internet concepts. Additional security topics are covered elsewhere in this Handbook. For example, Mandatory Access Control is discussed in and Internet firewalls are discussed in . Introduction Security is everyone's responsibility. A weak entry point in any system could allow intruders to gain access to critical information and cause havoc on an entire network. One of the core principles of information security is the CIA triad, which stands for the Confidentiality, Integrity, and Availability of information systems. The CIA triad is a bedrock concept of computer security as customers and users expect their data to be protected. For example, a customer expects that their credit card information is securely stored (confidentiality), that their orders are not changed behind the scenes (integrity), and that they have access to their order information at all times (availability). To provide CIA, security professionals apply a defense in depth strategy. The idea of defense in depth is to add several layers of security so that the failure of a single layer does not cause the entire security system to collapse. For example, a system administrator cannot simply turn on a firewall and consider the network or system secure. One must also audit accounts, check the integrity of binaries, and ensure malicious tools are not installed. To implement an effective security strategy, one must understand threats and how to defend against them. What is a threat as it pertains to computer security? Threats are not limited to remote attackers who attempt to access a system without permission from a remote location. Threats also include employees, malicious software, unauthorized network devices, natural disasters, security vulnerabilities, and even competing corporations. Systems and networks can be accessed without permission, sometimes by accident, or by remote attackers, and in some cases, via corporate espionage or former employees. As a user, it is important to prepare for and admit when a mistake has led to a security breach and to report possible issues to the security team. As an administrator, it is important to know the threats and be prepared to mitigate them. When applying security to systems, it is recommended to start by securing the basic accounts and system configuration, and then to secure the network layer so that it adheres to the system policy and the organization's security procedures. Many organizations already have a security policy that covers the configuration of technology devices. The policy should include the security configuration of workstations, desktops, mobile devices, phones, production servers, and development servers. In many cases, standard operating procedures (SOPs) already exist. When in doubt, ask the security team. The rest of this introduction describes how some of these basic security configurations are performed on a &os; system. The rest of this chapter describes some specific tools which can be used when implementing a security policy on a &os; system. Preventing Logins In securing a system, a good starting point is an audit of accounts. Ensure that root has a strong password and that this password is not shared. Disable any accounts that do not need login access. To deny login access to accounts, two methods exist. The first is to lock the account.
This example locks the toor account: &prompt.root; pw lock toor The second method is to prevent login access by changing the shell to /sbin/nologin. Only the superuser can change the shell for other users: &prompt.root; chsh -s /usr/sbin/nologin toor The /usr/sbin/nologin shell prevents the system from assigning a shell to the user when they attempt to login. Permitted Account Escalation In some cases, system administration needs to be shared with other users. &os; has two methods to handle this. The first one, which is not recommended, is a shared root password used by members of the wheel group. With this method, a user types su and enters the password for wheel whenever superuser access is needed. The user should then type exit to leave privileged access after finishing the commands that required administrative access. To add a user to this group, edit /etc/group and add the user to the end of the wheel entry. The user must be separated by a comma character with no space. The second, and recommended, method to permit privilege escalation is to install the security/sudo package or port. This software provides additional auditing, more fine-grained user control, and can be configured to lock users into running only the specified privileged commands. After installation, use visudo to edit /usr/local/etc/sudoers. This example creates a new webadmin group, adds the trhodes account to that group, and configures that group access to restart apache24: &prompt.root; pw groupadd webadmin -M trhodes -g 6000 &prompt.root; visudo %webadmin ALL=(ALL) /usr/sbin/service apache24 * Password Hashes Passwords are a necessary evil of technology. When they must be used, they should be complex and a powerful hash mechanism should be used to encrypt the version that is stored in the password database. &os; supports the DES, MD5, SHA256, SHA512, and Blowfish hash algorithms in its crypt() library. The default of SHA512 should not be changed to a less secure hashing algorithm, but can be changed to the more secure Blowfish algorithm. Blowfish is not part of AES and is not considered compliant with any Federal Information Processing Standards (FIPS). Its use may not be permitted in some environments. To determine which hash algorithm is used to encrypt a user's password, the superuser can view the hash for the user in the &os; password database. Each hash starts with a symbol which indicates the type of hash mechanism used to encrypt the password. If DES is used, there is no beginning symbol. For MD5, the symbol is $. For SHA256 and SHA512, the symbol is $6$. For Blowfish, the symbol is $2a$. In this example, the password for dru is hashed using the default SHA512 algorithm as the hash starts with $6$. Note that the encrypted hash, not the password itself, is stored in the password database: &prompt.root; grep dru /etc/master.passwd dru:$6$pzIjSvCAn.PBYQBA$PXpSeWPx3g5kscj3IMiM7tUEUSPmGexxta.8Lt9TGSi2lNQqYGKszsBPuGME0:1001:1001::0:0:dru:/usr/home/dru:/bin/csh The hash mechanism is set in the user's login class. For this example, the user is in the default login class and the hash algorithm is set with this line in /etc/login.conf: :passwd_format=sha512:\ To change the algorithm to Blowfish, modify that line to look like this: :passwd_format=blf:\ Then run cap_mkdb /etc/login.conf as described in . Note that this change will not affect any existing password hashes. This means that all passwords should be re-hashed by asking users to run passwd in order to change their password. 
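As a quick verification after a user has changed their password, the hash prefix can be inspected again to confirm that the new format is in use. This is a minimal sketch reusing the dru account from the example above; the actual hash will of course differ:

&prompt.root; grep dru /etc/master.passwd
dru:$2a$...:1001:1001::0:0:dru:/usr/home/dru:/bin/csh

A hash beginning with $2a$ shows that the password has been re-hashed with Blowfish, while a hash still beginning with $6$ means that the user has not yet changed their password since the policy change.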
For remote logins, two-factor authentication should be used. An example of two-factor authentication is something you have, such as a key, and something you know, such as the passphrase for that key. Since OpenSSH is part of the &os; base system, all network logins should be over an encrypted connection and use key-based authentication instead of passwords. For more information, refer to . Kerberos users may need to make additional changes to implement OpenSSH in their network. These changes are described in . Password Policy Enforcement Enforcing a strong password policy for local accounts is a fundamental aspect of system security. In &os;, password length, password strength, and password complexity can be implemented using built-in Pluggable Authentication Modules (PAM). This section demonstrates how to configure the minimum and maximum password length and the enforcement of mixed characters using the pam_passwdqc.so module. This module is enforced when a user changes their password. To configure this module, become the superuser and uncomment the line containing pam_passwdqc.so in /etc/pam.d/passwd. Then, edit that line to match the password policy: password requisite pam_passwdqc.so min=disabled,disabled,disabled,12,10 similar=deny retry=3 enforce=users This example sets several requirements for new passwords. The min setting controls the minimum password length. It has five values because this module defines five different types of passwords based on their complexity. Complexity is defined by the type of characters that must exist in a password, such as letters, numbers, symbols, and case. The types of passwords are described in &man.pam.passwdqc.8;. In this example, the first three types of passwords are disabled, meaning that passwords that meet those complexity requirements will not be accepted, regardless of their length. The 12 sets a minimum password policy of at least twelve characters, if the password also contains characters with three types of complexity. The 10 sets the password policy to also allow passwords of at least ten characters, if the password contains characters with four types of complexity. The similar setting denies passwords that are similar to the user's previous password. The retry setting provides a user with three opportunities to enter a new password. Once this file is saved, a user changing their password will see a message similar to the following: &prompt.user; passwd Changing local password for trhodes Old Password: You can now choose the new password. A valid password should be a mix of upper and lower case letters, digits and other characters. You can use a 12 character long password with characters from at least 3 of these 4 classes, or a 10 character long password containing characters from all the classes. Characters that form a common pattern are discarded by the check. Alternatively, if noone else can see your terminal now, you can pick this as your password: "trait-useful&knob". Enter new password: If a password that does not match the policy is entered, it will be rejected with a warning and the user will have an opportunity to try again, up to the configured number of retries. Most password policies require passwords to expire after so many days. To set a password age time in &os;, set for the user's login class in /etc/login.conf. The default login class contains an example: # :passwordtime=90d:\ So, to set an expiry of 90 days for this login class, remove the comment symbol (#), save the edit, and run cap_mkdb /etc/login.conf. 
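Written out, and assuming the default login class, the edited entry in /etc/login.conf contains the uncommented line shown below, followed by rebuilding the login class database:

 :passwordtime=90d:\

&prompt.root; cap_mkdb /etc/login.conf

Users in that login class will then be required to change their password once it is more than 90 days old.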
To set the expiration on individual users, pass an expiration date or the number of days until expiry, along with a username, to pw: &prompt.root; pw usermod -p 30-apr-2015 -n trhodes As seen here, an expiration date is set in the form of day, month, and year. For more information, see &man.pw.8;. Detecting Rootkits A rootkit is any unauthorized software that attempts to gain root access to a system. Once installed, this malicious software will normally open up another avenue of entry for an attacker. Realistically, once a system has been compromised by a rootkit and an investigation has been performed, the system should be reinstalled from scratch. There is tremendous risk that even the most prudent security or systems engineer will miss something an attacker left behind. A rootkit does do one thing useful for administrators: once detected, it is a sign that a compromise happened at some point. But, these types of applications tend to be very well hidden. This section demonstrates a tool that can be used to detect rootkits, security/rkhunter. After installation of this package or port, the system may be checked using the following command. It will produce a lot of information and will require some manual pressing of the ENTER key: &prompt.root; rkhunter -c After the process completes, a status message will be printed to the screen. This message will include the number of files checked, suspect files, possible rootkits, and more. During the check, some generic security warnings may be produced about hidden files, the OpenSSH protocol selection, and known vulnerable versions of installed software. These can be handled now or after a more detailed analysis has been performed. Every administrator should know what is running on the systems they are responsible for. Third-party tools like rkhunter and sysutils/lsof, and native commands such as netstat and ps, can show a great deal of information on the system. Take notes on what is normal, ask questions when something seems out of place, and be paranoid. While preventing a compromise is ideal, detecting a compromise is a must. Binary Verification Verification of system files and binaries is important because it provides the system administration and security teams information about system changes. A software application that monitors the system for changes is called an Intrusion Detection System (IDS). &os; provides native support for a basic IDS system. While the nightly security emails will notify an administrator of changes, the information is stored locally and there is a chance that a malicious user could modify this information in order to hide their changes to the system. As such, it is recommended to create a separate set of binary signatures and store them in a read-only, root-owned directory or, preferably, on a removable USB disk or remote rsync server. The built-in mtree utility can be used to generate a specification of the contents of a directory. A seed, or a numeric constant, is used to generate the specification and is required to check that the specification has not changed. This makes it possible to determine if a file or binary has been modified. Since the seed value is unknown by an attacker, faking or checking the checksum values of files will be difficult to impossible.
The following example generates a set of SHA256 hashes, one for each system binary in /bin, and saves those values to a hidden file in root's home directory, /root/.bin_chksum_mtree: &prompt.root; mtree -s 3483151339707503 -c -K cksum,sha256digest -p /bin > /root/.bin_chksum_mtree &prompt.root; mtree: /bin checksum: 3427012225 The 3483151339707503 represents the seed. This value should be remembered, but not shared. Viewing /root/.bin_cksum_mtree should yield output similar to the following: # user: root # machine: dreadnaught # tree: /bin # date: Mon Feb 3 10:19:53 2014 # . /set type=file uid=0 gid=0 mode=0555 nlink=1 flags=none . type=dir mode=0755 nlink=2 size=1024 \ time=1380277977.000000000 \133 nlink=2 size=11704 time=1380277977.000000000 \ cksum=484492447 \ sha256digest=6207490fbdb5ed1904441fbfa941279055c3e24d3a4049aeb45094596400662a cat size=12096 time=1380277975.000000000 cksum=3909216944 \ sha256digest=65ea347b9418760b247ab10244f47a7ca2a569c9836d77f074e7a306900c1e69 chflags size=8168 time=1380277975.000000000 cksum=3949425175 \ sha256digest=c99eb6fc1c92cac335c08be004a0a5b4c24a0c0ef3712017b12c89a978b2dac3 chio size=18520 time=1380277975.000000000 cksum=2208263309 \ sha256digest=ddf7c8cb92a58750a675328345560d8cc7fe14fb3ccd3690c34954cbe69fc964 chmod size=8640 time=1380277975.000000000 cksum=2214429708 \ sha256digest=a435972263bf814ad8df082c0752aa2a7bdd8b74ff01431ccbd52ed1e490bbe7 The machine's hostname, the date and time the specification was created, and the name of the user who created the specification are included in this report. There is a checksum, size, time, and SHA256 digest for each binary in the directory. To verify that the binary signatures have not changed, compare the current contents of the directory to the previously generated specification, and save the results to a file. This command requires the seed that was used to generate the original specification: &prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output &prompt.root; mtree: /bin checksum: 3427012225 This should produce the same checksum for /bin that was produced when the specification was created. If no changes have occurred to the binaries in this directory, the /root/.bin_chksum_output output file will be empty. To simulate a change, change the date on /bin/cat using touch and run the verification command again: &prompt.root; touch /bin/cat &prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output &prompt.root; more /root/.bin_chksum_output cat changed modification time expected Fri Sep 27 06:32:55 2013 found Mon Feb 3 10:28:43 2014 It is recommended to create specifications for the directories which contain binaries and configuration files, as well as any directories containing sensitive data. Typically, specifications are created for /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /etc, and /usr/local/etc. More advanced IDS systems exist, such as security/aide. In most cases, mtree provides the functionality administrators need. It is important to keep the seed value and the checksum output hidden from malicious users. More information about mtree can be found in &man.mtree.8;. System Tuning for Security In &os;, many system features can be tuned using sysctl. A few of the security features which can be tuned to prevent Denial of Service (DoS) attacks will be covered in this section. 
More information about using sysctl, including how to temporarily change values and how to make the changes permanent after testing, can be found in . Any time a setting is changed with sysctl, the chance to cause undesired harm is increased, affecting the availability of the system. All changes should be monitored and, if possible, tried on a testing system before being used on a production system. By default, the &os; kernel boots with a security level of -1. This is called insecure mode because immutable file flags may be turned off and all devices may be read from or written to. The security level will remain at -1 unless it is altered through sysctl or by a setting in the startup scripts. The security level may be increased during system startup by setting kern_securelevel_enable to YES in /etc/rc.conf, and the value of kern_securelevel to the desired security level. See &man.security.7; and &man.init.8; for more information on these settings and the available security levels. Increasing the securelevel can break Xorg and cause other issues. Be prepared to do some debugging. The net.inet.tcp.blackhole and net.inet.udp.blackhole settings can be used to drop incoming SYN packets on closed ports without sending a return RST response. The default behavior is to return an RST to show a port is closed. Changing the default provides some level of protection against ports scans, which are used to determine which applications are running on a system. Set net.inet.tcp.blackhole to 2 and net.inet.udp.blackhole to 1. Refer to &man.blackhole.4; for more information about these settings. The net.inet.icmp.drop_redirect and net.inet.ip.redirect settings help prevent against redirect attacks. A redirect attack is a type of DoS which sends mass numbers of ICMP type 5 packets. Since these packets are not required, set net.inet.icmp.drop_redirect to 1 and set net.inet.ip.redirect to 0. Source routing is a method for detecting and accessing non-routable addresses on the internal network. This should be disabled as non-routable addresses are normally not routable on purpose. To disable this feature, set net.inet.ip.sourceroute and net.inet.ip.accept_sourceroute to 0. When a machine on the network needs to send messages to all hosts on a subnet, an ICMP echo request message is sent to the broadcast address. However, there is no reason for an external host to perform such an action. To reject all external broadcast requests, set net.inet.icmp.bmcastecho to 0. Some additional settings are documented in &man.security.7;. One-time Passwords one-time passwords security one-time passwords By default, &os; includes support for One-time Passwords In Everything (OPIE). OPIE is designed to prevent replay attacks, in which an attacker discovers a user's password and uses it to access a system. Since a password is only used once in OPIE, a discovered password is of little use to an attacker. OPIE uses a secure hash and a challenge/response system to manage passwords. The &os; implementation uses the MD5 hash by default. OPIE uses three different types of passwords. The first is the usual &unix; or Kerberos password. The second is the one-time password which is generated by opiekey. The third type of password is the secret password which is used to generate one-time passwords. The secret password has nothing to do with, and should be different from, the &unix; password. There are two other pieces of data that are important to OPIE. One is the seed or key, consisting of two letters and five digits. 
The other is the iteration count, a number between 1 and 100. OPIE creates the one-time password by concatenating the seed and the secret password, applying the MD5 hash as many times as specified by the iteration count, and turning the result into six short English words which represent the one-time password. The authentication system keeps track of the last one-time password used, and the user is authenticated if the hash of the user-provided password is equal to the previous password. Because a one-way hash is used, it is impossible to generate future one-time passwords if a successfully used password is captured. The iteration count is decremented after each successful login to keep the user and the login program in sync. When the iteration count gets down to 1, OPIE must be reinitialized. There are a few programs involved in this process. A one-time password, or a consecutive list of one-time passwords, is generated by passing an iteration count, a seed, and a secret password to &man.opiekey.1;. In addition to initializing OPIE, &man.opiepasswd.1; is used to change passwords, iteration counts, or seeds. The relevant credential files in /etc/opiekeys are examined by &man.opieinfo.1; which prints out the invoking user's current iteration count and seed. This section describes four different sorts of operations. The first is how to set up one-time-passwords for the first time over a secure connection. The second is how to use opiepasswd over an insecure connection. The third is how to log in over an insecure connection. The fourth is how to generate a number of keys which can be written down or printed out to use at insecure locations. Initializing <acronym>OPIE</acronym> To initialize OPIE for the first time, run this command from a secure location: &prompt.user; opiepasswd -c [grimreaper] ~ $ opiepasswd -f -c Adding unfurl: Only use this method from the console; NEVER from remote. If you are using telnet, xterm, or a dial-in, type ^C now or exit with no password. Then run opiepasswd without the -c parameter. Using MD5 to compute responses. Enter new secret pass phrase: Again new secret pass phrase: ID unfurl OTP key is 499 to4268 MOS MALL GOAT ARM AVID COED The sets console mode which assumes that the command is being run from a secure location, such as a computer under the user's control or a SSH session to a computer under the user's control. When prompted, enter the secret password which will be used to generate the one-time login keys. This password should be difficult to guess and should be different than the password which is associated with the user's login account. It must be between 10 and 127 characters long. Remember this password. The ID line lists the login name (unfurl), default iteration count (499), and default seed (to4268). When logging in, the system will remember these parameters and display them, meaning that they do not have to be memorized. The last line lists the generated one-time password which corresponds to those parameters and the secret password. At the next login, use this one-time password. Insecure Connection Initialization To initialize or change the secret password on an insecure system, a secure connection is needed to some place where opiekey can be run. This might be a shell prompt on a trusted machine. An iteration count is needed, where 100 is probably a good value, and the seed can either be specified or the randomly-generated one used. 
On the insecure connection, the machine being initialized, use &man.opiepasswd.1;: &prompt.user; opiepasswd Updating unfurl: You need the response from an OTP generator. Old secret pass phrase: otp-md5 498 to4268 ext Response: GAME GAG WELT OUT DOWN CHAT New secret pass phrase: otp-md5 499 to4269 Response: LINE PAP MILK NELL BUOY TROY ID mark OTP key is 499 gr4269 LINE PAP MILK NELL BUOY TROY To accept the default seed, press Return. Before entering an access password, move over to the secure connection and give it the same parameters: &prompt.user; opiekey 498 to4268 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: GAME GAG WELT OUT DOWN CHAT Switch back over to the insecure connection, and copy the generated one-time password over to the relevant program. Generating a Single One-time Password After initializing OPIE and logging in, a prompt like this will be displayed: &prompt.user; telnet example.com Trying 10.0.0.1... Connected to example.com Escape character is '^]'. FreeBSD/i386 (example.com) (ttypa) login: <username> otp-md5 498 gr4269 ext Password: The OPIE prompts provides a useful feature. If Return is pressed at the password prompt, the prompt will turn echo on and display what is typed. This can be useful when attempting to type in a password by hand from a printout. MS-DOS Windows MacOS At this point, generate the one-time password to answer this login prompt. This must be done on a trusted system where it is safe to run &man.opiekey.1;. There are versions of this command for &windows;, &macos; and &os;. This command needs the iteration count and the seed as command line options. Use cut-and-paste from the login prompt on the machine being logged in to. On the trusted system: &prompt.user; opiekey 498 to4268 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: GAME GAG WELT OUT DOWN CHAT Once the one-time password is generated, continue to log in. Generating Multiple One-time Passwords Sometimes there is no access to a trusted machine or secure connection. In this case, it is possible to use &man.opiekey.1; to generate a number of one-time passwords beforehand. For example: &prompt.user; opiekey -n 5 30 zz99999 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: <secret password> 26: JOAN BORE FOSS DES NAY QUIT 27: LATE BIAS SLAY FOLK MUCH TRIG 28: SALT TIN ANTI LOON NEAL USE 29: RIO ODIN GO BYE FURY TIC 30: GREW JIVE SAN GIRD BOIL PHI The requests five keys in sequence, and specifies what the last iteration number should be. Note that these are printed out in reverse order of use. The really paranoid might want to write the results down by hand; otherwise, print the list. Each line shows both the iteration count and the one-time password. Scratch off the passwords as they are used. Restricting Use of &unix; Passwords OPIE can restrict the use of &unix; passwords based on the IP address of a login session. The relevant file is /etc/opieaccess, which is present by default. Refer to &man.opieaccess.5; for more information on this file and which security considerations to be aware of when using it. Here is a sample opieaccess: permit 192.168.0.0 255.255.0.0 This line allows users whose IP source address (which is vulnerable to spoofing) matches the specified value and mask, to use &unix; passwords at any time. 
If no rules in opieaccess are matched, the default is to deny non-OPIE logins. TCP Wrapper TomRhodesWritten by TCP Wrapper TCP Wrapper is a host-based access control system which extends the abilities of . It can be configured to provide logging support, return messages, and connection restrictions for the server daemons under the control of inetd. Refer to &man.tcpd.8; for more information about TCP Wrapper and its features. TCP Wrapper should not be considered a replacement for a properly configured firewall. Instead, TCP Wrapper should be used in conjunction with a firewall and other security enhancements in order to provide another layer of protection in the implementation of a security policy. Initial Configuration To enable TCP Wrapper in &os;, add the following lines to /etc/rc.conf: inetd_enable="YES" inetd_flags="-Ww" Then, properly configure /etc/hosts.allow. Unlike other implementations of TCP Wrapper, the use of hosts.deny is deprecated in &os;. All configuration options should be placed in /etc/hosts.allow. In the simplest configuration, daemon connection policies are set to either permit or block, depending on the options in /etc/hosts.allow. The default configuration in &os; is to allow all connections to the daemons started with inetd. Basic configuration usually takes the form of daemon : address : action, where daemon is the daemon which inetd started, address is a valid hostname, IP address, or an IPv6 address enclosed in brackets ([ ]), and action is either allow or deny. TCP Wrapper uses a first rule match semantic, meaning that the configuration file is scanned from the beginning for a matching rule. When a match is found, the rule is applied and the search process stops. For example, to allow POP3 connections via the mail/qpopper daemon, the following lines should be appended to hosts.allow: # This line is required for POP3 connections: qpopper : ALL : allow Whenever this file is edited, restart inetd: &prompt.root; service inetd restart Advanced Configuration TCP Wrapper provides advanced options to allow more control over the way connections are handled. In some cases, it may be appropriate to return a comment to certain hosts or daemon connections. In other cases, a log entry should be recorded or an email sent to the administrator. Other situations may require the use of a service for local connections only. This is all possible through the use of configuration options known as wildcards, expansion characters, and external command execution. Suppose that a situation occurs where a connection should be denied yet a reason should be sent to the host who attempted to establish that connection. That action is possible with . When a connection attempt is made, executes a shell command or script. An example exists in hosts.allow: # The rest of the daemons are protected. ALL : ALL \ : severity auth.info \ : twist /bin/echo "You are not welcome to use %d from %h." In this example, the message You are not allowed to use daemon name from hostname. will be returned for any daemon not configured in hosts.allow. This is useful for sending a reply back to the connection initiator right after the established connection is dropped. Any message returned must be wrapped in quote (") characters. It may be possible to launch a denial of service attack on the server if an attacker floods these daemons with connection requests. Another possibility is to use . Like , implicitly denies the connection and may be used to run external shell commands or scripts. 
Unlike , will not send a reply back to the host who established the connection. For example, consider the following configuration: # We do not allow connections from example.com: ALL : .example.com \ : spawn (/bin/echo %a from %h attempted to access %d >> \ /var/log/connections.log) \ : deny This will deny all connection attempts from *.example.com and log the hostname, IP address, and the daemon to which access was attempted to /var/log/connections.log. This example uses the substitution characters %a and %h. Refer to &man.hosts.access.5; for the complete list. To match every instance of a daemon, domain, or IP address, use ALL. Another wildcard is PARANOID which may be used to match any host which provides an IP address that may be forged because the IP address differs from its resolved hostname. In this example, all connection requests to Sendmail which have an IP address that varies from its hostname will be denied: # Block possibly spoofed requests to sendmail: sendmail : PARANOID : deny Using the PARANOID wildcard will result in denied connections if the client or server has a broken DNS setup. To learn more about wildcards and their associated functionality, refer to &man.hosts.access.5;. When adding new configuration lines, make sure that any unneeded entries for that daemon are commented out in hosts.allow. <application>Kerberos</application> Tillman Hodgson Contributed by Mark Murray Based on a contribution by Kerberos is a network authentication protocol which was originally created by the Massachusetts Institute of Technology (MIT) as a way to securely provide authentication across a potentially hostile network. The Kerberos protocol uses strong cryptography so that both a client and server can prove their identity without sending any unencrypted secrets over the network. Kerberos can be described as an identity-verifying proxy system and as a trusted third-party authentication system. After a user authenticates with Kerberos, their communications can be encrypted to assure privacy and data integrity. The only function of Kerberos is to provide the secure authentication of users and servers on the network. It does not provide authorization or auditing functions. It is recommended that Kerberos be used with other security methods which provide authorization and audit services. The current version of the protocol is version 5, described in RFC 4120. Several free implementations of this protocol are available, covering a wide range of operating systems. MIT continues to develop their Kerberos package. It is commonly used in the US as a cryptography product, and has historically been subject to US export regulations. In &os;, MIT Kerberos is available as the security/krb5 package or port. The Heimdal Kerberos implementation was explicitly developed outside of the US to avoid export regulations. The Heimdal Kerberos distribution is included in the base &os; installation, and another distribution with more configurable options is available as security/heimdal in the Ports Collection. In Kerberos users and services are identified as principals which are contained within an administrative grouping, called a realm. A typical user principal would be of the form user@REALM (realms are traditionally uppercase). This section provides a guide on how to set up Kerberos using the Heimdal distribution included in &os;. For purposes of demonstrating a Kerberos installation, the name spaces will be as follows: The DNS domain (zone) will be example.org. The Kerberos realm will be EXAMPLE.ORG. 
Use real domain names when setting up Kerberos, even if it will run internally. This avoids DNS problems and assures inter-operation with other Kerberos realms. Setting up a Heimdal <acronym>KDC</acronym> Kerberos5 Key Distribution Center The Key Distribution Center (KDC) is the centralized authentication service that Kerberos provides, the trusted third party of the system. It is the computer that issues Kerberos tickets, which are used for clients to authenticate to servers. Because the KDC is considered trusted by all other computers in the Kerberos realm, it has heightened security concerns. Direct access to the KDC should be limited. While running a KDC requires few computing resources, a dedicated machine acting only as a KDC is recommended for security reasons. To begin setting up a KDC, add these lines to /etc/rc.conf: kerberos5_server_enable="YES" kadmind5_server_enable="YES" Next, edit /etc/krb5.conf as follows: [libdefaults] default_realm = EXAMPLE.ORG [realms] EXAMPLE.ORG = { kdc = kerberos.example.org admin_server = kerberos.example.org } [domain_realm] .example.org = EXAMPLE.ORG In this example, the KDC will use the fully-qualified hostname kerberos.example.org. The hostname of the KDC must be resolvable in the DNS. Kerberos can also use the DNS to locate KDCs, instead of a [realms] section in /etc/krb5.conf. For large organizations that have their own DNS servers, the above example could be trimmed to: [libdefaults] default_realm = EXAMPLE.ORG [domain_realm] .example.org = EXAMPLE.ORG With the following lines being included in the example.org zone file: _kerberos._udp IN SRV 01 00 88 kerberos.example.org. _kerberos._tcp IN SRV 01 00 88 kerberos.example.org. _kpasswd._udp IN SRV 01 00 464 kerberos.example.org. _kerberos-adm._tcp IN SRV 01 00 749 kerberos.example.org. _kerberos IN TXT EXAMPLE.ORG In order for clients to be able to find the Kerberos services, they must have either a fully configured /etc/krb5.conf or a minimally configured /etc/krb5.conf and a properly configured DNS server. Next, create the Kerberos database which contains the keys of all principals (users and hosts) encrypted with a master password. It is not required to remember this password as it will be stored in /var/heimdal/m-key; it would be reasonable to use a 45-character random password for this purpose. To create the master key, run kstash and enter a password: &prompt.root; kstash Master key: xxxxxxxxxxxxxxxxxxxxxxx Verifying password - Master key: xxxxxxxxxxxxxxxxxxxxxxx Once the master key has been created, the database should be initialized. The Kerberos administrative tool &man.kadmin.8; can be used on the KDC in a mode that operates directly on the database, without using the &man.kadmind.8; network service, as kadmin -l. This resolves the chicken-and-egg problem of trying to connect to the database before it is created. At the kadmin prompt, use init to create the realm's initial database: &prompt.root; kadmin -l kadmin> init EXAMPLE.ORG Realm max ticket life [unlimited]: Lastly, while still in kadmin, create the first principal using add. Stick to the default options for the principal for now, as these can be changed later with modify. Type ? at the prompt to see the available options. kadmin> add tillman Max ticket life [unlimited]: Max renewable life [unlimited]: Attributes []: Password: xxxxxxxx Verifying password - Password: xxxxxxxx Next, start the KDC services by running service kerberos start and service kadmind start. 
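Written out as commands, and assuming the rc.conf entries shown at the beginning of this section are in place, this step is:

&prompt.root; service kerberos start
&prompt.root; service kadmind start

If the entries have not yet been added to /etc/rc.conf, onestart may be used in place of start to launch the daemons once without enabling them permanently.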
While there will not be any kerberized daemons running at this point, it is possible to confirm that the KDC is functioning by obtaining a ticket for the principal that was just created: &prompt.user; kinit tillman tillman@EXAMPLE.ORG's Password: Confirm that a ticket was successfully obtained using klist: &prompt.user; klist Credentials cache: FILE:/tmp/krb5cc_1001 Principal: tillman@EXAMPLE.ORG Issued Expires Principal Aug 27 15:37:58 2013 Aug 28 01:37:58 2013 krbtgt/EXAMPLE.ORG@EXAMPLE.ORG The temporary ticket can be destroyed when the test is finished: &prompt.user; kdestroy Configuring a Server to Use <application>Kerberos</application> Kerberos5 enabling services The first step in configuring a server to use Kerberos authentication is to ensure that it has the correct configuration in /etc/krb5.conf. The version from the KDC can be used as-is, or it can be regenerated on the new system. Next, create /etc/krb5.keytab on the server. This is the main part of Kerberizing a service — it corresponds to generating a secret shared between the service and the KDC. The secret is a cryptographic key, stored in a keytab. The keytab contains the server's host key, which allows it and the KDC to verify each others' identity. It must be transmitted to the server in a secure fashion, as the security of the server can be broken if the key is made public. Typically, the keytab is generated on an administrator's trusted machine using kadmin, then securely transferred to the server, e.g., with &man.scp.1;; it can also be created directly on the server if that is consistent with the desired security policy. It is very important that the keytab is transmitted to the server in a secure fashion: if the key is known by some other party, that party can impersonate any user to the server! Using kadmin on the server directly is convenient, because the entry for the host principal in the KDC database is also created using kadmin. Of course, kadmin is a kerberized service; a Kerberos ticket is needed to authenticate to the network service, but to ensure that the user running kadmin is actually present (and their session has not been hijacked), kadmin will prompt for the password to get a fresh ticket. The principal authenticating to the kadmin service must be permitted to use the kadmin interface, as specified in kadmind.acl. See the section titled Remote administration in info heimdal for details on designing access control lists. Instead of enabling remote kadmin access, the administrator could securely connect to the KDC via the local console or &man.ssh.1;, and perform administration locally using kadmin -l. After installing /etc/krb5.conf, use add --random-key in kadmin. This adds the server's host principal to the database, but does not extract a copy of the host principal key to a keytab. To generate the keytab, use ext to extract the server's host principal key to its own keytab: &prompt.root; kadmin kadmin> add --random-key host/myserver.example.org Max ticket life [unlimited]: Max renewable life [unlimited]: Principal expiration time [never]: Password expiration time [never]: Attributes []: kadmin> ext_keytab host/myserver.example.org kadmin> exit Note that ext_keytab stores the extracted key in /etc/krb5.keytab by default. 
This is good when being run on the server being kerberized, but the --keytab path/to/file argument should be used when the keytab is being extracted elsewhere: &prompt.root; kadmin kadmin> ext_keytab --keytab=/tmp/example.keytab host/myserver.example.org kadmin> exit The keytab can then be securely copied to the server using &man.scp.1; or removable media. Be sure to specify a non-default keytab name to avoid inserting unneeded keys into the system's keytab. At this point, the server can read encrypted messages from the KDC using its shared key, stored in krb5.keytab. It is now ready for the Kerberos-using services to be enabled. One of the most common such services is &man.sshd.8;, which supports Kerberos via the GSS-API. In /etc/ssh/sshd_config, add the line: GSSAPIAuthentication yes After making this change, &man.sshd.8; must be restarted for the new configuration to take effect: service sshd restart. Configuring a Client to Use <application>Kerberos</application> Kerberos5 configure clients As it was for the server, the client requires configuration in /etc/krb5.conf. Copy the file in place (securely) or re-enter it as needed. Test the client by using kinit, klist, and kdestroy from the client to obtain, show, and then delete a ticket for an existing principal. Kerberos applications should also be able to connect to Kerberos enabled servers. If that does not work but obtaining a ticket does, the problem is likely with the server and not with the client or the KDC. In the case of kerberized &man.ssh.1;, GSS-API is disabled by default, so test using ssh -o GSSAPIAuthentication=yes hostname. When testing a Kerberized application, try using a packet sniffer such as tcpdump to confirm that no sensitive information is sent in the clear. Various Kerberos client applications are available. With the advent of a bridge so that applications using SASL for authentication can use GSS-API mechanisms as well, large classes of client applications can use Kerberos for authentication, from Jabber clients to IMAP clients. .k5login .k5users Users within a realm typically have their Kerberos principal mapped to a local user account. Occasionally, one needs to grant access to a local user account to someone who does not have a matching Kerberos principal. For example, tillman@EXAMPLE.ORG may need access to the local user account webdevelopers. Other principals may also need access to that local account. The .k5login and .k5users files, placed in a user's home directory, can be used to solve this problem. For example, if the following .k5login is placed in the home directory of webdevelopers, both principals listed will have access to that account without requiring a shared password: tillman@example.org jdoe@example.org Refer to &man.ksu.1; for more information about .k5users. <acronym>MIT</acronym> Differences The major difference between the MIT and Heimdal implementations is that kadmin has a different, but equivalent, set of commands and uses a different protocol. If the KDC is MIT, the Heimdal version of kadmin cannot be used to administer the KDC remotely, and vice versa. Client applications may also use slightly different command line options to accomplish the same tasks. Following the instructions at http://web.mit.edu/Kerberos/www/ is recommended. Be careful of path issues: the MIT port installs into /usr/local/ by default, and the &os; system applications run instead of the MIT versions if PATH lists the system directories first.
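As a quick, informal check of which versions will run, use &man.which.1; to show the first kinit and kadmin found in the current search path: &prompt.user; which kinit kadmin If the base system binaries in /usr/bin and /usr/sbin are listed but the port's versions are wanted, adjust PATH in the shell startup file so that /usr/local/bin and /usr/local/sbin come first.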
When using MIT Kerberos as a KDC on &os;, the following edits should also be made to rc.conf: kerberos5_server="/usr/local/sbin/krb5kdc" kadmind5_server="/usr/local/sbin/kadmind" kerberos5_server_flags="" kerberos5_server_enable="YES" kadmind5_server_enable="YES" <application>Kerberos</application> Tips, Tricks, and Troubleshooting When configuring and troubleshooting Kerberos, keep the following points in mind: When using either Heimdal or MIT Kerberos from ports, ensure that the PATH lists the port's versions of the client applications before the system versions. If all the computers in the realm do not have synchronized time settings, authentication may fail. describes how to synchronize clocks using NTP. If the hostname is changed, the host/ principal must be changed and the keytab updated. This also applies to special keytab entries like the HTTP/ principal used for Apache's www/mod_auth_kerb. All hosts in the realm must be both forward and reverse resolvable in DNS or, at a minimum, exist in /etc/hosts. CNAMEs will work, but the A and PTR records must be correct and in place. The error message for unresolvable hosts is not intuitive: Kerberos5 refuses authentication because Read req failed: Key table entry not found. Some operating systems that act as clients to the KDC do not set the permissions for ksu to be setuid root. This means that ksu does not work. This is a permissions problem, not a KDC error. With MIT Kerberos, to allow a principal to have a ticket life longer than the default lifetime of ten hours, use modify_principal at the &man.kadmin.8; prompt to change the maxlife of both the principal in question and the krbtgt principal. The principal can then use kinit -l to request a ticket with a longer lifetime. When running a packet sniffer on the KDC to aid in troubleshooting while running kinit from a workstation, the Ticket Granting Ticket (TGT) is sent immediately, even before the password is typed. This is because the Kerberos server freely transmits a TGT to any unauthorized request. However, every TGT is encrypted in a key derived from the user's password. When a user types their password, it is not sent to the KDC, it is instead used to decrypt the TGT that kinit already obtained. If the decryption process results in a valid ticket with a valid time stamp, the user has valid Kerberos credentials. These credentials include a session key for establishing secure communications with the Kerberos server in the future, as well as the actual TGT, which is encrypted with the Kerberos server's own key. This second layer of encryption allows the Kerberos server to verify the authenticity of each TGT. Host principals can have a longer ticket lifetime. If the user principal has a lifetime of a week but the host being connected to has a lifetime of nine hours, the user cache will have an expired host principal and the ticket cache will not work as expected. When setting up krb5.dict to prevent specific bad passwords from being used as described in &man.kadmind.8;, remember that it only applies to principals that have a password policy assigned to them. The format used in krb5.dict is one string per line. Creating a symbolic link to /usr/share/dict/words might be useful. Mitigating <application>Kerberos</application> Limitations Kerberos5 limitations and shortcomings Since Kerberos is an all or nothing approach, every service enabled on the network must either be modified to work with Kerberos or be otherwise secured against network attacks. 
This is to prevent user credentials from being stolen and re-used. An example is when Kerberos is enabled on all remote shells but the non-Kerberized POP3 mail server sends passwords in plain text. The KDC is a single point of failure. By design, the KDC must be as secure as its master password database. The KDC should have absolutely no other services running on it and should be physically secure. The danger is high because Kerberos stores all passwords encrypted with the same master key which is stored as a file on the KDC. A compromised master key is not quite as bad as one might fear. The master key is only used to encrypt the Kerberos database and as a seed for the random number generator. As long as access to the KDC is secure, an attacker cannot do much with the master key. If the KDC is unavailable, network services are unusable as authentication cannot be performed. This can be alleviated with a single master KDC and one or more slaves, and with careful implementation of secondary or fall-back authentication using PAM. Kerberos allows users, hosts and services to authenticate between themselves. It does not have a mechanism to authenticate the KDC to the users, hosts, or services. This means that a trojanned kinit could record all user names and passwords. File system integrity checking tools like security/tripwire can alleviate this. Resources and Further Information Kerberos5 external resources The Kerberos FAQ Designing an Authentication System: a Dialog in Four Scenes RFC 4120, The Kerberos Network Authentication Service (V5) MIT Kerberos home page Heimdal Kerberos home page OpenSSL TomRhodesWritten by security OpenSSL OpenSSL is an open source implementation of the SSL and TLS protocols. It provides an encryption transport layer on top of the normal communications layer, allowing it to be intertwined with many network applications and services. The version of OpenSSL included in &os; supports the Secure Sockets Layer v2/v3 (SSLv2/SSLv3) and Transport Layer Security v1 (TLSv1) network security protocols and can be used as a general cryptographic library. OpenSSL is often used to encrypt authentication of mail clients and to secure web based transactions such as credit card payments. Some ports, such as www/apache24 and databases/postgresql91-server, include a compile option for building with OpenSSL. &os; provides two versions of OpenSSL: one in the base system and one in the Ports Collection. Users can choose which version to use by default for other ports using the following knobs: WITH_OPENSSL_PORT: when set, the port will use OpenSSL from the security/openssl port, even if the version in the base system is up to date or newer. WITH_OPENSSL_BASE: when set, the port will compile against OpenSSL provided by the base system. Another common use of OpenSSL is to provide certificates for use with software applications. Certificates can be used to verify the credentials of a company or individual. If a certificate has not been signed by an external Certificate Authority (CA), such as http://www.verisign.com, the application that uses the certificate will produce a warning. There is a cost associated with obtaining a signed certificate and using a signed certificate is not mandatory as certificates can be self-signed. However, using an external authority will prevent warnings and can put users at ease. This section demonstrates how to create and use certificates on a &os; system. Refer to for an example of how to create a CA for signing one's own certificates. 
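For example, to make every port that offers the choice link against the port's OpenSSL, the knob described above can be set globally. This is a minimal sketch and assumes the setting is placed in /etc/make.conf, where the ports framework will pick it up: WITH_OPENSSL_PORT=yes Remove the line, or set WITH_OPENSSL_BASE instead, to return to the base system library for subsequent builds.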
Generating Certificates OpenSSL certificate generation To generate a certificate that will be signed by an external CA, issue the following command and input the information requested at the prompts. This input information will be written to the certificate. At the Common Name prompt, input the fully qualified name for the system that will use the certificate. If this name does not match the server, the application verifying the certificate will issue a warning to the user, rendering the verification provided by the certificate as useless. &prompt.root; openssl req -new -nodes -out req.pem -keyout cert.pem Generating a 1024 bit RSA private key ................++++++ .......................................++++++ writing new private key to 'cert.pem' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:PA Locality Name (eg, city) []:Pittsburgh Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company Organizational Unit Name (eg, section) []:Systems Administrator Common Name (eg, YOUR name) []:localhost.example.org Email Address []:trhodes@FreeBSD.org Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:SOME PASSWORD An optional company name []:Another Name Other options, such as the expire time and alternate encryption algorithms, are available when creating a certificate. A complete list of options is described in &man.openssl.1;. This command will create two files in the current directory. The certificate request, req.pem, can be sent to a CA who will validate the entered credentials, sign the request, and return the signed certificate. The second file, cert.pem, is the private key for the certificate and should be stored in a secure location. If this falls in the hands of others, it can be used to impersonate the user or the server. Alternately, if a signature from a CA is not required, a self-signed certificate can be created. First, generate the RSA key: &prompt.root; openssl dsaparam -rand -genkey -out myRSA.key 1024 0 semi-random bytes loaded Generating DSA parameters, 1024 bit long prime This could take some time .............+........+...........+...+....+........+.....+++++++++++++++++++++++++++++++++++++++++++++++++++* ..........+.+...........+....+........+.................+.+++++++++++++++++++++++++++++++++++++++++++++++++++* Next, generate the CA key. When prompted, enter a passphrase between 4 to 1023 characters. Remember this passphrase as it is needed whenever the key is used to sign a certificate. &prompt.root; openssl gendsa -des3 -out myca.key myRSA.key Generating DSA key, 1024 bits Enter PEM pass phrase: Verifying - Enter PEM pass phrase: Use this key to create a self-signed certificate. When prompted, enter the passphrase. Then follow the usual prompts for creating a certificate: &prompt.root; openssl req -new -x509 -days 365 -key myca.key -out new.crt Enter pass phrase for myca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. 
There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:PA Locality Name (eg, city) []:Pittsburgh Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company Organizational Unit Name (eg, section) []:Systems Administrator Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org Email Address []:trhodes@FreeBSD.org This will create two new files in the current directory: a certificate authority signature file, myca.key, and the certificate itself, new.crt. These should be placed in a directory, preferably under /etc, which is readable only by root. Permissions of 0700 are appropriate for these files and can be set using chmod. Using Certificates One use for a certificate is to encrypt connections to the Sendmail mail server in order to prevent the use of clear text authentication. Some mail clients will display an error if the user has not installed a local copy of the certificate. Refer to the documentation included with the software for more information on certificate installation. - In &os; 10.0-RELEASE and above, it is possible to create - a self-signed certificate for Sendmail - automatically. To enable this, add the - following lines to + In &os; 10.0-RELEASE and above, it is possible to create a + self-signed certificate for + Sendmail automatically. To enable + this, add the following lines to /etc/rc.conf: sendmail_enable="YES" sendmail_cert_create="YES" sendmail_cert_cn="localhost.example.org" This will automatically create a self-signed certificate, /etc/mail/certs/host.cert, a signing key, /etc/mail/certs/host.key, and a CA certificate, /etc/mail/certs/cacert.pem. The certificate will use the Common Name specified in . After saving the edits, restart Sendmail: &prompt.root; service sendmail restart If all went well, there will be no error messages in /var/log/maillog. For a simple test, connect to the mail server's listening port using telnet: &prompt.root; telnet example.com 25 Trying 192.0.34.166... Connected to example.com. Escape character is '^]'. 220 example.com ESMTP Sendmail 8.14.7/8.14.7; Fri, 18 Apr 2014 11:50:32 -0400 (EDT) ehlo example.com 250-example.com Hello example.com [192.0.34.166], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 250-DSN 250-ETRN 250-AUTH LOGIN PLAIN 250-STARTTLS 250-DELIVERBY 250 HELP quit 221 2.0.0 example.com closing connection Connection closed by foreign host. If the STARTTLS line appears in the output, everything is working correctly. <acronym>VPN</acronym> over <acronym>IPsec</acronym> Nik Clayton
nik@FreeBSD.org
Written by
Hiten M. Pandya
hmp@FreeBSD.org
Written by
IPsec Internet Protocol Security (IPsec) is a set of protocols which sit on top of the Internet Protocol (IP) layer. It allows two or more hosts to communicate in a secure manner by authenticating and encrypting each IP packet of a communication session. The &os; IPsec network stack is based on the http://www.kame.net/ implementation and supports both IPv4 and IPv6 sessions. IPsec ESP IPsec AH IPsec is comprised of the following sub-protocols: Encapsulated Security Payload (ESP): this protocol protects the IP packet data from third party interference by encrypting the contents using symmetric cryptography algorithms such as Blowfish and 3DES. Authentication Header (AH): this protocol protects the IP packet header from third party interference and spoofing by computing a cryptographic checksum and hashing the IP packet header fields with a secure hashing function. This is then followed by an additional header that contains the hash, to allow the information in the packet to be authenticated. IP Payload Compression Protocol (IPComp): this protocol tries to increase communication performance by compressing the IP payload in order to reduce the amount of data sent. These protocols can either be used together or separately, depending on the environment. VPN virtual private network VPN IPsec supports two modes of operation. The first mode, Transport Mode, protects communications between two hosts. The second mode, Tunnel Mode, is used to build virtual tunnels, commonly known as Virtual Private Networks (VPNs). Consult &man.ipsec.4; for detailed information on the IPsec subsystem in &os;. To add IPsec support to the kernel, add the following options to the custom kernel configuration file and rebuild the kernel using the instructions in : kernel options IPSEC options IPSEC #IP security device crypto kernel options IPSEC_DEBUG If IPsec debugging support is desired, the following kernel option should also be added: options IPSEC_DEBUG #debug for IP security The rest of this chapter demonstrates the process of setting up an IPsec VPN between a home network and a corporate network. In the example scenario: Both sites are connected to the Internet through a gateway that is running &os;. The gateway on each network has at least one external IP address. In this example, the corporate LAN's external IP address is 172.16.5.4 and the home LAN's external IP address is 192.168.1.12. The internal addresses of the two networks can be either public or private IP addresses. However, the address space must not collide. For example, both networks cannot use 192.168.1.x. In this example, the corporate LAN's internal IP address is 10.246.38.1 and the home LAN's internal IP address is 10.0.0.5. Configuring a <acronym>VPN</acronym> on &os; Tom Rhodes
trhodes@FreeBSD.org
Written by
To begin, security/ipsec-tools must be installed from the Ports Collection. This software provides a number of applications which support the configuration. The next requirement is to create two &man.gif.4; pseudo-devices which will be used to tunnel packets and allow both networks to communicate properly. As root, run the following commands, replacing internal and external with the real IP addresses of the internal and external interfaces of the two gateways: &prompt.root; ifconfig gif0 create &prompt.root; ifconfig gif0 internal1 internal2 &prompt.root; ifconfig gif0 tunnel external1 external2 Verify the setup on each gateway, using ifconfig. Here is the output from Gateway 1: gif0: flags=8051 mtu 1280 tunnel inet 172.16.5.4 --> 192.168.1.12 inet6 fe80::2e0:81ff:fe02:5881%gif0 prefixlen 64 scopeid 0x6 inet 10.246.38.1 --> 10.0.0.5 netmask 0xffffff00 Here is the output from Gateway 2: gif0: flags=8051 mtu 1280 tunnel inet 192.168.1.12 --> 172.16.5.4 inet 10.0.0.5 --> 10.246.38.1 netmask 0xffffff00 inet6 fe80::250:bfff:fe3a:c1f%gif0 prefixlen 64 scopeid 0x4 Once complete, both internal IP addresses should be reachable using &man.ping.8;: priv-net# ping 10.0.0.5 PING 10.0.0.5 (10.0.0.5): 56 data bytes 64 bytes from 10.0.0.5: icmp_seq=0 ttl=64 time=42.786 ms 64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=19.255 ms 64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=20.440 ms 64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=21.036 ms --- 10.0.0.5 ping statistics --- 4 packets transmitted, 4 packets received, 0% packet loss round-trip min/avg/max/stddev = 19.255/25.879/42.786/9.782 ms corp-net# ping 10.246.38.1 PING 10.246.38.1 (10.246.38.1): 56 data bytes 64 bytes from 10.246.38.1: icmp_seq=0 ttl=64 time=28.106 ms 64 bytes from 10.246.38.1: icmp_seq=1 ttl=64 time=42.917 ms 64 bytes from 10.246.38.1: icmp_seq=2 ttl=64 time=127.525 ms 64 bytes from 10.246.38.1: icmp_seq=3 ttl=64 time=119.896 ms 64 bytes from 10.246.38.1: icmp_seq=4 ttl=64 time=154.524 ms --- 10.246.38.1 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 28.106/94.594/154.524/49.814 ms As expected, both sides have the ability to send and receive ICMP packets from the privately configured addresses. Next, both gateways must be told how to route packets in order to correctly send traffic from either network. The following commands, run on the gateway indicated by each prompt, will achieve this goal; the line shown after each command is the confirmation printed by &man.route.8;: corp-net# route add 10.0.0.0 10.0.0.5 255.255.255.0 add net 10.0.0.0: gateway 10.0.0.5 priv-net# route add 10.246.38.0 10.246.38.1 255.255.255.0 add host 10.246.38.0: gateway 10.246.38.1 At this point, internal machines should be reachable from each gateway as well as from machines behind the gateways.
Again, use &man.ping.8; to confirm: corp-net# ping 10.0.0.8 PING 10.0.0.8 (10.0.0.8): 56 data bytes 64 bytes from 10.0.0.8: icmp_seq=0 ttl=63 time=92.391 ms 64 bytes from 10.0.0.8: icmp_seq=1 ttl=63 time=21.870 ms 64 bytes from 10.0.0.8: icmp_seq=2 ttl=63 time=198.022 ms 64 bytes from 10.0.0.8: icmp_seq=3 ttl=63 time=22.241 ms 64 bytes from 10.0.0.8: icmp_seq=4 ttl=63 time=174.705 ms --- 10.0.0.8 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 21.870/101.846/198.022/74.001 ms priv-net# ping 10.246.38.107 PING 10.246.38.1 (10.246.38.107): 56 data bytes 64 bytes from 10.246.38.107: icmp_seq=0 ttl=64 time=53.491 ms 64 bytes from 10.246.38.107: icmp_seq=1 ttl=64 time=23.395 ms 64 bytes from 10.246.38.107: icmp_seq=2 ttl=64 time=23.865 ms 64 bytes from 10.246.38.107: icmp_seq=3 ttl=64 time=21.145 ms 64 bytes from 10.246.38.107: icmp_seq=4 ttl=64 time=36.708 ms --- 10.246.38.107 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 21.145/31.721/53.491/12.179 ms Setting up the tunnels is the easy part. Configuring a secure link is a more in depth process. The following configuration uses pre-shared (PSK) RSA keys. Other than the IP addresses, the /usr/local/etc/racoon/racoon.conf on both gateways will be identical and look similar to: path pre_shared_key "/usr/local/etc/racoon/psk.txt"; #location of pre-shared key file log debug; #log verbosity setting: set to 'notify' when testing and debugging is complete padding # options are not to be changed { maximum_length 20; randomize off; strict_check off; exclusive_tail off; } timer # timing options. change as needed { counter 5; interval 20 sec; persend 1; # natt_keepalive 15 sec; phase1 30 sec; phase2 15 sec; } listen # address [port] that racoon will listen on { isakmp 172.16.5.4 [500]; isakmp_natt 172.16.5.4 [4500]; } remote 192.168.1.12 [500] { exchange_mode main,aggressive; doi ipsec_doi; situation identity_only; my_identifier address 172.16.5.4; peers_identifier address 192.168.1.12; lifetime time 8 hour; passive off; proposal_check obey; # nat_traversal off; generate_policy off; proposal { encryption_algorithm blowfish; hash_algorithm md5; authentication_method pre_shared_key; lifetime time 30 sec; dh_group 1; } } sainfo (address 10.246.38.0/24 any address 10.0.0.0/24 any) # address $network/$netmask $type address $network/$netmask $type ( $type being any or esp) { # $network must be the two internal networks you are joining. pfs_group 1; lifetime time 36000 sec; encryption_algorithm blowfish,3des,des; authentication_algorithm hmac_md5,hmac_sha1; compression_algorithm deflate; } For descriptions of each available option, refer to the manual page for racoon.conf. The Security Policy Database (SPD) needs to be configured so that &os; and racoon are able to encrypt and decrypt network traffic between the hosts. This can be achieved with a shell script, similar to the following, on the corporate gateway. This file will be used during system initialization and should be saved as /usr/local/etc/racoon/setkey.conf. 
flush; spdflush; # To the home network spdadd 10.246.38.0/24 10.0.0.0/24 any -P out ipsec esp/tunnel/172.16.5.4-192.168.1.12/use; spdadd 10.0.0.0/24 10.246.38.0/24 any -P in ipsec esp/tunnel/192.168.1.12-172.16.5.4/use; Once in place, racoon may be started on both gateways using the following command: &prompt.root; /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf -l /var/log/racoon.log The output should be similar to the following: corp-net# /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf Foreground mode. 2006-01-30 01:35:47: INFO: begin Identity Protection mode. 2006-01-30 01:35:48: INFO: received Vendor ID: KAME/racoon 2006-01-30 01:35:55: INFO: received Vendor ID: KAME/racoon 2006-01-30 01:36:04: INFO: ISAKMP-SA established 172.16.5.4[500]-192.168.1.12[500] spi:623b9b3bd2492452:7deab82d54ff704a 2006-01-30 01:36:05: INFO: initiate new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0] 2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=28496098(0x1b2d0e2) 2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=47784998(0x2d92426) 2006-01-30 01:36:13: INFO: respond new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0] 2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=124397467(0x76a279b) 2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=175852902(0xa7b4d66) To ensure the tunnel is working properly, switch to another console and use &man.tcpdump.1; to view network traffic using the following command. Replace em0 with the network interface card as required: &prompt.root; tcpdump -i em0 host 172.16.5.4 and dst 192.168.1.12 Data similar to the following should appear on the console. If not, there is an issue and debugging the returned data will be required. 01:47:32.021683 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xa) 01:47:33.022442 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xb) 01:47:34.024218 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xc) At this point, both networks should be available and seem to be part of the same network. Most likely both networks are protected by a firewall. To allow traffic to flow between them, rules need to be added to pass packets. For the &man.ipfw.8; firewall, add the following lines to the firewall configuration file: ipfw add 00201 allow log esp from any to any ipfw add 00202 allow log ah from any to any ipfw add 00203 allow log ipencap from any to any ipfw add 00204 allow log udp from any 500 to any The rule numbers may need to be altered depending on the current host configuration. 
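Once the rules have been added, the active ruleset can be reviewed on the gateway; the ESP, AH, ipencap, and UDP entries should appear with the rule numbers chosen above. This is only a quick verification step, not part of the configuration itself: &prompt.root; ipfw list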
For users of &man.pf.4; or &man.ipf.8;, the following rules should do the trick: pass in quick proto esp from any to any pass in quick proto ah from any to any pass in quick proto ipencap from any to any pass in quick proto udp from any port = 500 to any port = 500 pass in quick on gif0 from any to any pass out quick proto esp from any to any pass out quick proto ah from any to any pass out quick proto ipencap from any to any pass out quick proto udp from any port = 500 to any port = 500 pass out quick on gif0 from any to any Finally, to allow the machine to start support for the VPN during system initialization, add the following lines to /etc/rc.conf: ipsec_enable="YES" ipsec_program="/usr/local/sbin/setkey" ipsec_file="/usr/local/etc/racoon/setkey.conf" # allows setting up spd policies on boot racoon_enable="yes"
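The racoon.conf shown earlier references the pre-shared key file /usr/local/etc/racoon/psk.txt, whose contents are not listed above. As a rough sketch, each line in that file pairs a peer identifier with the shared secret; the corporate gateway would hold an entry for the home gateway, and the home gateway a matching entry for 172.16.5.4 with the same secret. The secret below is only a placeholder: 192.168.1.12 thisisasharedsecret Since the file contains secrets, it should be owned by and readable only by root: &prompt.root; chmod 600 /usr/local/etc/racoon/psk.txt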
OpenSSH ChernLeeContributed by OpenSSH security OpenSSH OpenSSH is a set of network connectivity tools used to provide secure access to remote machines. Additionally, TCP/IP connections can be tunneled or forwarded securely through SSH connections. OpenSSH encrypts all traffic to effectively eliminate eavesdropping, connection hijacking, and other network-level attacks. OpenSSH is maintained by the OpenBSD project and is installed by default in &os;. It is compatible with both SSH version 1 and 2 protocols. When data is sent over the network in an unencrypted form, network sniffers anywhere in between the client and server can steal user/password information or data transferred during the session. OpenSSH offers a variety of authentication and encryption methods to prevent this from happening. More information about OpenSSH is available from http://www.openssh.com/. This section provides an overview of the built-in client utilities to securely access other systems and securely transfer files from a &os; system. It then describes how to configure a SSH server on a &os; system. More information is available in the man pages mentioned in this chapter. Using the SSH Client Utilities OpenSSH client To log into a SSH server, use ssh and specify a username that exists on that server and the IP address or hostname of the server. If this is the first time a connection has been made to the specified server, the user will be prompted to first verify the server's fingerprint: &prompt.root; ssh user@example.com The authenticity of host 'example.com (10.0.0.1)' can't be established. ECDSA key fingerprint is 25:cc:73:b5:b3:96:75:3d:56:19:49:d2:5c:1f:91:3b. Are you sure you want to continue connecting (yes/no)? yes Permanently added 'example.com' (ECDSA) to the list of known hosts. Password for user@example.com: user_password SSH utilizes a key fingerprint system to verify the authenticity of the server when the client connects. When the user accepts the key's fingerprint by typing yes when connecting for the first time, a copy of the key is saved to .ssh/known_hosts in the user's home directory. Future attempts to login are verified against the saved key and ssh will display an alert if the server's key does not match the saved key. If this occurs, the user should first verify why the key has changed before continuing with the connection. By default, recent versions of OpenSSH only accept SSHv2 connections. By default, the client will use version 2 if possible and will fall back to version 1 if the server does not support version 2. To force ssh to only use the specified protocol, include or . Additional options are described in &man.ssh.1;. OpenSSH secure copy &man.scp.1; Use &man.scp.1; to securely copy a file to or from a remote machine. This example copies COPYRIGHT on the remote system to a file of the same name in the current directory of the local system: &prompt.root; scp user@example.com:/COPYRIGHT COPYRIGHT Password for user@example.com: ******* COPYRIGHT 100% |*****************************| 4735 00:00 &prompt.root; Since the fingerprint was already verified for this host, the server's key is automatically checked before prompting for the user's password. The arguments passed to scp are similar to cp. The file or files to copy is the first argument and the destination to copy to is the second. Since the file is fetched over the network, one or more of the file arguments takes the form . Be aware when copying directories recursively that scp uses , whereas cp uses . 
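Copying in the other direction works the same way, with the remote specification given as the destination. A short example, where the file and directory names are placeholders and -r requests a recursive copy: &prompt.user; scp COPYRIGHT user@example.com:/tmp/COPYRIGHT &prompt.user; scp -r ~/documents user@example.com:backup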
To open an interactive session for copying files, use sftp. Refer to &man.sftp.1; for a list of available commands while in an sftp session. Key-based Authentication Instead of using passwords, a client can be configured to connect to the remote machine using keys. To generate DSA or RSA authentication keys, use ssh-keygen. To generate a public and private key pair, specify the type of key and follow the prompts. It is recommended to protect the keys with a memorable, but hard to guess passphrase. &prompt.user; ssh-keygen -t dsa Generating public/private dsa key pair. Enter file in which to save the key (/home/user/.ssh/id_dsa): Created directory '/home/user/.ssh'. Enter passphrase (empty for no passphrase): type some passphrase here which can contain spaces Enter same passphrase again: type some passphrase here which can contain spaces Your identification has been saved in /home/user/.ssh/id_dsa. Your public key has been saved in /home/user/.ssh/id_dsa.pub. The key fingerprint is: bb:48:db:f2:93:57:80:b6:aa:bc:f5:d5:ba:8f:79:17 user@host.example.com Depending upon the specified protocol, the private key is stored in ~/.ssh/id_dsa (or ~/.ssh/id_rsa), and the public key is stored in ~/.ssh/id_dsa.pub (or ~/.ssh/id_rsa.pub). The public key must be first copied to ~/.ssh/authorized_keys on the remote machine in order for key-based authentication to work. Many users believe that keys are secure by design and will use a key without a passphrase. This is dangerous behavior. An administrator can verify that a key pair is protected by a passphrase by viewing the private key manually. If the private key file contains the word ENCRYPTED, the key owner is using a passphrase. In addition, to better secure end users, from may be placed in the public key file. For example, adding from="192.168.10.5" in the front of ssh-rsa or rsa-dsa prefix will only allow that specific user to login from that IP address. The various options and files can be different according to the OpenSSH version. To avoid problems, consult &man.ssh-keygen.1;. If a passphrase is used, the user will be prompted for the passphrase each time a connection is made to the server. To load SSH keys into memory, without needing to type the passphrase each time, use &man.ssh-agent.1; and &man.ssh-add.1;. Authentication is handled by ssh-agent, using the private key(s) that are loaded into it. Then, ssh-agent should be used to launch another application such as a shell or a window manager. To use ssh-agent in a shell, start it with a shell as an argument. Next, add the identity by running ssh-add and providing it the passphrase for the private key. Once these steps have been completed, the user will be able to ssh to any host that has the corresponding public key installed. For example: &prompt.user; ssh-agent csh &prompt.user; ssh-add Enter passphrase for key '/usr/home/user/.ssh/id_dsa': type passphrase here Identity added: /usr/home/user/.ssh/id_dsa (/usr/home/user/.ssh/id_dsa) &prompt.user; To use ssh-agent in &xorg;, add an entry for it in ~/.xinitrc. This provides the ssh-agent services to all programs launched in &xorg;. An example ~/.xinitrc might look like this: exec ssh-agent startxfce4 This launches ssh-agent, which in turn launches XFCE, every time &xorg; starts. Once &xorg; has been restarted so that the changes can take effect, run ssh-add to load all of the SSH keys. <acronym>SSH</acronym> Tunneling OpenSSH tunneling OpenSSH has the ability to create a tunnel to encapsulate another protocol in an encrypted session. 
The following command tells ssh to create a tunnel for telnet: &prompt.user; ssh -2 -N -f -L 5023:localhost:23 user@foo.example.com &prompt.user; This example uses the following options: Forces ssh to use version 2 to connect to the server. Indicates no command, or tunnel only. If omitted, ssh initiates a normal session. Forces ssh to run in the background. Indicates a local tunnel in localport:remotehost:remoteport format. The login name to use on the specified remote SSH server. An SSH tunnel works by creating a listen socket on localhost on the specified localport. It then forwards any connections received on localport via the SSH connection to the specified remotehost:remoteport. In the example, port 5023 on the client is forwarded to port 23 on the remote machine. Since port 23 is used by telnet, this creates an encrypted telnet session through an SSH tunnel. This method can be used to wrap any number of insecure TCP protocols such as SMTP, POP3, and FTP, as seen in the following examples. Create a Secure Tunnel for <acronym>SMTP</acronym> &prompt.user; ssh -2 -N -f -L 5025:localhost:25 user@mailserver.example.com user@mailserver.example.com's password: ***** &prompt.user; telnet localhost 5025 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 mailserver.example.com ESMTP This can be used in conjunction with ssh-keygen and additional user accounts to create a more seamless SSH tunneling environment. Keys can be used in place of typing a password, and the tunnels can be run as a separate user. Secure Access of a <acronym>POP3</acronym> Server In this example, there is an SSH server that accepts connections from the outside. On the same network resides a mail server running a POP3 server. To check email in a secure manner, create an SSH connection to the SSH server and tunnel through to the mail server: &prompt.user; ssh -2 -N -f -L 2110:mail.example.com:110 user@ssh-server.example.com user@ssh-server.example.com's password: ****** Once the tunnel is up and running, point the email client to send POP3 requests to localhost on port 2110. This connection will be forwarded securely across the tunnel to mail.example.com. Bypassing a Firewall Some firewalls filter both incoming and outgoing connections. For example, a firewall might limit access from remote machines to ports 22 and 80 to only allow SSH and web surfing. This prevents access to any other service which uses a port other than 22 or 80. The solution is to create an SSH connection to a machine outside of the network's firewall and use it to tunnel to the desired service: &prompt.user; ssh -2 -N -f -L 8888:music.example.com:8000 user@unfirewalled-system.example.org user@unfirewalled-system.example.org's password: ******* In this example, a streaming Ogg Vorbis client can now be pointed to localhost port 8888, which will be forwarded over to music.example.com on port 8000, successfully bypassing the firewall. Enabling the SSH Server OpenSSH enabling In addition to providing built-in SSH client utilities, a &os; system can be configured as an SSH server, accepting connections from other SSH clients. To see if sshd is enabled, check /etc/rc.conf for this line and add it if it is missing: sshd_enable="YES" This will start sshd, the daemon program for OpenSSH, the next time the system boots. To start it now: &prompt.root; service sshd start The first time sshd starts on a &os; system, the system's host keys will be automatically created and the fingerprint will be displayed on the console. 
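If the fingerprint needs to be displayed again later, it can be read from the stored public host key with &man.ssh-keygen.1;. This is a small sketch; the file name assumes the ECDSA host key, and the other key types have similarly named files in /etc/ssh: &prompt.root; ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub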
Provide users with the fingerprint so that they can verify it the first time they connect to the server. Refer to &man.sshd.8; for the list of available options when starting sshd and a more complete discussion about authentication, the login process, and the various configuration files. It is a good idea to limit which users can log into the SSH server and from where using the AllowUsers keyword in the OpenSSH server configuration file. For example, to only allow root to log in from 192.168.1.32, add this line to /etc/ssh/sshd_config: AllowUsers root@192.168.1.32 To allow admin to log in from anywhere, list that user without specifying an IP address: AllowUsers admin Multiple users should be listed on the same line, like so: AllowUsers root@192.168.1.32 admin After making changes to /etc/ssh/sshd_config, tell sshd to reload its configuration file by running: &prompt.root; service sshd reload When this keyword is used, it is important to list each user that needs to log into this machine. Any user that is not specified in that line will be locked out. Also, the keywords used in the OpenSSH server configuration file are case-sensitive. If the keyword is not spelled correctly, including its case, it will be ignored. Always test changes to this file to make sure that the edits are working as expected. Refer to &man.sshd.config.5; to verify the spelling and use of the available keywords. Do not confuse /etc/ssh/sshd_config with /etc/ssh/ssh_config (note the extra d in the first filename). The first file configures the server and the second file configures the client. Refer to &man.ssh.config.5; for a listing of the available client settings,. Access Control Lists TomRhodesContributed by ACL Access Control Lists (ACLs) extend the standard &unix; permission model in a &posix;.1e compatible way. This permits an administrator to take advantage of a more fine-grained permissions model. The &os; GENERIC kernel provides ACL support for UFS file systems. Users who prefer to compile a custom kernel must include the following option in their custom kernel configuration file: options UFS_ACL If this option is not compiled in, a warning message will be displayed when attempting to mount a file system with ACL support. ACLs rely on extended attributes which are natively supported in UFS2. This chapter describes how to enable ACL support and provides some usage examples. Enabling <acronym>ACL</acronym> Support ACLs are enabled by the mount-time administrative flag, , which may be added to /etc/fstab. The mount-time flag can also be automatically set in a persistent manner using &man.tunefs.8; to modify a superblock ACLs flag in the file system header. In general, it is preferred to use the superblock flag for several reasons: The superblock flag cannot be changed by a remount using as it requires a complete umount and fresh mount. This means that ACLs cannot be enabled on the root file system after boot. It also means that ACL support on a file system cannot be changed while the system is in use. Setting the superblock flag causes the file system to always be mounted with ACLs enabled, even if there is not an fstab entry or if the devices re-order. This prevents accidental mounting of the file system without ACL support. It is desirable to discourage accidental mounting without ACLs enabled because nasty things can happen if ACLs are enabled, then disabled, then re-enabled without flushing the extended attributes. 
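As a sketch of setting the superblock flag: with the file system unmounted, or from single user mode, run &man.tunefs.8; against the device and then remount. The device name below is only an example: &prompt.root; tunefs -a enable /dev/ada0p6 The flag takes effect the next time the file system is mounted.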
In general, once ACLs are enabled on a file system, they should not be disabled, as the resulting file protections may not be compatible with those intended by the users of the system, and re-enabling ACLs may re-attach the previous ACLs to files that have since had their permissions changed, resulting in unpredictable behavior. File systems with ACLs enabled will show a plus (+) sign in their permission settings: drwx------ 2 robert robert 512 Dec 27 11:54 private drwxrwx---+ 2 robert robert 512 Dec 23 10:57 directory1 drwxrwx---+ 2 robert robert 512 Dec 22 10:20 directory2 drwxrwx---+ 2 robert robert 512 Dec 27 11:57 directory3 drwxr-xr-x 2 robert robert 512 Nov 10 11:54 public_html In this example, directory1, directory2, and directory3 are all taking advantage of ACLs, whereas public_html is not. Using <acronym>ACL</acronym>s File system ACLs can be viewed using getfacl. For instance, to view the ACL settings on test: &prompt.user; getfacl test #file:test #owner:1001 #group:1001 user::rw- group::r-- other::r-- To change the ACL settings on this file, use setfacl. To remove all of the currently defined ACLs from a file or file system, include . However, the preferred method is to use as it leaves the basic fields required for ACLs to work. &prompt.user; setfacl -k test To modify the default ACL entries, use : &prompt.user; setfacl -m u:trhodes:rwx,group:web:r--,o::--- test In this example, there were no pre-defined entries, as they were removed by the previous command. This command restores the default options and assigns the options listed. If a user or group is added which does not exist on the system, an Invalid argument error will be displayed. Refer to &man.getfacl.1; and &man.setfacl.1; for more information about the options available for these commands. Monitoring Third Party Security Issues TomRhodesContributed by portaudit In recent years, the security world has made many improvements to how vulnerability assessment is handled. The threat of system intrusion increases as third party utilities are installed and configured for virtually any operating system available today. Vulnerability assessment is a key factor in security. While &os; releases advisories for the base system, doing so for every third party utility is beyond the &os; Project's capability. There is a way to mitigate third party vulnerabilities and warn administrators of known security issues. A &os; add on utility known as portaudit exists solely for this purpose. The ports-mgmt/portaudit port polls a database, which is updated and maintained by the &os; Security Team and ports developers, for known security issues. To install portaudit from the Ports Collection: &prompt.root; cd /usr/ports/ports-mgmt/portaudit && make install clean During the installation, the configuration files for &man.periodic.8; will be updated, permitting portaudit output in the daily security runs. Ensure that the daily security run emails, which are sent to root's email account, are being read. No other configuration is required. After installation, an administrator can update the database and view known vulnerabilities in installed packages by invoking the following command: &prompt.root; portaudit -Fda The database is automatically updated during the &man.periodic.8; run. The above command is optional and can be used to manually update the database now. 
To audit the third party utilities installed as part of the Ports Collection at anytime, an administrator can run the following command: &prompt.root; portaudit -a portaudit will display messages for any installed vulnerable packages: Affected package: cups-base-1.1.22.0_1 Type of problem: cups-base -- HPGL buffer overflow vulnerability. Reference: <http://www.FreeBSD.org/ports/portaudit/40a3bca2-6809-11d9-a9e7-0001020eed82.html> 1 problem(s) in your installed packages found. You are advised to update or deinstall the affected package(s) immediately. By pointing a web browser to the displayed URL, an administrator may obtain more information about the vulnerability. This will include the versions affected, by &os; port version, along with other web sites which may contain security advisories. portaudit is a powerful utility and is extremely useful when coupled with the portmaster port. &os; Security Advisories TomRhodesContributed by &os; Security Advisories Like many producers of quality operating systems, the &os; Project has a security team which is responsible for determining the End-of-Life (EoL) date for each &os; release and to provide security updates for supported releases which have not yet reached their EoL. More information about the &os; security team and the supported releases is available on the &os; security page. One task of the security team is to respond to reported security vulnerabilities in the &os; operating system. Once a vulnerability is confirmed, the security team verifies the steps necessary to fix the vulnerability and updates the source code with the fix. It then publishes the details as a Security Advisory. Security advisories are published on the &os; website and mailed to the &a.security-notifications.name;, &a.security.name;, and &a.announce.name; mailing lists. This section describes the format of a &os; security advisory. Format of a Security Advisory Here is an example of a &os; security advisory: ============================================================================= -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ============================================================================= FreeBSD-SA-14:04.bind Security Advisory The FreeBSD Project Topic: BIND remote denial of service vulnerability Category: contrib Module: bind Announced: 2014-01-14 Credits: ISC Affects: FreeBSD 8.x and FreeBSD 9.x Corrected: 2014-01-14 19:38:37 UTC (stable/9, 9.2-STABLE) 2014-01-14 19:42:28 UTC (releng/9.2, 9.2-RELEASE-p3) 2014-01-14 19:42:28 UTC (releng/9.1, 9.1-RELEASE-p10) 2014-01-14 19:38:37 UTC (stable/8, 8.4-STABLE) 2014-01-14 19:42:28 UTC (releng/8.4, 8.4-RELEASE-p7) 2014-01-14 19:42:28 UTC (releng/8.3, 8.3-RELEASE-p14) CVE Name: CVE-2014-0591 For general information regarding FreeBSD Security Advisories, including descriptions of the fields above, security branches, and the following sections, please visit <URL:http://security.FreeBSD.org/>. I. Background BIND 9 is an implementation of the Domain Name System (DNS) protocols. The named(8) daemon is an Internet Domain Name Server. II. Problem Description Because of a defect in handling queries for NSEC3-signed zones, BIND can crash with an "INSIST" failure in name.c when processing queries possessing certain properties. This issue only affects authoritative nameservers with at least one NSEC3-signed zone. Recursive-only servers are not at risk. III. Impact An attacker who can send a specially crafted query could cause named(8) to crash, resulting in a denial of service. IV. 
Workaround No workaround is available, but systems not running authoritative DNS service with at least one NSEC3-signed zone using named(8) are not vulnerable. V. Solution Perform one of the following: 1) Upgrade your vulnerable system to a supported FreeBSD stable or release / security branch (releng) dated after the correction date. 2) To update your vulnerable system via a source code patch: The following patches have been verified to apply to the applicable FreeBSD release branches. a) Download the relevant patch from the location below, and verify the detached PGP signature using your PGP utility. [FreeBSD 8.3, 8.4, 9.1, 9.2-RELEASE and 8.4-STABLE] # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch.asc # gpg --verify bind-release.patch.asc [FreeBSD 9.2-STABLE] # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch.asc # gpg --verify bind-stable-9.patch.asc b) Execute the following commands as root: # cd /usr/src # patch < /path/to/patch Recompile the operating system using buildworld and installworld as described in <URL:http://www.FreeBSD.org/handbook/makeworld.html>. Restart the applicable daemons, or reboot the system. 3) To update your vulnerable system via a binary patch: Systems running a RELEASE version of FreeBSD on the i386 or amd64 platforms can be updated via the freebsd-update(8) utility: # freebsd-update fetch # freebsd-update install VI. Correction details The following list contains the correction revision numbers for each affected branch. Branch/path Revision - ------------------------------------------------------------------------- stable/8/ r260646 releng/8.3/ r260647 releng/8.4/ r260647 stable/9/ r260646 releng/9.1/ r260647 releng/9.2/ r260647 - ------------------------------------------------------------------------- To see which files were modified by a particular revision, run the following command, replacing NNNNNN with the revision number, on a machine with Subversion installed: # svn diff -cNNNNNN --summarize svn://svn.freebsd.org/base Or visit the following URL, replacing NNNNNN with the revision number: <URL:http://svnweb.freebsd.org/base?view=revision&revision=NNNNNN> VII. References <URL:https://kb.isc.org/article/AA-01078> <URL:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0591> The latest revision of this advisory is available at <URL:http://security.FreeBSD.org/advisories/FreeBSD-SA-14:04.bind.asc> -----BEGIN PGP SIGNATURE----- iQIcBAEBCgAGBQJS1ZTYAAoJEO1n7NZdz2rnOvQP/2/68/s9Cu35PmqNtSZVVxVG ZSQP5EGWx/lramNf9566iKxOrLRMq/h3XWcC4goVd+gZFrvITJSVOWSa7ntDQ7TO XcinfRZ/iyiJbs/Rg2wLHc/t5oVSyeouyccqODYFbOwOlk35JjOTMUG1YcX+Zasg ax8RV+7Zt1QSBkMlOz/myBLXUjlTZ3Xg2FXVsfFQW5/g2CjuHpRSFx1bVNX6ysoG 9DT58EQcYxIS8WfkHRbbXKh9I1nSfZ7/Hky/kTafRdRMrjAgbqFgHkYTYsBZeav5 fYWKGQRJulYfeZQ90yMTvlpF42DjCC3uJYamJnwDIu8OhS1WRBI8fQfr9DRzmRua OK3BK9hUiScDZOJB6OqeVzUTfe7MAA4/UwrDtTYQ+PqAenv1PK8DZqwXyxA9ThHb zKO3OwuKOVHJnKvpOcr+eNwo7jbnHlis0oBksj/mrq2P9m2ueF9gzCiq5Ri5Syag Wssb1HUoMGwqU0roS8+pRpNC8YgsWpsttvUWSZ8u6Vj/FLeHpiV3mYXPVMaKRhVm 067BA2uj4Th1JKtGleox+Em0R7OFbCc/9aWC67wiqI6KRyit9pYiF3npph+7D5Eq 7zPsUdDd+qc+UTiLp3liCRp5w6484wWdhZO6wRtmUgxGjNkxFoNnX8CitzF8AaqO UWWemqWuz3lAZuORQ9KX =OQzQ -----END PGP SIGNATURE----- Every security advisory uses the following format: Each security advisory is signed by the PGP key of the Security Officer. 
The public key for the Security Officer can be verified at . The name of the security advisory always begins with FreeBSD-SA- (for FreeBSD Security Advisory), followed by the year in two digit format (14:), followed by the advisory number for that year (04.), followed by the name of the affected application or subsystem (bind). The advisory shown here is the fourth advisory for 2014 and it affects BIND. The Topic field summarizes the vulnerability. The Category refers to the affected part of the system which may be one of core, contrib, or ports. The core category means that the vulnerability affects a core component of the &os; operating system. The contrib category means that the vulnerability affects software included with &os;, such as BIND. The ports category indicates that the vulnerability affects software available through the Ports Collection. The Module field refers to the component location. In this example, the bind module is affected; therefore, this vulnerability affects an application installed with the operating system. The Announced field reflects the date the security advisory was published. This means that the security team has verified that the problem exists and that a patch has been committed to the &os; source code repository. The Credits field gives credit to the individual or organization who noticed the vulnerability and reported it. The Affects field explains which releases of &os; are affected by this vulnerability. The Corrected field indicates the date, time, time offset, and releases that were corrected. The section in parentheses shows each branch for which the fix has been merged, and the version number of the corresponding release from that branch. The release identifier itself includes the version number and, if appropriate, the patch level. The patch level is the letter p followed by a number, indicating the sequence number of the patch, allowing users to track which patches have already been applied to the system. The CVE Name field lists the advisory number, if one exists, in the public cve.mitre.org security vulnerabilities database. The Background field provides a description of the affected module. The Problem Description field explains the vulnerability. This can include information about the flawed code and how the utility could be maliciously used. The Impact field describes what type of impact the problem could have on a system. The Workaround field indicates if a workaround is available to system administrators who cannot immediately patch the system . The Solution field provides the instructions for patching the affected system. This is a step by step tested and verified method for getting a system patched and working securely. The Correction Details field displays each affected Subversion branch with the revision number that contains the corrected code. The References field offers sources of additional information regarding the vulnerability. Process Accounting TomRhodesContributed by Process Accounting Process accounting is a security method in which an administrator may keep track of system resources used and their allocation among users, provide for system monitoring, and minimally track a user's commands. Process accounting has both positive and negative points. One of the positives is that an intrusion may be narrowed down to the point of entry. A negative is the amount of logs generated by process accounting, and the disk space they may require. This section walks an administrator through the basics of process accounting. 
If more fine-grained accounting is needed, refer to . Enabling and Utilizing Process Accounting Before using process accounting, it must be enabled using the following commands: &prompt.root; touch /var/account/acct &prompt.root; chmod 600 /var/account/acct &prompt.root; accton /var/account/acct &prompt.root; echo 'accounting_enable="YES"' >> /etc/rc.conf Once enabled, accounting will begin to track information such as CPU statistics and executed commands. All accounting logs are in a non-human readable format which can be viewed using sa. If issued without any options, sa prints information relating to the number of per-user calls, the total elapsed time in minutes, total CPU and user time in minutes, and the average number of I/O operations. Refer to &man.sa.8; for the list of available options which control the output. To display the commands issued by users, use lastcomm. For example, this command prints out all usage of ls by trhodes on the ttyp1 terminal: &prompt.root; lastcomm ls trhodes ttyp1 Many other useful options exist and are explained in &man.lastcomm.1;, &man.acct.5;, and &man.sa.8;. Resource Limits TomRhodesContributed by Resource limits &os; provides several methods for an administrator to limit the amount of system resources an individual may use. Disk quotas limit the amount of disk space available to users. Quotas are discussed in . quotas limiting users quotas disk quotas Limits to other resources, such as CPU and memory, can be set using either a flat file or a command to configure a resource limits database. The traditional method defines login classes by editing /etc/login.conf. While this method is still supported, any changes require a multi-step process of editing this file, rebuilding the resource database, making necessary changes to /etc/master.passwd, and rebuilding the password database. This can become time consuming, depending upon the number of users to configure. Beginning with &os; 9.0-RELEASE, rctl can be used to provide a more fine-grained method for controlling resource limits. This command supports more than user limits as it can also be used to set resource constraints on processes and jails. This section demonstrates both methods for controlling resources, beginning with the traditional method. Configuring Login Classes limiting users accounts limiting /etc/login.conf In the traditional method, login classes and the resource limits to apply to a login class are defined in /etc/login.conf. Each user account can be assigned to a login class, where default is the default login class. Each login class has a set of login capabilities associated with it. A login capability is a name=value pair, where name is a well-known identifier and value is an arbitrary string which is processed accordingly depending on the name. Whenever /etc/login.conf is edited, the /etc/login.conf.db must be updated by executing the following command: &prompt.root; cap_mkdb /etc/login.conf Resource limits differ from the default login capabilities in two ways. First, for every limit, there is a soft and hard limit. A soft limit may be adjusted by the user or application, but may not be set higher than the hard limit. The hard limit may be lowered by the user, but can only be raised by the superuser. Second, most resource limits apply per process to a specific user. lists the most commonly used resource limits. All of the available resource limits and capabilities are described in detail in &man.login.conf.5;. 
limiting users coredumpsize limiting users cputime limiting users filesize limiting users maxproc limiting users memorylocked limiting users memoryuse limiting users openfiles limiting users sbsize limiting users stacksize Login Class Resource Limits Resource Limit Description coredumpsize The limit on the size of a core file generated by a program is subordinate to other limits on disk usage, such as filesize or disk quotas. This limit is often used as a less severe method of controlling disk space consumption. Since users do not generate core files and often do not delete them, this setting may save them from running out of disk space should a large program crash. cputime The maximum amount of CPU time a user's process may consume. Offending processes will be killed by the kernel. This is a limit on CPU time consumed, not the percentage of the CPU as displayed in some of the fields generated by top and ps. filesize The maximum size of a file the user may own. Unlike disk quotas (), this limit is enforced on individual files, not the set of all files a user owns. maxproc The maximum number of foreground and background processes a user can run. This limit may not be larger than the system limit specified by kern.maxproc. Setting this limit too small may hinder a user's productivity as some tasks, such as compiling a large program, start lots of processes. memorylocked The maximum amount of memory a process may request to be locked into main memory using &man.mlock.2;. Some system-critical programs, such as &man.amd.8;, lock into main memory so that if the system begins to swap, they do not contribute to disk thrashing. memoryuse The maximum amount of memory a process may consume at any given time. It includes both core memory and swap usage. This is not a catch-all limit for restricting memory consumption, but is a good start. openfiles The maximum number of files a process may have open. In &os;, files are used to represent sockets and IPC channels, so be careful not to set this too low. The system-wide limit for this is defined by kern.maxfiles. sbsize The limit on the amount of network memory a user may consume. This can be generally used to limit network communications. stacksize The maximum size of a process stack. This alone is not sufficient to limit the amount of memory a program may use, so it should be used in conjunction with other limits.
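To see which limits are actually in effect for the current login session, the base system provides &man.limits.1;, and Bourne-style shells report the same information through their ulimit built-in. This is only a quick sketch; the values shown depend on the login class and on any adjustments already made by the shell:

&prompt.user; limits
&prompt.user; ulimit -a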
There are a few other things to remember when setting resource limits: Processes started at system startup by /etc/rc are assigned to the daemon login class. Although the default /etc/login.conf is a good source of reasonable values for most limits, they may not be appropriate for every system. Setting a limit too high may open the system up to abuse, while setting it too low may put a strain on productivity. &xorg; takes a lot of resources and encourages users to run more programs simultaneously. Many limits apply to individual processes, not the user as a whole. For example, setting openfiles to 50 means that each process the user runs may open up to 50 files. The total amount of files a user may open is the value of openfiles multiplied by the value of maxproc. This also applies to memory consumption. For further information on resource limits and login classes and capabilities in general, refer to &man.cap.mkdb.1;, &man.getrlimit.2;, and &man.login.conf.5;.
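To sketch how these pieces fit together, a hypothetical limited class could be added to /etc/login.conf. The class name and the values below are examples only and should be tuned for the site; cputime, maxproc, and openfiles are among the limits described in the table above, and tc=default pulls in the remaining settings from the default class:

limited:\
	:cputime=1h30m:\
	:maxproc=50:\
	:openfiles=64:\
	:tc=default:

After saving the file, rebuild the login capability database and assign the class to a user, here the example account trhodes:

&prompt.root; cap_mkdb /etc/login.conf
&prompt.root; pw usermod trhodes -L limited

The new limits take effect at the user's next login.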
Enabling and Configuring Resource Limits By default, kernel support for rctl is not built-in, meaning that the kernel will first need to be recompiled using the instructions in . Add these lines to either GENERIC or a custom kernel configuration file, then rebuild the kernel: options RACCT options RCTL Once the system has rebooted into the new kernel, rctl may be used to set rules for the system. Rule syntax is controlled through the use of a subject, subject-id, resource, and action, as seen in this example rule: user:trhodes:maxproc:deny=10/user In this rule, the subject is user, the subject-id is trhodes, the resource, maxproc, is the maximum number of processes, and the action is deny, which blocks any new processes from being created. This means that the user, trhodes, will be constrained to no greater than 10 processes. Other possible actions include logging to the console, passing a notification to &man.devd.8;, or sending a sigterm to the process. Some care must be taken when adding rules. Since this user is constrained to 10 processes, this example will prevent the user from performing other tasks after logging in and executing a screen session. Once a resource limit has been hit, an error will be printed, as in this example: &prompt.user; man test /usr/bin/man: Cannot fork: Resource temporarily unavailable eval: Cannot fork: Resource temporarily unavailable As another example, a jail can be prevented from exceeding a memory limit. This rule could be written as: &prompt.root; rctl -a jail:httpd:memoryuse:deny=2G/jail Rules will persist across reboots if they have been added to /etc/rctl.conf. The format is a rule, without the preceding command. For example, the previous rule could be added as: # Block jail from using more than 2G memory: jail:httpd:memoryuse:deny=2G/jail To remove a rule, use rctl to remove it from the list: &prompt.root; rctl -r user:trhodes:maxproc:deny=10/user A method for removing all rules is documented in &man.rctl.8;. However, if removing all rules for a single user is required, this command may be issued: &prompt.root; rctl -r user:trhodes Many other resources exist which can be used to exert additional control over various subjects. See &man.rctl.8; to learn about them.
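As a closing sketch, both of the example rules shown above can be made persistent by placing them together in /etc/rctl.conf. The user and jail names are the hypothetical ones used earlier in this section:

# Limit the user trhodes to 10 processes:
user:trhodes:maxproc:deny=10/user
# Prevent the httpd jail from using more than 2G of memory:
jail:httpd:memoryuse:deny=2G/jail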
Index: head/en_US.ISO8859-1/books/handbook/serialcomms/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/serialcomms/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/serialcomms/chapter.xml (revision 46049) @@ -1,2201 +1,2200 @@ Serial Communications Synopsis serial communications &unix; has always had support for serial communications as the very first &unix; machines relied on serial lines for user input and output. Things have changed a lot from the days when the average terminal consisted of a 10-character-per-second serial printer and a keyboard. This chapter covers some of the ways serial communications can be used on &os;. After reading this chapter, you will know: How to connect terminals to a &os; system. How to use a modem to dial out to remote hosts. How to allow remote users to login to a &os; system with a modem. How to boot a &os; system from a serial console. Before reading this chapter, you should: Know how to configure and install a custom kernel. Understand &os; permissions and processes. Have access to the technical manual for the serial hardware to be used with &os;. Serial Terminology and Hardware The following terms are often used in serial communications: bps Bits per Secondbits-per-second (bps) is the rate at which data is transmitted. DTE Data Terminal EquipmentDTE (DTE) is one of two endpoints in a serial communication. An example would be a computer. DCE Data Communications EquipmentDCE (DCE) is the other endpoint in a serial communication. Typically, it is a modem or serial terminal. RS-232 The original standard which defined hardware serial communications. It has since been renamed to TIA-232. When referring to communication data rates, this section does not use the term baud. Baud refers to the number of electrical state transitions made in a period of time, while bps is the correct term to use. To connect a serial terminal to a &os; system, a serial port on the computer and the proper cable to connect to the serial device are needed. Users who are already familiar with serial hardware and cabling can safely skip this section. Serial Cables and Ports There are several different kinds of serial cables. The two most common types are null-modem cables and standard RS-232 cables. The documentation for the hardware should describe the type of cable required. These two types of cables differ in how the wires are connected to the connector. Each wire represents a signal, with the defined signals summarized in . A standard serial cable passes all of the RS-232C signals straight through. For example, the Transmitted Data pin on one end of the cable goes to the Transmitted Data pin on the other end. This is the type of cable used to connect a modem to the &os; system, and is also appropriate for some terminals. A null-modem cable switches the Transmitted Data pin of the connector on one end with the Received Data pin on the other end. The connector can be either a DB-25 or a DB-9. A null-modem cable can be constructed using the pin connections summarized in , , and . While the standard calls for a straight-through pin 1 to pin 1 Protective Ground line, it is often omitted. Some terminals work using only pins 2, 3, and 7, while others require different configurations. When in doubt, refer to the documentation for the hardware.
null-modem cable <acronym>RS-232C</acronym> Signal Names Acronyms Names RD Received Data TD Transmitted Data DTR Data Terminal Ready DSR Data Set Ready DCD Data Carrier Detect SG Signal Ground RTS Request to Send CTS Clear to Send
DB-25 to DB-25 Null-Modem Cable Signal Pin # Pin # Signal SG 7 connects to 7 SG TD 2 connects to 3 RD RD 3 connects to 2 TD RTS 4 connects to 5 CTS CTS 5 connects to 4 RTS DTR 20 connects to 6 DSR DTR 20 connects to 8 DCD DSR 6 connects to 20 DTR DCD 8 connects to 20 DTR
DB-9 to DB-9 Null-Modem Cable Signal Pin # Pin # Signal RD 2 connects to 3 TD TD 3 connects to 2 RD DTR 4 connects to 6 DSR DTR 4 connects to 1 DCD SG 5 connects to 5 SG DSR 6 connects to 4 DTR DCD 1 connects to 4 DTR RTS 7 connects to 8 CTS CTS 8 connects to 7 RTS
DB-9 to DB-25 Null-Modem Cable Signal Pin # Pin # Signal RD 2 connects to 2 TD TD 3 connects to 3 RD DTR 4 connects to 6 DSR DTR 4 connects to 8 DCD SG 5 connects to 7 SG DSR 6 connects to 20 DTR DCD 1 connects to 20 DTR RTS 7 connects to 5 CTS CTS 8 connects to 4 RTS
When one pin at one end connects to a pair of pins at the other end, it is usually implemented with one short wire between the pair of pins in their connector and a long wire to the other single pin. Serial ports are the devices through which data is transferred between the &os; host computer and the terminal. Several kinds of serial ports exist. Before purchasing or constructing a cable, make sure it will fit the ports on the terminal and on the &os; system. Most terminals have DB-25 ports. Personal computers may have DB-25 or DB-9 ports. A multiport serial card may have RJ-12 or RJ-45 ports. See the documentation that accompanied the hardware for specifications on the kind of port or visually verify the type of port. In &os;, each serial port is accessed through an entry in /dev. There are two different kinds of entries: Call-in ports are named /dev/ttyuN where N is the port number, starting from zero. If a terminal is connected to the first serial port (COM1), use /dev/ttyu0 to refer to the terminal. If the terminal is on the second serial port (COM2), use /dev/ttyu1, and so forth. Generally, the call-in port is used for terminals. Call-in ports require that the serial line assert the Data Carrier Detect signal to work correctly. Call-out ports are named /dev/cuauN on &os; versions 10.x and higher and /dev/cuadN on &os; versions 9.x and lower. Call-out ports are usually not used for terminals, but are used for modems. The call-out port can be used if the serial cable or the terminal does not support the Data Carrier Detect signal. &os; also provides initialization devices (/dev/ttyuN.init and /dev/cuauN.init or /dev/cuadN.init) and locking devices (/dev/ttyuN.lock and /dev/cuauN.lock or /dev/cuadN.lock). The initialization devices are used to initialize communications port parameters each time a port is opened, such as crtscts for modems which use RTS/CTS signaling for flow control. The locking devices are used to lock flags on ports to prevent users or programs from changing certain parameters. Refer to &man.termios.4;, &man.sio.4;, and &man.stty.1; for information on terminal settings, locking and initializing devices, and setting terminal options, respectively.
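As a quick check of which of these entries exist on a given system, the device nodes can be listed directly. This is only an illustrative sketch; the exact set of nodes depends on the serial hardware detected at boot, but on a system with one recognized port it typically lists ttyu0 and cuau0 along with their .init and .lock variants:

&prompt.root; ls /dev | grep -E '^(ttyu|cuau)'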
Serial Port Configuration By default, &os; supports four serial ports which are commonly known as COM1, COM2, COM3, and COM4. &os; also supports dumb multi-port serial interface cards, such as the BocaBoard 1008 and 2016, as well as more intelligent multi-port cards such as those made by Digiboard. However, the default kernel only looks for the standard COM ports. To see if the system recognizes the serial ports, look for system boot messages that start with uart: &prompt.root; grep uart /var/run/dmesg.boot If the system does not recognize all of the needed serial ports, additional entries can be added to /boot/device.hints. This file already contains hint.uart.0.* entries for COM1 and hint.uart.1.* entries for COM2. When adding a port entry for COM3 use 0x3E8, and for COM4 use 0x2E8. Common IRQ addresses are 5 for COM3 and 9 for COM4. ttyu cuau To determine the default set of terminal I/O settings used by the port, specify its device name. This example determines the settings for the call-in port on COM2: &prompt.root; stty -a -f /dev/ttyu1 System-wide initialization of serial devices is controlled by /etc/rc.d/serial. This file affects the default settings of serial devices. To change the settings for a device, use stty. By default, the changed settings are in effect until the device is closed and when the device is reopened, it goes back to the default set. To permanently change the default set, open and adjust the settings of the initialization device. For example, to turn on CLOCAL mode, 8 bit communication, and XON/XOFF flow control for ttyu5, type: &prompt.root; stty -f /dev/ttyu5.init clocal cs8 ixon ixoff rc files rc.serial To prevent certain settings from being changed by an application, make adjustments to the locking device. For example, to lock the speed of ttyu5 to 57600 bps, type: &prompt.root; stty -f /dev/ttyu5.lock 57600 Now, any application that opens ttyu5 and tries to change the speed of the port will be stuck with 57600 bps.
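Returning to the hint entries mentioned above, a minimal sketch of the lines that could be appended to /boot/device.hints for COM3 follows. The port address and IRQ mirror the values given earlier in this section; uart2 is the assumed unit number for COM3, and the values should be checked against the actual hardware:

# Hypothetical /boot/device.hints additions for COM3 (uart2)
hint.uart.2.at="isa"
hint.uart.2.port="0x3E8"
hint.uart.2.irq="5"

After a reboot, grep uart /var/run/dmesg.boot should show whether the additional port was probed.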
Terminals Sean Kelly Contributed by terminals Terminals provide a convenient and low-cost way to access a &os; system when not at the computer's console or on a connected network. This section describes how to use terminals with &os;. The original &unix; systems did not have consoles. Instead, users logged in and ran programs through terminals that were connected to the computer's serial ports. The ability to establish a login session on a serial port still exists in nearly every &unix;-like operating system today, including &os;. By using a terminal attached to an unused serial port, a user can log in and run any text program that can normally be run on the console or in an xterm window. Many terminals can be attached to a &os; system. An older spare computer can be used as a terminal wired into a more powerful computer running &os;. This can turn what might otherwise be a single-user computer into a powerful multiple-user system. &os; supports three types of terminals: Dumb terminals Dumb terminals are specialized hardware that connect to computers over serial lines. They are called dumb because they have only enough computational power to display, send, and receive text. No programs can be run on these devices. Instead, dumb terminals connect to a computer that runs the needed programs. There are hundreds of kinds of dumb terminals made by many manufacturers, and just about any kind will work with &os;. Some high-end terminals can even display graphics, but only certain software packages can take advantage of these advanced features. Dumb terminals are popular in work environments where workers do not need access to graphical applications. Computers Acting as Terminals Since a dumb terminal has just enough ability to display, send, and receive text, any spare computer can be a dumb terminal. All that is needed is the proper cable and some terminal emulation software to run on the computer. This configuration can be useful. For example, if one user is busy working at the &os; system's console, another user can do some text-only work at the same time from a less powerful personal computer hooked up as a terminal to the &os; system. There are at least two utilities in the base-system of &os; that can be used to work through a serial connection: &man.cu.1; and &man.tip.1;. For example, to connect from a client system that runs &os; to the serial connection of another system: &prompt.root; cu -l serial-port-device Replace serial-port-device with the device name of the connected serial port. These device files are called /dev/cuauN on &os; versions 10.x and higher and /dev/cuadN on &os; versions 9.x and lower. In either case, N is the serial port number, starting from zero. This means that COM1 is /dev/cuau0 or /dev/cuad0 in &os;. Additional programs are available through the Ports Collection, such as comms/minicom. X Terminals X terminals are the most sophisticated kind of terminal available. Instead of connecting to a serial port, they usually connect to a network like Ethernet. Instead of being relegated to text-only applications, they can display any &xorg; application. This chapter does not cover the setup, configuration, or use of X terminals. Terminal Configuration This section describes how to configure a &os; system to enable a login session on a serial terminal. It assumes that the system recognizes the serial port to which the terminal is connected and that the terminal is connected with the correct cable. 
In &os;, init reads /etc/ttys and starts a getty process on the available terminals. The getty process is responsible for reading a login name and starting the login program. The ports on the &os; system which allow logins are listed in /etc/ttys. For example, the first virtual console, ttyv0, has an entry in this file, allowing logins on the console. This file also contains entries for the other virtual consoles, serial ports, and pseudo-ttys. For a hardwired terminal, the serial port's /dev entry is listed without the /dev part. For example, /dev/ttyv0 is listed as ttyv0. The default /etc/ttys configures support for the first four serial ports, ttyu0 through ttyu3: ttyu0 "/usr/libexec/getty std.9600" dialup off secure ttyu1 "/usr/libexec/getty std.9600" dialup off secure ttyu2 "/usr/libexec/getty std.9600" dialup off secure ttyu3 "/usr/libexec/getty std.9600" dialup off secure When attaching a terminal to one of those ports, modify the default entry to set the required speed and terminal type, to turn the device on and, if needed, to change the port's secure setting. If the terminal is connected to another port, add an entry for the port. configures two terminals in /etc/ttys. The first entry configures a Wyse-50 connected to COM2. The second entry configures an old computer running Procomm terminal software emulating a VT-100 terminal. The computer is connected to the sixth serial port on a multi-port serial card. Configuring Terminal Entries ttyu1 "/usr/libexec/getty std.38400" wy50 on insecure ttyu5 "/usr/libexec/getty std.19200" vt100 on insecure The first field specifies the device name of the serial terminal. The second field tells getty to initialize and open the line, set the line speed, prompt for a user name, and then execute the login program. The optional getty type configures characteristics on the terminal line, like bps rate and parity. The available getty types are listed in /etc/gettytab. In almost all cases, the getty types that start with std will work for hardwired terminals as these entries ignore parity. There is a std entry for each bps rate from 110 to 115200. Refer to &man.gettytab.5; for more information. When setting the getty type, make sure to match the communications settings used by the terminal. For this example, the Wyse-50 uses no parity and connects at 38400 bps. The computer uses no parity and connects at 19200 bps. The third field is the type of terminal. For dial-up ports, unknown or dialup is typically used since users may dial up with practically any type of terminal or software. Since the terminal type does not change for hardwired terminals, a real terminal type from /etc/termcap can be specified. For this example, the Wyse-50 uses the real terminal type while the computer running Procomm is set to emulate a VT-100. The fourth field specifies if the port should be enabled. To enable logins on this port, this field must be set to on. The final field is used to specify whether the port is secure. Marking a port as secure means that it is trusted enough to allow root to login from that port. Insecure ports do not allow root logins. On an insecure port, users must login from unprivileged accounts and then use su or a similar mechanism to gain superuser privileges, as described in . For security reasons, it is recommended to change this setting to insecure. 
After making any changes to /etc/ttys, send a SIGHUP (hangup) signal to the init process to force it to re-read its configuration file: &prompt.root; kill -HUP 1 Since init is always the first process run on a system, it always has a process ID of 1. If everything is set up correctly, all cables are in place, and the terminals are powered up, a getty process should now be running on each terminal and login prompts should be available on each terminal. Troubleshooting the Connection Even with the most meticulous attention to detail, something could still go wrong while setting up a terminal. Here is a list of common symptoms and some suggested fixes. If no login prompt appears, make sure the terminal is plugged in and powered up. If it is a personal computer acting as a terminal, make sure it is running terminal emulation software on the correct serial port. Make sure the cable is connected firmly to both the terminal and the &os; computer. Make sure it is the right kind of cable. Make sure the terminal and &os; agree on the bps rate and parity settings. For a video display terminal, make sure the contrast and brightness controls are turned up. If it is a printing terminal, make sure paper and ink are in good supply. Use ps to make sure that a getty process is running and serving the terminal. For example, the following listing shows that a getty is running on the second serial port, ttyu1, and is using the std.38400 entry in /etc/gettytab: &prompt.root; ps -axww|grep ttyu 22189 d1 Is+ 0:00.03 /usr/libexec/getty std.38400 ttyu1 If no getty process is running, make sure the port is enabled in /etc/ttys. Remember to run kill -HUP 1 after modifying /etc/ttys. If the getty process is running but the terminal still does not display a login prompt, or if it displays a prompt but will not accept typed input, the terminal or cable may not support hardware handshaking. Try changing the entry in /etc/ttys from std.38400 to 3wire.38400, then run kill -HUP 1 after modifying /etc/ttys. The 3wire entry is similar to std, but ignores hardware handshaking. The baud rate may need to be reduced or software flow control enabled when using 3wire to prevent buffer overflows. If garbage appears instead of a login prompt, make sure the terminal and &os; agree on the bps rate and parity settings. Check the getty processes to make sure the correct getty type is in use. If not, edit /etc/ttys and run kill -HUP 1. If characters appear doubled and the password appears when typed, switch the terminal, or the terminal emulation software, from half duplex or local echo to full duplex. Dial-in Service Guy Helmer Contributed by Sean Kelly Additions by dial-in service Configuring a &os; system for dial-in service is similar to configuring terminals, except that modems are used instead of terminal devices. &os; supports both external and internal modems. External modems are more convenient because they often can be configured via parameters stored in non-volatile RAM and they usually provide lighted indicators that display the state of important RS-232 signals, indicating whether the modem is operating properly. Internal modems usually lack non-volatile RAM, so their configuration may be limited to setting DIP switches. If the internal modem has any signal indicator lights, they are difficult to view when the system's cover is in place. modem When using an external modem, a proper cable is needed. A standard RS-232C serial cable should suffice. 
&os; needs the RTS and CTS signals for flow control at speeds above 2400 bps, the CD signal to detect when a call has been answered or the line has been hung up, and the DTR signal to reset the modem after a session is complete. Some cables are wired without all of the needed signals, so if a login session does not go away when the line hangs up, there may be a problem with the cable. Refer to for more information about these signals. Like other &unix;-like operating systems, &os; uses the hardware signals to find out when a call has been answered or a line has been hung up and to hang up and reset the modem after a call. &os; avoids sending commands to the modem or watching for status reports from the modem. &os; supports the NS8250, NS16450, NS16550, and NS16550A-based RS-232C (CCITT V.24) communications interfaces. The 8250 and 16450 devices have single-character buffers. The 16550 device provides a 16-character buffer, which allows for better system performance. Bugs in plain 16550 devices prevent the use of the 16-character buffer, so use 16550A devices if possible. Because single-character-buffer devices require more work by the operating system than the 16-character-buffer devices, 16550A-based serial interface cards are preferred. If the system has many active serial ports or will have a heavy load, 16550A-based cards are better for low-error-rate communications. The rest of this section demonstrates how to configure a modem to receive incoming connections, how to communicate with the modem, and offers some troubleshooting tips. Modem Configuration getty As with terminals, init spawns a getty process for each configured serial port used for dial-in connections. When a user dials the modem's line and the modems connect, the Carrier Detect signal is reported by the modem. The kernel notices that the carrier has been detected and instructs getty to open the port and display a login: prompt at the specified initial line speed. In a typical configuration, if garbage characters are received, usually due to the modem's connection speed being different than the configured speed, getty tries adjusting the line speeds until it receives reasonable characters. After the user enters their login name, getty executes login, which completes the login process by asking for the user's password and then starting the user's shell. /usr/bin/login There are two schools of thought regarding dial-up modems. One configuration method is to set the modems and systems so that no matter at what speed a remote user dials in, the dial-in RS-232 interface runs at a locked speed. The benefit of this configuration is that the remote user always sees a system login prompt immediately. The downside is that the system does not know what a user's true data rate is, so full-screen programs like Emacs will not adjust their screen-painting methods to make their response better for slower connections. The second method is to configure the RS-232 interface to vary its speed based on the remote user's connection speed. Because getty does not understand any particular modem's connection speed reporting, it gives a login: message at an initial speed and watches the characters that come back in response. If the user sees junk, they should press Enter until they see a recognizable prompt. If the data rates do not match, getty sees anything the user types as junk, tries the next speed, and gives the login: prompt again. This procedure normally only takes a keystroke or two before the user sees a good prompt.
This login sequence does not look as clean as the locked-speed method, but a user on a low-speed connection should receive better interactive response from full-screen programs. When locking a modem's data communications rate at a particular speed, no changes to /etc/gettytab should be needed. However, for a matching-speed configuration, additional entries may be required in order to define the speeds to use for the modem. This example configures a 14.4 Kbps modem with a top interface speed of 19.2 Kbps using 8-bit, no parity connections. It configures getty to start the communications rate for a V.32bis connection at 19.2 Kbps, then cycles through 9600 bps, 2400 bps, 1200 bps, 300 bps, and back to 19.2 Kbps. Communications rate cycling is implemented with the nx= (next table) capability. Each line uses a tc= (table continuation) entry to pick up the rest of the settings for a particular data rate. # # Additions for a V.32bis Modem # um|V300|High Speed Modem at 300,8-bit:\ :nx=V19200:tc=std.300: un|V1200|High Speed Modem at 1200,8-bit:\ :nx=V300:tc=std.1200: uo|V2400|High Speed Modem at 2400,8-bit:\ :nx=V1200:tc=std.2400: up|V9600|High Speed Modem at 9600,8-bit:\ :nx=V2400:tc=std.9600: uq|V19200|High Speed Modem at 19200,8-bit:\ :nx=V9600:tc=std.19200: For a 28.8 Kbps modem, or to take advantage of compression on a 14.4 Kbps modem, use a higher communications rate, as seen in this example: # # Additions for a V.32bis or V.34 Modem # Starting at 57.6 Kbps # vm|VH300|Very High Speed Modem at 300,8-bit:\ :nx=VH57600:tc=std.300: vn|VH1200|Very High Speed Modem at 1200,8-bit:\ :nx=VH300:tc=std.1200: vo|VH2400|Very High Speed Modem at 2400,8-bit:\ :nx=VH1200:tc=std.2400: vp|VH9600|Very High Speed Modem at 9600,8-bit:\ :nx=VH2400:tc=std.9600: vq|VH57600|Very High Speed Modem at 57600,8-bit:\ :nx=VH9600:tc=std.57600: For a slow CPU or a heavily loaded system without 16550A-based serial ports, this configuration may produce sio silo errors at 57.6 Kbps. /etc/ttys The configuration of /etc/ttys is similar to , but a different argument is passed to getty and dialup is used for the terminal type. Replace xxx with the process init will run on the device: ttyu0 "/usr/libexec/getty xxx" dialup on The dialup terminal type can be changed. For example, setting vt102 as the default terminal type allows users to use VT102 emulation on their remote systems. For a locked-speed configuration, specify the speed with a valid type listed in /etc/gettytab. This example is for a modem whose port speed is locked at 19.2 Kbps: ttyu0 "/usr/libexec/getty std.19200" dialup on In a matching-speed configuration, the entry needs to reference the appropriate beginning auto-baud entry in /etc/gettytab. To continue the example for a matching-speed modem that starts at 19.2 Kbps, use this entry: ttyu0 "/usr/libexec/getty V19200" dialup on After editing /etc/ttys, wait until the modem is properly configured and connected before signaling init: &prompt.root; kill -HUP 1 rc files rc.serial High-speed modems, like V.32, V.32bis, and V.34 modems, use hardware (RTS/CTS) flow control. Use stty to set the hardware flow control flag for the modem port. This example sets the crtscts flag on COM2's dial-in and dial-out initialization devices: &prompt.root; stty -f /dev/ttyu1.init crtscts &prompt.root; stty -f /dev/cuau1.init crtscts Troubleshooting This section provides a few tips for troubleshooting a dial-up modem that will not connect to a &os; system. Hook up the modem to the &os; system and boot the system. 
If the modem has status indication lights, watch to see whether the modem's DTR indicator lights when the login: prompt appears on the system's console. If it lights up, that should mean that &os; has started a getty process on the appropriate communications port and is waiting for the modem to accept a call. If the DTR indicator does not light, login to the &os; system through the console and type ps ax to see if &os; is running a getty process on the correct port: 114 ?? I 0:00.10 /usr/libexec/getty V19200 ttyu0 If the second column contains a d0 instead of a ?? and the modem has not accepted a call yet, this means that getty has completed its open on the communications port. This could indicate a problem with the cabling or a misconfigured modem because getty should not be able to open the communications port until the carrier detect signal has been asserted by the modem. If no getty processes are waiting to open the port, double-check that the entry for the port is correct in /etc/ttys. Also, check /var/log/messages to see if there are any log messages from init or getty. Next, try dialing into the system. Be sure to use 8 bits, no parity, and 1 stop bit on the remote system. If a prompt does not appear right away, or the prompt shows garbage, try pressing Enter about once per second. If there is still no login: prompt, try sending a BREAK. When using a high-speed modem, try dialing again after locking the dialing modem's interface speed. If there is still no login: prompt, check /etc/gettytab again and double-check that: The initial capability name specified in the entry in /etc/ttys matches the name of a capability in /etc/gettytab. Each nx= entry matches another gettytab capability name. Each tc= entry matches another gettytab capability name. If the modem on the &os; system will not answer, make sure that the modem is configured to answer the phone when DTR is asserted. If the modem seems to be configured correctly, verify that the DTR line is asserted by checking the modem's indicator lights. If it still does not work, try sending an email to the &a.questions; describing the modem and the problem. Dial-out Service dial-out service The following are tips for getting the host to connect over the modem to another computer. This is appropriate for establishing a terminal session with a remote host. This kind of connection can be helpful to get a file on the Internet if there are problems using PPP. If PPP is not working, use the terminal session to FTP the needed file. Then use zmodem to transfer it to the machine. Using a Stock Hayes Modem A generic Hayes dialer is built into tip. Use at=hayes in /etc/remote. The Hayes driver is not smart enough to recognize some of the advanced features of newer modems, such as the BUSY, NO DIALTONE, or CONNECT 115200 messages. Turn those messages off when using tip with ATX0&W. The dial timeout for tip is 60 seconds. The modem should use something less, or else tip will think there is a communication problem. Try ATS7=45&W. Using <literal>AT</literal> Commands /etc/remote Create a direct entry in /etc/remote.
For example, if the modem is hooked up to the first serial port, /dev/cuau0, use the following line: cuau0:dv=/dev/cuau0:br#19200:pa=none Use the highest bps rate the modem supports in the br capability. Then, type tip cuau0 to connect to the modem. Or, use cu as root with the following command: &prompt.root; cu -lline -sspeed line is the serial port, such as /dev/cuau0, and speed is the speed, such as 57600. When finished entering the AT commands, type ~. to exit. The <literal>@</literal> Sign Does Not Work The @ sign in the phone number capability tells tip to look in /etc/phones for a phone number. But, the @ sign is also a special character in capability files like /etc/remote, so it needs to be escaped with a backslash: pn=\@ Dialing from the Command Line Put a generic entry in /etc/remote. For example: tip115200|Dial any phone number at 115200 bps:\ :dv=/dev/cuau0:br#115200:at=hayes:pa=none:du: tip57600|Dial any phone number at 57600 bps:\ :dv=/dev/cuau0:br#57600:at=hayes:pa=none:du: This should now work: &prompt.root; tip -115200 5551234 Users who prefer cu over tip can use a generic cu entry: cu115200|Use cu to dial any number at 115200bps:\ :dv=/dev/cuau1:br#115200:at=hayes:pa=none:du: and type: &prompt.root; cu 5551234 -s 115200 Setting the <acronym>bps</acronym> Rate Put in an entry for tip1200 or cu1200, but go ahead and use whatever bps rate is appropriate with the br capability. tip thinks a good default is 1200 bps which is why it looks for a tip1200 entry. 1200 bps does not have to be used, though. Accessing a Number of Hosts Through a Terminal Server Rather than waiting until connected and typing CONNECT host each time, use tip's cm capability. For example, these entries in /etc/remote will let you type tip pain or tip muffin to connect to the hosts pain or muffin, and tip deep13 to connect to the terminal server. pain|pain.deep13.com|Forrester's machine:\ :cm=CONNECT pain\n:tc=deep13: muffin|muffin.deep13.com|Frank's machine:\ :cm=CONNECT muffin\n:tc=deep13: deep13:Gizmonics Institute terminal server:\ :dv=/dev/cuau2:br#38400:at=hayes:du:pa=none:pn=5551234: Using More Than One Line with <command>tip</command> This is often a problem where a university has several modem lines and several thousand students trying to use them. Make an entry in /etc/remote and use @ for the pn capability: big-university:\ :pn=\@:tc=dialout dialout:\ :dv=/dev/cuau3:br#9600:at=courier:du:pa=none: Then, list the phone numbers in /etc/phones: big-university 5551111 big-university 5551112 big-university 5551113 big-university 5551114 tip will try each number in the listed order, then give up. To keep retrying, run tip in a while loop. Using the Force Character Ctrl P is the default force character, used to tell tip that the next character is literal data. The force character can be set to any other character with the ~s escape, which means set a variable. Type ~sforce=single-char followed by a newline. single-char is any single character. If single-char is left out, then the force character is the null character, which is accessed by typing Ctrl2 or CtrlSpace. A pretty good value for single-char is Shift Ctrl 6, which is only used on some terminal servers. To change the force character, specify the following in ~/.tiprc: force=single-char Upper Case Characters This happens when Ctrl A is pressed, which is tip's raise character, specially designed for people with broken caps-lock keys. Use ~s to set raisechar to something reasonable.
It can be set to be the same as the force character, if neither feature is used. Here is a sample ~/.tiprc for Emacs users who need to type Ctrl 2 and Ctrl A : force=^^ raisechar=^^ The ^^ is ShiftCtrl6 . File Transfers with <command>tip</command> When talking to another &unix;-like operating system, files can be sent and received using ~p (put) and ~t (take). These commands run cat and echo on the remote system to accept and send files. The syntax is: ~p local-file remote-file ~t remote-file local-file There is no error checking, so another protocol, like zmodem, should probably be used. Using <application>zmodem</application> with <command>tip</command>? To receive files, start the sending program on the remote end. Then, type ~C rz to begin receiving them locally. To send files, start the receiving program on the remote end. Then, type ~C sz files to send them to the remote system. Setting Up the Serial Console Kazutaka YOKOTA Contributed by Bill Paul Based on a document by serial console &os; has the ability to boot a system with a dumb terminal on a serial port as a console. This configuration is useful for system administrators who wish to install &os; on machines that have no keyboard or monitor attached, and developers who want to debug the kernel or device drivers. As described in , &os; employs a three stage bootstrap. The first two stages are in the boot block code which is stored at the beginning of the &os; slice on the boot disk. The boot block then loads and runs the boot loader as the third stage code. In order to set up booting from a serial console, the boot block code, the boot loader code, and the kernel need to be configured. Quick Serial Console Configuration This section provides a fast overview of setting up the serial console. This procedure can be used when the dumb terminal is connected to COM1. Configuring a Serial Console on <filename>COM1</filename> Connect the serial cable to COM1 and the controlling terminal. To configure boot messages to display on the serial console, issue the following command as the superuser: &prompt.root; echo 'console="comconsole"' >> /boot/loader.conf Edit /etc/ttys and change off to on and dialup to vt100 for the ttyu0 entry. Otherwise, a password will not be required to connect via the serial console, resulting in a potential security hole. Reboot the system to see if the changes took effect. If a different configuration is required, see the next section for a more in-depth configuration explanation. In-Depth Serial Console Configuration This section provides a more detailed explanation of the steps needed to setup a serial console in &os;. Configuring a Serial Console Prepare a serial cable. null-modem cable Use either a null-modem cable or a standard serial cable and a null-modem adapter. See for a discussion on serial cables. Unplug the keyboard. Many systems probe for the keyboard during the Power-On Self-Test (POST) and will generate an error if the keyboard is not detected. Some machines will refuse to boot until the keyboard is plugged in. If the computer complains about the error, but boots anyway, no further configuration is needed. If the computer refuses to boot without a keyboard attached, configure the BIOS so that it ignores this error. Consult the motherboard's manual for details on how to do this. Try setting the keyboard to Not installed in the BIOS. This setting tells the BIOS not to probe for a keyboard at power-on so it should not complain if the keyboard is absent. 
If that option is not present in the BIOS, look for an Halt on Error option instead. Setting this to All but Keyboard or to No Errors will have the same effect. If the system has a &ps2; mouse, unplug it as well. &ps2; mice share some hardware with the keyboard and leaving the mouse plugged in can fool the keyboard probe into thinking the keyboard is still there. While most systems will boot without a keyboard, quite a few will not boot without a graphics adapter. Some systems can be configured to boot with no graphics adapter by changing the graphics adapter setting in the BIOS configuration to Not installed. Other systems do not support this option and will refuse to boot if there is no display hardware in the system. With these machines, leave some kind of graphics card plugged in, even if it is just a junky mono board. A monitor does not need to be attached. Plug a dumb terminal, an old computer with a modem program, or the serial port on another &unix; box into the serial port. Add the appropriate hint.sio.* entries to /boot/device.hints for the serial port. Some multi-port cards also require kernel configuration options. Refer to &man.sio.4; for the required options and device hints for each supported serial port. Create boot.config in the root directory of the a partition on the boot drive. This file instructs the boot block code how to boot the system. In order to activate the serial console, one or more of the following options are needed. When using multiple options, include them all on the same line: Toggles between the internal and serial consoles. Use this to switch console devices. For instance, to boot from the internal (video) console, use to direct the boot loader and the kernel to use the serial port as its console device. Alternatively, to boot from the serial port, use to tell the boot loader and the kernel to use the video display as the console instead. Toggles between the single and dual console configurations. In the single configuration, the console will be either the internal console (video display) or the serial port, depending on the state of . In the dual console configuration, both the video display and the serial port will become the console at the same time, regardless of the state of . However, the dual console configuration takes effect only while the boot block is running. Once the boot loader gets control, the console specified by becomes the only console. Makes the boot block probe the keyboard. If no keyboard is found, the and options are automatically set. Due to space constraints in the current version of the boot blocks, is capable of detecting extended keyboards only. Keyboards with less than 101 keys and without F11 and F12 keys may not be detected. Keyboards on some laptops may not be properly found because of this limitation. If this is the case, do not use . Use either to select the console automatically or to activate the serial console. Refer to &man.boot.8; and &man.boot.config.5; for more details. The options, except for , are passed to the boot loader. The boot loader will determine whether the internal video or the serial port should become the console by examining the state of . This means that if is specified but is not specified in /boot.config, the serial port can be used as the console only during the boot block as the boot loader will use the internal video display as the console. Boot the machine. When &os; starts, the boot blocks echo the contents of /boot.config to the console. 
For example: /boot.config: -P Keyboard: no The second line appears only if is in /boot.config and indicates the presence or absence of the keyboard. These messages go to either the serial or internal console, or both, depending on the option in /boot.config: Options Message goes to none internal console serial console serial and internal consoles serial and internal consoles , keyboard present internal console , keyboard absent serial console After the message, there will be a small pause before the boot blocks continue loading the boot loader and before any further messages are printed to the console. Under normal circumstances, there is no need to interrupt the boot blocks, but one can do so in order to make sure things are set up correctly. Press any key, other than Enter, at the console to interrupt the boot process. The boot blocks will then prompt for further action: >> FreeBSD/i386 BOOT Default: 0:ad(0,a)/boot/loader boot: Verify that the above message appears on either the serial or internal console, or both, according to the options in /boot.config. If the message appears in the correct console, press Enter to continue the boot process. If there is no prompt on the serial terminal, something is wrong with the settings. Enter then Enter or Return to tell the boot block (and then the boot loader and the kernel) to choose the serial port for the console. Once the system is up, go back and check what went wrong. During the third stage of the boot process, one can still switch between the internal console and the serial console by setting appropriate environment variables in the boot loader. See &man.loader.8; for more information. This line in /boot/loader.conf or /boot/loader.conf.local configures the boot loader and the kernel to send their boot messages to the serial console, regardless of the options in /boot.config: console="comconsole" That line should be the first line of /boot/loader.conf so that boot messages are displayed on the serial console as early as possible. If that line does not exist, or if it is set to console="vidconsole", the boot loader and the kernel will use whichever console is indicated by in the boot block. See &man.loader.conf.5; for more information. At the moment, the boot loader has no option equivalent to in the boot block, and there is no provision to automatically select the internal console and the serial console based on the presence of the keyboard. While it is not required, it is possible to provide a login prompt over the serial line. To configure this, edit the entry for the serial port in /etc/ttys using the instructions in . If the speed of the serial port has been changed, change std.9600 to match the new setting. Setting a Faster Serial Port Speed By default, the serial port settings are 9600 baud, 8 bits, no parity, and 1 stop bit. To change the default console speed, use one of the following options: Edit /etc/make.conf and set BOOT_COMCONSOLE_SPEED to the new console speed. Then, recompile and install the boot blocks and the boot loader: &prompt.root; cd /sys/boot &prompt.root; make clean &prompt.root; make &prompt.root; make install If the serial console is configured in some other way than by booting with , or if the serial console used by the kernel is different from the one used by the boot blocks, add the following option, with the desired speed, to a custom kernel configuration file and compile a new kernel: options CONSPEED=19200 Add the boot option to /boot.config, replacing 19200 with the speed to use. 
Add the following options to /boot/loader.conf. Replace 115200 with the speed to use. boot_multicons="YES" boot_serial="YES" comconsole_speed="115200" console="comconsole,vidconsole" Entering the DDB Debugger from the Serial Line To configure the ability to drop into the kernel debugger from the serial console, add the following options to a custom kernel configuration file and compile the kernel using the instructions in . Note that while this is useful for remote diagnostics, it is also dangerous if a spurious BREAK is generated on the serial port. Refer to &man.ddb.4; and &man.ddb.8; for more information about the kernel debugger. options BREAK_TO_DEBUGGER options DDB
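To try out the serial console from a second machine attached through the null-modem cable, one of the serial communication utilities already covered in this chapter can be used. This is a rough sketch from another &os; box whose first serial port is connected to the cable, assuming the default console settings of 9600 baud, 8 bits, no parity, and 1 stop bit:

&prompt.root; cu -l /dev/cuau0 -s 9600

The boot messages and login prompt of the remote system should appear in the cu session. If the kernel debugger options above have been compiled in, &man.tip.1; and cu document a tilde escape for sending a BREAK, which drops the remote kernel into ddb; type ~. to end the session.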
Index: head/en_US.ISO8859-1/books/handbook/x11/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/x11/chapter.xml (revision 46048) +++ head/en_US.ISO8859-1/books/handbook/x11/chapter.xml (revision 46049) @@ -1,1524 +1,1525 @@ The X Window System Synopsis An installation of &os; using bsdinstall does not automatically install a graphical user interface. This chapter describes how to install and configure &xorg;, which provides the open source X Window System used to provide a graphical environment. It then describes how to find and install a desktop environment or window manager. Users who prefer an installation method that automatically configures the &xorg; and offers a choice of window managers during installation should refer to the pcbsd.org website. For more information on the video hardware that &xorg; supports, refer to the x.org website. After reading this chapter, you will know: The various components of the X Window System, and how they interoperate. How to install and configure &xorg;. How to install and configure several window managers and desktop environments. How to use &truetype; fonts in &xorg;. How to set up your system for graphical logins (XDM). Before reading this chapter, you should: Know how to install additional third-party software as described in . Terminology While it is not necessary to understand all of the details of the various components in the X Window System and how they interact, some basic knowledge of these components can be useful: X server X was designed from the beginning to be network-centric, and adopts a client-server model. In this model, the X server runs on the computer that has the keyboard, monitor, and mouse attached. The server's responsibility includes tasks such as managing the display, handling input from the keyboard and mouse, and handling input or output from other devices such as a tablet or a video projector. This confuses some people, because the X terminology is exactly backward to what they expect. They expect the X server to be the big powerful machine down the hall, and the X client to be the machine on their desk. X client Each X application, such as XTerm or Firefox, is a client. A client sends messages to the server such as Please draw a window at these coordinates, and the server sends back messages such as The user just clicked on the OK button. In a home or small office environment, the X server and the X clients commonly run on the same computer. It is also possible to run the X server on a less powerful computer and to run the X applications on a more powerful system. In this scenario, the communication between the X client and server takes place over the network. window manager X does not dictate what windows should look like on screen, how to move them around with the mouse, which keystrokes should be used to move between windows, what the title bars on each window should look like, whether or not they have close buttons on them, and so on. Instead, X delegates this responsibility to a separate window manager application. There are dozens of window managers available. Each window manager provides a different look and feel: some support virtual desktops, some allow customized keystrokes to manage the desktop, some have a Start button, and some are themeable, allowing a complete change of the desktop's look-and-feel. Window managers are available in the x11-wm category of the Ports Collection. 
Each window manager uses a different configuration mechanism. Some expect configuration file written by hand while others provide graphical tools for most configuration tasks. desktop environment KDE and GNOME are considered to be desktop environments as they include an entire suite of applications for performing common desktop tasks. These may include office suites, web browsers, and games. focus policy The window manager is responsible for the mouse focus policy. This policy provides some means for choosing which window is actively receiving keystrokes and it should also visibly indicate which window is currently active. One focus policy is called click-to-focus. In this model, a window becomes active upon receiving a mouse click. In the focus-follows-mouse policy, the window that is under the mouse pointer has focus and the focus is changed by pointing at another window. If the mouse is over the root window, then this window is focused. In the sloppy-focus model, if the mouse is moved over the root window, the most recently used window still has the focus. With sloppy-focus, focus is only changed when the cursor enters a new window, and not when exiting the current window. In the click-to-focus policy, the active window is selected by mouse click. The window may then be raised and appear in front of all other windows. All keystrokes will now be directed to this window, even if the cursor is moved to another window. Different window managers support different focus models. All of them support click-to-focus, and the majority of them also support other policies. Consult the documentation for the window manager to determine which focus models are available. widgets Widget is a term for all of the items in the user interface that can be clicked or manipulated in some way. This includes buttons, check boxes, radio buttons, icons, and lists. A widget toolkit is a set of widgets used to create graphical applications. There are several popular widget toolkits, including Qt, used by KDE, and GTK+, used by GNOME. As a result, applications will have a different look and feel, depending upon which widget toolkit was used to create the application. Installing <application>&xorg;</application> &xorg; is the implementation of the open source X Window System released by the X.Org Foundation. In &os;, it can be installed as a package or port. The meta-port for the complete distribution which includes X servers, clients, libraries, and fonts is located in x11/xorg. A minimal distribution is located in x11/xorg-minimal, with separate ports available for docs, libraries, and apps. The examples in this section install the complete &xorg; distribution. To build and install &xorg; from the Ports Collection: &prompt.root; cd /usr/ports/x11/xorg &prompt.root; make install clean To build &xorg; in its entirety, be sure to have at least 4 GB of free disk space available. Alternatively, &xorg; can be installed directly from packages with this command: &prompt.root; pkg install xorg Quick Start In most cases, &xorg; is self-configuring. When started without any configuration file, the video card and input devices are automatically detected and used. Autoconfiguration is the preferred method, and should be tried first. Check if HAL is used by the X server: &prompt.user; pkg info xorg-server | grep HAL If the output shows HAL is off, skip to the next step. If HAL is on, enable needed services by adding two entries to /etc/rc.conf. 
Then start the services: hald_enable="YES" dbus_enable="YES" &prompt.root; service hald start ; service dbus start Rename or delete old versions of xorg.conf: &prompt.root; mv /etc/X11/xorg.conf ~/xorg.conf.etc &prompt.root; mv /usr/local/etc/X11/xorg.conf ~/xorg.conf.localetc Start the X system: &prompt.user; startx Test the system by moving the mouse and typing text into the windows. If both mouse and keyboard work as expected, see and . If the mouse or keyboard do not work, continue with . <application>&xorg;</application> Configuration &xorg; &xorg; Those with older or unusual equipment may find it helpful to gather some hardware information before beginning configuration. Monitor sync frequencies Video card chipset Video card memory horizontal sync frequency horizontal scan rate horizontal sync frequency refresh rate vertical sync frequency refresh rate vertical scan rate refresh rate Screen resolution and refresh rate are determined by the monitor's horizontal and vertical sync frequencies. Almost all monitors support electronic autodetection of these values. A few monitors do not provide these values, and the specifications must be determined from the printed manual or manufacturer web site. The video card chipset is also autodetected, and used to select the proper video driver. It is beneficial for the user to be aware of which chipset is installed for when autodetection does not provide the desired result. Video card memory determines the maximum resolution and color depth which can be displayed. Caveats The ability to configure optimal resolution is dependent upon the video hardware and the support provided by its driver. At this time, driver support includes: - Intel: as of &os; 9.3 and &os; 10.1, 3D acceleration on most - Intel graphics, including IronLake, SandyBridge, and - IvyBridge, is supported. Support for switching between X - and virtual consoles is provided by &man.vt.4;. + Intel: as of &os; 9.3 and &os; 10.1, 3D + acceleration on most Intel graphics, including IronLake, + SandyBridge, and IvyBridge, is supported. Support for + switching between X and virtual consoles is provided by + &man.vt.4;. - ATI/Radeon: 2D and 3D acceleration is supported on most - Radeon cards up to the HD6000 series. + ATI/Radeon: 2D and 3D acceleration is supported on + most Radeon cards up to the HD6000 series. NVIDIA: several NVIDIA drivers are available in the - x11 category of the Ports Collection. Install - the driver that matches the video card. + x11 category of the Ports Collection. + Install the driver that matches the video card. Optimus: currently there is no switching support between the two graphics adapters provided by Optimus. - Optimus implementations vary, and &os; will not - be able to drive all versions of the - hardware. Some computers provide a BIOS - option to disable one of the graphics adapters or - select a discrete mode. + Optimus implementations vary, and &os; will not be able to + drive all versions of the hardware. Some computers + provide a BIOS option to disable one of + the graphics adapters or select a + discrete mode. Configuring <application>&xorg;</application> By default, &xorg; uses HAL to autodetect keyboards and mice. 
The sysutils/hal and devel/dbus ports are automatically installed as dependencies of x11/xorg, but must be enabled by adding these entries to /etc/rc.conf: hald_enable="YES" dbus_enable="YES" Start these services before configuring &xorg;: &prompt.root; service hald start &prompt.root; service dbus start Once the services have been started, check whether &xorg; auto-configures itself by typing: &prompt.root; Xorg -configure This will generate a file named /root/xorg.conf.new which attempts to load the proper drivers for the detected hardware. Next, test that the automatically generated configuration file works with the graphics hardware by typing: &prompt.root; Xorg -config xorg.conf.new -retro If a black and grey grid and an X mouse cursor appear, the configuration was successful. To exit the test, switch to the virtual console used to start it by pressing Ctrl Alt Fn (F1 for the first virtual console) and press Ctrl C . The Ctrl Alt Backspace key combination may also be used to break out of &xorg;. To enable it, you can either type the following command from any X terminal emulator: &prompt.user; setxkbmap -option terminate:ctrl_alt_bksp or create a keyboard configuration file for hald called x11-input.fdi and saved in the /usr/local/etc/hal/fdi/policy directory. This file should contain the following lines: <?xml version="1.0" encoding="iso-8859-1"?> <deviceinfo version="0.2"> <device> <match key="info.capabilities" contains="input.keyboard"> <merge key="input.x11_options.XkbOptions" type="string">terminate:ctrl_alt_bksp</merge> </match> </device> </deviceinfo> You will have to reboot your machine to force hald to read this file. The following line will also have to be added to xorg.conf.new, in the ServerLayout or ServerFlags section: Option "DontZap" "off" If the test is unsuccessful, skip ahead to . Once the test is successful, copy the configuration file to /etc/X11/xorg.conf: &prompt.root; cp xorg.conf.new /etc/X11/xorg.conf Desktop environments like GNOME, KDE or Xfce provide graphical tools to set parameters such as video resolution. If the default configuration works, skip to for examples on how to install a desktop environment. Using Fonts in <application>&xorg;</application> Type1 Fonts The default fonts that ship with &xorg; are less than ideal for typical desktop publishing applications. Large presentation fonts show up jagged and unprofessional looking, and small fonts are almost completely unintelligible. However, there are several free, high quality Type1 (&postscript;) fonts available which can be readily used with &xorg;. For instance, the URW font collection (x11-fonts/urwfonts) includes high quality versions of standard type1 fonts (Times Roman, Helvetica, Palatino and others). The Freefonts collection (x11-fonts/freefonts) includes many more fonts, but most of them are intended for use in graphics software such as the Gimp, and are not complete enough to serve as screen fonts. In addition, &xorg; can be configured to use &truetype; fonts with a minimum of effort. For more details on this, see the &man.X.7; manual page or . To install the above Type1 font collections from the Ports Collection, run the following commands: &prompt.root; cd /usr/ports/x11-fonts/urwfonts &prompt.root; make install clean And likewise with the freefont or other collections. 
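When compiling from source is not desired, these font collections are usually also available as binary packages, with package names that normally match the port names. For example, assuming the package carries the same name as the port: &prompt.root; pkg install urwfonts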
To have the X server detect these fonts, add an appropriate line to the X server configuration file (/etc/X11/xorg.conf), which reads: FontPath "/usr/local/lib/X11/fonts/URW/" Alternatively, at the command line in the X session run: &prompt.user; xset fp+ /usr/local/lib/X11/fonts/URW &prompt.user; xset fp rehash This will work but will be lost when the X session is closed, unless it is added to the startup file (~/.xinitrc for a normal startx session, or ~/.xsession when logging in through a graphical login manager like XDM). A third way is to use the new /usr/local/etc/fonts/local.conf file as demonstrated in . &truetype; Fonts TrueType Fonts fonts TrueType &xorg; has built in support for rendering &truetype; fonts. There are two different modules that can enable this functionality. The freetype module is used in this example because it is more consistent with the other font rendering back-ends. To enable the freetype module just add the following line to the "Module" section of the /etc/X11/xorg.conf file. Load "freetype" Now make a directory for the &truetype; fonts (for example, /usr/local/lib/X11/fonts/TrueType) and copy all of the &truetype; fonts into this directory. Keep in mind that &truetype; fonts cannot be directly taken from an - &apple; &mac;; they must be in &unix;/&ms-dos;/&windows; format - for use by &xorg;. Once the + &apple; &mac;; they must be in &unix;/&ms-dos;/&windows; + format for use by &xorg;. Once the files have been copied into this directory, use ttmkfdir to create a fonts.dir file, so that the X font renderer knows that these new files have been installed. ttmkfdir is available from the FreeBSD Ports Collection as x11-fonts/ttmkfdir. &prompt.root; cd /usr/local/lib/X11/fonts/TrueType &prompt.root; ttmkfdir -o fonts.dir Now add the &truetype; directory to the font path. This is just the same as described in : &prompt.user; xset fp+ /usr/local/lib/X11/fonts/TrueType &prompt.user; xset fp rehash or add a FontPath line to the xorg.conf file. That's it. Now Gimp, Apache OpenOffice, and all of the other X applications should now recognize the installed &truetype; fonts. Extremely small fonts (as with text in a high resolution display on a web page) and extremely large fonts (within &staroffice;) will look much better now. Anti-Aliased Fonts anti-aliased fonts fonts anti-aliased All fonts in &xorg; that are found in /usr/local/lib/X11/fonts/ and ~/.fonts/ are automatically made available for anti-aliasing to Xft-aware applications. Most recent applications are Xft-aware, including KDE, GNOME, and Firefox. In order to control which fonts are anti-aliased, or to configure anti-aliasing properties, create (or edit, if it already exists) the file /usr/local/etc/fonts/local.conf. Several advanced features of the Xft font system can be tuned using this file; this section describes only some simple possibilities. For more details, please see &man.fonts-conf.5;. XML This file must be in XML format. Pay careful attention to case, and make sure all tags are properly closed. The file begins with the usual XML header followed by a DOCTYPE definition, and then the <fontconfig> tag: <?xml version="1.0"?> <!DOCTYPE fontconfig SYSTEM "fonts.dtd"> <fontconfig> As previously stated, all fonts in /usr/local/lib/X11/fonts/ as well as ~/.fonts/ are already made available to Xft-aware applications. 
If you wish to add another directory outside of these two directory trees, add a line similar to the following to /usr/local/etc/fonts/local.conf: <dir>/path/to/my/fonts</dir> After adding new fonts, and especially new font directories, you should run the following command to rebuild the font caches: &prompt.root; fc-cache -f Anti-aliasing makes borders slightly fuzzy, which makes very small text more readable and removes staircases from large text, but can cause eyestrain if applied to normal text. To exclude font sizes smaller than 14 point from anti-aliasing, include these lines: <match target="font"> <test name="size" compare="less"> <double>14</double> </test> <edit name="antialias" mode="assign"> <bool>false</bool> </edit> </match> <match target="font"> <test name="pixelsize" compare="less" qual="any"> <double>14</double> </test> <edit mode="assign" name="antialias"> <bool>false</bool> </edit> </match> fonts spacing Spacing for some monospaced fonts may also be inappropriate with anti-aliasing. This seems to be an issue with KDE, in particular. One possible fix for this is to force the spacing for such fonts to be 100. Add the following lines: <match target="pattern" name="family"> <test qual="any" name="family"> <string>fixed</string> </test> <edit name="family" mode="assign"> <string>mono</string> </edit> </match> <match target="pattern" name="family"> <test qual="any" name="family"> <string>console</string> </test> <edit name="family" mode="assign"> <string>mono</string> </edit> </match> (this aliases the other common names for fixed fonts as "mono"), and then add: <match target="pattern" name="family"> <test qual="any" name="family"> <string>mono</string> </test> <edit name="spacing" mode="assign"> <int>100</int> </edit> </match> Certain fonts, such as Helvetica, may have a problem when anti-aliased. Usually this manifests itself as a font that seems cut in half vertically. At worst, it may cause applications to crash. To avoid this, consider adding the following to local.conf: <match target="pattern" name="family"> <test qual="any" name="family"> <string>Helvetica</string> </test> <edit name="family" mode="assign"> <string>sans-serif</string> </edit> </match> Once you have finished editing local.conf make sure you end the file with the </fontconfig> tag. Not doing this will cause your changes to be ignored. Finally, users can add their own settings via their personal .fonts.conf files. To do this, each user should simply create a ~/.fonts.conf. This file must also be in XML format. LCD screen Fonts LCD screen One last point: with an LCD screen, sub-pixel sampling may be desired. This basically treats the (horizontally separated) red, green and blue components separately to improve the horizontal resolution; the results can be dramatic. To enable this, add the line somewhere in the local.conf file: <match target="font"> <test qual="all" name="rgba"> <const>unknown</const> </test> <edit name="rgba" mode="assign"> <const>rgb</const> </edit> </match> Depending on the sort of display, rgb may need to be changed to bgr, vrgb or vbgr: experiment and see which works best. The X Display Manager Seth Kingsley Contributed by X Display Manager &xorg; provides an X Display Manager, XDM, which can be used for login session management. XDM provides a graphical interface for choosing which display server to connect to and for entering authorization information such as a login and password combination. This section demonstrates how to configure the X Display Manager on &os;. 
Some desktop environments provide their own graphical login manager. Refer to for instructions on how to configure the GNOME Display Manager and for instructions on how to configure the KDE Display Manager. Configuring <application>XDM</application> To install XDM, use the x11/xdm package or port. Once installed, XDM can be configured to run when the machine boots up by editing this entry in /etc/ttys: ttyv8 "/usr/local/bin/xdm -nodaemon" xterm off secure Change the off to on and save the edit. The ttyv8 in this entry indicates that XDM will run on the ninth virtual terminal. The XDM configuration directory is located in /usr/local/lib/X11/xdm. This directory contains several files used to change the behavior and appearance of XDM, as well as a few scripts and programs used to set up the desktop when XDM is running. summarizes the function of each of these files. The exact syntax and usage of these files is described in &man.xdm.1;. XDM Configuration Files File Description Xaccess The protocol for connecting to XDM is called the X Display Manager Connection Protocol (XDMCP) This file is a client authorization ruleset for controlling XDMCP connections from remote machines. By default, this file does not allow any remote clients to connect. Xresources This file controls the look and feel of the XDM display chooser and login screens. The default configuration is a simple rectangular login window with the hostname of the machine displayed at the top in a large font and Login: and Password: prompts below. The format of this file is identical to the app-defaults file described in the &xorg; documentation. Xservers The list of local and remote displays the chooser should provide as login choices. Xsession Default session script for logins which is run by XDM after a user has logged in. Normally each user will have a customized session script in ~/.xsession that overrides this script Xsetup_* Script to automatically launch applications before displaying the chooser or login interfaces. There is a script for each display being used, named Xsetup_*, where * is the local display number. Typically these scripts run one or two programs in the background such as xconsole. xdm-config Global configuration for all displays running on this machine. xdm-errors Contains errors generated by the server program. If a display that XDM is trying to start hangs, look at this file for error messages. These messages are also written to the user's ~/.xsession-errors file on a per-session basis. xdm-pid The running process ID of XDM.
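After editing the XDM entry in /etc/ttys as shown above, the change can be activated without a reboot by telling &man.init.8; to re-read the file: &prompt.root; kill -HUP 1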
Configuring Remote Access By default, only users on the same system can login using XDM. To enable users on other systems to connect to the display server, edit the access control rules and enable the connection listener. To configure XDM to listen for any remote connection, comment out the DisplayManager.requestPort line in /usr/local/lib/X11/xdm/xdm-config by putting a ! in front of it: ! SECURITY: do not listen for XDMCP or Chooser requests ! Comment out this line if you want to manage X terminals with xdm DisplayManager.requestPort: 0 Save the edits and restart XDM. To restrict remote access, look at the example entries in /usr/local/lib/X11/xdm/Xaccess and refer to &man.xdm.1; for further information.
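As a sketch of a more restrictive setup, a hypothetical Xaccess entry like the one below would grant a login window only to hosts in one domain; hosts matching no entry should not be granted access, but consult the comments in Xaccess and &man.xdm.1; for the exact pattern syntax: *.example.com # any host in example.com may get a login window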
Desktop Environments Valentino Vaschetto Contributed by This section describes how to install three popular desktop environments on a &os; system. A desktop environment can range from a simple window manager to a complete suite of desktop applications. Over a hundred desktop environments are available in the x11-wm category of the Ports Collection. GNOME GNOME GNOME is a user-friendly desktop environment. It includes a panel for starting applications and displaying status, a desktop, a set of tools and applications, and a set of conventions that make it easy for applications to cooperate and be consistent with each other. More information regarding GNOME on &os; can be found at http://www.FreeBSD.org/gnome. That web site contains additional documentation about installing, configuring, and managing GNOME on &os;. This desktop environment can be installed from a package: &prompt.root; pkg install gnome2 To instead build GNOME from ports, use the following command. GNOME is a large application and will take some time to compile, even on a fast computer. &prompt.root; cd /usr/ports/x11/gnome2 &prompt.root; make install clean For proper operation, GNOME requires the /proc file system to be mounted. Add this line to /etc/fstab to mount this file system automatically during system startup: proc /proc procfs rw 0 0 Once GNOME is installed, configure &xorg; to start GNOME. The easiest way to do this is to enable the GNOME Display Manager, GDM, which is installed as part of the GNOME package or port. It can be enabled by adding this line to /etc/rc.conf: gdm_enable="YES" It is often desirable to also start all GNOME services. To achieve this, add a second line to /etc/rc.conf: gnome_enable="YES" GDM will now start automatically when the system boots. A second method for starting GNOME is to type startx from the command-line after configuring ~/.xinitrc. If this file already exists, replace the line that starts the current window manager with one that starts /usr/local/bin/gnome-session. If this file does not exist, create it with this command: &prompt.user; echo "exec /usr/local/bin/gnome-session" > ~/.xinitrc A third method is to use XDM as the display manager. In this case, create an executable ~/.xsession: &prompt.user; echo "#!/bin/sh" > ~/.xsession &prompt.user; echo "exec /usr/local/bin/gnome-session" >> ~/.xsession &prompt.user; chmod +x ~/.xsession KDE KDE KDE is another easy-to-use desktop environment. This desktop provides a suite of applications with a consistent look and feel, a standardized menu and toolbars, keybindings, color-schemes, internationalization, and a centralized, dialog-driven desktop configuration. More information on KDE can be found at http://www.kde.org/. For &os;-specific information, consult http://freebsd.kde.org. To install the KDE package, type: &prompt.root; pkg install x11/kde4 To instead build the KDE port, use the following command. Installing the port will provide a menu for selecting which components to install. KDE is a large application and will take some time to compile, even on a fast computer. &prompt.root; cd /usr/ports/x11/kde4 &prompt.root; make install clean KDE display manager KDE requires the /proc file system to be mounted. Add this line to /etc/fstab to mount this file system automatically during system startup: proc /proc procfs rw 0 0 The installation of KDE includes the KDE Display Manager, KDM. 
To enable this display manager, add this line to /etc/rc.conf: kdm4_enable="YES" A second method for launching KDE is to type startx from the command line. For this to work, the following line is needed in ~/.xinitrc: exec /usr/local/kde4/bin/startkde A third method for starting KDE is through XDM. To do so, create an executable ~/.xsession as follows: &prompt.user; echo "#!/bin/sh" > ~/.xsession &prompt.user; echo "exec /usr/local/kde4/bin/startkde" >> ~/.xsession &prompt.user; chmod +x ~/.xsession Once KDE is started, refer to its built-in help system for more information on how to use its various menus and applications. Xfce Xfce is a desktop environment based on the GTK+ toolkit used by GNOME. However, it is more lightweight and provides a simple, efficient, easy-to-use desktop. It is fully configurable, has a main panel with menus, applets, and application launchers, provides a file manager and sound manager, and is themeable. Since it is fast, light, and efficient, it is ideal for older or slower machines with memory limitations. More information on Xfce can be found at http://www.xfce.org. To install the Xfce package: &prompt.root; pkg install xfce Alternatively, to build the port: &prompt.root; cd /usr/ports/x11-wm/xfce4 &prompt.root; make install clean Unlike GNOME or KDE, Xfce does not provide its own login manager. In order to start Xfce from the command line by typing startx, first add its entry to ~/.xinitrc: &prompt.user; echo "exec /usr/local/bin/startxfce4" > ~/.xinitrc An alternate method is to use XDM. To configure this method, create an executable ~/.xsession: &prompt.user; echo "#!/bin/sh" > ~/.xsession &prompt.user; echo "exec /usr/local/bin/startxfce4" >> ~/.xsession &prompt.user; chmod +x ~/.xsession Troubleshooting If the mouse does not work, it must be configured before proceeding. See in the &os; install chapter. In recent Xorg versions, the InputDevice sections in xorg.conf are ignored in favor of the autodetected devices. To restore the old behavior, add the following line to the ServerLayout or ServerFlags section of this file: Option "AutoAddDevices" "false" Input devices may then be configured as in previous versions, along with any other options needed (e.g., keyboard layout switching). As previously explained, the hald daemon will, by default, automatically detect the keyboard. The detected keyboard layout or model may not be correct, but desktop environments like GNOME, KDE, or Xfce provide tools to configure the keyboard. However, it is also possible to set the keyboard properties directly, either with the &man.setxkbmap.1; utility or with a hald configuration rule. For example, to use a 102-key PC keyboard with a French layout, create a keyboard configuration file for hald called x11-input.fdi and save it in the /usr/local/etc/hal/fdi/policy directory. This file should contain the following lines: <?xml version="1.0" encoding="iso-8859-1"?> <deviceinfo version="0.2"> <device> <match key="info.capabilities" contains="input.keyboard"> <merge key="input.x11_options.XkbModel" type="string">pc102</merge> <merge key="input.x11_options.XkbLayout" type="string">fr</merge> </match> </device> </deviceinfo> If this file already exists, add the lines regarding the keyboard configuration to the existing file. You will have to reboot your machine to force hald to read this file.
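In some cases, restarting hald before &xorg; is started is enough for the new policy file to be picked up, although a reboot remains the safe option: &prompt.root; service hald restart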
It is possible to do the same configuration from an X terminal or a script with this command line: &prompt.user; setxkbmap -model pc102 -layout fr The /usr/local/share/X11/xkb/rules/base.lst file lists the various keyboard, layouts and options available. &xorg; tuning The xorg.conf.new configuration file may now be tuned to taste. Open the file in a text editor such as &man.emacs.1; or &man.ee.1;. If the monitor is an older or unusual model that does not support autodetection of sync frequencies, those settings can be added to xorg.conf.new under the "Monitor" section: Section "Monitor" Identifier "Monitor0" VendorName "Monitor Vendor" ModelName "Monitor Model" HorizSync 30-107 VertRefresh 48-120 EndSection Most monitors support sync frequency autodetection, making manual entry of these values unnecessary. For the few monitors that do not support autodetection, avoid potential damage by only entering values provided by the manufacturer. X allows DPMS (Energy Star) features to be used with capable monitors. The &man.xset.1; program controls the time-outs and can force standby, suspend, or off modes. If you wish to enable DPMS features for your monitor, you must add the following line to the monitor section: Option "DPMS" xorg.conf While the xorg.conf.new configuration file is still open in an editor, select the default resolution and color depth desired. This is defined in the "Screen" section: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" EndSubSection EndSection The DefaultDepth keyword describes the color depth to run at by default. This can be overridden with the command line switch to &man.Xorg.1;. The Modes keyword describes the resolution to run at for the given color depth. Note that only VESA standard modes are supported as defined by the target system's graphics hardware. In the example above, the default color depth is twenty-four bits per pixel. At this color depth, the accepted resolution is 1024 by 768 pixels. Finally, write the configuration file and test it using the test mode given above. One of the tools available to assist you during troubleshooting process are the &xorg; log files, which contain information on each device that the &xorg; server attaches to. &xorg; log file names are in the format of /var/log/Xorg.0.log. The exact name of the log can vary from Xorg.0.log to Xorg.8.log and so forth. If all is well, the configuration file needs to be installed in a common location where &man.Xorg.1; can find it. This is typically /etc/X11/xorg.conf or /usr/local/etc/X11/xorg.conf. &prompt.root; cp xorg.conf.new /etc/X11/xorg.conf The &xorg; configuration process is now complete. &xorg; may be now started with the &man.startx.1; utility. The &xorg; server may also be started with the use of &man.xdm.1;. Configuration with &intel; <literal>i810</literal> Graphics Chipsets Intel i810 graphic chipset Configuration with &intel; i810 integrated chipsets requires the agpgart AGP programming interface for &xorg; to drive the card. See the &man.agp.4; driver manual page for more information. This will allow configuration of the hardware as any other graphics board. Note on systems without the &man.agp.4; driver compiled in the kernel, trying to load the module with &man.kldload.8; will not work. This driver has to be in the kernel at boot time through being compiled in or using /boot/loader.conf. 
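For example, to load the module at boot time instead of compiling it into the kernel, add this line to /boot/loader.conf: agp_load="YES"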
Adding a Widescreen Flatpanel to the Mix widescreen flatpanel configuration This section assumes a bit of advanced configuration knowledge. If attempts to use the standard configuration tools above have not resulted in a working configuration, the log files contain enough information to get the setup working. Use of a text editor will be necessary. Current widescreen (WSXGA, WSXGA+, WUXGA, WXGA, WXGA+, et al.) formats support 16:10 and 16:9 aspect ratios, which can be problematic. Examples of some common screen resolutions for 16:10 aspect ratios are: 2560x1600 1920x1200 1680x1050 1440x900 1280x800 Often, it is as simple as adding one of these resolutions as a possible Mode in the Section "Screen" as such: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1680x1050" EndSubSection EndSection &xorg; is usually able to pull the resolution information from the widescreen monitor via I2C/DDC, so it knows which frequencies and resolutions the monitor can handle. If those ModeLines do not exist in the drivers, one might need to give &xorg; a little hint. Using /var/log/Xorg.0.log one can extract enough information to manually create a ModeLine that will work. Simply look for information resembling this: (II) MGA(0): Supported additional Video Mode: (II) MGA(0): clock: 146.2 MHz Image Size: 433 x 271 mm (II) MGA(0): h_active: 1680 h_sync: 1784 h_sync_end 1960 h_blank_end 2240 h_border: 0 (II) MGA(0): v_active: 1050 v_sync: 1053 v_sync_end 1059 v_blanking: 1089 v_border: 0 (II) MGA(0): Ranges: V min: 48 V max: 85 Hz, H min: 30 H max: 94 kHz, PixClock max 170 MHz This information is called EDID information. Creating a ModeLine from it is just a matter of putting the numbers in the correct order: ModeLine <name> <clock> <4 horiz. timings> <4 vert. timings> So the ModeLine in Section "Monitor" for this example would look like this: Section "Monitor" Identifier "Monitor1" VendorName "Bigname" ModelName "BestModel" ModeLine "1680x1050" 146.2 1680 1784 1960 2240 1050 1053 1059 1089 Option "DPMS" EndSection After completing these editing steps, X should start on the new widescreen monitor.
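As a rough sanity check before restarting X, the refresh rates implied by a ModeLine can be computed from its last horizontal and vertical timing values. For the example above, the vertical refresh rate is the pixel clock divided by the product of the total timings, 146.2 MHz / (2240 x 1089), or about 59.9 Hz, and the horizontal rate is 146.2 MHz / 2240, or about 65.3 kHz. Both fall within the 48 to 85 Hz and 30 to 94 kHz ranges reported in the log, so the monitor should accept this mode.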