Index: head/en_US.ISO8859-1/books/handbook/audit/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/audit/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/audit/chapter.xml (revision 48529)
@@ -1,769 +1,769 @@
Security Event AuditingTomRhodesWritten by RobertWatsonSynopsisAUDITSecurity Event AuditingMACThe &os; operating system includes support for security
event auditing. Event auditing supports reliable, fine-grained,
and configurable logging of a variety of security-relevant
system events, including logins, configuration changes, and file
and network access. These log records can be invaluable for
live system monitoring, intrusion detection, and postmortem
analysis. &os; implements &sun;'s published Basic Security
Module (BSM) Application Programming
Interface (API) and file format, and is
interoperable with the &solaris; and &macos; X audit
implementations.This chapter focuses on the installation and configuration
of event auditing. It explains audit policies and provides an
example audit configuration.After reading this chapter, you will know:What event auditing is and how it works.How to configure event auditing on &os; for users and
processes.How to review the audit trail using the audit reduction
and review tools.Before reading this chapter, you should:
- Understand &unix; and &os; basics ().
+ Understand &unix; and &os; basics
+ ().Be familiar with the basics of kernel
configuration/compilation ().Have some familiarity with security and how it pertains
to &os; ().The audit facility has some known limitations. Not all
security-relevant system events are auditable and some login
mechanisms, such as Xorg-based
display managers and third-party daemons, do not properly
configure auditing for user login sessions.The security event auditing facility is able to generate
very detailed logs of system activity. On a busy system,
trail file data can be very large when configured for high
detail, exceeding gigabytes per week in some configurations.
Administrators should take into account the disk space
requirements associated with high volume audit configurations.
For example, it may be desirable to dedicate a file system to
/var/audit so that other file systems are
not affected if the audit file system becomes full.Key TermsThe following terms are related to security event
auditing:event: an auditable event is any
event that can be logged using the audit subsystem.
Examples of security-relevant events include the creation of
a file, the building of a network connection, or a user
logging in. Events are either attributable,
meaning that they can be traced to an authenticated user, or
non-attributable. Examples of
non-attributable events are any events that occur before
authentication in the login process, such as bad password
attempts.class: a named set of related
events which are used in selection expressions. Commonly
used classes of events include file creation
(fc), exec (ex), and
login_logout (lo).record: an audit log entry
describing a security event. Records contain a record
event type, information on the subject (user) performing the
action, date and time information, information on any
objects or arguments, and a success or failure
condition.trail: a log file consisting of a
series of audit records describing security events. Trails
are in roughly chronological order with respect to the time
events completed. Only authorized processes are allowed to
commit records to the audit trail.selection expression: a string
containing a list of prefixes and audit event class names
used to match events.preselection: the process by which
the system identifies which events are of interest to the
administrator. The preselection configuration uses a series
of selection expressions to identify which classes of events
to audit for which users, as well as global settings that
apply to both authenticated and unauthenticated
processes.reduction: the process by which
records from existing audit trails are selected for
preservation, printing, or analysis. Likewise, the process
by which undesired audit records are removed from the audit
trail. Using reduction, administrators can implement
policies for the preservation of audit data. For example,
detailed audit trails might be kept for one month, but after
that, trails might be reduced in order to preserve only
login information for archival purposes.Audit ConfigurationUser space support for event auditing is installed as part
of the base &os; operating system. Kernel support is available
in the GENERIC kernel by default,
and &man.auditd.8; can be enabled
by adding the following line to
/etc/rc.conf:auditd_enable="YES"Then, start the audit daemon:&prompt.root; service auditd startUsers who prefer to compile a custom kernel must include the
following line in their custom kernel configuration file:options AUDITEvent Selection ExpressionsSelection expressions are used in a number of places in
the audit configuration to determine which events should be
audited. Expressions contain a list of event classes to
match. Selection expressions are evaluated from left to
right, and two expressions are combined by appending one onto
the other. The following table summarizes the default
audit event classes:
Default Audit Event ClassesClass NameDescriptionActionallallMatch all event classes.aaauthentication and authorizationadadministrativeAdministrative actions performed on the system as
a whole.apapplicationApplication defined action.clfile closeAudit calls to the
close system call.exexecAudit program execution. Auditing of command
line arguments and environment variables is
controlled via &man.audit.control.5; using the
argv and envv
parameters to the policy
setting.fafile attribute accessAudit the access of object attributes such as
&man.stat.1; and &man.pathconf.2;.fcfile createAudit events where a file is created as a
result.fdfile deleteAudit events where file deletion occurs.fmfile attribute modifyAudit events where file attribute modification
occurs, such as by &man.chown.8;, &man.chflags.1;, and
&man.flock.2;.frfile readAudit events in which data is read or files are
opened for reading.fwfile writeAudit events in which data is written or files
are written or modified.ioioctlAudit use of the ioctl
system call.ipipcAudit various forms of Inter-Process
Communication, including POSIX pipes and System V
IPC operations.lologin_logoutAudit &man.login.1; and &man.logout.1;
events.nanon attributableAudit non-attributable events.noinvalid classMatch no audit events.ntnetworkAudit events related to network actions such as
&man.connect.2; and &man.accept.2;.ototherAudit miscellaneous events.pcprocessAudit process operations such as &man.exec.3; and
&man.exit.3;.
These audit event classes may be customized by modifying
the audit_class and
audit_event configuration files.Each audit event class may be combined with a prefix
indicating whether successful/failed operations are matched,
and whether the entry is adding or removing matching for the
class and type. The following table summarizes
the available prefixes:
Prefixes for Audit Event ClassesPrefixAction+Audit successful events in this class.-Audit failed events in this class.^Audit neither successful nor failed events in
this class.^+Do not audit successful events in this
class.^-Do not audit failed events in this class.
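Putting the class names and prefixes together, a flags line in audit_control might look like the following. This is a hypothetical illustration of the syntax, not a recommended policy:

```
# Audit all login/logout and authentication events (success and
# failure), successful file creations, and failed file writes.
flags:lo,aa,+fc,-fw
```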
If no prefix is present, both successful and failed
instances of the event will be audited.The following example selection string selects both
successful and failed login/logout events, but only successful
execution events:lo,+exConfiguration FilesThe following configuration files for security event
auditing are found in
/etc/security:audit_class: contains the
definitions of the audit classes.audit_control: controls aspects
of the audit subsystem, such as default audit classes,
minimum disk space to leave on the audit log volume, and
maximum audit trail size.audit_event: textual names and
descriptions of system audit events and a list of which
classes each event is in.audit_user: user-specific audit
requirements to be combined with the global defaults at
login.audit_warn: a customizable shell
script used by &man.auditd.8; to generate warning messages
in exceptional situations, such as when space for audit
records is running low or when the audit trail file has
been rotated.Audit configuration files should be edited and
maintained carefully, as errors in configuration may result
in improper logging of events.In most cases, administrators will only need to modify
audit_control and
audit_user. The first file controls
system-wide audit properties and policies and the second file
may be used to fine-tune auditing by user.The audit_control FileA number of defaults for the audit subsystem are
specified in audit_control:dir:/var/audit
dist:off
flags:lo,aa
minfree:5
naflags:lo,aa
policy:cnt,argv
filesz:2M
expire-after:10MThe dir entry is used to set one or
more directories where audit logs will be stored. If more
than one directory entry appears, they will be used in order
as they fill. It is common to configure audit so that audit
logs are stored on a dedicated file system, in order to
prevent interference between the audit subsystem and other
subsystems if the file system fills.If the dist field is set to
on or yes, hard links
will be created to all trail files in
/var/audit/dist.The flags field sets the system-wide
default preselection mask for attributable events. In the
example above, successful and failed login/logout events as
well as authentication and authorization are audited for all
users.The entry defines the minimum
percentage of free space for the file system where the audit
trail is stored.The naflags entry specifies audit
classes to be audited for non-attributable events, such as the
login/logout process and authentication and
authorization.The policy entry specifies a
comma-separated list of policy flags controlling various
aspects of audit behavior. The cnt flag
indicates that the system should continue running despite an
auditing failure (this flag is highly recommended). The
other flag, argv, causes command line
arguments to the &man.execve.2; system call to be audited as
part of command execution.The filesz entry specifies the maximum
size for an audit trail before automatically terminating and
rotating the trail file. A value of 0
disables automatic log rotation. If the requested file size
is below the minimum of 512k, it will be ignored and a log
message will be generated.The expire-after field specifies when
audit log files will expire and be removed.The audit_user FileThe administrator can specify further audit requirements
for specific users in audit_user.
Each line configures auditing for a user via two fields:
the alwaysaudit field specifies a set of
events that should always be audited for the user, and the
neveraudit field specifies a set of
events that should never be audited for the user.The following example entries audit login/logout events
and successful command execution for root and file creation and
successful command execution for www. If used with the
default audit_control, the
lo entry for root is redundant, and
login/logout events will also be audited for www.root:lo,+ex:no
www:fc,+ex:noWorking with Audit TrailsSince audit trails are stored in the BSM
binary format, several built-in tools are available to modify or
convert these trails to text. To convert trail files to a
simple text format, use praudit. To reduce
the audit trail file for analysis, archiving, or printing
purposes, use auditreduce. This utility
supports a variety of selection parameters, including event
type, event class, user, date or time of the event, and the file
path or object acted on.For example, to dump the entire contents of a specified
audit log in plain text:&prompt.root; praudit /var/audit/AUDITFILEWhere AUDITFILE is the audit log
to dump.Audit trails consist of a series of audit records made up of
tokens, which praudit prints sequentially,
one per line. Each token is of a specific type, such as
header (an audit record header) or
path (a file path from a name lookup). The
following is an example of an
execve event:header,133,10,execve(2),0,Mon Sep 25 15:58:03 2006, + 384 msec
exec arg,finger,doug
path,/usr/bin/finger
attribute,555,root,wheel,90,24918,104944
subject,robert,root,wheel,root,wheel,38439,38032,42086,128.232.9.100
return,success,0
trailer,133This audit represents a successful
execve call, in which the command
finger doug has been run. The
exec arg token contains the processed command
line presented by the shell to the kernel. The
path token holds the path to the executable
as looked up by the kernel. The attribute
token describes the binary and includes the file mode. The
subject token stores the audit user ID,
effective user ID and group ID, real user ID and group ID,
process ID, session ID, port ID, and login address. Notice that
the audit user ID and real user ID differ as the user
robert switched to the
root account before
running this command, but it is audited using the original
authenticated user. The return token
indicates the successful execution and the
trailer concludes the record.XML output format is also supported and
can be selected by including -x.Since audit logs may be very large, a subset of records can
be selected using auditreduce. This example
selects all audit records produced for the user
trhodes stored in
AUDITFILE:&prompt.root; auditreduce -u trhodes /var/audit/AUDITFILE | prauditMembers of the audit group have permission to
read audit trails in /var/audit. By
default, this group is empty, so only the root user can read audit trails.
Users may be added to the audit group in order to
delegate audit review rights. As the ability to track audit log
contents provides significant insight into the behavior of users
and processes, it is recommended that the delegation of audit
review rights be performed with caution.Live Monitoring Using Audit PipesAudit pipes are cloning pseudo-devices which allow
applications to tap the live audit record stream. This is
primarily of interest to authors of intrusion detection and
system monitoring applications. However, the audit pipe
device is a convenient way for the administrator to allow live
monitoring without running into problems with audit trail file
ownership or log rotation interrupting the event stream. To
track the live audit event stream:&prompt.root; praudit /dev/auditpipeBy default, audit pipe device nodes are accessible only to
the root user. To
make them accessible to the members of the audit group, add a
devfs rule to
/etc/devfs.rules:add path 'auditpipe*' mode 0440 group auditSee &man.devfs.rules.5; for more information on
configuring the devfs file system.It is easy to produce audit event feedback cycles, in
which the viewing of each audit event results in the
generation of more audit events. For example, if all
network I/O is audited, and
praudit is run from an
SSH session, a continuous stream of audit
events will be generated at a high rate, as each event printed
will itself generate another event. For this reason, it is
advisable to run praudit on an audit pipe
device from sessions without fine-grained
I/O auditing.Rotating and Compressing Audit Trail FilesAudit trails are written to by the kernel and
managed by the audit daemon, &man.auditd.8;.
Administrators should not attempt to use
&man.newsyslog.conf.5; or other tools to directly rotate
audit logs. Instead, audit should
be used to shut down auditing, reconfigure the audit system,
and perform log rotation. The following command causes the
audit daemon to create a new audit log and signal the kernel
to switch to using the new log. The old log will be
terminated and renamed, at which point it may then be
manipulated by the administrator:&prompt.root; audit -nIf &man.auditd.8; is not currently running, this command
will fail and an error message will be produced.Adding the following line to
/etc/crontab will schedule this rotation
every twelve hours:0 */12 * * * root /usr/sbin/audit -nThe change will take effect once
/etc/crontab is saved.Automatic rotation of the audit trail file based on file
size is possible using filesz in
audit_control, as described above.As audit trail files can become very large, it is often
desirable to compress or otherwise archive trails once they
have been closed by the audit daemon. The
audit_warn script can be used to perform
customized operations for a variety of audit-related events,
including the clean termination of audit trails when they are
rotated. For example, the following may be added to
/etc/security/audit_warn to compress
audit trails on close:#
# Compress audit trail files on close.
#
if [ "$1" = closefile ]; then
gzip -9 $2
fiOther archiving activities might include copying trail
files to a centralized server, deleting old trail files, or
reducing the audit trail to remove unneeded records. This
script will be run only when audit trail files are cleanly
terminated, so it will not be run on trails left unterminated
following an improper shutdown.
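The archiving activities described above can be sketched as a small helper for audit_warn. The function name and archive directory below are hypothetical examples, not part of the base system:

```shell
#!/bin/sh
# Hypothetical audit_warn helper: on a clean trail close, compress the
# terminated trail and stage a copy for off-host collection.
# The default archive path is an illustrative placeholder.
archive_trail() {
    # $1 = audit_warn event name, $2 = path of the terminated trail
    if [ "$1" = "closefile" ]; then
        gzip -9 "$2"                                    # compress the trail
        cp "${2}.gz" "${ARCHIVE:-/var/audit-archive}/"  # stage the copy
    fi
}
```

A call such as `archive_trail "$1" "$2"` could then be placed in the closefile branch of /etc/security/audit_warn, with further steps (for example, copying to a central log host) added alongside the cp.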
Index: head/en_US.ISO8859-1/books/handbook/boot/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/boot/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/boot/chapter.xml (revision 48529)
@@ -1,906 +1,906 @@
The &os; Booting ProcessSynopsisbootingbootstrapThe process of starting a computer and loading the operating
system is referred to as the bootstrap process,
or booting. &os;'s boot process provides a great
deal of flexibility in customizing what happens when the system
starts, including the ability to select from different operating
systems installed on the same computer, different versions of
the same operating system, or a different installed
kernel.This chapter details the configuration options that can be
set. It demonstrates how to customize the &os; boot process,
including everything that happens until the &os; kernel has
started, probed for devices, and started &man.init.8;. This
occurs when the text color of the boot messages changes from
bright white to grey.After reading this chapter, you will recognize:The components of the &os; bootstrap system and how they
interact.The options that can be passed to the components in the
&os; bootstrap in order to control the boot process.How to configure a customized boot splash screen.The basics of setting device hints.How to boot into single- and multi-user mode and how to
properly shut down a &os; system.This chapter only describes the boot process for &os;
running on x86 and amd64 systems.&os; Boot ProcessTurning on a computer and starting the operating system
poses an interesting dilemma. By definition, the computer does
not know how to do anything until the operating system is
started. This includes running programs from the disk. If the
computer cannot run a program from the disk without the
operating system, and the operating system programs are on the
disk, how is the operating system started?This problem parallels one in the book The
Adventures of Baron Munchausen. A character had
fallen part way down a manhole, and pulled himself out by
grabbing his bootstraps and lifting. In the early days of
computing, the term bootstrap was applied
to the mechanism used to load the operating system. It has
since become shortened to booting.BIOSBasic Input/Output
SystemBIOSOn x86 hardware, the Basic Input/Output System
(BIOS) is responsible for loading the
operating system. The BIOS looks on the hard
disk for the Master Boot Record (MBR), which
must be located in a specific place on the disk. The
BIOS has enough knowledge to load and run the
MBR, and assumes that the
MBR can then carry out the rest of the tasks
involved in loading the operating system, possibly with the help
of the BIOS.&os; provides for booting from both the older
MBR standard, and the newer GUID Partition
Table (GPT). GPT
partitioning is often found on computers with the Unified
Extensible Firmware Interface (UEFI).
However, &os; can boot from GPT partitions
even on machines with only a legacy BIOS
with &man.gptboot.8;. Work is under way to provide direct
UEFI booting.Master Boot Record
(MBR)Boot ManagerBoot LoaderThe code within the MBR is typically
referred to as a boot manager, especially
when it interacts with the user. The boot manager usually has
more code in the first track of the disk or within the file
system. Examples of boot managers include the standard &os;
boot manager boot0, also called
Boot Easy, and
Grub, which is used by many &linux;
distributions.If only one operating system is installed, the
MBR searches for the first bootable (active)
slice on the disk, and then runs the code on that slice to load
the remainder of the operating system. When multiple operating
systems are present, a different boot manager can be installed
to display a list of operating systems so the user
can select one to boot.The remainder of the &os; bootstrap system is divided into
three stages. The first stage knows just enough to get the
computer into a specific state and run the second stage. The
second stage can do a little bit more, before running the third
stage. The third stage finishes the task of loading the
operating system. The work is split into three stages because
the MBR puts limits on the size of the
programs that can be run at stages one and two. Chaining the
tasks together allows &os; to provide a more flexible
loader.kernel&man.init.8;The kernel is then started and begins to probe for devices
and initialize them for use. Once the kernel boot process is
finished, the kernel passes control to the user process
&man.init.8;, which makes sure the disks are in a usable state,
starts the user-level resource configuration which mounts file
systems, sets up network cards to communicate on the network,
and starts the processes which have been configured to run at
startup.This section describes these stages in more detail and
demonstrates how to interact with the &os; boot process.The Boot ManagerBoot ManagerMaster Boot Record
(MBR)The boot manager code in the MBR is
sometimes referred to as stage zero of
the boot process. By default, &os; uses the
boot0 boot manager.The MBR installed by the &os; installer
is based on /boot/boot0. The size and
capability of boot0 is restricted
to 446 bytes due to the slice table and
0x55AA identifier at the end of the
MBR. If boot0
and multiple operating systems are installed, a message
similar to this example will be displayed at boot time:boot0 ScreenshotF1 Win
F2 FreeBSD
Default: F2Other operating systems will overwrite an existing
MBR if they are installed after &os;. If
this happens, or to replace the existing
MBR with the &os; MBR,
use the following command:&prompt.root; fdisk -B -b /boot/boot0 devicewhere device is the boot disk,
such as ad0 for the first
IDE disk, ad2 for the
first IDE disk on a second
IDE controller, or da0
for the first SCSI disk. To create a
custom configuration of the MBR, refer to
&man.boot0cfg.8;.Stage One and Stage TwoConceptually, the first and second stages are part of the
same program on the same area of the disk. Because of space
constraints, they have been split into two, but are always
installed together. They are copied from the combined
/boot/boot by the &os; installer or
bsdlabel.These two stages are located outside file systems, in the
first track of the boot slice, starting with the first sector.
This is where boot0, or any other
boot manager, expects to find a program to run which will
continue the boot process.The first stage, boot1, is very
simple, since it can only be 512 bytes in size. It knows just
enough about the &os; bsdlabel, which
stores information about the slice, to find and execute
boot2.Stage two, boot2, is slightly more
sophisticated, and understands the &os; file system enough to
find files. It can provide a simple interface to choose the
kernel or loader to run. It runs
loader, which is much more
sophisticated and provides a boot configuration file. If the
boot process is interrupted at stage two, the following
interactive screen is displayed:boot2 Screenshot>> FreeBSD/i386 BOOT
Default: 0:ad(0,a)/boot/loader
boot:To replace the installed boot1 and
boot2, use bsdlabel,
where diskslice is the disk and
slice to boot from, such as ad0s1 for the
first slice on the first IDE disk:&prompt.root; bsdlabel -B disksliceIf just the disk name is used, such as
ad0, bsdlabel will
create the disk in dangerously dedicated
mode, without slices. This is probably not the
desired action, so double check the
diskslice before pressing
Return.Stage Threeboot-loaderThe loader is the final stage
of the three-stage bootstrap process. It is located on the
file system, usually as
/boot/loader.The loader is intended as an
interactive method for configuration, using a built-in command
set, backed up by a more powerful interpreter which has a more
complex command set.During initialization, loader
will probe for a console and for disks, and figure out which
disk it is booting from. It will set variables accordingly,
and an interpreter is started where user commands can be
passed from a script or interactively.loaderloader configurationThe loader will then read
/boot/loader.rc, which by default reads
in /boot/defaults/loader.conf which sets
reasonable defaults for variables and reads
/boot/loader.conf for local changes to
those variables. loader.rc then acts on
these variables, loading whichever modules and kernel are
selected.Finally, by default, loader
issues a 10 second wait for key presses, and boots the kernel
if it is not interrupted. If interrupted, the user is
presented with a prompt which understands the command set,
where the user may adjust variables, unload all modules, load
modules, and then finally boot or reboot. The following table lists the most commonly
used loader commands. For a
complete discussion of all available commands, refer to
&man.loader.8;.
Loader Built-In CommandsVariableDescriptionautoboot
secondsProceeds to boot the kernel if not interrupted
within the time span given, in seconds. It displays a
countdown, and the default time span is 10
seconds.boot
-optionskernelnameImmediately proceeds to boot the kernel, with
any specified options or kernel name. Providing a
kernel name on the command-line is only applicable
after an unload has been issued.
Otherwise, the previously-loaded kernel will be
used. If kernelname is not
qualified it will be searched under
/boot/kernel and
/boot/modules.boot-confGoes through the same automatic configuration of
modules based on specified variables, most commonly
kernel. This only makes sense if
unload is used first, before
changing some variables.help
topicShows help messages read from
/boot/loader.help. If the topic
given is index, the list of
available topics is displayed.include filename
…Reads the specified file and interprets it line
by line. An error immediately stops the
include.load -t
typefilenameLoads the kernel, kernel module, or file of the
type given, with the specified filename. Any
arguments after filename
are passed to the file. If
filename is not qualified it
will be searched under
/boot/kernel
and /boot/modules.ls -lpathDisplays a listing of files in the given path, or
the root directory, if the path is not specified. If -l
is specified, file sizes will
also be shown.lsdev -vLists all of the devices from which it may be
possible to load modules. If -v is
specified, more details are printed.lsmod -vDisplays loaded modules. If -v
is specified, more details are shown.more filenameDisplays the files specified, with a pause at
each LINES displayed.rebootImmediately reboots the system.set variable, set
variable=valueSets the specified environment variables.unloadRemoves all loaded modules.
Here are some practical examples of loader usage. To boot
the usual kernel in single-user mode
single-user
mode:boot -sTo unload the usual kernel and modules and then load the
previous or another, specified kernel:unloadload kernel.oldUse kernel.GENERIC to refer to the
default kernel that comes with an installation, or
kernel.old, to refer to the previously
installed kernel before a system upgrade or before configuring
a custom kernel.Use the following to load the usual modules with another
kernel:unloadset kernel="kernel.old"boot-confTo load an automated kernel configuration script:load -t userconfig_script /boot/kernel.confkernelboot interactionLast Stage&man.init.8;Once the kernel is loaded by either
loader or by
boot2, which bypasses
loader, it examines any boot flags
and adjusts its behavior as necessary. The following table lists the commonly used boot flags.
Refer to &man.boot.8; for more information on the other boot
flags.kernelbootflags
Kernel Interaction During BootOptionDescription-aDuring kernel initialization, ask for the device
to mount as the root file system.-CBoot the root file system from a
CDROM.-sBoot into single-user mode.-vBe more verbose during kernel startup.
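These boot-time choices can also be made persistent as loader variables. The following /boot/loader.conf sketch shows example values, not defaults:

```
# /boot/loader.conf -- example entries only
boot_verbose="YES"    # same effect as a verbose boot
autoboot_delay="5"    # shorten the loader countdown to 5 seconds
#boot_single="YES"    # uncomment to always boot into single-user mode
```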
Once the kernel has finished booting, it passes control to
the user process &man.init.8;, which is located at
/sbin/init, or the program path specified
in the init_path variable in
loader. This is the last stage of the boot
process.The boot sequence makes sure that the file systems
available on the system are consistent. If a
UFS file system is not, and
fsck cannot fix the inconsistencies,
init drops the system into
single-user mode so that the system administrator can resolve
the problem directly. Otherwise, the system boots into
multi-user mode.Single-User Modesingle-user modeconsoleA user can specify this mode by booting with
-s or by setting the
boot_single variable in
loader. It can also be reached
by running shutdown now from multi-user
mode. Single-user mode begins with this message:Enter full pathname of shell or RETURN for /bin/sh:If the user presses Enter, the system
will enter the default Bourne shell. To specify a different
shell, input the full path to the shell.Single-user mode is usually used to repair a system that
will not boot due to an inconsistent file system or an error
in a boot configuration file. It can also be used to reset
the root password
when it is unknown. These actions are possible as the
single-user mode prompt gives full, local access to the
system and its configuration files. There is no networking
in this mode.While single-user mode is useful for repairing a system,
it poses a security risk unless the system is in a
physically secure location. By default, any user who can
gain physical access to a system will have full control of
that system after booting into single-user mode.If the system console is changed to
insecure in
/etc/ttys, the system will first prompt
for the root
password before initiating single-user mode. This adds a
measure of security while removing the ability to reset the
root password when
it is unknown.Configuring an Insecure Console in
/etc/ttys# name getty type status comments
#
# If console is marked "insecure", then init will ask for the root password
# when going to single-user mode.
console none unknown off insecureAn insecure console means that
physical security to the console is considered to be
insecure, so only someone who knows the root password may use
single-user mode.Multi-User Modemulti-user modeIf init finds the file
systems to be in order, or once the user has finished their
commands in single-user mode and has typed
exit to leave single-user mode, the
system enters multi-user mode, in which it starts the
resource configuration of the system.rc filesThe resource configuration system reads in configuration
defaults from /etc/defaults/rc.conf and
system-specific details from
/etc/rc.conf. It then proceeds to
mount the system file systems listed in
/etc/fstab. It starts up networking
services, miscellaneous system daemons, then the startup
scripts of locally installed packages.To learn more about the resource configuration system,
refer to &man.rc.8; and examine the scripts located in
/etc/rc.d.
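As a brief illustration, local overrides in /etc/rc.conf use the same variable assignments as the defaults file. The hostname and interface name below are examples only:

```
# /etc/rc.conf -- system-specific overrides of /etc/defaults/rc.conf
hostname="example.local"    # example hostname
ifconfig_em0="DHCP"         # em0 is an example interface name
sshd_enable="YES"           # start the OpenSSH daemon at boot
```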
-
+ -->
- Configuring Boot Time Splash Screens
+ Configuring Boot Time Splash Screens
-
-
-
- Joseph J.
- Barbish
-
- Contributed by
-
-
-
+
+
+
+ Joseph J.
+ Barbish
+
+ Contributed by
+
+
+
Typically when a &os; system boots, it displays its progress
as a series of messages at the console. A boot splash screen
creates an alternate boot screen that hides all of the boot
probe and service startup messages. A few boot loader messages,
including the boot options menu and a timed wait countdown
prompt, are displayed at boot time, even when the splash screen
is enabled. The display of the splash screen can be turned off
by hitting any key on the keyboard during the boot
process.There are two basic environments available in &os;. The
first is the default legacy virtual console command line
environment. After the system finishes booting, a console login
prompt is presented. The second environment is a configured
graphical environment. Refer to for more
information on how to install and configure a graphical display
manager and a graphical login manager.Once the system has booted, the splash screen defaults to
being a screen saver. After a time period of non-use, the
splash screen will display and will cycle through steps of
changing intensity of the image, from bright to very dark and
back again. The configuration of the splash screen saver can be
overridden by adding a saver= line to
/etc/rc.conf. Several built-in screen
savers are available and described in &man.splash.4;. The
saver= option only applies to virtual
consoles and has no effect on graphical display managers.Sample splash screen files can be downloaded from the
gallery at http://artwork.freebsdgr.org.
By installing the sysutils/bsd-splash-changer
package or port, a random splash image from a collection will
display at boot.The splash screen function supports 256-colors in the
bitmap (.bmp), ZSoft
PCX (.pcx), or
TheDraw (.bin) formats. The
.bmp, .pcx, or
.bin image has to be placed on the root
partition, for example in /boot. The
splash image files must have a resolution of 320 by 200 pixels
or less in order to work on standard VGA
adapters. For the default boot display resolution of 256-colors
and 320 by 200 pixels or less, add the following lines to
/boot/loader.conf. Replace
splash.bmp with the name of the
bitmap file to use:splash_bmp_load="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.bmp"To use a PCX file instead of a bitmap
file:splash_pcx_load="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.pcx"To instead use ASCII art in the https://en.wikipedia.org/wiki/TheDraw
format:splash_txt="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.bin"To use larger images that fill the whole display screen, up
to the maximum resolution of 1024 by 768 pixels, the
VESA module must also be loaded during system
boot. If using a custom kernel, ensure that the custom kernel
configuration file includes the VESA kernel
configuration option. To load the VESA
module for the splash screen, add this line to
/boot/loader.conf before the three lines
mentioned in the above examples:vesa_load="YES"Other interesting loader.conf options
include:beastie_disable="YES"This will stop the boot options menu from being
displayed, but the timed wait count down prompt will still
be present. Even with the display of the boot options
menu disabled, entering an option selection at the timed
wait count down prompt will enact the corresponding boot
option.loader_logo="beastie"This will replace the default words
&os;, which are displayed to the right of
the boot options menu, with the colored beastie
logo.For more information, refer to &man.splash.4;,
&man.loader.conf.5;, and &man.vga.4;.Device HintsTomRhodesContributed by device.hintsDuring initial system startup, the boot &man.loader.8; reads
&man.device.hints.5;. This file stores kernel boot information
known as variables, sometimes referred to as
device hints. These device hints
are used by device drivers for device configuration.Device hints may also be specified at the Stage 3 boot
loader prompt, as demonstrated in .
Variables can be added using set, removed
with unset, and viewed with
show. Variables set in
/boot/device.hints can also be overridden.
Device hints entered at the boot loader are not permanent and
will not be applied on the next reboot.Once the system is booted, &man.kenv.1; can be used to dump
all of the variables.The syntax for /boot/device.hints
is one variable per line, using the hash
# as comment markers. Lines are constructed as
follows:hint.driver.unit.keyword="value"The syntax for the Stage 3 boot loader is:set hint.driver.unit.keyword=valuewhere driver is the device driver name,
unit is the device driver unit number, and
keyword is the hint keyword. The keyword may
consist of the following options:at: specifies the bus which the
device is attached to.port: specifies the start address of
the I/O to be used.irq: specifies the interrupt request
number to be used.drq: specifies the DMA channel
number.maddr: specifies the physical memory
address occupied by the device.flags: sets various flag bits for the
device.disabled: if set to
1 the device is disabled.Since device drivers may accept or require more hints not
listed here, viewing a driver's manual page is recommended.
For more information, refer to &man.device.hints.5;,
&man.kenv.1;, &man.loader.conf.5;, and &man.loader.8;.Shutdown Sequence&man.shutdown.8;Upon controlled shutdown using &man.shutdown.8;,
&man.init.8; will attempt to run the script
/etc/rc.shutdown, and then proceed to send
all processes the TERM signal, and
subsequently the KILL signal to any that do
not terminate in a timely manner.To power down a &os; machine on architectures and systems
that support power management, use
shutdown -p now to turn the power off
immediately. To reboot a &os; system, use
shutdown -r now. One must be
root or a member of
operator in order to
run &man.shutdown.8;. One can also use &man.halt.8; and
&man.reboot.8;. Refer to their manual pages and to
&man.shutdown.8; for more information.
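The TERM-then-KILL sequence described above is why long-running daemons install a TERM handler: it gives them a chance to clean up before the unconditional KILL arrives. A minimal sh sketch (the cleanup file name is made up for illustration):

```shell
# A daemon-style child that traps SIGTERM and cleans up before exiting,
# mirroring the polite first signal init(8) sends at shutdown.
sh -c 'trap "echo done > ./demo.cleanup; exit 0" TERM
       while :; do sleep 1; done' &
pid=$!
sleep 1               # give the child time to install its trap
kill -TERM "$pid"     # what a controlled shutdown sends first
wait "$pid" 2>/dev/null
cat ./demo.cleanup    # the handler ran before the process exited
```

A process that ignores TERM, or takes too long in its handler, is the one that ends up receiving the follow-up KILL, which cannot be trapped.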
- Modify group membership by referring to
- .
+ Modify group membership by referring to
+ .Power management requires &man.acpi.4; to be loaded as
a module or statically compiled into a custom kernel.
Index: head/en_US.ISO8859-1/books/handbook/config/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/config/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/config/chapter.xml (revision 48529)
@@ -1,3531 +1,3531 @@
Configuration and TuningChernLeeWritten by MikeSmithBased on a tutorial written by MattDillonAlso based on tuning(7) written by Synopsissystem configurationsystem optimizationOne of the important aspects of &os; is proper system
configuration. This chapter explains much of the &os;
configuration process, including some of the parameters which
can be set to tune a &os; system.After reading this chapter, you will know:The basics of rc.conf configuration
and /usr/local/etc/rc.d startup
scripts.How to configure and test a network card.How to configure virtual hosts on network
devices.How to use the various configuration files in
/etc.How to tune &os; using &man.sysctl.8; variables.How to tune disk performance and modify kernel
limitations.Before reading this chapter, you should:Understand &unix; and &os; basics
().Be familiar with the basics of kernel configuration and
compilation ().Starting ServicesTomRhodesContributed by servicesMany users install third party software on &os; from the
Ports Collection and require the installed services to be
started upon system initialization. Services, such as
mail/postfix or
www/apache22 are just two of the many
software packages which may be started during system
initialization. This section explains the procedures available
for starting third party software.In &os;, most included services, such as &man.cron.8;, are
started through the system start up scripts.Extended Application ConfigurationNow that &os; includes rc.d,
configuration of application startup is easier and provides
more features. Using the key words discussed in
, applications can be set to
start after certain other services and extra flags can be
passed through /etc/rc.conf in place of
hard coded flags in the start up script. A basic script may
look similar to the following:#!/bin/sh
#
# PROVIDE: utility
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
name=utility
rcvar=utility_enable
command="/usr/local/sbin/utility"
load_rc_config $name
#
# DO NOT CHANGE THESE DEFAULT VALUES HERE
# SET THEM IN THE /etc/rc.conf FILE
#
utility_enable=${utility_enable-"NO"}
pidfile=${utility_pidfile-"/var/run/utility.pid"}
run_rc_command "$1"This script will ensure that the provided
utility will be started after the
DAEMON pseudo-service. It also provides a
method for setting and tracking the process ID
(PID).This application could then have the following line placed
in /etc/rc.conf:utility_enable="YES"This method allows for easier manipulation of command
line arguments, inclusion of the default functions provided
in /etc/rc.subr, compatibility with
&man.rcorder.8;, and provides for easier configuration via
rc.conf.Using Services to Start ServicesOther services can be started using &man.inetd.8;.
Working with &man.inetd.8; and its configuration is
described in depth in
.In some cases, it may make more sense to use
&man.cron.8; to start system services. This approach
has a number of advantages as &man.cron.8; runs these
processes as the owner of the &man.crontab.5;. This allows
regular users to start and maintain their own
applications.The @reboot feature of &man.cron.8;
may be used in place of the time specification. This causes
the job to run when &man.cron.8; is started, normally during
system initialization.Configuring &man.cron.8;TomRhodesContributed by cronconfigurationOne of the most useful utilities in &os; is
cron. This utility runs in the
background and regularly checks
/etc/crontab for tasks to execute and
searches /var/cron/tabs for custom crontab
files. These files are used to schedule tasks which
cron runs at the specified times.
Each entry in a crontab defines a task to run and is known as a
cron job.Two different types of configuration files are used: the
system crontab, which should not be modified, and user crontabs,
which can be created and edited as needed. The format used by
these files is documented in &man.crontab.5;. The format of the
system crontab, /etc/crontab includes a
who column which does not exist in user
crontabs. In the system crontab,
cron runs the command as the user
specified in this column. In a user crontab, all commands run
as the user who created the crontab.User crontabs allow individual users to schedule their own
tasks. The root user
can also have a user crontab which can be
used to schedule tasks that do not exist in the system
crontab.Here is a sample entry from the system crontab,
/etc/crontab:# /etc/crontab - root's crontab for FreeBSD
#
# $FreeBSD$
#
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin
#
#minute hour mday month wday who command
#
*/5 * * * * root /usr/libexec/atrun Lines that begin with the # character
are comments. A comment can be placed in the file as a
reminder of what and why a desired action is performed.
Comments cannot be on the same line as a command or else
they will be interpreted as part of the command; they must
be on a new line. Blank lines are ignored.The equals (=) character is used to
define any environment settings. In this example, it is
used to define the SHELL and
PATH. If the SHELL is
omitted, cron will use the
default Bourne shell. If the PATH is
omitted, the full path must be given to the command or
script to run.This line defines the seven fields used in a system
crontab: minute, hour,
mday, month,
wday, who, and
command. The minute
field is the time in minutes when the specified command will
be run, the hour is the hour when the
specified command will be run, the mday
is the day of the month, month is the
month, and wday is the day of the week.
These fields must be numeric values, representing the
twenty-four hour clock, or a *,
representing all values for that field. The
who field only exists in the system
crontab and specifies which user the command should be run
as. The last field is the command to be executed.This entry defines the values for this cron job. The
*/5, followed by several more
* characters, specifies that
/usr/libexec/atrun is invoked by
root every five
minutes of every hour, of every day and day of the week, of
every month.Commands can include any number of switches. However,
commands which extend to multiple lines need to be broken
with the backslash \ continuation
character.Creating a User CrontabTo create a user crontab, invoke
crontab in editor mode:&prompt.user; crontab -eThis will open the user's crontab using the default text
editor. The first time a user runs this command, it will open
an empty file. Once a user creates a crontab, this command
will open that file for editing.It is useful to add these lines to the top of the crontab
file in order to set the environment variables and to remember
the meanings of the fields in the crontab:SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin
# Order of crontab fields
# minute hour mday month wday commandThen add a line for each command or script to run,
specifying the time to run the command. This example runs the
specified custom Bourne shell script every day at two in the
afternoon. Since the path to the script is not specified in
PATH, the full path to the script is
given:0 14 * * * /usr/home/dru/bin/mycustomscript.shBefore using a custom script, make sure it is executable
and test it with the limited set of environment variables
set by cron. To replicate the environment that would be
used to run the above cron entry, use:env -i SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin HOME=/home/dru LOGNAME=dru /usr/home/dru/bin/mycustomscript.shThe environment set by cron is discussed in
&man.crontab.5;. Checking that scripts operate correctly in
a cron environment is especially important if they include
any commands that delete files using wildcards.When finished editing the crontab, save the file. It
will automatically be installed and
cron will read the crontab and run
its cron jobs at their specified times. To list the cron jobs
in a crontab, use this command:&prompt.user; crontab -l
0 14 * * * /usr/home/dru/bin/mycustomscript.shTo remove all of the cron jobs in a user crontab:&prompt.user; crontab -r
remove crontab for dru? yManaging Services in &os;TomRhodesContributed by &os; uses the &man.rc.8; system of startup scripts during
system initialization and for managing services. The scripts
listed in /etc/rc.d provide basic services
which can be controlled with the start,
stop, and restart options to
&man.service.8;. For instance, &man.sshd.8; can be restarted
with the following command:&prompt.root; service sshd restartThis procedure can be used to start services on a running
system. Services will be started automatically at boot time
as specified in &man.rc.conf.5;. For example, to enable
&man.natd.8; at system startup, add the following line to
/etc/rc.conf:natd_enable="YES"If a natd_enable="NO" line is already
present, change the NO to
YES. The &man.rc.8; scripts will
automatically load any dependent services during the next boot,
as described below.Since the &man.rc.8; system is primarily intended to start
and stop services at system startup and shutdown time, the
start, stop, and restart
options will only perform their action
if the appropriate /etc/rc.conf variable
is set. For instance, sshd restart will
only work if sshd_enable is set to
YES in /etc/rc.conf.
To start, stop, or
restart a service regardless of the settings
in /etc/rc.conf, these commands should be
prefixed with one. For instance, to restart
&man.sshd.8; regardless of the current
/etc/rc.conf setting, execute the following
command:&prompt.root; service sshd onerestartTo check if a service is enabled in
/etc/rc.conf, run the appropriate
&man.rc.8; script with rcvar. This example
checks to see if &man.sshd.8; is enabled in
/etc/rc.conf:&prompt.root; service sshd rcvar
# sshd
#
sshd_enable="YES"
# (default: "")The # sshd line is output from the
above command, not a
root console.To determine whether or not a service is running, use
status. For instance, to verify that
&man.sshd.8; is running:&prompt.root; service sshd status
sshd is running as pid 433.In some cases, it is also possible to
reload a service. This attempts to send a
signal to an individual service, forcing the service to reload
its configuration files. In most cases, this means sending
the service a SIGHUP signal. Support for
this feature is not included for every service.The &man.rc.8; system is used for network services and it
also contributes to most of the system initialization. For
instance, when the
/etc/rc.d/bgfsck script is executed, it
prints out the following message:Starting background file system checks in 60 seconds.This script is used for background file system checks,
which occur only during system initialization.Many system services depend on other services to function
properly. For example, &man.yp.8; and other
RPC-based services may fail to start until
after the &man.rpcbind.8; service has started. To resolve this
issue, information about dependencies and other meta-data is
included in the comments at the top of each startup script.
The &man.rcorder.8; program is used to parse these comments
during system initialization to determine the order in which
system services should be invoked to satisfy the
dependencies.The following key word must be included in all startup
scripts as it is required by &man.rc.subr.8; to
enable the startup script:PROVIDE: Specifies the services this
file provides.The following key words may be included at the top of each
startup script. They are not strictly necessary, but are
useful as hints to &man.rcorder.8;:REQUIRE: Lists services which are
required for this service. The script containing this key
word will run after the specified
services.BEFORE: Lists services which depend
on this service. The script containing this key word will
run before the specified
services.By carefully setting these keywords for each startup script,
an administrator has a fine-grained level of control of the
startup order of the scripts, without the need for
runlevels used by some &unix; operating
systems.Additional information can be found in &man.rc.8; and
&man.rc.subr.8;. Refer to this article
for instructions on how to create custom &man.rc.8;
scripts.Managing System-Specific Configurationrc filesrc.confThe principal location for system configuration
information is /etc/rc.conf. This file
contains a wide range of configuration information and it is
read at system startup to configure the system. It provides
the configuration information for the
rc* files.The entries in /etc/rc.conf override
the default settings in
/etc/defaults/rc.conf. The file
containing the default settings should not be edited.
Instead, all system-specific changes should be made to
/etc/rc.conf.A number of strategies may be applied in clustered
applications to separate site-wide configuration from
system-specific configuration in order to reduce
administration overhead. The recommended approach is to place
system-specific configuration into
/etc/rc.conf.local. For example, these
entries in /etc/rc.conf apply to all
systems:sshd_enable="YES"
keyrate="fast"
defaultrouter="10.1.1.254"Whereas these entries in
/etc/rc.conf.local apply to this system
only:hostname="node1.example.org"
ifconfig_fxp0="inet 10.1.1.1/8"Distribute /etc/rc.conf to every
system using an application such as
rsync or
puppet, while
/etc/rc.conf.local remains
unique.Upgrading the system will not overwrite
/etc/rc.conf, so system configuration
information will not be lost.Both /etc/rc.conf and
/etc/rc.conf.local
are parsed by &man.sh.1;. This allows system operators to
create complex configuration scenarios. Refer to
&man.rc.conf.5; for further information on this
topic.Setting Up Network Interface CardsMarcFonvieilleContributed by network cardsconfigurationAdding and configuring a network interface card
(NIC) is a common task for any &os;
administrator.Locating the Correct Drivernetwork cardsdriverFirst, determine the model of the NIC
and the chip it uses. &os; supports a wide variety of
NICs. Check the Hardware Compatibility
List for the &os; release to see if the NIC
is supported.If the NIC is supported, determine
the name of the &os; driver for the NIC.
Refer to /usr/src/sys/conf/NOTES and
/usr/src/sys/arch/conf/NOTES
for the list of NIC drivers with some
information about the supported chipsets. When in doubt, read
the manual page of the driver as it will provide more
information about the supported hardware and any known
limitations of the driver.The drivers for common NICs are already
present in the GENERIC kernel, meaning
the NIC should be probed during boot. The
system's boot messages can be viewed by typing
more /var/run/dmesg.boot and using the
spacebar to scroll through the text. In this example, two
Ethernet NICs using the &man.dc.4; driver
are present on the system:dc0: <82c169 PNIC 10/100BaseTX> port 0xa000-0xa0ff mem 0xd3800000-0xd38
000ff irq 15 at device 11.0 on pci0
miibus0: <MII bus> on dc0
bmtphy0: <BCM5201 10/100baseTX PHY> PHY 1 on miibus0
bmtphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc0: Ethernet address: 00:a0:cc:da:da:da
dc0: [ITHREAD]
dc1: <82c169 PNIC 10/100BaseTX> port 0x9800-0x98ff mem 0xd3000000-0xd30
000ff irq 11 at device 12.0 on pci0
miibus1: <MII bus> on dc1
bmtphy1: <BCM5201 10/100baseTX PHY> PHY 1 on miibus1
bmtphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc1: Ethernet address: 00:a0:cc:da:da:db
dc1: [ITHREAD]If the driver for the NIC is not
present in GENERIC, but a driver is
available, the driver will need to be loaded before the
NIC can be configured and used. This may
be accomplished in one of two ways:The easiest way is to load a kernel module for the
NIC using &man.kldload.8;. To also
automatically load the driver at boot time, add the
appropriate line to
/boot/loader.conf. Not all
NIC drivers are available as
modules.Alternatively, statically compile support for the
NIC into a custom kernel. Refer to
/usr/src/sys/conf/NOTES,
/usr/src/sys/arch/conf/NOTES
and the manual page of the driver to determine which line
to add to the custom kernel configuration file. For more
information about recompiling the kernel, refer to . If the NIC
was detected at boot, the kernel does not need to be
recompiled.Using &windows; NDIS DriversNDISNDISulator&windows; driversµsoft.windows;device driversKLD (kernel loadable
object)Unfortunately, there are still many vendors that do not
provide schematics for their drivers to the open source
community because they regard such information as trade
secrets. Consequently, the developers of &os; and other
operating systems are left with two choices: develop the
drivers through a long and painstaking process of reverse
engineering, or use the existing driver binaries available
for µsoft.windows; platforms.&os; provides native support for the
Network Driver Interface Specification
(NDIS). It includes &man.ndisgen.8;
which can be used to convert a &windowsxp; driver into a
format that can be used on &os;. Because the &man.ndis.4;
driver uses a &windowsxp; binary, it only runs on &i386;
and amd64 systems. PCI, CardBus,
PCMCIA, and USB
devices are supported.To use &man.ndisgen.8;, three things are needed:&os; kernel sources.A &windowsxp; driver binary with a
.SYS extension.A &windowsxp; driver configuration file with a
.INF extension.Download the .SYS and
.INF files for the specific
NIC. Generally, these can be found on
the driver CD or at the vendor's website. The following
examples use W32DRIVER.SYS and
W32DRIVER.INF.The driver bit width must match the version of &os;.
For &os;/i386, use a &windows; 32-bit driver. For
&os;/amd64, a &windows; 64-bit driver is needed.The next step is to compile the driver binary into a
loadable kernel module. As
root, use
&man.ndisgen.8;:&prompt.root; ndisgen /path/to/W32DRIVER.INF /path/to/W32DRIVER.SYSThis command is interactive and prompts for any extra
information it requires. A new kernel module will be
generated in the current directory. Use &man.kldload.8;
to load the new module:&prompt.root; kldload ./W32DRIVER_SYS.koIn addition to the generated kernel module, the
ndis.ko and
if_ndis.ko modules must be loaded.
This should happen automatically when any module that
depends on &man.ndis.4; is loaded. If not, load them
manually, using the following commands:&prompt.root; kldload ndis
&prompt.root; kldload if_ndisThe first command loads the &man.ndis.4; miniport driver
wrapper and the second loads the generated
NIC driver.Check &man.dmesg.8; to see if there were any load
errors. If all went well, the output should be similar to
the following:ndis0: <Wireless-G PCI Adapter> mem 0xf4100000-0xf4101fff irq 3 at device 8.0 on pci1
ndis0: NDIS API version: 5.0
ndis0: Ethernet address: 0a:b1:2c:d3:4e:f5
ndis0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
ndis0: 11g rates: 6Mbps 9Mbps 12Mbps 18Mbps 36Mbps 48Mbps 54MbpsFrom here, ndis0 can be
configured like any other NIC.To configure the system to load the &man.ndis.4; modules
at boot time, copy the generated module,
W32DRIVER_SYS.ko, to
/boot/modules. Then, add the following
line to /boot/loader.conf:W32DRIVER_SYS_load="YES"Configuring the Network Cardnetwork cardsconfigurationOnce the right driver is loaded for the
NIC, the card needs to be configured. It
may have been configured at installation time by
&man.bsdinstall.8;.To display the NIC configuration,
enter the following command:&prompt.user; ifconfig
dc0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80008<VLAN_MTU,LINKSTATE>
ether 00:a0:cc:da:da:da
inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
dc1: flags=8802<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80008<VLAN_MTU,LINKSTATE>
ether 00:a0:cc:da:da:db
inet 10.0.0.1 netmask 0xffffff00 broadcast 10.0.0.255
media: Ethernet 10baseT/UTP
status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
nd6 options=3<PERFORMNUD,ACCEPT_RTADV>In this example, the following devices were
displayed:dc0: The first Ethernet
interface.dc1: The second Ethernet
interface.lo0: The loopback
device.&os; uses the driver name followed by the order in which
the card is detected at boot to name the
NIC. For example,
sis2 is the third
NIC on the system using the &man.sis.4;
driver.In this example, dc0 is up and
running. The key indicators are:UP means that the card is
configured and ready.The card has an Internet (inet)
address, 192.168.1.3.It has a valid subnet mask
(netmask), where
0xffffff00 is the
same as 255.255.255.0.It has a valid broadcast address, 192.168.1.255.The MAC address of the card
(ether) is 00:a0:cc:da:da:da.The physical media selection is on autoselection mode
(media: Ethernet autoselect (100baseTX
<full-duplex>)). In this example,
dc1 is configured to run with
10baseT/UTP media. For more
information on available media types for a driver, refer
to its manual page.The status of the link (status) is
active, indicating that the carrier
signal is detected. For dc1, the
status: no carrier status is normal
when an Ethernet cable is not plugged into the
card.If the &man.ifconfig.8; output had shown something similar
to:dc0: flags=8843<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80008<VLAN_MTU,LINKSTATE>
ether 00:a0:cc:da:da:da
media: Ethernet autoselect (100baseTX <full-duplex>)
status: activeit would indicate the card has not been configured.The card must be configured as
root. The
NIC configuration can be performed from the
command line with &man.ifconfig.8; but will not persist after
a reboot unless the configuration is also added to
/etc/rc.conf. If a
DHCP server is present on your LAN, you
will just have to add the following line:ifconfig_dc0="DHCP"Replace dc0 with the correct value
for the system.
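How the rc scripts consume such an entry can be sketched with an indirect variable lookup in sh. This loosely mirrors what the &man.rc.8; machinery does; dc0, the file name, and the echoed messages are illustrative assumptions, not the real implementation.

```shell
# Sketch: read an ifconfig_<ifn> variable from an rc.conf-style file
# and branch on DHCP versus a static address (names are examples).
cat > rc.conf.example <<'EOF'
ifconfig_dc0="DHCP"
EOF
. ./rc.conf.example
ifn=dc0
eval "cfg=\${ifconfig_${ifn}}"       # indirect lookup by interface name
case $cfg in
[Dd][Hh][Cc][Pp]) echo "${ifn}: would start the DHCP client" ;;
*)                echo "${ifn}: static configuration: $cfg" ;;
esac
```

The same lookup pattern is why one ifconfig_ line per interface in /etc/rc.conf is all the startup scripts need.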
- The line added, then, follow the instructions given in .
+ With this line in place, follow the instructions given in
+ .
entries for the NIC(s) may be already
present. Double check /etc/rc.conf
before adding any lines.In the case, there is no DHCP server,
the NIC(s) have to be configured manually.
Add a line for each NIC present on the
system, as seen in this example:ifconfig_dc0="inet 192.168.1.3 netmask 255.255.255.0"
ifconfig_dc1="inet 10.0.0.1 netmask 255.255.255.0 media 10baseT/UTP"Replace dc0 and
dc1 and the IP
address information with the correct values for the system.
Refer to the man page for the driver, &man.ifconfig.8;, and
&man.rc.conf.5; for more details about the allowed options and
the syntax of /etc/rc.conf.If the network is not using DNS, edit
/etc/hosts to add the names and
IP addresses of the hosts on the
LAN, if they are not already there. For
more information, refer to &man.hosts.5; and to
/usr/share/examples/etc/hosts.If there is no DHCP server and
access to the Internet is needed, manually configure the
default gateway and the nameserver:&prompt.root; echo 'defaultrouter="your_default_router"' >> /etc/rc.conf
&prompt.root; echo 'nameserver your_DNS_server' >> /etc/resolv.confTesting and TroubleshootingOnce the necessary changes to
/etc/rc.conf are saved, a reboot can be
used to test the network configuration and to verify that the
system restarts without any configuration errors.
Alternatively, apply the settings to the networking system
with this command:&prompt.root; service netif restartIf a default gateway has been set in
/etc/rc.conf, also issue this
command:&prompt.root; service routing restartOnce the networking system has been relaunched, test the
NICs.Testing the Ethernet Cardnetwork cardstestingTo verify that an Ethernet card is configured correctly,
&man.ping.8; the interface itself, and then &man.ping.8;
another machine on the LAN:&prompt.user; ping -c5 192.168.1.3
PING 192.168.1.3 (192.168.1.3): 56 data bytes
64 bytes from 192.168.1.3: icmp_seq=0 ttl=64 time=0.082 ms
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.076 ms
64 bytes from 192.168.1.3: icmp_seq=3 ttl=64 time=0.108 ms
64 bytes from 192.168.1.3: icmp_seq=4 ttl=64 time=0.076 ms
--- 192.168.1.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.074/0.083/0.108/0.013 ms&prompt.user; ping -c5 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: icmp_seq=0 ttl=64 time=0.726 ms
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.766 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.700 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.747 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.704 ms
--- 192.168.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.700/0.729/0.766/0.025 msTo test network resolution, use the host name instead
of the IP address. If there is no
DNS server on the network,
/etc/hosts must first be
configured. For this purpose, edit
/etc/hosts to add the names and
IP addresses of the hosts on the
LAN, if they are not already there. For
more information, refer to &man.hosts.5; and to
/usr/share/examples/etc/hosts.Troubleshootingnetwork cardstroubleshootingWhen troubleshooting hardware and software
configurations, check the simple things first. Is the
network cable plugged in? Are the network services properly
configured? Is the firewall configured correctly? Is the
NIC supported by &os;? Before sending
a bug report, always check the Hardware Notes, update the
version of &os; to the latest STABLE version, check the
mailing list archives, and search the Internet.If the card works, yet performance is poor, read
through &man.tuning.7;. Also, check the network
configuration as incorrect network settings can cause slow
connections.Some users experience one or two
device timeout messages, which is
normal for some cards. If they continue, or are bothersome,
determine if the device is conflicting with another device.
Double check the cable connections. Consider trying another
card.To resolve watchdog timeout
errors, first check the network cable. Many cards
require a PCI slot which supports bus
mastering. On some old motherboards, only one
PCI slot allows it, usually slot 0.
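On &os;, &man.pciconf.8; shows which slot and function each device occupies, which can help confirm this. The device and output below are illustrative only:

```
&prompt.root; pciconf -lv
fxp0@pci0:0:12:0:   class=0x020000 card=0x000c8086 chip=0x12298086 rev=0x08 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82557/8/9/0/1 Ethernet Pro 100'
    class      = network
    subclass   = ethernet
```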
Check the NIC and the motherboard
documentation to determine if that may be the
problem.No route to host messages occur
if the system is unable to route a packet to the destination
host. This can happen if no default route is specified or
if a cable is unplugged. Check the output of
netstat -rn and make sure there is a
valid route to the host. If there is not, read
.ping: sendto: Permission denied
error messages are often caused by a misconfigured firewall.
If a firewall is enabled on &os; but no rules have been
defined, the default policy is to deny all traffic, even
&man.ping.8;. Refer to
for more information.Sometimes performance of the card is poor or below
average. In these cases, try setting the media
selection mode from autoselect to the
correct media selection. While this helps in many
cases, it may not resolve every issue. Again,
check all the network settings, and refer to
&man.tuning.7;.Virtual Hostsvirtual hostsIP
aliasesA common use of &os; is virtual site hosting, where one
server appears to the network as many servers. This is achieved
by assigning multiple network addresses to a single
interface.A given network interface has one real
address, and may have any number of alias
addresses. These aliases are normally added by placing alias
entries in /etc/rc.conf, as seen in this
example:ifconfig_fxp0_alias0="inet xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx"Alias entries must start with
alias0 and be numbered
sequentially (alias0,
alias1, and so on). The
configuration process will stop at the first
missing number.The calculation of alias netmasks is important. For a
given interface, there must be one address which correctly
represents the network's netmask. Any other addresses which
fall within this network must have a netmask of all
1s, expressed as either
255.255.255.255 or
0xffffffff.For example, consider the case where the
fxp0 interface is connected to two
networks: 10.1.1.0
with a netmask of
255.255.255.0 and
202.0.75.16 with a
netmask of
255.255.255.240. The
system is to be configured to appear in the ranges
10.1.1.1 through
10.1.1.5 and
202.0.75.17 through
202.0.75.20. Only
the first address in a given network range should have a real
netmask. All the rest
(10.1.1.2 through
10.1.1.5 and
202.0.75.18 through
202.0.75.20) must be
configured with a netmask of
255.255.255.255.The following /etc/rc.conf entries
configure the adapter correctly for this scenario:ifconfig_fxp0="inet 10.1.1.1 netmask 255.255.255.0"
ifconfig_fxp0_alias0="inet 10.1.1.2 netmask 255.255.255.255"
ifconfig_fxp0_alias1="inet 10.1.1.3 netmask 255.255.255.255"
ifconfig_fxp0_alias2="inet 10.1.1.4 netmask 255.255.255.255"
ifconfig_fxp0_alias3="inet 10.1.1.5 netmask 255.255.255.255"
ifconfig_fxp0_alias4="inet 202.0.75.17 netmask 255.255.255.240"
ifconfig_fxp0_alias5="inet 202.0.75.18 netmask 255.255.255.255"
ifconfig_fxp0_alias6="inet 202.0.75.19 netmask 255.255.255.255"
ifconfig_fxp0_alias7="inet 202.0.75.20 netmask 255.255.255.255"A simpler way to express this is with a space-separated list
of IP address ranges. The first address
will be given the
indicated subnet mask and the additional addresses will have a
subnet mask of 255.255.255.255.ifconfig_fxp0_aliases="inet 10.1.1.1-5/24 inet 202.0.75.17-20/28"Configuring System LoggingNiclasZeisingContributed by system loggingsyslog&man.syslogd.8;Generating and reading system logs is an important aspect of
system administration. The information in system logs can be
used to detect hardware and software issues as well as
application and system configuration errors. This information
also plays an important role in security auditing and incident
response. Most system daemons and applications will generate
log entries.&os; provides a system logger,
syslogd, to manage logging. By
default, syslogd is started when the
system boots. This is controlled by the variable
syslogd_enable in
/etc/rc.conf. There are numerous
application arguments that can be set using
syslogd_flags in
/etc/rc.conf. Refer to &man.syslogd.8; for
more information on the available arguments.This section describes how to configure the &os; system
logger for both local and remote logging and how to perform log
rotation and log management.Configuring Local Loggingsyslog.confThe configuration file,
/etc/syslog.conf, controls what
syslogd does with log entries as
they are received. There are several parameters to control
the handling of incoming events. The
facility describes which subsystem
generated the message, such as the kernel or a daemon, and the
level describes the severity of the
event that occurred. This makes it possible to configure if
and where a log message is logged, depending on the facility
and level. It is also possible to take action depending on
the application that sent the message, and in the case of
remote logging, the hostname of the machine generating the
logging event.This configuration file contains one line per action,
where the syntax for each line is a selector field followed by
an action field. The syntax of the selector field is
facility.level which will match log
messages from facility at level
level or higher. It is also
possible to add an optional comparison flag before the level
to specify more precisely what is logged. Multiple selector
fields can be used for the same action, and are separated with
a semicolon (;). Using
* will match everything. The action field
denotes where to send the log message, such as to a file or
remote log host. As an example, here is the default
syslog.conf from &os;:# $&os;$
#
# Spaces ARE valid field separators in this file. However,
# other *nix-like systems still insist on using tabs as field
# separators. If you are sharing this file between systems, you
# may want to use only tabs as field separators here.
# Consult the syslog.conf(5) manpage.
*.err;kern.warning;auth.notice;mail.crit /dev/console
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages
security.* /var/log/security
auth.info;authpriv.info /var/log/auth.log
mail.info /var/log/maillog
lpr.info /var/log/lpd-errs
ftp.info /var/log/xferlog
cron.* /var/log/cron
!-devd
*.=debug /var/log/debug.log
*.emerg *
# uncomment this to log all writes to /dev/console to /var/log/console.log
#console.info /var/log/console.log
# uncomment this to enable logging of all log messages to /var/log/all.log
# touch /var/log/all.log and chmod it to mode 600 before it will work
#*.* /var/log/all.log
# uncomment this to enable logging to a remote loghost named loghost
#*.* @loghost
# uncomment these if you're running inn
# news.crit /var/log/news/news.crit
# news.err /var/log/news/news.err
# news.notice /var/log/news/news.notice
# Uncomment this if you wish to see messages produced by devd
# !devd
# *.>=info
!ppp
*.* /var/log/ppp.log
!*In this example:Line 8 matches all messages with a level of
err or higher, as well as
kern.warning,
auth.notice and
mail.crit, and sends these log messages
to the console
(/dev/console).Line 12 matches all messages from the
mail facility at level
info or above and logs the messages to
/var/log/maillog.Line 17 uses a comparison flag (=)
to only match messages at level debug
and logs them to
/var/log/debug.log.Line 33 is an example usage of a program
specification. This makes the rules following it only
valid for the specified program. In this case, only the
messages generated by ppp are
logged to /var/log/ppp.log.The available levels, in order from most to least
critical are emerg,
alert, crit,
err, warning,
notice, info, and
debug.The facilities, in no particular order, are
auth, authpriv,
console, cron,
daemon, ftp,
kern, lpr,
mail, mark,
news, security,
syslog, user,
uucp, and local0 through
local7. Be aware that other operating
systems might have different facilities.To log everything of level notice and
higher to /var/log/daemon.log, add the
following entry:daemon.notice /var/log/daemon.logFor more information about the different levels and
facilities, refer to &man.syslog.3; and &man.syslogd.8;.
For more information about
/etc/syslog.conf, its syntax, and more
advanced usage examples, see &man.syslog.conf.5;.Log Management and Rotationnewsyslognewsyslog.conflog rotationlog managementLog files can grow quickly, taking up disk space and
making it more difficult to locate useful information. Log
management attempts to mitigate this. In &os;,
newsyslog is used to manage log
files. This built-in program periodically rotates and
compresses log files, and optionally creates missing log files
and signals programs when log files are moved. The log files
may be generated by syslogd or by
any other program which generates log files. While
newsyslog is normally run from
&man.cron.8;, it is not a system daemon. In the default
configuration, it runs every hour.To know which actions to take,
newsyslog reads its configuration
file, /etc/newsyslog.conf. This file
contains one line for each log file that
newsyslog manages. Each line
states the file owner, permissions, when to rotate that file,
optional flags that affect log rotation, such as compression,
and programs to signal when the log is rotated. Here is the
default configuration in &os;:# configuration file for newsyslog
# $FreeBSD$
#
# Entries which do not specify the '/pid_file' field will cause the
# syslogd process to be signalled when that log file is rotated. This
# action is only appropriate for log files which are written to by the
# syslogd process (ie, files listed in /etc/syslog.conf). If there
# is no process which needs to be signalled when a given log file is
# rotated, then the entry for that file should include the 'N' flag.
#
# The 'flags' field is one or more of the letters: BCDGJNUXZ or a '-'.
#
# Note: some sites will want to select more restrictive protections than the
# defaults. In particular, it may be desirable to switch many of the 644
# entries to 640 or 600. For example, some sites will consider the
# contents of maillog, messages, and lpd-errs to be confidential. In the
# future, these defaults may change to more conservative ones.
#
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/all.log 600 7 * @T00 J
/var/log/amd.log 644 7 100 * J
/var/log/auth.log 600 7 100 @0101T JC
/var/log/console.log 600 5 100 * J
/var/log/cron 600 3 100 * JC
/var/log/daily.log 640 7 * @T00 JN
/var/log/debug.log 600 7 100 * JC
/var/log/kerberos.log 600 7 100 * J
/var/log/lpd-errs 644 7 100 * JC
/var/log/maillog 640 7 * @T00 JC
/var/log/messages 644 5 100 @0101T JC
/var/log/monthly.log 640 12 * $M1D0 JN
/var/log/pflog 600 3 100 * JB /var/run/pflogd.pid
/var/log/ppp.log root:network 640 3 100 * JC
/var/log/devd.log 644 3 100 * JC
/var/log/security 600 10 100 * JC
/var/log/sendmail.st 640 10 * 168 B
/var/log/utx.log 644 3 * @01T05 B
/var/log/weekly.log 640 5 1 $W6D0 JN
/var/log/xferlog 600 7 100 * JCEach line starts with the name of the log to be rotated,
optionally followed by an owner and group for both rotated and
newly created files. The mode field sets
the permissions on the log file and count
denotes how many rotated log files should be kept. The
size and when fields
tell newsyslog when to rotate the
file. A log file is rotated when either its size is larger
than the size field or when the time in the
when field has passed. An asterisk
(*) means that this field is ignored. The
flags field gives further
instructions, such as how to compress the rotated file or to
create the log file if it is missing. The last two fields are
optional and specify the name of the Process ID
(PID) file of a process and a signal number
to send to that process when the file is rotated.For more information on all fields, valid flags, and how
to specify the rotation time, refer to &man.newsyslog.conf.5;.
Since newsyslog is run from
&man.cron.8;, it cannot rotate files more often than it is
scheduled to run from &man.cron.8;.Configuring Remote LoggingTomRhodesContributed by Monitoring the log files of multiple hosts can become
unwieldy as the number of systems increases. Configuring
centralized logging can reduce some of the administrative
burden of log file administration.In &os;, centralized log file aggregation, merging, and
rotation can be configured using
syslogd and
newsyslog. This section
demonstrates an example configuration, where host
A, named logserv.example.com, will
collect logging information for the local network. Host
B, named logclient.example.com,
will be configured to pass logging information to the logging
server.Log Server ConfigurationA log server is a system that has been configured to
accept logging information from other hosts. Before
configuring a log server, check the following:If there is a firewall between the logging server
and any logging clients, ensure that the firewall
ruleset allows UDP port 514 for both
the clients and the server.The logging server and all client machines must
have forward and reverse entries in the local
DNS. If the network does not have a
DNS server, create entries in each
system's /etc/hosts. Proper name
resolution is required so that log entries are not
rejected by the logging server.On the log server, edit
/etc/syslog.conf to specify the name of
the client to receive log entries from, the logging facility
to be used, and the name of the log to store the host's log
entries. This example adds the hostname of
B, logs all facilities, and stores
the log entries in
/var/log/logclient.log.Sample Log Server Configuration+logclient.example.com
*.* /var/log/logclient.logWhen adding multiple log clients, add a similar two-line
entry for each client. More information about the available
facilities may be found in &man.syslog.conf.5;.Next, configure
/etc/rc.conf:syslogd_enable="YES"
syslogd_flags="-a logclient.example.com -v -v"The first entry starts
syslogd at system boot. The
second entry allows log entries from the specified client.
The -v -v increases the verbosity of logged
messages. This is useful for tweaking facilities as
administrators are able to see what type of messages are
being logged under each facility.Multiple -a options may be specified to
allow logging from multiple clients. IP
addresses and whole netblocks may also be specified. Refer
to &man.syslogd.8; for a full list of possible
options.Finally, create the log file:&prompt.root; touch /var/log/logclient.logAt this point, syslogd should
be restarted and verified:&prompt.root; service syslogd restart
&prompt.root; pgrep syslogIf a PID is returned, the server
restarted successfully, and client configuration can begin.
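A successful check looks something like this; the PID shown is illustrative:

```
&prompt.root; pgrep syslog
541
```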
If the server did not restart, consult
/var/log/messages for the error.Log Client ConfigurationA logging client sends log entries to a logging server
on the network. The client also keeps a local copy of its
own logs.Once a logging server has been configured, edit
/etc/rc.conf on the logging
client:syslogd_enable="YES"
syslogd_flags="-s -v -v"The first entry enables
syslogd at boot. The second
entry prevents logs from being accepted by this client from
other hosts (-s) and increases the
verbosity of logged messages.Next, define the logging server in the client's
/etc/syslog.conf. In this example, all
logged facilities are sent to a remote system, denoted by
the @ symbol, with the specified
hostname:*.* @logserv.example.comAfter saving the edit, restart
syslogd for the changes to take
effect:&prompt.root; service syslogd restartTo test that log messages are being sent across the
network, use &man.logger.1; on the client to send a message
to syslogd:&prompt.root; logger "Test message from logclient"This message should now exist both in
/var/log/messages on the client and
/var/log/logclient.log on the log
server.Debugging Log ServersIf no messages are being received on the log server, the
cause is most likely a network connectivity issue, a
hostname resolution issue, or a typo in a configuration
file. To isolate the cause, ensure that both the logging
server and the logging client are able to
ping each other using the hostname
specified in their /etc/rc.conf. If
this fails, check the network cabling, the firewall ruleset,
and the hostname entries in the DNS
server or /etc/hosts on both the
logging server and clients. Repeat until the
ping is successful from both
hosts.If the ping succeeds on both hosts
but log messages are still not being received, temporarily
increase logging verbosity to narrow down the configuration
issue. In the following example,
/var/log/logclient.log on the logging
server is empty and /var/log/messages
on the logging client does not indicate a reason for the
failure. To increase debugging output, edit the
syslogd_flags entry on the logging server
and issue a restart:syslogd_flags="-d -a logclient.example.com -v -v"&prompt.root; service syslogd restartDebugging data similar to the following will flash on
the console immediately after the restart:logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart
syslogd: restarted
logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel
Logging to FILE /var/log/messages
syslogd: kernel boot file is /boot/kernel/kernel
cvthname(192.168.1.10)
validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com;
rejected in rule 0 due to name mismatch.In this example, the log messages are being rejected due
to a typo which results in a hostname mismatch. The
client's hostname should be logclient,
not logclien. Fix the typo, issue a
restart, and verify the results:&prompt.root; service syslogd restart
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart
syslogd: restarted
logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel
syslogd: kernel boot file is /boot/kernel/kernel
logmsg: pri 166, flags 17, from logserv.example.com,
msg Dec 10 20:55:02 <syslog.err> logserv.example.com syslogd: exiting on signal 2
cvthname(192.168.1.10)
validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com;
accepted in rule 0.
logmsg: pri 15, flags 0, from logclient.example.com, msg Dec 11 02:01:28 trhodes: Test message 2
Logging to FILE /var/log/logclient.log
Logging to FILE /var/log/messagesAt this point, the messages are being properly received
and placed in the correct file.Security ConsiderationsAs with any network service, security requirements
should be considered before implementing a logging server.
Log files may contain sensitive data about services enabled
on the local host, user accounts, and configuration data.
Network data sent from the client to the server will not be
encrypted or password protected. If a need for encryption
exists, consider using security/stunnel,
which will transmit the logging data over an encrypted
tunnel.Local security is also an issue. Log files are not
encrypted during use or after log rotation. Local users may
access log files to gain additional insight into system
configuration. Setting proper permissions on log files is
critical. The built-in log rotator,
newsyslog, supports setting
permissions on newly created and rotated log files. Setting
log files to mode 600 should prevent
unwanted access by local users. Refer to
&man.newsyslog.conf.5; for additional information.Configuration Files/etc
LayoutThere are a number of directories in which configuration
information is kept. These include:/etcGeneric system-specific configuration
information./etc/defaultsDefault versions of system configuration
files./etc/mailExtra &man.sendmail.8; configuration and other
MTA configuration files./etc/pppConfiguration for both user- and kernel-ppp
programs./etc/namedbDefault location for &man.named.8; data.
Normally named.conf and zone
files are stored here./usr/local/etcConfiguration files for installed applications.
May contain per-application subdirectories./usr/local/etc/rc.d&man.rc.8; scripts for installed
applications./var/dbAutomatically generated system-specific database
files, such as the package database and the
&man.locate.1; database.HostnameshostnameDNS/etc/resolv.confresolv.confHow a &os; system accesses the Internet Domain Name
System (DNS) is controlled by
&man.resolv.conf.5;.The most common entries to
/etc/resolv.conf are:nameserverThe IP address of a name
server the resolver should query. The servers are
queried in the order listed with a maximum of
three.searchSearch list for hostname lookup. This is
normally determined by the domain of the local
hostname.domainThe local domain name.A typical /etc/resolv.conf looks
like this:search example.com
nameserver 147.11.1.11
nameserver 147.11.100.30Only one of the search and
domain options should be used.When using DHCP, &man.dhclient.8;
usually rewrites /etc/resolv.conf
with information received from the DHCP
server./etc/hostshosts/etc/hosts is a simple text
database which works in conjunction with
DNS and
NIS to provide host name to
IP address mappings. Entries for local
computers connected via a LAN can be
added to this file for simplistic naming purposes instead
of setting up a &man.named.8; server. Additionally,
/etc/hosts can be used to provide a
local record of Internet names, reducing the need to query
external DNS servers for commonly
accessed names.# $&os;$
#
#
# Host Database
#
# This file should contain the addresses and aliases for local hosts that
# share this file. Replace 'my.domain' below with the domainname of your
# machine.
#
# In the presence of the domain name service or NIS, this file may
# not be consulted at all; see /etc/nsswitch.conf for the resolution order.
#
#
::1 localhost localhost.my.domain
127.0.0.1 localhost localhost.my.domain
#
# Imaginary network.
#10.0.0.2 myname.my.domain myname
#10.0.0.3 myfriend.my.domain myfriend
#
# According to RFC 1918, you can use the following IP networks for
# private nets which will never be connected to the Internet:
#
# 10.0.0.0 - 10.255.255.255
# 172.16.0.0 - 172.31.255.255
# 192.168.0.0 - 192.168.255.255
#
# In case you want to be able to connect to the Internet, you need
# real official assigned numbers. Do not try to invent your own network
# numbers but instead get one from your network provider (if any) or
# from your regional registry (ARIN, APNIC, LACNIC, RIPE NCC, or AfriNIC.)
#The format of /etc/hosts is as
follows:[Internet address] [official hostname] [alias1] [alias2] ...For example:10.0.0.1 myRealHostname.example.com myRealHostname foobar1 foobar2Consult &man.hosts.5; for more information.Tuning with &man.sysctl.8;sysctltuningwith sysctl&man.sysctl.8; is used to make changes to a running &os;
system. This includes many advanced options of the
TCP/IP stack and virtual memory system
that can dramatically improve performance for an experienced
system administrator. Over five hundred system variables can
be read and set using &man.sysctl.8;.At its core, &man.sysctl.8; serves two functions: to read
and to modify system settings.To view all readable variables:&prompt.user; sysctl -aTo read a particular variable, specify its name:&prompt.user; sysctl kern.maxproc
kern.maxproc: 1044To set a particular variable, use the
variable=value
syntax:&prompt.root; sysctl kern.maxfiles=5000
kern.maxfiles: 2088 -> 5000Settings of sysctl variables are usually either strings,
numbers, or booleans, where a boolean is 1
for yes or 0 for no.To automatically set some variables each time the machine
boots, add them to /etc/sysctl.conf. For
more information, refer to &man.sysctl.conf.5; and
.sysctl.confsysctl.confsysctlThe configuration file for &man.sysctl.8;,
/etc/sysctl.conf, looks much like
/etc/rc.conf. Values are set in a
variable=value form. The specified values
are set after the system goes into multi-user mode. Not all
variables are settable in this mode.For example, to turn off logging of fatal signal exits
and prevent users from seeing processes started by other
users, the following tunables can be set in
/etc/sysctl.conf:# Do not log fatal signal exits (e.g., sig 11)
kern.logsigexit=0
# Prevent users from seeing information about processes that
# are being run under another UID.
security.bsd.see_other_uids=0&man.sysctl.8; Read-onlyTomRhodesContributed by In some cases it may be desirable to modify read-only
&man.sysctl.8; values, which will require a reboot of the
system.For instance, on some laptop models the &man.cardbus.4;
device will not probe memory ranges and will fail with errors
similar to:cbb0: Could not map register memory
device_probe_and_attach: cbb0 attach returned 12The fix requires the modification of a read-only
&man.sysctl.8; setting. Add
hw.cbb.start_memory=0x20000000 to
/boot/loader.conf and reboot. Now
&man.cardbus.4; should work properly.Tuning DisksThe following section will discuss various tuning
mechanisms and options which may be applied to disk
devices. In many cases, disks with mechanical parts,
such as SCSI drives, will be the
bottleneck driving down the overall system performance. While
a solution is to install a drive without mechanical parts,
such as a solid state drive, mechanical drives are not
going away anytime soon. When tuning disks,
it is advisable to utilize the features of the &man.iostat.8;
command to test various changes to the system. This
command will allow the user to obtain valuable information
on system I/O.Sysctl Variablesvfs.vmiodirenablevfs.vmiodirenableThe vfs.vmiodirenable &man.sysctl.8;
variable
may be set to either 0 (off) or
1 (on). It is set to
1 by default. This variable controls
how directories are cached by the system. Most directories
are small, using just a single fragment (typically 1 K)
in the file system and typically 512 bytes in the
buffer cache. With this variable turned off, the buffer
cache will only cache a fixed number of directories, even
if the system has a huge amount of memory. When turned on,
this &man.sysctl.8; allows the buffer cache to use the
VM page cache to cache the directories,
making all the memory available for caching directories.
However, the minimum in-core memory used to cache a
directory is the physical page size (typically 4 K)
rather than 512 bytes. Keeping this option enabled
is recommended if the system is running any services which
manipulate large numbers of files. Such services can
include web caches, large mail systems, and news systems.
Keeping this option on will generally not reduce
performance, even with the wasted memory, but one should
experiment to find out.vfs.write_behindvfs.write_behindThe vfs.write_behind &man.sysctl.8;
variable
defaults to 1 (on). This tells the file
system to issue media writes as full clusters are collected,
which typically occurs when writing large sequential files.
This avoids saturating the buffer cache with dirty buffers
when it would not benefit I/O performance. However, this
may stall processes and under certain circumstances should
be turned off.vfs.hirunningspacevfs.hirunningspaceThe vfs.hirunningspace &man.sysctl.8;
variable determines how much outstanding write I/O may be
queued to disk controllers system-wide at any given
instance. The default is usually sufficient, but on
machines with many disks, try bumping it up to four or five
megabytes. Setting a value that
exceeds the buffer cache's write threshold can lead
to bad clustering performance. Do not set this value
arbitrarily high as higher write values may add latency to
reads occurring at the same time.There are various other buffer cache and
VM page cache related &man.sysctl.8;
values. Modifying these values is not recommended as the
VM system does a good job of
automatically tuning itself.vm.swap_idle_enabledvm.swap_idle_enabledThe vm.swap_idle_enabled
&man.sysctl.8; variable is useful in large multi-user
systems with many active login users and lots of idle
processes. Such systems tend to generate continuous
pressure on free memory reserves. Turning this feature on
and tweaking the swapout hysteresis (in idle seconds) via
vm.swap_idle_threshold1 and
vm.swap_idle_threshold2 depresses the
priority of memory pages associated with idle processes more
quickly than the normal pageout algorithm does. This gives a
helping hand to the pageout daemon. Only turn this option
on if needed, because the tradeoff is essentially pre-paging
memory sooner rather than later, which eats more swap and
disk bandwidth. In a small system this option will have a
detrimental effect, but in a large system that is already
doing moderate paging, this option allows the
VM system to stage whole processes into
and out of memory easily.hw.ata.wchw.ata.wcTurning off IDE write caching reduces
write bandwidth to IDE disks, but may
sometimes be necessary due to data consistency issues
introduced by hard drive vendors. The problem is that
some IDE drives lie about when a write
completes. With IDE write caching
turned on, IDE hard drives write data
to disk out of order and will sometimes delay writing some
blocks indefinitely when under heavy disk load. A crash or
power failure may cause serious file system corruption.
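For example, to inspect the current setting from the command line (the value shown is illustrative):

```
&prompt.user; sysctl hw.ata.wc
hw.ata.wc: 1
```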
Check the default on the system by observing the
hw.ata.wc &man.sysctl.8; variable. If
IDE write caching is turned off, one can
set this read-only variable to
1 in
/boot/loader.conf in order to enable
it at boot time.For more information, refer to &man.ata.4;.SCSI_DELAY
(kern.cam.scsi_delay)kern.cam.scsi_delaykernel optionsSCSI DELAYThe SCSI_DELAY kernel configuration
option may be used to reduce system boot times. The
defaults are fairly high and can be responsible for
15 seconds of delay in the boot process.
Reducing it to 5 seconds usually works
with modern drives. The
kern.cam.scsi_delay boot time tunable
should be used. The tunable and kernel configuration
option accept values in terms of
milliseconds and
not seconds.Soft UpdatesSoft Updates&man.tunefs.8;To fine-tune a file system, use &man.tunefs.8;. This
program has many different options. To toggle Soft Updates
on and off, use:&prompt.root; tunefs -n enable /filesystem
&prompt.root; tunefs -n disable /filesystemA file system cannot be modified with &man.tunefs.8; while
it is mounted. A good time to enable Soft Updates is before
any partitions have been mounted, in single-user mode.Soft Updates is recommended for UFS
file systems as it drastically improves meta-data performance,
mainly file creation and deletion, through the use of a memory
cache. There are two downsides to Soft Updates to be aware
of. First, Soft Updates guarantee file system consistency
in the case of a crash, but could easily be several seconds
or even a minute behind updating the physical disk. If the
system crashes, unwritten data may be lost. Secondly, Soft
Updates delay the freeing of file system blocks. If the
root file system is almost full, performing a major update,
such as make installworld, can cause the
file system to run out of space and the update to fail.More Details About Soft UpdatesSoft UpdatesdetailsMeta-data updates are updates to non-content data like
inodes or directories. There are two traditional approaches
to writing a file system's meta-data back to disk.Historically, the default behavior was to write out
meta-data updates synchronously. If a directory changed,
the system waited until the change was actually written to
disk. The file data buffers (file contents) were passed
through the buffer cache and backed up to disk later on
asynchronously. The advantage of this implementation is
that it operates safely. If there is a failure during an
update, meta-data is always in a consistent state. A
file is either created completely or not at all. If the
data blocks of a file did not find their way out of the
buffer cache onto the disk by the time of the crash,
&man.fsck.8; recognizes this and repairs the file system
by setting the file length to 0.
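The window during which file contents exist only in the buffer cache can be narrowed from user space by requesting a flush. A minimal, portable sketch (the path is a scratch file created purely for illustration):

```shell
# Sketch: file contents pass through the buffer cache and reach the
# disk later; sync(8) asks the kernel to flush dirty buffers now.
# The file here is a temporary scratch file, not real system data.
f=$(mktemp)
printf 'important data\n' > "$f"
sync                      # flush dirty buffers, including this file
content=$(cat "$f")
rm -f "$f"
echo "$content"
```

Applications that need stronger guarantees for a single file call &man.fsync.2; on the file descriptor instead of flushing everything.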
Additionally, the implementation is clear and simple. The
disadvantage is that meta-data changes are slow. For
example, rm -r touches all the files in a
directory sequentially, but each directory change will be
written synchronously to the disk. This includes updates to
the directory itself, to the inode table, and possibly to
indirect blocks allocated by the file. Similar
considerations apply for unrolling large hierarchies using
tar -x.The second approach is to use asynchronous meta-data
updates. This is the default for a UFS
file system mounted with mount -o async.
Since all meta-data updates are also passed through the
buffer cache, they will be intermixed with the updates of
the file content data. The advantage of this
implementation is there is no need to wait until each
meta-data update has been written to disk, so all operations
which cause huge amounts of meta-data updates work much
faster than in the synchronous case. This implementation
is still clear and simple, so there is a low risk for bugs
creeping into the code. The disadvantage is that there is
no guarantee for a consistent state of the file system.
If there is a failure during an operation that updated
large amounts of meta-data, like a power failure or someone
pressing the reset button, the file system will be left
in an unpredictable state. There is no opportunity to
examine the state of the file system when the system comes
up again as the data blocks of a file could already have
been written to the disk while the updates of the inode
table or the associated directory were not. It is
impossible to implement a &man.fsck.8; which is able to
clean up the resulting chaos because the necessary
information is not available on the disk. If the file
system has been damaged beyond repair, the only choice
is to reformat it and restore from backup.The usual solution for this problem is to implement
dirty region logging, which is also
referred to as journaling.
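As a toy illustration of the logging idea, using scratch files rather than a real file system:

```shell
# Toy write-ahead log, scratch files only: record the intended change
# in a small log first, apply it to its final location, then mark the
# log clean.  After a crash, entries still in the log can be replayed.
log=$(mktemp)
data=$(mktemp)
echo 'append: hello' >> "$log"    # 1. log the intent synchronously
echo 'hello' >> "$data"           # 2. apply to the proper location
: > "$log"                        # 3. truncate the log once applied
```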
Meta-data updates are still written synchronously, but only
into a small region of the disk. Later on, they are moved
to their proper location. Because the logging area is a
small, contiguous region on the disk, there are no long
distances for the disk heads to move, even during heavy
operations, so these operations are quicker than synchronous
updates. Additionally, the complexity of the implementation
is limited, so the risk of bugs being present is low. A
disadvantage is that all meta-data is written twice, once
into the logging region and once to the proper location, so
performance pessimization might result. On
the other hand, in case of a crash, all pending meta-data
operations can be either quickly rolled back or completed
from the logging area after the system comes up again,
resulting in a fast file system startup.Kirk McKusick, the developer of Berkeley
FFS, solved this problem with Soft
Updates. All pending meta-data updates are kept in memory
and written out to disk in a sorted sequence
(ordered meta-data updates). This has the
effect that, in case of heavy meta-data operations, later
updates to an item catch the earlier ones
which are still in memory and have not already been written
to disk. All operations are generally performed in memory
before the update is written to disk and the data blocks are
sorted according to their position so that they will not be
on the disk ahead of their meta-data. If the system
crashes, an implicit log rewind causes all
operations which were not written to the disk to appear as if
they never happened. A consistent file system state is
maintained that appears to be the one of 30 to 60 seconds
earlier. The algorithm used guarantees that all resources
in use are marked as such in their blocks and inodes.
After a crash, the only resource allocation error that
occurs is that resources are marked as used
which are actually free. &man.fsck.8;
recognizes this situation, and frees the resources that
are no longer used. It is safe to ignore the dirty state
of the file system after a crash by forcibly mounting it
with mount -f. In order to free
resources that may be unused, &man.fsck.8; needs to be run
at a later time. This is the idea behind the
background &man.fsck.8;: at system
startup time, only a snapshot of the
file system is recorded and &man.fsck.8; is run afterwards.
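On &os;, background checking is controlled from &man.rc.conf.5;. A hedged example fragment (the values shown are the usual defaults and may differ between releases):

```shell
# /etc/rc.conf fragment: enable background fsck and delay its start
# after boot (in seconds).  These are the usual defaults; check
# rc.conf(5) on the installed release.
background_fsck="YES"
background_fsck_delay="60"
```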
All file systems can then be mounted
dirty, so the system startup proceeds in
multi-user mode. Then, background &man.fsck.8; is
scheduled for all file systems where this is required, to
free resources that may be unused. File systems that do
not use Soft Updates still need the usual foreground
&man.fsck.8;.The advantage is that meta-data operations are nearly
as fast as asynchronous updates and are faster than
logging, which has to write the
meta-data twice. The disadvantages are the complexity of
the code, a higher memory consumption, and some
idiosyncrasies. After a crash, the state of the file
system appears to be somewhat older. In
situations where the standard synchronous approach would
have caused some zero-length files to remain after the
&man.fsck.8;, these files do not exist at all with Soft
Updates because neither the meta-data nor the file contents
have been written to disk. Disk space is not released until
the updates have been written to disk, which may take place
some time after running &man.rm.1;. This may cause problems
when installing large amounts of data on a file system
that does not have enough free space to hold all the files
twice.Tuning Kernel Limitstuningkernel limitsFile/Process Limitskern.maxfileskern.maxfilesThe kern.maxfiles &man.sysctl.8;
variable can be raised or lowered based upon system
requirements. This variable indicates the maximum number
of file descriptors on the system. When the file descriptor
table is full, file: table is full
will show up repeatedly in the system message buffer, which
can be viewed using &man.dmesg.8;.Each open file, socket, or fifo uses one file
descriptor. A large-scale production server may easily
require many thousands of file descriptors, depending on the
kind and number of services running concurrently.In older &os; releases, the default value of
kern.maxfiles is derived from
in the kernel configuration file.
kern.maxfiles grows proportionally to the
value of . When compiling a custom
kernel, consider setting this kernel configuration option
according to the use of the system. From this number, the
kernel is given most of its pre-defined limits. Even though
a production machine may not have 256 concurrent users, the
resources needed may be similar to a high-scale web
server.The read-only &man.sysctl.8; variable
kern.maxusers is automatically sized at
boot based on the amount of memory available in the system,
and may be determined at run-time by inspecting the value
of kern.maxusers. Some systems require
larger or smaller values of
kern.maxusers and values of
64, 128, and
256 are not uncommon. Going above
256 is not recommended unless a huge
number of file descriptors is needed. Many of the tunable
values set to their defaults by
kern.maxusers may be individually
overridden at boot-time or run-time in
/boot/loader.conf. Refer to
&man.loader.conf.5; and
/boot/defaults/loader.conf for more
details and some hints.In older releases, the system will auto-tune
maxusers if it is set to
0.
The auto-tuning algorithm sets
maxusers equal to the amount of
memory in the system, with a minimum of
32, and a maximum of
384. When
setting this option, set maxusers to
at least 4, especially if the system
runs &xorg; or is used to
compile software. The most important table set by
maxusers is the maximum number of
processes, which is set to
20 + 16 * maxusers. If
maxusers is set to 1,
there can only be
36 simultaneous processes, including
the 18 or so that the system starts up
at boot time and the 15 or so used by
&xorg;. Even a simple task like
reading a manual page will start up nine processes to
filter, decompress, and view it. Setting
maxusers to 64 allows
up to 1044 simultaneous processes, which
should be enough for nearly all uses. If, however, the
proc table full error is displayed
when trying to start another program, or a server is
running with a large number of simultaneous users, increase
the number and rebuild.maxusers does
not limit the number of users which
can log into the machine. It instead sets various table
sizes to reasonable values considering the maximum number
of users on the system and how many processes each user
will be running.kern.ipc.soacceptqueuekern.ipc.soacceptqueue
- The kern.ipc.soacceptqueue &man.sysctl.8;
- variable limits the size of the listen queue for accepting
- new TCP connections. The default value
- of 128 is typically too low for robust
- handling of new connections on a heavily loaded web server.
- For such environments, it is recommended to increase this
- value to 1024 or higher. A service
- such as &man.sendmail.8;, or
+ The kern.ipc.soacceptqueue
+ &man.sysctl.8; variable limits the size of the listen queue
+ for accepting new TCP connections. The
+ default value of 128 is typically too low
+ for robust handling of new connections on a heavily loaded
+ web server. For such environments, it is recommended to
+ increase this value to 1024 or higher. A
+ service such as &man.sendmail.8;, or
Apache may itself limit the
listen queue size, but will often have a directive in its
configuration file to adjust the queue size. Large listen
queues do a better job of avoiding Denial of Service
(DoS) attacks.Network LimitsThe NMBCLUSTERS kernel configuration
option dictates the amount of network Mbufs available to the
system. A heavily-trafficked server with a low number of
Mbufs will hinder performance. Each cluster represents
approximately 2 K of memory, so a value of
1024 represents 2
megabytes of kernel memory reserved for network buffers. A
simple calculation can be done to figure out how many are
needed. A web server which maxes out at
1000 simultaneous connections where each
connection uses a 6 K receive and 16 K send buffer,
requires approximately 32 MB worth of network buffers
to cover the web server. A good rule of thumb is to multiply
by 2, so
2x32 MB / 2 KB =
64 MB / 2 KB =
32768. Values between
4096 and 32768 are
recommended for machines with greater amounts of memory.
Never specify an arbitrarily high value for this parameter
as it could lead to a boot time crash. To observe network
cluster usage, use with
&man.netstat.1;.The kern.ipc.nmbclusters loader tunable
should be used to tune this at boot time. Only older versions
of &os; will require the use of the
NMBCLUSTERS kernel &man.config.8;
option.For busy servers that make extensive use of the
&man.sendfile.2; system call, it may be necessary to increase
the number of &man.sendfile.2; buffers via the
NSFBUFS kernel configuration option or by
setting its value in /boot/loader.conf
(see &man.loader.8; for details). A common indicator that
this parameter needs to be adjusted is when processes are seen
in the sfbufa state. The &man.sysctl.8;
variable kern.ipc.nsfbufs is read-only.
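A hedged /boot/loader.conf fragment for raising the limit at boot (the value is illustrative, not a recommendation):

```shell
# /boot/loader.conf fragment: raise the number of sendfile(2) buffers
# at boot.  16384 is an example value; size it to the workload.
kern.ipc.nsfbufs=16384
```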
This parameter nominally scales with
kern.maxusers; however, it may be necessary
to tune it accordingly.Even though a socket has been marked as non-blocking,
calling &man.sendfile.2; on the non-blocking socket may
result in the &man.sendfile.2; call blocking until enough
struct sf_buf's are made
available.net.inet.ip.portrange.*net.inet.ip.portrange.*The net.inet.ip.portrange.*
&man.sysctl.8; variables control the port number ranges
automatically bound to TCP and
UDP sockets. There are three ranges: a
low range, a default range, and a high range. Most network
programs use the default range which is controlled by
net.inet.ip.portrange.first and
net.inet.ip.portrange.last, which default
to 1024 and 5000,
respectively. Bound port ranges are used for outgoing
connections and it is possible to run the system out of
ports under certain circumstances. This most commonly
occurs when running a heavily loaded web proxy. The port
range is not an issue when running a server which handles
mainly incoming connections, such as a web server, or has
a limited number of outgoing connections, such as a mail
relay. For situations where there is a shortage of ports,
it is recommended to increase
net.inet.ip.portrange.last modestly. A
value of 10000, 20000
or 30000 may be reasonable. Consider
firewall effects when changing the port range. Some
firewalls may block large ranges of ports, usually
low-numbered ports, and expect systems to use higher ranges
of ports for outgoing connections. For this reason, it
is not recommended that the value of
net.inet.ip.portrange.first be
lowered.TCP Bandwidth Delay ProductTCP Bandwidth Delay Product
Limitingnet.inet.tcp.inflight.enableTCP bandwidth delay product limiting
can be enabled by setting the
net.inet.tcp.inflight.enable
&man.sysctl.8; variable to 1. This
instructs the system to attempt to calculate the bandwidth
delay product for each connection and limit the amount of
data queued to the network to just the amount required to
maintain optimum throughput.This feature is useful when serving data over modems,
Gigabit Ethernet, high speed WAN links,
or any other link with a high bandwidth delay product,
especially when also using window scaling or when a large
send window has been configured. When enabling this option,
also set net.inet.tcp.inflight.debug to
0 to disable debugging. For production
use, setting net.inet.tcp.inflight.min
to at least 6144 may be beneficial.
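Collected as an /etc/sysctl.conf fragment, using the values suggested in the surrounding text:

```shell
# /etc/sysctl.conf fragment: enable TCP inflight limiting, disable
# its debugging output, and set a production-oriented minimum window.
net.inet.tcp.inflight.enable=1
net.inet.tcp.inflight.debug=0
net.inet.tcp.inflight.min=6144
```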
Setting high minimums may effectively disable bandwidth
limiting, depending on the link. The limiting feature
reduces the amount of data built up in intermediate route
and switch packet queues and reduces the amount of data
built up in the local host's interface queue. With fewer
queued packets, interactive connections, especially over
slow modems, will operate with lower
Round Trip Times. This feature only
affects server side data transmission such as uploading.
It has no effect on data reception or downloading.Adjusting net.inet.tcp.inflight.stab
is not recommended. This parameter
defaults to 20, representing 2 maximal
packets added to the bandwidth delay product window
calculation. The additional window is required to stabilize
the algorithm and improve responsiveness to changing
conditions, but it can also result in higher &man.ping.8;
times over slow links, though still much lower than without
the inflight algorithm. In such cases, try reducing this
parameter to 15, 10,
or 5 and reducing
net.inet.tcp.inflight.min to a value such
as 3500 to get the desired effect.
Reducing these parameters should be done as a last resort
only.Virtual Memorykern.maxvnodesA vnode is the internal representation of a file or
directory. Increasing the number of vnodes available to
the operating system reduces disk I/O. Normally, this is
handled by the operating system and does not need to be
changed. In some cases where disk I/O is a bottleneck and
the system is running out of vnodes, this setting needs
to be increased. The amount of inactive and free
RAM will need to be taken into
account.To see the current number of vnodes in use:&prompt.root; sysctl vfs.numvnodes
vfs.numvnodes: 91349To see the maximum vnodes:&prompt.root; sysctl kern.maxvnodes
kern.maxvnodes: 100000If the current vnode usage is near the maximum, try
increasing kern.maxvnodes by a value of
1000. Keep an eye on the number of
vfs.numvnodes. If it climbs up to the
maximum again, kern.maxvnodes will need
to be increased further. Otherwise, a shift in memory
usage as reported by &man.top.1; should be visible and
more memory should be active.Adding Swap SpaceSometimes a system requires more swap space. This section
describes two methods to increase swap space: adding swap to an
existing partition or new hard drive, and creating a swap file
on an existing partition.For information on how to encrypt swap space, which options
exist, and why it should be done, refer to .Swap on a New Hard Drive or Existing PartitionAdding a new hard drive for swap gives better performance
than using a partition on an existing drive. Setting up
partitions and hard drives is explained in while discusses partition layouts
and swap partition size considerations.Use swapon to add a swap partition to
the system. For example:&prompt.root; swapon /dev/ada1s1bIt is possible to use any partition not currently
mounted, even if it already contains data. Using
swapon on a partition that contains data
will overwrite and destroy that data. Make sure that the
partition to be added as swap is really the intended
partition before running swapon.To automatically add this swap partition on boot, add an
entry to /etc/fstab:/dev/ada1s1b none swap sw 0 0See &man.fstab.5; for an explanation of the entries in
/etc/fstab. More information about
swapon can be found in
&man.swapon.8;.Creating a Swap FileThese examples create a 64M swap file called
/usr/swap0 instead of using a
partition.Using swap files requires that the module needed by
&man.md.4; has either been built into the kernel or has been
loaded before swap is enabled. See
for information about building
a custom kernel.Creating a Swap File on
&os; 10.X and LaterCreate the swap file:&prompt.root; dd if=/dev/zero of=/usr/swap0 bs=1m count=64Set the proper permissions on the new file:&prompt.root; chmod 0600 /usr/swap0Inform the system about the swap file by adding a
line to /etc/fstab:md99 none swap sw,file=/usr/swap0,late 0 0The &man.md.4; device md99 is
used, leaving lower device numbers available for
interactive use.Swap space will be added on system startup. To add
swap space immediately, use &man.swapon.8;:&prompt.root; swapon -aLCreating a Swap File on
&os; 9.X and EarlierCreate the swap file,
/usr/swap0:&prompt.root; dd if=/dev/zero of=/usr/swap0 bs=1m count=64Set the proper permissions on
/usr/swap0:&prompt.root; chmod 0600 /usr/swap0Enable the swap file in
/etc/rc.conf:swapfile="/usr/swap0" # Set to name of swap fileSwap space will be added on system startup. To
enable the swap file immediately, specify a free memory
device. Refer to for
more information about memory devices.&prompt.root; mdconfig -a -t vnode -f /usr/swap0 -u 0 && swapon /dev/md0Power and Resource ManagementHitenPandyaWritten by TomRhodesIt is important to utilize hardware resources in an
efficient manner. Power and resource management allows the
operating system to monitor system limits and to possibly
provide an alert if the system temperature increases
unexpectedly. An early specification for providing power
management was the Advanced Power Management
(APM) facility. APM
controls the power usage of a system based on its activity.
However, it was difficult and inflexible for operating systems
to manage the power usage and thermal properties of a system.
The hardware was managed by the BIOS and the
user had limited configurability and visibility into the power
management settings. The APM
BIOS is supplied by the vendor and is
specific to the hardware platform. An APM
driver in the operating system mediates access to the
APM Software Interface, which allows
management of power levels.There are four major problems in APM.
First, power management is done by the vendor-specific
BIOS, separate from the operating system.
For example, the user can set idle-time values for a hard drive
in the APM BIOS so that,
when exceeded, the BIOS spins down the hard
drive without the consent of the operating system. Second, the
APM logic is embedded in the
BIOS, and it operates outside the scope of
the operating system. This means that users can only fix
problems in the APM
BIOS by flashing a new one into the
ROM, which is a dangerous procedure with the
potential to leave the system in an unrecoverable state if it
fails. Third, APM is a vendor-specific
technology, meaning that there is a lot of duplication of
efforts and bugs found in one vendor's BIOS
may not be solved in others. Lastly, the APM
BIOS did not have enough room to implement a
sophisticated power policy or one that can adapt well to the
purpose of the machine.The Plug and Play BIOS
(PNPBIOS) was unreliable in many situations.
PNPBIOS is 16-bit technology, so the
operating system has to use 16-bit emulation in order to
interface with PNPBIOS methods. &os;
provides an APM driver as
APM should still be used for systems
manufactured at or before the year 2000. The driver is
documented in &man.apm.4;.ACPIAPMThe successor to APM is the Advanced
Configuration and Power Interface (ACPI).
ACPI is a standard written by an alliance of
vendors to provide an interface for hardware resources and power
management. It is a key element in Operating
System-directed configuration and Power Management
as it provides more control and flexibility to the operating
system.This chapter demonstrates how to configure
ACPI on &os;. It then offers some tips on
how to debug ACPI and how to submit a problem
report containing debugging information so that developers can
diagnose and fix ACPI issues.Configuring ACPIIn &os; the &man.acpi.4; driver is loaded by default at
system boot and should not be compiled
into the kernel. This driver cannot be unloaded after boot
because the system bus uses it for various hardware
interactions. However, if the system is experiencing
problems, ACPI can be disabled altogether
by rebooting after setting
hint.acpi.0.disabled="1" in
/boot/loader.conf or by setting this
variable at the loader prompt, as described in .ACPI and APM
cannot coexist and should be used separately. The last one
to load will terminate if the driver notices the other is
running.ACPI can be used to put the system into
a sleep mode with acpiconf, the
flag, and a number from
1 to 5. Most users only
need 1 (quick suspend to
RAM) or 3 (suspend to
RAM). Option 5 performs
a soft-off which is the same as running
halt -p.Other options are available using
sysctl. Refer to &man.acpi.4; and
&man.acpiconf.8; for more information.Common ProblemsACPIACPI is present in all modern computers
that conform to the ia32 (x86), ia64 (Itanium), and amd64
(AMD) architectures. The full standard has
many features including CPU performance
management, power planes control, thermal zones, various
battery systems, embedded controllers, and bus enumeration.
Most systems implement less than the full standard. For
instance, a desktop system usually only implements bus
enumeration while a laptop might have cooling and battery
management support as well. Laptops also have suspend and
resume, with their own associated complexity.An ACPI-compliant system has various
components. The BIOS and chipset vendors
provide various fixed tables, such as FADT,
in memory that specify things like the APIC
map (used for SMP), config registers, and
simple configuration values. Additionally, a bytecode table,
the Differentiated System Description Table
DSDT, specifies a tree-like name space of
devices and methods.The ACPI driver must parse the fixed
tables, implement an interpreter for the bytecode, and modify
device drivers and the kernel to accept information from the
ACPI subsystem. For &os;, &intel; has
provided an interpreter (ACPI-CA) that is
shared with &linux; and NetBSD. The path to the
ACPI-CA source code is
src/sys/contrib/dev/acpica. The glue
code that allows ACPI-CA to work on &os; is
in src/sys/dev/acpica/Osd. Finally,
drivers that implement various ACPI devices
are found in src/sys/dev/acpica.ACPIproblemsFor ACPI to work correctly, all the
parts have to work correctly. Here are some common problems,
in order of frequency of appearance, and some possible
workarounds or fixes. If a fix does not resolve the issue,
refer to for instructions
on how to submit a bug report.Mouse IssuesIn some cases, resuming from a suspend operation will
cause the mouse to fail. A known workaround is to add
hint.psm.0.flags="0x3000" to
/boot/loader.conf.Suspend/ResumeACPI has three suspend to
RAM (STR) states,
S1-S3, and one suspend
to disk state (STD), called
S4. STD can be
implemented in two separate ways. The
S4BIOS is a
BIOS-assisted suspend to disk and
S4OS is implemented
entirely by the operating system. The normal state the
system is in when plugged in but not powered up is
soft off (S5).Use sysctl hw.acpi to check for the
suspend-related items. These example results are from a
Thinkpad:hw.acpi.supported_sleep_state: S3 S4 S5
hw.acpi.s4bios: 0Use acpiconf -s to test
S3, S4, and
S5. An of one
(1) indicates
S4BIOS support instead
of S4 operating system support.When testing suspend/resume, start with
S1, if supported. This state is most
likely to work since it does not require much driver
support. No one has implemented S2,
which is similar to S1. Next, try
S3. This is the deepest
STR state and requires a lot of driver
support to properly reinitialize the hardware.A common problem with suspend/resume is that many device
drivers do not save, restore, or reinitialize their
firmware, registers, or device memory properly. As a first
attempt at debugging the problem, try:&prompt.root; sysctl debug.bootverbose=1
&prompt.root; sysctl debug.acpi.suspend_bounce=1
&prompt.root; acpiconf -s 3This test emulates the suspend/resume cycle of all
device drivers without actually going into
S3 state. In some cases, problems such
as losing firmware state, device watchdog time out, and
retrying forever, can be captured with this method. Note
that the system will not really enter S3
state, which means devices may not lose power, and many
will work fine even if suspend/resume methods are totally
missing, unlike real S3 state.Harder cases require additional hardware, such as a
serial port and cable for debugging through a serial
console, a Firewire port and cable for using &man.dcons.4;,
and kernel debugging skills.To help isolate the problem, unload as many drivers as
possible. If it works, narrow down which driver is the
problem by loading drivers until it fails again. Typically,
binary drivers like nvidia.ko, display
drivers, and USB will have the most
problems while Ethernet interfaces usually work fine. If
drivers can be properly loaded and unloaded, automate this
by putting the appropriate commands in
/etc/rc.suspend and
/etc/rc.resume. Try setting
to 1
if the display is messed up after resume. Try setting
longer or shorter values for
to see if that
helps.Try loading a recent &linux; distribution to see if
suspend/resume works on the same hardware. If it works on
&linux;, it is likely a &os; driver problem. Narrowing down
which driver causes the problem will assist developers in
fixing the problem. Since the ACPI
maintainers rarely maintain other drivers, such as sound
or ATA, any driver problems should also
be posted to the &a.current.name; list and mailed to the
driver maintainer. Advanced users can include debugging
&man.printf.3;s in a problematic driver to track down where
in its resume function it hangs.Finally, try disabling ACPI and
enabling APM instead. If suspend/resume
works with APM, stick with
APM, especially on older hardware
(pre-2000). It took vendors a while to get
ACPI support correct and older hardware
is more likely to have BIOS problems with
ACPI.System HangsMost system hangs are a result of lost interrupts or an
interrupt storm. Chipsets may have problems based on
how the BIOS configures interrupts before boot,
correctness of the APIC
(MADT) table, and routing of the System
Control Interrupt (SCI).interrupt stormsInterrupt storms can be distinguished from lost
interrupts by checking the output of
vmstat -i and looking at the line that
has acpi0. If the counter is increasing
at more than a couple per second, there is an interrupt
storm. If the system appears hung, try breaking to
DDB (CTRL+ALT+ESC on console) and type
show interrupts.APICdisablingWhen dealing with interrupt problems, try disabling
APIC support with
hint.apic.0.disabled="1" in
/boot/loader.conf.PanicsPanics are relatively rare for ACPI
and are the top priority to be fixed. The first step is to
isolate the steps to reproduce the panic, if possible, and
get a backtrace. Follow the advice for enabling
options DDB and setting up a serial
console in or setting
up a dump partition. To get a backtrace in
DDB, use tr. When
handwriting the backtrace, get at least the last five and
the top five lines in the trace.Then, try to isolate the problem by booting with
ACPI disabled. If that works, isolate
the ACPI subsystem by using various
values of . See
&man.acpi.4; for some examples.System Powers Up After Suspend or ShutdownFirst, try setting
hw.acpi.disable_on_poweroff="0" in
/boot/loader.conf. This keeps
ACPI from disabling various events during
the shutdown process. Some systems need this value set to
1 (the default) for the same reason.
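As a hedged /boot/loader.conf fragment (try the default of 1 first if unsure which behavior the hardware needs):

```shell
# /boot/loader.conf fragment: keep ACPI from disabling various events
# during shutdown.  Some systems instead need the default of "1".
hw.acpi.disable_on_poweroff="0"
```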
This usually fixes the problem of a system powering up
spontaneously after a suspend or poweroff.BIOS Contains Buggy BytecodeACPIASLSome BIOS vendors provide incorrect
or buggy bytecode. This is usually manifested by kernel
console messages like this:ACPI-1287: *** Error: Method execution failed [\\_SB_.PCI0.LPC0.FIGD._STA] \\
(Node 0xc3f6d160), AE_NOT_FOUNDOften, these problems may be resolved by updating the
BIOS to the latest revision. Most
console messages are harmless, but if there are other
problems, like the battery status is not working, these
messages are a good place to start looking for
problems.Overriding the Default AMLThe BIOS bytecode, known as
ACPI Machine Language
(AML), is compiled from a source language
called ACPI Source Language
(ASL). The AML is
found in the table known as the Differentiated System
Description Table (DSDT).ACPIASLThe goal of &os; is for everyone to have working
ACPI without any user intervention.
Workarounds are still being developed for common mistakes made
by BIOS vendors. The &microsoft;
interpreter (acpi.sys and
acpiec.sys) does not strictly check for
adherence to the standard, and thus many
BIOS vendors who only test
ACPI under &windows; never fix their
ASL. &os; developers continue to identify
and document which non-standard behavior is allowed by
&microsoft;'s interpreter and replicate it so that &os; can
work without forcing users to fix the
ASL.To help identify buggy behavior and possibly fix it
manually, a copy can be made of the system's
ASL. To copy the system's
ASL to a specified file name, use
acpidump with , to show
the contents of the fixed tables, and , to
disassemble the AML:&prompt.root; acpidump -td > my.aslSome AML versions assume the user is
running &windows;. To override this, set
hw.acpi.osname="Windows
2009" in
/boot/loader.conf, using the most recent
&windows; version listed in the ASL.Other workarounds may require my.asl
to be customized. If this file is edited, compile the new
ASL using the following command. Warnings
can usually be ignored, but errors are bugs that will usually
prevent ACPI from working correctly.&prompt.root; iasl -f my.aslIncluding forces creation of the
AML, even if there are errors during
compilation. Some errors, such as missing return statements,
are automatically worked around by the &os;
interpreter.The default output filename for iasl is
DSDT.aml. Load this file instead of the
BIOS's buggy copy, which is still present
in flash memory, by editing
/boot/loader.conf as follows:acpi_dsdt_load="YES"
acpi_dsdt_name="/boot/DSDT.aml"Be sure to copy DSDT.aml to
/boot, then reboot the system. If this
fixes the problem, send a &man.diff.1; of the old and new
ASL to &a.acpi.name; so that developers can
work around the buggy behavior in
acpica.Getting and Submitting Debugging InfoNateLawsonWritten by PeterSchultzWith contributions from TomRhodesACPIproblemsACPIdebuggingThe ACPI driver has a flexible
debugging facility. A set of subsystems and the level of
verbosity can be specified. The subsystems to debug are
specified as layers and are broken down into components
(ACPI_ALL_COMPONENTS) and
ACPI hardware support
(ACPI_ALL_DRIVERS). The verbosity of
debugging output is specified as the level and ranges from
just report errors (ACPI_LV_ERROR) to
everything (ACPI_LV_VERBOSE). The level is
a bitmask so multiple options can be set at once, separated by
spaces. In practice, a serial console should be used to log
the output so it is not lost as the console message buffer
flushes. A full list of the individual layers and levels is
found in &man.acpi.4;.Debugging output is not enabled by default. To enable it,
add options ACPI_DEBUG to the custom kernel
configuration file if ACPI is compiled into
the kernel. Add ACPI_DEBUG=1 to
/etc/make.conf to enable it globally. If
a module is used instead of a custom kernel, recompile just
the acpi.ko module as follows:&prompt.root; cd /sys/modules/acpi/acpi && make clean && make ACPI_DEBUG=1Copy the compiled acpi.ko to
/boot/kernel and add the desired level
and layer to /boot/loader.conf. The
entries in this example enable debug messages for all
ACPI components and hardware drivers and
output error messages at the least verbose level:debug.acpi.layer="ACPI_ALL_COMPONENTS ACPI_ALL_DRIVERS"
debug.acpi.level="ACPI_LV_ERROR"If the required information is triggered by a specific
event, such as a suspend and then resume, do not modify
/boot/loader.conf. Instead, use
sysctl to specify the layer and level after
booting and preparing the system for the specific event. The
variables which can be set using sysctl are
named the same as the tunables in
/boot/loader.conf.ACPIproblemsOnce the debugging information is gathered, it can be sent
to &a.acpi.name; so that it can be used by the &os;
ACPI maintainers to identify the root cause
of the problem and to develop a solution.Before submitting debugging information to this mailing
list, ensure the latest BIOS version is
installed and, if available, the embedded controller
firmware version.When submitting a problem report, include the following
information:Description of the buggy behavior, including system
type, model, and anything that causes the bug to appear.
Note as accurately as possible when the bug began
occurring if it is new.The output of dmesg after running
boot -v, including any error messages
generated by the bug.The dmesg output from boot
-v with ACPI disabled,
if disabling ACPI helps to fix the
problem.Output from sysctl hw.acpi. This
lists which features the system offers.The URL to a pasted version of the
system's ASL. Do
not send the ASL
directly to the list as it can be very large. Generate a
copy of the ASL by running this
command:&prompt.root; acpidump -dt > name-system.aslSubstitute the login name for
name and manufacturer/model for
system. For example, use
njl-FooCo6000.asl.Most &os; developers watch the &a.current;, but one should
submit problems to &a.acpi.name; to be sure it is seen. Be
patient when waiting for a response. If the bug is not
immediately apparent, submit a PR using
&man.send-pr.1;. When entering a PR,
include the same information as requested above. This helps
developers to track the problem and resolve it. Do not send a
PR without emailing &a.acpi.name; first as
it is likely that the problem has been reported before.ReferencesMore information about ACPI may be
found in the following locations:The &os; ACPI Mailing List Archives
(http://lists.freebsd.org/pipermail/freebsd-acpi/)The ACPI 2.0 Specification (http://acpi.info/spec.htm)&man.acpi.4;, &man.acpi.thermal.4;, &man.acpidump.8;,
&man.iasl.8;, and &man.acpidb.8;
Index: head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml (revision 48529)
@@ -1,2240 +1,2240 @@
Updating and Upgrading &os;JimMockRestructured, reorganized, and parts updated
by JordanHubbardOriginal work by Poul-HenningKampJohnPolstraNikClaytonSynopsis&os; is under constant development between releases. Some
people prefer to use the officially released versions, while
others prefer to keep in sync with the latest developments.
However, even official releases are often updated with security
and other critical fixes. Regardless of the version used, &os;
provides all the necessary tools to keep the system updated, and
allows for easy upgrades between versions. This chapter
describes how to track the development system and the basic
tools for keeping a &os; system up-to-date.After reading this chapter, you will know:How to keep a &os; system up-to-date with
freebsd-update,
Subversion, or
CTM.How to compare the state of an installed system against
a known pristine copy.How to keep the installed documentation up-to-date with
Subversion or documentation
ports.The difference between the two development
branches: &os.stable; and &os.current;.How to rebuild and reinstall the entire base
system.Before reading this chapter, you should:Properly set up the network connection
().Know how to install additional third-party
software ().Throughout this chapter, svn is used to
obtain and update &os; sources. To use it, first install the
devel/subversion port or
package.&os; UpdateTomRhodesWritten by ColinPercivalBased on notes provided by Updating and Upgradingfreebsd-updateupdating-upgradingApplying security patches in a timely manner and upgrading
to a newer release of an operating system are important aspects
of ongoing system administration. &os; includes a utility
called freebsd-update which can be used to
perform both these tasks.This utility supports binary security and errata updates to
&os;, without the need to manually compile and install the patch
or a new kernel. Binary updates are available for all
architectures and releases currently supported by the security
team. The list of supported releases and their estimated
end-of-life dates are listed at http://www.FreeBSD.org/security/.This utility also supports operating system upgrades to
minor point releases as well as upgrades to another release
branch. Before upgrading to a new release, review its release
announcement as it contains important information pertinent to
the release. Release announcements are available from http://www.FreeBSD.org/releases/.If a crontab utilizing the features of
&man.freebsd-update.8; exists, it must be disabled before
upgrading the operating system.This section describes the configuration file used by
freebsd-update, demonstrates how to apply a
security patch and how to upgrade to a minor or major operating
system release, and discusses some of the considerations when
upgrading the operating system.The Configuration FileThe default configuration file for
freebsd-update works as-is. Some users may
wish to tweak the default configuration in
/etc/freebsd-update.conf, allowing
better control of the process. The comments in this file
explain the available options, but the following may require a
bit more explanation:# Components of the base system which should be kept updated.
Components world kernelThis parameter controls which parts of &os; will be kept
up-to-date. The default is to update the entire base system
and the kernel. Individual components can instead be
specified, such as src/base or
src/sys. However, the best option is to
leave this at the default as changing it to include specific
items requires every needed item to be listed. Over time,
this could have disastrous consequences as source code and
binaries may become out of sync.# Paths which start with anything matching an entry in an IgnorePaths
# statement will be ignored.
IgnorePaths /boot/kernel/linker.hintsTo leave specified directories, such as
/bin or /sbin,
untouched during the update process, add their paths to this
statement. This option may be used to prevent
freebsd-update from overwriting local
modifications.# Paths which start with anything matching an entry in an UpdateIfUnmodified
# statement will only be updated if the contents of the file have not been
# modified by the user (unless changes are merged; see below).
UpdateIfUnmodified /etc/ /var/ /root/ /.cshrc /.profileThis option will only update unmodified configuration
files in the specified directories. Any changes made by the
user will prevent the automatic updating of these files.
There is another option,
KeepModifiedMetadata, which will instruct
freebsd-update to save the changes during
the merge.# When upgrading to a new &os; release, files which match MergeChanges
# will have any local changes merged into the version from the new release.
MergeChanges /etc/ /var/named/etc/ /boot/device.hintsList of directories with configuration files that
freebsd-update should attempt to merge.
The file merge process is a series of &man.diff.1; patches
similar to &man.mergemaster.8;, but with fewer options.
A merge is either accepted automatically, opened in an editor, or causes
freebsd-update to abort. When in doubt,
back up /etc and just accept the merges.
See for more information about
mergemaster.# Directory in which to store downloaded updates and temporary
# files used by &os; Update.
# WorkDir /var/db/freebsd-updateThis directory is where all patches and temporary files
are placed. In cases where the user is doing a version
upgrade, this location should have at least a gigabyte of disk
space available.# When upgrading between releases, should the list of Components be
# read strictly (StrictComponents yes) or merely as a list of components
# which *might* be installed of which &os; Update should figure out
# which actually are installed and upgrade those (StrictComponents no)?
# StrictComponents noWhen this option is set to yes,
freebsd-update will assume that the
Components list is complete and will not
attempt to make changes outside of the list. Effectively,
freebsd-update will attempt to update
every file which belongs to the Components
list.Applying Security PatchesThe process of applying &os; security patches has been
simplified, allowing an administrator to keep a system fully
patched using freebsd-update. More
information about &os; security advisories can be found in
.&os; security patches may be downloaded and installed
using the following commands. The first command will
determine if any outstanding patches are available, and if so,
will list the files that will be modified if the patches are
applied. The second command will apply the patches.&prompt.root; freebsd-update fetch
&prompt.root; freebsd-update installIf the update applies any kernel patches, the system will
need a reboot in order to boot into the patched kernel. If
the patch was applied to any running binaries, the affected
applications should be restarted so that the patched version
of the binary is used.The system can be configured to automatically check for
updates once every day by adding this entry to
/etc/crontab:@daily root freebsd-update cronIf patches exist, they will automatically be downloaded
but will not be applied. The root user will be sent an
email so that the patches may be reviewed and manually
installed with
freebsd-update install.If anything goes wrong, freebsd-update
has the ability to roll back the last set of changes with the
following command:&prompt.root; freebsd-update rollback
Uninstalling updates... done.Again, the system should be restarted if the kernel or any
kernel modules were modified and any affected binaries should
be restarted.Only the GENERIC kernel can be
automatically updated by freebsd-update.
If a custom kernel is installed, it will have to be rebuilt
and reinstalled after freebsd-update
finishes installing the updates. However,
freebsd-update will detect and update the
GENERIC kernel if
/boot/GENERIC exists, even if it is not
the current running kernel of the system.Always keep a copy of the GENERIC
kernel in /boot/GENERIC. It will be
helpful in diagnosing a variety of problems and in
performing version upgrades. Refer to for
instructions on how to get a copy of the
GENERIC kernel.Unless the default configuration in
/etc/freebsd-update.conf has been
changed, freebsd-update will install the
updated kernel sources along with the rest of the updates.
Rebuilding and reinstalling a new custom kernel can then be
performed in the usual way.The updates distributed by
freebsd-update do not always involve the
kernel. It is not necessary to rebuild a custom kernel if the
kernel sources have not been modified by
freebsd-update install. However,
freebsd-update will always update
/usr/src/sys/conf/newvers.sh. The
current patch level, as indicated by the -p
number reported by uname -r, is obtained
from this file. Rebuilding a custom kernel, even if nothing
else changed, allows uname to accurately
report the current patch level of the system. This is
particularly helpful when maintaining multiple systems, as it
allows for a quick assessment of the updates installed in each
one.Performing Major and Minor Version UpgradesUpgrades from one minor version of &os; to another, like
from &os; 9.0 to &os; 9.1, are called
minor version upgrades.
Major version upgrades occur when &os;
is upgraded from one major version to another, like from
&os; 9.X to &os; 10.X. Both types of upgrades can
be performed by providing freebsd-update
with a release version target.If the system is running a custom kernel, make sure that
a copy of the GENERIC kernel exists in
/boot/GENERIC before starting the
upgrade. Refer to for
instructions on how to get a copy of the
GENERIC kernel.The following command, when run on a &os; 9.0 system,
will upgrade it to &os; 9.1:&prompt.root; freebsd-update -r 9.1-RELEASE upgradeAfter the command has been received,
freebsd-update will evaluate the
configuration file and current system in an attempt to gather
the information necessary to perform the upgrade. A screen
listing will display which components have and have not been
detected. For example:Looking up update.FreeBSD.org mirrors... 1 mirrors found.
Fetching metadata signature for 9.0-RELEASE from update1.FreeBSD.org... done.
Fetching metadata index... done.
Inspecting system... done.
The following components of FreeBSD seem to be installed:
kernel/smp src/base src/bin src/contrib src/crypto src/etc src/games
src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue
src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin
world/base world/info world/lib32 world/manpages
The following components of FreeBSD do not seem to be installed:
kernel/generic world/catpages world/dict world/doc world/games
world/proflibs
Does this look reasonable (y/n)? yAt this point, freebsd-update will
attempt to download all files required for the upgrade. In
some cases, the user may be prompted with questions regarding
what to install or how to proceed.When using a custom kernel, the above step will produce a
warning similar to the following:WARNING: This system is running a "MYKERNEL" kernel, which is not a
kernel configuration distributed as part of FreeBSD 9.0-RELEASE.
This kernel will not be updated: you MUST update the kernel manually
before running "/usr/sbin/freebsd-update install"This warning may be safely ignored at this point. The
updated GENERIC kernel will be used as an
intermediate step in the upgrade process.Once all the patches have been downloaded to the local
system, they will be applied. This process may take a while,
depending on the speed and workload of the machine.
Configuration files will then be merged. The merging process
requires some user intervention as a file may be merged or an
editor may appear on screen for a manual merge. The results
of every successful merge will be shown to the user as the
process continues. A failed or ignored merge will cause the
process to abort. Users may wish to make a backup of
/etc and manually merge important files,
such as master.passwd or
group at a later time.The system is not being altered yet as all patching and
merging is happening in another directory. Once all patches
have been applied successfully, all configuration files have
been merged and it seems the process will go smoothly, the
changes can be committed to disk by the user using the
following command:&prompt.root; freebsd-update installThe kernel and kernel modules will be patched first. If
the system is running with a custom kernel, use
&man.nextboot.8; to set the kernel for the next boot to the
updated /boot/GENERIC:&prompt.root; nextboot -k GENERICBefore rebooting with the GENERIC
kernel, make sure it contains all the drivers required for
the system to boot properly and connect to the network, if
the machine being updated is accessed remotely. In
particular, if the running custom kernel contains built-in
functionality usually provided by kernel modules, make sure
to temporarily load these modules into the
GENERIC kernel using the
/boot/loader.conf facility. It is
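As a hypothetical illustration, the temporary additions to /boot/loader.conf might look like the following; the module names are examples only and depend on what the custom kernel had compiled in:

```shell
# Hypothetical /boot/loader.conf additions for the intermediate GENERIC
# boot; module names are illustrative, not a recommendation.
if_em_load="YES"          # e.g. the NIC driver needed for remote access
geom_mirror_load="YES"    # e.g. software RAID used by the disks
```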
recommended to disable non-essential services as well as any
disk and network mounts until the upgrade process is
complete.The machine should now be restarted with the updated
kernel:&prompt.root; shutdown -r nowOnce the system has come back online, restart
freebsd-update using the following command.
Since the state of the process has been saved,
freebsd-update will not start from the
beginning, but will instead move on to the next phase and
remove all old shared libraries and object files.&prompt.root; freebsd-update installDepending upon whether any library version numbers were
bumped, there may only be two install phases instead of
three.The upgrade is now complete. If this was a major version
upgrade, reinstall all ports and packages as described in
.Custom Kernels with &os; 9.X and LaterBefore using freebsd-update, ensure
that a copy of the GENERIC kernel
exists in /boot/GENERIC. If a custom
kernel has only been built once, the kernel in
/boot/kernel.old is the
GENERIC kernel. Simply rename this
directory to /boot/kernel.If a custom kernel has been built more than once or if
it is unknown how many times the custom kernel has been
built, obtain a copy of the GENERIC
kernel that matches the current version of the operating
system. If physical access to the system is available, a
copy of the GENERIC kernel can be
installed from the installation media:&prompt.root; mount /cdrom
&prompt.root; cd /cdrom/usr/freebsd-dist
&prompt.root; tar -C/ -xvf kernel.txz boot/kernel/kernelAlternately, the GENERIC kernel may
be rebuilt and installed from source:&prompt.root; cd /usr/src
&prompt.root; make kernel __MAKE_CONF=/dev/null SRCCONF=/dev/nullFor this kernel to be identified as the
GENERIC kernel by
freebsd-update, the
GENERIC configuration file must not
have been modified in any way. It is also suggested that
the kernel is built without any other special
options.Rebooting into the GENERIC kernel
is not required as freebsd-update only
needs /boot/GENERIC to exist.Upgrading Packages After a Major Version
UpgradeGenerally, installed applications will continue to work
without problems after minor version upgrades. Major
versions use different Application Binary Interfaces
(ABIs), which will break most
third-party applications. After a major version upgrade,
all installed packages and ports need to be upgraded.
Packages can be upgraded using pkg
upgrade. To upgrade installed ports, use a
utility such as
ports-mgmt/portmaster.A forced upgrade of all installed packages will replace
the packages with fresh versions from the repository even if
the version number has not increased. This is required
because of the ABI version change when upgrading between
major versions of &os;. The forced upgrade can be
accomplished by performing:&prompt.root; pkg-static upgrade -fA rebuild of all installed applications can be
accomplished with this command:&prompt.root; portmaster -afThis command will display the configuration screens for
each application that has configurable options and wait for
the user to interact with those screens. To prevent this
behavior, and use only the default options, include
in the above command.Once the software upgrades are complete, finish the
upgrade process with a final call to
freebsd-update in order to tie up all the
loose ends in the upgrade process:&prompt.root; freebsd-update installIf the GENERIC kernel was
temporarily used, this is the time to build and install a
new custom kernel using the instructions in .Reboot the machine into the new &os; version. The
upgrade process is now complete.System State ComparisonThe state of the installed &os; version against a known
good copy can be tested using
freebsd-update IDS. This command evaluates
the current version of system utilities, libraries, and
configuration files and can be used as a built-in Intrusion
Detection System (IDS).This command is not a replacement for a real
IDS such as
security/snort. As
freebsd-update stores data on disk, the
possibility of tampering is evident. While this possibility
may be reduced using kern.securelevel and
by storing the freebsd-update data on a
read-only file system when not in use, a better solution
would be to compare the system against a secure disk, such
as a DVD or securely stored external
USB disk device. An alternative method
for providing IDS functionality using a
built-in utility is described in .To begin the comparison, specify the output file to save
the results to:&prompt.root; freebsd-update IDS >> outfile.idsThe system will now be inspected and a lengthy listing of
files, along with the SHA256 hash values
for both the known value in the release and the current
installation, will be sent to the specified output
file.The entries in the listing are extremely long, but the
output format may be easily parsed. For instance, to obtain a
list of all files which differ from those in the release,
issue the following command:&prompt.root; cat outfile.ids | awk '{ print $1 }' | more
/etc/master.passwd
/etc/motd
/etc/passwd
/etc/pf.confThis sample output has been truncated as many more files
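As a runnable illustration of parsing this listing, the following uses a fabricated two-line stand-in for the IDS output; the real listing pairs each path with the release and installed SHA256 hashes:

```shell
# Fabricated stand-in for freebsd-update IDS output; the path is always
# the first whitespace-separated field, so awk '{ print $1 }' extracts it.
cat > /tmp/outfile.ids <<'EOF'
/etc/passwd has SHA256 hash 1111... but should have SHA256 hash 2222...
/etc/motd has SHA256 hash 3333... but should have SHA256 hash 4444...
EOF
awk '{ print $1 }' /tmp/outfile.ids
```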
exist. Some files have natural modifications. For example,
/etc/passwd will be modified if users
have been added to the system. Kernel modules may differ as
freebsd-update may have updated them. To
exclude specific files or directories, add them to the
IDSIgnorePaths option in
/etc/freebsd-update.conf.Updating the Documentation SetUpdating and UpgradingDocumentationUpdating and UpgradingDocumentation is an integral part of the &os; operating
system. While an up-to-date version of the &os; documentation
is always available on the &os; web site (http://www.freebsd.org/doc/),
it can be handy to have an up-to-date, local copy of the &os;
website, handbooks, FAQ, and articles.This section describes how to use either source or the &os;
Ports Collection to keep a local copy of the &os; documentation
up-to-date.For information on editing and submitting corrections to the
documentation, refer to the &os; Documentation Project Primer
for New Contributors (http://www.freebsd.org/doc/en_US.ISO8859-1/books/fdp-primer/).
+ xlink:href="&url.books.fdp-primer;">http://www.freebsd.org/doc/en_US.ISO8859-1/books/fdp-primer/).
Updating Documentation from SourceRebuilding the &os; documentation from source requires a
collection of tools which are not part of the &os; base
system. The required tools, including
svn, can be installed from the
textproc/docproj package or port developed
by the &os; Documentation Project.Once installed, use svn to
fetch a clean copy of the documentation source:&prompt.root; svn checkout https://svn.FreeBSD.org/doc/head /usr/docThe initial download of the documentation sources may take
a while. Let it run until it completes.Future updates of the documentation sources may be fetched
by running:&prompt.root; svn update /usr/docOnce an up-to-date snapshot of the documentation sources
has been fetched to /usr/doc, everything
is ready for an update of the installed documentation.A full update of all available languages may be performed
by typing:&prompt.root; cd /usr/doc
&prompt.root; make install cleanIf an update of only a specific language is desired,
make can be invoked in a language-specific
subdirectory of
/usr/doc:&prompt.root; cd /usr/doc/en_US.ISO8859-1
&prompt.root; make install cleanAn alternative way of updating the documentation is to run
this command from /usr/doc or the desired
language-specific subdirectory:&prompt.root; make updateThe output formats that will be installed may be specified
by setting FORMATS:&prompt.root; cd /usr/doc
&prompt.root; make FORMATS='html html-split' install cleanSeveral options are available to ease the process of
updating only parts of the documentation, or the build of
specific translations. These options can be set either as
system-wide options in /etc/make.conf, or
as command-line options passed to
make.The options include:DOC_LANGThe list of languages and encodings to build and
install, such as en_US.ISO8859-1 for
English documentation.FORMATSA single format or a list of output formats to be
built. Currently, html,
html-split, txt,
ps, and pdf are
supported.DOCDIRWhere to install the documentation. It defaults to
/usr/share/doc.For more make variables supported as
system-wide options in &os;, refer to
&man.make.conf.5;.Updating Documentation from PortsMarcFonvieilleBased on the work of Updating and Upgradingdocumentation packageUpdating and UpgradingThe previous section presented a method for updating the
&os; documentation from sources. This section describes an
alternative method which uses the Ports Collection and makes
it possible to:Install pre-built packages of the documentation,
without having to locally build anything or install the
documentation toolchain.Build the documentation sources through the ports
framework, making the checkout and build steps a bit
easier.This method of updating the &os; documentation is
supported by a set of documentation ports and packages which
are updated by the &a.doceng; on a monthly basis. These are
listed in the &os; Ports Collection, under the docs
category (http://www.freshports.org/docs/).Organization of the documentation ports is as
follows:The misc/freebsd-doc-en package or
port installs all of the English documentation.The misc/freebsd-doc-all
meta-package or port installs all documentation in all
available languages.There is a package and port for each translation, such
as misc/freebsd-doc-hu for the
Hungarian documentation.When binary packages are used, the &os; documentation will
be installed in all available formats for the given language.
For example, the following command will install the latest
package of the Hungarian documentation:&prompt.root; pkg install hu-freebsd-docPackages use a format that differs from the
corresponding port's name:
lang-freebsd-doc,
where lang is the short format of
the language code, such as hu for
Hungarian, or zh_cn for Simplified
Chinese.To specify the format of the documentation, build the port
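The naming scheme can be illustrated with a small, purely hypothetical shell snippet:

```shell
# Illustrative only: construct a documentation package name from a
# short language code, per the lang-freebsd-doc scheme described above.
lang="hu"                        # e.g. hu, or zh_cn for Simplified Chinese
pkgname="${lang}-freebsd-doc"
echo "$pkgname"
```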
instead of installing the package. For example, to build and
install the English documentation:&prompt.root; cd /usr/ports/misc/freebsd-doc-en
&prompt.root; make install cleanThe port provides a configuration menu where the format to
build and install can be specified. By default, split
HTML, similar to the format used on http://www.FreeBSD.org,
and PDF are selected.Alternately, several make options can
be specified when building a documentation port,
including:WITH_HTMLBuilds the HTML format with a single HTML file per
document. The formatted documentation is saved to a
file called article.html, or
book.html.WITH_PDFThe formatted documentation is saved to a file
called article.pdf or
book.pdf.DOCBASESpecifies where to install the documentation. It
defaults to
/usr/local/share/doc/freebsd.This example uses variables to install the Hungarian
documentation as a PDF in the specified
directory:&prompt.root; cd /usr/ports/misc/freebsd-doc-hu
&prompt.root; make -DWITH_PDF DOCBASE=share/doc/freebsd/hu install cleanDocumentation packages or ports can be updated using the
instructions in . For example, the
following command updates the installed Hungarian
documentation using ports-mgmt/portmaster
by using packages only:&prompt.root; portmaster -PP hu-freebsd-docTracking a Development Branch-CURRENT-STABLE&os; has two development branches: &os.current; and
&os.stable;.This section provides an explanation of each branch and its
intended audience, as well as how to keep a system up-to-date
with each respective branch.Using &os.current;&os.current; is the bleeding edge of &os;
development and &os.current; users are expected to have a
high degree of technical skill. Less technical users who wish
to track a development branch should track &os.stable;
instead.&os.current; is the very latest source code for &os; and
includes works in progress, experimental changes, and
transitional mechanisms that might or might not be present in
the next official release. While many &os; developers compile
the &os.current; source code daily, there are short periods of
time when the source may not be buildable. These problems are
resolved as quickly as possible, but whether or not
&os.current; brings disaster or new functionality can be a
matter of when the source code was synced.&os.current; is made available for three primary interest
groups:Members of the &os; community who are actively
working on some part of the source tree.Members of the &os; community who are active testers.
They are willing to spend time solving problems, making
topical suggestions on changes and the general direction
of &os;, and submitting patches.Users who wish to keep an eye on things, use the
current source for reference purposes, or make the
occasional comment or code contribution.&os.current; should not be
considered a fast-track to getting new features before the
next release as pre-release features are not yet fully tested
and most likely contain bugs. It is not a quick way of
getting bug fixes as any given commit is just as likely to
introduce new bugs as to fix existing ones. &os.current; is
not in any way officially supported.-CURRENTusingTo track &os.current;:Join the &a.current.name; and the
&a.svn-src-head.name; lists. This is
essential in order to see the
comments that people are making about the current state
of the system and to receive important bulletins about
the current state of &os.current;.The &a.svn-src-head.name; list records the commit log
entry for each change as it is made, along with any
pertinent information on possible side effects.To join these lists, go to &a.mailman.lists.link;,
click on the list to subscribe to, and follow the
instructions. In order to track changes to the whole
source tree, not just the changes to &os.current;,
subscribe to the &a.svn-src-all.name; list.Synchronize with the &os.current; sources. Typically,
svn is used to check out the
-CURRENT code from the head branch of
- one of the Subversion mirror
- sites listed in .
+ one of the Subversion mirror sites listed in
+ .
Users with very slow or limited Internet connectivity
can instead use CTM as described in ,
but it is not as reliable as
svn, which is the recommended method
for synchronizing source. Due to the size of the repository, some users choose
to only synchronize the sections of source that interest
them or which they are contributing patches to. However,
users that plan to compile the operating system from
source must download all of
&os.current;, not just selected portions.Before compiling &os.current;
-CURRENTcompiling, read /usr/src/Makefile
very carefully and follow the instructions in
.
Read the &a.current; and
/usr/src/UPDATING to stay
up-to-date on other bootstrapping procedures that
sometimes become necessary on the road to the next
release.Be active! &os.current; users are encouraged to
submit their suggestions for enhancements or bug fixes.
Suggestions with accompanying code are always
welcome.Using &os.stable;&os.stable; is the development branch from which major
releases are made. Changes go into this branch at a slower
pace and with the general assumption that they have first been
tested in &os.current;. This is still a
development branch and, at any given time, the sources for
&os.stable; may or may not be suitable for general use. It is
simply another engineering development track, not a resource
for end-users. Users who do not have the resources to perform
testing should instead run the most recent release of
&os;.Those interested in tracking or contributing to the &os;
development process, especially as it relates to the next
release of &os;, should consider following &os.stable;.While the &os.stable; branch should compile and run at all
times, this cannot be guaranteed. Since more people run
&os.stable; than &os.current;, it is inevitable that bugs and
corner cases will sometimes be found in &os.stable; that were
not apparent in &os.current;. For this reason, one should not
blindly track &os.stable;. It is particularly important
not to update any production servers to
&os.stable; without thoroughly testing the code in a
development or testing environment.To track &os.stable;:-STABLEusingJoin the &a.stable.name; list in order to stay
informed of build dependencies that may appear in
&os.stable; or any other issues requiring special
attention. Developers will also make announcements in
this mailing list when they are contemplating some
controversial fix or update, giving the users a chance to
respond if they have any issues to raise concerning the
proposed change.Join the relevant svn list
for the branch being tracked. For example, users
tracking the 9-STABLE branch should join the
&a.svn-src-stable-9.name; list. This list records the
commit log entry for each change as it is made, along
with any pertinent information on possible
side effects.To join these lists, go to &a.mailman.lists.link;,
click on the list to subscribe to, and follow the
instructions. In order to track changes for the whole
source tree, subscribe to &a.svn-src-all.name;.To install a new &os.stable; system, install the most
recent &os.stable; release from the &os; mirror sites or use a
monthly snapshot built from &os.stable;. Refer to www.freebsd.org/snapshots
for more information about snapshots.To compile or upgrade an existing &os; system to
&os.stable;, use svn
Subversion to check out the source for the desired
branch. Branch names, such as
stable/9, are listed at www.freebsd.org/releng.
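As a sketch of how these branch names are used (the server URL is illustrative), the available stable branches can be listed and the desired one checked out with:

```
&prompt.root; svn ls https://svn.freebsd.org/base/stable/
&prompt.root; svn checkout https://svn.freebsd.org/base/stable/9 /usr/src
```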
CTM () can be used if a reliable
Internet connection is not available.Before compiling or upgrading to &os.stable;
-STABLEcompiling, read /usr/src/Makefile
carefully and follow the instructions in . Read &a.stable; and
/usr/src/UPDATING to keep up-to-date
on other bootstrapping procedures that sometimes become
necessary on the road to the next release.Synchronizing SourceThere are various methods for staying up-to-date with the
&os; sources. This section compares the primary services,
Subversion and
CTM.While it is possible to update only parts of the source
tree, the only supported update procedure is to update the
entire tree and recompile all the programs that run in user
space, such as those in /bin and
/sbin, and kernel sources. Updating only
part of the source tree, only the kernel, or only the userland
programs will often result in problems ranging from compile
errors to kernel panics or data corruption.SubversionSubversion uses the
pull model of updating sources. The user,
or a cron script, invokes the
svn program which updates the local version
of the source. Subversion is the
preferred method for updating local source trees as updates are
up-to-the-minute and the user controls when updates are
downloaded. It is easy to restrict updates to specific files or
directories and the requested updates are generated on the fly
by the server. How to synchronize source using
Subversion is described in .CTMCTM does not interactively
compare the local sources with those on the master archive or
otherwise pull them across. Instead, a script which identifies
changes in files since its previous run is executed several
times a day on the master CTM machine. Any detected changes are
compressed, stamped with a sequence-number, and encoded for
transmission over email in printable ASCII
only. Once downloaded, these deltas can
be run through ctm.rmail which will
automatically decode, verify, and apply the changes to the
user's copy of the sources. This process is more efficient than
Subversion and places less strain on
server resources since it is a push, rather
than a pull, model. Instructions for using
CTM to synchronize source can be
found at .If a user inadvertently wipes out portions of the local
archive, Subversion will detect and
rebuild the damaged portions. CTM
will not, and if a user deletes some portion of the source tree
and does not have a backup, they will have to start from scratch
from the most recent base delta and
rebuild it all with CTM.Rebuilding WorldRebuilding worldOnce the local source tree is synchronized against a
particular version of &os; such as &os.stable; or &os.current;,
the source tree can be used to rebuild the system. This process
is known as rebuilding world.Before rebuilding world, be sure to
perform the following tasks:Perform These Tasks Before
Building WorldBackup all important data to another system or removable
media, verify the integrity of the backup, and have a
bootable installation media at hand. It cannot be stressed
enough how important it is to make a backup of the system
before rebuilding the system. While
rebuilding world is an easy task, there will inevitably be
times when mistakes in the source tree render the system
unbootable. You will probably never have to use the backup,
but it is better to be safe than sorry!mailing listReview the recent &a.stable.name; or &a.current.name;
entries, depending upon the branch being tracked. Be aware
of any known problems and which systems are affected. If a
known issue affects the version of synchronized code, wait
for an all clear announcement to be posted
stating that the problem has been solved. Resynchronize the
sources to ensure that the local version of source has the
needed fix.Read /usr/src/UPDATING for any
extra steps necessary for that version of the source. This
file contains important information about potential problems
and may specify the order to run certain commands. Many
upgrades require specific additional steps such as renaming
or deleting specific files prior to installing the new
world. These will be listed at the end of this file where
the currently recommended upgrade sequence is explicitly
spelled out. If UPDATING contradicts
any steps in this chapter, the instructions in
UPDATING take precedence and should be
followed.Do Not Use make worldSome older documentation recommends using make
world. However, that command skips some important
steps and should only be used by experts. For almost all
circumstances make world is the wrong thing
to do, and the procedure described here should be used
instead.Overview of ProcessThe build world process assumes an upgrade from an older
&os; version using the source of a newer version that was
obtained using the instructions in .In &os;, the term world includes the
kernel, core system binaries, libraries, programming files,
and built-in compiler. The order in which these components
are built and installed is important.For example, the old compiler might have a bug and not be
able to compile the new kernel. Since the new kernel should
be built with the new compiler, the new compiler must be
built, but not necessarily installed, before the new kernel is
built.The new world might rely on new kernel features, so the
new kernel must be installed before the new world is
installed. The old world might not run correctly on the new
kernel, so the new world must be installed immediately upon
installing the new kernel.Some configuration changes must be made before the new
world is installed, but others might break the old world.
Hence, two different configuration upgrade steps are used.
For the most part, the update process only replaces or adds
files and existing old files are not deleted. Since this can
cause problems, /usr/src/UPDATING will
indicate if any files need to be manually deleted and at which
step to do so.These concerns have led to the recommended upgrade
sequence described in the following procedure.It is a good idea to save the output from running
make to a file. If something goes wrong,
a copy of the error message can be posted to one of the &os;
mailing lists.The easiest way to do this is to use
script with a parameter that specifies
the name of the file to save all output to. Do not save the
output to /tmp as this directory may be
cleared at next reboot. A better place to save the file is
/var/tmp. Run this command immediately
before rebuilding the world, and then type
exit when the process has
finished:&prompt.root; script /var/tmp/mw.out
Script started, output file is /var/tmp/mw.outOverview of Build World ProcessThe commands used in the build world process should be
run in the order specified here. This section summarizes
the function of each command.If the build world process has previously been run on
this system, a copy of the previous build may still exist
in /usr/obj. To
speed up the new build world process, and possibly save
some dependency headaches, remove this directory if it
already exists:&prompt.root; chflags -R noschg /usr/obj/*
&prompt.root; rm -rf /usr/objCompile the new compiler and a few related tools, then
use the new compiler to compile the rest of the new world.
The result is saved to /usr/obj.&prompt.root; cd /usr/src
&prompt.root; make buildworldUse the new compiler residing in /usr/obj to build the new
kernel, in order to protect against compiler-kernel
mismatches. This is necessary, as certain memory
structures may have changed, and programs like
ps and top will fail
to work if the kernel and source code versions are not the
same.&prompt.root; make buildkernelInstall the new kernel and kernel modules, making it
possible to boot with the newly updated kernel. If
kern.securelevel has been raised above
1 and noschg or similar flags have been set
on the kernel binary, drop the system into single-user
mode first. Otherwise, this command can be run from
multi-user mode without problems. See &man.init.8; for
details about kern.securelevel and
&man.chflags.1; for details about the various file
flags.&prompt.root; make installkernelDrop the system into single-user mode in order to
minimize problems from updating any binaries that are
already running. It also minimizes any problems from
running the old world on a new kernel.&prompt.root; shutdown nowOnce in single-user mode, run these commands if the
system is formatted with UFS:&prompt.root; mount -u /
&prompt.root; mount -a -t ufs
&prompt.root; swapon -aIf the system is instead formatted with ZFS, run these
two commands. This example assumes a zpool name of
zroot:&prompt.root; zfs set readonly=off zroot
&prompt.root; zfs mount -aOptional: If a keyboard mapping other than the default
US English is desired, it can be changed with
&man.kbdmap.1;:&prompt.root; kbdmapThen, for either file system, if the
CMOS clock is set to local time (this
is true if the output of &man.date.1; does not show the
correct time and zone), run:&prompt.root; adjkerntz -iRemaking the world will not update certain
directories, such as /etc,
/var and /usr,
with new or changed configuration files. The next step is
to perform some initial configuration file updates
to /etc in
preparation for the new world. The following command
compares only those files that are essential for the
success of installworld. For
instance, this step may add new groups, system accounts,
or startup scripts which have been added to &os; since the
last update. This is necessary so that the
installworld step will be able
to use any new system accounts, groups, and scripts.
Refer to for more detailed
instructions about this command:&prompt.root; mergemaster -pInstall the new world and system binaries from
/usr/obj.&prompt.root; cd /usr/src
&prompt.root; make installworldUpdate any remaining configuration files.&prompt.root; mergemaster -iFDelete any obsolete files. This is important as they
may cause problems if left on the disk.&prompt.root; make delete-oldA full reboot is now needed to load the new kernel and
new world with the new configuration files.&prompt.root; rebootMake sure that all installed ports have first been
rebuilt before old libraries are removed using the
instructions in . When
finished, remove any obsolete libraries to avoid conflicts
with newer ones. For a more detailed description of this
step, refer to .&prompt.root; make delete-old-libssingle-user modeIf the system can have a window of down-time, consider
compiling the system in single-user mode instead of compiling
the system in multi-user mode, and then dropping into
single-user mode for the installation. Reinstalling the
system touches a lot of important system files, all the
standard system binaries, libraries, and include files.
Changing these on a running system, particularly one with
active users, is asking for trouble.Configuration Filesmake.confThis build world process uses several configuration
files.The Makefile located in
/usr/src describes how the programs that
comprise &os; should be built and the order in which they
should be built.The options available to make are
described in &man.make.conf.5; and some common examples are
included in
/usr/share/examples/etc/make.conf. Any
options which are added to /etc/make.conf
will control how make runs and builds
programs. These options take effect every time
make is used, including compiling
applications from the Ports Collection, compiling custom C
programs, or building the &os; operating system. Changes to
some settings can have far-reaching and potentially surprising
effects. Read the comments in both locations and keep in mind
that the defaults have been chosen for a combination of
performance and safety.src.confHow the operating system is built from source code is
controlled by /etc/src.conf. Unlike
/etc/make.conf, the contents of
/etc/src.conf only take effect when the
&os; operating system itself is being built. Descriptions of
the many options available for this file are shown in
&man.src.conf.5;. Be cautious about disabling seemingly
unneeded kernel modules and build options. Sometimes there
are unexpected or subtle interactions.Variables and TargetsThe general format for using make is as
follows:&prompt.root; make -x -DVARIABLEtargetIn this example,
-x is an option
passed to make. Refer to &man.make.1; for
examples of the available options.To pass a variable, specify the variable name with
-DVARIABLE. The
behavior of the Makefile is controlled by
variables. These can either be set in
/etc/make.conf or they can be specified
when using make. For example, this
variable specifies that profiled libraries should not be
built:&prompt.root; make -DNO_PROFILE targetIt corresponds with this setting in
/etc/make.conf:NO_PROFILE= true # Avoid compiling profiled librariesThe target tells
make what to do and the
Makefile defines the available targets.
Some targets are used by the build process to break out the
steps necessary to rebuild the system into a number of
sub-steps.Having separate options is useful for two reasons. First,
it allows for a build that does not affect any components of a
running system. Because of this,
buildworld can be safely run on a
machine running in multi-user mode. It is still recommended
that installworld be run in part in
single-user mode, though.Secondly, it allows NFS mounts to be
used to upgrade multiple machines on a network, as described
in .It is possible to specify -j, which will
cause make to spawn several simultaneous
processes. Since much of the compiling process is
I/O-bound rather than
CPU-bound, this is useful on both single
CPU and multi-CPU
machines.On a single-CPU machine, run the
following command to have up to 4 processes running at any one
time. Empirical evidence posted to the mailing lists shows
this generally gives the best performance benefit.&prompt.root; make -j4 buildworldOn a multi-CPU machine, try values
between 6 and 10 to see
how they speed things up.rebuilding worldtimingsIf any variables were specified to make
buildworld, specify the same variables to
make installworld. However,
-j must never be used
with installworld.For example, if this command was used:&prompt.root; make -DNO_PROFILE buildworldInstall the results with:&prompt.root; make -DNO_PROFILE installworldOtherwise, the second command will try to install
profiled libraries that were not built during the
make buildworld phase.
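One way to avoid passing mismatched variables to the two steps is to set them once in /etc/make.conf rather than on the command line; a minimal sketch:

```
NO_PROFILE= true # Avoid compiling profiled libraries
```

Variables set there apply to every make run, so buildworld and installworld automatically see the same settings.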
Merging Configuration Files
Tom
Rhodes
Contributed by
mergemaster
&os; provides the &man.mergemaster.8; Bourne script to aid
in determining the differences between the configuration files
in /etc, and the configuration files in
/usr/src/etc. This is the recommended
solution for keeping the system configuration files up to date
with those located in the source tree.Before using mergemaster, it is
recommended to first copy the existing
/etc somewhere safe. Include
-R, which does a recursive copy, and
-p, which preserves times and the ownerships
on files:&prompt.root; cp -Rp /etc /etc.oldWhen run, mergemaster builds a
temporary root environment, from / down,
and populates it with various system configuration files.
Those files are then compared to the ones currently installed
in the system. Files that differ will be shown in
&man.diff.1; format, with the + sign
representing added or modified lines, and -
representing lines that will be either removed completely or
replaced with a new file. Refer to &man.diff.1; for more
information about how file differences are shown.Next, mergemaster will display each
file that differs, and present options to: delete the new
file, referred to as the temporary file, install the temporary
file in its unmodified state, merge the temporary file with
the currently installed file, or view the results
again.Choosing to delete the temporary file will tell
mergemaster to keep the current file
unchanged and to delete the new version. This option is not
recommended. To get help at any time, type
? at the mergemaster
prompt. If the user chooses to skip a file, it will be
presented again after all other files have been dealt
with.Choosing to install the unmodified temporary file will
replace the current file with the new one. For most
unmodified files, this is the best option.Choosing to merge the file will present a text editor, and
the contents of both files. The files can be merged by
reviewing both files side by side on the screen, and choosing
parts from both to create a finished product. When the files
are compared side by side, l selects the left
contents and r selects contents from the
right. The final output will be a file consisting of both
parts, which can then be installed. This option is
customarily used for files where settings have been modified
by the user.Choosing to view the results again will redisplay the file
differences.After mergemaster is done with the
system files, it will prompt for other options. It may prompt
to rebuild the password file and will finish up with an option
to remove left-over temporary files.Deleting Obsolete Files and LibrariesAntonShterenlikhtBased on notes provided by Deleting obsolete files and directoriesAs a part of the &os; development lifecycle, files and
their contents occasionally become obsolete. This may be
because functionality is implemented elsewhere, the version
number of the library has changed, or it was removed from the
system entirely. These obsoleted files, libraries, and
directories should be removed when updating the system.
This ensures that the system is not cluttered with old files
which take up unnecessary space on the storage and backup
media. Additionally, if the old library has a security or
stability issue, the system should be updated to the newer
library to keep it safe and to prevent crashes caused by the
old library. Files, directories, and libraries which are
considered obsolete are listed in
/usr/src/ObsoleteFiles.inc. The
following instructions should be used to remove obsolete files
during the system upgrade process.After the make installworld and the
subsequent mergemaster have finished
successfully, check for obsolete files and libraries:&prompt.root; cd /usr/src
&prompt.root; make check-oldIf any obsolete files are found, they can be deleted using
the following command:&prompt.root; make delete-oldA prompt is displayed before deleting each obsolete file.
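The prompt-and-answer mechanic that this target and yes(1) rely on can be sketched portably; the loop below is only a stand-in for make's per-file confirmation prompt, and the file names are invented for illustration:

```shell
# Stand-in for the per-file confirmation prompt of make delete-old.
confirm_delete() {
    for f in obsolete_a obsolete_b; do
        printf 'remove %s? [y/n] ' "$f"
        read -r answer
        if [ "$answer" = "y" ]; then
            echo "removed $f"
        fi
    done
}

# yes(1) emits an endless stream of "y" lines, answering every prompt,
# which is why piping it into an interactive command automates consent.
yes | confirm_delete
```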
To skip the prompt and let the system remove these files
automatically, use
BATCH_DELETE_OLD_FILES:&prompt.root; make -DBATCH_DELETE_OLD_FILES delete-oldThe same goal can be achieved by piping these commands
through yes:&prompt.root; yes|make delete-oldWarningDeleting obsolete files will break applications that
still depend on those obsolete files. This is especially
true for old libraries. In most cases, the programs, ports,
or libraries that used the old library need to be recompiled
before make delete-old-libs is
executed.Utilities for checking shared library dependencies include
sysutils/libchk and
sysutils/bsdadminscripts.Obsolete shared libraries can conflict with newer
libraries, causing messages like these:/usr/bin/ld: warning: libz.so.4, needed by /usr/local/lib/libtiff.so, may conflict with libz.so.5
/usr/bin/ld: warning: librpcsvc.so.4, needed by /usr/local/lib/libXext.so, may conflict with librpcsvc.so.5To solve these problems, determine which port installed
the library:&prompt.root; pkg which /usr/local/lib/libtiff.so
/usr/local/lib/libtiff.so was installed by package tiff-3.9.4
&prompt.root; pkg which /usr/local/lib/libXext.so
/usr/local/lib/libXext.so was installed by package libXext-1.1.1,1Then deinstall, rebuild, and reinstall the port. To
automate this process,
ports-mgmt/portmaster can be used. After
all ports are rebuilt and no longer use the old libraries,
delete the old libraries using the following command:&prompt.root; make delete-old-libsIf something goes wrong, it is easy to rebuild a
particular piece of the system. For example, if
/etc/magic was accidentally deleted as
part of the upgrade or merge of /etc,
file will stop working. To fix this,
run:&prompt.root; cd /usr/src/usr.bin/file
&prompt.root; make all installCommon QuestionsDo I need to re-make the world for every
change?It depends upon the nature of the change. For
example, if svn only shows
the following files as being updated:
src/games/cribbage/instr.c
src/games/sail/pl_main.c
src/release/sysinstall/config.c
src/release/sysinstall/media.c
src/share/mk/bsd.port.mk
it probably is not worth rebuilding the entire
world. Instead, go into the appropriate sub-directories
and run make all install. But if
something major changes, such as
src/lib/libc/stdlib, consider
rebuilding world.Some users rebuild world every fortnight and let
changes accumulate over that fortnight. Others only
re-make those things that have changed and are careful
to spot all the dependencies. It all depends on how
often a user wants to upgrade and whether they are
tracking &os.stable; or &os.current;.What would cause a compile to fail with lots of
signal 11signal 11
(or other signal number) errors?This normally indicates a hardware problem.
Building world is an effective way to stress test
hardware, especially memory. A sure indicator of a
hardware issue is when make
is restarted and it dies at a different point in the
process.To resolve this error, swap out the components in
the machine, starting with RAM, to determine which
component is failing.Can /usr/obj
be removed when finished?This directory contains all the object files that
were produced during the compilation phase. Normally,
one of the first steps in the make
buildworld process is to remove this
directory and start afresh. Keeping
/usr/obj around when finished makes
little sense, and its removal frees up approximately
2GB of disk space.Can interrupted builds be resumed?This depends on how far into the process the
problem occurs. In general, make
buildworld builds new copies of essential
tools and the system libraries. These tools and
libraries are then installed, used to rebuild
themselves, and are installed again. The rest of the
system is then rebuilt with the new system
tools.During the last stage, it is fairly safe to run
these commands as they will not undo the work of the
previous make buildworld:&prompt.root; cd /usr/src
&prompt.root; make -DNO_CLEAN allIf this message appears:--------------------------------------------------------------
Building everything..
--------------------------------------------------------------in the make buildworld output,
it is probably fairly safe to do so.If that message is not displayed, it is always
better to be safe than sorry and to restart the build
from scratch.Is it possible to speed up making the world?Several actions can speed up the build world
process. For example, the entire process can be run
from single-user mode. However, this will prevent users
from having access to the system until the process is
complete.Careful file system design or the use of ZFS
datasets can make a difference. Consider putting
/usr/src and
/usr/obj on
separate file systems. If possible, place the file
systems on separate disks on separate disk controllers.
When mounting /usr/src, use
noatime, which prevents the file system
from recording the file access time. If /usr/src is not on its
own file system, consider remounting /usr with
noatime.The file system holding /usr/obj can be mounted
or remounted with async so that disk
writes happen asynchronously. The write completes
immediately, and the data is written to the disk a few
seconds later. This allows writes to be clustered
together, and can provide a dramatic performance
boost.Keep in mind that this option makes the file
system more fragile. With this option, there is an
increased chance that, should power fail, the file
system will be in an unrecoverable state when the
machine restarts.If /usr/obj is the only
directory on this file system, this is not a problem.
If you have other, valuable data on the same file
system, ensure that there are verified backups before
enabling this option.Turn off profiling by setting
NO_PROFILE=true in
/etc/make.conf.Pass
-j to &man.make.1; to run multiple processes in parallel.
This usually helps on both single- and multi-processor
machines.What if something goes wrong?First, make absolutely sure that the environment has
no extraneous cruft from earlier builds:&prompt.root; chflags -R noschg /usr/obj/usr
&prompt.root; rm -rf /usr/obj/usr
&prompt.root; cd /usr/src
&prompt.root; make cleandir
&prompt.root; make cleandirYes, make cleandir really should
be run twice.Then, restart the whole process, starting with
make buildworld.If problems persist, send the error and the output
of uname -a to &a.questions;. Be
prepared to answer other questions about the
setup!Tracking for Multiple MachinesMikeMeyerContributed by NFSinstalling multiple machinesWhen multiple machines need to track the same source tree,
it is a waste of disk space, network bandwidth, and
CPU cycles to have each system download the
sources and rebuild everything. The solution is to have one
machine do most of the work, while the rest of the machines
mount that work via NFS. This section
outlines a method of doing so. For more information about using
NFS, refer to .First, identify a set of machines which will run the same
set of binaries, known as a build set.
Each machine can have a custom kernel, but will run the same
userland binaries. From that set, choose a machine to be the
build machine that the world and kernel
are built on. Ideally, this is a fast machine that has
sufficient spare CPU to run make
buildworld and make
buildkernel.Select a machine to be the test
machine, which will test software updates before
they are put into production. This must be
a machine that can afford to be down for an extended period of
time. It can be the build machine, but need not be.All the machines in this build set need to mount
/usr/obj and /usr/src
from the build machine via NFS. For multiple
build sets, /usr/src should be on one build
machine, and NFS mounted on the rest.Ensure that /etc/make.conf and
/etc/src.conf on all the machines in the
build set agree with the build machine. That means that the
build machine must build all the parts of the base system that
any machine in the build set is going to install. Also, each
build machine should have its kernel name set with
KERNCONF in
/etc/make.conf, and the build machine
should list them all in its KERNCONF,
listing its own kernel first. The build machine must have the
kernel configuration files for each machine in its /usr/src/sys/arch/conf.On the build machine, build the kernel and world as
described in , but do not install
anything on the build machine. Instead, install the built
kernel on the test machine. On the test machine, mount
/usr/src and
/usr/obj via NFS. Then,
run shutdown now to go to single-user mode in
order to install the new kernel and world and run
mergemaster as usual. When done, reboot to
return to normal multi-user operations.After verifying that everything on the test machine is
working properly, use the same procedure to install the new
software on each of the other machines in the build set.The same methodology can be used for the ports tree. The
first step is to share /usr/ports via
NFS to all the machines in the build set. To
configure /etc/make.conf to share
distfiles, set DISTDIR to a common shared
directory that is writable by whichever user root is mapped to by the
NFS mount. Each machine should set
WRKDIRPREFIX to a local build directory, if
ports are to be built locally. Alternately, if the build system
is to build and distribute packages to the machines in the build
set, set PACKAGES on the build system to a
directory similar to DISTDIR.
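As a sketch of that arrangement (all paths here are invented examples), the ports-related part of /etc/make.conf on the machines in a build set might look like:

```
# Shared distfile cache, NFS-exported from the build machine and
# writable by the user that root maps to on the clients:
DISTDIR=/nfs/shared/distfiles
# Local scratch area so builds do not write to the NFS-mounted tree:
WRKDIRPREFIX=/var/ports
# On the build system only, when distributing packages to the set:
PACKAGES=/nfs/shared/packages
```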
Index: head/en_US.ISO8859-1/books/handbook/desktop/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/desktop/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/desktop/chapter.xml (revision 48529)
@@ -1,1131 +1,1131 @@
Desktop ApplicationsSynopsisWhile &os; is popular as a server for its performance and
stability, it is also suited for day-to-day use as a desktop.
With over &os.numports; applications available as &os; packages
or ports, it is easy to build a customized desktop that runs
a wide variety of desktop applications. This chapter
demonstrates how to install numerous desktop applications,
including web browsers, productivity software, document viewers,
and financial software.Users who prefer to install a pre-built desktop version
of FreeBSD rather than configuring one from scratch should
- refer to the pcbsd.org
- website.
+ refer to the
+ pcbsd.org
+ website.
Readers of this chapter should know how to:Install additional software using packages or
ports as described in .
- Install X and a window manager as described in .
+ Install X and a window manager as described in
+ .For information on how to configure a multimedia
environment, refer to .Browsersbrowsersweb&os; does not come with a pre-installed web browser.
Instead, the www
category of the Ports Collection contains many browsers which
can be installed as a package or compiled from the Ports
Collection.The KDE and
GNOME desktop environments include
their own HTML browser. Refer to
for more information on how to set up these complete
desktops.Some lightweight browsers include
www/dillo2, www/links, and
www/w3m.This section demonstrates how to install the following
popular web browsers and indicates if the application is
resource-heavy, takes time to compile from ports, or has any
major dependencies.Application NameResources NeededInstallation from PortsNotesFirefoxmediumheavy&os;, &linux;, and localized versions are
availableOperalightlight&os; and &linux; versions are availableKonquerormediumheavyRequires KDE
librariesChromiummediumheavyRequires Gtk+FirefoxFirefoxFirefox is an open source
browser that is fully ported to &os;. It features a
standards-compliant HTML display engine, tabbed browsing,
popup blocking, extensions, improved security, and more.
Firefox is based on the
Mozilla codebase.To install the package of the latest release version of
Firefox, type:&prompt.root; pkg install firefoxTo instead install Firefox
Extended Support Release (ESR) version, use:&prompt.root; pkg install firefox-esrLocalized versions are available in
www/firefox-i18n and
www/firefox-esr-i18n.The Ports Collection can instead be used to compile the
desired version of Firefox from
source code. This example builds
www/firefox, where
firefox can be replaced with the ESR or
localized version to install.&prompt.root; cd /usr/ports/www/firefox
&prompt.root; make install cleanFirefox and &java; PluginThe installation of
Firefox does not include &java;
support. However, java/icedtea-web
provides a free software web browser plugin for running Java
applets. It can be installed as a package. To instead
compile the port:&prompt.root; cd /usr/ports/java/icedtea-web
&prompt.root; make install cleanKeep the default configuration options when compiling the
port.Once installed, start firefox,
enter about:plugins in the location bar and
press Enter. A page listing the installed
plugins will be displayed. The
&java; plugin should be
listed.If the browser is unable to find the plugin, each user
will have to run the following command and relaunch the
browser:&prompt.user; ln -s /usr/local/lib/IcedTeaPlugin.so \
$HOME/.mozilla/plugins/Firefox and &adobe; &flash; PluginFlashA native &adobe; &flash; plugin is not available for &os;.
However, a software wrapper for running the &linux; version
of the plugin is available. This wrapper also provides
support for other browser plugins such as &realplayer;.To install and enable this plugin, perform these
steps:Install www/nspluginwrapper from the port.
Due to licensing restrictions, a package is not available.
This port requires
emulators/linux_base-c6.Install www/linux-c6-flashplugin11 from
the port. Due to licensing restrictions, a package is not
available.Before the plugin is first used, each user must
run:&prompt.user; nspluginwrapper -v -a -iWhen the plugin port has been updated and reinstalled,
each user must run:&prompt.user; nspluginwrapper -v -a -uStart the browser, enter
about:plugins in the location bar and
press Enter. A list of all the currently
available plugins will be shown.Firefox and Swfdec &flash; PluginSwfdec is a decoder and
renderer for &flash; animations.
Swfdec-Mozilla is a plugin for
Firefox browsers that uses the
Swfdec library for playing SWF files.To install the package:&prompt.root; pkg install swfdec-pluginIf the package is not available, compile and install it
from the Ports Collection:&prompt.root; cd /usr/ports/www/swfdec-plugin
&prompt.root; make install cleanRestart the browser to activate this plugin.OperaOperaOpera is a full-featured and
standards-compliant browser which is still lightweight and
fast. It comes with a built-in mail and news reader, an IRC
client, an RSS/Atom feeds reader, and more. It is available
as a native &os; version and as a version that runs under
&linux; emulation.This command installs the package of the &os; version of
Opera. Replace
opera with linux-opera
to instead install the &linux; version.&prompt.root; pkg install operaAlternately, install either version through the Ports
Collection. This example compiles the native version:&prompt.root; cd /usr/ports/www/opera
&prompt.root; make install cleanTo install the &linux; version, substitute
linux-opera in place of
opera.To install &adobe; &flash; plugin support, first compile
the www/linux-c6-flashplugin11
port. Licensing restrictions prevent making a package
available. Then install www/opera-linuxplugins. This example
compiles both applications from ports:&prompt.root; cd /usr/ports/www/linux-c6-flashplugin11
&prompt.root; make install clean
&prompt.root; cd /usr/ports/www/opera-linuxplugins
&prompt.root; make install cleanOnce installed, check the presence of the plugin by
starting the browser, entering
opera:plugins in the location bar and
pressing Enter. A list should appear with
all the currently available plugins.To add the &java; plugin,
follow the instructions in .KonquerorKonquerorKonqueror is more than a web
browser as it is also a file manager and a multimedia
viewer. It is included in the
x11/kde4-baseapps package or port.Konqueror supports WebKit as
well as its own KHTML. WebKit is a rendering engine used by
many modern browsers including Chromium. To use WebKit with
Konqueror on &os;, install the
www/kwebkitpart package
or port. This example compiles the port:&prompt.root; cd /usr/ports/www/kwebkitpart
&prompt.root; make install cleanTo enable WebKit within
Konqueror, click
Settings, Configure Konqueror.
In the General settings page, click the
drop-down menu next to Default web browser
engine and change KHTML to
WebKit.Konqueror also supports
&flash;. A How To
guide for getting &flash; support
on Konqueror is available at http://freebsd.kde.org/howtos/konqueror-flash.php.ChromiumChromiumChromium is an open source
browser project that aims to build a safer, faster, and more
stable web browsing experience.
Chromium features tabbed browsing,
popup blocking, extensions, and much more.
Chromium is the open source project
upon which the Google Chrome web browser is based.Chromium can be installed as a
package by typing:&prompt.root; pkg install chromiumAlternatively, Chromium can be
compiled from source using the Ports Collection:&prompt.root; cd /usr/ports/www/chromium
&prompt.root; make install cleanThe executable for Chromium
is /usr/local/bin/chrome, not
/usr/local/bin/chromium.
-
- Chromium and &java; Plugin
+
+ Chromium and &java; Plugin
- The installation of
- Chromium does not include &java;
- support. To install &java; plugin support, follow the
- instructions in .
+ The installation of
+ Chromium does not include &java;
+ support. To install &java; plugin support, follow the
+ instructions in .
- Once &java; support is installed, start
- Chromium and enter
- about:plugins in the address bar.
- IcedTea-Web should be listed as one of the installed
- plugins.
+ Once &java; support is installed, start
+ Chromium and enter
+ about:plugins in the address bar.
+ IcedTea-Web should be listed as one of the installed
+ plugins.
- If Chromium does not display
- the IcedTea-Web plugin, run the following commands and
- restart the web browser:
+ If Chromium does not display
+ the IcedTea-Web plugin, run the following commands and
+ restart the web browser:
- &prompt.root; mkdir -p /usr/local/share/chromium/plugins
+ &prompt.root; mkdir -p /usr/local/share/chromium/plugins
&prompt.root; ln -s /usr/local/lib/IcedTeaPlugin.so \
/usr/local/share/chromium/plugins/Chromium and &adobe; &flash; PluginConfiguring Chromium and
&adobe; &flash; is similar to the instructions in
. No additional
configuration should be necessary, since
Chromium is able to use some
plugins from other browsers.ProductivityWhen it comes to productivity, new users often look for an
office suite or an easy-to-use word processor. While some
desktop environments like
KDE provide an office suite, there
is no default productivity package. Several office suites and
graphical word processors are available for &os;, regardless
of the installed window manager.This section demonstrates how to install the following
popular productivity software and indicates if the application
is resource-heavy, takes time to compile from ports, or has any
major dependencies.Application NameResources NeededInstallation from PortsMajor DependenciesCalligralightheavyKDEAbiWordlightlightGtk+ or
GNOMEThe GimplightheavyGtk+Apache
OpenOfficeheavyhuge&jdk; and
MozillaLibreOfficesomewhat heavyhugeGtk+, or
KDE/
GNOME, or
&jdk;CalligraCalligraoffice suiteCalligraThe KDE desktop environment includes
an office suite which can be installed separately from
KDE.
Calligra includes standard
components that can be found in other office suites.
Words is the word processor,
Sheets is the spreadsheet program,
Stage manages slide presentations,
and Karbon is used to draw
graphical documents.In &os;, editors/calligra can be
installed as a package or a port. To install the
package:&prompt.root; pkg install calligraIf the package is not available, use the Ports Collection
instead:&prompt.root; cd /usr/ports/editors/calligra
&prompt.root; make install cleanAbiWordAbiWordAbiWord is a free word
processing program similar in look and feel to
&microsoft; Word. It is fast,
contains many features, and is user-friendly.AbiWord can import or export
many file formats, including some proprietary ones like
&microsoft; .rtf.To install the AbiWord
package:&prompt.root; pkg install abiwordIf the package is not available, it can be compiled from
the Ports Collection:&prompt.root; cd /usr/ports/editors/abiword
&prompt.root; make install cleanThe GIMPThe GIMPFor image authoring or picture retouching,
The GIMP provides a sophisticated
image manipulation program. It can be used as a simple paint
program or as a quality photo retouching suite. It supports a
large number of plugins and features a scripting interface.
The GIMP can read and write a wide
range of file formats and supports interfaces with scanners
and tablets.To install the package:&prompt.root; pkg install gimpAlternately, use the Ports Collection:&prompt.root; cd /usr/ports/graphics/gimp
&prompt.root; make install cleanThe graphics category (freebsd.org/ports/graphics.html)
of the Ports Collection contains several
GIMP-related plugins, help files,
and user manuals.Apache OpenOfficeApache OpenOfficeoffice suiteApache OpenOfficeApache OpenOffice is an open
source office suite which is developed under the wing of the
Apache Software Foundation's Incubator. It includes all of
the applications found in a complete office productivity
suite: a word processor, spreadsheet, presentation manager,
and drawing program. Its user interface is similar to other
office suites, and it can import and export in various popular
file formats. It is available in a number of different
languages and internationalization has been extended to
interfaces, spell checkers, and dictionaries.The word processor of Apache
OpenOffice uses a native XML file format for
increased portability and flexibility. The spreadsheet
program features a macro language which can be interfaced
with external databases. Apache
OpenOffice is stable and runs natively on
&windows;, &solaris;, &linux;, &os;, and &macos; X.
More information about Apache
OpenOffice can be found at openoffice.org.
For &os; specific information refer to porting.openoffice.org/freebsd/.To install the Apache
OpenOffice package:&prompt.root; pkg install apache-openofficeOnce the package is installed, type the following command
to launch Apache OpenOffice:&prompt.user; openoffice-X.Y.Zwhere X.Y.Z is the version
number of the installed version of Apache
OpenOffice. The first time
Apache OpenOffice launches, some
questions will be asked and a
.openoffice.org folder will be created in
the user's home directory.If the desired Apache
OpenOffice package is not available, compiling
the port is still an option. However, this requires a lot of
disk space and a fairly long time to compile:&prompt.root; cd /usr/ports/editors/openoffice-4
&prompt.root; make install cleanTo build a localized version, replace the previous
command with:&prompt.root; make LOCALIZED_LANG=your_language install cleanReplace
your_language with the correct
language ISO-code. A list of supported language codes is
available in
files/Makefile.localized, located in
the port's directory.LibreOfficeLibreOfficeoffice suiteLibreOfficeLibreOffice is a free software
office suite developed by documentfoundation.org.
It is compatible with other major office suites and available
on a variety of platforms. It is a rebranded fork of
Apache OpenOffice and includes
applications found in a complete office productivity suite:
a word processor, spreadsheet, presentation manager, drawing
program, database management program, and a tool for creating
and editing mathematical formulæ. It is available in
a number of different languages and internationalization has
been extended to interfaces, spell checkers, and
dictionaries.The word processor of
LibreOffice uses a native XML file
format for increased portability and flexibility. The
spreadsheet program features a macro language which can be
interfaced with external databases.
LibreOffice is stable and runs
natively on &windows;, &linux;, &os;, and &macos; X.
More information about LibreOffice
can be found at libreoffice.org.To install the English version of the
LibreOffice package:&prompt.root; pkg install libreofficeThe editors category (freebsd.org/ports/editors.html)
of the Ports Collection contains several localizations for
LibreOffice. When installing a
localized package, replace libreoffice
with the name of the localized package.Once the package is installed, type the following command
to run LibreOffice:&prompt.user; libreofficeDuring the first launch, some questions will be asked
and a .libreoffice folder will be created
in the user's home directory.If the desired LibreOffice
package is not available, compiling the port is still an
option. However, this requires a lot of disk space and a
fairly long time to compile. This example compiles the
English version:&prompt.root; cd /usr/ports/editors/libreoffice
&prompt.root; make install cleanTo build a localized version,
cd into the port directory of
the desired language. Supported languages can be found
in the editors category (freebsd.org/ports/editors.html)
of the Ports Collection.Document ViewersSome new document formats have gained popularity since
the advent of &unix; and the viewers they require may not be
available in the base system. This section demonstrates how to
install the following document viewers:Application NameResources NeededInstallation from PortsMajor DependenciesXpdflightlightFreeTypegvlightlightXaw3dGQviewlightlightGtk+ or
GNOMEePDFViewlightlightGtk+OkularlightheavyKDEXpdfXpdfPDFviewingFor users that prefer a small &os; PDF viewer,
Xpdf provides a lightweight and
efficient viewer which requires few resources. It uses the
standard X fonts and does not require any additional
toolkits.To install the Xpdf
package:&prompt.root; pkg install xpdfIf the package is not available, use the Ports
Collection:&prompt.root; cd /usr/ports/graphics/xpdf
&prompt.root; make install cleanOnce the installation is complete, launch
xpdf and use the right mouse button to
activate the menu.gvgvPDFviewingPostScriptviewinggv is a &postscript; and PDF
viewer. It is based on ghostview,
but has a nicer look as it is based on the
Xaw3d widget toolkit.
gv has many configurable features,
such as orientation, paper size, scale, and anti-aliasing.
Almost any operation can be performed with either the
keyboard or the mouse.To install gv as a
package:&prompt.root; pkg install gvIf a package is unavailable, use the Ports
Collection:&prompt.root; cd /usr/ports/print/gv
&prompt.root; make install cleanGQviewGQviewGQview is an image manager
which supports viewing a file with a single click, launching
an external editor, and thumbnail previews. It also features
a slideshow mode and some basic file operations, making it
easy to manage image collections and to find duplicate files.
GQview supports full screen viewing
and internationalization.To install the GQview
package:&prompt.root; pkg install gqviewIf the package is not available, use the Ports
Collection:&prompt.root; cd /usr/ports/graphics/gqview
&prompt.root; make install cleanePDFViewePDFViewPDFviewingePDFView is a lightweight
PDF document viewer that only uses the
Gtk+ and
Poppler libraries. It is currently
under development, but already opens most
PDF files (even encrypted ones), saves copies of
documents, and supports printing using
CUPS.To install ePDFView as a
package:&prompt.root; pkg install epdfviewIf a package is unavailable, use the Ports
Collection:&prompt.root; cd /usr/ports/graphics/epdfview
&prompt.root; make install cleanOkularOkularPDFviewingOkular is a universal document
viewer based on KPDF for
KDE. It can open many document
formats, including PDF, &postscript;, DjVu,
CHM, XPS, and
ePub.To install Okular as a
package:&prompt.root; pkg install okularIf a package is unavailable, use the Ports
Collection:&prompt.root; cd /usr/ports/graphics/okular
&prompt.root; make install cleanFinanceFor managing personal finances on a &os; desktop, some
powerful and easy-to-use applications can be installed. Some
are compatible with widespread file formats, such as the formats
used by Quicken and
Excel.This section covers these programs:Application NameResources NeededInstallation from PortsMajor DependenciesGnuCashlightheavyGNOMEGnumericlightheavyGNOMEKMyMoneylightheavyKDEGnuCashGnuCashGnuCash is part of the
GNOME effort to provide
user-friendly, yet powerful, applications to end-users.
GnuCash can be used to keep track
of income and expenses, bank accounts, and stocks. It
features an intuitive interface while remaining
professional.GnuCash provides a smart
register, a hierarchical system of accounts, and many keyboard
accelerators and auto-completion methods. It can split a
single transaction into several more detailed pieces.
GnuCash can import and merge
Quicken QIF files. It also handles
most international date and currency formats.To install the GnuCash
package:&prompt.root; pkg install gnucashIf the package is not available, use the Ports
Collection:&prompt.root; cd /usr/ports/finance/gnucash
&prompt.root; make install cleanGnumericGnumericspreadsheetGnumericGnumeric is a spreadsheet
program developed by the GNOME
community. It features convenient automatic guessing of user
input according to the cell format with an autofill system
for many sequences. It can import files in a number of
popular formats, including Excel,
Lotus 1-2-3, and
Quattro Pro. It has a large number
of built-in functions and allows all of the usual cell formats
such as number, currency, date, time, and much more.To install Gnumeric as a
package:&prompt.root; pkg install gnumericIf the package is not available, use the Ports
Collection:&prompt.root; cd /usr/ports/math/gnumeric
&prompt.root; make install cleanKMyMoneyKMyMoneyspreadsheetKMyMoneyKMyMoney is a personal finance
application created by the KDE
community. KMyMoney aims to
provide the important features found in commercial personal
finance manager applications. It also highlights ease-of-use
and proper double-entry accounting among its features.
KMyMoney imports from standard
Quicken QIF files, tracks
investments, handles multiple currencies, and provides a
wealth of reports.To install KMyMoney as a
package:&prompt.root; pkg install kmymoney-kde4If the package is not available, use the Ports
Collection:&prompt.root; cd /usr/ports/finance/kmymoney-kde4
&prompt.root; make install clean
Index: head/en_US.ISO8859-1/books/handbook/disks/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/disks/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/disks/chapter.xml (revision 48529)
@@ -1,3670 +1,3670 @@
StorageSynopsisThis chapter covers the use of disks and storage media in
&os;. This includes SCSI and
IDE disks, CD and
DVD media, memory-backed disks, and
USB storage devices.After reading this chapter, you will know:How to add additional hard disks to a &os;
system.How to grow the size of a disk's partition on
&os;.How to configure &os; to use USB
storage devices.How to use CD and
DVD media on a &os; system.How to use the backup programs available under
&os;.How to set up memory disks.What file system snapshots are and how to use them
efficiently.How to use quotas to limit disk space usage.How to encrypt disks and swap to secure them against
attackers.How to configure a highly available storage
network.Before reading this chapter, you should:Know how to configure and
install a new &os; kernel.Adding DisksDavidO'BrienOriginally contributed by disksaddingThis section describes how to add a new
SATA disk to a machine that currently only
has a single drive. First, turn off the computer and install
the drive in the computer following the instructions of the
computer, controller, and drive manufacturers. Reboot the
system and become
root.Inspect /var/run/dmesg.boot to ensure
the new disk was found. In this example, the newly added
SATA drive will appear as
ada1.partitionsgpartFor this example, a single large partition will be created
on the new disk. The
GPT partitioning scheme will be
used in preference to the older and less versatile
MBR scheme.If the disk to be added is not blank, old partition
information can be removed with
gpart delete. See &man.gpart.8; for
details.The partition scheme is created, and then a single partition
is added. To improve performance on newer disks with larger
hardware block sizes, the partition is aligned to one megabyte
boundaries:&prompt.root; gpart create -s GPT ada1
&prompt.root; gpart add -t freebsd-ufs -a 1M ada1Depending on use, several smaller partitions may be desired.
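The -a 1M alignment is easy to reason about: with 512-byte sectors, a 1 MiB boundary falls on every 2048th sector, and 1 MiB is also an exact multiple of the 4096-byte physical blocks used by many newer drives. As a quick sanity check (plain sh arithmetic, not part of the procedure):

```shell
# 1 MiB expressed in 512-byte sectors: partitions created with -a 1M
# always start on a multiple of this sector count.
sectors_per_mib=$((1048576 / 512))
echo "$sectors_per_mib"    # -> 2048

# 1 MiB is also evenly divisible by the 4096-byte physical block size
# of many "Advanced Format" drives, so no physical block is split.
remainder=$((1048576 % 4096))
echo "$remainder"          # -> 0
```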
See &man.gpart.8; for options to create partitions smaller than
a whole disk.A file system is created on the new blank disk:&prompt.root; newfs -U /dev/ada1p1An empty directory is created as a
mountpoint, a location for mounting the new
disk in the original disk's file system:&prompt.root; mkdir /newdiskFinally, an entry is added to
/etc/fstab so the new disk will be mounted
automatically at startup:/dev/ada1p1 /newdisk ufs rw 2 2The new disk can be mounted manually, without restarting the
system:&prompt.root; mount /newdiskResizing and Growing DisksAllanJudeOriginally contributed by disksresizingA disk's capacity can increase without any changes to the
data already present. This happens commonly with virtual
machines, when the virtual disk turns out to be too small and is
enlarged. Sometimes a disk image is written to a
USB memory stick, but does not use the full
capacity. Here we describe how to resize or
grow disk contents to take advantage of
increased capacity.Determine the device name of the disk to be resized by
inspecting /var/run/dmesg.boot. In this
example, there is only one SATA disk in the
system, so the drive will appear as
ada0.partitionsgpartList the partitions on the disk to see the current
configuration:&prompt.root; gpart show ada0
=> 34 83886013 ada0 GPT (48G) [CORRUPT]
34 128 1 freebsd-boot (64k)
162 79691648 2 freebsd-ufs (38G)
79691810 4194236 3 freebsd-swap (2G)
83886046 1 - free - (512B)If the disk was formatted with the
GPT partitioning scheme, it may show
as corrupted because the GPT
backup partition table is no longer at the end of the
drive. Fix the backup
partition table with
gpart:&prompt.root; gpart recover ada0
ada0 recoveredNow the additional space on the disk is available for
use by a new partition, or an existing partition can be
expanded:&prompt.root; gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 79691648 2 freebsd-ufs (38G)
79691810 4194236 3 freebsd-swap (2G)
83886046 18513921 - free - (8.8G)Partitions can only be resized into contiguous free space.
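The first two columns of the gpart show output are counts of 512-byte sectors, so the human-readable size labels can be checked by hand. Verifying the figures above (plain sh arithmetic, not part of the procedure):

```shell
# The freebsd-ufs partition: 79691648 sectors of 512 bytes is exactly 38 GiB.
ufs_bytes=$((79691648 * 512))
echo "$((ufs_bytes / 1073741824))"    # -> 38

# The free segment: 18513921 sectors is 9040 MiB, which gpart rounds
# to the 8.8G shown in the listing.
free_mib=$((18513921 * 512 / 1048576))
echo "$free_mib"                      # -> 9040
```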
Here, the last partition on the disk is the swap partition, but
the second partition is the one that needs to be resized. Swap
partitions only contain temporary data, so the swap partition can safely be
unmounted, deleted, and then recreated after resizing other
partitions.&prompt.root; swapoff /dev/ada0p3
&prompt.root; gpart delete -i 3 ada0
ada0p3 deleted
&prompt.root; gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 79691648 2 freebsd-ufs (38G)
79691810 22708157 - free - (10G)There is risk of data loss when modifying the partition
table of a mounted file system. It is best to perform the
following steps on an unmounted file system while running from
a live CD-ROM or USB
device. However, if absolutely necessary, a mounted file
system can be resized after disabling GEOM safety
features:&prompt.root; sysctl kern.geom.debugflags=16Resize the partition, leaving room to recreate a swap
partition of the desired size. This only modifies the size of
the partition. The file system in the partition will be
expanded in a separate step.&prompt.root; gpart resize -i 2 -a 4k -s 47Gada0
ada0p2 resized
&prompt.root; gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 98566144 2 freebsd-ufs (47G)
98566306 3833661 - free - (1.8G)Recreate the swap partition:&prompt.root; gpart add -t freebsd-swap -a 4k ada0
ada0p3 added
&prompt.root; gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 98566144 2 freebsd-ufs (47G)
98566306 3833661 3 freebsd-swap (1.8G)
&prompt.root; swapon /dev/ada0p3Grow the UFS file system to use the new
capacity of the resized partition:Growing a live UFS file system is only
possible in &os; 10.0-RELEASE and later. For earlier
versions, the file system must not be mounted.&prompt.root; growfs /dev/ada0p2
Device is mounted read-write; resizing will result in temporary write suspension for /.
It's strongly recommended to make a backup before growing the file system.
OK to grow file system on /dev/ada0p2, mounted on /, from 38GB to 47GB? [Yes/No] Yes
super-block backups (for fsck -b #) at:
80781312, 82063552, 83345792, 84628032, 85910272, 87192512, 88474752,
89756992, 91039232, 92321472, 93603712, 94885952, 96168192, 97450432Both the partition and the file system on it have now been
resized to use the newly-available disk space.USB Storage DevicesMarcFonvieilleContributed by USBdisksMany external storage solutions, such as hard drives,
USB thumbdrives, and CD
and DVD burners, use the Universal Serial Bus
(USB). &os; provides support for
USB 1.x, 2.0, and 3.0 devices.USB 3.0 support is not compatible with
some hardware, including Haswell (Lynx Point) chipsets. If
&os; boots with a "failed with error 19"
message, disable xHCI/USB3 in the system
BIOS.Support for USB storage devices is built
into the GENERIC kernel. For a custom
kernel, be sure that the following lines are present in the
kernel configuration file:device scbus # SCSI bus (required for ATA/SCSI)
device da # Direct Access (disks)
device pass # Passthrough device (direct ATA/SCSI access)
device uhci # provides USB 1.x support
device ohci # provides USB 1.x support
device ehci # provides USB 2.0 support
device xhci # provides USB 3.0 support
device usb # USB Bus (required)
device umass # Disks/Mass storage - Requires scbus and da
device cd # needed for CD and DVD burners&os; uses the &man.umass.4; driver which uses the
SCSI subsystem to access
USB storage devices. Since any
USB device will be seen as a
SCSI device by the system, if the
USB device is a CD or
DVD burner, do not
include in a custom kernel
configuration file.The rest of this section demonstrates how to verify that a
USB storage device is recognized by &os; and
how to configure the device so that it can be used.Device ConfigurationTo test the USB configuration, plug in
the USB device. Use
dmesg to confirm that the drive appears in
the system message buffer. It should look something like
this:umass0: <STECH Simple Drive, class 0/0, rev 2.00/1.04, addr 3> on usbus0
umass0: SCSI over Bulk-Only; quirks = 0x0100
umass0:4:0:-1: Attached to scbus4
da0 at umass-sim0 bus 0 scbus4 target 0 lun 0
da0: <STECH Simple Drive 1.04> Fixed Direct Access SCSI-4 device
da0: Serial Number WD-WXE508CAN263
da0: 40.000MB/s transfers
da0: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C)
da0: quirks=0x2<NO_6_BYTE>The brand, device node (da0), speed,
and size will differ according to the device.Since the USB device is seen as a
SCSI one, camcontrol can
be used to list the USB storage devices
attached to the system:&prompt.root; camcontrol devlist
<STECH Simple Drive 1.04> at scbus4 target 0 lun 0 (pass3,da0)Alternately, usbconfig can be used to
list the device. Refer to &man.usbconfig.8; for more
information about this command.&prompt.root; usbconfig
ugen0.3: <Simple Drive STECH> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (2mA)If the device has not been formatted, refer to for instructions on how to format
and create partitions on the USB drive. If
the drive comes with a file system, it can be mounted by
root using the
instructions in .Allowing untrusted users to mount arbitrary media, by
enabling vfs.usermount as described
below, should not be considered safe from a security point
of view. Most file systems were not built to safeguard
against malicious devices.To make the device mountable as a normal user, one
solution is to make all users of the device a member of the
operator group
using &man.pw.8;. Next, ensure that operator is able to read and
write the device by adding these lines to
/etc/devfs.rules:[localrules=5]
add path 'da*' mode 0660 group operatorIf internal SCSI disks are also
installed in the system, change the second line as
follows:add path 'da[3-9]*' mode 0660 group operatorThis will exclude the first three
SCSI disks (da0 to
da2) from belonging to the operator group. Replace
3 with the number of internal
SCSI disks. Refer to &man.devfs.rules.5;
for more information about this file.Next, enable the ruleset in
/etc/rc.conf:devfs_system_ruleset="localrules"Then, instruct the system to allow regular users to mount
file systems by adding the following line to
/etc/sysctl.conf:vfs.usermount=1Since this only takes effect after the next reboot, use
sysctl to set this variable now:&prompt.root; sysctl vfs.usermount=1
vfs.usermount: 0 -> 1The final step is to create a directory where the file
system is to be mounted. This directory needs to be owned by
the user that is to mount the file system. One way to do that
is for root to
create a subdirectory owned by that user, such as /mnt/username.
In the following example, replace
username with the login name of the
user and usergroup with the user's
primary group:&prompt.root; mkdir /mnt/username
&prompt.root; chown username:usergroup /mnt/usernameSuppose a USB thumbdrive is plugged in,
and a device /dev/da0s1 appears. If the
device is formatted with a FAT file system,
the user can mount it using:&prompt.user; mount -t msdosfs -o -m=644,-M=755 /dev/da0s1 /mnt/usernameBefore the device can be unplugged, it
must be unmounted first:&prompt.user; umount /mnt/usernameAfter device removal, the system message buffer will show
messages similar to the following:umass0: at uhub3, port 2, addr 3 (disconnected)
da0 at umass-sim0 bus 0 scbus4 target 0 lun 0
da0: <STECH Simple Drive 1.04> s/n WD-WXE508CAN263 detached
(da0:umass-sim0:0:0:0): Periph destroyedAutomounting Removable Media
- &man.autofs.5; supports automatic mounting of
+ &man.autofs.5; supports automatic mounting of
removable media starting with &os; 10.2-RELEASE.USB devices can be automatically
- mounted by uncommenting this line in
+ mounted by uncommenting this line in
/etc/auto_master:/media -media -nosuidThen add these lines to
/etc/devd.conf:notify 100 {
match "system" "GEOM";
match "subsystem" "DEV";
action "/usr/sbin/automount -c";
};
Reload the configuration if &man.autofs.5;
- and &man.devd.8; are already running:
+ and &man.devd.8; are already running:
&prompt.root; service automount reload
&prompt.root; service devd restart&man.autofs.5; can be set to start at boot by adding this
- line to /etc/rc.conf:
+ line to /etc/rc.conf:
autofs_enable="YES"&man.autofs.5; requires &man.devd.8; to be enabled, as it
- is by default.
+ is by default.
Start the services immediately with:&prompt.root; service automount start
&prompt.root; service automountd start
&prompt.root; service autounmountd start
&prompt.root; service devd startEach file system that can be automatically mounted appears
- as a directory in /media/. The directory
+ as a directory in /media/. The directory
is named after the file system label. If the label is
missing, the directory is named after the device node.The file system is transparently mounted on the first
access, and unmounted after a period of inactivity.
Automounted drives can also be unmounted manually:&prompt.root; automount -fuThis mechanism is typically used for memory cards and
USB memory sticks. It can be used with
any block device, including optical drives or
iSCSI LUNs.Creating and Using CD MediaMikeMeyerContributed by CD-ROMscreatingCompact Disc (CD) media provide a number
of features that differentiate them from conventional disks.
They are designed so that they can be read continuously without
delays to move the head between tracks. While
CD media do have tracks, these refer to a
section of data to be read continuously, and not a physical
property of the disk. The ISO 9660 file
system was designed to deal with these differences.ISO
9660file systemsISO 9660CD burnerATAPIThe &os; Ports Collection provides several utilities for
burning and duplicating audio and data CDs.
This chapter demonstrates the use of several command line
utilities. For CD burning software with a
graphical utility, consider installing the
sysutils/xcdroast or
sysutils/k3b packages or ports.Supported DevicesMarcFonvieilleContributed by CD burnerATAPI/CAM driverThe GENERIC kernel provides support
for SCSI, USB, and
ATAPI CD readers and
burners. If a custom kernel is used, the options that need to
be present in the kernel configuration file vary by the type
of device.For a SCSI burner, make sure these
options are present:device scbus # SCSI bus (required for ATA/SCSI)
device da # Direct Access (disks)
device pass # Passthrough device (direct ATA/SCSI access)
device cd # needed for CD and DVD burnersFor a USB burner, make sure these
options are present:device scbus # SCSI bus (required for ATA/SCSI)
device da # Direct Access (disks)
device pass # Passthrough device (direct ATA/SCSI access)
device cd # needed for CD and DVD burners
device uhci # provides USB 1.x support
device ohci # provides USB 1.x support
device ehci # provides USB 2.0 support
device xhci # provides USB 3.0 support
device usb # USB Bus (required)
device umass # Disks/Mass storage - Requires scbus and daFor an ATAPI burner, make sure these
options are present:device ata # Legacy ATA/SATA controllers
device scbus # SCSI bus (required for ATA/SCSI)
device pass # Passthrough device (direct ATA/SCSI access)
device cd # needed for CD and DVD burnersOn &os; versions prior to 10.x, this line is also
needed in the kernel configuration file if the burner is an
ATAPI device:device atapicamAlternately, this driver can be loaded at boot time by
adding the following line to
/boot/loader.conf:atapicam_load="YES"This will require a reboot of the system as this driver
can only be loaded at boot time.To verify that &os; recognizes the device, run
dmesg and look for an entry for the device.
On systems prior to 10.x, the device name in the first line of
the output will be acd0 instead of
cd0.&prompt.user; dmesg | grep cd
cd0 at ahcich1 bus 0 scbus1 target 0 lun 0
cd0: <HL-DT-ST DVDRAM GU70N LT20> Removable CD-ROM SCSI-0 device
cd0: Serial Number M3OD3S34152
cd0: 150.000MB/s transfers (SATA 1.x, UDMA6, ATAPI 12bytes, PIO 8192bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closedBurning a CDIn &os;, cdrecord can be used to burn
CDs. This command is installed with the
sysutils/cdrtools package or port.While cdrecord has many options, basic
usage is simple. Specify the name of the
ISO file to burn and, if the system has
multiple burner devices, specify the name of the device to
use:&prompt.root; cdrecord dev=device imagefile.isoTo determine the device name of the burner, use
cdrecord -scanbus which might produce results like
this:CD-ROMsburning&prompt.root; cdrecord -scanbus
ProDVD-ProBD-Clone 3.00 (amd64-unknown-freebsd10.0) Copyright (C) 1995-2010 Jörg Schilling
Using libscg version 'schily-0.9'
scsibus0:
0,0,0 0) 'SEAGATE ' 'ST39236LW ' '0004' Disk
0,1,0 1) 'SEAGATE ' 'ST39173W ' '5958' Disk
0,2,0 2) *
0,3,0 3) 'iomega ' 'jaz 1GB ' 'J.86' Removable Disk
0,4,0 4) 'NEC ' 'CD-ROM DRIVE:466' '1.26' Removable CD-ROM
0,5,0 5) *
0,6,0 6) *
0,7,0 7) *
scsibus1:
1,0,0 100) *
1,1,0 101) *
1,2,0 102) *
1,3,0 103) *
1,4,0 104) *
1,5,0 105) 'YAMAHA ' 'CRW4260 ' '1.0q' Removable CD-ROM
1,6,0 106) 'ARTEC ' 'AM12S ' '1.06' Scanner
1,7,0 107) *Locate the entry for the CD burner and
use the three numbers separated by commas as the value for
dev=. In this case, the Yamaha burner device
is 1,5,0, so the appropriate input to
specify that device is dev=1,5,0. Refer to
the manual page for cdrecord for other ways
to specify this value and for information on writing audio
tracks and controlling the write speed.Alternately, run the following command to get the device
address of the burner:&prompt.root; camcontrol devlist
<MATSHITA CDRW/DVD UJDA740 1.00> at scbus1 target 0 lun 0 (cd0,pass0)Use the numeric values for scbus,
target, and lun. For
this example, 1,0,0 is the device name to
use.Writing Data to an ISO File
SystemIn order to produce a data CD, the data
files that are going to make up the tracks on the
CD must be prepared before they can be
burned to the CD. In &os;,
sysutils/cdrtools installs
mkisofs, which can be used to produce an
ISO 9660 file system that is an image of a
directory tree within a &unix; file system. The simplest
usage is to specify the name of the ISO
file to create and the path to the files to place into the
ISO 9660 file system:&prompt.root; mkisofs -o imagefile.iso /path/to/treefile systemsISO 9660This command maps the file names in the specified path to
names that fit the limitations of the standard
ISO 9660 file system, and will exclude
files that do not meet the standard for ISO
file systems.file systemsJolietA number of options are available to overcome the
restrictions imposed by the standard. In particular,
-R enables the Rock Ridge extensions common
to &unix; systems and -J enables Joliet
extensions used by &microsoft; systems.For CDs that are going to be used only
on &os; systems, -U can be used to disable
all filename restrictions. When used with
-R, it produces a file system image that is
identical to the specified &os; tree, even if it violates the
ISO 9660 standard.CD-ROMscreating bootableThe last option of general use is -b.
This is used to specify the location of a boot image for use
in producing an El Torito bootable
CD. This option takes an argument which is
the path to a boot image from the top of the tree being
written to the CD. By default,
mkisofs creates an ISO
image in floppy disk emulation mode, and thus
expects the boot image to be exactly 1200, 1440 or
2880 KB in size. Some boot loaders, like the one used by
the &os; distribution media, do not use emulation mode. In
this case, -no-emul-boot should be used. So,
if /tmp/myboot holds a bootable &os;
system with the boot image in
/tmp/myboot/boot/cdboot, this command
would produce
/tmp/bootable.iso:&prompt.root; mkisofs -R -no-emul-boot -b boot/cdboot -o /tmp/bootable.iso /tmp/mybootThe resulting ISO image can be mounted
as a memory disk with:&prompt.root; mdconfig -a -t vnode -f /tmp/bootable.iso -u 0
&prompt.root; mount -t cd9660 /dev/md0 /mntOne can then verify that /mnt and
/tmp/myboot are identical.There are many other options available for
mkisofs to fine-tune its behavior. Refer
to &man.mkisofs.8; for details.It is possible to copy a data CD to
an image file that is functionally equivalent to the image
file created with mkisofs. To do so, use
dd with the device name as the input
file and the name of the ISO to create as
the output file:&prompt.root; dd if=/dev/cd0 of=file.iso bs=2048The resulting image file can be burned to
CD as described in .Using Data CDsOnce an ISO has been burned to a
CD, it can be mounted by specifying the
file system type, the name of the device containing the
CD, and an existing mount point:&prompt.root; mount -t cd9660 /dev/cd0 /mntSince mount assumes that a file system
is of type ufs, an Incorrect
super block error will occur if -t
cd9660 is not included when mounting a data
CD.While any data CD can be mounted this
way, disks with certain ISO 9660 extensions
might behave oddly. For example, Joliet disks store all
filenames in two-byte Unicode characters. If some non-English
characters show up as question marks, specify the local
charset with -C. For more information, refer
to &man.mount.cd9660.8;.In order to do this character conversion with the help
of , the kernel requires the
cd9660_iconv.ko module to be loaded.
This can be done either by adding this line to
loader.conf:cd9660_iconv_load="YES"and then rebooting the machine, or by directly loading
the module with kldload.Occasionally, Device not configured
will be displayed when trying to mount a data
CD. This usually means that the
CD drive has not detected a disk in
the tray, or that the drive is not visible on the bus. It
can take a couple of seconds for a CD
drive to detect media, so be
patient.Sometimes, a SCSI
CD drive may be missed because it did not
have enough time to answer the bus reset. To resolve this,
a custom kernel can be created which increases the default
SCSI delay. Add the following option to
the custom kernel configuration file and rebuild the kernel
using the instructions in :options SCSI_DELAY=15000This tells the SCSI bus to pause 15
seconds during boot, to give the CD
drive every possible chance to answer the bus reset.It is possible to burn a file directly to
CD, without creating an
ISO 9660 file system. This is known as
burning a raw data CD and some people do
this for backup purposes.This type of disk can not be mounted as a normal data
CD. In order to retrieve the data burned
to such a CD, the data must be read from
the raw device node. For example, this command will extract
a compressed tar file located on the second
CD device into the current working
directory:&prompt.root; tar xzvf /dev/cd1 In order to mount a data CD, the
data must be written using
mkisofs.Duplicating Audio CDsTo duplicate an audio CD, extract the
audio data from the CD to a series of
files, then write these files to a blank
CD. describes how to
duplicate and burn an audio CD. If the
&os; version is less than 10.0 and the device is
ATAPI, the atapicam module
must first be loaded using the instructions in .Duplicating an Audio CDThe sysutils/cdrtools package or
port installs cdda2wav. This command
can be used to extract all of the audio tracks, with each
track written to a separate WAV file in
the current working directory:&prompt.user; cdda2wav -vall -B -OwavA device name does not need to be specified if there
is only one CD device on the system.
Refer to the cdda2wav manual page for
instructions on how to specify a device and to learn more
about the other options available for this command.Use cdrecord to write the
.wav files:&prompt.user; cdrecord -v dev=2,0 -dao -useinfo *.wavMake sure that 2,0 is set
appropriately, as described in .Creating and Using DVD MediaMarcFonvieilleContributed by AndyPolyakovWith inputs from DVDburningCompared to the CD, the
DVD is the next generation of optical media
storage technology. The DVD can hold more
data than any CD and is the standard for
video publishing.Five physical recordable formats can be defined for a
recordable DVD:DVD-R: This was the first DVD
recordable format available. The DVD-R standard is defined
by the DVD
Forum. This format is write once.DVD-RW: This is the rewritable
version of the DVD-R standard. A
DVD-RW can be rewritten about 1000
times.DVD-RAM: This is a rewritable format
which can be seen as a removable hard drive. However, this
media is not compatible with most
DVD-ROM drives and DVD-Video players as
only a few DVD writers support the
DVD-RAM format. Refer to for more information on
DVD-RAM use.DVD+RW: This is a rewritable format
defined by the DVD+RW
Alliance. A DVD+RW can be
rewritten about 1000 times.DVD+R: This format is the write once variation of the
DVD+RW format.A single layer recordable DVD can hold up
to 4,700,000,000 bytes which is actually 4.38 GB or
4485 MB as 1 kilobyte is 1024 bytes.A distinction must be made between the physical media and
the application. For example, a DVD-Video is a specific file
layout that can be written on any recordable
DVD physical media such as DVD-R, DVD+R, or
DVD-RW. Before choosing the type of media,
ensure that both the burner and the DVD-Video player are
compatible with the media under consideration.ConfigurationTo perform DVD recording, use
&man.growisofs.1;. This command is part of the
sysutils/dvd+rw-tools utilities which
support all DVD media types.These tools use the SCSI subsystem to
access the devices, therefore ATAPI/CAM support must be loaded
or statically compiled into the kernel. This support is not
needed if the burner uses the USB
interface. Refer to for more
details on USB device configuration.DMA access must also be enabled for
ATAPI devices, by adding the following line
to /boot/loader.conf:hw.ata.atapi_dma="1"Before attempting to use
dvd+rw-tools, consult the Hardware
Compatibility Notes.For a graphical user interface, consider using
sysutils/k3b which provides a user
friendly interface to &man.growisofs.1; and many other
burning tools.Burning Data DVDsSince &man.growisofs.1; is a front-end to mkisofs, it will invoke
&man.mkisofs.8; to create the file system layout and perform
the write on the DVD. This means that an
image of the data does not need to be created before the
burning process.To burn to a DVD+R or a DVD-R the data in
/path/to/data, use the following
command:&prompt.root; growisofs -dvd-compat -Z /dev/cd0 -J -R /path/to/dataIn this example, -J -R is passed to
&man.mkisofs.8; to create an ISO 9660 file system with Joliet
and Rock Ridge extensions. Refer to &man.mkisofs.8; for more
details.For the initial session recording, -Z is
used for both single and multiple sessions. Replace
/dev/cd0, with the name of the
DVD device. Using
-dvd-compat indicates that the disk will be
closed and that the recording will be unappendable. This
should also provide better media compatibility with
DVD-ROM drives.To burn a pre-mastered image, such as
imagefile.iso, use:&prompt.root; growisofs -dvd-compat -Z /dev/cd0=imagefile.isoThe write speed should be detected and automatically set
according to the media and the drive being used. To force the
write speed, use -speed=. Refer to
&man.growisofs.1; for example usage.In order to support working files larger than 4.38GB, an
UDF/ISO-9660 hybrid file system must be created by passing
-udf -iso-level 3 to &man.mkisofs.8; and
all related programs, such as &man.growisofs.1;. This is
required only when creating an ISO image file or when
writing files directly to a disk. Since a disk created this
way must be mounted as a UDF file system with
&man.mount.udf.8;, it will be usable only on a UDF-aware
operating system. Otherwise it will look as if it contains
corrupted files.To create this type of ISO file:&prompt.user; mkisofs -R -J -udf -iso-level 3 -o imagefile.iso /path/to/dataTo burn files directly to a disk:&prompt.root; growisofs -dvd-compat -udf -iso-level 3 -Z /dev/cd0 -J -R /path/to/dataWhen an ISO image already contains large files, no
additional options are required for &man.growisofs.1; to
burn that image on a disk.Be sure to use an up-to-date version of
sysutils/cdrtools, which contains
&man.mkisofs.8;, as an older version may not support large
files. If the latest version does not work, install
sysutils/cdrtools-devel and read its
&man.mkisofs.8;.Burning a DVD-VideoDVDDVD-VideoA DVD-Video is a specific file layout based on the ISO
9660 and micro-UDF (M-UDF) specifications. Since DVD-Video
presents a specific data structure hierarchy, a particular
program such as multimedia/dvdauthor is
needed to author the DVD.If an image of the DVD-Video file system already exists,
it can be burned in the same way as any other image. If
dvdauthor was used to make the
DVD and the result is in
/path/to/video, the following command
should be used to burn the DVD-Video:&prompt.root; growisofs -Z /dev/cd0 -dvd-video /path/to/video -dvd-video is passed to &man.mkisofs.8;
to instruct it to create a DVD-Video file system layout.
This option implies the
-dvd-compat &man.growisofs.1; option.Using a DVD+RWDVDDVD+RWUnlike CD-RW, a virgin DVD+RW needs to
be formatted before first use. It is
recommended to let &man.growisofs.1; take
care of this automatically whenever appropriate. However, it
is possible to use dvd+rw-format to format
the DVD+RW:&prompt.root; dvd+rw-format /dev/cd0Only perform this operation once and keep in mind that
only virgin DVD+RW media need to be
formatted. Once formatted, the DVD+RW can
be burned as usual.To burn a totally new file system and not just append some
data onto a DVD+RW, the media does not need
to be blanked first. Instead, write over the previous
recording like this:&prompt.root; growisofs -Z /dev/cd0 -J -R /path/to/newdataThe DVD+RW format supports appending
data to a previous recording. This operation consists of
merging a new session to the existing one as it is not
considered to be multi-session writing. &man.growisofs.1;
will grow the ISO 9660 file system
present on the media.For example, to append data to a
DVD+RW, use the following:&prompt.root; growisofs -M /dev/cd0 -J -R /path/to/nextdataThe same &man.mkisofs.8; options used to burn the
initial session should be used during next writes.Use -dvd-compat for better media
compatibility with DVD-ROM drives. When
using DVD+RW, this option will not
prevent the addition of data.To blank the media, use:&prompt.root; growisofs -Z /dev/cd0=/dev/zeroUsing a DVD-RWDVDDVD-RWA DVD-RW accepts two disc formats:
incremental sequential and restricted overwrite. By default,
DVD-RW discs are in sequential
format.A virgin DVD-RW can be directly written
without being formatted. However, a non-virgin
DVD-RW in sequential format needs to be
blanked before writing a new initial session.To blank a DVD-RW in sequential
mode:&prompt.root; dvd+rw-format -blank=full /dev/cd0A full blanking using -blank=full will
take about one hour on a 1x media. A fast blanking can be
performed using -blank, if the
DVD-RW will be recorded in Disk-At-Once
(DAO) mode. To burn the DVD-RW in DAO
mode, use the command:&prompt.root; growisofs -use-the-force-luke=dao -Z /dev/cd0=imagefile.isoSince &man.growisofs.1; automatically attempts to detect
fast blanked media and engage DAO write,
-use-the-force-luke=dao should not be
required.One should instead use restricted overwrite mode with
any DVD-RW as this format is more
flexible than the default of incremental sequential.To write data on a sequential DVD-RW,
use the same instructions as for the other
DVD formats:&prompt.root; growisofs -Z /dev/cd0 -J -R /path/to/dataTo append some data to a previous recording, use
-M with &man.growisofs.1;. However, if data
is appended on a DVD-RW in incremental
sequential mode, a new session will be created on the disc and
the result will be a multi-session disc.A DVD-RW in restricted overwrite format
does not need to be blanked before a new initial session.
Instead, overwrite the disc with -Z. It is
also possible to grow an existing ISO 9660 file system written
on the disc with -M. The result will be a
one-session DVD.To put a DVD-RW in restricted overwrite
format, the following command must be used:&prompt.root; dvd+rw-format /dev/cd0To change back to sequential format, use:&prompt.root; dvd+rw-format -blank=full /dev/cd0Multi-SessionFew DVD-ROM drives support
multi-session DVDs and most of the time only read the first
session. DVD+R, DVD-R and DVD-RW in
sequential format can accept multiple sessions. The notion
of multiple sessions does not exist for the
DVD+RW and the DVD-RW
restricted overwrite formats.Using the following command after an initial non-closed
session on a DVD+R, DVD-R, or DVD-RW in
sequential format, will add a new session to the disc:&prompt.root; growisofs -M /dev/cd0 -J -R /path/to/nextdataUsing this command with a DVD+RW or a
DVD-RW in restricted overwrite mode will
append data while merging the new session to the existing one.
The result will be a single-session disc. Use this method to
add data after an initial write on these types of
media.Since some space on the media is used between each
session to mark the end and start of sessions, one should
add sessions with a large amount of data to optimize media
space. The number of sessions is limited to 154 for a
DVD+R, about 2000 for a DVD-R, and 127 for a DVD+R Double
Layer.For More InformationTo obtain more information about a DVD,
use dvd+rw-mediainfo
/dev/cd0 while the
disc is in the specified drive.More information about
dvd+rw-tools can be found in
&man.growisofs.1;, on the dvd+rw-tools
web site, and in the cdwrite
mailing list archives.When creating a problem report related to the use of
dvd+rw-tools, always include the
output of dvd+rw-mediainfo.Using a DVD-RAMDVDDVD-RAMDVD-RAM writers can use either a
SCSI or ATAPI interface.
For ATAPI devices, DMA access has to be
enabled by adding the following line to
/boot/loader.conf:hw.ata.atapi_dma="1"A DVD-RAM can be seen as a removable
hard drive. Like any other hard drive, the
DVD-RAM must be formatted before it can be
used. In this example, the whole disk space will be formatted
with a standard UFS2 file system:&prompt.root; dd if=/dev/zero of=/dev/acd0 bs=2k count=1
&prompt.root; bsdlabel -Bw acd0
&prompt.root; newfs /dev/acd0The DVD device,
acd0, must be changed according to the
configuration.Once the DVD-RAM has been formatted, it
can be mounted as a normal hard drive:&prompt.root; mount /dev/acd0 /mntOnce mounted, the DVD-RAM will be both
readable and writeable.Creating and Using Floppy DisksThis section explains how to format a 3.5 inch floppy disk
in &os;.Steps to Format a FloppyA floppy disk needs to be low-level formatted before it
can be used. This is usually done by the vendor, but
formatting is a good way to check media integrity. To
low-level format the floppy disk on &os;, use
&man.fdformat.1;. When using this utility, make note of any
error messages, as these can help determine if the disk is
good or bad.To format the floppy, insert a new 3.5 inch floppy disk
into the first floppy drive and issue:&prompt.root; /usr/sbin/fdformat -f 1440 /dev/fd0After low-level formatting the disk, create a disk label
as it is needed by the system to determine the size of the
disk and its geometry. The supported geometry values are
listed in /etc/disktab.To write the disk label, use &man.bsdlabel.8;:&prompt.root; /sbin/bsdlabel -B -w /dev/fd0 fd1440The floppy is now ready to be high-level formatted with
a file system. The floppy's file system can be either UFS
or FAT, where FAT is generally a better choice for
floppies.To format the floppy with FAT, issue:&prompt.root; /sbin/newfs_msdos /dev/fd0The disk is now ready for use. To use the floppy, mount it
with &man.mount.msdosfs.8;. One can also install and use
emulators/mtools from the Ports
Collection.Backup BasicsImplementing a backup plan is essential in order to have the
ability to recover from disk failure, accidental file deletion,
random file corruption, or complete machine destruction,
including destruction of on-site backups.The backup type and schedule will vary, depending upon the
importance of the data, the granularity needed for file
restores, and the amount of acceptable downtime. Some possible
backup techniques include:Archives of the whole system, backed up onto permanent,
off-site media. This provides protection against all of the
problems listed above, but is slow and inconvenient to
restore from, especially for non-privileged users.File system snapshots, which are useful for restoring
deleted files or previous versions of files.Copies of whole file systems or disks which are
synchronized with another system on the network using a
scheduled net/rsync.Hardware or software RAID, which
minimizes or avoids downtime when a disk fails.Typically, a mix of backup techniques is used. For
example, one could create a schedule to automate a weekly, full
system backup that is stored off-site and to supplement this
backup with hourly ZFS snapshots. In addition, one could make a
manual backup of individual directories or files before making
file edits or deletions.This section describes some of the utilities which can be
used to create and manage backups on a &os; system.File System Backupsbackup softwaredump / restoredumprestoreThe traditional &unix; programs for backing up a file
system are &man.dump.8;, which creates the backup, and
&man.restore.8;, which restores the backup. These utilities
work at the disk block level, below the abstractions of the
files, links, and directories that are created by file
systems. Unlike other backup software,
dump backs up an entire file system and is
unable to back up only part of a file system or a directory
tree that spans multiple file systems. Instead of writing
files and directories, dump writes the raw
data blocks that comprise files and directories.If dump is used on the root
directory, it will not back up /home,
/usr or many other directories since
these are typically mount points for other file systems or
symbolic links into those file systems.When used to restore data, restore
stores temporary files in /tmp/ by
default. When using a recovery disk with a small
/tmp, set TMPDIR to a
directory with more free space in order for the restore to
succeed.When using dump, be aware that some
quirks remain from its early days in Version 6 of
AT&T &unix;, circa 1975. The default parameters assume a
backup to a 9-track tape, rather than to another type of media
or to the high-density tapes available today. These defaults
must be overridden on the command line..rhostsIt is possible to back up a file system across the network
to another system or to a tape drive attached to another
computer. While the &man.rdump.8; and &man.rrestore.8;
utilities can be used for this purpose, they are not
considered to be secure.Instead, one can use dump and
restore in a more secure fashion over an
SSH connection. This example creates a
full, compressed backup of /usr and sends
the backup file to the specified host over an
SSH connection.Using dump over
ssh&prompt.root; /sbin/dump -0uan -f - /usr | gzip -2 | ssh -c blowfish \
targetuser@targetmachine.example.com dd of=/mybigfiles/dump-usr-l0.gzThis example sets RSH in order to write the
backup to a tape drive on a remote system over an
SSH connection:Using dump over
ssh with RSH
Set&prompt.root; env RSH=/usr/bin/ssh /sbin/dump -0uan -f targetuser@targetmachine.example.com:/dev/sa0 /usrDirectory Backupsbackup softwaretarSeveral built-in utilities are available for backing up
and restoring specified files and directories as
needed.A good choice for making a backup of all of the files in a
directory is &man.tar.1;. This utility dates back to Version
6 of AT&T &unix; and by default assumes a recursive backup
to a local tape device. Switches can be used to instead
specify the name of a backup file.tarThis example creates a compressed backup of the current
directory and saves it to
/tmp/mybackup.tgz. When creating a
backup file, make sure that the backup is not saved to the
same directory that is being backed up.Backing Up the Current Directory with
tar&prompt.root; tar czvf /tmp/mybackup.tgz . To restore the entire backup, cd into
the directory to restore into and specify the name of the
backup. Note that this will overwrite any newer versions of
files in the restore directory. When in doubt, restore to a
temporary directory or specify the name of the file within the
backup to restore.Restoring the Current Directory with
tar&prompt.root; tar xzvf /tmp/mybackup.tgzThere are dozens of available switches which are described
in &man.tar.1;. This utility also supports the use of exclude
patterns to specify which files should not be included when
backing up the specified directory or restoring files from a
backup.backup softwarecpioTo create a backup using a specified list of files and
directories, &man.cpio.1; is a good choice. Unlike
tar, cpio does not know
how to walk the directory tree and it must be provided the
list of files to back up.For example, a list of files can be created using
ls or find. This
example creates a recursive listing of the current directory
which is then piped to cpio in order to
create an output backup file named
/tmp/mybackup.cpio.Using ls and cpio
to Make a Recursive Backup of the Current Directory&prompt.root; ls -R | cpio -ovF /tmp/mybackup.cpiobackup softwarepaxpaxPOSIXIEEEA backup utility which tries to bridge the features
provided by tar and cpio
is &man.pax.1;. Over the years, the various versions of
tar and cpio became
slightly incompatible. &posix; created pax
which attempts to read and write many of the various
cpio and tar formats,
plus new formats of its own.The pax equivalent to the previous
examples would be:Backing Up the Current Directory with
pax&prompt.root; pax -wf /tmp/mybackup.pax .Using Data Tapes for Backupstape mediaWhile tape technology has continued to evolve, modern
backup systems tend to combine off-site backups with local
removable media. &os; supports any tape drive that uses
SCSI, such as LTO or
DAT. There is limited support for
SATA and USB tape
drives.For SCSI tape devices, &os; uses the
&man.sa.4; driver and the /dev/sa0,
/dev/nsa0, and
/dev/esa0 devices. The physical device
name is /dev/sa0. When
/dev/nsa0 is used, the backup application
will not rewind the tape after writing a file, which allows
writing more than one file to a tape. Using
/dev/esa0 ejects the tape after the
device is closed.In &os;, mt is used to control
operations of the tape drive, such as seeking through files on
a tape or writing tape control marks to the tape. For
example, the first three files on a tape can be preserved by
skipping past them before writing a new file:&prompt.root; mt -f /dev/nsa0 fsf 3This utility supports many operations. Refer to
&man.mt.1; for details.To write a single file to tape using
tar, specify the name of the tape device
and the file to back up:&prompt.root; tar cvf /dev/sa0 fileTo recover files from a tar archive
on tape into the current directory:&prompt.root; tar xvf /dev/sa0To backup a UFS file system, use
dump. This example backs up
/usr without rewinding the tape when
finished:&prompt.root; dump -0aL -b64 -f /dev/nsa0 /usrTo interactively restore files from a
dump file on tape into the current
directory:&prompt.root; restore -i -f /dev/nsa0Third-Party Backup Utilitiesbackup softwareThe &os; Ports Collection provides many third-party
utilities which can be used to schedule the creation of
backups, simplify tape backup, and make backups easier and
more convenient. Many of these applications are client/server
based and can be used to automate the backups of a single
system or all of the computers in a network.Popular utilities include
Amanda,
Bacula,
rsync, and
duplicity.Emergency RecoveryIn addition to regular backups, it is recommended to
perform the following steps as part of an emergency
preparedness plan.bsdlabelCreate a print copy of the output of the following
commands:gpart showmore /etc/fstabdmesglivefs
CDStore this printout and a copy of the installation media
in a secure location. Should an emergency restore be
needed, boot into the installation media and select
Live CD to access a rescue shell. This
rescue mode can be used to view the current state of the
system, and if needed, to reformat disks and restore data
from backups.The installation media for
&os;/&arch.i386; &rel2.current;-RELEASE does not
include a rescue shell. For this version, instead
download and burn a Livefs CD image from
ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/&arch.i386;/ISO-IMAGES/&rel2.current;/&os;-&rel2.current;-RELEASE-&arch.i386;-livefs.iso.Next, test the rescue shell and the backups. Make notes
of the procedure. Store these notes with the media, the
printouts, and the backups. These notes may prevent the
inadvertent destruction of the backups while under the stress
of performing an emergency recovery.For an added measure of security, store the latest backup
at a remote location which is physically separated from the
computers and disk drives by a significant distance.Memory DisksMarcFonvieilleReorganized and enhanced by In addition to physical disks, &os; also supports the
creation and use of memory disks. One possible use for a
memory disk is to access the contents of an
ISO file system without the overhead of first
burning it to a CD or DVD,
then mounting the CD/DVD media.In &os;, the &man.md.4; driver is used to provide support
for memory disks. The GENERIC kernel
includes this driver. When using a custom kernel configuration
file, ensure it includes this line:device mdAttaching and Detaching Existing ImagesdisksmemoryTo mount an existing file system image, use
mdconfig to specify the name of the
ISO file and a free unit number. Then,
refer to that unit number to mount it on an existing mount
point. Once mounted, the files in the ISO
will appear in the mount point. This example attaches
diskimage.iso to the memory device
/dev/md0 then mounts that memory device
on /mnt:&prompt.root; mdconfig -f diskimage.iso -u 0
&prompt.root; mount /dev/md0 /mntIf a unit number is not specified with
-u, mdconfig will
automatically allocate an unused memory device and output
the name of the allocated unit, such as
md4. Refer to &man.mdconfig.8; for more
details about this command and its options.disksdetaching a memory diskWhen a memory disk is no longer in use, its resources
should be released back to the system. First, unmount the
file system, then use mdconfig to detach
the disk from the system and release its resources. To
continue this example:&prompt.root; umount /mnt
&prompt.root; mdconfig -d -u 0To determine if any memory disks are still attached to the
system, type mdconfig -l.Creating a File- or Memory-Backed Memory Diskdisksmemory file system&os; also supports memory disks where the storage to use
is allocated from either a hard disk or an area of memory.
The first method is commonly referred to as a file-backed file
system and the second method as a memory-backed file system.
Both types can be created using
mdconfig.To create a new memory-backed file system, specify a type
of swap and the size of the memory disk to
create. Then, format the memory disk with a file system and
mount as usual. This example creates a 5M memory disk on unit
1. That memory disk is then formatted with
the UFS file system before it is
mounted:&prompt.root; mdconfig -a -t swap -s 5m -u 1
&prompt.root; newfs -U md1
/dev/md1: 5.0MB (10240 sectors) block size 16384, fragment size 2048
using 4 cylinder groups of 1.27MB, 81 blks, 192 inodes.
with soft updates
super-block backups (for fsck -b #) at:
160, 2752, 5344, 7936
&prompt.root; mount /dev/md1 /mnt
&prompt.root; df /mnt
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/md1 4718 4 4338 0% /mntTo create a new file-backed memory disk, first allocate an
area of disk to use. This example creates an empty 5 MB file
named newimage:&prompt.root; dd if=/dev/zero of=newimage bs=1k count=5k
5120+0 records in
5120+0 records outNext, attach that file to a memory disk, label the memory
disk and format it with the UFS file
system, mount the memory disk, and verify the size of the
file-backed disk:&prompt.root; mdconfig -f newimage -u 0
&prompt.root; bsdlabel -w md0 auto
&prompt.root; newfs md0a
/dev/md0a: 5.0MB (10224 sectors) block size 16384, fragment size 2048
using 4 cylinder groups of 1.25MB, 80 blks, 192 inodes.
super-block backups (for fsck -b #) at:
160, 2720, 5280, 7840
&prompt.root; mount /dev/md0a /mnt
&prompt.root; df /mnt
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/md0a 4710 4 4330 0% /mntIt takes several commands to create a file- or
memory-backed file system using mdconfig.
&os; also comes with mdmfs which
automatically configures a memory disk, formats it with the
UFS file system, and mounts it. For
example, after creating newimage
with dd, this one command is equivalent to
running the bsdlabel,
newfs, and mount
commands shown above:&prompt.root; mdmfs -F newimage -s 5m md0 /mntTo instead create a new memory-based memory disk with
mdmfs, use this one command:&prompt.root; mdmfs -s 5m md1 /mntIf the unit number is not specified,
mdmfs will automatically select an unused
memory device. For more details about
mdmfs, refer to &man.mdmfs.8;.File System SnapshotsTomRhodesContributed by file systemssnapshots&os; offers a feature in conjunction with
Soft Updates: file system
snapshots.UFS snapshots allow a user to create images of specified
file systems, and treat them as a file. Snapshot files must be
created in the file system that the action is performed on, and
a user may create no more than 20 snapshots per file system.
Active snapshots are recorded in the superblock so they are
persistent across unmount and remount operations along with
system reboots. When a snapshot is no longer required, it can
be removed using &man.rm.1;. While snapshots may be removed in
any order, not all of the used space may be reclaimed because
another snapshot may claim some of the released
blocks.The un-alterable file flag is set
by &man.mksnap.ffs.8; after initial creation of a snapshot file.
&man.unlink.1; makes an exception for snapshot files and
allows them to be removed.
snapshot of /var in the
file /var/snapshot/snap, use the following
command:&prompt.root; mount -u -o snapshot /var/snapshot/snap /varAlternatively, use &man.mksnap.ffs.8; to create the
snapshot:&prompt.root; mksnap_ffs /var /var/snapshot/snapOne can find snapshot files on a file system, such as
/var, using
&man.find.1;:&prompt.root; find /var -flags snapshotOnce a snapshot has been created, it has several
uses:Some administrators will use a snapshot file for backup
purposes, because the snapshot can be transferred to
CDs or tape.The file system integrity checker, &man.fsck.8;, may be
run on the snapshot. Assuming that the file system was
clean when it was mounted, this should always provide a
clean and unchanging result.Running &man.dump.8; on the snapshot will produce a dump
file that is consistent with the file system and the
timestamp of the snapshot. &man.dump.8; can also take a
snapshot, create a dump image, and then remove the snapshot
in one command by using -L.The snapshot can be mounted as a frozen image of the
file system. To &man.mount.8; the snapshot
/var/snapshot/snap run:&prompt.root; mdconfig -a -t vnode -o readonly -f /var/snapshot/snap -u 4
&prompt.root; mount -r /dev/md4 /mntThe frozen /var is now available
through /mnt. Everything will initially be
in the same state it was during the snapshot creation time. The
only exception is that any earlier snapshots will appear as zero
length files. To unmount the snapshot, use:&prompt.root; umount /mnt
&prompt.root; mdconfig -d -u 4For more information about soft updates and
file system snapshots, including technical papers, visit
Marshall Kirk McKusick's website at http://www.mckusick.com/.Disk Quotasaccountingdisk spacedisk quotasDisk quotas can be used to limit the amount of disk space or
the number of files a user or members of a group may allocate on
a per-file system basis. This prevents one user or group of
users from consuming all of the available disk space.This section describes how to configure disk quotas for the
UFS file system. To configure quotas on the
ZFS file system, refer to Enabling Disk QuotasTo determine if the &os; kernel provides support for disk
quotas:&prompt.user; sysctl kern.features.ufs_quota
kern.features.ufs_quota: 1In this example, the 1 indicates quota
support. If the value is instead 0, add
the following line to a custom kernel configuration file and
rebuild the kernel using the instructions in :options QUOTANext, enable disk quotas in
/etc/rc.conf:quota_enable="YES"disk quotascheckingNormally on bootup, the quota integrity of each file
system is checked by &man.quotacheck.8;. This program ensures
that the data in the quota database properly reflects the data
on the file system. This is a time-consuming process that
will significantly affect the time the system takes to boot.
To skip this step, add this variable to
/etc/rc.conf:check_quotas="NO"Finally, edit /etc/fstab to enable
disk quotas on a per-file system basis. To enable per-user
quotas on a file system, add userquota to the
options field in the /etc/fstab entry for
the file system to enable quotas on. For example:/dev/da1s2g /home ufs rw,userquota 1 2To enable group quotas, use groupquota
instead. To enable both user and group quotas, separate the
options with a comma:/dev/da1s2g /home ufs rw,userquota,groupquota 1 2By default, quota files are stored in the root directory
of the file system as quota.user and
quota.group. Refer to &man.fstab.5; for
more information. Specifying an alternate location for the
quota files is not recommended.Once the configuration is complete, reboot the system and
/etc/rc will automatically run the
appropriate commands to create the initial quota files for all
of the quotas enabled in
/etc/fstab.In the normal course of operations, there should be no
need to manually run &man.quotacheck.8;, &man.quotaon.8;, or
&man.quotaoff.8;. However, one should read these manual pages
to be familiar with their operation.Setting Quota Limitsdisk quotaslimitsTo
verify that quotas are enabled, run:&prompt.root; quota -vThere should be a one-line summary of disk usage and
current quota limits for each file system that quotas are
enabled on.The system is now ready to be assigned quota limits with
edquota.Several options are available to enforce limits on the
amount of disk space a user or group may allocate, and how
many files they may create. Allocations can be limited based
on disk space (block quotas), number of files (inode quotas),
or a combination of both. Each limit is further broken down
into two categories: hard and soft limits.hard limitA hard limit may not be exceeded. Once a user reaches a
hard limit, no further allocations can be made on that file
system by that user. For example, if the user has a hard
limit of 500 kbytes on a file system and is currently using
490 kbytes, the user can only allocate an additional 10
kbytes. Attempting to allocate an additional 11 kbytes will
fail.soft limitSoft limits can be exceeded for a limited amount of time,
known as the grace period, which is one week by default. If a
user stays over their limit longer than the grace period, the
soft limit turns into a hard limit and no further allocations
are allowed. When the user drops back below the soft limit,
the grace period is reset.In the following example, the quota for the test account is being edited.
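The interaction of usage, soft and hard limits, and the grace period amounts to a small decision procedure. A minimal illustrative sketch in shell, with hypothetical values; this is not how the kernel enforces quotas:

```shell
# Decide whether a new block allocation succeeds under a quota.
# All sizes in kbytes; grace_left is seconds of grace remaining.
usage=490; soft=400; hard=500; grace_left=86400; request=10

if [ $((usage + request)) -gt "$hard" ]; then
    echo "denied: hard limit exceeded"
elif [ $((usage + request)) -gt "$soft" ] && [ "$grace_left" -le 0 ]; then
    echo "denied: over soft limit and grace period expired"
else
    echo "allowed"
fi
```

With these values the request is allowed: 500 kbytes is within the hard limit, and although it exceeds the soft limit, grace time remains.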
When edquota is invoked, the editor
specified by EDITOR is opened in order to edit
the quota limits. The default editor is set to
vi.&prompt.root; edquota -u test
Quotas for user test:
/usr: kbytes in use: 65, limits (soft = 50, hard = 75)
inodes in use: 7, limits (soft = 50, hard = 60)
/usr/var: kbytes in use: 0, limits (soft = 50, hard = 75)
inodes in use: 0, limits (soft = 50, hard = 60)There are normally two lines for each file system that has
quotas enabled. One line represents the block limits and the
other represents the inode limits. Change the value to modify
the quota limit. For example, to raise the block limit on
/usr to a soft limit of
500 and a hard limit of
600, change the values in that line as
follows:/usr: kbytes in use: 65, limits (soft = 500, hard = 600)The new quota limits take effect upon exiting the
editor.Sometimes it is desirable to set quota limits on a range
of users. This can be done by first assigning the desired
quota limit to a user. Then, use -p to
duplicate that quota to a specified range of user IDs
(UIDs). The following command will
duplicate those quota limits for UIDs
10,000 through
19,999:&prompt.root; edquota -p test 10000-19999For more information, refer to &man.edquota.8;.Checking Quota Limits and Disk Usagedisk quotascheckingTo check individual user or group quotas and disk usage,
use &man.quota.1;. A user may only examine their own quota
and the quota of a group they are a member of. Only the
superuser may view all user and group quotas. To get a
summary of all quotas and disk usage for file systems with
quotas enabled, use &man.repquota.8;.Normally, file systems that the user is not using any disk
space on will not show in the output of
quota, even if the user has a quota limit
assigned for that file system. Use -v to
display those file systems. The following is sample output
from quota -v for a user that has quota
limits on two file systems.Disk quotas for user test (uid 1002):
Filesystem usage quota limit grace files quota limit grace
/usr 65* 50 75 5days 7 50 60
/usr/var 0 50 75 0 50 60grace periodIn this example, the user is currently 15 kbytes over the
soft limit of 50 kbytes on /usr and has 5
days of grace period left. The asterisk *
indicates that the user is currently over the quota
limit.Quotas over NFSNFSQuotas are enforced by the quota subsystem on the
NFS server. The &man.rpc.rquotad.8; daemon
makes quota information available to quota
on NFS clients, allowing users on those
machines to see their quota statistics.On the NFS server, enable
rpc.rquotad by removing the
# from this line in
/etc/inetd.conf:rquotad/1 dgram rpc/udp wait root /usr/libexec/rpc.rquotad rpc.rquotadThen, restart inetd:&prompt.root; service inetd restartEncrypting Disk PartitionsLuckyGreenContributed by shamrock@cypherpunks.todisksencrypting&os; offers excellent online protections against
unauthorized data access. File permissions and Mandatory Access Control (MAC) help
prevent unauthorized users from accessing data while the
operating system is active and the computer is powered up.
However, the permissions enforced by the operating system are
irrelevant if an attacker has physical access to a computer and
can move the computer's hard drive to another system to copy and
analyze the data.Regardless of how an attacker may have come into possession
of a hard drive or powered-down computer, the
GEOM-based cryptographic subsystems built
into &os; are able to protect the data on the computer's file
systems against even highly-motivated attackers with significant
resources. Unlike encryption methods that encrypt individual
files, the built-in gbde and
geli utilities can be used to transparently
encrypt entire file systems. No cleartext ever touches the hard
drive's platter.This chapter demonstrates how to create an encrypted file
system on &os;. It first demonstrates the process using
gbde and then demonstrates the same example
using geli.Disk Encryption with
gbdeThe objective of the &man.gbde.4; facility is to provide a
formidable challenge for an attacker to gain access to the
contents of a cold storage device.
However, if the computer is compromised while up and running
and the storage device is actively attached, or the attacker
has access to a valid passphrase, it offers no protection to
the contents of the storage device. Thus, it is important to
provide physical security while the system is running and to
protect the passphrase used by the encryption
mechanism.This facility provides several barriers to protect the
data stored in each disk sector. It encrypts the contents of
a disk sector using 128-bit AES in
CBC mode. Each sector on the disk is
encrypted with a different AES key. For
more information on the cryptographic design, including how
the sector keys are derived from the user-supplied passphrase,
refer to &man.gbde.4;.&os; provides a kernel module for
gbde which can be loaded with this
command:&prompt.root; kldload geom_bdeIf using a custom kernel configuration file, ensure it
contains this line:options GEOM_BDEThe following example demonstrates adding a new hard drive
to a system that will hold a single encrypted partition that
will be mounted as /private.Encrypting a Partition with
gbdeAdd the New Hard DriveInstall the new drive to the system as explained in
. For the purposes of this
example, a new hard drive partition has been added as
/dev/ad4s1c and
/dev/ad0s1*
represents the existing standard &os; partitions.&prompt.root; ls /dev/ad*
/dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1
/dev/ad0s1 /dev/ad0s1c /dev/ad0s1f /dev/ad4s1c
/dev/ad0s1a /dev/ad0s1d /dev/ad4Create a Directory to Hold gbde
Lock Files&prompt.root; mkdir /etc/gbdeThe gbde lock file
contains information that gbde
requires to access encrypted partitions. Without access
to the lock file, gbde will not
be able to decrypt the data contained in the encrypted
partition without significant manual intervention, which is
not supported by the software. Each encrypted partition
uses a separate lock file.Initialize the gbde
PartitionA gbde partition must be
initialized before it can be used. This initialization
needs to be performed only once. This command will open
the default editor, in order to set various configuration
options in a template. For use with the
UFS file system, set the sector_size to
2048:&prompt.root; gbde init /dev/ad4s1c -i -L /etc/gbde/ad4s1c.lock# $FreeBSD: src/sbin/gbde/template.txt,v 1.1.36.1 2009/08/03 08:13:06 kensmith Exp $
#
# Sector size is the smallest unit of data which can be read or written.
# Making it too small decreases performance and decreases available space.
# Making it too large may prevent filesystems from working. 512 is the
# minimum and always safe. For UFS, use the fragment size
#
sector_size = 2048
[...]Once the edit is saved, the user will be asked twice
to type the passphrase used to secure the data. The
passphrase must be the same both times. The ability of
gbde to protect data depends
entirely on the quality of the passphrase. For tips on
how to select a secure passphrase that is easy to
remember, see http://world.std.com/~reinhold/diceware.htm.This initialization creates a lock file for the
gbde partition. In this
example, it is stored as
/etc/gbde/ad4s1c.lock. Lock files
must end in .lock in order to be correctly
detected by the /etc/rc.d/gbde start
up script.Lock files must be backed up
together with the contents of any encrypted partitions.
Without the lock file, the legitimate owner will be
unable to access the data on the encrypted
partition.Attach the Encrypted Partition to the
Kernel&prompt.root; gbde attach /dev/ad4s1c -l /etc/gbde/ad4s1c.lockThis command will prompt to input the passphrase that
was selected during the initialization of the encrypted
partition. The new encrypted device will appear in
/dev as
/dev/device_name.bde:&prompt.root; ls /dev/ad*
/dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1
/dev/ad0s1 /dev/ad0s1c /dev/ad0s1f /dev/ad4s1c
/dev/ad0s1a /dev/ad0s1d /dev/ad4 /dev/ad4s1c.bdeCreate a File System on the Encrypted
DeviceOnce the encrypted device has been attached to the
kernel, a file system can be created on the device. This
example creates a UFS file system with
soft updates enabled. Be sure to specify the partition
which has a
*.bde
extension:&prompt.root; newfs -U /dev/ad4s1c.bdeMount the Encrypted PartitionCreate a mount point and mount the encrypted file
system:&prompt.root; mkdir /private
&prompt.root; mount /dev/ad4s1c.bde /privateVerify That the Encrypted File System is
AvailableThe encrypted file system should now be visible and
available for use:&prompt.user; df -H
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 1037M 72M 883M 8% /
/devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1f 8.1G 55K 7.5G 0% /home
/dev/ad0s1e 1037M 1.1M 953M 0% /tmp
/dev/ad0s1d 6.1G 1.9G 3.7G 35% /usr
/dev/ad4s1c.bde 150G 4.1K 138G 0% /privateAfter each boot, any encrypted file systems must be
manually re-attached to the kernel, checked for errors, and
mounted, before the file systems can be used. To configure
these steps, add the following lines to
/etc/rc.conf:gbde_autoattach_all="YES"
gbde_devices="ad4s1c"
gbde_lockdir="/etc/gbde"This requires that the passphrase be entered at the
console at boot time. After typing the correct passphrase,
the encrypted partition will be mounted automatically.
Additional gbde boot options are
available and listed in &man.rc.conf.5;.sysinstall is incompatible
with gbde-encrypted devices. All
*.bde devices must be detached from the
kernel before starting sysinstall
or it will crash during its initial probing for devices. To
detach the encrypted device used in the example, use the
following command:&prompt.root; gbde detach /dev/ad4s1cDisk Encryption with geliDanielGerzoContributed by An alternative cryptographic GEOM class
is available using geli. This control
utility adds some features and uses a different scheme for
doing cryptographic work. It provides the following
features:Utilizes the &man.crypto.9; framework and
automatically uses cryptographic hardware when it is
available.Supports multiple cryptographic algorithms such as
AES, Blowfish, and
3DES.Allows the root partition to be encrypted. The
passphrase used to access the encrypted root partition
will be requested during system boot.Allows the use of two independent keys.It is fast as it performs simple sector-to-sector
encryption.Allows backup and restore of master keys. If a user
destroys their keys, it is still possible to get access to
the data by restoring keys from the backup.Allows a disk to attach with a random, one-time key
which is useful for swap partitions and temporary file
systems.More features and usage examples can be found in
&man.geli.8;.The following example describes how to generate a key file
which will be used as part of the master key for the encrypted
provider mounted under /private. The key
file will provide some random data used to encrypt the master
key. The master key will also be protected by a passphrase.
The provider's sector size will be 4kB. The example describes
how to attach to the geli provider, create
a file system on it, mount it, work with it, and finally, how
to detach it.Encrypting a Partition with
geliLoad geli SupportSupport for geli is available as a
loadable kernel module. To configure the system to
automatically load the module at boot time, add the
following line to
/boot/loader.conf:geom_eli_load="YES"To load the kernel module now:&prompt.root; kldload geom_eliFor a custom kernel, ensure the kernel configuration
file contains these lines:options GEOM_ELI
device cryptoGenerate the Master KeyThe following commands generate a master key
(/root/da2.key) that is protected
with a passphrase. The data source for the key file is
/dev/random and the sector size of
the provider (/dev/da2.eli) is 4kB as
a bigger sector size provides better performance:&prompt.root; dd if=/dev/random of=/root/da2.key bs=64 count=1
&prompt.root; geli init -s 4096 -K /root/da2.key /dev/da2
Enter new passphrase:
Reenter new passphrase:It is not mandatory to use both a passphrase and a key
file as either method of securing the master key can be
used in isolation.If the key file is given as -, standard
input will be used. For example, this command generates
three key files:&prompt.root; cat keyfile1 keyfile2 keyfile3 | geli init -K - /dev/da2Attach the Provider with the Generated KeyTo attach the provider, specify the key file, the name
of the disk, and the passphrase:&prompt.root; geli attach -k /root/da2.key /dev/da2
Enter passphrase:This creates a new device with an
.eli extension:&prompt.root; ls /dev/da2*
/dev/da2 /dev/da2.eliCreate the New File SystemNext, format the device with the
UFS file system and mount it on an
existing mount point:&prompt.root; dd if=/dev/random of=/dev/da2.eli bs=1m
&prompt.root; newfs /dev/da2.eli
&prompt.root; mount /dev/da2.eli /privateThe encrypted file system should now be available for
use:&prompt.root; df -H
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 248M 89M 139M 38% /
/devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1f 7.7G 2.3G 4.9G 32% /usr
/dev/ad0s1d 989M 1.5M 909M 0% /tmp
/dev/ad0s1e 3.9G 1.3G 2.3G 35% /var
/dev/da2.eli 150G 4.1K 138G 0% /privateOnce the work on the encrypted partition is done, and the
/private partition is no longer needed,
it is prudent to put the device into cold storage by
unmounting and detaching the geli encrypted
partition from the kernel:&prompt.root; umount /private
&prompt.root; geli detach da2.eliA rc.d script is provided to
simplify the mounting of geli-encrypted
devices at boot time. For this example, add these lines to
/etc/rc.conf:geli_devices="da2"
geli_da2_flags="-k /root/da2.key"This configures /dev/da2 as a
geli provider with a master key of
/root/da2.key. The system will
automatically detach the provider from the kernel before the
system shuts down. During the startup process, the script
will prompt for the passphrase before attaching the provider.
Other kernel messages might be shown before and after the
password prompt. If the boot process seems to stall, look
carefully for the password prompt among the other messages.
Once the correct passphrase is entered, the provider is
attached. The file system is then mounted, typically by an
entry in /etc/fstab. Refer to for instructions on how to
configure a file system to mount at boot time.Encrypting SwapChristianBruefferWritten by swapencryptingLike the encryption of disk partitions, encryption of swap
space is used to protect sensitive information. Consider an
application that deals with passwords. As long as these
passwords stay in physical memory, they are not written to disk
and will be cleared after a reboot. However, if &os; starts
swapping out memory pages to free space, the passwords may be
written to the disk unencrypted. Encrypting swap space can be a
solution for this scenario.This section demonstrates how to configure an encrypted
swap partition using &man.gbde.8; or &man.geli.8; encryption.
It assumes that
/dev/ada0s1b is the swap partition.Configuring Encrypted SwapSwap partitions are not encrypted by default and should be
cleared of any sensitive data before continuing. To overwrite
the current swap partition with random garbage, execute the
following command:&prompt.root; dd if=/dev/random of=/dev/ada0s1b bs=1mTo encrypt the swap partition using &man.gbde.8;, add the
.bde suffix to the swap line in
/etc/fstab:# Device Mountpoint FStype Options Dump Pass#
/dev/ada0s1b.bde none swap sw 0 0To instead encrypt the swap partition using &man.geli.8;,
use the
.eli suffix:# Device Mountpoint FStype Options Dump Pass#
/dev/ada0s1b.eli none swap sw 0 0By default, &man.geli.8; uses the AES
algorithm with a key length of 128 bits. Normally the default
settings will suffice. If desired, these defaults can be
altered in the options field in
/etc/fstab. The possible flags
are:aalgoData integrity verification algorithm used to ensure
that the encrypted data has not been tampered with. See
&man.geli.8; for a list of supported algorithms.ealgoEncryption algorithm used to protect the data. See
&man.geli.8; for a list of supported algorithms.keylenThe length of the key used for the encryption
algorithm. See &man.geli.8; for the key lengths that
are supported by each encryption algorithm.sectorsizeThe size of the blocks data is broken into before
it is encrypted. Larger sector sizes increase
performance at the cost of higher storage
overhead. The recommended size is 4096 bytes.This example configures an encrypted swap partition using
the Blowfish algorithm with a key length of 128 bits and a
sectorsize of 4 kilobytes:# Device Mountpoint FStype Options Dump Pass#
/dev/ada0s1b.eli none swap sw,ealgo=blowfish,keylen=128,sectorsize=4096 0 0Encrypted Swap VerificationOnce the system has rebooted, proper operation of the
encrypted swap can be verified using
swapinfo.If &man.gbde.8; is being used:&prompt.user; swapinfo
Device 1K-blocks Used Avail Capacity
/dev/ada0s1b.bde 542720 0 542720 0%If &man.geli.8; is being used:&prompt.user; swapinfo
Device 1K-blocks Used Avail Capacity
/dev/ada0s1b.eli 542720 0 542720 0%Highly Available Storage
(HAST)DanielGerzoContributed by FreddieCashWith inputs from Pawel JakubDawidekMichael W.LucasViktorPeterssonHASThigh availabilityHigh availability is one of the main requirements in
serious business applications and highly-available storage is a
key component in such environments. In &os;, the Highly
Available STorage (HAST) framework allows
transparent storage of the same data across several physically
separated machines connected by a TCP/IP
network. HAST can be understood as a
network-based RAID1 (mirror), and is similar to the DRBD®
storage system used in the GNU/&linux; platform. In combination
with other high-availability features of &os; like
CARP, HAST makes it
possible to build a highly-available storage cluster that is
resistant to hardware failures.The following are the main features of
HAST:Can be used to mask I/O errors on
local hard drives.File system agnostic as it works with any file system
supported by &os;.Efficient and quick resynchronization as only the blocks
that were modified during the downtime of a node are
synchronized.Can be used in an already deployed environment to add
additional redundancy.Together with CARP,
Heartbeat, or other tools, it can
be used to build a robust and durable storage system.After reading this section, you will know:What HAST is, how it works, and
which features it provides.How to set up and use HAST on
&os;.How to integrate CARP and
&man.devd.8; to build a robust storage system.Before reading this section, you should:Understand &unix; and &os; basics ().Know how to configure network
interfaces and other core &os; subsystems ().Have a good understanding of &os;
networking ().The HAST project was sponsored by The
&os; Foundation with support from http://www.omc.net/
and http://www.transip.nl/.HAST OperationHAST provides synchronous block-level
replication between two physical machines: the
primary, also known as the
master node, and the
secondary, or slave
node. These two machines together are referred to as a
cluster.Since HAST works in a primary-secondary
configuration, it allows only one of the cluster nodes to be
active at any given time. The primary node, also called
active, is the one which will handle all
the I/O requests to
HAST-managed devices. The secondary node
is automatically synchronized from the primary node.The physical components of the HAST
system are the local disk on primary node, and the disk on the
remote, secondary node.HAST operates synchronously on a block
level, making it transparent to file systems and applications.
HAST provides regular GEOM providers in
/dev/hast/ for use by other tools or
applications. There is no difference between using
HAST-provided devices and raw disks or
partitions.Each write, delete, or flush operation is sent to both the
local disk and to the remote disk over
TCP/IP. Each read operation is served from
the local disk, unless the local disk is not up-to-date or an
I/O error occurs. In such cases, the read
operation is sent to the secondary node.HAST tries to provide fast failure
recovery. For this reason, it is important to reduce
synchronization time after a node's outage. To provide fast
synchronization, HAST manages an on-disk
bitmap of dirty extents and only synchronizes those during a
regular synchronization, with the exception of the initial
sync.There are many ways to handle synchronization.
HAST implements several replication modes
to handle different synchronization methods:memsync: This mode reports a
write operation as completed when the local write
operation is finished and when the remote node
acknowledges data arrival, but before actually storing the
data. The data on the remote node will be stored directly
after sending the acknowledgement. This mode is intended
to reduce latency, but still provides good reliability.
This mode is the default.fullsync: This mode reports a
write operation as completed when both the local write and
the remote write complete. This is the safest and the
slowest replication mode.async: This mode reports a write
operation as completed when the local write completes.
This is the fastest and the most dangerous replication
mode. It should only be used when replicating to a
distant node where latency is too high for other
modes.HAST ConfigurationThe HAST framework consists of several
components:The &man.hastd.8; daemon which provides data
synchronization. When this daemon is started, it will
automatically load geom_gate.ko.The userland management utility,
&man.hastctl.8;.The &man.hast.conf.5; configuration file. This file
must exist before starting
hastd.Users who prefer to statically build
GEOM_GATE support into the kernel should
add this line to the custom kernel configuration file, then
rebuild the kernel using the instructions in :options GEOM_GATEThe following example describes how to configure two nodes
in master-slave/primary-secondary operation using
HAST to replicate the data between the two.
The nodes will be called hasta, with an
IP address of
172.16.0.1, and hastb,
with an IP address of
172.16.0.2. Both nodes will have a
dedicated hard drive /dev/ad6 of the same
size for HAST operation. The
HAST pool, sometimes referred to as a
resource or the GEOM provider in /dev/hast/, will be called
test.Configuration of HAST is done using
/etc/hast.conf. This file should be
identical on both nodes. The simplest configuration
is:resource test {
on hasta {
local /dev/ad6
remote 172.16.0.2
}
on hastb {
local /dev/ad6
remote 172.16.0.1
}
}For more advanced configuration, refer to
&man.hast.conf.5;.It is also possible to use host names in the
remote statements if the hosts are
resolvable and defined either in
/etc/hosts or in the local
DNS.Once the configuration exists on both nodes, the
HAST pool can be created. Run these
commands on both nodes to place the initial metadata onto the
local disk and to start &man.hastd.8;:&prompt.root; hastctl create test
&prompt.root; service hastd onestartIt is not possible to use
GEOM
providers with an existing file system or to convert an
existing storage to a HAST-managed pool.
This procedure needs to store some metadata on the provider,
and an existing provider will not have the required space
available.A HAST node's primary or
secondary role is selected by an
administrator, or software like
Heartbeat, using &man.hastctl.8;.
On the primary node, hasta, issue this
command:&prompt.root; hastctl role primary testRun this command on the secondary node,
hastb:&prompt.root; hastctl role secondary testVerify the result by running hastctl on
each node:&prompt.root; hastctl status testCheck the status line in the output.
If it says degraded, something is wrong
with the configuration file. It should say
complete on each node, meaning that the
synchronization between the nodes has started. The
synchronization completes when hastctl
status reports 0 bytes of dirty
extents.The next step is to create a file system on the
GEOM provider and mount it. This must be
done on the primary node. Creating the
file system can take a few minutes, depending on the size of
the hard drive. This example creates a UFS
file system on /dev/hast/test:&prompt.root; newfs -U /dev/hast/test
&prompt.root; mkdir /hast/test
&prompt.root; mount /dev/hast/test /hast/testOnce the HAST framework is configured
properly, the final step is to make sure that
HAST is started automatically during
system boot. Add this line to
/etc/rc.conf:hastd_enable="YES"Failover ConfigurationThe goal of this example is to build a robust storage
system which is resistant to the failure of any given node.
If the primary node fails, the secondary node is there to
take over seamlessly, check and mount the file system, and
continue to work without missing a single bit of
data.To accomplish this task, the Common Address Redundancy
Protocol (CARP) is used to provide for
automatic failover at the IP layer.
CARP allows multiple hosts on the same
network segment to share an IP address.
Set up CARP on both nodes of the cluster
according to the documentation available in . In this example, each node will have
its own management IP address and a
shared IP address of
172.16.0.254. The primary
HAST node of the cluster must be the
master CARP node.The HAST pool created in the previous
section is now ready to be exported to the other hosts on
the network. This can be accomplished by exporting it
through NFS or
Samba, using the shared
IP address
172.16.0.254. The only problem
which remains unresolved is automatic failover should the
primary node fail.In the event of CARP interfaces going
up or down, the &os; operating system generates a
&man.devd.8; event, making it possible to watch for state
changes on the CARP interfaces. A state
change on the CARP interface is an
indication that one of the nodes failed or came back online.
These state change events make it possible to run a script
which will automatically handle the HAST failover.To catch state changes on the
CARP interfaces, add this configuration
to /etc/devd.conf on each node:notify 30 {
match "system" "IFNET";
match "subsystem" "carp0";
match "type" "LINK_UP";
action "/usr/local/sbin/carp-hast-switch master";
};
notify 30 {
match "system" "IFNET";
match "subsystem" "carp0";
match "type" "LINK_DOWN";
action "/usr/local/sbin/carp-hast-switch slave";
};If the systems are running &os; 10 or higher,
replace carp0 with the name of the
CARP-configured interface.Restart &man.devd.8; on both nodes to put the new
configuration into effect:&prompt.root; service devd restartWhen the specified interface state changes by going up
or down, the system generates a notification, allowing the
&man.devd.8; subsystem to run the specified automatic
failover script,
/usr/local/sbin/carp-hast-switch.
For further clarification about this configuration, refer to
&man.devd.conf.5;.Here is an example of an automated failover
script:#!/bin/sh
# Original script by Freddie Cash <fjwcash@gmail.com>
# Modified by Michael W. Lucas <mwlucas@BlackHelicopters.org>
# and Viktor Petersson <vpetersson@wireload.net>
# The names of the HAST resources, as listed in /etc/hast.conf
resources="test"
# delay in mounting HAST resource after becoming master
# make your best guess
delay=3
# logging
log="local0.debug"
name="carp-hast"
# end of user configurable stuff
case "$1" in
master)
logger -p $log -t $name "Switching to primary provider for ${resources}."
sleep ${delay}
# Wait for any "hastd secondary" processes to stop
for disk in ${resources}; do
while pgrep -lf "hastd: ${disk} \(secondary\)" > /dev/null 2>&1; do
sleep 1
done
# Switch role for each disk
hastctl role primary ${disk}
if [ $? -ne 0 ]; then
logger -p $log -t $name "Unable to change role to primary for resource ${disk}."
exit 1
fi
done
# Wait for the /dev/hast/* devices to appear
for disk in ${resources}; do
for I in $( jot 60 ); do
[ -c "/dev/hast/${disk}" ] && break
sleep 0.5
done
if [ ! -c "/dev/hast/${disk}" ]; then
logger -p $log -t $name "GEOM provider /dev/hast/${disk} did not appear."
exit 1
fi
done
logger -p $log -t $name "Role for HAST resources ${resources} switched to primary."
logger -p $log -t $name "Mounting disks."
for disk in ${resources}; do
mkdir -p /hast/${disk}
fsck -p -y -t ufs /dev/hast/${disk}
mount /dev/hast/${disk} /hast/${disk}
done
;;
slave)
logger -p $log -t $name "Switching to secondary provider for ${resources}."
# Switch roles for the HAST resources
for disk in ${resources}; do
if mount | grep -q "^/dev/hast/${disk} on "
then
umount -f /hast/${disk}
fi
sleep $delay
hastctl role secondary ${disk} 2>&1
if [ $? -ne 0 ]; then
logger -p $log -t $name "Unable to switch role to secondary for resource ${disk}."
exit 1
fi
logger -p $log -t $name "Role switched to secondary for resource ${disk}."
done
;;
esacIn a nutshell, the script takes these actions when a
node becomes master:Promotes the HAST pool to
primary on the other node.Checks the file system under the
HAST pool.Mounts the pool.When a node becomes secondary:Unmounts the HAST pool.Degrades the HAST pool to
secondary.This is just an example script which serves as a proof
of concept. It does not handle all the possible scenarios
and can be extended or altered in any way, for example, to
start or stop required services.For this example, a standard UFS
file system was used. To reduce the time needed for
recovery, a journal-enabled UFS or
ZFS file system can be used
instead.More detailed information with additional examples can
be found at http://wiki.FreeBSD.org/HAST.TroubleshootingHAST should generally work without
issues. However, as with any other software product, there
may be times when it does not work as expected. The sources
of the problems may vary, but a good rule of thumb is to
ensure that the time is synchronized between the nodes of the
cluster.When troubleshooting HAST, the
debugging level of &man.hastd.8; should be increased by
starting hastd with -d.
This argument may be specified multiple times to further
increase the debugging level. Consider also using
-F, which starts hastd
in the foreground.Recovering from the Split-brain ConditionSplit-brain occurs when the nodes
of the cluster are unable to communicate with each other,
and both are configured as primary. This is a dangerous
condition because it allows both nodes to make incompatible
changes to the data. This problem must be corrected
manually by the system administrator.The administrator must either decide which node has more
important changes, or perform the merge manually. Then, let
HAST perform full synchronization of the
node which has the broken data. To do this, issue these
commands on the node which needs to be
resynchronized:&prompt.root; hastctl role init test
&prompt.root; hastctl create test
&prompt.root; hastctl role secondary test
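The three recovery commands above can be wrapped in a small helper so that a failure at any step aborts the procedure; a sketch only, assuming the resource name from this section (the recover_split_brain function name is illustrative, not part of HAST):

```shell
# Sketch only: wrap the three recovery commands from this section in a
# helper so a failure at any step aborts the recovery. hastctl(8) is the
# real HAST control utility; the function name is illustrative.
recover_split_brain() {
    res=$1
    # Drop out of the cluster, recreate the local metadata (this discards
    # the local copy of the data), then rejoin as secondary so HAST
    # performs a full resynchronization from the healthy primary.
    for cmd in "role init" "create" "role secondary"; do
        hastctl $cmd "$res" || return 1   # $cmd unquoted: word splitting intended
    done
}
```

Run it only on the node whose data is to be discarded, for example `recover_split_brain test`.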
Index: head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml (revision 48529)
@@ -1,194 +1,194 @@
Other File SystemsTomRhodesWritten
by SynopsisFile SystemsFile Systems SupportFile SystemsFile systems are an integral part of any operating system.
They allow users to upload and store files, provide access to
data, and make hard drives useful. Different operating systems
differ in their native file system. Traditionally, the native
&os; file system has been the Unix File System
UFS which has been modernized as
UFS2. Since &os; 7.0, the Z File System
(ZFS) is also available as a native file
system. See for more information.In addition to its native file systems, &os; supports a
multitude of other file systems so that data from other
operating systems can be accessed locally, such as data stored
on locally attached USB storage devices,
flash drives, and hard disks. This includes support for the
&linux; Extended File System (EXT) and the
Reiser file system.There are different levels of &os; support for the various
file systems. Some require a kernel module to be loaded and
others may require a toolset to be installed. Some non-native
file system support is full read-write while others are
read-only.After reading this chapter, you will know:The difference between native and supported file
systems.Which file systems are supported by &os;.How to enable, configure, access, and make use of
non-native file systems.Before reading this chapter, you should:
- Understand &unix; and &os; basics.
+ Understand &unix; and
+ &os; basics.Be familiar with the basics of kernel configuration and
compilation.Feel comfortable installing
software in &os;.Have some familiarity with disks, storage, and device names in
&os;.&linux; File Systems&os; provides built-in support for several &linux; file
systems. This section demonstrates how to load support for and
how to mount the supported &linux; file systems.ext2Kernel support for ext2 file systems has
been available since &os; 2.2. In &os; 8.x and
earlier, the code is licensed under the
GPL. Since &os; 9.0, the code has
been rewritten and is now BSD
licensed.The &man.ext2fs.5; driver allows the &os; kernel to both
read and write to ext2 file systems.
This driver can also be used to access ext3 and ext4 file
systems. However, ext3 journaling, extended attributes, and
inodes greater than 128 bytes are not supported. Support
for ext4 is read-only.To access an ext file system, first
load the kernel loadable module:&prompt.root; kldload ext2fsThen, mount the ext volume by specifying its &os;
partition name and an existing mount point. This example
mounts /dev/ad1s1 on
/mnt:&prompt.root; mount -t ext2fs /dev/ad1s1 /mntReiserFS&os; provides read-only support for the Reiser file
system, ReiserFS.To load the &man.reiserfs.5; driver:&prompt.root; kldload reiserfsThen, to mount a ReiserFS volume located on
/dev/ad1s1:&prompt.root; mount -t reiserfs /dev/ad1s1 /mnt
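To have such a volume mounted automatically at boot, the driver can be loaded from loader.conf and the mount listed in fstab. A sketch for the ext2 example above, using the device name and mount point from this section (adjust both for the actual system):

```
# /boot/loader.conf: load the ext2fs driver at boot
ext2fs_load="YES"

# /etc/fstab: mount the ext2 volume on /mnt at boot
/dev/ad1s1   /mnt   ext2fs   rw   0   0
```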
Index: head/en_US.ISO8859-1/books/handbook/introduction/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/introduction/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/introduction/chapter.xml (revision 48529)
@@ -1,1298 +1,1298 @@
IntroductionJimMockRestructured, reorganized, and parts rewritten
by SynopsisThank you for your interest in &os;! The following chapter
covers various aspects of the &os; Project, such as its
history, goals, development model, and so on.After reading this chapter, you will know:How &os; relates to other computer operating
systems.The history of the &os; Project.The goals of the &os; Project.The basics of the &os; open-source development
model.And of course: where the name &os; comes
from.Welcome to &os;!4.4BSD-Lite&os; is a 4.4BSD-Lite based operating system for Intel (x86
and &itanium;), AMD64, Sun &ultrasparc; computers. Ports to
other architectures are also under way. You can also read about
the history of &os;, or the
- current release. If you are
- interested in contributing something to the Project (code,
- hardware, funding), see the current release.
+ If you are interested in contributing something to the Project
+ (code, hardware, funding), see the Contributing
to &os; article.What Can &os; Do?&os; has many noteworthy features. Some of these
are:Preemptive multitaskingpreemptive multitasking with dynamic priority adjustment to ensure
smooth and fair sharing of the computer between
applications and users, even under the heaviest of
loads.Multi-user facilitiesmulti-user facilities which allow many people to use a &os; system
simultaneously for a variety of things. This means, for
example, that system peripherals such as printers and tape
drives are properly shared between all users on the system
or the network and that individual resource limits can be
placed on users or groups of users, protecting critical
system resources from over-use.Strong TCP/IP
networkingTCP/IP networking with support for industry standards such as
SCTP, DHCP, NFS, NIS, PPP, SLIP, IPsec, and IPv6. This
means that your &os; machine can interoperate easily with
other systems as well as act as an enterprise server,
providing vital functions such as NFS (remote file access)
and email services or putting your organization on the
Internet with WWW, FTP, routing and firewall (security)
services.Memory protectionmemory protection ensures that applications (or users) cannot
interfere with each other. One application crashing will
not affect others in any way.The industry standard
X Window SystemX Window System (X11R7) can provide a graphical user
interface (GUI) on any machine and comes with full
sources.binary compatibilityLinuxbinary compatibilitySCObinary compatibilitySVR4binary compatibilityBSD/OSbinary compatibilityNetBSDBinary compatibility with many
programs built for Linux, SCO, SVR4, BSDI and
NetBSD.Thousands of ready-to-run
applications are available from the &os;
ports and
packages collection. Why search the
net when you can find it all right here?Thousands of additional and
easy-to-port applications are
available on the Internet. &os; is source code compatible
with most popular commercial &unix; systems and thus most
applications require few, if any, changes to
compile.Demand paged virtual
memoryvirtual memory and merged VM/buffer cache
design efficiently satisfies applications with large
appetites for memory while still maintaining interactive
response to other users.SMPSymmetric Multi-Processing
(SMP) support for machines with multiple
CPUs.compilersCcompilersC++
A full complement of C
and C++
development tools.
Many additional languages for advanced research
and development are also available in the ports and
packages collection.Source codesource code for the entire system means you have the
greatest degree of control over your environment. Why be
locked into a proprietary solution at the mercy of your
vendor when you can have a truly open system?Extensive online
documentation.And many more!&os; is based on the 4.4BSD-Lite4.4BSD-Lite release from Computer
Systems Research Group (CSRG)Computer Systems Research Group (CSRG) at the University of California at Berkeley, and
carries on the distinguished tradition of BSD systems
development. In addition to the fine work provided by CSRG,
the &os; Project has put in many thousands of hours in
fine tuning the system for maximum performance and reliability
in real-life load situations. &os; offers performance and
reliability on par with commercial offerings, combined with
many cutting-edge features not available anywhere else.The applications to which &os; can be put are truly
limited only by your own imagination. From software
development to factory automation, inventory control to
azimuth correction of remote satellite antennae; if it can be
done with a commercial &unix; product then it is more than
likely that you can do it with &os; too! &os; also benefits
significantly from literally thousands of high quality
applications developed by research centers and universities
around the world, often available at little to no cost.
Commercial applications are also available and appearing in
greater numbers every day.Because the source code for &os; itself is generally
available, the system can also be customized to an almost
unheard of degree for special applications or projects, and in
ways not generally possible with operating systems from most
major commercial vendors. Here is just a sampling of some of
the applications in which people are currently using
&os;:Internet Services: The robust
TCP/IP networking built into &os; makes it an ideal
platform for a variety of Internet services such
as:World Wide Web serversweb servers
(standard or secure [SSL])IPv4 and IPv6 routingFirewallsfirewall
and NATNAT
(IP masquerading) gatewaysFTP serversFTP serverselectronic mailemailemail
Electronic Mail serversAnd more...Education: Are you a student of
computer science or a related engineering field? There
is no better way of learning about operating systems,
computer architecture and networking than the hands-on,
under-the-hood experience that &os; can provide. A number
of freely available CAD, mathematical and graphic design
packages also make it highly useful to those whose primary
interest in a computer is to get
other work done!Research: With source code for
the entire system available, &os; is an excellent platform
for research in operating systems as well as other
branches of computer science. &os;'s freely available
nature also makes it possible for remote groups to
collaborate on ideas or shared development without having
to worry about special licensing agreements or limitations
on what may be discussed in open forums.Networking: Need a new
router?router A name server (DNS)?DNS Server A firewall to keep people out of your
internal network? &os; can easily turn that unused
PC sitting in the corner into an advanced router with
sophisticated packet-filtering capabilities.Embedded: &os; makes an
excellent platform to build embedded systems upon.
embedded
With support for the &arm;, &mips; and &powerpc;
platforms, coupled with a robust network stack, cutting
edge features and the permissive BSD
license &os; makes an excellent foundation for
building embedded routers, firewalls, and other
devices.X Window SystemGNOMEKDEDesktop: &os; makes a
fine choice for an inexpensive desktop solution
using the freely available X11 server.
&os; offers a choice from many open-source desktop
environments, including the standard
GNOME and
KDE graphical user interfaces.
&os; can even boot diskless from
a central server, making individual workstations
even cheaper and easier to administer.Software Development: The basic
&os; system comes with a full complement of development
tools including a full
C/C++Compiler
compiler and debugger suite.
Support for many other languages is also available
through the ports and packages collection.&os; is available to download free of charge, or can be
obtained on either CD-ROM or DVD. Please see
for more information about obtaining
&os;.Who Uses &os;?userslarge sites running &os;&os;'s advanced features, proven security, predictable
release cycle, and permissive license have led to its use as a
platform for building many commercial and open source
appliances, devices, and products. Many of the world's
largest IT companies use &os;:Apache
Apache - The Apache Software Foundation runs most of
its public facing infrastructure, including possibly one
of the largest SVN repositories in the world with over 1.4
million commits, on &os;.Apple
Apple - OS X borrows heavily from &os; for the
network stack, virtual file system, and many userland
components. Apple iOS also contains elements borrowed
from &os;.Cisco
Cisco - IronPort network security and anti-spam
appliances run a modified &os; kernel.Citrix
Citrix - The NetScaler line of security appliances
provide layer 4-7 load balancing, content caching,
application firewall, secure VPN, and mobile cloud network
access, along with the power of a &os; shell.Dell
KACE
Dell KACE - The KACE system management appliances run
&os; because of its reliability, scalability, and the
community that supports its continued development.Experts
Exchange
Experts Exchange - All public facing web servers are powered
by &os; and they make extensive use of jails to isolate
development and testing environments without the overhead
of virtualization.Isilon
Isilon - Isilon's enterprise storage appliances
are based on &os;. The extremely liberal &os; license
allowed Isilon to integrate their intellectual property
throughout the kernel and focus on building their product
instead of an operating system.iXsystems
iXsystems - The TrueNAS line of unified storage
appliances is based on &os;. In addition to their
commercial products, iXsystems also manages development of
the open source projects PC-BSD and FreeNAS.Juniper
Juniper - The JunOS operating system that powers all
Juniper networking gear (including routers, switches,
security, and networking appliances) is based on &os;.
Juniper is one of many vendors that showcases the
symbiotic relationship between the project and vendors of
commercial products. Improvements generated at Juniper
are upstreamed into &os; to reduce the complexity of
integrating new features from &os; back into JunOS in the
future.McAfee
McAfee - SecurOS, the basis of McAfee enterprise
firewall products including Sidewinder is based on
&os;.NetApp
NetApp - The Data ONTAP GX line of storage
appliances are based on &os;. In addition, NetApp has
contributed back many features, including the new BSD
licensed hypervisor, bhyve.Netflix
Netflix - The OpenConnect appliance that Netflix
uses to stream movies to its customers is based on &os;.
Netflix has made extensive contributions to the codebase
and works to maintain a zero delta from mainline &os;.
Netflix OpenConnect appliances are responsible for
delivering more than 32% of all Internet traffic in North
America.Sandvine
Sandvine - Sandvine uses &os; as the basis of their
high performance realtime network processing platforms
that make up their intelligent network policy control
products.Sony
Sony - The PlayStation 4 gaming console runs a
modified version of &os;.Sophos
Sophos - The Sophos Email Appliance product is based
on a hardened &os; and scans inbound mail for spam and
viruses, while also monitoring outbound mail for malware
as well as the accidental loss of sensitive
information.Spectra
Logic
Spectra Logic - The nTier line of archive grade storage
appliances run &os; and OpenZFS.The Weather
Channel
The Weather Channel - The IntelliStar appliance that is installed
at each local cable provider's headend and is responsible
for injecting local weather forecasts into the cable TV
network's programming runs &os;.Verisign
Verisign - Verisign is responsible for operating the
.com and .net root domain registries as well as the
accompanying DNS infrastructure. They rely on a number of
different network operating systems including &os; to
ensure there is no common point of failure in their
infrastructure.Voxer
Voxer - Voxer powers their mobile voice messaging
platform with ZFS on &os;. Voxer switched from a Solaris
derivative to &os; because of its superior documentation,
larger and more active community, and more developer
friendly environment. In addition to critical features
like ZFS and DTrace, &os; also offers
TRIM support for ZFS.WhatsApp
WhatsApp - When WhatsApp needed a platform that would
be able to handle more than 1 million concurrent TCP
connections per server, they chose &os;. They then
proceeded to scale past 2.5 million connections per
server.Wheel
Systems
Wheel Systems - The FUDO security appliance allows
enterprises to monitor, control, record, and audit
contractors and administrators who work on their systems.
Based on all of the best security features of &os;
including ZFS, GELI, Capsicum, HAST, and
auditdistd.&os; has also spawned a number of related open source
projects:BSD
Router
BSD Router - A &os; based replacement for large
enterprise routers designed to run on standard PC
hardware.FreeNAS
FreeNAS - A customized &os; designed to be used as a
network file server appliance. Provides a Python-based
web interface to simplify the management of both the UFS
and ZFS file systems. Includes support for NFS, SMB/CIFS,
AFP, FTP, and iSCSI. Includes an extensible plugin system
based on &os; jails.GhostBSD
GhostBSD - A desktop oriented distribution of &os;
bundled with the Gnome desktop environment.mfsBSD
mfsBSD - A toolkit for building a &os; system image
that runs entirely from memory.NAS4Free
NAS4Free - A file server distribution based on &os;
with a PHP powered web interface.OPNSense
OPNsense
- - OPNsense is an open source, easy-to-use and
- easy-to-build FreeBSD based firewall and routing platform.
- OPNsense includes most of the features available in expensive
- commercial firewalls, and more in many cases. It brings the
- rich feature set of commercial offerings with the benefits of
- open and verifiable sources.
+ - OPNsense is an open source, easy-to-use and
+ easy-to-build FreeBSD based firewall and routing platform.
+ OPNsense includes most of the features available in
+ expensive commercial firewalls, and more in many cases.
+ It brings the rich feature set of commercial offerings
+ with the benefits of open and verifiable sources.
PC-BSD
PC-BSD - A customized version of &os; geared towards
desktop users with graphical utilities exposing the
power of &os; to all users. Designed to ease the
transition of Windows and OS X users.pfSense
pfSense - A firewall distribution based on &os; with
a huge array of features and extensive IPv6
support.ZRouter
ZRouter - An open source alternative firmware for
embedded devices based on &os;. Designed to replace the
proprietary firmware on off-the-shelf routers.&os; is also used to power some of the biggest sites on
the Internet, including:Yahoo!
Yahoo!Yandex
YandexRambler
RamblerSina
SinaPair
Networks
Pair NetworksSony
Japan
Sony JapanNetcraft
NetcraftNetflix
NetflixNetEase
NetEaseWeathernews
WeathernewsTELEHOUSE
America
TELEHOUSE Americaand many more. Wikipedia also maintains a list
of products based on &os;.About the &os; ProjectThe following section provides some background information
on the project, including a brief history, project goals, and
the development model of the project.A Brief History of &os;386BSD PatchkitHubbard, JordanWilliams, NateGrimes, RodFreeBSD ProjecthistoryThe &os; Project had its genesis in the early part
of 1993, partially as an outgrowth of the Unofficial
386BSDPatchkit by the patchkit's last 3 coordinators: Nate
Williams, Rod Grimes and Jordan Hubbard.386BSDThe original goal was to produce an intermediate snapshot
of 386BSD in order to fix a number of problems with it that
the patchkit mechanism just was not capable of solving. The
early working title for the project was 386BSD 0.5 or 386BSD
Interim in reference to that fact.Jolitz, Bill386BSD was Bill Jolitz's operating system, which had been
up to that point suffering rather severely from almost a
year's worth of neglect. As the patchkit swelled ever more
uncomfortably with each passing day, they decided to assist
Bill by providing this interim cleanup
snapshot. Those plans came to a rude halt when Bill Jolitz
suddenly decided to withdraw his sanction from the project
without any clear indication of what would be done
instead.Greenman, DavidWalnut Creek CDROMThe trio thought that the goal remained worthwhile, even
without Bill's support, and so they adopted the name "&os;"
coined by David Greenman. The initial objectives were set
after consulting with the system's current users and, once it
became clear that the project was on the road to perhaps even
becoming a reality, Jordan contacted Walnut Creek CDROM with
an eye toward improving &os;'s distribution channels for those
many unfortunates without easy access to the Internet. Walnut
Creek CDROM not only supported the idea of distributing &os;
on CD but also went so far as to provide the project with a
machine to work on and a fast Internet connection. Without
Walnut Creek CDROM's almost unprecedented degree of faith in
what was, at the time, a completely unknown project, it is
quite unlikely that &os; would have gotten as far, as fast, as
it has today.4.3BSD-LiteNet/2U.C. Berkeley386BSDFree Software
FoundationThe first CD-ROM (and general net-wide) distribution was
&os; 1.0, released in December of 1993. This was based
on the 4.3BSD-Lite (Net/2) tape from U.C.
Berkeley, with many components also provided by 386BSD and the
Free Software Foundation. It was a fairly reasonable success
for a first offering, and they followed it with the highly
successful &os; 1.1 release in May of 1994.NovellU.C. BerkeleyNet/2AT&TAround this time, some rather unexpected storm clouds
formed on the horizon as Novell and U.C. Berkeley settled
their long-running lawsuit over the legal status of the
Berkeley Net/2 tape. A condition of that settlement was U.C.
Berkeley's concession that large parts of Net/2 were
encumbered code and the property of Novell, who
had in turn acquired it from AT&T some time previously.
What Berkeley got in return was Novell's
blessing that the 4.4BSD-Lite release, when
it was finally released, would be declared unencumbered and
all existing Net/2 users would be strongly encouraged to
switch. This included &os;, and the project was given until
the end of July 1994 to stop shipping its own Net/2 based
product. Under the terms of that agreement, the project was
allowed one last release before the deadline, that release
being &os; 1.1.5.1.&os; then set about the arduous task of literally
re-inventing itself from a completely new and rather
incomplete set of 4.4BSD-Lite bits. The Lite
releases were light in part because Berkeley's CSRG had
removed large chunks of code required for actually
constructing a bootable running system (due to various legal
requirements) and the fact that the Intel port of 4.4 was
highly incomplete. It took the project until November of 1994
to make this transition, and in December it released
&os; 2.0 to the world. Despite being still more than a
little rough around the edges, the release was a significant
success and was followed by the more robust and easier to
install &os; 2.0.5 release in June of 1995.Since that time, &os; has made a series of releases each
time improving the stability, speed, and feature set of the
previous version.For now, long-term development projects continue to take
place in the 10.X-CURRENT (trunk) branch, and snapshot
releases of 10.X are continually made available from the
snapshot server as work progresses.&os; Project GoalsJordanHubbardContributed by FreeBSD ProjectgoalsThe goals of the &os; Project are to provide software
that may be used for any purpose and without strings attached.
Many of us have a significant investment in the code (and
project) and would certainly not mind a little financial
compensation now and then, but we are definitely not prepared
to insist on it. We believe that our first and foremost
mission is to provide code to any and all
comers, and for whatever purpose, so that the code gets the
widest possible use and provides the widest possible benefit.
This is, I believe, one of the most fundamental goals of Free
Software and one that we enthusiastically support.GNU General Public License (GPL)GNU Lesser General Public License (LGPL)BSD CopyrightThat code in our source tree which falls under the GNU
General Public License (GPL) or Library General Public License
(LGPL) comes with slightly more strings attached, though at
least on the side of enforced access rather than the usual
opposite. Due to the additional complexities that can evolve
in the commercial use of GPL software we do, however, prefer
software submitted under the more relaxed BSD copyright when
it is a reasonable option to do so.The &os; Development ModelSatoshiAsamiContributed by FreeBSD Projectdevelopment modelThe development of &os; is a very open and flexible
process, being literally built from the contributions of
thousands of people around the world, as can be seen from our
list
of contributors. &os;'s development infrastructure
allows these thousands of contributors to collaborate over the
Internet. We are constantly on the lookout for new developers
and ideas, and those interested in becoming more closely
involved with the project need simply contact us at the
&a.hackers;. The &a.announce; is also available to those
wishing to make other &os; users aware of major areas of
work.Useful things to know about the &os; Project and its
development process, whether working independently or in close
cooperation:The SVN repositoriesCVSCVS RepositoryConcurrent Versions SystemCVSSubversionSubversion RepositorySVNSubversion
For several years, the central source tree for &os;
was maintained by
CVS
(Concurrent Versions System), a freely available source
code control tool. In June 2008, the Project switched
to using SVN
(Subversion). The switch was deemed necessary, as the
technical limitations imposed by
CVS were becoming obvious due
to the rapid expansion of the source tree and the amount
of history already stored. The Documentation Project
and Ports Collection repositories also moved from
CVS to
SVN in May 2012 and July
2012, respectively. Please refer to the Synchronizing your source
tree section for more information on obtaining
the &os; src/ repository and Using the Ports
Collection for details on obtaining the &os;
Ports Collection.The committers listThe committerscommitters are the people who have
write access to the Subversion
tree, and are authorized to make modifications to the
&os; source (the term committer comes
from commit, the source control
command which is used to bring new changes into the
repository). Anyone can submit a bug to the Bug
Database. Before submitting a bug report, the
&os; mailing lists, IRC channels, or forums can be used to
help verify that an issue is actually a bug.The FreeBSD core teamThe &os; core teamcore team would be equivalent to the board of
directors if the &os; Project were a company. The
primary task of the core team is to make sure the
project, as a whole, is in good shape and is heading in
the right directions. Inviting dedicated and
responsible developers to join our group of committers
is one of the functions of the core team, as is the
recruitment of new core team members as others move on.
The current core team was elected from a pool of
committer candidates in July 2014. Elections are held
every 2 years.Like most developers, most members of the
core team are also volunteers when
it comes to &os; development and do not benefit from
the project financially, so commitment
should also not be misconstrued as meaning
guaranteed support. The
board of directors analogy above is not
very accurate, and it may be more suitable to say that
these are the people who gave up their lives in favor
of &os; against their better judgement!Outside contributorsLast, but definitely not least, the largest group of
developers are the users themselves who provide feedback
and bug fixes to us on an almost constant basis. The
primary way of keeping in touch with &os;'s decentralized
development is to subscribe to the
&a.hackers; where such things are discussed. See
for more information about
the various &os; mailing lists.The
&os; Contributors Listcontributors is a long and growing one, so why not join
it by contributing something back to &os; today?Providing code is not the only way of contributing
to the project; for a more complete list of things that
need doing, please refer to the &os; Project
web site.In summary, our development model is organized as a loose
set of concentric circles. The centralized model is designed
for the convenience of the users of &os;,
who are provided with an easy way of tracking one central code
base, not to keep potential contributors out! Our desire is to
present a stable operating system with a large set of coherent
application programs that the
users can easily install and use — this model works very
well in accomplishing that.All we ask of those who would join us as &os; developers
is some of the same dedication its current people have to its
continued success!Third Party ProgramsIn addition to the base distributions, &os; offers a
ported software collection with thousands of commonly
sought-after programs. At the time of this writing, there
were over &os.numports; ports! The list of ports ranges from
HTTP servers to games, languages, editors, and almost
everything in between. The entire Ports Collection requires
approximately &ports.size;. To compile a port, you simply
change to the directory of the program you wish to install,
type make install, and let the system do
the rest. The full original distribution for each port you
build is retrieved dynamically so you need only enough disk
space to build the ports you want. Almost every port is also
provided as a pre-compiled package, which can
be installed with a simple command
(pkg install) by those who do not wish to
compile their own ports from source. More information on
packages and ports can be found in
.Additional DocumentationAll recent &os; versions provide an option in the
installer (either &man.sysinstall.8; or &man.bsdinstall.8;) to
install additional documentation under
/usr/local/share/doc/freebsd during the
initial system setup. Documentation may also be installed at
any later time using packages as described in
. You may view the
locally installed manuals with any HTML capable browser using
the following URLs:The FreeBSD Handbook/usr/local/share/doc/freebsd/handbook/index.htmlThe FreeBSD FAQ/usr/local/share/doc/freebsd/faq/index.htmlYou can also view the master (and most frequently updated)
copies at http://www.FreeBSD.org/.
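The port and package workflow described under Third Party Programs can be sketched as follows. This is only an illustrative sketch; the choice of the www/nginx port and the nginx package name are assumptions, not taken from the text:

```shell
# Build and install a port from source; the Ports Collection fetches the
# original distribution files automatically (www/nginx is a hypothetical example)
cd /usr/ports/www/nginx
make install clean

# Or install the same software as a pre-compiled package instead
pkg install nginx
```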
Index: head/en_US.ISO8859-1/books/handbook/mail/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/mail/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/mail/chapter.xml (revision 48529)
@@ -1,1909 +1,1951 @@
Electronic MailBillLloydOriginal
work by JimMockRewritten
by SynopsisemailElectronic Mail, better known as email, is
one of the most widely used forms of communication today. This
chapter provides a basic introduction to running a mail server
on &os;, as well as an introduction to sending and receiving
email using &os;. For more complete coverage of this subject,
refer to the books listed in .After reading this chapter, you will know:Which software components are involved in sending and
receiving electronic mail.Where basic Sendmail
configuration files are located in &os;.The difference between remote and local
mailboxes.How to block spammers from illegally using a mail server
as a relay.How to install and configure an alternate Mail Transfer
Agent, replacing
Sendmail.How to troubleshoot common mail server problems.How to set up the system to send mail only.How to use mail with a dialup connection.How to configure SMTP authentication for added
security.How to install and use a Mail User Agent, such as
mutt, to send and receive
email.How to download mail from a remote
POP or IMAP
server.How to automatically apply filters and rules to incoming
email.Before reading this chapter, you should:Properly set up a network connection ().Properly set up the DNS information
for a mail host ().Know how to install additional third-party software
().Mail ComponentsPOPIMAPDNSmail server daemonsSendmailmail server daemonsPostfixmail server daemonsqmailmail server daemonsEximemailreceivingMX recordmail hostThere are five major parts involved in an email exchange:
the Mail User Agent (MUA), the Mail Transfer
Agent (MTA), a mail host, a remote or local
mailbox, and DNS. This section provides an
overview of these components.Mail User Agent (MUA)The Mail User Agent (MUA) is an
application which is used to compose, send, and receive
emails. This application can be a command line program,
such as the built-in mail utility or a
third-party application from the Ports Collection, such as
mutt,
alpine, or
elm. Dozens of graphical
programs are also available in the Ports Collection,
including Claws Mail,
Evolution, and
Thunderbird. Some
organizations provide a web mail program which can be
accessed through a web browser. More information about
installing and using a MUA on &os; can
be found in .Mail Transfer Agent (MTA)The Mail Transfer Agent (MTA) is
responsible for receiving incoming mail and delivering
outgoing mail. &os; ships with
Sendmail as the default
MTA, but it also supports numerous
other mail server daemons, including
Exim,
Postfix, and
qmail.
Sendmail configuration is
described in . If another
MTA is installed using the Ports
Collection, refer to its post-installation message for
&os;-specific configuration details and the application's
website for more general configuration
instructions.Mail Host and MailboxesThe mail host is a server that is responsible for
delivering and receiving mail for a host or a network.
The mail host collects all mail sent to the domain and
stores it either in the default mbox
or the alternative Maildir format, depending on the
configuration. Once mail has been stored, it may either
be read locally using a MUA or remotely
accessed and collected using protocols such as
POP or IMAP. If
mail is read locally, a POP or
IMAP server does not need to be
installed.To access mailboxes remotely, a POP
or IMAP server is required as these
protocols allow users to connect to their mailboxes from
remote locations. IMAP offers several
advantages over POP. These include the
ability to store a copy of messages on a remote server
after they are downloaded, and support for concurrent updates.
IMAP can be useful over low-speed links
as it allows users to fetch the structure of messages
without downloading them. It can also perform tasks such
as searching on the server in order to minimize data
transfer between clients and servers.Several POP and
IMAP servers are available in the Ports
Collection. These include
mail/qpopper,
mail/imap-uw,
mail/courier-imap, and
mail/dovecot2.It should be noted that both POP
and IMAP transmit information,
including username and password credentials, in
clear-text. To secure the transmission of information
across these protocols, consider tunneling sessions over
&man.ssh.1; ()
or using SSL ().Domain Name System (DNS)The Domain Name System (DNS) and
its daemon named play a large role in
the delivery of email. In order to deliver mail from one
site to another, the MTA will look up
the remote site in DNS to determine
which host will receive mail for the destination. This
process also occurs when mail is sent from a remote host
to the MTA.In addition to mapping hostnames to
IP addresses, DNS is
responsible for storing information specific to mail
delivery, known as Mail eXchanger
MX records. The MX
record specifies which hosts will receive mail for a
particular domain.To view the MX records for a
domain, specify the type of record. Refer to
&man.host.1; for more details about this command:&prompt.user; host -t mx FreeBSD.org
FreeBSD.org mail is handled by 10 mx1.FreeBSD.orgRefer to for more
information about DNS and its
configuration.
- Sendmail Configuration
- Files
+ Sendmail Configuration
+ Files
- ChristopherShumwayContributed
- by
+
+
+ Christopher
+ Shumway
+
+ Contributed by
+ SendmailSendmail is the default
MTA installed with &os;. It accepts mail
from MUAs and delivers it to the appropriate
mail host, as defined by its configuration.
Sendmail can also accept network
connections and deliver mail to local mailboxes or to another
program.The configuration files for
Sendmail are located in
/etc/mail. This section describes these
files in more detail./etc/mail/access/etc/mail/aliases/etc/mail/local-host-names/etc/mail/mailer.conf/etc/mail/mailertable/etc/mail/sendmail.cf/etc/mail/virtusertable/etc/mail/accessThis access database file defines which hosts or
IP addresses have access to the local
mail server and what kind of access they have. Hosts
listed as , which is the default
option, are allowed to send mail to this host as long as
the mail's final destination is the local machine. Hosts
listed as are rejected for all
mail connections. Hosts listed as
are allowed to send mail for any destination using this
mail server. Hosts listed as will
have their mail returned with the specified mail error.
If a host is listed as ,
Sendmail will abort the current
search for this entry without accepting or rejecting the
mail. Hosts listed as will
have their messages held and will receive the specified
text as the reason for the hold.Examples of using these options for both
IPv4 and IPv6
addresses can be found in the &os; sample configuration,
/etc/mail/access.sample:# $FreeBSD$
#
# Mail relay access control list. Default is to reject mail unless the
# destination is local, or listed in /etc/mail/local-host-names
#
## Examples (commented out for safety)
#From:cyberspammer.com ERROR:"550 We don't accept mail from spammers"
#From:okay.cyberspammer.com OK
#Connect:sendmail.org RELAY
#To:sendmail.org RELAY
#Connect:128.32 RELAY
#Connect:128.32.2 SKIP
#Connect:IPv6:1:2:3:4:5:6:7 RELAY
#Connect:suspicious.example.com QUARANTINE:Mail from suspicious host
#Connect:[127.0.0.3] OK
#Connect:[IPv6:1:2:3:4:5:6:7:8] OKTo configure the access database, use the format shown
in the sample to make entries in
/etc/mail/access, but do not put a
comment symbol (#) in front of the
entries. Create an entry for each host or network whose
access should be configured. Mail senders that match the
left side of the table are affected by the action on the
right side of the table.Whenever this file is updated, update its database and
restart Sendmail:&prompt.root; makemap hash /etc/mail/access < /etc/mail/access
&prompt.root; service sendmail restart/etc/mail/aliasesThis database file contains a list of virtual
mailboxes that are expanded to users, files, programs, or
other aliases. Here are a few entries to illustrate the
file format:root: localuser
ftp-bugs: joe,eric,paul
bit.bucket: /dev/null
procmail: "|/usr/local/bin/procmail"The mailbox name on the left side of the colon is
expanded to the target(s) on the right. The first entry
expands the root
mailbox to the localuser mailbox, which
is then looked up in the
/etc/mail/aliases database. If no
match is found, the message is delivered to localuser. The second
entry shows a mail list. Mail to ftp-bugs is expanded to
the three local mailboxes joe, eric, and paul. A remote mailbox
could be specified as
user@example.com. The third
entry shows how to write mail to a file, in this case
/dev/null. The last entry
demonstrates how to send mail to a program,
/usr/local/bin/procmail, through a
&unix; pipe. Refer to &man.aliases.5; for more
information about the format of this file.Whenever this file is updated, run
newaliases to update and initialize the
aliases database./etc/mail/sendmail.cfThis is the master configuration file for
Sendmail. It controls the
overall behavior of Sendmail,
including everything from rewriting email addresses to
printing rejection messages to remote mail servers.
Accordingly, this configuration file is quite complex.
Fortunately, this file rarely needs to be changed for
standard mail servers.The master Sendmail
configuration file can be built from &man.m4.1; macros
that define the features and behavior of
Sendmail. Refer to
/usr/src/contrib/sendmail/cf/README
for some of the details.Whenever changes to this file are made,
Sendmail needs to be restarted
for the changes to take effect./etc/mail/virtusertableThis database file maps mail addresses for virtual
domains and users to real mailboxes. These mailboxes can
be local, remote, aliases defined in
/etc/mail/aliases, or files. This
allows multiple virtual domains to be hosted on one
machine.&os; provides a sample configuration file in
/etc/mail/virtusertable.sample to
further demonstrate its format. The following example
demonstrates how to create custom entries using that
format:root@example.com root
postmaster@example.com postmaster@noc.example.net
@example.com joeThis file is processed in a first match order. When
an email address matches the address on the left, it is
mapped to the local mailbox listed on the right. The
format of the first entry in this example maps a specific
email address to a local mailbox, whereas the format of
the second entry maps a specific email address to a remote
mailbox. Finally, any email address from
example.com which has not matched any
of the previous entries will match the last mapping and be
sent to the local mailbox joe. When
creating custom entries, use this format and add them to
/etc/mail/virtusertable. Whenever
this file is edited, update its database and restart
Sendmail:&prompt.root; makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
&prompt.root; service sendmail restart/etc/mail/relay-domainsIn a default &os; installation,
Sendmail is configured to only
send mail from the host it is running on. For example, if
a POP server is available, users will
be able to check mail from remote locations but they will
not be able to send outgoing emails from outside
locations. Typically, a few moments after the attempt, an
email will be sent from MAILER-DAEMON
with a 5.7 Relaying Denied
message.The most straightforward solution is to add the
ISP's FQDN to
/etc/mail/relay-domains. If multiple
addresses are needed, add them one per
line:your.isp.example.com
other.isp.example.net
users-isp.example.org
www.example.org
- After creating or editing this file, restart
+ After creating or editing this file, restart
Sendmail with
service sendmail restart.Now any mail sent through the system by any host in
this list, provided the user has an account on the system,
will succeed. This allows users to send mail from the
system remotely without opening the system up to relaying
SPAM from the Internet.
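When sendmail.cf is regenerated from the m4 macro files described above, the base system's /etc/mail/Makefile can drive the whole process. This is a sketch assuming the standard &os; make targets in that Makefile:

```shell
# Regenerate, install, and reload the Sendmail configuration (FreeBSD base system)
cd /etc/mail
make           # build .cf files from the local .mc macro files
make install   # install the generated files as sendmail.cf
make restart   # restart Sendmail so the new configuration takes effect
```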
- Changing the Mail Transfer Agent
+ Changing the Mail Transfer Agent
- AndrewBoothmanWritten
- by
+
+
+ Andrew
+ Boothman
+
+ Written by
+
+
- GregoryNeil
- ShapiroInformation taken
- from emails written by
+
+
+ Gregory
+ Neil Shapiro
+
+ Information taken from emails written by
+ emailchange mta&os; comes with Sendmail already
installed as the MTA which is in charge of
outgoing and incoming mail. However, the system administrator
can change the system's MTA. A wide choice
of alternative MTAs is available from the
mail category of the &os; Ports
Collection.Once a new MTA is installed, configure
and test the new software before replacing
Sendmail. Refer to the documentation
of the new MTA for information on how to
configure the software.Once the new MTA is working, use the
instructions in this section to disable
Sendmail and configure &os; to use
the replacement MTA.Disable SendmailIf Sendmail's outgoing mail
service is disabled, it is important that it is replaced
with an alternative mail delivery system. Otherwise, system
functions such as &man.periodic.8; will be unable to deliver
their results by email. Many parts of the system expect a
functional MTA. If applications continue
to use Sendmail's binaries to try
to send email after they are disabled, mail could go into an
inactive Sendmail queue and
never be delivered.In order to completely disable
Sendmail, add or edit the following
lines in /etc/rc.conf:sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"To only disable Sendmail's
incoming mail service, use only this entry in
/etc/rc.conf:sendmail_enable="NO"More information on Sendmail's
startup options is available in &man.rc.sendmail.8;.Replace the Default MTAWhen a new MTA is installed using the
Ports Collection, its startup script is also installed and
startup instructions are mentioned in its package message.
Before starting the new MTA, stop the
running Sendmail processes. This
example stops all of these services, then starts the
Postfix service:&prompt.root; service sendmail stop
&prompt.root; service postfix startTo start the replacement MTA at system
boot, add its configuration line to
/etc/rc.conf. This entry enables the
Postfix MTA:postfix_enable="YES"Some extra configuration is needed as
Sendmail is so ubiquitous that some
software assumes it is already installed and configured.
Check /etc/periodic.conf and make sure
that these values are set to NO. If this
file does not exist, create it with these entries:daily_clean_hoststat_enable="NO"
daily_status_mail_rejects_enable="NO"
daily_status_include_submit_mailq="NO"
daily_submit_queuerun="NO"Some alternative MTAs provide their own
compatible implementations of the
Sendmail command-line interface in
order to facilitate using them as drop-in replacements for
Sendmail. However, some
MUAs may try to execute standard
Sendmail binaries instead of the
new MTA's binaries. &os; uses
/etc/mail/mailer.conf to map the expected
Sendmail binaries to the location
of the new binaries. More information about this mapping can
be found in &man.mailwrapper.8;.The default /etc/mail/mailer.conf
looks like this:# $FreeBSD$
#
# Execute the "real" sendmail program, named /usr/libexec/sendmail/sendmail
#
sendmail /usr/libexec/sendmail/sendmail
send-mail /usr/libexec/sendmail/sendmail
mailq /usr/libexec/sendmail/sendmail
newaliases /usr/libexec/sendmail/sendmail
hoststat /usr/libexec/sendmail/sendmail
purgestat /usr/libexec/sendmail/sendmailWhen any of the commands listed on the left are run, the
system actually executes the associated command shown on the
right. This system makes it easy to change what binaries are
executed when these default binaries are invoked.Some MTAs, when installed using the
Ports Collection, will prompt to update this file for the new
binaries. For example, Postfix
will update the file like this:#
# Execute the Postfix sendmail program, named /usr/local/sbin/sendmail
#
sendmail /usr/local/sbin/sendmail
send-mail /usr/local/sbin/sendmail
mailq /usr/local/sbin/sendmail
newaliases /usr/local/sbin/sendmailIf the installation of the MTA does
not automatically update
/etc/mail/mailer.conf, edit this file in
a text editor so that it points to the new binaries. This
example points to the binaries installed by
mail/ssmtp:sendmail /usr/local/sbin/ssmtp
send-mail /usr/local/sbin/ssmtp
mailq /usr/libexec/sendmail/sendmail
newaliases /usr/libexec/sendmail/sendmail
hoststat /usr/libexec/sendmail/sendmail
purgestat /usr/libexec/sendmail/sendmailOnce everything is configured, it is recommended to reboot
the system. Rebooting provides the opportunity to ensure that
the system is correctly configured to start the new
MTA automatically on boot.TroubleshootingemailtroubleshootingWhy do I have to use the FQDN for hosts on my
site?The host may actually be in a different domain. For
example, in order for a host in foo.bar.edu to reach a
host called mumble in the
bar.edu
domain, refer to it by the Fully-Qualified Domain Name
FQDN, mumble.bar.edu,
instead of just mumble.This is because the version of
BINDBIND which ships with &os;
no longer provides default abbreviations for non-FQDNs
other than the local domain. An unqualified host such as
mumble must either be found as
mumble.foo.bar.edu, or
it will be searched for in the root domain.In older versions of BIND,
the search continued across mumble.bar.edu, and
mumble.edu.
RFC 1535 details why this is considered bad practice or
even a security hole.As a good workaround, place the line:search foo.bar.edu bar.eduinstead of the previous:domain foo.bar.eduinto /etc/resolv.conf. However,
make sure that the search order does not go beyond the
boundary between local and public
administration, as RFC 1535 calls it.
-
-
- How can I run a mail server on a dial-up PPP
- host?
-
+
+
+ How can I run a mail server on a dial-up PPP
+ host?
+
-
- Connect to a &os; mail gateway on the LAN. The PPP
- connection is non-dedicated.
+
+ Connect to a &os; mail gateway on the LAN. The PPP
+ connection is non-dedicated.
- One way to do this is to get a full-time Internet server
- to provide secondary MX
- MX record services for the
- domain. In this example, the domain is example.com and the ISP
- has configured example.net to provide
- secondary MX services to the
- domain:
+ One way to do this is to get a full-time Internet
+ server to provide secondary
+ MX
+ MX record
+ services for the domain. In this example, the domain is
+ example.com
+ and the ISP has configured
+ example.net
+ to provide secondary MX services to the
+ domain:
- example.com. MX 10 example.com.
+ example.com. MX 10 example.com.
MX 20 example.net.
- Only one host should be specified as the final
- recipient. For Sendmail, add
- Cw example.com in
- /etc/mail/sendmail.cf on example.com.
+ Only one host should be specified as the final
+ recipient. For Sendmail, add
+ Cw example.com in
+ /etc/mail/sendmail.cf on example.com.
- When the sending MTA attempts
- to deliver mail, it will try to connect to the system,
- example.com,
- over the PPP link. This will time out if the destination is
- offline. The MTA will automatically
- deliver it to the secondary MX site at
- the Internet Service Provider (ISP),
- example.net.
- The secondary MX site will periodically
- try to connect to the primary MX host,
- example.com.
+ When the sending MTA attempts
+ to deliver mail, it will try to connect to the system,
+ example.com,
+ over the PPP link. This will time out if the destination
+ is offline. The MTA will automatically
+ deliver it to the secondary MX site at
+ the Internet Service Provider (ISP),
+ example.net.
+ The secondary MX site will periodically
+ try to connect to the primary MX host,
+ example.com.
- Use something like this as a login script:
+ Use something like this as a login script:
- #!/bin/sh
+ #!/bin/sh
# Put me in /usr/local/bin/pppmyisp
( sleep 60 ; /usr/sbin/sendmail -q ) &
/usr/sbin/ppp -direct pppmyisp
- When creating a separate login script for users, instead
- use sendmail -qRexample.com in the script
- above. This will force all mail in the queue for
- example.com to
- be processed immediately.
+ When creating a separate login script for users,
+ instead use sendmail -qRexample.com in
+ the script above. This will force all mail in the queue
+ for
+ example.com
+ to be processed immediately.
- A further refinement of the situation can be seen from
- this example from the &a.isp;:
+ A further refinement of the situation can be seen from
+ this example from the &a.isp;:
- > we provide the secondary MX for a customer. The customer connects to
+ > we provide the secondary MX for a customer. The customer connects to
> our services several times a day automatically to get the mails to
> his primary MX (We do not call his site when a mail for his domains
> arrived). Our sendmail sends the mailqueue every 30 minutes. At the
> moment he has to stay 30 minutes online to be sure that all mail is
> gone to the primary MX.
>
> Is there a command that would initiate sendmail to send all the mails
> now? The user has not root-privileges on our machine of course.
In the privacy flags section of sendmail.cf, there is a
definition Opgoaway,restrictqrun
Remove restrictqrun to allow non-root users to start the queue processing.
You might also like to rearrange the MXs. We are the 1st MX for our
customers like this, and we have defined:
# If we are the best MX for a host, try directly instead of generating
# local config error.
OwTrue
That way a remote site will deliver straight to you, without trying
the customer connection. You then send to your customer. Only works for
hosts, so you need to get your customer to name their mail
machine customer.com as well as
hostname.customer.com in the DNS. Just put an A record in
the DNS for customer.com.Advanced TopicsThis section covers more involved topics such as mail
configuration and setting up mail for an entire domain.Basic ConfigurationemailconfigurationOut of the box, one can send email to external hosts as
long as /etc/resolv.conf is configured or
the network has access to a configured DNS
server. To have email delivered to the MTA
on the &os; host, do one of the following:Run a DNS server for the
domain.Get mail delivered directly to the
FQDN for the machine.SMTPIn order to have mail delivered directly to a host, it
must have a permanent static IP address, not a dynamic IP
address. If the system is behind a firewall, it must be
configured to allow SMTP traffic. To receive mail directly at
a host, one of these two must be configured:Make sure that the lowest-numbered
MXMX
record record in
DNS points to the host's static IP
address.Make sure there is no MX entry in
the DNS for the host.Either of the above will allow mail to be received
directly at the host.Try this:&prompt.root; hostname
example.FreeBSD.org
&prompt.root; host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX
- In this example, mail sent directly to yourlogin@example.FreeBSD.org should
- work without problems, assuming
+ In this example, mail sent directly to
+ yourlogin@example.FreeBSD.org
+ should work without problems, assuming
Sendmail is running correctly on
example.FreeBSD.org.For this example:&prompt.root; host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX
example.FreeBSD.org mail is handled (pri=10) by nevdull.FreeBSD.orgAll mail sent to example.FreeBSD.org will
be collected on hub under the same
username instead of being sent directly to your host.The above information is handled by the
DNS server. The DNS
record that carries mail routing information is the
MX entry. If no MX
record exists, mail will be delivered directly to the host by
way of its IP address.The MX entry for freefall.FreeBSD.org at
one time looked like this:freefall MX 30 mail.crl.net
freefall MX 40 agora.rdrop.com
freefall MX 10 freefall.FreeBSD.org
freefall MX 20 who.cdrom.comfreefall had many
MX entries. The lowest
MX number is the host that receives mail
directly, if available. If it is not accessible for some
reason, the next higher-numbered host will accept messages
temporarily, and pass them along when a lower-numbered host
becomes available.Alternate MX sites should have separate
Internet connections in order to be most useful. Your
ISP can provide this service.Mail for a DomainWhen configuring a MTA for a network,
any mail sent to hosts in its domain should be diverted to the
MTA so that users can receive their mail on
the master mail server.DNSTo make life easiest, a user account with the same
username should exist on both the
MTA and the system with the
MUA. Use &man.adduser.8; to create the
user accounts.The MTA must be the designated mail
exchanger for each workstation on the network. This is done
in the DNS configuration with an
MX record:example.FreeBSD.org A 204.216.27.XX ; Workstation
MX 10 nevdull.FreeBSD.org ; MailhostThis will redirect mail for the workstation to the
MTA no matter where the A record points.
The mail is sent to the MX host.This must be configured on a DNS
server. If the network does not run its own
DNS server, talk to the
ISP or DNS
provider.The following is an example of virtual email hosting.
Consider a customer with the domain customer1.org, where all
the mail for customer1.org should be
sent to mail.myhost.com. The
DNS entry should look like this:customer1.org MX 10 mail.myhost.comAn A record is
not needed for customer1.org in order to
only handle email for that domain. However, running
ping against customer1.org will not
work unless an A record exists for
it.Tell the MTA which domains and/or
hostnames it should accept mail for. Either of the following
will work for Sendmail:Add the hosts to
/etc/mail/local-host-names when
using the FEATURE(use_cw_file).Add a Cwyour.host.com line to
/etc/sendmail.cf.
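As a sketch of the first approach, /etc/mail/local-host-names is a plain list with one name per line; the hostnames below are examples standing in for the hosts and domains this MTA should accept mail for:

```
example.FreeBSD.org
customer1.org
```

After editing the file, restart Sendmail so it rereads the list.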
- Setting Up to Send Only
+ Setting Up to Send Only
- BillMoranContributed
- by
+
+
+ Bill
+ Moran
+
+ Contributed by
+ There are many instances where one may only want to send
mail through a relay. Some examples are:The computer is a desktop machine that needs to use
programs such as &man.send-pr.1;, using the
ISP's mail relay.The computer is a server that does not handle mail
locally, but needs to pass off all mail to a relay for
processing.While any MTA is capable of filling
this particular niche, it can be difficult to properly configure
a full-featured MTA just to handle offloading
mail. Programs such as Sendmail and
Postfix are overkill for this
use.Additionally, a typical Internet access service agreement
may forbid one from running a mail server.The easiest way to fulfill those needs is to install the
mail/ssmtp port:&prompt.root; cd /usr/ports/mail/ssmtp
&prompt.root; make install replace cleanOnce installed, mail/ssmtp can be
configured with
/usr/local/etc/ssmtp/ssmtp.conf:root=yourrealemail@example.com
mailhub=mail.example.com
rewriteDomain=example.com
hostname=_HOSTNAME_Use the real email address for root. Enter the
ISP's outgoing mail relay in place of
mail.example.com.
Some ISPs call this the outgoing mail
server or SMTP server.Make sure to disable Sendmail,
including the outgoing mail service. See for details.mail/ssmtp has some other options
available. Refer to the examples in
/usr/local/etc/ssmtp or the manual page
of ssmtp for more information.Setting up ssmtp in this manner
allows any software on the computer that needs to send mail to
function properly, while not violating the
ISP's usage policy or allowing the computer
to be hijacked for spamming.Using Mail with a Dialup ConnectionWhen using a static IP address, one should not need to
adjust the default configuration. Set the hostname to the
assigned Internet name and Sendmail
will do the rest.When using a dynamically assigned IP address and a dialup
PPP connection to the Internet, one usually has a mailbox on the
ISP's mail server. In this example, the
ISP's domain is example.net, the user name
is user, the hostname
is bsd.home, and
the ISP has allowed relay.example.net as a mail
relay.In order to retrieve mail from the ISP's
mailbox, install a retrieval agent from the Ports Collection.
mail/fetchmail is a good choice as it
supports many different protocols. Usually, the
ISP will provide POP.
When using user PPP, email can be
automatically fetched when an Internet connection is established
with the following entry in
/etc/ppp/ppp.linkup:MYADDR:
!bg su user -c fetchmailWhen using Sendmail to deliver
mail to non-local accounts, configure
Sendmail to process the mail queue as
soon as the Internet connection is established. To do this, add
this line after the above fetchmail entry in
/etc/ppp/ppp.linkup: !bg su user -c "sendmail -q"In this example, there is an account for
user on bsd.home. In the home
directory of user on
bsd.home, create a
.fetchmailrc which contains this
line:poll example.net protocol pop3 fetchall pass MySecretThis file should not be readable by anyone except
user as it contains
the password MySecret.In order to send mail with the correct
from: header, configure
Sendmail to use
user@example.net rather than user@bsd.home and to send all mail via
relay.example.net,
allowing quicker mail transmission.The following .mc should
suffice:VERSIONID(`bsd.home.mc version 1.0')
OSTYPE(bsd4.4)dnl
FEATURE(nouucp)dnl
MAILER(local)dnl
MAILER(smtp)dnl
Cwlocalhost
Cwbsd.home
MASQUERADE_AS(`example.net')dnl
FEATURE(allmasquerade)dnl
FEATURE(masquerade_envelope)dnl
FEATURE(nocanonify)dnl
FEATURE(nodns)dnl
define(`SMART_HOST', `relay.example.net')
Dmbsd.home
define(`confDOMAIN_NAME',`bsd.home')dnl
define(`confDELIVERY_MODE',`deferred')dnlRefer to the previous section for details of how to convert
this file into the sendmail.cf format. Do
not forget to restart Sendmail after
updating sendmail.cf.
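Assuming the .mc above was saved under /etc/mail with a name matching the host, the conversion and restart can be sketched with the make targets provided by /etc/mail/Makefile:

```
&prompt.root; cd /etc/mail
&prompt.root; make
&prompt.root; make install restart
```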
- SMTP Authentication
+ SMTP Authentication
- JamesGorhamWritten
- by
+
+
+ James
+ Gorham
+
+ Written by
+ Configuring SMTP authentication on the
MTA provides a number of benefits.
SMTP authentication adds a layer
of security to Sendmail, and provides
mobile users who switch hosts the ability to use the same
MTA without the need to reconfigure their
mail client's settings each time.Install security/cyrus-sasl2
from the Ports Collection. This port supports a number of
compile-time options. For the SMTP authentication method
demonstrated in this example, make sure that the required
authentication mechanism is not disabled.After installing
security/cyrus-sasl2, edit
/usr/local/lib/sasl2/Sendmail.conf,
or create it if it does not exist, and add the following
line:pwcheck_method: saslauthdNext, install
security/cyrus-sasl2-saslauthd and add
the following line to
/etc/rc.conf:saslauthd_enable="YES"Finally, start the saslauthd daemon:&prompt.root; service saslauthd startThis daemon serves as a broker for
Sendmail to authenticate against
the &os; &man.passwd.5; database. This saves the trouble of
creating a new set of usernames and passwords for each user
that needs to use SMTP authentication,
and keeps the login and mail password the same.Next, edit /etc/make.conf and add
the following lines:SENDMAIL_CFLAGS=-I/usr/local/include/sasl -DSASL
SENDMAIL_LDFLAGS=-L/usr/local/lib
SENDMAIL_LDADD=-lsasl2These lines provide Sendmail
the proper configuration options for linking to
cyrus-sasl2 at compile time. Make sure
that cyrus-sasl2 has been installed
before recompiling
Sendmail.Recompile Sendmail by
executing the following commands:&prompt.root; cd /usr/src/lib/libsmutil
&prompt.root; make cleandir && make obj && make
&prompt.root; cd /usr/src/lib/libsm
&prompt.root; make cleandir && make obj && make
&prompt.root; cd /usr/src/usr.sbin/sendmail
&prompt.root; make cleandir && make obj && make && make installThis compile should not have any problems if
/usr/src has not changed extensively
and the shared libraries it needs are available.After Sendmail has been
compiled and reinstalled, edit
/etc/mail/freebsd.mc or the local
.mc. Many administrators choose
to use the output from &man.hostname.1; as the name of
.mc for uniqueness. Add these
lines:dnl set SASL options
TRUST_AUTH_MECH(`GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN')dnl
define(`confAUTH_MECHANISMS', `GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN')dnlThese options configure the different methods available
to Sendmail for authenticating
users. To use a method other than
pwcheck, refer to the
Sendmail documentation.Finally, run &man.make.1; while in
/etc/mail. That will run the new
.mc and create a
.cf named either
freebsd.cf or the name used for the
local .mc. Then, run make
install restart, which will copy the file to
sendmail.cf, and properly restart
Sendmail. For more information
about this process, refer to
/etc/mail/Makefile.To test the configuration, use a MUA to
send a test message. For further investigation, set the
LogLevel option of Sendmail
to 13 and watch
/var/log/maillog for any errors.For more information, refer to
SMTP authentication.
- Mail User Agents
+ Mail User Agents
- MarcSilverContributed
- by
+
+
+ Marc
+ Silver
+
+ Contributed by
+ Mail User AgentsA MUA is an application that is used to
send and receive email. As email evolves and
becomes more complex, MUAs are becoming
increasingly powerful and provide users increased functionality
and flexibility. The mail category of the
&os; Ports Collection contains numerous MUAs.
These include graphical email clients such as
Evolution or
Balsa and console based clients such
as mutt or
alpine.mail&man.mail.1; is the default
MUA installed with &os;. It is a console
based MUA that offers the basic
functionality required to send and receive text-based email.
It provides limited attachment support and can only access
local mailboxes.Although mail does not natively support
interaction with POP or
IMAP servers, these mailboxes may be
downloaded to a local mbox using an
application such as
fetchmail.In order to send and receive email, run
mail:&prompt.user; mailThe contents of the user's mailbox in
/var/mail are automatically read by
mail. Should the mailbox be empty, the
utility exits with a message indicating that no mail could
be found. If mail exists, the application interface starts,
and a list of messages will be displayed. Messages are
automatically numbered, as can be seen in the following
example:Mail version 8.1 6/6/93. Type ? for help.
"/var/mail/marcs": 3 messages 3 new
>N 1 root@localhost Mon Mar 8 14:05 14/510 "test"
N 2 root@localhost Mon Mar 8 14:05 14/509 "user account"
N 3 root@localhost Mon Mar 8 14:05 14/509 "sample"Messages can now be read by typing t
followed by the message number. This example reads the first
email:& t 1
Message 1:
From root@localhost Mon Mar 8 14:05:52 2004
X-Original-To: marcs@localhost
Delivered-To: marcs@localhost
To: marcs@localhost
Subject: test
Date: Mon, 8 Mar 2004 14:05:52 +0200 (SAST)
From: root@localhost (Charlie Root)
This is a test message, please reply if you receive it.As seen in this example, the message will be displayed
with full headers. To display the list of messages again,
press h.If the email requires a reply, press either
R or r mail keys. R instructs
mail to reply only to the sender of the
email, while r replies to all other
recipients of the message. These commands can be suffixed
with the mail number of the message to reply to. After typing
the response, the end of the message should be marked by a
single . on its own line. An example can be
seen below:& R 1
To: root@localhost
Subject: Re: test
Thank you, I did get your email.
.
EOTIn order to send a new email, press m,
followed by the recipient email address. Multiple recipients
may be specified by separating each address with the
, delimiter. The subject of the message may
then be entered, followed by the message contents. The end of
the message should be specified by putting a single
. on its own line.& mail root@localhost
Subject: I mastered mail
Now I can send and receive email using mail ... :)
.
EOTWhile using mail, press
? to display help at any time. Refer to
&man.mail.1; for more help on how to use
mail.&man.mail.1; was not designed to handle attachments and
thus deals with them poorly. Newer MUAs
handle attachments in a more intelligent way. Users who
prefer to use mail may find the
converters/mpack port to be of
considerable use.muttmutt is a powerful
MUA, with many features, including:The ability to thread messages.PGP support for digital signing and encryption of
email.MIME support.Maildir support.Highly customizable.Refer to http://www.mutt.org
for more information on
mutt.mutt may be installed using the
mail/mutt port. After the port has been
installed, mutt can be started by
issuing the following command:&prompt.user; muttmutt will automatically read
and display the contents of the user mailbox in
/var/mail. If no mails are found,
mutt will wait for commands from
the user. The example below shows
mutt displaying a list of
messages:To read an email, select it using the cursor keys and
press Enter. An example of
mutt displaying email can be seen
below:Similar to &man.mail.1;, mutt
can be used to reply only to the sender of the message as well
as to all recipients. To reply only to the sender of the
email, press r. To send a group reply
to the original sender as well as all the message recipients,
press g.By default, mutt uses the
&man.vi.1; editor for creating and replying to emails. Each
user can customize this by creating or editing the
.muttrc in their home directory and
setting the editor variable or by setting
the EDITOR environment variable. Refer to
http://www.mutt.org/
for more information about configuring
mutt.To compose a new mail message, press
m. After a valid subject has been given,
mutt will start &man.vi.1; so the
email can be written. Once the contents of the email are
complete, save and quit from vi.
mutt will resume, displaying a
summary screen of the mail that is to be delivered. In
order to send the mail, press y. An example
of the summary screen can be seen below:mutt contains extensive help
which can be accessed from most of the menus by pressing
?. The top line also displays the keyboard
shortcuts where appropriate.alpinealpine is aimed at a beginner
user, but also includes some advanced features.alpine has had several remote
vulnerabilities discovered in the past, which allowed remote
attackers to execute arbitrary code as users on the local
system, by the action of sending a specially-prepared email.
While known problems have been fixed,
alpine code is written in an
insecure style and the &os; Security Officer believes there
are likely to be other undiscovered vulnerabilities. Users
install alpine at their own
risk.The current version of alpine
may be installed using the mail/alpine
port. Once the port has been installed,
alpine can be started by issuing
the following command:&prompt.user; alpineThe first time alpine
runs, it displays a greeting page with a brief introduction,
as well as a request from the
alpine development team to send
an anonymous email message allowing them to judge how many
users are using their client. To send this anonymous message,
press Enter. Alternatively, press
E to exit the greeting without sending an
anonymous message. An example of the greeting page is
shown below:The main menu is then presented, which can be navigated
using the cursor keys. This main menu provides shortcuts for
composing new mails, browsing mail directories, and
administering address book entries. Below the main menu,
relevant keyboard shortcuts to perform functions specific to
the task at hand are shown.The default directory opened by
alpine is
inbox. To view the message index, press
I, or select the
MESSAGE INDEX option shown
below:The message index shows messages in the current directory
and can be navigated by using the cursor keys. Highlighted
messages can be read by pressing
Enter.In the screenshot below, a sample message is displayed by
alpine. Contextual keyboard
shortcuts are displayed at the bottom of the screen. An
example of one such shortcut is r, which
tells the MUA to reply to the current
message being displayed.Replying to an email in alpine
is done using the pico editor,
which is installed by default with
alpine.
pico makes it easy to navigate the
message and is easier for novice users to use than &man.vi.1;
or &man.mail.1;. Once the reply is complete, the message can
be sent by pressing Ctrl+X. alpine will ask for
confirmation before sending the message.alpine can be customized using
the SETUP option from the main
menu. Consult http://www.washington.edu/alpine/
for more information.
- Using fetchmail
+ Using fetchmail
- MarcSilverContributed
- by
+
+
+ Marc
+ Silver
+
+ Contributed by
+ fetchmailfetchmail is a full-featured
IMAP and POP client. It
allows users to automatically download mail from remote
IMAP and POP servers and
save it into local mailboxes where it can be accessed more
easily. fetchmail can be installed
using the mail/fetchmail port, and offers
various features, including:Support for the POP3,
APOP, KPOP,
IMAP, ETRN and
ODMR protocols.Ability to forward mail using SMTP,
which allows filtering, forwarding, and aliasing to function
normally.May be run in daemon mode to check periodically for new
messages.Can retrieve multiple mailboxes and forward them, based
on configuration, to different local users.This section explains some of the basic features of
fetchmail. This utility requires a
.fetchmailrc configuration in the user's
home directory in order to run correctly. This file includes
server information as well as login credentials. Due to the
sensitive nature of the contents of this file, it is advisable
to make it readable only by the user, with the following
command:&prompt.user; chmod 600 .fetchmailrcThe following .fetchmailrc serves as an
example for downloading a single user mailbox using
POP. It tells
- fetchmail to connect to example.com using a
- username of joesoap
+ fetchmail to connect to
+ example.com using
+ a username of joesoap
and a password of XXX. This example assumes
that the user joesoap
exists on the local system.poll example.com protocol pop3 username "joesoap" password "XXX"The next example connects to multiple POP
and IMAP servers and redirects to different
local usernames where applicable:poll example.com proto pop3:
user "joesoap", with password "XXX", is "jsoap" here;
user "andrea", with password "XXXX";
poll example2.net proto imap:
user "john", with password "XXXXX", is "myth" here;fetchmail can be run in daemon
mode by running it with -d, followed by the
interval (in seconds) that fetchmail
should poll servers listed in .fetchmailrc.
The following example configures
fetchmail to poll every 600
seconds:&prompt.user; fetchmail -d 600More information on fetchmail can
be found at http://www.fetchmail.info/.
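The polling interval can also live in the rcfile itself, so that running fetchmail with no arguments starts it in daemon mode. A minimal sketch follows; the server name, username, and password are placeholders, and the file is locked down to the owner as recommended above:

```shell
# write a daemon-mode ~/.fetchmailrc and restrict its permissions
cat > "$HOME/.fetchmailrc" <<'EOF'
set daemon 600
poll example.com protocol pop3 username "joesoap" password "XXX"
EOF
chmod 600 "$HOME/.fetchmailrc"
```

With this in place, fetchmail forks into the background and polls every 600 seconds.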
- Using procmail
+ Using procmail
- MarcSilverContributed
- by
+
+
+ Marc
+ Silver
+
+ Contributed by
+ procmailprocmail is a powerful
application used to filter incoming mail. It allows users to
define rules which can be matched to incoming
mails to perform specific functions or to reroute mail to
alternative mailboxes or email addresses.
procmail can be installed using the
mail/procmail port. Once installed, it can
be directly integrated into most MTAs.
Consult the MTA documentation for more
information. Alternatively, procmail
can be integrated by adding the following line to a
.forward in the home directory of the
user:"|exec /usr/local/bin/procmail || exit 75"The following section displays some basic
procmail rules, as well as brief
descriptions of what they do. Rules must be inserted into a
.procmailrc, which must reside in the
user's home directory.The majority of these rules can be found in
&man.procmailex.5;.To forward all mail from user@example.com to
an external address of goodmail@example2.com::0
* ^From.*user@example.com
! goodmail@example2.comTo forward all mails shorter than 1000 bytes to an external
address of goodmail@example2.com::0
* < 1000
! goodmail@example2.comTo send all mail sent to
alternate@example.com to a mailbox called
alternate::0
* ^TOalternate@example.com
alternateTo send all mail with a subject of Spam to
/dev/null::0
* ^Subject:.*Spam
/dev/nullA useful recipe that parses incoming &os;.org mailing lists and
places each list in its own mailbox::0
* ^Sender:.owner-freebsd-\/[^@]+@FreeBSD.ORG
{
LISTNAME=${MATCH}
:0
* LISTNAME??^\/[^@]+
FreeBSD-${MATCH}
}
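When developing new recipes, procmail's own logging helps show which conditions matched. A sketch of diagnostic lines placed at the top of .procmailrc; the log path is only an example, and these settings should be removed once the recipes work:

```
VERBOSE=yes
LOGFILE=$HOME/procmail.log
```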
Index: head/en_US.ISO8859-1/books/handbook/multimedia/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/multimedia/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/multimedia/chapter.xml (revision 48529)
@@ -1,1617 +1,1617 @@
Multimedia
-
- Ross
- Lippert
+
+ Ross
+ LippertEdited by Synopsis&os; supports a wide variety of sound cards, allowing users
to enjoy high fidelity output from a &os; system. This includes
the ability to record and playback audio in the MPEG Audio Layer
3 (MP3), Waveform Audio File
(WAV), Ogg Vorbis, and other formats. The
&os; Ports Collection contains many applications for editing
recorded audio, adding sound effects, and controlling attached
MIDI devices.&os; also supports the playback of video files and
DVDs. The &os; Ports Collection contains
applications to encode, convert, and playback various video
media.This chapter describes how to configure sound cards, video
playback, TV tuner cards, and scanners on &os;. It also
describes some of the applications which are available for
using these devices.After reading this chapter, you will know how to:Configure a sound card on &os;.Troubleshoot the sound setup.Play back and encode MP3s and other audio.Prepare a &os; system for video playback.Play DVDs, .mpg,
and .avi files.Rip CD and DVD
content into files.Configure a TV card.Install and set up MythTV on &os;.Configure an image scanner.Before reading this chapter, you should:Know how to install applications as described in
.
- Setting Up the Sound Card
+ Setting Up the Sound Card
-
- Moses
- Moore
-
- Contributed by
+
+ Moses
+ Moore
+
+ Contributed by
+
-
- Marc
- Fonvieille
+
+ Marc
+ FonvieilleEnhanced by PCIsound cardsBefore beginning the configuration, determine the model of
the sound card and the chip it uses. &os; supports a wide
variety of sound cards. Check the supported audio devices
list of the Hardware
Notes to see if the card is supported and which &os;
driver it uses.kernelconfigurationIn order to use the sound device, its device driver must be
loaded. The easiest way is to load a kernel module for the
sound card with &man.kldload.8;. This example loads the driver
for a built-in audio chipset based on the Intel
specification:&prompt.root; kldload snd_hdaTo automate the loading of this driver at boot time, add the
driver to /boot/loader.conf. The line for
this driver is:snd_hda_load="YES"Other available sound modules are listed in
/boot/defaults/loader.conf. When unsure
which driver to use, load the snd_driver
module:&prompt.root; kldload snd_driverThis is a metadriver which loads all of the most common
sound drivers and can be used to speed up the search for the
correct driver. It is also possible to load all sound drivers
by adding the metadriver to
/boot/loader.conf.To determine which driver was selected for the sound card
after loading the snd_driver metadriver,
type cat /dev/sndstat.Configuring a Custom Kernel with Sound SupportThis section is for users who prefer to statically compile
in support for the sound card in a custom kernel. For more
information about recompiling a kernel, refer to .When using a custom kernel to provide sound support, make
sure that the audio framework driver exists in the custom
kernel configuration file:device soundNext, add support for the sound card. To continue the
example of the built-in audio chipset based on the Intel
specification from the previous section, use the following
line in the custom kernel configuration file:device snd_hdaBe sure to read the manual page of the driver for the
device name to use for the driver.Non-PnP ISA sound cards may require the IRQ and I/O port
settings of the card to be added to
/boot/device.hints. During the boot
process, &man.loader.8; reads this file and passes the
settings to the kernel. For example, an old Creative
&soundblaster; 16 ISA non-PnP card will use the
&man.snd.sbc.4; driver in conjunction with
snd_sb16. For this card, the following
lines must be added to the kernel configuration file:device snd_sbc
device snd_sb16If the card uses the 0x220 I/O port and
IRQ 5, these lines must also be added to
/boot/device.hints:hint.sbc.0.at="isa"
hint.sbc.0.port="0x220"
hint.sbc.0.irq="5"
hint.sbc.0.drq="1"
hint.sbc.0.flags="0x15"The syntax used in /boot/device.hints
is described in &man.sound.4; and the manual page for the
driver of the sound card.The settings shown above are the defaults. In some
cases, the IRQ or other settings may need to be changed to
match the card. Refer to &man.snd.sbc.4; for more information
about this card.Testing SoundAfter loading the required module or rebooting into the
custom kernel, the sound card should be detected. To confirm,
run dmesg | grep pcm. This example is
from a system with a built-in Conexant CX20590 chipset:pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 5 on hdaa0
pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 6 on hdaa0
pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> at nid 31,25 and 35,27 on hdaa1The status of the sound card may also be checked using
this command:&prompt.root; cat /dev/sndstat
FreeBSD Audio Driver (newpcm: 64bit 2009061500/amd64)
Installed devices:
pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play)
pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play)
pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> (play/rec) defaultThe output will vary depending upon the sound card. If no
pcm devices are listed, double-check
that the correct device driver was loaded or compiled into the
kernel. The next section lists some common problems and their
solutions.If all goes well, the sound card should now work in &os;.
If the CD or DVD drive
is properly connected to the sound card, one can insert an
audio CD in the drive and play it with
&man.cdcontrol.1;:&prompt.user; cdcontrol -f /dev/acd0 play 1Audio CDs have specialized encodings
which means that they should not be mounted using
&man.mount.8;.Various applications, such as
audio/workman, provide a friendlier
interface. The audio/mpg123 port can be
installed to listen to MP3 audio files.Another quick way to test the card is to send data to
/dev/dsp:&prompt.user; cat filename > /dev/dspwhere
filename can
be any type of file. This command should produce some noise,
confirming that the sound card is working.The /dev/dsp* device nodes will
be created automatically as needed. When not in use, they
do not exist and will not appear in the output of
&man.ls.1;.Troubleshooting Sounddevice nodesI/O portIRQDSP
Some common error messages and their solutions are listed below:
Common Error Messages
Error: sb_dspwr(XX) timed out
Solution: The I/O port is not set correctly.
Error: bad irq XX
Solution: The IRQ is set incorrectly. Make sure that the set IRQ and the sound IRQ are the same.
Error: xxx: gus pcm not attached, out of memory
Solution: There is not enough available memory to use the device.
Error: xxx: can't open /dev/dsp!
Solution: Type fstat | grep dsp to check if another application is holding the device open. Noteworthy troublemakers are esound and KDE's sound
support.
Modern graphics cards often come with their own sound
driver for use with HDMI. This sound
device is sometimes enumerated before the sound card meaning
that the sound card will not be used as the default playback
device. To check if this is the case, run
dmesg and look for
pcm. The output looks something like
this:...
hdac0: HDA Driver Revision: 20100226_0142
hdac1: HDA Driver Revision: 20100226_0142
hdac0: HDA Codec #0: NVidia (Unknown)
hdac0: HDA Codec #1: NVidia (Unknown)
hdac0: HDA Codec #2: NVidia (Unknown)
hdac0: HDA Codec #3: NVidia (Unknown)
pcm0: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 0 nid 1 on hdac0
pcm1: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 1 nid 1 on hdac0
pcm2: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 2 nid 1 on hdac0
pcm3: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 3 nid 1 on hdac0
hdac1: HDA Codec #2: Realtek ALC889
pcm4: <HDA Realtek ALC889 PCM #0 Analog> at cad 2 nid 1 on hdac1
pcm5: <HDA Realtek ALC889 PCM #1 Analog> at cad 2 nid 1 on hdac1
pcm6: <HDA Realtek ALC889 PCM #2 Digital> at cad 2 nid 1 on hdac1
pcm7: <HDA Realtek ALC889 PCM #3 Digital> at cad 2 nid 1 on hdac1
...In this example, the graphics card
(NVidia) has been enumerated before the
sound card (Realtek ALC889). To use the
sound card as the default playback device, change
hw.snd.default_unit to the unit that should
be used for playback:&prompt.root; sysctl hw.snd.default_unit=nwhere n is the number of the sound
device to use. In this example, it should be
4. Make this change permanent by adding
the following line to
/etc/sysctl.conf:hw.snd.default_unit=4
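Putting the steps together: query the current default first, then switch it. The value 4 matches the example above, and the queried output shown here is illustrative:

```
&prompt.root; sysctl hw.snd.default_unit
hw.snd.default_unit: 2
&prompt.root; sysctl hw.snd.default_unit=4
```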
-
+
-
-
- Utilizing Multiple Sound Sources
+
+
+ Utilizing Multiple Sound Sources
-
-
-
- Munish
- Chopra
-
- Contributed by
-
-
-
+
+
+
+ Munish
+ Chopra
+
+ Contributed by
+
+
+
- It is often desirable to have multiple sources of sound that
- are able to play simultaneously. &os; uses Virtual
- Sound Channels to multiplex the sound card's playback
- by mixing sound in the kernel.
+ It is often desirable to have multiple sources of sound
+ that are able to play simultaneously. &os; uses
+ Virtual Sound Channels to multiplex the sound
+ card's playback by mixing sound in the kernel.
- Three &man.sysctl.8; knobs are available for configuring
- virtual channels:
+ Three &man.sysctl.8; knobs are available for configuring
+ virtual channels:
- &prompt.root; sysctl dev.pcm.0.play.vchans=4
+ &prompt.root; sysctl dev.pcm.0.play.vchans=4
&prompt.root; sysctl dev.pcm.0.rec.vchans=4
&prompt.root; sysctl hw.snd.maxautovchans=4
- This example allocates four virtual channels, which is a
- practical number for everyday use. Both
- dev.pcm.0.play.vchans=4 and
- dev.pcm.0.rec.vchans=4 are configurable after
- a device has been attached and represent the number of virtual
- channels pcm0 has for playback and
- recording. Since the pcm module can be
- loaded independently of the hardware drivers,
- hw.snd.maxautovchans indicates how many
- virtual channels will be given to an audio device when it is
- attached. Refer to &man.pcm.4; for more information.
+ This example allocates four virtual channels, which is a
+ practical number for everyday use. Both
+ dev.pcm.0.play.vchans=4 and
+ dev.pcm.0.rec.vchans=4 are configurable
+ after a device has been attached and represent the number of
+ virtual channels pcm0 has for playback
+ and recording. Since the pcm module can
+ be loaded independently of the hardware drivers,
+ hw.snd.maxautovchans indicates how many
+ virtual channels will be given to an audio device when it is
+ attached. Refer to &man.pcm.4; for more information.
-
- The number of virtual channels for a device cannot be
- changed while it is in use. First, close any programs using
- the device, such as music players or sound daemons.
-
+
+ The number of virtual channels for a device cannot be
+ changed while it is in use. First, close any programs using
+ the device, such as music players or sound daemons.
+
-
- The correct pcm device will
- automatically be allocated transparently to a program that
- requests /dev/dsp0.
-
+ The correct pcm device will
+ automatically be allocated transparently to a program that
+ requests /dev/dsp0.
+
Setting Default Values for Mixer Channels

Josef
El-Rayes
Contributed by

The default values for the different mixer channels are
hardcoded in the source code of the &man.pcm.4; driver. While
sound card mixer levels can be changed using &man.mixer.8; or
third-party applications and daemons, this is not a permanent
solution. To instead set default mixer values at the driver
level, define the appropriate values in
/boot/device.hints, as seen in this
example:

hint.pcm.0.vol="50"

This will set the volume channel to a default value of
50 when the &man.pcm.4; module is
loaded.

MP3 Audio

Chern
Lee
Contributed by
This section describes some MP3
players available for &os;, how to rip audio
CD tracks, and how to encode and decode
MP3s.MP3 PlayersA popular graphical MP3 player is
XMMS. It supports
Winamp skins and additional
plugins. The interface is intuitive, with a playlist, graphic
equalizer, and more. Those familiar with
Winamp will find
XMMS simple to use. On &os;,
XMMS can be installed from the
multimedia/xmms port or package.The audio/mpg123 package or port
provides an alternative, command-line MP3
player. Once installed, specify the MP3
file to play on the command line. If the system has multiple
audio devices, the sound device can also be specified:&prompt.root; mpg123 -a /dev/dsp1.0 Foobar-GreatestHits.mp3
High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3
version 1.18.1; written and copyright by Michael Hipp and others
free software (LGPL) without any warranty but with best wishes
Playing MPEG stream from Foobar-GreatestHits.mp3 ...
MPEG 1.0 layer III, 128 kbit/s, 44100 Hz joint-stereoAdditional MP3 players are available in
the &os; Ports Collection.Ripping CD Audio TracksBefore encoding a CD or
CD track to MP3, the
audio data on the CD must be ripped to the
hard drive. This is done by copying the raw
CD Digital Audio (CDDA)
data to WAV files.The cdda2wav tool, which is installed
with the sysutils/cdrtools suite, can be
used to rip audio information from
CDs.With the audio CD in the drive, the
following command can be issued as
root to rip an
entire CD into individual, per track,
WAV files:&prompt.root; cdda2wav -D 0,1,0 -B
In this example, the -D option
indicates
the SCSI device 0,1,0
containing the CD to rip. Use
cdrecord -scanbus to determine the correct
device parameters for the system.To rip individual tracks, use -t to
specify the track:&prompt.root; cdda2wav -D 0,1,0 -t 7To rip a range of tracks, such as track one to seven,
specify a range:&prompt.root; cdda2wav -D 0,1,0 -t 1+7To rip from an ATAPI
(IDE) CDROM drive,
specify the device name in place of the
SCSI unit numbers. For example, to rip
track 7 from an IDE drive:&prompt.root; cdda2wav -D /dev/acd0 -t 7Alternately, dd can be used to extract
audio tracks on ATAPI drives, as described
in .Encoding and Decoding MP3s
Lame is a popular
MP3 encoder which can be installed from the
audio/lame port. Due to patent issues, a
package is not available.The following command will convert the ripped
WAV file
audio01.wav to
audio01.mp3:&prompt.root; lame -h -b 128 --tt "Foo Song Title" --ta "FooBar Artist" --tl "FooBar Album" \
--ty "2014" --tc "Ripped and encoded by Foo" --tg "Genre" audio01.wav audio01.mp3The specified 128 kbits is a standard
MP3 bitrate while the 160 and 192 bitrates
provide higher quality. The higher the bitrate, the larger
the size of the resulting MP3. The
-h turns on the
higher quality but a little slower
mode. The options beginning with
--t indicate ID3 tags, which usually contain
song information, to be embedded within the
MP3 file. Additional encoding options can
be found in the lame manual
page.In order to burn an audio CD from
MP3s, they must first be converted to a
non-compressed file format. XMMS
can be used to convert to the WAV format,
while mpg123 can be used to convert
to the raw Pulse-Code Modulation (PCM)
audio data format.To convert audio01.mp3 using
mpg123, specify the name of the
PCM file:&prompt.root; mpg123 -s audio01.mp3 > audio01.pcmTo use XMMS to convert a
MP3 to WAV format, use
these steps:Converting to WAV Format in
XMMSLaunch XMMS.Right-click the window to bring up the
XMMS menu.Select Preferences under
Options.Change the Output Plugin to Disk Writer
Plugin.Press Configure.Enter or browse to a directory to write the
uncompressed files to.Load the MP3 file into
XMMS as usual, with volume at
100% and EQ settings turned off.Press Play. The
XMMS will appear as if it is
playing the MP3, but no music will be
heard. It is actually playing the MP3
to a file.When finished, be sure to set the default Output
Plugin back to what it was before in order to listen to
MP3s again.Both the WAV and PCM
formats can be used with cdrecord.
When using WAV files, there will be a small
tick sound at the beginning of each track. This sound is the
header of the WAV file. The
audio/sox port or package can be used to
remove the header:&prompt.user; sox -t wav -r 44100 -s -w -c 2 track.wav track.rawRefer to for more
information on using a CD burner in
&os;.
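The ripping and encoding steps described above are typically repeated for every track on a disc. The following sh loop is a minimal sketch, not part of the original workflow: the track file names and the 128 kbit/s bitrate are assumptions, and the leading echo makes this a dry run that only prints each command for review. Remove the echo to perform the actual encoding once audio/lame is installed.

```shell
#!/bin/sh
# Dry run: print one lame command for each ripped WAV file.
# Remove "echo" below to actually encode (requires audio/lame).
for f in audio01.wav audio02.wav; do
  out="${f%.wav}.mp3"   # strip the .wav suffix and append .mp3
  echo "lame -h -b 128 $f $out"
done
```

The `${f%.wav}` parameter expansion keeps the loop free of external tools, so the same pattern works in any POSIX shell.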
Video Playback

Ross
Lippert
Contributed by

Before configuring video playback, determine the model and
chipset of the video card. While
&xorg; supports a wide variety of
video cards, not all provide good playback performance. To
obtain a list of extensions supported by the
&xorg; server using the card, run
xdpyinfo while
&xorg; is running.It is a good idea to have a short MPEG test file for
evaluating various players and options. Since some
DVD applications look for
DVD media in /dev/dvd by
default, or have this device name hardcoded in them, it might be
useful to make a symbolic link to the proper device:&prompt.root; ln -sf /dev/cd0 /dev/dvdDue to the nature of &man.devfs.5;, manually created links
will not persist after a system reboot. In order to recreate
the symbolic link automatically when the system boots, add the
following line to /etc/devfs.conf:link cd0 dvdDVD decryption invokes certain functions
that require write permission to the DVD
device.To enhance the shared memory
&xorg; interface, it is recommended
to increase the values of these &man.sysctl.8;
variables:kern.ipc.shmmax=67108864
kern.ipc.shmall=32768Determining Video CapabilitiesXVideoSDLDGAThere are several possible ways to display video under
&xorg; and what works is largely
hardware dependent. Each method described below will have
varying quality across different hardware.Common video interfaces include:&xorg;: normal output using
shared memory.XVideo: an extension to the
&xorg; interface which
allows video to be directly displayed in drawable objects
through a special acceleration. This extension provides
good quality playback even on low-end machines. The next
section describes how to determine if this extension is
running.SDL: the Simple Directmedia Layer
is a porting layer for many operating systems, allowing
cross-platform applications to be developed which make
efficient use of sound and graphics.
SDL provides a low-level abstraction to
the hardware which can sometimes be more efficient than
the &xorg; interface. On &os;,
SDL can be installed using the
devel/sdl20 package or port.DGA: the Direct Graphics Access is
an &xorg; extension which
allows a program to bypass the
&xorg; server and directly
alter the framebuffer. Because it relies on a low level
memory mapping, programs using it must be run as
root. The
DGA extension can be tested and
benchmarked using &man.dga.1;. When
dga is running, it changes the colors
of the display whenever a key is pressed. To quit, press
q.SVGAlib: a low level console graphics layer.XVideoTo check whether this extension is running, use
xvinfo:&prompt.user; xvinfoXVideo is supported for the card if the result is
similar to:X-Video Extension version 2.2
screen #0
Adaptor #0: "Savage Streams Engine"
number of ports: 1
port base: 43
operations supported: PutImage
supported visuals:
depth 16, visualID 0x22
depth 16, visualID 0x23
number of attributes: 5
"XV_COLORKEY" (range 0 to 16777215)
client settable attribute
client gettable attribute (current value is 2110)
"XV_BRIGHTNESS" (range -128 to 127)
client settable attribute
client gettable attribute (current value is 0)
"XV_CONTRAST" (range 0 to 255)
client settable attribute
client gettable attribute (current value is 128)
"XV_SATURATION" (range 0 to 255)
client settable attribute
client gettable attribute (current value is 128)
"XV_HUE" (range -180 to 180)
client settable attribute
client gettable attribute (current value is 0)
maximum XvImage size: 1024 x 1024
Number of image formats: 7
id: 0x32595559 (YUY2)
guid: 59555932-0000-0010-8000-00aa00389b71
bits per pixel: 16
number of planes: 1
type: YUV (packed)
id: 0x32315659 (YV12)
guid: 59563132-0000-0010-8000-00aa00389b71
bits per pixel: 12
number of planes: 3
type: YUV (planar)
id: 0x30323449 (I420)
guid: 49343230-0000-0010-8000-00aa00389b71
bits per pixel: 12
number of planes: 3
type: YUV (planar)
id: 0x36315652 (RV16)
guid: 52563135-0000-0000-0000-000000000000
bits per pixel: 16
number of planes: 1
type: RGB (packed)
depth: 0
red, green, blue masks: 0x1f, 0x3e0, 0x7c00
id: 0x35315652 (RV15)
guid: 52563136-0000-0000-0000-000000000000
bits per pixel: 16
number of planes: 1
type: RGB (packed)
depth: 0
red, green, blue masks: 0x1f, 0x7e0, 0xf800
id: 0x31313259 (Y211)
guid: 59323131-0000-0010-8000-00aa00389b71
bits per pixel: 6
number of planes: 3
type: YUV (packed)
id: 0x0
guid: 00000000-0000-0000-0000-000000000000
bits per pixel: 0
number of planes: 0
type: RGB (packed)
depth: 1
red, green, blue masks: 0x0, 0x0, 0x0The formats listed, such as YUY2 and YV12, are not
present with every implementation of XVideo and their
absence may hinder some players.If the result instead looks like:X-Video Extension version 2.2
screen #0
no adaptors presentXVideo is probably not supported for the card. This
means that it will be more difficult for the display to meet
the computational demands of rendering video, depending on
the video card and processor.Ports and Packages Dealing with Videovideo portsvideo packagesThis section introduces some of the software available
from the &os; Ports Collection which can be used for video
playback.MPlayer and
MEncoderMPlayer is a command-line
video player with an optional graphical interface which aims
to provide speed and flexibility. Other graphical
front-ends to MPlayer are
available from the &os; Ports Collection.MPlayerMPlayer can be installed
using the multimedia/mplayer package or
port. Several compile options are available and a variety
of hardware checks occur during the build process. For
these reasons, some users prefer to build the port rather
than install the package.When compiling the port, the menu options should be
reviewed to determine the type of support to compile into
the port. If an option is not selected,
MPlayer will not be able to
display that type of video format. Use the arrow keys and
spacebar to select the required formats. When finished,
press Enter to continue the port compile
and installation.By default, the package or port will build the
mplayer command line utility and the
gmplayer graphical utility. To encode
videos, compile the multimedia/mencoder
port. Due to licensing restrictions, a package is not
available for MEncoder.The first time MPlayer is
run, it will create ~/.mplayer in the
user's home directory. This subdirectory contains default
versions of the user-specific configuration files.This section describes only a few common uses. Refer to
mplayer(1) for a complete description of its numerous
options.To play the file
testfile.avi,
specify the video interfaces with -vo, as
seen in the following examples:&prompt.user; mplayer -vo xv testfile.avi&prompt.user; mplayer -vo sdl testfile.avi&prompt.user; mplayer -vo x11 testfile.avi&prompt.root; mplayer -vo dga testfile.avi&prompt.root; mplayer -vo 'sdl:dga' testfile.aviIt is worth trying all of these options, as their
relative performance depends on many factors and will vary
significantly with hardware.To play a DVD, replace
testfile.avi
with dvd://N -dvd-device DEVICE, where
N is the title number to play and
DEVICE is the device node for the
DVD. For example, to play title 3 from
/dev/dvd:&prompt.root; mplayer -vo xv dvd://3 -dvd-device /dev/dvdThe default DVD device can be
defined during the build of the
MPlayer port by including the
WITH_DVD_DEVICE=/path/to/desired/device
option. By default, the device is
/dev/cd0. More details can be found
in the port's
Makefile.options.To stop, pause, advance, and so on, use a keybinding.
To see the list of keybindings, run mplayer
-h or read mplayer(1).Additional playback options include -fs, which engages fullscreen mode, and
-framedrop, which helps performance.Each user can add commonly used options to their
~/.mplayer/config like so:vo=xv
fs=yes
zoom=yesmplayer can be used to rip a
DVD title to a .vob.
To dump the second title from a
DVD:&prompt.root; mplayer -dumpstream -dumpfile out.vob dvd://2 -dvd-device /dev/dvdThe output file, out.vob, will be
in MPEG format.Anyone wishing to obtain a high level of expertise with
&unix; video should consult mplayerhq.hu/DOCS
as it is technically informative. This documentation should
be considered as required reading before submitting any bug
reports.mencoderBefore using mencoder, it is a good
idea to become familiar with the options described at mplayerhq.hu/DOCS/HTML/en/mencoder.html.
There are innumerable ways to improve quality, lower
bitrate, and change formats, and some of these options may
make the difference between good or bad performance.
Improper combinations of command line options can yield
output files that are unplayable even by
mplayer.Here is an example of a simple copy:&prompt.user; mencoder input.avi -oac copy -ovc copy -o output.aviTo rip to a file, use -dumpstream with
mplayer.To convert
input.avi to
the MPEG4 codec with MPEG3 audio encoding, first install the
audio/lame port. Due to licensing
restrictions, a package is not available. Once installed,
type:&prompt.user; mencoder input.avi -oac mp3lame -lameopts br=192 \
-ovc lavc -lavcopts vcodec=mpeg4:vhq -o output.aviThis will produce output playable by applications such
as mplayer and
xine.input.avi
can be replaced with dvd://2 -dvd-device /dev/dvd and run as
DVD title directly. Since it may take a
few tries to get the desired result, it is recommended to
instead dump the title to a file and to work on the
file.The xine Video
Playerxine is a video player with a
reusable base library and a modular executable which can be
extended with plugins. It can be installed using the
multimedia/xine package or port.In practice, xine requires
either a fast CPU with a fast video card, or support for the
XVideo extension. The xine video
player performs best on XVideo interfaces.By default, the xine player
starts a graphical user interface. The menus can then be
used to open a specific file.Alternatively, xine may be
invoked from the command line by specifying the name of the
file to play:&prompt.user; xine -g -p mymovie.aviRefer to
xine-project.org/faq for more information and
troubleshooting tips.The Transcode
UtilitiesTranscode provides a suite of
tools for re-encoding video and audio files.
Transcode can be used to merge
video files or repair broken files using command line tools
with stdin/stdout stream interfaces.In &os;, Transcode can be
installed using the multimedia/transcode
package or port. Many users prefer to compile the port as
it provides a menu of compile options for specifying the
support and codecs to compile in. If an option is not
selected, Transcode will not be
able to encode that format. Use the arrow keys and spacebar
to select the required formats. When finished, press
Enter to continue the port compile and
installation.This example demonstrates how to convert a DivX file
into a PAL MPEG-1 file (PAL VCD):&prompt.user; transcode -i input.avi -V --export_prof vcd-pal -o output_vcd
&prompt.user; mplex -f 1 -o output_vcd.mpg output_vcd.m1v output_vcd.mpaThe resulting MPEG file,
output_vcd.mpg,
is ready to be played with
MPlayer. The file can be burned
on a CD media to create a video
CD using a utility such as
multimedia/vcdimager or
sysutils/cdrdao.In addition to the manual page for
transcode, refer to transcoding.org/cgi-bin/transcode
for further information and examples.
TV Cards

Josef
El-Rayes
Original contribution by

Marc
Fonvieille
Enhanced and adapted by
TV cardsTV cards can be used to watch broadcast or cable TV on a
computer. Most cards accept composite video via an
RCA or S-video input and some cards include a
FM radio tuner.&os; provides support for PCI-based TV cards using a
Brooktree Bt848/849/878/879 video capture chip with the
&man.bktr.4; driver. This driver supports most Pinnacle PCTV
video cards. Before purchasing a TV card, consult &man.bktr.4;
for a list of supported tuners.Loading the DriverIn order to use the card, the &man.bktr.4; driver must be
loaded. To automate this at boot time, add the following line
to /boot/loader.conf:bktr_load="YES"Alternatively, one can statically compile support for
the TV card into a custom kernel. In that case, add the
following lines to the custom kernel configuration
file:device bktr
device iicbus
device iicbb
device smbusThese additional devices are necessary as the card
components are interconnected via an I2C bus. Then, build and
install a new kernel.To test that the tuner is correctly detected, reboot the
system. The TV card should appear in the boot messages, as
seen in this example:bktr0: <BrookTree 848A> mem 0xd7000000-0xd7000fff irq 10 at device 10.0 on pci0
iicbb0: <I2C bit-banging driver> on bti2c0
iicbus0: <Philips I2C bus> on iicbb0 master-only
iicbus1: <Philips I2C bus> on iicbb0 master-only
smbus0: <System Management Bus> on bti2c0
bktr0: Pinnacle/Miro TV, Philips SECAM tuner.The messages will differ according to the hardware. If
necessary, it is possible to override some of the detected
parameters using &man.sysctl.8; or custom kernel configuration
options. For example, to force the tuner to a Philips SECAM
tuner, add the following line to a custom kernel configuration
file:options OVERRIDE_TUNER=6or, use &man.sysctl.8;:&prompt.root; sysctl hw.bt848.tuner=6Refer to &man.bktr.4; for a description of the available
&man.sysctl.8; parameters and kernel options.Useful ApplicationsTo use the TV card, install one of the following
applications:multimedia/fxtv
provides TV-in-a-window and image/audio/video capture
capabilities.multimedia/xawtv
is another TV application with similar features.audio/xmradio
provides an application for using the FM radio tuner of a
TV card.More applications are available in the &os; Ports
Collection.TroubleshootingIf any problems are encountered with the TV card, check
that the video capture chip and the tuner are supported by
&man.bktr.4; and that the right configuration options were
used. For more support or to ask questions about supported TV
cards, refer to the &a.multimedia.name; mailing list.MythTVMythTV is a popular, open source Personal Video Recorder
(PVR) application. This section demonstrates
how to install and setup MythTV on &os;. Refer to mythtv.org/wiki
for more information on how to use MythTV.MythTV requires a frontend and a backend. These components
can either be installed on the same system or on different
machines.The frontend can be installed on &os; using the
multimedia/mythtv-frontend package or port.
&xorg; must also be installed and
configured as described in . Ideally, this
system has a video card that supports X-Video Motion
Compensation (XvMC) and, optionally, a Linux
Infrared Remote Control (LIRC)-compatible
remote.To install both the backend and the frontend on &os;, use
the multimedia/mythtv package or port. A
&mysql; database server is also required and should
automatically be installed as a dependency. Optionally, this
system should have a tuner card and sufficient storage to hold
recorded data.HardwareMythTV uses Video for Linux (V4L) to
access video input devices such as encoders and tuners. In
&os;, MythTV works best with USB DVB-S/C/T
cards as they are well supported by the
multimedia/webcamd package or port which
provides a V4L userland application. Any
Digital Video Broadcasting (DVB) card
supported by webcamd should work
with MythTV. A list of known working cards can be found at
wiki.freebsd.org/WebcamCompat.
Drivers are also available for Hauppauge cards in the
multimedia/pvr250 and
multimedia/pvrxxx ports, but they provide a
non-standard driver interface that does not work with versions
of MythTV greater than 0.23. Due to licensing restrictions,
no packages are available and these two ports must be
compiled.The wiki.freebsd.org/HTPC
page contains a list of all available DVB
drivers.Setting up the MythTV BackendTo install MythTV using the port:&prompt.root; cd /usr/ports/multimedia/mythtv
&prompt.root; make installOnce installed, set up the MythTV database:&prompt.root; mysql -uroot -p < /usr/local/share/mythtv/database/mc.sqlThen, configure the backend:&prompt.root; mythtv-setupFinally, start the backend:&prompt.root; echo 'mythbackend_enable="YES"' >> /etc/rc.conf
&prompt.root; service mythbackend start
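For an unattended recorder, both the database server and the backend need to start at boot. As a sketch, assuming the &mysql; server dependency uses the standard mysql_enable rc.conf knob provided by the &mysql; server ports (verify the name against the installed version), the relevant /etc/rc.conf lines would be:

```
mysql_enable="YES"
mythbackend_enable="YES"
```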
Image Scanners

Marc
Fonvieille
Written by

In &os;, access to image scanners is provided by
SANE (Scanner Access Now Easy), which
is available in the &os; Ports Collection.
SANE will also use some &os; device
drivers to provide access to the scanner hardware.&os; supports both SCSI and
USB scanners. Depending upon the scanner
interface, different device drivers are required. Be sure the
scanner is supported by SANE prior
to performing any configuration. Refer to
http://www.sane-project.org/sane-supported-devices.html
for more information about supported scanners.This chapter describes how to determine if the scanner has
been detected by &os;. It then provides an overview of how to
configure and use SANE on a &os;
system.Checking the ScannerThe GENERIC kernel includes the
device drivers needed to support USB
scanners. Users with a custom kernel should ensure that the
following lines are present in the custom kernel configuration
file:device usb
device uhci
device ohci
device ehciTo determine if the USB scanner is
detected, plug it in and use dmesg to
determine whether the scanner appears in the system message
buffer. If it does, it should display a message similar to
this:ugen0.2: <EPSON> at usbus0In this example, an &epson.perfection; 1650
USB scanner was detected on
/dev/ugen0.2.If the scanner uses a SCSI interface,
it is important to know which SCSI
controller board it will use. Depending upon the
SCSI chipset, a custom kernel configuration
file may be needed. The GENERIC kernel
supports the most common SCSI controllers.
Refer to /usr/src/sys/conf/NOTES to
determine the correct line to add to a custom kernel
configuration file. In addition to the
SCSI adapter driver, the following lines
are needed in a custom kernel configuration file:device scbus
device passVerify that the device is displayed in the system message
buffer:pass2 at aic0 bus 0 target 2 lun 0
pass2: <AGFA SNAPSCAN 600 1.10> Fixed Scanner SCSI-2 device
pass2: 3.300MB/s transfersIf the scanner was not powered-on at system boot, it is
still possible to manually force detection by performing a
SCSI bus scan with
camcontrol:&prompt.root; camcontrol rescan all
Re-scan of bus 0 was successful
Re-scan of bus 1 was successful
Re-scan of bus 2 was successful
Re-scan of bus 3 was successfulThe scanner should now appear in the
SCSI devices list:&prompt.root; camcontrol devlist
<IBM DDRS-34560 S97B> at scbus0 target 5 lun 0 (pass0,da0)
<IBM DDRS-34560 S97B> at scbus0 target 6 lun 0 (pass1,da1)
<AGFA SNAPSCAN 600 1.10> at scbus1 target 2 lun 0 (pass3)
<PHILIPS CDD3610 CD-R/RW 1.00> at scbus2 target 0 lun 0 (pass2,cd0)Refer to &man.scsi.4; and &man.camcontrol.8; for more
details about SCSI devices on &os;.SANE ConfigurationThe SANE system is split in two
parts: the backends
(graphics/sane-backends) and the frontends
(graphics/sane-frontends or
graphics/xsane). The backends provide
access to the scanner. Refer to http://www.sane-project.org/sane-supported-devices.html
to determine which backend supports the scanner. The
frontends provide the graphical scanning interface.
graphics/sane-frontends installs
xscanimage while
graphics/xsane installs
xsane.After installing the
graphics/sane-backends port or package, use
sane-find-scanner to check the scanner
detection by the SANE
system:&prompt.root; sane-find-scanner -q
found SCSI scanner "AGFA SNAPSCAN 600 1.10" at /dev/pass3The output should show the interface type of the scanner
and the device node used to attach the scanner to the system.
The vendor and the product model may or may not appear.Some USB scanners require firmware to
be loaded. Refer to sane-find-scanner(1) and sane(7) for
details.Next, check if the scanner will be identified by a
scanning frontend. The SANE
backends include scanimage which can be
used to list the devices and perform an image acquisition.
Use -L to list the scanner devices. The
first example is for a SCSI scanner and the
second is for a USB scanner:&prompt.root; scanimage -L
device `snapscan:/dev/pass3' is a AGFA SNAPSCAN 600 flatbed scanner
&prompt.root; scanimage -L
device 'epson2:libusb:/dev/usb:/dev/ugen0.2' is a Epson GT-8200 flatbed scannerIn this second example,
'epson2:libusb:/dev/usb:/dev/ugen0.2' is
the backend name (epson2) and
/dev/ugen0.2 is the device node used by the
scanner.If scanimage is unable to identify the
scanner, this message will appear:&prompt.root; scanimage -L
No scanners were identified. If you were expecting something different,
check that the scanner is plugged in, turned on and detected by the
sane-find-scanner tool (if appropriate). Please read the documentation
which came with this software (README, FAQ, manpages).If this happens, edit the backend configuration file in
/usr/local/etc/sane.d/ and define the
scanner device used. For example, if the undetected scanner
model is an &epson.perfection; 1650 and it uses the
epson2 backend, edit
/usr/local/etc/sane.d/epson2.conf. When
editing, add a line specifying the interface and the device
node used. In this case, add the following line:usb /dev/ugen0.2Save the edits and verify that the scanner is identified
with the right backend name and the device node:&prompt.root; scanimage -L
device 'epson2:libusb:/dev/usb:/dev/ugen0.2' is a Epson GT-8200 flatbed scannerOnce scanimage -L sees the scanner, the
configuration is complete and the scanner is now ready to
use.While scanimage can be used to perform
an image acquisition from the command line, it is often
preferable to use a graphical interface to perform image
scanning. The graphics/sane-frontends
package or port installs a simple but efficient graphical
interface, xscanimage.Alternately, xsane, which is
installed with the graphics/xsane package
or port, is another popular graphical scanning frontend. It
offers advanced features such as various scanning modes, color
correction, and batch scans. Both of these applications are
usable as a GIMP plugin.Scanner PermissionsIn order to have access to the scanner, a user needs read
and write permissions to the device node used by the scanner.
In the previous example, the USB scanner
uses the device node /dev/ugen0.2 which
is really a symlink to the real device node
/dev/usb/0.2.0. The symlink and the
device node are owned, respectively, by the wheel and operator groups. While
adding the user to these groups will allow access to the
scanner, it is considered insecure to add a user to
wheel. A better
solution is to create a group and make the scanner device
accessible to members of this group.This example creates a group called usb:&prompt.root; pw groupadd usbThen, make the /dev/ugen0.2 symlink
and the /dev/usb/0.2.0 device node
accessible to the usb group with write
permissions of 0660 or
0664 by adding the following lines to
/etc/devfs.rules:[system=5]
add path ugen0.2 mode 0660 group usb
add path usb/0.2.0 mode 0664 group usb
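Because &man.devfs.5; device nodes are recreated at every boot, custom rules are only applied automatically if the ruleset is enabled at startup. Assuming the ruleset keeps the name system from the [system=5] header in the example above, adding the following line to /etc/rc.conf applies it each time the system boots (refer to &man.rc.conf.5; for details):

```
devfs_system_ruleset="system"
```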
in order to allow access to the scanner:&prompt.root; pw groupmod usb -m joeFor more details refer to &man.pw.8;.
Index: head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 48529)
@@ -1,5788 +1,5812 @@
Network ServersSynopsisThis chapter covers some of the more frequently used network
services on &unix; systems. This includes installing,
configuring, testing, and maintaining many different types of
network services. Example configuration files are included
throughout this chapter for reference.By the end of this chapter, readers will know:How to manage the inetd
daemon.How to set up the Network File System
(NFS).How to set up the Network Information Server
(NIS) for centralizing and sharing
user accounts.How to set &os; up to act as an LDAP
server or client.How to set up automatic network settings using
DHCP.How to set up a Domain Name Server
(DNS).How to set up the Apache
HTTP Server.How to set up a File Transfer Protocol
(FTP) server.How to set up a file and print server for &windows;
clients using Samba.How to synchronize the time and date, and set up a
time server using the Network Time Protocol
(NTP).How to set up iSCSI.This chapter assumes a basic knowledge of:/etc/rc scripts.Network terminology.Installation of additional third-party
software ().The inetd
Super-ServerThe &man.inetd.8; daemon is sometimes referred to as a
Super-Server because it manages connections for many services.
Instead of starting multiple applications, only the
inetd service needs to be started.
When a connection is received for a service that is managed by
inetd, it determines which program
the connection is destined for, spawns a process for that
program, and delegates the program a socket. Using
inetd for services that are not
heavily used can reduce system load, when compared to running
each daemon individually in stand-alone mode.Primarily, inetd is used to
spawn other daemons, but several trivial protocols are handled
internally, such as chargen,
auth,
time,
echo,
discard, and
daytime.This section covers the basics of configuring
inetd.Configuration FileConfiguration of inetd is
done by editing /etc/inetd.conf. Each
line of this configuration file represents an application
which can be started by inetd. By
default, every line starts with a comment
(#), meaning that
inetd is not listening for any
applications. To configure inetd
to listen for an application's connections, remove the
# at the beginning of the line for that
application.After saving your edits, configure
inetd to start at system boot by
editing /etc/rc.conf:inetd_enable="YES"To start inetd now, so that it
listens for the service you configured, type:&prompt.root; service inetd startOnce inetd is started, it needs
to be notified whenever a modification is made to
/etc/inetd.conf:Reloading the inetd
Configuration File&prompt.root; service inetd reloadTypically, the default entry for an application does not
need to be edited beyond removing the #.
In some situations, it may be appropriate to edit the default
entry.As an example, this is the default entry for &man.ftpd.8;
over IPv4:ftp stream tcp nowait root /usr/libexec/ftpd ftpd -lThe seven columns in an entry are as follows:service-name
socket-type
protocol
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]
user[:group][/login-class]
server-program
server-program-argumentswhere:service-nameThe service name of the daemon to start. It must
correspond to a service listed in
/etc/services. This determines
which port inetd listens on
for incoming connections to that service. When using a
custom service, it must first be added to
/etc/services.socket-typeEither stream,
dgram, raw, or
seqpacket. Use
stream for TCP connections and
dgram for
UDP services.protocolUse one of the following protocol names:Protocol NameExplanationtcp or tcp4TCP IPv4udp or udp4UDP IPv4tcp6TCP IPv6udp6UDP IPv6tcp46Both TCP IPv4 and IPv6udp46Both UDP IPv4 and
IPv6{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]In this field, wait or nowait
must be specified.
max-child,
max-connections-per-ip-per-minute,
and max-child-per-ip
are optional. wait|nowait indicates whether or
not the service is able to handle its own socket.
dgram socket types must use
wait while
stream daemons, which are usually
multi-threaded, should use nowait.
wait usually hands off multiple sockets
to a single daemon, while nowait spawns
a child daemon for each new socket.The maximum number of child daemons
inetd may spawn is set by
max-child. For example, to limit ten
instances of the daemon, place a /10
after nowait. Specifying
/0 allows an unlimited number of
children. max-connections-per-ip-per-minute
limits the number of connections from any particular
IP address per minute. Once the
limit is reached, further connections from this IP
address will be dropped until the end of the minute.
For example, a value of /10 would
limit any particular IP address to
ten connection attempts per minute.
max-child-per-ip limits the number of
child processes that can be started on behalf of any
single IP address at any moment.
These options can limit excessive resource consumption
and help to prevent Denial of Service attacks.An example can be seen in the default settings for
&man.fingerd.8;:finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -suserThe username the daemon
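The fourth field of such an entry can be unpacked with plain shell parameter expansion. A minimal sketch, using the default fingerd entry above as sample data:

```shell
# Split an inetd.conf entry's fourth field into wait-mode, max-child,
# and max-connections-per-ip-per-minute (sample: the fingerd default).
line='finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s'
set -- $line
spec=$4                      # nowait/3/10
mode=${spec%%/*}             # nowait
rest=${spec#*/}              # 3/10
max_child=${rest%%/*}        # 3
per_ip_rate=${rest##*/}      # 10
echo "$mode $max_child $per_ip_rate"    # prints: nowait 3 10
```

This mirrors how the limits stack up: at most three fingerd children at once, and at most ten connections per client address per minute.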
will run as. Daemons typically run as
root,
daemon, or
nobody.server-programThe full path to the daemon. If the daemon is a
service provided by inetd
internally, use internal.server-program-argumentsUsed to specify any command arguments to be passed
to the daemon on invocation. If the daemon is an
internal service, use
wait/nowait.Command-Line OptionsLike most server daemons, inetd
has a number of options that can be used to modify its
behavior. By default, inetd is
started with -wW -C 60. These options
enable TCP wrappers for all services, including internal
services, and prevent any IP address from
requesting any service more than 60 times per minute.To change the default options which are passed to
inetd, add an entry for
inetd_flags in
/etc/rc.conf. If
inetd is already running, restart
it with service inetd restart.The available rate limiting options are:-c maximumSpecify the default maximum number of simultaneous
invocations of each service, where the default is
unlimited. May be overridden on a per-service basis by
using max-child in
/etc/inetd.conf.-C rateSpecify the default maximum number of times a
service can be invoked from a single
IP address per minute. May be
overridden on a per-service basis by using
max-connections-per-ip-per-minute in
/etc/inetd.conf.-R rateSpecify the maximum number of times a service can be
invoked in one minute, where the default is
256. A rate of 0
allows an unlimited number.-s maximumSpecify the maximum number of times a service can be
invoked from a single IP address at
any one time, where the default is unlimited. May be
overridden on a per-service basis by using
max-child-per-ip in
/etc/inetd.conf.Additional options are available. Refer to &man.inetd.8;
for the full list of options.Security ConsiderationsMany of the daemons which can be managed by
inetd are not security-conscious.
Some daemons, such as fingerd, can
provide information that may be useful to an attacker. Only
enable the services which are needed and monitor the system
for excessive connection attempts.
max-connections-per-ip-per-minute,
max-child and
max-child-per-ip can be used to limit such
attacks.By default, TCP Wrapper is enabled. Consult
&man.hosts.access.5; for more information on placing TCP
restrictions on various
inetd invoked daemons.Network File System (NFS)
Tom Rhodes
Reorganized and enhanced by
Bill Swingle
Written by NFS&os; supports the Network File System
(NFS), which allows a server to share
directories and files with clients over a network. With
NFS, users and programs can access files on
remote systems as if they were stored locally.NFS has many practical uses. Some of
the more common uses include:Data that would otherwise be duplicated on each client
can be kept in a single location and accessed by clients
on the network.Several clients may need access to the
/usr/ports/distfiles directory.
Sharing that directory allows for quick access to the
source files without having to download them to each
client.On large networks, it is often more convenient to
configure a central NFS server on which
all user home directories are stored. Users can log into
a client anywhere on the network and have access to their
home directories.Administration of NFS exports is
simplified. For example, there is only one file system
where security or backup policies must be set.Removable media storage devices can be used by other
machines on the network. This reduces the number of devices
throughout the network and provides a centralized location
to manage their security. It is often more convenient to
install software on multiple machines from a centralized
installation media.NFS consists of a server and one or more
clients. The client remotely accesses the data that is stored
on the server machine. In order for this to function properly,
a few processes have to be configured and running.These daemons must be running on the server:NFSserverfile serverUNIX clientsrpcbindmountdnfsdDaemonDescriptionnfsdThe NFS daemon which services
requests from NFS clients.mountdThe NFS mount daemon which
carries out requests received from
nfsd.rpcbind This daemon allows NFS
clients to discover which port the
NFS server is using.Running &man.nfsiod.8; on the client can improve
performance, but is not required.Configuring the ServerNFSconfigurationThe file systems which the NFS server
will share are specified in /etc/exports.
Each line in this file specifies a file system to be exported,
which clients have access to that file system, and any access
options. When adding entries to this file, each exported file
system, its properties, and allowed hosts must occur on a
single line. If no clients are listed in the entry, then any
client on the network can mount that file system.NFSexport examplesThe following /etc/exports entries
demonstrate how to export file systems. The examples can be
modified to match the file systems and client names on the
reader's network. There are many options that can be used in
this file, but only a few will be mentioned here. See
&man.exports.5; for the full list of options.This example shows how to export
/cdrom to three hosts named
alpha,
bravo, and
charlie:/cdrom -ro alpha bravo charlieThe -ro flag makes the file system
read-only, preventing clients from making any changes to the
exported file system. This example assumes that the host
names are either in DNS or in
/etc/hosts. Refer to &man.hosts.5; if
the network does not have a DNS
server.The next example exports /home to
three clients by IP address. This can be
useful for networks without DNS or
/etc/hosts entries. The
-alldirs flag allows subdirectories to be
mount points. In other words, it will not automatically mount
the subdirectories, but will permit the client to mount the
directories that are required as needed./usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4This next example exports /a so that
two clients from different domains may access that file
system. The -maproot=root option allows root on the remote system to
write data on the exported file system as root. If
-maproot=root is not specified, the
client's root user
will be mapped to the server's nobody account and will be
subject to the access limitations defined for nobody./a -maproot=root host.example.com box.example.orgA client can only be specified once per file system. For
example, if /usr is a single file system,
these entries would be invalid as both entries specify the
same host:# Invalid when /usr is one file system
/usr/src client
/usr/ports clientThe correct format for this situation is to use one
entry:/usr/src /usr/ports clientThe following is an example of a valid export list, where
/usr and /exports
are local file systems:# Export src and ports to client01 and client02, but only
# client01 has root privileges on it
/usr/src /usr/ports -maproot=root client01
/usr/src /usr/ports client02
# The client machines have root and can mount anywhere
# on /exports. Anyone in the world can mount /exports/obj read-only
/exports -alldirs -maproot=root client01 client02
/exports/obj -roTo enable the processes required by the
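The one-entry-per-client rule can be checked mechanically. A rough sketch, assuming simplified lines where the client name is the last field (sample data is the invalid example above):

```shell
# Group exported paths by client so a client that appears on more than
# one line for the same file system stands out.
cat > /tmp/exports.sample <<'EOF'
/usr/src client
/usr/ports client
EOF
awk '{ paths[$NF] = paths[$NF] " " $1 }
     END { for (c in paths) print c ":" paths[c] }' /tmp/exports.sample
# prints: client: /usr/src /usr/ports
```

Seeing both /usr/src and /usr/ports listed for the same client is the cue to merge them onto one line when they live on one file system.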
NFS server at boot time, add these options
to /etc/rc.conf:rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"The server can be started now by running this
command:&prompt.root; service nfsd startWhenever the NFS server is started,
mountd also starts automatically.
However, mountd only reads
/etc/exports when it is started. To make
subsequent /etc/exports edits take effect
immediately, force mountd to reread
it:&prompt.root; service mountd reloadConfiguring the ClientTo enable NFS clients, set this option
in each client's /etc/rc.conf:nfs_client_enable="YES"Then, run this command on each NFS
client:&prompt.root; service nfsclient startThe client now has everything it needs to mount a remote
file system. In these examples, the server's name is
server and the client's name is
client. To mount
/home on
server to the
/mnt mount point on
client:NFSmounting&prompt.root; mount server:/home /mntThe files and directories in
/home will now be available on
client, in the
/mnt directory.To mount a remote file system each time the client boots,
add it to /etc/fstab:server:/home /mnt nfs rw 0 0Refer to &man.fstab.5; for a description of all available
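A sketch of an entry with additional mount options (values are illustrative; see &man.mount.nfs.8; for the supported list):

```
# /etc/fstab fragment: read-only, not mounted at boot, interruptible
server:/usr/ports    /usr/ports    nfs    ro,noauto,intr    0    0
```

With noauto, the file system is only mounted when requested explicitly, which avoids boot delays if the server is unreachable.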
options.LockingSome applications require file locking to operate
correctly. To enable locking, add these lines to
/etc/rc.conf on both the client and
server:rpc_lockd_enable="YES"
rpc_statd_enable="YES"Then start the applications:&prompt.root; service lockd start
&prompt.root; service statd startIf locking is not required on the server, the
NFS client can be configured to lock
locally by including -L when running
mount. Refer to &man.mount.nfs.8;
for further details.
Automating Mounts with &man.amd.8;
Wylie Stilwell
Contributed by
Chern LeeRewritten by amdautomatic mounter daemonThe automatic mounter daemon,
amd, automatically mounts a remote
file system whenever a file or directory within that file
system is accessed. File systems that are inactive for a
period of time will be automatically unmounted by
amd.This daemon provides an alternative to modifying
/etc/fstab to list every client. It
operates by attaching itself as an NFS
server to the /host and
/net directories. When a file is
accessed within one of these directories,
amd looks up the corresponding
remote mount and automatically mounts it.
/net is used to mount an exported file
system from an IP address while
/host is used to mount an export from a
remote hostname. For instance, an attempt to access a file
within /host/foobar/usr would tell
amd to mount the
/usr export on the host
foobar.Mounting an Export with
amdIn this example, showmount -e shows
the exported file systems that can be mounted from the
NFS server,
foobar:&prompt.user; showmount -e foobar
Exports list on foobar:
/usr 10.10.10.0
/a 10.10.10.0
&prompt.user; cd /host/foobar/usrThe output from showmount shows
/usr as an export. When changing
directories to /host/foobar/usr,
amd intercepts the request and
attempts to resolve the hostname
foobar. If successful,
amd automatically mounts the
desired export.To enable amd at boot time, add
this line to /etc/rc.conf:amd_enable="YES"To start amd now:&prompt.root; service amd startCustom flags can be passed to
amd from the
amd_flags variable in /etc/rc.conf. By
default, amd_flags is set to:amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map"The default options with which exports are mounted are
defined in /etc/amd.map. Some of the
more advanced features of amd are
defined in /etc/amd.conf.Consult &man.amd.8; and &man.amd.conf.5; for more
information.Automating Mounts with &man.autofs.5;The &man.autofs.5; automount facility is supported
starting with &os; 10.1-RELEASE. To use the
automounter functionality in older versions of &os;, use
&man.amd.8; instead. This chapter only describes the
&man.autofs.5; automounter.autofsautomounter subsystemThe &man.autofs.5; facility is a common name for several
components that, together, allow for automatic mounting of
remote and local filesystems whenever a file or directory
within that file system is accessed. It consists of the
kernel component, &man.autofs.5;, and several userspace
applications: &man.automount.8;, &man.automountd.8; and
&man.autounmountd.8;. It serves as an alternative for
&man.amd.8; from previous &os; releases. Amd is still
provided for backward compatibility purposes, as the two use
different map formats; the one used by autofs is the same as
with other SVR4 automounters, such as the ones in Solaris,
MacOS X, and Linux.The &man.autofs.5; virtual filesystem is mounted on
specified mountpoints by &man.automount.8;, usually invoked
during boot.Whenever a process attempts to access a file within the
&man.autofs.5; mountpoint, the kernel will notify
the &man.automountd.8; daemon and pause the triggering process.
The &man.automountd.8; daemon will handle kernel requests by
finding the proper map and mounting the filesystem according
to it, then signal the kernel to release the blocked process. The
&man.autounmountd.8; daemon automatically unmounts automounted
filesystems after some time, unless they are still being
used.The primary autofs configuration file is
/etc/auto_master. It assigns individual
maps to top-level mounts. For an explanation of
auto_master and the map syntax, refer to
&man.auto.master.5;.There is a special automounter map mounted on
/net. When a file is accessed within
this directory, &man.autofs.5; looks up the corresponding
remote mount and automatically mounts it. For instance, an
attempt to access a file within
/net/foobar/usr would tell
&man.automountd.8; to mount the /usr export from the host
foobar.Mounting an Export with &man.autofs.5;In this example, showmount -e shows
the exported file systems that can be mounted from the
NFS server,
foobar:&prompt.user; showmount -e foobar
Exports list on foobar:
/usr 10.10.10.0
/a 10.10.10.0
&prompt.user; cd /net/foobar/usrThe output from showmount shows
/usr as an export.
When changing directories to /net/foobar/usr,
&man.automountd.8; intercepts the request and attempts to
resolve the hostname foobar. If successful,
&man.automountd.8; automatically mounts the source
export.To enable &man.autofs.5; at boot time, add this line to
/etc/rc.conf:autofs_enable="YES"Then &man.autofs.5; can be started by running:&prompt.root; service automount start
&prompt.root; service automountd start
&prompt.root; service autounmountd startSince the &man.autofs.5; map format is the same as in other
operating systems, it may be useful to consult
automounter documentation from those systems, such as the Mac
OS X documentation.Consult the &man.automount.8;, &man.automountd.8;,
&man.autounmountd.8;, and &man.auto.master.5; manual pages for
more information.
Network Information System
(NIS)NISSolarisHP-UXAIXLinuxNetBSDOpenBSDyellow pagesNISNetwork Information System (NIS) is
designed to centralize administration of &unix;-like systems
such as &solaris;, HP-UX, &aix;, Linux, NetBSD, OpenBSD, and
&os;. NIS was originally known as Yellow
Pages but the name was changed due to trademark issues. This
is the reason why NIS commands begin with
yp.NISdomainsNIS is a Remote Procedure Call
(RPC)-based client/server system that allows
a group of machines within an NIS domain to
share a common set of configuration files. This permits a
system administrator to set up NIS client
systems with only minimal configuration data and to add, remove,
or modify configuration data from a single location.&os; uses version 2 of the NIS
protocol.NIS Terms and ProcessesTable 28.1 summarizes the terms and important processes
used by NIS:rpcbindportmap
NIS TerminologyTermDescriptionNIS domain nameNIS servers and clients share
an NIS domain name. Typically,
this name does not have anything to do with
DNS.&man.rpcbind.8;This service enables RPC and
must be running in order to run an
NIS server or act as an
NIS client.&man.ypbind.8;This service binds an NIS
client to its NIS server. It will
take the NIS domain name and use
RPC to connect to the server. It
is the core of client/server communication in an
NIS environment. If this service
is not running on a client machine, it will not be
able to access the NIS
server.&man.ypserv.8;This is the process for the
NIS server. If this service stops
running, the server will no longer be able to respond
to NIS requests so hopefully, there
is a slave server to take over. Some non-&os; clients
will not try to reconnect using a slave server and the
ypbind process may need to
be restarted on these
clients.&man.rpc.yppasswdd.8;This process only runs on
NIS master servers. This daemon
allows NIS clients to change their
NIS passwords. If this daemon is
not running, users will have to login to the
NIS master server and change their
passwords there.
Machine TypesNISmaster serverNISslave serverNISclientThere are three types of hosts in an
NIS environment:NIS master serverThis server acts as a central repository for host
configuration information and maintains the
authoritative copy of the files used by all of the
NIS clients. The
passwd, group,
and other various files used by NIS
clients are stored on the master server. While it is
possible for one machine to be an NIS
master server for more than one NIS
domain, this type of configuration will not be covered in
this chapter as it assumes a relatively small-scale
NIS environment.NIS slave serversNIS slave servers maintain copies
of the NIS master's data files in
order to provide redundancy. Slave servers also help to
balance the load of the master server as
NIS clients always attach to the
NIS server which responds
first.NIS clientsNIS clients authenticate
against the NIS server during log
on.Information in many files can be shared using
NIS. The
master.passwd,
group, and hosts
files are commonly shared via NIS.
Whenever a process on a client needs information that would
normally be found in these files locally, it makes a query to
the NIS server that it is bound to
instead.Planning ConsiderationsThis section describes a sample NIS
environment which consists of 15 &os; machines with no
centralized point of administration. Each machine has its own
/etc/passwd and
/etc/master.passwd. These files are kept
in sync with each other only through manual intervention.
Currently, when a user is added to the lab, the process must
be repeated on all 15 machines.The configuration of the lab will be as follows:Machine nameIP addressMachine roleellington10.0.0.2
NIS mastercoltrane10.0.0.3
NIS slavebasie10.0.0.4
Faculty workstationbird10.0.0.5
Client machinecli[1-11]10.0.0.[6-17]Other client machinesIf this is the first time an NIS
scheme is being developed, it should be thoroughly planned
ahead of time. Regardless of network size, several decisions
need to be made as part of the planning process.Choosing a NIS Domain NameNISdomain nameWhen a client broadcasts its requests for information, it
includes the name of the NIS domain that
it is part of. This is how multiple servers on one network
can tell which server should answer which request. Think of
the NIS domain name as the name for a
group of hosts.Some organizations choose to use their Internet domain
name for their NIS domain name. This is
not recommended as it can cause confusion when trying to
debug network problems. The NIS domain
name should be unique within the network and it is helpful
if it describes the group of machines it represents. For
example, the Art department at Acme Inc. might be in the
acme-art NIS domain. This
example will use the domain name
test-domain.However, some non-&os; operating systems require the
NIS domain name to be the same as the
Internet domain name. If one or more machines on the
network have this restriction, the Internet domain name
must be used as the
NIS domain name.Physical Server RequirementsThere are several things to keep in mind when choosing a
machine to use as a NIS server. Since
NIS clients depend upon the availability
of the server, choose a machine that is not rebooted
frequently. The NIS server should
ideally be a stand-alone machine whose sole purpose is to be
an NIS server. If the network is not
heavily used, it is acceptable to put the
NIS server on a machine running other
services. However, if the NIS server
becomes unavailable, it will adversely affect all
NIS clients.Configuring the NIS Master
Server The canonical copies of all NIS files
are stored on the master server. The databases used to store
the information are called NIS maps. In
&os;, these maps are stored in
/var/yp/[domainname] where
[domainname] is the name of the
NIS domain. Since multiple domains are
supported, it is possible to have several directories, one for
each domain. Each domain will have its own independent set of
maps.NIS master and slave servers handle all
NIS requests through &man.ypserv.8;. This
daemon is responsible for receiving incoming requests from
NIS clients, translating the requested
domain and map name to a path to the corresponding database
file, and transmitting data from the database back to the
client.NISserver configurationSetting up a master NIS server can be
relatively straightforward, depending on environmental needs.
Since &os; provides built-in NIS support,
it only needs to be enabled by adding the following lines to
/etc/rc.conf:nisdomainname="test-domain"
nis_server_enable="YES"
nis_yppasswdd_enable="YES" This line sets the NIS domain name
to test-domain.This automates the start up of the
NIS server processes when the system
boots.This enables the &man.rpc.yppasswdd.8; daemon so that
users can change their NIS password
from a client machine.Care must be taken in a multi-server domain where the
server machines are also NIS clients. It
is generally a good idea to force the servers to bind to
themselves rather than allowing them to broadcast bind
requests and possibly become bound to each other. Strange
failure modes can result if one server goes down and others
are dependent upon it. Eventually, all the clients will time
out and attempt to bind to other servers, but the delay
involved can be considerable and the failure mode is still
present since the servers might bind to each other all over
again.A server that is also a client can be forced to bind to a
particular server by adding these additional lines to
/etc/rc.conf:nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"After saving the edits, type
/etc/netstart to restart the network and
apply the values defined in /etc/rc.conf.
Before initializing the NIS maps, start
&man.ypserv.8;:&prompt.root; service ypserv startInitializing the NIS MapsNISmapsNIS maps are generated from the
configuration files in /etc on the
NIS master, with one exception:
/etc/master.passwd. This is to prevent
the propagation of passwords to all the servers in the
NIS domain. Therefore, before the
NIS maps are initialized, configure the
primary password files:&prompt.root; cp /etc/master.passwd /var/yp/master.passwd
&prompt.root; cd /var/yp
&prompt.root; vi master.passwdIt is advisable to remove all entries for system
accounts as well as any user accounts that do not need to be
propagated to the NIS clients, such as
the root and any
other administrative accounts.
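The pruning step can be approximated with a simple filter. A sketch, assuming regular user accounts start at UID 1000 (the cutoff and sample entries are illustrative; field 3 of master.passwd is the numeric UID):

```shell
# Keep only entries with UID >= 1000 from a copy of master.passwd,
# dropping root and other system accounts.
cat > /tmp/master.passwd.sample <<'EOF'
root:*:0:0::0:0:Charlie &:/root:/bin/csh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
jsmith:*:1001:1001::0:0:John Smith:/home/jsmith:/bin/sh
EOF
awk -F: '$3 >= 1000' /tmp/master.passwd.sample
# prints only the jsmith line
```

Review the result by hand afterwards; some sites have legitimate accounts below the cutoff or system accounts above it.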
Ensure that
/var/yp/master.passwd is neither group
nor world readable by setting its permissions to
600.After completing this task, initialize the
NIS maps. &os; includes the
&man.ypinit.8; script to do this. When generating maps
for the master server, include -m and
specify the NIS domain name:ellington&prompt.root; ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
ellington is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server : ellington
next host to add: coltrane
next host to add: ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct? [y/n: y] y
[..output from map generation..]
NIS Map update completed.
ellington has been setup as an YP master server without any errors.This will create /var/yp/Makefile
from /var/yp/Makefile.dist. By
default, this file assumes that the environment has a
single NIS server with only &os; clients.
Since test-domain has a slave server,
edit this line in /var/yp/Makefile so
that it begins with a comment
(#):NOPUSH = "True"Adding New UsersEvery time a new user is created, the user account must
be added to the master NIS server and the
NIS maps rebuilt. Until this occurs, the
new user will not be able to login anywhere except on the
NIS master. For example, to add the new
user jsmith to the
test-domain domain, run these commands on
the master server:&prompt.root; pw useradd jsmith
&prompt.root; cd /var/yp
&prompt.root; make test-domainThe user could also be added using adduser
jsmith instead of pw useradd
jsmith.Setting up a NIS Slave ServerNISslave serverTo set up an NIS slave server, log on
to the slave server and edit /etc/rc.conf
as for the master server. Do not generate any
NIS maps, as these already exist on the
master server. When running ypinit on the
slave server, use -s (for slave) instead of
-m (for master). This option requires the
name of the NIS master in addition to the
domain name, as seen in this example:coltrane&prompt.root; ypinit -s ellington test-domain
Server Type: SLAVE Domain: test-domain Master: ellington
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred
coltrane has been setup as an YP slave server without any errors.
Remember to update map ypservers on ellington.This will generate a directory on the slave server called
/var/yp/test-domain which contains copies
of the NIS master server's maps. Adding
these /etc/crontab entries on each slave
server will force the slaves to sync their maps with the maps
on the master server:20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuidThese entries are not mandatory because the master server
automatically attempts to push any map changes to its slaves.
However, since clients may depend upon the slave server to
provide correct password information, it is recommended to
force frequent password map updates. This is especially
important on busy networks where map updates might not always
complete.To finish the configuration, run
/etc/netstart on the slave server in order
to start the NIS services.Setting Up an NIS ClientAn NIS client binds to an
NIS server using &man.ypbind.8;. This
daemon broadcasts RPC requests on the local network. These
requests specify the domain name configured on the client. If
an NIS server in the same domain receives
one of the broadcasts, it will respond to
ypbind, which will record the
server's address. If there are several servers available,
the client will use the address of the first server to respond
and will direct all of its NIS requests to
that server. The client will automatically
ping the server on a regular basis
to make sure it is still available. If it fails to receive a
reply within a reasonable amount of time,
ypbind will mark the domain as
unbound and begin broadcasting again in the hopes of locating
another server.NISclient configurationTo configure a &os; machine to be an
NIS client:Edit /etc/rc.conf and add the
following lines in order to set the
NIS domain name and start
&man.ypbind.8; during network startup:nisdomainname="test-domain"
nis_client_enable="YES"To import all possible password entries from the
NIS server, use
vipw to remove all user accounts
except one from /etc/master.passwd.
When removing the accounts, keep in mind that at least one
local account should remain and this account should be a
member of wheel. If there is a
problem with NIS, this local account
can be used to log in remotely, become the superuser, and
fix the problem. Before saving the edits, add the
following line to the end of the file:+:::::::::This line configures the client to provide anyone with
a valid account in the NIS server's
password maps an account on the client. There are many
ways to configure the NIS client by
modifying this line. One method is described in . For more detailed
reading, refer to the book
Managing NFS and NIS, published by
O'Reilly Media.To import all possible group entries from the
NIS server, add this line to
/etc/group:+:*::To start the NIS client immediately,
execute the following commands as the superuser:&prompt.root; /etc/netstart
&prompt.root; service ypbind startAfter completing these steps, running
ypcat passwd on the client should show
the server's passwd map.NIS SecuritySince RPC is a broadcast-based service,
any system running ypbind within
the same domain can retrieve the contents of the
NIS maps. To prevent unauthorized
transactions, &man.ypserv.8; supports a feature called
securenets which can be used to restrict access
to a given set of hosts. By default, this information is
stored in /var/yp/securenets, unless
&man.ypserv.8; is started with an
alternate path. This file contains entries that consist of a
network specification and a network mask separated by white
space. Lines starting with # are
considered to be comments. A sample
securenets might look like this:# allow connections from local host -- mandatory
127.0.0.1 255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0 255.255.240.0If &man.ypserv.8; receives a request from an address that
matches one of these rules, it will process the request
normally. If the address fails to match a rule, the request
will be ignored and a warning message will be logged. If the
securenets does not exist,
ypserv will allow connections from any
host. TCP Wrapper is an alternate mechanism
for providing access control instead of
securenets. While either access control
mechanism adds some security, they are both vulnerable to
IP spoofing attacks. All
NIS-related traffic should be blocked at
the firewall.Servers using securenets
may fail to serve legitimate NIS clients
with archaic TCP/IP implementations. Some of these
implementations set all host bits to zero when doing
broadcasts or fail to observe the subnet mask when
calculating the broadcast address. While some of these
problems can be fixed by changing the client configuration,
other problems may force the retirement of these client
systems or the abandonment of
securenets.TCP WrapperThe use of TCP Wrapper
increases the latency of the NIS server.
The additional delay may be long enough to cause timeouts in
client programs, especially in busy networks with slow
NIS servers. If one or more clients suffer
from latency, convert those clients into
NIS slave servers and force them to bind to
themselves.Barring Some UsersIn this example, the basie
system is a faculty workstation within the
NIS domain. The
passwd map on the master
NIS server contains accounts for both
faculty and students. This section demonstrates how to
allow faculty logins on this system while refusing student
logins.To prevent specified users from logging on to a system,
even if they are present in the NIS
database, use vipw to add
-username with
the correct number of colons towards the end of
/etc/master.passwd on the client,
where username is the username of
a user to bar from logging in. The line with the blocked
user must be before the + line that
allows NIS users. In this example,
bill is barred
from logging on to basie:basie&prompt.root; cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
-bill:::::::::
+:::::::::
basie&prompt.root;Using NetgroupsnetgroupsBarring specified users from logging on to individual
systems becomes unscalable on larger networks and quickly
loses the main benefit of NIS:
centralized administration.Netgroups were developed to handle large, complex networks
with hundreds of users and machines. Their use is comparable
to &unix; groups, where the main difference is the lack of a
numeric ID and the ability to define a netgroup by including
both user accounts and other netgroups.To expand on the example used in this chapter, the
NIS domain will be extended to add the
users and systems shown in Tables 28.2 and 28.3:
Additional UsersUser Name(s)Descriptionalpha,
betaIT department employeescharlie, deltaIT department apprenticesecho,
foxtrott,
golf,
...employeesable,
baker,
...interns
Additional SystemsMachine Name(s)Descriptionwar,
death,
famine,
pollutionOnly IT employees are allowed to log onto these
servers.pride,
greed,
envy,
wrath,
lust,
slothAll members of the IT department are allowed to
log in to these servers.one,
two,
three,
four,
...Ordinary workstations used by
employees.trashcanA very old machine without any critical data.
Even interns are allowed to use this system.
When using netgroups to configure this scenario, each user
is assigned to one or more netgroups and logins are then
allowed or forbidden for all members of the netgroup. When
adding a new machine, login restrictions must be defined for
all netgroups. When a new user is added, the account must be
added to one or more netgroups. If the
NIS setup is planned carefully, only one
central configuration file needs modification to grant or deny
access to machines.The first step is the initialization of the
NIS netgroup map. In
&os;, this map is not created by default. On the
NIS master server, use an editor to create
a map named /var/yp/netgroup.This example creates four netgroups to represent IT
employees, IT apprentices, employees, and interns:IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
USERS (,echo,test-domain) (,foxtrott,test-domain) \
(,golf,test-domain)
INTERNS (,able,test-domain) (,baker,test-domain)Each entry configures a netgroup. The first column in an
entry is the name of the netgroup. Each set of parentheses
represents either a group of one or more users or the name of
another netgroup. When specifying a user, the three
comma-delimited fields inside each group represent:The name of the host(s) where the other fields
representing the user are valid. If a hostname is not
specified, the entry is valid on all hosts.The name of the account that belongs to this
netgroup.The NIS domain for the account.
Accounts may be imported from other NIS
domains into a netgroup.If a group contains multiple users, separate each user
with whitespace. Additionally, each field may contain
wildcards. See &man.netgroup.5; for details.netgroupsNetgroup names longer than 8 characters should not be
used. The names are case-sensitive and using capital letters
for netgroup names is an easy way to distinguish between user,
machine and netgroup names.Some non-&os; NIS clients cannot
handle netgroups containing more than 15 entries. This
limit may be circumvented by creating several sub-netgroups
with 15 users or fewer and a real netgroup consisting of the
sub-netgroups, as seen in this example:BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]
BIGGRP2 (,joe16,domain) (,joe17,domain) [...]
BIGGRP3 (,joe31,domain) (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3Repeat this process if more than 225 (15 times 15) users
exist within a single netgroup.To activate and distribute the new
NIS map:ellington&prompt.root; cd /var/yp
ellington&prompt.root; makeThis will generate the three NIS maps
netgroup,
netgroup.byhost and
netgroup.byuser. Use the map key option
of &man.ypcat.1; to check if the new NIS
maps are available:ellington&prompt.user; ypcat -k netgroup
ellington&prompt.user; ypcat -k netgroup.byhost
ellington&prompt.user; ypcat -k netgroup.byuserThe output of the first command should resemble the
contents of /var/yp/netgroup. The second
command only produces output if host-specific netgroups were
created. The third command is used to get the list of
netgroups for a user.To configure a client, use &man.vipw.8; to specify the
name of the netgroup. For example, on the server named
war, replace this line:+:::::::::with+@IT_EMP:::::::::This specifies that only the users defined in the netgroup
IT_EMP will be imported into this system's
password database and only those users are allowed to log in to
this system.This configuration also applies to the
~ function of the shell and all routines
which convert between user names and numerical user IDs. In
other words,
cd ~user will
not work, ls -l will show the numerical ID
instead of the username, and find . -user joe
-print will fail with the message
No such user. To fix this, import all
user entries without allowing them to log in to the servers.
This can be achieved by adding an extra line:+:::::::::/sbin/nologinThis line configures the client to import all entries but
to replace the shell in those entries with
/sbin/nologin.Make sure that the extra line is placed
after +@IT_EMP:::::::::. Otherwise, all user
accounts imported from NIS will have
/sbin/nologin as their login
shell and no one will be able to log in to the system.To configure the less important servers, replace the old
+::::::::: on the servers with these
lines:+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologinThe corresponding lines for the workstations
would be:+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologinNIS supports the creation of netgroups from other
netgroups which can be useful if the policy regarding user
access changes. One possibility is the creation of role-based
netgroups. For example, one might create a netgroup called
BIGSRV to define the login restrictions for
the important servers, another netgroup called
SMALLSRV for the less important servers,
and a third netgroup called USERBOX for the
workstations. Each of these netgroups contains the netgroups
that are allowed to log in to these machines. The new
entries for the NIS
netgroup map would look like this:BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERSThis method of defining login restrictions works
reasonably well when it is possible to define groups of
machines with identical restrictions. Unfortunately, this is
the exception and not the rule. Most of the time, the ability
to define login restrictions on a per-machine basis is
required.Machine-specific netgroup definitions are another
way to deal with policy changes. In this
scenario, the /etc/master.passwd of each
system contains two lines starting with +.
The first line adds a netgroup with the accounts allowed to
log in to this machine and the second line adds all other
accounts with /sbin/nologin as shell. It
is recommended to use the ALL-CAPS version of
the hostname as the name of the netgroup:+@BOXNAME:::::::::
+:::::::::/sbin/nologinOnce this task is completed on all the machines, there is
no longer a need to modify the local versions of
/etc/master.passwd ever again. All
further changes can be handled by modifying the
NIS map. Here is an example of a possible
netgroup map for this scenario:# Define groups of users first
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
DEPT1 (,echo,test-domain) (,foxtrott,test-domain)
DEPT2 (,golf,test-domain) (,hotel,test-domain)
DEPT3 (,india,test-domain) (,juliet,test-domain)
ITINTERN (,kilo,test-domain) (,lima,test-domain)
D_INTERNS (,able,test-domain) (,baker,test-domain)
#
# Now, define some groups based on roles
USERS DEPT1 DEPT2 DEPT3
BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERS
#
# And a group for a special task
# Allow echo and golf to access our anti-virus-machine
SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR BIGSRV
FAMINE BIGSRV
# User india needs access to this server
POLLUTION BIGSRV (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH IT_EMP
#
# The anti-virus-machine mentioned above
ONE SECURITY
#
# Restrict a machine to a single user
TWO (,hotel,test-domain)
# [...more groups to follow]It may not always be advisable
to use machine-based netgroups. When deploying a few dozen
or even hundreds of systems,
role-based netgroups instead of machine-based netgroups may be
used to keep the size of the NIS map within
reasonable limits.Password FormatsNISpassword formatsNIS requires that all hosts within an
NIS domain use the same format for
encrypting passwords. If users have trouble authenticating on
an NIS client, it may be due to a differing
password format. In a heterogeneous network, the format must
be supported by all operating systems, where
DES is the lowest common standard.To check which format a server or client is using, look
at this section of
/etc/login.conf:default:\
:passwd_format=des:\
:copyright=/etc/COPYRIGHT:\
[Further entries elided]In this example, the system is using the
DES format. Other possible values are
blf for Blowfish and md5
for MD5 encrypted passwords.If the format on a host needs to be edited to match the
one being used in the NIS domain, the
login capability database must be rebuilt after saving the
change:&prompt.root; cap_mkdb /etc/login.confThe format of passwords for existing user accounts will
not be updated until each user changes their password
after the login capability database is
rebuilt.
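The check-and-rebuild procedure above can be sketched in shell. This is only an illustration: it operates on a sample copy of the file so it is safe to try anywhere (the sample path is arbitrary; on a real host the file is /etc/login.conf):

```shell
# Create a sample login.conf fragment to inspect; a real system would
# read /etc/login.conf directly.
cat > /tmp/login.conf.sample <<'EOF'
default:\
        :passwd_format=des:\
        :copyright=/etc/COPYRIGHT:
EOF
# Extract the current password format (des, blf, or md5):
fmt=$(grep -o 'passwd_format=[a-z0-9]*' /tmp/login.conf.sample | cut -d= -f2)
echo "current format: $fmt"
# On a real host, after editing /etc/login.conf, rebuild the
# login capability database:
# cap_mkdb /etc/login.conf
```

Remember that existing password hashes are not converted by this step; each user must change their password after the database is rebuilt.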
- Lightweight Directory Access Protocol
- (LDAP)
+ Lightweight Directory Access Protocol
+ (LDAP)
-
- Tom
- Rhodes
-
+
+ Tom
+ Rhodes
+ Written by LDAPThe Lightweight Directory Access Protocol
(LDAP) is an application layer protocol used
to access, modify, and authenticate objects using a distributed
directory information service. Think of it as a phone or record
book which stores several levels of hierarchical, homogeneous
information. It is used in Active Directory and
OpenLDAP networks and allows users to
access several levels of internal information using a
single account. For example, email authentication, pulling
employee contact information, and internal website
authentication might all make use of a single user account in
the LDAP server's record base.This section provides a quick start guide for configuring an
LDAP server on a &os; system. It assumes
that the administrator already has a design plan which includes
the type of information to store, what that information will be
used for, which users should have access to that information,
and how to secure this information from unauthorized
access.LDAP Terminology and StructureLDAP uses several terms which should be
understood before starting the configuration. All directory
entries consist of a group of
attributes. Each of these attribute
sets contains a unique identifier known as a
Distinguished Name
(DN) which is normally built from several
other attributes such as the common or
Relative Distinguished Name
(RDN). Similar to how directories have
absolute and relative paths, consider a DN
as an absolute path and the RDN as the
relative path.An example LDAP entry looks like the
following. This example searches for the entry for the
specified user account (uid),
organizational unit (ou), and organization
(o):&prompt.user; ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1This example entry shows the values for the
dn, mail,
cn, uid, and
telephoneNumber attributes. The
cn attribute is the
RDN.More information about LDAP and its
terminology can be found at http://www.openldap.org/doc/admin24/intro.html.Configuring an LDAP ServerLDAP Server&os; does not provide a built-in LDAP
server. Begin the configuration by installing the net/openldap24-server package or port.
Since the port has many configurable options, it is
recommended to review the default options to see if
the package is sufficient, and to instead compile the port if
any options need to be changed. In most cases, the defaults
are fine. However, if SQL support is needed, this option must
be enabled and the port compiled using the instructions in
.Next, create the directories to hold the data and to store
the certificates:&prompt.root; mkdir /var/db/openldap-data
&prompt.root; mkdir /usr/local/etc/openldap/privateCopy over the database configuration file:&prompt.root; cp /usr/local/etc/openldap/DB_CONFIG.example /var/db/openldap-data/DB_CONFIGThe next phase is to configure the certificate authority.
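Since the key material stored under the private directory must not be readable by ordinary users, the directory can be locked down to root only. A minimal sketch, using a scratch path so the commands are safe to try (on the server, the real path is /usr/local/etc/openldap/private):

```shell
# Create a demo directory and restrict it to the owner (root on a
# real server); 0700 means read/write/execute for the owner only.
dir=${TMPDIR:-/tmp}/openldap-private-demo
mkdir -p "$dir"
chmod 0700 "$dir"
# Verify the mode (GNU stat first, BSD stat as a fallback):
mode=$(stat -c '%a' "$dir" 2>/dev/null || stat -f '%Lp' "$dir")
echo "mode: $mode"
```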
The following commands must be executed from
/usr/local/etc/openldap/private. This is
important as the file permissions need to be restrictive and
users should not have access to these files. To create the
certificate authority, start with this command and follow the
prompts:&prompt.root; openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crtThe entries for the prompts may be generic
except for the
Common Name. This entry must be
different from the system hostname. If
this will be a self-signed certificate, prefix the hostname
with CA for certificate authority.The next task is to create a certificate signing request
and a private key. Input this command and follow the
prompts:&prompt.root; openssl req -days 365 -nodes -new -keyout server.key -out server.csrDuring the certificate generation process, be sure to
correctly set the Common Name attribute.
Once complete, sign the key:&prompt.root; openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserialThe final part of the certificate generation process is to
generate and sign the client certificates:&prompt.root; openssl req -days 365 -nodes -new -keyout client.key -out client.csr
&prompt.root; openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.keyRemember to use the same Common Name
attribute when prompted. When finished, ensure that a total
of eight (8) new files have been generated through the
preceding commands. If so, the next step is to edit
/usr/local/etc/openldap/slapd.conf and
add the following options:TLSCipherSuite HIGH:MEDIUM:+SSLv3
TLSCertificateFile /usr/local/etc/openldap/server.crt
TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key
TLSCACertificateFile /usr/local/etc/openldap/ca.crtThen, edit
/usr/local/etc/openldap/ldap.conf and add
the following lines:TLS_CACERT /usr/local/etc/openldap/ca.crt
TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3While editing this file, uncomment the following entries
and set them to the desired values: BASE,
SIZELIMIT, and
TIMELIMIT. Set the URI to
contain ldap:// and
ldaps://. Then, add two entries pointing to
the certificate authority. When finished, the entries should
look similar to the following:BASE dc=example,dc=com
URI ldap:// ldaps://
SIZELIMIT 12
TIMELIMIT 15
TLS_CACERT /usr/local/etc/openldap/ca.crt
TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3The default password for the server should then be
changed:&prompt.root; slappasswd -h "{SHA}" >> /usr/local/etc/openldap/slapd.confThis command will prompt for the password and, if the
process does not fail, a password hash will be added to the
end of slapd.conf. Several hashing
formats are supported. Refer to the manual page for
slappasswd for more information.Next, edit
/usr/local/etc/openldap/slapd.conf and
add the following lines:password-hash {sha}
allow bind_v2The suffix in this file must be updated
to match the BASE used in
/usr/local/etc/openldap/ldap.conf, and rootdn
should also be set. A recommended
value for rootdn is something like
cn=Manager,dc=example,dc=com. Before saving this file, place
the rootpw in front of the password output
from slappasswd and delete the old
rootpw. The end result should
look similar to this:TLSCipherSuite HIGH:MEDIUM:+SSLv3
TLSCertificateFile /usr/local/etc/openldap/server.crt
TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key
TLSCACertificateFile /usr/local/etc/openldap/ca.crt
rootpw {SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=Finally, enable the OpenLDAP
service in /etc/rc.conf and set the
URI:slapd_enable="YES"
slapd_flags="-4 -h ldaps:///"At this point the server can be started and tested:&prompt.root; service slapd startIf everything is configured correctly, a search of the
directory should show a successful connection with a single
response as in this example:&prompt.root; ldapsearch -Z
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 3
result: 32 No such object
# numResponses: 1If the command fails and the configuration looks
correct, stop the slapd service and
restart it with debugging options:&prompt.root; service slapd stop
&prompt.root; /usr/local/libexec/slapd -d -1Once the service is responding, the directory can be
populated using ldapadd. In this example,
a file containing this list of users is first created. Each
user should use the following format:dn: dc=example,dc=com
objectclass: dcObject
objectclass: organization
o: Example
dc: Example
dn: cn=Manager,dc=example,dc=com
objectclass: organizationalRole
cn: ManagerTo import this file, specify the file name. The following
command will prompt for the password specified earlier and the
output should look something like this:&prompt.root; ldapadd -Z -D "cn=Manager,dc=example,dc=com" -W -f import.ldif
Enter LDAP Password:
adding new entry "dc=example,dc=com"
adding new entry "cn=Manager,dc=example,dc=com"Verify the data was added by issuing a search on the
server using ldapsearch:&prompt.user; ldapsearch -Z
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example
dc: Example
# Manager, example.com
dn: cn=Manager,dc=example,dc=com
objectClass: organizationalRole
cn: Manager
# search result
search: 3
result: 0 Success
# numResponses: 3
# numEntries: 2At this point, the server should be configured and
functioning properly.Dynamic Host Configuration Protocol
(DHCP)Dynamic Host Configuration ProtocolDHCPInternet Systems Consortium (ISC)The Dynamic Host Configuration Protocol
(DHCP) allows a system to connect to a
network in order to be assigned the necessary addressing
information for communication on that network. &os; includes
the OpenBSD version of dhclient which is used
by the client to obtain the addressing information. &os; does
not install a DHCP server, but several
servers are available in the &os; Ports Collection. The
DHCP protocol is fully described in RFC
2131.
Informational resources are also available at isc.org/downloads/dhcp/.This section describes how to use the built-in
DHCP client. It then describes how to
install and configure a DHCP server.In &os;, the &man.bpf.4; device is needed by both the
DHCP server and DHCP
client. This device is included in the
GENERIC kernel that is installed with
&os;. Users who prefer to create a custom kernel need to keep
this device if DHCP is used.It should be noted that bpf also
allows privileged users to run network packet sniffers on
that system.Configuring a DHCP ClientDHCP client support is included in the
&os; installer, making it easy to configure a newly installed
system to automatically receive its networking addressing
information from an existing DHCP server.
Refer to for examples of
network configuration.UDPWhen dhclient is executed on the client
machine, it begins broadcasting requests for configuration
information. By default, these requests use
UDP port 68. The server replies on
UDP port 67, giving the client an
IP address and other relevant network
information such as a subnet mask, default gateway, and
DNS server addresses. This information is
in the form of a DHCP
lease and is valid for a configurable time.
This allows stale IP addresses for clients
no longer connected to the network to be reused automatically.
DHCP clients can obtain a great deal of
information from the server. An exhaustive list may be found
in &man.dhcp-options.5;.By default, when a &os; system boots, its
DHCP client runs in the background, or
asynchronously. Other startup scripts
continue to run while the DHCP process
completes, which speeds up system startup.Background DHCP works well when the
DHCP server responds quickly to the
client's requests. However, DHCP may take
a long time to complete on some systems. If network services
attempt to run before DHCP has assigned the
network addressing information, they will fail. Using
DHCP in synchronous
mode prevents this problem as it pauses startup until the
DHCP configuration has completed.This line in /etc/rc.conf is used to
configure background or asynchronous mode:ifconfig_fxp0="DHCP"This line may already exist if the system was configured
to use DHCP during installation. Replace
the fxp0 shown in these examples
with the name of the interface to be dynamically configured,
as described in .To instead configure the system to use synchronous mode,
and to pause during startup while DHCP
completes, use
SYNCDHCP:ifconfig_fxp0="SYNCDHCP"Additional client options are available. Search for
dhclient in &man.rc.conf.5; for
details.DHCPconfiguration filesThe DHCP client uses the following
files:/etc/dhclient.confThe configuration file used by
dhclient. Typically, this file
contains only comments as the defaults are suitable for
most clients. This configuration file is described in
&man.dhclient.conf.5;./sbin/dhclientMore information about the command itself can
be found in &man.dhclient.8;./sbin/dhclient-scriptThe
&os;-specific DHCP client configuration
script. It is described in &man.dhclient-script.8;, but
should not need any user modification to function
properly./var/db/dhclient.leases.interfaceThe DHCP client keeps a database of
valid leases in this file, which is written as a log and
is described in &man.dhclient.leases.5;.Installing and Configuring a DHCP
ServerThis section demonstrates how to configure a &os; system
to act as a DHCP server using the Internet
Systems Consortium (ISC) implementation of
the DHCP server. This implementation and
its documentation can be installed using the
net/isc-dhcp42-server package or
port.DHCPserverDHCPinstallationThe installation of
net/isc-dhcp42-server installs a sample
configuration file. Copy
/usr/local/etc/dhcpd.conf.example to
/usr/local/etc/dhcpd.conf and make any
edits to this new file.DHCPdhcpd.confThe configuration file is composed of declarations for
subnets and hosts which define the information that is
provided to DHCP clients. For example,
these lines configure the following:option domain-name "example.org";
option domain-name-servers ns1.example.org;
option subnet-mask 255.255.255.0;
default-lease-time 600;
max-lease-time 72400;
ddns-update-style none;
subnet 10.254.239.0 netmask 255.255.255.224 {
range 10.254.239.10 10.254.239.20;
option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;
}
host fantasia {
hardware ethernet 08:00:07:26:c0:a5;
fixed-address fantasia.fugue.com;
}This option specifies the default search domain that
will be provided to clients. Refer to
&man.resolv.conf.5; for more information.This option specifies a comma-separated list of
DNS servers that the client should use.
They can be listed by their Fully Qualified Domain Names
(FQDN), as seen in the example, or by
their IP addresses.The subnet mask that will be provided to
clients.The default lease expiry time in seconds. A client
can be configured to override this value. The maximum allowed length of time, in seconds, for a
lease. Should a client request a longer lease, a lease
will still be issued, but it will only be valid for
max-lease-time.The default of none disables dynamic
DNS updates. Changing this to interim
configures the DHCP server to update a
DNS server whenever it hands out a
lease so that the DNS server knows
which IP addresses are associated with
which computers in the network. Do not change the default
setting unless the DNS server has been
configured to support dynamic
DNS.This line creates a pool of available
IP addresses which are reserved for
allocation to DHCP clients. The range
of addresses must be valid for the network or subnet
specified in the previous line.Declares the default gateway that is valid for the
network or subnet specified before the opening
{ bracket.Specifies the hardware MAC address
of a client so that the DHCP server can
recognize the client when it makes a request.Specifies that this host should always be given the
same IP address. Using the hostname is
correct, since the DHCP server will
resolve the hostname before returning the lease
information.This configuration file supports many more options. Refer
to dhcpd.conf(5), installed with the server, for details and
examples.Once the configuration of dhcpd.conf
is complete, enable the DHCP server in
/etc/rc.conf:dhcpd_enable="YES"
dhcpd_ifaces="dc0"Replace the dc0 with the interface (or
interfaces, separated by whitespace) that the
DHCP server should listen on for
DHCP client requests.Start the server by issuing the following command:&prompt.root; service isc-dhcpd startAny future changes to the configuration of the server will
require the dhcpd service to be
stopped and then started using &man.service.8;.The DHCP server uses the following
files. Note that the manual pages are installed with the
server software.DHCPconfiguration files/usr/local/sbin/dhcpdMore information about the
dhcpd server can be found in
dhcpd(8)./usr/local/etc/dhcpd.confThe server configuration file needs to contain all the
information that should be provided to clients, along with
information regarding the operation of the server. This
configuration file is described in dhcpd.conf(5)./var/db/dhcpd.leasesThe DHCP server keeps a database of
leases it has issued in this file, which is written as a
log. Refer to dhcpd.leases(5), which gives a slightly
longer description./usr/local/sbin/dhcrelayThis daemon is used in advanced environments where one
DHCP server forwards a request from a
client to another DHCP server on a
separate network. If this functionality is required,
install the net/isc-dhcp42-relay
package or port. The installation includes dhcrelay(8)
which provides more detail.Domain Name System (DNS)DNSDomain Name System (DNS) is the protocol
through which domain names are mapped to IP
addresses, and vice versa. DNS is
coordinated across the Internet through a somewhat complex
system of authoritative root, Top Level Domain
(TLD), and other smaller-scale name servers,
which host and cache individual domain information. It is not
necessary to run a name server to perform
DNS lookups on a system.BINDIn &os; 10, the Berkeley Internet Name Domain
(BIND) has been removed from the base system
and replaced with Unbound. Unbound as configured in the &os;
Base is a local caching resolver. BIND is
still available from The Ports Collection as dns/bind99 or dns/bind98. In &os; 9 and lower,
BIND is included in &os; Base. The &os;
version provides enhanced security features, a new file system
layout, and automated &man.chroot.8; configuration.
BIND is maintained by the Internet Systems
Consortium.resolverreverse
DNSroot zoneThe following table describes some of the terms associated
with DNS:
DNS TerminologyTermDefinitionForward DNSMapping of hostnames to IP
addresses.OriginRefers to the domain covered in a particular zone
file.named, BINDCommon names for the BIND name server package
within &os;.ResolverA system process through which a machine queries
a name server for zone information.Reverse DNSMapping of IP addresses to
hostnames.Root zoneThe beginning of the Internet zone hierarchy. All
zones fall under the root zone, similar to how all files
in a file system fall under the root directory.ZoneAn individual domain, subdomain, or portion of the
DNS administered by the same
authority.
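The hierarchy summarized in the table can be illustrated by walking a hostname from its most specific label down to the root zone. This is only a sketch of the naming structure, not a DNS query; the hostname is arbitrary:

```shell
# Walk a hostname label by label: www.example.org. falls under
# example.org., which falls under org., which falls under the
# root zone ".".
name="www.example.org"
zones=""
suffix="$name"
while [ -n "$suffix" ]; do
  zones="$zones ${suffix}."
  case "$suffix" in
    *.*) suffix="${suffix#*.}" ;;   # strip the leftmost label
    *)   suffix="" ;;               # last label reached
  esac
done
zones="$zones ."   # finally, the root zone itself
echo "$zones"
```

Each name printed is a zone that could, in principle, be administered by a different authority.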
zonesexamplesExamples of zones:. is how the root zone is
usually referred to in documentation.org. is a Top Level Domain
(TLD) under the root zone.example.org. is a zone
under the org.
TLD.1.168.192.in-addr.arpa is a
zone referencing all IP addresses which
fall under the 192.168.1.*
IP address space.As one can see, the more specific part of a hostname
appears to its left. For example, example.org. is more
specific than org., just as
org. is more specific than the root
zone. The layout of each part of a hostname is much like a file
system: the /dev directory falls within the
root, and so on.Reasons to Run a Name ServerName servers generally come in two forms: authoritative
name servers, and caching (also known as resolving) name
servers.An authoritative name server is needed when:One wants to serve DNS information
to the world, replying authoritatively to queries.A domain, such as example.org, is
registered and IP addresses need to be
assigned to hostnames under it.An IP address block requires
reverse DNS entries
(IP to hostname).A backup or second name server, called a slave, will
reply to queries.A caching name server is needed when:A local DNS server may cache and
respond more quickly than querying an outside name
server.When one queries for www.FreeBSD.org, the
resolver usually queries the uplink ISP's
name server, and retrieves the reply. With a local, caching
DNS server, the query only has to be made
once to the outside world by the caching
DNS server. Additional queries will not
have to go outside the local network, since the information is
cached locally.DNS Server Configuration in &os; 10.0
and LaterIn &os; 10.0, BIND has been
replaced with Unbound.
Unbound is a validating caching
resolver only. If an authoritative server is needed, many are
available from the Ports Collection.Unbound is provided in the &os;
base system. By default, it will provide
DNS resolution to the local machine only.
While the base system package can be configured to provide
resolution services beyond the local machine, it is
recommended that such requirements be addressed by installing
Unbound from the &os; Ports
Collection.To enable Unbound, add the
following to /etc/rc.conf:local_unbound_enable="YES"Any existing nameservers in
/etc/resolv.conf will be configured as
forwarders in the new Unbound
configuration.If any of the listed nameservers do not support
DNSSEC, local DNS
resolution will fail. Be sure to test each nameserver and
remove any that fail the test. The following command will
show the trust tree or a failure for a nameserver running on
192.168.1.1:&prompt.user; drill -S FreeBSD.org @192.168.1.1Once each nameserver is confirmed to support
DNSSEC, start
Unbound:&prompt.root; service local_unbound onestartThis will take care of updating
/etc/resolv.conf so that queries for
DNSSEC-secured domains will now work. For
example, run the following to validate the FreeBSD.org
DNSSEC trust tree:&prompt.user; drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A
DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
|---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
|---freebsd.org. (DS keytag: 32659 digest type: 2)
|---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
|---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
|---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
|---org. (DS keytag: 21366 digest type: 1)
| |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
| |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
|---org. (DS keytag: 21366 digest type: 2)
|---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
|---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successfulDNS Server Configuration in &os;
9.XIn &os;, the BIND daemon is called
named.FileDescription&man.named.8;The BIND daemon.&man.rndc.8;Name server control utility./etc/namedbDirectory where BIND zone information
resides./etc/namedb/named.confConfiguration file of the daemon.Depending on how a given zone is configured on the server,
the files related to that zone can be found in the
master,
slave, or
dynamic subdirectories
of the /etc/namedb
directory. These files contain the DNS
information that will be given out by the name server in
response to queries.
Starting BIND
BIND
starting
Since BIND is installed by default, configuring it is
relatively simple.
The default named configuration
is that of a basic resolving name server, running in a
&man.chroot.8; environment, and restricted to listening on the
local IPv4 loopback address (127.0.0.1). To start the server
one time with this configuration, use the following
command:
&prompt.root; service named onestart
To ensure the named daemon is
started at boot each time, put the following line into
/etc/rc.conf:
named_enable="YES"
There are many configuration options for
/etc/namedb/named.conf that are beyond
the scope of this document. Other startup options
for named on &os; can be found in
the named_*
flags in /etc/defaults/rc.conf and in
&man.rc.conf.5;. The
section is also a good
read.
Configuration Files
BIND
configuration files
Configuration files for named
currently reside in the /etc/namedb
directory and will need modification before use unless all
that is needed is a simple resolver. This is where most of
the configuration will be performed.
/etc/namedb/named.conf
// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) man pages, and the documentation
// in /usr/share/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// understand the hairy details of how DNS works. Even with
// simple mistakes, you can break connectivity for affected parties,
// or cause huge amounts of useless Internet traffic.
options {
// All file and path names are relative to the chroot directory,
// if any, and should be fully qualified.
directory "/etc/namedb/working";
pid-file "/var/run/named/pid";
dump-file "/var/dump/named_dump.db";
statistics-file "/var/stats/named.stats";
// If named is being used only as a local resolver, this is a safe default.
// For named to be accessible to the network, comment this option, specify
// the proper IP address, or delete this option.
listen-on { 127.0.0.1; };
// If you have IPv6 enabled on this system, uncomment this option for
// use as a local resolver. To give access to the network, specify
// an IPv6 address, or the keyword "any".
// listen-on-v6 { ::1; };
// These zones are already covered by the empty zones listed below.
// If you remove the related empty zones below, comment these lines out.
disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
// If you've got a DNS server around at your upstream provider, enter
// its IP address here, and enable the line below. This will make you
// benefit from its cache, thus reduce overall DNS traffic in the Internet.
/*
forwarders {
127.0.0.1;
};
*/
// If the 'forwarders' clause is not empty the default is to 'forward first'
// which will fall back to sending a query from your local server if the name
// servers in 'forwarders' do not have the answer. Alternatively you can
// force your name server to never initiate queries of its own by enabling the
// following line:
// forward only;
// If you wish to have forwarding configured automatically based on
// the entries in /etc/resolv.conf, uncomment the following line and
// set named_auto_forward=yes in /etc/rc.conf. You can also enable
// named_auto_forward_only (the effect of which is described above).
// include "/etc/namedb/auto_forward.conf";
Just as the comment says, to benefit from an uplink's
cache, forwarders can be enabled here.
Under normal circumstances, a name server will recursively
query the Internet looking at certain name servers until it
finds the answer it is looking for. Having this enabled
will have it query the uplink's name server (or the name
server provided) first, taking advantage of its cache. If the
uplink name server in question is a heavily trafficked, fast
name server, enabling this may be worthwhile.
127.0.0.1
will not work here. Change this
IP address to a name server at the
uplink.
/*
Modern versions of BIND use a random UDP port for each outgoing
query by default in order to dramatically reduce the possibility
of cache poisoning. All users are strongly encouraged to utilize
this feature, and to configure their firewalls to accommodate it.
AS A LAST RESORT in order to get around a restrictive firewall
policy you can try enabling the option below. Use of this option
will significantly reduce your ability to withstand cache poisoning
attacks, and should be avoided if at all possible.
Replace NNNNN in the example with a number between 49160 and 65530.
*/
// query-source address * port NNNNN;
};
// If you enable a local name server, don't forget to enter 127.0.0.1
// first in your /etc/resolv.conf so this server will be queried.
// Also, make sure to enable it in /etc/rc.conf.
// The traditional root hints mechanism. Use this, OR the slave zones below.
zone "." { type hint; file "/etc/namedb/named.root"; };
/* Slaving the following zones from the root name servers has some
significant advantages:
1. Faster local resolution for your users
2. No spurious traffic will be sent from your network to the roots
3. Greater resilience to any potential root server failure/DDoS
On the other hand, this method requires more monitoring than the
hints file to be sure that an unexpected failure mode has not
incapacitated your server. Name servers that are serving a lot
of clients will benefit more from this approach than individual
hosts. Use with caution.
To use this mechanism, uncomment the entries below, and comment
the hint zone above.
As documented at http://dns.icann.org/services/axfr/ these zones:
"." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET
are available for AXFR from these servers on IPv4 and IPv6:
xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org
*/
/*
zone "." {
type slave;
file "/etc/namedb/slave/root.slave";
masters {
192.5.5.241; // F.ROOT-SERVERS.NET.
};
notify no;
};
zone "arpa" {
type slave;
file "/etc/namedb/slave/arpa.slave";
masters {
192.5.5.241; // F.ROOT-SERVERS.NET.
};
notify no;
};
*/
/* Serving the following zones locally will prevent any queries
for these zones leaving your network and going to the root
name servers. This has two significant advantages:
1. Faster local resolution for your users
2. No spurious traffic will be sent from your network to the roots
*/
// RFCs 1912 and 5735 (and BCP 32 for localhost)
zone "localhost" { type master; file "/etc/namedb/master/localhost-forward.db"; };
zone "127.in-addr.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; };
zone "255.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// RFC 1912-style zone for IPv6 localhost address
zone "0.ip6.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; };
// "This" Network (RFCs 1912 and 5735)
zone "0.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// Private Use Networks (RFCs 1918 and 5735)
zone "10.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "16.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "17.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "18.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "19.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "20.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "21.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "22.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "23.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "24.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "25.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "26.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "27.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "28.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "29.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "30.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// Link-local/APIPA (RFCs 3927 and 5735)
zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IETF protocol assignments (RFCs 5735 and 5736)
zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// TEST-NET-[1-3] for Documentation (RFCs 5735 and 5737)
zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Range for Documentation (RFC 3849)
zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// Domain Names for Documentation and Testing (BCP 32)
zone "test" { type master; file "/etc/namedb/master/empty.db"; };
zone "example" { type master; file "/etc/namedb/master/empty.db"; };
zone "invalid" { type master; file "/etc/namedb/master/empty.db"; };
zone "example.com" { type master; file "/etc/namedb/master/empty.db"; };
zone "example.net" { type master; file "/etc/namedb/master/empty.db"; };
zone "example.org" { type master; file "/etc/namedb/master/empty.db"; };
// Router Benchmark Testing (RFCs 2544 and 5735)
zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IANA Reserved - Old Class E Space (RFC 5735)
zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "244.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Unassigned Addresses (RFC 4291)
zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "7.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "3.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "7.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 ULA (RFC 4193)
zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Link Local (RFC 4291)
zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Deprecated Site-Local Addresses (RFC 3879)
zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "d.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IP6.INT is Deprecated (RFC 4159)
zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; };
// NB: Do not use the IP addresses below, they are faked, and only
// serve demonstration/documentation purposes!
//
// Example slave zone config entries. It can be convenient to become
// a slave at least for the zone your own domain is in. Ask
// your network administrator for the IP address of the responsible
// master name server.
//
// Do not forget to include the reverse lookup zone!
// This is named after the first bytes of the IP address, in reverse
// order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6.
//
// Before starting to set up a master zone, make sure you fully
// understand how DNS and BIND work. There are sometimes
// non-obvious pitfalls. Setting up a slave zone is usually simpler.
//
// NB: Don't blindly enable the examples below. :-) Use actual names
// and addresses instead.
/* An example dynamic zone
key "exampleorgkey" {
algorithm hmac-md5;
secret "sf87HJqjkqh8ac87a02lla==";
};
zone "example.org" {
type master;
allow-update {
key "exampleorgkey";
};
file "/etc/namedb/dynamic/example.org";
};
*/
/* Example of a slave reverse zone
zone "1.168.192.in-addr.arpa" {
type slave;
file "/etc/namedb/slave/1.168.192.in-addr.arpa";
masters {
192.168.1.1;
};
};
*/
In named.conf, these are examples
of slave entries for a forward and reverse zone.
For each new zone served, a new zone entry must be added
to named.conf.
For example, the simplest zone entry for
example.org
can look like:
zone "example.org" {
type master;
file "master/example.org";
};
The zone is a master, as indicated by the
statement, holding its zone
information in
/etc/namedb/master/example.org
indicated by the statement.
zone "example.org" {
type slave;
file "slave/example.org";
};
In the slave case, the zone information is transferred
from the master name server for the particular zone, and
saved in the file specified. If and when the master server
dies or is unreachable, the slave name server will have the
transferred zone information and will be able to serve
it.
Zone Files
BIND
zone files
An example master zone file for
example.org
(existing within
/etc/namedb/master/example.org)
is as follows:
$TTL 3600 ; 1 hour default TTL
example.org. IN SOA ns1.example.org. admin.example.org. (
2006051501 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
300 ; Negative Response TTL
)
; DNS Servers
IN NS ns1.example.org.
IN NS ns2.example.org.
; MX Records
IN MX 10 mx.example.org.
IN MX 20 mail.example.org.
IN A 192.168.1.1
; Machine Names
localhost IN A 127.0.0.1
ns1 IN A 192.168.1.2
ns2 IN A 192.168.1.3
mx IN A 192.168.1.4
mail IN A 192.168.1.5
; Aliases
www IN CNAME example.org.
Note that every hostname ending in a . is
an exact hostname, whereas everything without a trailing
. is relative to the origin. For example,
ns1 is translated into
ns1.example.org.
The format of a zone file follows:
recordname IN recordtype value
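As a rough illustration of that layout, the following Python sketch (not part of BIND, and deliberately ignoring comments, omitted record names, and other real zone file complications) splits such a line into its fields:

```python
# Hedged sketch: split a simple one-line zone record into the
# recordname / class / recordtype / value fields described above.
def parse_record(line):
    # Split on whitespace; everything after the record type is the value.
    name, rclass, rtype, value = line.split(None, 3)
    return {"name": name, "class": rclass, "type": rtype, "value": value}

print(parse_record("ns1 IN A 192.168.1.2"))
```

For an MX record such as `IN MX 10 mail.example.org.` the value field would carry both the preference number and the hostname.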
DNS
records
The most commonly used DNS
records:
SOA
start of zone authority
NS
an authoritative name server
A
a host address
CNAME
the canonical name for an alias
MX
mail exchanger
PTR
a domain name pointer (used in reverse
DNS)
example.org. IN SOA ns1.example.org. admin.example.org. (
2006051501 ; Serial
10800 ; Refresh after 3 hours
3600 ; Retry after 1 hour
604800 ; Expire after 1 week
300 ) ; Negative Response TTL
example.org.
the domain name, also the origin for this
zone file.
ns1.example.org.
the primary/authoritative name server for this
zone.
admin.example.org.
the responsible person for this zone,
email address with @
replaced. (admin@example.org becomes
admin.example.org)
2006051501
the serial number of the file. This must be
incremented each time the zone file is modified.
Nowadays, many admins prefer a
yyyymmddrr format for the serial
number. 2006051501 would mean last
modified 05/15/2006, the latter 01
being the first time the zone file has been modified
this day. The serial number is important as it alerts
slave name servers for a zone when it is
updated.
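The yyyymmddrr convention can be sketched as a small helper. This is an illustrative example only, not a BIND tool; the function name is hypothetical:

```python
from datetime import date

# Illustrative sketch: compute the next yyyymmddrr-style zone serial.
# A new day resets the revision to 00; further edits the same day
# simply increment the previous serial.
def next_serial(previous, today=None):
    today = date.today() if today is None else today
    base = int(today.strftime("%Y%m%d")) * 100   # yyyymmdd00
    return base if previous < base else previous + 1

print(next_serial(2006051501, today=date(2006, 5, 15)))
# 2006051502: the second change made on 05/15/2006
```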
IN NS ns1.example.org.
This is an NS entry. Every name server that is going to
reply authoritatively for the zone must have one of these
entries.
localhost IN A 127.0.0.1
ns1 IN A 192.168.1.2
ns2 IN A 192.168.1.3
mx IN A 192.168.1.4
mail IN A 192.168.1.5
The A record indicates machine names. As seen above,
ns1.example.org would
resolve to 192.168.1.2.
IN A 192.168.1.1
This line assigns IP address
192.168.1.1 to
the current origin, in this case example.org.
www IN CNAME @
The canonical name record is usually used for giving
aliases to a machine. In the example,
www is aliased to the
master machine whose name happens to be the
same as the domain name example.org
(192.168.1.1).
CNAMEs can never be used together with another kind of
record for the same hostname.
MX record
IN MX 10 mail.example.org.
The MX record indicates which mail servers are
responsible for handling incoming mail for the zone.
mail.example.org is the
hostname of a mail server, and 10 is the priority of that
mail server.
One can have several mail servers, with priorities of
10, 20 and so on. A mail server attempting to deliver to
example.org
would first try the highest priority MX (the record with the
lowest priority number), then the second highest, etc.,
until the mail can be properly delivered.
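The delivery order described above amounts to sorting the MX records by preference number, lowest first. A minimal sketch (the function name is an assumption, not a real mail server API):

```python
# Sketch of MX delivery order: lower preference number means
# higher priority, so a plain ascending sort gives the order
# in which a sending mail server would try each host.
def mx_delivery_order(mx_records):
    """mx_records: list of (preference, hostname) tuples."""
    return [host for preference, host in sorted(mx_records)]

order = mx_delivery_order([(20, "mail.example.org."), (10, "mx.example.org.")])
print(order)  # ['mx.example.org.', 'mail.example.org.']
```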
For in-addr.arpa zone files (reverse
DNS), the same format is used, except
with PTR entries instead of A or CNAME.
$TTL 3600
1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. (
2006051501 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
300 ) ; Negative Response TTL
IN NS ns1.example.org.
IN NS ns2.example.org.
1 IN PTR example.org.
2 IN PTR ns1.example.org.
3 IN PTR ns2.example.org.
4 IN PTR mx.example.org.
5 IN PTR mail.example.org.
This file gives the proper IP address
to hostname mappings for the above fictitious domain.
It is worth noting that all names on the right side
of a PTR record need to be fully qualified (i.e., end in
a .).
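The in-addr.arpa name that a PTR record lives under is derived by reversing the octets of the IPv4 address, as this small illustrative sketch shows (not a BIND utility; real resolvers do this internally):

```python
# Hedged sketch: build the reverse-DNS (in-addr.arpa) name for an
# IPv4 address by reversing its octets, as used for PTR lookups.
def ptr_name(ipv4):
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

print(ptr_name("192.168.1.2"))  # 2.1.168.192.in-addr.arpa.
```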
Caching Name Server
BIND
caching name server
A caching name server is a name server whose primary role
is to resolve recursive queries. It simply asks queries of
its own, and remembers the answers for later use.
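The remember-for-later behavior boils down to a TTL-bounded cache. The following is an illustrative toy, not part of BIND or Unbound, showing why repeated queries for the same name stop leaving the local network until the record's time-to-live expires:

```python
import time

# Illustrative toy cache (not how BIND or Unbound is implemented):
# answers are kept until their TTL expires, so only the first
# lookup for a name goes out to an upstream resolver.
class DnsCache:
    def __init__(self):
        self._store = {}           # name -> (address, expiry timestamp)
        self.upstream_queries = 0  # how often we asked the outside world

    def lookup(self, name, resolve_upstream, ttl=3600, now=None):
        now = time.time() if now is None else now
        hit = self._store.get(name)
        if hit and hit[1] > now:
            return hit[0]                     # answered from the cache
        address = resolve_upstream(name)      # one query to the uplink
        self.upstream_queries += 1
        self._store[name] = (address, now + ttl)
        return address

cache = DnsCache()
fake_upstream = lambda name: "203.0.113.10"   # stand-in for a real resolver
cache.lookup("www.FreeBSD.org", fake_upstream, now=0)
cache.lookup("www.FreeBSD.org", fake_upstream, now=100)  # served locally
print(cache.upstream_queries)  # 1
```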
DNSSEC
BIND
DNS security extensions
Domain Name System Security Extensions, or DNSSEC for
short, is a suite of specifications to protect resolving name
servers from forged DNS data, such as
spoofed DNS records. By using digital
signatures, a resolver can verify the integrity of the record.
Note that DNSSEC only provides integrity via
digitally signing the Resource Records (RRs). It provides neither
confidentiality nor protection against false end-user
assumptions. This means that it cannot protect against people
going to example.net instead of
example.com.
The only thing DNSSEC does is authenticate
that the data has not been compromised in transit. The
security of DNS is an important step in
securing the Internet in general. For more in-depth details
of how DNSSEC works, the relevant
RFCs are a good place to start. See the
list in .
- The following sections will demonstrate how to enable
- DNSSEC for an authoritative
- DNS server and a recursive (or caching)
- DNS server running BIND
- 9. While all versions of BIND 9 support
- DNSSEC, it is necessary to have at least
- version 9.6.2 in order to be able to use the signed root zone
- when validating DNS queries. This is
- because earlier versions lack the required algorithms to
- enable validation using the root zone key. It is strongly
- recommended to use the latest version of
- BIND 9.7 or later to take advantage of
- automatic key updating for the root key, as well as other
- features to automatically keep zones signed and signatures up
- to date. Where configurations differ between 9.6.2 and 9.7
- and later, differences will be pointed out.
+ The following sections will demonstrate how to enable
+ DNSSEC for an authoritative
+ DNS server and a recursive (or caching)
+ DNS server running
+ BIND 9. While all versions of
+ BIND 9 support DNSSEC,
+ it is necessary to have at least version 9.6.2 in order to
+ be able to use the signed root zone when validating
+ DNS queries. This is because earlier
+ versions lack the required algorithms to enable validation
+ using the root zone key. It is strongly recommended to use
+ the latest version of BIND 9.7 or later
+ to take advantage of automatic key updating for the root
+ key, as well as other features to automatically keep zones
+ signed and signatures up to date. Where configurations
+ differ between 9.6.2 and 9.7 and later, differences will be
+ pointed out.
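Since the 9.6.2 minimum matters here, the installed version (as reported by `named -v`) can be compared against it mechanically; a minimal sketch, where the version string is a stand-in for real output:

```shell
# Sketch: check a BIND version string against the 9.6.2 minimum.
# The value of ver is a stand-in for the output of `named -v`.
ver="9.7.3"
oldest=$(printf '%s\n9.6.2\n' "$ver" | sort -V | head -1)
[ "$oldest" = "9.6.2" ] && echo "new enough for root-zone validation"
```

This relies on version sort (`sort -V`) putting the smaller version first.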
-
- Recursive DNS Server
- Configuration
+
+ Recursive DNS Server
+ Configuration
- Enabling DNSSEC validation of queries
- performed by a recursive DNS server
- requires a few changes to named.conf.
- Before making these changes the root zone key, or trust
- anchor, must be acquired. Currently the root zone key is
- not available in a file format BIND
- understands, so it has to be manually converted into the
- proper format. The key itself can be obtained by querying
- the root zone for it using dig.
- By running
+ Enabling DNSSEC validation of
+ queries performed by a recursive DNS
+ server requires a few changes to
+ named.conf. Before making these
+ changes, the root zone key, or trust anchor, must be
+ acquired. Currently the root zone key is not available in
+ a file format BIND understands, so it
+ has to be manually converted into the proper format. The
+ key itself can be obtained by querying the root zone for
+ it using dig. By
+ running
- &prompt.user; dig +multi +noall +answer DNSKEY . > root.dnskey
+ &prompt.user; dig +multi +noall +answer DNSKEY . > root.dnskey
- the key will end up in root.dnskey.
- The contents should look something like this:
+ the key will end up in
+ root.dnskey. The contents should
+ look something like this:
- . 93910 IN DNSKEY 257 3 8 (
+ . 93910 IN DNSKEY 257 3 8 (
AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ
bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh
/RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA
JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp
oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3
LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO
Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc
LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0=
) ; key id = 19036
. 93910 IN DNSKEY 256 3 8 (
AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf
UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE
g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V
EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt
) ; key id = 34525
- Do not be alarmed if the obtained keys differ from this
- example. They might have changed since these instructions
- were last updated. This output actually contains two keys.
- The first key in the listing, with the value 257 after the
- DNSKEY record type, is the one needed. This value indicates
- that this is a Secure Entry Point
- (SEP), commonly
- known as a Key Signing Key
- (KSK). The second
- key, with value 256, is a subordinate key, commonly called a
- Zone Signing Key
- (ZSK). More on
- the different key types later in
- .
+ Do not be alarmed if the obtained keys differ from
+ this example. They might have changed since these
+ instructions were last updated. This output actually
+ contains two keys. The first key in the listing, with the
+ value 257 after the DNSKEY record type, is the one needed.
+ This value indicates that this is a Secure Entry Point
+ (SEP),
+ commonly known as a Key Signing Key
+ (KSK). The
+ second key, with value 256, is a subordinate key, commonly
+ called a Zone Signing Key
+ (ZSK). More on
+ the different key types later in
+ .
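Since only the 257 (KSK) record is needed later, it can be pulled out of the dig output mechanically. A sketch, assuming dig +multi left a blank line between the two records; the first printf only fakes the file from the earlier step:

```shell
# Stand-in for the dig output saved as root.dnskey in the previous step:
printf '%s\n' \
  '. 93910 IN DNSKEY 257 3 8 (' 'AwEA...=' ') ; key id = 19036' '' \
  '. 93910 IN DNSKEY 256 3 8 (' 'AwEA...' ') ; key id = 34525' > root.dnskey
# Keep only the SEP/KSK record (flags field 257); paragraph mode (RS='')
# treats each blank-line-separated record as one unit.
awk -v RS='' '$5 == 257' root.dnskey > root.ksk
```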
- Now the key must be verified and formatted so that
- BIND can use it. To verify the key,
- generate a DS
- RR set. Create a
- file containing these
- RRs with
+ Now the key must be verified and formatted so that
+ BIND can use it. To verify the key,
+ generate a DS
+ RR set. Create
+ a file containing these
+ RRs with
- &prompt.user; dnssec-dsfromkey -f root.dnskey . > root.ds
+ &prompt.user; dnssec-dsfromkey -f root.dnskey . > root.ds
- These records use SHA-1 and SHA-256 respectively, and
- should look similar to the following example, where the
- longer is using SHA-256.
+ These records use SHA-1 and SHA-256 respectively, and
+ should look similar to the following example, where the
+ longer is using SHA-256.
- . IN DS 19036 8 1
+ . IN DS 19036 8 1
B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E
. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
- The SHA-256 RR can now be compared to
- the digest in https://data.iana.org/root-anchors/root-anchors.xml.
- To be absolutely sure that the key has not been tampered
- with the data in the XML file can be
- verified using the PGP signature in
- https://data.iana.org/root-anchors/root-anchors.asc.
+ The SHA-256 RR can now be compared
+ to the digest in https://data.iana.org/root-anchors/root-anchors.xml.
+ To be absolutely sure that the key has not been tampered
+ with, the data in the XML file can be
+ verified using the PGP signature in
+ https://data.iana.org/root-anchors/root-anchors.asc.
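The comparison itself can be scripted. A sketch, assuming one-line DS records in root.ds; the first printf fakes that file, and the expected digest is the SHA-256 value from the example above:

```shell
# Stand-in for root.ds from the previous step (one record per line):
printf '%s\n' \
  '. IN DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E' \
  '. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5' > root.ds
# The digest copied out of root-anchors.xml (value from the example above):
expected="49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5"
# Field 6 is the digest type (2 = SHA-256); field 7 is the digest itself.
got=$(awk '$6 == 2 { print $7 }' root.ds)
[ "$got" = "$expected" ] && echo "digest matches"
```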
- Next, the key must be formatted properly. This differs
- a little between BIND versions 9.6.2 and
- 9.7 and later. In version 9.7 support was added to
- automatically track changes to the key and update it as
- necessary. This is done using
- managed-keys as seen in the example
- below. When using the older version, the key is added using
- a trusted-keys statement and updates must
- be done manually. For BIND 9.6.2 the
- format should look like:
+ Next, the key must be formatted properly. This
+ differs a little between BIND versions
+ 9.6.2 and 9.7 and later. In version 9.7 support was added
+ to automatically track changes to the key and update it as
+ necessary. This is done using
+ managed-keys as seen in the example
+ below. When using the older version, the key is added
+ using a trusted-keys statement and
+ updates must be done manually. For
+ BIND 9.6.2 the format should look
+ like:
- trusted-keys {
+ trusted-keys {
"." 257 3 8
"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
QxA+Uk1ihz0=";
};
- For 9.7 the format will instead be:
+ For 9.7 the format will instead be:
- managed-keys {
+ managed-keys {
"." initial-key 257 3 8
"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
QxA+Uk1ihz0=";
};
- The root key can now be added to
- named.conf either directly or by
- including a file containing the key. After these steps,
- configure BIND to do
- DNSSEC validation on queries by editing
- named.conf and adding the following to
- the options directive:
+ The root key can now be added to
+ named.conf either directly or by
+ including a file containing the key. After these steps,
+ configure BIND to do
+ DNSSEC validation on queries by editing
+ named.conf and adding the following
+ to the options directive:
- dnssec-enable yes;
+ dnssec-enable yes;
dnssec-validation yes;
- To verify that it is actually working use
- dig to make a query for a signed
- zone using the resolver just configured. A successful reply
- will contain the AD flag to indicate the
- data was authenticated. Running a query such as
+ To verify that it is actually working use
+ dig to make a query for a
+ signed zone using the resolver just configured. A
+ successful reply will contain the AD
+ flag to indicate the data was authenticated. Running a
+ query such as
- &prompt.user; dig @resolver +dnssec se ds
+ &prompt.user; dig @resolver +dnssec se ds
- should return the DS
- RR for the .se zone.
- In the flags: section the
- AD flag should be set, as seen
- in:
+ should return the DS
+ RR for the .se zone.
+ In the flags: section the
+ AD flag should be set, as seen
+ in:
- ...
+ ...
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
...
- The resolver is now capable of authenticating
- DNS queries.
-
+ The resolver is now capable of authenticating
+ DNS queries.
+
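For scripted monitoring, the AD check can be reduced to a small helper; a sketch, matching the flags line format shown above (the resolver address in the usage comment is hypothetical):

```shell
# Succeeds when a dig reply carries the AD (authenticated data) flag.
has_ad_flag() {
    grep -Eq '^;; flags:[^;]* ad[ ;]'
}
# usage: dig @192.0.2.53 +dnssec se ds | has_ad_flag && echo validated
```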
-
- Authoritative DNS Server
- Configuration
+
+ Authoritative DNS Server
+ Configuration
- In order to get an authoritative name server to serve a
- DNSSEC signed zone a little more work is
- required. A zone is signed using cryptographic keys which
- must be generated. It is possible to use only one key for
- this. The preferred method however is to have a strong
- well-protected Key Signing Key
- (KSK) that is
- not rotated very often and a Zone Signing Key
- (ZSK) that is
- rotated more frequently. Information on recommended
- operational practices can be found in RFC
- 4641: DNSSEC Operational
- Practices. Practices regarding the root zone can
- be found in DNSSEC
- Practice Statement for the Root Zone
- KSK operator and DNSSEC
- Practice Statement for the Root Zone
- ZSK operator. The
- KSK is used to
- build a chain of authority to the data in need of validation
- and as such is also called a Secure Entry Point
- (SEP) key. A
- message digest of this key, called a Delegation Signer
- (DS) record,
- must be published in the parent zone to establish the trust
- chain. How this is accomplished depends on the parent zone
- owner. The ZSK
- is used to sign the zone, and only needs to be published
- there.
+ In order to get an authoritative name server to serve
+ a DNSSEC signed zone, a little more work
+ is required. A zone is signed using cryptographic keys
+ which must be generated. It is possible to use only one
+ key for this. The preferred method, however, is to have a
+ strong, well-protected Key Signing Key
+ (KSK) that is
+ not rotated very often and a Zone Signing Key
+ (ZSK) that is
+ rotated more frequently. Information on recommended
+ operational practices can be found in RFC
+ 4641: DNSSEC Operational
+ Practices. Practices regarding the root zone can
+ be found in DNSSEC
+ Practice Statement for the Root Zone
+ KSK operator and DNSSEC
+ Practice Statement for the Root Zone
+ ZSK operator. The
+ KSK is used to
+ build a chain of authority to the data in need of
+ validation and as such is also called a Secure Entry Point
+ (SEP) key. A
+ message digest of this key, called a Delegation Signer
+ (DS) record,
+ must be published in the parent zone to establish the
+ trust chain. How this is accomplished depends on the
+ parent zone owner. The
+ ZSK is used to
+ sign the zone, and only needs to be published
+ there.
- To enable DNSSEC for the example.com zone
- depicted in previous examples, the first step is to use
- dnssec-keygen to generate the
- KSK and ZSK key pair.
- This key pair can utilize different cryptographic
- algorithms. It is recommended to use RSA/SHA256 for the
- keys and 2048 bits key length should be enough. To generate
- the KSK for example.com, run
+ To enable DNSSEC for the
+ example.com
+ zone depicted in previous examples, the first step is to
+ use dnssec-keygen to generate
+ the KSK and ZSK key
+ pair. This key pair can utilize different cryptographic
+ algorithms. It is recommended to use RSA/SHA256 for the
+ keys, and a 2048-bit key length should be enough. To
+ generate the KSK for
+ example.com,
+ run
- &prompt.user; dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com
+ &prompt.user; dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com
- and to generate the ZSK, run
+ and to generate the ZSK, run
- &prompt.user; dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com
+ &prompt.user; dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com
- dnssec-keygen outputs two
- files, the public and the private keys in files named
- similar to Kexample.com.+005+nnnnn.key
- (public) and
- Kexample.com.+005+nnnnn.private
- (private). The nnnnn part of the file
- name is a five digit key ID. Keep track of which key ID
- belongs to which key. This is especially important when
- having more than one key in a zone. It is also possible to
- rename the keys. For each KSK file
- do:
+ dnssec-keygen outputs two
+ files, the public and the private keys in files named
+ similar to
+ Kexample.com.+005+nnnnn.key (public)
+ and Kexample.com.+005+nnnnn.private
+ (private). The nnnnn part of the file
+ name is a five digit key ID. Keep track of which key ID
+ belongs to which key. This is especially important when
+ having more than one key in a zone. It is also possible
+ to rename the keys. For each KSK file
+ do:
- &prompt.user; mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key
+ &prompt.user; mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key
&prompt.user; mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.private
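The two mv commands can be combined into a small loop. A sketch: the key ID 12345 stands in for the real five-digit ID, and the touch line only fakes the dnssec-keygen output so the loop has something to rename:

```shell
id=12345   # stand-in for the five-digit key ID from dnssec-keygen
touch "Kexample.com.+005+${id}.key" "Kexample.com.+005+${id}.private"   # fake keygen output
# Rename both halves of the KSK pair in one step:
for ext in key private; do
    mv "Kexample.com.+005+${id}.${ext}" "Kexample.com.+005+${id}.KSK.${ext}"
done
```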
- For the ZSK files, substitute
- KSK for ZSK as
- necessary. The files can now be included in the zone file,
- using the $include statement. It should
- look something like this:
+ For the ZSK files, substitute
+ KSK for ZSK as
+ necessary. The files can now be included in the zone
+ file, using the $include statement. It
+ should look something like this:
- $include Kexample.com.+005+nnnnn.KSK.key ; KSK
+ $include Kexample.com.+005+nnnnn.KSK.key ; KSK
$include Kexample.com.+005+nnnnn.ZSK.key ; ZSK
- Finally, sign the zone and tell BIND
- to use the signed zone file. To sign a zone
- dnssec-signzone is used. The
- command to sign the zone example.com, located in
- example.com.db would look similar
- to
+ Finally, sign the zone and tell
+ BIND to use the signed zone file. To
+ sign a zone dnssec-signzone is
+ used. The command to sign the zone
+ example.com,
+ located in example.com.db would look
+ similar to
- &prompt.user; dnssec-signzone -o
+ &prompt.user; dnssec-signzone -o
example.com -k Kexample.com.+005+nnnnn.KSK example.com.db
Kexample.com.+005+nnnnn.ZSK.key
- The key supplied to the argument is
- the KSK and the other key file is the
- ZSK that should be used in the signing.
- It is possible to supply more than one
- KSK and ZSK, which
- will result in the zone being signed with all supplied keys.
- This can be needed to supply zone data signed using more
- than one algorithm. The output of
- dnssec-signzone is a zone file
- with all RRs signed. This output will
- end up in a file with the extension
- .signed, such as
- example.com.db.signed. The
- DS records will
- also be written to a separate file
- dsset-example.com. To use this signed
- zone just modify the zone directive in
- named.conf to use
- example.com.db.signed. By default, the
- signatures are only valid 30 days, meaning that the zone
- needs to be resigned in about 15 days to be sure that
- resolvers are not caching records with stale signatures. It
- is possible to make a script and a cron job to do this. See
- relevant manuals for details.
+ The key supplied to the argument
+ is the KSK and the other key file is
+ the ZSK that should be used in the
+ signing. It is possible to supply more than one
+ KSK and ZSK, which
+ will result in the zone being signed with all supplied
+ keys. This can be needed to supply zone data signed using
+ more than one algorithm. The output of
+ dnssec-signzone is a zone file
+ with all RRs signed. This output will
+ end up in a file with the extension
+ .signed, such as
+ example.com.db.signed. The
+ DS records
+ will also be written to a separate file
+ dsset-example.com. To use this
+ signed zone just modify the zone directive in
+ named.conf to use
+ example.com.db.signed. By default,
+ the signatures are only valid for 30 days, meaning that the
+ zone needs to be re-signed in about 15 days to be sure
+ that resolvers are not caching records with stale
+ signatures. It is possible to make a script and a cron
+ job to do this. See relevant manuals for details.
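One possible shape for that cron job, as a sketch: the script path, schedule, and key file names are all assumptions, and `*/14` in the day-of-month field is only approximate since it resets at month boundaries.

```
# /etc/crontab entry (sketch): re-sign roughly every 14 days at 03:00
0  3  */14  *  *  root  /usr/local/sbin/resign-example-com.sh

# where resign-example-com.sh would run something like:
#   dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK \
#       example.com.db Kexample.com.+005+nnnnn.ZSK.key
#   rndc reload example.com
```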
- Be sure to keep private keys confidential, as with all
- cryptographic keys. When changing a key it is best to
- include the new key into the zone, while still signing with
- the old one, and then move over to using the new key to
- sign. After these steps are done the old key can be removed
- from the zone. Failure to do this might render the
- DNS data unavailable for a time, until
- the new key has propagated through the
- DNS hierarchy. For more information on
- key rollovers and other DNSSEC
- operational issues, see RFC
- 4641: DNSSEC Operational
- practices.
-
+ Be sure to keep private keys confidential, as with all
+ cryptographic keys. When changing a key it is best to
+ include the new key into the zone, while still signing
+ with the old one, and then move over to using the new key
+ to sign. After these steps are done the old key can be
+ removed from the zone. Failure to do this might render
+ the DNS data unavailable for a time,
+ until the new key has propagated through the
+ DNS hierarchy. For more information on
+ key rollovers and other DNSSEC
+ operational issues, see RFC
+ 4641: DNSSEC Operational
+ Practices.
+
-
- Automation Using BIND 9.7 or
- Later
+
+ Automation Using BIND 9.7 or
+ Later
- Beginning with BIND version 9.7 a new
- feature called Smart Signing was
- introduced. This feature aims to make the key management
- and signing process simpler by automating parts of the task.
- By putting the keys into a directory called a
- key repository, and using the new
- option auto-dnssec, it is possible to
- create a dynamic zone which will be resigned as needed. To
- update this zone use nsupdate
- with the new option .
- rndc has also grown the ability
- to sign zones with keys in the key repository, using the
- option . To tell
- BIND to use this automatic signing and
- zone updating for example.com, add the
- following to named.conf:
+ Beginning with BIND version 9.7 a
+ new feature called Smart Signing was
+ introduced. This feature aims to make the key management
+ and signing process simpler by automating parts of the
+ task. By putting the keys into a directory called a
+ key repository, and using the new
+ option auto-dnssec, it is possible to
+ create a dynamic zone which will be resigned as needed.
+ To update this zone use
+ nsupdate with the new option
+ . rndc has
+ also grown the ability to sign zones with keys in the key
+ repository, using the option . To
+ tell BIND to use this automatic signing
+ and zone updating for example.com, add the
+ following to named.conf:
- zone example.com {
+ zone example.com {
type master;
key-directory "/etc/named/keys";
update-policy local;
auto-dnssec maintain;
file "/etc/named/dynamic/example.com.zone";
};
- After making these changes, generate keys for the zone
- as explained in , put those
- keys in the key repository given as the argument to the
- key-directory in the zone configuration
- and the zone will be signed automatically. Updates to a
- zone configured this way must be done using
- nsupdate, which will take care of
- re-signing the zone with the new data added. For further
- details, see and the
- BIND documentation.
-
-
+ After making these changes, generate keys for the zone
+ as explained in , put
+ those keys in the key repository given as the argument to
+ the key-directory in the zone
+ configuration and the zone will be signed automatically.
+ Updates to a zone configured this way must be done using
+ nsupdate, which will take care
+ of re-signing the zone with the new data added. For
+ further details, see and the
+ BIND documentation.
+
+
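An nsupdate session for such a zone can be kept in a batch file; a sketch, where the record being added is hypothetical (the -l flag, which signs the update with the local session key, is one way to submit it on the name server itself):

```shell
# Build a batch of updates for the auto-signed zone (the record
# added here is hypothetical).
cat > update.txt <<'EOF'
zone example.com
update add www.example.com. 3600 IN A 192.0.2.80
send
EOF
# then submit it on the name server itself: nsupdate -l update.txt
```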
-
- Security
+
+ Security
- Although BIND is the most common implementation of
- DNS, there is always the issue of security.
- Possible and exploitable security holes are sometimes
- found.
+ Although BIND is the most common implementation of
+ DNS, security is always a
+ concern, and exploitable security holes are
+ sometimes found.
- While &os; automatically drops
- named into a &man.chroot.8;
- environment; there are several other security mechanisms in
- place which could help to lure off possible
- DNS service attacks.
+ While &os; automatically drops
+ named into a &man.chroot.8;
+ environment, there are several other security mechanisms in
+ place which could help to fend off possible
+ DNS service attacks.
- It is always good idea to read
- CERT's security
- advisories and to subscribe to the &a.security-notifications;
- to stay up to date with the current Internet and &os; security
- issues.
+ It is always a good idea to read
+ CERT's
+ security advisories and to subscribe to the
+ &a.security-notifications; to stay up to date with the
+ current Internet and &os; security issues.
-
- If a problem arises, keeping sources up to date and
- having a fresh build of named
- may help.
-
-
+
+ If a problem arises, keeping sources up to date and
+ having a fresh build of named
+ may help.
+
+
-
- Further Reading
+
+ Further Reading
- BIND/named manual pages:
- &man.rndc.8; &man.named.8; &man.named.conf.5; &man.nsupdate.1;
- &man.dnssec-signzone.8; &man.dnssec-keygen.8;
+ BIND/named manual pages:
+ &man.rndc.8; &man.named.8; &man.named.conf.5;
+ &man.nsupdate.1; &man.dnssec-signzone.8;
+ &man.dnssec-keygen.8;
-
-
- Official
- ISC BIND Page
-
+
+
+ Official
+ ISC BIND Page
+
-
- Official
- ISC BIND Forum
-
+
+ Official
+ ISC BIND Forum
+
-
- O'Reilly
- DNS and BIND 5th
- Edition
-
+
+ O'Reilly
+ DNS and BIND 5th
+ Edition
+
-
- Root
- DNSSEC
-
+
+ Root
+ DNSSEC
+
-
- DNSSEC
- Trust Anchor Publication for the Root
- Zone
-
+
+ DNSSEC
+ Trust Anchor Publication for the Root
+ Zone
+
-
- RFC1034
- - Domain Names - Concepts and Facilities
-
+
+ RFC1034
+ - Domain Names - Concepts and Facilities
+
-
- RFC1035
- - Domain Names - Implementation and
- Specification
-
+
+ RFC1035
+ - Domain Names - Implementation and
+ Specification
+
-
- RFC4033
- - DNS Security Introduction and
- Requirements
-
+
+ RFC4033
+ - DNS Security Introduction and
+ Requirements
+
-
- RFC4034
- - Resource Records for the DNS
- Security Extensions
-
+
+ RFC4034
+ - Resource Records for the DNS
+ Security Extensions
+
-
- RFC4035
- - Protocol Modifications for the DNS
- Security Extensions
-
+
+ RFC4035
+ - Protocol Modifications for the
+ DNS Security
+ Extensions
+
-
- RFC4641
- - DNSSEC Operational Practices
-
+
+ RFC4641
+ - DNSSEC Operational Practices
+
-
- RFC 5011
- - Automated Updates of DNS Security
- (DNSSEC
- Trust Anchors
-
-
-
+
+ RFC
+ 5011 - Automated Updates of DNS
+ Security (DNSSEC)
+ Trust Anchors
+
+
+
- Apache HTTP Server
+ Apache HTTP Server
-
- Murray
- Stokely
+
+ Murray
+ StokelyContributed by web serverssetting upApacheThe open source
Apache HTTP Server is the most widely
used web server. &os; does not install this web server by
default, but it can be installed from the
www/apache24 package or port.This section summarizes how to configure and start version
2.x of the Apache HTTP
Server on &os;. For more detailed information
about Apache 2.X and its
configuration directives, refer to httpd.apache.org.Configuring and Starting ApacheApacheconfiguration fileIn &os;, the main Apache HTTP
Server configuration file is installed as
/usr/local/etc/apache2x/httpd.conf,
where x represents the version
number. This ASCII text file begins
comment lines with a #. The most
frequently modified directives are:ServerRoot "/usr/local"Specifies the default directory hierarchy for the
Apache installation.
Binaries are stored in the bin and
sbin subdirectories of the server
root and configuration files are stored in the etc/apache2x
subdirectory.ServerAdmin you@example.comChange this to the email address which will receive reports of problems
with the server. This address also appears on some
server-generated pages, such as error documents.ServerName
www.example.com:80Allows an administrator to set a hostname which is
sent back to clients for the server. For example,
www can be used instead of the
actual hostname. If the system does not have a
registered DNS name, enter its
IP address instead. If the server
will listen on an alternate port, change
80 to the alternate port
number.DocumentRoot
"/usr/local/www/apache2x/data"The directory where documents will be served from.
By default, all requests are taken from this directory,
but symbolic links and aliases may be used to point to
other locations.It is always a good idea to make a backup copy of the
default Apache configuration file
before making changes. When the configuration of
Apache is complete, save the file
and verify the configuration using
apachectl. Running apachectl
configtest should return Syntax
OK.Apachestarting or stoppingTo launch Apache at system
startup, add the following line to
/etc/rc.conf:apache24_enable="YES"If Apache should be started
with non-default options, the following line may be added to
/etc/rc.conf to specify the needed
flags:apache24_flags=""If apachectl does not report
configuration errors, start httpd
now:&prompt.root; service apache24 startThe httpd service can be tested by
entering
http://localhost
in a web browser, replacing
localhost with the fully-qualified
domain name of the machine running httpd.
The default web page that is displayed is
/usr/local/www/apache24/data/index.html.The Apache configuration can be
tested for errors after making subsequent configuration
changes while httpd is running using the
following command:&prompt.root; service apache24 configtestIt is important to note that
configtest is not an &man.rc.8; standard,
and should not be expected to work for all startup
scripts.Virtual HostingVirtual hosting allows multiple websites to run on one
Apache server. The virtual hosts
can be IP-based or
name-based.
IP-based virtual hosting uses a different
IP address for each website. Name-based
virtual hosting uses the client's HTTP/1.1 headers to
determine the hostname, which allows the websites to share the same
IP address.To set up Apache to use
name-based virtual hosting, add a
VirtualHost block for each website. For
example, for the webserver named www.domain.tld with a
virtual domain of www.someotherdomain.tld,
add the following entries to
httpd.conf:<VirtualHost *>
ServerName www.domain.tld
DocumentRoot /www/domain.tld
</VirtualHost>
<VirtualHost *>
ServerName www.someotherdomain.tld
DocumentRoot /www/someotherdomain.tld
</VirtualHost>For each virtual host, replace the values for
ServerName and
DocumentRoot with the values to be
used.For more information about setting up virtual hosts,
consult the official Apache
documentation at: http://httpd.apache.org/docs/vhosts/.Apache ModulesApachemodulesApache uses modules to augment
the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/
for a complete listing of the available modules and their
configuration details.In &os;, some modules can be compiled with the
www/apache24 port. Type make
config within
/usr/ports/www/apache24 to see which
modules are available and which are enabled by default. If
the module is not compiled with the port, the &os; Ports
Collection provides an easy way to install many modules. This
section describes three of the most commonly used
modules.mod_sslweb serverssecureSSLcryptographyThe mod_ssl module uses the
OpenSSL library to provide strong
cryptography via the Secure Sockets Layer
(SSLv3) and Transport Layer Security
(TLSv1) protocols. This module provides
everything necessary to request a signed certificate from a
trusted certificate signing authority to run a secure web
server on &os;.In &os;, the mod_ssl module is enabled
by default in both the package and the port. The available
configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html.mod_perlmod_perlPerlThe
mod_perl module makes it possible to
write Apache modules in
Perl. In addition, the
persistent interpreter embedded in the server avoids the
overhead of starting an external interpreter and the penalty
of Perl start-up time.mod_perl can be installed using
the www/mod_perl2 package or port.
Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html.
- mod_php
+ mod_php
-
- Tom
- Rhodes
-
+
+ Tom
+ Rhodes
+ Written by mod_phpPHPPHP: Hypertext Preprocessor
(PHP) is a general-purpose scripting
language that is especially suited for web development.
Capable of being embedded into HTML, its
syntax draws upon C, &java;, and
Perl with the intention of
allowing web developers to write dynamically generated
webpages quickly.To gain support for PHP5 for the
Apache web server, install the
www/mod_php56 package or port. This will
install and configure the modules required to support
dynamic PHP applications. The
installation will automatically add this line to
/usr/local/etc/apache24/httpd.conf:LoadModule php5_module libexec/apache24/libphp5.soThen, perform a graceful restart to load the
PHP module:&prompt.root; apachectl gracefulThe PHP support provided by
www/mod_php56 is limited. Additional
support can be installed using the
lang/php56-extensions port which provides
a menu driven interface to the available
PHP extensions.Alternatively, individual extensions can be installed
using the appropriate port. For instance, to add
PHP support for the
MySQL database server, install
databases/php56-mysql.After installing an extension, the
Apache server must be reloaded to
pick up the new configuration changes:&prompt.root; apachectl gracefulDynamic Websitesweb serversdynamicIn addition to mod_perl and
mod_php, other languages are
available for creating dynamic web content. These include
Django and
Ruby on Rails.DjangoPythonDjangoDjango is a BSD-licensed
framework designed to allow developers to write high
performance, elegant web applications quickly. It provides
an object-relational mapper so that data types are developed
as Python objects. A rich
dynamic database-access API is provided
for those objects without the developer ever having to write
SQL. It also provides an extensible
template system so that the logic of the application is
separated from the HTML
presentation.Django depends on mod_python and
an SQL database engine. In &os;, the
www/py-django port automatically installs
mod_python and supports the
PostgreSQL,
MySQL, or
SQLite databases, with the
default being SQLite. To change
the database engine, type make config
within /usr/ports/www/py-django, then
install the port.Once Django is installed, the
application will need a project directory along with the
Apache configuration in order to
use the embedded Python
interpreter. This interpreter is used to call the
application for specific URLs on the
site.To configure Apache to pass
requests for certain URLs to the web
application, add the following to
httpd.conf, specifying the full path to
the project directory:
- <Location "/">
+ <Location "/">
SetHandler python-program
PythonPath "['/dir/to/the/django/packages/'] + sys.path"
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonAutoReload On
PythonDebug On
</Location>Refer to https://docs.djangoproject.com/en/1.6/
for more information on how to use
Django.Ruby on RailsRuby on RailsRuby on Rails is another open
source web framework that provides a full development stack.
It is optimized to make web developers more productive and
capable of writing powerful applications quickly. On &os;,
it can be installed using the
www/rubygem-rails package or port.Refer to http://rubyonrails.org/documentation
for more information on how to use Ruby on
Rails.File Transfer Protocol (FTP)FTP
serversThe File Transfer Protocol (FTP) provides
users with a simple way to transfer files to and from an
FTP server. &os; includes
FTP server software,
ftpd, in the base system.&os; provides several configuration files for controlling
access to the FTP server. This section
summarizes these files. Refer to &man.ftpd.8; for more details
about the built-in FTP server.ConfigurationThe most important configuration step is deciding which
accounts will be allowed access to the FTP
server. A &os; system has a number of system accounts which
should not be allowed FTP access. The list
of users disallowed any FTP access can be
found in /etc/ftpusers. By default, it
includes system accounts. Additional users that should not be
allowed access to FTP can be added.In some cases it may be desirable to restrict the access
of some users without completely preventing them from using
FTP. This can be accomplished by creating
/etc/ftpchroot as described in
&man.ftpchroot.5;. This file lists users and groups subject
to FTP access restrictions.FTPanonymousTo enable anonymous FTP access to the
server, create a user named ftp on the &os; system. Users
will then be able to log on to the
FTP server with a username of
ftp or anonymous. When prompted for
the password, any input will be accepted, but by convention,
an email address should be used as the password. The
FTP server will call &man.chroot.2; when an
anonymous user logs in, to restrict access to only the home
directory of the ftp user.There are two text files that can be created to specify
welcome messages to be displayed to FTP
clients. The contents of
/etc/ftpwelcome will be displayed to
users before they reach the login prompt. After a successful
login, the contents of
/etc/ftpmotd will be displayed. Note
that the path to this file is relative to the login
environment, so the contents of
~ftp/etc/ftpmotd would be displayed for
anonymous users.Once the FTP server has been
configured, set the appropriate variable in
/etc/rc.conf to start the service during
boot:ftpd_enable="YES"To start the service now:&prompt.root; service ftpd startTest the connection to the FTP server
by typing:&prompt.user; ftp localhostsysloglog filesFTPThe ftpd daemon uses
&man.syslog.3; to log messages. By default, the system log
daemon will write messages related to FTP
in /var/log/xferlog. The location of
the FTP log can be modified by changing the
following line in
/etc/syslog.conf:ftp.info /var/log/xferlogFTPanonymousBe aware of the potential problems involved with running
an anonymous FTP server. In particular,
think twice about allowing anonymous users to upload files.
It may turn out that the FTP site becomes
a forum for the trade of unlicensed commercial software or
worse. If anonymous FTP uploads are
required, then verify the permissions so that these files
cannot be read by other anonymous users until they have
been reviewed by an administrator.File and Print Services for µsoft.windows; Clients
(Samba)Samba serverMicrosoft Windowsfile serverWindows clientsprint serverWindows clientsSamba is a popular open source
software package that provides file and print services using the
SMB/CIFS protocol. This protocol is built
into µsoft.windows; systems. It can be added to
non-µsoft.windows; systems by installing the
Samba client libraries. The protocol
allows clients to access shared data and printers. These shares
can be mapped as a local disk drive and shared printers can be
used as if they were local printers.On &os;, the Samba client
libraries can be installed using the
net/samba-smbclient port or package. The
client provides the ability for a &os; system to access
SMB/CIFS shares in a µsoft.windows;
network.A &os; system can also be configured to act as a
Samba server. This allows the
administrator to create SMB/CIFS shares on
the &os; system which can be accessed by clients running
µsoft.windows; or the Samba
client libraries. In order to configure a
Samba server on &os;, the
net/samba36 port or package must first be
installed. The rest of this section provides an overview of how
to configure a Samba server on
&os;.ConfigurationA default Samba configuration
file is installed as
/usr/local/share/examples/samba36/smb.conf.default.
This file must be copied to
/usr/local/etc/smb.conf and customized
before Samba can be used.Runtime configuration information for
Samba is found in
smb.conf, such as definitions of the
printers and file system shares that will
be shared with &windows; clients. The
Samba package includes a web-based
tool called swat which provides a
simple way of configuring
smb.conf.Using the Samba Web Administration Tool (SWAT)The Samba Web Administration Tool (SWAT) runs as a
daemon from inetd. Therefore,
inetd must be enabled as shown in
. To enable
swat, uncomment the following
line in /etc/inetd.conf:swat stream tcp nowait/400 root /usr/local/sbin/swat swatAs explained in ,
the inetd configuration must be
reloaded after this configuration file is changed.Once swat has been enabled,
use a web browser to connect to http://localhost:901.
At first login, enter the credentials for root.Once logged in, the main
Samba configuration page and the
system documentation will be available. Begin configuration
by clicking on the Globals tab. The
Globals section corresponds to the
variables that are set in the [global]
section of
/usr/local/etc/smb.conf.Global SettingsWhether swat is used or
/usr/local/etc/smb.conf is edited
directly, the first directives encountered when configuring
Samba are:workgroupThe domain name or workgroup name for the
computers that will be accessing this server.netbios nameThe NetBIOS name by which a
Samba server is known. By
default it is the same as the first component of the
host's DNS name.server stringThe string that will be displayed in the output of
net view and some other
networking tools that seek to display descriptive text
about the server.Security SettingsTwo of the most important settings in
/usr/local/etc/smb.conf are the
security model and the backend password format for client
users. The following directives control these
options:securityThe two most common options are
security = share and
security = user. If the clients
use usernames that are the same as their usernames on
the &os; machine, user level security should be
used. This is the default security policy and it
requires clients to first log on before they can
access shared resources.In share level security, clients do not need to
log onto the server with a valid username and password
before attempting to connect to a shared resource.
This was the default security model for older versions
of Samba.passdb backendNIS+LDAPSQL databaseSamba has several
different backend authentication models. Clients may
be authenticated with LDAP, NIS+, an SQL database,
or a modified password file. The default
authentication method is smbpasswd,
and that is all that will be covered here.Assuming that the default smbpasswd
backend is used,
/usr/local/etc/samba/smbpasswd
must be created to allow Samba to
authenticate clients. To provide &unix; user accounts
access from &windows; clients, use the following command to
add each required user to that file:&prompt.root; smbpasswd -a usernameThe recommended backend is now
tdbsam. If this backend is selected,
use the following command to add user accounts:&prompt.root; pdbedit -a -u usernameThis section has only mentioned the most commonly used
settings. Refer to the Official
Samba HOWTO for additional information about the
available configuration options.Starting SambaTo enable Samba at boot time,
add the following line to
/etc/rc.conf:samba_enable="YES"Alternately, its services can be started
separately:nmbd_enable="YES"smbd_enable="YES"To start Samba now:&prompt.root; service samba start
Starting SAMBA: removing stale tdbs :
Starting nmbd.
Starting smbd.Samba consists of three
separate daemons. Both the nmbd
and smbd daemons are started by
samba_enable. If winbind name resolution
services are enabled in smb.conf, the
winbindd daemon is started as
well.Samba may be stopped at any
time by typing:&prompt.root; service samba stopSamba is a complex software
suite with functionality that allows broad integration with
µsoft.windows; networks. For more information about
functionality beyond the basic configuration described here,
refer to http://www.samba.org.Clock Synchronization with NTPNTPntpdOver time, a computer's clock is prone to drift. This is
problematic as many network services require the computers on a
network to share the same accurate time. Accurate time is also
needed to ensure that file timestamps stay consistent. The
Network Time Protocol (NTP) is one way to
provide clock accuracy in a network.&os; includes &man.ntpd.8; which can be configured to query
other NTP servers in order to synchronize the
clock on that machine or to provide time services to other
computers in the network. The servers which are queried can be
local to the network or provided by an ISP.
In addition, an online
list of publicly accessible NTP
servers is available. When choosing a public
NTP server, select one that is geographically
close and review its usage policy.Choosing several NTP servers is
recommended in case one of the servers becomes unreachable or
its clock proves unreliable. As ntpd
receives responses, it favors reliable servers over the less
reliable ones.This section describes how to configure
ntpd on &os;. Further documentation
can be found in /usr/share/doc/ntp/ in HTML
format.NTP ConfigurationNTPntp.confOn &os;, the built-in ntpd can
be used to synchronize a system's clock. To enable
ntpd at boot time, add
ntpd_enable="YES" to
/etc/rc.conf. Additional variables can
be specified in /etc/rc.conf. Refer to
&man.rc.conf.5; and &man.ntpd.8; for
details.This application reads /etc/ntp.conf
to determine which NTP servers to query.
Here is a simple example of an
/etc/ntp.conf: Sample /etc/ntp.confserver ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net
driftfile /var/db/ntp.driftThe format of this file is described in &man.ntp.conf.5;.
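Once ntpd has been started with a configuration such as this, its view of the configured servers can be checked with ntpq. A quick verification sketch (assumes the daemon is already running):

```shell
# List the peers ntpd is using; the entry marked with '*' is the
# server currently selected as the synchronization source.
ntpq -p
```

If no peer gains a `*` marker after several minutes, check network reachability to the listed servers.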
The server option specifies which servers
to query, with one server listed on each line. If a server
entry includes prefer, that server is
preferred over other servers. A response from a preferred
server will be discarded if it differs significantly from
other servers' responses; otherwise it will be used. The
prefer argument should only be used for
NTP servers that are known to be highly
accurate, such as those with special time monitoring
hardware.The driftfile entry specifies which
file is used to store the system clock's frequency offset.
ntpd uses this to automatically
compensate for the clock's natural drift, allowing it to
maintain a reasonably correct setting even if it is cut off
from all external time sources for a period of time. This
file also stores information about previous responses
from NTP servers. Since this file contains
internal information for NTP, it should not
be modified.By default, an NTP server is accessible
to any network host. The restrict option
in /etc/ntp.conf can be used to control
which systems can access the server. For example, to deny all
machines from accessing the NTP server, add
the following line to
/etc/ntp.conf:restrict default ignoreThis will also prevent access from other
NTP servers. If there is a need to
synchronize with an external NTP server,
allow only that specific server. Refer to &man.ntp.conf.5;
for more information.To allow machines within the network to synchronize their
clocks with the server, but ensure they are not allowed to
configure the server or be used as peers to synchronize
against, instead use:restrict 192.168.1.0 mask 255.255.255.0 nomodify notrapwhere 192.168.1.0 is the local
network address and 255.255.255.0 is the network's
subnet mask.Multiple restrict entries are
supported. For more details, refer to the Access
Control Support subsection of
&man.ntp.conf.5;.Once ntpd_enable="YES" has been added
to /etc/rc.conf,
ntpd can be started now without
rebooting the system by typing:&prompt.root; service ntpd startUsing NTP with a
PPP Connectionntpd does not need a permanent
connection to the Internet to function properly. However, if
a PPP connection is configured to dial out
on demand, NTP traffic should be prevented
from triggering a dial out or keeping the connection alive.
This can be configured with filter
directives in /etc/ppp/ppp.conf. For
example: set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0For more details, refer to the
PACKET FILTERING section in &man.ppp.8; and
the examples in
/usr/share/examples/ppp/.Some Internet access providers block low-numbered ports,
preventing NTP from functioning since replies never reach
the machine.iSCSI Initiator and Target
ConfigurationiSCSI is a way to share storage over a
network. Unlike NFS, which works at the file
system level, iSCSI works at the block device
level.In iSCSI terminology, the system that
shares the storage is known as the target.
The storage can be a physical disk, or an area representing
multiple disks or a portion of a physical disk. For example, if
the disk(s) are formatted with ZFS, a zvol
can be created to use as the iSCSI
storage.The clients which access the iSCSI
storage are called initiators. To
initiators, the storage available through
iSCSI appears as a raw, unformatted disk
known as a LUN. Device nodes for the disk
appear in /dev/ and the device must be
separately formatted and mounted.Beginning with 10.0-RELEASE, &os; provides a native,
kernel-based iSCSI target and initiator.
This section describes how to configure a &os; system as a
target or an initiator.Configuring an iSCSI TargetThe native iSCSI target is supported
starting with &os; 10.0-RELEASE. To use
iSCSI in older versions of &os;, install
a userspace target from the Ports Collection, such as
net/istgt. This chapter only describes
the native target.To configure an iSCSI target, create
the /etc/ctl.conf configuration file, add
a line to /etc/rc.conf to make sure the
&man.ctld.8; daemon is automatically started at boot, and then
start the daemon.The following is an example of a simple
/etc/ctl.conf configuration file. Refer
to &man.ctl.conf.5; for a more complete description of this
file's available options.portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}
target iqn.2012-06.com.example:target0 {
auth-group no-authentication
portal-group pg0
lun 0 {
path /data/target0-0
size 4G
}
}The first entry defines the pg0 portal
group. Portal groups define which network addresses the
&man.ctld.8; daemon will listen on. The
discovery-auth-group no-authentication
entry indicates that any initiator is allowed to perform
iSCSI target discovery without
authentication. Lines three and four configure &man.ctld.8;
to listen on all IPv4
(listen 0.0.0.0) and
IPv6 (listen [::])
addresses on the default port of 3260.It is not necessary to define a portal group as there is a
built-in portal group called default. In
this case, the difference between default
and pg0 is that with
default, target discovery is always denied,
while with pg0, it is always
allowed.The second entry defines a single target. Target has two
possible meanings: a machine serving iSCSI
or a named group of LUNs. This example
uses the latter meaning, where
iqn.2012-06.com.example:target0 is the
target name. This target name is suitable for testing
purposes. For actual use, change
com.example to the real domain name,
reversed. The 2012-06 represents the year
and month of acquiring control of that domain name, and
target0 can be any value. Any number of
targets can be defined in this configuration file.The auth-group no-authentication line
allows all initiators to connect to the specified target and
portal-group pg0 makes the target reachable
through the pg0 portal group.The next section defines the LUN. To
the initiator, each LUN will be visible as
a separate disk device. Multiple LUNs can
be defined for each target. Each LUN is
identified by a number, where LUN 0 is
mandatory. The path /data/target0-0 line
defines the full path to a file or zvol backing the
LUN. That path must exist before starting
&man.ctld.8;. The second line is optional and specifies the
size of the LUN.Next, to make sure the &man.ctld.8; daemon is started at
boot, add this line to
/etc/rc.conf:ctld_enable="YES"To start &man.ctld.8; now, run this command:&prompt.root; service ctld startAs the &man.ctld.8; daemon is started, it reads
/etc/ctl.conf. If this file is edited
after the daemon starts, use this command so that the changes
take effect immediately:&prompt.root; service ctld reloadAuthenticationThe previous example is inherently insecure as it uses
no authentication, granting anyone full access to all
targets. To require a username and password to access
targets, modify the configuration as follows:auth-group ag0 {
chap username1 secretsecret
chap username2 anothersecret
}
portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}
target iqn.2012-06.com.example:target0 {
auth-group ag0
portal-group pg0
lun 0 {
path /data/target0-0
size 4G
}
}The auth-group section defines
username and password pairs. An initiator trying to connect
to iqn.2012-06.com.example:target0 must
first specify a defined username and secret. However,
target discovery is still permitted without authentication.
To require target discovery authentication, set
discovery-auth-group to a defined
auth-group name instead of
no-authentication.It is common to define a single exported target for
every initiator. As a shorthand for the syntax above, the
username and password can be specified directly in the
target entry:target iqn.2012-06.com.example:target0 {
portal-group pg0
chap username1 secretsecret
lun 0 {
path /data/target0-0
size 4G
}
}Configuring an iSCSI InitiatorThe iSCSI initiator described in this
section is supported starting with &os; 10.0-RELEASE. To
use the iSCSI initiator available in
older versions, refer to &man.iscontrol.8;.The iSCSI initiator requires that the
&man.iscsid.8; daemon is running. This daemon does not use a
configuration file. To start it automatically at boot, add
this line to /etc/rc.conf:iscsid_enable="YES"To start &man.iscsid.8; now, run this command:&prompt.root; service iscsid startConnecting to a target can be done with or without an
/etc/iscsi.conf configuration file. This
section demonstrates both types of connections.Connecting to a Target Without a Configuration
FileTo connect an initiator to a single target, specify the
IP address of the portal and the name of
the target:&prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0To verify if the connection succeeded, run
iscsictl without any arguments. The
output should look similar to this:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0In this example, the iSCSI session
was successfully established, with
/dev/da0 representing the attached
LUN. If the
iqn.2012-06.com.example:target0 target
exports more than one LUN, multiple
device nodes will be shown in that section of the
output:Connected: da0 da1 da2.Any errors will be reported in the output, as well as
in the system logs. For example, this message usually means
that the &man.iscsid.8; daemon is not running:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8)The following message suggests a networking problem,
such as a wrong IP address or
port:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.11 Connection refusedThis message means that the specified target name is
wrong:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Not foundThis message means that the target requires
authentication:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Authentication failedTo specify a CHAP username and
secret, use this syntax:&prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecretConnecting to a Target with a Configuration
FileTo connect using a configuration file, create
/etc/iscsi.conf with contents like
this:t0 {
TargetAddress = 10.10.10.10
TargetName = iqn.2012-06.com.example:target0
AuthMethod = CHAP
chapIName = user
chapSecret = secretsecret
}The t0 specifies a nickname for the
configuration file section. It will be used by the
initiator to specify which configuration to use. The other
lines specify the parameters to use during connection. The
TargetAddress and
TargetName are mandatory, whereas the
other options are optional. In this example, the
CHAP username and secret are
shown.To connect to the defined target, specify the
nickname:&prompt.root; iscsictl -An t0Alternately, to connect to all targets defined in the
configuration file, use:&prompt.root; iscsictl -AaTo make the initiator automatically connect to all
targets in /etc/iscsi.conf, add the
following to /etc/rc.conf:iscsictl_enable="YES"
iscsictl_flags="-Aa"
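After a session is established and a device node such as /dev/da0 appears, the LUN is still a raw, unformatted disk. A minimal sketch of partitioning, formatting, and mounting it, assuming the device name da0 and a UFS filesystem (adjust both to the actual setup):

```shell
# Assumes the attached LUN appeared as /dev/da0; the name varies per system.
gpart create -s gpt da0        # create a GPT partition table
gpart add -t freebsd-ufs da0   # add one UFS partition, creating /dev/da0p1
newfs -U /dev/da0p1            # build a UFS filesystem with soft updates
mkdir -p /mnt/iscsi
mount /dev/da0p1 /mnt/iscsi    # mount the new filesystem
```

For a ZFS pool on the LUN, `zpool create` on the raw device can be used instead of the gpart/newfs steps.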
Index: head/en_US.ISO8859-1/books/handbook/pgpkeys/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/pgpkeys/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/pgpkeys/chapter.xml (revision 48529)
@@ -1,39 +1,39 @@
OpenPGP Keyspgp keysThe OpenPGP keys of the
FreeBSD.org officers
are shown here. These keys can be used to verify a signature or
send encrypted email to one of the officers. A full list of &os;
OpenPGP keys is available in the
PGP
Keys article. The complete keyring can be downloaded
at https://www.FreeBSD.org/doc/pgpkeyring.txt.
+ xlink:href="https://www.FreeBSD.org/doc/pgpkeyring.txt">https://www.FreeBSD.org/doc/pgpkeyring.txt.
Officers
§ion.pgpkeys-officers;
Index: head/en_US.ISO8859-1/books/handbook/security/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 48529)
@@ -1,3938 +1,3938 @@
SecurityTomRhodesRewritten by securitySynopsisSecurity, whether physical or virtual, is a topic so broad
that an entire industry has evolved around it. Hundreds of
standard practices have been authored about how to secure
systems and networks, and as a user of &os;, understanding how
to protect against attacks and intruders is a must.In this chapter, several fundamentals and techniques will be
discussed. The &os; system comes with multiple layers of
security, and many more third party utilities may be added to
enhance security.After reading this chapter, you will know:Basic &os; system security concepts.The various crypt mechanisms available in &os;.How to set up one-time password authentication.How to configure TCP Wrapper
for use with &man.inetd.8;.How to set up Kerberos on
&os;.How to configure IPsec and create a
VPN.How to configure and use
OpenSSH on &os;.How to use file system ACLs.How to use pkg to audit
third party software packages installed from the Ports
Collection.How to utilize &os; security advisories.What Process Accounting is and how to enable it on
&os;.How to control user resources using login classes or the
resource limits database.Before reading this chapter, you should:Understand basic &os; and Internet concepts.Additional security topics are covered elsewhere in this
Handbook. For example, Mandatory Access Control is discussed in
and Internet firewalls are discussed in
.IntroductionSecurity is everyone's responsibility. A weak entry point
in any system could allow intruders to gain access to critical
information and cause havoc on an entire network. One of the
core principles of information security is the
CIA triad, which stands for the
Confidentiality, Integrity, and Availability of information
systems.The CIA triad is a bedrock concept of
computer security as customers and users expect their data to be
protected. For example, a customer expects that their credit
card information is securely stored (confidentiality), that
their orders are not changed behind the scenes (integrity), and
that they have access to their order information at all times
(availability).To provide CIA, security professionals
apply a defense in depth strategy. The idea of defense in depth
is to add several layers of security to prevent one single layer
failing and the entire security system collapsing. For example,
a system administrator cannot simply turn on a firewall and
consider the network or system secure. One must also audit
accounts, check the integrity of binaries, and ensure malicious
tools are not installed. To implement an effective security
strategy, one must understand threats and how to defend against
them.What is a threat as it pertains to computer security?
Threats are not limited to remote attackers who attempt to
access a system without permission from a remote location.
Threats also include employees, malicious software, unauthorized
network devices, natural disasters, security vulnerabilities,
and even competing corporations.Systems and networks can be accessed without permission,
sometimes by accident, or by remote attackers, and in some
cases, via corporate espionage or former employees. As a user,
it is important to prepare for and admit when a mistake has led
to a security breach and report possible issues to the security
team. As an administrator, it is important to know of the
threats and be prepared to mitigate them.When applying security to systems, it is recommended to
start by securing the basic accounts and system configuration,
and then to secure the network layer so that it adheres to the
system policy and the organization's security procedures. Many
organizations already have a security policy that covers the
configuration of technology devices. The policy should include
the security configuration of workstations, desktops, mobile
devices, phones, production servers, and development servers.
In many cases, standard operating procedures
(SOPs) already exist. When in doubt, ask the
security team.The rest of this introduction describes how some of these
basic security configurations are performed on a &os; system.
The rest of this chapter describes some specific tools which can
be used when implementing a security policy on a &os;
system.Preventing LoginsIn securing a system, a good starting point is an audit of
accounts. Ensure that root has a strong password and
that this password is not shared. Disable any accounts that
do not need login access.To deny login access to accounts, two methods exist. The
first is to lock the account. This example locks the
toor account:&prompt.root; pw lock toorThe second method is to prevent login access by changing
the shell to /usr/sbin/nologin. Only the
superuser can change the shell for other users:&prompt.root; chsh -s /usr/sbin/nologin toorThe /usr/sbin/nologin shell prevents
the system from assigning a shell to the user when they
attempt to login.Permitted Account EscalationIn some cases, system administration needs to be shared
with other users. &os; has two methods to handle this. The
first one, which is not recommended, is a shared root password
used by members of the wheel group. With this
method, a user types su and enters the
password for root
whenever superuser access is needed. The user should then
type exit to leave privileged access after
finishing the commands that required administrative access.
To add a user to this group, edit
/etc/group and add the user to the end of
the wheel entry. The username must be
separated by a comma, with no spaces.
escalation is to install the security/sudo
package or port. This software provides additional auditing,
more fine-grained user control, and can be configured to lock
users into running only the specified privileged
commands.After installation, use visudo to edit
/usr/local/etc/sudoers. This example
creates a new webadmin group, adds the
trhodes account to
that group, and configures that group access to restart
apache24:&prompt.root; pw groupadd webadmin -M trhodes -g 6000
&prompt.root; visudo
%webadmin ALL=(ALL) /usr/sbin/service apache24 *Password HashesPasswords are a necessary evil of technology. When they
must be used, they should be complex and a powerful hash
mechanism should be used to encrypt the version that is stored
in the password database. &os; supports the
DES, MD5,
SHA256, SHA512, and
Blowfish hash algorithms in its crypt()
library. The default of SHA512 should not
be changed to a less secure hashing algorithm, but can be
changed to the more secure Blowfish algorithm.Blowfish is not part of AES and is
not considered compliant with any Federal Information
Processing Standards (FIPS). Its use may
not be permitted in some environments.To determine which hash algorithm is used to encrypt a
user's password, the superuser can view the hash for the user
in the &os; password database. Each hash starts with a symbol
which indicates the type of hash mechanism used to encrypt the
password. If DES is used, there is no
beginning symbol. For MD5, the symbol is
$. For SHA256, the symbol is
$5$, and for SHA512 it is
$6$. For Blowfish, the symbol is
$2a$. In this example, the password for
dru is hashed using
the default SHA512 algorithm as the hash
starts with $6$. Note that the encrypted
hash, not the password itself, is stored in the password
database:&prompt.root; grep dru /etc/master.passwd
dru:$6$pzIjSvCAn.PBYQBA$PXpSeWPx3g5kscj3IMiM7tUEUSPmGexxta.8Lt9TGSi2lNQqYGKszsBPuGME0:1001:1001::0:0:dru:/usr/home/dru:/bin/cshThe hash mechanism is set in the user's login class. For
this example, the user is in the default
login class and the hash algorithm is set with this line in
/etc/login.conf: :passwd_format=sha512:\To change the algorithm to Blowfish, modify that line to
look like this: :passwd_format=blf:\Then run cap_mkdb /etc/login.conf as
described in . Note that this
change will not affect any existing password hashes. This
means that all passwords should be re-hashed by asking users
to run passwd in order to change their
password.For remote logins, two-factor authentication should be
used. An example of two-factor authentication is
something you have, such as a key, and
something you know, such as the passphrase for
that key. Since OpenSSH is part of
the &os; base system, all network logins should be over an
encrypted connection and use key-based authentication instead
of passwords. For more information, refer to . Kerberos users may need to make
additional changes to implement
OpenSSH in their network. These
changes are described in .Password Policy EnforcementEnforcing a strong password policy for local accounts is a
fundamental aspect of system security. In &os;, password
length, password strength, and password complexity can be
enforced using built-in Pluggable Authentication Modules
(PAM).This section demonstrates how to configure the minimum and
maximum password length and the enforcement of mixed
characters using the pam_passwdqc.so
module. This module is invoked when a user changes their
password.To configure this module, become the superuser and
uncomment the line containing
pam_passwdqc.so in
/etc/pam.d/passwd. Then, edit that line
to match the password policy:password requisite pam_passwdqc.so min=disabled,disabled,disabled,12,10 similar=deny retry=3 enforce=usersThis example sets several requirements for new passwords.
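The class-based minimums configured above can be sketched in a few lines. This is a simplified illustration, not the actual pam_passwdqc implementation: the real checks also handle passphrases, word similarity, and common patterns, and the third value of min applies to passphrases, which are not modeled here.

```python
import string

# Simplified sketch of min=disabled,disabled,disabled,12,10; the third
# value actually applies to passphrases, which this sketch ignores.
MIN = {1: None, 2: None, 3: 12, 4: 10}

def char_classes(password):
    """Count how many of the four character classes are present."""
    present = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(present)

def acceptable(password):
    """A password using n character classes passes only if the n-class
    minimum is enabled and the password meets that minimum length."""
    n = char_classes(password)
    if n == 0:
        return False
    minimum = MIN[n]
    return minimum is not None and len(password) >= minimum
```

Under this policy, a seventeen-character password drawn from three classes passes, while even a very long all-lowercase password is rejected outright because one-class passwords are disabled.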
The min setting controls the minimum
password length. It has five values because this module
defines five different types of passwords based on their
complexity. Complexity is defined by the type of characters
that must exist in a password, such as letters, numbers,
symbols, and case. The types of passwords are described in
&man.pam.passwdqc.8;. In this example, the first three types
of passwords are disabled, meaning that passwords that meet
those complexity requirements will not be accepted, regardless
of their length. The 12 sets a minimum
length of twelve characters for passwords which contain
characters from three complexity classes. The
10 allows shorter passwords
of at least ten characters, provided the password contains
characters from all four complexity classes.The similar setting denies passwords
that are similar to the user's previous password. The
retry setting provides a user with three
opportunities to enter a new password.Once this file is saved, a user changing their password
will see a message similar to the following:&prompt.user; passwd
Changing local password for trhodes
Old Password:
You can now choose the new password.
A valid password should be a mix of upper and lower case letters,
digits and other characters. You can use a 12 character long
password with characters from at least 3 of these 4 classes, or
a 10 character long password containing characters from all the
classes. Characters that form a common pattern are discarded by
the check.
Alternatively, if noone else can see your terminal now, you can
pick this as your password: "trait-useful&knob".
Enter new password:If a password that does not match the policy is entered,
it will be rejected with a warning and the user will have an
opportunity to try again, up to the configured number of
retries.Most password policies require passwords to expire after
so many days. To set a password age time in &os;, set
for the user's login class in
/etc/login.conf. The
default login class contains an
example:# :passwordtime=90d:\So, to set an expiry of 90 days for this login class,
remove the comment symbol (#), save the
edit, and run cap_mkdb
/etc/login.conf.To set the expiration on individual users, pass an
expiration date or the number of days to expiry and a username
to pw:&prompt.root; pw usermod -p 30-apr-2015 -n trhodesAs seen here, an expiration date is set in the form of
day, month, and year. For more information, see
&man.pw.8;.Detecting RootkitsA rootkit is any unauthorized
software that attempts to gain root access to a system. Once
installed, this malicious software will normally open up
another avenue of entry for an attacker. Realistically, once
a system has been compromised by a rootkit and an
investigation has been performed, the system should be
reinstalled from scratch. There is tremendous risk that even
the most prudent security or systems engineer will miss
something an attacker left behind.A rootkit does do one thing useful for administrators: once
detected, it is a sign that a compromise happened at some
point. But, these types of applications tend to be very well
hidden. This section demonstrates a tool that can be used to
detect rootkits, security/rkhunter.After installation of this package or port, the system may
be checked using the following command. It will produce a lot
of information and will require some manual pressing of
ENTER:&prompt.root; rkhunter -cAfter the process completes, a status message will be
printed to the screen. This message will include the number
of files checked, suspect files, possible rootkits, and more.
During the check, some generic security warnings may
be produced about hidden files, the
OpenSSH protocol selection, and
known vulnerable versions of installed software. These can be
handled now or after a more detailed analysis has been
performed.Every administrator should know what is running on the
systems they are responsible for. Third-party tools like
rkhunter and
sysutils/lsof, and native commands such
as netstat and ps, can
show a great deal of information on the system. Take notes on
what is normal, ask questions when something seems out of
place, and be paranoid. While preventing a compromise is
ideal, detecting a compromise is a must.Binary VerificationVerification of system files and binaries is important
because it provides the system administration and security
teams information about system changes. A software
application that monitors the system for changes is called an
Intrusion Detection System (IDS).&os; provides native support for a basic
IDS system. While the nightly security
emails will notify an administrator of changes, the
information is stored locally and there is a chance that a
malicious user could modify this information in order to hide
their changes to the system. As such, it is recommended to
create a separate set of binary signatures and store them in a
read-only, root-owned directory or, preferably, on a removable
USB disk or remote
rsync server.The built-in mtree utility can be used
to generate a specification of the contents of a directory. A
seed, or a numeric constant, is used to generate the
specification and is required to check that the specification
has not changed. This makes it possible to determine if a
file or binary has been modified. Since the seed value is
unknown by an attacker, faking or checking the checksum values
of files will be difficult or impossible. The following
example generates a set of SHA256 hashes,
one for each system binary in /bin, and
saves those values to a hidden file in root's home directory,
/root/.bin_chksum_mtree:&prompt.root; mtree -s 3483151339707503 -c -K cksum,sha256digest -p /bin > /root/.bin_chksum_mtree
&prompt.root; mtree: /bin checksum: 3427012225The 3483151339707503 represents
the seed. This value should be remembered, but not
shared.Viewing /root/.bin_chksum_mtree should
yield output similar to the following:# user: root
# machine: dreadnaught
# tree: /bin
# date: Mon Feb 3 10:19:53 2014
# .
/set type=file uid=0 gid=0 mode=0555 nlink=1 flags=none
. type=dir mode=0755 nlink=2 size=1024 \
time=1380277977.000000000
\133 nlink=2 size=11704 time=1380277977.000000000 \
cksum=484492447 \
sha256digest=6207490fbdb5ed1904441fbfa941279055c3e24d3a4049aeb45094596400662a
cat size=12096 time=1380277975.000000000 cksum=3909216944 \
sha256digest=65ea347b9418760b247ab10244f47a7ca2a569c9836d77f074e7a306900c1e69
chflags size=8168 time=1380277975.000000000 cksum=3949425175 \
sha256digest=c99eb6fc1c92cac335c08be004a0a5b4c24a0c0ef3712017b12c89a978b2dac3
chio size=18520 time=1380277975.000000000 cksum=2208263309 \
sha256digest=ddf7c8cb92a58750a675328345560d8cc7fe14fb3ccd3690c34954cbe69fc964
chmod size=8640 time=1380277975.000000000 cksum=2214429708 \
sha256digest=a435972263bf814ad8df082c0752aa2a7bdd8b74ff01431ccbd52ed1e490bbe7The machine's hostname, the date and time the
specification was created, and the name of the user who
created the specification are included in this report. There
is a checksum, size, time, and SHA256
digest for each binary in the directory.To verify that the binary signatures have not changed,
compare the current contents of the directory to the
previously generated specification, and save the results to a
file. This command requires the seed that was used to
generate the original specification:&prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output
&prompt.root; mtree: /bin checksum: 3427012225This should produce the same checksum for
/bin that was produced when the
specification was created. If no changes have occurred to the
binaries in this directory, the
/root/.bin_chksum_output output file will
be empty. To simulate a change, update the timestamp on
/bin/cat using touch
and run the verification command again:&prompt.root; touch /bin/cat
&prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output
&prompt.root; more /root/.bin_chksum_output
cat changed
modification time expected Fri Sep 27 06:32:55 2013 found Mon Feb 3 10:28:43 2014It is recommended to create specifications for the
directories which contain binaries and configuration files, as
well as any directories containing sensitive data. Typically,
specifications are created for /bin,
/sbin, /usr/bin,
/usr/sbin,
/usr/local/bin,
/etc, and
/usr/local/etc.More advanced IDS packages exist, such
as security/aide. In most cases,
mtree provides the functionality
administrators need. It is important to keep the seed value
and the checksum output hidden from malicious users. More
information about mtree can be found in
&man.mtree.8;.System Tuning for SecurityIn &os;, many system features can be tuned using
sysctl. A few of the security features
which can be tuned to prevent Denial of Service
(DoS) attacks will be covered in this
section. More information about using
sysctl, including how to temporarily change
values and how to make the changes permanent after testing,
can be found in .Any time a setting is changed with
sysctl, the chance of causing undesired
harm increases, possibly affecting the availability of the system.
All changes should be monitored and, if possible, tried on a
testing system before being used on a production
system.By default, the &os; kernel boots with a security level of
-1. This is called insecure
mode because immutable file flags may be turned off
and all devices may be read from or written to. The security
level will remain at -1 unless it is
altered through sysctl or by a setting in
the startup scripts. The security level may be increased
during system startup by setting
kern_securelevel_enable to
YES in /etc/rc.conf,
and the value of kern_securelevel to the
desired security level. See &man.security.7; and &man.init.8;
for more information on these settings and the available
security levels.Increasing the securelevel can break
Xorg and cause other issues. Be
prepared to do some debugging.The net.inet.tcp.blackhole and
net.inet.udp.blackhole settings can be used
to silently drop incoming TCP SYN segments
and UDP datagrams arriving on closed
ports. The default behavior is to return an
RST or an ICMP port unreachable
message to show a port is closed. Changing the
default provides some level of protection against port scans,
which are used to determine which applications are running on
a system. Set net.inet.tcp.blackhole to
2 and
net.inet.udp.blackhole to
1. Refer to &man.blackhole.4; for more
information about these settings.The net.inet.icmp.drop_redirect and
net.inet.ip.redirect settings help prevent
against redirect attacks. A redirect
attack is a type of DoS which sends large
numbers of ICMP type 5 packets. Since
these packets are not required, set
net.inet.icmp.drop_redirect to
1 and set
net.inet.ip.redirect to
0.Source routing is a method for detecting and accessing
non-routable addresses on the internal network. This should
be disabled, as non-routable addresses are normally
unroutable by design. To disable this feature, set
net.inet.ip.sourceroute and
net.inet.ip.accept_sourceroute to
0.When a machine on the network needs to send messages to
all hosts on a subnet, an ICMP echo request
message is sent to the broadcast address. However, there is
no reason for an external host to perform such an action. To
reject all external broadcast requests, set
net.inet.icmp.bmcastecho to
0.Some additional settings are documented in
&man.security.7;.One-time Passwordsone-time passwordssecurityone-time passwordsBy default, &os; includes support for One-time Passwords In
Everything (OPIE). OPIE
is designed to prevent replay attacks, in which an attacker
discovers a user's password and uses it to access a system.
Since a password is only used once in OPIE, a
discovered password is of little use to an attacker.
OPIE uses a secure hash and a
challenge/response system to manage passwords. The &os;
implementation uses the MD5 hash by
default.OPIE uses three different types of
passwords. The first is the usual &unix; or Kerberos password.
The second is the one-time password which is generated by
opiekey. The third type of password is the
secret password which is used to generate
one-time passwords. The secret password has nothing to do with,
and should be different from, the &unix; password.There are two other pieces of data that are important to
OPIE. One is the seed or
key, consisting of two letters and five digits.
The other is the iteration count, a number
between 1 and 100. OPIE creates the one-time
password by concatenating the seed and the secret password,
applying the MD5 hash as many times as
specified by the iteration count, and turning the result into
six short English words which represent the one-time password.
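The computation described above can be sketched as follows. This is a simplified illustration in the style of RFC 2289, not the actual opiekey implementation: the word list is a hypothetical stand-in for the standard 2048-word dictionary, and the 2-bit checksum appended before the six-word split is omitted.

```python
import hashlib

# Hypothetical stand-in for the fixed 2048-word dictionary used by the
# real implementation; only the 11-bit indexing matters in this sketch.
WORDS = ["W%04d" % i for i in range(2048)]

def fold(digest):
    """Fold a 16-byte MD5 digest to 8 bytes by XORing its two halves."""
    return bytes(a ^ b for a, b in zip(digest[:8], digest[8:]))

def one_time_key(secret, seed, count):
    """Concatenate the seed and secret password, hash and fold once,
    then re-hash as many times as the iteration count prescribes."""
    key = fold(hashlib.md5((seed.lower() + secret).encode()).digest())
    for _ in range(count):
        key = fold(hashlib.md5(key).digest())
    return key

def to_six_words(key):
    """Split the 64-bit key into six 11-bit indexes into the word list
    (the real scheme adds a 2-bit checksum before splitting)."""
    n = int.from_bytes(key, "big")
    return [WORDS[(n >> (11 * i)) & 0x7FF] for i in range(5, -1, -1)]
```

Because hashing the count-n key once more yields the count-n+1 key, the server can check a response against the previously used password without knowing the secret, while a captured password cannot be used to derive the lower-count passwords still to come.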
The authentication system keeps track of the last one-time
password used, and the user is authenticated if the hash of the
user-provided password is equal to the previous password.
Because a one-way hash is used, it is impossible to generate
future one-time passwords if a successfully used password is
captured. The iteration count is decremented after each
successful login to keep the user and the login program in sync.
When the iteration count gets down to 1,
OPIE must be reinitialized.There are a few programs involved in this process. A
one-time password, or a consecutive list of one-time passwords,
is generated by passing an iteration count, a seed, and a secret
password to &man.opiekey.1;. In addition to initializing
OPIE, &man.opiepasswd.1; is used to change
passwords, iteration counts, or seeds. The relevant credential
files in /etc/opiekeys are examined by
&man.opieinfo.1; which prints out the invoking user's current
iteration count and seed.This section describes four different sorts of operations.
The first is how to set up one-time-passwords for the first time
over a secure connection. The second is how to use
opiepasswd over an insecure connection. The
third is how to log in over an insecure connection. The fourth
is how to generate a number of keys which can be written down or
printed out to use at insecure locations.Initializing OPIETo initialize OPIE for the first time,
run this command from a secure location:&prompt.user; opiepasswd -c
[grimreaper] ~ $ opiepasswd -f -c
Adding unfurl:
Only use this method from the console; NEVER from remote. If you are using
telnet, xterm, or a dial-in, type ^C now or exit with no password.
Then run opiepasswd without the -c parameter.
Using MD5 to compute responses.
Enter new secret pass phrase:
Again new secret pass phrase:
ID unfurl OTP key is 499 to4268
MOS MALL GOAT ARM AVID COEDThe -c sets console mode which assumes
that the command is being run from a secure location, such as
a computer under the user's control or a
SSH session to a computer under the user's
control.When prompted, enter the secret password which will be
used to generate the one-time login keys. This password
should be difficult to guess and should be different than the
password which is associated with the user's login account.
It must be between 10 and 127 characters long. Remember this
password.The ID line lists the login name
(unfurl), default iteration count
(499), and default seed
(to4268). When logging in, the system will
remember these parameters and display them, meaning that they
do not have to be memorized. The last line lists the
generated one-time password which corresponds to those
parameters and the secret password. At the next login, use
this one-time password.Insecure Connection InitializationTo initialize or change the secret password on an
insecure system, a secure connection is needed to some place
where opiekey can be run. This might be a
shell prompt on a trusted machine. An iteration count is
needed, where 100 is probably a good value, and the seed can
either be specified or the randomly-generated one used. On
the insecure connection (the machine being initialized), use
&man.opiepasswd.1;:&prompt.user; opiepasswd
Updating unfurl:
You need the response from an OTP generator.
Old secret pass phrase:
otp-md5 498 to4268 ext
Response: GAME GAG WELT OUT DOWN CHAT
New secret pass phrase:
otp-md5 499 to4269
Response: LINE PAP MILK NELL BUOY TROY
ID mark OTP key is 499 gr4269
LINE PAP MILK NELL BUOY TROYTo accept the default seed, press Return.
Before entering an access password, move over to the secure
connection and give it the same parameters:&prompt.user; opiekey 498 to4268
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase:
GAME GAG WELT OUT DOWN CHATSwitch back over to the insecure connection, and copy the
generated one-time password over to the relevant
program.Generating a Single One-time PasswordAfter initializing OPIE and logging in,
a prompt like this will be displayed:&prompt.user; telnet example.com
Trying 10.0.0.1...
Connected to example.com
Escape character is '^]'.
FreeBSD/i386 (example.com) (ttypa)
login: <username>
otp-md5 498 gr4269 ext
Password: The OPIE prompt provides a useful
feature. If Return is pressed at the
password prompt, the prompt will turn echo on and display
what is typed. This can be useful when attempting to type in
a password by hand from a printout.MS-DOSWindowsMacOSAt this point, generate the one-time password to answer
this login prompt. This must be done on a trusted system
where it is safe to run &man.opiekey.1;. There are versions
of this command for &windows;, &macos;, and &os;. This command
needs the iteration count and the seed as command line
options. Use cut-and-paste from the login prompt on the
machine being logged in to.On the trusted system:&prompt.user; opiekey 498 to4268
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase:
GAME GAG WELT OUT DOWN CHATOnce the one-time password is generated, continue to log
in.Generating Multiple One-time PasswordsSometimes there is no access to a trusted machine or
secure connection. In this case, it is possible to use
&man.opiekey.1; to generate a number of one-time passwords
beforehand. For example:&prompt.user; opiekey -n 5 30 zz99999
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase: <secret password>
26: JOAN BORE FOSS DES NAY QUIT
27: LATE BIAS SLAY FOLK MUCH TRIG
28: SALT TIN ANTI LOON NEAL USE
29: RIO ODIN GO BYE FURY TIC
30: GREW JIVE SAN GIRD BOIL PHIThe -n 5 requests five keys in sequence,
and the 30 specifies what the last iteration
number should be. Note that these are printed out in
reverse order of use. The really
paranoid might want to write the results down by hand;
otherwise, print the list. Each line shows both the iteration
count and the one-time password. Scratch off the passwords as
they are used.Restricting Use of &unix; PasswordsOPIE can restrict the use of &unix;
passwords based on the IP address of a login session. The
relevant file is /etc/opieaccess, which
is present by default. Refer to &man.opieaccess.5; for more
information on this file and which security considerations to
be aware of when using it.Here is a sample opieaccess:permit 192.168.0.0 255.255.0.0This line allows users whose IP source address (which is
vulnerable to spoofing) matches the specified value and mask
to use &unix; passwords at any time.If no rules in opieaccess are
matched, the default is to deny non-OPIE
logins.TCP WrapperTomRhodesWritten
by TCP WrapperTCP Wrapper is a host-based
access control system which extends the abilities of inetd. It can be configured to provide
logging support, return messages, and connection restrictions
for the server daemons under the control of
inetd. Refer to &man.tcpd.8; for
more information about
TCP Wrapper and its features.TCP Wrapper should not be
considered a replacement for a properly configured firewall.
Instead, TCP Wrapper should be used
in conjunction with a firewall and other security enhancements
in order to provide another layer of protection in the
implementation of a security policy.Initial ConfigurationTo enable TCP Wrapper in &os;,
add the following lines to
/etc/rc.conf:inetd_enable="YES"
inetd_flags="-Ww"Then, properly configure
/etc/hosts.allow.Unlike other implementations of
TCP Wrapper, the use of
hosts.deny is deprecated in &os;. All
configuration options should be placed in
/etc/hosts.allow.In the simplest configuration, daemon connection policies
are set to either permit or block, depending on the options in
/etc/hosts.allow. The default
configuration in &os; is to allow all connections to the
daemons started with inetd.Basic configuration usually takes the form of
daemon : address : action, where
daemon is the daemon which
inetd started,
address is a valid hostname,
IP address, or an IPv6 address enclosed in
brackets ([ ]), and action is either
allow or deny.
TCP Wrapper uses a first rule match
semantic, meaning that the configuration file is scanned from
the beginning for a matching rule. When a match is found, the
rule is applied and the search process stops.For example, to allow POP3 connections
via the mail/qpopper daemon, the following
lines should be appended to
hosts.allow:# This line is required for POP3 connections:
qpopper : ALL : allowWhenever this file is edited, restart
inetd:&prompt.root; service inetd restartAdvanced ConfigurationTCP Wrapper provides advanced
options to allow more control over the way connections are
handled. In some cases, it may be appropriate to return a
comment to certain hosts or daemon connections. In other
cases, a log entry should be recorded or an email sent to the
administrator. Other situations may require the use of a
service for local connections only. This is all possible
through the use of configuration options known as wildcards,
expansion characters, and external command execution.Suppose that a situation occurs where a connection should
be denied yet a reason should be sent to the host who
attempted to establish that connection. That action is
possible with twist. When a connection
attempt is made, twist executes a shell
command or script. An example exists in
hosts.allow:# The rest of the daemons are protected.
ALL : ALL \
: severity auth.info \
: twist /bin/echo "You are not welcome to use %d from %h."In this example, the message You are not welcome to
use daemon name from
hostname. will be
returned for any daemon not configured in
hosts.allow. This is useful for sending
a reply back to the connection initiator right after the
established connection is dropped. Any message returned
must be wrapped in quote
(") characters.It may be possible to launch a denial of service attack
on the server if an attacker floods these daemons with
connection requests.Another possibility is to use spawn.
Like twist, spawn implicitly
denies the connection and may be used to run external shell
commands or scripts. Unlike twist,
spawn will not send a reply back to the host
who established the connection. For example, consider the
following configuration:# We do not allow connections from example.com:
ALL : .example.com \
: spawn (/bin/echo %a from %h attempted to access %d >> \
/var/log/connections.log) \
: denyThis will deny all connection attempts from *.example.com and log the
hostname, IP address, and the daemon to
which access was attempted to
/var/log/connections.log. This example
uses the substitution characters %a and
%h. Refer to &man.hosts.access.5; for the
complete list.To match every instance of a daemon, domain, or
IP address, use ALL.
Another wildcard is PARANOID which may be
used to match any host which provides an IP
address that may be forged because the IP
address differs from its resolved hostname. In this example,
all connection requests to Sendmail
which have an IP address that varies from
its hostname will be denied:# Block possibly spoofed requests to sendmail:
sendmail : PARANOID : denyUsing the PARANOID wildcard will
result in denied connections if the client or server has a
broken DNS setup.To learn more about wildcards and their associated
functionality, refer to &man.hosts.access.5;.When adding new configuration lines, make sure that any
unneeded entries for that daemon are commented out in
hosts.allow.KerberosTillmanHodgsonContributed by MarkMurrayBased on a contribution by Kerberos is a network
authentication protocol which was originally created by the
Massachusetts Institute of Technology (MIT)
as a way to securely provide authentication across a potentially
hostile network. The Kerberos
protocol uses strong cryptography so that both a client and
server can prove their identity without sending any unencrypted
secrets over the network. Kerberos
can be described as an identity-verifying proxy system and as a
trusted third-party authentication system. After a user
authenticates with Kerberos, their
communications can be encrypted to assure privacy and data
integrity.The only function of Kerberos is
to provide the secure authentication of users and servers on the
network. It does not provide authorization or auditing
functions. It is recommended that
Kerberos be used with other security
methods which provide authorization and audit services.The current version of the protocol is version 5, described
in RFC 4120. Several free
implementations of this protocol are available, covering a wide
range of operating systems. MIT continues to
develop their Kerberos package. It
is commonly used in the US as a cryptography
product, and has historically been subject to
US export regulations. In &os;,
MIT Kerberos is
available as the security/krb5 package or
port. The Heimdal Kerberos
implementation was explicitly developed outside of the
US to avoid export regulations. The Heimdal
Kerberos distribution is included in
the base &os; installation, and another distribution with more
configurable options is available as
security/heimdal in the Ports
Collection.In Kerberos, users and services
are identified as principals which are contained
within an administrative grouping, called a
realm. A typical user principal would be of the
form
user@REALM
(realms are traditionally uppercase).This section provides a guide on how to set up
Kerberos using the Heimdal
distribution included in &os;.For purposes of demonstrating a
Kerberos installation, the name
spaces will be as follows:The DNS domain (zone) will be
example.org.The Kerberos realm will be
EXAMPLE.ORG.Use real domain names when setting up
Kerberos, even if it will run
internally. This avoids DNS problems and
assures inter-operation with other
Kerberos realms.Setting up a Heimdal KDCKerberos5Key Distribution CenterThe Key Distribution Center (KDC) is
the centralized authentication service that
Kerberos provides, the
trusted third party of the system. It is the
computer that issues Kerberos
tickets, which are used for clients to authenticate to
servers. Because the KDC is considered
trusted by all other computers in the
Kerberos realm, it has heightened
security concerns. Direct access to the KDC should be
limited.While running a KDC requires few
computing resources, a dedicated machine acting only as a
KDC is recommended for security
reasons.To begin setting up a KDC, add these
lines to /etc/rc.conf:kdc_enable="YES"
kadmind_enable="YES"Next, edit /etc/krb5.conf as
follows:[libdefaults]
default_realm = EXAMPLE.ORG
[realms]
EXAMPLE.ORG = {
kdc = kerberos.example.org
admin_server = kerberos.example.org
}
[domain_realm]
.example.org = EXAMPLE.ORGIn this example, the KDC will use the
fully-qualified hostname kerberos.example.org. The
hostname of the KDC must be resolvable in the
DNS.Kerberos can also use the
DNS to locate KDCs, instead of a
[realms] section in
/etc/krb5.conf. For large organizations
that have their own DNS servers, the above
example could be trimmed to:[libdefaults]
default_realm = EXAMPLE.ORG
[domain_realm]
.example.org = EXAMPLE.ORGWith the following lines being included in the
example.org zone
file:_kerberos._udp IN SRV 01 00 88 kerberos.example.org.
_kerberos._tcp IN SRV 01 00 88 kerberos.example.org.
_kpasswd._udp IN SRV 01 00 464 kerberos.example.org.
_kerberos-adm._tcp IN SRV 01 00 749 kerberos.example.org.
_kerberos IN TXT EXAMPLE.ORGIn order for clients to be able to find the
Kerberos services, they
must have either
a fully configured /etc/krb5.conf or a
minimally configured /etc/krb5.conf and a properly configured
DNS server.Next, create the Kerberos
database which contains the keys of all principals (users and
hosts) encrypted with a master password. It is not required
to remember this password as it will be stored in
/var/heimdal/m-key; it would be
reasonable to use a 45-character random password for this
purpose. To create the master key, run
kstash and enter a password:&prompt.root; kstash
Master key: xxxxxxxxxxxxxxxxxxxxxxx
Verifying password - Master key: xxxxxxxxxxxxxxxxxxxxxxxOnce the master key has been created, the database should
be initialized. The Kerberos
administrative tool &man.kadmin.8; can be used on the KDC in a
mode that operates directly on the database, without using the
&man.kadmind.8; network service, as
kadmin -l. This resolves the
chicken-and-egg problem of trying to connect to the database
before it is created. At the kadmin
prompt, use init to create the realm's
initial database:&prompt.root; kadmin -l
kadmin> init EXAMPLE.ORG
Realm max ticket life [unlimited]:Lastly, while still in kadmin, create
the first principal using add. Stick to
the default options for the principal for now, as these can be
changed later with modify. Type
? at the prompt to see the available
options.kadmin> add tillman
Max ticket life [unlimited]:
Max renewable life [unlimited]:
Attributes []:
Password: xxxxxxxx
Verifying password - Password: xxxxxxxxNext, start the KDC services by running
service kdc start and
service kadmind start. While there will
not be any kerberized daemons running at this point, it is
possible to confirm that the KDC is
functioning by obtaining a ticket for the
principal that was just created:&prompt.user; kinit tillman
tillman@EXAMPLE.ORG's Password:Confirm that a ticket was successfully obtained using
klist:&prompt.user; klist
Credentials cache: FILE:/tmp/krb5cc_1001
Principal: tillman@EXAMPLE.ORG
Issued Expires Principal
Aug 27 15:37:58 2013 Aug 28 01:37:58 2013 krbtgt/EXAMPLE.ORG@EXAMPLE.ORGThe temporary ticket can be destroyed when the test is
finished:&prompt.user; kdestroyConfiguring a Server to Use
KerberosKerberos5enabling servicesThe first step in configuring a server to use
Kerberos authentication is to
ensure that it has the correct configuration in
/etc/krb5.conf. The version from the
KDC can be used as-is, or it can be
regenerated on the new system.Next, create /etc/krb5.keytab on the
server. This is the main part of Kerberizing a
service — it corresponds to generating a secret shared
between the service and the KDC. The
secret is a cryptographic key, stored in a
keytab. The keytab contains the server's host
key, which allows it and the KDC to verify
each other's identity. It must be transmitted to the server
in a secure fashion, as the security of the server can be
broken if the key is made public: if the key is known by some
other party, that party can impersonate any user to the
server. Typically, the keytab is generated on an
administrator's trusted machine using kadmin, then securely
transferred to the server, e.g., with &man.scp.1;; it can also
be created directly on the server if that is consistent with
the desired security policy. Using
kadmin on the server directly is
convenient, because the entry for the host principal in the
KDC database is also created using
kadmin.Of course, kadmin is a kerberized
service; a Kerberos ticket is
needed to authenticate to the network service, but to ensure
that the user running kadmin is actually
present (and their session has not been hijacked),
kadmin will prompt for the password to get
a fresh ticket. The principal authenticating to the kadmin
service must be permitted to use the kadmin
interface, as specified in kadmind.acl.
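As a hedged sketch (the principal name is an example; the rights keywords follow Heimdal's syntax), a minimal kadmind.acl granting one administrative principal full access might contain:

```
tillman/admin@EXAMPLE.ORG all
```

More restrictive entries can limit a principal to specific rights, such as change-password, or to principals matching a pattern.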
See the section titled Remote administration in
info heimdal for details on designing
access control lists. Instead of enabling remote
kadmin access, the administrator could
securely connect to the KDC via the local
console or &man.ssh.1;, and perform administration locally
using kadmin -l.After installing /etc/krb5.conf,
use add --random-key in
kadmin. This adds the server's host
principal to the database, but does not extract a copy of the
host principal key to a keytab. To generate the keytab, use
ext_keytab to extract the server's host principal
key to its own keytab:&prompt.root; kadmin
kadmin> add --random-key host/myserver.example.org
Max ticket life [unlimited]:
Max renewable life [unlimited]:
Principal expiration time [never]:
Password expiration time [never]:
Attributes []:
kadmin> ext_keytab host/myserver.example.org
kadmin> exitNote that ext_keytab stores the
extracted key in /etc/krb5.keytab by
default. This is good when being run on the server being
kerberized, but the --keytab
path/to/file argument
should be used when the keytab is being extracted
elsewhere:&prompt.root; kadmin
kadmin> ext_keytab --keytab=/tmp/example.keytab host/myserver.example.org
kadmin> exitThe keytab can then be securely copied to the server
using &man.scp.1; or removable media. Be sure to specify a
non-default keytab name to avoid inserting unneeded keys into
the system's keytab.At this point, the server can read encrypted messages from
the KDC using its shared key, stored in
krb5.keytab. It is now ready for the
Kerberos-using services to be
enabled. One of the most common such services is
&man.sshd.8;, which supports
Kerberos via the
GSS-API. In
/etc/ssh/sshd_config, add the
line:GSSAPIAuthentication yesAfter making this change, &man.sshd.8; must be restarted
for the new configuration to take effect:
service sshd restart.Configuring a Client to Use
KerberosKerberos5configure clientsAs it was for the server, the client requires
configuration in /etc/krb5.conf. Copy
the file in place (securely) or re-enter it as needed.Test the client by using kinit,
klist, and kdestroy from
the client to obtain, show, and then delete a ticket for an
existing principal. Kerberos
applications should also be able to connect to
Kerberos enabled servers. If that
does not work but obtaining a ticket does, the problem is
likely with the server and not with the client or the
KDC. In the case of kerberized
&man.ssh.1;, GSS-API is disabled by
default, so test using ssh -o
GSSAPIAuthentication=yes
hostname.When testing a Kerberized application, try using a packet
sniffer such as tcpdump to confirm that no
sensitive information is sent in the clear.Various Kerberos client
applications are available. With the advent of a bridge so
that applications using SASL for
authentication can use GSS-API mechanisms
as well, large classes of client applications can use
Kerberos for authentication, from
Jabber clients to IMAP clients..k5login.k5usersUsers within a realm typically have their
Kerberos principal mapped to a
local user account. Occasionally, one needs to grant access
to a local user account to someone who does not have a
matching Kerberos principal. For
example, tillman@EXAMPLE.ORG may need
access to the local user account webdevelopers. Other
principals may also need access to that local account.The .k5login and
.k5users files, placed in a user's home
directory, can be used to solve this problem. For example, if
the following .k5login is placed in the
home directory of webdevelopers, both principals
listed will have access to that account without requiring a
shared password:tillman@EXAMPLE.ORG
jdoe@EXAMPLE.ORGRefer to &man.ksu.1; for more information about
.k5users.MIT DifferencesThe major difference between the MIT
and Heimdal implementations is that kadmin
has a different, but equivalent, set of commands and uses a
different protocol. If the KDC is
MIT, the Heimdal version of
kadmin cannot be used to administer the
KDC remotely, and vice versa.Client applications may also use slightly different
command line options to accomplish the same tasks. Following
the instructions at http://web.mit.edu/Kerberos/www/
is recommended. Be careful of path issues: the
MIT port installs into
/usr/local/ by default, and the &os;
system applications run instead of the
MIT versions if PATH lists
the system directories first.When using MIT Kerberos as a KDC on
&os;, the following edits should also be made to
rc.conf:kerberos5_server="/usr/local/sbin/krb5kdc"
kadmind5_server="/usr/local/sbin/kadmind"
kerberos5_server_flags=""
kerberos5_server_enable="YES"
kadmind5_server_enable="YES"Kerberos Tips, Tricks, and
TroubleshootingWhen configuring and troubleshooting
Kerberos, keep the following points
in mind:When using either Heimdal or MIT
Kerberos from ports, ensure
that the PATH lists the port's versions of
the client applications before the system versions.If all the computers in the realm do not have
synchronized time settings, authentication may fail.
describes how to synchronize
clocks using NTP.If the hostname is changed, the host/ principal must be
changed and the keytab updated. This also applies to
special keytab entries like the HTTP/ principal used for
Apache's www/mod_auth_kerb.All hosts in the realm must be both forward and
reverse resolvable in DNS or, at a
minimum, exist in /etc/hosts. CNAMEs
will work, but the A and PTR records must be correct and
in place. The error message for unresolvable hosts is not
intuitive: Kerberos5 refuses authentication
because Read req failed: Key table entry not
found.Some operating systems that act as clients to the
KDC do not set the permissions for
ksu to be setuid root. This means that
ksu does not work. This is a
permissions problem, not a KDC
error.With MIT
Kerberos, to allow a principal
to have a ticket life longer than the default lifetime of
ten hours, use modify_principal at the
&man.kadmin.8; prompt to change the
maxlife of both the principal in
question and the
krbtgt
principal. The principal can then use
kinit -l to request a ticket with a
longer lifetime.When running a packet sniffer on the
KDC to aid in troubleshooting while
running kinit from a workstation, the
Ticket Granting Ticket (TGT) is sent
immediately, even before the password is typed. This is
because the Kerberos server
freely transmits a TGT to any
unauthorized request. However, every
TGT is encrypted in a key derived from
the user's password. When a user types their password, it
is not sent to the KDC, it is instead
used to decrypt the TGT that
kinit already obtained. If the
decryption process results in a valid ticket with a valid
time stamp, the user has valid
Kerberos credentials. These
credentials include a session key for establishing secure
communications with the
Kerberos server in the future,
as well as the actual TGT, which is
encrypted with the Kerberos
server's own key. This second layer of encryption allows
the Kerberos server to verify
the authenticity of each TGT.Host principals can have a longer ticket lifetime. If
the user principal has a lifetime of a week but the host
being connected to has a lifetime of nine hours, the user
cache will have an expired host principal and the ticket
cache will not work as expected.When setting up krb5.dict to
prevent specific bad passwords from being used as
described in &man.kadmind.8;, remember that it only
applies to principals that have a password policy assigned
to them. The format used in
krb5.dict is one string per line.
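For illustration, a hand-made krb5.dict rejecting a few common weak passwords would contain entries like:

```
password
letmein
changeme
```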
Creating a symbolic link to
/usr/share/dict/words might be
useful.Mitigating Kerberos
LimitationsKerberos5limitations and shortcomingsSince Kerberos is an all or
nothing approach, every service enabled on the network must
either be modified to work with
Kerberos or be otherwise secured
against network attacks. This is to prevent user credentials
from being stolen and re-used. An example is when
Kerberos is enabled on all remote
shells but the non-Kerberized POP3 mail
server sends passwords in plain text.The KDC is a single point of failure.
By design, the KDC must be as secure as its
master password database. The KDC should
have absolutely no other services running on it and should be
physically secure. The danger is high because
Kerberos stores all passwords
encrypted with the same master key, which is stored as a file
on the KDC.A compromised master key is not quite as bad as one might
fear. The master key is only used to encrypt the
Kerberos database and as a seed for
the random number generator. As long as access to the
KDC is secure, an attacker cannot do much
with the master key.If the KDC is unavailable, network
services are unusable as authentication cannot be performed.
This can be alleviated with a single master
KDC and one or more slaves, and with
careful implementation of secondary or fall-back
authentication using PAM.Kerberos allows users, hosts
and services to authenticate between themselves. It does not
have a mechanism to authenticate the
KDC to the users, hosts, or services. This
means that a trojanned kinit could record
all user names and passwords. File system integrity checking
tools like security/tripwire can
alleviate this.Resources and Further InformationKerberos5external resources
The Kerberos
FAQDesigning
an Authentication System: a Dialog in Four
ScenesRFC
4120, The Kerberos Network
Authentication Service (V5)MIT
Kerberos home
pageHeimdal
Kerberos home
pageOpenSSLTomRhodesWritten
by securityOpenSSLOpenSSL is an open source
implementation of the SSL and
TLS protocols. It provides an encryption
transport layer on top of the normal communications layer,
allowing it to be intertwined with many network applications and
services.The version of OpenSSL included
in &os; supports the Secure Sockets Layer v2/v3 (SSLv2/SSLv3)
and Transport Layer Security v1 (TLSv1) network security
protocols and can be used as a general cryptographic
library.OpenSSL is often used to encrypt
authentication of mail clients and to secure web based
transactions such as credit card payments. Some ports, such as
www/apache24 and
databases/postgresql91-server, include a
compile option for building with
OpenSSL.&os; provides two versions of
OpenSSL: one in the base system and
one in the Ports Collection. Users can choose which version to
use by default for other ports using the following knobs:WITH_OPENSSL_PORT: when set, the port will use
OpenSSL from the
security/openssl port, even if the
version in the base system is up to date or newer.WITH_OPENSSL_BASE: when set, the port will compile
against OpenSSL provided by the
base system.Another common use of OpenSSL is
to provide certificates for use with software applications.
Certificates can be used to verify the credentials of a company
or individual. If a certificate has not been signed by an
external Certificate Authority
(CA), such as http://www.verisign.com,
the application that uses the certificate will produce a
warning. There is a cost associated with obtaining a signed
certificate and using a signed certificate is not mandatory as
certificates can be self-signed. However, using an external
authority will prevent warnings and can put users at
ease.This section demonstrates how to create and use certificates
on a &os; system. Refer to for an
example of how to create a CA for signing
one's own certificates.For more information about SSL, read the
free OpenSSL
Cookbook.Generating CertificatesOpenSSLcertificate generationTo generate a certificate that will be signed by an
external CA, issue the following command
and input the information requested at the prompts. This
input information will be written to the certificate. At the
Common Name prompt, input the fully
qualified name for the system that will use the certificate.
If this name does not match the server, the application
verifying the certificate will issue a warning to the user,
rendering the verification provided by the certificate as
useless.&prompt.root; openssl req -new -nodes -out req.pem -keyout cert.key -sha256 -newkey rsa:2048
Generating a 2048 bit RSA private key
..................+++
.............................................................+++
writing new private key to 'cert.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:PA
Locality Name (eg, city) []:Pittsburgh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:Systems Administrator
Common Name (eg, YOUR name) []:localhost.example.org
Email Address []:trhodes@FreeBSD.org
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:Another NameOther options, such as the expire time and alternate
encryption algorithms, are available when creating a
certificate. A complete list of options is described in
&man.openssl.1;.This command will create two files in the current
directory. The certificate request,
req.pem, can be sent to a
CA who will validate the entered
credentials, sign the request, and return the signed
certificate. The second file,
cert.key, is the private key for the
certificate and should be stored in a secure location. If
this falls in the hands of others, it can be used to
impersonate the user or the server.Alternately, if a signature from a CA
is not required, a self-signed certificate can be created.
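As a sketch, the two steps that follow can also be collapsed into a single command; the -subj value fills in the prompts non-interactively, and the /tmp file names are placeholders:

```shell
# Generate a 2048-bit RSA key and a self-signed certificate in one step.
# -nodes leaves the key unencrypted; -subj supplies the DN non-interactively.
openssl req -new -x509 -days 365 -nodes -sha256 \
    -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/C=US/ST=PA/L=Pittsburgh/O=My Company/CN=localhost.example.org"

# Show the subject that was written into the certificate.
openssl x509 -noout -subject -in /tmp/demo.crt
```

Either way, the result can be inspected with openssl x509 -text -noout before the certificate is deployed.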
First, generate the RSA key:&prompt.root; openssl genrsa -rand -genkey -out cert.key 2048
0 semi-random bytes loaded
Generating RSA private key, 2048 bit long modulus
.............................................+++
.................................................................................................................+++
e is 65537 (0x10001)Use this key to create a self-signed certificate.
Follow the usual prompts for creating a certificate:&prompt.root; openssl req -new -x509 -days 365 -key cert.key -out cert.crt -sha256
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:PA
Locality Name (eg, city) []:Pittsburgh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:Systems Administrator
Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org
Email Address []:trhodes@FreeBSD.orgThis will create two new files in the current directory: a
private key file
cert.key, and the certificate itself,
cert.crt. These should be placed in a
directory, preferably under /etc/ssl/,
which is readable only by root. Permissions of
0700 are appropriate for these files and
can be set using chmod.Using CertificatesOne use for a certificate is to encrypt connections to the
Sendmail mail server in order to
prevent the use of clear text authentication.Some mail clients will display an error if the user has
not installed a local copy of the certificate. Refer to the
documentation included with the software for more
information on certificate installation.In &os; 10.0-RELEASE and above, it is possible to create a
self-signed certificate for
Sendmail automatically. To enable
this, add the following lines to
/etc/rc.conf:sendmail_enable="YES"
sendmail_cert_create="YES"
sendmail_cert_cn="localhost.example.org"This will automatically create a self-signed certificate,
/etc/mail/certs/host.cert, a signing key,
/etc/mail/certs/host.key, and a
CA certificate,
/etc/mail/certs/cacert.pem. The
certificate will use the Common Name
specified in sendmail_cert_cn. After saving
the edits, restart Sendmail:&prompt.root; service sendmail restartIf all went well, there will be no error messages in
/var/log/maillog. For a simple test,
connect to the mail server's listening port using
telnet:&prompt.root; telnet example.com 25
Trying 192.0.34.166...
Connected to example.com.
Escape character is '^]'.
220 example.com ESMTP Sendmail 8.14.7/8.14.7; Fri, 18 Apr 2014 11:50:32 -0400 (EDT)
ehlo example.com
250-example.com Hello example.com [192.0.34.166], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH LOGIN PLAIN
250-STARTTLS
250-DELIVERBY
250 HELP
quit
221 2.0.0 example.com closing connection
Connection closed by foreign host.If the STARTTLS line appears in the
output, everything is working correctly.VPN over
IPsecNikClaytonnik@FreeBSD.orgWritten by Hiten M.Pandyahmp@FreeBSD.orgWritten by IPsecInternet Protocol Security (IPsec) is a
set of protocols which sit on top of the Internet Protocol
(IP) layer. It allows two or more hosts to
communicate in a secure manner by authenticating and encrypting
each IP packet of a communication session.
The &os; IPsec network stack is based on the
http://www.kame.net/
implementation and supports both IPv4 and
IPv6 sessions.IPsecESPIPsecAHIPsec is comprised of the following
sub-protocols:Encapsulated Security Payload
(ESP): this protocol
protects the IP packet data from third
party interference by encrypting the contents using
symmetric cryptography algorithms such as Blowfish and
3DES.Authentication Header
(AH): this protocol
protects the IP packet header from third
party interference and spoofing by computing a cryptographic
checksum and hashing the IP packet
header fields with a secure hashing function. This is then
followed by an additional header that contains the hash, to
allow the information in the packet to be
authenticated.IP Payload Compression Protocol
(IPComp): this protocol
tries to increase communication performance by compressing
the IP payload in order to reduce the
amount of data sent.These protocols can either be used together or separately,
depending on the environment.VPNvirtual private networkVPNIPsec supports two modes of operation.
The first mode, Transport Mode, protects
communications between two hosts. The second mode,
Tunnel Mode, is used to build virtual
tunnels, commonly known as Virtual Private Networks
(VPNs). Consult &man.ipsec.4; for detailed
information on the IPsec subsystem in
&os;.To add IPsec support to the kernel, add
the following options to the custom kernel configuration file
and rebuild the kernel using the instructions in :kernel optionsIPSECoptions IPSEC #IP security
device cryptokernel optionsIPSEC_DEBUGIf IPsec debugging support is desired,
the following kernel option should also be added:options IPSEC_DEBUG #debug for IP securityThis rest of this chapter demonstrates the process of
setting up an IPsec VPN
between a home network and a corporate network. In the example
scenario:Both sites are connected to the Internet through a
gateway that is running &os;.The gateway on each network has at least one external
IP address. In this example, the
corporate LAN's external
IP address is 172.16.5.4 and the home
LAN's external IP
address is 192.168.1.12.The internal addresses of the two networks can be either
public or private IP addresses. However,
the address space must not collide. For example, both
networks cannot use 192.168.1.x. In this
example, the corporate LAN's internal
IP address is 10.246.38.1 and the home
LAN's internal IP
address is 10.0.0.5.Configuring a VPN on &os;TomRhodestrhodes@FreeBSD.orgWritten by To begin, security/ipsec-tools must be
installed from the Ports Collection. This software provides a
number of applications which support the configuration.The next requirement is to create two &man.gif.4;
pseudo-devices which will be used to tunnel packets and allow
both networks to communicate properly. As root, run the following
commands, replacing internal and
external with the real IP
addresses of the internal and external interfaces of the two
gateways:&prompt.root; ifconfig gif0 create
&prompt.root; ifconfig gif0 internal1 internal2
&prompt.root; ifconfig gif0 tunnel external1 external2Verify the setup on each gateway, using
ifconfig. Here is the output from Gateway
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
tunnel inet 172.16.5.4 --> 192.168.1.12
inet6 fe80::2e0:81ff:fe02:5881%gif0 prefixlen 64 scopeid 0x6
inet 10.246.38.1 --> 10.0.0.5 netmask 0xffffff00Here is the output from Gateway 2:gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
tunnel inet 192.168.1.12 --> 172.16.5.4
inet 10.0.0.5 --> 10.246.38.1 netmask 0xffffff00
inet6 fe80::250:bfff:fe3a:c1f%gif0 prefixlen 64 scopeid 0x4Once complete, both internal IP
addresses should be reachable using &man.ping.8;:priv-net# ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: icmp_seq=0 ttl=64 time=42.786 ms
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=19.255 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=20.440 ms
64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=21.036 ms
--- 10.0.0.5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 19.255/25.879/42.786/9.782 ms
corp-net# ping 10.246.38.1
PING 10.246.38.1 (10.246.38.1): 56 data bytes
64 bytes from 10.246.38.1: icmp_seq=0 ttl=64 time=28.106 ms
64 bytes from 10.246.38.1: icmp_seq=1 ttl=64 time=42.917 ms
64 bytes from 10.246.38.1: icmp_seq=2 ttl=64 time=127.525 ms
64 bytes from 10.246.38.1: icmp_seq=3 ttl=64 time=119.896 ms
64 bytes from 10.246.38.1: icmp_seq=4 ttl=64 time=154.524 ms
--- 10.246.38.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 28.106/94.594/154.524/49.814 msAs expected, both sides have the ability to send and
receive ICMP packets from the privately
configured addresses. Next, both gateways must be told how to
route packets in order to correctly send traffic from either
network. The following commands will achieve this
goal:corp-net# route add 10.0.0.0 10.0.0.5 255.255.255.0
add net 10.0.0.0: gateway 10.0.0.5
priv-net# route add 10.246.38.0 10.246.38.1 255.255.255.0
add host 10.246.38.0: gateway 10.246.38.1At this point, internal machines should be reachable from
each gateway as well as from machines behind the gateways.
Again, use &man.ping.8; to confirm:corp-net# ping 10.0.0.8
PING 10.0.0.8 (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: icmp_seq=0 ttl=63 time=92.391 ms
64 bytes from 10.0.0.8: icmp_seq=1 ttl=63 time=21.870 ms
64 bytes from 10.0.0.8: icmp_seq=2 ttl=63 time=198.022 ms
64 bytes from 10.0.0.8: icmp_seq=3 ttl=63 time=22.241 ms
64 bytes from 10.0.0.8: icmp_seq=4 ttl=63 time=174.705 ms
--- 10.0.0.8 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 21.870/101.846/198.022/74.001 ms
priv-net# ping 10.246.38.107
PING 10.246.38.107 (10.246.38.107): 56 data bytes
64 bytes from 10.246.38.107: icmp_seq=0 ttl=64 time=53.491 ms
64 bytes from 10.246.38.107: icmp_seq=1 ttl=64 time=23.395 ms
64 bytes from 10.246.38.107: icmp_seq=2 ttl=64 time=23.865 ms
64 bytes from 10.246.38.107: icmp_seq=3 ttl=64 time=21.145 ms
64 bytes from 10.246.38.107: icmp_seq=4 ttl=64 time=36.708 ms
--- 10.246.38.107 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 21.145/31.721/53.491/12.179 msSetting up the tunnels is the easy part. Configuring a
secure link is a more in-depth process. The following
configuration uses pre-shared keys (PSK). Other than the
IP addresses, the
IP addresses, the
/usr/local/etc/racoon/racoon.conf on both
gateways will be identical and look similar to:path pre_shared_key "/usr/local/etc/racoon/psk.txt"; #location of pre-shared key file
log debug; #log verbosity setting: set to 'notify' when testing and debugging is complete
padding # options are not to be changed
{
maximum_length 20;
randomize off;
strict_check off;
exclusive_tail off;
}
timer # timing options. change as needed
{
counter 5;
interval 20 sec;
persend 1;
# natt_keepalive 15 sec;
phase1 30 sec;
phase2 15 sec;
}
listen # address [port] that racoon will listen on
{
isakmp 172.16.5.4 [500];
isakmp_natt 172.16.5.4 [4500];
}
remote 192.168.1.12 [500]
{
exchange_mode main,aggressive;
doi ipsec_doi;
situation identity_only;
my_identifier address 172.16.5.4;
peers_identifier address 192.168.1.12;
lifetime time 8 hour;
passive off;
proposal_check obey;
# nat_traversal off;
generate_policy off;
proposal {
encryption_algorithm blowfish;
hash_algorithm md5;
authentication_method pre_shared_key;
lifetime time 30 sec;
dh_group 1;
}
}
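# The pre-shared secret referenced by path above lives in psk.txt,
# one entry per line: the peer's identifier and the secret separated
# by whitespace, e.g. (placeholder value):
#   192.168.1.12    secret
# racoon expects psk.txt to be readable by root only (chmod 0600).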
sainfo (address 10.246.38.0/24 any address 10.0.0.0/24 any) # address $network/$netmask $type address $network/$netmask $type ( $type being any or esp)
{ # $network must be the two internal networks you are joining.
pfs_group 1;
lifetime time 36000 sec;
encryption_algorithm blowfish,3des;
authentication_algorithm hmac_md5,hmac_sha1;
compression_algorithm deflate;
}For descriptions of each available option, refer to the
manual page for racoon.conf.The Security Policy Database (SPD)
needs to be configured so that &os; and
racoon are able to encrypt and
decrypt network traffic between the hosts.This can be achieved with a shell script, similar to the
following, on the corporate gateway. This file will be used
during system initialization and should be saved as
/usr/local/etc/racoon/setkey.conf.flush;
spdflush;
# To the home network
spdadd 10.246.38.0/24 10.0.0.0/24 any -P out ipsec esp/tunnel/172.16.5.4-192.168.1.12/use;
spdadd 10.0.0.0/24 10.246.38.0/24 any -P in ipsec esp/tunnel/192.168.1.12-172.16.5.4/use;Once in place, racoon may be
started on both gateways using the following command:&prompt.root; /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf -l /var/log/racoon.logThe output should be similar to the following:corp-net# /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf
Foreground mode.
2006-01-30 01:35:47: INFO: begin Identity Protection mode.
2006-01-30 01:35:48: INFO: received Vendor ID: KAME/racoon
2006-01-30 01:35:55: INFO: received Vendor ID: KAME/racoon
2006-01-30 01:36:04: INFO: ISAKMP-SA established 172.16.5.4[500]-192.168.1.12[500] spi:623b9b3bd2492452:7deab82d54ff704a
2006-01-30 01:36:05: INFO: initiate new phase 2 negotiation: 172.16.5.4[0]<=>192.168.1.12[0]
2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=28496098(0x1b2d0e2)
2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=47784998(0x2d92426)
2006-01-30 01:36:13: INFO: respond new phase 2 negotiation: 172.16.5.4[0]<=>192.168.1.12[0]
2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=124397467(0x76a279b)
2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=175852902(0xa7b4d66)To ensure the tunnel is working properly, switch to
another console and use &man.tcpdump.1; to view network
traffic using the following command. Replace
em0 with the network interface card as
required:&prompt.root; tcpdump -i em0 host 172.16.5.4 and dst 192.168.1.12Data similar to the following should appear on the
console. If not, there is an issue and debugging the
returned data will be required.01:47:32.021683 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xa)
01:47:33.022442 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xb)
01:47:34.024218 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xc)At this point, both networks should be available and seem
to be part of the same network. Most likely both networks are
protected by a firewall. To allow traffic to flow between
them, rules need to be added to pass packets. For the
&man.ipfw.8; firewall, add the following lines to the firewall
configuration file:ipfw add 00201 allow log esp from any to any
ipfw add 00202 allow log ah from any to any
ipfw add 00203 allow log ipencap from any to any
ipfw add 00204 allow log udp from any 500 to anyThe rule numbers may need to be altered depending on the
current host configuration.For users of &man.pf.4; or &man.ipf.8;, the following
rules should do the trick:pass in quick proto esp from any to any
pass in quick proto ah from any to any
pass in quick proto ipencap from any to any
pass in quick proto udp from any port = 500 to any port = 500
pass in quick on gif0 from any to any
pass out quick proto esp from any to any
pass out quick proto ah from any to any
pass out quick proto ipencap from any to any
pass out quick proto udp from any port = 500 to any port = 500
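# If NAT traversal is in use (the isakmp_natt setting shown earlier),
# UDP port 4500 needs equivalent pass in/out rules.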
pass out quick on gif0 from any to anyFinally, to allow the machine to start support for the
VPN during system initialization, add the
following lines to /etc/rc.conf:ipsec_enable="YES"
ipsec_program="/usr/local/sbin/setkey"
ipsec_file="/usr/local/etc/racoon/setkey.conf" # allows setting up spd policies on boot
racoon_enable="YES"OpenSSHChernLeeContributed
by OpenSSHsecurityOpenSSHOpenSSH is a set of network
connectivity tools used to provide secure access to remote
machines. Additionally, TCP/IP connections
can be tunneled or forwarded securely through
SSH connections.
OpenSSH encrypts all traffic to
effectively eliminate eavesdropping, connection hijacking, and
other network-level attacks.OpenSSH is maintained by the
OpenBSD project and is installed by default in &os;. It is
compatible with both SSH version 1 and 2
protocols.When data is sent over the network in an unencrypted form,
network sniffers anywhere in between the client and server can
steal user/password information or data transferred during the
session. OpenSSH offers a variety of
authentication and encryption methods to prevent this from
happening. More information about
OpenSSH is available from http://www.openssh.com/.This section provides an overview of the built-in client
utilities to securely access other systems and securely transfer
files from a &os; system. It then describes how to configure a
SSH server on a &os; system. More
information is available in the man pages mentioned in this
chapter.Using the SSH Client UtilitiesOpenSSHclientTo log into a SSH server, use
ssh and specify a username that exists on
that server and the IP address or hostname
of the server. If this is the first time a connection has
been made to the specified server, the user will be prompted
to first verify the server's fingerprint:&prompt.root; ssh user@example.com
The authenticity of host 'example.com (10.0.0.1)' can't be established.
ECDSA key fingerprint is 25:cc:73:b5:b3:96:75:3d:56:19:49:d2:5c:1f:91:3b.
Are you sure you want to continue connecting (yes/no)? yes
Permanently added 'example.com' (ECDSA) to the list of known hosts.
Password for user@example.com: user_passwordSSH utilizes a key fingerprint system
to verify the authenticity of the server when the client
connects. When the user accepts the key's fingerprint by
typing yes when connecting for the first
time, a copy of the key is saved to
.ssh/known_hosts in the user's home
directory. Future attempts to log in are verified against the
saved key and ssh will display an alert if
the server's key does not match the saved key. If this
occurs, the user should first verify why the key has changed
before continuing with the connection.By default, recent versions of
OpenSSH only accept
SSHv2 connections. By default, the client
will use version 2 if possible and will fall back to version 1
if the server does not support version 2. To force
ssh to only use the specified protocol,
include or .
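Options like these can also be set persistently, per host, in the client configuration file ~/.ssh/config. A minimal sketch, where the host alias, hostname, and username are hypothetical:

```
# ~/.ssh/config -- hypothetical host entry
Host mailhost
    HostName example.com
    User user
    Protocol 2
```

With an entry like this in place, ssh mailhost connects to example.com as user using protocol version 2. Refer to &man.ssh.config.5; for the full list of client keywords.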
Additional options are described in &man.ssh.1;.OpenSSHsecure copy&man.scp.1;Use &man.scp.1; to securely copy a file to or from a
remote machine. This example copies
COPYRIGHT on the remote system to a file
of the same name in the current directory of the local
system:&prompt.root; scp user@example.com:/COPYRIGHT COPYRIGHT
Password for user@example.com: *******
COPYRIGHT 100% |*****************************| 4735
00:00
&prompt.root;Since the fingerprint was already verified for this host,
the server's key is automatically checked before prompting for
the user's password.The arguments passed to scp are similar
to cp. The file or files to copy are the
first argument and the destination to copy to is the second.
Since the file is fetched over the network, one or more of the
file arguments takes the form
. Be
aware when copying directories recursively that
scp uses , whereas
cp uses .To open an interactive session for copying files, use
sftp. Refer to &man.sftp.1; for a list of
available commands while in an sftp
session.Key-based AuthenticationInstead of using passwords, a client can be configured
to connect to the remote machine using keys. To generate
DSA or RSA
authentication keys, use ssh-keygen. To
generate a public and private key pair, specify the type of
key and follow the prompts. It is recommended to protect
the keys with a memorable, but hard to guess
passphrase.&prompt.user; ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_dsa):
Created directory '/home/user/.ssh'.
Enter passphrase (empty for no passphrase): type some passphrase here which can contain spaces
Enter same passphrase again: type some passphrase here which can contain spaces
Your identification has been saved in /home/user/.ssh/id_dsa.
Your public key has been saved in /home/user/.ssh/id_dsa.pub.
The key fingerprint is:
bb:48:db:f2:93:57:80:b6:aa:bc:f5:d5:ba:8f:79:17 user@host.example.comDepending upon the specified protocol, the private key
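Each public key produced this way is a single line of text; on the server, it occupies one line of ~/.ssh/authorized_keys. A hypothetical, abbreviated entry (the key material shown here is a placeholder):

```
# ~/.ssh/authorized_keys -- hypothetical entry; key material abbreviated
ssh-dss AAAAB3NzaC1kc3MAAACB...= user@host.example.com
```
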
is stored in ~/.ssh/id_dsa (or
~/.ssh/id_rsa), and the public key
is stored in ~/.ssh/id_dsa.pub (or
~/.ssh/id_rsa.pub). The
public key must be first copied to
~/.ssh/authorized_keys on the remote
machine in order for key-based authentication to
work.Many users believe that keys are secure by design and
will use a key without a passphrase. This is
dangerous behavior. An
administrator can verify that a key pair is protected by a
passphrase by viewing the private key manually. If the
private key file contains the word
ENCRYPTED, the key owner is using a
passphrase. In addition, to better secure end users,
from may be placed in the public key
file. For example, adding
from="192.168.10.5" in front of the
ssh-rsa or ssh-dss
prefix will only allow that specific user to log in from
that IP address.The various options and files may differ
depending on the OpenSSH version.
To avoid problems, consult &man.ssh-keygen.1;.If a passphrase is used, the user will be prompted for
the passphrase each time a connection is made to the server.
To load SSH keys into memory without
needing to type the passphrase each time, use
&man.ssh-agent.1; and &man.ssh-add.1;.Authentication is handled by
ssh-agent, using the private key(s) that
are loaded into it. Then, ssh-agent
should be used to launch another application such as a
shell or a window manager.To use ssh-agent in a shell, start it
with a shell as an argument. Next, add the identity by
running ssh-add and providing it the
passphrase for the private key. Once these steps have been
completed, the user will be able to ssh
to any host that has the corresponding public key installed.
For example:&prompt.user; ssh-agent csh
&prompt.user; ssh-add
Enter passphrase for key '/usr/home/user/.ssh/id_dsa': type passphrase here
Identity added: /usr/home/user/.ssh/id_dsa (/usr/home/user/.ssh/id_dsa)
&prompt.user;To use ssh-agent in
&xorg;, add an entry for it in
~/.xinitrc. This provides the
ssh-agent services to all programs
launched in &xorg;. An example
~/.xinitrc might look like this:exec ssh-agent startxfce4This launches ssh-agent, which in
turn launches XFCE, every time
&xorg; starts. Once
&xorg; has been restarted so that
the changes can take effect, run ssh-add
to load all of the SSH keys.SSH TunnelingOpenSSHtunnelingOpenSSH has the ability to
create a tunnel to encapsulate another protocol in an
encrypted session.The following command tells ssh to
create a tunnel for
telnet:&prompt.user; ssh -2 -N -f -L 5023:localhost:23 user@foo.example.com
&prompt.user;This example uses the following options:Forces ssh to use version 2 to
connect to the server.Indicates no command, or tunnel only. If omitted,
ssh initiates a normal
session.Forces ssh to run in the
background.Indicates a local tunnel in
localport:remotehost:remoteport
format.The login name to use on the specified remote
SSH server.An SSH tunnel works by creating a
listen socket on localhost on the
specified localport. It then forwards
any connections received on localport via
the SSH connection to the specified
remotehost:remoteport. In the example,
port 5023 on the client is forwarded to
port 23 on the remote machine. Since
port 23 is used by telnet, this
creates an encrypted telnet
session through an SSH tunnel.This method can be used to wrap any number of insecure
TCP protocols such as
SMTP, POP3, and
FTP, as seen in the following
examples.Create a Secure Tunnel for
SMTP&prompt.user; ssh -2 -N -f -L 5025:localhost:25 user@mailserver.example.com
user@mailserver.example.com's password: *****
&prompt.user; telnet localhost 5025
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 mailserver.example.com ESMTPThis can be used in conjunction with
ssh-keygen and additional user accounts
to create a more seamless SSH tunneling
environment. Keys can be used in place of typing a
password, and the tunnels can be run as a separate
user.Secure Access of a POP3
ServerIn this example, there is an SSH
server that accepts connections from the outside. On the
same network resides a mail server running a
POP3 server. To check email in a
secure manner, create an SSH connection
to the SSH server and tunnel through to
the mail server:&prompt.user; ssh -2 -N -f -L 2110:mail.example.com:110 user@ssh-server.example.com
user@ssh-server.example.com's password: ******Once the tunnel is up and running, point the email
client to send POP3 requests to
localhost on port 2110. This
connection will be forwarded securely across the tunnel to
mail.example.com.Bypassing a FirewallSome firewalls
filter both incoming and outgoing connections. For
example, a firewall might limit access from remote
machines to ports 22 and 80 to only allow
SSH and web surfing. This prevents
access to any other service which uses a port other than
22 or 80.The solution is to create an SSH
connection to a machine outside of the network's firewall
and use it to tunnel to the desired service:&prompt.user; ssh -2 -N -f -L 8888:music.example.com:8000 user@unfirewalled-system.example.org
user@unfirewalled-system.example.org's password: *******In this example, a streaming Ogg Vorbis client can now
be pointed to localhost port
8888, which will be forwarded over to
music.example.com on port 8000,
successfully bypassing the firewall.Enabling the SSH ServerOpenSSHenablingIn addition to providing built-in SSH
client utilities, a &os; system can be configured as an
SSH server, accepting connections from
other SSH clients.To see if sshd is operating,
use the &man.service.8; command:
&prompt.root; service sshd status
If the service is not running, add the following line to
/etc/rc.conf.sshd_enable="YES"This will start sshd, the
daemon program for OpenSSH, the
next time the system boots. To start it now:&prompt.root; service sshd startThe first time sshd starts on a
&os; system, the system's host keys will be automatically
created and the fingerprint will be displayed on the console.
Provide users with the fingerprint so that they can verify it
the first time they connect to the server.Refer to &man.sshd.8; for the list of available options
when starting sshd and a more
complete discussion about authentication, the login process,
and the various configuration files.At this point, the sshd should
be available to all users with a username and password on
the system.SSH Server SecurityWhile sshd is the most widely
used remote administration facility for &os;, brute-force
and drive-by attacks are common to any system exposed to
public networks. Several additional parameters are available
to prevent the success of these attacks and will be described
in this section.It is a good idea to limit which users can log into the
SSH server and from where using the
AllowUsers keyword in the
OpenSSH server configuration file.
For example, to only allow root to log in from
192.168.1.32, add
this line to /etc/ssh/sshd_config:AllowUsers root@192.168.1.32To allow admin
to log in from anywhere, list that user without specifying an
IP address:AllowUsers adminMultiple users should be listed on the same line, like
so:AllowUsers root@192.168.1.32 adminAfter making changes to
/etc/ssh/sshd_config,
tell sshd to reload its
configuration file by running:&prompt.root; service sshd reloadWhen this keyword is used, it is important to list each
user that needs to log into this machine. Any user that is
not specified in that line will be locked out. Also, the
keywords used in the OpenSSH
server configuration file are case-sensitive. If the
keyword is not spelled correctly, including its case, it
will be ignored. Always test changes to this file to make
sure that the edits are working as expected. Refer to
&man.sshd.config.5; to verify the spelling and use of the
available keywords.In addition, users may be forced to use two-factor
authentication via the use of a public and private key. When
required, the user may generate a key pair through the use
of &man.ssh-keygen.1; and send the administrator the public
key. This key file will be placed in
authorized_keys as described above in
the client section. To force the users to use keys only,
the following option may be configured:AuthenticationMethods publickeyDo not confuse /etc/ssh/sshd_config
with /etc/ssh/ssh_config (note the
extra d in the first filename). The
first file configures the server and the second file
configures the client. Refer to &man.ssh.config.5; for a
listing of the available client settings.Access Control ListsTomRhodesContributed
by ACLAccess Control Lists (ACLs) extend the
standard &unix; permission model in a &posix;.1e compatible way.
This permits an administrator to take advantage of a more
fine-grained permissions model.The &os; GENERIC kernel provides
ACL support for UFS file
systems. Users who prefer to compile a custom kernel must
include the following option in their custom kernel
configuration file:options UFS_ACLIf this option is not compiled in, a warning message will be
displayed when attempting to mount a file system with
ACL support. ACLs rely on
extended attributes which are natively supported in
UFS2.This chapter describes how to enable
ACL support and provides some usage
examples.Enabling ACL SupportACLs are enabled by the mount-time
administrative flag, , which may be added
to /etc/fstab. The mount-time flag can
also be automatically set in a persistent manner using
&man.tunefs.8; to modify a superblock ACLs
flag in the file system header. In general, it is preferred
to use the superblock flag for several reasons:The superblock flag cannot be changed by a remount
using as it requires a complete
umount and fresh
mount. This means that
ACLs cannot be enabled on the root file
system after boot. It also means that
ACL support on a file system cannot be
changed while the system is in use.Setting the superblock flag causes the file system to
always be mounted with ACLs enabled,
even if there is not an fstab entry
or if the devices re-order. This prevents accidental
mounting of the file system without ACL
support.It is desirable to discourage accidental mounting
without ACLs enabled because nasty things
can happen if ACLs are enabled, then
disabled, then re-enabled without flushing the extended
attributes. In general, once ACLs are
enabled on a file system, they should not be disabled, as
the resulting file protections may not be compatible with
those intended by the users of the system, and re-enabling
ACLs may re-attach the previous
ACLs to files that have since had their
permissions changed, resulting in unpredictable
behavior.File systems with ACLs enabled will
show a plus (+) sign in their permission
settings:drwx------ 2 robert robert 512 Dec 27 11:54 private
drwxrwx---+ 2 robert robert 512 Dec 23 10:57 directory1
drwxrwx---+ 2 robert robert 512 Dec 22 10:20 directory2
drwxrwx---+ 2 robert robert 512 Dec 27 11:57 directory3
drwxr-xr-x 2 robert robert 512 Nov 10 11:54 public_htmlIn this example, directory1,
directory2, and
directory3 are all taking advantage of
ACLs, whereas
public_html is not.Using ACLsFile system ACLs can be viewed using
getfacl. For instance, to view the
ACL settings on
test:&prompt.user; getfacl test
#file:test
#owner:1001
#group:1001
user::rw-
group::r--
other::r--To change the ACL settings on this
file, use setfacl. To remove all of the
currently defined ACLs from a file or file
system, include . However, the preferred
method is to use as it leaves the basic
fields required for ACLs to work.&prompt.user; setfacl -k testTo modify the default ACL entries, use
:&prompt.user; setfacl -m u:trhodes:rwx,group:web:r--,o::--- testIn this example, there were no pre-defined entries, as
they were removed by the previous command. This command
restores the default options and assigns the options listed.
If a user or group is added which does not exist on the
system, an Invalid argument error will
be displayed.Refer to &man.getfacl.1; and &man.setfacl.1; for more
information about the options available for these
commands.Monitoring Third Party Security IssuesTomRhodesContributed
by pkgIn recent years, the security world has made many
improvements to how vulnerability assessment is handled. The
threat of system intrusion increases as third party utilities
are installed and configured for virtually any operating
system available today.Vulnerability assessment is a key factor in security.
While &os; releases advisories for the base system, doing so
for every third party utility is beyond the &os; Project's
capability. There is a way to mitigate third party
vulnerabilities and warn administrators of known security
issues. A &os; add-on utility known as
pkg includes options explicitly for
this purpose.pkg polls a database for security
issues. The database is updated and maintained by the &os;
Security Team and ports developers.Please refer to instructions
for installing
pkg.Installation provides &man.periodic.8; configuration files
for maintaining the pkg audit
database, and provides a programmatic method of keeping it
updated. This functionality is enabled if
daily_status_security_pkgaudit_enable
is set to YES in &man.periodic.conf.5;.
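For example, a single line in /etc/periodic.conf enables the nightly check; a minimal sketch:

```
# /etc/periodic.conf -- enable the nightly pkg audit database check
daily_status_security_pkgaudit_enable="YES"
```
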
Ensure that daily security run emails, which are sent to
root's email account,
are being read.After installation, an administrator can audit third party
utilities installed from the Ports Collection at any time by
updating the database and viewing known vulnerabilities
of installed packages:&prompt.root; pkg audit -Fpkg displays messages about
any published vulnerabilities in installed packages:Affected package: cups-base-1.1.22.0_1
Type of problem: cups-base -- HPGL buffer overflow vulnerability.
Reference: <http://www.FreeBSD.org/ports/portaudit/40a3bca2-6809-11d9-a9e7-0001020eed82.html>
1 problem(s) in your installed packages found.
You are advised to update or deinstall the affected package(s) immediately.By pointing a web browser to the displayed
URL, an administrator may obtain more
information about the vulnerability. This will include the
versions affected, by &os; port version, along with other web
sites which may contain security advisories.pkg is a powerful utility
and is extremely useful when coupled with
ports-mgmt/portmaster.&os; Security AdvisoriesTomRhodesContributed
by &os; Security AdvisoriesLike many producers of quality operating systems, the &os;
Project has a security team which is responsible for
determining the End-of-Life (EoL) date for
each &os; release and to provide security updates for supported
releases which have not yet reached their
EoL. More information about the &os;
security team and the supported releases is available on the
&os; security
page.One task of the security team is to respond to reported
security vulnerabilities in the &os; operating system. Once a
vulnerability is confirmed, the security team verifies the steps
necessary to fix the vulnerability and updates the source code
with the fix. It then publishes the details as a
Security Advisory. Security
advisories are published on the &os;
website and mailed to the
&a.security-notifications.name;, &a.security.name;, and
&a.announce.name; mailing lists.This section describes the format of a &os; security
advisory.Format of a Security AdvisoryHere is an example of a &os; security advisory:=============================================================================
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
=============================================================================
FreeBSD-SA-14:04.bind Security Advisory
The FreeBSD Project
Topic: BIND remote denial of service vulnerability
Category: contrib
Module: bind
Announced: 2014-01-14
Credits: ISC
Affects: FreeBSD 8.x and FreeBSD 9.x
Corrected: 2014-01-14 19:38:37 UTC (stable/9, 9.2-STABLE)
2014-01-14 19:42:28 UTC (releng/9.2, 9.2-RELEASE-p3)
2014-01-14 19:42:28 UTC (releng/9.1, 9.1-RELEASE-p10)
2014-01-14 19:38:37 UTC (stable/8, 8.4-STABLE)
2014-01-14 19:42:28 UTC (releng/8.4, 8.4-RELEASE-p7)
2014-01-14 19:42:28 UTC (releng/8.3, 8.3-RELEASE-p14)
CVE Name: CVE-2014-0591
For general information regarding FreeBSD Security Advisories,
including descriptions of the fields above, security branches, and the
following sections, please visit <URL:http://security.FreeBSD.org/>.
I. Background
BIND 9 is an implementation of the Domain Name System (DNS) protocols.
The named(8) daemon is an Internet Domain Name Server.
II. Problem Description
Because of a defect in handling queries for NSEC3-signed zones, BIND can
crash with an "INSIST" failure in name.c when processing queries possessing
certain properties. This issue only affects authoritative nameservers with
at least one NSEC3-signed zone. Recursive-only servers are not at risk.
III. Impact
An attacker who can send a specially crafted query could cause named(8)
to crash, resulting in a denial of service.
IV. Workaround
No workaround is available, but systems not running authoritative DNS service
with at least one NSEC3-signed zone using named(8) are not vulnerable.
V. Solution
Perform one of the following:
1) Upgrade your vulnerable system to a supported FreeBSD stable or
release / security branch (releng) dated after the correction date.
2) To update your vulnerable system via a source code patch:
The following patches have been verified to apply to the applicable
FreeBSD release branches.
a) Download the relevant patch from the location below, and verify the
detached PGP signature using your PGP utility.
[FreeBSD 8.3, 8.4, 9.1, 9.2-RELEASE and 8.4-STABLE]
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch.asc
# gpg --verify bind-release.patch.asc
[FreeBSD 9.2-STABLE]
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch.asc
# gpg --verify bind-stable-9.patch.asc
b) Execute the following commands as root:
# cd /usr/src
# patch < /path/to/patch
Recompile the operating system using buildworld and installworld as
described in <URL:http://www.FreeBSD.org/handbook/makeworld.html>.
Restart the applicable daemons, or reboot the system.
3) To update your vulnerable system via a binary patch:
Systems running a RELEASE version of FreeBSD on the i386 or amd64
platforms can be updated via the freebsd-update(8) utility:
# freebsd-update fetch
# freebsd-update install
VI. Correction details
The following list contains the correction revision numbers for each
affected branch.
Branch/path Revision
- -------------------------------------------------------------------------
stable/8/ r260646
releng/8.3/ r260647
releng/8.4/ r260647
stable/9/ r260646
releng/9.1/ r260647
releng/9.2/ r260647
- -------------------------------------------------------------------------
To see which files were modified by a particular revision, run the
following command, replacing NNNNNN with the revision number, on a
machine with Subversion installed:
# svn diff -cNNNNNN --summarize svn://svn.freebsd.org/base
Or visit the following URL, replacing NNNNNN with the revision number:
<URL:http://svnweb.freebsd.org/base?view=revision&revision=NNNNNN>
VII. References
<URL:https://kb.isc.org/article/AA-01078>
<URL:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0591>
The latest revision of this advisory is available at
<URL:http://security.FreeBSD.org/advisories/FreeBSD-SA-14:04.bind.asc>
-----BEGIN PGP SIGNATURE-----
iQIcBAEBCgAGBQJS1ZTYAAoJEO1n7NZdz2rnOvQP/2/68/s9Cu35PmqNtSZVVxVG
ZSQP5EGWx/lramNf9566iKxOrLRMq/h3XWcC4goVd+gZFrvITJSVOWSa7ntDQ7TO
XcinfRZ/iyiJbs/Rg2wLHc/t5oVSyeouyccqODYFbOwOlk35JjOTMUG1YcX+Zasg
ax8RV+7Zt1QSBkMlOz/myBLXUjlTZ3Xg2FXVsfFQW5/g2CjuHpRSFx1bVNX6ysoG
9DT58EQcYxIS8WfkHRbbXKh9I1nSfZ7/Hky/kTafRdRMrjAgbqFgHkYTYsBZeav5
fYWKGQRJulYfeZQ90yMTvlpF42DjCC3uJYamJnwDIu8OhS1WRBI8fQfr9DRzmRua
OK3BK9hUiScDZOJB6OqeVzUTfe7MAA4/UwrDtTYQ+PqAenv1PK8DZqwXyxA9ThHb
zKO3OwuKOVHJnKvpOcr+eNwo7jbnHlis0oBksj/mrq2P9m2ueF9gzCiq5Ri5Syag
Wssb1HUoMGwqU0roS8+pRpNC8YgsWpsttvUWSZ8u6Vj/FLeHpiV3mYXPVMaKRhVm
067BA2uj4Th1JKtGleox+Em0R7OFbCc/9aWC67wiqI6KRyit9pYiF3npph+7D5Eq
7zPsUdDd+qc+UTiLp3liCRp5w6484wWdhZO6wRtmUgxGjNkxFoNnX8CitzF8AaqO
UWWemqWuz3lAZuORQ9KX
=OQzQ
-----END PGP SIGNATURE-----Every security advisory uses the following format:Each security advisory is signed by the
PGP key of the Security Officer. The
public key for the Security Officer can be verified at
.The name of the security advisory always begins with
FreeBSD-SA- (for FreeBSD Security
Advisory), followed by the year in two-digit format
(14:), followed by the advisory number
for that year (04.), followed by the
name of the affected application or subsystem
(bind). The advisory shown here is the
fourth advisory for 2014 and it affects
BIND.The Topic field summarizes the
vulnerability.The Category refers to the
affected part of the system which may be one of
core, contrib, or
ports. The core
category means that the vulnerability affects a core
component of the &os; operating system. The
contrib category means that the
vulnerability affects software included with &os;,
such as BIND. The
ports category indicates that the
vulnerability affects software available through the Ports
Collection.The Module field refers to the
component location. In this example, the
bind module is affected; therefore,
this vulnerability affects an application installed with
the operating system.The Announced field reflects the
date the security advisory was published. This means
that the security team has verified that the problem
exists and that a patch has been committed to the &os;
source code repository.The Credits field gives credit to
the individual or organization who noticed the
vulnerability and reported it.The Affects field explains which
releases of &os; are affected by this
vulnerability.The Corrected field indicates the
date, time, time offset, and releases that were
corrected. The section in parentheses shows each branch
for which the fix has been merged, and the version number
of the corresponding release from that branch. The
release identifier itself includes the version number
and, if appropriate, the patch level. The patch level is
the letter p followed by a number,
indicating the sequence number of the patch, allowing
users to track which patches have already been applied to
the system.The CVE Name field lists the
advisory number, if one exists, in the public cve.mitre.org
security vulnerabilities database.The Background field provides a
description of the affected module.The Problem Description field
explains the vulnerability. This can include
information about the flawed code and how the utility
could be maliciously used.The Impact field describes what
type of impact the problem could have on a system.The Workaround field indicates if
a workaround is available to system administrators who
cannot immediately patch the system.The Solution field provides the
instructions for patching the affected system. This is a
step-by-step, tested and verified method for getting a
system patched and working securely.The Correction Details field
displays each affected Subversion branch with the revision
number that contains the corrected code.The References field offers sources
of additional information regarding the
vulnerability.Process AccountingTomRhodesContributed
by Process AccountingProcess accounting is a security method in which an
administrator may keep track of system resources used and
their allocation among users, provide for system monitoring,
and minimally track a user's commands.Process accounting has both positive and negative points.
One of the positives is that an intrusion may be narrowed down
to the point of entry. A negative is the amount of logs
generated by process accounting, and the disk space they may
require. This section walks an administrator through the basics
of process accounting.If more fine-grained accounting is needed, refer to
.Enabling and Utilizing Process AccountingBefore using process accounting, it must be enabled using
the following commands:&prompt.root; touch /var/account/acct
&prompt.root; chmod 600 /var/account/acct
&prompt.root; accton /var/account/acct
&prompt.root; echo 'accounting_enable="YES"' >> /etc/rc.confOnce enabled, accounting will begin to track information
such as CPU statistics and executed
commands. All accounting logs are in a non-human readable
format which can be viewed using sa. If
issued without any options, sa prints
information relating to the number of per-user calls, the
total elapsed time in minutes, total CPU
and user time in minutes, and the average number of
I/O operations. Refer to &man.sa.8; for
the list of available options which control the output.To display the commands issued by users, use
lastcomm. For example, this command
prints out all usage of ls by trhodes on the
ttyp1 terminal:&prompt.root; lastcomm ls trhodes ttyp1Many other useful options exist and are explained in
&man.lastcomm.1;, &man.acct.5;, and &man.sa.8;.Resource LimitsTomRhodesContributed
by Resource limits&os; provides several methods for an administrator to
limit the amount of system resources an individual may use.
Disk quotas limit the amount of disk space available to users.
Quotas are discussed in .quotaslimiting usersquotasdisk quotasLimits to other resources, such as CPU
and memory, can be set using either a flat file or a command to
configure a resource limits database. The traditional method
defines login classes by editing
/etc/login.conf. While this method is
still supported, any changes require a multi-step process of
editing this file, rebuilding the resource database, making
necessary changes to /etc/master.passwd,
and rebuilding the password database. This can become
time-consuming, depending upon the number of users to
configure.Beginning with &os; 9.0-RELEASE,
rctl can be used to provide a more
fine-grained method for controlling resource limits. This
command supports more than user limits as it can also be used to
set resource constraints on processes and jails.This section demonstrates both methods for controlling
resources, beginning with the traditional method.Configuring Login Classeslimiting usersaccountslimiting/etc/login.confIn the traditional method, login classes and the resource
limits to apply to a login class are defined in
/etc/login.conf. Each user account can
be assigned to a login class, where default
is the default login class. Each login class has a set of
login capabilities associated with it. A login capability is
a
name=value
pair, where name is a well-known
identifier and value is an
arbitrary string which is processed accordingly depending on
the name.Whenever /etc/login.conf is edited,
/etc/login.conf.db must be updated
by executing the following command:&prompt.root; cap_mkdb /etc/login.confResource limits differ from the default login capabilities
in two ways. First, for every limit, there is a
soft and hard
limit. A soft limit may be adjusted by the user or
application, but may not be set higher than the hard limit.
The hard limit may be lowered by the user, but can only be
raised by the superuser. Second, most resource limits apply
per process to a specific user. lists the most commonly
used resource limits. All of the available resource limits
and capabilities are described in detail in
&man.login.conf.5;.limiting userscoredumpsizelimiting userscputimelimiting usersfilesizelimiting usersmaxproclimiting usersmemorylockedlimiting usersmemoryuselimiting usersopenfileslimiting userssbsizelimiting usersstacksize
Login Class Resource LimitsResource LimitDescriptioncoredumpsizeThe limit on the size of a core file generated by
a program is subordinate to other limits on disk
usage, such as filesize or disk
quotas. This limit is often used as a less severe
method of controlling disk space consumption. Since
users do not generate core files and often do not
delete them, this setting may save them from running
out of disk space should a large program
crash.cputimeThe maximum amount of CPU time
a user's process may consume. Offending processes
will be killed by the kernel. This is a limit on
CPU time
consumed, not the percentage of the
CPU as displayed in some of the
fields generated by top and
ps.filesizeThe maximum size of a file the user may own.
Unlike disk quotas (), this
limit is enforced on individual files, not the set of
all files a user owns.maxprocThe maximum number of foreground and background
processes a user can run. This limit may not be
larger than the system limit specified by
kern.maxproc. Setting this limit
too small may hinder a user's productivity as some
tasks, such as compiling a large program, start lots
of processes.memorylockedThe maximum amount of memory a process may
request to be locked into main memory using
&man.mlock.2;. Some system-critical programs, such as
&man.amd.8;, lock into main memory so that if the
system begins to swap, they do not contribute to disk
thrashing.memoryuseThe maximum amount of memory a process may
consume at any given time. It includes both core
memory and swap usage. This is not a catch-all limit
for restricting memory consumption, but is a good
start.openfilesThe maximum number of files a process may have
open. In &os;, files are used to represent sockets
and IPC channels, so be careful not
to set this too low. The system-wide limit for this
is defined by
kern.maxfiles.sbsizeThe limit on the amount of network memory a user
may consume. This can generally be used to limit
network communications.stacksizeThe maximum size of a process stack. This alone
is not sufficient to limit the amount of memory a
program may use, so it should be used in conjunction
with other limits.
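The soft/hard distinction described at the start of this section can be observed directly from a shell with the ulimit built-in. A minimal sketch in portable sh, run in a subshell so the change does not affect the login session (the value 64 is arbitrary):

```shell
# Lower the soft limit on open files in a subshell, then read it back.
# The soft limit may be lowered freely; the hard limit (-H) can only
# be raised by the superuser.
sh -c 'ulimit -S -n 64; ulimit -S -n'
```

This prints 64, confirming the lowered soft limit is in effect for that subshell only.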
There are a few other things to remember when setting
resource limits:Processes started at system startup by
/etc/rc are assigned to the
daemon login class.Although the default
/etc/login.conf is a good source of
reasonable values for most limits, they may not be
appropriate for every system. Setting a limit too high
may open the system up to abuse, while setting it too low
may put a strain on productivity.&xorg; takes a lot of
resources and encourages users to run more programs
simultaneously.Many limits apply to individual processes, not the
user as a whole. For example, setting
openfiles to 50
means that each process the user runs may open up to
50 files. The total amount of files a
user may open is the value of openfiles
multiplied by the value of maxproc.
This also applies to memory consumption.For further information on resource limits and login
classes and capabilities in general, refer to
&man.cap.mkdb.1;, &man.getrlimit.2;, and
&man.login.conf.5;.Enabling and Configuring Resource LimitsBy default, kernel support for rctl is
not built in, meaning that the kernel will first need to be
recompiled using the instructions in the chapter on kernel configuration. Add these lines to either
GENERIC or a custom kernel configuration
file, then rebuild the kernel:options RACCT
options RCTLOnce the system has rebooted into the new kernel,
rctl may be used to set rules for the
system.Rule syntax is controlled through the use of a subject,
subject-id, resource, and action, as seen in this example
rule:user:trhodes:maxproc:deny=10/userIn this rule, the subject is user, the
subject-id is trhodes, the resource,
maxproc, is the maximum number of
processes, and the action is deny, which
blocks any new processes from being created. This means that
the user, trhodes, will be constrained to
no greater than 10 processes. Other
possible actions include logging to the console, passing a
notification to &man.devd.8;, or sending a signal such as SIGTERM to the
process.Some care must be taken when adding rules. Since this
user is constrained to 10 processes, this
example will prevent the user from performing other tasks
after logging in and executing a
screen session. Once a resource limit has
been hit, an error will be printed, as in this example:&prompt.user; man test
/usr/bin/man: Cannot fork: Resource temporarily unavailable
eval: Cannot fork: Resource temporarily unavailableAs another example, a jail can be prevented from exceeding
a memory limit. This rule could be written as:&prompt.root; rctl -a jail:httpd:memoryuse:deny=2G/jailRules will persist across reboots if they have been added
to /etc/rctl.conf. The format is a rule,
without the preceding command. For example, the previous rule
could be added as:# Block jail from using more than 2G memory:
jail:httpd:memoryuse:deny=2G/jailTo remove a rule, use rctl to remove it
from the list:&prompt.root; rctl -r user:trhodes:maxproc:deny=10/userA method for removing all rules is documented in
&man.rctl.8;. However, if removing all rules for a single
user is required, this command may be issued:&prompt.root; rctl -r user:trhodesMany other resources exist which can be used to exert
additional control over various subjects.
See &man.rctl.8; to learn about them.
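As a fuller sketch of the /etc/rctl.conf format, several subjects, resources, and actions can be combined in one file. The user name, jail name, and limit values here are hypothetical; the resource and action names are taken from &man.rctl.8;:

```
# Hypothetical /etc/rctl.conf combining several rules:
# Cap user trhodes at 10 processes:
user:trhodes:maxproc:deny=10/user
# Log to the console when jail httpd exceeds 1G of memory:
jail:httpd:memoryuse:log=1G/jail
# Send SIGTERM to any trhodes process after one hour (3600 s) of CPU time:
user:trhodes:cputime:sigterm=3600/process
```

Each rule takes effect at boot without the preceding rctl command, as described above.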
Index: head/en_US.ISO8859-1/books/handbook/serialcomms/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/serialcomms/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/serialcomms/chapter.xml (revision 48529)
@@ -1,2199 +1,2200 @@
Serial CommunicationsSynopsisserial communications&unix; has always had support for serial communications as
the very first &unix; machines relied on serial lines for user
input and output. Things have changed a lot from the days
when the average terminal consisted of a 10-character-per-second
serial printer and a keyboard. This chapter covers some of the
ways serial communications can be used on &os;.After reading this chapter, you will know:How to connect terminals to a &os; system.How to use a modem to dial out to remote hosts.How to allow remote users to login to a &os; system
with a modem.How to boot a &os; system from a serial console.Before reading this chapter, you should:Know how to configure and
install a custom kernel.Understand &os; permissions
and processes.Have access to the technical manual for the serial
hardware to be used with &os;.Serial Terminology and HardwareThe following terms are often used in serial
communications:bpsBits per
Secondbits-per-second
(bps) is the rate at which data is
transmitted.DTEData Terminal
EquipmentDTE
(DTE) is one of two endpoints in a
serial communication. An example would be a
computer.DCEData Communications
EquipmentDCE
(DCE) is the other endpoint in a
serial communication. Typically, it is a modem or serial
terminal.RS-232The original standard which defined hardware serial
communications. It has since been renamed to
TIA-232.When referring to communication data rates, this section
does not use the term baud. Baud refers
to the number of electrical state transitions made in a period
of time, while bps is the correct term to
use.To connect a serial terminal to a &os; system, a serial port
on the computer and the proper cable to connect to the serial
device are needed. Users who are already familiar with serial
hardware and cabling can safely skip this section.Serial Cables and PortsThere are several different kinds of serial cables. The
two most common types are null-modem cables and standard
RS-232 cables. The documentation for the
hardware should describe the type of cable required.These two types of cables differ in how the wires are
connected to the connector. Each wire represents a signal,
with the defined signals summarized in . A standard serial
cable passes all of the RS-232C signals
straight through. For example, the Transmitted
Data pin on one end of the cable goes to the
Transmitted Data pin on the other end. This is
the type of cable used to connect a modem to the &os; system,
and is also appropriate for some terminals.A null-modem cable switches the Transmitted
Data pin of the connector on one end with the
Received Data pin on the other end. The
connector can be either a DB-25 or a
DB-9.A null-modem cable can be constructed using the pin
connections summarized in ,
, and . While the standard calls for
a straight-through pin 1 to pin 1 Protective
Ground line, it is often omitted. Some terminals
work using only pins 2, 3, and 7, while others require
different configurations. When in doubt, refer to the
documentation for the hardware.null-modem cable
RS-232C Signal NamesAcronymsNamesRDReceived DataTDTransmitted DataDTRData Terminal ReadyDSRData Set ReadyDCDData Carrier DetectSGSignal GroundRTSRequest to SendCTSClear to Send
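The full null-modem pinout tables are omitted here, but the three-wire variant mentioned above (pins 2, 3, and 7 on a DB-25 connector) can be sketched as follows. This is a common wiring convention, not a substitute for the hardware documentation:

```
# Minimal three-wire DB-25 null-modem wiring (verify against the
# documentation for the terminal before building a cable):
#   Pin 2 (TD)  <--->  Pin 3 (RD)
#   Pin 3 (RD)  <--->  Pin 2 (TD)
#   Pin 7 (SG)  <--->  Pin 7 (SG)
```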
When one pin at one end connects to a pair of pins at
the other end, it is usually implemented with one short wire
between the pair of pins in their connector and a long wire
to the other single pin.Serial ports are the devices through which data is
transferred between the &os; host computer and the terminal.
Several kinds of serial ports exist. Before purchasing or
constructing a cable, make sure it will fit the ports on the
terminal and on the &os; system.Most terminals have DB-25 ports.
Personal computers may have DB-25 or
DB-9 ports. A multiport serial card may
have RJ-12 or RJ-45
ports. See the documentation that accompanied the hardware
for specifications on the kind of port or visually verify the
type of port.In &os;, each serial port is accessed through an entry in
/dev. There are two different kinds of
entries:Call-in ports are named
/dev/ttyuN
where N is the port number,
starting from zero. If a terminal is connected to the
first serial port (COM1), use
/dev/ttyu0 to refer to the terminal.
If the terminal is on the second serial port
(COM2), use
/dev/ttyu1, and so forth. Generally,
the call-in port is used for terminals. Call-in ports
require that the serial line assert the Data
Carrier Detect signal to work correctly.Call-out ports are named
/dev/cuauN
on &os; versions 10.x and higher and
/dev/cuadN
on &os; versions 9.x and lower. Call-out ports are
usually not used for terminals, but are used for modems.
The call-out port can be used if the serial cable or the
terminal does not support the Data Carrier
Detect signal.&os; also provides initialization devices
(/dev/ttyuN.init
and
/dev/cuauN.init
or
/dev/cuadN.init)
and locking devices
(/dev/ttyuN.lock
and
/dev/cuauN.lock
or
/dev/cuadN.lock).
The initialization devices are used to initialize
communications port parameters each time a port is opened,
such as crtscts for modems which use
RTS/CTS signaling for flow control. The
locking devices are used to lock flags on ports to prevent
users or programs from changing certain parameters. Refer to
&man.termios.4;, &man.sio.4;, and &man.stty.1; for information
on terminal settings, locking and initializing devices, and
setting terminal options, respectively.Serial Port ConfigurationBy default, &os; supports four serial ports which are
commonly known as COM1,
COM2, COM3, and
COM4. &os; also supports dumb multi-port
serial interface cards, such as the BocaBoard 1008 and 2016,
as well as more intelligent multi-port cards such as those
made by Digiboard. However, the default kernel only looks for
the standard COM ports.To see if the system recognizes the serial ports, look for
system boot messages that start with
uart:&prompt.root; grep uart /var/run/dmesg.bootIf the system does not recognize all of the needed serial
ports, additional entries can be added to
/boot/device.hints. This file already
contains hint.uart.0.* entries for
COM1 and hint.uart.1.*
entries for COM2. When adding a port
entry for COM3 use
0x3E8, and for COM4
use 0x2E8. Common IRQ
addresses are 5 for
COM3 and 9 for
COM4.ttyucuauTo determine the default set of terminal
I/O settings used by the port, specify its
device name. This example determines the settings for the
call-in port on COM2:&prompt.root; stty -a -f /dev/ttyu1System-wide initialization of serial devices is controlled
by /etc/rc.d/serial. This file affects
the default settings of serial devices. To change the
settings for a device, use stty. By
default, the changed settings are in effect until the device
is closed and when the device is reopened, it goes back to the
default set. To permanently change the default set, open and
adjust the settings of the initialization device. For
example, to turn on clocal mode, 8-bit
communication, and XON/XOFF flow control for
ttyu5, type:&prompt.root; stty -f /dev/ttyu5.init clocal cs8 ixon ixoffrc filesrc.serialTo prevent certain settings from being changed by an
application, make adjustments to the locking device. For
example, to lock the speed of ttyu5 to
57600 bps, type:&prompt.root; stty -f /dev/ttyu5.lock 57600Now, any application that opens ttyu5
and tries to change the speed of the port will be stuck with
57600 bps.
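One way to confirm the lock is to read the speed back from the port. This is a hypothetical session; the device name is an assumption carried over from the example above:

```
# Print the current speed of the call-in port; with the .lock device
# set, it should remain at 57600 bps while the port is in use,
# regardless of what applications request:
&prompt.root; stty -f /dev/ttyu5 speed
```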
- Terminals
+ Terminals
-
- Sean
- Kelly
-
- Contributed by
-
+
+ Sean
+ Kelly
+
+ Contributed by terminalsTerminals provide a convenient and low-cost way to access
a &os; system when not at the computer's console or on a
connected network. This section describes how to use terminals
with &os;.The original &unix; systems did not have consoles. Instead,
users logged in and ran programs through terminals that were
connected to the computer's serial ports.The ability to establish a login session on a serial port
still exists in nearly every &unix;-like operating system
today, including &os;. By using a terminal attached to an
unused serial port, a user can log in and run any text program
that can normally be run on the console or in an
xterm window.Many terminals can be attached to a &os; system. An older
spare computer can be used as a terminal wired into a more
powerful computer running &os;. This can turn what might
otherwise be a single-user computer into a powerful
multiple-user system.&os; supports three types of terminals:Dumb terminalsDumb terminals are specialized hardware that connect
to computers over serial lines. They are called
dumb because they have only enough
computational power to display, send, and receive text.
No programs can be run on these devices. Instead, dumb
terminals connect to a computer that runs the needed
programs.There are hundreds of kinds of dumb terminals made by
many manufacturers, and just about any kind will work with
&os;. Some high-end terminals can even display graphics,
but only certain software packages can take advantage of
these advanced features.Dumb terminals are popular in work environments where
workers do not need access to graphical
applications.Computers Acting as TerminalsSince a dumb terminal has just enough ability to
display, send, and receive text, any spare computer can
be a dumb terminal. All that is needed is the proper
cable and some terminal emulation
software to run on the computer.This configuration can be useful. For example, if one
user is busy working at the &os; system's console, another
user can do some text-only work at the same time from a
less powerful personal computer hooked up as a terminal to
the &os; system.There are at least two utilities in the base-system of
&os; that can be used to work through a serial connection:
&man.cu.1; and &man.tip.1;.For example, to connect from a client system that runs
&os; to the serial connection of another system:&prompt.root; cu -l serial-port-deviceReplace serial-port-device
with the device name of the connected serial port. These
device files are called
/dev/cuauN
on &os; versions 10.x and higher and
/dev/cuadN
on &os; versions 9.x and lower. In either case,
N is the serial port number,
starting from zero. This means that
COM1 is
/dev/cuau0 or
/dev/cuad0 in &os;.Additional programs are available through the Ports
Collection, such as
comms/minicom.X TerminalsX terminals are the most sophisticated kind of
terminal available. Instead of connecting to a serial
port, they usually connect to a network like Ethernet.
Instead of being relegated to text-only applications, they
can display any &xorg;
application.This chapter does not cover the setup, configuration,
or use of X terminals.Terminal ConfigurationThis section describes how to configure a &os; system to
enable a login session on a serial terminal. It assumes that
the system recognizes the serial port to which the terminal is
connected and that the terminal is connected with the correct
cable.In &os;, init reads
/etc/ttys and starts a
getty process on the available terminals.
The getty process is responsible for
reading a login name and starting the login
program. The ports on the &os; system which allow logins are
listed in /etc/ttys. For example, the
first virtual console, ttyv0, has an
entry in this file, allowing logins on the console. This file
also contains entries for the other virtual consoles, serial
ports, and pseudo-ttys. For a hardwired terminal, the serial
port's /dev entry is listed without the
/dev part. For example,
/dev/ttyv0 is listed as
ttyv0.The default /etc/ttys configures
support for the first four serial ports,
ttyu0 through
ttyu3:ttyu0 "/usr/libexec/getty std.9600" dialup off secure
ttyu1 "/usr/libexec/getty std.9600" dialup off secure
ttyu2 "/usr/libexec/getty std.9600" dialup off secure
ttyu3 "/usr/libexec/getty std.9600" dialup off secureWhen attaching a terminal to one of those ports, modify
the default entry to set the required speed and terminal type,
to turn the device on and, if needed, to
change the port's secure setting. If the
terminal is connected to another port, add an entry for the
port. configures two terminals in
/etc/ttys. The first entry configures a
Wyse-50 connected to COM2. The second
entry configures an old computer running
Procomm terminal software emulating
a VT-100 terminal. The computer is connected to the sixth
serial port on a multi-port serial card.Configuring Terminal Entriesttyu1 "/usr/libexec/getty std.38400" wy50 on insecure
ttyu5 "/usr/libexec/getty std.19200" vt100 on insecureThe first field specifies the device name of the
serial terminal.The second field tells getty to
initialize and open the line, set the line speed, prompt
for a user name, and then execute the
login program. The optional
getty type configures
characteristics on the terminal line, like
bps rate and parity. The available
getty types are listed in
/etc/gettytab. In almost all
cases, the getty types that start with
std will work for hardwired terminals
as these entries ignore parity. There is a
std entry for each
bps rate from 110 to 115200. Refer
to &man.gettytab.5; for more information.When setting the getty type, make sure to match the
communications settings used by the terminal. For this
example, the Wyse-50 uses no parity and connects at
38400 bps. The computer uses no parity and
connects at 19200 bps.The third field is the type of terminal. For
dial-up ports, unknown or
dialup is typically used since users
may dial up with practically any type of terminal or
software. Since the terminal type does not change for
hardwired terminals, a real terminal type from
/etc/termcap can be specified. For
this example, the Wyse-50 uses the real terminal type
while the computer running
Procomm is set to emulate a
VT-100.The fourth field specifies if the port should be
enabled. To enable logins on this port, this field must
be set to on.The final field is used to specify whether the port
is secure. Marking a port as secure
means that it is trusted enough to allow root to log in from that
port. Insecure ports do not allow root logins. On an
insecure port, users must log in from unprivileged
accounts and then use su or a similar
mechanism to gain superuser privileges, as described in
. For security
reasons, it is recommended to change this setting to
insecure.After making any changes to
/etc/ttys, send a SIGHUP (hangup) signal
to the init process to force it to re-read
its configuration file:&prompt.root; kill -HUP 1Since init is always the first process
run on a system, it always has a process ID
of 1.If everything is set up correctly, all cables are in
place, and the terminals are powered up, a
getty process should now be running on each
terminal and login prompts should be available on each
terminal.Troubleshooting the ConnectionEven with the most meticulous attention to detail,
something could still go wrong while setting up a terminal.
Here is a list of common symptoms and some suggested
fixes.If no login prompt appears, make sure the terminal is
plugged in and powered up. If it is a personal computer
acting as a terminal, make sure it is running terminal
emulation software on the correct serial port.Make sure the cable is connected firmly to both the
terminal and the &os; computer. Make sure it is the right
kind of cable.Make sure the terminal and &os; agree on the
bps rate and parity settings. For a video
display terminal, make sure the contrast and brightness
controls are turned up. If it is a printing terminal, make
sure paper and ink are in good supply.Use ps to make sure that a
getty process is running and serving the
terminal. For example, the following listing shows that a
getty is running on the second serial port,
ttyu1, and is using the
std.38400 entry in
/etc/gettytab:&prompt.root; ps -axww|grep ttyu
22189 d1 Is+ 0:00.03 /usr/libexec/getty std.38400 ttyu1If no getty process is running, make
sure the port is enabled in /etc/ttys.
Remember to run kill -HUP 1 after modifying
/etc/ttys.If the getty process is running but the
terminal still does not display a login prompt, or if it
displays a prompt but will not accept typed input, the
terminal or cable may not support hardware handshaking. Try
changing the entry in /etc/ttys from
std.38400 to
3wire.38400, then run kill -HUP
1 after modifying /etc/ttys.
The 3wire entry is similar to
std, but ignores hardware handshaking. The
baud rate may need to be reduced or software flow control
enabled when using 3wire to prevent buffer
overflows.If garbage appears instead of a login prompt, make sure
the terminal and &os; agree on the bps rate
and parity settings. Check the getty
processes to make sure the correct
getty type is in use. If not, edit
/etc/ttys and run kill
-HUP 1.If characters appear doubled and the password appears when
typed, switch the terminal, or the terminal emulation
software, from half duplex or local
echo to full duplex.
- Dial-in Service
+ Dial-in Service
-
- Guy
- Helmer
+
+ Guy
+ HelmerContributed by
+
-
- Sean
- Kelly
+
+ Sean
+ KellyAdditions by dial-in serviceConfiguring a &os; system for dial-in service is similar to
configuring terminals, except that modems are used instead of
terminal devices. &os; supports both external and internal
modems.External modems are more convenient because they often can
be configured via parameters stored in non-volatile
RAM and they usually provide lighted
indicators that display the state of important
RS-232 signals, indicating whether the modem
is operating properly.Internal modems usually lack non-volatile
RAM, so their configuration may be limited to
setting DIP switches. If the internal modem
has any signal indicator lights, they are difficult to view when
the system's cover is in place.modemWhen using an external modem, a proper cable is needed. A
standard RS-232C serial cable should
suffice.&os; needs the RTS and
CTS signals for flow control at speeds above
2400 bps, the CD signal to detect when a
call has been answered or the line has been hung up, and the
DTR signal to reset the modem after a session
is complete. Some cables are wired without all of the needed
signals, so if a login session does not go away when the line
hangs up, there may be a problem with the cable. Refer to the table of RS-232C signal names in the previous section for more information about these
signals.Like other &unix;-like operating systems, &os; uses the
hardware signals to find out when a call has been answered or a
line has been hung up, and to hang up and reset the modem after a
call. &os; avoids sending commands to the modem or watching for
status reports from the modem.&os; supports the NS8250,
NS16450, NS16550, and
NS16550A-based RS-232C
(CCITT V.24) communications interfaces. The
8250 and 16450 devices have single-character buffers. The 16550
device provides a 16-character buffer, which allows for better
system performance. Bugs in plain 16550 devices prevent the use
of the 16-character buffer, so use 16550A devices if possible.
Because single-character-buffer devices require more work by the
operating system than the 16-character-buffer devices,
16550A-based serial interface cards are preferred. If the
system has many active serial ports or will have a heavy load,
16550A-based cards are better for low-error-rate
communications.The rest of this section demonstrates how to configure a
modem to receive incoming connections, how to communicate with
the modem, and offers some troubleshooting tips.Modem ConfigurationgettyAs with terminals, init spawns a
getty process for each configured serial
port used for dial-in connections. When a user dials the
modem's line and the modems connect, the Carrier
Detect signal is reported by the modem. The kernel
notices that the carrier has been detected and instructs
getty to open the port and display a
login: prompt at the specified initial line
speed. In a typical configuration, if garbage characters are
received, usually due to the modem's connection speed being
different than the configured speed, getty
tries adjusting the line speeds until it receives reasonable
characters. After the user enters their login name,
getty executes login,
which completes the login process by asking for the user's
password and then starting the user's shell./usr/bin/loginThere are two schools of thought regarding dial-up modems.
One configuration method is to set the modems and systems so
that no matter at what speed a remote user dials in, the
dial-in RS-232 interface runs at a locked
speed. The benefit of this configuration is that the remote
user always sees a system login prompt immediately. The
downside is that the system does not know what a user's true
data rate is, so full-screen programs like
Emacs will not adjust their
screen-painting methods to make their response better for
slower connections.The second method is to configure the
RS-232 interface to vary its speed based on
the remote user's connection speed. Because
getty does not understand any particular
modem's connection speed reporting, it gives a
login: message at an initial speed and
watches the characters that come back in response. If the
user sees junk, they should press Enter until
they see a recognizable prompt. If the data rates do not
match, getty sees anything the user types
as junk, tries the next speed, and gives the
login: prompt again. This procedure normally
only takes a keystroke or two before the user sees a good
prompt. This login sequence does not look as clean as the
locked-speed method, but a user on a low-speed connection
should receive better interactive response from full-screen
programs.When locking a modem's data communications rate at a
particular speed, no changes to
/etc/gettytab should be needed. However,
for a matching-speed configuration, additional entries may be
required in order to define the speeds to use for the modem.
This example configures a 14.4 Kbps modem with a top
interface speed of 19.2 Kbps using 8-bit, no parity
connections. It configures getty to start
the communications rate for a V.32bis connection at
19.2 Kbps, then cycles through 9600 bps,
2400 bps, 1200 bps, 300 bps, and back to
19.2 Kbps. Communications rate cycling is implemented
with the nx= (next table) capability. Each
line uses a tc= (table continuation) entry
to pick up the rest of the settings for a particular data
rate.#
# Additions for a V.32bis Modem
#
um|V300|High Speed Modem at 300,8-bit:\
:nx=V19200:tc=std.300:
un|V1200|High Speed Modem at 1200,8-bit:\
:nx=V300:tc=std.1200:
uo|V2400|High Speed Modem at 2400,8-bit:\
:nx=V1200:tc=std.2400:
up|V9600|High Speed Modem at 9600,8-bit:\
:nx=V2400:tc=std.9600:
uq|V19200|High Speed Modem at 19200,8-bit:\
:nx=V9600:tc=std.19200:For a 28.8 Kbps modem, or to take advantage of
compression on a 14.4 Kbps modem, use a higher
communications rate, as seen in this example:#
# Additions for a V.32bis or V.34 Modem
# Starting at 57.6 Kbps
#
vm|VH300|Very High Speed Modem at 300,8-bit:\
:nx=VH57600:tc=std.300:
vn|VH1200|Very High Speed Modem at 1200,8-bit:\
:nx=VH300:tc=std.1200:
vo|VH2400|Very High Speed Modem at 2400,8-bit:\
:nx=VH1200:tc=std.2400:
vp|VH9600|Very High Speed Modem at 9600,8-bit:\
:nx=VH2400:tc=std.9600:
vq|VH57600|Very High Speed Modem at 57600,8-bit:\
:nx=VH9600:tc=std.57600:For a slow CPU or a heavily loaded
system without 16550A-based serial ports, this configuration
may produce sio silo errors at 57.6 Kbps./etc/ttysThe configuration of /etc/ttys is
similar to , but a different
argument is passed to getty and
dialup is used for the terminal type.
Replace xxx with the process
init will run on the device:ttyu0 "/usr/libexec/getty xxx" dialup onThe dialup terminal type can be
changed. For example, setting vt102 as the
default terminal type allows users to use
VT102 emulation on their remote
systems.For a locked-speed configuration, specify the speed with
a valid type listed in /etc/gettytab.
This example is for a modem whose port speed is locked at
19.2 Kbps:ttyu0 "/usr/libexec/getty std.19200" dialup onIn a matching-speed configuration, the entry needs to
reference the appropriate beginning auto-baud
entry in /etc/gettytab. To continue the
example for a matching-speed modem that starts at
19.2 Kbps, use this entry:ttyu0 "/usr/libexec/getty V19200" dialup onAfter editing /etc/ttys, wait until
the modem is properly configured and connected before
signaling init:&prompt.root; kill -HUP 1rc filesrc.serialHigh-speed modems, like V.32,
V.32bis, and V.34
modems, use hardware (RTS/CTS) flow
control. Use stty to set the hardware flow
control flag for the modem port. This example sets the
crtscts flag on COM2's
dial-in and dial-out initialization devices:&prompt.root; stty -f /dev/ttyu1.init crtscts
&prompt.root; stty -f /dev/cuau1.init crtsctsTroubleshootingThis section provides a few tips for troubleshooting a
dial-up modem that will not connect to a &os; system.Hook up the modem to the &os; system and boot the system.
If the modem has status indication lights, watch to see
whether the modem's DTR indicator lights
when the login: prompt appears on the
system's console. If it lights up, that should mean that &os;
has started a getty process on the
appropriate communications port and is waiting for the modem
to accept a call.If the DTR indicator does not light,
log in to the &os; system through the console and type
ps ax to see if &os; is running a
getty process on the correct port: 114 ?? I 0:00.10 /usr/libexec/getty V19200 ttyu0If the second column contains a d0
instead of a ?? and the modem has not
accepted a call yet, this means that getty
has completed its open on the communications port. This could
indicate a problem with the cabling or a misconfigured modem
because getty should not be able to open
the communications port until the carrier detect signal has
been asserted by the modem.If no getty processes are waiting to
open the port, double-check that the entry for the port is
correct in /etc/ttys. Also, check
/var/log/messages to see if there are
any log messages from init or
getty.Next, try dialing into the system. Be sure to use 8 bits,
no parity, and 1 stop bit on the remote system. If a prompt
does not appear right away, or the prompt shows garbage, try
pressing Enter about once per second. If
there is still no login: prompt,
try sending a BREAK. When using a
high-speed modem, try dialing again after locking the
dialing modem's interface speed.If there is still no login: prompt, check
/etc/gettytab again and double-check
that:The initial capability name specified in the entry in
/etc/ttys matches the name of a
capability in /etc/gettytab.Each nx= entry matches another
gettytab capability name.Each tc= entry matches another
gettytab capability name.If the modem on the &os; system will not answer, make
sure that the modem is configured to answer the phone when
DTR is asserted. If the modem seems to be
configured correctly, verify that the
DTR line is asserted by checking the
modem's indicator lights.If it still does not work, try sending an email
to the &a.questions; describing the modem and the
problem.Dial-out Servicedial-out serviceThe following are tips for getting the host to connect over
the modem to another computer. This is appropriate for
establishing a terminal session with a remote host.This kind of connection can be helpful to get a file on the
Internet if there are problems using PPP. If PPP is not
working, use the terminal session to FTP the needed file. Then
use zmodem to transfer it to the machine.Using a Stock Hayes ModemA generic Hayes dialer is built into
tip. Use at=hayes in
/etc/remote.The Hayes driver is not smart enough to recognize some of
the advanced features of newer modems, such as the
BUSY, NO DIALTONE, and
CONNECT 115200 messages. Turn those messages off
when using tip with
ATX0&W.The dial timeout for tip is 60
seconds. The modem should use something less, or else
tip will think there is a communication
problem. Try ATS7=45&W.Using AT Commands/etc/remoteCreate a direct entry in
/etc/remote. For example, if the modem
is hooked up to the first serial port,
/dev/cuau0, use the following
line:cuau0:dv=/dev/cuau0:br#19200:pa=noneUse the highest bps rate the modem
supports in the br capability. Then, type
tip cuau0 to connect to the modem.Or, use cu as root with the following
command:&prompt.root; cu -lline -sspeedline is the serial port, such
as /dev/cuau0, and
speed is the speed, such as
57600. When finished entering the AT
commands, type ~. to exit.The @ Sign Does Not WorkThe @ sign in the phone number
capability tells tip to look in
/etc/phones for a phone number. But, the
@ sign is also a special character in
capability files like /etc/remote, so it
needs to be escaped with a backslash:pn=\@Dialing from the Command LinePut a generic entry in
/etc/remote. For example:tip115200|Dial any phone number at 115200 bps:\
:dv=/dev/cuau0:br#115200:at=hayes:pa=none:du:
tip57600|Dial any phone number at 57600 bps:\
:dv=/dev/cuau0:br#57600:at=hayes:pa=none:du:This should now work:&prompt.root; tip -115200 5551234Users who prefer cu over
tip, can use a generic
cu entry:cu115200|Use cu to dial any number at 115200bps:\
:dv=/dev/cuau1:br#115200:at=hayes:pa=none:du:and type:&prompt.root; cu 5551234 -s 115200Setting the bps RatePut in an entry for tip1200 or
cu1200, but go ahead and use whatever
bps rate is appropriate with the
br capability.
tip thinks a good default is 1200 bps,
which is why it looks for a tip1200 entry.
1200 bps does not have to be used, though.Accessing a Number of Hosts Through a Terminal
ServerRather than waiting until connected and typing
CONNECT host
each time, use tip's cm
capability. For example, these entries in
/etc/remote will let you type
tip pain or tip muffin
to connect to the hosts pain or
muffin, and tip
deep13 to connect to the terminal server.pain|pain.deep13.com|Forrester's machine:\
:cm=CONNECT pain\n:tc=deep13:
muffin|muffin.deep13.com|Frank's machine:\
:cm=CONNECT muffin\n:tc=deep13:
deep13:Gizmonics Institute terminal server:\
:dv=/dev/cuau2:br#38400:at=hayes:du:pa=none:pn=5551234:Using More Than One Line with
tipThis is often a problem where a university has several
modem lines and several thousand students trying to use
them.Make an entry in /etc/remote and use
@ for the pn
capability:big-university:\
:pn=\@:tc=dialout
dialout:\
:dv=/dev/cuau3:br#9600:at=courier:du:pa=none:Then, list the phone numbers in
/etc/phones:big-university 5551111
big-university 5551112
big-university 5551113
big-university 5551114tip will try each number in the listed
order, then give up. To keep retrying, run
tip in a while
loop.Using the Force CharacterCtrlP is the default force character,
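That retry loop can be sketched in sh. This is a hypothetical example using the big-university entry from above, and it assumes tip exits with a non-zero status when the dial attempt fails:

```shell
#!/bin/sh
# Redial until tip connects successfully.
# "big-university" is the /etc/remote entry from the example above.
while ! tip big-university
do
    sleep 10    # pause briefly between dial attempts
done
```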
used to tell tip that the next character is
literal data. The force character can be set to any other
character with the ~s escape, which means
set a variable.Type
~sforce=single-char
followed by a newline. single-char
is any single character. If
single-char is left out, then the
force character is the null character, which is accessed by
typing
Ctrl2
or CtrlSpace. A pretty good value for
single-char is
ShiftCtrl6, which is only used on some terminal
servers.To change the force character, specify the following in
~/.tiprc:force=single-charUpper Case CharactersThis happens when
CtrlA is pressed, which is tip's
raise character, specially designed for people
with broken caps-lock keys. Use ~s to set
raisechar to something reasonable. It can
be set to be the same as the force character, if neither
feature is used.Here is a sample ~/.tiprc for
Emacs users who need to type
Ctrl2 and CtrlA:force=^^
raisechar=^^The ^^ is
ShiftCtrl6.File Transfers with tipWhen talking to another &unix;-like operating system,
files can be sent and received using ~p
(put) and ~t (take). These commands run
cat and echo on the
remote system to accept and send files. The syntax is:~plocal-fileremote-file~tremote-filelocal-fileThere is no error checking, so another protocol, like
zmodem, should probably be used.Using zmodem with
tip?To receive files, start the sending program on the remote
end. Then, type ~C rz to begin receiving
them locally.To send files, start the receiving program on the remote
end. Then, type ~C sz
files to send them to the
remote system.
- Setting Up the Serial Console
+ Setting Up the Serial Console
-
- Kazutaka
- YOKOTA
+
+ Kazutaka
+ YOKOTAContributed by
+
-
- Bill
- Paul
+
+ Bill
+ PaulBased on a document by serial console&os; has the ability to boot a system with a dumb
terminal on a serial port as a console. This configuration is
useful for system administrators who wish to install &os; on
machines that have no keyboard or monitor attached, and
developers who want to debug the kernel or device
drivers.As described in , &os; employs a three
stage bootstrap. The first two stages are in the boot block
code which is stored at the beginning of the &os; slice on the
boot disk. The boot block then loads and runs the boot loader
as the third stage code.In order to set up booting from a serial console, the boot
block code, the boot loader code, and the kernel need to be
configured.Quick Serial Console ConfigurationThis section provides a fast overview of setting up the
serial console. This procedure can be used when the dumb
terminal is connected to COM1.Configuring a Serial Console on
COM1Connect the serial cable to
COM1 and the controlling
terminal.To configure boot messages to display on the serial
console, issue the following command as the
superuser:&prompt.root; echo 'console="comconsole"' >> /boot/loader.confEdit /etc/ttys and change
off to on and
dialup to vt100 for
the ttyu0 entry. Otherwise, a
password will not be required to connect via the serial
console, resulting in a potential security hole.Reboot the system to see if the changes took
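After those edits, the ttyu0 line in /etc/ttys should look something like this (a sketch; the getty type and the secure flag may differ on a given system):

```
ttyu0   "/usr/libexec/getty std.9600"   vt100   on secure
```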
effect.If a different configuration is required, see the next
section for a more in-depth configuration explanation.In-Depth Serial Console ConfigurationThis section provides a more detailed explanation of the
steps needed to set up a serial console in &os;.Configuring a Serial ConsolePrepare a serial cable.null-modem cableUse either a null-modem cable or a standard serial
cable and a null-modem adapter. See for a discussion on serial
cables.Unplug the keyboard.Many systems probe for the keyboard during the
Power-On Self-Test (POST) and will
generate an error if the keyboard is not detected. Some
machines will refuse to boot until the keyboard is plugged
in.If the computer complains about the error, but boots
anyway, no further configuration is needed.If the computer refuses to boot without a keyboard
attached, configure the BIOS so that it
ignores this error. Consult the motherboard's manual for
details on how to do this.Try setting the keyboard to Not
installed in the BIOS.
This setting tells the BIOS not to
probe for a keyboard at power-on so it should not
complain if the keyboard is absent. If that option is
not present in the BIOS, look for a
Halt on Error option instead. Setting
this to All but Keyboard or to No
Errors will have the same effect.If the system has a &ps2; mouse, unplug it as well.
&ps2; mice share some hardware with the keyboard and
leaving the mouse plugged in can fool the keyboard probe
into thinking the keyboard is still there.While most systems will boot without a keyboard,
quite a few will not boot without a graphics adapter.
Some systems can be configured to boot with no graphics
adapter by changing the graphics adapter
setting in the BIOS configuration to
Not installed. Other systems do not
support this option and will refuse to boot if there is
no display hardware in the system. With these machines,
leave some kind of graphics card plugged in, even if it
is just a junky mono board. A monitor does not need to
be attached.Plug a dumb terminal, an old computer with a modem
program, or the serial port on another &unix; box into the
serial port.Add the appropriate hint.sio.*
entries to /boot/device.hints for the
serial port. Some multi-port cards also require kernel
configuration options. Refer to &man.sio.4; for the
required options and device hints for each supported
serial port.Create boot.config in the root
directory of the a partition on the
boot drive.This file instructs the boot block code how to boot
the system. In order to activate the serial console, one
or more of the following options are needed. When using
multiple options, include them all on the same
line:-h Toggles between the internal and serial
consoles. Use this to switch console devices. For
instance, if booting from the internal (video) console,
use -h to direct the boot loader
and the kernel to use the serial port as its console
device. Alternatively, if booting from the serial
port, use -h to tell the boot
loader and the kernel to use the video display as
the console instead.-D Toggles between the single and dual console
configurations. In the single configuration, the
console will be either the internal console (video
display) or the serial port, depending on the state
of -h. In the dual console
configuration, both the video display and the
serial port will become the console at the same
time, regardless of the state of
-h. However, the dual console
configuration takes effect only while the boot
block is running. Once the boot loader gets
control, the console specified by
-h becomes the only
console.-P Makes the boot block probe the keyboard. If no
keyboard is found, the -h and
-D options are automatically
set.Due to space constraints in the current
version of the boot blocks, -P is
capable of detecting extended keyboards only.
Keyboards with fewer than 101 keys and without F11
and F12 keys may not be detected. Keyboards on
some laptops may not be properly found because of
this limitation. If this is the case, do not use
-P.Use either -P to select the console
automatically or -h to activate the
serial console. Refer to &man.boot.8; and
&man.boot.config.5; for more details.The options, except for -P, are
passed to the boot loader. The boot loader will
determine whether the internal video or the serial port
should become the console by examining the state of
-h. This means that if -D
is specified but -h
is not specified in /boot.config, the
serial port can be used as the console only during the
boot block as the boot loader will use the internal video
display as the console.Boot the machine.When &os; starts, the boot blocks echo the contents of
/boot.config to the console. For
example:/boot.config: -P
Keyboard: noThe second line appears only if -P is
in /boot.config and indicates the
presence or absence of the keyboard. These messages go
to either the serial or internal console, or both,
depending on the option in
/boot.config:

Options                 Message goes to
none                    internal console
-h                      serial console
-D                      serial and internal consoles
-Dh                     serial and internal consoles
-P, keyboard present    internal console
-P, keyboard absent     serial console

After the message, there will be a small pause before
the boot blocks continue loading the boot loader and
before any further messages are printed to the console.
Under normal circumstances, there is no need to interrupt
the boot blocks, but one can do so in order to make sure
things are set up correctly.Press any key, other than Enter, at
the console to interrupt the boot process. The boot
blocks will then prompt for further action:>> FreeBSD/i386 BOOT
Default: 0:ad(0,a)/boot/loader
boot:Verify that the above message appears on either the
serial or internal console, or both, according to the
options in /boot.config. If the
message appears in the correct console, press
Enter to continue the boot
process.If there is no prompt on the serial terminal,
something is wrong with the settings. Enter
then Enter or
Return to tell the boot block (and then
the boot loader and the kernel) to choose the serial port
for the console. Once the system is up, go back and check
what went wrong.During the third stage of the boot process, one can still
switch between the internal console and the serial console by
setting appropriate environment variables in the boot loader.
See &man.loader.8; for more
information.This line in /boot/loader.conf or
/boot/loader.conf.local configures the
boot loader and the kernel to send their boot messages to
the serial console, regardless of the options in
/boot.config:console="comconsole"That line should be the first line of
/boot/loader.conf so that boot messages
are displayed on the serial console as early as
possible.If that line does not exist, or if it is set to
console="vidconsole", the boot loader and
the kernel will use whichever console is indicated by
-h in the boot block. See
&man.loader.conf.5; for more information.At the moment, the boot loader has no option
equivalent to -P in the boot block, and
there is no provision to automatically select the internal
console and the serial console based on the presence of the
keyboard.While it is not required, it is possible to provide a
login prompt over the serial line. To
configure this, edit the entry for the serial port in
/etc/ttys using the instructions in
. If the speed of the serial
port has been changed, change std.9600 to
match the new setting.Setting a Faster Serial Port SpeedBy default, the serial port settings are 9600 baud, 8
bits, no parity, and 1 stop bit. To change the default
console speed, use one of the following options:Edit /etc/make.conf and set
BOOT_COMCONSOLE_SPEED to the new
console speed. Then, recompile and install the boot
blocks and the boot loader:&prompt.root; cd /sys/boot
&prompt.root; make clean
&prompt.root; make
&prompt.root; make installIf the serial console is configured in some other way
than by booting with -h, or if the serial
console used by the kernel is different from the one used
by the boot blocks, add the following option, with the
desired speed, to a custom kernel configuration file and
compile a new kernel:options CONSPEED=19200Add the
-S19200 boot
option to /boot.config, replacing
19200 with the speed to
use.Add the following options to
/boot/loader.conf. Replace
115200 with the speed to
use.boot_multicons="YES"
boot_serial="YES"
comconsole_speed="115200"
console="comconsole,vidconsole"Entering the DDB Debugger from the Serial LineTo configure the ability to drop into the kernel debugger
from the serial console, add the following options to a custom
kernel configuration file and compile the kernel using the
instructions in . Note that
while this is useful for remote diagnostics, it is also
dangerous if a spurious BREAK is generated on the serial port.
Refer to &man.ddb.4; and &man.ddb.8; for more information
about the kernel debugger.options BREAK_TO_DEBUGGER
options DDB
Index: head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml (revision 48529)
@@ -1,1317 +1,1316 @@
VirtualizationMurrayStokelyContributed by AllanJudebhyve section by SynopsisVirtualization software allows multiple operating systems to
run simultaneously on the same computer. Such software systems
for PCs often involve a host operating system
which runs the virtualization software and supports any number
of guest operating systems.After reading this chapter, you will know:The difference between a host operating system and a
guest operating system.How to install &os; on an &intel;-based &apple;
&mac; computer.How to install &os; on µsoft.windows; with
Virtual PC.How to install &os; as a guest in
bhyve.How to tune a &os; system for best performance under
virtualization.Before reading this chapter, you should:Understand the basics of &unix;
and &os;.Know how to install
&os;.Know how to set up a
network connection.Know how to install additional
third-party software.&os; as a Guest on Parallels for
&macos; XParallels Desktop for &mac; is
a commercial software product available for &intel; based
&apple; &mac; computers running &macos; 10.4.6 or higher. &os;
is a fully supported guest operating system. Once
Parallels has been installed on
&macos; X, the user must configure a virtual machine and then
install the desired guest operating system.Installing &os; on Parallels/&macos; XThe first step in installing &os; on
Parallels is to create a new
virtual machine for installing &os;. Select
&os; as the
Guest OS Type when prompted:Choose a reasonable amount of disk and memory
depending on the plans for this virtual &os; instance.
4GB of disk space and 512MB of RAM work well for most uses
of &os; under Parallels:Select the type of networking and a network
interface:Save and finish the configuration:After the &os; virtual machine has been created, &os;
can be installed on it. This is best done with an official
&os; CD/DVD or with an
ISO image downloaded from an official
FTP site. Copy the appropriate
ISO image to the local &mac; filesystem or
insert a CD/DVD in the
&mac;'s CD-ROM drive. Click on the disc
icon in the bottom right corner of the &os;
Parallels window. This will bring
up a window that can be used to associate the
CD-ROM drive in the virtual machine with
the ISO file on disk or with the real
CD-ROM drive.Once this association with the CD-ROM
source has been made, reboot the &os; virtual machine by
clicking the reboot icon.
Parallels will reboot with a
special BIOS that first checks if there is
a CD-ROM.In this case it will find the &os; installation media and
begin a normal &os; installation. Perform the installation,
but do not attempt to configure
&xorg; at this time.When the installation is finished, reboot into the newly
installed &os; virtual machine.Configuring &os; on
ParallelsAfter &os; has been successfully installed on &macos; X
with Parallels, there are a number
of configuration steps that can be taken to optimize the
system for virtualized operation.Set Boot Loader VariablesThe most important step is to reduce the
kern.hz tunable to reduce the CPU
utilization of &os; under the
Parallels environment. This is
accomplished by adding the following line to
/boot/loader.conf:kern.hz=100Without this setting, an idle &os;
Parallels guest will use
roughly 15% of the CPU of a single processor &imac;.
After this change the usage will be closer to 5%.Create a New Kernel Configuration FileAll of the SCSI, FireWire, and USB device drivers
can be removed from a custom kernel configuration file.
Parallels provides a virtual
network adapter used by the &man.ed.4; driver, so all
network devices except for &man.ed.4; and &man.miibus.4;
can be removed from the kernel.Configure NetworkingThe most basic networking setup uses DHCP to connect
the virtual machine to the same local area network as the
host &mac;. This can be accomplished by adding
ifconfig_ed0="DHCP" to
/etc/rc.conf. More advanced
networking setups are described in
.&os; as a Guest on Virtual PC
for &windows;Virtual PC for &windows; is a
µsoft; software product available for free download. See
this website for the system
requirements. Once
Virtual PC has been installed on
µsoft.windows;, the user can configure a virtual machine
and then install the desired guest operating system.Installing &os; on
Virtual PCThe first step in installing &os; on
Virtual PC is to create a new
virtual machine for installing &os;. Select
Create a virtual machine when
prompted:Select Other as the
Operating system when
prompted:Then, choose a reasonable amount of disk and memory
depending on the plans for this virtual &os; instance.
4GB of disk space and 512MB of RAM work well for most uses
of &os; under Virtual PC:Save and finish the configuration:Select the &os; virtual machine and click
Settings, then set the type of networking
and a network interface:After the &os; virtual machine has been created, &os; can
be installed on it. This is best done with an official &os;
CD/DVD or with an
ISO image downloaded from an official
FTP site. Copy the appropriate
ISO image to the local &windows; filesystem
or insert a CD/DVD in
the CD drive, then double click on the &os;
virtual machine to boot. Then, click CD
and choose Capture ISO Image... on the
Virtual PC window. This will bring
up a window where the CD-ROM drive in the
virtual machine can be associated with an
ISO file on disk or with the real
CD-ROM drive.Once this association with the CD-ROM
source has been made, reboot the &os; virtual machine by
clicking Action and
Reset.
Virtual PC will reboot with a
special BIOS that first checks for a
CD-ROM.In this case it will find the &os; installation media
and begin a normal &os; installation. Continue with the
installation, but do not attempt to configure
&xorg; at this time.When the installation is finished, remember to eject the
CD/DVD or release the
ISO image. Finally, reboot into the newly
installed &os; virtual machine.Configuring &os; on Virtual
PCAfter &os; has been successfully installed on
µsoft.windows; with
Virtual PC, there are a number of
configuration steps that can be taken to optimize the system
for virtualized operation.Set Boot Loader VariablesThe most important step is to reduce the
kern.hz tunable to reduce the CPU
utilization of &os; under the
Virtual PC environment. This
is accomplished by adding the following line to
/boot/loader.conf:kern.hz=100Without this setting, an idle &os;
Virtual PC guest OS will
use roughly 40% of the CPU of a single processor
computer. After this change, the usage will be
closer to 3%.Create a New Kernel Configuration FileAll of the SCSI, FireWire, and USB device drivers can
be removed from a custom kernel configuration file.
Virtual PC provides a virtual
network adapter used by the &man.de.4; driver, so all
network devices except for &man.de.4; and &man.miibus.4;
can be removed from the kernel.Configure NetworkingThe most basic networking setup uses DHCP to connect
the virtual machine to the same local area network as the
µsoft.windows; host. This can be accomplished by
adding ifconfig_de0="DHCP" to
/etc/rc.conf. More advanced
networking setups are described in
.&os; as a Guest on VMware Fusion
for &macos;VMware Fusion for &mac; is a
commercial software product available for &intel; based &apple;
&mac; computers running &macos; 10.4.9 or higher. &os; is a
fully supported guest operating system. Once
VMware Fusion has been installed on
&macos; X, the user can configure a virtual machine and then
install the desired guest operating system.Installing &os; on
VMware FusionThe first step is to start
VMware Fusion which will load the
Virtual Machine Library. Click New
to create the virtual machine:This will load the New Virtual Machine Assistant. Click
Continue to proceed:Select Other as the
Operating System and either
&os; or
&os; 64-bit, as the
Version when prompted:Choose the name of the virtual machine and the directory
where it should be saved:Choose the size of the Virtual Hard Disk for the virtual
machine:Choose the method to install the virtual machine, either
from an ISO image or from a
CD/DVD:Click Finish and the virtual
machine will boot:Install &os; as usual:Once the install is complete, the settings of the virtual
machine can be modified, such as memory usage:The System Hardware settings of the virtual machine
cannot be modified while the virtual machine is
running.The number of CPUs the virtual machine will have access
to:The status of the CD-ROM device.
Normally the
CD/DVD/ISO
is disconnected from the virtual machine when it is no longer
needed.The last thing to change is how the virtual machine will
connect to the network. To allow connections to the virtual
machine from other machines besides the host, choose
Connect directly to the physical network
(Bridged). Otherwise,
Share the host's internet connection
(NAT) is preferred so that the virtual machine
can have access to the Internet, but the network cannot access
the virtual machine.After modifying the settings, boot the newly installed
&os; virtual machine.Configuring &os; on VMware
FusionAfter &os; has been successfully installed on &macos; X
with VMware Fusion, there are a
number of configuration steps that can be taken to optimize
the system for virtualized operation.Set Boot Loader VariablesThe most important step is to reduce the
kern.hz tunable to reduce the CPU
utilization of &os; under the
VMware Fusion environment.
This is accomplished by adding the following line to
/boot/loader.conf:kern.hz=100Without this setting, an idle &os;
VMware Fusion guest will use
roughly 15% of the CPU of a single processor &imac;.
After this change, the usage will be closer to 5%.Create a New Kernel Configuration FileAll of the FireWire and USB device drivers can be
removed from a custom kernel configuration file.
VMware Fusion provides a
virtual network adapter used by the &man.em.4; driver, so
all network devices except for &man.em.4; can be removed
from the kernel.Configure NetworkingThe most basic networking setup uses DHCP to connect
the virtual machine to the same local area network as the
host &mac;. This can be accomplished by adding
ifconfig_em0="DHCP" to
/etc/rc.conf. More advanced
networking setups are described in
.&virtualbox; Guest Additions on a &os; Guest&os; works well as a guest in
&virtualbox;. The virtualization
software is available for most common operating systems,
including &os; itself.The &virtualbox; guest additions
provide support for:Clipboard sharing.Mouse pointer integration.Host time synchronization.Window scaling.Seamless mode.These commands are run in the &os; guest.First, install the
emulators/virtualbox-ose-additions package
or port in the &os; guest. This will install the port:&prompt.root; cd /usr/ports/emulators/virtualbox-ose-additions && make install cleanAdd these lines to /etc/rc.conf:vboxguest_enable="YES"
vboxservice_enable="YES"If &man.ntpd.8; or &man.ntpdate.8; is used, disable host
time synchronization:vboxservice_flags="--disable-timesync"Xorg will automatically recognize
the vboxvideo driver. It can also be
manually entered in
/etc/X11/xorg.conf:Section "Device"
Identifier "Card0"
Driver "vboxvideo"
VendorName "InnoTek Systemberatung GmbH"
BoardName "VirtualBox Graphics Adapter"
EndSectionTo use the vboxmouse driver, adjust the
mouse section in /etc/X11/xorg.conf:Section "InputDevice"
Identifier "Mouse0"
Driver "vboxmouse"
EndSectionHAL users should create the following
/usr/local/etc/hal/fdi/policy/90-vboxguest.fdi
or copy it from
/usr/local/share/hal/fdi/policy/10osvendor/90-vboxguest.fdi:<?xml version="1.0" encoding="utf-8"?>
<!--
# Sun VirtualBox
# Hal driver description for the vboxmouse driver
# $Id: chapter.xml,v 1.33 2012-03-17 04:53:52 eadler Exp $
Copyright (C) 2008-2009 Sun Microsystems, Inc.
This file is part of VirtualBox Open Source Edition (OSE, as
available from http://www.virtualbox.org. This file is free software;
you can redistribute it and/or modify it under the terms of the GNU
General Public License (GPL) as published by the Free Software
Foundation, in version 2 as it comes in the "COPYING" file of the
VirtualBox OSE distribution. VirtualBox OSE is distributed in the
hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
Clara, CA 95054 USA or visit http://www.sun.com if you need
additional information or have any questions.
-->
<deviceinfo version="0.2">
<device>
<match key="info.subsystem" string="pci">
<match key="info.product" string="VirtualBox guest Service">
<append key="info.capabilities" type="strlist">input</append>
<append key="info.capabilities" type="strlist">input.mouse</append>
<merge key="input.x11_driver" type="string">vboxmouse</merge>
<merge key="input.device" type="string">/dev/vboxguest</merge>
</match>
</match>
</device>
</deviceinfo>&os; as a Host with
VirtualBox&virtualbox; is an actively
developed, complete virtualization package that is available
for most operating systems including &windows;, &macos;, &linux;
and &os;. It is equally capable of running &windows; or
&unix;-like guests. It is released as open source software, but
with closed-source components available in a separate extension
pack. These components include support for USB 2.0 devices.
More information may be found on the Downloads
page of the &virtualbox;
wiki. Currently, these extensions are not available
for &os;.Installing &virtualbox;&virtualbox; is available as a
&os; package or port in
emulators/virtualbox-ose. The port can be
installed using these commands:&prompt.root; cd /usr/ports/emulators/virtualbox-ose
&prompt.root; make install cleanOne useful option in the port's configuration menu is the
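Alternatively, if compiling the port is not required, the prebuilt binary package can be installed with pkg (assuming the package uses the same name as the port directory):

```
&prompt.root; pkg install virtualbox-ose
```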
GuestAdditions suite of programs. These
provide a number of useful features in guest operating
systems, like mouse pointer integration (allowing the mouse to
be shared between host and guest without the need to press a
special keyboard shortcut to switch) and faster video
rendering, especially in &windows; guests. The guest
additions are available in the Devices
menu, after the installation of the guest is finished.A few configuration changes are needed before
&virtualbox; is started for the
first time. The port installs a kernel module in
/boot/modules which
must be loaded into the running kernel:&prompt.root; kldload vboxdrvTo ensure the module is always loaded after a reboot,
add this line to
/boot/loader.conf:vboxdrv_load="YES"To use the kernel modules that allow bridged or host-only
networking, add this line to
/etc/rc.conf and reboot the
computer:vboxnet_enable="YES"The vboxusers
group is created during installation of
&virtualbox;. All users that need
access to &virtualbox; will have to
be added as members of this group. pw can
be used to add new members:&prompt.root; pw groupmod vboxusers -m yourusernameThe default permissions for
/dev/vboxnetctl are restrictive and need
to be changed for bridged networking:&prompt.root; chown root:vboxusers /dev/vboxnetctl
&prompt.root; chmod 0660 /dev/vboxnetctlTo make this permissions change permanent, add these
lines to /etc/devfs.conf:own vboxnetctl root:vboxusers
perm vboxnetctl 0660To launch &virtualbox;,
type from a &xorg; session:&prompt.user; VirtualBoxFor more information on configuring and using
&virtualbox;, refer to the
official
website. For &os;-specific information and
troubleshooting instructions, refer to the relevant
page in the &os; wiki.&virtualbox; USB SupportIn order to be able to read and write to USB devices,
users need to be members of
operator:&prompt.root; pw groupmod operator -m jerryThen, add the following to
/etc/devfs.rules, or create this file if
it does not exist yet:[system=10]
add path 'usb/*' mode 0660 group operatorTo load these new rules, add the following to
/etc/rc.conf:devfs_system_ruleset="system"Then, restart devfs:&prompt.root; service devfs restartUSB can now be enabled in the guest operating system. USB
devices should be visible in the &virtualbox;
preferences.&virtualbox; Host
DVD/CD AccessAccess to the host
DVD/CD drives from
guests is achieved through the sharing of the physical drives.
Within &virtualbox;, this is set up from the Storage window in
the Settings of the virtual machine. If needed, create an
empty IDE
CD/DVD device first.
Then choose the Host Drive from the popup menu for the virtual
CD/DVD drive selection.
A checkbox labeled Passthrough will appear.
This allows the virtual machine to use the hardware directly.
For example, audio CDs or the burner will
only function if this option is selected.HAL needs to run for
&virtualbox;
DVD/CD functions to
work, so enable it in /etc/rc.conf and
start it if it is not already running:hald_enable="YES"&prompt.root; service hald startIn order for users to be able to use
&virtualbox;
DVD/CD functions, they
need access to /dev/xpt0,
/dev/cdN, and
/dev/passN.
This is usually achieved by making the user a member of
operator.
Permissions to these devices have to be corrected by adding
these lines to /etc/devfs.conf:perm cd* 0660
perm xpt0 0660
perm pass* 0660&prompt.root; service devfs restart&os; as a Host with
bhyve
The bhyve BSD-licensed hypervisor became part of the
base system with &os; 10.0-RELEASE. This hypervisor supports a
number of guests, including &os;, OpenBSD, and many &linux;
distributions. Currently, bhyve only
supports a serial console and does not emulate a graphical
console. Virtualization offload features of newer
CPUs are used to avoid the legacy methods of
translating instructions and manually managing memory
mappings.
The bhyve design requires a
processor that supports &intel; Extended Page Tables
(EPT) or &amd; Rapid Virtualization Indexing
(RVI) or Nested Page Tables
(NPT). Hosting &linux; guests or &os; guests
with more than one vCPU requires
VMX unrestricted mode support
(UG). Most newer processors, specifically
the &intel; &core; i3/i5/i7 and &intel; &xeon;
E3/E5/E7, support these features. UG support
was introduced with Intel's Westmere micro-architecture. For a
complete list of &intel; processors that support
EPT, refer to .
RVI is found on the third generation and
later of the &amd.opteron; (Barcelona) processors. The easiest
way to tell if a processor supports
bhyve is to run
dmesg or look in
/var/run/dmesg.boot for the
POPCNT processor feature flag on the
Features2 line for &amd; processors or
EPT and UG on the
VT-x line for &intel; processors.
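As a sketch, that flag check can be scripted. The dmesg lines below are made-up samples for illustration; on a real host, grep /var/run/dmesg.boot instead of the sample variable:

```shell
# Sketch: look for the CPU feature flags named above.
# AMD: POPCNT on the Features2 line; Intel: EPT and UG on the VT-x line.
# dmesg_sample is an illustrative stand-in for /var/run/dmesg.boot.
dmesg_sample='  Features2=0x7ffafbff<SSE3,PCLMULQDQ,SSE4.1,SSE4.2,POPCNT,AESNI>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID'
if echo "$dmesg_sample" | grep -Eq 'POPCNT|EPT'; then
    echo "CPU looks bhyve-capable"
fi
```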
Preparing the HostThe first step to creating a virtual machine in
bhyve is configuring the host
system. First, load the bhyve
kernel module:&prompt.root; kldload vmmThen, create a tap interface for the
network device in the virtual machine to attach to. In order
for the network device to participate in the network, also
create a bridge interface containing the
tap interface and the physical interface
as members. In this example, the physical interface is
igb0:&prompt.root; ifconfig tap0 create
&prompt.root; sysctl net.link.tap.up_on_open=1
net.link.tap.up_on_open: 0 -> 1
&prompt.root; ifconfig bridge0 create
&prompt.root; ifconfig bridge0 addm igb0 addm tap0
&prompt.root; ifconfig bridge0 upCreating a FreeBSD GuestCreate a file to use as the virtual disk for the guest
machine. Specify the size and name of the virtual
disk:&prompt.root; truncate -s 16G guest.imgDownload an installation image of &os; to install:&prompt.root; fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.2/FreeBSD-10.2-RELEASE-amd64-bootonly.iso
FreeBSD-10.2-RELEASE-amd64-bootonly.iso 100% of 230 MB 570 kBps 06m17s&os; comes with an example script for running a virtual
machine in bhyve. The script will
start the virtual machine and run it in a loop, so it will
automatically restart if it crashes. The script takes a
number of options to control the configuration of the machine:
-c controls the number of virtual CPUs,
-m limits the amount of memory available to
the guest, -t defines which
tap device to use,
-d indicates which disk image to use, -i tells
bhyve to boot from the
CD image instead of the disk, and
-I defines which CD image
to use. The last parameter is the name of the virtual
machine, used to track the running machines. This example
starts the virtual machine in installation mode:&prompt.root; sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-10.2-RELEASE-amd64-bootonly.iso guestnameThe virtual machine will boot and start the installer.
After installing a system in the virtual machine, when the
system asks about dropping in to a shell at the end of the
installation, choose Yes. A small
change needs to be made to make the system start with a serial
console. Edit /etc/ttys and replace the
existing ttyu0 line with:ttyu0 "/usr/libexec/getty 3wire" xterm on secureBeginning with &os; 9.3-RELEASE and
10.1-RELEASE, the console is configured
automatically.Reboot the virtual machine. While rebooting the virtual
machine causes bhyve to exit, the
vmrun.sh script runs
bhyve in a loop and will automatically
restart it. When this happens, choose the reboot option from
the boot loader menu in order to escape the loop. Now the
guest can be started from the virtual disk:&prompt.root; sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img guestnameCreating a &linux; GuestIn order to boot operating systems other than &os;, the
sysutils/grub2-bhyve port must first be
installed.Next, create a file to use as the virtual disk for the
guest machine:&prompt.root; truncate -s 16G linux.imgStarting a virtual machine with
bhyve is a two-step process. First,
a kernel must be loaded, then the guest can be started. The
&linux; kernel is loaded with
sysutils/grub2-bhyve. Create a
device.map that
grub will use to map the virtual
devices to the files on the host system:(hd0) ./linux.img
(cd0) ./somelinux.isoUse sysutils/grub2-bhyve to load the
&linux; kernel from the ISO image:&prompt.root; grub-bhyve -m device.map -r cd0 -M 1024M linuxguestThis will start grub. If the installation
CD contains a
grub.cfg, a menu will be displayed.
If not, the vmlinuz and
initrd files must be located and loaded
manually:grub> ls
(hd0) (cd0) (cd0,msdos1) (host)
grub> ls (cd0)/isolinux
boot.cat boot.msg grub.conf initrd.img isolinux.bin isolinux.cfg memtest
splash.jpg TRANS.TBL vesamenu.c32 vmlinuz
grub> linux (cd0)/isolinux/vmlinuz
grub> initrd (cd0)/isolinux/initrd.img
grub> bootNow that the &linux; kernel is loaded, the guest can be
started:&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./linux.img \
-s 4:0,ahci-cd,./somelinux.iso -l com1,stdio -c 4 -m 1024M linuxguestThe system will boot and start the installer. After
installing a system in the virtual machine, reboot the virtual
machine. This will cause bhyve to
exit. The instance of the virtual machine needs to be
destroyed before it can be started again:&prompt.root; bhyvectl --destroy --vm=linuxguestNow the guest can be started directly from the virtual
disk. Load the kernel:&prompt.root; grub-bhyve -m device.map -r hd0,msdos1 -M 1024M linuxguest
grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1) (cd0) (cd0,msdos1) (host)
(lvm/VolGroup-lv_swap) (lvm/VolGroup-lv_root)
grub> ls (hd0,msdos1)/
lost+found/ grub/ efi/ System.map-2.6.32-431.el6.x86_64 config-2.6.32-431.el6.x
86_64 symvers-2.6.32-431.el6.x86_64.gz vmlinuz-2.6.32-431.el6.x86_64
initramfs-2.6.32-431.el6.x86_64.img
grub> linux (hd0,msdos1)/vmlinuz-2.6.32-431.el6.x86_64 root=/dev/mapper/VolGroup-lv_root
grub> initrd (hd0,msdos1)/initramfs-2.6.32-431.el6.x86_64.img
grub> bootBoot the virtual machine:&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 \
-s 3:0,virtio-blk,./linux.img -l com1,stdio -c 4 -m 1024M linuxguest&linux; will now boot in the virtual machine and
eventually present you with the login prompt. Login and use
the virtual machine. When you are finished, reboot the
virtual machine to exit bhyve.
Destroy the virtual machine instance:&prompt.root; bhyvectl --destroy --vm=linuxguestUsing ZFS with
bhyve GuestsIf ZFS is available on the host
machine, using ZFS volumes
instead of disk image files can provide significant
performance benefits for the guest VMs. A
ZFS volume can be created by:&prompt.root; zfs create -V16G -o volmode=dev zroot/linuxdisk0When starting the VM, specify the
ZFS volume as the disk drive:&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,/dev/zvol/zroot/linuxdisk0 \
-l com1,stdio -c 4 -m 1024M linuxguestVirtual Machine ConsolesIt is advantageous to wrap the
bhyve console in a session
management tool such as sysutils/tmux or
sysutils/screen in order to detach and
reattach to the console. It is also possible to have the
console of bhyve be a null modem
device that can be accessed with cu. To do
this, load the nmdm kernel module and
replace -l com1,stdio with
-l com1,/dev/nmdm0A. The
/dev/nmdm devices are created
automatically as needed, where each is a pair, corresponding
to the two ends of the null modem cable
(/dev/nmdm0A and
/dev/nmdm0B). See &man.nmdm.4; for more
information.&prompt.root; kldload nmdm
&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./linux.img \
-l com1,/dev/nmdm0A -c 4 -m 1024M linuxguest
&prompt.root; cu -l /dev/nmdm0B
Connected
Ubuntu 13.10 handbook ttyS0
handbook login:Managing Virtual MachinesA device node is created in /dev/vmm for each virtual
machine. This allows the administrator to easily see a list
of the running virtual machines:&prompt.root; ls -al /dev/vmm
total 1
dr-xr-xr-x 2 root wheel 512 Mar 17 12:19 ./
dr-xr-xr-x 14 root wheel 512 Mar 17 06:38 ../
crw------- 1 root wheel 0x1a2 Mar 17 12:20 guestname
crw------- 1 root wheel 0x19f Mar 17 12:19 linuxguest
crw------- 1 root wheel 0x1a1 Mar 17 12:19 otherguestA specified virtual machine can be destroyed using
bhyvectl:&prompt.root; bhyvectl --destroy --vm=guestnamePersistent ConfigurationIn order to configure the system to start
bhyve guests at boot time, the
following configurations must be made in the specified
files:/etc/sysctl.confnet.link.tap.up_on_open=1/boot/loader.confvmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
if_tap_load="YES"/etc/rc.confcloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm igb0 addm tap0"
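Putting the pieces of this section together, a hypothetical one-shot script (run once after boot) could load the modules, build the network, and launch the guest. The module names match the /boot/loader.conf entries above, while igb0, guest.img, and guestname are this section's example values:

```shell
#!/bin/sh
# Hypothetical start-up sketch combining the steps in this section.
kldload -n vmm nmdm if_bridge if_tap        # -n: skip modules already loaded
sysctl net.link.tap.up_on_open=1            # bring tap up when a VM opens it
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm igb0 addm tap0 up     # igb0: example physical interface
# vmrun.sh runs the guest in a loop, restarting it if it exits
sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img guestname
```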
Index: head/en_US.ISO8859-1/books/handbook/x11/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/x11/chapter.xml (revision 48528)
+++ head/en_US.ISO8859-1/books/handbook/x11/chapter.xml (revision 48529)
@@ -1,2133 +1,2135 @@
The X Window SystemSynopsisAn installation of &os; using
bsdinstall does not automatically
install a graphical user interface. This chapter describes how
to install and configure &xorg;,
which provides the open source X Window System used to provide a
graphical environment. It then describes how to find and
install a desktop environment or window manager.Users who prefer an installation method that automatically
configures &xorg; and offers a
choice of window managers during installation should refer to
the pcbsd.org
website.For more information on the video hardware that
&xorg; supports, refer to the x.org website.After reading this chapter, you will know:The various components of the X Window System, and how
they interoperate.How to install and configure
&xorg;.How to install and configure several window managers
and desktop environments.How to use &truetype; fonts in
&xorg;.How to set up your system for graphical logins
(XDM).Before reading this chapter, you should:Know how to install additional third-party
software as described in .TerminologyWhile it is not necessary to understand all of the details
of the various components in the X Window System and how they
interact, some basic knowledge of these components can be
useful.X serverX was designed from the beginning to be
network-centric, and adopts a client-server
model. In this model, the X server runs on
the computer that has the keyboard, monitor, and mouse
attached. The server's responsibility includes tasks such
as managing the display, handling input from the keyboard
and mouse, and handling input or output from other devices
such as a tablet or a video projector. This confuses some
people, because the X terminology is exactly backward to
what they expect. They expect the X server
to be the big powerful machine down the hall, and the
X client to be the machine on their
desk.X clientEach X application, such as
XTerm or
Firefox, is a
client. A client sends messages to the
server such as Please draw a window at these
coordinates, and the server sends back messages
such as The user just clicked on the OK
button.In a home or small office environment, the X server
and the X clients commonly run on the same computer. It
is also possible to run the X server on a less powerful
computer and to run the X applications on a more powerful
system. In this scenario, the communication between the X
client and server takes place over the network.window managerX does not dictate what windows should look like
on-screen, how to move them around with the mouse, which
keystrokes should be used to move between windows, what
the title bars on each window should look like, whether or
not they have close buttons on them, and so on. Instead,
X delegates this responsibility to a separate window
manager application. There are dozens of window
managers available. Each window manager provides
a different look and feel: some support virtual desktops,
some allow customized keystrokes to manage the desktop,
some have a Start button, and some are
themeable, allowing a complete change of the desktop's
look-and-feel. Window managers are available in the
x11-wm category of the Ports
Collection.Each window manager uses a different configuration
mechanism. Some expect a configuration file written by hand
while others provide graphical tools for most
configuration tasks.desktop environmentKDE and
GNOME are considered to be
desktop environments as they include an entire suite of
applications for performing common desktop tasks. These
may include office suites, web browsers, and games.focus policyThe window manager is responsible for the mouse focus
policy. This policy provides some means for choosing
which window is actively receiving keystrokes and it
should also visibly indicate which window is currently
active.One focus policy is called
click-to-focus. In this model, a window
becomes active upon receiving a mouse click. In the
focus-follows-mouse policy, the window that
is under the mouse pointer has focus and the focus is
changed by pointing at another window. If the mouse is
over the root window, then this window is focused. In the
sloppy-focus model, if the mouse is moved
over the root window, the most recently used window still
has the focus. With sloppy-focus, focus is only changed
when the cursor enters a new window, and not when exiting
the current window. In the click-to-focus
policy, the active window is selected by mouse click. The
window may then be raised and appear in front of all other
windows. All keystrokes will now be directed to this
window, even if the cursor is moved to another
window.Different window managers support different focus
models. All of them support click-to-focus, and the
majority of them also support other policies. Consult the
documentation for the window manager to determine which
focus models are available.widgetsWidget is a term for all of the items in the user
interface that can be clicked or manipulated in some way.
This includes buttons, check boxes, radio buttons, icons,
and lists. A widget toolkit is a set of widgets used to
create graphical applications. There are several popular
widget toolkits, including Qt, used by
KDE, and GTK+, used by
GNOME. As a result,
applications will have a different look and feel,
depending upon which widget toolkit was used to create the
application.Installing &xorg;On &os;, &xorg; can be installed
as a package or port.To build and install from the Ports Collection:&prompt.root; cd /usr/ports/x11/xorg
&prompt.root; make install cleanThe binary package can be installed more quickly but with
fewer options for customization:&prompt.root; pkg install xorgEither of these installations results in the complete
&xorg; system being installed. This
is the best option for most users.A smaller version of the X system suitable for experienced
users is available in x11/xorg-minimal. Most
of the documents, libraries, and applications will not be
installed. Some applications require these additional
components to function.&xorg; ConfigurationWarrenBlockOriginally contributed by&xorg;&xorg;Quick Start&xorg; supports most common
video cards, keyboards, and pointing devices. These devices
are automatically detected and do not require any manual
configuration.If &xorg; has been used on
this computer before, move or remove any existing
configuration files:&prompt.root; mv /etc/X11/xorg.conf ~/xorg.conf.etc
&prompt.root; mv /usr/local/etc/X11/xorg.conf ~/xorg.conf.localetcAdd the user who will run
&xorg; to the
video or
wheel group to enable 3D acceleration
when available. To add user
jru to whichever group is
available:&prompt.root; pw groupmod video -m jru || pw groupmod wheel -m jruThe TWM window manager is included
by default. It is started when
&xorg; starts:&prompt.user; startxOn some older versions of &os;, the system console
must be set to &man.vt.4; before switching back to the
text console will work properly. See
.User Group for Accelerated VideoAccess to /dev/dri is needed to allow
3D acceleration on video cards. It is usually simplest to add
the user who will be running X to either the
video or wheel group.
Here, &man.pw.8; is used to add user
slurms to the
video group, or to the
wheel group if there is no
video group:&prompt.root; pw groupmod video -m slurms || pw groupmod wheel -m slurmsKernel Mode Setting (KMS)When the computer switches from displaying the console to
a higher screen resolution for X, it must set the video
output mode. Recent versions of
&xorg; use a system inside the kernel to do
these mode changes more efficiently. Older versions of &os;
use &man.sc.4;, which is not aware of the
KMS system. The end result is that after
closing X, the system console is blank, even though it is
still working. The newer &man.vt.4; console avoids this
problem.Add this line to /boot/loader.conf
to enable &man.vt.4;:kern.vty=vtConfiguration FilesDirectory&xorg; looks in several
directories for configuration files.
/usr/local/etc/X11/ is the recommended
directory for these files on &os;. Using this directory
helps keep application files separate from operating system
files.Storing configuration files in the legacy
/etc/X11/ still works. However, this
mixes application files with the base &os; files and is not
recommended.Single or Multiple FilesIt is easier to use multiple files that each configure a
specific setting than the traditional single
xorg.conf. These files are stored in
the xorg.conf.d/ subdirectory of the
main configuration file directory. The full path is
typically
/usr/local/etc/X11/xorg.conf.d/.Examples of these files are shown later in this
section.The traditional single xorg.conf
still works, but is neither as clear nor as flexible as
multiple files in the xorg.conf.d/
subdirectory.Video Cards&intel;3D acceleration is supported on most &intel;
graphics up to Ivy Bridge (HD Graphics 2500, 4000, and
P4000), including Iron Lake (HD Graphics) and
Sandy Bridge (HD Graphics 2000).Driver name: intelFor reference, see https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units.
&amd; Radeon2D and 3D acceleration is supported on Radeon
cards up to and including the HD6000 series.Driver name: radeonFor reference, see https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units.
NVIDIASeveral NVIDIA drivers are available in the
x11 category of the Ports
Collection. Install the driver that matches the video
card.For reference, see https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units.
Hybrid Combination GraphicsSome notebook computers add additional graphics
processing units to those built into the chipset or
processor. Optimus combines
&intel; and NVIDIA hardware.
Switchable Graphics or
Hybrid Graphics are a combination
of an &intel; or &amd; processor and an &amd; Radeon
GPU.Implementations of these hybrid graphics systems
vary, and &xorg; on &os; is
not able to drive all versions of them.Some computers provide a BIOS
option to disable one of the graphics adapters or select
a discrete mode which can be used
with one of the standard video card drivers. For
example, it is sometimes possible to disable the NVIDIA
GPU in an Optimus system. The
&intel; video can then be used with an &intel;
driver.BIOS settings depend on the model
of computer. In some situations, both
GPUs can be left enabled, but
creating a configuration file that only uses the main
GPU in the Device
section is enough to make such a system
functional.Other Video CardsDrivers for some less-common video cards can be
found in the x11-drivers directory
of the Ports Collection.Cards that are not supported by a specific driver
might still be usable with the
x11-drivers/xf86-video-vesa driver.
This driver is installed by x11/xorg.
It can also be installed manually as
x11-drivers/xf86-video-vesa.
&xorg; attempts to use this
driver when a specific driver is not found for the video
card.x11-drivers/xf86-video-scfb is a
similar nonspecialized video driver that works on many
UEFI and &arm; computers.Setting the Video Driver in a FileTo set the &intel; driver in a configuration
file:Select &intel; Video Driver in a File/usr/local/etc/X11/xorg.conf.d/driver-intel.confSection "Device"
Identifier "Card0"
Driver "intel"
# BusID "PCI:1:0:0"
EndSectionIf more than one video card is present, the
BusID identifier can be uncommented
and set to select the desired card. A list of video
card bus IDs can be displayed with
pciconf -lv | grep -B3
display.To set the Radeon driver in a configuration
file:Select Radeon Video Driver in a File/usr/local/etc/X11/xorg.conf.d/driver-radeon.confSection "Device"
Identifier "Card0"
Driver "radeon"
EndSectionTo set the VESA driver in a
configuration file:Select VESA Video Driver in a
File/usr/local/etc/X11/xorg.conf.d/driver-vesa.confSection "Device"
Identifier "Card0"
Driver "vesa"
EndSectionMonitorsAlmost all monitors support the Extended Display
Identification Data standard (EDID).
&xorg; uses EDID
to communicate with the monitor and detect the supported
resolutions and refresh rates. Then it selects the most
appropriate combination of settings to use with that
monitor.Other resolutions supported by the monitor can be
chosen by setting the desired resolution in configuration
files, or after the X server has been started with
&man.xrandr.1;.Using &man.xrandr.1;Run &man.xrandr.1; without any parameters to see a
list of video outputs and detected monitor modes:&prompt.user; xrandr
Screen 0: minimum 320 x 200, current 3000 x 1920, maximum 8192 x 8192
DVI-0 connected primary 1920x1200+1080+0 (normal left inverted right x axis y axis) 495mm x 310mm
1920x1200 59.95*+
1600x1200 60.00
1280x1024 85.02 75.02 60.02
1280x960 60.00
1152x864 75.00
1024x768 85.00 75.08 70.07 60.00
832x624 74.55
800x600 75.00 60.32
640x480 75.00 60.00
720x400 70.08
DisplayPort-0 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)This shows that the DVI-0 output
is being used to display a screen resolution of
1920x1200 pixels at a refresh rate of about 60 Hz.
Monitors are not attached to the
DisplayPort-0 and
HDMI-0 connectors.Any of the other display modes can be selected with
&man.xrandr.1;. For example, to switch to 1280x1024 at
60 Hz:&prompt.user; xrandr --mode 1280x1024 --rate 60A common task is using the external video output on
a notebook computer for a video projector.The type and quantity of output connectors varies
between devices, and the name given to each output
varies from driver to driver. What one driver calls
HDMI-1, another might call
HDMI1. So the first step is to run
&man.xrandr.1; to list all the available
outputs:&prompt.user; xrandr
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
1366x768 60.04*+
1024x768 60.00
800x600 60.32 56.25
640x480 59.94
VGA1 connected (normal left inverted right x axis y axis)
1280x1024 60.02 + 75.02
1280x960 60.00
1152x864 75.00
1024x768 75.08 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32 56.25
640x480 75.00 72.81 66.67 60.00
720x400 70.08
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)Four outputs were found: the built-in panel
LVDS1, and external
VGA1, HDMI1, and
DP1 connectors.The projector has been connected to the
VGA1 output. &man.xrandr.1; is now
used to set that output to the native resolution of the
projector and add the additional space to the right side
of the desktop:&prompt.user; xrandr --output VGA1 --auto --right-of LVDS1--auto chooses the resolution and
refresh rate detected by EDID. If
the resolution is not correctly detected, a fixed value
can be given with --mode instead of
the --auto statement. For example,
most projectors can be used with a 1024x768 resolution,
which is set with
--mode 1024x768.&man.xrandr.1; is often run from
.xinitrc to set the appropriate
mode when X starts.Setting Monitor Resolution in a FileTo set a screen resolution of 1024x768 in a
configuration file:Set Screen Resolution in a File/usr/local/etc/X11/xorg.conf.d/screen-resolution.confSection "Screen"
Identifier "Screen0"
Device "Card0"
SubSection "Display"
Modes "1024x768"
EndSubSection
EndSectionThe few monitors that do not have
EDID can be configured by setting
HorizSync and
VertRefresh to the range of
frequencies supported by the monitor.Manually Setting Monitor Frequencies/usr/local/etc/X11/xorg.conf.d/monitor0-freq.confSection "Monitor"
Identifier "Monitor0"
HorizSync 30-83 # kHz
VertRefresh 50-76 # Hz
EndSectionInput DevicesKeyboardsKeyboard LayoutThe standardized location of keys on a keyboard
is called a layout. Layouts and
other adjustable parameters are listed in
&man.xkeyboard-config.7;.A United States layout is the default. To select
an alternate layout, set the
XkbLayout and
XkbVariant options in an
InputClass. This will be applied
to all input devices that match the class.This example selects a French keyboard layout with
the oss variant.Setting a Keyboard Layout/usr/local/etc/X11/xorg.conf.d/keyboard-fr-oss.confSection "InputClass"
Identifier "KeyboardDefaults"
Driver "keyboard"
MatchIsKeyboard "on"
Option "XkbLayout" "fr"
Option "XkbVariant" "oss"
EndSectionSetting Multiple Keyboard LayoutsSet United States, Spanish, and Ukrainian
keyboard layouts. Cycle through these layouts by
pressing
AltShift. x11/xxkb or
x11/sbxkb can be used for
improved layout switching control and
current layout indicators./usr/local/etc/X11/xorg.conf.d/kbd-layout-multi.confSection "InputClass"
Identifier "All Keyboards"
MatchIsKeyboard "yes"
Option "XkbLayout" "us, es, ua"
EndSectionClosing &xorg; From the
KeyboardX can be closed with a combination of keys.
By default, that key combination is not set because it
conflicts with keyboard commands for some
applications. Enabling this option requires changes
to the keyboard InputDevice
section:Enabling Keyboard Exit from X/usr/local/etc/X11/xorg.conf.d/keyboard-zap.confSection "InputClass"
Identifier "KeyboardDefaults"
Driver "keyboard"
MatchIsKeyboard "on"
Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSectionMice and Pointing DevicesMany mouse parameters can be adjusted with configuration
options. See &man.mousedrv.4x; for a full list.Mouse ButtonsThe number of buttons on a mouse can be set in the
mouse InputDevice section of
xorg.conf. To set the number of
buttons to 7:Setting the Number of Mouse Buttons/usr/local/etc/X11/xorg.conf.d/mouse0-buttons.confSection "InputDevice"
Identifier "Mouse0"
Option "Buttons" "7"
EndSectionManual ConfigurationIn some cases, &xorg;
autoconfiguration does not work with particular hardware, or a
different configuration is desired. For these cases, a custom
configuration file can be created.A configuration file can be generated by
&xorg; based on the detected
hardware. This file is often a useful starting point for
custom configurations.Generating an xorg.conf:&prompt.root; Xorg -configureThe configuration file is saved to
/root/xorg.conf.new. Make any changes
desired, then test that file with:&prompt.root; Xorg -config /root/xorg.conf.newAfter the new configuration has been adjusted and tested,
it can be split into smaller files in the normal location,
/usr/local/etc/X11/xorg.conf.d/.Using Fonts in &xorg;Type1 FontsThe default fonts that ship with
&xorg; are less than ideal for
typical desktop publishing applications. Large presentation
fonts show up jagged and unprofessional looking, and small
fonts are almost completely unintelligible. However, there
are several free, high quality Type1 (&postscript;) fonts
available which can be readily used with
&xorg;. For instance, the URW font
collection (x11-fonts/urwfonts) includes
high quality versions of standard Type1 fonts (Times Roman, Helvetica, Palatino, and others). The
Freefonts collection (x11-fonts/freefonts)
includes many more fonts, but most of them are intended for
use in graphics software such as the
Gimp, and are not complete enough
to serve as screen fonts. In addition,
&xorg; can be configured to use
&truetype; fonts with a minimum of effort. For more details
on this, see the &man.X.7; manual page or .To install the above Type1 font collections from the Ports
Collection, run the following commands:&prompt.root; cd /usr/ports/x11-fonts/urwfonts
&prompt.root; make install cleanAnd likewise with the freefont or other collections. To
have the X server detect these fonts, add an appropriate line
to the X server configuration file
(/etc/X11/xorg.conf), which reads:FontPath "/usr/local/share/fonts/urwfonts/"Alternatively, at the command line in the X session
run:&prompt.user; xset fp+ /usr/local/share/fonts/urwfonts
&prompt.user; xset fp rehashThis will work but will be lost when the X session is
closed, unless it is added to the startup file
(~/.xinitrc for a normal
startx session, or
~/.xsession when logging in through a
graphical login manager like XDM).
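As a sketch of that startup-file approach, a minimal ~/.xinitrc for the plain startx case could persist the font path before launching the window manager; twm here is only a placeholder for whatever window manager is actually in use:

```shell
# Hypothetical ~/.xinitrc sketch: re-add the URW font path at each
# X session start, then hand control to the window manager.
xset fp+ /usr/local/share/fonts/urwfonts
xset fp rehash
exec twm    # placeholder: substitute your preferred window manager
```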
A third way is to use the new
/usr/local/etc/fonts/local.conf as
demonstrated in .&truetype; FontsTrueType FontsfontsTrueType&xorg; has built in support for
rendering &truetype; fonts. There are two different modules
that can enable this functionality. The freetype module is
used in this example because it is more consistent with the
other font rendering back-ends. To enable the freetype module
just add the following line to the "Module"
section of /etc/X11/xorg.conf.Load "freetype"Now make a directory for the &truetype; fonts (for
example, /usr/local/share/fonts/TrueType)
and copy all of the &truetype; fonts into this directory.
Keep in mind that &truetype; fonts cannot be directly taken
from an &apple; &mac;; they must be in
&unix;/&ms-dos;/&windows; format for use by
&xorg;. Once the files have been
copied into this directory, use
mkfontdir to create a
fonts.dir, so that the X font renderer
knows that these new files have been installed.
mkfontdir can be installed as a package:
&prompt.root; pkg install mkfontdir
Then create an index of X font files in a directory:&prompt.root; cd /usr/local/share/fonts/TrueType
&prompt.root; mkfontdirNow add the &truetype; directory to the font path. This
is just the same as described in :&prompt.user; xset fp+ /usr/local/share/fonts/TrueType
&prompt.user; xset fp rehashor add a FontPath line to
xorg.conf.Now Gimp,
Apache OpenOffice, and all of the other X applications should recognize the installed &truetype; fonts. Extremely small fonts (as with text in a high resolution display on a web page) and extremely large fonts (within &staroffice;) will look much better now.
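The FontPath line mentioned above can be sketched as follows; this is a minimal example for the "Files" section of xorg.conf, assuming the same directory used in this section:

```
Section "Files"
    FontPath "/usr/local/share/fonts/TrueType/"
EndSection
```

Unlike xset, a FontPath entry persists across X server restarts.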
Anti-Aliased Fontsanti-aliased fontsfontsanti-aliasedAll fonts in &xorg; that are
found in /usr/local/share/fonts/ and
~/.fonts/ are automatically made
available for anti-aliasing to Xft-aware applications. Most
recent applications are Xft-aware, including
KDE,
GNOME, and
Firefox.In order to control which fonts are anti-aliased, or to
configure anti-aliasing properties, create (or edit, if it
already exists) the file
/usr/local/etc/fonts/local.conf. Several
advanced features of the Xft font system can be tuned using
this file; this section describes only some simple
possibilities. For more details, please see
&man.fonts-conf.5;.XMLThis file must be in XML format. Pay careful attention to
case, and make sure all tags are properly closed. The file
begins with the usual XML header followed by a DOCTYPE
definition, and then the <fontconfig>
tag:<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>As previously stated, all fonts in
/usr/local/share/fonts/ as well as
~/.fonts/ are already made available to
Xft-aware applications. If you wish to add another directory
outside of these two directory trees, add a line similar to
the following to
/usr/local/etc/fonts/local.conf:<dir>/path/to/my/fonts</dir>After adding new fonts, and especially new font
directories, you should run the following command to rebuild
the font caches:&prompt.root; fc-cache -fAnti-aliasing makes borders slightly fuzzy, which makes
very small text more readable and removes
staircases from large text, but can cause
eyestrain if applied to normal text. To exclude font sizes
smaller than 14 point from anti-aliasing, include these
lines: <match target="font">
<test name="size" compare="less">
<double>14</double>
</test>
<edit name="antialias" mode="assign">
<bool>false</bool>
</edit>
</match>
<match target="font">
<test name="pixelsize" compare="less" qual="any">
<double>14</double>
</test>
<edit mode="assign" name="antialias">
<bool>false</bool>
</edit>
</match>fontsspacingSpacing for some monospaced fonts may also be
inappropriate with anti-aliasing. This seems to be an issue
with KDE, in particular. One
possible fix for this is to force the spacing for such fonts
to be 100. Add the following lines: <match target="pattern" name="family">
<test qual="any" name="family">
<string>fixed</string>
</test>
<edit name="family" mode="assign">
<string>mono</string>
</edit>
</match>
<match target="pattern" name="family">
<test qual="any" name="family">
<string>console</string>
</test>
<edit name="family" mode="assign">
<string>mono</string>
</edit>
</match>(this aliases the other common names for fixed fonts as
"mono"), and then add: <match target="pattern" name="family">
<test qual="any" name="family">
<string>mono</string>
</test>
<edit name="spacing" mode="assign">
<int>100</int>
</edit>
</match> Certain fonts, such as Helvetica, may have a problem when
anti-aliased. Usually this manifests itself as a font that
seems cut in half vertically. At worst, it may cause
applications to crash. To avoid this, consider adding the
following to local.conf: <match target="pattern" name="family">
<test qual="any" name="family">
<string>Helvetica</string>
</test>
<edit name="family" mode="assign">
<string>sans-serif</string>
</edit>
</match> Once you have finished editing
local.conf, make sure you end the file
with the </fontconfig> tag. Not
doing this will cause your changes to be ignored.Finally, users can add their own settings via their
personal .fonts.conf files. To do this,
each user should simply create a
~/.fonts.conf. This file must also be in
XML format.LCD screenFontsLCD screenOne last point: with an LCD screen, sub-pixel sampling may
be desired. This basically treats the (horizontally
separated) red, green and blue components separately to
improve the horizontal resolution; the results can be
dramatic. To enable this, add the line somewhere in
local.conf:<match target="font">
<test qual="all" name="rgba">
<const>unknown</const>
</test>
<edit name="rgba" mode="assign">
<const>rgb</const>
</edit>
</match>Depending on the sort of display,
rgb may need to be changed to
bgr, vrgb or
vbgr: experiment and see which works
best.The X Display ManagerSethKingsleyContributed by X Display Manager&xorg; provides an X Display
Manager, XDM, which can be used for
login session management. XDM
provides a graphical interface for choosing which display server
to connect to and for entering authorization information such as
a login and password combination.This section demonstrates how to configure the X Display
Manager on &os;. Some desktop environments provide their own
graphical login manager. Refer to for instructions on how to configure
the GNOME Display Manager and for
instructions on how to configure the KDE Display Manager.Configuring XDMTo install XDM, use the
x11/xdm package or port. Once installed,
XDM can be configured to run when
the machine boots up by editing this entry in
/etc/ttys:ttyv8 "/usr/local/bin/xdm -nodaemon" xterm off secureChange the off to on
and save the edit. The ttyv8 in this entry
indicates that XDM will run on the
ninth virtual terminal.The XDM configuration directory
is located in /usr/local/lib/X11/xdm.
This directory contains several files used to change the
behavior and appearance of XDM, as
well as a few scripts and programs used to set up the desktop
when XDM is running. summarizes the function of each
of these files. The exact syntax and usage of these files is
described in &man.xdm.1;.
XDM Configuration FilesFileDescriptionXaccessThe protocol for connecting to
XDM is called the X Display
Manager Connection Protocol (XDMCP).
This file is a client authorization ruleset for
controlling XDMCP connections from
remote machines. By default, this file does not allow
any remote clients to connect.XresourcesThis file controls the look and feel of the
XDM display chooser and
login screens. The default configuration is a simple
rectangular login window with the hostname of the
machine displayed at the top in a large font and
Login: and Password:
prompts below. The format of this file is identical
to the app-defaults file described in the
&xorg;
documentation.XserversThe list of local and remote displays the chooser
should provide as login choices.XsessionDefault session script for logins which is run by
XDM after a user has logged
in. Normally each user will have a customized session
script in ~/.xsession that
overrides this script.Xsetup_*Script to automatically launch applications
before displaying the chooser or login interfaces.
There is a script for each display being used, named
Xsetup_*, where
* is the local display number.
Typically these scripts run one or two programs in the
background such as
xconsole.xdm-configGlobal configuration for all displays running
on this machine.xdm-errorsContains errors generated by the server program.
If a display that XDM is
trying to start hangs, look at this file for error
messages. These messages are also written to the
user's ~/.xsession-errors on a
per-session basis.xdm-pidThe running process ID of
XDM.
Configuring Remote AccessBy default, only users on the same system can login using
XDM. To enable users on other
systems to connect to the display server, edit the access
control rules and enable the connection listener.To configure XDM to listen for
any remote connection, comment out the
DisplayManager.requestPort line in
/usr/local/lib/X11/xdm/xdm-config by
putting a ! in front of it:! SECURITY: do not listen for XDMCP or Chooser requests
! Comment out this line if you want to manage X terminals with xdm
DisplayManager.requestPort: 0Save the edits and restart XDM.
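With the listener enabled, XDMCP access can still be limited per host in Xaccess; a hypothetical ruleset (the domain is an example, not from this Handbook):

```
# grant direct-query XDMCP access only to hosts in one domain
*.example.com
```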
To restrict remote access, look at the example entries in
/usr/local/lib/X11/xdm/Xaccess and refer
to &man.xdm.1; for further information.Desktop EnvironmentsValentinoVaschettoContributed by This section describes how to install three popular desktop
environments on a &os; system. A desktop environment can range
from a simple window manager to a complete suite of desktop
applications. Over a hundred desktop environments are available
in the x11-wm category of the Ports
Collection.GNOMEGNOMEGNOME is a user-friendly
desktop environment. It includes a panel for starting
applications and displaying status, a desktop, a set of tools
and applications, and a set of conventions that make it easy
for applications to cooperate and be consistent with each
other. More information regarding
GNOME on &os; can be found at http://www.FreeBSD.org/gnome.
That web site contains additional documentation about
installing, configuring, and managing
GNOME on &os;.This desktop environment can be installed from a
package:&prompt.root; pkg install gnome2To instead build GNOME from
ports, use the following command.
GNOME is a large application and
will take some time to compile, even on a fast
computer.&prompt.root; cd /usr/ports/x11/gnome2
&prompt.root; make install cleanGNOME
requires /proc to be mounted. Add this
line to /etc/fstab to mount this file
system automatically during system startup:proc /proc procfs rw 0 0GNOME uses
D-Bus and
HAL for a message bus and hardware abstraction. These applications are automatically installed as dependencies of GNOME. Enable them in /etc/rc.conf so they will be started when the system boots:
dbus_enable="YES"
hald_enable="YES"After installation,
configure &xorg; to start
GNOME. The easiest way to do this
is to enable the GNOME Display Manager,
GDM, which is installed as part of
the GNOME package or port. It can
be enabled by adding this line to
/etc/rc.conf:gdm_enable="YES"It is often desirable to also start all
GNOME services. To achieve this,
add a second line to /etc/rc.conf:gnome_enable="YES"GDM will start
automatically when the system boots.A second method for starting
GNOME is to type
startx from the command-line after
configuring ~/.xinitrc. If this file
already exists, replace the line that starts the current
window manager with one that starts
/usr/local/bin/gnome-session. If this
file does not exist, create it with this command:&prompt.user; echo "exec /usr/local/bin/gnome-session" > ~/.xinitrcA third method is to use XDM as
the display manager. In this case, create an executable
~/.xsession:&prompt.user; echo "#!/bin/sh" > ~/.xsession
&prompt.user; echo "exec /usr/local/bin/gnome-session" >> ~/.xsession
&prompt.user; chmod +x ~/.xsessionKDEKDEKDE is another easy-to-use
desktop environment. This desktop provides a suite of
applications with a consistent look and feel, a standardized
menu and toolbars, keybindings, color-schemes,
internationalization, and a centralized, dialog-driven desktop
configuration. More information on
KDE can be found at http://www.kde.org/.
For &os;-specific information, consult http://freebsd.kde.org.To install the KDE package,
type:&prompt.root; pkg install x11/kde4To instead build the KDE port,
use the following command. Installing the port will provide a
menu for selecting which components to install.
KDE is a large application and will
take some time to compile, even on a fast computer.&prompt.root; cd /usr/ports/x11/kde4
&prompt.root; make install cleanKDEdisplay managerKDE requires
/proc to be mounted. Add this line to
/etc/fstab to mount this file system
automatically during system startup:proc /proc procfs rw 0 0KDE uses
D-Bus and
HAL for a message bus and hardware abstraction. These applications are automatically installed as dependencies of KDE. Enable them in /etc/rc.conf so they will be started when the system boots:
dbus_enable="YES"
hald_enable="YES"The installation of KDE
includes the KDE Display Manager,
KDM. To enable this display
manager, add this line to
/etc/rc.conf:kdm4_enable="YES"A second method for launching
KDE is to type
startx from the command line. For this to
work, the following line is needed in
~/.xinitrc:exec /usr/local/bin/startkdeA third method for starting KDE
is through XDM. To do so, create
an executable ~/.xsession as
follows:&prompt.user; echo "#!/bin/sh" > ~/.xsession
&prompt.user; echo "exec /usr/local/bin/startkde" >> ~/.xsession
&prompt.user; chmod +x ~/.xsessionOnce KDE is started, refer to
its built-in help system for more information on how to use
its various menus and applications.XfceXfce is a desktop environment
based on the GTK+ toolkit used by
GNOME. However, it is more
lightweight and provides a simple, efficient, easy-to-use
desktop. It is fully configurable, has a main panel with
menus, applets, and application launchers, provides a file
manager and sound manager, and is themeable. Since it is
fast, light, and efficient, it is ideal for older or slower
machines with memory limitations. More information on
Xfce can be found at http://www.xfce.org.To install the Xfce
package:&prompt.root; pkg install xfceAlternatively, to build the port:&prompt.root; cd /usr/ports/x11-wm/xfce4
&prompt.root; make install cleanUnlike GNOME or
KDE,
Xfce does not provide its own login
manager. In order to start Xfce
from the command line by typing startx,
first add its entry to ~/.xinitrc:&prompt.user; echo "exec /usr/local/bin/startxfce4 --with-ck-launch" > ~/.xinitrcAn alternate method is to use
XDM. To configure this method,
create an executable ~/.xsession:&prompt.user; echo "#!/bin/sh" > ~/.xsession
&prompt.user; echo "exec /usr/local/bin/startxfce4 --with-ck-launch" >> ~/.xsession
&prompt.user; chmod +x ~/.xsessionInstalling Compiz FusionOne way to make using a desktop
computer more pleasant is with nice 3D effects.Installing the Compiz Fusion
package is easy, but configuring it requires a few steps that
are not described in the port's documentation.Setting up the &os; nVidia DriverDesktop effects can cause quite a load on the graphics
card. For an nVidia-based graphics card, the proprietary
driver is required for good performance. Users of other
graphics cards can skip this section and continue with the
xorg.conf configuration.To determine which nVidia driver is needed, see the FAQ question
on the subject.Having determined the correct driver to use for your card,
installation is as simple as installing any other
package.For example, to install the latest driver:&prompt.root; pkg install x11/nvidia-driverThe driver will create a kernel module, which needs to be
loaded at system startup. Add the following line to
/boot/loader.conf:nvidia_load="YES"The kernel module can be loaded immediately into the running kernel with a command like kldload nvidia. However, some versions of &xorg; will not function properly if the driver is not loaded at boot time.
After editing /boot/loader.conf, a
reboot is recommended.With the kernel module loaded, you normally only need to
change a single line in xorg.conf
to enable the proprietary driver:Find the following line in
/etc/X11/xorg.conf:Driver "nv"and change it to:Driver "nvidia"Start the GUI as usual, and you should be greeted by the
nVidia splash. Everything should work as usual.Configuring xorg.conf for Desktop EffectsTo enable Compiz Fusion,
/etc/X11/xorg.conf needs to be
modified:Add the following section to enable composite
effects:Section "Extensions"
Option "Composite" "Enable"
EndSectionLocate the Screen section which should look
similar to the one below:Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
...and add the following two lines (after the Monitor line will do):DefaultDepth 24
Option "AddARGBGLXVisuals" "True"Locate the Subsection that refers to the
screen resolution that you wish to use. For example, if you
wish to use 1280x1024, locate the section that follows. If
the desired resolution does not appear in any subsection, you
may add the relevant entry by hand:SubSection "Display"
Viewport 0 0
Modes "1280x1024"
EndSubSectionA color depth of 24 bits is needed for desktop composition; change the above subsection to:SubSection "Display"
Viewport 0 0
Depth 24
Modes "1280x1024"
EndSubSectionFinally, confirm that the glx and
extmod modules are loaded in the
Module section:Section "Module"
Load "extmod"
Load "glx"
...The preceding can be done automatically with
x11/nvidia-xconfig by running (as
root):&prompt.root; nvidia-xconfig --add-argb-glx-visuals
&prompt.root; nvidia-xconfig --composite
&prompt.root; nvidia-xconfig --depth=24Installing and Configuring Compiz FusionInstalling Compiz Fusion
is as simple as any other package:&prompt.root; pkg install x11-wm/compiz-fusionWhen the installation is finished, start your graphical
desktop and at a terminal, enter the following commands (as a
normal user):&prompt.user; compiz --replace --sm-disable --ignore-desktop-hints ccp &
&prompt.user; emerald --replace &Your screen will flicker for a few seconds, as your window
manager (e.g. Metacity if you are
using GNOME) is replaced by
Compiz Fusion.
Emerald takes care of the window
decorations (i.e. close, minimize, maximize buttons, title
bars and so on).You may convert this to a trivial script and have it run
at startup automatically (e.g. by adding to
Sessions in a GNOME
desktop):#! /bin/sh
compiz --replace --sm-disable --ignore-desktop-hints ccp &
emerald --replace &Save this in your home directory as, for example,
start-compiz and make it
executable:&prompt.user; chmod +x ~/start-compizThen use the GUI to add it to Startup
Programs (located in
System,
Preferences,
Sessions on a
GNOME desktop).To actually select all the desired effects and their
settings, execute (again as a normal user) the
Compiz Config Settings Manager:&prompt.user; ccsmIn GNOME, this can also be
found in the System,
Preferences menu.If you have selected gconf support during
the build, you will also be able to view these settings using
gconf-editor under
apps/compiz.TroubleshootingIf the mouse does not work, you will need to first configure
it before proceeding.
In recent Xorg
versions, the InputDevice sections in
xorg.conf are ignored in favor of the
autodetected devices. To restore the old behavior, add the
following line to the ServerLayout or
ServerFlags section of this file:Option "AutoAddDevices" "false"Input devices may then be configured as in previous
versions, along with any other options needed (e.g., keyboard
layout switching).As previously explained, the hald daemon will, by default, automatically detect your keyboard. There is a chance that your keyboard layout or model will not be correct; desktop environments like GNOME, KDE, or Xfce provide tools to configure the keyboard. However, it is also possible to set the keyboard properties directly, either with the &man.setxkbmap.1; utility or with a hald configuration rule.For example, to use a 102-key PC keyboard with a French layout, create a keyboard configuration file for hald called x11-input.fdi, saved in the /usr/local/etc/hal/fdi/policy directory. This file should contain the following lines:<?xml version="1.0" encoding="iso-8859-1"?>
<deviceinfo version="0.2">
<device>
<match key="info.capabilities" contains="input.keyboard">
<merge key="input.x11_options.XkbModel" type="string">pc102</merge>
<merge key="input.x11_options.XkbLayout" type="string">fr</merge>
</match>
</device>
</deviceinfo>If this file already exists, just add the lines regarding the keyboard configuration to it.You will have to reboot your machine to force
hald to read this file.It is possible to do the same configuration from an X
terminal or a script with this command line:&prompt.user; setxkbmap -model pc102 -layout fr/usr/local/share/X11/xkb/rules/base.lst
lists the various keyboard models, layouts, and options available.&xorg;
tuningThe xorg.conf.new configuration file
may now be tuned to taste. Open the file in a text editor
such as &man.emacs.1; or &man.ee.1;. If the monitor is an
older or unusual model that does not support autodetection of
sync frequencies, those settings can be added to
xorg.conf.new under the
"Monitor" section:Section "Monitor"
Identifier "Monitor0"
VendorName "Monitor Vendor"
ModelName "Monitor Model"
HorizSync 30-107
VertRefresh 48-120
EndSectionMost monitors support sync frequency autodetection, making
manual entry of these values unnecessary. For the few
monitors that do not support autodetection, avoid potential
damage by only entering values provided by the
manufacturer.X allows DPMS (Energy Star) features to be used with
capable monitors. The &man.xset.1; program controls the
time-outs and can force standby, suspend, or off modes. If
you wish to enable DPMS features for your monitor, you must
add the following line to the monitor section:Option "DPMS"xorg.confWhile the xorg.conf.new configuration
file is still open in an editor, select the default resolution
and color depth desired. This is defined in the
"Screen" section:Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
Modes "1024x768"
EndSubSection
EndSectionThe DefaultDepth keyword describes the
color depth to run at by default. This can be overridden with
the command line switch to
&man.Xorg.1;. The Modes keyword describes
the resolution to run at for the given color depth. Note that
only VESA standard modes are supported as defined by the
target system's graphics hardware. In the example above, the
default color depth is twenty-four bits per pixel. At this
color depth, the accepted resolution is 1024 by 768
pixels.Finally, write the configuration file and test it using
the test mode given above.One of the tools available to assist you during the troubleshooting process is the set of &xorg; log files, which contain
information on each device that the
&xorg; server attaches to.
&xorg; log file names are in the
format of /var/log/Xorg.0.log. The
exact name of the log can vary from
Xorg.0.log to
Xorg.8.log and so forth.If all is well, the configuration file needs to be
installed in a common location where &man.Xorg.1; can find it.
This is typically /etc/X11/xorg.conf or
/usr/local/etc/X11/xorg.conf.&prompt.root; cp xorg.conf.new /etc/X11/xorg.confThe &xorg; configuration
process is now complete. &xorg;
may be now started with the &man.startx.1; utility. The
&xorg; server may also be started
with the use of &man.xdm.1;.Configuration with &intel; i810
Graphics ChipsetsIntel i810 graphic chipsetConfiguration with &intel; i810 integrated chipsets
requires the agpgart AGP programming
interface for &xorg; to drive the
card. See the &man.agp.4; driver manual page for more
information.This will allow configuration of the hardware as any
other graphics board. Note that on systems without the &man.agp.4; driver compiled into the kernel, trying to load the module with &man.kldload.8; will not work. This driver has to be in the kernel at boot time, either compiled in or loaded via /boot/loader.conf.Adding a Widescreen Flatpanel to the Mixwidescreen flatpanel configurationThis section assumes a bit of advanced configuration
knowledge. If attempts to use the standard configuration
tools above have not resulted in a working configuration,
there is enough information in the log files to be of use in getting the setup working. Use of a text editor will be
necessary.Current widescreen (WSXGA, WSXGA+, WUXGA, WXGA, WXGA+, etc.) formats use 16:10 and 10:9 aspect ratios that can be problematic. Examples of some common
screen resolutions for 16:10 aspect ratios are:2560x16001920x12001680x10501440x9001280x800At some point, it will be as easy as adding one of these
resolutions as a possible Mode in the
Section "Screen" as such:Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
Modes "1680x1050"
EndSubSection
EndSection&xorg; is smart enough to
pull the resolution information from the widescreen via
I2C/DDC information so it knows what the monitor can handle
as far as frequencies and resolutions.If those ModeLines do not exist in
the drivers, one might need to give
&xorg; a little hint. Using
/var/log/Xorg.0.log one can extract
enough information to manually create a
ModeLine that will work. Simply look for
information resembling this:(II) MGA(0): Supported additional Video Mode:
(II) MGA(0): clock: 146.2 MHz Image Size: 433 x 271 mm
(II) MGA(0): h_active: 1680 h_sync: 1784 h_sync_end 1960 h_blank_end 2240 h_border: 0
(II) MGA(0): v_active: 1050 v_sync: 1053 v_sync_end 1059 v_blanking: 1089 v_border: 0
(II) MGA(0): Ranges: V min: 48 V max: 85 Hz, H min: 30 H max: 94 kHz, PixClock max 170 MHzThis information is called EDID information. Creating a
ModeLine from this is just a matter of
putting the numbers in the correct order:ModeLine <name> <clock> <4 horiz. timings> <4 vert. timings>So that the ModeLine in
Section "Monitor" for this example would
look like this:Section "Monitor"
Identifier "Monitor1"
VendorName "Bigname"
ModelName "BestModel"
ModeLine "1680x1050" 146.2 1680 1784 1960 2240 1050 1053 1059 1089
Option "DPMS"
EndSectionAfter completing these simple editing steps, X should start on your new widescreen monitor.Troubleshooting Compiz FusionI have installed
Compiz Fusion, and
after running the commands you mention, my windows are
left without title bars and buttons. What is
wrong?You are probably missing a setting in
/etc/X11/xorg.conf. Review this
file carefully and check especially the
DefaultDepth and
AddARGBGLXVisuals
directives.When I run the command to start
Compiz Fusion, the X
server crashes and I am back at the console. What is
wrong?If you check
/var/log/Xorg.0.log, you
will probably find error messages during the X
startup. The most common would be:(EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X
(EE) NVIDIA(0): log file that the GLX module has been loaded in your X
(EE) NVIDIA(0): server, and that the module is the NVIDIA GLX module. If
(EE) NVIDIA(0): you continue to encounter problems, Please try
(EE) NVIDIA(0): reinstalling the NVIDIA driver.This is usually the case when you upgrade
&xorg;. You will need to
reinstall the x11/nvidia-driver
package so glx is built again.
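The log check suggested in this answer can be scripted from a terminal; a minimal sketch (a sample log line is written to a hypothetical temporary file so the command is self-contained; the real log is /var/log/Xorg.0.log):

```shell
# Sketch: count GLX initialization failures in an Xorg log.
# A sample log is created here for demonstration purposes only.
log=/tmp/Xorg.sample.log
printf '(EE) NVIDIA(0): Failed to initialize the GLX module\n' > "$log"
grep -c 'Failed to initialize the GLX module' "$log"   # prints: 1
```

Running the same grep against /var/log/Xorg.0.log shows whether this failure is present on a real system.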