Index: head/en_US.ISO8859-1/books/handbook/bibliography/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/bibliography/chapter.xml (revision 49530)
+++ head/en_US.ISO8859-1/books/handbook/bibliography/chapter.xml (revision 49531)
@@ -1,603 +1,603 @@
BibliographyWhile manual pages provide a definitive reference for
individual pieces of the &os; operating system, they seldom
illustrate how to put the pieces together to
make the whole operating system run smoothly. For this, there is
no substitute for a good book or users' manual on &unix; system
administration.Books Specific to &os;International books:Using
FreeBSD (in Traditional Chinese), published by
Drmaster,
1997. ISBN 9-578-39435-7.FreeBSD Unleashed (Simplified Chinese translation),
published by
China Machine
Press. ISBN 7-111-10201-0.FreeBSD From Scratch Second Edition (in Simplified
Chinese), published by China Machine Press. ISBN
7-111-10286-X.FreeBSD Handbook Second Edition (Simplified Chinese
translation), published by Posts &
Telecom Press. ISBN 7-115-10541-3.FreeBSD & Windows (in Simplified Chinese), published
by China Railway
Publishing House. ISBN 7-113-03845-X.FreeBSD Internet Services HOWTO (in Simplified Chinese),
published by China Railway Publishing House. ISBN
7-113-03423-3.FreeBSD (in Japanese), published by CUTT. ISBN
4-906391-22-2 C3055 P2400E.Complete
Introduction to FreeBSD (in Japanese), published by
Shoeisha Co.,
Ltd. ISBN 4-88135-473-6 P3600E.Personal
UNIX Starter Kit FreeBSD (in Japanese), published
by ASCII.
ISBN 4-7561-1733-3 P3000E.FreeBSD Handbook (Japanese translation), published by
ASCII.
ISBN 4-7561-1580-2 P3800E.FreeBSD mit Methode (in German), published by
Computer und Literatur
Verlag/Vertrieb Hanser, 1998. ISBN
3-932311-31-0.
FreeBSD de Luxe (in German), published by
Verlag Moderne
Industrie, 2003. ISBN 3-8266-1343-0.FreeBSD
Install and Utilization Manual (in Japanese),
published by
Mainichi
Communications Inc., 1998. ISBN
4-8399-0112-0.Onno W Purbo, Dodi Maryanto, Syahrial Hubbany, Widjil
Widodo Building Internet
Server with FreeBSD (in Indonesian),
published by
Elex
Media Komputindo.Absolute BSD: The Ultimate Guide to FreeBSD (Traditional
Chinese translation), published by GrandTech
Press, 2003. ISBN 986-7944-92-5.The
FreeBSD 6.0 Book (in Traditional Chinese),
published by Drmaster, 2006. ISBN 9-575-27878-X.English language books:Absolute
FreeBSD, 2nd Edition: The Complete Guide to
FreeBSD, published by
No Starch
Press, 2007. ISBN: 978-1-59327-151-0
The Complete FreeBSD, published by
O'Reilly,
2003. ISBN: 0596005164The
FreeBSD Corporate Networker's Guide, published by
Addison-Wesley,
2000. ISBN: 0201704811
FreeBSD: An Open-Source Operating System for Your Personal
Computer, published by The Bit Tree Press, 2001.
ISBN: 0971204500Teach Yourself FreeBSD in 24 Hours, published by Sams,
2002. ISBN: 0672324245FreeBSD 6 Unleashed, published by Sams,
2006. ISBN: 0672328755FreeBSD: The Complete Reference, published by McGrawHill,
- 2003. ISBN: 0072224096
+ 2003. ISBN: 0072224096
Users' GuidesOhio State University has written a UNIX
Introductory Course which is available online in
HTML and PostScript format.An Italian translation
of this document is available as part of the FreeBSD Italian
Documentation Project.Jpman
Project, Japan FreeBSD Users Group. FreeBSD
User's Reference Manual (Japanese translation).
Mainichi
Communications Inc., 1998. ISBN 4-8399-0088-4
P3800E.Edinburgh
University has written an
Online
Guide for newcomers to the UNIX environment.Administrators' GuidesJpman
Project, Japan FreeBSD Users Group. FreeBSD
System Administrator's Manual (Japanese
translation).
Mainichi
Communications Inc., 1998. ISBN 4-8399-0109-0
P3300E.Dreyfus, Emmanuel. Cahiers
de l'Admin: BSD 2nd Ed. (in French), Eyrolles,
2004. ISBN 2-212-11463-XProgrammers' GuidesComputer Systems Research Group, UC Berkeley.
4.4BSD Programmer's Reference Manual.
O'Reilly & Associates, Inc., 1994. ISBN
1-56592-078-3Computer Systems Research Group, UC Berkeley.
4.4BSD Programmer's Supplementary
Documents. O'Reilly & Associates, Inc.,
1994. ISBN 1-56592-079-1Harbison, Samuel P. and Steele, Guy L. Jr. C:
A Reference Manual. 4th Ed. Prentice Hall,
1995. ISBN 0-13-326224-3Kernighan, Brian and Dennis M. Ritchie. The C
Programming Language. 2nd Ed. PTR Prentice
Hall, 1988. ISBN 0-13-110362-8Lehey, Greg. Porting UNIX
Software. O'Reilly & Associates, Inc.,
1995. ISBN 1-56592-126-7Plauger, P. J. The Standard C
Library. Prentice Hall, 1992. ISBN
0-13-131509-9Spinellis, Diomidis. Code
Reading: The Open Source Perspective.
Addison-Wesley, 2003. ISBN 0-201-79940-5Spinellis, Diomidis. Code
Quality: The Open Source Perspective.
Addison-Wesley, 2006. ISBN 0-321-16607-8Stevens, W. Richard and Stephen A. Rago.
Advanced Programming in the UNIX
Environment. 2nd Ed. Reading, Mass. :
Addison-Wesley, 2005. ISBN 0-201-43307-9Stevens, W. Richard. UNIX Network
Programming. 2nd Ed, PTR Prentice Hall, 1998.
ISBN 0-13-490012-XOperating System InternalsAndleigh, Prabhat K. UNIX System
Architecture. Prentice-Hall, Inc., 1990. ISBN
0-13-949843-5Jolitz, William. Porting UNIX to the
386. Dr. Dobb's Journal.
January 1991-July 1992.Leffler, Samuel J., Marshall Kirk McKusick, Michael J
Karels and John Quarterman The Design and
Implementation of the 4.3BSD UNIX Operating
System. Reading, Mass. : Addison-Wesley, 1989.
ISBN 0-201-06196-1Leffler, Samuel J., Marshall Kirk McKusick,
The Design and Implementation of the 4.3BSD UNIX
Operating System: Answer Book. Reading, Mass.
: Addison-Wesley, 1991. ISBN 0-201-54629-9McKusick, Marshall Kirk, Keith Bostic, Michael J Karels,
and John Quarterman. The Design and
Implementation of the 4.4BSD Operating System.
Reading, Mass. : Addison-Wesley, 1996. ISBN
0-201-54979-4(Chapter 2 of this book is available online
as part of the FreeBSD Documentation Project.)Marshall Kirk McKusick, George V. Neville-Neil
The Design and Implementation of the FreeBSD
Operating System. Boston, Mass. :
Addison-Wesley, 2004. ISBN 0-201-70245-2Marshall Kirk McKusick, George V. Neville-Neil,
Robert N. M. Watson The Design and Implementation
of the FreeBSD Operating System, 2nd Ed.
Westford, Mass. : Pearson Education, Inc., 2014.
ISBN 0-321-96897-2Stevens, W. Richard. TCP/IP Illustrated,
Volume 1: The Protocols. Reading, Mass. :
Addison-Wesley, 1996. ISBN 0-201-63346-9Schimmel, Curt. Unix Systems for Modern
Architectures. Reading, Mass. :
Addison-Wesley, 1994. ISBN 0-201-63338-8Stevens, W. Richard. TCP/IP Illustrated,
Volume 3: TCP for Transactions, HTTP, NNTP and the UNIX
Domain Protocols. Reading, Mass. :
Addison-Wesley, 1996. ISBN 0-201-63495-3Vahalia, Uresh. UNIX Internals -- The New
Frontiers. Prentice Hall, 1996. ISBN
0-13-101908-2Wright, Gary R. and W. Richard Stevens.
TCP/IP Illustrated, Volume 2: The
Implementation. Reading, Mass. :
Addison-Wesley, 1995. ISBN 0-201-63354-XSecurity ReferenceCheswick, William R. and Steven M. Bellovin.
Firewalls and Internet Security: Repelling the
Wily Hacker. Reading, Mass. : Addison-Wesley,
1995. ISBN 0-201-63357-4Garfinkel, Simson. PGP Pretty Good
Privacy O'Reilly & Associates, Inc., 1995.
ISBN 1-56592-098-8Hardware ReferenceAnderson, Don and Tom Shanley. Pentium
Processor System Architecture. 2nd Ed.
Reading, Mass. : Addison-Wesley, 1995. ISBN
0-201-40992-5Ferraro, Richard F. Programmer's Guide to the
EGA, VGA, and Super VGA Cards. 3rd ed.
Reading, Mass. : Addison-Wesley, 1995. ISBN
0-201-62490-7Intel Corporation publishes documentation on their CPUs,
chipsets and standards on their
developer web
site, usually as PDF files.Shanley, Tom. 80486 System
Architecture. 3rd Ed. Reading, Mass. :
Addison-Wesley, 1995. ISBN 0-201-40994-1Shanley, Tom. ISA System
Architecture. 3rd Ed. Reading, Mass. :
Addison-Wesley, 1995. ISBN 0-201-40996-8Shanley, Tom. PCI System
Architecture. 4th Ed. Reading, Mass. :
Addison-Wesley, 1999. ISBN 0-201-30974-2Van Gilluwe, Frank. The Undocumented
PC, 2nd Ed. Reading, Mass: Addison-Wesley Pub.
Co., 1996. ISBN 0-201-47950-8Messmer, Hans-Peter. The Indispensable PC
Hardware Book, 4th Ed. Reading, Mass :
Addison-Wesley Pub. Co., 2002. ISBN 0-201-59616-4.&unix; HistoryLions, John. Lions' Commentary on UNIX, 6th Ed.
With Source Code. ITP Media Group, 1996. ISBN
1573980137Raymond, Eric S. The New Hacker's Dictionary,
3rd edition. MIT Press, 1996. ISBN
0-262-68092-0. Also known as the Jargon
FileSalus, Peter H. A quarter century of
UNIX. Addison-Wesley Publishing Company, Inc.,
1994. ISBN 0-201-54777-5.Simson Garfinkel, Daniel Weise, Steven Strassmann.
The UNIX-HATERS Handbook. IDG Books
Worldwide, Inc., 1994. ISBN 1-56884-203-1. Out of print,
but available online.Don Libes, Sandy Ressler Life with
UNIX — special edition. Prentice-Hall,
Inc., 1989. ISBN 0-13-536657-7The BSD family tree.
https://svnweb.freebsd.org/base/head/share/misc/bsd-family-tree?view=co
or /usr/share/misc/bsd-family-tree
on a FreeBSD machine.Networked Computer Science Technical Reports
Library. http://www.ncstrl.org/Old BSD releases from the Computer Systems
Research group (CSRG). http://www.mckusick.com/csrg/:
The 4CD set covers all BSD versions from 1BSD to 4.4BSD and
4.4BSD-Lite2 (but not 2.11BSD, unfortunately). The last
disk also holds the final sources plus the SCCS
files.Periodicals, Journals, and MagazinesAdmin
Magazin (in German), published by
Medialinx AG. ISSN: 2190-1066BSD
Magazine, published by Software Press Sp. z o.o.
SK. ISSN: 1898-9144BSD Now
— Video Podcast, published by
Jupiter Broadcasting LLCBSD
Talk Podcast, by Will BackmanFreeBSD
Journal, published by S&W
Publishing, sponsored by The FreeBSD Foundation.
ISBN: 978-0-615-88479-0
Index: head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml (revision 49530)
+++ head/en_US.ISO8859-1/books/handbook/cutting-edge/chapter.xml (revision 49531)
@@ -1,2205 +1,2205 @@
Updating and Upgrading &os;JimMockRestructured, reorganized, and parts updated
by JordanHubbardOriginal work by Poul-HenningKampJohnPolstraNikClaytonSynopsis&os; is under constant development between releases. Some
people prefer to use the officially released versions, while
others prefer to keep in sync with the latest developments.
However, even official releases are often updated with security
and other critical fixes. Regardless of the version used, &os;
provides all the necessary tools to keep the system updated, and
allows for easy upgrades between versions. This chapter
describes how to track the development system and the basic
tools for keeping a &os; system up-to-date.After reading this chapter, you will know:How to keep a &os; system up-to-date with
freebsd-update or
Subversion.How to compare the state of an installed system against
a known pristine copy.How to keep the installed documentation up-to-date with
Subversion or documentation
ports.The difference between the two development
branches: &os.stable; and &os.current;.How to rebuild and reinstall the entire base
system.Before reading this chapter, you should:Properly set up the network connection
().Know how to install additional third-party
software ().Throughout this chapter, svn is used to
obtain and update &os; sources. To use it, first install the
devel/subversion port or
package.&os; UpdateTomRhodesWritten by ColinPercivalBased on notes provided by Updating and Upgradingfreebsd-updateupdating-upgradingApplying security patches in a timely manner and upgrading
to a newer release of an operating system are important aspects
of ongoing system administration. &os; includes a utility
called freebsd-update which can be used to
perform both these tasks.This utility supports binary security and errata updates to
&os;, without the need to manually compile and install the patch
or a new kernel. Binary updates are available for all
architectures and releases currently supported by the security
team. The list of supported releases and their estimated
end-of-life dates are listed at http://www.FreeBSD.org/security/.This utility also supports operating system upgrades to
minor point releases as well as upgrades to another release
branch. Before upgrading to a new release, review its release
announcement as it contains important information pertinent to
the release. Release announcements are available from http://www.FreeBSD.org/releases/.If a crontab utilizing the features of
&man.freebsd-update.8; exists, it must be disabled before
upgrading the operating system.
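For instance, if /etc/crontab contains the freebsd-update entry shown later in this chapter, commenting it out until the upgrade is complete is enough; this is only a sketch of what that looks like:
#@daily root freebsd-update cron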
This section describes the configuration file used by freebsd-update, demonstrates how to apply a
security patch and how to upgrade to a minor or major operating
system release, and discusses some of the considerations when
upgrading the operating system.The Configuration FileThe default configuration file for
freebsd-update works as-is. Some users may
wish to tweak the default configuration in
/etc/freebsd-update.conf, allowing
better control of the process. The comments in this file
explain the available options, but the following may require a
bit more explanation:# Components of the base system which should be kept updated.
Components world kernelThis parameter controls which parts of &os; will be kept
up-to-date. The default is to update the entire base system
and the kernel. Individual components can instead be
specified, such as src/base or
src/sys. However, the best option is to
leave this at the default as changing it to include specific
items requires every needed item to be listed. Over time,
this could have disastrous consequences as source code and
binaries may become out of sync.# Paths which start with anything matching an entry in an IgnorePaths
# statement will be ignored.
IgnorePaths /boot/kernel/linker.hintsTo leave specified directories, such as
/bin or /sbin,
untouched during the update process, add their paths to this
statement. This option may be used to prevent
freebsd-update from overwriting local
modifications.# Paths which start with anything matching an entry in an UpdateIfUnmodified
# statement will only be updated if the contents of the file have not been
# modified by the user (unless changes are merged; see below).
UpdateIfUnmodified /etc/ /var/ /root/ /.cshrc /.profileThis option will only update unmodified configuration
files in the specified directories. Any changes made by the
user will prevent the automatic updating of these files.
There is another option,
KeepModifiedMetadata, which will instruct
freebsd-update to save the changes during
the merge.# When upgrading to a new &os; release, files which match MergeChanges
# will have any local changes merged into the version from the new release.
MergeChanges /etc/ /var/named/etc/ /boot/device.hintsList of directories with configuration files that
freebsd-update should attempt to merge.
The file merge process is a series of &man.diff.1; patches
similar to &man.mergemaster.8;, but with fewer options.
Merges are either accepted, open an editor, or cause
freebsd-update to abort. When in doubt,
backup /etc and just accept the merges.
See for more information about
mergemaster.# Directory in which to store downloaded updates and temporary
# files used by &os; Update.
# WorkDir /var/db/freebsd-updateThis directory is where all patches and temporary files
are placed. In cases where the user is doing a version
upgrade, this location should have at least a gigabyte of disk
space available.# When upgrading between releases, should the list of Components be
# read strictly (StrictComponents yes) or merely as a list of components
# which *might* be installed of which &os; Update should figure out
# which actually are installed and upgrade those (StrictComponents no)?
# StrictComponents noWhen this option is set to yes,
freebsd-update will assume that the
Components list is complete and will not
attempt to make changes outside of the list. Effectively,
freebsd-update will attempt to update
every file which belongs to the Components
list.Applying Security PatchesThe process of applying &os; security patches has been
simplified, allowing an administrator to keep a system fully
patched using freebsd-update. More
information about &os; security advisories can be found in
.&os; security patches may be downloaded and installed
using the following commands. The first command will
determine if any outstanding patches are available, and if so,
will list the files that will be modified if the patches are
applied. The second command will apply the patches.&prompt.root; freebsd-update fetch
&prompt.root; freebsd-update installIf the update applies any kernel patches, the system will
need a reboot in order to boot into the patched kernel. If
the patch was applied to any running binaries, the affected
applications should be restarted so that the patched version
of the binary is used.
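For example, if a patch touched a running daemon, restarting its service picks up the fixed binary; sshd here is only an illustration:
&prompt.root; service sshd restart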
The system can be configured to automatically check for
updates once every day by adding this entry to
/etc/crontab:@daily root freebsd-update cronIf patches exist, they will automatically be downloaded
but will not be applied. The root user will be sent an
email so that the patches may be reviewed and manually
installed with
freebsd-update install.If anything goes wrong, freebsd-update
has the ability to roll back the last set of changes with the
following command:&prompt.root; freebsd-update rollback
Uninstalling updates... done.Again, the system should be restarted if the kernel or any
kernel modules were modified and any affected binaries should
be restarted.Only the GENERIC kernel can be
automatically updated by freebsd-update.
If a custom kernel is installed, it will have to be rebuilt
and reinstalled after freebsd-update
finishes installing the updates. However,
freebsd-update will detect and update the
GENERIC kernel if
/boot/GENERIC exists, even if it is not
the current running kernel of the system.Always keep a copy of the GENERIC
kernel in /boot/GENERIC. It will be
helpful in diagnosing a variety of problems and in
performing version upgrades. Refer to for
instructions on how to get a copy of the
GENERIC kernel.Unless the default configuration in
/etc/freebsd-update.conf has been
changed, freebsd-update will install the
updated kernel sources along with the rest of the updates.
Rebuilding and reinstalling a new custom kernel can then be
performed in the usual way.The updates distributed by
freebsd-update do not always involve the
kernel. It is not necessary to rebuild a custom kernel if the
kernel sources have not been modified by
freebsd-update install. However,
freebsd-update will always update
/usr/src/sys/conf/newvers.sh. The
current patch level, as indicated by the -p
number reported by uname -r, is obtained
from this file. Rebuilding a custom kernel, even if nothing
else changed, allows uname to accurately
report the current patch level of the system. This is
particularly helpful when maintaining multiple systems, as it
allows for a quick assessment of the updates installed in each
one.
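For instance, on a hypothetical system at patch level 3, the output of uname would resemble:
&prompt.root; uname -r
9.1-RELEASE-p3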
Performing Major and Minor Version UpgradesUpgrades from one minor version of &os; to another, like
from &os; 9.0 to &os; 9.1, are called
minor version upgrades.
Major version upgrades occur when &os;
is upgraded from one major version to another, like from
&os; 9.X to &os; 10.X. Both types of upgrades can
be performed by providing freebsd-update
with a release version target.If the system is running a custom kernel, make sure that
a copy of the GENERIC kernel exists in
/boot/GENERIC before starting the
upgrade. Refer to for
instructions on how to get a copy of the
GENERIC kernel.The following command, when run on a &os; 9.0 system,
will upgrade it to &os; 9.1:&prompt.root; freebsd-update -r 9.1-RELEASE upgradeAfter the command has been received,
freebsd-update will evaluate the
configuration file and current system in an attempt to gather
the information necessary to perform the upgrade. A screen
listing will display which components have and have not been
detected. For example:Looking up update.FreeBSD.org mirrors... 1 mirrors found.
Fetching metadata signature for 9.0-RELEASE from update1.FreeBSD.org... done.
Fetching metadata index... done.
Inspecting system... done.
The following components of FreeBSD seem to be installed:
kernel/smp src/base src/bin src/contrib src/crypto src/etc src/games
src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue
src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin
world/base world/info world/lib32 world/manpages
The following components of FreeBSD do not seem to be installed:
kernel/generic world/catpages world/dict world/doc world/games
world/proflibs
Does this look reasonable (y/n)? yAt this point, freebsd-update will
attempt to download all files required for the upgrade. In
some cases, the user may be prompted with questions regarding
what to install or how to proceed.When using a custom kernel, the above step will produce a
warning similar to the following:WARNING: This system is running a "MYKERNEL" kernel, which is not a
kernel configuration distributed as part of FreeBSD 9.0-RELEASE.
This kernel will not be updated: you MUST update the kernel manually
before running "/usr/sbin/freebsd-update install"This warning may be safely ignored at this point. The
updated GENERIC kernel will be used as an
intermediate step in the upgrade process.Once all the patches have been downloaded to the local
system, they will be applied. This process may take a while,
depending on the speed and workload of the machine.
Configuration files will then be merged. The merging process
requires some user intervention as a file may be merged or an
editor may appear on screen for a manual merge. The results
of every successful merge will be shown to the user as the
process continues. A failed or ignored merge will cause the
process to abort. Users may wish to make a backup of
/etc and manually merge important files,
such as master.passwd or
group at a later time.The system is not being altered yet as all patching and
merging is happening in another directory. Once all patches
have been applied successfully, all configuration files have
been merged and it seems the process will go smoothly, the
changes can be committed to disk by the user using the
following command:&prompt.root; freebsd-update installThe kernel and kernel modules will be patched first. If
the system is running with a custom kernel, use
&man.nextboot.8; to set the kernel for the next boot to the
updated /boot/GENERIC:&prompt.root; nextboot -k GENERICBefore rebooting with the GENERIC
kernel, make sure it contains all the drivers required for
the system to boot properly and connect to the network, if
the machine being updated is accessed remotely. In
particular, if the running custom kernel contains built-in
functionality usually provided by kernel modules, make sure
to temporarily load these modules into the
GENERIC kernel using the
/boot/loader.conf facility.
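As a hypothetical example, if the custom kernel compiled in pf, a line like this in /boot/loader.conf loads the corresponding module when booting GENERIC:
pf_load="YES"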
It is
recommended to disable non-essential services as well as any
disk and network mounts until the upgrade process is
complete.The machine should now be restarted with the updated
kernel:&prompt.root; shutdown -r nowOnce the system has come back online, restart
freebsd-update using the following command.
Since the state of the process has been saved,
freebsd-update will not start from the
beginning, but will instead move on to the next phase and
remove all old shared libraries and object files.&prompt.root; freebsd-update installDepending upon whether any library version numbers were
bumped, there may only be two install phases instead of
three.The upgrade is now complete. If this was a major version
upgrade, reinstall all ports and packages as described in
.Custom Kernels with &os; 9.X and LaterBefore using freebsd-update, ensure
that a copy of the GENERIC kernel
exists in /boot/GENERIC. If a custom
kernel has only been built once, the kernel in
/boot/kernel.old is the
GENERIC kernel. Simply rename this
directory to /boot/GENERIC.If a custom kernel has been built more than once or if
it is unknown how many times the custom kernel has been
built, obtain a copy of the GENERIC
kernel that matches the current version of the operating
system. If physical access to the system is available, a
copy of the GENERIC kernel can be
installed from the installation media:&prompt.root; mount /cdrom
&prompt.root; cd /cdrom/usr/freebsd-dist
&prompt.root; tar -C/ -xvf kernel.txz boot/kernel/kernelAlternately, the GENERIC kernel may
be rebuilt and installed from source:&prompt.root; cd /usr/src
&prompt.root; make kernel __MAKE_CONF=/dev/null SRCCONF=/dev/nullFor this kernel to be identified as the
GENERIC kernel by
freebsd-update, the
GENERIC configuration file must not
have been modified in any way. It is also suggested that
the kernel is built without any other special
options.Rebooting into the GENERIC kernel
is not required as freebsd-update only
needs /boot/GENERIC to exist.Upgrading Packages After a Major Version
UpgradeGenerally, installed applications will continue to work
without problems after minor version upgrades. Major
versions use different Application Binary Interfaces
(ABIs), which will break most
third-party applications. After a major version upgrade,
all installed packages and ports need to be upgraded.
Packages can be upgraded using pkg
upgrade. To upgrade installed ports, use a
utility such as
ports-mgmt/portmaster.A forced upgrade of all installed packages will replace
the packages with fresh versions from the repository even if
the version number has not increased. This is required
because of the ABI version change when upgrading between
major versions of &os;. The forced upgrade can be
accomplished by performing:&prompt.root; pkg-static upgrade -fA rebuild of all installed applications can be
accomplished with this command:&prompt.root; portmaster -afThis command will display the configuration screens for
each application that has configurable options and wait for
the user to interact with those screens. To prevent this
behavior, and use only the default options, include
in the above command.Once the software upgrades are complete, finish the
upgrade process with a final call to
freebsd-update in order to tie up all the
loose ends in the upgrade process:&prompt.root; freebsd-update installIf the GENERIC kernel was
temporarily used, this is the time to build and install a
new custom kernel using the instructions in .Reboot the machine into the new &os; version. The
upgrade process is now complete.System State ComparisonThe state of the installed &os; version against a known
good copy can be tested using
freebsd-update IDS. This command evaluates
the current version of system utilities, libraries, and
configuration files and can be used as a built-in Intrusion
Detection System (IDS).This command is not a replacement for a real
IDS such as
security/snort. As
freebsd-update stores data on disk, the
possibility of tampering is evident. While this possibility
may be reduced using kern.securelevel and
by storing the freebsd-update data on a
read-only file system when not in use, a better solution
would be to compare the system against a secure disk, such
as a DVD or securely stored external
USB disk device. An alternative method
for providing IDS functionality using a
built-in utility is described in .To begin the comparison, specify the output file to save
the results to:&prompt.root; freebsd-update IDS >> outfile.idsThe system will now be inspected and a lengthy listing of
files, along with the SHA256 hash values
for both the known value in the release and the current
installation, will be sent to the specified output
file.The entries in the listing are extremely long, but the
output format may be easily parsed. For instance, to obtain a
list of all files which differ from those in the release,
issue the following command:&prompt.root; cat outfile.ids | awk '{ print $1 }' | more
/etc/master.passwd
/etc/motd
/etc/passwd
/etc/pf.confThis sample output has been truncated as many more files
exist. Some files have natural modifications. For example,
/etc/passwd will be modified if users
have been added to the system. Kernel modules may differ as
freebsd-update may have updated them. To
exclude specific files or directories, add them to the
IDSIgnorePaths option in
/etc/freebsd-update.conf.
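For example, an entry such as the following (similar to the stock defaults) excludes the formatted manual pages from the comparison:
IDSIgnorePaths /usr/share/man/cat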
Updating the Documentation SetUpdating and UpgradingDocumentationUpdating and UpgradingDocumentation is an integral part of the &os; operating system. While an up-to-date version of the &os; documentation
is always available on the &os; web site (http://www.freebsd.org/doc/),
it can be handy to have an up-to-date, local copy of the &os;
website, handbooks, FAQ, and articles.This section describes how to use either source or the &os;
Ports Collection to keep a local copy of the &os; documentation
up-to-date.For information on editing and submitting corrections to the
documentation, refer to the &os; Documentation Project Primer
for New Contributors (http://www.freebsd.org/doc/en_US.ISO8859-1/books/fdp-primer/).Updating Documentation from SourceRebuilding the &os; documentation from source requires a
collection of tools which are not part of the &os; base
system. The required tools, including
svn, can be installed from the
textproc/docproj package or port developed
by the &os; Documentation Project.Once installed, use svn to
fetch a clean copy of the documentation source:&prompt.root; svn checkout https://svn.FreeBSD.org/doc/head /usr/docThe initial download of the documentation sources may take
a while. Let it run until it completes.Future updates of the documentation sources may be fetched
by running:&prompt.root; svn update /usr/docOnce an up-to-date snapshot of the documentation sources
has been fetched to /usr/doc, everything
is ready for an update of the installed documentation.A full update of all available languages may be performed
by typing:&prompt.root; cd /usr/doc
&prompt.root; make install cleanIf an update of only a specific language is desired,
make can be invoked in a language-specific
subdirectory of
/usr/doc:&prompt.root; cd /usr/doc/en_US.ISO8859-1
&prompt.root; make install cleanAn alternative way of updating the documentation is to run
this command from /usr/doc or the desired
language-specific subdirectory:&prompt.root; make updateThe output formats that will be installed may be specified
by setting FORMATS:&prompt.root; cd /usr/doc
&prompt.root; make FORMATS='html html-split' install cleanSeveral options are available to ease the process of
updating only parts of the documentation, or the build of
specific translations. These options can be set either as
system-wide options in /etc/make.conf, or
as command-line options passed to
make.The options include:DOC_LANGThe list of languages and encodings to build and
install, such as en_US.ISO8859-1 for
English documentation.FORMATSA single format or a list of output formats to be
built. Currently, html,
html-split, txt,
ps, and pdf are
supported.DOCDIRWhere to install the documentation. It defaults to
/usr/share/doc.
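As a sketch, lines like these in /etc/make.conf would restrict documentation builds to English HTML output; the values are only illustrative:
DOC_LANG= en_US.ISO8859-1
FORMATS= html html-split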
For more make variables supported as system-wide options in &os;, refer to
&man.make.conf.5;.Updating Documentation from PortsMarcFonvieilleBased on the work of Updating and Upgradingdocumentation packageUpdating and UpgradingThe previous section presented a method for updating the
&os; documentation from sources. This section describes an
alternative method which uses the Ports Collection and makes
it possible to:Install pre-built packages of the documentation,
without having to locally build anything or install the
documentation toolchain.Build the documentation sources through the ports
framework, making the checkout and build steps a bit
easier.This method of updating the &os; documentation is
supported by a set of documentation ports and packages which
are updated by the &a.doceng; on a monthly basis. These are
listed in the &os; Ports Collection, under the docs
category (http://www.freshports.org/docs/).Organization of the documentation ports is as
follows:The misc/freebsd-doc-en package or
port installs all of the English documentation.The misc/freebsd-doc-all
meta-package or port installs all documentation in all
available languages.There is a package and port for each translation, such
as misc/freebsd-doc-hu for the
Hungarian documentation.When binary packages are used, the &os; documentation will
be installed in all available formats for the given language.
For example, the following command will install the latest
package of the Hungarian documentation:&prompt.root; pkg install hu-freebsd-docPackages use a format that differs from the
corresponding port's name:
lang-freebsd-doc,
where lang is the short format of
the language code, such as hu for
Hungarian, or zh_cn for Simplified
Chinese.To specify the format of the documentation, build the port
instead of installing the package. For example, to build and
install the English documentation:&prompt.root; cd /usr/ports/misc/freebsd-doc-en
&prompt.root; make install cleanThe port provides a configuration menu where the format to
build and install can be specified. By default, split
HTML, similar to the format used on http://www.FreeBSD.org,
and PDF are selected.Alternately, several make options can
be specified when building a documentation port,
including:WITH_HTMLBuilds the HTML format with a single HTML file per
document. The formatted documentation is saved to a
file called article.html, or
book.html.WITH_PDFThe formatted documentation is saved to a file
called article.pdf or
book.pdf.DOCBASESpecifies where to install the documentation. It
defaults to
/usr/local/share/doc/freebsd.This example uses variables to install the Hungarian
documentation as a PDF in the specified
directory:&prompt.root; cd /usr/ports/misc/freebsd-doc-hu
&prompt.root; make -DWITH_PDF DOCBASE=share/doc/freebsd/hu install cleanDocumentation packages or ports can be updated using the
instructions in . For example, the
following command updates the installed Hungarian
documentation using ports-mgmt/portmaster
by using packages only:&prompt.root; portmaster -PP hu-freebsd-docTracking a Development Branch-CURRENT-STABLE&os; has two development branches: &os.current; and
&os.stable;.This section provides an explanation of each branch and its
intended audience, as well as how to keep a system up-to-date
with each respective branch.Using &os.current;&os.current; is the bleeding edge of &os;
development and &os.current; users are expected to have a
high degree of technical skill. Less technical users who wish
to track a development branch should track &os.stable;
instead.&os.current; is the very latest source code for &os; and
includes works in progress, experimental changes, and
transitional mechanisms that might or might not be present in
the next official release. While many &os; developers compile
the &os.current; source code daily, there are short periods of
time when the source may not be buildable. These problems are
resolved as quickly as possible, but whether or not
&os.current; brings disaster or new functionality can be a
matter of when the source code was synced.&os.current; is made available for three primary interest
groups:Members of the &os; community who are actively
working on some part of the source tree.Members of the &os; community who are active testers.
They are willing to spend time solving problems, making
topical suggestions on changes and the general direction
of &os;, and submitting patches.Users who wish to keep an eye on things, use the
current source for reference purposes, or make the
occasional comment or code contribution.&os.current; should not be
considered a fast-track to getting new features before the
next release as pre-release features are not yet fully tested
and most likely contain bugs. It is not a quick way of
getting bug fixes as any given commit is just as likely to
introduce new bugs as to fix existing ones. &os.current; is
not in any way officially supported.-CURRENTusingTo track &os.current;:Join the &a.current.name; and the
&a.svn-src-head.name; lists. This is
essential in order to see the
comments that people are making about the current state
of the system and to receive important bulletins about
the current state of &os.current;.The &a.svn-src-head.name; list records the commit log
entry for each change as it is made, along with any
pertinent information on possible side effects.To join these lists, go to &a.mailman.lists.link;,
click on the list to subscribe to, and follow the
instructions. In order to track changes to the whole
source tree, not just the changes to &os.current;,
subscribe to the &a.svn-src-all.name; list.Synchronize with the &os.current; sources. Typically,
svn is used to check out the
-CURRENT code from the head branch of
one of the Subversion mirror sites listed in
.
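As an illustration, a full checkout of head into /usr/src might look like this, assuming the standard base repository:
&prompt.root; svn checkout https://svn.FreeBSD.org/base/head /usr/src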
- Due to the size of the repository, some users choose
+ Due to the size of the repository, some users choose
to only synchronize the sections of source that interest
them or which they are contributing patches to. However,
users that plan to compile the operating system from
source must download all of
&os.current;, not just selected portions.Before compiling &os.current;
-CURRENTcompiling, read /usr/src/Makefile
very carefully and follow the instructions in
.
Read the &a.current; and
/usr/src/UPDATING to stay
up-to-date on other bootstrapping procedures that
sometimes become necessary on the road to the next
release.Be active! &os.current; users are encouraged to
submit their suggestions for enhancements or bug fixes.
Suggestions with accompanying code are always
welcome.Using &os.stable;&os.stable; is the development branch from which major
releases are made. Changes go into this branch at a slower
pace and with the general assumption that they have first been
tested in &os.current;. This is still a
development branch and, at any given time, the sources for
&os.stable; may or may not be suitable for general use. It is
simply another engineering development track, not a resource
for end-users. Users who do not have the resources to perform
testing should instead run the most recent release of
&os;.Those interested in tracking or contributing to the &os;
development process, especially as it relates to the next
release of &os;, should consider following &os.stable;.While the &os.stable; branch should compile and run at all
times, this cannot be guaranteed. Since more people run
&os.stable; than &os.current;, it is inevitable that bugs and
corner cases will sometimes be found in &os.stable; that were
not apparent in &os.current;. For this reason, one should not
blindly track &os.stable;. It is particularly important
not to update any production servers to
&os.stable; without thoroughly testing the code in a
development or testing environment.To track &os.stable;:-STABLEusingJoin the &a.stable.name; list in order to stay
informed of build dependencies that may appear in
&os.stable; or any other issues requiring special
attention. Developers will also make announcements in
this mailing list when they are contemplating some
controversial fix or update, giving the users a chance to
respond if they have any issues to raise concerning the
proposed change.Join the relevant svn list
for the branch being tracked. For example, users
tracking the 9-STABLE branch should join the
&a.svn-src-stable-9.name; list. This list records the
commit log entry for each change as it is made, along
with any pertinent information on possible
side effects.To join these lists, go to &a.mailman.lists.link;,
click on the list to subscribe to, and follow the
instructions. In order to track changes for the whole
source tree, subscribe to &a.svn-src-all.name;.To install a new &os.stable; system, install the most
recent &os.stable; release from the &os; mirror sites or use a
monthly snapshot built from &os.stable;. Refer to www.freebsd.org/snapshots
for more information about snapshots.To compile or upgrade to an existing &os; system to
&os.stable;, use svn
Subversion to check out the source for the desired
branch. Branch names, such as
stable/9, are listed at www.freebsd.org/releng.
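For example, a checkout of the stable/9 branch into /usr/src could look like this (a sketch; substitute the branch being tracked):
&prompt.root; svn checkout https://svn.FreeBSD.org/base/stable/9 /usr/src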
Before compiling or upgrading to &os.stable;
-STABLEcompiling, read /usr/src/Makefile
carefully and follow the instructions in . Read &a.stable; and
/usr/src/UPDATING to keep up-to-date
on other bootstrapping procedures that sometimes become
necessary on the road to the next release.Synchronizing SourceThere are various methods for staying up-to-date with the
&os; sources. This section describes the primary service,
Subversion.While it is possible to update only parts of the source
tree, the only supported update procedure is to update the
entire tree and recompile all the programs that run in user
space, such as those in /bin and
/sbin, and kernel sources. Updating only
part of the source tree, only the kernel, or only the userland
programs will often result in problems ranging from compile
errors to kernel panics or data corruption.SubversionSubversion uses the
pull model of updating sources. The user,
or a cron script, invokes the
svn program which updates the local version
of the source. Subversion is the
preferred method for updating local source trees as updates are
up-to-the-minute and the user controls when updates are
downloaded. It is easy to restrict updates to specific files or
directories and the requested updates are generated on the fly
by the server. How to synchronize source using
Subversion is described in .If a user inadvertently wipes out portions of the local
archive, Subversion will detect and
rebuild the damaged portions during an update.Rebuilding WorldRebuilding worldOnce the local source tree is synchronized against a
particular version of &os; such as &os.stable; or &os.current;,
the source tree can be used to rebuild the system. This process
is known as rebuilding world.Before rebuilding world, be sure to
perform the following tasks:Perform These Tasks Before
Building WorldBackup all important data to another system or removable
media, verify the integrity of the backup, and have a
bootable installation media at hand. It cannot be stressed
enough how important it is to make a backup of the system
before rebuilding the system. While
rebuilding world is an easy task, there will inevitably be
times when mistakes in the source tree render the system
unbootable. You will probably never have to use the backup,
but it is better to be safe than sorry!mailing listReview the recent &a.stable.name; or &a.current.name;
entries, depending upon the branch being tracked. Be aware
of any known problems and which systems are affected. If a
known issue affects the version of synchronized code, wait
for an all clear announcement to be posted
stating that the problem has been solved. Resynchronize the
sources to ensure that the local version of source has the
needed fix.Read /usr/src/UPDATING for any
extra steps necessary for that version of the source. This
file contains important information about potential problems
and may specify the order to run certain commands. Many
upgrades require specific additional steps such as renaming
or deleting specific files prior to installing the new
world. These will be listed at the end of this file where
the currently recommended upgrade sequence is explicitly
spelled out. If UPDATING contradicts
any steps in this chapter, the instructions in
UPDATING take precedence and should be
followed.Do Not Use make worldSome older documentation recommends using make
world. However, that command skips some important
steps and should only be used by experts. For almost all
circumstances make world is the wrong thing
to do, and the procedure described here should be used
instead.Overview of ProcessThe build world process assumes an upgrade from an older
&os; version using the source of a newer version that was
obtained using the instructions in .In &os;, the term world includes the
kernel, core system binaries, libraries, programming files,
and built-in compiler. The order in which these components
are built and installed is important.For example, the old compiler might have a bug and not be
able to compile the new kernel. Since the new kernel should
be built with the new compiler, the new compiler must be
built, but not necessarily installed, before the new kernel is
built.The new world might rely on new kernel features, so the
new kernel must be installed before the new world is
installed. The old world might not run correctly on the new
kernel, so the new world must be installed immediately upon
installing the new kernel.Some configuration changes must be made before the new
world is installed, but others might break the old world.
Hence, two different configuration upgrade steps are used.
For the most part, the update process only replaces or adds
files and existing old files are not deleted. Since this can
cause problems, /usr/src/UPDATING will
indicate if any files need to be manually deleted and at which
step to do so.These concerns have led to the recommended upgrade
sequence described in the following procedure.It is a good idea to save the output from running
make to a file. If something goes wrong,
a copy of the error message can be posted to one of the &os;
mailing lists.The easiest way to do this is to use
script with a parameter that specifies
the name of the file to save all output to. Do not save the
output to /tmp as this directory may be
cleared at next reboot. A better place to save the file is
/var/tmp. Run this command immediately
before rebuilding the world, and then type
exit when the process has
finished:&prompt.root; script /var/tmp/mw.out
Script started, output file is /var/tmp/mw.outOverview of Build World ProcessThe commands used in the build world process should be
run in the order specified here. This section summarizes
the function of each command.If the build world process has previously been run on
this system, a copy of the previous build may still exist
in /usr/obj. To
speed up the new build world process, and possibly save
some dependency headaches, remove this directory if it
already exists:&prompt.root; chflags -R noschg /usr/obj/*
&prompt.root; rm -rf /usr/objCompile the new compiler and a few related tools, then
use the new compiler to compile the rest of the new world.
The result is saved to /usr/obj.&prompt.root; cd /usr/src
&prompt.root; make buildworldUse the new compiler residing in /usr/obj to build the new
kernel, in order to protect against compiler-kernel
mismatches. This is necessary, as certain memory
structures may have changed, and programs like
ps and top will fail
to work if the kernel and source code versions are not the
same.&prompt.root; make buildkernelInstall the new kernel and kernel modules, making it
possible to boot with the newly updated kernel. If
kern.securelevel has been raised above
1 and noschg or similar flags have been set
on the kernel binary, drop the system into single-user
mode first. Otherwise, this command can be run from
multi-user mode without problems. See &man.init.8; for
details about kern.securelevel and
&man.chflags.1; for details about the various file
flags.
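A quick way to check both conditions before installing, with output varying per system:
&prompt.root; sysctl kern.securelevel
&prompt.root; ls -lo /boot/kernel/kernel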
&prompt.root; make installkernelDrop the system into single-user mode in order to
minimize problems from updating any binaries that are
already running. It also minimizes any problems from
running the old world on a new kernel.&prompt.root; shutdown nowOnce in single-user mode, run these commands if the
system is formatted with UFS:&prompt.root; mount -u /
&prompt.root; mount -a -t ufs
&prompt.root; swapon -aIf the system is instead formatted with ZFS, run these
two commands. This example assumes a zpool name of
zroot:&prompt.root; zfs set readonly=off zroot
&prompt.root; zfs mount -aOptional: If a keyboard mapping other than the default
US English is desired, it can be changed with
&man.kbdmap.1;:&prompt.root; kbdmapThen, for either file system, if the
CMOS clock is set to local time (this
is true if the output of &man.date.1; does not show the
correct time and zone), run:&prompt.root; adjkerntz -iRemaking the world will not update certain
directories, such as /etc,
/var and /usr,
with new or changed configuration files. The next step is
to perform some initial configuration file updates
to /etc in
preparation for the new world. The following command
compares only those files that are essential for the
success of installworld. For
instance, this step may add new groups, system accounts,
or startup scripts which have been added to &os; since the
last update. This is necessary so that the
installworld step will be able
to use any new system accounts, groups, and scripts.
Refer to for more detailed
instructions about this command:&prompt.root; mergemaster -pInstall the new world and system binaries from
/usr/obj.&prompt.root; cd /usr/src
&prompt.root; make installworldUpdate any remaining configuration files.&prompt.root; mergemaster -iFDelete any obsolete files. This is important as they
may cause problems if left on the disk.&prompt.root; make delete-oldA full reboot is now needed to load the new kernel and
new world with the new configuration files.&prompt.root; rebootMake sure that all installed ports have first been
rebuilt before old libraries are removed using the
instructions in . When
finished, remove any obsolete libraries to avoid conflicts
with newer ones. For a more detailed description of this
step, refer to .&prompt.root; make delete-old-libssingle-user modeIf the system can have a window of down-time, consider
compiling the system in single-user mode instead of compiling
the system in multi-user mode, and then dropping into
single-user mode for the installation. Reinstalling the
system touches a lot of important system files, all the
standard system binaries, libraries, and include files.
Changing these on a running system, particularly one with
active users, is asking for trouble.Configuration Filesmake.confThis build world process uses several configuration
files.The Makefile located in
/usr/src describes how the programs that
comprise &os; should be built and the order in which they
should be built.The options available to make are
described in &man.make.conf.5; and some common examples are
included in
/usr/share/examples/etc/make.conf. Any
options which are added to /etc/make.conf
will control the how make runs and builds
programs. These options take effect every time
make is used, including compiling
applications from the Ports Collection, compiling custom C
programs, or building the &os; operating system. Changes to
some settings can have far-reaching and potentially surprising
effects. Read the comments in both locations and keep in mind
that the defaults have been chosen for a combination of
performance and safety.src.confHow the operating system is built from source code is
controlled by /etc/src.conf. Unlike
/etc/make.conf, the contents of
/etc/src.conf only take effect when the
&os; operating system itself is being built. Descriptions of
the many options available for this file are shown in
&man.src.conf.5;. Be cautious about disabling seemingly
unneeded kernel modules and build options. Sometimes there
are unexpected or subtle interactions.Variables and TargetsThe general format for using make is as
follows:&prompt.root; make -x -DVARIABLE targetIn this example, -x is an option
passed to make. Refer to &man.make.1; for
examples of the available options.To pass a variable, specify the variable name with
-DVARIABLE. The
behavior of the Makefile is controlled by
variables. These can either be set in
/etc/make.conf or they can be specified
when using make. For example, this
variable specifies that profiled libraries should not be
built:&prompt.root; make -DNO_PROFILE targetIt corresponds with this setting in
/etc/make.conf:NO_PROFILE= true # Avoid compiling profiled librariesThe target tells
make what to do and the
Makefile defines the available targets.
Some targets are used by the build process to break out the
steps necessary to rebuild the system into a number of
sub-steps.Having separate options is useful for two reasons. First,
it allows for a build that does not affect any components of a
running system. Because of this,
buildworld can be safely run on a
machine running in multi-user mode. It is still recommended
that installworld be run in part in
single-user mode, though.Secondly, it allows NFS mounts to be
used to upgrade multiple machines on a network, as described
in .It is possible to specify which will
cause make to spawn several simultaneous
processes. Since much of the compiling process is
I/O-bound rather than
CPU-bound, this is useful on both single
CPU and multi-CPU
machines.On a single-CPU machine, run the
following command to have up to 4 processes running at any one
time. Empirical evidence posted to the mailing lists shows
this generally gives the best performance benefit.&prompt.root; make -j4 buildworldOn a multi-CPU machine, try values
between 6 and 10 to see
how they speed things up.rebuilding worldtimingsIf any variables were specified to make
buildworld, specify the same variables to
make installworld. However,
-j must never be used
with installworld.For example, if this command was used:&prompt.root; make -DNO_PROFILE buildworldInstall the results with:&prompt.root; make -DNO_PROFILE installworldOtherwise, the second command will try to install
profiled libraries that were not built during the
make buildworld phase.Merging Configuration FilesTomRhodesContributed by mergemaster&os; provides the &man.mergemaster.8; Bourne script to aid
in determining the differences between the configuration files
in /etc, and the configuration files in
/usr/src/etc. This is the recommended
solution for keeping the system configuration files up to date
with those located in the source tree.Before using mergemaster, it is
recommended to first copy the existing
/etc somewhere safe. Include
-R which does a recursive copy and
-p which preserves times and the ownerships
on files:&prompt.root; cp -Rp /etc /etc.oldWhen run, mergemaster builds a
temporary root environment, from / down,
and populates it with various system configuration files.
Those files are then compared to the ones currently installed
in the system. Files that differ will be shown in
&man.diff.1; format, with the + sign
representing added or modified lines, and -
representing lines that will be either removed completely or
replaced with a new file. Refer to &man.diff.1; for more
information about how file differences are shown.Next, mergemaster will display each
file that differs, and present options to: delete the new
file, referred to as the temporary file, install the temporary
file in its unmodified state, merge the temporary file with
the currently installed file, or view the results
again.Choosing to delete the temporary file will tell
mergemaster to keep the current file
unchanged and to delete the new version. This option is not
recommended. To get help at any time, type
? at the mergemaster
prompt. If the user chooses to skip a file, it will be
presented again after all other files have been dealt
with.Choosing to install the unmodified temporary file will
replace the current file with the new one. For most
unmodified files, this is the best option.Choosing to merge the file will present a text editor, and
the contents of both files. The files can be merged by
reviewing both files side by side on the screen, and choosing
parts from both to create a finished product. When the files
are compared side by side, l selects the left
contents and r selects contents from the
right. The final output will be a file consisting of both
parts, which can then be installed. This option is
customarily used for files where settings have been modified
by the user.Choosing to view the results again will redisplay the file
differences.After mergemaster is done with the
system files, it will prompt for other options. It may prompt
to rebuild the password file and will finish up with an option
to remove left-over temporary files.Deleting Obsolete Files and LibrariesAntonShterenlikhtBased on notes provided by Deleting obsolete files and directoriesAs a part of the &os; development lifecycle, files and
their contents occasionally become obsolete. This may be
because functionality is implemented elsewhere, the version
number of the library has changed, or it was removed from the
system entirely. These obsoleted files, libraries, and
directories should be removed when updating the system.
This ensures that the system is not cluttered with old files
which take up unnecessary space on the storage and backup
media. Additionally, if the old library has a security or
stability issue, the system should be updated to the newer
library to keep it safe and to prevent crashes caused by the
old library. Files, directories, and libraries which are
considered obsolete are listed in
/usr/src/ObsoleteFiles.inc. The
following instructions should be used to remove obsolete files
during the system upgrade process.After the make installworld and the
subsequent mergemaster have finished
successfully, check for obsolete files and libraries:&prompt.root; cd /usr/src
&prompt.root; make check-oldIf any obsolete files are found, they can be deleted using
the following command:&prompt.root; make delete-oldA prompt is displayed before deleting each obsolete file.
To skip the prompt and let the system remove these files
automatically, use
BATCH_DELETE_OLD_FILES:&prompt.root; make -DBATCH_DELETE_OLD_FILES delete-oldThe same goal can be achieved by piping these commands
through yes:&prompt.root; yes|make delete-oldWarningDeleting obsolete files will break applications that
still depend on those obsolete files. This is especially
true for old libraries. In most cases, the programs, ports,
or libraries that used the old library need to be recompiled
before make delete-old-libs is
executed.Utilities for checking shared library dependencies include
sysutils/libchk and
sysutils/bsdadminscripts.
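Even without installing extra software, the base system's &man.ldd.1; can list the shared libraries a single program is linked against, which helps confirm whether anything still references an old library; the program path below is only an illustration:
&prompt.root; ldd /usr/local/bin/tiffinfo
Obsolete shared libraries can conflict with newer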
libraries, causing messages like these:/usr/bin/ld: warning: libz.so.4, needed by /usr/local/lib/libtiff.so, may conflict with libz.so.5
/usr/bin/ld: warning: librpcsvc.so.4, needed by /usr/local/lib/libXext.so, may conflict with librpcsvc.so.5To solve these problems, determine which port installed
the library:&prompt.root; pkg which /usr/local/lib/libtiff.so
/usr/local/lib/libtiff.so was installed by package tiff-3.9.4
&prompt.root; pkg which /usr/local/lib/libXext.so
/usr/local/lib/libXext.so was installed by package libXext-1.1.1,1Then deinstall, rebuild, and reinstall the port. To
automate this process,
ports-mgmt/portmaster can be used.
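For instance, using the package names reported by pkg which above, a sketch of the automated rebuild might look like this; -r rebuilds the named port together with every port that depends on it, and the portmaster documentation should be consulted for the exact options:
&prompt.root; portmaster -r tiff-3.9.4
&prompt.root; portmaster -r libXext-1.1.1,1
After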
all ports are rebuilt and no longer use the old libraries,
delete the old libraries using the following command:&prompt.root; make delete-old-libsIf something goes wrong, it is easy to rebuild a
particular piece of the system. For example, if
/etc/magic was accidentally deleted as
part of the upgrade or merge of /etc,
file will stop working. To fix this,
run:&prompt.root; cd /usr/src/usr.bin/file
&prompt.root; make all installCommon QuestionsDo I need to re-make the world for every
change?It depends upon the nature of the change. For
example, if svn only shows
the following files as being updated:src/games/cribbage/instr.csrc/games/sail/pl_main.csrc/release/sysinstall/config.csrc/release/sysinstall/media.csrc/share/mk/bsd.port.mkit probably is not worth rebuilding the entire
world. Instead, go into the appropriate sub-directories
and run make all install. But if
something major changes, such as
src/lib/libc/stdlib, consider
rebuilding world.Some users rebuild world every fortnight and let
changes accumulate over that fortnight. Others only
re-make those things that have changed and are careful
to spot all the dependencies. It all depends on how
often a user wants to upgrade and whether they are
tracking &os.stable; or &os.current;.What would cause a compile to fail with lots of
signal 11signal 11
(or other signal number) errors?This normally indicates a hardware problem.
Building world is an effective way to stress test
hardware, especially memory. A sure indicator of a
hardware issue is when make
is restarted and it dies at a different point in the
process.To resolve this error, swap out the components in
the machine, starting with RAM, to determine which
component is failing.Can /usr/obj
be removed when finished?This directory contains all the object files that
were produced during the compilation phase. Normally,
one of the first steps in the make
buildworld process is to remove this
directory and start afresh. Keeping
/usr/obj around when finished makes
little sense, and its removal frees up approximately
2GB of disk space.Can interrupted builds be resumed?This depends on how far into the process the
problem occurs. In general, make
buildworld builds new copies of essential
tools and the system libraries. These tools and
libraries are then installed, used to rebuild
themselves, and are installed again. The rest of the
system is then rebuilt with the new system
tools.During the last stage, it is fairly safe to run
these commands as they will not undo the work of the
previous make buildworld:&prompt.root; cd /usr/src
&prompt.root; make -DNO_CLEAN allIf this message appears:--------------------------------------------------------------
Building everything..
--------------------------------------------------------------in the make buildworld output,
it is probably fairly safe to do so.If that message is not displayed, it is always
better to be safe than sorry and to restart the build
from scratch.Is it possible to speed up making the world?Several actions can speed up the build world
process. For example, the entire process can be run
from single-user mode. However, this will prevent users
from having access to the system until the process is
complete.Careful file system design or the use of ZFS
datasets can make a difference. Consider putting
/usr/src and
/usr/obj on
separate file systems. If possible, place the file
systems on separate disks on separate disk controllers.
When mounting /usr/src, use
noatime, which prevents the file system
from recording the file access time. If /usr/src is not on its
own file system, consider remounting /usr with
noatime.The file system holding /usr/obj can be mounted
or remounted with async so that disk
writes happen asynchronously. The write completes
immediately, and the data is written to the disk a few
seconds later. This allows writes to be clustered
together, and can provide a dramatic performance
boost.Keep in mind that this option makes the file
system more fragile. With this option, there is an
increased chance that, should power fail, the file
system will be in an unrecoverable state when the
machine restarts.If /usr/obj is the only
directory on this file system, this is not a problem.
If you have other, valuable data on the same file
system, ensure that there are verified backups before
enabling this option.Turn off profiling by setting
NO_PROFILE=true in
/etc/make.conf.Pass -j, followed by the number of parallel jobs to use,
to &man.make.1; to run multiple processes in parallel.
This usually helps on both single- and multi-processor
machines.
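Putting several of these suggestions together, a build session might look like the following sketch; it assumes that /usr/src and /usr/obj are separate file systems, and the job count of 4 is only an example:
&prompt.root; mount -u -o noatime /usr/src
&prompt.root; mount -u -o async /usr/obj
&prompt.root; cd /usr/src
&prompt.root; make -j4 buildworld
What if something goes wrong?First, make absolutely sure that the environment has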
no extraneous cruft from earlier builds:&prompt.root; chflags -R noschg /usr/obj/usr
&prompt.root; rm -rf /usr/obj/usr
&prompt.root; cd /usr/src
&prompt.root; make cleandir
&prompt.root; make cleandirYes, make cleandir really should
be run twice.Then, restart the whole process, starting with
make buildworld.If problems persist, send the error and the output
of uname -a to &a.questions;. Be
prepared to answer other questions about the
setup!Tracking for Multiple MachinesMikeMeyerContributed by NFSinstalling multiple machinesWhen multiple machines need to track the same source tree,
it is a waste of disk space, network bandwidth, and
CPU cycles to have each system download the
sources and rebuild everything. The solution is to have one
machine do most of the work, while the rest of the machines
mount that work via NFS. This section
outlines a method of doing so. For more information about using
NFS, refer to .First, identify a set of machines which will run the same
set of binaries, known as a build set.
Each machine can have a custom kernel, but will run the same
userland binaries. From that set, choose a machine to be the
build machine that the world and kernel
are built on. Ideally, this is a fast machine that has
sufficient spare CPU to run make
buildworld and make
buildkernel.Select a machine to be the test
machine, which will test software updates before
they are put into production. This must be
a machine that can afford to be down for an extended period of
time. It can be the build machine, but need not be.All the machines in this build set need to mount
/usr/obj and /usr/src
from the build machine via NFS. For multiple
build sets, /usr/src should be on one build
machine, and NFS mounted on the rest.Ensure that /etc/make.conf and
/etc/src.conf on all the machines in the
build set agree with the build machine. That means that the
build machine must build all the parts of the base system that
any machine in the build set is going to install. Also, each
machine in the build set should have its kernel name set with
KERNCONF in
/etc/make.conf, and the build machine
should list them all in its KERNCONF,
listing its own kernel first. The build machine must have the
kernel configuration files for each machine in its /usr/src/sys/arch/conf.
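As a sketch, with hypothetical kernel configuration names BUILDER, WS1, and WS2, the build machine's /etc/make.conf could contain:
KERNCONF="BUILDER WS1 WS2"
while the machine that runs the WS1 kernel would set only:
KERNCONF=WS1
On the build machine, build the kernel and world as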
described in , but do not install
anything on the build machine. Instead, install the built
kernel on the test machine. On the test machine, mount
/usr/src and
/usr/obj via NFS. Then,
run shutdown now to go to single-user mode in
order to install the new kernel and world and run
mergemaster as usual. When done, reboot to
return to normal multi-user operations.
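The NFS mounts mentioned above might look like this on the test machine, assuming the build machine is reachable under the hypothetical hostname buildmachine:
&prompt.root; mount buildmachine:/usr/src /usr/src
&prompt.root; mount buildmachine:/usr/obj /usr/obj
After verifying that everything on the test machine is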
working properly, use the same procedure to install the new
software on each of the other machines in the build set.The same methodology can be used for the ports tree. The
first step is to share /usr/ports via
NFS to all the machines in the build set. To
configure /etc/make.conf to share
distfiles, set DISTDIR to a common shared
directory that is writable by whichever user root is mapped to by the
NFS mount. Each machine should set
WRKDIRPREFIX to a local build directory, if
ports are to be built locally. Alternatively, if the build system
is to build and distribute packages to the machines in the build
set, set PACKAGES on the build system to a
directory similar to DISTDIR.
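A sketch of the relevant /etc/make.conf entries follows; all of the paths are only examples and should be adapted to the local layout:
# On every machine in the build set: the NFS-shared distfile directory
DISTDIR=/usr/ports/distfiles
# On machines that build ports locally: a local work area
WRKDIRPREFIX=/usr/local/tmp/ports
# Only on the system that builds and distributes packages
PACKAGES=/usr/ports/packages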
Index: head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 49530)
+++ head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 49531)
@@ -1,5785 +1,5785 @@
Network ServersSynopsisThis chapter covers some of the more frequently used network
services on &unix; systems. This includes installing,
configuring, testing, and maintaining many different types of
network services. Example configuration files are included
throughout this chapter for reference.By the end of this chapter, readers will know:How to manage the inetd
daemon.How to set up the Network File System
(NFS).How to set up the Network Information Server
(NIS) for centralizing and sharing
user accounts.How to set &os; up to act as an LDAP
server or client.How to set up automatic network settings using
DHCP.How to set up a Domain Name Server
(DNS).How to set up the Apache
HTTP Server.How to set up a File Transfer Protocol
(FTP) server.How to set up a file and print server for &windows;
clients using Samba.How to synchronize the time and date, and set up a
time server using the Network Time Protocol
(NTP).How to set up iSCSI.This chapter assumes a basic knowledge of:/etc/rc scripts.Network terminology.Installation of additional third-party
software ().The inetd
Super-ServerThe &man.inetd.8; daemon is sometimes referred to as a
Super-Server because it manages connections for many services.
Instead of starting multiple applications, only the
inetd service needs to be started.
When a connection is received for a service that is managed by
inetd, it determines which program
the connection is destined for, spawns a process for that
program, and hands the program a socket. Using
inetd for services that are not
heavily used can reduce system load, when compared to running
each daemon individually in stand-alone mode.Primarily, inetd is used to
spawn other daemons, but several trivial protocols are handled
internally, such as chargen,
auth,
time,
echo,
discard, and
daytime.This section covers the basics of configuring
inetd.Configuration FileConfiguration of inetd is
done by editing /etc/inetd.conf. Each
line of this configuration file represents an application
which can be started by inetd. By
default, every line starts with a comment
(#), meaning that
inetd is not listening for any
applications. To configure inetd
to listen for an application's connections, remove the
# at the beginning of the line for that
application.After saving your edits, configure
inetd to start at system boot by
editing /etc/rc.conf:inetd_enable="YES"To start inetd now, so that it
listens for the service you configured, type:&prompt.root; service inetd startOnce inetd is started, it needs
to be notified whenever a modification is made to
/etc/inetd.conf:Reloading the inetd
Configuration File&prompt.root; service inetd reloadTypically, the default entry for an application does not
need to be edited beyond removing the #.
In some situations, it may be appropriate to edit the default
entry.As an example, this is the default entry for &man.ftpd.8;
over IPv4:ftp stream tcp nowait root /usr/libexec/ftpd ftpd -lThe seven columns in an entry are as follows:service-name
socket-type
protocol
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]
user[:group][/login-class]
server-program
server-program-argumentswhere:service-nameThe service name of the daemon to start. It must
correspond to a service listed in
/etc/services. This determines
which port inetd listens on
for incoming connections to that service. When using a
custom service, it must first be added to
/etc/services.
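For example, a hypothetical service named myservice listening on TCP port 9990 would first need an entry along these lines in /etc/services; both the name and the port number are made up for this illustration:
myservice 9990/tcp   # hypothetical custom daemon
socket-typeEither stream,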
dgram, raw, or
seqpacket. Use
stream for TCP connections and
dgram for
UDP services.protocolUse one of the following protocol names:Protocol NameExplanationtcp or tcp4TCP IPv4udp or udp4UDP IPv4tcp6TCP IPv6udp6UDP IPv6tcp46Both TCP IPv4 and IPv6udp46Both UDP IPv4 and
IPv6{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]In this field, wait or nowait
must be specified.
max-child,
max-connections-per-ip-per-minute, and
max-child-per-ip
are optional. wait|nowait indicates whether or
not the service is able to handle its own socket.
dgram socket types must use
wait, while
stream daemons, which are usually
multi-threaded, should use nowait.
wait usually hands off multiple sockets
to a single daemon, while nowait spawns
a child daemon for each new socket.The maximum number of child daemons
inetd may spawn is set by
max-child. For example, to limit ten
instances of the daemon, place a /10
after nowait. Specifying
/0 allows an unlimited number of
children.
max-connections-per-ip-per-minute
limits the number of connections from any particular
IP address per minute. Once the
limit is reached, further connections from this IP
address will be dropped until the end of the minute.
For example, a value of /10 would
limit any particular IP address to
ten connection attempts per minute.
max-child-per-ip limits the number of
child processes that can be started on behalf of any
single IP address at any moment.
These options can limit excessive resource consumption
and help to prevent Denial of Service attacks.An example can be seen in the default settings for
&man.fingerd.8;:finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -suserThe username the daemon
will run as. Daemons typically run as
root,
daemon, or
nobody.server-programThe full path to the daemon. If the daemon is a
service provided by inetd
internally, use internal.server-program-argumentsUsed to specify any command arguments to be passed
to the daemon on invocation. If the daemon is an
internal service, use
internal.Command-Line OptionsLike most server daemons, inetd
has a number of options that can be used to modify its
behavior. By default, inetd is
started with -wW -C 60. These options
enable TCP wrappers for all services, including internal
services, and prevent any IP address from
requesting any service more than 60 times per minute.To change the default options which are passed to
inetd, add an entry for
inetd_flags in
/etc/rc.conf. If
inetd is already running, restart
it with service inetd restart.
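For example, to keep the default TCP wrappers behavior but raise the per-minute connection limit, an /etc/rc.conf entry such as the following could be used; the value of 100 is purely illustrative:
inetd_flags="-wW -C 100"
The available rate limiting options are:-c maximumSpecify the default maximum number of simultaneous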
invocations of each service, where the default is
unlimited. May be overridden on a per-service basis by
using max-child in
/etc/inetd.conf.-C rateSpecify the default maximum number of times a
service can be invoked from a single
IP address per minute. May be
overridden on a per-service basis by using
max-connections-per-ip-per-minute in
/etc/inetd.conf.-R rateSpecify the maximum number of times a service can be
invoked in one minute, where the default is
256. A rate of 0
allows an unlimited number.-s maximumSpecify the maximum number of times a service can be
invoked from a single IP address at
any one time, where the default is unlimited. May be
overridden on a per-service basis by using
max-child-per-ip in
/etc/inetd.conf.Additional options are available. Refer to &man.inetd.8;
for the full list of options.Security ConsiderationsMany of the daemons which can be managed by
inetd are not security-conscious.
Some daemons, such as fingerd, can
provide information that may be useful to an attacker. Only
enable the services which are needed and monitor the system
for excessive connection attempts.
max-connections-per-ip-per-minute,
max-child and
max-child-per-ip can be used to limit such
attacks.By default, TCP wrappers is enabled. Consult
&man.hosts.access.5; for more information on placing TCP
restrictions on various
inetd invoked daemons.Network File System (NFS)TomRhodesReorganized and enhanced by BillSwingleWritten by NFS&os; supports the Network File System
(NFS), which allows a server to share
directories and files with clients over a network. With
NFS, users and programs can access files on
remote systems as if they were stored locally.NFS has many practical uses. Some of
the more common uses include:Data that would otherwise be duplicated on each client
can be kept in a single location and accessed by clients
on the network.Several clients may need access to the
/usr/ports/distfiles directory.
Sharing that directory allows for quick access to the
source files without having to download them to each
client.On large networks, it is often more convenient to
configure a central NFS server on which
all user home directories are stored. Users can log into
a client anywhere on the network and have access to their
home directories.Administration of NFS exports is
simplified. For example, there is only one file system
where security or backup policies must be set.Removable media storage devices can be used by other
machines on the network. This reduces the number of devices
throughout the network and provides a centralized location
to manage their security. It is often more convenient to
install software on multiple machines from a centralized
installation media.NFS consists of a server and one or more
clients. The client remotely accesses the data that is stored
on the server machine. In order for this to function properly,
a few processes have to be configured and running.These daemons must be running on the server:NFSserverfile serverUNIX clientsrpcbindmountdnfsdDaemonDescriptionnfsdThe NFS daemon which services
requests from NFS clients.mountdThe NFS mount daemon which
carries out requests received from
nfsd.rpcbind This daemon allows NFS
clients to discover which port the
NFS server is using.Running &man.nfsiod.8; on the client can improve
performance, but is not required.Configuring the ServerNFSconfigurationThe file systems which the NFS server
will share are specified in /etc/exports.
Each line in this file specifies a file system to be exported,
which clients have access to that file system, and any access
options. When adding entries to this file, each exported file
system, its properties, and allowed hosts must occur on a
single line. If no clients are listed in the entry, then any
client on the network can mount that file system.NFSexport examplesThe following /etc/exports entries
demonstrate how to export file systems. The examples can be
modified to match the file systems and client names on the
reader's network. There are many options that can be used in
this file, but only a few will be mentioned here. See
&man.exports.5; for the full list of options.This example shows how to export
/cdrom to three hosts named
alpha,
bravo, and
charlie:/cdrom -ro alpha bravo charlieThe -ro flag makes the file system
read-only, preventing clients from making any changes to the
exported file system. This example assumes that the host
names are either in DNS or in
/etc/hosts. Refer to &man.hosts.5; if
the network does not have a DNS
server.The next example exports /home to
three clients by IP address. This can be
useful for networks without DNS or
/etc/hosts entries. The
-alldirs flag allows subdirectories to be
mount points. In other words, it will not automatically mount
the subdirectories, but will permit the client to mount the
directories that are required as needed./usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4This next example exports /a so that
two clients from different domains may access that file
system. The -maproot=root option allows root on the remote system to
write data on the exported file system as root. If
-maproot=root is not specified, the
client's root user
will be mapped to the server's nobody account and will be
subject to the access limitations defined for nobody./a -maproot=root host.example.com box.example.orgA client can only be specified once per file system. For
example, if /usr is a single file system,
these entries would be invalid as both entries specify the
same host:# Invalid when /usr is one file system
/usr/src client
/usr/ports clientThe correct format for this situation is to use one
entry:/usr/src /usr/ports clientThe following is an example of a valid export list, where
/usr and /exports
are local file systems:# Export src and ports to client01 and client02, but only
# client01 has root privileges on it
/usr/src /usr/ports -maproot=root client01
/usr/src /usr/ports client02
# The client machines have root and can mount anywhere
# on /exports. Anyone in the world can mount /exports/obj read-only
/exports -alldirs -maproot=root client01 client02
/exports/obj -roTo enable the processes required by the
NFS server at boot time, add these options
to /etc/rc.conf:rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"The server can be started now by running this
command:&prompt.root; service nfsd startWhenever the NFS server is started,
mountd also starts automatically.
However, mountd only reads
/etc/exports when it is started. To make
subsequent /etc/exports edits take effect
immediately, force mountd to reread
it:&prompt.root; service mountd reloadConfiguring the ClientTo enable NFS clients, set this option
in each client's /etc/rc.conf:nfs_client_enable="YES"Then, run this command on each NFS
client:&prompt.root; service nfsclient startThe client now has everything it needs to mount a remote
file system. In these examples, the server's name is
server and the client's name is
client. To mount
/home on
server to the
/mnt mount point on
client:NFSmounting&prompt.root; mount server:/home /mntThe files and directories in
/home will now be available on
client, in the
/mnt directory.To mount a remote file system each time the client boots,
add it to /etc/fstab:server:/home /mnt nfs rw 0 0Refer to &man.fstab.5; for a description of all available
options.LockingSome applications require file locking to operate
correctly. To enable locking, add these lines to
/etc/rc.conf on both the client and
server:rpc_lockd_enable="YES"
rpc_statd_enable="YES"Then start the applications:&prompt.root; service lockd start
&prompt.root; service statd startIf locking is not required on the server, the
NFS client can be configured to lock
locally by including -L when running
mount. Refer to &man.mount.nfs.8;
for further details.
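For instance, local-only locking could be requested at mount time with the nolockd option, the mount-option spelling of -L; the server name and mount point below are the same as in the earlier examples:
&prompt.root; mount -t nfs -o nolockd server:/home /mnt
Automating Mounts with &man.amd.8;WylieStilwellContributed by ChernLeeRewritten by amdautomatic mounter daemonThe automatic mounter daemon,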
amd, automatically mounts a remote
file system whenever a file or directory within that file
system is accessed. File systems that are inactive for a
period of time will be automatically unmounted by
amd.This daemon provides an alternative to modifying
/etc/fstab to list every client. It
operates by attaching itself as an NFS
server to the /host and
/net directories. When a file is
accessed within one of these directories,
amd looks up the corresponding
remote mount and automatically mounts it.
/net is used to mount an exported file
system from an IP address while
/host is used to mount an export from a
remote hostname. For instance, an attempt to access a file
within /host/foobar/usr would tell
amd to mount the
/usr export on the host
foobar.Mounting an Export with
amdIn this example, showmount -e shows
the exported file systems that can be mounted from the
NFS server,
foobar:&prompt.user; showmount -e foobar
Exports list on foobar:
/usr 10.10.10.0
/a 10.10.10.0
&prompt.user; cd /host/foobar/usrThe output from showmount shows
/usr as an export. When changing
directories to /host/foobar/usr,
amd intercepts the request and
attempts to resolve the hostname
foobar. If successful,
amd automatically mounts the
desired export.To enable amd at boot time, add
this line to /etc/rc.conf:amd_enable="YES"To start amd now:&prompt.root; service amd startCustom flags can be passed to
amd from the
amd_flags environment variable. By
default, amd_flags is set to:amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map"The default options with which exports are mounted are
defined in /etc/amd.map. Some of the
more advanced features of amd are
defined in /etc/amd.conf.Consult &man.amd.8; and &man.amd.conf.5; for more
information.Automating Mounts with &man.autofs.5;The &man.autofs.5; automount facility is supported
starting with &os; 10.1-RELEASE. To use the
automounter functionality in older versions of &os;, use
&man.amd.8; instead. This chapter only describes the
&man.autofs.5; automounter.autofsautomounter subsystemThe &man.autofs.5; facility is a common name for several
components that, together, allow for automatic mounting of
remote and local filesystems whenever a file or directory
within that file system is accessed. It consists of the
kernel component, &man.autofs.5;, and several userspace
applications: &man.automount.8;, &man.automountd.8; and
&man.autounmountd.8;. It serves as an alternative to
&man.amd.8; from previous &os; releases. Amd is still
provided for backward compatibility purposes, as the two use
different map formats; the one used by autofs is the same as
that of other SVR4 automounters, such as the ones in Solaris,
Mac OS X, and Linux.The &man.autofs.5; virtual filesystem is mounted on
specified mountpoints by &man.automount.8;, usually invoked
during boot.Whenever a process attempts to access a file within the
&man.autofs.5; mountpoint, the kernel will notify the
&man.automountd.8; daemon and pause the triggering process.
The &man.automountd.8; daemon will handle kernel requests by
finding the proper map and mounting the filesystem according
to it, then signal the kernel to release the blocked process. The
&man.autounmountd.8; daemon automatically unmounts automounted
filesystems after some time, unless they are still being
used.The primary autofs configuration file is
/etc/auto_master. It assigns individual
maps to top-level mounts. For an explanation of
auto_master and the map syntax, refer to
&man.auto.master.5;.
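As a rough sketch only, a custom entry in /etc/auto_master could assign a map file to a mount point, with the map itself using the usual key, options, and location columns; every name in this example is made up for illustration:
/backup   /etc/auto_backup   -nosuid
where /etc/auto_backup might contain a line such as:
daily   -ro   server:/export/backup/daily
There is a special automounter map mounted on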
/net. When a file is accessed within
this directory, &man.autofs.5; looks up the corresponding
remote mount and automatically mounts it. For instance, an
attempt to access a file within
/net/foobar/usr would tell
&man.automountd.8; to mount the /usr export from the host
foobar.Mounting an Export with &man.autofs.5;In this example, showmount -e shows
the exported file systems that can be mounted from the
NFS server,
foobar:&prompt.user; showmount -e foobar
Exports list on foobar:
/usr 10.10.10.0
/a 10.10.10.0
&prompt.user; cd /net/foobar/usrThe output from showmount shows
/usr as an export.
When changing directories to /net/foobar/usr,
&man.automountd.8; intercepts the request and attempts to
resolve the hostname foobar. If successful,
&man.automountd.8; automatically mounts the source
export.To enable &man.autofs.5; at boot time, add this line to
/etc/rc.conf:autofs_enable="YES"Then &man.autofs.5; can be started by running:&prompt.root; service automount start
&prompt.root; service automountd start
&prompt.root; service autounmountd startThe &man.autofs.5; map format is the same as in other
operating systems. Information about this format from other
sources can be useful, like the Mac
OS X document.Consult the &man.automount.8;, &man.automountd.8;,
&man.autounmountd.8;, and &man.auto.master.5; manual pages for
more information.Network Information System
(NIS)NISSolarisHP-UXAIXLinuxNetBSDOpenBSDyellow pagesNISNetwork Information System (NIS) is
designed to centralize administration of &unix;-like systems
such as &solaris;, HP-UX, &aix;, Linux, NetBSD, OpenBSD, and
&os;. NIS was originally known as Yellow
Pages but the name was changed due to trademark issues. This
is the reason why NIS commands begin with
yp.NISdomainsNIS is a Remote Procedure Call
(RPC)-based client/server system that allows
a group of machines within an NIS domain to
share a common set of configuration files. This permits a
system administrator to set up NIS client
systems with only minimal configuration data and to add, remove,
or modify configuration data from a single location.&os; uses version 2 of the NIS
protocol.NIS Terms and ProcessesTable 28.1 summarizes the terms and important processes
used by NIS:rpcbindportmap
NIS TerminologyTermDescriptionNIS domain nameNIS servers and clients share
an NIS domain name. Typically,
this name does not have anything to do with
DNS.&man.rpcbind.8;This service enables RPC and
must be running in order to run an
NIS server or act as an
NIS client.&man.ypbind.8;This service binds an NIS
client to its NIS server. It will
take the NIS domain name and use
RPC to connect to the server. It
is the core of client/server communication in an
NIS environment. If this service
is not running on a client machine, it will not be
able to access the NIS
server.&man.ypserv.8;This is the process for the
NIS server. If this service stops
running, the server will no longer be able to respond
to NIS requests, so hopefully there
is a slave server to take over. Some non-&os; clients
will not try to reconnect using a slave server and the
ypbind process may need to
be restarted on these
clients.&man.rpc.yppasswdd.8;This process only runs on
NIS master servers. This daemon
allows NIS clients to change their
NIS passwords. If this daemon is
not running, users will have to log in to the
NIS master server and change their
passwords there.
Machine TypesNISmaster serverNISslave serverNISclientThere are three types of hosts in an
NIS environment:NIS master serverThis server acts as a central repository for host
configuration information and maintains the
authoritative copy of the files used by all of the
NIS clients. The
passwd, group,
and other various files used by NIS
clients are stored on the master server. While it is
possible for one machine to be an NIS
master server for more than one NIS
domain, this type of configuration will not be covered in
this chapter as it assumes a relatively small-scale
NIS environment.NIS slave serversNIS slave servers maintain copies
of the NIS master's data files in
order to provide redundancy. Slave servers also help to
balance the load of the master server as
NIS clients always attach to the
NIS server which responds
first.NIS clientsNIS clients authenticate
against the NIS server during log
on.Information in many files can be shared using
NIS. The
master.passwd,
group, and hosts
files are commonly shared via NIS.
Whenever a process on a client needs information that would
normally be found in these files locally, it makes a query to
the NIS server that it is bound to
instead.Planning ConsiderationsThis section describes a sample NIS
environment which consists of 15 &os; machines with no
centralized point of administration. Each machine has its own
/etc/passwd and
/etc/master.passwd. These files are kept
in sync with each other only through manual intervention.
Currently, when a user is added to the lab, the process must
be repeated on all 15 machines.The configuration of the lab will be as follows:Machine nameIP addressMachine roleellington10.0.0.2NIS mastercoltrane10.0.0.3NIS slavebasie10.0.0.4Faculty workstationbird10.0.0.5Client machinecli[1-11]10.0.0.[6-17]Other client machinesIf this is the first time an NIS
scheme is being developed, it should be thoroughly planned
ahead of time. Regardless of network size, several decisions
need to be made as part of the planning process.Choosing a NIS Domain NameNISdomain nameWhen a client broadcasts its requests for info, it
includes the name of the NIS domain that
it is part of. This is how multiple servers on one network
can tell which server should answer which request. Think of
the NIS domain name as the name for a
group of hosts.Some organizations choose to use their Internet domain
name for their NIS domain name. This is
not recommended as it can cause confusion when trying to
debug network problems. The NIS domain
name should be unique within the network and it is helpful
if it describes the group of machines it represents. For
example, the Art department at Acme Inc. might be in the
acme-art NIS domain. This
example will use the domain name
test-domain.However, some non-&os; operating systems require the
NIS domain name to be the same as the
Internet domain name. If one or more machines on the
network have this restriction, the Internet domain name
must be used as the
NIS domain name.Physical Server RequirementsThere are several things to keep in mind when choosing a
machine to use as a NIS server. Since
NIS clients depend upon the availability
of the server, choose a machine that is not rebooted
frequently. The NIS server should
ideally be a stand-alone machine whose sole purpose is to be
an NIS server. If the network is not
heavily used, it is acceptable to put the
NIS server on a machine running other
services. However, if the NIS server
becomes unavailable, it will adversely affect all
NIS clients.Configuring the NIS Master
Server
- The canonical copies of all NIS files
+ The canonical copies of all NIS files
are stored on the master server. The databases used to store
the information are called NIS maps. In
&os;, these maps are stored in
/var/yp/[domainname] where
[domainname] is the name of the
NIS domain. Since multiple domains are
supported, it is possible to have several directories, one for
each domain. Each domain will have its own independent set of
maps.NIS master and slave servers handle all
NIS requests through &man.ypserv.8;. This
daemon is responsible for receiving incoming requests from
NIS clients, translating the requested
domain and map name to a path to the corresponding database
file, and transmitting data from the database back to the
client.NISserver configurationSetting up a master NIS server can be
relatively straightforward, depending on environmental needs.
Since &os; provides built-in NIS support,
it only needs to be enabled by adding the following lines to
/etc/rc.conf:nisdomainname="test-domain"
nis_server_enable="YES"
nis_yppasswdd_enable="YES" This line sets the NIS domain name
to test-domain.This automates the startup of the
NIS server processes when the system
boots.This enables the &man.rpc.yppasswdd.8; daemon so that
users can change their NIS password
from a client machine.Care must be taken in a multi-server domain where the
server machines are also NIS clients. It
is generally a good idea to force the servers to bind to
themselves rather than allowing them to broadcast bind
requests and possibly become bound to each other. Strange
failure modes can result if one server goes down and others
are dependent upon it. Eventually, all the clients will time
out and attempt to bind to other servers, but the delay
involved can be considerable and the failure mode is still
present since the servers might bind to each other all over
again.A server that is also a client can be forced to bind to a
particular server by adding these additional lines to
/etc/rc.conf:nis_client_enable="YES" # run client stuff as well
nis_client_flags="-S NIS domain,server"After saving the edits, type
/etc/netstart to restart the network and
apply the values defined in /etc/rc.conf.
Before initializing the NIS maps, start
&man.ypserv.8;:&prompt.root; service ypserv startInitializing the NIS MapsNISmapsNIS maps are generated from the
configuration files in /etc on the
NIS master, with one exception:
/etc/master.passwd. This is to prevent
the propagation of passwords to all the servers in the
NIS domain. Therefore, before the
NIS maps are initialized, configure the
primary password files:&prompt.root; cp /etc/master.passwd /var/yp/master.passwd
&prompt.root; cd /var/yp
&prompt.root; vi master.passwdIt is advisable to remove all entries for system
accounts as well as any user accounts that do not need to be
propagated to the NIS clients, such as
the root and any
other administrative accounts.Ensure that the
/var/yp/master.passwd is neither
group nor world readable by setting its permissions to
600.After completing this task, initialize the
NIS maps. &os; includes the
&man.ypinit.8; script to do this. When generating maps
for the master server, include -m and
specify the NIS domain name:ellington&prompt.root; ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
rod.darktech.org is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server : ellington
next host to add: coltrane
next host to add: ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct? [y/n: y] y
[..output from map generation..]
NIS Map update completed.
ellington has been setup as an YP master server without any errors.This will create /var/yp/Makefile
from /var/yp/Makefile.dist. By
default, this file assumes that the environment has a
single NIS server with only &os; clients.
Since test-domain has a slave server,
edit this line in /var/yp/Makefile so
that it begins with a comment
(#):NOPUSH = "True"Adding New UsersEvery time a new user is created, the user account must
be added to the master NIS server and the
NIS maps rebuilt. Until this occurs, the
new user will not be able to log in anywhere except on the
NIS master. For example, to add the new
user jsmith to the
test-domain domain, run these commands on
the master server:&prompt.root; pw useradd jsmith
&prompt.root; cd /var/yp
&prompt.root; make test-domainThe user could also be added using adduser
jsmith instead of pw useradd
jsmith.Setting up a NIS Slave ServerNISslave serverTo set up an NIS slave server, log on
to the slave server and edit /etc/rc.conf
as for the master server. Do not generate any
NIS maps, as these already exist on the
master server. When running ypinit on the
slave server, use -s (for slave) instead of
-m (for master). This option requires the
name of the NIS master in addition to the
domain name, as seen in this example:coltrane&prompt.root; ypinit -s ellington test-domain
Server Type: SLAVE Domain: test-domain Master: ellington
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred
coltrane has been setup as an YP slave server without any errors.
Remember to update map ypservers on ellington.This will generate a directory on the slave server called
/var/yp/test-domain which contains copies
of the NIS master server's maps. Adding
these /etc/crontab entries on each slave
server will force the slaves to sync their maps with the maps
on the master server:20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuidThese entries are not mandatory because the master server
automatically attempts to push any map changes to its slaves.
However, since clients may depend upon the slave server to
provide correct password information, it is recommended to
force frequent password map updates. This is especially
important on busy networks where map updates might not always
complete.To finish the configuration, run
/etc/netstart on the slave server in order
to start the NIS services.Setting Up an NIS ClientAn NIS client binds to an
NIS server using &man.ypbind.8;. This
daemon broadcasts RPC requests on the local network. These
requests specify the domain name configured on the client. If
an NIS server in the same domain receives
one of the broadcasts, it will respond to
ypbind, which will record the
server's address. If there are several servers available,
the client will use the address of the first server to respond
and will direct all of its NIS requests to
that server. The client will automatically
ping the server on a regular basis
to make sure it is still available. If it fails to receive a
reply within a reasonable amount of time,
ypbind will mark the domain as
unbound and begin broadcasting again in the hopes of locating
another server.NISclient configurationTo configure a &os; machine to be an
NIS client:Edit /etc/rc.conf and add the
following lines in order to set the
NIS domain name and start
&man.ypbind.8; during network startup:nisdomainname="test-domain"
nis_client_enable="YES"To import all possible password entries from the
NIS server, use
vipw to remove all user accounts
except one from /etc/master.passwd.
When removing the accounts, keep in mind that at least one
local account should remain and this account should be a
member of wheel. If there is a
problem with NIS, this local account
can be used to log in remotely, become the superuser, and
fix the problem. Before saving the edits, add the
following line to the end of the file:+:::::::::This line configures the client to provide anyone with
a valid account in the NIS server's
password maps an account on the client. There are many
ways to configure the NIS client by
modifying this line. One method is described in . For more detailed
reading, refer to the book
Managing NFS and NIS, published by
O'Reilly Media.To import all possible group entries from the
NIS server, add this line to
/etc/group:+:*::To start the NIS client immediately,
execute the following commands as the superuser:&prompt.root; /etc/netstart
&prompt.root; service ypbind startAfter completing these steps, running
ypcat passwd on the client should show
the server's passwd map.NIS SecuritySince RPC is a broadcast-based service,
any system running ypbind within
the same domain can retrieve the contents of the
NIS maps. To prevent unauthorized
transactions, &man.ypserv.8; supports a feature called
securenets which can be used to restrict access
to a given set of hosts. By default, this information is
stored in /var/yp/securenets, unless
&man.ypserv.8; is started with an option specifying an
alternate path. This file contains entries that consist of a
network specification and a network mask separated by white
space. Lines starting with # are
considered to be comments. A sample
securenets might look like this:# allow connections from local host -- mandatory
127.0.0.1 255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0 255.255.240.0If &man.ypserv.8; receives a request from an address that
matches one of these rules, it will process the request
normally. If the address fails to match a rule, the request
will be ignored and a warning message will be logged. If the
securenets does not exist,
ypserv will allow connections from any
host. TCP Wrapper is an alternate mechanism
for providing access control instead of
securenets. While either access control
mechanism adds some security, they are both vulnerable to
IP spoofing attacks. All
NIS-related traffic should be blocked at
the firewall.Servers using securenets
may fail to serve legitimate NIS clients
with archaic TCP/IP implementations. Some of these
implementations set all host bits to zero when doing
broadcasts or fail to observe the subnet mask when
calculating the broadcast address. While some of these
problems can be fixed by changing the client configuration,
other problems may force the retirement of these client
systems or the abandonment of
securenets.TCP WrapperThe use of TCP Wrapper
increases the latency of the NIS server.
The additional delay may be long enough to cause timeouts in
client programs, especially in busy networks with slow
NIS servers. If one or more clients suffer
from latency, convert those clients into
NIS slave servers and force them to bind to
themselves.Barring Some UsersIn this example, the basie
system is a faculty workstation within the
NIS domain. The
passwd map on the master
NIS server contains accounts for both
faculty and students. This section demonstrates how to
allow faculty logins on this system while refusing student
logins.To prevent specified users from logging on to a system,
even if they are present in the NIS
database, use vipw to add
-username with
the correct number of colons towards the end of
/etc/master.passwd on the client,
where username is the username of
a user to bar from logging in. The line with the blocked
user must be before the + line that
allows NIS users. In this example,
bill is barred
from logging on to basie:basie&prompt.root; cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin
operator:*:2:5::0:0:System &:/:/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin
-bill:::::::::
+:::::::::
basie&prompt.root;Using NetgroupsnetgroupsBarring specified users from logging on to individual
systems becomes unscalable on larger networks and quickly
loses the main benefit of NIS:
centralized administration.Netgroups were developed to handle large, complex networks
with hundreds of users and machines. Their use is comparable
to &unix; groups, where the main difference is the lack of a
numeric ID and the ability to define a netgroup by including
both user accounts and other netgroups.To expand on the example used in this chapter, the
NIS domain will be extended to add the
users and systems shown in Tables 28.2 and 28.3:
Additional UsersUser Name(s)Descriptionalpha,
betaIT department employeescharlie, deltaIT department apprenticesecho,
foxtrott,
golf,
...employeesable,
baker,
...interns
Additional SystemsMachine Name(s)Descriptionwar,
death,
famine,
pollutionOnly IT employees are allowed to log onto these
servers.pride,
greed,
envy,
wrath,
lust,
slothAll members of the IT department are allowed to
log on to these servers.one,
two,
three,
four,
...Ordinary workstations used by
employees.trashcanA very old machine without any critical data.
Even interns are allowed to use this system.
When using netgroups to configure this scenario, each user
is assigned to one or more netgroups and logins are then
allowed or forbidden for all members of the netgroup. When
adding a new machine, login restrictions must be defined for
all netgroups. When a new user is added, the account must be
added to one or more netgroups. If the
NIS setup is planned carefully, only one
central configuration file needs modification to grant or deny
access to machines.The first step is the initialization of the
NIS netgroup map. In
&os;, this map is not created by default. On the
NIS master server, use an editor to create
a map named /var/yp/netgroup.This example creates four netgroups to represent IT
employees, IT apprentices, employees, and interns:IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
USERS (,echo,test-domain) (,foxtrott,test-domain) \
(,golf,test-domain)
INTERNS (,able,test-domain) (,baker,test-domain)Each entry configures a netgroup. The first column in an
entry is the name of the netgroup. Each set of brackets
represents either a group of one or more users or the name of
another netgroup. When specifying a user, the three
comma-delimited fields inside each group represent:The name of the host(s) where the other fields
representing the user are valid. If a hostname is not
specified, the entry is valid on all hosts.The name of the account that belongs to this
netgroup.The NIS domain for the account.
Accounts may be imported from other NIS
domains into a netgroup.If a group contains multiple users, separate each user
with whitespace. Additionally, each field may contain
wildcards. See &man.netgroup.5; for details.netgroupsNetgroup names longer than 8 characters should not be
used. The names are case sensitive and using capital letters
for netgroup names is an easy way to distinguish between user,
machine and netgroup names.Some non-&os; NIS clients cannot
handle netgroups containing more than 15 entries. This
limit may be circumvented by creating several sub-netgroups
with 15 users or fewer and a real netgroup consisting of the
sub-netgroups, as seen in this example:BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]
BIGGRP2 (,joe16,domain) (,joe17,domain) [...]
BIGGRP3 (,joe31,domain) (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3Repeat this process if more than 225 (15 times 15) users
exist within a single netgroup.To activate and distribute the new
NIS map:ellington&prompt.root; cd /var/yp
ellington&prompt.root; makeThis will generate the three NIS maps
netgroup,
netgroup.byhost and
netgroup.byuser. Use the map key option
of &man.ypcat.1; to check if the new NIS
maps are available:ellington&prompt.user; ypcat -k netgroup
ellington&prompt.user; ypcat -k netgroup.byhost
ellington&prompt.user; ypcat -k netgroup.byuserThe output of the first command should resemble the
contents of /var/yp/netgroup. The second
command only produces output if host-specific netgroups were
created. The third command is used to get the list of
netgroups for a user.To configure a client, use &man.vipw.8; to specify the
name of the netgroup. For example, on the server named
war, replace this line:+:::::::::with+@IT_EMP:::::::::This specifies that only the users defined in the netgroup
IT_EMP will be imported into this system's
password database and only those users are allowed to log in to
this system.This configuration also applies to the
~ function of the shell and all routines
which convert between user names and numerical user IDs. In
other words,
cd ~user will
not work, ls -l will show the numerical ID
instead of the username, and find . -user joe
-print will fail with the message
No such user. To fix this, import all
user entries without allowing them to log in to the servers.
This can be achieved by adding an extra line:+:::::::::/sbin/nologinThis line configures the client to import all entries but
to replace the shell in those entries with
/sbin/nologin.Make sure that the extra line is placed
after +@IT_EMP:::::::::. Otherwise, all user
accounts imported from NIS will have
/sbin/nologin as their login
shell and no one will be able to log in to the system.To configure the less important servers, replace the old
+::::::::: on the servers with these
lines:+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/sbin/nologinThe corresponding lines for the workstations
would be:+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/sbin/nologinNIS supports the creation of netgroups from other
netgroups which can be useful if the policy regarding user
access changes. One possibility is the creation of role-based
netgroups. For example, one might create a netgroup called
BIGSRV to define the login restrictions for
the important servers, another netgroup called
SMALLSRV for the less important servers,
and a third netgroup called USERBOX for the
workstations. Each of these netgroups contains the netgroups
that are allowed to log on to these machines. The new
entries for the NIS
netgroup map would look like this:BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERSThis method of defining login restrictions works
reasonably well when it is possible to define groups of
machines with identical restrictions. Unfortunately, this is
the exception and not the rule. Most of the time, the ability
to define login restrictions on a per-machine basis is
required.Machine-specific netgroup definitions are another
possibility to deal with the policy changes. In this
scenario, the /etc/master.passwd of each
system contains two lines starting with +.
The first line adds a netgroup with the accounts allowed to
login onto this machine and the second line adds all other
accounts with /sbin/nologin as shell. It
is recommended to use the ALL-CAPS version of
the hostname as the name of the netgroup:+@BOXNAME:::::::::
+:::::::::/sbin/nologinOnce this task is completed on all the machines, there is
no longer a need to modify the local versions of
/etc/master.passwd ever again. All
further changes can be handled by modifying the
NIS map. Here is an example of a possible
netgroup map for this scenario:# Define groups of users first
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
DEPT1 (,echo,test-domain) (,foxtrott,test-domain)
DEPT2 (,golf,test-domain) (,hotel,test-domain)
DEPT3 (,india,test-domain) (,juliet,test-domain)
ITINTERN (,kilo,test-domain) (,lima,test-domain)
D_INTERNS (,able,test-domain) (,baker,test-domain)
#
# Now, define some groups based on roles
USERS DEPT1 DEPT2 DEPT3
BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR BIGSRV
FAMINE BIGSRV
# User india needs access to this server
POLLUTION BIGSRV (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH IT_EMP
#
# The anti-virus-machine mentioned above
ONE SECURITY
#
# Restrict a machine to a single user
TWO (,hotel,test-domain)
# [...more groups to follow]It may not always be advisable
to use machine-based netgroups. When deploying a few
dozen or even hundreds of systems,
role-based netgroups can be used instead of machine-based netgroups
to keep the size of the NIS map within
reasonable limits.Password FormatsNISpassword formatsNIS requires that all hosts within an
NIS domain use the same format for
encrypting passwords. If users have trouble authenticating on
an NIS client, it may be due to a differing
password format. In a heterogeneous network, the format must
be supported by all operating systems, where
DES is the lowest common standard.To check which format a server or client is using, look
at this section of
/etc/login.conf:default:\
:passwd_format=des:\
:copyright=/etc/COPYRIGHT:\
[Further entries elided]In this example, the system is using the
DES format. Other possible values are
blf for Blowfish and md5
for MD5 encrypted passwords.If the format on a host needs to be edited to match the
one being used in the NIS domain, the
login capability database must be rebuilt after saving the change.
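For example, assuming the NIS domain uses Blowfish password hashes, the passwd_format entry in /etc/login.conf would be changed to:

:passwd_format=blf:\

Then rebuild the login capability database: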
&prompt.root; cap_mkdb /etc/login.confThe format of passwords for existing user accounts will
not be updated until each user changes their password
after the login capability database is
rebuilt.Lightweight Directory Access Protocol
(LDAP)TomRhodesWritten by LDAPThe Lightweight Directory Access Protocol
(LDAP) is an application layer protocol used
to access, modify, and authenticate objects using a distributed
directory information service. Think of it as a phone or record
book which stores several levels of hierarchical, homogeneous
information. It is used in Active Directory and
OpenLDAP networks and allows users to
access several levels of internal information utilizing a
single account. For example, email authentication, pulling
employee contact information, and internal website
authentication might all make use of a single user account in
the LDAP server's record base.This section provides a quick start guide for configuring an
LDAP server on a &os; system. It assumes
that the administrator already has a design plan which includes
the type of information to store, what that information will be
used for, which users should have access to that information,
and how to secure this information from unauthorized
access.LDAP Terminology and StructureLDAP uses several terms which should be
understood before starting the configuration. All directory
entries consist of a group of
attributes. Each of these attribute
sets contains a unique identifier known as a
Distinguished Name
(DN) which is normally built from several
other attributes such as the common or
Relative Distinguished Name
(RDN). Similar to how directories have
absolute and relative paths, consider a DN
as an absolute path and the RDN as the
relative path.An example LDAP entry looks like the
following. This example searches for the entry for the
specified user account (uid),
organizational unit (ou), and organization
(o):&prompt.user; ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1This example entry shows the values for the
dn, mail,
cn, uid, and
telephoneNumber attributes. The
cn attribute is the
RDN.More information about LDAP and its
terminology can be found at http://www.openldap.org/doc/admin24/intro.html.Configuring an LDAP ServerLDAP Server&os; does not provide a built-in LDAP
server. Begin the configuration by installing the net/openldap24-server package or port.
Since the port has many configurable options, it is
recommended that the default options are reviewed to see if
the package is sufficient, and to instead compile the port if
any options should be changed. In most cases, the defaults
are fine. However, if SQL support is needed, this option must
be enabled and the port compiled using the instructions in
.Next, create the directories to hold the data and to store
the certificates:&prompt.root; mkdir /var/db/openldap-data
&prompt.root; mkdir /usr/local/etc/openldap/privateCopy over the database configuration file:&prompt.root; cp /usr/local/etc/openldap/DB_CONFIG.example /var/db/openldap-data/DB_CONFIGThe next phase is to configure the certificate authority.
The following commands must be executed from
/usr/local/etc/openldap/private. This is
important as the file permissions need to be restrictive and
users should not have access to these files. To create the
certificate authority, start with this command and follow the
prompts:&prompt.root; openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crtThe entries for the prompts may be generic
except for the
Common Name. This entry must be
different from the system hostname. If
this will be a self-signed certificate, prefix the hostname
with CA for certificate authority.The next task is to create a certificate signing request
and a private key. Input this command and follow the
prompts:&prompt.root; openssl req -days 365 -nodes -new -keyout server.key -out server.csrDuring the certificate generation process, be sure to
correctly set the Common Name attribute.
Once complete, sign the key:&prompt.root; openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserialThe final part of the certificate generation process is to
generate and sign the client certificates:&prompt.root; openssl req -days 365 -nodes -new -keyout client.key -out client.csr
&prompt.root; openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.keyRemember to use the same Common Name
attribute when prompted. When finished, ensure that a total
of eight (8) new files have been generated through the
preceding commands. If so, the next step is to edit
/usr/local/etc/openldap/slapd.conf and
add the following options:TLSCipherSuite HIGH:MEDIUM:+SSLv3
TLSCertificateFile /usr/local/etc/openldap/server.crt
TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key
TLSCACertificateFile /usr/local/etc/openldap/ca.crtThen, edit
/usr/local/etc/openldap/ldap.conf and add
the following lines:TLS_CACERT /usr/local/etc/openldap/ca.crt
TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3While editing this file, uncomment the following entries
and set them to the desired values: BASE,
URI, SIZELIMIT, and
TIMELIMIT. Set the URI to
contain ldap:// and
ldaps://. Then, add two entries pointing to
the certificate authority. When finished, the entries should
look similar to the following:BASE dc=example,dc=com
URI ldap:// ldaps://
SIZELIMIT 12
TIMELIMIT 15
TLS_CACERT /usr/local/etc/openldap/ca.crt
TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3The default password for the server should then be
changed:&prompt.root; slappasswd -h "{SHA}" >> /usr/local/etc/openldap/slapd.confThis command will prompt for the password and, if the
process does not fail, a password hash will be added to the
end of slapd.conf. Several hashing
formats are supported. Refer to the manual page for
slappasswd for more information.Next, edit
/usr/local/etc/openldap/slapd.conf and
add the following lines:password-hash {sha}
allow bind_v2The suffix in this file must be updated
to match the BASE used in
/usr/local/etc/openldap/ldap.conf and
rootdn should also be set. A recommended
value for rootdn is something like
cn=Manager,dc=example,dc=com. Before saving this file, place
rootpw in front of the password output
from slappasswd and delete the old
rootpw. The end result should
look similar to this:TLSCipherSuite HIGH:MEDIUM:+SSLv3
TLSCertificateFile /usr/local/etc/openldap/server.crt
TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key
TLSCACertificateFile /usr/local/etc/openldap/ca.crt
rootpw {SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=Finally, enable the OpenLDAP
service in /etc/rc.conf and set the
URI:slapd_enable="YES"
slapd_flags="-4 -h ldaps:///"At this point the server can be started and tested:&prompt.root; service slapd startIf everything is configured correctly, a search of the
directory should show a successful connection with a single
response as in this example:&prompt.root; ldapsearch -Z
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 3
result: 32 No such object
# numResponses: 1If the command fails and the configuration looks
correct, stop the slapd service and
restart it with debugging options:&prompt.root; service slapd stop
&prompt.root; /usr/local/libexec/slapd -d -1Once the service is responding, the directory can be
populated using ldapadd. In this example,
a file named import.ldif containing the initial entries is first created. Each
entry should use the following format:dn: dc=example,dc=com
objectclass: dcObject
objectclass: organization
o: Example
dc: Example
dn: cn=Manager,dc=example,dc=com
objectclass: organizationalRole
cn: ManagerTo import this file, specify the file name. The following
command will prompt for the password specified earlier and the
output should look something like this:&prompt.root; ldapadd -Z -D "cn=Manager,dc=example,dc=com" -W -f import.ldif
Enter LDAP Password:
adding new entry "dc=example,dc=com"
adding new entry "cn=Manager,dc=example,dc=com"Verify the data was added by issuing a search on the
server using ldapsearch:&prompt.user; ldapsearch -Z
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example
dc: Example
# Manager, example.com
dn: cn=Manager,dc=example,dc=com
objectClass: organizationalRole
cn: Manager
# search result
search: 3
result: 0 Success
# numResponses: 3
# numEntries: 2At this point, the server should be configured and
functioning properly.Dynamic Host Configuration Protocol
(DHCP)Dynamic Host Configuration ProtocolDHCPInternet Systems Consortium (ISC)The Dynamic Host Configuration Protocol
(DHCP) allows a system to connect to a
network in order to be assigned the necessary addressing
information for communication on that network. &os; includes
the OpenBSD version of dhclient which is used
by the client to obtain the addressing information. &os; does
not install a DHCP server, but several
servers are available in the &os; Ports Collection. The
DHCP protocol is fully described in RFC
2131.
Informational resources are also available at isc.org/downloads/dhcp/.This section describes how to use the built-in
DHCP client. It then describes how to
install and configure a DHCP server.In &os;, the &man.bpf.4; device is needed by both the
DHCP server and DHCP
client. This device is included in the
GENERIC kernel that is installed with
&os;. Users who prefer to create a custom kernel need to keep
this device if DHCP is used.It should be noted that bpf also
allows privileged users to run network packet sniffers on
that system.Configuring a DHCP ClientDHCP client support is included in the
&os; installer, making it easy to configure a newly installed
system to automatically receive its networking addressing
information from an existing DHCP server.
Refer to for examples of
network configuration.UDPWhen dhclient is executed on the client
machine, it begins broadcasting requests for configuration
information. By default, these requests use
UDP port 68. The server replies on
UDP port 67, giving the client an
IP address and other relevant network
information such as a subnet mask, default gateway, and
DNS server addresses. This information is
in the form of a DHCP
lease and is valid for a configurable time.
This allows stale IP addresses for clients
no longer connected to the network to automatically be reused.
DHCP clients can obtain a great deal of
information from the server. An exhaustive list may be found
in &man.dhcp-options.5;.By default, when a &os; system boots, its
DHCP client runs in the background, or
asynchronously. Other startup scripts
continue to run while the DHCP process
completes, which speeds up system startup.Background DHCP works well when the
DHCP server responds quickly to the
client's requests. However, DHCP may take
a long time to complete on some systems. If network services
attempt to run before DHCP has assigned the
network addressing information, they will fail. Using
DHCP in synchronous
mode prevents this problem as it pauses startup until the
DHCP configuration has completed.This line in /etc/rc.conf is used to
configure background or asynchronous mode:ifconfig_fxp0="DHCP"This line may already exist if the system was configured
to use DHCP during installation. Replace
the fxp0 shown in these examples
with the name of the interface to be dynamically configured,
as described in .To instead configure the system to use synchronous mode,
and to pause during startup while DHCP
completes, use
SYNCDHCP:ifconfig_fxp0="SYNCDHCP"Additional client options are available. Search for
dhclient in &man.rc.conf.5; for
details.DHCPconfiguration filesThe DHCP client uses the following
files:/etc/dhclient.confThe configuration file used by
dhclient. Typically, this file
contains only comments as the defaults are suitable for
most clients. This configuration file is described in
&man.dhclient.conf.5;, and a brief example appears after this list./sbin/dhclientMore information about the command itself can
be found in &man.dhclient.8;./sbin/dhclient-scriptThe
&os;-specific DHCP client configuration
script. It is described in &man.dhclient-script.8;, but
should not need any user modification to function
properly./var/db/dhclient.leases.interfaceThe DHCP client keeps a database of
valid leases in this file, which is written as a log and
is described in &man.dhclient.leases.5;.
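As a brief illustration of &man.dhclient.conf.5; syntax, the hypothetical entries below prepend a local caching resolver to the name servers offered by the DHCP server and override the search domain for a single interface. The interface name and values are placeholders, and everything not listed keeps its server-supplied default:

interface "fxp0" {
	# try the local resolver before the servers handed out by DHCP
	prepend domain-name-servers 127.0.0.1;
	# always use this search domain, regardless of what the server offers
	supersede domain-name "example.com";
}

prepend and supersede are only two of the statements documented in &man.dhclient.conf.5;; most systems need none of them.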
Installing and Configuring a DHCP ServerThis section demonstrates how to configure a &os; system
to act as a DHCP server using the Internet
Systems Consortium (ISC) implementation of
the DHCP server. This implementation and
its documentation can be installed using the
net/isc-dhcp43-server package or
port.DHCPserverDHCPinstallationThe installation of
net/isc-dhcp43-server installs a sample
configuration file. Copy
/usr/local/etc/dhcpd.conf.example to
/usr/local/etc/dhcpd.conf and make any
edits to this new file.DHCPdhcpd.confThe configuration file is comprised of declarations for
subnets and hosts which define the information that is
provided to DHCP clients. For example,
these lines configure the following:option domain-name "example.org";
option domain-name-servers ns1.example.org;
option subnet-mask 255.255.255.0;
default-lease-time 600;
max-lease-time 72400;
ddns-update-style none;
subnet 10.254.239.0 netmask 255.255.255.224 {
range 10.254.239.10 10.254.239.20;
option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;
}
host fantasia {
hardware ethernet 08:00:07:26:c0:a5;
fixed-address fantasia.fugue.com;
}This option specifies the default search domain that
will be provided to clients. Refer to
&man.resolv.conf.5; for more information.This option specifies a comma separated list of
DNS servers that the client should use.
They can be listed by their Fully Qualified Domain Names
(FQDN), as seen in the example, or by
their IP addresses.The subnet mask that will be provided to
clients.The default lease expiry time in seconds. A client
can be configured to override this value. The maximum allowed length of time, in seconds, for a
lease. Should a client request a longer lease, a lease
will still be issued, but it will only be valid for
max-lease-time.The default of none disables dynamic
DNS updates. Changing this to interim
configures the DHCP server to update a
DNS server whenever it hands out a
lease so that the DNS server knows
which IP addresses are associated with
which computers in the network. Do not change the default
setting unless the DNS server has been
configured to support dynamic
DNS.This line creates a pool of available
IP addresses which are reserved for
allocation to DHCP clients. The range
of addresses must be valid for the network or subnet
specified in the previous line.Declares the default gateway that is valid for the
network or subnet specified before the opening
{ bracket.Specifies the hardware MAC address
of a client so that the DHCP server can
recognize the client when it makes a request.Specifies that this host should always be given the
same IP address. Using the hostname is
correct, since the DHCP server will
resolve the hostname before returning the lease
information.This configuration file supports many more options. Refer
to dhcpd.conf(5), installed with the server, for details and
examples.Once the configuration of dhcpd.conf
is complete, enable the DHCP server in
/etc/rc.conf:dhcpd_enable="YES"
dhcpd_ifaces="dc0"Replace the dc0 with the interface (or
interfaces, separated by whitespace) that the
DHCP server should listen on for
DHCP client requests.Start the server by issuing the following command:&prompt.root; service isc-dhcpd startAny future changes to the configuration of the server will
require the dhcpd service to be
stopped and then started using &man.service.8;.
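For example, after editing dhcpd.conf:

&prompt.root; service isc-dhcpd restart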
The DHCP server uses the following files. Note that the manual pages are installed with the
server software.DHCPconfiguration files/usr/local/sbin/dhcpdMore information about the
dhcpd server can be found in
dhcpd(8)./usr/local/etc/dhcpd.confThe server configuration file needs to contain all the
information that should be provided to clients, along with
information regarding the operation of the server. This
configuration file is described in dhcpd.conf(5)./var/db/dhcpd.leasesThe DHCP server keeps a database of
leases it has issued in this file, which is written as a
log. Refer to dhcpd.leases(5), which gives a slightly
longer description./usr/local/sbin/dhcrelayThis daemon is used in advanced environments where one
DHCP server forwards a request from a
client to another DHCP server on a
separate network. If this functionality is required,
install the net/isc-dhcp43-relay
package or port. The installation includes dhcrelay(8)
which provides more detail.Domain Name System (DNS)DNSDomain Name System (DNS) is the protocol
through which domain names are mapped to IP
addresses, and vice versa. DNS is
coordinated across the Internet through a somewhat complex
system of authoritative root, Top Level Domain
(TLD), and other smaller-scale name servers,
which host and cache individual domain information. It is not
necessary to run a name server to perform
DNS lookups on a system.BINDIn &os; 10, the Berkeley Internet Name Domain
(BIND) has been removed from the base system
and replaced with Unbound. Unbound as configured in the &os;
Base is a local caching resolver. BIND is
still available from The Ports Collection as dns/bind99 or dns/bind98. In &os; 9 and lower,
BIND is included in &os; Base. The &os;
version provides enhanced security features, a new file system
layout, and automated &man.chroot.8; configuration.
BIND is maintained by the Internet Systems
Consortium.resolverreverse
DNSroot zoneThe following table describes some of the terms associated
with DNS:
DNS Terminology
Forward DNS: Mapping of hostnames to IP addresses.
Origin: Refers to the domain covered in a particular zone file.
named, BIND: Common names for the BIND name server package within &os;.
Resolver: A system process through which a machine queries a name server for zone information.
Reverse DNS: Mapping of IP addresses to hostnames.
Root zone: The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.
Zone: An individual domain, subdomain, or portion of the DNS administered by the same authority.
zonesexamplesExamples of zones:
. is how the root zone is usually referred to in documentation.
org. is a Top Level Domain (TLD) under the root zone.
example.org. is a zone under the org. TLD.
1.168.192.in-addr.arpa is a zone referencing all IP addresses which fall under the 192.168.1.* IP address space.
As one can see, the more specific part of a hostname
appears to its left. For example, example.org. is more
specific than org., as
org. is more specific than the root
zone. The layout of each part of a hostname is much like a file
system: the /dev directory falls within the
root, and so on.Reasons to Run a Name ServerName servers generally come in two forms: authoritative
name servers, and caching (also known as resolving) name
servers.An authoritative name server is needed when:One wants to serve DNS information
to the world, replying authoritatively to queries.A domain, such as example.org, is
registered and IP addresses need to be
assigned to hostnames under it.An IP address block requires
reverse DNS entries
(IP to hostname).A backup or second name server, called a slave, will
reply to queries.A caching name server is needed when:A local DNS server may cache and
respond more quickly than querying an outside name
server.When one queries for www.FreeBSD.org, the
resolver usually queries the uplink ISP's
name server, and retrieves the reply. With a local, caching
DNS server, the query only has to be made
once to the outside world by the caching
DNS server. Additional queries will not
have to go outside the local network, since the information is
cached locally.DNS Server Configuration in &os; 10.0
and LaterIn &os; 10.0, BIND has been
replaced with Unbound.
Unbound is a validating caching
resolver only. If an authoritative server is needed, many are
available from the Ports Collection.Unbound is provided in the &os;
base system. By default, it will provide
DNS resolution to the local machine only.
While the base system package can be configured to provide
resolution services beyond the local machine, it is
recommended that such requirements be addressed by installing
Unbound from the &os; Ports
Collection.To enable Unbound, add the
following to /etc/rc.conf:local_unbound_enable="YES"Any existing nameservers in
/etc/resolv.conf will be configured as
forwarders in the new Unbound
configuration.If any of the listed nameservers do not support
DNSSEC, local DNS
resolution will fail. Be sure to test each nameserver and
remove any that fail the test. The following command will
show the trust tree or a failure for a nameserver running on
192.168.1.1:&prompt.user; drill -S FreeBSD.org @192.168.1.1Once each nameserver is confirmed to support
DNSSEC, start
Unbound:&prompt.root; service local_unbound onestartThis will take care of updating
/etc/resolv.conf so that queries for
DNSSEC secured domains will now work. For
example, run the following to validate the FreeBSD.org
DNSSEC trust tree:&prompt.user; drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A
DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
|---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
|---freebsd.org. (DS keytag: 32659 digest type: 2)
|---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
|---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
|---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
|---org. (DS keytag: 21366 digest type: 1)
| |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
| |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
|---org. (DS keytag: 21366 digest type: 2)
|---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
|---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successfulDNS Server Configuration in &os;
9.XIn &os;, the BIND daemon is called
named.
&man.named.8;: The BIND daemon.
&man.rndc.8;: Name server control utility.
/etc/namedb: Directory where BIND zone information resides.
/etc/namedb/named.conf: Configuration file of the daemon.
Depending on how a given zone is configured on the server,
the files related to that zone can be found in the
master,
slave, or
dynamic subdirectories
of the /etc/namedb
directory. These files contain the DNS
information that will be given out by the name server in
response to queries.Starting BINDBINDstartingSince BIND is installed by default, configuring it is
relatively simple.The default named
configuration is that of a basic resolving name server,
running in a &man.chroot.8; environment, and restricted to
listening on the local IPv4 loopback address (127.0.0.1).
To start the server one time with this configuration, use
the following command:&prompt.root; service named onestartTo ensure the named daemon is
started at boot each time, put the following line into the
/etc/rc.conf:named_enable="YES"There are many configuration options for
/etc/namedb/named.conf that are beyond
the scope of this document. Other startup options for
named on &os; can be found in the
named_* flags
in /etc/defaults/rc.conf and in
&man.rc.conf.5;. The
section is also a good read.Configuration FilesBINDconfiguration filesConfiguration files for named
currently reside in the /etc/namedb
directory and will need modification before use unless all
that is needed is a simple resolver. This is where most of
the configuration will be performed./etc/namedb/named.conf// $FreeBSD$
//
// Refer to the named.conf(5) and named(8) man pages, and the documentation
// in /usr/share/doc/bind9 for more details.
//
// If you are going to set up an authoritative server, make sure you
// understand the hairy details of how DNS works. Even with
// simple mistakes, you can break connectivity for affected parties,
// or cause huge amounts of useless Internet traffic.
options {
// All file and path names are relative to the chroot directory,
// if any, and should be fully qualified.
directory "/etc/namedb/working";
pid-file "/var/run/named/pid";
dump-file "/var/dump/named_dump.db";
statistics-file "/var/stats/named.stats";
// If named is being used only as a local resolver, this is a safe default.
// For named to be accessible to the network, comment this option, specify
// the proper IP address, or delete this option.
listen-on { 127.0.0.1; };
// If you have IPv6 enabled on this system, uncomment this option for
// use as a local resolver. To give access to the network, specify
// an IPv6 address, or the keyword "any".
// listen-on-v6 { ::1; };
// These zones are already covered by the empty zones listed below.
// If you remove the related empty zones below, comment these lines out.
disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
// If you have a DNS server around at your upstream provider, enter
// its IP address here, and enable the line below. This will make you
// benefit from its cache, thus reduce overall DNS traffic in the Internet.
/*
forwarders {
127.0.0.1;
};
*/
// If the 'forwarders' clause is not empty the default is to 'forward first'
// which will fall back to sending a query from your local server if the name
// servers in 'forwarders' do not have the answer. Alternatively you can
// force your name server to never initiate queries of its own by enabling the
// following line:
// forward only;
// If you wish to have forwarding configured automatically based on
// the entries in /etc/resolv.conf, uncomment the following line and
// set named_auto_forward=yes in /etc/rc.conf. You can also enable
// named_auto_forward_only (the effect of which is described above).
// include "/etc/namedb/auto_forward.conf";Just as the comment says, to benefit from an uplink's
cache, forwarders can be enabled here.
Under normal circumstances, a name server will recursively
query the Internet looking at certain name servers until
it finds the answer it is looking for. Having this
enabled will have it query the uplink's name server (or
name server provided) first, taking advantage of its
cache. If the uplink name server in question is a heavily
trafficked, fast name server, enabling this may be
worthwhile.127.0.0.1
will not work here. Change this
IP address to a name server at the
uplink. /*
Modern versions of BIND use a random UDP port for each outgoing
query by default in order to dramatically reduce the possibility
of cache poisoning. All users are strongly encouraged to utilize
this feature, and to configure their firewalls to accommodate it.
AS A LAST RESORT in order to get around a restrictive firewall
policy you can try enabling the option below. Use of this option
will significantly reduce your ability to withstand cache poisoning
attacks, and should be avoided if at all possible.
Replace NNNNN in the example with a number between 49160 and 65530.
*/
// query-source address * port NNNNN;
};
// If you enable a local name server, do not forget to enter 127.0.0.1
// first in your /etc/resolv.conf so this server will be queried.
// Also, make sure to enable it in /etc/rc.conf.
// The traditional root hints mechanism. Use this, OR the slave zones below.
zone "." { type hint; file "/etc/namedb/named.root"; };
/* Slaving the following zones from the root name servers has some
significant advantages:
1. Faster local resolution for your users
2. No spurious traffic will be sent from your network to the roots
3. Greater resilience to any potential root server failure/DDoS
On the other hand, this method requires more monitoring than the
hints file to be sure that an unexpected failure mode has not
incapacitated your server. Name servers that are serving a lot
of clients will benefit more from this approach than individual
hosts. Use with caution.
To use this mechanism, uncomment the entries below, and comment
the hint zone above.
As documented at http://dns.icann.org/services/axfr/ these zones:
"." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET
are available for AXFR from these servers on IPv4 and IPv6:
xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org
*/
/*
zone "." {
type slave;
file "/etc/namedb/slave/root.slave";
masters {
192.5.5.241; // F.ROOT-SERVERS.NET.
};
notify no;
};
zone "arpa" {
type slave;
file "/etc/namedb/slave/arpa.slave";
masters {
192.5.5.241; // F.ROOT-SERVERS.NET.
};
notify no;
};
*/
/* Serving the following zones locally will prevent any queries
for these zones leaving your network and going to the root
name servers. This has two significant advantages:
1. Faster local resolution for your users
2. No spurious traffic will be sent from your network to the roots
*/
// RFCs 1912 and 5735 (and BCP 32 for localhost)
zone "localhost" { type master; file "/etc/namedb/master/localhost-forward.db"; };
zone "127.in-addr.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; };
zone "255.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// RFC 1912-style zone for IPv6 localhost address
zone "0.ip6.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; };
// "This" Network (RFCs 1912 and 5735)
zone "0.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// Private Use Networks (RFCs 1918 and 5735)
zone "10.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "16.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "17.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "18.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "19.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "20.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "21.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "22.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "23.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "24.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "25.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "26.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "27.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "28.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "29.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "30.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// Link-local/APIPA (RFCs 3927 and 5735)
zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IETF protocol assignments (RFCs 5735 and 5736)
zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// TEST-NET-[1-3] for Documentation (RFCs 5735 and 5737)
zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Range for Documentation (RFC 3849)
zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// Domain Names for Documentation and Testing (BCP 32)
zone "test" { type master; file "/etc/namedb/master/empty.db"; };
zone "example" { type master; file "/etc/namedb/master/empty.db"; };
zone "invalid" { type master; file "/etc/namedb/master/empty.db"; };
zone "example.com" { type master; file "/etc/namedb/master/empty.db"; };
zone "example.net" { type master; file "/etc/namedb/master/empty.db"; };
zone "example.org" { type master; file "/etc/namedb/master/empty.db"; };
// Router Benchmark Testing (RFCs 2544 and 5735)
zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IANA Reserved - Old Class E Space (RFC 5735)
zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "244.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Unassigned Addresses (RFC 4291)
zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "7.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "3.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "7.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 ULA (RFC 4193)
zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Link Local (RFC 4291)
zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IPv6 Deprecated Site-Local Addresses (RFC 3879)
zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "d.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; };
// IP6.INT is Deprecated (RFC 4159)
zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; };
// NB: Do not use the IP addresses below, they are faked, and only
// serve demonstration/documentation purposes!
//
// Example slave zone config entries. It can be convenient to become
// a slave at least for the zone your own domain is in. Ask
// your network administrator for the IP address of the responsible
// master name server.
//
// Do not forget to include the reverse lookup zone!
// This is named after the first bytes of the IP address, in reverse
// order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6.
//
// Before starting to set up a master zone, make sure you fully
// understand how DNS and BIND work. There are sometimes
// non-obvious pitfalls. Setting up a slave zone is usually simpler.
//
// NB: Do not blindly enable the examples below. :-) Use actual names
// and addresses instead.
/* An example dynamic zone
key "exampleorgkey" {
algorithm hmac-md5;
secret "sf87HJqjkqh8ac87a02lla==";
};
zone "example.org" {
type master;
allow-update {
key "exampleorgkey";
};
file "/etc/namedb/dynamic/example.org";
};
*/
/* Example of a slave reverse zone
zone "1.168.192.in-addr.arpa" {
type slave;
file "/etc/namedb/slave/1.168.192.in-addr.arpa";
masters {
192.168.1.1;
};
};
*/In named.conf, these are examples
of slave entries for a forward and reverse zone.For each new zone served, a new zone entry must be
added to named.conf.For example, the simplest zone entry for
example.org
can look like:zone "example.org" {
type master;
file "master/example.org";
};The zone is a master, as indicated by the
type master; statement, holding its zone
information in
/etc/namedb/master/example.org, as
indicated by the file statement.zone "example.org" {
type slave;
file "slave/example.org";
};In the slave case, the zone information is transferred
from the master name server for the particular zone, and
saved in the file specified. If and when the master
server dies or is unreachable, the slave name server will
have the transferred zone information and will be able to
serve it.Zone FilesBINDzone filesAn example master zone file for
example.org
(existing within
/etc/namedb/master/example.org) is as
follows:$TTL 3600 ; 1 hour default TTL
example.org. IN SOA ns1.example.org. admin.example.org. (
2006051501 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
300 ; Negative Response TTL
)
; DNS Servers
IN NS ns1.example.org.
IN NS ns2.example.org.
; MX Records
IN MX 10 mx.example.org.
IN MX 20 mail.example.org.
IN A 192.168.1.1
; Machine Names
localhost IN A 127.0.0.1
ns1 IN A 192.168.1.2
ns2 IN A 192.168.1.3
mx IN A 192.168.1.4
mail IN A 192.168.1.5
; Aliases
www IN CNAME example.org.Note that every hostname ending in a .
is an exact hostname, whereas everything without a
trailing . is relative to the origin. For
example, ns1 is translated into
ns1.example.org.The format of a zone file follows:recordname IN recordtype valueDNSrecordsThe most commonly used DNS
records:
SOA: start of zone authority
NS: an authoritative name server
A: a host address
CNAME: the canonical name for an alias
MX: mail exchanger
PTR: a domain name pointer (used in reverse
DNS)example.org. IN SOA ns1.example.org. admin.example.org. (
2006051501 ; Serial
10800 ; Refresh after 3 hours
3600 ; Retry after 1 hour
604800 ; Expire after 1 week
300 ) ; Negative Response TTLexample.org.the domain name, also the origin for this
zone file.ns1.example.org.the primary/authoritative name server for this
zone.admin.example.org.the responsible person for this zone,
email address with @
replaced. (admin@example.org becomes
admin.example.org)2006051501the serial number of the file. This must be
incremented each time the zone file is modified.
Nowadays, many admins prefer a
yyyymmddrr format for the serial
number. 2006051501 would mean
last modified 05/15/2006, the latter
01 being the first time the zone
file has been modified this day. The serial number
is important as it alerts slave name servers for a
zone when it is updated. IN NS ns1.example.org.This is an NS entry. Every name server that is going
to reply authoritatively for the zone must have one of
these entries.localhost IN A 127.0.0.1
ns1 IN A 192.168.1.2
ns2 IN A 192.168.1.3
mx IN A 192.168.1.4
mail IN A 192.168.1.5The A record indicates machine names. As seen above,
ns1.example.org would
resolve to 192.168.1.2. IN A 192.168.1.1This line assigns IP address
192.168.1.1 to
the current origin, in this case example.org.www IN CNAME @The canonical name record is usually used for giving
aliases to a machine. In the example,
www is aliased to the
master machine whose name happens to be the
same as the domain name
example.org
(192.168.1.1).
CNAMEs can never be used together with another kind of
record for the same hostname.MX record IN MX 10 mail.example.org.The MX record indicates which mail servers are
responsible for handling incoming mail for the zone.
mail.example.org is
the hostname of a mail server, and 10 is the priority of
that mail server.One can have several mail servers, with priorities of
10, 20 and so on. A mail server attempting to deliver to
example.org
would first try the highest priority MX (the record with
the lowest priority number), then the second highest, etc,
until the mail can be properly delivered.For in-addr.arpa zone files (reverse
DNS), the same format is used, except
with PTR entries instead of A or CNAME.$TTL 3600
1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. (
2006051501 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
300 ) ; Negative Response TTL
IN NS ns1.example.org.
IN NS ns2.example.org.
1 IN PTR example.org.
2 IN PTR ns1.example.org.
3 IN PTR ns2.example.org.
4 IN PTR mx.example.org.
5 IN PTR mail.example.org.This file gives the proper IP
address to hostname mappings for the above fictitious
domain.It is worth noting that all names on the right side
of a PTR record need to be fully qualified (i.e., end in
a .).Caching Name ServerBINDcaching name serverA caching name server is a name server whose primary
role is to resolve recursive queries. It simply asks
queries of its own, and remembers the answers for later
use.DNSSECBINDDNS security
extensionsDomain Name System Security Extensions, or DNSSEC
for short, is a suite of specifications to protect resolving
name servers from forged DNS data, such
as spoofed DNS records. By using digital
signatures, a resolver can verify the integrity of the
record. Note that DNSSEC only provides integrity via
digitally signing the Resource Records (RRs). It provides
neither confidentiality nor protection against false
end-user assumptions. This means that it cannot protect
against people going to
example.net
instead of
example.com.
The only thing DNSSEC does is
authenticate that the data has not been compromised in
transit. The security of DNS is an
important step in securing the Internet in general. For
more in-depth details of how DNSSEC
works, the relevant RFCs are a good place
to start. See the list in
.The following sections will demonstrate how to enable
DNSSEC for an authoritative
DNS server and a recursive (or caching)
DNS server running
BIND 9. While all versions of
BIND 9 support DNSSEC,
it is necessary to have at least version 9.6.2 in order to
be able to use the signed root zone when validating
DNS queries. This is because earlier
versions lack the required algorithms to enable validation
using the root zone key. It is strongly recommended to use
the latest version of BIND 9.7 or later
to take advantage of automatic key updating for the root
key, as well as other features to automatically keep zones
signed and signatures up to date. Where configurations
differ between 9.6.2 and 9.7 and later, differences will be
pointed out.Recursive DNS Server
ConfigurationEnabling DNSSEC validation of
queries performed by a recursive DNS
server requires a few changes to
named.conf. Before making these
changes the root zone key, or trust anchor, must be
acquired. Currently the root zone key is not available in
a file format BIND understands, so it
has to be manually converted into the proper format. The
key itself can be obtained by querying the root zone for
it using dig. By
running&prompt.user; dig +multi +noall +answer DNSKEY . > root.dnskeythe key will end up in
root.dnskey. The contents should
look something like this:. 93910 IN DNSKEY 257 3 8 (
AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ
bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh
/RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA
JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp
oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3
LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO
Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc
LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0=
) ; key id = 19036
. 93910 IN DNSKEY 256 3 8 (
AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf
UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE
g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V
EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt
) ; key id = 34525Do not be alarmed if the obtained keys differ from
this example. They might have changed since these
instructions were last updated. This output actually
contains two keys. The first key in the listing, with the
value 257 after the DNSKEY record type, is the one needed.
This value indicates that this is a Secure Entry Point
(SEP),
commonly known as a Key Signing Key
(KSK). The
second key, with value 256, is a subordinate key, commonly
called a Zone Signing Key
(ZSK). More on
the different key types later in
.Now the key must be verified and formatted so that
BIND can use it. To verify the key,
generate a DS
RR set. Create
a file containing these
RRs with&prompt.user; dnssec-dsfromkey -f root.dnskey . > root.dsThese records use SHA-1 and SHA-256 respectively, and
should look similar to the following example, where the
longer one uses SHA-256.. IN DS 19036 8 1
B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E
. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5The SHA-256 RR can now be compared
to the digest in https://data.iana.org/root-anchors/root-anchors.xml.
To be absolutely sure that the key has not been tampered
with, the data in the XML file can be
verified using the PGP signature in
https://data.iana.org/root-anchors/root-anchors.asc.Next, the key must be formatted properly. This
differs a little between BIND versions
9.6.2 and 9.7 and later. In version 9.7 support was added
to automatically track changes to the key and update it as
necessary. This is done using
managed-keys as seen in the example
below. When using the older version, the key is added
using a trusted-keys statement and
updates must be done manually. For
BIND 9.6.2 the format should look
like:trusted-keys {
"." 257 3 8
"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
QxA+Uk1ihz0=";
};For 9.7 the format will instead be:managed-keys {
"." initial-key 257 3 8
"AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
QxA+Uk1ihz0=";
};The root key can now be added to
named.conf either directly or by
including a file containing the key. After these steps,
configure BIND to do
DNSSEC validation on queries by editing
named.conf and adding the following
to the options directive:dnssec-enable yes;
dnssec-validation yes;To verify that it is actually working use
dig to make a query for a
signed zone using the resolver just configured. A
successful reply will contain the AD
flag to indicate the data was authenticated. Running a
query such as&prompt.user; dig @resolver +dnssec se ds should return the DS
RR for the .se zone.
In the flags: section the
AD flag should be set, as seen
in:...
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
...The resolver is now capable of authenticating
DNS queries.Authoritative DNS Server
ConfigurationIn order to get an authoritative name server to serve
a DNSSEC signed zone a little more work
is required. A zone is signed using cryptographic keys
which must be generated. It is possible to use only one
key for this. The preferred method however is to have a
strong well-protected Key Signing Key
(KSK) that is
not rotated very often and a Zone Signing Key
(ZSK) that is
rotated more frequently. Information on recommended
operational practices can be found in RFC
4641: DNSSEC Operational
Practices. Practices regarding the root zone can
be found in DNSSEC
Practice Statement for the Root Zone
KSK operator and DNSSEC
Practice Statement for the Root Zone
ZSK operator. The
KSK is used to
build a chain of authority to the data in need of
validation and as such is also called a Secure Entry Point
(SEP) key. A
message digest of this key, called a Delegation Signer
(DS) record,
must be published in the parent zone to establish the
trust chain. How this is accomplished depends on the
parent zone owner. The
ZSK is used to
sign the zone, and only needs to be published
there.To enable DNSSEC for the
example.com
zone depicted in previous examples, the first step is to
use dnssec-keygen to generate
the KSK and ZSK key
pair. This key pair can utilize different cryptographic
algorithms. It is recommended to use RSA/SHA256 for the
keys, and a key length of 2048 bits should be enough. To
generate the KSK for
example.com,
run&prompt.user; dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.comand to generate the ZSK, run&prompt.user; dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.comdnssec-keygen outputs two
files, the public and the private keys in files named
similar to
Kexample.com.+005+nnnnn.key (public)
and Kexample.com.+005+nnnnn.private
(private). The nnnnn part of the file
name is a five digit key ID. Keep track of which key ID
belongs to which key. This is especially important when
having more than one key in a zone. It is also possible
to rename the keys. For each KSK file
do:&prompt.user; mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key
&prompt.user; mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.privateFor the ZSK files, substitute
KSK for ZSK as
necessary. The files can now be included in the zone
file, using the $include statement. It
should look something like this:$include Kexample.com.+005+nnnnn.KSK.key ; KSK
$include Kexample.com.+005+nnnnn.ZSK.key ; ZSKFinally, sign the zone and tell
BIND to use the signed zone file. To
sign a zone dnssec-signzone is
used. The command to sign the zone
example.com,
located in example.com.db would look
similar to&prompt.user; dnssec-signzone -o
example.com -k Kexample.com.+005+nnnnn.KSK example.com.db
Kexample.com.+005+nnnnn.ZSK.keyThe key supplied to the -k argument
is the KSK and the other key file is
the ZSK that should be used in the
signing. It is possible to supply more than one
KSK and ZSK, which
will result in the zone being signed with all supplied
keys. This can be needed to supply zone data signed using
more than one algorithm. The output of
dnssec-signzone is a zone file
with all RRs signed. This output will
end up in a file with the extension
.signed, such as
example.com.db.signed. The
DS records
will also be written to a separate file
dsset-example.com. To use this
signed zone just modify the zone directive in
named.conf to use
example.com.db.signed. By default,
the signatures are only valid for 30 days, meaning that the
zone needs to be resigned in about 15 days to be sure
that resolvers are not caching records with stale
signatures. It is possible to make a script and a cron
job to do this. See the relevant manual pages for details.
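One possible approach, sketched here under the assumption that the zone file and keys live in /etc/namedb/master and that named.conf points at example.com.db.signed, is an /etc/crontab entry which re-signs the zone twice a month and then reloads it. Adjust the paths, key names, and schedule to match the actual setup:

# Re-sign example.com on the 1st and 15th of each month at 03:00, then reload the zone
0   3   1,15   *   *   root   cd /etc/namedb/master && dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key && rndc reload example.com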
Be sure to keep private keys confidential, as with all cryptographic keys. When changing a key it is best to
include the new key into the zone, while still signing
with the old one, and then move over to using the new key
to sign. After these steps are done the old key can be
removed from the zone. Failure to do this might render
the DNS data unavailable for a time,
until the new key has propagated through the
DNS hierarchy. For more information on
key rollovers and other DNSSEC
operational issues, see RFC
4641: DNSSEC Operational
practices.Automation Using BIND 9.7 or
LaterBeginning with BIND version 9.7 a
new feature called Smart Signing was
introduced. This feature aims to make the key management
and signing process simpler by automating parts of the
task. By putting the keys into a directory called a
key repository, and using the new
option auto-dnssec, it is possible to
create a dynamic zone which will be resigned as needed.
To update this zone use
nsupdate with the new option
. rndc has
also grown the ability to sign zones with keys in the key
repository, using the option . To
tell BIND to use this automatic signing
and zone updating for example.com, add the
following to named.conf:zone example.com {
type master;
key-directory "/etc/named/keys";
update-policy local;
auto-dnssec maintain;
file "/etc/named/dynamic/example.com.zone";
};After making these changes, generate keys for the zone
as explained in , put
those keys in the key repository given as the argument to
the key-directory in the zone
configuration and the zone will be signed automatically.
Updates to a zone configured this way must be done using
nsupdate, which will take care
of re-signing the zone with the new data added. For
further details, see and the
BIND documentation.
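As an illustration, adding a record to such a zone from the server itself might look like the following interactive session. The hostname and address are placeholders, and -l assumes the session key that named generates for local updates is in place:

&prompt.root; nsupdate -l
> update add www.example.com. 3600 IN A 192.168.1.10
> send
> quit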
SecurityAlthough BIND is the most common implementation of DNS, there is always the issue of
security. Possible and exploitable security holes are
sometimes found.While &os; automatically drops
named into a &man.chroot.8;
environment, there are several other security mechanisms in
place which could help to ward off possible
DNS service attacks.It is always a good idea to read
CERT's
security advisories and to subscribe to the
&a.security-notifications; to stay up to date with the
current Internet and &os; security issues.If a problem arises, keeping sources up to date and
having a fresh build of named
may help.Further ReadingBIND/named manual pages:
&man.rndc.8; &man.named.8; &man.named.conf.5;
&man.nsupdate.1; &man.dnssec-signzone.8;
&man.dnssec-keygen.8;Official
ISC BIND PageOfficial
ISC BIND ForumO'Reilly
DNS and BIND 5th
EditionRoot
DNSSECDNSSEC
Trust Anchor Publication for the Root
ZoneRFC1034
- Domain Names - Concepts and FacilitiesRFC1035
- Domain Names - Implementation and
SpecificationRFC4033
- DNS Security Introduction and
RequirementsRFC4034
- Resource Records for the DNS
Security ExtensionsRFC4035
- Protocol Modifications for the
DNS Security
ExtensionsRFC4641
- DNSSEC Operational PracticesRFC
5011 - Automated Updates of DNS
Security (DNSSEC)
Trust AnchorsApache HTTP ServerMurrayStokelyContributed by web serverssetting upApacheThe open source
Apache HTTP Server is the most widely
used web server. &os; does not install this web server by
default, but it can be installed from the
www/apache24 package or port.This section summarizes how to configure and start version
2.x of the Apache HTTP
Server on &os;. For more detailed information
about Apache 2.X and its
configuration directives, refer to httpd.apache.org.Configuring and Starting ApacheApacheconfiguration fileIn &os;, the main Apache HTTP
Server configuration file is installed as
/usr/local/etc/apache2x/httpd.conf,
where x represents the version
number. This ASCII text file begins
comment lines with a #. The most
frequently modified directives are:ServerRoot "/usr/local"Specifies the default directory hierarchy for the
Apache installation.
Binaries are stored in the bin and
sbin subdirectories of the server
root and configuration files are stored in the etc/apache2x
subdirectory.ServerAdmin you@example.comChange this to the email address to receive problems
with the server. This address also appears on some
server-generated pages, such as error documents.ServerName
www.example.com:80Allows an administrator to set a hostname which is
sent back to clients for the server. For example,
www can be used instead of the
actual hostname. If the system does not have a
registered DNS name, enter its
IP address instead. If the server
will listen on an alternate port, change
80 to the alternate port
number.DocumentRoot
"/usr/local/www/apache2x/data"The directory where documents will be served from.
By default, all requests are taken from this directory,
but symbolic links and aliases may be used to point to
other locations.It is always a good idea to make a backup copy of the
default Apache configuration file
before making changes. When the configuration of
Apache is complete, save the file
and verify the configuration using
apachectl. Running apachectl
configtest should return Syntax
OK.Apachestarting or stoppingTo launch Apache at system
startup, add the following line to
/etc/rc.conf:apache24_enable="YES"If Apache should be started
with non-default options, the following line may be added to
/etc/rc.conf to specify the needed
flags:apache24_flags=""If apachectl does not report
configuration errors, start httpd
now:&prompt.root; service apache24 startThe httpd service can be tested by
entering
http://localhost
in a web browser, replacing
localhost with the fully-qualified
domain name of the machine running httpd.
The default web page that is displayed is
/usr/local/www/apache24/data/index.html.The Apache configuration can be
tested for errors after making subsequent configuration
changes while httpd is running using the
following command:&prompt.root; service apache24 configtestIt is important to note that
configtest is not an &man.rc.8; standard,
and should not be expected to work for all startup
scripts.Virtual HostingVirtual hosting allows multiple websites to run on one
Apache server. The virtual hosts
can be IP-based or
name-based.
IP-based virtual hosting uses a different
IP address for each website. Name-based
virtual hosting uses the client's HTTP/1.1 headers to figure
out the hostname, which allows the websites to share the same
IP address.To set up Apache to use
name-based virtual hosting, add a
VirtualHost block for each website. For
example, for the webserver named www.domain.tld with a
virtual domain of www.someotherdomain.tld,
add the following entries to
httpd.conf:<VirtualHost *>
ServerName www.domain.tld
DocumentRoot /www/domain.tld
</VirtualHost>
<VirtualHost *>
ServerName www.someotherdomain.tld
DocumentRoot /www/someotherdomain.tld
</VirtualHost>For each virtual host, replace the values for
ServerName and
DocumentRoot with the values to be
used.For more information about setting up virtual hosts,
consult the official Apache
documentation at: http://httpd.apache.org/docs/vhosts/.Apache ModulesApachemodulesApache uses modules to augment
the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/
for a complete listing of the available modules and their
configuration details.In &os;, some modules can be compiled with the
www/apache24 port. Type make
config within
/usr/ports/www/apache24 to see which
modules are available and which are enabled by default. If
the module is not compiled with the port, the &os; Ports
Collection provides an easy way to install many modules. This
section describes three of the most commonly used
modules.mod_sslweb serverssecureSSLcryptographyThe mod_ssl module uses the
OpenSSL library to provide strong
cryptography via the Secure Sockets Layer
(SSLv3) and Transport Layer Security
(TLSv1) protocols. This module provides
everything necessary to request a signed certificate from a
trusted certificate signing authority to run a secure web
server on &os;.In &os;, the mod_ssl module is enabled
by default in both the package and the port. The available
configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html.mod_perlmod_perlPerlThe
mod_perl module makes it possible to
write Apache modules in
Perl. In addition, the
persistent interpreter embedded in the server avoids the
overhead of starting an external interpreter and the penalty
of Perl start-up time.The mod_perl module can be installed using
the www/mod_perl2 package or port.
Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html.mod_phpTomRhodesWritten by mod_phpPHPPHP: Hypertext Preprocessor
(PHP) is a general-purpose scripting
language that is especially suited for web development.
Capable of being embedded into HTML, its
syntax draws upon C, &java;, and
Perl with the intention of
allowing web developers to write dynamically generated
webpages quickly.To gain support for PHP5 for the
Apache web server, install the
www/mod_php56 package or port. This will
install and configure the modules required to support
dynamic PHP applications. The
installation will automatically add this line to
/usr/local/etc/apache24/httpd.conf:LoadModule php5_module libexec/apache24/libphp5.soThen, perform a graceful restart to load the
PHP module:&prompt.root; apachectl gracefulThe PHP support provided by
www/mod_php56 is limited. Additional
support can be installed using the
lang/php56-extensions port which provides
a menu driven interface to the available
PHP extensions.Alternatively, individual extensions can be installed
using the appropriate port. For instance, to add
PHP support for the
MySQL database server, install
databases/php56-mysql.After installing an extension, the
Apache server must be reloaded to
pick up the new configuration changes:&prompt.root; apachectl gracefulDynamic Websitesweb serversdynamicIn addition to mod_perl and
mod_php, other languages are
available for creating dynamic web content. These include
Django and
Ruby on Rails.DjangoPythonDjangoDjango is a BSD-licensed
framework designed to allow developers to write high
performance, elegant web applications quickly. It provides
an object-relational mapper so that data types are developed
as Python objects. A rich
dynamic database-access API is provided
for those objects without the developer ever having to write
SQL. It also provides an extensible
template system so that the logic of the application is
separated from the HTML
presentation.Django depends on mod_python and
an SQL database engine. In &os;, the
www/py-django port automatically installs
mod_python and supports the
PostgreSQL,
MySQL, or
SQLite databases, with the
default being SQLite. To change
the database engine, type make config
within /usr/ports/www/py-django, then
install the port.Once Django is installed, the
application will need a project directory along with the
Apache configuration in order to
use the embedded Python
interpreter. This interpreter is used to call the
application for specific URLs on the
site.To configure Apache to pass
requests for certain URLs to the web
application, add the following to
httpd.conf, specifying the full path to
the project directory:<Location "/">
SetHandler python-program
PythonPath "['/dir/to/the/django/packages/'] + sys.path"
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonAutoReload On
PythonDebug On
</Location>Refer to https://docs.djangoproject.com
for more information on how to use
Django.Ruby on RailsRuby on RailsRuby on Rails is another open
source web framework that provides a full development stack.
It is optimized to make web developers more productive and
capable of writing powerful applications quickly. On &os;,
it can be installed using the
www/rubygem-rails package or port.Refer to http://guides.rubyonrails.org
for more information on how to use Ruby on
Rails.File Transfer Protocol (FTP)FTP
serversThe File Transfer Protocol (FTP) provides
users with a simple way to transfer files to and from an
FTP server. &os; includes
FTP server software,
ftpd, in the base system.&os; provides several configuration files for controlling
access to the FTP server. This section
summarizes these files. Refer to &man.ftpd.8; for more details
about the built-in FTP server.ConfigurationThe most important configuration step is deciding which
accounts will be allowed access to the FTP
server. A &os; system has a number of system accounts which
should not be allowed FTP access. The list
of users disallowed any FTP access can be
found in /etc/ftpusers. By default, it
includes system accounts. Additional users that should not be
allowed access to FTP can be added.In some cases it may be desirable to restrict the access
of some users without preventing them completely from using
FTP. This can be accomplished by creating
/etc/ftpchroot as described in
&man.ftpchroot.5;. This file lists users and groups subject
to FTP access restrictions.FTPanonymousTo enable anonymous FTP access to the
server, create a user named ftp on the &os; system. Users
will then be able to log on to the
FTP server with a username of
ftp or anonymous. When prompted for
the password, any input will be accepted, but by convention,
an email address should be used as the password. The
FTP server will call &man.chroot.2; when an
anonymous user logs in, to restrict access to only the home
directory of the ftp user.There are two text files that can be created to specify
welcome messages to be displayed to FTP
clients. The contents of
/etc/ftpwelcome will be displayed to
users before they reach the login prompt. After a successful
login, the contents of
/etc/ftpmotd will be displayed. Note
that the path to this file is relative to the login
environment, so the contents of
~ftp/etc/ftpmotd would be displayed for
anonymous users.Once the FTP server has been
configured, set the appropriate variable in
/etc/rc.conf to start the service during
boot:ftpd_enable="YES"To start the service now:&prompt.root; service ftpd startTest the connection to the FTP server
by typing:&prompt.user; ftp localhostsysloglog filesFTPThe ftpd daemon uses
&man.syslog.3; to log messages. By default, the system log
daemon will write messages related to FTP
in /var/log/xferlog. The location of
the FTP log can be modified by changing the
following line in
/etc/syslog.conf:ftp.info /var/log/xferlogFTPanonymousBe aware of the potential problems involved with running
an anonymous FTP server. In particular,
think twice about allowing anonymous users to upload files.
It may turn out that the FTP site becomes
a forum for the trade of unlicensed commercial software or
worse. If anonymous FTP uploads are
required, then verify the permissions so that these files
cannot be read by other anonymous users until they have
been reviewed by an administrator.File and Print Services for µsoft.windows; Clients
(Samba)Samba serverMicrosoft Windowsfile serverWindows clientsprint serverWindows clientsSamba is a popular open source
software package that provides file and print services using the
SMB/CIFS protocol. This protocol is built
into µsoft.windows; systems. It can be added to
non-µsoft.windows; systems by installing the
Samba client libraries. The protocol
allows clients to access shared data and printers. These shares
can be mapped as a local disk drive and shared printers can be
used as if they were local printers.On &os;, the Samba client
libraries can be installed using the
net/samba-smbclient port or package. The
client provides the ability for a &os; system to access
SMB/CIFS shares in a µsoft.windows;
network.A &os; system can also be configured to act as a
Samba server by installing the
net/samba43 port or package. This allows the
administrator to create SMB/CIFS shares on
the &os; system which can be accessed by clients running
µsoft.windows; or the Samba
client libraries.Server ConfigurationSamba is configured in
/usr/local/etc/smb4.conf. This file must
be created before Samba
can be used.A simple smb4.conf to share
directories and printers with &windows; clients in a
workgroup is shown here. For more complex setups
involving LDAP or Active Directory, it is easier to use
&man.samba-tool.8; to create the initial
smb4.conf.[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam
# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755Global SettingsSettings that describe the network are added in
/usr/local/etc/smb4.conf:workgroupThe name of the workgroup to be served.netbios nameThe NetBIOS name by which a
Samba server is known. By
default, it is the same as the first component of the
host's DNS name.server stringThe string that will be displayed in the output of
net view and some other
networking tools that seek to display descriptive text
about the server.wins supportWhether Samba will
act as a WINS server. Do not
enable support for WINS on more than
one server on the network.Security SettingsThe most important settings in
/usr/local/etc/smb4.conf are the
security model and the backend password format. These
directives control the options:securityThe most common settings are
security = share and
security = user. If the clients
use usernames that are the same as their usernames on
the &os; machine, user level security should be
used. This is the default security policy and it
requires clients to first log on before they can
access shared resources.In share level security, clients do not need to
log onto the server with a valid username and password
before attempting to connect to a shared resource.
This was the default security model for older versions
of Samba.passdb backendNIS+LDAPSQL databaseSamba has several
different backend authentication models. Clients may
be authenticated with LDAP, NIS+, an SQL database,
or a modified password file. The recommended
authentication method, tdbsam,
is ideal for simple networks and is covered here.
For larger or more complex networks,
ldapsam is recommended.
smbpasswd
was the former default and is now obsolete.Samba Users&os; user accounts must be mapped to the
SambaSAMAccount database for
&windows; clients to access the share.
Map existing &os; user accounts using
&man.pdbedit.8;:&prompt.root; pdbedit -a usernameThis section has only mentioned the most commonly used
settings. Refer to the Official
Samba HOWTO for additional information about the
available configuration options.Starting SambaTo enable Samba at boot time,
add the following line to
/etc/rc.conf:samba_enable="YES"To start Samba now:&prompt.root; service samba start
Starting SAMBA: removing stale tdbs :
Starting nmbd.
Starting smbd.Samba consists of three
separate daemons. Both the nmbd
and smbd daemons are started by
samba_enable. If winbind name resolution
is also required, set:winbindd_enable="YES"Samba can be stopped at any
time by typing:&prompt.root; service samba stopSamba is a complex software
suite with functionality that allows broad integration with
µsoft.windows; networks. For more information about
functionality beyond the basic configuration described here,
refer to http://www.samba.org.Clock Synchronization with NTPNTPntpdOver time, a computer's clock is prone to drift. This is
problematic as many network services require the computers on a
network to share the same accurate time. Accurate time is also
needed to ensure that file timestamps stay consistent. The
Network Time Protocol (NTP) is one way to
provide clock accuracy in a network.&os; includes &man.ntpd.8; which can be configured to query
other NTP servers in order to synchronize the
clock on that machine or to provide time services to other
computers in the network. The servers which are queried can be
local to the network or provided by an ISP.
In addition, an online
list of publicly accessible NTP
servers is available. When choosing a public
NTP server, select one that is geographically
close and review its usage policy.Choosing several NTP servers is
recommended in case one of the servers becomes unreachable or
its clock proves unreliable. As ntpd
receives responses, it favors reliable servers over the less
reliable ones.This section describes how to configure
ntpd on &os;. Further documentation
can be found in /usr/share/doc/ntp/ in HTML
format.NTP ConfigurationNTPntp.confOn &os;, the built-in ntpd can
be used to synchronize a system's clock. To enable
ntpd at boot time, add
ntpd_enable="YES" to
/etc/rc.conf. Additional variables can
be specified in /etc/rc.conf. Refer to
&man.rc.conf.5; and &man.ntpd.8; for
details.This application reads /etc/ntp.conf
to determine which NTP servers to query.
Here is a simple example of an
/etc/ntp.conf: Sample /etc/ntp.confserver ntplocal.example.com prefer
server timeserver.example.org
server ntp2a.example.net
driftfile /var/db/ntp.driftThe format of this file is described in &man.ntp.conf.5;.
The server option specifies which servers
to query, with one server listed on each line. If a server
entry includes prefer, that server is
preferred over other servers. A response from a preferred
server will be discarded if it differs significantly from
other servers' responses; otherwise it will be used. The
prefer argument should only be used for
NTP servers that are known to be highly
accurate, such as those with special time monitoring
hardware.The driftfile entry specifies which
file is used to store the system clock's frequency offset.
ntpd uses this to automatically
compensate for the clock's natural drift, allowing it to
maintain a reasonably correct setting even if it is cut off
from all external time sources for a period of time. This
file also stores information about previous responses
from NTP servers. Since this file contains
internal information for NTP, it should not
be modified.By default, an NTP server is accessible
to any network host. The restrict option
in /etc/ntp.conf can be used to control
which systems can access the server. For example, to deny all
machines access to the NTP server, add
the following line to
/etc/ntp.conf:restrict default ignoreThis will also prevent access from other
NTP servers. If there is a need to
synchronize with an external NTP server,
allow only that specific server. Refer to &man.ntp.conf.5;
for more information.To allow machines within the network to synchronize their
clocks with the server, but ensure they are not allowed to
configure the server or be used as peers to synchronize
against, instead use:restrict 192.168.1.0 mask 255.255.255.0 nomodify notrapwhere 192.168.1.0 is the local
network address and 255.255.255.0 is the network's
subnet mask.Multiple restrict entries are
supported. For more details, refer to the Access
Control Support subsection of
&man.ntp.conf.5;.Once ntpd_enable="YES" has been added
to /etc/rc.conf,
ntpd can be started now without
rebooting the system by typing:&prompt.root; service ntpd startUsing NTP with a
PPP Connectionntpd does not need a permanent
connection to the Internet to function properly. However, if
a PPP connection is configured to dial out
on demand, NTP traffic should be prevented
from triggering a dial out or keeping the connection alive.
This can be configured with filter
directives in /etc/ppp/ppp.conf. For
example: set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0For more details, refer to the
PACKET FILTERING section in &man.ppp.8; and
the examples in
/usr/share/examples/ppp/.Some Internet access providers block low-numbered ports,
preventing NTP from functioning since replies never reach
the machine.iSCSI Initiator and Target
ConfigurationiSCSI is a way to share storage over a
network. Unlike NFS, which works at the file
system level, iSCSI works at the block device
level.In iSCSI terminology, the system that
shares the storage is known as the target.
The storage can be a physical disk, or an area representing
multiple disks or a portion of a physical disk. For example, if
the disk(s) are formatted with ZFS, a zvol
can be created to use as the iSCSI
storage.The clients which access the iSCSI
storage are called initiators. To
initiators, the storage available through
iSCSI appears as a raw, unformatted disk
known as a LUN. Device nodes for the disk
appear in /dev/ and the device must be
separately formatted and mounted.Beginning with 10.0-RELEASE, &os; provides a native,
kernel-based iSCSI target and initiator.
This section describes how to configure a &os; system as a
target or an initiator.Configuring an iSCSI TargetThe native iSCSI target is supported
starting with &os; 10.0-RELEASE. To use
iSCSI in older versions of &os;, install
a userspace target from the Ports Collection, such as
net/istgt. This chapter only describes
the native target.To configure an iSCSI target, create
the /etc/ctl.conf configuration file, add
a line to /etc/rc.conf to make sure the
&man.ctld.8; daemon is automatically started at boot, and then
start the daemon.The following is an example of a simple
/etc/ctl.conf configuration file. Refer
to &man.ctl.conf.5; for a more complete description of this
file's available options.portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}
target iqn.2012-06.com.example:target0 {
auth-group no-authentication
portal-group pg0
lun 0 {
path /data/target0-0
size 4G
}
}The first entry defines the pg0 portal
group. Portal groups define which network addresses the
&man.ctld.8; daemon will listen on. The
discovery-auth-group no-authentication
entry indicates that any initiator is allowed to perform
iSCSI target discovery without
authentication. Lines three and four configure &man.ctld.8;
to listen on all IPv4
(listen 0.0.0.0) and
IPv6 (listen [::])
addresses on the default port of 3260.It is not necessary to define a portal group as there is a
built-in portal group called default. In
this case, the difference between default
and pg0 is that with
default, target discovery is always denied,
while with pg0, it is always
allowed.The second entry defines a single target. Target has two
possible meanings: a machine serving iSCSI
or a named group of LUNs. This example
uses the latter meaning, where
iqn.2012-06.com.example:target0 is the
target name. This target name is suitable for testing
purposes. For actual use, change
com.example to the real domain name,
reversed. The 2012-06 represents the year
and month of acquiring control of that domain name, and
target0 can be any value. Any number of
targets can be defined in this configuration file.The auth-group no-authentication line
allows all initiators to connect to the specified target and
portal-group pg0 makes the target reachable
through the pg0 portal group.The next section defines the LUN. To
the initiator, each LUN will be visible as
a separate disk device. Multiple LUNs can
be defined for each target. Each LUN is
identified by a number, where LUN 0 is
mandatory. The path /data/target0-0 line
defines the full path to a file or zvol backing the
LUN. That path must exist before starting
&man.ctld.8;. The second line is optional and specifies the
size of the LUN.Next, to make sure the &man.ctld.8; daemon is started at
boot, add this line to
/etc/rc.conf:ctld_enable="YES"To start &man.ctld.8; now, run this command:&prompt.root; service ctld startAs the &man.ctld.8; daemon is started, it reads
/etc/ctl.conf. If this file is edited
after the daemon starts, use this command so that the changes
take effect immediately:&prompt.root; service ctld reloadAuthenticationThe previous example is inherently insecure as it uses
no authentication, granting anyone full access to all
targets. To require a username and password to access
targets, modify the configuration as follows:auth-group ag0 {
chap username1 secretsecret
chap username2 anothersecret
}
portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}
target iqn.2012-06.com.example:target0 {
auth-group ag0
portal-group pg0
lun 0 {
path /data/target0-0
size 4G
}
}The auth-group section defines
username and password pairs. An initiator trying to connect
to iqn.2012-06.com.example:target0 must
first specify a defined username and secret. However,
target discovery is still permitted without authentication.
To require target discovery authentication, set
discovery-auth-group to a defined
auth-group name instead of
no-authentication.It is common to define a single exported target for
every initiator. As a shorthand for the syntax above, the
username and password can be specified directly in the
target entry:target iqn.2012-06.com.example:target0 {
portal-group pg0
chap username1 secretsecret
lun 0 {
path /data/target0-0
size 4G
}
}Configuring an iSCSI InitiatorThe iSCSI initiator described in this
section is supported starting with &os; 10.0-RELEASE. To
use the iSCSI initiator available in
older versions, refer to &man.iscontrol.8;.The iSCSI initiator requires that the
&man.iscsid.8; daemon is running. This daemon does not use a
configuration file. To start it automatically at boot, add
this line to /etc/rc.conf:iscsid_enable="YES"To start &man.iscsid.8; now, run this command:&prompt.root; service iscsid startConnecting to a target can be done with or without an
/etc/iscsi.conf configuration file. This
section demonstrates both types of connections.Connecting to a Target Without a Configuration
FileTo connect an initiator to a single target, specify the
IP address of the portal and the name of
the target:&prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0To verify if the connection succeeded, run
iscsictl without any arguments. The
output should look similar to this:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0In this example, the iSCSI session
was successfully established, with
/dev/da0 representing the attached
LUN. If the
iqn.2012-06.com.example:target0 target
exports more than one LUN, multiple
device nodes will be shown in that section of the
output:Connected: da0 da1 da2.Any errors will be reported in the output, as well as
the system logs. For example, this message usually means
that the &man.iscsid.8; daemon is not running:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8)The following message suggests a networking problem,
such as a wrong IP address or
port:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.11 Connection refusedThis message means that the specified target name is
wrong:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Not foundThis message means that the target requires
authentication:Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Authentication failedTo specify a CHAP username and
secret, use this syntax:&prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecretConnecting to a Target with a Configuration
FileTo connect using a configuration file, create
/etc/iscsi.conf with contents like
this:t0 {
TargetAddress = 10.10.10.10
TargetName = iqn.2012-06.com.example:target0
AuthMethod = CHAP
chapIName = user
chapSecret = secretsecret
}The t0 specifies a nickname for the
configuration file section. It will be used by the
initiator to specify which configuration to use. The other
lines specify the parameters to use during connection. The
TargetAddress and
TargetName are mandatory, whereas the
other options are optional. In this example, the
CHAP username and secret are
shown.To connect to the defined target, specify the
nickname:&prompt.root; iscsictl -An t0Alternately, to connect to all targets defined in the
configuration file, use:&prompt.root; iscsictl -AaTo make the initiator automatically connect to all
targets in /etc/iscsi.conf, add the
following to /etc/rc.conf:iscsictl_enable="YES"
iscsictl_flags="-Aa"
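As noted earlier, the LUN attached by the initiator is a raw disk and must still be formatted and mounted before use. A minimal sketch, assuming the LUN appeared as /dev/da0 and that a file system on the bare device is acceptable:
&prompt.root; newfs /dev/da0
&prompt.root; mount /dev/da0 /mnt
For production use, partitioning the disk with &man.gpart.8; first and adding an &man.fstab.5; entry may be preferable.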
Index: head/en_US.ISO8859-1/books/handbook/security/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 49530)
+++ head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 49531)
@@ -1,4147 +1,4147 @@
SecurityTomRhodesRewritten by securitySynopsisSecurity, whether physical or virtual, is a topic so broad
that an entire industry has evolved around it. Hundreds of
standard practices have been authored about how to secure
systems and networks, and as a user of &os;, understanding how
to protect against attacks and intruders is a must.In this chapter, several fundamentals and techniques will be
discussed. The &os; system comes with multiple layers of
security, and many more third party utilities may be added to
enhance security.After reading this chapter, you will know:Basic &os; system security concepts.The various crypt mechanisms available in &os;.How to set up one-time password authentication.How to configure TCP Wrapper
for use with &man.inetd.8;.How to set up Kerberos on
&os;.How to configure IPsec and create a
VPN.How to configure and use
OpenSSH on &os;.How to use file system ACLs.How to use pkg to audit
third party software packages installed from the Ports
Collection.How to utilize &os; security advisories.What Process Accounting is and how to enable it on
&os;.How to control user resources using login classes or the
resource limits database.Before reading this chapter, you should:Understand basic &os; and Internet concepts.Additional security topics are covered elsewhere in this
Handbook. For example, Mandatory Access Control is discussed in
and Internet firewalls are discussed in
.IntroductionSecurity is everyone's responsibility. A weak entry point
in any system could allow intruders to gain access to critical
information and cause havoc on an entire network. One of the
core principles of information security is the
CIA triad, which stands for the
Confidentiality, Integrity, and Availability of information
systems.The CIA triad is a bedrock concept of
computer security as customers and users expect their data to be
protected. For example, a customer expects that their credit
card information is securely stored (confidentiality), that
their orders are not changed behind the scenes (integrity), and
that they have access to their order information at all times
(availability).To provide CIA, security professionals
apply a defense in depth strategy. The idea of defense in depth
is to add several layers of security so that the failure of a single
layer does not cause the entire security system to collapse. For example,
a system administrator cannot simply turn on a firewall and
consider the network or system secure. One must also audit
accounts, check the integrity of binaries, and ensure malicious
tools are not installed. To implement an effective security
strategy, one must understand threats and how to defend against
them.What is a threat as it pertains to computer security?
Threats are not limited to remote attackers who attempt to
access a system without permission from a remote location.
Threats also include employees, malicious software, unauthorized
network devices, natural disasters, security vulnerabilities,
and even competing corporations.Systems and networks can be accessed without permission,
sometimes by accident, or by remote attackers, and in some
cases, via corporate espionage or former employees. As a user,
it is important to prepare for and admit when a mistake has led
to a security breach and report possible issues to the security
team. As an administrator, it is important to know of the
threats and be prepared to mitigate them.When applying security to systems, it is recommended to
start by securing the basic accounts and system configuration,
and then to secure the network layer so that it adheres to the
system policy and the organization's security procedures. Many
organizations already have a security policy that covers the
configuration of technology devices. The policy should include
the security configuration of workstations, desktops, mobile
devices, phones, production servers, and development servers.
In many cases, standard operating procedures
(SOPs) already exist. When in doubt, ask the
security team.The rest of this introduction describes how some of these
basic security configurations are performed on a &os; system.
The rest of this chapter describes some specific tools which can
be used when implementing a security policy on a &os;
system.Preventing LoginsIn securing a system, a good starting point is an audit of
accounts. Ensure that root has a strong password and
that this password is not shared. Disable any accounts that
do not need login access.To deny login access to accounts, two methods exist. The
first is to lock the account. This example locks the
toor account:&prompt.root; pw lock toorThe second method is to prevent login access by changing
the shell to /usr/sbin/nologin. Only the
superuser can change the shell for other users:&prompt.root; chsh -s /usr/sbin/nologin toorThe /usr/sbin/nologin shell prevents
the system from assigning a shell to the user when they
attempt to login.Permitted Account EscalationIn some cases, system administration needs to be shared
with other users. &os; has two methods to handle this. The
first one, which is not recommended, is a shared root password
used by members of the wheel group. With this
method, a user types su and enters the
password for root
whenever superuser access is needed. The user should then
type exit to leave privileged access after
finishing the commands that required administrative access.
To add a user to this group, edit
/etc/group and add the user to the end of
the wheel entry. The user must be
separated by a comma character with no space.The second, and recommended, method to permit privilege
escalation is to install the security/sudo
package or port. This software provides additional auditing,
more fine-grained user control, and can be configured to lock
users into running only the specified privileged
commands.After installation, use visudo to edit
/usr/local/etc/sudoers. This example
creates a new webadmin group, adds the
trhodes account to
that group, and configures that group access to restart
apache24:&prompt.root; pw groupadd webadmin -M trhodes -g 6000
&prompt.root; visudo
%webadmin ALL=(ALL) /usr/sbin/service apache24 *Password HashesPasswords are a necessary evil of technology. When they
must be used, they should be complex and a powerful hash
mechanism should be used to encrypt the version that is stored
in the password database. &os; supports the
DES, MD5,
SHA256, SHA512, and
Blowfish hash algorithms in its crypt()
library. The default of SHA512 should not
be changed to a less secure hashing algorithm, but can be
changed to the more secure Blowfish algorithm.Blowfish is not part of AES and is
not considered compliant with any Federal Information
Processing Standards (FIPS). Its use may
not be permitted in some environments.To determine which hash algorithm is used to encrypt a
user's password, the superuser can view the hash for the user
in the &os; password database. Each hash starts with a symbol
which indicates the type of hash mechanism used to encrypt the
password. If DES is used, there is no
beginning symbol. For MD5, the symbol is
$. For SHA256, the symbol is
$5$, and for SHA512, the symbol is
$6$. For Blowfish, the symbol is
$2a$. In this example, the password for
dru is hashed using
the default SHA512 algorithm as the hash
starts with $6$. Note that the encrypted
hash, not the password itself, is stored in the password
database:&prompt.root; grep dru /etc/master.passwd
dru:$6$pzIjSvCAn.PBYQBA$PXpSeWPx3g5kscj3IMiM7tUEUSPmGexxta.8Lt9TGSi2lNQqYGKszsBPuGME0:1001:1001::0:0:dru:/usr/home/dru:/bin/cshThe hash mechanism is set in the user's login class. For
this example, the user is in the default
login class and the hash algorithm is set with this line in
/etc/login.conf: :passwd_format=sha512:\To change the algorithm to Blowfish, modify that line to
look like this: :passwd_format=blf:\Then run cap_mkdb /etc/login.conf as
described in . Note that this
change will not affect any existing password hashes. This
means that all passwords should be re-hashed by asking users
to run passwd in order to change their
password.For remote logins, two-factor authentication should be
used. An example of two-factor authentication is
something you have, such as a key, and
something you know, such as the passphrase for
that key. Since OpenSSH is part of
the &os; base system, all network logins should be over an
encrypted connection and use key-based authentication instead
of passwords. For more information, refer to . Kerberos users may need to make
additional changes to implement
OpenSSH in their network. These
changes are described in .Password Policy EnforcementEnforcing a strong password policy for local accounts is a
fundamental aspect of system security. In &os;, password
length, password strength, and password complexity can be
implemented using built-in Pluggable Authentication Modules
(PAM).This section demonstrates how to configure the minimum and
maximum password length and the enforcement of mixed
characters using the pam_passwdqc.so
module. This module is enforced when a user changes their
password.To configure this module, become the superuser and
uncomment the line containing
pam_passwdqc.so in
/etc/pam.d/passwd. Then, edit that line
to match the password policy:password requisite pam_passwdqc.so min=disabled,disabled,disabled,12,10 similar=deny retry=3 enforce=usersThis example sets several requirements for new passwords.
The min setting controls the minimum
password length. It has five values because this module
defines five different types of passwords based on their
complexity. Complexity is defined by the type of characters
that must exist in a password, such as letters, numbers,
symbols, and case. The types of passwords are described in
&man.pam.passwdqc.8;. In this example, the first three types
of passwords are disabled, meaning that passwords that meet
those complexity requirements will not be accepted, regardless
of their length. The 12 sets a minimum
password policy of at least twelve characters, if the password
also contains characters with three types of complexity. The
10 sets the password policy to also allow
passwords of at least ten characters, if the password contains
characters with four types of complexity.The similar setting denies passwords
that are similar to the user's previous password. The
retry setting provides a user with three
opportunities to enter a new password.Once this file is saved, a user changing their password
will see a message similar to the following:&prompt.user; passwd
Changing local password for trhodes
Old Password:
You can now choose the new password.
A valid password should be a mix of upper and lower case letters,
digits and other characters. You can use a 12 character long
password with characters from at least 3 of these 4 classes, or
a 10 character long password containing characters from all the
classes. Characters that form a common pattern are discarded by
the check.
Alternatively, if noone else can see your terminal now, you can
pick this as your password: "trait-useful&knob".
Enter new password:If a password that does not match the policy is entered,
it will be rejected with a warning and the user will have an
opportunity to try again, up to the configured number of
retries.Most password policies require passwords to expire after
so many days. To set a password age time in &os;, set
passwordtime for the user's login class in
/etc/login.conf. The
default login class contains an
example:# :passwordtime=90d:\So, to set an expiry of 90 days for this login class,
remove the comment symbol (#), save the
edit, and run cap_mkdb
/etc/login.conf.To set the expiration on individual users, pass an
expiration date or the number of days to expiry and a username
to pw:&prompt.root; pw usermod -p 30-apr-2015 -n trhodesAs seen here, an expiration date is set in the form of
day, month, and year. For more information, see
&man.pw.8;.Detecting RootkitsA rootkit is any unauthorized
software that attempts to gain root access to a system. Once
installed, this malicious software will normally open up
another avenue of entry for an attacker. Realistically, once
a system has been compromised by a rootkit and an
investigation has been performed, the system should be
reinstalled from scratch. There is tremendous risk that even
the most prudent security or systems engineer will miss
something an attacker left behind.A rootkit does do one thing useful for administrators: once
detected, it is a sign that a compromise happened at some
point. But, these types of applications tend to be very well
hidden. This section demonstrates a tool that can be used to
detect rootkits, security/rkhunter.After installation of this package or port, the system may
be checked using the following command. It will produce a lot
of information and will require some manual pressing of
ENTER:&prompt.root; rkhunter -cAfter the process completes, a status message will be
printed to the screen. This message will include the number
of files checked, suspect files, possible rootkits, and more.
During the check, some generic security warnings may
be produced about hidden files, the
OpenSSH protocol selection, and
known vulnerable versions of installed software. These can be
handled now or after a more detailed analysis has been
performed.Every administrator should know what is running on the
systems they are responsible for. Third-party tools like
rkhunter and
sysutils/lsof, and native commands such
as netstat and ps, can
show a great deal of information on the system. Take notes on
what is normal, ask questions when something seems out of
place, and be paranoid. While preventing a compromise is
ideal, detecting a compromise is a must.Binary VerificationVerification of system files and binaries is important
because it provides the system administration and security
teams information about system changes. A software
application that monitors the system for changes is called an
Intrusion Detection System (IDS).&os; provides native support for a basic
IDS system. While the nightly security
emails will notify an administrator of changes, the
information is stored locally and there is a chance that a
malicious user could modify this information in order to hide
their changes to the system. As such, it is recommended to
create a separate set of binary signatures and store them on a
read-only, root-owned directory or, preferably, on a removable
USB disk or remote
rsync server.The built-in mtree utility can be used
to generate a specification of the contents of a directory. A
seed, or a numeric constant, is used to generate the
specification and is required to check that the specification
has not changed. This makes it possible to determine if a
file or binary has been modified. Since the seed value is
unknown by an attacker, faking or checking the checksum values
of files will be difficult, if not impossible. The following
example generates a set of SHA256 hashes,
one for each system binary in /bin, and
saves those values to a hidden file in root's home directory,
/root/.bin_chksum_mtree:&prompt.root; mtree -s 3483151339707503 -c -K cksum,sha256digest -p /bin > /root/.bin_chksum_mtree
&prompt.root; mtree: /bin checksum: 3427012225The 3483151339707503 represents
the seed. This value should be remembered, but not
shared.Viewing /root/.bin_chksum_mtree should
yield output similar to the following:# user: root
# machine: dreadnaught
# tree: /bin
# date: Mon Feb 3 10:19:53 2014
# .
/set type=file uid=0 gid=0 mode=0555 nlink=1 flags=none
. type=dir mode=0755 nlink=2 size=1024 \
time=1380277977.000000000
\133 nlink=2 size=11704 time=1380277977.000000000 \
cksum=484492447 \
sha256digest=6207490fbdb5ed1904441fbfa941279055c3e24d3a4049aeb45094596400662a
cat size=12096 time=1380277975.000000000 cksum=3909216944 \
sha256digest=65ea347b9418760b247ab10244f47a7ca2a569c9836d77f074e7a306900c1e69
chflags size=8168 time=1380277975.000000000 cksum=3949425175 \
sha256digest=c99eb6fc1c92cac335c08be004a0a5b4c24a0c0ef3712017b12c89a978b2dac3
chio size=18520 time=1380277975.000000000 cksum=2208263309 \
sha256digest=ddf7c8cb92a58750a675328345560d8cc7fe14fb3ccd3690c34954cbe69fc964
chmod size=8640 time=1380277975.000000000 cksum=2214429708 \
sha256digest=a435972263bf814ad8df082c0752aa2a7bdd8b74ff01431ccbd52ed1e490bbe7The machine's hostname, the date and time the
specification was created, and the name of the user who
created the specification are included in this report. There
is a checksum, size, time, and SHA256
digest for each binary in the directory.To verify that the binary signatures have not changed,
compare the current contents of the directory to the
previously generated specification, and save the results to a
file. This command requires the seed that was used to
generate the original specification:&prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output
&prompt.root; mtree: /bin checksum: 3427012225This should produce the same checksum for
/bin that was produced when the
specification was created. If no changes have occurred to the
binaries in this directory, the
/root/.bin_chksum_output output file will
be empty. To simulate a change, change the date on
/bin/cat using touch
and run the verification command again:&prompt.root; touch /bin/cat
&prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output
&prompt.root; more /root/.bin_chksum_output
cat changed
modification time expected Fri Sep 27 06:32:55 2013 found Mon Feb 3 10:28:43 2014It is recommended to create specifications for the
directories which contain binaries and configuration files, as
well as any directories containing sensitive data. Typically,
specifications are created for /bin,
/sbin, /usr/bin,
/usr/sbin,
/usr/local/bin,
/etc, and
/usr/local/etc.More advanced IDS systems exist, such
as security/aide. In most cases,
mtree provides the functionality
administrators need. It is important to keep the seed value
and the checksum output hidden from malicious users. More
information about mtree can be found in
&man.mtree.8;.System Tuning for SecurityIn &os;, many system features can be tuned using
sysctl. A few of the security features
which can be tuned to prevent Denial of Service
(DoS) attacks will be covered in this
section. More information about using
sysctl, including how to temporarily change
values and how to make the changes permanent after testing,
can be found in .Any time a setting is changed with
sysctl, the chance to cause undesired
harm is increased, affecting the availability of the system.
All changes should be monitored and, if possible, tried on a
testing system before being used on a production
system.By default, the &os; kernel boots with a security level of
-1. This is called insecure
mode because immutable file flags may be turned off
and all devices may be read from or written to. The security
level will remain at -1 unless it is
altered through sysctl or by a setting in
the startup scripts. The security level may be increased
during system startup by setting
kern_securelevel_enable to
YES in /etc/rc.conf,
and the value of kern_securelevel to the
desired security level. See &man.security.7; and &man.init.8;
for more information on these settings and the available
security levels.Increasing the securelevel can break
Xorg and cause other issues. Be
prepared to do some debugging.The net.inet.tcp.blackhole and
net.inet.udp.blackhole settings can be used
to drop incoming SYN packets on closed
ports without sending a return RST
response. The default behavior is to return an
RST to show a port is closed. Changing the
default provides some level of protection against port scans,
which are used to determine which applications are running on
a system. Set net.inet.tcp.blackhole to
2 and
net.inet.udp.blackhole to
1. Refer to &man.blackhole.4; for more
information about these settings.The net.inet.icmp.drop_redirect and
net.inet.ip.redirect settings help prevent
against redirect attacks. A redirect
attack is a type of DoS which sends mass
numbers of ICMP type 5 packets. Since
these packets are not required, set
net.inet.icmp.drop_redirect to
1 and set
net.inet.ip.redirect to
0.Source routing is a method for detecting and accessing
non-routable addresses on the internal network. This should
be disabled as non-routable addresses are normally not
routable on purpose. To disable this feature, set
net.inet.ip.sourceroute and
net.inet.ip.accept_sourceroute to
0.When a machine on the network needs to send messages to
all hosts on a subnet, an ICMP echo request
message is sent to the broadcast address. However, there is
no reason for an external host to perform such an action. To
reject all external broadcast requests, set
- net.inet.icmp.bmcastecho to
+ net.inet.icmp.bmcastecho to
0.Some additional settings are documented in
&man.security.7;.One-time Passwordsone-time passwordssecurityone-time passwordsBy default, &os; includes support for One-time Passwords In
Everything (OPIE). OPIE
is designed to prevent replay attacks, in which an attacker
discovers a user's password and uses it to access a system.
Since a password is only used once in OPIE, a
discovered password is of little use to an attacker.
OPIE uses a secure hash and a
challenge/response system to manage passwords. The &os;
implementation uses the MD5 hash by
default.OPIE uses three different types of
passwords. The first is the usual &unix; or Kerberos password.
The second is the one-time password which is generated by
opiekey. The third type of password is the
secret password which is used to generate
one-time passwords. The secret password has nothing to do with,
and should be different from, the &unix; password.There are two other pieces of data that are important to
OPIE. One is the seed or
key, consisting of two letters and five digits.
The other is the iteration count, a number
between 1 and 100. OPIE creates the one-time
password by concatenating the seed and the secret password,
applying the MD5 hash as many times as
specified by the iteration count, and turning the result into
six short English words which represent the one-time password.
The authentication system keeps track of the last one-time
password used, and the user is authenticated if the hash of the
user-provided password is equal to the previous password.
Because a one-way hash is used, it is impossible to generate
future one-time passwords if a successfully used password is
captured. The iteration count is decremented after each
successful login to keep the user and the login program in sync.
When the iteration count gets down to 1,
OPIE must be reinitialized.There are a few programs involved in this process. A
one-time password, or a consecutive list of one-time passwords,
is generated by passing an iteration count, a seed, and a secret
password to &man.opiekey.1;. In addition to initializing
OPIE, &man.opiepasswd.1; is used to change
passwords, iteration counts, or seeds. The relevant credential
files in /etc/opiekeys are examined by
&man.opieinfo.1; which prints out the invoking user's current
iteration count and seed.This section describes four different sorts of operations.
The first is how to set up one-time-passwords for the first time
over a secure connection. The second is how to use
opiepasswd over an insecure connection. The
third is how to log in over an insecure connection. The fourth
is how to generate a number of keys which can be written down or
printed out to use at insecure locations.Initializing OPIETo initialize OPIE for the first time,
run this command from a secure location:&prompt.user; opiepasswd -c
Adding unfurl:
Only use this method from the console; NEVER from remote. If you are using
telnet, xterm, or a dial-in, type ^C now or exit with no password.
Then run opiepasswd without the -c parameter.
Using MD5 to compute responses.
Enter new secret pass phrase:
Again new secret pass phrase:
ID unfurl OTP key is 499 to4268
MOS MALL GOAT ARM AVID COEDThe -c sets console mode which assumes
that the command is being run from a secure location, such as
a computer under the user's control or a
SSH session to a computer under the user's
control.When prompted, enter the secret password which will be
used to generate the one-time login keys. This password
should be difficult to guess and should be different than the
password which is associated with the user's login account.
It must be between 10 and 127 characters long. Remember this
password.The ID line lists the login name
(unfurl), default iteration count
(499), and default seed
(to4268). When logging in, the system will
remember these parameters and display them, meaning that they
do not have to be memorized. The last line lists the
generated one-time password which corresponds to those
parameters and the secret password. At the next login, use
this one-time password.Insecure Connection InitializationTo initialize or change the secret password on an
insecure system, a secure connection is needed to some place
where opiekey can be run. This might be a
shell prompt on a trusted machine. An iteration count is
needed, where 100 is probably a good value, and the seed can
either be specified or the randomly-generated one used. On
the insecure connection, the machine being initialized, use
&man.opiepasswd.1;:&prompt.user; opiepasswd
Updating unfurl:
You need the response from an OTP generator.
Old secret pass phrase:
otp-md5 498 to4268 ext
Response: GAME GAG WELT OUT DOWN CHAT
New secret pass phrase:
otp-md5 499 to4269
Response: LINE PAP MILK NELL BUOY TROY
ID mark OTP key is 499 gr4269
LINE PAP MILK NELL BUOY TROYTo accept the default seed, press Return.
Before entering an access password, move over to the secure
connection and give it the same parameters:&prompt.user; opiekey 498 to4268
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase:
GAME GAG WELT OUT DOWN CHATSwitch back over to the insecure connection, and copy the
generated one-time password over to the relevant
program.Generating a Single One-time PasswordAfter initializing OPIE and logging in,
a prompt like this will be displayed:&prompt.user; telnet example.com
Trying 10.0.0.1...
Connected to example.com
Escape character is '^]'.
FreeBSD/i386 (example.com) (ttypa)
login: <username>
otp-md5 498 gr4269 ext
Password: The OPIE prompt provides a useful
feature. If Return is pressed at the
password prompt, the prompt will turn echo on and display
what is typed. This can be useful when attempting to type in
a password by hand from a printout.MS-DOSWindowsMacOSAt this point, generate the one-time password to answer
this login prompt. This must be done on a trusted system
where it is safe to run &man.opiekey.1;. There are versions
of this command for &windows;, &macos; and &os;. This command
needs the iteration count and the seed as command line
options. Use cut-and-paste from the login prompt on the
machine being logged in to.On the trusted system:&prompt.user; opiekey 498 to4268
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase:
GAME GAG WELT OUT DOWN CHATOnce the one-time password is generated, continue to log
in.Generating Multiple One-time PasswordsSometimes there is no access to a trusted machine or
secure connection. In this case, it is possible to use
&man.opiekey.1; to generate a number of one-time passwords
beforehand. For example:&prompt.user; opiekey -n 5 30 zz99999
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase: <secret password>
26: JOAN BORE FOSS DES NAY QUIT
27: LATE BIAS SLAY FOLK MUCH TRIG
28: SALT TIN ANTI LOON NEAL USE
29: RIO ODIN GO BYE FURY TIC
30: GREW JIVE SAN GIRD BOIL PHIThe -n 5 requests five keys in sequence, and 30 specifies what the last iteration
number should be. Note that these are printed out in
reverse order of use. The really
paranoid might want to write the results down by hand;
otherwise, print the list. Each line shows both the iteration
count and the one-time password. Scratch off the passwords as
they are used.Restricting Use of &unix; PasswordsOPIE can restrict the use of &unix;
passwords based on the IP address of a login session. The
relevant file is /etc/opieaccess, which
is present by default. Refer to &man.opieaccess.5; for more
information on this file and the security considerations to
be aware of when using it.Here is a sample opieaccess:permit 192.168.0.0 255.255.0.0This line allows users whose IP source address (which is
vulnerable to spoofing) matches the specified value and mask,
to use &unix; passwords at any time.If no rules in opieaccess are
matched, the default is to deny non-OPIE
logins.TCP WrapperTomRhodesWritten
by TCP WrapperTCP Wrapper is a host-based
access control system which extends the abilities of inetd. It can be configured to provide
logging support, return messages, and connection restrictions
for the server daemons under the control of
inetd. Refer to &man.tcpd.8; for
more information about
TCP Wrapper and its features.TCP Wrapper should not be
considered a replacement for a properly configured firewall.
Instead, TCP Wrapper should be used
in conjunction with a firewall and other security enhancements
in order to provide another layer of protection in the
implementation of a security policy.Initial ConfigurationTo enable TCP Wrapper in &os;,
add the following lines to
/etc/rc.conf:inetd_enable="YES"
inetd_flags="-Ww"Then, properly configure
/etc/hosts.allow.Unlike other implementations of
TCP Wrapper, the use of
hosts.deny is deprecated in &os;. All
configuration options should be placed in
/etc/hosts.allow.In the simplest configuration, daemon connection policies
are set to either permit or block, depending on the options in
/etc/hosts.allow. The default
configuration in &os; is to allow all connections to the
daemons started with inetd.Basic configuration usually takes the form of
daemon : address : action, where
daemon is the daemon which
inetd started,
address is a valid hostname,
IP address, or an IPv6 address enclosed in
brackets ([ ]), and action is either
allow or deny.
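As a quick sketch of this format, using a hypothetical ftpd service managed by inetd and illustrative addresses, a pair of rules permitting one subnet and refusing everyone else might look like:
# hypothetical example: permit ftpd from the local subnet, refuse it everywhere else
ftpd : 192.168.0.0/255.255.255.0 : allow
ftpd : ALL : deny
Because rules are matched in order, the more specific allow line must appear before the catch-all deny line.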
TCP Wrapper uses a first rule match
semantic, meaning that the configuration file is scanned from
the beginning for a matching rule. When a match is found, the
rule is applied and the search process stops.For example, to allow POP3 connections
via the mail/qpopper daemon, the following
lines should be appended to
hosts.allow:# This line is required for POP3 connections:
qpopper : ALL : allowWhenever this file is edited, restart
inetd:&prompt.root; service inetd restartAdvanced ConfigurationTCP Wrapper provides advanced
options to allow more control over the way connections are
handled. In some cases, it may be appropriate to return a
comment to certain hosts or daemon connections. In other
cases, a log entry should be recorded or an email sent to the
administrator. Other situations may require the use of a
service for local connections only. This is all possible
through the use of configuration options known as wildcards,
expansion characters, and external command execution.Suppose that a situation occurs where a connection should
be denied yet a reason should be sent to the host who
attempted to establish that connection. That action is
possible with twist. When a connection attempt is made, twist executes a shell
command or script. An example exists in
hosts.allow:# The rest of the daemons are protected.
ALL : ALL \
: severity auth.info \
: twist /bin/echo "You are not welcome to use %d from %h."In this example, the message You are not welcome to use daemon name from hostname. will be
returned for any daemon not configured in
hosts.allow. This is useful for sending
a reply back to the connection initiator right after the
established connection is dropped. Any message returned
must be wrapped in quote
(") characters.It may be possible to launch a denial of service attack
on the server if an attacker floods these daemons with
connection requests.Another possibility is to use spawn. Like twist, spawn implicitly denies the connection and may be used to run external shell commands or scripts. Unlike twist, spawn will not send a reply back to the host
who established the connection. For example, consider the
following configuration:# We do not allow connections from example.com:
ALL : .example.com \
: spawn (/bin/echo %a from %h attempted to access %d >> \
/var/log/connections.log) \
: denyThis will deny all connection attempts from *.example.com and log the
hostname, IP address, and the daemon to
which access was attempted to
/var/log/connections.log. This example
uses the substitution characters %a and
%h. Refer to &man.hosts.access.5; for the
complete list.To match every instance of a daemon, domain, or
IP address, use ALL.
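Combining ALL with spawn, a hedged sketch of the email sent to the administrator mentioned earlier might look like this in hosts.allow; the blocked.example.com domain is hypothetical and the base system mail(1) utility is assumed:
# hypothetical: note the attempt, mail root, and refuse any daemon for hosts in a blocked domain
ALL : .blocked.example.com \
	: spawn (/bin/echo "%a tried to use %d" | /usr/bin/mail -s "tcpd alert" root) \
	: deny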
Another wildcard is PARANOID which may be
used to match any host which provides an IP
address that may be forged because the IP
address differs from its resolved hostname. In this example,
all connection requests to Sendmail
which have an IP address that varies from
its hostname will be denied:# Block possibly spoofed requests to sendmail:
sendmail : PARANOID : denyUsing the PARANOID wildcard will
result in denied connections if the client or server has a
broken DNS setup.To learn more about wildcards and their associated
functionality, refer to &man.hosts.access.5;.When adding new configuration lines, make sure that any
unneeded entries for that daemon are commented out in
hosts.allow.KerberosTillmanHodgsonContributed by MarkMurrayBased on a contribution by Kerberos is a network
authentication protocol which was originally created by the
Massachusetts Institute of Technology (MIT)
as a way to securely provide authentication across a potentially
hostile network. The Kerberos
protocol uses strong cryptography so that both a client and
server can prove their identity without sending any unencrypted
secrets over the network. Kerberos
can be described as an identity-verifying proxy system and as a
trusted third-party authentication system. After a user
authenticates with Kerberos, their
communications can be encrypted to assure privacy and data
integrity.The only function of Kerberos is
to provide the secure authentication of users and servers on the
network. It does not provide authorization or auditing
functions. It is recommended that
Kerberos be used with other security
methods which provide authorization and audit services.The current version of the protocol is version 5, described
in RFC 4120. Several free
implementations of this protocol are available, covering a wide
range of operating systems. MIT continues to
develop their Kerberos package. It
is commonly used in the US as a cryptography
product, and has historically been subject to
US export regulations. In &os;,
MIT Kerberos is
available as the security/krb5 package or
port. The Heimdal Kerberos
implementation was explicitly developed outside of the
US to avoid export regulations. The Heimdal
Kerberos distribution is included in
the base &os; installation, and another distribution with more
configurable options is available as
security/heimdal in the Ports
Collection.In Kerberos users and services
are identified as principals which are contained
within an administrative grouping, called a
realm. A typical user principal would be of the
form
user@REALM
(realms are traditionally uppercase).This section provides a guide on how to set up
Kerberos using the Heimdal
distribution included in &os;.For purposes of demonstrating a
Kerberos installation, the name
spaces will be as follows:The DNS domain (zone) will be
example.org.The Kerberos realm will be
EXAMPLE.ORG.Use real domain names when setting up
Kerberos, even if it will run
internally. This avoids DNS problems and
assures inter-operation with other
Kerberos realms.Setting up a Heimdal KDCKerberos5Key Distribution CenterThe Key Distribution Center (KDC) is
the centralized authentication service that
Kerberos provides, the
trusted third party of the system. It is the
computer that issues Kerberos
tickets, which are used for clients to authenticate to
servers. Because the KDC is considered
trusted by all other computers in the
Kerberos realm, it has heightened
security concerns. Direct access to the KDC should be
limited.While running a KDC requires few
computing resources, a dedicated machine acting only as a
KDC is recommended for security
reasons.To begin setting up a KDC, add these
lines to /etc/rc.conf:kdc_enable="YES"
kadmind_enable="YES"Next, edit /etc/krb5.conf as
follows:[libdefaults]
default_realm = EXAMPLE.ORG
[realms]
EXAMPLE.ORG = {
kdc = kerberos.example.org
admin_server = kerberos.example.org
}
[domain_realm]
.example.org = EXAMPLE.ORGIn this example, the KDC will use the
fully-qualified hostname kerberos.example.org. The
hostname of the KDC must be resolvable in the
DNS.Kerberos can also use the
DNS to locate KDCs, instead of a
[realms] section in
/etc/krb5.conf. For large organizations
that have their own DNS servers, the above
example could be trimmed to:[libdefaults]
default_realm = EXAMPLE.ORG
[domain_realm]
.example.org = EXAMPLE.ORGWith the following lines being included in the
example.org zone
file:_kerberos._udp IN SRV 01 00 88 kerberos.example.org.
_kerberos._tcp IN SRV 01 00 88 kerberos.example.org.
_kpasswd._udp IN SRV 01 00 464 kerberos.example.org.
_kerberos-adm._tcp IN SRV 01 00 749 kerberos.example.org.
_kerberos IN TXT EXAMPLE.ORGIn order for clients to be able to find the
Kerberos services, they
must have either
a fully configured /etc/krb5.conf or a
minimally configured /etc/krb5.conf and a properly configured
DNS server.Next, create the Kerberos
database which contains the keys of all principals (users and
hosts) encrypted with a master password. It is not required
to remember this password as it will be stored in
/var/heimdal/m-key; it would be
reasonable to use a 45-character random password for this
purpose. To create the master key, run
kstash and enter a password:&prompt.root; kstash
Master key: xxxxxxxxxxxxxxxxxxxxxxx
Verifying password - Master key: xxxxxxxxxxxxxxxxxxxxxxxOnce the master key has been created, the database should
be initialized. The Kerberos
administrative tool &man.kadmin.8; can be used on the KDC in a
mode that operates directly on the database, without using the
&man.kadmind.8; network service, as
kadmin -l. This resolves the
chicken-and-egg problem of trying to connect to the database
before it is created. At the kadmin
prompt, use init to create the realm's
initial database:&prompt.root; kadmin -l
kadmin> init EXAMPLE.ORG
Realm max ticket life [unlimited]:Lastly, while still in kadmin, create
the first principal using add. Stick to
the default options for the principal for now, as these can be
changed later with modify. Type
? at the prompt to see the available
options.kadmin> add tillman
Max ticket life [unlimited]:
Max renewable life [unlimited]:
Attributes []:
Password: xxxxxxxx
Verifying password - Password: xxxxxxxxNext, start the KDC services by running
service kdc start and
service kadmind start. While there will
not be any kerberized daemons running at this point, it is
possible to confirm that the KDC is
functioning by obtaining a ticket for the
principal that was just created:&prompt.user; kinit tillman
tillman@EXAMPLE.ORG's Password:Confirm that a ticket was successfully obtained using
klist:&prompt.user; klist
Credentials cache: FILE:/tmp/krb5cc_1001
Principal: tillman@EXAMPLE.ORG
Issued Expires Principal
Aug 27 15:37:58 2013 Aug 28 01:37:58 2013 krbtgt/EXAMPLE.ORG@EXAMPLE.ORGThe temporary ticket can be destroyed when the test is
finished:&prompt.user; kdestroyConfiguring a Server to Use
KerberosKerberos5enabling servicesThe first step in configuring a server to use
Kerberos authentication is to
ensure that it has the correct configuration in
/etc/krb5.conf. The version from the
KDC can be used as-is, or it can be
regenerated on the new system.Next, create /etc/krb5.keytab on the
server. This is the main part of Kerberizing a
service — it corresponds to generating a secret shared
between the service and the KDC. The
secret is a cryptographic key, stored in a
keytab. The keytab contains the server's host
key, which allows it and the KDC to verify
each other's identity. It must be transmitted to the server
in a secure fashion, as the security of the server can be
broken if the key is made public. Typically, the
keytab is generated on an administrator's
trusted machine using kadmin, then securely
transferred to the server, e.g., with &man.scp.1;; it can also
be created directly on the server if that is consistent with
the desired security policy. It is very important that the
keytab is transmitted to the server in a secure fashion: if
the key is known by some other party, that party can
impersonate any user to the server! Using
kadmin on the server directly is
convenient, because the entry for the host principal in the
KDC database is also created using
kadmin.Of course, kadmin is a kerberized
service; a Kerberos ticket is
needed to authenticate to the network service, but to ensure
that the user running kadmin is actually
present (and their session has not been hijacked),
kadmin will prompt for the password to get
a fresh ticket. The principal authenticating to the kadmin
service must be permitted to use the kadmin
interface, as specified in kadmind.acl.
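As a hedged sketch, assuming the base Heimdal layout where this file lives in /var/heimdal/kadmind.acl, an entry granting the tillman principal full administrative rights might look like:
tillman@EXAMPLE.ORG	all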
See the section titled Remote administration in
info heimdal for details on designing
access control lists. Instead of enabling remote
kadmin access, the administrator could
securely connect to the KDC via the local
console or &man.ssh.1;, and perform administration locally
using kadmin -l.After installing /etc/krb5.conf,
use add --random-key in
kadmin. This adds the server's host
principal to the database, but does not extract a copy of the
host principal key to a keytab. To generate the keytab, use
ext to extract the server's host principal
key to its own keytab:&prompt.root; kadmin
kadmin> add --random-key host/myserver.example.org
Max ticket life [unlimited]:
Max renewable life [unlimited]:
Principal expiration time [never]:
Password expiration time [never]:
Attributes []:
kadmin> ext_keytab host/myserver.example.org
kadmin> exitNote that ext_keytab stores the
extracted key in /etc/krb5.keytab by
default. This is good when being run on the server being
kerberized, but the --keytab
path/to/file argument
should be used when the keytab is being extracted
elsewhere:&prompt.root; kadmin
kadmin> ext_keytab --keytab=/tmp/example.keytab host/myserver.example.org
kadmin> exitThe keytab can then be securely copied to the server
using &man.scp.1; or a removable media. Be sure to specify a
non-default keytab name to avoid inserting unneeded keys into
the system's keytab.At this point, the server can read encrypted messages from
the KDC using its shared key, stored in
krb5.keytab. It is now ready for the
Kerberos-using services to be
enabled. One of the most common such services is
&man.sshd.8;, which supports
Kerberos via the
GSS-API. In
/etc/ssh/sshd_config, add the
line:GSSAPIAuthentication yesAfter making this change, &man.sshd.8; must be restarted
for the new configuration to take effect:
service sshd restart.Configuring a Client to Use
KerberosKerberos5configure clientsAs it was for the server, the client requires
configuration in /etc/krb5.conf. Copy
the file in place (securely) or re-enter it as needed.Test the client by using kinit,
klist, and kdestroy from
the client to obtain, show, and then delete a ticket for an
existing principal. Kerberos
applications should also be able to connect to
Kerberos enabled servers. If that
does not work but obtaining a ticket does, the problem is
likely with the server and not with the client or the
KDC. In the case of kerberized
&man.ssh.1;, GSS-API is disabled by
default, so test using ssh -o
GSSAPIAuthentication=yes
hostname.When testing a Kerberized application, try using a packet
sniffer such as tcpdump to confirm that no
sensitive information is sent in the clear.Various Kerberos client
applications are available. With the advent of a bridge so
that applications using SASL for
authentication can use GSS-API mechanisms
as well, large classes of client applications can use
Kerberos for authentication, from
Jabber clients to IMAP clients..k5login.k5usersUsers within a realm typically have their
Kerberos principal mapped to a
local user account. Occasionally, one needs to grant access
to a local user account to someone who does not have a
matching Kerberos principal. For
example, tillman@EXAMPLE.ORG may need
access to the local user account webdevelopers. Other
principals may also need access to that local account.The .k5login and
.k5users files, placed in a user's home
directory, can be used to solve this problem. For example, if
the following .k5login is placed in the
home directory of webdevelopers, both principals
listed will have access to that account without requiring a
shared password:tillman@example.org
jdoe@example.orgRefer to &man.ksu.1; for more information about
.k5users.MIT DifferencesThe major difference between the MIT
and Heimdal implementations is that kadmin
has a different, but equivalent, set of commands and uses a
different protocol. If the KDC is
MIT, the Heimdal version of
kadmin cannot be used to administer the
KDC remotely, and vice versa.Client applications may also use slightly different
command line options to accomplish the same tasks. Following
the instructions at http://web.mit.edu/Kerberos/www/
is recommended. Be careful of path issues: the
MIT port installs into
/usr/local/ by default, and the &os;
system applications run instead of the
MIT versions if PATH lists
the system directories first.When using MIT Kerberos as a KDC on
&os;, the following edits should also be made to
rc.conf:kerberos5_server="/usr/local/sbin/krb5kdc"
kadmind5_server="/usr/local/sbin/kadmind"
kerberos5_server_flags=""
kerberos5_server_enable="YES"
kadmind5_server_enable="YES"Kerberos Tips, Tricks, and
TroubleshootingWhen configuring and troubleshooting
Kerberos, keep the following points
in mind:When using either Heimdal or MIT
Kerberos from ports, ensure
that the PATH lists the port's versions of
the client applications before the system versions.If all the computers in the realm do not have
synchronized time settings, authentication may fail.
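One common way to keep clocks in sync, sketched here on the assumption that the base system ntpd is acceptable, is to enable it in /etc/rc.conf on the KDC and on every client and server:
ntpd_enable="YES"
ntpd_sync_on_start="YES"  # step the clock at boot even if it is far off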
The NTP section of this Handbook describes how to synchronize clocks.If the hostname is changed, the host/ principal must be
changed and the keytab updated. This also applies to
special keytab entries like the HTTP/ principal used for
Apache's www/mod_auth_kerb.All hosts in the realm must be both forward and
reverse resolvable in DNS or, at a
minimum, exist in /etc/hosts. CNAMEs
will work, but the A and PTR records must be correct and
in place. The error message for unresolvable hosts is not
intuitive: Kerberos5 refuses authentication
because Read req failed: Key table entry not
found.Some operating systems that act as clients to the
KDC do not set the permissions for
ksu to be setuid root. This means that
ksu does not work. This is a
permissions problem, not a KDC
error.With MIT
Kerberos, to allow a principal
to have a ticket life longer than the default lifetime of
ten hours, use modify_principal at the
&man.kadmin.8; prompt to change the
maxlife of both the principal in
question and the
krbtgt
principal. The principal can then use
kinit -l to request a ticket with a
longer lifetime.When running a packet sniffer on the
KDC to aid in troubleshooting while
running kinit from a workstation, the
Ticket Granting Ticket (TGT) is sent
immediately, even before the password is typed. This is
because the Kerberos server
freely transmits a TGT to any
unauthorized request. However, every
TGT is encrypted in a key derived from
the user's password. When a user types their password, it
is not sent to the KDC, it is instead
used to decrypt the TGT that
kinit already obtained. If the
decryption process results in a valid ticket with a valid
time stamp, the user has valid
Kerberos credentials. These
credentials include a session key for establishing secure
communications with the
Kerberos server in the future,
as well as the actual TGT, which is
encrypted with the Kerberos
server's own key. This second layer of encryption allows
the Kerberos server to verify
the authenticity of each TGT.Host principals can have a longer ticket lifetime. If
the user principal has a lifetime of a week but the host
being connected to has a lifetime of nine hours, the user
cache will have an expired host principal and the ticket
cache will not work as expected.When setting up krb5.dict to
prevent specific bad passwords from being used as
described in &man.kadmind.8;, remember that it only
applies to principals that have a password policy assigned
to them. The format used in
krb5.dict is one string per line.
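For example, a krb5.dict containing the following hypothetical entries would forbid exactly those strings as passwords:
password
letmein
changeme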
Creating a symbolic link to
/usr/share/dict/words might be
useful.Mitigating Kerberos
LimitationsKerberos5limitations and shortcomingsSince Kerberos is an all or
nothing approach, every service enabled on the network must
either be modified to work with
Kerberos or be otherwise secured
against network attacks. This is to prevent user credentials
from being stolen and re-used. An example is when
Kerberos is enabled on all remote
shells but the non-Kerberized POP3 mail
server sends passwords in plain text.The KDC is a single point of failure.
By design, the KDC must be as secure as its
master password database. The KDC should
have absolutely no other services running on it and should be
physically secure. The danger is high because
Kerberos stores all passwords
encrypted with the same master key which is stored as a file
on the KDC.A compromised master key is not quite as bad as one might
fear. The master key is only used to encrypt the
Kerberos database and as a seed for
the random number generator. As long as access to the
KDC is secure, an attacker cannot do much
with the master key.If the KDC is unavailable, network
services are unusable as authentication cannot be performed.
This can be alleviated with a single master
KDC and one or more slaves, and with
careful implementation of secondary or fall-back
authentication using PAM.Kerberos allows users, hosts
and services to authenticate between themselves. It does not
have a mechanism to authenticate the
KDC to the users, hosts, or services. This
means that a trojanned kinit could record
all user names and passwords. File system integrity checking
tools like security/tripwire can
alleviate this.Resources and Further InformationKerberos5external resources
The Kerberos
FAQDesigning
an Authentication System: a Dialog in Four
ScenesRFC
4120, The Kerberos Network
Authentication Service (V5)MIT
Kerberos home
pageHeimdal
Kerberos home
pageOpenSSLTomRhodesWritten
by securityOpenSSLOpenSSL is an open source
implementation of the SSL and
TLS protocols. It provides an encryption
transport layer on top of the normal communications layer,
allowing it to be intertwined with many network applications and
services.The version of OpenSSL included
in &os; supports the Secure Sockets Layer v2/v3 (SSLv2/SSLv3)
and Transport Layer Security v1 (TLSv1) network security
protocols and can be used as a general cryptographic
library.OpenSSL is often used to encrypt
authentication of mail clients and to secure web based
transactions such as credit card payments. Some ports, such as
www/apache24 and
databases/postgresql91-server, include a
compile option for building with
OpenSSL.&os; provides two versions of
OpenSSL: one in the base system and
one in the Ports Collection. Users can choose which version to
use by default for other ports using the following knobs:WITH_OPENSSL_PORT: when set, the port will use
OpenSSL from the
security/openssl port, even if the
version in the base system is up to date or newer.WITH_OPENSSL_BASE: when set, the port will compile
against OpenSSL provided by the
base system.Another common use of OpenSSL is
to provide certificates for use with software applications.
Certificates can be used to verify the credentials of a company
or individual. If a certificate has not been signed by an
external Certificate Authority
(CA), such as http://www.verisign.com,
the application that uses the certificate will produce a
warning. There is a cost associated with obtaining a signed
certificate and using a signed certificate is not mandatory as
certificates can be self-signed. However, using an external
authority will prevent warnings and can put users at
ease.This section demonstrates how to create and use certificates
on a &os; system. An example of how to create a CA for signing one's own certificates is given elsewhere in this Handbook.For more information about SSL, read the
free OpenSSL
Cookbook.Generating CertificatesOpenSSLcertificate generationTo generate a certificate that will be signed by an
external CA, issue the following command
and input the information requested at the prompts. This
input information will be written to the certificate. At the
Common Name prompt, input the fully
qualified name for the system that will use the certificate.
If this name does not match the server, the application
verifying the certificate will issue a warning to the user,
rendering the verification provided by the certificate as
useless.&prompt.root; openssl req -new -nodes -out req.pem -keyout cert.key -sha256 -newkey rsa:2048
Generating a 2048 bit RSA private key
..................+++
.............................................................+++
writing new private key to 'cert.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:PA
Locality Name (eg, city) []:Pittsburgh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:Systems Administrator
Common Name (eg, YOUR name) []:localhost.example.org
Email Address []:trhodes@FreeBSD.org
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:Another NameOther options, such as the expire time and alternate
encryption algorithms, are available when creating a
certificate. A complete list of options is described in
&man.openssl.1;.This command will create two files in the current
directory. The certificate request,
req.pem, can be sent to a
CA who will validate the entered
credentials, sign the request, and return the signed
certificate. The second file,
cert.key, is the private key for the
certificate and should be stored in a secure location. If
this falls in the hands of others, it can be used to
impersonate the user or the server.Alternately, if a signature from a CA
is not required, a self-signed certificate can be created.
First, generate the RSA key:&prompt.root; openssl genrsa -rand -genkey -out cert.key 2048
0 semi-random bytes loaded
Generating RSA private key, 2048 bit long modulus
.............................................+++
.................................................................................................................+++
e is 65537 (0x10001)Use this key to create a self-signed certificate.
Follow the usual prompts for creating a certificate:&prompt.root; openssl req -new -x509 -days 365 -key cert.key -out cert.crt -sha256
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:PA
Locality Name (eg, city) []:Pittsburgh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:Systems Administrator
Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org
Email Address []:trhodes@FreeBSD.orgThis will create two new files in the current directory: a
private key file
cert.key, and the certificate itself,
cert.crt. These should be placed in a
directory, preferably under /etc/ssl/,
which is readable only by root. Permissions of
0700 are appropriate for these files and
can be set using chmod.Using CertificatesOne use for a certificate is to encrypt connections to the
Sendmail mail server in order to
prevent the use of clear text authentication.Some mail clients will display an error if the user has
not installed a local copy of the certificate. Refer to the
documentation included with the software for more
information on certificate installation.In &os; 10.0-RELEASE and above, it is possible to create a
self-signed certificate for
Sendmail automatically. To enable
this, add the following lines to
/etc/rc.conf:sendmail_enable="YES"
sendmail_cert_create="YES"
sendmail_cert_cn="localhost.example.org"This will automatically create a self-signed certificate,
/etc/mail/certs/host.cert, a signing key,
/etc/mail/certs/host.key, and a
CA certificate,
/etc/mail/certs/cacert.pem. The
certificate will use the Common Name
specified in sendmail_cert_cn. After saving
the edits, restart Sendmail:&prompt.root; service sendmail restartIf all went well, there will be no error messages in
/var/log/maillog. For a simple test,
connect to the mail server's listening port using
telnet:&prompt.root; telnet example.com 25
Trying 192.0.34.166...
Connected to example.com.
Escape character is '^]'.
220 example.com ESMTP Sendmail 8.14.7/8.14.7; Fri, 18 Apr 2014 11:50:32 -0400 (EDT)
ehlo example.com
250-example.com Hello example.com [192.0.34.166], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH LOGIN PLAIN
250-STARTTLS
250-DELIVERBY
250 HELP
quit
221 2.0.0 example.com closing connection
Connection closed by foreign host.If the STARTTLS line appears in the
output, everything is working correctly.VPN over
IPsecNikClaytonnik@FreeBSD.orgWritten by Hiten M.Pandyahmp@FreeBSD.orgWritten by IPsecInternet Protocol Security (IPsec) is a
set of protocols which sit on top of the Internet Protocol
(IP) layer. It allows two or more hosts to
communicate in a secure manner by authenticating and encrypting
each IP packet of a communication session.
The &os; IPsec network stack is based on the
http://www.kame.net/
implementation and supports both IPv4 and
IPv6 sessions.IPsecESPIPsecAHIPsec is comprised of the following
sub-protocols:Encapsulated Security Payload
(ESP): this protocol
protects the IP packet data from third
party interference by encrypting the contents using
symmetric cryptography algorithms such as Blowfish and
3DES.Authentication Header
(AH): this protocol
protects the IP packet header from third
party interference and spoofing by computing a cryptographic
checksum and hashing the IP packet
header fields with a secure hashing function. This is then
followed by an additional header that contains the hash, to
allow the information in the packet to be
authenticated.IP Payload Compression Protocol
(IPComp): this protocol
tries to increase communication performance by compressing
the IP payload in order to reduce the
amount of data sent.These protocols can either be used together or separately,
depending on the environment.VPNvirtual private networkVPNIPsec supports two modes of operation.
The first mode, Transport Mode, protects
communications between two hosts. The second mode,
Tunnel Mode, is used to build virtual
tunnels, commonly known as Virtual Private Networks
(VPNs). Consult &man.ipsec.4; for detailed
information on the IPsec subsystem in
&os;.To add IPsec support to the kernel, add
the following options to the custom kernel configuration file
and rebuild the kernel using the instructions in the kernel configuration chapter of this Handbook:
options IPSEC        #IP security
device  crypto
If IPsec debugging support is desired, the following kernel option should also be added:
options IPSEC_DEBUG  #debug for IP security
The rest of this chapter demonstrates the process of
setting up an IPsec VPN
between a home network and a corporate network. In the example
scenario:Both sites are connected to the Internet through a
gateway that is running &os;.The gateway on each network has at least one external
IP address. In this example, the
corporate LAN's external
IP address is 172.16.5.4 and the home
LAN's external IP
address is 192.168.1.12.The internal addresses of the two networks can be either
public or private IP addresses. However,
the address space must not collide. For example, both
networks cannot use 192.168.1.x. In this
example, the corporate LAN's internal
IP address is 10.246.38.1 and the home
LAN's internal IP
address is 10.0.0.5.Configuring a VPN on &os;TomRhodestrhodes@FreeBSD.orgWritten by To begin, security/ipsec-tools must be
installed from the Ports Collection. This software provides a
number of applications which support the configuration.The next requirement is to create two &man.gif.4;
pseudo-devices which will be used to tunnel packets and allow
both networks to communicate properly. As root, run the following
commands, replacing internal and
external with the real IP
addresses of the internal and external interfaces of the two
gateways:&prompt.root; ifconfig gif0 create
&prompt.root; ifconfig gif0 internal1 internal2
&prompt.root; ifconfig gif0 tunnel external1 external2Verify the setup on each gateway, using
ifconfig. Here is the output from Gateway
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
tunnel inet 172.16.5.4 --> 192.168.1.12
inet6 fe80::2e0:81ff:fe02:5881%gif0 prefixlen 64 scopeid 0x6
inet 10.246.38.1 --> 10.0.0.5 netmask 0xffffff00Here is the output from Gateway 2:gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
tunnel inet 192.168.1.12 --> 172.16.5.4
inet 10.0.0.5 --> 10.246.38.1 netmask 0xffffff00
inet6 fe80::250:bfff:fe3a:c1f%gif0 prefixlen 64 scopeid 0x4Once complete, both internal IP
addresses should be reachable using &man.ping.8;:priv-net# ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: icmp_seq=0 ttl=64 time=42.786 ms
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=19.255 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=20.440 ms
64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=21.036 ms
--- 10.0.0.5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 19.255/25.879/42.786/9.782 ms
corp-net# ping 10.246.38.1
PING 10.246.38.1 (10.246.38.1): 56 data bytes
64 bytes from 10.246.38.1: icmp_seq=0 ttl=64 time=28.106 ms
64 bytes from 10.246.38.1: icmp_seq=1 ttl=64 time=42.917 ms
64 bytes from 10.246.38.1: icmp_seq=2 ttl=64 time=127.525 ms
64 bytes from 10.246.38.1: icmp_seq=3 ttl=64 time=119.896 ms
64 bytes from 10.246.38.1: icmp_seq=4 ttl=64 time=154.524 ms
--- 10.246.38.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 28.106/94.594/154.524/49.814 msAs expected, both sides have the ability to send and
receive ICMP packets from the privately
configured addresses. Next, both gateways must be told how to
route packets in order to correctly send traffic from either
network. The following commands will achieve this
goal:corp-net# route add 10.0.0.0 10.0.0.5 255.255.255.0
corp-net# route add net 10.0.0.0: gateway 10.0.0.5
priv-net# route add 10.246.38.0 10.246.38.1 255.255.255.0
priv-net# route add host 10.246.38.0: gateway 10.246.38.1At this point, internal machines should be reachable from
each gateway as well as from machines behind the gateways.
Again, use &man.ping.8; to confirm:corp-net# ping 10.0.0.8
PING 10.0.0.8 (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: icmp_seq=0 ttl=63 time=92.391 ms
64 bytes from 10.0.0.8: icmp_seq=1 ttl=63 time=21.870 ms
64 bytes from 10.0.0.8: icmp_seq=2 ttl=63 time=198.022 ms
64 bytes from 10.0.0.8: icmp_seq=3 ttl=63 time=22.241 ms
64 bytes from 10.0.0.8: icmp_seq=4 ttl=63 time=174.705 ms
--- 10.0.0.8 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 21.870/101.846/198.022/74.001 ms
priv-net# ping 10.246.38.107
PING 10.246.38.1 (10.246.38.107): 56 data bytes
64 bytes from 10.246.38.107: icmp_seq=0 ttl=64 time=53.491 ms
64 bytes from 10.246.38.107: icmp_seq=1 ttl=64 time=23.395 ms
64 bytes from 10.246.38.107: icmp_seq=2 ttl=64 time=23.865 ms
64 bytes from 10.246.38.107: icmp_seq=3 ttl=64 time=21.145 ms
64 bytes from 10.246.38.107: icmp_seq=4 ttl=64 time=36.708 ms
--- 10.246.38.107 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 21.145/31.721/53.491/12.179 msSetting up the tunnels is the easy part. Configuring a
secure link is a more in-depth process. The following
configuration uses pre-shared (PSK)
RSA keys. Other than the
IP addresses, the
/usr/local/etc/racoon/racoon.conf on both
gateways will be identical and look similar to:path pre_shared_key "/usr/local/etc/racoon/psk.txt"; #location of pre-shared key file
log debug; #log verbosity setting: set to 'notify' when testing and debugging is complete
padding # options are not to be changed
{
maximum_length 20;
randomize off;
strict_check off;
exclusive_tail off;
}
timer # timing options. change as needed
{
counter 5;
interval 20 sec;
persend 1;
# natt_keepalive 15 sec;
phase1 30 sec;
phase2 15 sec;
}
listen # address [port] that racoon will listen on
{
isakmp 172.16.5.4 [500];
isakmp_natt 172.16.5.4 [4500];
}
remote 192.168.1.12 [500]
{
exchange_mode main,aggressive;
doi ipsec_doi;
situation identity_only;
my_identifier address 172.16.5.4;
peers_identifier address 192.168.1.12;
lifetime time 8 hour;
passive off;
proposal_check obey;
# nat_traversal off;
generate_policy off;
proposal {
encryption_algorithm blowfish;
hash_algorithm md5;
authentication_method pre_shared_key;
lifetime time 30 sec;
dh_group 1;
}
}
sainfo (address 10.246.38.0/24 any address 10.0.0.0/24 any) # address $network/$netmask $type address $network/$netmask $type ( $type being any or esp)
{ # $network must be the two internal networks you are joining.
pfs_group 1;
lifetime time 36000 sec;
encryption_algorithm blowfish,3des;
authentication_algorithm hmac_md5,hmac_sha1;
compression_algorithm deflate;
}For descriptions of each available option, refer to the
manual page for racoon.conf.The Security Policy Database (SPD)
needs to be configured so that &os; and
racoon are able to encrypt and
decrypt network traffic between the hosts.This can be achieved with a shell script, similar to the
following, on the corporate gateway. This file will be used
during system initialization and should be saved as
/usr/local/etc/racoon/setkey.conf.flush;
spdflush;
# To the home network
spdadd 10.246.38.0/24 10.0.0.0/24 any -P out ipsec esp/tunnel/172.16.5.4-192.168.1.12/use;
spdadd 10.0.0.0/24 10.246.38.0/24 any -P in ipsec esp/tunnel/192.168.1.12-172.16.5.4/use;Once in place, racoon may be
started on both gateways using the following command:&prompt.root; /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf -l /var/log/racoon.logThe output should be similar to the following:corp-net# /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf
Foreground mode.
2006-01-30 01:35:47: INFO: begin Identity Protection mode.
2006-01-30 01:35:48: INFO: received Vendor ID: KAME/racoon
2006-01-30 01:35:55: INFO: received Vendor ID: KAME/racoon
2006-01-30 01:36:04: INFO: ISAKMP-SA established 172.16.5.4[500]-192.168.1.12[500] spi:623b9b3bd2492452:7deab82d54ff704a
2006-01-30 01:36:05: INFO: initiate new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0]
2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=28496098(0x1b2d0e2)
2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=47784998(0x2d92426)
2006-01-30 01:36:13: INFO: respond new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0]
2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=124397467(0x76a279b)
2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=175852902(0xa7b4d66)To ensure the tunnel is working properly, switch to
another console and use &man.tcpdump.1; to view network
traffic using the following command. Replace
em0 with the network interface card as
required:&prompt.root; tcpdump -i em0 host 172.16.5.4 and dst 192.168.1.12Data similar to the following should appear on the
console. If not, there is an issue and debugging the
returned data will be required.01:47:32.021683 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xa)
01:47:33.022442 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xb)
01:47:34.024218 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xc)At this point, both networks should be available and seem
to be part of the same network. Most likely both networks are
protected by a firewall. To allow traffic to flow between
them, rules need to be added to pass packets. For the
&man.ipfw.8; firewall, add the following lines to the firewall
configuration file:ipfw add 00201 allow log esp from any to any
ipfw add 00202 allow log ah from any to any
ipfw add 00203 allow log ipencap from any to any
ipfw add 00204 allow log udp from any 500 to anyThe rule numbers may need to be altered depending on the
current host configuration.For users of &man.pf.4; or &man.ipf.8;, the following
rules should do the trick:pass in quick proto esp from any to any
pass in quick proto ah from any to any
pass in quick proto ipencap from any to any
pass in quick proto udp from any port = 500 to any port = 500
pass in quick on gif0 from any to any
pass out quick proto esp from any to any
pass out quick proto ah from any to any
pass out quick proto ipencap from any to any
pass out quick proto udp from any port = 500 to any port = 500
pass out quick on gif0 from any to anyFinally, to allow the machine to start support for the
VPN during system initialization, add the
following lines to /etc/rc.conf:ipsec_enable="YES"
ipsec_program="/usr/local/sbin/setkey"
ipsec_file="/usr/local/etc/racoon/setkey.conf" # allows setting up spd policies on boot
racoon_enable="yes"OpenSSHChernLeeContributed
by OpenSSHsecurityOpenSSHOpenSSH is a set of network
connectivity tools used to provide secure access to remote
machines. Additionally, TCP/IP connections
can be tunneled or forwarded securely through
SSH connections.
OpenSSH encrypts all traffic to
effectively eliminate eavesdropping, connection hijacking, and
other network-level attacks.OpenSSH is maintained by the
OpenBSD project and is installed by default in &os;. It is
compatible with both SSH version 1 and 2
protocols.When data is sent over the network in an unencrypted form,
network sniffers anywhere in between the client and server can
steal user/password information or data transferred during the
session. OpenSSH offers a variety of
authentication and encryption methods to prevent this from
happening. More information about
OpenSSH is available from http://www.openssh.com/.This section provides an overview of the built-in client
utilities to securely access other systems and securely transfer
files from a &os; system. It then describes how to configure a
SSH server on a &os; system. More
information is available in the man pages mentioned in this
chapter.Using the SSH Client UtilitiesOpenSSHclientTo log into a SSH server, use
ssh and specify a username that exists on
that server and the IP address or hostname
of the server. If this is the first time a connection has
been made to the specified server, the user will be prompted
to first verify the server's fingerprint:&prompt.root; ssh user@example.com
The authenticity of host 'example.com (10.0.0.1)' can't be established.
ECDSA key fingerprint is 25:cc:73:b5:b3:96:75:3d:56:19:49:d2:5c:1f:91:3b.
Are you sure you want to continue connecting (yes/no)? yes
Permanently added 'example.com' (ECDSA) to the list of known hosts.
Password for user@example.com: user_passwordSSH utilizes a key fingerprint system
to verify the authenticity of the server when the client
connects. When the user accepts the key's fingerprint by
typing yes when connecting for the first
time, a copy of the key is saved to
.ssh/known_hosts in the user's home
directory. Future attempts to login are verified against the
saved key and ssh will display an alert if
the server's key does not match the saved key. If this
occurs, the user should first verify why the key has changed
before continuing with the connection.By default, recent versions of
OpenSSH only accept
SSHv2 connections. The client
will use version 2 if possible and will fall back to version 1
if the server does not support version 2. To force
ssh to only use the specified protocol,
include -1 or -2.
Additional options are described in &man.ssh.1;.OpenSSHsecure copy&man.scp.1;Use &man.scp.1; to securely copy a file to or from a
remote machine. This example copies
COPYRIGHT on the remote system to a file
of the same name in the current directory of the local
system:&prompt.root; scp user@example.com:/COPYRIGHT COPYRIGHT
Password for user@example.com: *******
COPYRIGHT 100% |*****************************| 4735
00:00
&prompt.root;Since the fingerprint was already verified for this host,
the server's key is automatically checked before prompting for
the user's password.The arguments passed to scp are similar
to cp. The file or files to copy is the
first argument and the destination to copy to is the second.
Since the file is fetched over the network, one or more of the
file arguments takes the form user@host:path_to_remote_file. Be aware when copying directories recursively that scp uses -r, whereas cp uses -R.To open an interactive session for copying files, use
sftp. Refer to &man.sftp.1; for a list of
available commands while in an sftp
session.Key-based AuthenticationInstead of using passwords, a client can be configured
to connect to the remote machine using keys. To generate
RSA
authentication keys, use ssh-keygen. To
generate a public and private key pair, specify the type of
key and follow the prompts. It is recommended to protect
the keys with a memorable, but hard to guess
passphrase.&prompt.user; ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:54Xm9Uvtv6H4NOo6yjP/YCfODryvUU7yWHzMqeXwhq8 user@host.example.com
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . o.. |
| .S*+*o |
| . O=Oo . . |
| = Oo= oo..|
| .oB.* +.oo.|
| =OE**.o..=|
+----[SHA256]-----+Type a passphrase here. It can contain spaces and
symbols.Retype the passphrase to verify it.The private key
is stored in ~/.ssh/id_rsa
and the public key
is stored in ~/.ssh/id_rsa.pub.
The
public key must be copied to
~/.ssh/authorized_keys on the remote
machine for key-based authentication to
work.Many users believe that keys are secure by design and
will use a key without a passphrase. This is
dangerous behavior. An
administrator can verify that a key pair is protected by a
passphrase by viewing the private key manually. If the
private key file contains the word
ENCRYPTED, the key owner is using a
passphrase. In addition, to better secure end users,
a from option may be placed in the public key
file. For example, adding
from="192.168.10.5" in front of the
ssh-rsa
prefix will only allow that specific user to log in from
that IP address.The options and files vary with different versions of
OpenSSH.
To avoid problems, consult &man.ssh-keygen.1;.If a passphrase is used, the user is prompted for
the passphrase each time a connection is made to the server.
To load SSH keys into memory and remove
the need to type the passphrase each time, use
&man.ssh-agent.1; and &man.ssh-add.1;.Authentication is handled by
ssh-agent, using the private keys that
are loaded into it. ssh-agent
can be used to launch another application like a
shell or a window manager.To use ssh-agent in a shell, start it
with a shell as an argument. Add the identity by
running ssh-add and entering the
passphrase for the private key.
The user will then be able to ssh
to any host that has the corresponding public key installed.
For example:&prompt.user; ssh-agent csh
&prompt.user; ssh-add
Enter passphrase for key '/usr/home/user/.ssh/id_rsa':
Identity added: /usr/home/user/.ssh/id_rsa (/usr/home/user/.ssh/id_rsa)
&prompt.user;Enter the passphrase for the key.To use ssh-agent in
&xorg;, add an entry for it in
~/.xinitrc. This provides the
ssh-agent services to all programs
launched in &xorg;. An example
~/.xinitrc might look like this:exec ssh-agent startxfce4This launches ssh-agent, which in
turn launches XFCE, every time
&xorg; starts. Once
&xorg; has been restarted so that
the changes can take effect, run ssh-add
to load all of the SSH keys.SSH TunnelingOpenSSHtunnelingOpenSSH has the ability to
create a tunnel to encapsulate another protocol in an
encrypted session.The following command tells ssh to
create a tunnel for
telnet:&prompt.user; ssh -2 -N -f -L 5023:localhost:23 user@foo.example.com
&prompt.user;This example uses the following options:
-2: Forces ssh to use version 2 to connect to the server.
-N: Indicates no command, or tunnel only. If omitted, ssh initiates a normal session.
-f: Forces ssh to run in the background.
-L: Indicates a local tunnel in localport:remotehost:remoteport format.
user@foo.example.com: The login name to use on the specified remote SSH server.
An SSH tunnel works by creating a
listen socket on localhost on the
specified localport. It then forwards
any connections received on localport via
the SSH connection to the specified
remotehost:remoteport. In the example,
port 5023 on the client is forwarded to
port 23 on the remote machine. Since
port 23 is used by telnet, this
creates an encrypted telnet
session through an SSH tunnel.This method can be used to wrap any number of insecure
TCP protocols such as
SMTP, POP3, and
FTP, as seen in the following
examples.Create a Secure Tunnel for
SMTP&prompt.user; ssh -2 -N -f -L 5025:localhost:25 user@mailserver.example.com
user@mailserver.example.com's password: *****
&prompt.user; telnet localhost 5025
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 mailserver.example.com ESMTPThis can be used in conjunction with
ssh-keygen and additional user accounts
to create a more seamless SSH tunneling
environment. Keys can be used in place of typing a
password, and the tunnels can be run as a separate
user.Secure Access of a POP3
ServerIn this example, there is an SSH
server that accepts connections from the outside. On the
same network resides a mail server running a
POP3 server. To check email in a
secure manner, create an SSH connection
to the SSH server and tunnel through to
the mail server:&prompt.user; ssh -2 -N -f -L 2110:mail.example.com:110 user@ssh-server.example.com
user@ssh-server.example.com's password: ******Once the tunnel is up and running, point the email
client to send POP3 requests to
localhost on port 2110. This
connection will be forwarded securely across the tunnel to
mail.example.com.Bypassing a FirewallSome firewalls
filter both incoming and outgoing connections. For
example, a firewall might limit access from remote
machines to ports 22 and 80 to only allow
SSH and web surfing. This prevents
access to any other service which uses a port other than
22 or 80.The solution is to create an SSH
connection to a machine outside of the network's firewall
and use it to tunnel to the desired service:&prompt.user; ssh -2 -N -f -L 8888:music.example.com:8000 user@unfirewalled-system.example.org
user@unfirewalled-system.example.org's password: *******In this example, a streaming Ogg Vorbis client can now
be pointed to localhost port
8888, which will be forwarded over to
music.example.com on port 8000,
successfully bypassing the firewall.Enabling the SSH ServerOpenSSHenablingIn addition to providing built-in SSH
client utilities, a &os; system can be configured as an
SSH server, accepting connections from
other SSH clients.To see if sshd is operating,
use the &man.service.8; command:&prompt.root; service sshd statusIf the service is not running, add the following line to
/etc/rc.conf.sshd_enable="YES"This will start sshd, the
daemon program for OpenSSH, the
next time the system boots. To start it now:&prompt.root; service sshd startThe first time sshd starts on a
&os; system, the system's host keys will be automatically
created and the fingerprint will be displayed on the console.
Provide users with the fingerprint so that they can verify it
the first time they connect to the server.Refer to &man.sshd.8; for the list of available options
when starting sshd and a more
complete discussion about authentication, the login process,
and the various configuration files.At this point, the sshd should
be available to all users with a username and password on
the system.SSH Server SecurityWhile sshd is the most widely
used remote administration facility for &os;, brute force
and drive by attacks are common to any system exposed to
public networks. Several additional parameters are available
to prevent the success of these attacks and will be described
in this section.It is a good idea to limit which users can log into the
SSH server and from where using the
AllowUsers keyword in the
OpenSSH server configuration file.
For example, to only allow root to log in from
192.168.1.32, add
this line to /etc/ssh/sshd_config:AllowUsers root@192.168.1.32To allow admin
to log in from anywhere, list that user without specifying an
IP address:AllowUsers adminMultiple users should be listed on the same line, like
so:AllowUsers root@192.168.1.32 adminAfter making changes to
/etc/ssh/sshd_config,
tell sshd to reload its
configuration file by running:&prompt.root; service sshd reloadWhen this keyword is used, it is important to list each
user that needs to log into this machine. Any user that is
not specified in that line will be locked out. Also, the
keywords used in the OpenSSH
server configuration file are case-sensitive. If the
keyword is not spelled correctly, including its case, it
will be ignored. Always test changes to this file to make
sure that the edits are working as expected. Refer to
&man.sshd.config.5; to verify the spelling and use of the
available keywords.In addition, users may be forced to use two factor
authentication via the use of a public and private key. When
required, the user may generate a key pair through the use
of &man.ssh-keygen.1; and send the administrator the public
key. This key file will be placed in the
authorized_keys as described above in
the client section. To force the users to use keys only,
the following option may be configured:AuthenticationMethods publickeyDo not confuse /etc/ssh/sshd_config
with /etc/ssh/ssh_config (note the
extra d in the first filename). The
first file configures the server and the second file
configures the client. Refer to &man.ssh.config.5; for a
listing of the available client settings.Access Control ListsTomRhodesContributed
by ACLAccess Control Lists (ACLs) extend the
standard &unix; permission model in a &posix;.1e compatible way.
This permits an administrator to take advantage of a more
fine-grained permissions model.The &os; GENERIC kernel provides
ACL support for UFS file
systems. Users who prefer to compile a custom kernel must
include the following option in their custom kernel
configuration file:options UFS_ACLIf this option is not compiled in, a warning message will be
displayed when attempting to mount a file system with
ACL support. ACLs rely on
extended attributes which are natively supported in
UFS2.This chapter describes how to enable
ACL support and provides some usage
examples.Enabling ACL SupportACLs are enabled by the mount-time
administrative flag, acls, which may be added
to /etc/fstab. The mount-time flag can
also be automatically set in a persistent manner using
&man.tunefs.8; to modify a superblock ACLs
flag in the file system header. In general, it is preferred
to use the superblock flag for several reasons:The superblock flag cannot be changed by a remount
using mount -u as it requires a complete
umount and fresh
mount. This means that
ACLs cannot be enabled on the root file
system after boot. It also means that
ACL support on a file system cannot be
changed while the system is in use.Setting the superblock flag causes the file system to
always be mounted with ACLs enabled,
even if there is not an fstab entry
or if the devices re-order. This prevents accidental
mounting of the file system without ACL
support.It is desirable to discourage accidental mounting
without ACLs enabled because nasty things
can happen if ACLs are enabled, then
disabled, then re-enabled without flushing the extended
attributes. In general, once ACLs are
enabled on a file system, they should not be disabled, as
the resulting file protections may not be compatible with
those intended by the users of the system, and re-enabling
ACLs may re-attach the previous
ACLs to files that have since had their
permissions changed, resulting in unpredictable
behavior.File systems with ACLs enabled will
show a plus (+) sign in their permission
settings:drwx------ 2 robert robert 512 Dec 27 11:54 private
drwxrwx---+ 2 robert robert 512 Dec 23 10:57 directory1
drwxrwx---+ 2 robert robert 512 Dec 22 10:20 directory2
drwxrwx---+ 2 robert robert 512 Dec 27 11:57 directory3
drwxr-xr-x 2 robert robert 512 Nov 10 11:54 public_htmlIn this example, directory1,
directory2, and
directory3 are all taking advantage of
ACLs, whereas
public_html is not.Using ACLsFile system ACLs can be viewed using
getfacl. For instance, to view the
ACL settings on
test:&prompt.user; getfacl test
#file:test
#owner:1001
#group:1001
user::rw-
group::r--
other::r--To change the ACL settings on this
file, use setfacl. To remove all of the
currently defined ACLs from a file or file
system, include -b. The -k flag used in this example
deletes only the default entries and leaves the basic
fields required for ACLs to work.&prompt.user; setfacl -k testTo modify the ACL entries, use
-m:&prompt.user; setfacl -m u:trhodes:rwx,group:web:r--,o::--- testIn this example, there were no pre-defined entries, as
they were removed by the previous command. This command
restores the default options and assigns the options listed.
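To confirm the new entries, getfacl can be run on the file again; a plus (+) sign will now also appear in the file's long listing, as described earlier:&prompt.user; getfacl test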
If a user or group is added which does not exist on the
system, an Invalid argument error will
be displayed.Refer to &man.getfacl.1; and &man.setfacl.1; for more
information about the options available for these
commands.Monitoring Third Party Security IssuesTomRhodesContributed
by pkgIn recent years, the security world has made many
improvements to how vulnerability assessment is handled. The
threat of system intrusion increases as third party utilities
are installed and configured for virtually any operating
system available today.Vulnerability assessment is a key factor in security.
While &os; releases advisories for the base system, doing so
for every third party utility is beyond the &os; Project's
capability. There is a way to mitigate third party
vulnerabilities and warn administrators of known security
issues. A &os; add-on utility known as
pkg includes options explicitly for
this purpose.pkg polls a database for security
issues. The database is updated and maintained by the &os;
Security Team and ports developers.Please refer to instructions
for installing
pkg.Installation provides &man.periodic.8; configuration files
for maintaining the pkg audit
database, and provides a programmatic method of keeping it
updated. This functionality is enabled if
daily_status_security_pkgaudit_enable
is set to YES in &man.periodic.conf.5;.
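For example, assuming it has not already been enabled elsewhere, this line could be added to /etc/periodic.conf:daily_status_security_pkgaudit_enable="YES"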
Ensure that daily security run emails, which are sent to
root's email account,
are being read.After installation, and to audit third party utilities as
part of the Ports Collection at any time, an administrator may
choose to update the database and view known vulnerabilities
of installed packages by invoking:&prompt.root; pkg audit -Fpkg displays messages for
any published vulnerabilities in installed packages:Affected package: cups-base-1.1.22.0_1
Type of problem: cups-base -- HPGL buffer overflow vulnerability.
Reference: <http://www.FreeBSD.org/ports/portaudit/40a3bca2-6809-11d9-a9e7-0001020eed82.html>
1 problem(s) in your installed packages found.
You are advised to update or deinstall the affected package(s) immediately.By pointing a web browser to the displayed
URL, an administrator may obtain more
information about the vulnerability. This will include the
versions affected, by &os; port version, along with other web
sites which may contain security advisories.pkg is a powerful utility
and is extremely useful when coupled with
ports-mgmt/portmaster.&os; Security AdvisoriesTomRhodesContributed
by &os; Security AdvisoriesLike many producers of quality operating systems, the &os;
Project has a security team which is responsible for
determining the End-of-Life (EoL) date for
each &os; release and to provide security updates for supported
releases which have not yet reached their
EoL. More information about the &os;
security team and the supported releases is available on the
&os; security
page.One task of the security team is to respond to reported
security vulnerabilities in the &os; operating system. Once a
vulnerability is confirmed, the security team verifies the steps
necessary to fix the vulnerability and updates the source code
with the fix. It then publishes the details as a
Security Advisory. Security
advisories are published on the &os;
website and mailed to the
&a.security-notifications.name;, &a.security.name;, and
&a.announce.name; mailing lists.This section describes the format of a &os; security
advisory.Format of a Security AdvisoryHere is an example of a &os; security advisory:=============================================================================
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
=============================================================================
FreeBSD-SA-14:04.bind Security Advisory
The FreeBSD Project
Topic: BIND remote denial of service vulnerability
Category: contrib
Module: bind
Announced: 2014-01-14
Credits: ISC
Affects: FreeBSD 8.x and FreeBSD 9.x
Corrected: 2014-01-14 19:38:37 UTC (stable/9, 9.2-STABLE)
2014-01-14 19:42:28 UTC (releng/9.2, 9.2-RELEASE-p3)
2014-01-14 19:42:28 UTC (releng/9.1, 9.1-RELEASE-p10)
2014-01-14 19:38:37 UTC (stable/8, 8.4-STABLE)
2014-01-14 19:42:28 UTC (releng/8.4, 8.4-RELEASE-p7)
2014-01-14 19:42:28 UTC (releng/8.3, 8.3-RELEASE-p14)
CVE Name: CVE-2014-0591
For general information regarding FreeBSD Security Advisories,
including descriptions of the fields above, security branches, and the
following sections, please visit <URL:http://security.FreeBSD.org/>.
I. Background
BIND 9 is an implementation of the Domain Name System (DNS) protocols.
The named(8) daemon is an Internet Domain Name Server.
II. Problem Description
Because of a defect in handling queries for NSEC3-signed zones, BIND can
crash with an "INSIST" failure in name.c when processing queries possessing
certain properties. This issue only affects authoritative nameservers with
at least one NSEC3-signed zone. Recursive-only servers are not at risk.
III. Impact
An attacker who can send a specially crafted query could cause named(8)
to crash, resulting in a denial of service.
IV. Workaround
No workaround is available, but systems not running authoritative DNS service
with at least one NSEC3-signed zone using named(8) are not vulnerable.
V. Solution
Perform one of the following:
1) Upgrade your vulnerable system to a supported FreeBSD stable or
release / security branch (releng) dated after the correction date.
2) To update your vulnerable system via a source code patch:
The following patches have been verified to apply to the applicable
FreeBSD release branches.
a) Download the relevant patch from the location below, and verify the
detached PGP signature using your PGP utility.
[FreeBSD 8.3, 8.4, 9.1, 9.2-RELEASE and 8.4-STABLE]
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch.asc
# gpg --verify bind-release.patch.asc
[FreeBSD 9.2-STABLE]
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch.asc
# gpg --verify bind-stable-9.patch.asc
b) Execute the following commands as root:
# cd /usr/src
# patch < /path/to/patch
Recompile the operating system using buildworld and installworld as
described in <URL:http://www.FreeBSD.org/handbook/makeworld.html>.
Restart the applicable daemons, or reboot the system.
3) To update your vulnerable system via a binary patch:
Systems running a RELEASE version of FreeBSD on the i386 or amd64
platforms can be updated via the freebsd-update(8) utility:
# freebsd-update fetch
# freebsd-update install
VI. Correction details
The following list contains the correction revision numbers for each
affected branch.
Branch/path Revision
- -------------------------------------------------------------------------
stable/8/ r260646
releng/8.3/ r260647
releng/8.4/ r260647
stable/9/ r260646
releng/9.1/ r260647
releng/9.2/ r260647
- -------------------------------------------------------------------------
To see which files were modified by a particular revision, run the
following command, replacing NNNNNN with the revision number, on a
machine with Subversion installed:
# svn diff -cNNNNNN --summarize svn://svn.freebsd.org/base
Or visit the following URL, replacing NNNNNN with the revision number:
<URL:http://svnweb.freebsd.org/base?view=revision&revision=NNNNNN>
VII. References
<URL:https://kb.isc.org/article/AA-01078>
<URL:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0591>
The latest revision of this advisory is available at
<URL:http://security.FreeBSD.org/advisories/FreeBSD-SA-14:04.bind.asc>
-----BEGIN PGP SIGNATURE-----
iQIcBAEBCgAGBQJS1ZTYAAoJEO1n7NZdz2rnOvQP/2/68/s9Cu35PmqNtSZVVxVG
ZSQP5EGWx/lramNf9566iKxOrLRMq/h3XWcC4goVd+gZFrvITJSVOWSa7ntDQ7TO
XcinfRZ/iyiJbs/Rg2wLHc/t5oVSyeouyccqODYFbOwOlk35JjOTMUG1YcX+Zasg
ax8RV+7Zt1QSBkMlOz/myBLXUjlTZ3Xg2FXVsfFQW5/g2CjuHpRSFx1bVNX6ysoG
9DT58EQcYxIS8WfkHRbbXKh9I1nSfZ7/Hky/kTafRdRMrjAgbqFgHkYTYsBZeav5
fYWKGQRJulYfeZQ90yMTvlpF42DjCC3uJYamJnwDIu8OhS1WRBI8fQfr9DRzmRua
OK3BK9hUiScDZOJB6OqeVzUTfe7MAA4/UwrDtTYQ+PqAenv1PK8DZqwXyxA9ThHb
zKO3OwuKOVHJnKvpOcr+eNwo7jbnHlis0oBksj/mrq2P9m2ueF9gzCiq5Ri5Syag
Wssb1HUoMGwqU0roS8+pRpNC8YgsWpsttvUWSZ8u6Vj/FLeHpiV3mYXPVMaKRhVm
067BA2uj4Th1JKtGleox+Em0R7OFbCc/9aWC67wiqI6KRyit9pYiF3npph+7D5Eq
7zPsUdDd+qc+UTiLp3liCRp5w6484wWdhZO6wRtmUgxGjNkxFoNnX8CitzF8AaqO
UWWemqWuz3lAZuORQ9KX
=OQzQ
-----END PGP SIGNATURE-----Every security advisory uses the following format:Each security advisory is signed by the
PGP key of the Security Officer. The
public key for the Security Officer can be verified at
.The name of the security advisory always begins with
FreeBSD-SA- (for FreeBSD Security
Advisory), followed by the year in two digit format
(14:), followed by the advisory number
for that year (04.), followed by the
name of the affected application or subsystem
(bind). The advisory shown here is the
fourth advisory for 2014 and it affects
BIND.The Topic field summarizes the
vulnerability.The Category refers to the
affected part of the system which may be one of
core, contrib, or
ports. The core
category means that the vulnerability affects a core
component of the &os; operating system. The
contrib category means that the
vulnerability affects software included with &os;,
such as BIND. The
ports category indicates that the
vulnerability affects software available through the Ports
Collection.The Module field refers to the
component location. In this example, the
bind module is affected; therefore,
this vulnerability affects an application installed with
the operating system.The Announced field reflects the
date the security advisory was published. This means
that the security team has verified that the problem
exists and that a patch has been committed to the &os;
source code repository.The Credits field gives credit to
the individual or organization who noticed the
vulnerability and reported it.The Affects field explains which
releases of &os; are affected by this
vulnerability.The Corrected field indicates the
date, time, time offset, and releases that were
corrected. The section in parentheses shows each branch
for which the fix has been merged, and the version number
of the corresponding release from that branch. The
release identifier itself includes the version number
and, if appropriate, the patch level. The patch level is
the letter p followed by a number,
indicating the sequence number of the patch, allowing
users to track which patches have already been applied to
the system.The CVE Name field lists the
advisory number, if one exists, in the public cve.mitre.org
security vulnerabilities database.The Background field provides a
description of the affected module.The Problem Description field
explains the vulnerability. This can include
information about the flawed code and how the utility
could be maliciously used.The Impact field describes what
type of impact the problem could have on a system.The Workaround field indicates if
a workaround is available to system administrators who
cannot immediately patch the system.The Solution field provides the
instructions for patching the affected system. This is a
step-by-step tested and verified method for getting a
system patched and working securely.The Correction Details field
displays each affected Subversion branch with the revision
number that contains the corrected code.The References field offers sources
of additional information regarding the
vulnerability.Process AccountingTomRhodesContributed
by Process AccountingProcess accounting is a security method in which an
administrator may keep track of system resources used and
their allocation among users, provide for system monitoring,
and minimally track a user's commands.Process accounting has both positive and negative points.
One of the positives is that an intrusion may be narrowed down
to the point of entry. A negative is the amount of logs
generated by process accounting, and the disk space they may
require. This section walks an administrator through the basics
of process accounting.If more fine-grained accounting is needed, refer to
.Enabling and Utilizing Process AccountingBefore using process accounting, it must be enabled using
the following commands:&prompt.root; touch /var/account/acct
&prompt.root; chmod 600 /var/account/acct
&prompt.root; accton /var/account/acct
&prompt.root; echo 'accounting_enable="YES"' >> /etc/rc.confOnce enabled, accounting will begin to track information
such as CPU statistics and executed
commands. All accounting logs are in a non-human readable
format which can be viewed using sa. If
issued without any options, sa prints
information relating to the number of per-user calls, the
total elapsed time in minutes, total CPU
and user time in minutes, and the average number of
I/O operations. Refer to &man.sa.8; for
the list of available options which control the output.To display the commands issued by users, use
lastcomm. For example, this command
prints out all usage of ls by trhodes on the
ttyp1 terminal:&prompt.root; lastcomm ls trhodes ttyp1Many other useful options exist and are explained in
&man.lastcomm.1;, &man.acct.5;, and &man.sa.8;.Resource LimitsTomRhodesContributed
by Resource limits&os; provides several methods for an administrator to
limit the amount of system resources an individual may use.
Disk quotas limit the amount of disk space available to users.
Quotas are discussed in .quotaslimiting usersquotasdisk quotasLimits to other resources, such as CPU
and memory, can be set using either a flat file or a command to
configure a resource limits database. The traditional method
defines login classes by editing
/etc/login.conf. While this method is
still supported, any changes require a multi-step process of
editing this file, rebuilding the resource database, making
necessary changes to /etc/master.passwd,
and rebuilding the password database. This can become time
consuming, depending upon the number of users to
configure.Beginning with &os; 9.0-RELEASE,
rctl can be used to provide a more
fine-grained method for controlling resource limits. This
command supports more than user limits as it can also be used to
set resource constraints on processes and jails.This section demonstrates both methods for controlling
resources, beginning with the traditional method.Configuring Login Classeslimiting usersaccountslimiting/etc/login.confIn the traditional method, login classes and the resource
limits to apply to a login class are defined in
/etc/login.conf. Each user account can
be assigned to a login class, where default
is the default login class. Each login class has a set of
login capabilities associated with it. A login capability is
a
name=value
pair, where name is a well-known
identifier and value is an
arbitrary string which is processed accordingly depending on
the name.Whenever /etc/login.conf is edited,
the /etc/login.conf.db database must be updated
by executing the following command:&prompt.root; cap_mkdb /etc/login.confResource limits differ from the default login capabilities
in two ways. First, for every limit, there is a
soft and hard
limit. A soft limit may be adjusted by the user or
application, but may not be set higher than the hard limit.
The hard limit may be lowered by the user, but can only be
raised by the superuser. Second, most resource limits apply
per process to a specific user. lists the most commonly
used resource limits. All of the available resource limits
and capabilities are described in detail in
&man.login.conf.5;.limiting userscoredumpsizelimiting userscputimelimiting usersfilesizelimiting usersmaxproclimiting usersmemorylockedlimiting usersmemoryuselimiting usersopenfileslimiting userssbsizelimiting usersstacksize
Login Class Resource LimitsResource LimitDescriptioncoredumpsizeThe limit on the size of a core file generated by
a program is subordinate to other limits on disk
usage, such as filesize or disk
quotas. This limit is often used as a less severe
method of controlling disk space consumption. Since
users do not generate core files themselves and often do not
delete them, this setting may save them from running
out of disk space should a large program
crash.cputimeThe maximum amount of CPU time
a user's process may consume. Offending processes
will be killed by the kernel. This is a limit on
CPU time
consumed, not the percentage of the
CPU as displayed in some of the
fields generated by top and
ps.filesizeThe maximum size of a file the user may own.
Unlike disk quotas (), this
limit is enforced on individual files, not the set of
all files a user owns.maxprocThe maximum number of foreground and background
processes a user can run. This limit may not be
larger than the system limit specified by
kern.maxproc. Setting this limit
too small may hinder a user's productivity as some
tasks, such as compiling a large program, start lots
of processes.memorylockedThe maximum amount of memory a process may
request to be locked into main memory using
&man.mlock.2;. Some system-critical programs, such as
&man.amd.8;, lock into main memory so that if the
system begins to swap, they do not contribute to disk
thrashing.memoryuseThe maximum amount of memory a process may
consume at any given time. It includes both core
memory and swap usage. This is not a catch-all limit
for restricting memory consumption, but is a good
start.openfilesThe maximum number of files a process may have
open. In &os;, files are used to represent sockets
and IPC channels, so be careful not
to set this too low. The system-wide limit for this
is defined by
kern.maxfiles.sbsizeThe limit on the amount of network memory a user
may consume. This can be generally used to limit
network communications.stacksizeThe maximum size of a process stack. This alone
is not sufficient to limit the amount of memory a
program may use, so it should be used in conjunction
with other limits.
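As an illustration of how these capabilities are combined, a hypothetical login class could be added to /etc/login.conf (the class name and values below are examples only, not defaults):limited:\
	:maxproc=64:\
	:openfiles=256:\
	:tc=default:After adding the class, rebuild the database with cap_mkdb /etc/login.conf, then assign users to it, for example with chpass.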
There are a few other things to remember when setting
resource limits:Processes started at system startup by
/etc/rc are assigned to the
daemon login class.Although the default
/etc/login.conf is a good source of
reasonable values for most limits, they may not be
appropriate for every system. Setting a limit too high
may open the system up to abuse, while setting it too low
may put a strain on productivity.&xorg; takes a lot of
resources and encourages users to run more programs
simultaneously.Many limits apply to individual processes, not the
user as a whole. For example, setting
openfiles to 50
means that each process the user runs may open up to
50 files. The total amount of files a
user may open is the value of openfiles
multiplied by the value of maxproc.
This also applies to memory consumption.For further information on resource limits and login
classes and capabilities in general, refer to
&man.cap.mkdb.1;, &man.getrlimit.2;, and
&man.login.conf.5;.Enabling and Configuring Resource LimitsAs of &os; 10.2, rctl support is
built into the kernel. On earlier supported releases, the kernel
must be rebuilt using the instructions in . Add these lines to either
GENERIC or a custom kernel configuration
file, then rebuild the kernel:options RACCT
options RCTLOnce the system has rebooted into the new kernel,
rctl may be used to set rules for the
system.Rule syntax is controlled through the use of a subject,
subject-id, resource, and action, as seen in this example
rule:user:trhodes:maxproc:deny=10/userIn this rule, the subject is user, the
subject-id is trhodes, the resource,
maxproc, is the maximum number of
processes, and the action is deny, which
blocks any new processes from being created. This means that
the user, trhodes, will be constrained to
no greater than 10 processes. Other
possible actions include logging to the console, passing a
notification to &man.devd.8;, or sending SIGTERM to the
process.Some care must be taken when adding rules. Since this
user is constrained to 10 processes, this
example will prevent the user from performing other tasks
after logging in and executing a
screen session. Once a resource limit has
been hit, an error will be printed, as in this example:&prompt.user; man test
/usr/bin/man: Cannot fork: Resource temporarily unavailable
eval: Cannot fork: Resource temporarily unavailableAs another example, a jail can be prevented from exceeding
a memory limit. This rule could be written as:&prompt.root; rctl -a jail:httpd:memoryuse:deny=2G/jailRules will persist across reboots if they have been added
to /etc/rctl.conf. The format is a rule,
without the preceding command. For example, the previous rule
could be added as:# Block jail from using more than 2G memory:
jail:httpd:memoryuse:deny=2G/jailTo remove a rule, use rctl to remove it
from the list:&prompt.root; rctl -r user:trhodes:maxproc:deny=10/userA method for removing all rules is documented in
&man.rctl.8;. However, if removing all rules for a single
user is required, this command may be issued:&prompt.root; rctl -r user:trhodesMany other resources exist which can be used to exert
additional control over various subjects.
See &man.rctl.8; to learn about them.Shared Administration with SudoTomRhodesContributed
by SecuritySudoSystem administrators often need the ability to grant
enhanced permissions to users so they may perform privileged
tasks. The idea that team members are provided access
to a &os; system to perform their specific tasks opens up unique
challenges to every administrator. These team members only
need a subset of access beyond normal end user levels; however,
they almost always tell management they are unable to
perform their tasks without superuser access. Thankfully, there
is no reason to provide such access to end users because tools
exist to manage this exact requirement.Up to this point, the security chapter has covered permitting
access to authorized users and attempting to prevent unauthorized
access. Another problem arises once authorized users have access
to the system resources. In many cases, some users may need
access to application startup scripts, or a team of
administrators need to maintain the system. Traditionally, the
standard users and groups, file permissions, and even the
&man.su.1; command would manage this access. As applications
required more access and more users needed to use system
resources, a better solution was required. The most used
application is currently Sudo.Sudo allows administrators
to configure more rigid access to system commands
and provide for some advanced logging features.
As a tool, it is available from the Ports Collection as
security/sudo or by use of
the &man.pkg.8; utility. To use the &man.pkg.8; tool:&prompt.root; pkg install sudoAfter the installation is complete, run
visudo to edit the configuration file with
a text editor. Using visudo is highly
recommended as it comes with a built-in syntax checker to verify
there are no errors before the file is saved.The configuration file is made up of several small sections
which allow for extensive configuration. In the following
example, the web application maintainer, user1, needs to start,
stop, and restart the web application known as
webservice. To
grant this user permission to perform these tasks, add
this line to the end of
/usr/local/etc/sudoers:user1 ALL=(ALL) /usr/sbin/service webservice *The user may now start webservice
using this command:&prompt.user; sudo /usr/sbin/service webservice startThis configuration allows a single user access to the
webservice service; however, in most
organizations, there is an entire web team in charge of managing
the service. A single line can also give access to an entire
group. These steps will create a web group, add a user to this
group, and allow all members of the group to manage the
service:&prompt.root; pw groupadd -g 6001 -n webteamUsing the same &man.pw.8; command, the user is added to
the webteam group:&prompt.root; pw groupmod -m user1 -n webteamFinally, this line in
/usr/local/etc/sudoers allows any
member of the webteam group to manage
webservice:%webteam ALL=(ALL) /usr/sbin/service webservice *Unlike &man.su.1;, Sudo
only requires the end user's password. This is an advantage, as
users do not need shared passwords, a practice flagged in most security
audits and best avoided entirely.Users permitted to run applications with
Sudo only enter their own passwords.
This is more secure and gives better control than &man.su.1;,
where the root
password is entered and the user acquires all
root
permissions.Most organizations are moving or have moved toward a
two-factor authentication model. In these cases, the user may
not have a password to enter. Sudo
provides for these cases with the NOPASSWD
variable. Adding it to the configuration above
will allow all members of the webteam
group to manage the service without the password
requirement:%webteam ALL=(ALL) NOPASSWD: /usr/sbin/service webservice *Logging OutputAn advantage to implementing
Sudo is the ability to enable
session logging. Using the built-in log mechanisms
and the included sudoreplay
command, all commands initiated through
Sudo are logged for later
verification. To enable this feature, add a default log
directory entry; this example uses a user variable.
Several other log filename conventions exist; consult the
manual page for sudoreplay for
additional information.Defaults iolog_dir=/var/log/sudo-io/%{user}This directory will be created automatically after the
logging is configured. It is best to let the system create the
directory with default permissions just to be safe. In
addition, this entry will also log administrators who use the
sudoreplay command. To change
this behavior, read and uncomment the logging options inside
sudoers.Once this directive has been added to the
sudoers file, any user configuration
can be updated with the request to log access. In the
example shown, the updated webteam
entry would have the following additional changes:%webteam ALL=(ALL) NOPASSWD: LOG_INPUT: LOG_OUTPUT: /usr/sbin/service webservice *From this point on, all webteam
members altering the status of the
webservice application
will be logged. The list of previous and current sessions
can be displayed with:&prompt.root; sudoreplay -lIn the output, to replay a specific session, search for the
TSID= entry, and pass that to
sudoreplay with no other options to
replay the session at normal speed. For example:&prompt.root; sudoreplay user1/00/00/02While sessions are logged, any administrator is
able to remove session logs, leaving only a question of why they
had done so. It is worthwhile to add a daily check
through an intrusion detection system (IDS)
or similar software so that other administrators are alerted
to manual alterations.The sudoreplay command is extremely flexible.
Consult the documentation for more information.
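As a further illustration, a Cmnd_Alias can group several related commands so that a single rule covers them all; the alias and the second service below are hypothetical examples:Cmnd_Alias WEBCMNDS = /usr/sbin/service webservice *, /usr/sbin/service nginx *
%webteam ALL=(ALL) NOPASSWD: WEBCMNDS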
Index: head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
===================================================================
--- head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml (revision 49530)
+++ head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml (revision 49531)
@@ -1,1316 +1,1316 @@
VirtualizationMurrayStokelyContributed by AllanJudebhyve section by SynopsisVirtualization software allows multiple operating systems to
run simultaneously on the same computer. Such software systems
for PCs often involve a host operating system
which runs the virtualization software and supports any number
of guest operating systems.After reading this chapter, you will know:The difference between a host operating system and a
guest operating system.How to install &os; on an &intel;-based &apple;
&mac; computer.How to install &os; on µsoft.windows; with
Virtual PC.How to install &os; as a guest in
bhyve.How to tune a &os; system for best performance under
virtualization.Before reading this chapter, you should:Understand the basics of &unix;
and &os;.Know how to install
&os;.Know how to set up a
network connection.Know how to install additional
third-party software.&os; as a Guest on Parallels for
&macos; XParallels Desktop for &mac; is
a commercial software product available for &intel; based
&apple; &mac; computers running &macos; 10.4.6 or higher. &os;
is a fully supported guest operating system. Once
Parallels has been installed on
&macos; X, the user must configure a virtual machine and then
install the desired guest operating system.Installing &os; on Parallels/&macos; XThe first step in installing &os; on
Parallels is to create a new
virtual machine for installing &os;. Select
&os; as the
Guest OS Type when prompted:Choose a reasonable amount of disk and memory
depending on the plans for this virtual &os; instance.
4GB of disk space and 512MB of RAM work well for most uses
of &os; under Parallels:Select the type of networking and a network
interface:Save and finish the configuration:After the &os; virtual machine has been created, &os;
can be installed on it. This is best done with an official
&os; CD/DVD or with an
ISO image downloaded from an official
FTP site. Copy the appropriate
ISO image to the local &mac; filesystem or
insert a CD/DVD in the
&mac;'s CD-ROM drive. Click on the disc
icon in the bottom right corner of the &os;
Parallels window. This will bring
up a window that can be used to associate the
CD-ROM drive in the virtual machine with
the ISO file on disk or with the real
CD-ROM drive.Once this association with the CD-ROM
source has been made, reboot the &os; virtual machine by
clicking the reboot icon.
Parallels will reboot with a
special BIOS that first checks if there is
a CD-ROM.In this case it will find the &os; installation media and
begin a normal &os; installation. Perform the installation,
but do not attempt to configure
&xorg; at this time.When the installation is finished, reboot into the newly
installed &os; virtual machine.Configuring &os; on
- Parallels
+ ParallelsAfter &os; has been successfully installed on &macos; X
with Parallels, there are a number
of configuration steps that can be taken to optimize the
system for virtualized operation.Set Boot Loader VariablesThe most important step is to reduce the
kern.hz tunable to reduce the CPU
utilization of &os; under the
Parallels environment. This is
accomplished by adding the following line to
/boot/loader.conf:kern.hz=100Without this setting, an idle &os;
Parallels guest will use
roughly 15% of the CPU of a single processor &imac;.
After this change the usage will be closer to 5%.Create a New Kernel Configuration FileAll of the SCSI, FireWire, and USB device drivers
can be removed from a custom kernel configuration file.
Parallels provides a virtual
network adapter used by the &man.ed.4; driver, so all
network devices except for &man.ed.4; and &man.miibus.4;
can be removed from the kernel.Configure NetworkingThe most basic networking setup uses DHCP to connect
the virtual machine to the same local area network as the
host &mac;. This can be accomplished by adding
ifconfig_ed0="DHCP" to
/etc/rc.conf. More advanced
networking setups are described in
.&os; as a Guest on Virtual PC
for &windows;Virtual PC for &windows; is a
µsoft; software product available for free download. See
this website for the system
requirements. Once
Virtual PC has been installed on
µsoft.windows;, the user can configure a virtual machine
and then install the desired guest operating system.Installing &os; on
Virtual PCThe first step in installing &os; on
Virtual PC is to create a new
virtual machine for installing &os;. Select
Create a virtual machine when
prompted:Select Other as the
Operating system when
prompted:Then, choose a reasonable amount of disk and memory
depending on the plans for this virtual &os; instance.
4GB of disk space and 512MB of RAM work well for most uses
of &os; under Virtual PC:Save and finish the configuration:Select the &os; virtual machine and click
Settings, then set the type of networking
and a network interface:After the &os; virtual machine has been created, &os; can
be installed on it. This is best done with an official &os;
CD/DVD or with an
ISO image downloaded from an official
FTP site. Copy the appropriate
ISO image to the local &windows; filesystem
or insert a CD/DVD in
the CD drive, then double click on the &os;
virtual machine to boot. Then, click CD
and choose Capture ISO Image... on the
Virtual PC window. This will bring
up a window where the CD-ROM drive in the
virtual machine can be associated with an
ISO file on disk or with the real
CD-ROM drive.Once this association with the CD-ROM
source has been made, reboot the &os; virtual machine by
clicking Action and
Reset.
Virtual PC will reboot with a
special BIOS that first checks for a
CD-ROM.In this case it will find the &os; installation media
and begin a normal &os; installation. Continue with the
installation, but do not attempt to configure
&xorg; at this time.When the installation is finished, remember to eject the
CD/DVD or release the
ISO image. Finally, reboot into the newly
installed &os; virtual machine.Configuring &os; on Virtual
PCAfter &os; has been successfully installed on
µsoft.windows; with
Virtual PC, there are a number of
configuration steps that can be taken to optimize the system
for virtualized operation.Set Boot Loader VariablesThe most important step is to reduce the
kern.hz tunable to reduce the CPU
utilization of &os; under the
Virtual PC environment. This
is accomplished by adding the following line to
/boot/loader.conf:kern.hz=100Without this setting, an idle &os;
Virtual PC guest OS will
use roughly 40% of the CPU of a single processor
computer. After this change, the usage will be
closer to 3%.Create a New Kernel Configuration FileAll of the SCSI, FireWire, and USB device drivers can
be removed from a custom kernel configuration file.
Virtual PC provides a virtual
network adapter used by the &man.de.4; driver, so all
network devices except for &man.de.4; and &man.miibus.4;
can be removed from the kernel.Configure NetworkingThe most basic networking setup uses DHCP to connect
the virtual machine to the same local area network as the
µsoft.windows; host. This can be accomplished by
adding ifconfig_de0="DHCP" to
/etc/rc.conf. More advanced
networking setups are described in
.&os; as a Guest on VMware Fusion
for &macos;VMware Fusion for &mac; is a
commercial software product available for &intel; based &apple;
&mac; computers running &macos; 10.4.9 or higher. &os; is a
fully supported guest operating system. Once
VMware Fusion has been installed on
&macos; X, the user can configure a virtual machine and then
install the desired guest operating system.Installing &os; on
VMware FusionThe first step is to start
VMware Fusion which will load the
Virtual Machine Library. Click New
to create the virtual machine:This will load the New Virtual Machine Assistant. Click
Continue to proceed:Select Other as the
Operating System and either
&os; or
&os; 64-bit, as the
Version when prompted:Choose the name of the virtual machine and the directory
where it should be saved:Choose the size of the Virtual Hard Disk for the virtual
machine:Choose the method to install the virtual machine, either
from an ISO image or from a
CD/DVD:Click Finish and the virtual
machine will boot:Install &os; as usual:Once the install is complete, the settings of the virtual
machine can be modified, such as memory usage:The System Hardware settings of the virtual machine
cannot be modified while the virtual machine is
running.The number of CPUs the virtual machine will have access
to:The status of the CD-ROM device.
Normally the
CD/DVD/ISO
is disconnected from the virtual machine when it is no longer
needed.The last thing to change is how the virtual machine will
connect to the network. To allow connections to the virtual
machine from other machines besides the host, choose
Connect directly to the physical network
(Bridged). Otherwise,
Share the host's internet connection
(NAT) is preferred so that the virtual machine
can have access to the Internet, but the network cannot access
the virtual machine.After modifying the settings, boot the newly installed
&os; virtual machine.Configuring &os; on VMware
FusionAfter &os; has been successfully installed on &macos; X
with VMware Fusion, there are a
number of configuration steps that can be taken to optimize
the system for virtualized operation.Set Boot Loader VariablesThe most important step is to reduce the
kern.hz tunable to reduce the CPU
utilization of &os; under the
VMware Fusion environment.
This is accomplished by adding the following line to
/boot/loader.conf:kern.hz=100Without this setting, an idle &os;
VMware Fusion guest will use
roughly 15% of the CPU of a single processor &imac;.
After this change, the usage will be closer to 5%.Create a New Kernel Configuration FileAll of the FireWire and USB device drivers can be
removed from a custom kernel configuration file.
VMware Fusion provides a
virtual network adapter used by the &man.em.4; driver, so
all network devices except for &man.em.4; can be removed
from the kernel.Configure NetworkingThe most basic networking setup uses DHCP to connect
the virtual machine to the same local area network as the
host &mac;. This can be accomplished by adding
ifconfig_em0="DHCP" to
/etc/rc.conf. More advanced
networking setups are described in
.&os; as a Guest on &virtualbox;&os; works well as a guest in
&virtualbox;. The virtualization
software is available for most common operating systems,
including &os; itself.The &virtualbox; guest additions
provide support for:Clipboard sharing.Mouse pointer integration.Host time synchronization.Window scaling.Seamless mode.These commands are run in the &os; guest.First, install the
emulators/virtualbox-ose-additions package
or port in the &os; guest. This will install the port:&prompt.root; cd /usr/ports/emulators/virtualbox-ose-additions && make install cleanAdd these lines to /etc/rc.conf:vboxguest_enable="YES"
vboxservice_enable="YES"If &man.ntpd.8; or &man.ntpdate.8; is used, disable host
time synchronization:vboxservice_flags="--disable-timesync"Xorg will automatically recognize
the vboxvideo driver. It can also be
manually entered in
/etc/X11/xorg.conf:Section "Device"
Identifier "Card0"
Driver "vboxvideo"
VendorName "InnoTek Systemberatung GmbH"
BoardName "VirtualBox Graphics Adapter"
EndSectionTo use the vboxmouse driver, adjust the
mouse section in /etc/X11/xorg.conf:Section "InputDevice"
Identifier "Mouse0"
Driver "vboxmouse"
EndSectionHAL users should create the following
/usr/local/etc/hal/fdi/policy/90-vboxguest.fdi
or copy it from
/usr/local/share/hal/fdi/policy/10osvendor/90-vboxguest.fdi:<?xml version="1.0" encoding="utf-8"?>
<!--
# Sun VirtualBox
# Hal driver description for the vboxmouse driver
# $Id: chapter.xml,v 1.33 2012-03-17 04:53:52 eadler Exp $
Copyright (C) 2008-2009 Sun Microsystems, Inc.
This file is part of VirtualBox Open Source Edition (OSE, as
available from http://www.virtualbox.org. This file is free software;
you can redistribute it and/or modify it under the terms of the GNU
General Public License (GPL) as published by the Free Software
Foundation, in version 2 as it comes in the "COPYING" file of the
VirtualBox OSE distribution. VirtualBox OSE is distributed in the
hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
Clara, CA 95054 USA or visit http://www.sun.com if you need
additional information or have any questions.
-->
<deviceinfo version="0.2">
<device>
<match key="info.subsystem" string="pci">
<match key="info.product" string="VirtualBox guest Service">
<append key="info.capabilities" type="strlist">input</append>
<append key="info.capabilities" type="strlist">input.mouse</append>
<merge key="input.x11_driver" type="string">vboxmouse</merge>
<merge key="input.device" type="string">/dev/vboxguest</merge>
</match>
</match>
</device>
</deviceinfo>&os; as a Host with
VirtualBox&virtualbox; is an actively
developed, complete virtualization package, that is available
for most operating systems including &windows;, &macos;, &linux;
and &os;. It is equally capable of running &windows; or
&unix;-like guests. It is released as open source software, but
with closed-source components available in a separate extension
pack. These components include support for USB 2.0 devices.
More information may be found on the Downloads
page of the &virtualbox;
wiki. Currently, these extensions are not available
for &os;.Installing &virtualbox;&virtualbox; is available as a
&os; package or port in
emulators/virtualbox-ose. The port can be
installed using these commands:&prompt.root; cd /usr/ports/emulators/virtualbox-ose
&prompt.root; make install cleanOne useful option in the port's configuration menu is the
GuestAdditions suite of programs. These
provide a number of useful features in guest operating
systems, like mouse pointer integration (allowing the mouse to
be shared between host and guest without the need to press a
special keyboard shortcut to switch) and faster video
rendering, especially in &windows; guests. The guest
additions are available in the Devices
menu, after the installation of the guest is finished.A few configuration changes are needed before
&virtualbox; is started for the
first time. The port installs a kernel module in
/boot/modules which
must be loaded into the running kernel:&prompt.root; kldload vboxdrvTo ensure the module is always loaded after a reboot,
add this line to
/boot/loader.conf:vboxdrv_load="YES"To use the kernel modules that allow bridged or host-only
networking, add this line to
/etc/rc.conf and reboot the
computer:vboxnet_enable="YES"The vboxusers
group is created during installation of
&virtualbox;. All users that need
access to &virtualbox; will have to
be added as members of this group. pw can
be used to add new members:&prompt.root; pw groupmod vboxusers -m yourusernameThe default permissions for
/dev/vboxnetctl are restrictive and need
to be changed for bridged networking:&prompt.root; chown root:vboxusers /dev/vboxnetctl
&prompt.root; chmod 0660 /dev/vboxnetctlTo make this permissions change permanent, add these
lines to /etc/devfs.conf:own vboxnetctl root:vboxusers
perm vboxnetctl 0660To launch &virtualbox;,
type from a &xorg; session:&prompt.user; VirtualBoxFor more information on configuring and using
&virtualbox;, refer to the
official
website. For &os;-specific information and
troubleshooting instructions, refer to the relevant
page in the &os; wiki.&virtualbox; USB SupportIn order to be able to read and write to USB devices,
users need to be members of
operator:&prompt.root; pw groupmod operator -m jerryThen, add the following to
/etc/devfs.rules, or create this file if
it does not exist yet:[system=10]
add path 'usb/*' mode 0660 group operatorTo load these new rules, add the following to
/etc/rc.conf:devfs_system_ruleset="system"Then, restart devfs:&prompt.root; service devfs restartUSB can now be enabled in the guest operating system. USB
devices should be visible in the &virtualbox;
preferences.&virtualbox; Host
DVD/CD AccessAccess to the host
DVD/CD drives from
guests is achieved through the sharing of the physical drives.
Within &virtualbox;, this is set up from the Storage window in
the Settings of the virtual machine. If needed, create an
empty IDE
CD/DVD device first.
Then choose the Host Drive from the popup menu for the virtual
CD/DVD drive selection.
A checkbox labeled Passthrough will appear.
This allows the virtual machine to use the hardware directly.
For example, audio CDs or the burner will
only function if this option is selected.HAL needs to run for
&virtualbox;
DVD/CD functions to
work, so enable it in /etc/rc.conf and
start it if it is not already running:hald_enable="YES"&prompt.root; service hald startIn order for users to be able to use
&virtualbox;
DVD/CD functions, they
need access to /dev/xpt0,
/dev/cdN, and
/dev/passN.
This is usually achieved by making the user a member of
operator.
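For example, using the same &man.pw.8; command shown in the USB section, with a placeholder username:&prompt.root; pw groupmod operator -m jerry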
Permissions to these devices have to be corrected by adding
these lines to /etc/devfs.conf:perm cd* 0660
perm xpt0 0660
perm pass* 0660&prompt.root; service devfs restart&os; as a Host with
bhyveThe bhyve
BSD-licensed hypervisor became part of the
base system with &os; 10.0-RELEASE. This hypervisor supports a
number of guests, including &os;, OpenBSD, and many &linux;
distributions. Currently, bhyve only
supports a serial console and does not emulate a graphical
console. Virtualization offload features of newer
CPUs are used to avoid the legacy methods of
translating instructions and manually managing memory
mappings.The bhyve design requires a
processor that supports &intel; Extended Page Tables
(EPT) or &amd; Rapid Virtualization Indexing
(RVI) or Nested Page Tables
(NPT). Hosting &linux; guests or &os; guests
with more than one vCPU requires
VMX unrestricted mode support
(UG). Most newer processors, specifically
the &intel; &core; i3/i5/i7 and &intel; &xeon;
E3/E5/E7, support these features. UG support
was introduced with Intel's Westmere micro-architecture. For a
complete list of &intel; processors that support
EPT, refer to .
RVI is found on the third generation and
later of the &amd.opteron; (Barcelona) processors. The easiest
way to tell if a processor supports
bhyve is to run
dmesg or look in
/var/run/dmesg.boot for the
POPCNT processor feature flag on the
Features2 line for &amd; processors or
EPT and UG on the
VT-x line for &intel; processors.Preparing the HostThe first step to creating a virtual machine in
bhyve is configuring the host
system. First, load the bhyve
kernel module:&prompt.root; kldload vmmThen, create a tap interface for the
network device in the virtual machine to attach to. In order
for the network device to participate in the network, also
create a bridge interface containing the
tap interface and the physical interface
as members. In this example, the physical interface is
igb0:&prompt.root; ifconfig tap0 create
&prompt.root; sysctl net.link.tap.up_on_open=1
net.link.tap.up_on_open: 0 -> 1
&prompt.root; ifconfig bridge0 create
&prompt.root; ifconfig bridge0 addm igb0 addm tap0
&prompt.root; ifconfig bridge0 upCreating a FreeBSD GuestCreate a file to use as the virtual disk for the guest
machine. Specify the size and name of the virtual
disk:&prompt.root; truncate -s 16G guest.imgDownload an installation image of &os; to install:&prompt.root; fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.3/FreeBSD-10.3-RELEASE-amd64-bootonly.iso
FreeBSD-10.3-RELEASE-amd64-bootonly.iso 100% of 230 MB 570 kBps 06m17s&os; comes with an example script for running a virtual
machine in bhyve. The script will
start the virtual machine and run it in a loop, so it will
automatically restart if it crashes. The script takes a
number of options to control the configuration of the machine:
-c controls the number of virtual CPUs,
-m limits the amount of memory available to
the guest, -t defines which
tap device to use, -d
indicates which disk image to use, -i tells
bhyve to boot from the
CD image instead of the disk, and
-I defines which CD image
to use. The last parameter is the name of the virtual
machine, used to track the running machines. This example
starts the virtual machine in installation mode:&prompt.root; sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-10.3-RELEASE-amd64-bootonly.iso guestname
After installing a system in the virtual machine, when the
system asks about dropping in to a shell at the end of the
installation, choose Yes. A small
change needs to be made to make the system start with a serial
console. Edit /etc/ttys and replace the
existing ttyu0 line with:ttyu0 "/usr/libexec/getty 3wire" xterm on secureBeginning with &os; 9.3-RELEASE and
10.1-RELEASE the console is configured
automatically.Reboot the virtual machine. While rebooting the virtual
machine causes bhyve to exit, the
vmrun.sh script runs
bhyve in a loop and will automatically
restart it. When this happens, choose the reboot option from
the boot loader menu in order to escape the loop. Now the
guest can be started from the virtual disk:&prompt.root; sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img guestname
sysutils/grub2-bhyve port must be first
installed.Next, create a file to use as the virtual disk for the
guest machine:&prompt.root; truncate -s 16G linux.imgStarting a virtual machine with
bhyve is a two step process. First
a kernel must be loaded, then the guest can be started. The
&linux; kernel is loaded with
sysutils/grub2-bhyve. Create a
device.map that
grub will use to map the virtual
devices to the files on the host system:(hd0) ./linux.img
(cd0) ./somelinux.isoUse sysutils/grub2-bhyve to load the
&linux; kernel from the ISO image:&prompt.root; grub-bhyve -m device.map -r cd0 -M 1024M linuxguestThis will start grub. If the installation
CD contains a
grub.cfg, a menu will be displayed.
If not, the vmlinuz and
initrd files must be located and loaded
manually:grub> ls
(hd0) (cd0) (cd0,msdos1) (host)
grub> ls (cd0)/isolinux
boot.cat boot.msg grub.conf initrd.img isolinux.bin isolinux.cfg memtest
splash.jpg TRANS.TBL vesamenu.c32 vmlinuz
grub> linux (cd0)/isolinux/vmlinuz
grub> initrd (cd0)/isolinux/initrd.img
grub> bootNow that the &linux; kernel is loaded, the guest can be
started:&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./linux.img \
-s 4:0,ahci-cd,./somelinux.iso -l com1,stdio -c 4 -m 1024M linuxguestThe system will boot and start the installer. After
installing a system in the virtual machine, reboot the virtual
machine. This will cause bhyve to
exit. The instance of the virtual machine needs to be
destroyed before it can be started again:&prompt.root; bhyvectl --destroy --vm=linuxguestNow the guest can be started directly from the virtual
disk. Load the kernel:&prompt.root; grub-bhyve -m device.map -r hd0,msdos1 -M 1024M linuxguest
grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1) (cd0) (cd0,msdos1) (host)
(lvm/VolGroup-lv_swap) (lvm/VolGroup-lv_root)
grub> ls (hd0,msdos1)/
lost+found/ grub/ efi/ System.map-2.6.32-431.el6.x86_64 config-2.6.32-431.el6.x
86_64 symvers-2.6.32-431.el6.x86_64.gz vmlinuz-2.6.32-431.el6.x86_64
initramfs-2.6.32-431.el6.x86_64.img
grub> linux (hd0,msdos1)/vmlinuz-2.6.32-431.el6.x86_64 root=/dev/mapper/VolGroup-lv_root
grub> initrd (hd0,msdos1)/initramfs-2.6.32-431.el6.x86_64.img
grub> bootBoot the virtual machine:&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 \
-s 3:0,virtio-blk,./linux.img -l com1,stdio -c 4 -m 1024M linuxguest&linux; will now boot in the virtual machine and
eventually present you with the login prompt. Login and use
the virtual machine. When you are finished, reboot the
virtual machine to exit bhyve.
Destroy the virtual machine instance:&prompt.root; bhyvectl --destroy --vm=linuxguestUsing ZFS with
bhyve GuestsIf ZFS is available on the host
machine, using ZFS volumes
instead of disk image files can provide significant
performance benefits for the guest VMs. A
ZFS volume can be created by:&prompt.root; zfs create -V16G -o volmode=dev zroot/linuxdisk0When starting the VM, specify the
ZFS volume as the disk drive:&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,/dev/zvol/zroot/linuxdisk0 \
-l com1,stdio -c 4 -m 1024M linuxguestVirtual Machine ConsolesIt is advantageous to wrap the
bhyve console in a session
management tool such as sysutils/tmux or
sysutils/screen in order to detach and
reattach to the console. It is also possible to have the
console of bhyve be a null modem
device that can be accessed with cu. To do
this, load the nmdm kernel module and
replace -l com1,stdio with
-l com1,/dev/nmdm0A. The
/dev/nmdm devices are created
automatically as needed, where each is a pair, corresponding
to the two ends of the null modem cable
(/dev/nmdm0A and
/dev/nmdm0B). See &man.nmdm.4; for more
information.&prompt.root; kldload nmdm
&prompt.root; bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./linux.img \
-l com1,/dev/nmdm0A -c 4 -m 1024M linuxguest
&prompt.root; cu -l /dev/nmdm0B
Connected
Ubuntu 13.10 handbook ttyS0
handbook login:Managing Virtual MachinesA device node is created in /dev/vmm for each virtual
machine. This allows the administrator to easily see a list
of the running virtual machines:&prompt.root; ls -al /dev/vmm
total 1
dr-xr-xr-x 2 root wheel 512 Mar 17 12:19 ./
dr-xr-xr-x 14 root wheel 512 Mar 17 06:38 ../
crw------- 1 root wheel 0x1a2 Mar 17 12:20 guestname
crw------- 1 root wheel 0x19f Mar 17 12:19 linuxguest
crw------- 1 root wheel 0x1a1 Mar 17 12:19 otherguestA specified virtual machine can be destroyed using
bhyvectl:&prompt.root; bhyvectl --destroy --vm=guestnamePersistent ConfigurationIn order to configure the system to start
bhyve guests at boot time, the
following configurations must be made in the specified
files:/etc/sysctl.confnet.link.tap.up_on_open=1/boot/loader.confvmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
if_tap_load="YES"/etc/rc.confcloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm igb0 addm tap0"